diff --git "a/title_31K_G/test_title_long_2405.02225v1.json" "b/title_31K_G/test_title_long_2405.02225v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.02225v1.json" @@ -0,0 +1,979 @@ +{ + "url": "http://arxiv.org/abs/2405.02225v1", + "title": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks", + "abstract": "This paper introduces a framework for post-processing machine learning models\nso that their predictions satisfy multi-group fairness guarantees. Based on the\ncelebrated notion of multicalibration, we introduce $(\\mathbf{s},\\mathcal{G},\n\\alpha)-$GMC (Generalized Multi-Dimensional Multicalibration) for\nmulti-dimensional mappings $\\mathbf{s}$, constraint set $\\mathcal{G}$, and a\npre-specified threshold level $\\alpha$. We propose associated algorithms to\nachieve this notion in general settings. This framework is then applied to\ndiverse scenarios encompassing different fairness concerns, including false\nnegative rate control in image segmentation, prediction set conditional\nuncertainty quantification in hierarchical classification, and de-biased text\ngeneration in language models. We conduct numerical studies on several datasets\nand tasks.", + "authors": "Lujing Zhang, Aaron Roth, Linjun Zhang", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.CY", + "cs.LG", + "stat.ME" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks", + "main_content": "Introduction A common theme across the fairness in machine learning literature is that some measure of error or risk should be equalized across sub-populations. Common measures evaluated across demographic groups include false positive and false negative rates (Hardt et al., 2016) and calibration error (Kleinberg et al., 2016; Chouldechova, 2017). 
Initial work in this line gave methods for equalizing different risk measures on disjoint groups. A second generation of work gave methods for equalizing measures of risk across groups even when the groups could intersect \u2013 e.g. for false positive and negative rates (Kearns et al., 2018), calibration error (\u00darsula H\u00e9bert-Johnson et al., 2018), regret (Blum & Lykouris, 2019; Rothblum & Yona, 2021), prediction set coverage (Jung et al., 2021, 2022; Deng et al., 2023), among other risk measures. In general, distinct algorithms are derived for each of these settings, and they are generally limited to one-dimensional predictors of various sorts. In this work, we propose a unifying framework for fair risk control in settings with multi-dimensional outputs, based on multicalibration (\u00darsula H\u00e9bert-Johnson et al., 2018). This framework is developed as an extension of the work by Deng et al. (2023); Noarov & Roth (2023), and addresses the need for calibrating multi-dimensional output functions. To illustrate the usefulness of this framework, we apply it to a variety of settings, including false negative rate control in image segmentation, prediction set conditional coverage guarantees in hierarchical classification, and de-biased text generation in language models. These applications make use of the additional power granted by our multi-dimensional extension of multicalibration. 1.1 Related Work Multicalibration was introduced by \u00darsula H\u00e9bert-Johnson et al. (2018) as a fairness motivated constraint that informally asks that a 1-dimensional predictor of a binary-valued outcome be unbiased, conditional 1Work was done during Lujing Zhang\u2019s remote research internship at Rutgers and Penn. Email: misdrifter@stu.pku.edu.cn 2University of Pennsylvania. Email: aaroth@cis.upenn.edu 3Rutgers University. Email: linjun.zhang@rutgers.edu 3Corresponding Author. 
on both its own prediction and on membership of the input in some number of pre-defined groups (see also a line of prior work that asks for a similar set of guarantees under slightly different conditions (Dawid, 1985; Sandroni et al., 2003; Foster & Kakade, 2006)). Subsequently, multicalibration has been generalized in a number of ways. Jung et al. (2021) generalizes multicalibration to real-valued outcomes, and defines and studies a variant of multicalibration that predicts variance and higher moments rather than means. Gupta et al. (2022) extends the study of multicalibration of both means and moments to the online setting, and defines a variant of multicalibration for quantiles, with applications to uncertainty estimation. Bastani et al. (2022); Jung et al. (2022) give more practical variants of quantile multicalibration with applications to conditional coverage guarantees in conformal prediction, together with experimental evaluation. Deng et al. (2023) gives an abstract generalization of 1-dimensional multicalibration, and shows how to cast other algorithmic fairness desiderata like false positive rate control in this framework. Noarov & Roth (2023) gives a characterization of the scope of 1-dimensional multicalibration variants via a connection to property elicitation: informally, a property of a distribution can be multicalibrated if and only if it minimizes some 1-dimensional separable regression function. The primary point of departure of this paper is that we propose a multi-dimensional generalization of multicalibration: it can be viewed as the natural multi-dimensional generalization of Deng et al. (2023). Another line of work generalizes multicalibration in an orthogonal direction, leaving the outcomes binary valued but generalizing the class of checking rules that are applied. Dwork et al.
(2021) defines outcome indistinguishability, which generalizes multicalibration to require indistinguishability between the predicted and true label distributions with respect to a fixed but arbitrary set of distinguishers. Kakade & Foster (2008); Foster & Hart (2018) define \u201csmooth calibration\u201d that relaxes calibration\u2019s conditioning event to be a smooth function of the prediction. Gopalan et al. (2022) defines a hierarchy of relaxations called low-degree multicalibration that further relaxes smooth calibration and demonstrates desirable statistical properties. Zhao et al. (2021) and Noarov et al. (2023) define notions of calibration tailored to the objective function of a downstream decision maker. These last lines of work focus on multi-dimensional outputs. These lines of work are part of a more general literature studying multi-group fairness. Work in this line aims e.g. to minimize disparities between false positive or false negative rates across groups (Kearns et al., 2018, 2019), or to minimize regret (measured in terms of accuracy) simultaneously across all groups (Blum & Lykouris, 2019; Rothblum & Yona, 2021; Globus-Harris et al., 2022; Tosh & Hsu, 2022). A common theme across these works is that the groups may be arbitrary and intersecting. 1.2 Notation Let X represent a feature domain, Y represent a label domain, and D denote a joint (feature, label) data distribution. For a finite set A, we use |A| and \u2206A, to denote the cardinality of A and the simplex over A respectively. Specifically, \u2206A = {(p1, p2, . . . , p|A|) : 0 \u2264pi \u22641, P|A| i=1 pi = 1}. Given a set F, we use ProjF to denote the \u21132-projection onto the set. We also introduce some shorthand notation. For two vectors a and b, \u27e8a, b\u27e9represents their inner product. For a positive integer T, we define [T] = {1, 2, . . . , T}. For a function f(x) = (f1(x), f2(x), ..., fm(x)), we denote \u2225f\u2225\u221e= supx\u2208X,i\u2208[m][fi(x)]. 
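Since the algorithms below repeatedly apply the l2-projection Proj_F, it is worth making it concrete for the most common choice F = Delta_A (the probability simplex, as in the text generation application). A minimal sketch using the standard sort-based method; the function name is our own:

```python
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean (l2) projection of v onto the probability simplex
    {p : p_i >= 0, sum_i p_i = 1}, via the standard sort-based method."""
    n = v.size
    u = np.sort(v)[::-1]                                   # sorted descending
    css = np.cumsum(u)
    # largest index rho (0-based) with u_rho + (1 - sum_{i<=rho} u_i)/(rho+1) > 0
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)
```

A routine like this can serve as the projection step whenever F in the update rule of the algorithms below is a simplex.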
2 Formulation and Algorithm 2.1 A generalized notion of Multicalibration Let x \u2208X represent the feature vector of the input, y \u2208Y represent the label, and let h(x) \u2208H denote a multi-dimensional scoring function associated with the input. For example, in image segmentation tasks, h(x) \u2208Rk (k is the number of pixels) is intended to approximate the probability of a pixel being part of a relevant segment, often learned by a neural network. In text generation tasks, h(x) is the distribution over the vocabulary produced by a language model given context x. For x \u2208X, consider an output function f : X \u2192F \u2282Rm, defined as f(x) = (f1(x), . . . , fm(x)), where F is a convex set. We denote the class of functions that f belongs to by Q. For example, in text 2 \fgeneration tasks, f(x) is the calibrated distribution over the output vocabulary and is multi-dimensional (with dimension equal to the vocabulary size); in binary classification tasks where h and f are both scalars, f(x) is the threshold used to convert the raw score h(x) into binary predictions, i.e. 1{h(x)>f(x)}. We write s(f, x, h, y, D) : Q \u00d7 X \u00d7 H \u00d7 Y \u00d7 P \u2192Rl to denote a mapping functional of interest, where D is the joint distribution of (x, h, y) and P is the distribution space. Here, s is set to be a functional of f rather than a function of f(x), which offers us more flexibility that will be useful in our applications. For example, in text generation, where h(x) \u2208\u2206Y is the distribution over tokens output by an initial language model, our goal might be to find f(x) \u2208\u2206Y, an adjusted distribution over tokens y \u2208Y with |Y| = m. In this case we could set s = f(x) \u2212Exf(x) \u2208Rm to be the mapping functional. We can calibrate the probabilities (through s) to be \u201cfair\u201d in some way \u2013 e.g. 
that the probability of outputting various words denoting professions should be the same regardless of the gender of pronouns used in the prompt. We note that we do not always use the dependence of s on all of its inputs and assign different s in different settings. We write G to denote the class of functions that encode demographic subgroups (along with other information) and for each g \u2208G, g(f(x), x) \u2208Rl, consistent with the dimension of s(f, x, h, y, D) so that we can calibrate over every dimension of s. For example, when l = 1, G can be set to be the indicator function of different sensitive subgroups of X. Alternately, in fair text generation tasks, when the dimension of s equals the size of the set Y, denoted as l = m, we can set the vector g \u2208G to have a value of 1 in the dimensions corresponding to certain types of sensitive words, and 0 in all other dimensions. We now formally introduce the (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) definition. Definition 1 ((s, G, \u03b1)-GMC). Let x, h, y, D denote the feature vector, the scoring function, the label vector, and the joint distribution of (x, h, y) respectively. Given a function class G, mapping functional s, and a threshold \u03b1 > 0, we say f satisfies (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) if E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. (s, G, \u03b1)-GMC is a flexible framework that can instantiate many existing multi-group fairness notions, including s-HappyMap (Deng et al., 2023), property multicalibration (Noarov & Roth, 2023), calibrated multivalid coverage (Jung et al., 2022) and outcome indistinguishability (Dwork et al., 2021). More generally, compared to these notions, (s, G, \u03b1)-GMC extends the literature in two ways. First, it allows the functions s and g to be multi-dimensional (most prior definitions look similar, but with 1-dimensional s and g functions). 
Second, the function s here is more general and allowed to be a functional of f (rather than just a function of f(x), the evaluation of f at x). These generalizations will be important in our applications. 2.2 Algorithm and Convergence Results To achieve (s, G, \u03b1)-GMC, we present the (s, G, \u03b1)-GMC Algorithm, which can be seen as a natural generalization of algorithms used for more specific notions of multicalibration in previous work (\u00darsula H\u00e9bert-Johnson et al., 2018; Dwork et al., 2021; Jung et al., 2022; Deng et al., 2023): Algorithm 1 (s, G, \u03b1)-GMC Algorithm Input: step size \u03b7 > 0, initialization f (0) \u2208Q, max iteration T. Initialization: t = 0. while t < T, \u2203g(t) \u2208G s.t.: E(x,h,y)\u223cD[\u27e8s(f (t), x, h, y, D), g(t)(f (t)(x), x)\u27e9] > \u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF(f (t)(x) \u2212\u03b7g(t)(f (t)(x), x)) t = t + 1 end while Output: f (t) It is worth noting that our goal involves functionals concerning our objective function f in order to capture its global properties. We aim to find a function f such that a functional associated with it (obtained by taking the expectation over x) satisfies the inequalities we have set to meet different fairness demands. Before delving into the main part of our convergence analysis, we introduce some definitions related to functionals. Examples of these definitions can be found in the appendix, Section B. Definition 2 (The derivative of a functional). Given a function f : X \u2192F, consider a functional L(f, D) : Q\u00d7P \u2192R, where Q is the function space of f and P is a distribution space over X. Assume that L(f, D) follows the formulation L(f, D) = Ex\u223cD[L(f(x))].
The derivative function of L(f, D) with respect to f, denoted as \u2207fL(f, D) : X \u2192F, exists if \u2200w \u2208Q, y \u2208Rm, D \u2208P, Ex\u223cD[\u27e8\u2207fL(f, D), w\u27e9] = (\u2202/\u2202\u03f5)L(f + \u03f5w, D)|\u03f5=0. In the following, we introduce the definitions of convexity and smoothness of a functional. Definition 3 (Convexity of a functional). Let L and f be defined as in Definition 2. A functional L is convex with respect to f if for any f1, f2 \u2208Q, L(f1, D) \u2212L(f2, D) \u2265Ex\u223cD[\u27e8\u2207fL(f2, D), f1 \u2212f2\u27e9]. Definition 4 (KL-smoothness of a functional). Let L and f be defined as in Definition 2. A functional L is KL-smooth if for any f1, f2 \u2208Q, L(f1, D) \u2212L(f2, D) \u2264Ex\u223cD[\u27e8\u2207fL(f2, D), f1 \u2212f2\u27e9] + Ex\u223cD[(KL/2)\u2225f1 \u2212f2\u22252]. We will prove that this algorithm converges and outputs an f satisfying (s, G, \u03b1)-GMC whenever the following assumptions are satisfied. These are multidimensional generalizations of the conditions given by Deng et al. (2023). Assumptions (1). There exists a potential functional L(f, h, y, D), such that \u2207fL(f, h, y, D)(x) = s(f, x, h, y, D), and L(f, h, y, D) is KL-smooth with respect to f for any x \u2208X. (2). Let f \u2217(x) \u225cProjFf(x) for all x \u2208X. For any f \u2208Q, L(f \u2217, h, y, D) \u2264L(f, h, y, D). (3). There exists a positive number B, such that for all g \u2208G and all f \u2208Q, Ex\u223cD[\u2225g(f(x), x)\u22252] \u2264B. (4). There exist two numbers Cl, Cu such that for all f \u2208Q, L(f, h, y, D) \u2265Cl, L(f (0), h, y, D) \u2264Cu. Assumption (1) says that a potential functional L exists and it satisfies a KL-smoothness condition with respect to f. For example, when f is a predicted distribution, we often set s = f(x) \u2212Ex\u223cDf(x). In this situation, L = Ex\u223cD[(1/2)\u2225f(x) \u2212Ex\u223cDf(x)\u22252] satisfies the assumption.
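To make the update loop of Algorithm 1 concrete for the example just given (s = f(x) - E_x f(x)), here is a minimal finite-sample sketch with G consisting of signed products of a subgroup indicator and an attribute-set vector. Everything here is an illustrative assumption on our part: the data layout, the group encoding, and the clip-and-renormalize step (a crude stand-in for the exact l2-projection onto F), not the paper's implementation.

```python
import numpy as np

def gmc_calibrate(F, groups, alpha=0.05, eta=0.01, max_iter=1000):
    """Toy finite-sample instance of the (s, G, alpha)-GMC loop.

    F:      (n, m) array, outputs f(x) for n inputs (rows sum to 1).
    groups: list of (mask, v) pairs; mask is a boolean (n,) array picking
            a subgroup A of inputs, v an m-vector indicating an attribute
            set U. Each pair contributes g = +/- 1{x in A} * v to G.
    Uses s(f, x) = f(x) - E_x[f(x)] as the mapping functional.
    """
    F = F.copy()
    for _ in range(max_iter):
        s = F - F.mean(axis=0)                       # s(f, x) per input x
        worst = None
        for mask, v in groups:
            for sign in (+1.0, -1.0):
                # E_x <s(f,x), g(f(x),x)> with g = sign * 1{x in A} * v
                viol = sign * (s[mask] @ v).sum() / len(F)
                if viol > alpha and (worst is None or viol > worst[0]):
                    worst = (viol, mask, sign * v)
        if worst is None:
            return F                                 # all constraints <= alpha
        _, mask, gvec = worst
        F[mask] -= eta * gvec                        # gradient-style update
        F = np.clip(F, 0.0, None)
        F /= F.sum(axis=1, keepdims=True)            # crude stand-in for Proj_F
    return F
```

On a small biased example (male-prompt rows over-weighting a "doctor" coordinate), the loop drives every group-conditional deviation of the attribute mass from its marginal below alpha.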
Assumption (2) states that the potential function decreases when projected with respect to f. A specific example is when F = Y = [0, 1] and L = E(x,y)\u223cD|f(x) \u2212y|2. Assumption (3) states that the \u21132-norm of the functions in G is uniformly bounded. It always holds when G contains indicator functions, which is the most common case in fairness-motivated problems (these are usually the indicator functions for subgroups of the data). Assumption (4) says that the potential functional L is lower bounded, and this generally holds true when L is convex. One concrete example is when s(f(x), h, y) = f(x) \u2212y and we have L(f, h, y, D) = Ex\u223cD[(f(x) \u2212y)2], which is lower bounded by 0. Theorem 1. Under Assumptions 1-4, the (s, G, \u03b1)-GMC Algorithm with a suitably chosen \u03b7 = O(\u03b1/(KLB)) converges in T = O(2KLB(Cu \u2212Cl)/\u03b12) iterations and outputs a function f satisfying E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. The proof is provided in Appendix C. At a high level, if we consider g as a generalized direction vector and s as the gradient of L, each violation can be interpreted as detecting a direction along which the first-order difference of L is significant. By introducing the smoothness assumption, each update results in a decrease in L that exceeds a constant value. Since L is lower bounded by assumption, the updates must terminate as described. 2.3 Finite-Sample Results We have presented Algorithm 1 as if we have direct access to the true data distribution D. In practice, we only have a finite calibration set D, whose data is sampled i.i.d. from D. In this subsection, we show how a variant of Algorithm 1 achieves the same goal from finite samples. First, we introduce a useful measure which we call the dimension of the function class, as similarly defined by Kim et al. (2019); Deng et al. (2023). For a dataset D, we use E(x,h,y)\u223cD to denote the empirical expectation over D.
We need T datasets in all and we assume that the whole sample size is m (m/T for each dataset). Definition 5 (Dimension of the function class). We use d(G) to denote the dimension of class G, defined to be a quantity such that if the sample size m \u2265C1 d(G)+log(1/\u03b4) \u03b12 , then a random sample Sm of m elements from D guarantees uniform convergence over G with error at most \u03b1 with failure probability at most \u03b4. That is, for any fixed f and fixed s with \u2225s\u2225\u221e\u2264C2 (C1, C2 > 0 are universal constants): sup g\u2208G |E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2212E(x,h,y)\u223cSm[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9]| \u2264\u03b1. A discussion of this definition is given in the appendix. We now give the finite sample version of the (s, G, \u03b1)-GMC Algorithm and its convergence results below. The detailed proof is in the appendix; we use the uniform convergence guarantee arising from Definition 5 to relate the problem to its distributional counterpart. Algorithm 2 (s, G, \u03b1)-GMC Algorithm (Finite Sample) Input: step size \u03b7 > 0, initialization f (0)(x) \u2208F, validation datasets D[2T ], max iteration T. Initialization: t = 0. while t < T, \u2203g(t) \u2208G, s.t. : E(x,h,y)\u223cD2t\u22121[\u27e8s(f (t)(x), h, y, D2t), g(t)(f (t)(x), x)\u27e9] > 3 4\u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF \u0000f (t)(x) \u2212\u03b7g(t)(f (t)(x), x) \u0001 t = t + 1 end while Output: f (t) Theorem 2. 
Under the assumptions 1-4 given in section 3, suppose we run Algorithm 2 with a suitably chosen \u03b7 = O (\u03b1/ (\u03baLB)) and sample size m = O \u0010 T \u00b7 d(G)+log(T/\u03b4) \u03b12 \u0011 , then with probability at least 1 \u2212\u03b4, the algorithm converges in T = O \u0000(Cu \u2212Cl) \u03baLB/\u03b12\u0001 steps and returns a function f satisfying: E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. 3 Applications In this section, we explore three applications of our framework: De-biased text generation in language modeling \u2013 where the output function is multi-dimensional and can\u2019t be addressed in other frameworks, uncertainty quantification in hierarchical classification \u2014 in which we can offer prediction set conditional coverage guarantees, and group-wise false-positive rate control in image segmentation. We begin by outlining the challenges related to fairness and robustness inherent to these applications. Subsequently, we illustrate how to integrate these challenges within the (s, G, \u03b1)-GMC framework, enabling their resolution through Algorithm 1. 3.1 De-Biased Text Generation This section applies our framework to fair word prediction in language modelling. We think of a language model as a function that maps prompts to a distribution over the next word. More specifically, we write 5 \fx \u2208X to denote a prompt, given which the language model outputs a distribution over the vocabulary, denoted by Y. Namely, the language model generates the probability vector h(x) \u2208\u2206Y, and then samples a word (output) following o(x) \u223ch(x). Previous studies (Lu et al., 2018; Hoffmann et al., 2022) demonstrated the presence of gender bias in contemporary language models. Our objective in this section is to mitigate this issue through an approach that post-processes h(x) to a probability distribution p(x) \u2208\u2206Y that has better fairness properties in specific ways. 
To take advantage of the information in the initial language model, p is initialized at h. At a high level, we aim to produce p(x) so that the probabilities of certain groups of words remain the same whether the prompt includes male-indicating words or female-indicating words. For example, we might not want \u201cHe was a \u201d to be completed with \u201cdoctor\u201d more frequently than \u201cShe was a \u201d to be completed with \u201cdoctor\u201d. We define an attribute set U as a collection of specific sensitive words and U to be the set of all U, which stands for different kinds of sensitive words. Following the work by Lu et al. (2018); Hoffmann et al. (2022), we measure the bias of the model on sensitive attribute U by |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)|, where the probability is taken over o(x) \u223cp(x), and x \u2208F and x \u2208M denote that x indicates female and male pronouns respectively. Suppose the marginal distribution over the prompt x (which is drawn uniformly from the given corpus) satisfies P(x \u2208F), P(x \u2208M) \u2265\u03b3 for some positive constant \u03b3 > 0; then we get: |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)| \u2264(1/\u03b3)(|P(x \u2208F)(P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U))| + |P(x \u2208M)(P(o(x) \u2208U|x \u2208M) \u2212P(o(x) \u2208U))|). (1) As a result, we only need to control the terms on the right side of (1). More specifically, we want to calibrate the output so that for any subset U \u2208U \u2282Y (e.g., gender-stereotyped professions) and subgroups A \u2208A \u2282X (e.g., gender-related pronouns), |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. To better understand this fairness notion, let us consider a toy example where X = {he, she, his, her}, A = {{he, his}, {she, her}}, Y = {lawyer, doctor, dream, nurse}, U = {{lawyer, doctor}, {nurse}}.
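For intuition, the decomposition in (1) can be checked on made-up numbers; all probabilities below are invented for illustration, with U = {lawyer, doctor} and the prompts split into female- and male-indicating groups.

```python
# Toy check of the bound (1): the two centered terms on the right-hand
# side control the group-conditional gap on the left. Numbers are invented.
P_F, P_M = 0.4, 0.6                       # marginal prompt-gender probabilities
gamma = 0.4                               # gamma <= min(P_F, P_M)
p_U_given_F, p_U_given_M = 0.15, 0.35     # P(o(x) in U | x in F), ... | x in M)
p_U = P_F * p_U_given_F + P_M * p_U_given_M   # marginal P(o(x) in U)

lhs = abs(p_U_given_F - p_U_given_M)
rhs = (abs(P_F * (p_U_given_F - p_U))
       + abs(P_M * (p_U_given_M - p_U))) / gamma
assert lhs <= rhs   # so driving the centered terms to alpha bounds the gap
```

This is why the calibration target below centers each group's attribute mass against the marginal rather than comparing groups pairwise.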
Our aim is to calibrate the output so that |P[o(x) \u2208{lawyer, doctor}|x \u2208{she, her}] \u2212P[o(x) \u2208 {lawyer, doctor}]| \u2264\u03b1 and |P[o(x) \u2208{lawyer, doctor}|x \u2208{he, his}] \u2212P[o(x) \u2208{lawyer, doctor}]| \u2264\u03b1. We can define V \u225c{(1, 1, 0, 0), (0, 0, 0, 1)} to be the set of indicator vectors of sensitive attributes defined by U. Setting G \u225c{1{x\u2208A}v : A \u2208A, v \u2208V} \u222a{\u22121{x\u2208A}v : A \u2208A, v \u2208V}, this problem can be cast in the GMC framework, and leads to the following theorem: Theorem 3. Assuming that x is a prompt that is uniformly drawn from the given corpus, and h is given by any fixed language model and the size of the largest attribute set in U is upper bounded by B. With a suitably chosen \u03b7 = O(\u03b1/B), our algorithm halts after T = O(B/\u03b12) iterations and outputs a function p satisfying: \u2200A \u2208A, U \u2208U, when o(x) \u223cp(x), sup A\u2208A |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. For the finite-sample counterpart, by applying theorem 2, the sample complexity required in this setting is O( log(2|U||A|)+log( 1 \u03b4 ) \u03b12 ). 3.2 Prediction-Set Conditional Coverage in Hierarchical Classification Hierarchical classification is a machine learning task where the labels are organized in a hierarchical tree structure (Tieppo et al., 2022). More specifically, at the most granular level, predictions are made using labels on the leaves of the tree. These leaves are grouped together into semantically meaningful categories through their parent nodes, which are, in turn, grouped together through their parents, and so on up to 6 \fthe root of the tree. 
Such a tree structure allows us\u2014when there is uncertainty as to the correct label\u2014to predict intermediate nodes, which correspond to predicting sets of labels \u2014 the set of leaves descended from the intermediate node \u2014 giving us a way to quantify the uncertainty of our predictions. Our goal is to produce such set-valued predictions that have a uniform coverage rate conditional on the prediction we make, where a prediction set is said to \u201ccover\u201d the true label if the true label is a descendent of (or equal to) the node we predicted. For example, in a K-class hierarchical text classification problem, our input x \u2208X is a document and the label is a leaf node y on a classification tree with nodes V and edges E. For simplicity, set V = {1, 2, ..., |V |} where the first K indices {1, 2, ..., K} denote leaf nodes (so the ground-truth label y \u2208{1, ..., K}). The tree is of depth H. For a given single-class classification model h : x \u2192[0, 1]K, let u(x) \u225carg maxk hk(x) denote the candidate with the highest score over all leaf nodes according to h. u(x) here corresponds to the most natural point prediction we might make given h. Figure 1: A demo of hierarchical text classification using a subset of labels from the Web of Science dataset (Kowsari et al., 2017). As a concrete example, in the tree diagram above, we map the set {1, 2, 3, 4, 5, 6, 7} to represent the categories: Green Building, Water Pollution, Cancer, Alzheimer\u2019s Disease, Civil, Medical and Root. Consider a document x with the true label \u2018Cancer\u2019 and an initial model predicting scores h(x) = (0.1, 0.1, 0.5, 0.6). If we used the scores to make a point prediction, we would be incorrect \u2014 the highest scoring label u(x) is \u201cAlzheimer\u2019s Disease\u201d, and is wrong: u(x) \u0338= y.
If we output the parent node ( \u2018Medical\u2019) instead, our prediction would be less specific (a larger prediction set, here corresponding to both \u201cCancer\u201d and \u201cAlzheimer\u2019s Disease\u201d), but it would cover the true label. We would like to output nodes such that we obtain our target coverage rate (say 90%), without over-covering (say by always outputting \u201cRoot\u201d, which would be trivial). Traditional conformal prediction methods (see Angelopoulos & Bates (2021) for a gentle introduction) give prediction sets that offer marginal guarantees of this sort, but not prediction-set conditional guarantees: i.e. they offer that for 90% of examples, we produce a prediction set that covers the true label. Recent applications of multicalibration related techniques ((Jung et al., 2021; Gupta et al., 2022; Bastani et al., 2022; Jung et al., 2022; Deng et al., 2023; Gibbs et al., 2023) are able to give \u201cgroup conditional\u201d coverage guarantees which offer (e.g.) 90% coverage as averaged over examples within each of a number of intersecting groups, but once again these methods are not able to offer prediction-set conditional guarantees. Prediction set conditional guarantees promise that for each prediction set that we produce, we cover 90% of example labels, even conditional on the prediction set we offer. This precludes the possibility of our model being over-confident in some prediction sets and under-confident in others, as demonstrated in our experimental results. We now define some useful functional notation. Let A : V \u2192V H return the set of all the ancestor nodes of the input node. Let q : V \u00d7 V \u2192V compute the nearest common ancestor of its two input nodes. Let R : X \u2192R|V | be the function that computes for each node i, Ri, the sum of the raw scores h(x) assigned to each leaf that is a descendent of node i (or itself if i is a leaf). 
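On the seven-node tree of Figure 1 these maps, together with the threshold-based output rule o(x) described next, can be sketched as follows. The node numbering follows the running example (leaves 1-4, parents 5 and 6, root 7); the fallback when no ancestor clears the threshold is our own illustrative choice.

```python
# Seven-node tree from the Web of Science example:
# leaves 1..4; 5 (Civil) -> {1, 2}, 6 (Medical) -> {3, 4}, 7 (Root).
parent = {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7, 7: None}
depth = {7: 0, 5: 1, 6: 1, 1: 2, 2: 2, 3: 2, 4: 2}

def ancestors(v):
    """A(v): v together with all of its ancestors up to the root."""
    chain = []
    while v is not None:
        chain.append(v)
        v = parent[v]
    return chain

def nearest_common_ancestor(a, b):
    """q(a, b): deepest node that is an ancestor of both a and b."""
    common = set(ancestors(a)) & set(ancestors(b))
    return max(common, key=lambda v: depth[v])

def cumulative_scores(h):
    """R_i: sum of leaf scores h_k over leaves descended from i (or i itself)."""
    R = {v: 0.0 for v in parent}
    for leaf, score in enumerate(h, start=1):
        for v in ancestors(leaf):
            R[v] += score
    return R

def output_node(h, lam):
    """o(x): highest (smallest-depth) ancestor of the top-scoring leaf u(x)
    whose cumulative score is below the threshold lam; falls back to u(x)
    itself if no ancestor qualifies (an illustrative choice)."""
    u = max(range(1, len(h) + 1), key=lambda k: h[k - 1])
    R = cumulative_scores(h)
    feasible = [v for v in ancestors(u) if R[v] < lam]
    return min(feasible, key=lambda v: depth[v]) if feasible else u
```

With h = (0.1, 0.1, 0.5, 0.6) and a threshold of 1.2, this rule outputs node 6 ('Medical'), which covers the true label 'Cancer' even though the point prediction u(x) does not.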
When needed, we may randomize R by letting ri(x) \u225cRi(x) + \u03f5i(x), where \u03f5(x) is an independent random variable with zero-mean and constant variance. We define a natural method to choose a node o(x) to output given a scoring function h(x) and a threshold function \u03bb(x). We define o(x) \u225carg minv{d(v) : v \u2208A(u(x)), rv < \u03bb(x)}, where d(v) denotes the depth of the node v in the tree. In other words, we output the highest ancestor i of u(x) (which we recall is the point prediction we would make given h alone) whose cumulative score ri is below 7 \fsome threshold \u2014 which we will select to obtain some target coverage probability. Other natural choices of o(x) are possible \u2014 what follows uses this choice for concreteness, but is not dependent on the specific choice. Recall that an output covers the label if it is the ancestor of the label or the label itself. Our goal is to find a \u03bb(x), such that the rate at which the output covers the label is roughly equal to a given target \u03c3, not just overall, but conditional on the prediction set we output lying in various sets U \u22822V : |E(x,h,y)\u223cD[1{o(x)\u2208U}(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1, \u2200U \u2208U. Back to our example, we can specify U in various ways. For example, we can set U = {{1, 2, 5}, {3, 4, 6}} to require equal coverage cross the parent categories \u2018Civil\u2019 and \u2018Medical\u2019. Or, we can set U = {{1}, {2}, . . . , {6}, {7}} to obtain our target coverage rate \u03c3 conditionally on the prediction set we output for all possible prediction sets we might output. We set G \u225c{1{o(x)\u2208U} : U \u2208U} \u222a{\u22121{o(x)\u2208U} : U \u2208U}, fitting this problem into our GMC framework: |E(x,h,y)\u223cD[g(o(x))(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1, \u2200g \u2208G. Using PK i=1 1{rq(i,u)(x)<\u03bb}1{y=i} = 1{o(x) covers y} and applying Algorithm 1, we obtain the following theorem: Theorem 4. Assume (1). 
\u2200u, \u2200i \u2208V, fri|x(u) \u2264Kp, where fri|x(u) denotes the density function of ri conditioned on x; (2). There exists a real number M > 0 such that \u2200i \u2208V, ri \u2208[\u2212M, M]. With a suitably chosen \u03b7 = O(\u03b1/KP ), our algorithm halts after T = O(KP M/\u03b12) iterations and outputs a function \u03bb satisfying that \u2200U \u2208U, |E(x,h,y)\u223cD[1{o(x)\u2208U}(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1. Applying theorem 2, we can see that the sample complexity for the finite-sample version of the algorithm is O( log(2|U|)+log( 1 \u03b4 ) \u03b12 ). 3.3 Fair FNR Control in Image Segmentation In image segmentation, the input is an image of m = w \u00d7 l (w for width and l for length) pixels and the task is to distinguish the pixels corresponding to certain components of the image, e.g., tumors in a medical image, eyes in the picture of a face, etc. As pointed out by Lee et al. (2023), gender and racial biases are witnessed when evaluating image segmentation models. Among the common evaluations of image segmentation, we consider the False Negative Rate (FNR), defined as False Negatives False Negatives+True Positives. In image segmentation when O, O\u2032 denotes the set of the actual selected segments and the predicted segments respectively, FNR = 1 \u2212|O\u2229O\u2032| |O| . We write x \u2208X to denote the input, which includes both image and demographic group information and y \u2208{0, 1}m to denote the label, which is a binary vector denoting the true inclusion of each of the m pixels. To yield the prediction of y, namely \u02c6 y \u2208{0, 1}m, a scoring function h(x) \u2208Rm and a threshold function \u03bb(x) are needed, so that \u02c6 yi = 1{hi(x)>\u03bb(x)} for i \u2208[m]. As in Section 3.2, for technical reasons we may randomize hi by perturbing it with a zero-mean random variable of modest scale. Our objective is to determine the threshold function \u03bb(x). 
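As a quick numerical companion to this setup, here is a sketch of the empirical FNR of a thresholded mask and of the worst group-wise deviation from a target level sigma over a finite sample. The data layout and function names are our assumptions, not the paper's code.

```python
import numpy as np

def fnr(y, h, lam):
    """Empirical false negative rate of the mask y_hat = 1{h > lam}.
    y: (m,) binary ground-truth pixel labels (assumed not all zero);
    h: (m,) pixel scores."""
    pred = h > lam
    return 1.0 - (y * pred).sum() / y.sum()

def worst_group_gap(samples, lams, sigma):
    """sup_A |E[1{x in A} (FNR(x) - sigma)]| over groups, on a finite sample.
    samples: list of (group_id, y, h) triples; lams: per-sample thresholds."""
    per_group = {}
    for (a, y, h), lam in zip(samples, lams):
        per_group.setdefault(a, []).append(fnr(y, h, lam) - sigma)
    n = len(samples)
    return max(abs(sum(vals)) / n for vals in per_group.values())
```

A threshold function lambda(x) satisfying the guarantee of this section would drive `worst_group_gap` below alpha without resorting to the degenerate all-positive predictor.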
In the context of algorithmic fairness in image segmentation, one specific application is face segmentation, where the objective is to precisely identify and segment regions containing human faces within an image. The aim is to achieve accurate face segmentation while ensuring consistent levels of precision across demographic groups defined by sensitive attributes such as gender and race. Thus, our objective is to determine the function λ(x) that ensures multi-group fairness in terms of the FNR, a natural multi-group fairness extension of the FNR control problem for image segmentation studied by Angelopoulos et al. (2023). Letting A be the set of sensitive subgroups of X, our goal is to ensure that the FNR across different groups is approximately equal to a prespecified target σ > 0: |E_{(x,h,y)∼D}[1{x∈A}(1 − |O ∩ O′|/|O| − σ)]| ≤ α, ∀A ∈ A. We can write |O ∩ O′| = Σ_{i=1}^m y_i 1{h_i(x)>λ(x)}, so the objective becomes sup_{A∈A} |E_{(x,h,y)∼D}[1{x∈A}(1 − (Σ_{i=1}^m y_i 1{h_i(x)>λ(x)})/(Σ_{i=1}^m y_i) − σ)]| ≤ α. Let s(λ, x, h, y) = 1 − (Σ_{i=1}^m y_i 1{h_i(x)>λ(x)})/(Σ_{i=1}^m y_i) − σ and G ≜ {±1{x∈A} : A ∈ A}. Rewriting the inequality, we get: sup_{g∈G} E_{(x,h,y)∼D}[g(λ(x), x) s(λ, x, h, y)] ≤ α. Cast in the GMC framework, we obtain the following result: Theorem 5. Assume (1) for all i ∈ [n], |h_i| ≤ M for some universal constant M > 0; (2) the density function of h_i conditioned on x is upper bounded by some universal constant K_p > 0. Let A be the set of sensitive subgroups of X. Then with a suitably chosen η = O(α/K_p), the algorithm halts after T = O(2K_p M/α) iterations and outputs a function λ satisfying: |E_{(x,h,y)∼D}[1{x∈A}(1 − |O ∩ O′|/|O| − σ)]| ≤ α, ∀A ∈ A.
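The group-wise quantity controlled by Theorem 5 can be estimated on a sample; this is a minimal sketch of ours, assuming a hypothetical list of (group, y, ŷ) triples and at least one positive pixel per image:

```python
def group_fnr_deviation(samples, groups, sigma):
    # Empirical version of sup over A of |E[1{x in A} * (FNR - sigma)]|.
    # `samples` is a hypothetical list of (group_label, y, y_hat) triples.
    def fnr(y, y_hat):
        tp = sum(1 for yi, pi in zip(y, y_hat) if yi == 1 and pi == 1)
        return 1.0 - tp / sum(y)
    n = len(samples)
    return max(
        abs(sum(fnr(y, p) - sigma for g, y, p in samples if g == a) / n)
        for a in groups
    )
```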
Similar to the previous two applications, applying Theorem 2 for the finite-sample version of the algorithm, the sample complexity required is O((log(2|A|) + log(1/δ))/α²). We note that equalizing false negative rates across groups can be achieved trivially by setting λ to be small enough so that the FNR is equalized (at 0), which would of course destroy the accuracy of the method. Thus when we set an objective like this, it is important to empirically show that the method not only leads to low disparity across false negative rates, but does so without loss in accuracy. The experiments that we carry out in Section 4 indeed bear this out. 4 Experiments In this section, we conduct numerical experiments and evaluate the performance of our algorithms within each application from both the fairness and accuracy perspectives. We compare the results with baseline methods to assess their effectiveness. The code can be found in the supplementary material. For more detailed experiment settings and additional results, please refer to Appendix D. 4.1 De-Biased Text Generation For text generation, we consider two datasets and run experiments on each separately. The first dataset is the corpus data from Liang et al. (2021), which extracts sentences containing both terms indicative of bias targets (e.g., gender indicator words) and attributes (e.g., professions) from real-world articles. The second dataset consists of synthetic templates that combine words indicative of bias targets and attributes with simple placeholder templates, e.g., “The woman worked as ...”; “The man was known for ...”, constructed in (Lu et al., 2019).
Then, we define two kinds of terms indicative of bias targets: female-indicator words and male-indicator words; we also define six types of attributes: female-adj words, male-adj words, male-stereotyped jobs, female-stereotyped jobs, pleasant words, and unpleasant words, drawing on existing word lists in the fair text generation context (Caliskan et al., 2017; Gonen & Goldberg, 2019). Each input x is a sentence where sensitive attributes are masked. We use the BERT model (Devlin et al., 2018) to generate the initial probability distribution over the entire vocabulary for the word at the masked position, denoted by h(x). We then use our algorithm to post-process h(x) and obtain the function p(x), which is the calibrated probability distribution of the output. We define two sets of prompts: let A_female and A_male be the sets of prompts containing female-indicator and male-indicator words, respectively. We aim to control the gender disparity gap |P(x ∈ A) · [P(o(x) ∈ U | x ∈ A) − P(o(x) ∈ U)]| for A ∈ {A_female, A_male}. Figure 2 plots the disparity gap for A = A_male (the result for A = A_female is deferred to the appendix due to space constraints). It is evident that our post-processing technique effectively limits the disparity between the probabilities of outputting biased terms related to different gender groups, ensuring that it remains consistently below the specified threshold value of α = 0.002 (we further discuss the choice of α in Appendix D). Additionally, we assess the cross-entropy loss between the calibrated output distribution and the corresponding labels. Unlike the calibration set, where sensitive words are deliberately masked, we randomly mask words during the cross-entropy test to evaluate the model's overall performance, extending beyond the prediction of sensitive words.
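The disparity gap |P(x ∈ A) · [P(o(x) ∈ U | x ∈ A) − P(o(x) ∈ U)]| can be estimated from sampled (prompt group, output word) pairs; the sketch below is an illustration of ours, with hypothetical group labels and word sets:

```python
def disparity_gap(samples, group, sensitive_words):
    # Estimate |P(x in A) * (P(o(x) in U | x in A) - P(o(x) in U))| from
    # (prompt_group, output_word) pairs; assumes `group` occurs in `samples`.
    n = len(samples)
    in_a = [o for g, o in samples if g == group]
    p_a = len(in_a) / n
    p_u = sum(1 for _, o in samples if o in sensitive_words) / n
    p_u_given_a = sum(1 for o in in_a if o in sensitive_words) / len(in_a)
    return abs(p_a * (p_u_given_a - p_u))
```

Calibration succeeds when this estimate stays below the chosen α for every group/attribute pair.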
The cross-entropy on the test set is 9.9291 before post-processing and 9.9285 after it, indicating that our algorithm does not reduce the accuracy of the model while reducing gender disparities. We would like to note that our algorithm is not designed to enhance accuracy but to improve fairness while ensuring that the cross-entropy performance does not deteriorate too much. Figure 2: The bias on outputting different types of sensitive attributes, measured on the corpus data. The results for the synthetic data are deferred to the appendix. 4.2 Prediction-Set Conditional Coverage in Hierarchical Classification For hierarchical classification, we use the Web of Science dataset (Kowsari et al., 2017), which contains 46,985 documents with 134 categories, including 7 parent categories. We choose HiAGM (Wang et al., 2022) as the network to generate the initial scoring. Our algorithm is then applied to find the threshold function that yields a fair output. We set our coverage target to be σ = 0.95 with a tolerance for coverage deviations of α = 0.025. Equivalently put, for each of the predictions, we aim to cover the true label with probability 95 ± 2.5%, even conditional on the prediction we make. We choose naively outputting the leaf node (denoted as “unprocessed” in the figure) as one baseline and the split conformal method (Angelopoulos et al., 2023) as another. Figure 3 shows that our method achieves coverage within the target tolerance for all predictions, while the two baselines fail to satisfy the coverage guarantee when predicting ‘CS’ and ‘Medical’. Figure 3: The deviation of prediction-set conditional coverage from the target.
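For singleton sets U = {v}, the coverage deviation plotted in Figure 3 is |E[1{o(x) = v}(σ − 1{o(x) covers y})]| for each predicted node v; a minimal empirical sketch of ours, with a hypothetical record format, is:

```python
def conditional_coverage_deviation(records, sigma):
    # Empirical |E[1{o(x)=v}(sigma - 1{o(x) covers y})]| for each predicted
    # node v, i.e. the singleton case U = {v} of the coverage guarantee.
    # `records` is a hypothetical list of (predicted_node, covered) pairs.
    n = len(records)
    dev = {}
    for node, covered in records:
        dev[node] = dev.get(node, 0.0) + (sigma - (1.0 if covered else 0.0)) / n
    return {node: abs(d) for node, d in dev.items()}
```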
4.3 Fair FNR Control in Image Segmentation We use the FASSEG (Khan et al., 2015) dataset and adopt the U-net (Ronneberger et al., 2015) network to generate the initial scoring function for each pixel, representing the predicted probability of this pixel corresponding to the signal. The dataset contains 118 human facial images and their semantic segmentations. We set our target FNR to be σ = 0.075 with a tolerance for deviations of α = 0.005 and calibrate the FNR across different gender subgroups and racial subgroups. In addition, we compare with the method proposed in (Angelopoulos et al., 2023) that controls on-average FNR in a finite-sample manner based on the split conformal prediction method. The results yielded by U-net and the split conformal method are plotted as baselines for comparison in Figure 4. Our algorithm demonstrates its effectiveness, as the deviations of the FNRs of GMC from the target σ across all subgroups are controlled below α, while the baselines are found to perform poorly on the male and white subgroups. Since equalizing FNR does not necessarily imply accuracy, we compute the accuracy of our model's output together with that of the baselines. The accuracy of our model, measured as the ratio of correctly predicted pixels to the total number of pixels, is 0.86. In comparison, the baseline models achieve accuracies of 0.84 and 0.92, respectively. This result suggests that our algorithm empirically yields significant gains in mitigating FNR disparities without a significant sacrifice in accuracy. Figure 4: The deviation of the false negative rate from the target in image segmentation.", + "additional_graph_info": { + "graph": [ + [ + "Lujing Zhang", + "Aaron Roth" + ], + [ + "Lujing Zhang", + "Linjun Zhang" + ], + [ + "Aaron Roth", + "Jonathan Ullman" + ], + [ + "Linjun Zhang", + "Zhun Deng" + ], + [ + "Linjun Zhang", + "Kenji Kawaguchi" + ], + [ + "Linjun Zhang", + "T. 
Tony Cai" + ] + ], + "node_feat": { + "Lujing Zhang": [ + { + "url": "http://arxiv.org/abs/2405.02225v1", + "title": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks", + "abstract": "This paper introduces a framework for post-processing machine learning models\nso that their predictions satisfy multi-group fairness guarantees. Based on the\ncelebrated notion of multicalibration, we introduce $(\\mathbf{s},\\mathcal{G},\n\\alpha)-$GMC (Generalized Multi-Dimensional Multicalibration) for\nmulti-dimensional mappings $\\mathbf{s}$, constraint set $\\mathcal{G}$, and a\npre-specified threshold level $\\alpha$. We propose associated algorithms to\nachieve this notion in general settings. This framework is then applied to\ndiverse scenarios encompassing different fairness concerns, including false\nnegative rate control in image segmentation, prediction set conditional\nuncertainty quantification in hierarchical classification, and de-biased text\ngeneration in language models. We conduct numerical studies on several datasets\nand tasks.", + "authors": "Lujing Zhang, Aaron Roth, Linjun Zhang", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.CY", + "cs.LG", + "stat.ME" + ], + "main_content": "Introduction A common theme across the fairness in machine learning literature is that some measure of error or risk should be equalized across sub-populations. Common measures evaluated across demographic groups include false positive and false negative rates (Hardt et al., 2016) and calibration error (Kleinberg et al., 2016; Chouldechova, 2017). Initial work in this line gave methods for equalizing different risk measures on disjoint groups. A second generation of work gave methods for equalizing measures of risk across groups even when the groups could intersect \u2013 e.g. 
for false positive and negative rates (Kearns et al., 2018), calibration error (Úrsula Hébert-Johnson et al., 2018), regret (Blum & Lykouris, 2019; Rothblum & Yona, 2021), prediction set coverage (Jung et al., 2021, 2022; Deng et al., 2023), among other risk measures. In general, distinct algorithms are derived for each of these settings, and they are generally limited to one-dimensional predictors of various sorts. In this work, we propose a unifying framework for fair risk control in settings with multi-dimensional outputs, based on multicalibration (Úrsula Hébert-Johnson et al., 2018). This framework is developed as an extension of the work by Deng et al. (2023); Noarov & Roth (2023), and addresses the need for calibrating multi-dimensional output functions. To illustrate the usefulness of this framework, we apply it to a variety of settings, including false negative rate control in image segmentation, prediction set conditional coverage guarantees in hierarchical classification, and de-biased text generation in language models. These applications make use of the additional power granted by our multi-dimensional extension of multicalibration. 1.1 Related Work Multicalibration was introduced by Úrsula Hébert-Johnson et al. (2018) as a fairness motivated constraint that informally asks that a 1-dimensional predictor of a binary-valued outcome be unbiased, conditional on both its own prediction and on membership of the input in some number of pre-defined groups (see also a line of prior work that asks for a similar set of guarantees under slightly different conditions (Dawid, 1985; Sandroni et al., 2003; Foster & Kakade, 2006)).
Subsequently, multicalibration has been generalized in a number of ways. Jung et al. (2021) generalizes multicalibration to real-valued outcomes, and defines and studies a variant of multicalibration that predicts variance and higher moments rather than means. Gupta et al. (2022) extends the study of multicalibration of both means and moments to the online setting, and defines a variant of multicalibration for quantiles, with applications to uncertainty estimation. Bastani et al. (2022); Jung et al. (2022) give more practical variants of quantile multicalibration with applications to conditional coverage guarantees in conformal prediction, together with experimental evaluation. Deng et al. (2023) gives an abstract generalization of 1-dimensional multicalibration, and shows how to cast other algorithmic fairness desiderata like false positive rate control in this framework. Noarov & Roth (2023) gives a characterization of the scope of 1-dimensional multicalibration variants via a connection to property elicitation: informally, a property of a distribution can be multicalibrated if and only if it minimizes some 1-dimensional separable regression function. The primary point of departure of this paper is that we propose a multi-dimensional generalization of multicalibration: it can be viewed as the natural multi-dimensional generalization of Deng et al. (2023). Another line of work generalizes multicalibration in an orthogonal direction, leaving the outcomes binary valued but generalizing the class of checking rules that are applied. Dwork et al. (2021) defines outcome indistinguishability, which generalizes multicalibration to require indistinguishability between the predicted and true label distributions with respect to a fixed but arbitrary set of distinguishers. Kakade & Foster (2008); Foster & Hart (2018) define “smooth calibration”, which relaxes calibration’s conditioning event to be a smooth function of the prediction. Gopalan et al.
(2022) defines a hierarchy of relaxations called low-degree multicalibration that further relaxes smooth calibration and demonstrates desirable statistical properties. Zhao et al. (2021) and Noarov et al. (2023) define notions of calibration tailored to the objective function of a downstream decision maker. These last lines of work focus on multi-dimensional outputs. These lines of work are part of a more general literature studying multi-group fairness. Work in this line aims, e.g., to minimize disparities between false positive or false negative rates across groups (Kearns et al., 2018, 2019), or to minimize regret (measured in terms of accuracy) simultaneously across all groups (Blum & Lykouris, 2019; Rothblum & Yona, 2021; Globus-Harris et al., 2022; Tosh & Hsu, 2022). A common theme across these works is that the groups may be arbitrary and intersecting. 1.2 Notation Let X represent a feature domain, Y represent a label domain, and D denote a joint (feature, label) data distribution. For a finite set A, we use |A| and ∆A to denote the cardinality of A and the simplex over A, respectively. Specifically, ∆A = {(p_1, p_2, . . . , p_{|A|}) : 0 ≤ p_i ≤ 1, Σ_{i=1}^{|A|} p_i = 1}. Given a set F, we use Proj_F to denote the ℓ2-projection onto the set. We also introduce some shorthand notation. For two vectors a and b, ⟨a, b⟩ represents their inner product. For a positive integer T, we define [T] = {1, 2, . . . , T}. For a function f(x) = (f_1(x), f_2(x), ..., f_m(x)), we denote ∥f∥_∞ = sup_{x∈X, i∈[m]} f_i(x). 2 Formulation and Algorithm 2.1 A Generalized Notion of Multicalibration Let x ∈ X represent the feature vector of the input, y ∈ Y represent the label, and let h(x) ∈ H denote a multi-dimensional scoring function associated with the input.
For example, in image segmentation tasks, h(x) ∈ R^k (k is the number of pixels) is intended to approximate the probability of a pixel being part of a relevant segment, often learned by a neural network. In text generation tasks, h(x) is the distribution over the vocabulary produced by a language model given context x. For x ∈ X, consider an output function f : X → F ⊂ R^m, defined as f(x) = (f_1(x), . . . , f_m(x)), where F is a convex set. We denote the class of functions that f belongs to by Q. For example, in text generation tasks, f(x) is the calibrated distribution over the output vocabulary and is multi-dimensional (with dimension equal to the vocabulary size); in binary classification tasks where h and f are both scalars, f(x) is the threshold used to convert the raw score h(x) into binary predictions, i.e. 1{h(x)>f(x)}. We write s(f, x, h, y, D) : Q × X × H × Y × P → R^l to denote a mapping functional of interest, where D is the joint distribution of (x, h, y) and P is the distribution space. Here, s is set to be a functional of f rather than a function of f(x), which offers us more flexibility that will be useful in our applications. For example, in text generation, where h(x) ∈ ∆Y is the distribution over tokens output by an initial language model, our goal might be to find f(x) ∈ ∆Y, an adjusted distribution over tokens y ∈ Y with |Y| = m. In this case we could set s = f(x) − E_x f(x) ∈ R^m to be the mapping functional. We can calibrate the probabilities (through s) to be “fair” in some way, e.g. that the probability of outputting various words denoting professions should be the same regardless of the gender of pronouns used in the prompt. We note that we do not always use the dependence of s on all of its inputs and assign different s in different settings.
We write G to denote the class of functions that encode demographic subgroups (along with other information) and for each g \u2208G, g(f(x), x) \u2208Rl, consistent with the dimension of s(f, x, h, y, D) so that we can calibrate over every dimension of s. For example, when l = 1, G can be set to be the indicator function of different sensitive subgroups of X. Alternately, in fair text generation tasks, when the dimension of s equals the size of the set Y, denoted as l = m, we can set the vector g \u2208G to have a value of 1 in the dimensions corresponding to certain types of sensitive words, and 0 in all other dimensions. We now formally introduce the (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) definition. Definition 1 ((s, G, \u03b1)-GMC). Let x, h, y, D denote the feature vector, the scoring function, the label vector, and the joint distribution of (x, h, y) respectively. Given a function class G, mapping functional s, and a threshold \u03b1 > 0, we say f satisfies (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) if E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. (s, G, \u03b1)-GMC is a flexible framework that can instantiate many existing multi-group fairness notions, including s-HappyMap (Deng et al., 2023), property multicalibration (Noarov & Roth, 2023), calibrated multivalid coverage (Jung et al., 2022) and outcome indistinguishability (Dwork et al., 2021). More generally, compared to these notions, (s, G, \u03b1)-GMC extends the literature in two ways. First, it allows the functions s and g to be multi-dimensional (most prior definitions look similar, but with 1-dimensional s and g functions). Second, the function s here is more general and allowed to be a functional of f (rather than just a function of f(x), the evaluation of f at x). These generalizations will be important in our applications. 
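On a finite sample, checking Definition 1 amounts to an empirical inner-product expectation against every g ∈ G. In the illustrative sketch below (ours, not the paper's code), s and g are passed in as precomputed per-sample vectors rather than as functions, a deliberate simplification:

```python
def max_gmc_violation(s_vecs, g_vecs_by_g):
    # Empirical version of Definition 1: for each g, average the inner
    # product <s_i, g_i> over samples i and return the largest value.
    # s_vecs[i] is s evaluated on sample i; g_vecs_by_g[g][i] is g's vector.
    n = len(s_vecs)
    worst = float('-inf')
    for g, g_vecs in g_vecs_by_g.items():
        avg = sum(
            sum(si * gi for si, gi in zip(s_vec, g_vec))
            for s_vec, g_vec in zip(s_vecs, g_vecs)
        ) / n
        worst = max(worst, avg)
    return worst

def satisfies_gmc(s_vecs, g_vecs_by_g, alpha):
    # (s, G, alpha)-GMC holds when no g produces a violation above alpha.
    return max_gmc_violation(s_vecs, g_vecs_by_g) <= alpha
```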
2.2 Algorithm and Convergence Results To achieve (s, G, α)-GMC, we present the (s, G, α)-GMC Algorithm, which can be seen as a natural generalization of algorithms used for more specific notions of multicalibration in previous work (Úrsula Hébert-Johnson et al., 2018; Dwork et al., 2021; Jung et al., 2022; Deng et al., 2023): Algorithm 1 (s, G, α)-GMC Algorithm. Input: step size η > 0, initialization f^(0) ∈ Q, max iteration T. Initialization: t = 0. While t < T and ∃g^(t) ∈ G s.t. E_{(x,h,y)∼D}[⟨s(f^(t), x, h, y, D), g^(t)(f^(t)(x), x)⟩] > α, do: let g^(t) ∈ G be an arbitrary function satisfying the condition in the while statement; set f^(t+1)(x) = Proj_F(f^(t)(x) − η g^(t)(f^(t)(x), x)); set t = t + 1. End while. Output: f^(t). It is worth noting that our goal involves functionals of our objective function f in order to capture its global properties. We aim to find a function f such that a functional associated with it (obtained by taking the expectation over x) satisfies the inequalities we have set to meet different fairness demands. Before delving into the main part of our convergence analysis, we introduce some definitions related to functionals. Examples of these definitions can be found in Appendix B. Definition 2 (The derivative of a functional). Given a function f : X → F, consider a functional L(f, D) : Q × P → R, where Q is the function space of f and P is a distribution space over X. Assume that L(f, D) follows the formulation L(f, D) = E_{x∼D}[L(f(x))]. The derivative function of L(f, D) with respect to f, denoted as ∇_f L(f, D) : X → F, exists if ∀w ∈ Q, y ∈ R^m, D ∈ P, E_{x∼D}[⟨∇_f L(f, D), w⟩] = (∂/∂ϵ) L(f + ϵw, D)|_{ϵ=0}. In the following, we introduce the definitions of convexity and smoothness of a functional. Definition 3 (Convexity of a functional).
Let L and f be defined as in Definition 2. A functional L is convex with respect to f if for any f_1, f_2 ∈ Q, L(f_1, D) − L(f_2, D) ≥ E_{x∼D}[⟨∇_f L(f_2, D), f_1 − f_2⟩]. Definition 4 (K_L-smoothness of a functional). Let L and f be defined as in Definition 2. A functional L is K_L-smooth if for any f_1, f_2 ∈ Q, L(f_1, D) − L(f_2, D) ≤ E_{x∼D}[⟨∇L(f_2, D), f_1 − f_2⟩] + E_{x∼D}[(K_L/2)∥f_1 − f_2∥²]. We will prove that this algorithm converges and outputs an f satisfying (s, G, α)-GMC whenever the following assumptions are satisfied. These are multi-dimensional generalizations of the conditions given by Deng et al. (2023). Assumptions (1). There exists a potential functional L(f, h, y, D) such that ∇_f L(f, h, y, D)(x) = s(f, x, h, y, D), and L(f, h, y, D) is K_L-smooth with respect to f for any x ∈ X. (2). Let f^∗(x) ≜ Proj_F f(x) for all x ∈ X. For any f ∈ Q, L(f^∗, h, y, D) ≤ L(f, h, y, D). (3). There exists a positive number B such that for all g ∈ G and all f ∈ Q, E_{x∼D}[∥g(f(x), x)∥²] ≤ B. (4). There exist two numbers C_l, C_u such that for all f ∈ Q, L(f, h, y, D) ≥ C_l and L(f^(0), h, y, D) ≤ C_u. Assumption (1) says that a potential functional L exists and satisfies a K_L-smoothness condition with respect to f. For example, when f is a predicted distribution, we often set s = f(x) − E_{x∼D}f(x). In this situation, L = E_{x∼D}[(1/2)∥f(x) − E_{x∼D}f(x)∥²] satisfies the assumption. Assumption (2) states that the potential functional decreases when f is projected. A specific example is when F = Y = [0, 1] and L = E_{(x,y)∼D}|f(x) − y|². Assumption (3) states that the ℓ2-norm of the functions in G is uniformly bounded.
It always holds when G contains indicator functions, which is the most common case in fairness-motivated problems (these are usually the indicator functions for subgroups of the data). Assumption (4) says that the potential functional L is lower bounded, and this generally holds true when L is convex. One concrete example is when s(f(x), h, y) = f(x) − y and we have L(f, h, y, D) = E_{x∼D}[(f(x) − y)²], which is lower bounded by 0. Theorem 1. Under Assumptions 1-4, the (s, G, α)-GMC Algorithm with a suitably chosen η = O(α/(K_L B)) converges in T = O(2K_L(C_u − C_l)B/α²) iterations and outputs a function f satisfying E_{(x,h,y)∼D}[⟨s(f, x, h, y, D), g(f(x), x)⟩] ≤ α, ∀g ∈ G. The proof is provided in Appendix C. At a high level, if we consider g as a generalized direction vector and s as the gradient of L, each violation can be interpreted as detecting a direction in which the first-order difference of L is significant. By introducing the assumption of smoothness, each update results in a decrease in L that exceeds a constant value. Since L is lower bounded by assumption, the updates terminate as described. 2.3 Finite-Sample Results We have presented Algorithm 1 as if we had direct access to the true data distribution D. In practice, we only have a finite calibration set D, whose data is sampled i.i.d. from D. In this subsection, we show how a variant of Algorithm 1 achieves the same goal from finite samples. First, we introduce a useful measure which we call the dimension of the function class, as similarly defined by Kim et al. (2019); Deng et al. (2023). For a dataset D, we use E_{(x,h,y)∼D} to denote the empirical expectation over D. We need T datasets in all, and we assume that the whole sample size is m (m/T for each dataset). Definition 5 (Dimension of the function class).
We use d(G) to denote the dimension of class G, defined to be a quantity such that if the sample size m ≥ C_1 (d(G) + log(1/δ))/α², then a random sample S_m of m elements from D guarantees uniform convergence over G with error at most α and failure probability at most δ. That is, for any fixed f and fixed s with ∥s∥_∞ ≤ C_2 (C_1, C_2 > 0 are universal constants): sup_{g∈G} |E_{(x,h,y)∼D}[⟨s(f, x, h, y, D), g(f(x), x)⟩] − E_{(x,h,y)∼S_m}[⟨s(f, x, h, y, D), g(f(x), x)⟩]| ≤ α. A discussion of this definition is given in the appendix. We now give the finite-sample version of the (s, G, α)-GMC Algorithm and its convergence results below. The detailed proof is in the appendix; we use the uniform convergence guarantee arising from Definition 5 to relate the problem to its distributional counterpart. Algorithm 2 (s, G, α)-GMC Algorithm (Finite Sample). Input: step size η > 0, initialization f^(0)(x) ∈ F, validation datasets D_[2T], max iteration T. Initialization: t = 0. While t < T and ∃g^(t) ∈ G s.t. E_{(x,h,y)∼D_{2t−1}}[⟨s(f^(t), x, h, y, D_{2t}), g^(t)(f^(t)(x), x)⟩] > (3/4)α, do: let g^(t) ∈ G be an arbitrary function satisfying the condition in the while statement; set f^(t+1)(x) = Proj_F(f^(t)(x) − η g^(t)(f^(t)(x), x)); set t = t + 1. End while. Output: f^(t). Theorem 2. Under Assumptions 1-4 given in Section 3, suppose we run Algorithm 2 with a suitably chosen η = O(α/(K_L B)) and sample size m = O(T · (d(G) + log(T/δ))/α²); then with probability at least 1 − δ, the algorithm converges in T = O((C_u − C_l) K_L B/α²) steps and returns a function f satisfying: E_{(x,h,y)∼D}[⟨s(f, x, h, y, D), g(f(x), x)⟩] ≤ α, ∀g ∈ G.
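As a toy instantiation of the loop in Algorithms 1 and 2, take s = f(x) − y, G = {±1{x ∈ A}}, and F = [0, 1]; the sketch below (ours, not the paper's code, with hypothetical group labels and default parameters) repeatedly finds a violated constraint and applies the projected update f ← Proj_F(f − ηg):

```python
def gmc_calibrate(x_groups, y, groups, alpha=0.02, eta=0.05, max_iter=1000):
    # Toy (s, G, alpha)-GMC loop on a finite sample: s = f(x) - y and
    # G = {+/- 1{x in A}}.  Each round finds a group whose signed bias
    # exceeds alpha, shifts f on that group by eta, and projects onto [0, 1].
    n = len(y)
    f = [0.5] * n  # initialization f^(0)
    for _ in range(max_iter):
        violation = None
        for a in groups:
            for sign in (1.0, -1.0):
                gap = sum(sign * (f[i] - y[i]) for i in range(n) if x_groups[i] == a) / n
                if gap > alpha:
                    violation = (a, sign)
                    break
            if violation:
                break
        if violation is None:
            return f  # every constraint in G is satisfied
        a, sign = violation
        for i in range(n):
            if x_groups[i] == a:
                f[i] = min(1.0, max(0.0, f[i] - eta * sign))  # projected update
    return f
```

On disjoint groups this drives each group's mean of f toward the group's label mean, up to tolerance α.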
3 Applications In this section, we explore three applications of our framework: de-biased text generation in language modeling, where the output function is multi-dimensional and cannot be addressed in other frameworks; uncertainty quantification in hierarchical classification, in which we can offer prediction-set conditional coverage guarantees; and group-wise false negative rate control in image segmentation. We begin by outlining the challenges related to fairness and robustness inherent to these applications. Subsequently, we illustrate how to integrate these challenges within the (s, G, α)-GMC framework, enabling their resolution through Algorithm 1. 3.1 De-Biased Text Generation This section applies our framework to fair word prediction in language modeling. We think of a language model as a function that maps prompts to a distribution over the next word. More specifically, we write x ∈ X to denote a prompt, given which the language model outputs a distribution over the vocabulary, denoted by Y. Namely, the language model generates the probability vector h(x) ∈ ∆Y and then samples a word (output) following o(x) ∼ h(x). Previous studies (Lu et al., 2018; Hoffmann et al., 2022) demonstrated the presence of gender bias in contemporary language models. Our objective in this section is to mitigate this issue through an approach that post-processes h(x) into a probability distribution p(x) ∈ ∆Y that has better fairness properties in specific ways. To take advantage of the information in the initial language model, p is initialized at h. At a high level, we aim to produce p(x) so that the probabilities of certain groups of words remain the same whether the prompt includes male-indicating words or female-indicating words. For example, we might not want “He was a ” to be completed with “doctor” more frequently than “She was a ” to be completed with “doctor”.
We define an attribute set U as a collection of specific sensitive words, and U to be the set of all such U, which stands for different kinds of sensitive words. Following the work of Lu et al. (2018); Hoffmann et al. (2022), we measure the bias of the model on sensitive attribute U by |P(o(x) ∈ U | x ∈ F) − P(o(x) ∈ U | x ∈ M)|, where the probability is taken over o(x) ∼ p(x), and x ∈ F and x ∈ M denote that x indicates female and male pronouns, respectively. Suppose the marginal distribution over the prompt x (which is drawn uniformly from the given corpus) satisfies P(x ∈ F), P(x ∈ M) ≥ γ for some positive constant γ > 0; then we get: |P(o(x) ∈ U | x ∈ F) − P(o(x) ∈ U | x ∈ M)| ≤ (1/γ)(|P(x ∈ F)(P(o(x) ∈ U | x ∈ F) − P(o(x) ∈ U))| + |P(x ∈ M)(P(o(x) ∈ U | x ∈ M) − P(o(x) ∈ U))|). (1) As a result, we only need to control the terms on the right side of (1) instead. More specifically, we want to calibrate the output so that for any subset U ∈ U (each U ⊆ Y; e.g., gender-stereotyped professions) and subgroup A ∈ A (each A ⊆ X; e.g., gender-related pronouns), |P(x ∈ A) · [P(o(x) ∈ U | x ∈ A) − P(o(x) ∈ U)]| ≤ α. To better understand this fairness notion, let us consider a toy example where X = {he, she, his, her}, A = {{he, his}, {she, her}}, Y = {lawyer, doctor, dream, nurse}, U = {{lawyer, doctor}, {nurse}}. Our aim is to calibrate the output so that |P[o(x) ∈ {lawyer, doctor} | x ∈ {she, her}] − P[o(x) ∈ {lawyer, doctor}]| ≤ α and |P[o(x) ∈ {lawyer, doctor} | x ∈ {he, his}] − P[o(x) ∈ {lawyer, doctor}]| ≤ α. We can define V ≜ {(1, 1, 0, 0), (0, 0, 0, 1)} to be the set of indicator vectors of the sensitive attributes defined by U.
Setting G ≜ {1{x ∈ A}v : A ∈ A, v ∈ V} ∪ {−1{x ∈ A}v : A ∈ A, v ∈ V}, this problem can be cast in the GMC framework, which leads to the following theorem: Theorem 3. Assume that x is a prompt drawn uniformly from the given corpus, that h is given by any fixed language model, and that the size of the largest attribute set in U is upper bounded by B. Then, with a suitably chosen η = O(α/B), our algorithm halts after T = O(B/α²) iterations and outputs a function p satisfying, when o(x) ∼ p(x): sup_{A ∈ A, U ∈ U} |P(x ∈ A) · [P(o(x) ∈ U | x ∈ A) − P(o(x) ∈ U)]| ≤ α. For the finite-sample counterpart, applying Theorem 2, the sample complexity required in this setting is O((log(2|U||A|) + log(1/δ))/α²). 3.2 Prediction-Set Conditional Coverage in Hierarchical Classification Hierarchical classification is a machine learning task where the labels are organized in a hierarchical tree structure (Tieppo et al., 2022). More specifically, at the most granular level, predictions are made using labels on the leaves of the tree. These leaves are grouped together into semantically meaningful categories through their parent nodes, which are, in turn, grouped together through their parents, and so on up to the root of the tree. Such a tree structure allows us, when there is uncertainty as to the correct label, to predict intermediate nodes, which correspond to predicting sets of labels (the set of leaves descended from the intermediate node), giving us a way to quantify the uncertainty of our predictions. Our goal is to produce such set-valued predictions that have a uniform coverage rate conditional on the prediction we make, where a prediction set is said to "cover" the true label if the true label is a descendant of (or equal to) the node we predicted.
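The "cover" relation is simple to state in code. The following minimal sketch (the tree layout and helper names are assumptions, chosen to match the seven-node toy tree discussed below) checks whether a predicted node covers a true leaf label:

```python
# Hypothetical tree: leaves 1-4; node 5 and node 6 are their parents; 7 is
# the root. A predicted node "covers" a leaf label iff it equals that leaf
# or is one of its ancestors.
parent = {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7}

def ancestors(v):
    """All strict ancestors of v, walking upward to the root."""
    out = []
    while v in parent:
        v = parent[v]
        out.append(v)
    return out

def covers(v, y):
    return v == y or v in ancestors(y)
```

For instance, node 6 covers leaf 3 (it is an ancestor), while node 5 does not.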
For example, in a K-class hierarchical text classification problem, our input x ∈ X is a document and the label is a leaf node y on a classification tree with nodes V and edges E. For simplicity, set V = {1, 2, ..., |V|}, where the first K indices {1, 2, ..., K} denote leaf nodes (so the ground-truth label y ∈ {1, ..., K}). The tree is of depth H. For a given single-class classification model h : X → [0, 1]^K, let u(x) ≜ arg max_k h_k(x) denote the candidate with the highest score over all leaf nodes according to h. Here, u(x) corresponds to the most natural point prediction we might make given h. Figure 1: A demo of hierarchical text classification using a subset of labels from the Web of Science dataset (Kowsari et al., 2017). As a concrete example, in the tree diagram above, we map the set {1, 2, 3, 4, 5, 6, 7} to represent the categories: Green Building, Water Pollution, Cancer, Alzheimer's Disease, Civil, Medical, and Root. Consider a document x with the true label 'Cancer' and an initial model predicting scores h(x) = (0.1, 0.1, 0.5, 0.6). If we used the scores to make a point prediction, we would be incorrect: the highest-scoring label u(x) is 'Alzheimer's Disease', and is wrong: u(x) ≠ y. If we output the parent node ('Medical') instead, our prediction would be less specific (a larger prediction set, here corresponding to both 'Cancer' and 'Alzheimer's Disease'), but it would cover the true label. We would like to output nodes such that we obtain our target coverage rate (say 90%), without over-covering (say by always outputting 'Root', which would be trivial). Traditional conformal prediction methods (see Angelopoulos & Bates (2021) for a gentle introduction) give prediction sets that offer marginal guarantees of this sort, but not prediction-set conditional guarantees: i.e.
they offer that for 90% of examples, we produce a prediction set that covers the true label. Recent applications of multicalibration-related techniques (Jung et al., 2021; Gupta et al., 2022; Bastani et al., 2022; Jung et al., 2022; Deng et al., 2023; Gibbs et al., 2023) are able to give "group conditional" coverage guarantees, which offer (e.g.) 90% coverage as averaged over examples within each of a number of intersecting groups, but once again these methods are not able to offer prediction-set conditional guarantees. Prediction-set conditional guarantees promise that for each prediction set that we produce, we cover 90% of example labels, even conditional on the prediction set we offer. This precludes the possibility of our model being over-confident in some prediction sets and under-confident in others, as demonstrated in our experimental results. We now define some useful functional notation. Let A : V → V^H return the set of all the ancestor nodes of the input node. Let q : V × V → V compute the nearest common ancestor of its two input nodes. Let R : X → R^{|V|} be the function that computes, for each node i, R_i, the sum of the raw scores h(x) assigned to each leaf that is a descendant of node i (or node i itself if i is a leaf). When needed, we may randomize R by letting r_i(x) ≜ R_i(x) + ε_i(x), where ε(x) is an independent random variable with zero mean and constant variance. We define a natural method to choose a node o(x) to output given a scoring function h(x) and a threshold function λ(x). We define o(x) ≜ arg min_v {d(v) : v ∈ A(u(x)), r_v < λ(x)}, where d(v) denotes the depth of the node v in the tree. In other words, we output the highest ancestor i of u(x) (which we recall is the point prediction we would make given h alone) whose cumulative score r_i is below some threshold, which we will select to obtain some target coverage probability.
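As an illustration of this output rule, the sketch below (hypothetical helper names; the seven-node toy tree from Figure 1) computes the cumulative scores R_i and returns the shallowest admissible ancestor of u(x). If no node on the chain falls below the threshold, we fall back to the leaf u(x) itself, an edge case the text leaves unspecified:

```python
# Toy tree: leaves 1-4, parents 5 and 6, root 7 (names assumed).
parent = {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7}
leaves = {1, 2, 3, 4}

def chain(v):
    """Node v followed by all of its ancestors, leaf upward."""
    out = [v]
    while v in parent:
        v = parent[v]
        out.append(v)
    return out

def depth(v):
    return len(chain(v)) - 1  # the root has depth 0

def cumulative_scores(h):
    """R_i = sum of leaf scores h over leaves descended from node i."""
    R = {v: 0.0 for v in leaves | set(parent.values())}
    for leaf in leaves:
        for v in chain(leaf):
            R[v] += h[leaf - 1]
    return R

def o(h, lam):
    u = max(leaves, key=lambda k: h[k - 1])  # point prediction u(x)
    R = cumulative_scores(h)
    below = [v for v in chain(u) if R[v] < lam]
    # Shallowest admissible ancestor; fall back to the leaf itself.
    return min(below, key=depth) if below else u
```

On the worked example h = (0.1, 0.1, 0.5, 0.6), the threshold λ controls how far up the tree we move: λ = 1.0 keeps the leaf (node 4), while λ = 1.2 outputs its parent (node 6, 'Medical').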
Other natural choices of o(x) are possible; what follows uses this choice for concreteness, but is not dependent on the specific choice. Recall that an output covers the label if it is an ancestor of the label or the label itself. Our goal is to find a λ(x) such that the rate at which the output covers the label is roughly equal to a given target σ, not just overall, but conditional on the prediction set we output lying in various sets U ⊆ 2^V: |E_{(x,h,y)∼D}[1{o(x) ∈ U}(σ − 1{o(x) covers y})]| ≤ α, ∀U ∈ U. Back to our example, we can specify U in various ways. For example, we can set U = {{1, 2, 5}, {3, 4, 6}} to require equal coverage across the parent categories 'Civil' and 'Medical'. Or, we can set U = {{1}, {2}, ..., {6}, {7}} to obtain our target coverage rate σ conditionally on the prediction set we output, for all possible prediction sets we might output. We set G ≜ {1{o(x) ∈ U} : U ∈ U} ∪ {−1{o(x) ∈ U} : U ∈ U}, fitting this problem into our GMC framework: |E_{(x,h,y)∼D}[g(o(x))(σ − 1{o(x) covers y})]| ≤ α, ∀g ∈ G. Using Σ_{i=1}^K 1{r_{q(i,u(x))}(x) < λ} · 1{y = i} = 1{o(x) covers y} and applying Algorithm 1, we obtain the following theorem: Theorem 4. Assume: (1) for all u and all i ∈ V, f_{r_i|x}(u) ≤ K_p, where f_{r_i|x}(u) denotes the density function of r_i conditioned on x; (2) there exists a real number M > 0 such that for all i ∈ V, r_i ∈ [−M, M]. With a suitably chosen η = O(α/K_p), our algorithm halts after T = O(K_p M/α²) iterations and outputs a function λ satisfying that ∀U ∈ U, |E_{(x,h,y)∼D}[1{o(x) ∈ U}(σ − 1{o(x) covers y})]| ≤ α. Applying Theorem 2, we can see that the sample complexity for the finite-sample version of the algorithm is O((log(2|U|) + log(1/δ))/α²).
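To see what the Theorem 4 guarantee asks for empirically, the sketch below (function names and toy data are assumptions, not the paper's code) estimates the left-hand side |E[1{o(x) ∈ U}(σ − 1{o(x) covers y})]| from a sample of outputs and labels on the toy tree:

```python
# Toy tree: leaves 1-4, parents 5 and 6, root 7.
parent = {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7}

def covers(v, y):
    # Walk upward from the label until we hit v or run out of ancestors.
    while y != v and y in parent:
        y = parent[y]
    return y == v

def conditional_coverage_gap(outputs, labels, sigma, U):
    """Empirical |E[1{o(x) in U} (sigma - 1{o(x) covers y})]|."""
    T = len(outputs)
    total = sum(sigma - covers(o, y) for o, y in zip(outputs, labels) if o in U)
    return abs(total) / T

outputs = [6, 6, 4, 4]   # predicted nodes o(x_t) on four rounds
labels = [3, 1, 4, 3]    # true leaf labels y_t
gap = conditional_coverage_gap(outputs, labels, sigma=0.9, U={6})
```

Choosing U = {{v}} for each node v makes this exactly the per-prediction-set check; the framework's goal is to drive every such gap below α.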
3.3 Fair FNR Control in Image Segmentation In image segmentation, the input is an image of m = w × l (w for width and l for length) pixels, and the task is to distinguish the pixels corresponding to certain components of the image, e.g., tumors in a medical image, eyes in the picture of a face, etc. As pointed out by Lee et al. (2023), gender and racial biases have been observed when evaluating image segmentation models. Among the common evaluations of image segmentation, we consider the False Negative Rate (FNR), defined as (False Negatives)/(False Negatives + True Positives). In image segmentation, when O, O′ denote the sets of the actually selected segments and the predicted segments, respectively, FNR = 1 − |O ∩ O′|/|O|. We write x ∈ X to denote the input, which includes both image and demographic group information, and y ∈ {0, 1}^m to denote the label, which is a binary vector denoting the true inclusion of each of the m pixels. To yield the prediction of y, namely ŷ ∈ {0, 1}^m, a scoring function h(x) ∈ R^m and a threshold function λ(x) are needed, so that ŷ_i = 1{h_i(x) > λ(x)} for i ∈ [m]. As in Section 3.2, for technical reasons we may randomize h_i by perturbing it with a zero-mean random variable of modest scale. Our objective is to determine the threshold function λ(x). In the context of algorithmic fairness in image segmentation, one specific application is face segmentation, where the objective is to precisely identify and segment regions containing human faces within an image. The aim is to achieve accurate face segmentation while ensuring consistent levels of precision across various demographic groups defined by sensitive attributes, like gender and race. Thus, our objective is to determine the function λ(x) that ensures multi-group fairness in terms of the FNR, a natural multi-group fairness extension of the FNR control problem for image segmentation studied by Angelopoulos et al. (2023).
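For concreteness, the FNR of a thresholded segmentation can be computed as in this minimal pure-Python sketch (the variable names and toy scores are assumptions for illustration):

```python
def fnr(scores, labels, lam):
    """FNR = 1 - |O ∩ O'| / |O|, where O' = {i : h_i(x) > lam}."""
    true_pos = sum(1 for s, y in zip(scores, labels) if y == 1 and s > lam)
    positives = sum(labels)
    return 1.0 - true_pos / positives

scores = [0.9, 0.8, 0.3, 0.1]   # per-pixel scores h(x)
labels = [1, 1, 1, 0]           # true pixel labels y
# At threshold 0.5, the pixel with score 0.3 becomes a false negative.
```

Raising λ trades precision for a higher FNR, which is why the framework searches for a threshold function λ(x) rather than a fixed cutoff.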
Letting A be the set of sensitive subgroups of X, our goal is to ensure that the FNR across different groups is approximately σ for some prespecified σ > 0: |E_{(x,h,y)∼D}[1{x ∈ A}(1 − |O ∩ O′|/|O| − σ)]| ≤ α, ∀A ∈ A. We can write |O ∩ O′| = Σ_{i=1}^m y_i 1{h_i(x) > λ(x)}, so the objective is converted to sup_{A ∈ A} |E_{(x,h,y)∼D}[1{x ∈ A}(1 − (Σ_{i=1}^m y_i 1{h_i(x) > λ(x)})/(Σ_{i=1}^m y_i) − σ)]| ≤ α. Let s(λ, x, h, y) = 1 − (Σ_{i=1}^m y_i 1{h_i(x) > λ(x)})/(Σ_{i=1}^m y_i) − σ and G ≜ {±1{x ∈ A} : A ∈ A}. Rewriting the inequality, we get: sup_{g ∈ G} E_{(x,h,y)∼D}[g(λ(x), x) s(λ, x, h, y)] ≤ α. Cast in the GMC framework, we obtain the following result: Theorem 5. Assume: (1) for all i ∈ [m], |h_i| ≤ M for some universal constant M > 0; (2) the density function of h_i conditioned on x is upper bounded by some universal constant K_p > 0. Let A be the set of sensitive subgroups of X. Then with a suitably chosen η = O(α/K_p), the algorithm halts after T = O(K_p M/α²) iterations and outputs a function λ satisfying: |E_{(x,h,y)∼D}[1{x ∈ A}(1 − |O ∩ O′|/|O| − σ)]| ≤ α, ∀A ∈ A. Similar to the previous two applications, by applying Theorem 2 for the finite-sample version of the algorithm, the sample complexity required is O((log(2|A|) + log(1/δ))/α²). We note that equalizing false negative rates across groups can be achieved trivially by setting λ small enough that every pixel is predicted positive, so that the FNR is equalized (at 0), which would of course destroy the accuracy of the method. Thus when we set an objective like this, it is important to empirically show that not only does the method lead to low disparity across false negative rates, but that it does so without loss in accuracy. The experiments that we carry out in Section 4 indeed bear this out.
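The group-wise guarantee of Theorem 5 can likewise be checked from data. The sketch below (hypothetical names and made-up numbers) estimates the largest deviation sup_A |E[1{x ∈ A}(FNR − σ)]| over a collection of sensitive groups:

```python
def groupwise_fnr_gap(per_image_fnr, group_ids, groups, sigma):
    """Empirical sup_A |E[1{x in A} (FNR(x) - sigma)]| over sensitive groups."""
    T = len(per_image_fnr)
    gaps = []
    for A in groups:
        total = sum(f - sigma for f, g in zip(per_image_fnr, group_ids) if g in A)
        gaps.append(abs(total) / T)
    return max(gaps)

per_image_fnr = [0.1, 0.05, 0.2, 0.05]   # realized FNR per image
group_ids = ["m", "m", "f", "f"]         # sensitive group of each image
gap = groupwise_fnr_gap(per_image_fnr, group_ids, [{"m"}, {"f"}], sigma=0.075)
```

Note that because groups may intersect, an image can contribute to several terms of the supremum; the statistic above handles that automatically.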
4 Experiments In this section, we conduct numerical experiments and evaluate the performance of our algorithms within each application from both the fairness and accuracy perspectives. We compare the results with baseline methods to assess their effectiveness. The code can be found in the supplementary material. For more detailed experiment settings and additional results, please refer to Appendix D. 4.1 De-Biased Text Generation In text generation, we consider two datasets and run experiments separately. The first dataset is the corpus data from Liang et al. (2021), which extracts sentences with both terms indicative of biases (e.g., gender indicator words) and attributes (e.g., professions) from real-world articles. The second dataset is made up of synthetic templates based on combining words indicative of bias targets and attributes with simple placeholder templates, e.g., "The woman worked as ..."; "The man was known for ...", constructed in Lu et al. (2019). Then, we define two kinds of terms indicative of bias targets: female-indicator words and male-indicator words; we also define six types of attributes: female-adj words, male-adj words, male-stereotyped jobs, female-stereotyped jobs, pleasant words, and unpleasant words, by drawing on existing word lists in the fair text generation context (Caliskan et al., 2017; Gonen & Goldberg, 2019). Each input x is a sentence where sensitive attributes are masked. We use the BERT model (Devlin et al., 2018) to generate the initial probability distribution over the entire vocabulary for the word at the masked position, denoted by h(x). We then use our algorithm to post-process h(x) and obtain the function p(x), which is the calibrated probability of the output. We define two sets of prompts: let Afemale and Amale be the sets of prompts containing female-indicator and male-indicator words, respectively.
We aim to control the gender disparity gap |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| for A \u2208{Afemale, Amale}. Figure 2 plots the disparity gap for A = Amale (the result for A = Afemale is deferred to the appendix due to space constraints). It is evident that our post-processing technique effectively limits the disparity between the probabilities of outputting biased terms related to different gender groups, ensuring that it remains consistently below a specified threshold value of \u03b1 = 0.002 (we will further discuss the way of choosing \u03b1 in the Appendix D). Additionally, we assess the cross-entropy loss between the calibrated output distribution and the corresponding labels. Unlike the calibration set where sensitive words are deliberately masked, we randomly mask words during the cross-entropy test to evaluate the model\u2019s overall performance, extending beyond the prediction of sensitive words. The cross-entropy of the test set is 9.9291 before post-processing and 9.9285 after it, indicating that our algorithm does not reduce the accuracy of the model while reducing gender disparities. We would like to note that our algorithm is not designed to enhance accuracy but to improve fairness while ensuring that the performance of cross-entropy does not deteriorate too much. Figure 2: The bias on outputting different types of sensitive attributes measured on the corpus data. The results for the synthetic data are deferred to the appendix. 4.2 Prediction-Set Conditional Coverage in Hierarchical Classification For hierarchical classification, we use the Web of Science dataset (Kowsari et al., 2017) that contains 46, 985 documents with 134 categories including 7 parent categories. We choose HiAGM (Wang et al., 2022) as the network to generate the initial scoring. Our algorithm is then applied to find the threshold function that yields a fair output. 
We set our coverage target to be σ = 0.95 with a tolerance for coverage deviations of α = 0.025. Equivalently put, for each of the predictions, we aim to cover the true label with probability 95 ± 2.5%, even conditional on the prediction we make. We choose naively outputting the leaf node (denoted as "unprocessed" in the figure) as one baseline and the split conformal method (Angelopoulos et al., 2023) as another baseline. Figure 3 shows that our method achieves coverage within the target tolerance for all predictions, while the two baselines fail to satisfy the coverage guarantee for predicting 'CS' and 'Medical'. Figure 3: The deviation of prediction-set conditional coverage from the target. 4.3 Fair FNR Control in Image Segmentation We use the FASSEG (Khan et al., 2015) dataset and adopt the U-net (Ronneberger et al., 2015) network to generate the initial scoring function for each pixel, representing the predicted probability of this pixel corresponding to the signal. The dataset contains 118 human facial images and their semantic segmentations. We set our target FNR to be σ = 0.075 with a tolerance for deviations of α = 0.005, and calibrate the FNR across different gender subgroups and racial subgroups. In addition, we compare with the method proposed by Angelopoulos et al. (2023), which controls on-average FNR in a finite-sample manner based on the split conformal prediction method. The results yielded by U-net and the split conformal method are plotted as baselines for comparison in Figure 4. Our algorithm demonstrates its effectiveness, as the deviations of the FNRs of GMC from the target σ across all subgroups are controlled below α, while the baselines are found to perform poorly on the male and white subgroups. Since equalizing FNR does not necessarily imply accuracy, we compute the accuracy of our model's output together with that of the baselines.
The accuracy of our model, measured as the ratio of correctly predicted pixels to the total number of pixels, is 0.86. In comparison, the baseline models achieve an accuracy of 0.84 and 0.92, respectively. This result suggests that our algorithm empirically yields significant gains in mitigating FNR disparities without a significant sacrifice in accuracy. Figure 4: The deviation of the false negative rate from the target in image segmentation. 11" + } + ], + "Aaron Roth": [ + { + "url": "http://arxiv.org/abs/2402.08753v1", + "title": "Forecasting for Swap Regret for All Downstream Agents", + "abstract": "We study the problem of making predictions so that downstream agents who best\nrespond to them will be guaranteed diminishing swap regret, no matter what\ntheir utility functions are. It has been known since Foster and Vohra (1997)\nthat agents who best-respond to calibrated forecasts have no swap regret.\nUnfortunately, the best known algorithms for guaranteeing calibrated forecasts\nin sequential adversarial environments do so at rates that degrade\nexponentially with the dimension of the prediction space. In this work, we show\nthat by making predictions that are not calibrated, but are unbiased subject to\na carefully selected collection of events, we can guarantee arbitrary\ndownstream agents diminishing swap regret at rates that substantially improve\nover the rates that result from calibrated forecasts -- while maintaining the\nappealing property that our forecasts give guarantees for any downstream agent,\nwithout our forecasting algorithm needing to know their utility function.\n We give separate results in the ``low'' (1 or 2) dimensional setting and the\n``high'' ($> 2$) dimensional setting. In the low dimensional setting, we show\nhow to make predictions such that all agents who best respond to our\npredictions have diminishing swap regret -- in 1 dimension, at the optimal\n$O(\\sqrt{T})$ rate. 
In the high dimensional setting we show how to make\nforecasts that guarantee regret scaling at a rate of $O(T^{2/3})$ (crucially, a\ndimension independent exponent), under the assumption that downstream agents\nsmoothly best respond. Our results stand in contrast to rates that derive from\nagents who best respond to calibrated forecasts, which have an exponential\ndependence on the dimension of the prediction space.", + "authors": "Aaron Roth, Mirah Shi", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.LG" + ], + "main_content": "Introduction When acting in an adversarial environment, a popular way to evaluate the performance of a decision-making agent is through the lens of regret: Given the sequence of outcomes actually realized, and the sequence of actions chosen by the decision making agent, how much better could they have done had they followed some other policy rather than what the actually did? External regret compares the agent\u2019s utility to a very simple benchmark: the utility they could have obtained had they played a constant policy, that played the same \ufb01xed action at every round. Swap regret compares the agent\u2019s utility to a more demanding benchmark: the utility they could have obtained if, counterfactually, they had been able to go back and modify their play using an arbitrary swap function mapping actions to replacement actions in any consistent fashion. Swap regret is closely linked to correlated equilibrium \u2014 in particular, if all agents in a game have no swap regret, then the empirical distribution over their play is a correlated equilibrium (Foster and Vohra, 1997). There are e\ufb03cient algorithms that individual agents can use to guarantee themselves no swap regret in adversarial environments (e.g. Blum and Mansour (2007)). 
But these algorithms require some sophistication to run, and must be run separately for each agent, as they are dependent on the agent\u2019s utility function. It would be compelling if it were possible to make public forecasts of the payo\ufb00relevant state, such that less 1 \fsophisticated agents could simply best respond to these forecasts (as if they were correct), and be guaranteed that they will have no swap regret. These forecasts would be valuable universally to all agents, independently of what their utility functions are. A candidate solution to this problem is calibration, a \u201cself-consistency\u201d measure of forecast quality. Informally, calibrated predictions must be unbiased conditional on the value of the prediction itself. It is possible to make predictions in adversarial environments that satisfy the calibration condition (Foster and Vohra, 1998), and agents who best respond to calibrated predictions obtain diminishing swap regret (Foster and Vohra, 1997). However, this approach su\ufb00ers from a serious shortcoming. Even in 1-dimension, the best known algorithms obtain calibration (and hence downstream swap regret) at a rate of T 2/3 (Foster and Vohra, 1998; Okoroafor et al., 2023) \u2014 worse than the O( \u221a T) rates known to be optimal for swap regret (Blum and Mansour, 2007). In fact, it is known that obtaining \u221a T rates via calibration is impossible even in 1-dimension: Qiao and Valiant (2021) show an \u2126(T 0.528) lower bound on 1-dimensional calibration rates in adversarial environments. The situation only gets worse in higher dimensions \u2014 the best known rates for ddimensional forecasts scale as T 2d/(2d+1). A related notion of smooth calibration (Kakade and Foster, 2008; Foster and Hart, 2018) has similar guarantees under the assumption that downstream agents smoothly best respond (and has other desirable properties), but is not known to be obtainable at faster rates than traditional calibration. 
This suggests that we must look elsewhere if we wish to provide forecasts which can guarantee arbitrary downstream agents strong regret guarantees at reasonable rates. Recent work of Kleinberg et al. (2023) accomplishes exactly this. They abandon swap regret, and consider the simpler goal of guaranteeing downstream agents low external regret. They show that a weaker measure of forecast quality called \u201cU-calibration\u201d\u2014corresponding to minimizing regret over a restricted collection of proper scoring rules\u2014su\ufb03ces for external regret minimization for arbitrary downstream agents. Remarkably, they show how to make predictions that guarantee diminishing external regret at a rate of O( \u221a T) simultaneously for all downstream agents, thus bypassing the calibration lower bound. Given these results, one might wonder if sidestepping the shortcomings of calibration requires giving up on swap regret and settling for external regret. Put another way, is calibration necessary to achieve the stricter goal of guaranteeing low swap regret for all downstream agents? 1.1 Our Contributions In this paper, we show that calibration is not necessary for this goal. Recall that predictions are calibrated if they are unbiased conditional on the value of their own prediction. We show that by requiring our predictions to be unbiased over an alternative collection of events, we can obtain substantially improved swap regret bounds simultaneously for all possible agents. For a single agent, we know that they will obtain no swap regret if they best respond to forecasts that are unbiased conditional on events de\ufb01ned by their own best response correspondence (Haghtalab et al., 2023; Noarov et al., 2023), a weaker condition than full calibration\u2014but one that depends on their utility function. 
At a high level, we take advantage of the structure of best response functions to de\ufb01ne a collection of events such that forecasts that are unbiased with respect to these events su\ufb03ce to guarantee no swap regret for all downstream agents. Using the algorithm of Noarov et al. (2023) we are able to produce forecasts that achieve this bias condition at rates that improve dramatically over the best known rates for calibration. In the d = 1 dimensional setting we make forecasts that guarantee every best-responding downstream agent diminishing swap regret at the optimal rate of \u02dc O( \u221a T) (Theorem 3). We can extend this technique to the d = 2 dimensional setting while obtaining swap regret rates of \u02dc O(T 5/8) (Theorem 5). These results require no knowledge of either agent utility functions or the number of actions available to the agents. We then turn to the higher dimensional setting, which presents new challenges (informally, that the structure of arbitrary best response functions projected onto a \ufb01nite grid of predictions appears to become much more complex). In this higher (d > 2) dimensional setting, we take a di\ufb00erent approach, which requires two modi\ufb01cations to our setting. First, although we continue to assume no knowledge of the downstream agent\u2019s utility function, we now assume that we know an upper bound k on the number of actions that they have available to them. Second, rather than assuming that agents exactly best respond to our forecasts, we assume that they smoothly best respond \u2014 informally that they choose an approximate best response using 2 \fa mapping that is Lipchitz in our predictions. Quantal response e.g. satis\ufb01es this assumption; as we discuss, this is a behavioural assumption with extensive roots in the economics literature. 
Under these assumptions, we give an algorithm that for any dimension d > 2 guarantees that all downstream agents with at most k actions have swap regret diminishing at a rate of \u02dc O(T 2/3) (Theorem 7), with only a polynomial dependence on the dimension d. Under a perhaps less realistic behavioral assumption (that agents play best responses with respect to a discretization of their utility function), we show how to guarantee all downstream agents regret at the optimal rate of \u02dc O( \u221a T) for every dimension d (Theorem 6). This stands in sharp contrast to calibration bounds, which have a dimension dependence in their exponent. Our focus here is on achievable rates; as with calibration, the computational complexity of our algorithms scales exponentially with d, and achieving similar rates in high dimensional settings with computationally e\ufb03cient algorithms is in our view the most compelling open problem coming out of this work. 1.2 Related Work The idea of sequential calibration dates to Dawid (1982), and Foster and Vohra (1998) were the \ufb01rst to show that it is possible to maintain calibrated forecasts in a sequential adversarial setting. Foster and Vohra (1997) connected calibration to decision making in a game theoretic context, and showed that agents who best respond to calibrated forecasts of the utilities of their actions obtain diminishing internal (equivalently swap) regret and hence converge to correlated equilibrium. The correct \u201crates\u201d at which calibration error can be driven to 0 in an online setting has been an important open question; the best known rates in 1 dimension are T 2/3 and degrade exponentially with the dimension (Okoroafor et al., 2023). In 1-dimension, it is known that it is impossible to obtain \u221a T rates \u2014 Qiao and Valiant (2021) show a lower bound of \u2126(T 0.528). 
This is despite the fact that it is known that swap regret for a single agent can be obtained at a rate of O( \u221a T) (Blum and Mansour, 2007). Kakade and Foster (2008) and Foster and Hart (2018) introduce smooth calibration, and give \u2014 as we do \u2014 guarantees for agents who smoothly best respond. The main bene\ufb01t of smooth calibration is that it can be guaranteed with deterministic algorithms in adversarial environments (unlike calibration, which requires randomization) \u2014 and that players who smoothly best respond to smoothly calibrated forecasts converge to Nash (rather than correlated) equilibrium. The algorithms for smooth calibration do not converge at rates faster than the best known rates for calibration, however. The most closely related work is Kleinberg et al. (2023), who take the perspective of a forecaster whose goal is to guarantee low external regret for arbitrary downstream agents. They de\ufb01ne \u201cU-calibration\u201d as a measure of forecast quality that is necessary and su\ufb03cient to achieve this goal and provide algorithms that minimize U-calibration error. In comparison, our work gives stronger guarantees than low external regret, and we consider general d-dimensional prediction tasks, while their work focuses on 1-dimensional and multiclass predictions (i.e. predictions that are distributions over d outcomes, which is a special case of the setting we consider). This work is conceptually related to a recent line of work on decision calibration (Zhao et al., 2021) and omniprediction that has the goal of learning a single predictor that is a su\ufb03cient statistic to minimize any downstream loss function in some family, with respect to some benchmark family of functions (Gopalan et al., 2022, 2023; Hu et al., 2023; Globus-Harris et al., 2023). Most of this literature focuses on the batch setting, but Garg et al. (2024) give omnipredictors in the online adversarial setting. 
The omniprediction literature is focused on 1-dimensional binary valued prediction, whereas we are interested in high dimensional real valued prediction. Noarov et al. (2023) consider making high dimensional predictions that guarantee diminishing swap regret to downstream agents. We take direct inspiration from their work and make essential use of their forecasting algorithm. The main distinction is that Noarov et al. (2023) tailors their predictions to the best response functions of particular downstream agents, whereas our goal is to give guarantees that hold simultaneously for all downstream agents. In our main result for high-dimensional settings, we require agents to smoothly best respond, i.e. to choose actions randomly with probability proportional to their utility. This is consistent with the literature on smooth calibration (Kakade and Foster, 2008; Foster and Hart, 2018), but one may ask: is this natural behavior? To o\ufb00er some justi\ufb01cation, we note that our formulation of smooth best response \ufb01ts into a broader theory of quantal choice that studies probabilistic best responses, otherwise known as quantal 3 \fresponses (McFadden, 1976; McKelvey and Palfrey, 1995). The quantal choice model posits that agents are not perfectly rational; instead, they make slight errors in choosing their actions. 2 Model and Preliminaries 2.1 Predictions and Decisions We consider a repeated interaction between a learner, an adversary, and downstream agents. We use C \u2286Rd to denote a convex prediction space for a \ufb01nite dimension d. Without loss of generality, we consider C \u2286[0, 1]d (rescaling the predictions space degrades our bounds by only a multiplicative constant). In rounds t \u2208[T ], the learner interacts with an adversary as follows: 1. The adversary chooses a distribution over outcomes Yt \u2208\u2206C. 2. The learner produces a distribution over predictions pt \u2208\u2206C, from which a prediction \u02c6 yt is sampled. 3. 
The adversary reveals an outcome $y_t \sim Y_t$.

This interaction accumulates in a transcript $\pi_T$ of predictions $\hat{y}_1, \dots, \hat{y}_T$ and outcomes $y_1, \dots, y_T$. The learner's prediction at round $t$ is a (possibly randomized) function of the previous predictions $\hat{y}_1, \dots, \hat{y}_{t-1}$ and outcomes $y_1, \dots, y_{t-1}$. The adversary's choice of outcome at round $t$ can likewise depend on previous predictions and outcomes, but cannot depend on the learner's realization of randomness at round $t$.

We want to study the value of predictions for downstream agents. Upon observing a prediction $\hat{y}_t$, an agent chooses an action $a_t$ from a finite action set $A$. An agent's utility depends on the chosen action and the outcome: when the outcome is $y$, the agent receives utility $u(a_t, y)$ according to a bounded utility function $u : A \times C \to [0, 1]$. We will assume that utility functions are affine and Lipschitz-continuous in $y$.

Assumption 1. Fix a utility function $u : A \times C \to [0, 1]$. We assume that for every action $a \in A$, $u(a, y)$ is affine in $y$, i.e. $u(a, y) = \langle v_a, y \rangle + c_a$ for some $v_a \in \mathbb{R}^d$, $c_a \in \mathbb{R}$. Moreover, we assume that $u(a, y)$ is $L$-Lipschitz in $y$ in the $\ell_\infty$ norm: for any $y_1, y_2 \in C$, $|u(a, y_1) - u(a, y_2)| \le L \|y_1 - y_2\|_\infty$.

Remark 1. Note that by linearity of expectation, this is more general than considering expectation-maximizing agents who have arbitrary utility functions over $d$ discrete outcomes, with states/forecasts $y \in \Delta[d]$ corresponding to probability distributions over these $d$ outcomes. Thus, our setting generalizes the settings of e.g. Zhao et al. (2021); Kleinberg et al. (2023). We will later take advantage of the fact that it is no more restrictive to consider utility functions that are linear in $y$.

Observation 1.
Any utility function over a $d$-dimensional prediction space that is affine in $y$ can be equivalently expressed as a utility function over a $(d+1)$-dimensional prediction space that is linear in $y$: we simply set the $(d+1)$st coordinate of the outcome/prediction to always take value 1. This preserves the convexity of the outcome/prediction space.

We measure agent regret by comparing the utility the agent obtains from the sequence of chosen actions against the counterfactual utility they would have received under another sequence of actions. We focus on swap regret, where the counterfactual action sequence is obtained by applying an action modification rule to the realized sequence of chosen actions.

Definition 1 (Swap Regret). Fix an agent with action set $A$ and utility function $u$. For a sequence of actions $a_1, \dots, a_T \in A$ and outcomes $y_1, \dots, y_T$, the agent's swap regret with respect to an action modification rule $\phi : A \to A$ is:
$$\mathrm{Reg}(u, \phi) = \frac{1}{T} \sum_{t=1}^{T} \big( u(\phi(a_t), y_t) - u(a_t, y_t) \big)$$
More generally, when an agent plays distributions over actions $q_1, \dots, q_T \in \Delta A$, we measure swap regret to $\phi$ by:
$$\mathrm{Reg}(u, \phi) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{a \sim q_t} \big[ u(\phi(a), y_t) - u(a, y_t) \big]$$
We say that an agent has swap regret $\alpha$ if for any action modification rule $\phi$, $\mathrm{Reg}(u, \phi) \le \alpha$.

The objective, from the learner's perspective, is to produce a sequence of predictions such that any agent responding to these predictions as if they were correct experiences low swap regret, regardless of their utility function.

2.2 Conditionally Unbiased Predictions

While we ultimately evaluate predictions based on the utility obtained by downstream agents who act according to our predictions, we will ask for predictions to satisfy some intermediate properties along the way, in particular unbiasedness subject to a collection of events.
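As an illustration of Definition 1 (a sketch, not code from the paper), note that the maximization over modification rules $\phi$ decomposes action by action: for each played action $a$, the best swap target is whichever action maximizes total utility over the rounds where $a$ was played. The helper names below are our own.

```python
# Swap regret of a realized action/outcome sequence, as in Definition 1.
# The max over modification rules phi decomposes per played action: for each
# action a, pick the single best replacement over the rounds where a was played.
def swap_regret(actions, outcomes, u, action_set):
    T = len(actions)
    regret = 0.0
    for a in action_set:
        rounds = [t for t in range(T) if actions[t] == a]
        if not rounds:
            continue
        base = sum(u(a, outcomes[t]) for t in rounds)
        best = max(sum(u(b, outcomes[t]) for t in rounds) for b in action_set)
        regret += best - base
    return regret / T

# Example: two actions, 1-dimensional outcomes, affine utility.
u = lambda a, y: a * y + (1 - a) * (1 - y)
print(swap_regret([1, 1, 0, 0], [0.0, 0.0, 1.0, 1.0], u, [0, 1]))  # 1.0
```

Here the agent plays each action exactly when it is worst, so swapping $1 \mapsto 0$ and $0 \mapsto 1$ gains utility 1 every round, giving average swap regret 1.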
Recall that calibration can be summarized as unbiasedness conditional on events corresponding to the predictions themselves. We will make use of a relaxation which asks for unbiasedness subject to a more coarsely defined collection of events. Our algorithms derive from the general framework developed by Noarov et al. (2023). For the remainder of this section, we introduce the key tools we use.

First, we formalize the notion of events and conditional bias.

Definition 2 (Conditional Bias on Events). Fix a transcript $\pi_T$. Let $\mathcal{E}$ be a collection of events, where each event is defined by a subsequence indicator function $E : C \to \{0, 1\}$. Let $n_T(E) = \sum_{t=1}^{T} \mathbb{E}_{\hat{y}_t \sim p_t}[E(\hat{y}_t)]$ denote the (expected) frequency of event $E$ up to round $T$. We say that $\pi_T$ has bias $\alpha$ conditional on $\mathcal{E}$, for some $\alpha : \mathbb{R} \to \mathbb{R}$, if for all $E \in \mathcal{E}$:
$$\frac{1}{T} \left\| \sum_{t=1}^{T} \mathbb{E}_{\hat{y}_t \sim p_t}\big[ E(\hat{y}_t)(\hat{y}_t - y_t) \big] \right\|_\infty \le \alpha(n_T(E))$$

Noarov et al. (2023) show how to achieve unbiased predictions at favorable rates conditional on any collection of events. The algorithm requires the learner to randomize over a finite prediction space (we defer the details to Noarov et al. (2023)). We next define a discretized prediction space.

Definition 3 ($\varepsilon$-net). We say $C_\varepsilon \subset C$ is an $\varepsilon$-net of $C$ in the $\ell_\infty$ norm if for every $y \in C$, there exists $y_\varepsilon \in C_\varepsilon$ such that $\|y - y_\varepsilon\|_\infty \le \varepsilon$.

Observe that we can always obtain an $\varepsilon$-net of size $|C_\varepsilon| \le \left(\frac{1}{\varepsilon}\right)^d$ by simply discretizing each coordinate into multiples of $\varepsilon$. When the learner produces predictions from a discretized prediction space $C_\varepsilon$, Noarov et al. (2023) show that we can make predictions with low bias on any collection of events, with bias scaling logarithmically in the number of events.
In our results, we will make direct use of the algorithm given by Noarov et al. (2023).

Theorem 1 (Noarov et al., 2023). For a collection of events $\mathcal{E}$ and convex prediction/outcome space $C$, there is an algorithm producing predictions $p_1, \dots, p_T \in \Delta C_\varepsilon$ such that for any sequence of outcomes $y_1, \dots, y_T \in C$ chosen by the adversary, the bias conditional on each $E \in \mathcal{E}$ is at most:
$$\alpha(n_T(E)) \le O\left( \frac{\ln(d|\mathcal{E}|T) + \sqrt{\ln(d|\mathcal{E}|T) \cdot n_T(E)}}{T} + \varepsilon \right) \le O\left( \sqrt{\frac{\ln(d|\mathcal{E}|T)}{T}} + \varepsilon \right)$$
For $\varepsilon \le 1/\sqrt{T}$, the bias is at most $O\left( \sqrt{\ln(d|\mathcal{E}|T)/T} \right)$. The algorithm can be implemented with per-round running time scaling polynomially in $d$, $|\mathcal{E}|$, and $T$.

3 Predictions in Low Dimensions

In this section we show how to use the framework of unbiased prediction to produce predictions for all possible agents in the one- and two-dimensional cases (Theorem 3 and Theorem 5, respectively). To accomplish this, we build on a key result of Noarov et al. (2023) connecting conditionally unbiased predictions to the swap regret of downstream agents. For a particular instantiation of events, namely the best-response correspondences of agents, conditional bias has a natural implication for swap regret. Next we define an agent's best response and the corresponding best-response events.

Definition 4 (Best Response). Consider an agent with utility function $u$ over an action set $A$ and prediction space $C$. The agent's best response to $y \in C$ according to their utility function $u$ is the action $a = \mathrm{BR}(u, y)$, where $\mathrm{BR}(u, y) = \arg\max_{a \in A} u(a, y)$.

Definition 5 (Best Response Events). Fix an action set $A$ and utility function $u$. For any action $a \in A$, we define its corresponding best-response event:
$$E_{u,a}(y) = \mathbb{1}[\mathrm{BR}(u, y) = a]$$
A sufficient condition for minimizing swap regret for any agent is minimizing bias conditional on their best-response events.
More precisely, an agent with action set $A$ and utility function $u$ achieves low swap regret when they best respond to predictions that are unbiased subject to the collection of events $\mathcal{E}_u = \{E_{u,a}\}_{a \in A}$. Informally, this unbiasedness condition ensures that for every action $a$, the agent's estimates of their utility are (on average) correct whenever they choose to take action $a$. Therefore, since agents pick their utility-maximizing action, they must be choosing the action that is best (on average) conditional on that choice, and thus have no swap regret. We state the resulting swap regret bound in the next theorem but defer further details of the argument to Noarov et al. (2023).

Theorem 2 (Noarov et al., 2023). Fix a transcript $\pi_T$. Consider an agent with action set $A$ and utility function $u$ who, at every round $t \in [T]$, takes action $a_t = \mathrm{BR}(u, \hat{y}_t)$, where $\hat{y}_t \sim p_t$. Let $E_{u,a}$ be the best-response event corresponding to $u$ and $a \in A$. Then, if $\pi_T$ has bias $\alpha$ conditional on events $\mathcal{E}$ such that $\{E_{u,a}\}_{a \in A} \subseteq \mathcal{E}$, the agent has swap regret bounded by:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le 2L \sum_{a \in A} \alpha(n_T(E_{u,a}))$$
For any concave $\alpha$, this is at most $\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le 2L|A|\,\alpha(T/|A|)$. In particular, if we plug in the bound obtained in Theorem 1, then we have:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( L \sqrt{\frac{|A| \ln(d|\mathcal{E}|T)}{T}} \right)$$
This implies that given any set of utility functions $U$, running the algorithm of Theorem 1 with the corresponding collection of best-response events $\{E_{u,a} : u \in U, a \in A\}$ promises any agent regret at most $O\left( L\sqrt{|A| \ln(d|U||A|T)/T} \right)$, as long as their utility function belongs to $U$. Our present goal is to promise low swap regret with respect to any utility function an agent may possess.
We will approach this by considering the augmented collection of best-response events corresponding not to any particular set $U$, but to the set of all possible utility functions. If our predictions are unbiased subject to all possible best-response events, then we can be certain that our predictions are unbiased subject to the best-response events of any particular agent. However, recall that the bounds on conditional bias given by Theorem 1 scale with the number of events. Thus, we must be careful to ensure that we define a reasonably sized collection of events. We show that for one- and two-dimensional prediction spaces, the collection of best-response events is not too large, and hence we can derive meaningful swap regret bounds. To do so, we will take advantage of a key structural property of the best-response function, namely that its level sets are convex.

Lemma 1. For any utility function $u$ that is affine in $y$, the level set of the corresponding best-response function $\{y : \mathrm{BR}(u, y) = a\}$ is convex for all $a \in A$; i.e., for any $y_1, y_2 \in C$ and any $\lambda \in [0, 1]$, if $\mathrm{BR}(u, y_1) = \mathrm{BR}(u, y_2) = a$, then $\mathrm{BR}(u, \lambda y_1 + (1 - \lambda) y_2) = a$.

Proof. We use the definition of an affine function to compute, for any alternate action $a' \in A$:
$$u(a, \lambda y_1 + (1-\lambda) y_2) = \langle v_a, \lambda y_1 + (1-\lambda) y_2 \rangle + c_a = \lambda \langle v_a, y_1 \rangle + (1-\lambda) \langle v_a, y_2 \rangle + c_a = \lambda u(a, y_1) + (1-\lambda) u(a, y_2) \ge \lambda u(a', y_1) + (1-\lambda) u(a', y_2) = u(a', \lambda y_1 + (1-\lambda) y_2)$$
where the inequality follows by definition of the best-response function and the final equality follows from applying the same expansion to $a'$. Hence, $a = \mathrm{BR}(u, \lambda y_1 + (1-\lambda) y_2)$, proving the claim.

In other words, the set of predictions that activate a best-response event forms a convex set.
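Lemma 1 can be checked empirically (a sketch of our own, not from the paper): for a random affine utility $u(a, y) = \langle v_a, y \rangle + c_a$, the best response is constant along any segment whose endpoints share the same best response.

```python
# Empirical check of Lemma 1: best-response level sets of an affine utility
# are convex, so BR agrees at convex combinations of same-BR points.
import numpy as np

rng = np.random.default_rng(0)
d, k = 3, 4
V = rng.random((k, d))        # rows are the vectors v_a
c = rng.random(k)             # offsets c_a

def br(y):
    # BR(u, y) = argmax_a <v_a, y> + c_a
    return int(np.argmax(V @ y + c))

for _ in range(1000):
    y1, y2 = rng.random(d), rng.random(d)
    if br(y1) == br(y2):
        lam = rng.random()
        assert br(lam * y1 + (1 - lam) * y2) == br(y1)
print("level sets behaved convexly on all sampled segments")
```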
As a result, we can enumerate all possible best-response events (across all possible utility functions) by enumerating all convex hulls of predictions in our prediction space. Our results in this section proceed by bounding the number of such convex hulls when the prediction space is an $\varepsilon$-net of $C$ (as required by the algorithm of Theorem 1). Given a bound on the number of such convex hulls, an application of Theorem 2 bounds the swap regret of any downstream agent.

The $d = 1$ case is particularly elegant: convex hulls over any finite set of grid points on the unit interval $C_\varepsilon \subseteq [0, 1]$ are simply intervals with endpoints $y_1, y_2 \in C_\varepsilon$, i.e., they can be specified by two points in $C_\varepsilon$. Hence, the number of convex hulls, and thus the number of events we must define, is only quadratic in the inverse discretization parameter $1/\varepsilon$. We show that by setting $\varepsilon$ to optimally trade off between the number of events and the precision of predictions, we can guarantee the optimal swap regret rate of $O(\sqrt{T})$ (up to logarithmic factors) simultaneously for every downstream agent.

Theorem 3. Consider the prediction/outcome space $C = [0, 1]$ and let $C_\varepsilon$ be an $\varepsilon$-net of $C$. Let $\mathcal{E} = \{E_{y_1, y_2} : y_1, y_2 \in C_\varepsilon\}$ be the collection of events $E_{y_1, y_2}(y) = \mathbb{1}[y \in [y_1, y_2]]$. Then, using the algorithm given by Noarov et al. (2023) and setting $\varepsilon = 1/\sqrt{T}$, an agent who best responds according to action set $A$ and utility function $u$ obtains swap regret bounded by:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( L|A| \sqrt{\frac{\ln T}{T}} \right)$$
Moreover, the forecasts can be made with per-round running time polynomial in $T$.

Proof. As we remarked above, convex sets in one dimension are simply intervals. Therefore, given that our predictions lie on a finite $\varepsilon$-net, it follows from Lemma 1 that for any utility function $u$ and any action $a$, the best-response event $E_{u,a}$ exactly corresponds to an event $E_{y_1, y_2} \in \mathcal{E}$.
That is, for any action $a \in A$, there exist $y_1, y_2 \in C_\varepsilon$ such that for all $y \in C_\varepsilon$, $\mathrm{BR}(u, y) = a$ if and only if $E_{y_1, y_2}(y) = 1$. Thus, to prove the regret bound, all we need is to determine $|\mathcal{E}|$. Since $|C_\varepsilon| \le \frac{1}{\varepsilon}$, we have that $|\mathcal{E}| \le \frac{1}{\varepsilon^2}$. Plugging this into Theorem 1 and using our setting of $\varepsilon$, we have that the bias conditional on every event $E_{y_1, y_2} \in \mathcal{E}$ is bounded by:
$$\alpha(n_T(E_{y_1,y_2})) \le O\left( \sqrt{\frac{\ln(T/\varepsilon^2)}{T}} + \varepsilon \right) = O\left( \sqrt{\frac{\ln T}{T}} \right)$$
Thus, by Theorem 2, any agent with utility function $u$ and action set $A$ has swap regret at most:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( L|A| \sqrt{\frac{\ln T}{T}} \right)$$

For $d = 1$, the set of all convex hulls over a discrete prediction space was particularly simple: the set of intervals over the same discretization. In higher dimensions, characterizing the set of all convex hulls of discrete points is considerably more difficult. The main ingredient for our result in the $d = 2$ case is the following theorem of Ivić et al. (1994), which bounds the number of convex polygons with vertices in a two-dimensional grid. Taking our predictions to lie in such a grid, a bound on the number of possible best-response events follows directly from this result.

Theorem 4 (Ivić et al., 1994). Consider an integer grid of size $m \times m$. Let $D(m)$ denote the number of convex polygons whose vertices have integer coordinates (i.e., the number of convex hulls of grid points) that are distinct up to translation. Then, there exists a constant $C$ such that $\ln D(m) \le C m^{2/3}$.

Remark 2. To count the total number $I(m)$ of convex polygons with integer vertices in the $m \times m$ integer grid, observe that there are at most $m^2$ possible translations of any polygon in the grid to another polygon in the grid with integer vertices. Thus, $I(m) \le m^2 D(m)$.

Theorem 5. Consider the prediction space $C = [0, 1]^2$ and let $C_\varepsilon$ be an $\varepsilon$-net of $C$ obtained by discretizing each coordinate to multiples of $\varepsilon$.
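The interval structure exploited in the proof of Theorem 3 is easy to see numerically (our own sketch, not the paper's code): on a one-dimensional $\varepsilon$-net, the grid points on which any fixed action is the best response always form a contiguous interval.

```python
# Sketch of the d = 1 structure behind Theorem 3: the best-response region of
# each action of a random affine utility is a contiguous block of grid points.
import numpy as np

rng = np.random.default_rng(1)
eps = 0.01
grid = np.arange(0.0, 1.0 + 1e-12, eps)

# random affine utilities u(a, y) = v_a * y + c_a over 5 actions
v, c = rng.random(5), rng.random(5)
br = np.array([int(np.argmax(v * y + c)) for y in grid])

for a in set(br.tolist()):
    idx = np.flatnonzero(br == a)
    assert np.all(np.diff(idx) == 1), "best-response region is not an interval"
print("each best-response event is an interval of grid points")
```

Geometrically, the utilities are lines in $y$, and the region where one line attains the upper envelope is an interval; this is exactly why $|\mathcal{E}| \le 1/\varepsilon^2$ events suffice.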
Let $P_\varepsilon$ be the set of all convex polygons with vertices belonging to $C_\varepsilon$. Define $E_P(y) = \mathbb{1}[y \in P]$ to be the event indicating whether $y$ lies in the polygon $P$. Then, using the algorithm of Noarov et al. (2023) with $\mathcal{E} = \{E_P\}_{P \in P_\varepsilon}$ and $\varepsilon = \frac{1}{T^{3/8}}$, an agent with any action set $A$ and utility function $u$ who best responds to the forecasts obtains swap regret bounded by:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( \frac{L|A| \sqrt{\ln T}}{T^{3/8}} \right)$$

Proof. By construction of $C_\varepsilon$, our predictions lie in a $\frac{1}{\varepsilon} \times \frac{1}{\varepsilon}$ grid, and so $|\mathcal{E}| = I(\frac{1}{\varepsilon})$. Plugging into Theorem 1, it follows from Theorem 4 and Remark 2 that the bias conditional on every event $E_P \in \mathcal{E}$ is at most:
$$\alpha(n_T(E_P)) \le O\left( \sqrt{\frac{\ln\big(I(\tfrac{1}{\varepsilon})\,T\big)}{T}} + \varepsilon \right) \le O\left( \sqrt{\frac{\ln\big(\tfrac{T}{\varepsilon^2}\,D(\tfrac{1}{\varepsilon})\big)}{T}} + \varepsilon \right) \le O\left( \sqrt{\frac{\tfrac{1}{\varepsilon^{2/3}} \ln(\tfrac{T}{\varepsilon^2})}{T}} + \varepsilon \right)$$
For $\varepsilon = \frac{1}{T^{3/8}}$, we get $\alpha(n_T(E_P)) \le O\left( \frac{\sqrt{\ln T}}{T^{3/8}} \right)$. By Theorem 2, we can conclude that the agent has swap regret at most:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( \frac{L|A| \sqrt{\ln T}}{T^{3/8}} \right)$$

4 Predictions in Higher Dimensions

In the previous section we showed that by taking advantage of nice structural properties of best-response correspondences, we can, in one- and two-dimensional settings, straightforwardly enumerate the set of all best-response events across all possible downstream utility functions. Making predictions that are unbiased with respect to this set yields low swap regret for all possible downstream agents who best respond to the forecasts. Turning now to the general setting of $d$-dimensional predictions, this approach is no longer tractable.
This is because our previous results crucially rely on the fact that in low dimensions, convex regions over grid points, and correspondingly the best-response events induced by predictions made over grid points, have bounded complexity. Most saliently, convex regions in one dimension are specified by just two points. In two dimensions, convex regions on grid points are already substantially more complex, although fortunately we could exploit a geometric result bounding their number. In higher dimensions, we cannot in general expect to furnish a reasonably sized collection of events in this way: naively, the number of convex hulls taken over $\left(\frac{1}{\varepsilon}\right)^d$ discrete predictions is doubly exponential in $d$ (recall that we can only obtain bias that depends logarithmically on the number of events).

Thus, to solve the general case, we take a different approach: we enumerate all possible utility functions, up to some discretization, and then define events corresponding to the responses of agents whose utility functions lie within this discretization. Recall that the bias obtained by Theorem 1 depends on the number of conditioning events. The space of utility functions is continuously large, and so in order to obtain reasonable bounds, we cannot discretize the utility functions too finely. When we restrict our attention to this finite net of utility functions, we can only hope to approximate the utility function of any particular agent. This is problematic because the best-response function is discontinuous: best-response events corresponding to the discretized set of utility functions are not guaranteed to be close to an agent's actual best responses, even if their utility function is close to one of the discretized utility functions.
We illustrate this in the following lemma, which states that there are utility functions such that even as the difference between their payoffs approaches 0, the biases conditional on their best-response events differ by a constant, and the cumulative swap regrets of best-responding agents differ by $\Omega(T)$.

Lemma 2. Fix $\delta \in (0, 0.5)$. Consider the prediction and outcome space $C = [0, 1]$. There exist utility functions $u, \tilde{u}$ and a sequence of predictions $\hat{y}_1, \dots, \hat{y}_T$ and outcomes $y_1, \dots, y_T$ such that $|u(a, y) - \tilde{u}(a, y)| \le \frac{2\delta}{1 + 2\delta}$ for all $a \in A, y \in C$, but $\alpha(E_{u,a}) = 0$ while $\alpha(E_{\tilde{u},a}) = 0.5 + \delta$, where $E_{u,a}$ and $E_{\tilde{u},a}$ denote the best-response events corresponding to $u$ and $\tilde{u}$, respectively. Furthermore, an agent who takes action $a_t = \mathrm{BR}(\tilde{u}, \hat{y}_t)$ for all $t \in [T]$ accumulates average swap regret $\max_{\phi : A \to A} \mathrm{Reg}(\tilde{u}, \phi) = 1$.

Proof. Consider an agent who chooses from two actions $A = \{0, 1\}$. We will define $u$ so that the best-response function induced by $u$ is a threshold at $y = 0.5 - \delta$: the agent chooses action 1 if and only if $y \ge 0.5 - \delta$ (we assume tie-breaking in favor of action 1). Similarly, $\tilde{u}$ will induce a threshold at $y = 0.5$. Formally, we can write $u(a, y) = \frac{1}{1 + 2\delta}\big( a(y + \delta) + (1 - a)(1 - y - \delta) + \delta \big)$ and $\tilde{u}(a, y) = ay + (1 - a)(1 - y)$. It is easy to check that both utility functions are affine in $y$.
Moreover, we can compute their absolute difference. For $a = 1$, we have that
$$|u(1, y) - \tilde{u}(1, y)| = \left| \frac{y + 2\delta}{1 + 2\delta} - y \right| = \left| \frac{2\delta}{1 + 2\delta} - y\left(\frac{2\delta}{1 + 2\delta}\right) \right| \le \frac{2\delta}{1 + 2\delta}$$
and for $a = 0$, we have that
$$|u(0, y) - \tilde{u}(0, y)| = \left| \frac{1 - y}{1 + 2\delta} - (1 - y) \right| = \left| y\left(\frac{2\delta}{1 + 2\delta}\right) - \frac{2\delta}{1 + 2\delta} \right| \le \frac{2\delta}{1 + 2\delta}$$
Now, consider a sequence of predictions where $\hat{y}_t = 0.5 - \delta$ for odd $t$ and $\hat{y}_t = 0.5 + \delta$ for even $t$. For odd $t$, the outcome is $y_t = 1$, and for even $t$, the outcome is $y_t = 0$. An agent who best responds according to $u$ takes action $a_t = 1$ at every round. Thus, the bias conditional on the best-response event corresponding to $u$ is the bias taken over the entire sequence, and we can see that
$$\alpha(E_{u,1}) = \left| \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}[\mathrm{BR}(u, \hat{y}_t) = 1](\hat{y}_t - y_t) \right| = 0$$
On the other hand, an agent who best responds according to $\tilde{u}$ takes action $a_t = 0$ on odd rounds and action $a_t = 1$ on even rounds. In this case, observe that the bias conditional on the best-response events corresponding to $\tilde{u}$ is $\alpha(E_{\tilde{u},0}) = \alpha(E_{\tilde{u},1}) = 0.5 + \delta$. Furthermore, $\tilde{u}(a_t, y_t) = 0$ for all $t$, while applying a counterfactual swap would grant the agent utility 1 at every round. The claim then follows.

To circumvent this issue, we consider agents who play a continuous approximation to their best response, which will, intuitively speaking, smooth out the discontinuities of strict best responding. As we will discuss, this modelling of agent behavior is well-founded in an extensive line of work on quantal choice in the economics literature.
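The construction in the proof of Lemma 2 is small enough to simulate (a sketch of our own; here we normalize each event's bias by the number of rounds on which it fires, matching the per-event figures stated in the lemma):

```python
# Simulation of Lemma 2's construction with delta = 0.1: two payoff-close
# utilities whose best-response events carry very different conditional bias,
# and under which the tilde-agent suffers maximal swap regret.
delta, T = 0.1, 1000

ut = lambda a, y: a * y + (1 - a) * (1 - y)      # u-tilde from the proof
br_u  = lambda y: 1 if y >= 0.5 - delta else 0   # u's threshold at 0.5 - delta
br_ut = lambda y: 1 if y >= 0.5 else 0           # u-tilde's threshold at 0.5

preds    = [0.5 - delta if t % 2 == 0 else 0.5 + delta for t in range(T)]
outcomes = [1.0 if t % 2 == 0 else 0.0 for t in range(T)]

def bias(br, action):
    terms = [(p - y) for p, y in zip(preds, outcomes) if br(p) == action]
    return abs(sum(terms)) / len(terms)          # normalized per event occurrence

print(bias(br_u, 1))     # ~0: u's single best-response event is unbiased
print(bias(br_ut, 1))    # ~0.6 = 0.5 + delta: u-tilde's events are badly biased

# u-tilde earns 0 every round, while swapping both actions would earn 1:
realized = sum(ut(br_ut(p), y) for p, y in zip(preds, outcomes)) / T
swapped  = sum(ut(1 - br_ut(p), y) for p, y in zip(preds, outcomes)) / T
print(realized, swapped)  # 0.0 1.0
```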
Our main result in this section is Theorem 7, which states that there is an algorithm guaranteeing diminishing swap regret at a rate of $\tilde{O}(T^{2/3})$ for any possible agent with $k$ actions who smoothly best responds to our forecasts, in any dimension $d$. We remark that while this result substantially improves upon what is achievable via calibrated predictions, we do not recover the optimal swap regret bound of $O(\sqrt{T})$. Along the way, however, we show (in Theorem 6) how to recover $O(\sqrt{T})$ swap regret for agents who respond in a slightly different, but perhaps less realistic, manner: agents who act by best responding not to their own utility function, but to the closest utility function in an appropriately defined $\delta$-cover of utility functions. We remark that, unlike in our previous results, our algorithms in this section require as input a parameter $k$ that serves as an upper bound on the number of actions available to any downstream agent to whom we give guarantees.

4.1 Discretizing the Space of Utility Functions

We begin by describing a suitable discretization of utility functions. Recall that to linearize a utility function that is affine over a $d$-dimensional prediction space, we can simply augment the prediction space by one dimension and consider instead the $(d+1)$-dimensional prediction space $C \subseteq [0, 1]^{d+1}$, where every $y \in C$ takes value 1 in its last coordinate. We will overload notation by extending affine utility functions over a $d$-dimensional prediction space to linear functions over a $(d+1)$-dimensional prediction space in this way, and henceforth restrict our attention to utility functions that are linear in $y$. We will choose a discretization that approximates any possible utility function within a distance of $\delta$.

Definition 6 ($\delta$-Cover of Utility Functions). Let $U$ be a set of utility functions.
We say that $U_\delta$ is a $\delta$-cover of $U$ if for every $u \in U$, there exists $u_\delta \in U_\delta$ such that for every $a \in A$ and $y \in C$, $|u(a, y) - u_\delta(a, y)| \le \delta$.

Lemma 3. Let $U^k$ be the set of utility functions over an action set $A$ of size at most $k$ and a prediction space $C \subseteq [0, 1]^{d+1}$ that are linear in their second argument. For some parameter $\delta > 0$, let $\delta' = \delta(d + 1)$. Then, there is a $\delta'$-cover of $U^k$, denoted $U^k_{\delta'}$, with $|U^k_{\delta'}| \le \left(\frac{1}{\delta}\right)^{k(d+1)}$.

Proof. Consider any $u \in U^k$. By linearity of $u$ in its second argument, for any $i \in A$ we can write $u(i, y) = \langle v^i, y \rangle$ for some $v^i \in [0, 1]^{d+1}$ (note that the entries of $v^i$ must be bounded within $[0, 1]$, since $u(i, y)$ is bounded within $[0, 1]$). In other words, since $|A| \le k$, $u$ is parameterized by a set of at most $k$ vectors $v^1, \dots, v^k \in [0, 1]^{d+1}$.

We construct a $\delta'$-cover of $U^k$ by considering a discretized set of representative vectors. Specifically, let $V_\delta \subseteq [0, 1]^{d+1}$ be a $\delta$-net of $[0, 1]^{d+1}$; that is, for every $v \in [0, 1]^{d+1}$ there exists $v_\delta \in V_\delta$ such that $\|v - v_\delta\|_\infty \le \delta$. Observe that we can obtain such a $\delta$-net by discretizing each coordinate to multiples of $\delta$; thus, $|V_\delta| \le \left(\frac{1}{\delta}\right)^{d+1}$. Define $U^k_{\delta'}$ to be the set of utility functions parameterized by discretized vectors $v^1_\delta, \dots, v^k_\delta \in V_\delta$: given any $u_{\delta'} \in U^k_{\delta'}$, we have for every $i \in A$ that $u_{\delta'}(i, y) = \langle v^i_\delta, y \rangle$ for some $v^i_\delta \in V_\delta$. By the fact that $V_\delta$ is a $\delta$-net, it follows that for any $u \in U^k$ parameterized by $v^1, \dots, v^k$, we can find a nearby $u_{\delta'} \in U^k_{\delta'}$ parameterized by $v^1_\delta, \dots, v^k_\delta$ such that $\|v^i - v^i_\delta\|_\infty \le \delta$ for all $i \in [k]$.
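The covering step in Lemma 3 amounts to snapping each utility vector coordinatewise to a $\delta$-grid; a quick sketch (our own illustration, not the paper's code) confirms the resulting utilities stay within $\delta(d+1)$ of the originals:

```python
# Sketch of the covering argument in Lemma 3: snap each utility vector v^i to
# the nearest delta-grid point and check |u(i, y) - u_snap(i, y)| <= delta*(d+1)
# uniformly over sampled y in the lifted prediction space.
import numpy as np

rng = np.random.default_rng(2)
d, k, delta = 4, 3, 0.05

V = rng.random((k, d + 1))              # u(i, y) = <v^i, y>
V_snap = np.round(V / delta) * delta    # nearest grid multiple, coordinatewise

for _ in range(1000):
    y = np.append(rng.random(d), 1.0)   # last coordinate fixed to 1
    gap = np.max(np.abs(V @ y - V_snap @ y))
    assert gap <= delta * (d + 1) + 1e-12
print("snapped utilities stay within delta * (d + 1) on all sampled y")
```

Rounding to the nearest multiple actually gives a $\delta/2$-net coordinatewise, so the bound holds with room to spare.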
Thus, we have that $U^k_{\delta'}$ is indeed a $\delta'$-cover of $U^k$:
$$|u(i, y) - u_{\delta'}(i, y)| = |\langle v^i - v^i_\delta, y \rangle| \le \|v^i - v^i_\delta\|_\infty \cdot (d + 1) \le \delta(d + 1)$$
Moreover, we have that $|U^k_{\delta'}| = |V_\delta|^k \le \left(\frac{1}{\delta}\right)^{k(d+1)}$, which completes the proof.

With this discretized set of utility functions in hand, our next result shows how to recover $\tilde{O}(\sqrt{T})$ swap regret bounds under a particular assumption on agent behavior. Imagine an agent with utility function $u$ who computes their best response not according to $u$, but rather according to the utility function given by "snapping" $u$ to a nearby $u_{\delta'}$ in the $\delta'$-cover. By definition of a $\delta'$-cover, $u_{\delta'}$ closely approximates $u$, and hence the agent does not lose too much utility (at most $\delta'$ each round) by best responding to $u_{\delta'}$ instead of $u$. Then, as we shall see, requiring our predictions to be unbiased conditional on the best-response correspondences of the $\delta'$-cover yields optimal swap regret guarantees (up to logarithmic factors) for the agent, measured, of course, with respect to their real utility function $u$.

Theorem 6. Fix a transcript $\pi_T$. Let $U^k$ be the set of utility functions over an action set of size at most $k$ and prediction space $C$ that are linear in their second argument. Let $U^k_{\delta'}$ be the $\delta'$-cover of $U^k$ given by Lemma 3, where $\delta' = \delta(d + 1)$ for some parameter $\delta > 0$. Define $\mathcal{E}_{U^k_{\delta'}}$ to be the collection of best-response events corresponding to all $u_{\delta'} \in U^k_{\delta'}$:
$$\mathcal{E}_{U^k_{\delta'}} = \left\{ E_{u_{\delta'},a}(y) = \mathbb{1}[\mathrm{BR}(u_{\delta'}, y) = a] : u_{\delta'} \in U^k_{\delta'}, a \in [k] \right\}$$
Consider an agent with action set $A$ such that $|A| \le k$, and utility function $u \in U^k$.
Suppose at every round $t \in [T]$ the agent takes action $a_t = \mathrm{BR}(u_{\delta'}, \hat{y}_t)$ for a nearby $u_{\delta'}$ satisfying $|u(a, y) - u_{\delta'}(a, y)| \le \delta'$ for all $a \in A, y \in C$. Then, setting $\delta = \frac{1}{(d+1)\sqrt{T}}$ and using the prediction algorithm of Noarov et al. (2023) with the set of events $\mathcal{E}_{U^k_{\delta'}}$, the agent has swap regret bounded by:
$$\max_{\phi : A \to A} \mathrm{Reg}(u, \phi) \le O\left( L \sqrt{\frac{|A| d k \ln(T d k)}{T}} \right)$$

Proof. By Lemma 3, we have that $|U^k_{\delta'}| \le \left(\frac{1}{\delta}\right)^{k(d+1)}$, and thus $|\mathcal{E}_{U^k_{\delta'}}| \le k \left(\frac{1}{\delta}\right)^{k(d+1)}$. Plugging this into Theorem 1, we have that for sufficiently small $\varepsilon$, the bias conditional on $\mathcal{E}_{U^k_{\delta'}}$ is at most:
$$\alpha(n_T(E_{u_{\delta'},a})) \le O\left( \frac{\ln\left(T d k \left(\tfrac{1}{\delta}\right)^{k(d+1)}\right) + \sqrt{\ln\left(T d k \left(\tfrac{1}{\delta}\right)^{k(d+1)}\right) \cdot n_T(E_{u_{\delta'},a})}}{T} \right)$$
Then, since our predictions are unbiased subject to the best-response events of $u_{\delta'}$, Theorem 2 bounds the swap regret as measured with respect to $u_{\delta'}$. For any action modification rule $\phi : A \to A$, we have:
$$\frac{1}{T} \sum_{t=1}^{T} \big( u_{\delta'}(\phi(a_t), y_t) - u_{\delta'}(a_t, y_t) \big) \le O\left( L \sqrt{\frac{|A| \ln\left(T d k \left(\tfrac{1}{\delta}\right)^{k(d+1)}\right)}{T}} \right) \le O\left( L \sqrt{\frac{|A| d k \ln\left(\tfrac{T d k}{\delta}\right)}{T}} \right)$$
The proof then follows from the fact that the agent's payoff under $u$ is comparable to their payoff under $u_{\delta'}$ for any sequence of realized actions and benchmarks. We have that
$$\max_{\phi : A \to A} \frac{1}{T} \sum_{t=1}^{T} \big( u(\phi(a_t), y_t) - u(a_t, y_t) \big) \le \left( \max_{\phi : A \to A} \frac{1}{T} \sum_{t=1}^{T} \big( u_{\delta'}(\phi(a_t), y_t) - u_{\delta'}(a_t, y_t) \big) \right) + 2\delta' \le O\left( L \sqrt{\frac{|A| d k \ln\left(\tfrac{T d k}{\delta}\right)}{T}} \right) + 2\delta(d + 1) \le O\left( L \sqrt{\frac{|A| d k \ln(T d k)}{T}} \right)$$
where in the last step we use our setting of $\delta$.
4.2 Predicting for Arbitrary Agents

One might worry that it is unreasonable to expect agents to best respond using a snapped utility function, even if it is payoff-wise close to their true utility function. Motivated by this concern, we now turn to a behavioral assumption that is well-studied in the economics literature, namely that agents choose their actions from a smoothed distribution. In our main result in this section, we show how to construct a collection of events such that, using the algorithm of Theorem 1, downstream agents who smoothly best respond are guaranteed diminishing swap regret at a rate of $\tilde{O}(T^{2/3})$.

4.2.1 Smooth Best Response

We first define our notion of smooth best response, which is called logistic response in the literature. Whereas the best-response function places all of the mass of an agent's response onto the utility-maximizing action, the smooth analogue assigns mass to each action in proportion to an exponential function of its utility. Under logistic response, an agent with action set $A$ and utility function $u$ plays the distribution $q(u, y) \in \Delta A$, where the weight placed on action $a$ is given by:
$$q_a(u, y) = \frac{\exp(\eta \cdot u(a, y))}{\sum_{a' \in A} \exp(\eta \cdot u(a', y))}$$
We should think of the parameter $\eta > 0$ as controlling the smoothness of the distribution. In particular, taking $\eta \to \infty$, logistic response coincides with strict best responding. As previously mentioned, smoothing the best-response function harks back to an extensive line of work on quantal choice theory in the economics literature. Specifically, quantal choice models agents who best respond to utilities that are perturbed by a noise vector.
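The logistic response above is a softmax over scaled utilities; a short sketch (our own, with the usual max-shift for numerical stability) shows how $\eta$ interpolates between near-uniform play and strict best response:

```python
# Logistic (quantal) response: weights proportional to exp(eta * u(a, y)).
# Larger eta concentrates the distribution on the best response.
import numpy as np

def logistic_response(utilities, eta):
    z = eta * np.asarray(utilities, dtype=float)
    z -= z.max()                    # shift for numerical stability; ratios unchanged
    w = np.exp(z)
    return w / w.sum()

utils = [0.2, 0.5, 0.4]             # u(a, y) for three actions at a fixed y
print(logistic_response(utils, eta=1.0))    # fairly spread out
print(logistic_response(utils, eta=100.0))  # almost all mass on action 1
```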
The logistic response function is a popular instantiation of quantal response (Luce, 1959; McFadden, 1976; McKelvey and Palfrey, 1995; Anderson et al., 2002; Goeree et al., 2002), in which the noise vector is drawn from the Gumbel distribution, which has cumulative distribution function F(x) = exp(−exp(−ηx)); in this case, it can be shown that the probability that an agent chooses an action a is proportional to exp(η · u(a, y)). Finally, note that while we prove the next result with this particular instantiation of smoothed response in mind, our result extends to any response function satisfying two conditions: first, the function gives an approximate best response, and second, the induced weights are sufficiently Lipschitz in utilities (the swap regret bounds will depend on the approximation factor and the Lipschitz constant). We focus on logistic response just for concreteness.

4.2.2 Connecting Unbiased Predictions and Agent Swap Regret

As before, to argue that agents acting according to our predictions will have diminishing swap regret according to the actual realizations, it is crucial for agents to be able to estimate their payoffs using our predictions in an unbiased way. One might hope that the technique used to derive our results in low-dimensional cases—namely, requiring our predictions to be unbiased conditional on best response events—might also apply here. However, agents now choose from a much richer action space: distributions over A. In some sense, best response events are too coarse in that they only capture the rounds on which some action was assigned the highest probability. Instead, we would like our predictions to be unbiased conditional on the event that the agent plays some particular distribution over A. This would be prohibitive, since there are exponentially many such distributions (up to discretization).
Fortunately, we will show that it suffices to consider only events defined by the marginal probabilities placed on actions—i.e. events defined by an agent placing some probability p on some action a. This will be sufficient because of the linearity of the agents' utility functions. To construct a finite set of events, we let S_τ = {0, ..., ⌈1/τ⌉ − 1} parameterize a bucketing of width τ. Let B^i_τ = [iτ, (i+1)τ) for i = 0, ..., ⌈1/τ⌉ − 2 and B^i_τ = [iτ, (i+1)τ] for i = ⌈1/τ⌉ − 1 denote the ith bucket. Next we define events corresponding to bucketed marginal probabilities and establish a regret bound for the general setting.

Theorem 7. Fix a transcript π_T. Let U^k be the set of utility functions over an action set of size at most k and prediction space C that are linear in their second argument. Let U^k_δ′ be the δ′-cover of U^k given by Lemma 3, where δ′ = δ(d+1) for some parameter δ > 0. Consider an agent with action set A and utility function u ∈ U^k who, at every round t ∈ [T], plays the distribution q(u, ŷ_t) ∈ ΔA given by the logistic response function specified by some parameter η > 0. Let q_a(u, ŷ_t) be the weight assigned to action a. Let E_{u,a,i}(y) = 1[q_a(u, y) ∈ B^i_τ]. Define E to be the collection of events E = {E_{u_δ′,a,i} : u_δ′ ∈ U^k_δ′, a ∈ [k], i ∈ S_τ}. Then, using the algorithm given by Noarov et al. (2023) parameterized by event set E to make predictions, any such agent is guaranteed swap regret bounded by:

max_{φ:A→A} Reg(u, φ) ≤ O( |A| L √( L dk ln(T L dk) ) / T^{1/3} )

for η = O(√T · ln k), δ = O( ln(1/k√T) / (d√T) ), and τ = O( 1/(kL T^{1/3}) ).

Proof. Recall that the guarantees of conditional bias given by Noarov et al. (2023) depend on the number of events. By Lemma 3, we have that |E| = k |S_τ| |U^k_δ′| ≤ k ⌈1/τ⌉ (1/δ)^{k(d+1)}. Plugging this into Theorem 1, we can conclude that for sufficiently small ε, the conditional bias on E is at most

α(n_T(E_{u,a,i})) ≤ O( ( ln(T dk ⌈1/τ⌉ (1/δ)^{k(d+1)}) + √( ln(T dk ⌈1/τ⌉ (1/δ)^{k(d+1)}) · n_T(E_{u,a,i}) ) ) / T ) ≤ O( ( dk ln(T dk ⌈1/τ⌉ (1/δ)) + √( dk ln(T dk ⌈1/τ⌉ (1/δ)) · n_T(E_{u,a,i}) ) ) / T )

The remainder of the proof is structured as follows. Recall that an agent's utility function u can be closely approximated by a nearby utility function u_δ′ in the δ′-cover U^k_δ′. We first measure the swap regret of the agent had they behaved and received payoff according to u_δ′. Here, we rely on the fact that the predictions are unbiased conditional on (bucketed) marginal probabilities assigned to each action. Thus, the agent's estimate of their utility whenever they assign a certain probability to an action is (on average) correct, and so the agent's calculation of their expected utility over the distribution they play—and any other distribution—is correct. Therefore, since the agent plays an approximate best response, the distribution they play yields no swap regret up to the bias of our predictions and the gap introduced by the approximation. Of course, the agent actually behaves and receives payoff according to u rather than u_δ′. Here, we use the fact that the logistic response function is Lipschitz in utilities—in particular, since u and u_δ′ give similar utilities, u and u_δ′ induce similar distributions over actions.
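The bucketing S_τ used to define these events can be sketched as follows (a hypothetical helper, not from the paper):

```python
import math

def bucket_index(p, tau):
    """Map a marginal probability p in [0, 1] to the index i of the bucket
    B^i_tau = [i*tau, (i+1)*tau) of width tau; the last bucket is closed
    at 1, so p = 1.0 is clamped into it."""
    n = math.ceil(1 / tau)            # number of buckets, |S_tau|
    return min(int(p // tau), n - 1)  # clamp the right endpoint
```

The event E_{u,a,i}(ŷ) then simply checks whether bucket_index(q_a(u, ŷ), τ) == i, so the total number of events is k · ⌈1/τ⌉ · |U^k_δ′|.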
Thus, we can establish that the expected utility the agent obtains by responding according to u cannot be too much smaller than the expected utility the agent could have obtained by responding according to u_δ′. Similarly, the expected utility obtained by any benchmark class of actions under u cannot be too much larger than the expected utility obtained by the same benchmark class under u_δ′. This then bounds the swap regret of the agent. Thus, to prove Theorem 7, we first introduce the following intermediate lemma that bounds the swap regret as measured with respect to u_δ′.

Lemma 4. Fix a utility function u_δ′ ∈ U^k_δ′ and consider an agent who, at round t, plays the distribution q(u_δ′, ŷ_t) given by the logistic response function. Then, for any action modification rule φ : A → A, we have:

(1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u_δ′,ŷ_t)}[ u_δ′(φ(a), y_t) − u_δ′(a, y_t) ] ≤ (ln|A| + 1)/η + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

The proof of Lemma 4 will make use of two lemmas. The first lemma will help us establish that, in expectation over the rounds in which the agent plays action a, our conditional bias guarantees are (approximately) preserved.

Lemma 5. Fix any a ∈ A. If π_T results in bias at most α(n_T(E_{u_δ′,a,i})) conditional on the event E_{u_δ′,a,i} for all i ∈ S_τ, then we have:

‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)(ŷ_t − y_t) ] ‖_∞ ≤ ⌈1/τ⌉ α(T/⌈1/τ⌉) + τ

Proof.
Observe that

‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)(ŷ_t − y_t) ] ‖_∞

= ‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ 1[(ŷ_t − y_t) ≥ 0] 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] q_a(u_δ′, ŷ_t)(ŷ_t − y_t) + 1[(ŷ_t − y_t) < 0] 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] q_a(u_δ′, ŷ_t)(ŷ_t − y_t) ] ‖_∞

≤ ‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ 1[(ŷ_t − y_t) ≥ 0] 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] · ((i+1)τ)(ŷ_t − y_t) + 1[(ŷ_t − y_t) < 0] 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] · (iτ)(ŷ_t − y_t) ] ‖_∞

≤ ‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] · (iτ)(ŷ_t − y_t) + 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] · τ ] ‖_∞

= ‖ (1/T) Σ_{i∈S_τ} iτ Σ_{t=1}^T E_{ŷ_t∼p_t}[ 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ](ŷ_t − y_t) ] ‖_∞ + (τ/T) Σ_{t=1}^T E_{ŷ_t∼p_t}[ Σ_{i∈S_τ} 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] ]

≤ Σ_{i∈S_τ} iτ · (1/T) ‖ Σ_{t=1}^T E_{ŷ_t∼p_t}[ 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ](ŷ_t − y_t) ] ‖_∞ + (τ/T) Σ_{t=1}^T E_{ŷ_t∼p_t}[ Σ_{i∈S_τ} 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] ]

≤ Σ_{i∈S_τ} α(n_T(E_{u_δ′,a,i})) + τ

The last inequality follows from our assumption of α-unbiasedness, the fact that iτ ≤ 1 for all i, and the fact that for any utility function u_δ′ and action a, the events {E_{u_δ′,a,i}}_{i∈S_τ} are disjoint—the indicator 1[q_a(u_δ′, ŷ_t) ∈ B^i_τ] = 1 for exactly one i ∈ S_τ, i.e. Σ_{i∈S_τ} E_{u_δ′,a,i}(ŷ_t) = 1. Thus, it also follows that Σ_{i∈S_τ} n_T(E_{u_δ′,a,i}) = T, and so by concavity of α, we have that Σ_{i∈S_τ} α(n_T(E_{u_δ′,a,i})) ≤ |S_τ| α( (Σ_{i∈S_τ} n_T(E_{u_δ′,a,i}))/|S_τ| ) = |S_τ| α(T/|S_τ|), which completes the proof.

The next lemma establishes that the utility gained by playing the logistic response distribution cannot be too much smaller than the utility gained by playing the best response action (note that this is a general statement that applies to any utility function). The difference between the two quantities will be controlled by the parameter η.

Lemma 6. Fix any utility function u and any y ∈ C. Let a* = BR(u, y). We have that:

E_{a∼q(u,y)}[u(a, y)] ≥ u(a*, y) − (ln|A| + 1)/η

Proof. First, we see that for any constant x,

Pr[u(a, y) ≤ x] ≤ Pr[u(a, y) ≤ x] / Pr[u(a, y) = u(a*, y)] ≤ |A| exp(ηx) / exp(η u(a*, y)) = |A| exp(η(x − u(a*, y)))

Taking x = u(a*, y) − (1/η)(ln|A| + c), we have that

Pr[ u(a, y) ≤ u(a*, y) − (1/η)(ln|A| + c) ] ≤ |A| e^{−ln|A| − c} = e^{−c}

or equivalently, Pr[ η(u(a*, y) − u(a, y)) ≥ ln|A| + c ] ≤ e^{−c}. Let X = η(u(a*, y) − u(a, y)). X is a non-negative random variable, so we have

E[X] = ∫_0^∞ Pr[X ≥ x] dx = ∫_{c=−ln|A|}^∞ Pr[X ≥ ln|A| + c] dc ≤ ∫_{c=−ln|A|}^0 1 dc + ∫_0^∞ e^{−c} dc = ln|A| + 1

where the inequality uses the fact that Pr[X ≥ ln|A| + c] ≤ 1 for c ≤ 0. Hence it follows that u(a*, y) − E[u(a, y)] ≤ (ln|A| + 1)/η, which proves the lemma.

We are now ready to prove Lemma 4.

Proof of Lemma 4. Fix any action modification rule φ : A → A.
Using linearity of u_δ′ in the second argument, we derive:

(1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u_δ′,ŷ_t)}[ u_δ′(φ(a), y_t) − u_δ′(a, y_t) ]

= Σ_{a∈A} (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)(u_δ′(φ(a), y_t) − u_δ′(a, y_t)) ]

= Σ_{a∈A} ( u_δ′(φ(a), (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)] y_t) − u_δ′(a, (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)] y_t) )

≤ Σ_{a∈A} ( u_δ′(φ(a), (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t) ŷ_t]) − u_δ′(a, (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t) ŷ_t]) ) + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

= Σ_{a∈A} (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)(u_δ′(φ(a), ŷ_t) − u_δ′(a, ŷ_t)) ] + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

= (1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u_δ′,ŷ_t)}[ u_δ′(φ(a), ŷ_t) − u_δ′(a, ŷ_t) ] + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

which, by the fact that the benchmark action φ(a) cannot obtain higher utility than the best response at round t, letting a*_t = BR(u_δ′, ŷ_t), is at most

≤ (1/T) Σ_{t=1}^T E_{ŷ_t∼p_t}[ u_δ′(a*_t, ŷ_t) − E_{a∼q(u_δ′,ŷ_t)}[u_δ′(a, ŷ_t)] ] + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

≤ (1/T) Σ_{t=1}^T (ln|A| + 1)/η + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

= (ln|A| + 1)/η + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ

Here we apply Lemma 5 and L-Lipschitzness of u_δ′ to get the first inequality: for any action a,

| u_δ′(a, (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t) ŷ_t]) − u_δ′(a, (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)] y_t) | ≤ L ‖ (1/T) Σ_{t=1}^T Σ_{i∈S_τ} E_{ŷ_t∼p_t}[ E_{u_δ′,a,i}(ŷ_t) q_a(u_δ′, ŷ_t)(ŷ_t − y_t) ] ‖_∞ ≤ L ⌈1/τ⌉ α(T/⌈1/τ⌉) + Lτ

Finally, we apply Lemma 6 to get the last inequality. This completes the proof.

Before proving Theorem 7, we introduce one more lemma which states that nearby utility functions induce nearby distributions over actions—in particular, perturbing the payoff of any action by a little bit does not change its assigned weight by too much.

Lemma 7. Fix any a ∈ A, y ∈ C. For any utility functions u_1, u_2, if |u_1(a, y) − u_2(a, y)| ≤ δ, then |q_a(u_1, y) − q_a(u_2, y)| ≤ e^{2ηδ} − 1.

Proof. W.l.o.g. assume that u_1(a, y) ≥ u_2(a, y).
Then we have that

q_a(u_1, y) / q_a(u_2, y) = ( exp(η u_1(a, y)) / Σ_{a′∈A} exp(η u_1(a′, y)) ) / ( exp(η u_2(a, y)) / Σ_{a′∈A} exp(η u_2(a′, y)) )

= exp(η(u_1(a, y) − u_2(a, y))) · ( Σ_{a′∈A} exp(η u_2(a′, y)) / Σ_{a′∈A} exp(η u_1(a′, y)) )

≤ exp(ηδ) · ( Σ_{a′∈A} exp(η(u_1(a′, y) + δ)) / Σ_{a′∈A} exp(η u_1(a′, y)) )

= exp(ηδ) · exp(ηδ) = exp(2ηδ)

Rearranging the expression, we get q_a(u_1, y) ≤ exp(2ηδ) q_a(u_2, y) and hence, since q_a(u_2, y) ≤ 1, we have that

q_a(u_1, y) − q_a(u_2, y) ≤ exp(2ηδ) q_a(u_2, y) − q_a(u_2, y) ≤ exp(2ηδ) − 1

which proves the lemma.

We now proceed with the proof of Theorem 7. Recall that by definition of a δ′-net, we can find u_δ′ ∈ U^k_δ′ such that |u(a, y) − u_δ′(a, y)| ≤ δ′ for all a ∈ A, y ∈ C. Hence, together with Lemma 7, we can relate the cumulative utility of the agent to their cumulative utility if they instead had utility function u_δ′—for both the realized and benchmark sequence of actions. An application of Lemma 4 will then bound the agent's swap regret. In particular, for any φ : A → A, we have that

Reg(u, φ) = (1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u,ŷ_t)}[ u(φ(a), y_t) − u(a, y_t) ]

≤ (1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u,ŷ_t)}[ u_δ′(φ(a), y_t) − u_δ′(a, y_t) ] + 2δ′

= (1/T) Σ_{t=1}^T Σ_{a∈A} E_{ŷ_t∼p_t}[ q_a(u, ŷ_t)(u_δ′(φ(a), y_t) − u_δ′(a, y_t)) ] + 2δ′

≤ (1/T) Σ_{t=1}^T Σ_{a∈A} E_{ŷ_t∼p_t}[ q_a(u_δ′, ŷ_t)(u_δ′(φ(a), y_t) − u_δ′(a, y_t)) ] + |A|(e^{2ηδ′} − 1) + 2δ′

= (1/T) Σ_{t=1}^T E_{ŷ_t∼p_t} E_{a∼q(u_δ′,ŷ_t)}[ u_δ′(φ(a), y_t) − u_δ′(a, y_t) ] + |A|(e^{2ηδ′} − 1) + 2δ′

≤ (ln|A| + 1)/η + 2|A|L ⌈1/τ⌉ α(T/⌈1/τ⌉) + 2|A|Lτ + |A|(e^{2ηδ′} − 1) + 2δ′

≤ (ln|A| + 1)/η + O( |A| L √( ⌈1/τ⌉ dk ln(T dk ⌈1/τ⌉ (1/δ)) / T ) ) + 2|A|Lτ + |A|(e^{2ηδ′} − 1) + 2δ′

where the first inequality follows from our choice of u_δ′, the second inequality follows from Lemma 7, and the last inequality follows from Lemma 4. In the last step, we substitute in the conditional bias bound established previously. For η = (ln k + 1)√T, δ = ln(1/(k√T) + 1)/((d+1)√T), and τ = 1/(kL T^{1/3}), we get

Reg(u, φ) ≤ O( |A| L √( L dk ln(T L dk) ) / T^{1/3} )

5 Discussion and Conclusion

We have shown that it is possible to make predictions that guarantee diminishing swap regret to all downstream agents simultaneously, at rates that are substantially faster than are possible by making calibrated predictions. In particular, for best responding agents, we have shown how to do this at the optimal O(√T) rate for d = 1 — a rate that is impossible to obtain via calibration. For agents that are assumed to best respond according to a discretized version of their own utility function, we have shown how to obtain these optimal O(√T) rates for every dimension d, and for agents that smoothly best respond according to their own utility function, we have shown how to guarantee all downstream agents swap regret at the dimension-independent rate O(T^{2/3}) in d dimensions.
Thus we have established\u2014perhaps surprisingly\u2014that a substantially weaker condition than calibration su\ufb03ces, even when the goal is to guarantee downstream swap regret simultaneously for all agents. Prior work was able to obtain similar guarantees only by either relaxing from swap to external regret (Kleinberg et al., 2023) or by restricting to a \ufb01xed collection of downstream agents, rather than o\ufb00ering guarantees simultaneously to all downstream agents (Noarov et al., 2023). Although our approach is computationally e\ufb03cient for d = 1, like algorithms promising calibration, the computational complexity of our approach scales badly with d. We leave as the main open question from our work: Is there an algorithm that can make d dimensional predictions that guarantee all downstream agents swap regret diminishing at a rate of \u02dc O(T O(1)) with per-round running time scaling polynomially with d? Acknowledgements We thank Georgy Noarov for enlightening conversations and for pointing us to Ivic et al. (1994). This work was supported in part by the Simons Collaboration on the Theory of Algorithmic Fairness, NSF grants FAI-2147212 and CCF-2217062, an AWS AI Gift for Research on Trustworthy AI, and the Hans Sigrist Prize." + }, + { + "url": "http://arxiv.org/abs/2209.01687v2", + "title": "Reconciling Individual Probability Forecasts", + "abstract": "Individual probabilities refer to the probabilities of outcomes that are\nrealized only once: the probability that it will rain tomorrow, the probability\nthat Alice will die within the next 12 months, the probability that Bob will be\narrested for a violent crime in the next 18 months, etc. Individual\nprobabilities are fundamentally unknowable. Nevertheless, we show that two\nparties who agree on the data -- or on how to sample from a data distribution\n-- cannot agree to disagree on how to model individual probabilities. 
This is\nbecause any two models of individual probabilities that substantially disagree\ncan together be used to empirically falsify and improve at least one of the two\nmodels. This can be efficiently iterated in a process of \"reconciliation\" that\nresults in models that both parties agree are superior to the models they\nstarted with, and which themselves (almost) agree on the forecasts of\nindividual probabilities (almost) everywhere. We conclude that although\nindividual probabilities are unknowable, they are contestable via a\ncomputationally and data efficient process that must lead to agreement. Thus we\ncannot find ourselves in a situation in which we have two equally accurate and\nunimprovable models that disagree substantially in their predictions --\nproviding an answer to what is sometimes called the predictive or model\nmultiplicity problem.", + "authors": "Aaron Roth, Alexander Tolbert, Scott Weinstein", + "published": "2022-09-04", + "updated": "2023-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction Probabilistic modelling in machine learning and statistics predicts \u201cindividual probabilities\u201d as a matter of course. In weather forecasting, we speak of the probability of rain tomorrow; in life insurance underwriting we speak of the probability that Alice will die in the next 12 months; in recidivism prediction we speak of the probability that an inmate Bob will commit a violent crime within 18 months of being released on parole; in predictive medicine we speak of the probability that Carol will develop breast cancer before the age of 50 \u2014 and so on. 
But these are not repeated events: we have no way of directly measuring an “individual probability” — and indeed, even the semantics of an individual probability are unclear and have been the subject of deep interrogation within the philosophy of science and statistics [Hájek, 2007, Dawid, 2017] and theoretical computer science [Dwork et al., 2021]. Within the philosophy of science, puzzles related to individual probability have been closely identified with “the reference class problem” [Hájek, 2007]. This is a close cousin of a concern that has recently arisen in the context of fairness in machine learning called the “predictive multiplicity problem” (a focal subset of “model multiplicity problems”) [Marx et al., 2020, Black et al., 2022], which Breiman [2001] earlier called the “Rashomon Effect”. At the core of both of these problems is the fact that from a data sample that is much smaller than the data universe (i.e. the set of all possible observations), we will have observed at most one individual with a particular set of characteristics, and at most one outcome for the event that an “individual probability” speaks to: it will either rain tomorrow or it will not; Alice will either die within the next year or she will not; etc. We do not have the luxury of observing a large number of repetitions and taking averages. Dawid [2017] lays out two broad classes of perspectives on individual probabilities: the group to individual perspective and the individual to group perspective. The group to individual perspective is roughly as follows: We cannot measure individual probabilities from data, but we can measure averages of outcomes within sufficiently large reference classes S.
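A toy illustration of such reference-class averages, with entirely made-up data: the same individual can fall into several reference classes, and each class yields a different empirical rate for the outcome.

```python
def empirical_rate(records, predicate):
    """Average outcome y over the reference class defined by predicate."""
    members = [r for r in records if predicate(r)]
    return sum(r["y"] for r in members) / len(members)

# Synthetic records; the first one is Alice-like and belongs to both
# reference classes considered below.
records = [
    {"age": 70, "bp": "high", "y": 1},
    {"age": 68, "bp": "low",  "y": 0},
    {"age": 40, "bp": "high", "y": 0},
    {"age": 50, "bp": "high", "y": 0},
]
rate_seniors = empirical_rate(records, lambda r: r["age"] >= 65)
rate_high_bp = empirical_rate(records, lambda r: r["bp"] == "high")
```

Both classes contain the same individual, yet they give different estimates (1/2 versus 1/3 here) for her individual probability — a first glimpse of the reference class problem discussed next.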
A reference class S is just some well-defined subset of the observed data: for example (in the weather forecasting setting) the set of days on which there is cloud cover and humidity is above 60%, or (in the life insurance setting) the set of 65-year-old women with a history of high blood pressure. Given a reference class that is large enough that we have observed in our data many members of the reference class, we can empirically estimate the prevalence of the outcome we are concerned with forecasting (rain, death within 12 months) for members of the reference class. Then, if we are asked to forecast an individual probability (the probability that Alice will die within the next 12 months), we simply pick an appropriate reference class S such that Alice ∈ S and then respond with the proportion of observed deaths within a 12-month period for individuals from reference class S. The principal problem with this approach (known as the “reference class problem” [Hájek, 2007]) is that Alice will simultaneously be a member of many different reference classes S. We cannot condition on everything we know about Alice, or we will end up with a reference class that does not contain enough examples for us to do statistical inference on: thus we must pick and choose. But should we have conditioned on her age, gender, and blood pressure? What about her weight? Her job? Her marital status? Her vaccination history? Defining reference classes with respect to different subsets of these attributes will generally lead to different estimates for the probability that Alice will die within the next 12 months: what privileges one of these estimates over another? This is the reference class problem. On the other hand, the individual to group perspective treats individual probabilities as the first-class objects.
This is the perspective most familiar in machine learning and statistics: models f are learned from data with the goal of mapping individuals (e.g. “Alice”) to individual probabilities for the outcome of interest, f(Alice).[1] Models of individual probabilities can also be aggregated over to give predicted probabilities conditional on reference classes. If we want to evaluate the probability of an outcome conditional on some reference class S, we can do so by averaging the model's predictions over individuals in S. We cannot measure individual probabilities, but from data we can measure the average probability of an outcome over a sufficiently large reference class S, which gives us a way to empirically falsify a model f from data: if the prediction implied by f for the average outcome conditional on a large reference class does not match the average outcome we can measure from the data, then the model f must be wrong. Multicalibration, introduced by Hébert-Johnson et al. [2018], gives us a way to build models of individual probabilities that are consistent with the data for large numbers of arbitrarily chosen reference classes S — i.e. models that are not empirically falsified by any of the pre-specified reference classes. Nevertheless, multicalibrated models are not unique:[2] we can have multiple models that have large disagreements in many of their individual predictions that nevertheless are equally consistent with the data on a large collection of reference classes. This is an instance of the predictive multiplicity problem [Marx et al., 2020, Black et al., 2022, Breiman, 2001]. The predictive multiplicity problem is usually not phrased in terms of multicalibration and reference classes, but in terms of accuracy or error. If a model encodes true individual probabilities, then it will minimize expected squared error[3] amongst all possible models.
Moreover, expected squared error is something that we can efficiently estimate from data. Hence, if we have two models f1 and f2, and we can infer from data that f1 has lower expected squared error than f2, then this is an empirical falsification of the hypothesis that f2 correctly encodes individual probabilities. This serves as a normative justification for selecting amongst models based on their accuracy, which is a common practice. The predictive multiplicity problem arises when we have two models f1 and f2 (and perhaps others) that are equally accurate, but disagree substantially on many of their predictions. More generally, the predictive multiplicity problem arises when we have multiple models that differ substantially in their predictions, but are seemingly equally consistent with the data before us.

[1] Here of course what is input to the model is some representation of the individual, encoding the information that we have available about them. [2] If a model is perfectly multicalibrated with respect to every reference class (including singletons that contain only single individuals), then it must encode the true individual probabilities, but this cannot be approximated from data unless many samples of every possible representation of an individual have been observed, which is infeasible in realistic high dimensional inference settings. Similarly, Dawid [1985] proved that probabilities generated by computable forecasters that are multicalibrated with respect to every computable reference class must be unique, but only in a limiting sense — two such multicalibrated models may substantially disagree on any finite set of data. The kinds of multicalibration guarantees that are obtainable from samples of data that are polynomial in the data dimension [Hébert-Johnson et al., 2018, Gupta et al., 2022] do not lead to unique models. [3] Or any other proper scoring rule.
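A minimal made-up example of predictive multiplicity over a four-point data universe: two models with identical squared error that nonetheless disagree on every single prediction.

```python
def squared_error(preds, outcomes):
    """Mean squared error (Brier score) of predictions against outcomes."""
    return sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / len(outcomes)

y  = [1, 0, 1, 0]           # realized outcomes over the data universe
f1 = [1.0, 0.0, 0.5, 0.5]   # model 1's individual probabilities
f2 = [0.5, 0.5, 1.0, 0.0]   # model 2's individual probabilities
```

Here squared_error(f1, y) and squared_error(f2, y) are both 0.125, yet |f1 − f2| = 0.5 on every point: the error statistic alone cannot privilege either model's individual probabilities.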
Despite arising from different conceptions of individual probability, the reference class problem and the predictive multiplicity problem result in the same practical concern: that data do not encode unique estimates of the individual probabilities for many individuals. If this is the case, then what justification do we have in making consequential decisions as a result of predictions that our models make about individual probabilities? How can we justify setting a high rate for Alice's life insurance, denying parole to Bob, or suggesting life-altering preventative surgery to Carol based on the predictions of some model f1 if we have an equally good (and equally well supported by the data) model f2 that makes predictions that would lead us to take the opposite course of action?

1.1 Our Results

We show that given a common understanding of the data (or of the process of sampling from the data distribution), models of individual probabilities are contestable through an efficient model reconciliation process that must lead to broad agreement. Specifically, suppose one party A proposes a model of individual probabilities fA that another party B thinks is flawed. B can contest fA by proposing their own model of individual probabilities fB. There are two possible outcomes: 1. fA and fB agree in their predictions almost everywhere.[4] In this case, it turns out there was no substantial disagreement. 2. fA and fB substantially disagree in their predictions for a large portion of the population. In the second case, we can efficiently extract from the disagreement region of fA and fB a large reference class S = S(fA, fB) such that on this reference class, not only do fA and fB disagree on individual predictions, they also disagree substantially on their predictions of the average outcome conditional on membership in S. Because S is large, from only a modest amount of data, we can accurately estimate the average outcome conditional on S.
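One falsify-and-update round of this kind can be sketched as follows. The uniform shift on the disagreement region is a simple multicalibration-style patch; the representation (models as dicts over a finite data universe) and the threshold parameter are ours, chosen for illustration only.

```python
def reconcile_round(f1, f2, y, eps=0.1):
    """One reconciliation round: find the disagreement region S, measure
    the true average outcome on S, and shift the worse model's predictions
    on S to match it. Returns the (possibly updated) pair of models."""
    S = [x for x in y if abs(f1[x] - f2[x]) > eps]   # disagreement region
    if not S:
        return f1, f2       # the models already (eps-)agree everywhere
    truth = sum(y[x] for x in S) / len(S)            # measurable from data
    m1 = sum(f1[x] for x in S) / len(S)
    m2 = sum(f2[x] for x in S) / len(S)
    # Falsify whichever model's average prediction on S is further off.
    worse, m = (f1, m1) if abs(m1 - truth) >= abs(m2 - truth) else (f2, m2)
    patched = dict(worse)
    for x in S:             # uniform shift on S, clipped to [0, 1]
        patched[x] = min(1.0, max(0.0, worse[x] + (truth - m)))
    return (patched, f2) if worse is f1 else (f1, patched)
```

Each such round lowers the squared error of the patched model, so iterating can only terminate with two models that (approximately) agree almost everywhere.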
But because fA and fB have a substantial disagreement about this quantity, our measurement is guaranteed to falsify at least one of the two models. Suppose it is model fA that is falsi\ufb01ed. Then, using a very simple and e\ufb03cient model update operation of the same sort used for computing multicalibrated models [H\u00b4 ebert-Johnson et al., 2018], we can update fA to produce a new model f \u2032 A that now makes predictions that are correct on average over S. The new model f \u2032 A is guaranteed to have signi\ufb01cantly reduced squared error compared to fA, and so is a better model not only in that it has not yet been falsi\ufb01ed, but in that it is more accurate. After this update, we can then repeat the process: Either f \u2032 A and fB agree on their predictions almost everywhere, or we can again falsify one of the models and improve it using a large reference class S\u2032 = S(f \u2032 A, fB). The only way for this process to end is with two models that agree in their predictions almost everywhere. Moreover, because each iteration of falsi\ufb01cation and improvement improves the expected squared error of at least one of the two models, the process cannot continue for very many iterations \u2014 fast agreement of the models is guaranteed. In Section 3, we formally derive the guarantees of our model reconciliation process under the assumption that we can directly evaluate conditional outcome probabilities conditional on large reference classes S: this makes our analysis more transparent. Then, in Section 4, we show that we can run our model reconciliation process on the empirical distribution over a modestly sized dataset that is sampled i.i.d. from some unknown underlying distribution, and that its guarantees carry over to the unknown distribution of interest. 
Here "modestly sized" means a number of samples that is independent of the complexity of the models to be reconciled or the dimension (or any other property) of the underlying distribution, and that depends only polynomially on the quantitative parameters controlling how closely we want the models resulting from the reconciliation process to agree. In fact, as we observe, the data required to guarantee agreement on a 1 − α fraction of the data distribution is not much larger than the data requirement that would be necessary for two parties to agree (from samples) on the average outcome probability conditional on a single reference class representing an α fraction of the data.

Footnote 4: We use the expressions "almost everywhere" and "almost agree" as shorthand for quantitative statements that are made explicit in the formal presentation of our results. Insofar as we are working in the context of discrete distributions, it should be clear that we are not using these expressions in their usual measure-theoretic sense. We note that our focus on discrete distributions is merely to avoid dealing with measure-theoretic niceties, and is not essential to any of our results.

1.2 Discussion

Are "Individual Probabilities" Coherent? What are we assuming when we model the world using "individual probabilities"? Are we assuming some kind of idealized, unrealized randomness, which is simply a poor stand-in for our ignorance of the relevant processes? No. Our modelling choices do not preclude a deterministic world: all true individual "probabilities" could be 0 or 1, simply recording the outcomes of interest. We still allow for models to predict non-integer probabilities; we can view these as simply expressing uncertainty about the outcomes, or as encoding objective features of a stochastic universe.
Our results imply that after reconciliation, two parties must agree about their assignments of individual probabilities, regardless of the philosophical commitments each may have about the nature of probability.

Agreement is Guaranteed; Not Truth. We emphasize that individual probabilities are not uniquely determined from observed data. Consider a toy model of weather forecasting in which features x encode the date, and outcomes y encode whether or not it rains on that date. The following two situations are observationally indistinguishable:

1. On every day x, the individual probability of rain is p(x) = 1/2, and

2. Before the start of time, God selected a subset of days uniformly at random to have individual probability of rain p(x) = 1 and the remaining set to have individual probability of rain p(x) = 0.

There is no hope of distinguishing these two situations from data about outcomes alone, and so it is plainly impossible to learn a model that is guaranteed to accurately encode individual probabilities from such data5 — in fact, it is not clear that this goal is meaningful, as they are not uniquely determined.6 Nevertheless, suppose we believed that the individual probability of rain was p(x) = 1/2 every day. If we met a forecaster who was able to make more accurate predictions (i.e. predictions that had lower squared error) on previously unobserved data, we would be forced to recognize that our model was incorrect — because we could compare the performance of the two models on data. This drives our result (and similar work on multiple expert testing [Al-Najjar and Weinstein, 2008, Feinberg and Stewart, 2008] — see Section 1.3), and is the reason that we can guarantee agreement rather than truth. Nevertheless, the updates that result from our reconciliation process always move towards truth — because they are error improving — but they stop when the available models agree, which might be well before truth is attained.
Predictive Multiplicity Comes from Restricting Model Classes. Previous work has empirically noted and quantified the phenomenon of predictive multiplicity — i.e. that solving an error minimization problem over some class of models can result in multiple solutions of (roughly) equivalent error [D'Amour et al., 2020, Marx et al., 2020]. How do these results square with our contention that the predictive multiplicity problem cannot arise, because two equally accurate but substantially different models constructively imply the existence of a more accurate model? The answer is that predictive multiplicity can arise when models are restricted to lie within some prespecified hypothesis class, like linear threshold functions, bounded depth decision trees, or neural networks with a particular architecture.

Footnote 5: Of course, we do not take such radical under-determination of individual probability assignments from data about outcomes alone to in any way impugn the objectivity of such assignments. Indeed, the primary virtue of our results from a philosophical perspective is that they provide an efficient method to guarantee inter-subjective agreement about individual probability assignments and thus secure their objectivity to this extent.

Footnote 6: It is perhaps worth remarking that in this case Dawid's uniqueness result [Dawid, 1985], cited earlier, implies, with probability one with respect to God's choices, that there is an asymptotically unique computable assignment of probabilities that is computably calibrated with the data generated by God's choices. There is no paradox here: insofar as God's choices are generated uniformly at random, her (deterministic) probability forecast is, with probability one, algorithmically random, and thus not computable.
Traditionally machine learning is done by optimizing a model within a fixed model class, and this is the setting in which predictive multiplicity has been empirically observed and quantified. In contrast, our algorithm for reconciling pairs of models f1 and f2 produces a model f3 that need not lie in the same model class as f1 and f2. This is key to side-stepping the predictive multiplicity problem. Traditional methods in machine learning and statistics optimize over models from restricted classes to avoid the problem of overfitting. In contrast, we avoid overfitting despite not restricting our model classes a priori by bounding the number of updates that can occur through our reconciliation process.

What about Kasey? Suppose we have a model that has gone through our reconciliation process with another model for the same task, and so we can promise that (say) it predicts the same individual probability (plus or minus 1%) as the other model for 99% of individuals. We then use this model to predict an individual probability about Kasey, which we use to inform a consequential decision. Suppose Kasey is among the 1% of people for whom the models disagree. Kasey's main interest is that there should be no ambiguity about his individual prediction — why should he be satisfied that we can promise that the models must agree almost everywhere, if they do not agree on their predictions for him?7 We can say nothing about Kasey's true individual probability. Nevertheless, what we can promise is that there are not many people in Kasey's position — such that there are two models equally consistent with the data that disagree on their individual predictions for that individual. The result is that for people like Kasey, we can afford to devote additional resources towards decision making that we could not afford if Kasey's situation was more common.
For example, for the 1% of the population for whom the models are not in agreement, we can imagine devoting more time, human expertise, and deliberation than we could for the general population — perhaps by escalating these cases to a panel of experts. Alternately, in a setting in which Kasey has a clearly preferred outcome, we might be able to afford to give him the benefit of the doubt — for example, if our best models disagree on Kasey's individual probability in a decision relevant way, we could commit to using the most favorable of the predictions for Kasey. Again, we can afford to act in this way because there are only few such individuals.8 Moreover, our model reconciliation process can drive the fraction of the population on which the models disagree to 0 at the cost of access to more data — although of course with finite amounts of data we will never entirely eliminate disagreements.

Do models really predict individual probabilities? Another objection we can imagine is that in the settings we discuss — weather prediction, life insurance underwriting, recidivism prediction, etc. — it is logically impossible to observe repeated trials, because tomorrow will only occur once, Alice has only one life to live, and so on. In contrast, when we move to the formalism of a probability distribution over representations of individuals, it may be extremely unlikely (or even a measure 0 event) to observe the same representation of an individual multiple times, but it is no longer a logical impossibility. Said another way, when we model individuals in some representation space, we may fail to record idiosyncratic details of the individual, and so we are no longer speaking of individual probabilities, but rather average outcomes over the reference class defined by people who share the same representation. But this is not a sharp distinction,

Footnote 7: Adapted from an insightful question asked of us by Konstantin Genin.
because our results have no dependence at all on the dimensionality or complexity of the representation we use for individuals. For this objection to have teeth, it must be that there is some crucial idiosyncrasy of an individual that we have failed to capture in our representation: if so, add this to our representation! Our results remain the same (not just qualitatively but also quantitatively) even if the representation of every individual records the position of every molecule in their body, a complete history of their life from birth until the present, or anything else, and so does not rely even implicitly on having only an impoverished representation of an individual to work with rather than "the real thing".

Footnote 8: More specifically, when our algorithm is run on a sample of people drawn from a distribution, we promise that we produce models that both agree on their predictions on 99% of people in the sample, and 99% of the probability mass of the distribution from which the sample was drawn. If we imagine the sampling distribution as being defined over some much larger finite "population", then this corresponds to agreement on 99% of the population if the sampling distribution is uniform over the underlying population. If the sampling distribution is not uniform (either by design, for example, in the case that the feature domain X used to model the underlying population (see Section 2) is countably infinite, or because of an error in the sampling procedure), then the guarantee of agreement on 99% of the mass of the sampling distribution need not correspond to agreement on 99% of the underlying population any longer, and there may be "many such" people on whom the model disagrees even though they have only small probability mass in the sampling distribution. In general, in our exposition, we will make the assumption that the sampling distribution correctly represents the population we are interested in.
1.3 Additional Related Work

Our work is related to a number of strands of literature across statistics, economics, and computer science. Nau and McCardle [1991] showed that agents interacting in a market that has no arbitrage opportunities must agree on the probabilities of all payoff-relevant outcomes, because if there are two agents who disagree on such probabilities, then a third party has the ability to engage in transactions with the two of them that are guaranteed to be profitable no matter what the outcome, with the magnitude of the profit scaling with the number and magnitude of the disagreements about outcome probabilities. Aumann [1976] proved that two Bayesians who share a common prior, but may have made different observations, must agree on the posterior expectation of a random variable if their posterior distributions are common knowledge. Although Aumann's original result was nonconstructive, subsequent work has shown that agreement can be reached with finite, communication efficient protocols [Geanakoplos and Polemarchakis, 1982, Aaronson, 2005]. Despite similarity in its conclusions, this line of work is quite distinct from ours. In the Bayesian setting that this line of work focuses on, it is immediate that two agents who share the same set of observations and prior beliefs must share the same posterior beliefs (as a posterior distribution is determined, via Bayes' rule, as a function only of the prior distribution and observations). Aumann's agreement theorem instead shows that if agents have arrived at common knowledge of their posterior distributions, then their posteriors must agree even if they have not directly shared their observations. In contrast, in a frequentist setting, individual probabilities are not uniquely determined from data, which forms the basis of the reference class problem9 [Hájek, 2007] and the model multiplicity problem [Black et al., 2022].
Our work considers how two frequentist agents who agree on the same set of data (or the distribution from which it was drawn) must come to agree on individual probabilities — a problem which would not arise in the first place if they were Bayesian agents with a common prior.

Footnote 9: In a Bayesian framework, problems that are similar to the reference class problem emerge in making one's choice of priors [Hájek, 2007].

Footnote 10: Dawid explicitly links calibration as a criterion of adequacy for a forecasting model to the notion of randomness: a forecaster f is calibrated with respect to a data sequence s if and only if s is random with respect to the probability distribution induced by f on the collection of all data sequences — if a probabilistic model is a correct explanation of the data, the data must not look atypical, that is non-random, with respect to the model. Of course, in order to endow this notion with explicit mathematical content, one requires a suitable mathematical theory of randomness. The notion of computable calibration adopted by Dawid draws on a notion of algorithmic randomness elaborated by von Mises, Wald, and Church in the 1930's.

Dawid [1982] proposed calibration as a desirable frequentist condition for evaluating probabilistic forecasts: roughly speaking, that the outcome being forecast should have appeared with empirical frequency p conditional on the forecaster predicting probability p of the outcome, simultaneously for all predictions p. Subsequently, Dawid [1985] studied a substantial strengthening of this condition called computable calibration that requires calibration to hold simultaneously on all computable subsets of the data.10 Dawid proved that in the infinite data limit, two computably calibrated forecasters must approximately agree in their predictions almost everywhere — that is, except on a finite subset of the data [Dawid, 1985]. He
notes explicitly that this criterion is not of practical use in finite data scenarios, and speculates about the desirability of restrictions of computable calibration to finite sample scenarios (anticipating multicalibration [Hébert-Johnson et al., 2018]).

Footnote 10 (continued): This notion was shown to be defective by Ville [1939], who established that there are infinite sequences of 0's and 1's, every initial segment of which contains more 0's than 1's, that are nonetheless random in the sense of von Mises, Wald, and Church. A far more credible algorithmic notion of randomness which avoids this difficulty was introduced by Martin-Löf [1966], based on the idea that a sequence is random if and only if it passes every one of a comprehensive class of computably enumerable statistical tests. (An excellent account of these developments may be found in Chapter 6 of Downey and Hirschfeldt [2010].) Dawid notes the deficiency of the von Mises, Wald, Church notion of randomness, and suggests the desirability of investigating a notion of calibration based on Martin-Löf randomness. Vovk and Shen [2010] investigates such a notion. Bienvenu et al. [2011] studies further generalizations in this direction which apply to the case of real-valued functions.

Multicalibration [Hébert-Johnson et al., 2018] asks for calibration on a restricted class of subsets of the data. Hébert-Johnson et al. [2018] gave algorithms for learning multicalibrated predictors with data requirements that scale only modestly with the number of subsets of the data on which calibration is required (and efficient algorithms whenever it is possible to efficiently optimize over these subsets) — but multicalibrated forecasts need not be unique. Jung et al.
[2021] generalized multicalibration (which aims to be consistent with mean outcomes) to moments and other properties of real valued outcomes, and gave efficient algorithms for obtaining these guarantees. Dwork et al. [2021] generalized multicalibration to notions of "outcome indistinguishability" that ask that a probabilistic forecaster be indistinguishable from a true probabilistic model with respect to a hierarchy of distinguishers that might have access not just to the predictions but to the implementation details of the forecaster itself. Dwork et al. [2021] explicitly connect outcome indistinguishability to philosophical questions surrounding individual probabilities. Multicalibration has proven to be an effective technique for improving individual predictions in several applications in predictive medicine [Barda et al., 2020, 2021]. Foster and Vohra [1998] gave the first algorithm to constructively make predictions of individual probabilities guaranteed to generate calibrated forecasts against arbitrary sequences of outcomes (and so necessarily without any knowledge of the "true individual probabilities", since the outcomes can be generated adversarially, with knowledge of the predictor's algorithm). Sandroni et al. [2003] show constructively how to achieve calibration in the infinite data limit on any computable subsequence of an arbitrary sequence of outcomes. Sandroni [2003] showed that any empirical test (not just calibration tests) that is guaranteed to pass an expert who is forecasting true individual probabilities can be passed by a prediction algorithm on any sequence of outcomes. This is closely related to the fact that individual probabilities are not uniquely specified by data — and so we cannot attempt to test an expert by computing unique individual probabilities ourselves. Gupta et al.
[2022] gave computationally and sample efficient algorithms for achieving multi-calibrated forecasts against arbitrary sequences of outcomes — for means, moments, and quantiles. Bastani et al. [2022] gave practical implementations of quantile multicalibration algorithms in adversarial sequential settings, and applied them to give algorithms for producing prediction sets of various kinds of classifiers with calibrated, group-wise conditionally valid guarantees. Although Sandroni [2003] showed that no empirical test of outcomes can distinguish a forecaster with knowledge of true individual probabilities from one without such knowledge in isolation, Al-Najjar and Weinstein [2008] and Feinberg and Stewart [2008] showed that there are comparative tests that can distinguish between two forecasters, one of whom is forecasting true individual probabilities and one of whom is not. In particular, the test of Feinberg and Stewart [2008] is based on checking for cross-calibration between two forecasters — i.e. calibration conditional on the predictions of both forecasters — and is driven by the fact that on a sequence of predictions such that one forecaster predicts a probability for an outcome p and the other predicts a probability p′ ≠ p, they cannot both be right, which is empirically verifiable if there are many such rounds. In the context of studying the utility of predictors for downstream fairness interventions, Garg et al. [2019] study predictors that are refinements of one another (in the sense of DeGroot and Fienberg [1981, 1983]). They give a simple algorithm ("Merge") that, given any two predictors f1, f2, produces a predictor f3 that is cross-calibrated with respect to f1 and f2, and hence is a refinement of both. A variant of the "Merge" algorithm of Garg et al.
[2019] could be used in place of our "Reconcile" algorithm in our arguments; the two algorithms have incomparable data requirements, but would lead to the same qualitative conclusions. Globus-Harris et al. [2022] proposes a framework in which models that are sub-optimal on different subsets of the population can be updated and improved as part of a "bias bounties" program by means of falsification; this is another setting in which models can be made to be contestable.

2 Basic Settings and Definitions

We study prediction tasks over a domain Z = X × Y. Here X represents the feature domain and Y represents the label domain. To avoid dealing with measure-theoretic issues, we assume in this paper that X is a discrete set, but this is not essential to any of our results. For this paper we will restrict attention to binary prediction tasks, where Y = {0, 1} records the outcome of some binary event. Given a labelled example (x, y) ∈ Z, we view x as encoding all observable characteristics of the instance (e.g. meteorological conditions in a weather prediction task, demographic attributes and medical history in a predictive medicine task, etc.), and y represents the binary outcome we are trying to predict (and, when part of the training data, represents the outcome of the binary event that we have observed and recorded). We model the world via a distribution D ∈ ΔZ. Generally we will not have a direct description of the distribution, and instead have access only to a sample of n datapoints D sampled i.i.d. from D, which we will write as D ∈ Zⁿ. We will also sometimes identify a dataset D = {(x1, y1), . . . , (xn, yn)} with the empirical distribution over D, which is simply the discrete distribution that places probability mass 1/n on each point (xi, yi) for i ∈ {1, . . . , n}.
A model is some function f : X → [0, 1], and our (typically unattainable) goal is to find a model f* that has the property that for all x ∈ X, f*(x) = Pr_{(x,y)∼D}[y = 1 | x] is the conditional label expectation given x, or (since we are assuming labels are binary) just "the individual probability" of the outcome for x. Suppose someone purports to have a model for individual probabilities f. How can we evaluate whether f is any good? If our goal were purely prediction, we might evaluate f via its squared error — i.e. the expected (squared) deviation of its prediction from the true label. This is the objective we would minimize if we were solving (e.g.) a least squares regression problem:

Definition 2.1 (Brier Score). The squared error (also known as Brier score) of a model f evaluated on distribution D is:
$$B(f, D) = \mathbb{E}_{(x,y)\sim D}\left[(f(x) - y)^2\right]$$

Observe that when we treat a dataset D = {(x1, y1), . . . , (xn, yn)} as an empirical distribution, then we have:
$$B(f, D) = \frac{1}{n}\sum_{i=1}^{n} (f(x_i) - y_i)^2$$

The Brier score can be accurately estimated given access only to samples from a distribution, and a justification for evaluating models via their Brier score is that amongst all models, the Brier score is minimized by the true individual probabilities encoded by a probability distribution.

Lemma 2.1. Fix any probability distribution D and let f*(x) = Pr_{(x,y)∼D}[y = 1 | x] represent the true individual probabilities encoded by D. Let f : X → [0, 1] be any other model. Then:
$$B(f^*, D) \le B(f, D)$$

Thus if we have two models f1 and f2, and can verify from data that B(f1, D) < B(f2, D), this constitutes an empirical falsification of the claim that f2 correctly encodes individual probabilities.

3 A Reconciliation Procedure

Suppose we are given two models f1, f2 : X → [0, 1] that purport to predict individual probabilities.
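As a quick illustration, the empirical Brier score of Definition 2.1 and the optimality claim of Lemma 2.1 can be checked on a toy dataset. This is our own minimal sketch, not code from the paper; the helper names are ours.

```python
# Minimal sketch: empirical Brier score of a model on a dataset
# D = [(x_i, y_i), ...], treating D as the empirical distribution.

def brier_score(f, D):
    """Mean squared error of predictions f(x) against binary labels y."""
    return sum((f(x) - y) ** 2 for x, y in D) / len(D)

# Toy check of Lemma 2.1: the true conditional-mean model minimizes the score.
# Two x-values; y = 1 on 8 of 10 points with x = 0 and 2 of 10 with x = 1.
D = [(0, 1)] * 8 + [(0, 0)] * 2 + [(1, 1)] * 2 + [(1, 0)] * 8
f_star = lambda x: 0.8 if x == 0 else 0.2   # empirical conditional means
f_other = lambda x: 0.5                     # a competing model
assert brier_score(f_star, D) <= brier_score(f_other, D)
```

Here `brier_score(f_star, D)` evaluates to 0.16 while the constant model scores 0.25, consistent with the lemma.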
Our principal concern is the "model multiplicity" problem — that f1 and f2 differ substantially in their predictions, and yet we cannot falsify either of the two models from the data. Thus we will be interested in regions in which these models disagree substantially in their predictions. We will define "substantially" by an arbitrarily small discretization parameter ε:

Definition 3.1. Two models f1 and f2 have an ε-disagreement on a point x ∈ X if |f1(x) − f2(x)| > ε. Let U_ε(f1, f2) be the set of points on which f1 and f2 have an ε-disagreement:
$$U_\varepsilon(f_1, f_2) = \{x : |f_1(x) - f_2(x)| > \varepsilon\}$$

Informally, we will say that if f1 and f2 do not have an ε-disagreement on x, then they agree on x. One way that we can empirically falsify a model f is by measuring the average outcome y on some subset or group defined on the data, and comparing it to the average prediction of the model f on the same subset. If the two differ substantially, the model must be incorrect. But from finite data we will only be able to accurately measure these averages on groups that are sufficiently large. We will model groups g as indicator functions g : X → {0, 1} that specify whether each data point x is in the group (g(x) = 1) or not (g(x) = 0). We will use the following notation to measure the size of a group as measured on the underlying distribution D:

Definition 3.2. Under a distribution D, a group g : X → {0, 1} has probability mass μ(g) defined as:
$$\mu(g) = \Pr_{(x,y)\sim D}[g(x) = 1]$$

Given a model f and a group g, we can quantify the extent to which the average prediction of the model on points in g deviates from the average (expected) outcome on points in g.
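Definitions 3.1 and 3.2 are straightforward to compute over a finite sample treated as the empirical distribution. The following sketch is ours (the function names are not the paper's):

```python
# Sketch of Definition 3.1 (epsilon-disagreement region) and Definition 3.2
# (group probability mass) over a finite dataset D = [(x, y), ...].

def eps_disagreement(f1, f2, eps, xs):
    """U_eps(f1, f2): points where the two models differ by more than eps."""
    return [x for x in xs if abs(f1(x) - f2(x)) > eps]

def group_mass(g, D):
    """mu(g): fraction of the (empirical) distribution falling in group g."""
    return sum(g(x) for x, _ in D) / len(D)

f1 = lambda x: 0.9 if x < 3 else 0.5
f2 = lambda x: 0.2 if x < 3 else 0.55
D = [(x, x % 2) for x in range(6)]

U = eps_disagreement(f1, f2, eps=0.1, xs=[x for x, _ in D])
assert U == [0, 1, 2]                 # |0.9 - 0.2| > 0.1; |0.5 - 0.55| is not

g = lambda x: 1 if x in U else 0      # the disagreement region, as a group
assert group_mass(g, D) == 0.5
```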
In expectation over the distribution, these two quantities should agree exactly if f = f* actually encodes true individual probabilities, but from data we will only be able to estimate these quantities approximately. Thus we define an approximate notion of agreement, which we call approximate group conditional mean consistency. Models f that can be shown not to satisfy approximate group conditional mean consistency on any group g have been falsified, in that this constitutes a proof that f ≠ f* (i.e. f must not encode true individual probabilities).

Definition 3.3. A model f : X → [0, 1] satisfies α-approximate group conditional mean consistency with respect to a group g ∈ G if:
$$\left(\mathbb{E}_{(x,y)\sim D}[f(x) \mid g(x) = 1] - \mathbb{E}_{(x,y)\sim D}[y \mid g(x) = 1]\right)^2 \le \frac{\alpha}{\mu(g)}$$

Note that we parameterize α-approximate group conditional mean consistency so that it asks for a weaker condition the smaller the size μ(g) of the group g. Informally, for a fixed value of α, it asks that the average prediction of f and the actual expected outcomes y on a group g differ by at most an error parameter that is proportional to $1/\sqrt{\mu(g)}$. This will turn out to be the "right" scaling because it corresponds to the precision to which we can measure these quantities from data.

We will show a quantitative version of the following statement. It must be the case that either:

1. f1 and f2 agree on almost all of their predictions, or

2. f1, or f2, or both can be proven from the data to violate a group conditional mean consistency condition on a large set of points. In this case, the falsified model can be "patched" with a simple update in a way that improves its accuracy.
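Definition 3.3 lends itself to a direct empirical check. Below is a hedged sketch of ours (names are hypothetical, and the empirical distribution over a small dataset stands in for D):

```python
# Sketch: checking alpha-approximate group conditional mean consistency of a
# model f on a group g over a dataset D, per Definition 3.3.

def group_mass(g, D):
    return sum(g(x) for x, _ in D) / len(D)

def consistency_gap(f, g, D):
    """Squared gap between f's average prediction and the average outcome on g."""
    members = [(x, y) for x, y in D if g(x) == 1]
    mean_pred = sum(f(x) for x, _ in members) / len(members)
    mean_y = sum(y for _, y in members) / len(members)
    return (mean_pred - mean_y) ** 2

def is_consistent(f, g, D, alpha):
    return consistency_gap(f, g, D) <= alpha / group_mass(g, D)

D = [(0, 1), (0, 1), (0, 0), (1, 0), (1, 0), (1, 1)]
g = lambda x: 1 if x == 0 else 0      # group: points with x == 0
f = lambda x: 0.9                      # overshoots on g: the true mean is 2/3

# gap = (0.9 - 2/3)^2 ~ 0.054; mass = 0.5, so alpha = 0.01 is violated
assert not is_consistent(f, g, D, alpha=0.01)
assert is_consistent(f, g, D, alpha=0.05)
```

The two asserts mirror the parameterization in the text: the same gap is a violation for small α but acceptable for larger α, with the threshold scaling as α/μ(g).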
The result is that there can be no substantial disagreements about individual probabilities by people who are willing to be convinced by the evidence of the data before them: models which disagree on a substantial fraction of their predictions witness for each other places in which their predictions are falsified by the data, and provide the means to correct (and improve) each other. Thus disagreements can be leveraged to produce improved models, and this process necessarily converges only when the models agree. To formalize this, we start by partitioning the set of ε-disagreements U_ε(f1, f2) into two additional sets that will be important — the set of disagreements on which f1(x) > f2(x), and the set of disagreements on which f1(x) < f2(x).

Definition 3.4. Fix any two models f1, f2 : X → [0, 1] and any ε > 0. Define the sets:
$$U^{>}_\varepsilon(f_1, f_2) = \{x \in U_\varepsilon(f_1, f_2) : f_1(x) > f_2(x)\}$$
$$U^{<}_\varepsilon(f_1, f_2) = \{x \in U_\varepsilon(f_1, f_2) : f_1(x) < f_2(x)\}$$

Based on these sets, for • ∈ {>, <} and i ∈ {1, 2}, define the quantities:
$$v^{\bullet}_{*} = \mathbb{E}_{(x,y)\sim D}[y \mid x \in U^{\bullet}_\varepsilon(f_1, f_2)]$$
$$v^{\bullet}_{i} = \mathbb{E}_{(x,y)\sim D}[f_i(x) \mid x \in U^{\bullet}_\varepsilon(f_1, f_2)]$$

Our analysis will proceed by showing that if U_ε(f1, f2), the set of ε-disagreements of f1 and f2, is large, then at least one of the two sets U^>_ε(f1, f2) and U^<_ε(f1, f2) will witness a large violation of group conditional mean consistency for at least one of the two models.

Lemma 3.1. Fix any two models f1, f2 : X → [0, 1] and any ε > 0.
If the fraction of points on which f1 and f2 have an ε-disagreement has mass μ(U_ε(f1, f2)) = α, then for some • ∈ {>, <} and some i ∈ {1, 2}, we have that:
$$\mu(U^{\bullet}_\varepsilon(f_1, f_2)) \cdot (v^{\bullet}_{*} - v^{\bullet}_{i})^2 \ge \frac{\alpha \varepsilon^2}{8}$$

In other words, at least one of the sets U^>_ε(f1, f2) and U^<_ε(f1, f2) is a group that witnesses an (αε²/8)-mean consistency violation for at least one of the models f1 and f2.

Proof. Since U_ε(f1, f2) can be written as the disjoint union:
$$U_\varepsilon(f_1, f_2) = U^{>}_\varepsilon(f_1, f_2) \cup U^{<}_\varepsilon(f_1, f_2)$$
we must have that for at least one value of • ∈ {>, <}:
$$\mu(U^{\bullet}_\varepsilon(f_1, f_2)) \ge \frac{\alpha}{2}.$$
Since the points in U^•_ε(f1, f2) are ε-separated, we must have that |v^•_1 − v^•_2| ≥ ε. Therefore, for at least one of i ∈ {1, 2} we must have that
$$|v^{\bullet}_{i} - v^{\bullet}_{*}| \ge \frac{\varepsilon}{2}.$$
Combining these two claims, we must have that:
$$\mu(U^{\bullet}_\varepsilon(f_1, f_2)) \cdot (v^{\bullet}_{i} - v^{\bullet}_{*})^2 \ge \frac{\alpha}{2}\cdot\frac{\varepsilon^2}{4} = \frac{\alpha \varepsilon^2}{8}$$

Let's consider the significance of this Lemma. Most basically, if we have two models f1 and f2 that disagree substantially, this lemma gives an easily constructible set (U^>_ε(f1, f2) or U^<_ε(f1, f2)) that falsifies by a substantial quantitative margin either the assertion that f1 encodes true conditional label expectations or the assertion that f2 does. Next, we show that not only do these sets falsify that at least one of f1 or f2 is a "correct" model — they provide a directly actionable way to improve one of the models. We prove the following lemma (which is closely related to the kinds of updates used to obtain multicalibrated predictors [Hébert-Johnson et al., 2018]), which shows us how to improve a model given a group g on which the model fails to satisfy approximate group conditional mean consistency.

Lemma 3.2.
Fix any model f_t : X → [0, 1], group g_t : X → {0, 1}, and distribution D. Let
$$\Delta_t = \mathbb{E}_{(x,y)\sim D}[y \mid g_t(x) = 1] - \mathbb{E}_{(x,y)\sim D}[f_t(x) \mid g_t(x) = 1]$$
and f_{t+1} = h(x, f_t; g_t, Δ_t), where h is a "patch" defined as:
$$h(x, f; g, \Delta) = \begin{cases} f(x) + \Delta & g(x) = 1 \\ f(x) & \text{otherwise} \end{cases}$$
Then:
$$B(f_t, D) - B(f_{t+1}, D) = \mu(g_t) \cdot \Delta_t^2$$

In other words: given any model f_t and a group g_t that witnesses a violation of α-approximate group conditional mean consistency on f_t, we can efficiently produce a model f_{t+1} whose Brier score is smaller by at least α.

Proof. By the definition of the patch h(x, f_t; g_t, Δ_t), models f_t and f_{t+1} differ in their predictions only for x such that g_t(x) = 1. Therefore we can calculate:
$$B(f_t, D) - B(f_{t+1}, D) = \Pr[g_t(x) = 0] \cdot \mathbb{E}_{(x,y)\sim D}\left[(f_t(x) - y)^2 - (f_{t+1}(x) - y)^2 \mid g_t(x) = 0\right] + \Pr[g_t(x) = 1] \cdot \mathbb{E}_{(x,y)\sim D}\left[(f_t(x) - y)^2 - (f_{t+1}(x) - y)^2 \mid g_t(x) = 1\right]$$
$$= \mu(g_t)\, \mathbb{E}_{(x,y)\sim D}\left[(f_t(x) - y)^2 - (f_t(x) + \Delta_t - y)^2 \mid g_t(x) = 1\right]$$
$$= \mu(g_t)\left(2\Delta_t \,\mathbb{E}_{(x,y)\sim D}[y - f_t(x) \mid g_t(x) = 1] - \Delta_t^2\right)$$
$$= \mu(g_t)\left(2\Delta_t^2 - \Delta_t^2\right) = \mu(g_t)\Delta_t^2$$

Summarizing, whenever we have two models that have ε-disagreements on an α-fraction of points, we can always constructively falsify at least one of the models, and update it to improve its Brier score by at least O(αε²). Finally, to make our argument that in-sample quantities (i.e. as measured on the data samples) translate to out-of-sample quantities (measured on the distribution), it will be useful for our algorithm to not use arbitrarily precise values when patching models, but instead values that are rounded to a finite grid:

Definition 3.5. Fix any integer m. Let [1/m] denote the set of m + 1 grid points:
$$\left[\frac{1}{m}\right] = \left\{0, \frac{1}{m}, \frac{2}{m}, \ldots, \frac{m-1}{m}, 1\right\}$$
, m \u22121 m , 1 o For any value v \u2208[0, 1] let Round(v; m) = argminv\u2032\u2208[1/m] |v \u2212v\u2032| denote the closest grid point to v in [1/m]. Observe that for v\u2032 = Round(v; m) we always have that |v \u2212v\u2032| \u2264 1 2m We put this all together in Algorithm 1 (Reconciler). For simplicity of exposition, we initially describe and analyze Algorithm 1 as if it has direct access to distributional quantities. In practice, of course, we will have access only to samples from the distribution, and we will have to run an algorithm on a dataset D \u2208Zn consisting of these samples. We can do so by interpreting D as the uniform distribution over the samples contained within it. In Section 4, we show that when we run Reconcile(f1, f2, \u03b1, \u01eb, D) on a dataset D \u223cDn consisting of n i.i.d. samples from D, then the guarantees we prove in Theorem 3.1 with respect to the empirical distribution D translate over to the true distribution D with error terms quickly tending to 0 11 \fas the number of samples n grows large. Algorithm 1: Reconcile(f1, f2, \u03b1, \u01eb, D) Let t = t1 = t2 = 0 and f t1 1 = f1, f t2 2 = f2. Let m = \u2308 2 \u221a\u03b1\u01eb\u2309 while \u00b5(U\u01eb(f t1 1 , f t2 2 )) \u2265\u03b1 do For each \u2022 \u2208{>, <} and i \u2208{1, 2} Let: v\u2022 \u2217= E (x,y)\u223cD[y|x \u2208U \u2022 \u01eb (f t1 1 , f t2 2 )] v\u2022 i = E (x,y)\u223cD[f ti i (x)|x \u2208U \u2022 \u01eb (f t1 1 , f t2 2 )] Let: (it, \u2022t) = argmax i\u2208{1,2},\u2022\u2208{>,<} \u00b5(U \u2022 \u01eb (f t1 1 , f t2 2 )) \u00b7 (v\u2022 \u2217\u2212v\u2022 i )2 breaking ties arbitrarily. Let: gt(x) = ( 1 x \u2208U \u2022t \u01eb (f t1 1 , f t2 2 ) 0 otherwise Let: \u02dc \u2206t = E (x,y)\u223cD[y|gt(x) = 1] \u2212 E (x,y)\u223cD[f tit it (x)|gt(x) = 1] \u2206t = Round( \u02dc \u2206t; m) Let: f ti+1 i (x) = h(x, f ti i , gt, \u2206t), ti = ti + 1, t = t + 1. Output (f t1 1 , f t2 2 ). Theorem 3.1. 
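The main loop of Algorithm 1 (Reconcile) can be sketched over an empirical distribution as follows. This is a simplified illustration, not the paper's reference implementation: it rounds Δ̃_t to the nearest multiple of 1/m (which achieves the |Δ̃_t − Δ_t| ≤ 1/(2m) guarantee of the Round operation, extended to negative values), and it does not clip patched predictions back into [0, 1].

```python
import numpy as np

def reconcile(f1, f2, y, alpha, eps):
    """Sketch of Reconcile (Algorithm 1) over a uniform empirical
    distribution: while the eps-disagreement region has mass >= alpha,
    patch the (model, region) pair witnessing the largest group
    conditional mean-consistency violation."""
    f1, f2 = f1.astype(float).copy(), f2.astype(float).copy()
    m = int(np.ceil(2 / (np.sqrt(alpha) * eps)))       # rounding grid [1/m]
    while np.mean(np.abs(f1 - f2) > eps) >= alpha:
        best, best_U, best_i = -1.0, None, None
        for U in (f1 > f2 + eps, f1 < f2 - eps):       # U> and U<
            if not U.any():
                continue
            v_star = y[U].mean()
            for i, f in enumerate((f1, f2)):
                viol = U.mean() * (v_star - f[U].mean()) ** 2
                if viol > best:
                    best, best_U, best_i = viol, U, i
        f = (f1, f2)[best_i]
        delta = y[best_U].mean() - f[best_U].mean()    # unrounded Delta_t
        delta = np.round(delta * m) / m                # round: error <= 1/(2m)
        f[best_U] += delta                             # the patch h
    return f1, f2
```

On the toy instance from the earlier lemma (two constant models 0.9 and 0.1 against balanced labels), two rounds of patching bring both models to the label mean 0.5 and the disagreement mass to zero.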
For any pair of models f1, f2 : X \u2192[0, 1], any distribution D, and any \u03b1, \u01eb > 0, Algorithm 1 (Reconcile) runs for T = T1 + T2 many rounds and outputs a pair of models (f T1 1 , f T2 2 ) such that: 1. T \u2264(B(f1, D) + B(f2, D)) \u00b7 16 \u03b1\u01eb2 2. B(f T1 1 , D) \u2264B(f1, D) \u2212T1 \u00b7 \u03b1\u01eb2 16 and B(f T2 2 , D) \u2264B(f2, D) \u2212T2 \u00b7 \u03b1\u01eb2 16 3. \u00b5(U\u01eb(f T1 1 , f T2 2 )) < \u03b1. Remark 3.1. The third conclusion of Theorem 3.1 states that the \ufb01nal models output (f T1 1 , f T2 2 ) approximately agree on their predictions of individual probabilities almost everywhere. The \ufb01rst conclusion states that the reconciliation procedure converges quickly. The second condition of Theorem 3.1 focuses on one way that the output models (f T1 1 , f T2 2 ) are superior to the input models (f1, f2) \u2014 they are more accurate. But there is also another way: Every intermediate model f t1 1 and f t2 2 for t1 < T1 and t2 < T2 considered by the reconciliation procedure but ultimately not output has been falsi\ufb01ed via the demonstration of a set U \u2022 \u01eb (\u00b7, \u00b7) on which it fails to satisfy \u03b1-approximate group conditional mean consistency for a large value of \u03b1. Proof. By Lemma 3.1, for each round t < T we must have that: \u00b5(U \u2022t \u01eb (f t1 1 , f t2 2 )) \u00b7 \u0000v\u2022t \u2217\u2212v\u2022t it \u00012 \u2265\u03b1\u01eb2 8 Let \u02dc f ti+1 t = h(x, f ti i , gt, \u02dc \u2206t) \u2014 i.e. the update that would have resulted at round t had the algorithm used the unrounded measurement \u02dc \u2206t rather than the rounded measurement \u2206t. By Lemma 3.2, we have that: B(f ti t , D) \u2212B( \u02dc f ti+1 t , D) \u2265\u03b1\u01eb2 8 12 \f. 
We can now compute B(f ti t , D) \u2212B(f ti+1 t , D) = (B(f ti t , D) \u2212B( \u02dc f ti+1 t , D)) \u2212(B(f ti+1 t , D) \u2212B( \u02dc f ti+1 t , D)) \u2265 \u03b1\u01eb2 8 \u2212(B(f ti+1 t , D) \u2212B( \u02dc f ti+1 t , D)) So it remains to upper bound (B(f ti+1 t ) \u2212B( \u02dc f ti+1 t )). Let \u02c6 \u2206= \u02dc \u2206t \u2212\u2206t. We make several observations: First, \u02dc f ti+1 t = h(x, f ti+1 t , gt, \u02c6 \u2206). Second, \u02c6 \u2206 = E (x,y)\u223cD[y|gt(x) = 1] \u2212 E (x,y)\u223cD[f ti i (x)|gt(x) = 1] \u2212\u2206t = E (x,y)\u223cD[y|gt(x) = 1] \u2212 E (x,y)\u223cD[f ti+1 i (x)|gt(x) = 1] Third, by de\ufb01nition of the Round operation, | \u02c6 \u2206| \u2264 1 2m. Therefore we can again apply Lemma 3.2 to conclude that: B(f ti+1 t , D) \u2212B( \u02dc f ti+1 t , D) = \u00b5(gt) \u02c6 \u22062 \u2264 1 4m2 Combining this with our initial calculation lets us conclude that: B(f ti t , D) \u2212B(f ti+1 t , D) \u2265\u03b1\u01eb2 8 \u2212 1 4m2 \u2265\u03b1\u01eb2 16 Here we are using the fact that we have set m \u2265 2 \u221a\u03b1\u01eb. Applying this lemma for each of the T1 and T2 updates for f1 and f2, respectively, we get that: B(f T1 1 , D) \u2264B(f1, D)\u2212T1 \u00b7 \u03b1\u01eb2 16 and B(f T2 2 , D) \u2264B(f2, D)\u2212T2 \u00b7 \u03b1\u01eb2 16 . Since Brier scores are non-negative, we conclude that T1 \u2264B(f1, D) 16 \u03b1\u01eb2 and T2 \u2264B(f2, D) 16 \u03b1\u01eb2 . Thus T = T1 + T2 \u2264(B(f1, D) + B(f2, D)) \u00b7 16 \u03b1\u01eb2 Finally the halting condition of the algorithm implies that \u00b5(U\u01eb(f T1 1 , f T2 2 )) < \u03b1. Thus if we start with any two models that have substantial disagreement, we are guaranteed to be able to e\ufb03ciently produce strictly improved models that almost agree almost everywhere. In particular, we can never be in a position in which we have two equally accurate but unimprovable models that have substantial disagreements: in this case, we can always improve the models. 
The only time we can have substantial model disagreement is if we refuse to improve the models even in the face of e\ufb03ciently veri\ufb01able and actionable evidence that one of the models is suboptimal and improvable. We observe that any pair of models that have gone through the \u201cReconcile\u201d process must also produce very similar estimates for the conditional label expectation over any su\ufb03ciently large reference class (i.e. any subset of the feature space). In particular, for any su\ufb03ciently large reference class, either both models are consistent with the data or they are not \u2014 but they cannot substantially disagree. Corollary 3.1. Let E \u2282X be any subset of the feature space. Let f1 and f2 be any two models that have been output by Algorithm 1 (Reconcile) with parameters \u01eb and \u03b1. Let: p1(E) = E (x,y)\u223cD[f1(x)|x \u2208E] and p2(E) = E (x,y)\u223cD[f2(x)|x \u2208E] be the estimates for Pr[y = 1|x \u2208E] implied by models f1 and f2 respectively. Then: |p1(E) \u2212p2(E)| \u2264 \u03b1 \u00b5(E) + \u01eb 13 \fProof. Let S\u01eb(f1, f2) = {x : x \u0338\u2208U\u01eb(f1, f2)} be the set of points on which f1 and f2 do not have an \u01eb-disagreement. Recall that \u00b5(S\u01eb(f1, f2)) \u22651 \u2212\u03b1. We compute: \u00b5(E)|p1(E) \u2212p2(E)| = \f \f \f \f \f X x\u2208E \u00b5({x}) \u00b7 (f1(x) \u2212f2(x)) \f \f \f \f \f = \f \f \f \f \f \f X x\u2208E\u2229U\u01eb(f1,f2) \u00b5({x}) \u00b7 (f1(x) \u2212f2(x)) + X x\u2208E\u2229S\u01eb(f1,f2) \u00b5({x}) \u00b7 (f1(x) \u2212f2(x)) \f \f \f \f \f \f \u2264 \u03b1 + \u00b5(E \u2229S\u01eb(f1, f2))\u01eb \u2264 \u03b1 + \u00b5(E)\u01eb Dividing by \u00b5(E) yields the corollary. 4 Data Requirements We have presented our Algorithm 1 as if it has direct access to the distribution D. Of course in general we do not have access to D, but rather have access to some set D of n i.i.d. samples from D. 
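Corollary 3.1 is easy to sanity-check empirically: for any reference class E, the gap between the two reconciled models' average predictions is bounded by α/μ(E) + ε. The helper below (our illustration) computes both sides on a finite sample.

```python
import numpy as np

def reference_class_gap_bound(f1, f2, E, alpha, eps):
    """Corollary 3.1 sketch: for models whose eps-disagreement region has
    mass at most alpha, their average predictions over any subset E
    (a boolean mask) differ by at most alpha / mu(E) + eps.
    Returns the empirical (gap, bound) pair."""
    gap = abs(f1[E].mean() - f2[E].mean())
    bound = alpha / E.mean() + eps
    return gap, bound

# Toy check: f2 agrees with f1 within eps = 0.1 except on 3 of 100 points.
f1 = np.full(100, 0.3)
f2 = np.full(100, 0.35)
f2[:3] = 0.9                                  # the eps-disagreement region
E = np.zeros(100, dtype=bool)
E[:10] = True                                 # a reference class with mu(E) = 0.1
gap, bound = reference_class_gap_bound(f1, f2, E, alpha=0.03, eps=0.1)
assert gap <= bound
```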
We will typically instead run Algorithm 1 over the empirical distribution over D. We will prove that with high probability over the sample of D, when Algorithm 1 is run over the empirical distribution on D, then its guarantees translate over to the distribution D from which D was drawn, with error parameters that go to zero with the size n of the data sample. It will be important that the dataset D that we run Algorithm 1 on is sampled independently of the models f1 and f2 to be reconciled \u2014 i.e. it should be fresh data that was not used to train either of the models being reconciled. We begin by counting the number of potential models f t1 1 , f t2 2 that Algorithm 1 might output. Lemma 4.1. Fix any pair of models f1, f2 : X \u2192[0, 1] and any \u03b1, \u01eb > 0. Then there is a set C of pairs of models of size at most |C| \u2264(4 \u00b7 (m + 1))32/\u03b1\u01eb2+1 such that for any distribution D on which Algorithm 1 is run, the output models (f t1 1 , f t2 2 ) \u2208C. Here, as in Algorithm 1, m = \u2308 2 \u221a\u03b1\u01eb\u2309. Proof. Given a run of Algorithm 1 for T rounds, let \u03c0 = {(it, \u2022t, \u2206t)}T t=1 denote the record of the quantities (it, \u2022t, \u2206t) chosen at each round t. Let \u03c0 0, Algorithm 1 (Reconcile) runs for T = T1 + T2 many rounds and outputs a pair of models (f T1 1 , f T2 2 ) such that: 1. T \u2264 16 \u03b1\u01eb2 2. For any \u03b4 > 0, with probability at least 1 \u2212\u03b4 over the randomness of D \u223cDn we have that: B(f T1 1 , D) \u2264B(f1, D) \u2212T1 \u00b7 \u03b1\u01eb2 16 + 2 v u u u t \u0000 16 \u03b1\u01eb2 + 1 \u0001 log \u0012 64 \u0010 \u2308 2 \u221a\u03b1\u01eb \u2309+1 \u0011 \u03b4 \u0013 n and B(f T2 2 , D) \u2264B(f2, D) \u2212T2 \u00b7 \u03b1\u01eb2 16 + 2 v u u u t \u0000 16 \u03b1\u01eb2 + 1 \u0001 log \u0012 64 \u0010 \u2308 2 \u221a\u03b1\u01eb \u2309+1 \u0011 \u03b4 \u0013 n 3. 
For any \u03b4 > 0, with probability at least 1 \u2212\u03b4 over the randomness of D \u223cDn: \u00b5(U\u01eb(f T1 1 , f T2 2 )) < \u03b1 + v u u u t \u0000 32 \u03b1\u01eb2 + 1 \u0001 log \u0012 8 \u0010 \u2308 2 \u221a\u03b1\u01eb \u2309+1 \u0011 \u03b4 \u0013 n . Remark 4.1. Theorem 4.1 tells us that the guarantees we proved for Algorithm 1 in Theorem 3.1 (when we assumed direct access to the distribution D) continue to hold when all we have access to is a \ufb01nite sample of n points from the data distribution, with additional error terms that tend to zero as n grows large. How large is large? If we want the \ufb01nal disagreement region to have mass at most 2\u03b1 (i.e. we want the third conclusion of Theorem 4.1 to tell us that \u00b5(U\u01eb(f T1 1 , f T2 2 )) < 2\u03b1), then solving for n in the error bound, we \ufb01nd that it su\ufb03ces to have n samples for n on the order of: n \u2208\u02dc O \u0012log(1/\u03b4) \u03b13\u01eb2 \u0013 where the \u02dc O() notation hides logarithmic terms in 1/\u03b1 and 1/\u01eb. This is a remarkably small amount of data: Recall that we would need \u2248log(1/\u03b4) \u03b1\u01eb2 samples just to estimate the conditional label expectation Pr[y = 1|x \u2208S] for a conditional event S with \u00b5(S) = \u03b1 up to error \u01eb with probability 1 \u2212\u03b4 (or for two parties with disjoint samples to agree on this conditional label expectation up to error \u01eb). Theorem 4.1 tells us that in fact two parties can be made to agree on a 1\u2212\u03b1 fraction of points up to error \u01eb with an additional amount of data only on the order of \u02dc O(1/\u03b12). Crucially this bound is independent of the complexity of the models f1 and f2, and so we can think of the initial models as being arbitrary (and arbitrarily sophisticated). Remark 4.2. 
Rather than updating on the disagreement regions U \u2022 \u01eb (f1, f2), we could update separately on regions on which the two models predict values f1(x) = v and f2(x) = v\u2032 for each v \u0338= v\u2032, to produce a model that is cross calibrated in the sense of Feinberg and Stewart [2008]. Garg et al. [2019] give an algorithm for doing this called \u201cMerge\u201d that we could use for our purposes with small modi\ufb01cations11. Carrying this through would yield an algorithm that would require n = \u02dc O(log(1/\u03b4)/\u03b1\u01eb4) many samples, which is incomparable to the guarantee we obtain in Theorem 4.1 for Algorithm 1. 11We would have to bucket both predictor\u2019s outputs into O(1/\u01eb) many buckets, ignore pairs of predictions (v, v\u2032) that have probability mass smaller than \u03b1\u01eb2, and produce two models, one for each party, that might disagree on regions in which the original models did not have \u01eb-disagreements. 15 \fThe proof of Theorem 4.1\u2014which is in Appendix A\u2014is a straightforward argument combining Hoe\ufb00ding\u2019s inequality (Theorem A.1) with a union bound over the models enumerated in Lemma 4.1. Informally, we will argue that for each of the pairs of models (f1, f2) in C enumerated in Lemma 4.1, their Brier scores are similar as evaluated over D and D, and their disagreement regions U\u01eb(f1, f2) have similar mass over D and D. For any \ufb01xed pair of models (f1, f2) these statements follow from Hoe\ufb00ding\u2019s inequality \u2014 and the fact that C has bounded cardinality means that we can at small cost in data obtain these guarantees uniformly over all pairs of models in C. 5 Contestable Models Thus far we have considered the problem of reconciling two models f1 and f2, and have shown that we require only O(1/(\u03b13\u01eb2)) many points to obtain strictly improved models f \u2032 1, f \u2032 2 that have \u01eb disagreements on at most an \u03b1 measure of points. 
But what if someone then proposes a third model, f3, and then another f4, etc? We could run the reconciliation process again each time\u2014and perhaps if we had k models, repeatedly in a pairwise fashion until all k of the models approximately agreed\u2014but this would naively require a fresh set of samples for each new reconciliation procedure. In this section, we show how to do better: we attach to f just a single sample of \u201ccontestation\u201d data that is of size polynomial in our target reconciliation parameters \u03b1 and \u01eb (and independent of the complexity of the model or distribution). Using this data, we show that we can then put f through a reconciliation procedure with a very large (exponential in the size of its contestation data set) number of models, with the same guarantees as if we had run the models through Algorithm 1 each time. Driving this result is the observation that each time a particular model f is updated using the patch operation de\ufb01ned in Lemma 3.2, f\u2019s squared error drops, independently of which reconciliation process the update is a part of \u2014 and thus the total number of large updates made to a single model is bounded independently of the number of other models that are \u201creconciled\u201d with it. This, together with results from adaptive data analysis that allow us to repeatedly re-use hold-out sets while preserving statistical validity [Dwork et al., 2015, Bassily et al., 2021, Jung et al., 2020] are enough to give the result. We de\ufb01ne a \u201ccontestable model\u201d to be a model f attached to a \ufb01xed sample of \u201ccontestation data\u201d. A \u201ccontestable\u201d model can be \u201ccontested\u201d by identifying any subset of the data identi\ufb01ed by an indicator function g : X \u2192{0, 1}. 
The guarantee of a contestable model is that if it is contested using a subset of the data that is simultaneously large and on which the expectation of the model\u2019s predictions is substantially di\ufb00erent than the expectation of the label, then the model will be updated in a way that corrects the discovered error on the identi\ufb01ed subset of data, and strictly improves the squared error of the model. These contestations are accepted. Contestations can also be rejected on the grounds either that the group identi\ufb01ed is too small, or that the model already predicts on average a value over that group that is su\ufb03ciently close to the true label mean over that group. We aim to design contestable models that can receive a number of contestations over their lifetime that is exponential in the size of their contestation dataset. De\ufb01nition 5.1. A contestable model f consists of a current model fc : X \u2192[0, 1], a dataset D \u2208Zn, and has two operations: f.predict(x) which takes as input a data point x \u2208X and f.contest(g) which takes as input the indicator function for a group g : X \u2192{0, 1}. f.predict(x) outputs fc(x), where fc is the current model belonging to f, and f.contest(g) may update the current model fc to a new model fc+1 according to Algorithm 2. Algorithm 2, which follows, is a randomized algorithm: it samples from the Laplace distribution. We write Lap(b) to denote the sampling operation for the centered Laplace distribution with scale parameter b, which is the distribution that has probability density function f(x; b) = 1 2b exp \u0010 \u2212|x| b \u0011 . The analysis of Algorithm 2 is in Appendix B and goes through di\ufb00erential privacy [Dwork et al., 2006]. Di\ufb00erential privacy was originally introduced as a strong notion of privacy that could be satis\ufb01ed while still carrying out high accuracy statistical analyses, but has since found many other uses. 
Our interest in di\ufb00erential privacy will be because of the transfer theorems of Dwork et al. [2015], Bassily et al. [2021], Jung et al. [2020] which informally state that analyses that are both di\ufb00erentially private and accurate on 16 \fAlgorithm 2: f.Contest Given: A failure probability \u03b4, a dataset D \u2208Zn, an initial model f = f0, a threshold T to accept attempted contestations, a target total number of contestations K, and an upper bound C on the total number of accepted contestations. Let t = 0 denote a count of the number of attempted contestations and c = 0 denote the count of the number of accepted contestations. Let privacy parameter \u01eb = q log K \u03b4 \u221a C ln 1 \u03b4 n Let \u01eb1 = \u221a 512 \u221a 512+1\u01eb, \u01eb2 = 1 \u221a 512+1\u01eb Let \u03c3(\u01eb) = \u221a 32C log(1/\u03b4) \u01ebn Let \u02c6 T0 = T + Lap(\u03c3(\u01eb1)) while there is another model gt given as input to f.contest(gt) and c < C do Compute an empirical estimate of \u00b5(gt) \u00b7 E[y \u2212fc(x)|gt(x) = 1]: \u03b7t(fc, gt) = 1 n X (x,y)\u2208D (y \u2212fc(x)) \u00b7 gt(x) Let \u02c6 \u03b7t = |\u03b7t(fc, gt)| + Lap (2\u03c3(\u01eb1)) if \u02c6 \u03b7t \u2265\u02c6 Tc then The contestation is accepted. Let: \u02dc \u00b5t = 1 n X (x,y)\u2208D gt(x) + Lap(2\u03c3(\u01eb2)) \u02dc \u03b7t = \u03b7t(fc, gt) + Lap(2\u03c3(\u01eb2)) Let \u02dc \u2206t = \u02dc \u03b7t \u02dc \u00b5t Let fc+1(x) = h(x, fc, gt, \u02dc \u2206t), c = c + 1. Let \u02c6 Tc = T + Lap(\u03c3(\u01eb1)) else The contestation is rejected. Let t = t + 1. Halt. 17 \fa sample of data D drawn i.i.d. from an underlying distribution D must also be accurate on the underlying distribution. Algorithm 2 is an instantiation of Algorithm 3 (NumericSparse) from Dwork and Roth [2014], from which it follows that the algorithm is di\ufb00erentially private in the contestation dataset D. We then apply the version of the \u201ctransfer theorem\u201d given in Jung et al. 
[2020], which establishes that its estimates of statistics on D are representative of their true values on the underlying distribution D from which D was drawn. Our use of di\ufb00erential privacy here to get out of sample guarantees closely mirrors its use in H\u00b4 ebert-Johnson et al. [2018] to get a generalization theorem for multicalibration algorithms. In fact, if the \u201ccontestation\u201d sets gt submitted to f.Contest(gt) were the groups with respect to which which multicalibrated predictors are required to satisfy group conditional mean consistency, then Algorithm 2 would essentially (up to some details) be the multicalibration algorithm originally given by H\u00b4 ebert-Johnson et al. [2018]. But a contestable model can take as input the indicator function gt of any group, including those groups U \u2022 \u01eb (f t1 1 , f t2 2 ) used as updates within Reconcile (Algorithm 1). Thus, contestable models will be able to be reconciled with many other models in a data e\ufb03cient way (in addition to being \u201ccontested\u201d on other groups on which they fail to satisfy group conditional mean consistency). Theorem 5.1. Initialized with a dataset D \u223cDn of size n sampled i.i.d. from D, a target number of contestations K, a failure probability \u03b4, a threshold T = \u0398 \u0012 (log K \u03b4 ) 1/3(ln 1 \u03b4) 1/6 n1/3 \u0013 and a limit on successful contestations C = \u0398 \u0000 1 T 2 \u0001 , a contestable model will with probability 1 \u22122\u03b4n: 1. Process at least K contestations gt without halting, 2. Guarantee that every accepted contestation gt is such that: \f \f \f \f E (x,y)\u223cD[gt(x)(y \u2212fc(x))] \f \f \f \f \u2265\u2126 \u0000log K \u03b4 \u00011/3 \u0000ln 1 \u03b4 \u00011/6 n1/3 ! and produces an update that reduces the squared error of f by B(fc) \u2212B(fc+1) = \u2126(T 2) = \u2126 \u0000log K \u03b4 \u00012/3 \u0000ln 1 \u03b4 \u00011/3 n2/3 ! 3. 
Guarantee that every rejected contestation gt is such that: \f \f \f \f E (x,y)\u223cD[gt(x)(y \u2212fc(x))] \f \f \f \f \u2264O \u0000log K \u03b4 \u00011/3 \u0000ln 1 \u03b4 \u00011/6 n1/3 ! The proof of Theorem 5.1 can be found in Appendix B. We now observe how a contestable model with the guarantees of Theorem 5.1 can be repeatedly used as part of a reconciliation procedure akin to Algorithm 1. Algorithm 3: Contestable-Reconcile(f1, f2, \u03b1, \u01eb, D) while \u00b5(U\u01eb(f1, f2)) \u2265\u03b1 do For \u2022 \u2208{>, <} let: g\u2022(x) = ( 1 x \u2208U \u2022 \u01eb (f1, f2) 0 otherwise f1.contest(g>), f1.contest(g<), f2.contest(g>), f2.contest(g<) The idea is simple (and outlined in Algorithm 3): While we have two contestable models f1 and f2 that have \u01eb-disagreements on more than an \u03b1 fraction of the distribution, contest both models on the disagreement 18 \fsets U > \u01eb (f1, f2) and U < \u01eb (f1, f2). By Lemma 3.1, if indeed \u00b5(U \u2022 \u01eb (f1, f2)) \u2265\u03b1, then for at least one of the models i \u2208{1, 2} and for at least one of the sets \u2022 \u2208{>, <}, we must have: |\u00b5(U \u2022 \u01eb (f1, f2)) \u00b7 E[y \u2212fc(x)|x \u2208U \u2022 \u01eb (f1, f2)]| \u2265\u00b5(U \u2022 \u01eb (f1, f2)) \u00b7 E[y \u2212fc(x)|x \u2208U \u2022 \u01eb (f1, f2)]2 \u2265\u03b1\u01eb2 8 By Theorem 5.1, assuming that the contestation datasets of both models are of size: n \u2265\u2126 \uf8eb \uf8edlog K \u03b4 q log 1 \u03b4 \u03b13\u01eb6 \uf8f6 \uf8f8 then at least one of these contestations will succeed until (after at most a polynomial number of contestations in \u03b1 and \u01eb), the models are reconciled and \u00b5(U \u2022 \u01eb (f1, f2)) \u2264\u03b1. Solving for K, we \ufb01nd that a contestable model can be run through K = \u02dc \u0398 \uf8eb \uf8ed\u03b4 exp \uf8eb \uf8edn\u03b13\u01eb6 q log 1 \u03b4 \uf8f6 \uf8f8 \uf8f6 \uf8f8 many contestation procedures given a single contestation dataset of size n. 
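Algorithm 3 (Contestable-Reconcile) is just the Reconcile loop driven through the contestation interface. The self-contained sketch below uses a noiseless stand-in for f.contest(g) (accept iff the witnessed bias clears a threshold, which by Lemma 3.1 can be set to αε²/8 to guarantee progress); the paper's actual contest operation adds Laplace noise as in Algorithm 2.

```python
import numpy as np

def contest_once(f, y, g, threshold):
    """Noiseless stand-in for f.contest(g): accept iff the witnessed bias
    |mu(g) * E[y - f | g]| clears the threshold, patching f on g if so.
    Returns (possibly updated model, accepted?)."""
    if not g.any():
        return f, False
    eta = np.mean((y - f) * g)           # mu(g) * E[y - f(x) | g(x) = 1]
    if abs(eta) < threshold:
        return f, False
    return np.where(g, f + eta / g.mean(), f), True

def contestable_reconcile(f1, f2, y, alpha, eps, threshold):
    """Sketch of Algorithm 3: while the eps-disagreement mass is >= alpha,
    contest both models on both disagreement regions U> and U<."""
    while np.mean(np.abs(f1 - f2) > eps) >= alpha:
        accepted = False
        for g in (f1 > f2 + eps, f1 < f2 - eps):
            f1, a1 = contest_once(f1, y, g, threshold)
            f2, a2 = contest_once(f2, y, g, threshold)
            accepted = accepted or a1 or a2
        if not accepted:                  # no contestation succeeded; stop
            break
    return f1, f2
```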
Here the \u02dc \u0398 hides terms that are logarithmic in 1/\u03b1 and 1/\u01eb. We emphasize that a contestable model can be contested using K many sets of any nature \u2014 these can include the disagreement regions that arise from our Reconcile procedure, but can also include arbitary regions on which the current model is found to be mis-calibrated. Thus contestable models can be robustly and iteratively improved over an exponential number of contestations whenever they are falsi\ufb01ed by being shown to fail to satisfy group conditional mean consistency on any group. Modest amounts of data, attached to a model as a contestation dataset, can make the model long-lived in an easily adaptable and improvable form. 6 Conclusion Individual probability assignments are not determined by data; this lies at the heart of both the reference class problem and the predictive multiplicity problem. Insofar as individual probability assignments play a signi\ufb01cant role in consequential decision-making, their underdetermination by data may give rise to practical problems when we have two or more seemingly equally good estimation methods that nevertheless result in models that di\ufb00er substantially in the assignments they predict. We show that given modest amounts of data to resolve disagreements, such problems cannot arise at a substantial scale, because if two models disagree substantially in many places, then this large disagreement region itself points us to how to improve at least one of the models. The only way this process can conclude is with improved models that approximately agree almost everywhere. This does not \u201cresolve\u201d the reference class problem, the predictive multiplicity problem, or other puzzles about individual probability in that it does not claim a way to produce \u201ccorrect\u201d estimates of individual probabilities. 
But it does remove the practical bite of these problems in that it shows that two parties who agree on the data distribution and who have committed in good faith to make statistical estimates of individual probabilities cannot end up in a state where they substantially disagree on a large number of instances \u2014 and hence will rarely face any ambiguity in how they should act, given their statistical modeling. Acknowledgements We thank Philip Dawid, Konstantin Genin, Benjamin Jantzen, Leonard Smith, Kareem Khalifa, Omer Reingold, Alvin Roth, and Rakesh Vohra for helpful comments and making connections to the literature. A.R. was supported in part by NSF grants FAI-2147212 and CCF-2217062 and the Simons Collaboration on the Theory of Algorithmic Fairness. 19" + }, + { + "url": "http://arxiv.org/abs/1607.05397v3", + "title": "Multidimensional Dynamic Pricing for Welfare Maximization", + "abstract": "We study the problem of a seller dynamically pricing $d$ distinct types of\nindivisible goods, when faced with the online arrival of unit-demand buyers\ndrawn independently from an unknown distribution. The goods are not in limited\nsupply, but can only be produced at a limited rate and are costly to produce.\nThe seller observes only the bundle of goods purchased at each day, but nothing\nelse about the buyer's valuation function. Our main result is a dynamic pricing\nalgorithm for optimizing welfare (including the seller's cost of production)\nthat runs in time and a number of rounds that are polynomial in $d$ and the\napproximation parameter. We are able to do this despite the fact that (i) the\nprice-response function is not continuous, and even its fractional relaxation\nis a non-concave function of the prices, and (ii) the welfare is not observable\nto the seller.\n We derive this result as an application of a general technique for optimizing\nwelfare over \\emph{divisible} goods, which is of independent interest. 
When\nbuyers have strongly concave, H\\\"older continuous valuation functions over $d$\ndivisible goods, we give a general polynomial time dynamic pricing technique.\nWe are able to apply this technique to the setting of unit demand buyers\ndespite the fact that in that setting the goods are not divisible, and the\nnatural fractional relaxation of a unit demand valuation is not strongly\nconcave. In order to apply our general technique, we introduce a novel price\nrandomization procedure which has the effect of implicitly inducing buyers to\n\"regularize\" their valuations with a strongly concave function. Finally, we\nalso extend our results to a limited-supply setting in which the number of\ncopies of each good cannot be replenished.", + "authors": "Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu", + "published": "2016-07-19", + "updated": "2017-06-10", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.GT", + "cs.LG" + ], + "main_content": "Introduction Consider the problem of an online retailer who sells a large variety of goods. The seller can in principle produce or procure more copies of each good as needed, but only at a limited rate, and at some per-unit production /procurement cost that varies by good. In each round, the seller can dynamically set the price for each type of good. Each buyer has an unknown valuation function de\ufb01ned (in general) over bundles of goods, and quasi-linear utility for money. Each buyer chooses which item to buy to optimize his utility function given the prices. The seller observes the purchased bundle\u2014i.e. the revealed preferences of the buyer\u2014but not the buyer\u2019s valuation of the purchased bundle (or of any other bundle). The buyer\u2019s valuation function is drawn independently from a \ufb01xed but unknown distribution, called the buyer distribution. The seller\u2019s objective is to optimize social welfare: the expected buyer valuation of the purchased good minus its production cost. 
Social welfare, like pro\ufb01t, is a natural objective for the seller: in particular, sellers attempting to grow their market (rather than exploit an existing monopoly position) might prefer to optimize social welfare rather than pro\ufb01t in the short term. A tempting \ufb01rst attempt at solving this problem would be to simply set the price for each good to be equal to its cost of production, which would indeed maximize social welfare if there were no other constraints on the bundles of items purchased by buyers. However, this solution is unsatisfactory when additional supply can only be generated at a bounded rate, because the cost of production bears no relationship to the buyers\u2019 values for a good. Because of this, setting prices equal to costs can result, for example, in every buyer demanding the largest possible quantity of the same good, which the seller may not be able to accommodate. In a more realistic setting, there will be constraints on the rate of production and resupply for each good. Hence, we study the welfare maximization problem in which we impose the additional constraint that the expected bundle purchased (in expectation over the draw of the buyer) lies in a bounded set. Because constraints of this sort bind across buyers, setting prices equal to costs fails, and the problem requires a nontrivial solution. Since the buyer distribution is unknown, the seller cannot directly compute the prices that optimize social welfare. Instead, she faces a learning problem: she can try di\ufb00erent prices over time and observe the responses from random buyers drawn from the distribution, and try to learn the optimal price vector. More formally, the goal is to use a small number of rounds to learn a price vector that nearly optimizes expected social welfare. We want the algorithm\u2019s guarantees hold in the worst case over the choice of distributions over buyer valuation functions. 
Essentially, we are studying a welfare-optimization version of the well-known dynamic pricing problem, also known as learn-and-earn, with d > 1 goods for sale. (Prior work on dynamic pricing focused on pro\ufb01t maximization.) At a very high level, the main challenge presented is to learn the price response function\u2014i.e. the function mapping prices to expected bundles purchased\u2014and then optimize it with respect to welfare. Moreover, this is a high dimensional function (for large d), and so one must overcome the curse of dimensionality. Prior work on non-Bayesian dynamic pricing (e.g., Babaio\ufb00et al. (2015); Besbes and Zeevi (2009, 2012); Broder and Rusmevichientong (2012); den Boer and Zwart (2014); Keskin and Zeevi (2014); Wang et al. (2014)) dealt with this challenge by making strong assumptions on the price response function itself. Typical assumptions include Lipschitzness (Besbes and Zeevi, 2009, 2012; Wang et al., 2014) (which allows for discretization in low-dimensional problems), and particularly for high dimensional problems, linearity (den Boer and Zwart, 2014; Keskin and Zeevi, 2014) or concavity (Babaio\ufb00et al., 2015; 1 \fBesbes and Zeevi, 2009; Wang et al., 2014).1 However, assumptions of this sort are not well supported by a micro-economic foundation. In fact, natural assumptions on the buyer valuations do not necessarily result in price-response function with these properties. In this paper, we pursue a di\ufb00erent approach which stands on stronger microeconomic foundations: we make assumptions on the form of the valuation functions directly (and no assumptions on the distribution over valuation functions), and show that we can work with the price response function that results. This is the case despite the fact that our problem is high dimensional, and the price response function that results from our assumptions is not concave. 
We also face an additional challenge: unlike profit, welfare is not observable; we can observe the purchased bundle but not the buyer's valuation for that bundle. Nevertheless, we design algorithms that find a near-optimal price vector with respect to welfare, in a number of rounds that is polynomial in d and the accuracy parameter. (Whereas, for example, a naive solution based on discretization and Lipschitzness of the price-response function requires a number of rounds that is exponential in d.) Our results also extend to the limited-supply setting. 1.1 Our Contributions Our main result solves the problem in the setting of indivisible goods when buyers are unit-demand or, alternatively, when the seller only allows each customer to buy a single item. Surprisingly, no other assumptions are needed! Further, we give a general result for the setting of divisible goods under certain assumptions on the valuation functions. In fact, we show how this general result can be leveraged to yield the result for unit demands. Both settings work as follows. There are d goods. In each round, prices are set and one buyer arrives and purchases her most preferred bundle from the set of feasible bundles. The seller incurs production/procurement costs for each sale, which are linear in the sold bundle. We give a computationally efficient and round-efficient algorithm for finding a nearly welfare-maximizing price vector subject to a constraint on the expected consumption. Let SW(p) be the expected social welfare that results from setting prices p. The seller would like to set prices to ensure that the expected per-round purchase of each good j, denoted $x_j(p)$, is bounded above by some supply $s_j$. This models a realistic scenario in which the seller's inventory can be replenished, but only at a limited rate. For example, perhaps at most one truckload of goods can be stocked per day.
Approximating a restocking period constraint with a constraint on the expected per-customer purchase is reasonable if the restocking period corresponds to a large number of rounds, because then the realized consumption over these rounds concentrates around its expectation. In the following, we will write $x(p) = (x_1(p), \dots, x_d(p))$ to denote the bundle induced by prices p. Divisible goods. Departing from previous work, instead of making assumptions about the functional form of the price-response function, which depends on the buyers' valuations in aggregate, we make assumptions on the individual buyers' valuations themselves. Specifically, we assume the buyers' valuations are strongly concave and Hölder continuous. (These assumptions are satisfied by a large class of well-studied valuations, including CES and Cobb-Douglas, as shown in Roth et al. (2016).) [Footnote 1: Prior work that does not make assumptions on the shape of valuations or demand curves is either restricted to selling a single good (d = 1) (Babaioff et al., 2015; Kleinberg and Leighton, 2003), or suffers from the curse of dimensionality and comes with performance guarantees relative to the discretized prices rather than all prices (Badanidiyuru et al., 2013).] The sold bundles are constrained to lie in the bounded set $F \subset \mathbb{R}^d_+$ (e.g., $F = [0,1]^d$ means at most one unit of each good can be purchased). Theorem 1.1 (Divisible Goods). Assume divisible goods, and buyers with strongly concave and Hölder-continuous valuations. There is an algorithm that takes as input parameters $d, \alpha, \delta > 0$ and a supply vector $s \in \mathbb{R}^d_{>0}$, such that with probability at least $1 - \delta$, the algorithm outputs a price vector $p \in \mathbb{R}^d_+$ such that $x(p) \le s$ and
$$\mathrm{SW}(p) \;\ge\; \max_{p' \in \mathbb{R}^d_+ :\, x(p') \le s} \mathrm{SW}(p') - \alpha. \qquad (1)$$
The number of rounds and the total computation time are polynomial in $d$, $\frac{1}{\alpha}$, and $\log\frac{1}{\delta}$. Unit-demand buyers and indivisible goods.
We use our result for divisible goods to give a polynomial-time dynamic pricing algorithm for welfare maximization in the indivisible goods setting when buyers have unit-demand valuations. Here we consider distributions D over price vectors $p \in \mathbb{R}^d_+$, rather than fixed price vectors p. To extend our notation, let $x(D) = \mathbb{E}_{p \sim D}[x(p)]$ and $\mathrm{SW}(D) = \mathbb{E}_{p \sim D}[\mathrm{SW}(p)]$. We prove: Theorem 1.2 (Indivisible Goods). Assume indivisible goods, and buyers with unit-demand valuations. There is an algorithm that takes as input parameters $d, \alpha, \delta > 0$ and a supply vector $s \in \mathbb{R}^d_+$, such that with probability at least $1 - \delta$ the algorithm outputs a distribution D over price vectors $p \in \mathbb{R}^d_+$ such that $x(D) \le s$ and
$$\mathrm{SW}(D) \;\ge\; \max_{\text{distributions } D' :\, x(D') \le s} \mathrm{SW}(D') - \alpha. \qquad (2)$$
The number of rounds and the total computation time are polynomial in $d$, $\frac{1}{\alpha}$, and $\log\frac{1}{\delta}$. In the generality in which we state our theorem, using a distribution over prices rather than a fixed price vector is unavoidable. The reason has to do with how the buyers break ties when they are indifferent between goods. As shown in Hsu et al. (2016), without further genericity assumptions, it can be that no fixed pricing can induce optimal (or even feasible) allocations if buyers use uncoordinated tie-breaking rules. Instead, tie-breaking needs to be coordinated amongst the buyers: the tie-breaking rule needs to be different for different buyers, and essentially needs to be specified by the mechanism. The randomness in our pricing scheme serves as a coordination mechanism amongst buyers (since each buyer faces a different realization of prices). Remark.
In both settings, we prove more general theorems in which we express our results in terms of a stronger benchmark: the welfare of the optimal lottery over allocations, without restriction to those that can be induced by posted pricing.[2] [Footnote 2: A posted price vector (resp., distribution over them) computed by an algorithm can only hope to compete with the best posted price vector (resp., distribution). Thus, the mathematical statement behind (ii) is that the posted price benchmarks used in the theorems are in fact equivalent to the stronger benchmark.] The above theorems can be reformulated in terms of cumulative regret for a given time horizon T. Then the execution of the algorithm in the respective theorem corresponds to an exploration phase of bounded length. The price vector p computed by the algorithm is used in an exploitation phase consisting of all subsequent rounds. Theorem 1.1 guarantees that OWel completes in $\mathrm{poly}(d, \log\frac{1}{\delta}) \cdot \alpha^{-m}$ rounds, for some constant m. Expected regret relative to the best fixed price vector can be upper-bounded by 1 for every round of exploration, and by $\alpha + \delta$ per round of exploitation. Optimizing the choice of $\alpha$ and $\delta$, we obtain regret $\mathrm{poly}(d, \log T) \cdot T^{m/(m+1)}$. Theorem 1.2 implies a similar corollary for the unit-demand setting. Extension to limited supply. We extend our results to a limited-supply setting. In our model, there is a fixed horizon of T rounds and the seller has a non-replenishable supply of $T s_j$ units of each good j. T and $s \in [0,1]^d$ are known in advance. Each day, the seller will set prices and a random buyer will purchase their preferred bundle, until either the time horizon or the seller's supply is exhausted, whichever comes first. For a pricing policy $\pi$, we use $\mathrm{SW}_{\mathrm{tot}}(\pi)$ to denote its expected total welfare.[3] A "fixed-vector" pricing policy uses the same price vector p in all rounds.
Likewise, a "fixed-distribution" pricing policy always draws the price vector independently from the same fixed distribution D. The expected total welfare of these policies is denoted, resp., $\mathrm{SW}_{\mathrm{tot}}(p)$ and $\mathrm{SW}_{\mathrm{tot}}(D)$. In the setting of divisible goods, we simply use the algorithm from Theorem 1.1 with the same constraint vector s. The price vector p computed by this algorithm achieves high expected total welfare for a given problem instance: we prove that it is nearly optimal compared to the best fixed-vector pricing policy. Further, it is nearly optimal compared to any pricing policy. Likewise, in the setting of unit demands, we use the algorithm from Theorem 1.2 with the same constraint vector s. The distribution D computed by this algorithm is nearly optimal compared to the best fixed-distribution pricing policy with $x(D) \le s$. Theorem 1.3. Consider dynamic pricing with limited supply. Fix a constraint vector $s \in \mathbb{R}^d_+$ and a time horizon $T > 32\log(T)/s^*$, where $s^* = \min_j s_j$. (a) Consider the setting of divisible goods. When the algorithm from Theorem 1.1 is given as input $d, s, \alpha, \delta$, with probability $1-\delta$ it outputs a price vector $p \in \mathbb{R}^d_+$ such that
$$\mathrm{SW}_{\mathrm{tot}}(p) \;\ge\; \sup_{\text{pricing policies } \pi} \mathrm{SW}_{\mathrm{tot}}(\pi) - \alpha T - O\!\left(\sqrt{T \log(T)/s^*}\right).$$
(b) Assume indivisible goods and unit demands. When the algorithm from Theorem 1.2 is given as input $d, s, \alpha, \delta$, with probability $1-\delta$ it outputs a distribution D over price vectors such that
$$\mathrm{SW}_{\mathrm{tot}}(D) \;\ge\; \sup_{\text{distributions } D' \text{ with } x(D') \le s} \mathrm{SW}_{\mathrm{tot}}(D') - \alpha T - O\!\left(\sqrt{T \log(T)/s^*}\right).$$
The number of rounds and the total computation time are polynomial in $d$, $\frac{1}{\alpha}$, and $\log\frac{1}{\delta}$.
1.2 Our Techniques Our general results for divisible goods build on a crucial structural property: even though the expected welfare of the induced bundle $x(p)$ is not concave in the price vector p, it becomes concave if we treat the bundle itself as the decision variable. We illustrate this via a simple 1-dimensional example, adapted from Roth et al. (2016). [Footnote 3: Considering expected welfare per round is not enough, as one pricing policy may halt sooner than another.] Example 1.4. There is a single good (d = 1), and a single buyer with valuation $v(x) = \sqrt{x}$. The seller's cost function is $c(x) = x$. If price p is posted, the buyer's utility for x units is $\sqrt{x} - p \cdot x$, so she would purchase $x^*(p) = \frac{1}{4p^2}$ units of the good. Consequently, the welfare is
$$\mathrm{SW}(p) = v(x^*(p)) - c(x^*(p)) = \frac{1}{2p} - \frac{1}{4p^2}.$$
Note that welfare is not a concave function of the price. However, if we write the welfare as a function of $x = x^*(p)$, the purchased amount of the good, this function is concave:
$$\mathrm{SW}(x) = v(x) - c(x) = \sqrt{x} - x.$$
Thus, we would like to optimize expected welfare as a function of the induced bundle. However, we only control prices and not induced bundles. To address this, our algorithm has two "layers," where the outer layer optimizes over induced bundles, and the inner layer finds a price vector which approximately induces a given bundle. Another challenge is that welfare is not observed, since we do not observe buyer valuations. Instead, we find a way to approximate the subgradients of welfare, and use noise-tolerant subgradient descent to optimize over the bundles. We build on and extend the result of Roth et al. (2016) for the special case of a single buyer and unlimited supply (which focuses on profit rather than welfare). The main distinction is single buyer vs. distributions over buyers; in other words, Roth et al.
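A quick numerical check of Example 1.4 (the function names below are ours, not the paper's): SW(p) fails midpoint concavity on part of its domain, while SW(x) satisfies it everywhere we test.

```python
import numpy as np

def sw_of_price(p):
    # Welfare induced by price p in Example 1.4: the buyer purchases x*(p) = 1/(4 p^2)
    return 1.0 / (2.0 * p) - 1.0 / (4.0 * p * p)

def sw_of_bundle(x):
    # The same welfare, written as a function of the purchased amount x
    return np.sqrt(x) - x

def midpoint_gap(f, a, b):
    # Midpoint concavity test: f is concave iff f((a+b)/2) >= (f(a)+f(b))/2 for all a, b
    return f((a + b) / 2.0) - 0.5 * (f(a) + f(b))

# SW(p) violates concavity for large prices (its second derivative is positive for p > 3/2)...
assert midpoint_gap(sw_of_price, 2.0, 4.0) < 0
# ...while SW(x) = sqrt(x) - x passes the midpoint test everywhere on (0, 1].
grid = np.linspace(0.01, 1.0, 50)
assert all(midpoint_gap(sw_of_bundle, a, b) >= -1e-9 for a in grid for b in grid)
```

The two parameterizations agree, of course: at p = 1 the buyer purchases x = 1/4, and sw_of_price(1.0) equals sw_of_bundle(0.25).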
(2016) assume that for a given price vector the outcome is deterministic, whereas in our paper it is drawn from a fixed but unknown distribution over the possible outcomes. The "inner layer" of our algorithm extends the algorithm in Roth et al. (2016) from a single buyer to distributions over buyers. This extension presents several technical challenges, and answers one of their main open questions. In particular, we analyze a generalization of the convex programming technique used in Roth et al. (2016) to accommodate a distribution over (arbitrarily many) buyers. We cannot use the "outer layer" from Roth et al. (2016) because it requires direct observations of the objective function to feed into a procedure for zeroth-order optimization, and our seller cannot directly observe the buyers' welfare (unlike profit, which is observable). Instead, we develop a new technique to obtain the subgradient of the welfare function so as to enable first-order optimization. Also, we remove a major assumption of homogeneous buyer valuations. As stated, our general result does not apply to unit-demand buyers over indivisible goods. In order to cast this problem as a divisible goods problem, we view buyers as having linear valuations over divisible goods, optimizing over the set of bundles that have at most unit $\ell_1$ norm. The bundle that maximizes a linear function is always at a vertex of the feasible region, and hence is integral. That is, it is the bundle purchased by a unit-demand buyer in the indivisible goods setting. However, there is a substantial difficulty: our general technique relies on buyer valuation functions being strongly concave, a condition not satisfied by linear functions. A standard way to obtain strong convexity in the convex optimization literature is to add a strongly convex regularizer to the objective function. However, we do not get to modify the buyer's objective function in this way.
Instead, we perturb the price vectors proposed by our general dynamic pricing algorithm with Gumbel noise. Doing so has the property that the expected bundle purchased by each buyer (where the expectation is taken over the price perturbation) is the bundle that maximizes the buyer's linear valuation function plus an entropy regularization term (Warmuth, 2009). Thus, in expectation over our perturbations, we can view buyers as optimizing valuation functions which are strongly concave over the unit $\ell_1$ ball, even though for every fixed perturbation, buyers are maximizing some linear function and thus buy a unit bundle of indivisible goods. By perturbing the price vectors used over the run of our algorithm for divisible goods, therefore, we can optimize welfare over these "regularized" buyers. By reducing the noise rate (and hence the implicit regularization parameter), we approach the optimal welfare of the actual, unit-demand buyers. The extension to limited supply (Theorem 1.3(a)) relies on a structural result about bandits with knapsacks (Badanidiyuru et al., 2013), a general framework of which dynamic pricing with limited supply is a special case. We use a non-standard "embedding" of dynamic pricing into this framework, and a concentration inequality for total welfare that requires a somewhat delicate proof. 1.3 Related Work Our setting is related to several lines of work. First, dynamic pricing, a.k.a. learn-and-earn, focuses on a seller with a large inventory of each good, facing a stream of buyers with unknown valuations. This is a large line of work, mainly in operations research; see Boer (2015) for a review. Most related are non-Bayesian approaches. As mentioned above, the main distinction is that we make assumptions on the buyer valuations rather than on the price response function. Also, the learn-and-earn literature does not consider welfare optimization, to the best of our knowledge.
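The Gumbel-perturbation idea above can be checked numerically. For a single unit-demand buyer (ignoring the buy-nothing option for brevity), subtracting scaled Gumbel noise from each price makes the purchase probabilities an entropy-regularized (softmax) response; the values, prices, and noise scale below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, 0.8, 0.5])   # buyer's values for 3 goods (illustrative)
p = np.array([0.4, 0.1, 0.2])   # posted prices (illustrative)
eta = 0.3                        # noise scale = implicit regularization strength

u = v - p
# Gumbel-max property: argmax_j (u_j + eta * G_j), with G_j i.i.d. standard
# Gumbel, selects good j with probability softmax(u / eta)_j. Perturbing the
# prices by -eta * G_j has exactly this effect on the buyer's utilities.
n = 200_000
g = rng.gumbel(size=(n, 3))
choices = np.argmax(u + eta * g, axis=1)
freq = np.bincount(choices, minlength=3) / n

softmax = np.exp(u / eta) / np.exp(u / eta).sum()
assert np.allclose(freq, softmax, atol=0.01)
```

The expected purchased bundle (the average of the one-hot choices) is therefore the gradient of an entropy-regularized maximization, which is the strongly concave surrogate valuation the text describes; sending eta to 0 recovers the unperturbed unit-demand choice.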
Second, our problem can be viewed as an instance of the multi-armed bandits problem (Bubeck and Cesa-Bianchi, 2012; Gittins, 1989), a well-studied abstract framework in which an algorithm repeatedly chooses actions (e.g., price vectors) and receives rewards (e.g., revenue from a sale). The main issue is the tension between acquisition and usage of information, a.k.a. the exploration-exploitation tradeoff. Bandit algorithms are directly applicable to dynamic pricing either via discretization (Babaioff et al., 2015; Badanidiyuru et al., 2013; Kleinberg and Leighton, 2003) or via assumptions on expected revenue.[4] The main distinction is (again) that solutions to bandit problems tend to make assumptions directly on the rewards, in part because they do not model the finer structure behind the rewards (such as valuation functions). Third, there are several papers on welfare-optimizing posted pricing in combinatorial auctions (Balcan et al., 2008; Blum et al., 2011; Chakraborty et al., 2013; Feldman et al., 2015). These papers tackle more difficult scenarios with non-divisible goods and non-IID valuations, and accordingly obtain weaker, multiplicative guarantees. Also, the pricing is either static (Balcan et al., 2008; Feldman et al., 2015) (not changing over time), or changing over time but not adapting to the observed purchases (Blum et al., 2011; Chakraborty et al., 2013). This research is mainly motivated by connections to mechanism design for combinatorial auctions. Fourth, there is a large literature on revealed preferences, starting from Samuelson (Samuelson, 1938); see Mas-Colell et al. (1995); Rubinstein (2012); Varian (2006) for background. Most work in economics has focused on the construction of utility functions that explain or rationalize a given sequence of price/bundle observations, e.g., Afriat (1967).
A recent literature studies the problem of predicting purchase decisions given past observations at different price points (Balcan et al., 2014; Beigman and Vohra, 2006; Zadimoghaddam and Roth, 2012). More related to our paper is Amin et al. (2015), who study the problem of iteratively setting prices to maximize the profit obtained from a single budgeted buyer (who repeatedly makes purchase decisions) with a linear utility function. [Footnote 4: E.g., if expected revenue is concave in prices, one can apply bandit algorithms for concave rewards (Agarwal et al., 2013; Bubeck et al., 2015; Flaxman et al., 2005; Hazan and Levy, 2014).] The most related paper in this line is Roth et al. (2016), as discussed in the previous subsection. Ours is the first paper in this line of work able to handle indivisible goods. A related, but distinct, literature focuses on learning valuation functions from example evaluations of those functions (Balcan et al., 2012), rather than from example maximizations of those functions (as in the revealed preference literature). 1.4 Map of the Paper Section 3 contains our general result for divisible goods (a generalization of Theorem 1.1). Our application to unit-demand valuations over indivisible goods, and the perturbation techniques that go into deriving this result, are presented in Section 4. The limited-supply setting is treated in Section 5. We conclude in Section 6. To improve the flow of the paper, some details are deferred to the appendix. 2 Model and Preliminaries There is a seller selling d different types of goods to a sequence of buyers arriving one after another in rounds. Each buyer's valuation v is drawn independently from an unknown distribution $\psi$ over a finite class V of valuation functions over the goods, where |V| = n.[5][6] Both $\psi$ and V are unknown to the seller.
Throughout, we will use i to index the buyer types in V, and write $v_i$ for the valuation function of a buyer of type i, and $\psi(v_i)$ for the probability mass on buyers of type i. At each round t, the seller posts a price vector $p = p_t \in \mathbb{R}^d_+$, and the t-th buyer, with valuation $v \sim \psi$, makes a purchase to maximize his utility under these prices. In particular, we consider two different settings: one with divisible goods and the other with indivisible goods. Divisible goods. Each valuation $v : \mathbb{R}^d_+ \to \mathbb{R}_+$ is a function from (fractional) bundles of goods to values. Under prices p, the buyer with valuation v will purchase the utility-maximizing bundle
$$x^*_v(p) \equiv \arg\max_{x \in F} \left[ v(x) - \langle x, p \rangle \right],$$
where $F \subset \mathbb{R}^d_+$ denotes the set of feasible bundles available for purchase. Indivisible goods. We consider unit-demand buyers, who will either purchase exactly 1 unit of some good or nothing. Each buyer's valuation is defined by a value vector $v \in \mathbb{R}^d_{>0}$ such that $v_j$ denotes her value for 1 unit of the j-th good. At round t, a buyer with valuation v purchases
$$x^*_v(p) \equiv \arg\max_{j \in [d] \cup \{\perp\}} \left[ v_j - p_j \right],$$
where $\perp$ denotes the choice of buying nothing (we define $v_\perp = p_\perp = 0$), and we allow arbitrary tie-breaking rules. [Footnote 5: We take V to be finite only for convenience. Our results do not depend on n = |V|, so it can be arbitrarily large.] [Footnote 6: Throughout, $\mathbb{R}_+ = \{x \in \mathbb{R} \mid x \ge 0\}$ and $\mathbb{R}_{>0} = \{x \in \mathbb{R} \mid x > 0\}$ denote the non-negative reals and the positive reals, resp.] The seller has a (known) cost vector $c \in \mathbb{R}^d_+$ such that the cost of producing a unit of good j is $c_j$. The seller wishes to set prices so as to optimize the expected social welfare: the expected valuation of the buyer's purchased bundle or item, minus its production cost.
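As a concrete illustration of the unit-demand purchase rule, here is a minimal simulator of a random buyer's decision (the value matrix, type probabilities, and function name are invented for the example; the outside option $\perp$ is modeled as an all-zeros bundle):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two buyer types over d = 2 goods, with type probabilities psi (illustrative numbers).
V = np.array([[0.9, 0.2],
              [0.3, 0.7]])
psi = np.array([0.6, 0.4])

def simulate_purchase(p):
    """Draw a type i ~ psi and return the purchased bundle: one unit of
    argmax_j (v_ij - p_j), or the empty bundle if every good gives the buyer
    negative utility (the outside option has utility 0)."""
    i = rng.choice(len(psi), p=psi)
    utils = V[i] - p
    j = int(np.argmax(utils))
    bundle = np.zeros(len(p))
    if utils[j] > 0:
        bundle[j] = 1.0
    return bundle

# At prices (0.5, 0.5): type 0 buys good 0 (utility 0.4), type 1 buys good 1 (0.2),
# so the expected bundle is (0.6, 0.4).
x_bar = np.mean([simulate_purchase(np.array([0.5, 0.5])) for _ in range(20_000)], axis=0)
assert abs(x_bar[0] - 0.6) < 0.02 and abs(x_bar[1] - 0.4) < 0.02
```

Averaging many such draws estimates the expected per-round purchase $x(p)$, which is exactly the quantity the supply constraint $x(p) \le s$ bounds.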
In particular, if the seller posts a price vector p over the goods, the expected social welfare is
$$\mathrm{SW}(p) = \mathbb{E}_{v \sim \psi}\left[ v(x^*_v(p)) - c(x^*_v(p)) \right], \qquad (3)$$
where we write $c(x^*_v(p))$ to denote the production cost of the purchase $x^*_v(p)$. For any distribution D over prices, the expected welfare is defined as $\mathrm{SW}(D) = \mathbb{E}_{p \sim D}[\mathrm{SW}(p)]$. Computational model. We will think of the algorithm as having access to a revealed preference oracle ReP($\psi$): given any input price vector $p \in \mathbb{R}^d_+$, it will draw a random valuation v from $\psi$, and return the purchase decision $x^*_v(p)$. Our goal is to design computationally efficient algorithms to compute optimal prices using only polynomially many queries to ReP. Notably, the expected or realized social welfare is not observable to the algorithm, since it cannot observe $v(x^*_v(p))$. 2.1 Noisy Subgradient Descent A key ingredient in our algorithms is the ability to minimize a convex function (or maximize a concave function), given access only to noisy subgradients of the function. We accomplish this using the gradient descent algorithm. Below we recap some necessary background. Let $C \subseteq \mathbb{R}^d$ be a compact and convex set of diameter at most D (w.r.t. the $\ell_2$ norm). A subgradient of a function $f : C \to \mathbb{R}$ at a point $x \in C$ is any vector $g \in \mathbb{R}^d$ that satisfies the inequality $f(y) \ge f(x) + \langle g, y - x \rangle$ for every point $y \in C$. The set of all subgradients at x is denoted $\partial f(x)$. If f is differentiable, the only subgradient is the gradient $\nabla f(x)$. The basic subgradient descent method is an iterative algorithm that starts at some point $x_1 \in C$ and iterates the updates
$$y_{t+1} = x_t - \eta\, g_t, \qquad x_{t+1} = \Pi_C(y_{t+1}),$$
where $\eta$ is the learning rate, $g_t \in \partial f(x_t)$ is a subgradient of f at $x_t$, and $\Pi_C(x) = \arg\min_{y \in C} \|x - y\|$ denotes the projection operator onto C.
Now, we will assume that $g_t$ and/or $x_t$ are subject to noise. We will use two variants of the algorithm, which operate under two different models of noise. In the first model, the algorithm only has access to unbiased estimates of the subgradient. Theorem 2.1 (Zinkevich (2003)). Suppose that f is convex and that, for some constants D, G, the estimates of the subgradients satisfy $\mathbb{E}[g_t] \in \partial f(x_t)$ and $\|g_t\| \le G$ for all steps t, and the diameter of the set satisfies $\|C\| \le D$. Then if we run the subgradient descent method with step size $\eta = D/(G\sqrt{T})$, for any T and any initial point $x_1 \in C$, the point $z = \frac{1}{T}\sum_{t=1}^T x_t$ satisfies
$$\mathbb{E}[f(z)] \le \min_{x \in C} f(x) + 2DG/\sqrt{T}.$$
In the second model, the algorithm has access to the noiseless subgradients, but the points $x_t$ are adversarially perturbed after the projection. Theorem 2.2. Suppose that f is convex, and fix constants D, E, and G. Suppose that the gradient descent algorithm performs the following update in each iteration:
$$y_{t+1} = x_t - \eta\, g_t, \qquad x_{t+1} = \Pi_C(y_{t+1}) + \xi_t,$$
where $g_t \in \partial f(x_t)$ and $\xi_t \in \mathbb{R}^d$ is a noise vector. Suppose that $\|g_t\| \le G$, $\|\xi_t\| \le E$, and $x_t \in C$ for all steps t, and the diameter of the set satisfies $\|C\| \le D$. Then if we run the subgradient descent method with step size $\eta = D/(G\sqrt{T})$, for any T and any initial point $x_1 \in C$, the point $z = \frac{1}{T}\sum_{t=1}^T x_t$ satisfies
$$f(z) \le \min_{x \in C} f(x) + DG/\sqrt{T} + GE\sqrt{T}.$$
The proof is similar to the standard analysis of gradient descent (e.g., see Theorem 3.1 in Bubeck (2015)). For the sake of completeness, we provide a self-contained proof of this result in Appendix B.
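A minimal sketch of the first noise model (Theorem 2.1): projected subgradient descent with unbiased noisy gradients and iterate averaging. The toy objective, the box C, the noise level, and the bounds D, G below are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimize the convex f(x) = ||x - c||_2^2 over the box C = [0, 1]^2,
# given only unbiased noisy subgradient estimates.
c = np.array([0.3, 0.8])
D = np.sqrt(2.0)    # diameter of C
G = 4.0             # rough bound on the noisy gradient norm
T = 20_000

x = np.zeros(2)
avg = np.zeros(2)
eta = D / (G * np.sqrt(T))          # the step size from Theorem 2.1
for t in range(T):
    g = 2 * (x - c) + rng.normal(scale=0.5, size=2)  # E[g] = grad f(x)
    x = np.clip(x - eta * g, 0.0, 1.0)               # Euclidean projection onto C
    avg += x
avg /= T

# The averaged iterate is close to the minimizer c, as the O(DG/sqrt(T)) bound predicts.
assert np.linalg.norm(avg - c) < 0.1
```

Note that it is the average of the iterates, not the last iterate, that carries the guarantee; this is the form in which the bound is stated above.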
3 A General Algorithm in the Divisible Goods Setting This section is dedicated to the divisible goods setting: we give a computationally efficient algorithm for finding a price vector that approximately optimizes social welfare subject to the constraint that the expected per-round demand of each good j is no more than $s_j$. Specifically, let $x^*_\psi(p)$ denote the expected bundle purchased by a random buyer under the prices p (the bundle induced by p), that is,
$$x^*_\psi(p) = \mathbb{E}_{v \sim \psi}\left[ x^*_v(p) \right].$$
Given access to the revealed preference oracle ReP, the algorithm finds an approximately optimal price vector p using polynomially many queries to ReP and guarantees that $x^*_\psi(p) \le s$. Our algorithm consists of two layers, and we present it in three main steps. 1. First, we analyze a pertinent convex program and derive several structural results. In particular, we show that the expected social welfare can be expressed as a concave function of the induced bundle. 2. Next, we present the inner layer of the algorithm: given any target bundle $\hat{x}$, we can iteratively find price vectors $p_t$ such that the induced bundle $x^*_\psi(p_t)$ converges to $\hat{x}$ over time. 3. Finally, we show how to derive subgradients of the expected social welfare function from the available information. The outer layer of the algorithm will then use (noisy) subgradient descent to optimize the welfare function over the bundle space. We make the following assumptions on the feasible set F and each valuation function $v \in V$. Assumption 3.1 (Feasible set). We assume that F is convex, closed, has a non-empty interior,[7] and has bounded norm: $\|F\|_2 \le R$ for some parameter R.[8] A canonical example is $F = [0,1]^d$: each buyer can simultaneously buy up to one unit of each good. Assumption 3.2 (Valuations). Each valuation function v in V satisfies: 1. v is monotonically increasing in each coordinate. (This can be relaxed to Assumption 3.4.) 2.
v is $(\lambda, \beta)$-Hölder continuous with respect to the $\ell_1$ norm over F, for some $\lambda \ge 1$ and some absolute constant $\beta \in (0,1]$. Namely, $|v(x) - v(x')| \le \lambda \cdot \|x - x'\|_1^\beta$ for all $x, x' \in F$. [Footnote 7: See Definition A.1 for a formal definition.] [Footnote 8: For a set $C \subset \mathbb{R}^d$ and a norm $\|\cdot\|$, we write $\|C\| = \sup_{x \in C} \|x\|$. When the norm is unspecified, it is assumed to be $\ell_2$.] 3. v is $\sigma$-strongly concave over F: for all $x, x' \in F$,
$$v(x') \le v(x) + \langle \nabla v(x), x' - x \rangle - (\sigma/2) \cdot \|x - x'\|_2^2.$$
These assumptions on the valuations are satisfied by a large class of well-studied valuation functions, including Constant Elasticity of Substitution (CES) and Cobb-Douglas (see Roth et al. (2016) for a proof). We crucially rely on a useful property of strongly concave functions: any point in the domain that is close to the optimum in objective value is also close to the optimum in Euclidean distance (see Lemma A.2). 3.1 A Stochastic Convex Program Let us say that a bundle $\hat{x} \in \mathbb{R}^d_+$ is inducible if there exists a price vector $p \in \mathbb{R}^d_+$ such that $x^*_\psi(p) = \hat{x}$. Note that each inducible bundle $\hat{x}$ is a convex combination of n bundles (purchased by all the buyer types) in F, so it must lie in the set F. A centerpiece of our analysis is the following welfare maximization convex program, which characterizes the relation between the posted prices and inducible bundles. Definition 3.3. For any bundle $\hat{x} \in F$, let the convex program SCP($\hat{x}$) be the following:
$$\max_{x \in F^n} \; \sum_{v_i \in V} \psi(v_i)\, v_i(x_i) \qquad (4)$$
$$\text{such that} \quad \sum_{v_i \in V} \psi(v_i)\, x_{ij} \le \hat{x}_j \quad \text{for every } j \in [d], \qquad (5)$$
$$x_i \in F \quad \text{for every } v_i \in V. \qquad (6)$$
Let VAL($\hat{x}$) be the optimal value of the convex program SCP($\hat{x}$).
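For intuition, SCP($\hat{x}$) can be solved directly on a toy instance with an off-the-shelf solver. The separable valuations $v_i(x) = \sum_j a_{ij}\sqrt{x_j}$ and all numbers below are our own illustrative choices, and we assume SciPy is available; since these valuations are increasing, the supply constraints bind at the optimum.

```python
import numpy as np
from scipy.optimize import minimize

# A tiny instance of SCP(x_hat): two buyer types, two goods, F = [0, 1]^2,
# concave increasing valuations v_i(x) = sum_j a_ij * sqrt(x_j).
psi = np.array([0.5, 0.5])
A = np.array([[1.0, 0.4],
              [0.3, 1.2]])
x_hat = np.array([0.5, 0.5])

def neg_welfare(z):
    x = np.clip(z.reshape(2, 2), 0.0, None)   # x[i] = bundle allocated to type i
    return -sum(psi[i] * A[i] @ np.sqrt(x[i]) for i in range(2))

# Constraint (5): the expected allocation of each good j is at most x_hat_j.
cons = [{"type": "ineq",
         "fun": lambda z, j=j: x_hat[j] - psi @ z.reshape(2, 2)[:, j]}
        for j in range(2)]
res = minimize(neg_welfare, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")
x_opt = res.x.reshape(2, 2)

assert res.success
# Increasing valuations => the optimum saturates the supply constraints.
assert np.allclose(psi @ x_opt, x_hat, atol=5e-3)
```

Per good, the solver splits supply in proportion to $a_{ij}^2$ (the closed form for these square-root valuations), so the numerical optimum can be cross-checked against $\mathrm{VAL}(\hat{x}) = \frac{1}{2}(\sqrt{1.09} + \sqrt{1.6})$ for this instance.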
We also say that SCP($\hat{x}$) is supply-saturating if its optimal solution $x^\bullet$ saturates all of the supply constraints defined by eq. (5), that is, $\sum_i \psi(v_i)\, x^\bullet_{ij} = \hat{x}_j$ for all j. To interpret the above as a stochastic welfare maximization program, consider a market in which there are d types of goods and each good j has supply $\hat{x}_j$. For each valuation function $v_i \in V$, we introduce a buyer i with this valuation, who shows up to the market with probability $\psi(v_i)$. We use a vector $x_i = (x_{i1}, \dots, x_{id}) \in F$ to represent the bundle of goods allocated to buyer i if he shows up. Then the program is precisely computing an allocation over all buyers to maximize the expected welfare subject to the constraint that the expected demand is no more than the supply given by $\hat{x}$.[9] [Footnote 9: A similar construction of such stochastic convex programs also appeared in Devanur et al. (2012).] Assumption 3.4 (Relaxing monotonicity in valuations). In fact, the assumption that each valuation in the class V is increasing can be relaxed. Our algorithm works as long as the class V and the feasible set F guarantee that SCP($\hat{x}$) is supply-saturating for any $\hat{x} \in F$. For the sake of generality, our analysis will rely on the supply-saturation condition instead of the monotonicity of the valuations. This will be useful for applying the algorithm to the indivisible goods setting. If the valuations in the class V are increasing functions, the optimal solution of SCP($\hat{x}$) will saturate all of the supply constraints in eq. (5). Claim 3.5. Suppose that each valuation $v \in V$ is monotonically increasing in each coordinate. Then for any $\hat{x} \in F$, the convex program SCP($\hat{x}$) is supply-saturating. For each of the supply constraints in eq.
(5), we can introduce a dual (price) variable $p_j$ and write down the following partial Lagrangian:
$$L_{\hat{x}}(x, p) = \sum_{v_i \in V} \psi(v_i)\, v_i(x_i) - \sum_{j=1}^d p_j \left( \sum_{v_i \in V} \psi(v_i)\, x_{ij} - \hat{x}_j \right) \qquad (7)$$
We can also consider the Lagrange dual function of the convex program, $g_{\hat{x}} : \mathbb{R}^d \to \mathbb{R}$:
$$g_{\hat{x}}(p) = \max_{x \in F^n} L_{\hat{x}}(x, p). \qquad (8)$$
We will mostly focus on the case where $\hat{x} \in (F \cap \mathbb{R}^d_{>0})$, which we can show is a sufficient condition for inducibility.[10] Lemma 3.6. Let $\hat{x} \in (F \cap \mathbb{R}^d_{>0})$ be a bundle; then $\hat{x}$ is inducible. Proof. Consider the convex program SCP($\hat{x}$). Since the convex program satisfies Slater's condition, strong duality gives
$$\max_{x \in F^n} \min_{p \in \mathbb{R}^d_+} L_{\hat{x}}(x, p) = \min_{p \in \mathbb{R}^d_+} \max_{x \in F^n} L_{\hat{x}}(x, p) = \mathrm{VAL}(\hat{x}).$$
Furthermore, since SCP($\hat{x}$) is supply-saturating, the optimal solution satisfies $\mathbb{E}_{v_i \sim \psi}[x^\bullet_i] = \hat{x}$. Let $p^\bullet$ be the optimal dual solution. It follows that
$$x^\bullet = \arg\max_{x \in F^n} L_{\hat{x}}(x, p^\bullet) = \arg\max_{x \in F^n} \sum_i \psi(v_i)\, v_i(x_i) - \sum_{j=1}^d p^\bullet_j \left( \sum_{v_i \in V} \psi(v_i)\, x_{ij} - \hat{x}_j \right) = \arg\max_{x \in F^n} \sum_i \psi(v_i)\left( v_i(x_i) - \langle p^\bullet, x_i \rangle + \langle p^\bullet, \hat{x} \rangle \right).$$
Note that the expression inside the argmax is linearly separable across i. Therefore,
$$x^\bullet_i = \arg\max_{x_i \in F}\left[ v_i(x_i) - \langle p^\bullet, x_i \rangle + \langle p^\bullet, \hat{x} \rangle \right] = \arg\max_{x_i \in F}\left[ v_i(x_i) - \langle p^\bullet, x_i \rangle \right]. \qquad (9)$$
It follows that $x^\bullet_i = x^*_{v_i}(p^\bullet)$ for each i, and hence the price vector $p^\bullet$ induces the bundle $\hat{x}$.
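The proof above also suggests how inducing prices can be found in practice: by Danskin's theorem, the dual function $g_{\hat{x}}$ has gradient $\hat{x} - x^*_\psi(p)$ at p, so descending on the price drives the induced purchase toward the target. A sketch in the one-good setting of Example 1.4 (the step size, iteration count, and price floor are our own illustrative choices):

```python
# One buyer with v(x) = sqrt(x) purchases x*(p) = 1/(4 p^2) at price p.
# Gradient descent on the dual g_xhat, whose gradient at p is xhat - x*(p).
x_hat = 0.25
p = 2.0
for _ in range(5_000):
    x_induced = 1.0 / (4.0 * p * p)
    p = max(p - 0.01 * (x_hat - x_induced), 1e-3)   # descent step; keep p > 0

# The exact inducing price solves 1/(4 p^2) = x_hat, i.e. p = 1/(2 sqrt(x_hat)) = 1.
assert abs(1.0 / (4.0 * p * p) - x_hat) < 1e-3
assert abs(p - 1.0) < 1e-2
```

With a distribution over buyers, $x^*_\psi(p)$ is not observed exactly; averaging oracle responses gives the unbiased gradient estimates that Theorem 2.1 tolerates, which is how the paper's inner layer proceeds.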
Next, we show that the prices that induce the bundle $\hat{x}$ are an optimal solution of the Lagrangian dual, and that the bundles purchased by each buyer in response to these prices form the unique primal optimal solution.

Lemma 3.7. Let $\hat{x} \in \big(F \cap \mathbb{R}^d_{>0}\big)$ be a bundle, and let $\hat{p} \in \mathbb{R}^d_+$ be a price vector such that $x^*_\psi(\hat{p}) = \hat{x}$. Then
- the price vector $\hat{p}$ is an optimal dual solution for $\mathrm{SCP}(\hat{x})$, and
- the vector $x^\bullet \in F^n$ such that $x^\bullet_i = x^*_{v_i}(\hat{p})$ for each $i$ is the unique optimal primal solution.

(Footnote 10: The restriction that the bundle be positive in each coordinate is necessary: a bundle with zero in some coordinate may not be inducible. Consider the same simple setting as in Example 1.4, where $d = 1$ and there is a single buyer with valuation $v(x) = \sqrt{x}$. Because the marginal valuation at $0$ is infinity, there is no bounded price that induces the buyer to purchase $0$ units of the good.)

A very nice consequence of Lemma 3.7 is that whenever the induced bundle $\hat{x}$ is fixed, the realized bundles purchased by buyers of each type are also fixed. This allows us to express the expected social welfare as a function only of the induced bundle. In particular, the expected valuation for inducing $\hat{x}$ in expectation is exactly $\mathrm{VAL}(\hat{x})$. This suggests a different way to express the welfare: as a function of the induced bundle (as opposed to a function of the price vector, as defined in eq. (3)). For each $\hat{x} \in F$, we define
$$\mathrm{SW}(\hat{x}) = \mathrm{VAL}(\hat{x}) - \langle c, \hat{x} \rangle. \qquad (10)$$
We can show that the expected social welfare for inducing $\hat{x}$ in expectation is exactly $\mathrm{SW}(\hat{x})$ (see Claim C.1). More importantly, by rewriting the welfare as a function of the bundle, we obtain a concave objective function. This is crucial for obtaining an efficient algorithm later.

Lemma 3.8.
The expected social welfare function $\mathrm{SW} \colon F \to \mathbb{R}$ as defined in eq. (10) is concave.

With all of the structural results above, we are ready to give our two-layered algorithm for finding the welfare-maximizing prices.

3.2 Inner Layer: Converting Target Bundles to Prices

Even though we can express the expected welfare as a concave function of the induced bundle, we still cannot directly optimize this function, because the seller only controls the prices of the goods, not the expected induced bundle itself. To optimize over the bundle space, we give an algorithm that finds a price vector that approximately induces any target expected bundle $\hat{x}$. Specifically, suppose that the seller has some target bundle $\hat{x}$ in mind; we can learn a price vector $\hat{p}$ such that the expected induced bundle is close to the target bundle: $\|\hat{x} - x^*_\psi(\hat{p})\| \le \varepsilon$.

In Lemma 3.7, we showed that the prices that exactly induce the target bundle $\hat{x}$ are the optimal dual solution for the convex program $\mathrm{SCP}(\hat{x})$, i.e., the price vector $p$ that minimizes the Lagrangian dual function $g_{\hat{x}}$. We will show that if we can find an approximate minimizer of $g_{\hat{x}}$, then we can approximately induce the target expected bundle $\hat{x}$. In particular, we will apply the noisy gradient descent method (Theorem 2.1) to minimize the function $g_{\hat{x}}$, and for the sake of convergence of the algorithm, we will restrict the search space for the price vector to
$$P(\varepsilon) = \left\{ p \in \mathbb{R}^d_+ \;\middle|\; \|p\|_2 \le \sqrt{d}\,\lambda^{1/\beta} \left( \frac{4d}{\varepsilon^2 \sigma} \right)^{(1-\beta)/\beta} \right\} \qquad (11)$$
where $\varepsilon$ is the target accuracy parameter. First, we show that the minimax value of the Lagrangian remains close to $\mathrm{VAL}(\hat{x})$ even when we restrict the dual variables/prices to lie in $P(\varepsilon)$.

Lemma 3.9.
Let $\hat{x} \in \big(F \cap \mathbb{R}^d_{>0}\big)$. There exists a value $\mathrm{R\text{-}OPT}$ such that
$$\max_{x \in F^n} \min_{p \in P(\varepsilon)} L_{\hat{x}}(x,p) = \min_{p \in P(\varepsilon)} \max_{x \in F^n} L_{\hat{x}}(x,p) = \mathrm{R\text{-}OPT}.$$
Moreover, $\mathrm{VAL}(\hat{x}) \le \mathrm{R\text{-}OPT} \le \mathrm{VAL}(\hat{x}) + \frac{\varepsilon^2 \sigma}{4}$.

The next result translates the approximation error in minimizing the function $g_{\hat{x}}$ into the error in inducing the target bundle $\hat{x}$, by making use of the strong concavity of the valuations in $V$.

Lemma 3.10. Let $\hat{x} \in \big(F \cap \mathbb{R}^d_{>0}\big)$ and let $p'$ be a price vector in $P(\varepsilon)$ such that $g_{\hat{x}}(p') \le \min_{p \in P(\varepsilon)} g_{\hat{x}}(p) + \alpha$ for some $\alpha > 0$. Let $x' = x^*_\psi(p')$ be the expected bundle induced by prices $p'$. Then $\|x' - \hat{x}\|_2 \le 2\sqrt{\alpha/\sigma}$.

Therefore, in order to (approximately) induce a target bundle in expectation, we just need to compute an (approximate) minimizer of the Lagrangian dual function $g_{\hat{x}}$. We first show that we can compute an unbiased estimate of the gradient of $g_{\hat{x}}$ by using the observed bundle purchased by a random buyer.

Lemma 3.11. Let $p \in \mathbb{R}^d_+$ be any price vector, and let $x^*_v(p)$ be the bundle purchased by a buyer with valuation function $v$ under prices $p$. Then
$$\mathbb{E}_{v \sim \psi}\big[\hat{x} - x^*_v(p)\big] = \hat{x} - x^*_\psi(p) = \nabla g_{\hat{x}}(p).$$
Lemma 3.11 shows that we can obtain unbiased estimates of the gradients of the function $g_{\hat{x}}$ at different prices, as long as we can obtain unbiased estimates of the expected demand $x^*_\psi(p)$. (In the next section, we will give another technique for obtaining unbiased estimates of the gradients.) Given access to unbiased estimates of the gradients of $g_{\hat{x}}$, we can rely on the noisy subgradient descent method (and its guarantee in Theorem 2.1) to minimize the function $g_{\hat{x}}$.
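The inner layer can be sketched on the same toy single-good market as before (assumed numbers, not from the paper): one buyer type with $v(x) = \sqrt{x}$, so the expected demand at price $p$ is $x^*(p) = \min(1, 1/(4p^2))$. Per Lemma 3.11, $\hat{x}$ minus an observed purchase is an unbiased gradient estimate of $g_{\hat{x}}$, and noisy subgradient descent on the price recovers the inducing price (here $p = 1$ for $\hat{x} = 0.25$).

```python
import math
import random

random.seed(0)

def noisy_demand(p):
    # Stand-in for one ReP query: expected demand plus bounded,
    # mean-zero observation noise.
    return min(1.0, 1.0 / (4.0 * p * p)) + random.uniform(-0.05, 0.05)

def induce_price(x_hat, rounds=4000):
    # Noisy subgradient descent on the dual g_{x_hat}; the averaged
    # iterate is returned, as in the standard SGD guarantee.
    p, total = 0.5, 0.0
    for t in range(1, rounds + 1):
        grad = x_hat - noisy_demand(p)        # unbiased gradient estimate
        p = max(0.05, p - (0.5 / math.sqrt(t)) * grad)
        total += p
    return total / rounds

p_hat = induce_price(0.25)
assert abs(p_hat - 1.0) < 0.1   # close to the exact inducing price p = 1
```

The step-size schedule and iterate averaging here are generic SGD choices, not the specific constants of Theorem 2.1.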
Note that the algorithm only finds a point that approximately minimizes the function in expectation, but we can get an approximate minimizer with high probability using a standard amplification technique: run the subgradient descent method logarithmically many times, so that at least one of the output price vectors is guaranteed to be accurate with high probability. More formally:

Lemma 3.12. Let $\hat{x} \in \big(F \cap \mathbb{R}^d_{>0}\big)$ be any target bundle. There exists an algorithm that, given any target accuracy $\varepsilon$ and confidence parameter $\delta$ as input, outputs a list $P$ of $\log(1/\delta)$ price vectors such that with probability at least $1-\delta$, there exists a price vector $\hat{p} \in P$ that satisfies $\|x^*_\psi(\hat{p}) - \hat{x}\| \le \varepsilon$. Furthermore, the running time, the length of the list, and the number of queries to ReP are bounded by $\mathrm{poly}(d, 1/\varepsilon, \log(1/\delta))$.

Lastly, we have one remaining technical problem to solve: given a set of price vectors $P$ in which at least one price vector approximately induces the target expected bundle $\hat{x}$, we need to identify one such price vector. To accomplish this, we simply post each price vector $p \in P$ repeatedly, obtain polynomially many observations from the buyers, and compute the empirical average bundle over these polynomially many rounds. We select the price vector whose empirical average purchased bundle is closest to the target bundle $\hat{x}$. Putting all the pieces together, we obtain our full algorithm BunToPrice (formal description in Algorithm 2 in the appendix).

Theorem 3.13. Let $\hat{x} \in \big(F \cap \mathbb{R}^d_{>0}\big)$ be any target bundle.
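The selection step admits a direct sketch (toy numbers, same assumed single-good market as above, not from the paper): poll each candidate price many times and keep the one whose empirical average demand is closest to the target.

```python
import random

random.seed(1)

def noisy_demand(p):
    # Stand-in for one ReP query in the toy market v(x) = sqrt(x).
    return min(1.0, 1.0 / (4.0 * p * p)) + random.uniform(-0.05, 0.05)

def select_price(candidates, x_hat, polls=2000):
    # Post each candidate repeatedly; pick the candidate whose empirical
    # average purchased bundle is closest to the target x_hat.
    best_p, best_gap = None, float("inf")
    for p in candidates:
        avg = sum(noisy_demand(p) for _ in range(polls)) / polls
        gap = abs(avg - x_hat)
        if gap < best_gap:
            best_p, best_gap = p, gap
    return best_p

# Among these candidates, only p = 1.0 induces demand near x_hat = 0.25
# (p = 0.4 induces demand 1.0, p = 2.5 induces demand 0.04).
chosen = select_price([0.4, 1.0, 2.5], 0.25)
assert chosen == 1.0
```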
For any target accuracy parameter $\varepsilon$ and confidence parameter $\delta$, the instantiation BunToPrice($\hat{x}, \varepsilon, \delta$) outputs a price vector $\hat{p}$ that with probability at least $1-\delta$ satisfies $\|\hat{x} - x^*_\psi(\hat{p})\| \le \varepsilon$. Furthermore, the number of queries to ReP is bounded by $\mathrm{poly}(d, 1/\varepsilon, \log(1/\delta))$.

3.3 Outer Layer: Welfare Maximization

Finally, we combine the subroutine BunToPrice with subgradient descent to find the welfare-maximizing prices. At a high level, we use subgradient descent to optimize the function SW over the bundle space, and along the way use the algorithm BunToPrice to obtain prices that induce each target bundle arising along subgradient descent's optimization path. To ensure that the per-round expected demand for each good $j$ is bounded by some supply $s_j$, the algorithm optimizes over bundles in the set $S = \{x \in F \mid x_j \le s_j \text{ for each } j \in [d]\}$.

There are several technical challenges remaining. First, in order to optimize the concave function SW using subgradient descent, we need to compute a subgradient at each bundle that the subgradient descent method chooses at intermediate steps. The following result establishes a very nice property: the price vector that induces a target bundle $\hat{x}$ gives us a simple way to compute a subgradient in $\partial\,\mathrm{SW}(\hat{x})$. In particular, this means we can obtain a subgradient of the function SW using our subroutine BunToPrice.

Lemma 3.14. Let $\hat{x} \in \big(\mathbb{R}^d_{>0} \cap F\big)$, and let $\hat{p}$ be the price vector that induces $\hat{x}$. Then $(\hat{p} - c) \in \partial\,\mathrm{SW}(\hat{x})$.

Remark. Lemma 3.14 also shows that the welfare-optimal solution always prices every good at cost or higher, and hence welfare-optimal solutions are always no-deficit.
To see this, note that if the prices $p$ induce expected bundle $x$, then $(p - c)$ is a subgradient of the welfare function at $x$, where $c$ denotes the production cost vector. Hence, if for any good $j$ we had $p_j < c_j$, the subgradient would be negative in that coordinate, and we could increase welfare by reducing the demand for good $j$, contradicting optimality.

Second, at each iteration $t$, subgradient descent may require a subgradient at some bundle $x_t$, but because of the error in BunToPrice, we only find prices that approximately induce the target bundle. To overcome this issue, we rely on the analysis of subgradient descent under adversarial noise (given in Theorem 2.2).

Lastly, instead of optimizing over the entire set $S$, we optimize over a slightly smaller set $S_\xi = \{x \in F \mid \xi \le x_j \le s_j - \xi\}$. This allows us to settle two issues: (1) we can guarantee that all of the induced bundles lie in the set $S$ despite the error of BunToPrice, and (2) each bundle in $S_\xi$ is guaranteed to be inducible, since it is strictly positive in every coordinate (as required by Theorem 3.13).

Lemma 3.15. For any $\xi \in (0,1)$, $\max_{x \in S} \mathrm{SW}(x) - \max_{x' \in S_\xi} \mathrm{SW}(x') \le \lambda (d\xi)^\beta + \sqrt{d}\,\xi \|c\|$.

Putting all the pieces together, we obtain our main algorithm OWel (full description presented in Algorithm 3). To establish its welfare guarantee, we compare to an even stronger benchmark: the welfare of the optimal lottery over allocations. In particular, given any constraint vector $s \in \mathbb{R}^{d+1}_+$, a feasible lottery over allocations is a randomized mapping $\pi \colon [n] \to F$ that assigns each buyer a randomized bundle such that $\mathbb{E}_{v_i \sim \psi}[\pi(i)] \le s$. Let $\mathrm{OPT}_{\mathrm{lot}}$ be the optimal social welfare achieved by a lottery over allocations. The following is the formal guarantee of OWel (corresponding to Theorem 1.1).

Theorem 3.16.
For any accuracy parameter $\alpha > 0$, confidence parameter $\delta > 0$, and subset $S = \{x \in F \mid x_j \le s_j \text{ for each } j \in [d]\}$ given by a supply vector $s$: given query access to ReP, the instantiation OWel($\alpha, \delta, s$) outputs a price vector $p'$ that with probability at least $1-\delta$ satisfies $x^*_\psi(p') \le s$ and $\mathrm{SW}(p') \ge \mathrm{OPT}_{\mathrm{lot}} - \alpha$. Furthermore, both the running time of the algorithm and the number of queries to ReP are bounded by $\mathrm{poly}(d, 1/\alpha, 1/\sigma, 1/\lambda, \log(1/\delta))$.

Remark. The only part of the algorithm that interacts with the oracle ReP (or the buyers) is the inner-layer subroutine BunToPrice. Since BunToPrice only requires a bounded and unbiased estimate of $x^*_\psi(p)$ for each price vector $p$ it queries, we can replace ReP by any procedure that computes such an unbiased estimate. This is crucial for solving the unit-demand problem.

4 Unit-Demand Buyers with Indivisible Goods

We now switch to the setting of indivisible goods and unit-demand buyers. Our goal is to develop a computationally and query-efficient algorithm to find an approximately optimal distribution over prices, subject to the constraint that the per-round expected demand for each good $j$ is bounded by $s_j$. In particular, we will use the algorithm OWel of Section 3 as the main tool in our solution. Throughout, we impose the following mild boundedness assumption on the values.

Assumption on valuations. There exists a constant upper bound $V_{\max}$ such that for any buyer $i \in [n]$ and item $j \in [d]$, $0 < v_{ij} \le V_{\max}$.

Overview: relax and regularize. A natural starting point for solving our problem with OWel is to consider the linear relaxation of unit-demand valuations: that is, we can view the buyers as having linear valuation functions over divisible goods, and optimize over a feasible set of bundles that is the non-negative orthant of the $\ell_1$ ball.
This relaxation maintains the property that buyers buy integral quantities of each good. However, this approach runs into a substantial difficulty: linear valuation functions are not strongly concave, and strong concavity was an important ingredient in our analysis of OWel. Instead, we will imagine that we have access to a regularized version of the linear relaxation of our original problem: that is, we imagine that each buyer has a regularized valuation function of the form $\langle v, x \rangle + \eta H(x)$, where $H$ is the entropy function (of course, in reality, we cannot modify the valuations of the buyers). We show that we can solve the (imagined) regularized version of the problem, and also that we can induce buyers to behave (in expectation) as if their valuation functions were regularized, by appropriately perturbing the price vectors we present to them. Our solution then consists of the following three steps:

1. We show that the algorithm OWel can compute an approximately optimal price vector for the regularized version of the problem, as long as the algorithm has access to an unbiased estimate of the expected demand of a random regularized buyer.

2. Next, we show that we can obtain such unbiased estimates given access to the revealed preference oracle ReP($\psi$) in the original un-regularized instance. The key ingredient is a novel price perturbation technique.

3. Finally, we show how to construct an approximately optimal price distribution based on the price vector output by OWel.

To facilitate the discussion, we introduce a dummy good (indexed as $d+1$) to represent the buyer's option of buying nothing. The seller's price and each buyer's value for this item are always $0$, and the per-round demand upper bound is simply $s_{d+1} = 1$. Moreover, we write $\Delta_{d+1}$ to denote the set of all probability distributions over the $d+1$ items (or simply the simplex over the items).
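The entropy-regularized best response over the simplex has a well-known closed form (a softmax): $\operatorname{argmax}_{x \in \Delta}\,\langle u, x\rangle + \eta H(x)$ has $x_j \propto \exp(u_j/\eta)$, where $u = v - p$ is the utility vector. This is a standard fact, not stated in this section; the sketch below (toy numbers) checks it against random points of the simplex.

```python
import math
import random

def regularized_value(x, u, eta):
    # <u, x> + eta * H(x), with the 0*log(0) = 0 convention.
    return sum(ui * xi for ui, xi in zip(u, x)) + eta * sum(
        -xi * math.log(xi) for xi in x if xi > 0)

def softmax(u, eta):
    # Closed-form maximizer of the entropy-regularized objective.
    m = max(u)
    w = [math.exp((ui - m) / eta) for ui in u]
    s = sum(w)
    return [wi / s for wi in w]

random.seed(2)
u = [0.7, 0.2, 0.0]     # utilities v_j - p_j; dummy good last, utility 0
eta = 0.3
x_star = softmax(u, eta)
best = regularized_value(x_star, u, eta)
for _ in range(1000):
    # Sample a uniform random point of the simplex via sorted gaps.
    r = sorted([0.0, random.random(), random.random(), 1.0])
    x = [r[1] - r[0], r[2] - r[1], r[3] - r[2]]
    assert regularized_value(x, u, eta) <= best + 1e-9
```

This closed form is also what makes the price perturbation technique of Section 4.2 possible: a hard argmax under random perturbations reproduces this soft choice in expectation.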
For any $x \in \Delta_{d+1}$, the entropy of $x$ is defined as $H(x) = \sum_{j \in [d+1]} x_j \log \frac{1}{x_j}$. The function $H$ is strongly concave with respect to the $\ell_1$ norm over the simplex.

4.1 Solving the regularized problem

Given a probability distribution $\psi$ over value vectors and a parameter $\eta$, we imagine a corresponding $\eta$-regularized problem with a distribution $\tilde{\psi}$ over valuation functions: for each valuation vector $v_i$ in the support of $\psi$, create a regularized valuation function $\tilde{v}_i \colon \Delta_{d+1} \to \mathbb{R}_{>0}$ with the same probability mass $\tilde{\psi}_{v_i} = \psi_{v_i}$, such that $\tilde{v}_i(x) = \langle v_i, x \rangle + \eta H(x)$. The regularized problem is an instance of the divisible-goods setting, with feasible set $F = \Delta_{d+1}$. Suppose that we have access to an unbiased estimate of $x^*_{\tilde{\psi}}(p)$ for any price vector $p$: then we can apply OWel from Section 3 to compute approximately optimal prices.

There is a small obstacle in the analysis: the regularized valuations defined above are not monotonically increasing in each coordinate (because of the entropy term). However, recall that we were able to substitute Assumption 3.4 for monotonicity, namely that the convex program $\mathrm{SCP}(\hat{x})$ is supply-saturating for each $\hat{x}$ in our feasible region $F = \Delta_{d+1}$. This is indeed satisfied in our setting:

Lemma 4.1. Let $F = \Delta_{d+1}$ and let $\tilde{\psi}$ be a distribution over regularized valuation functions of the form $\langle v, x \rangle + \eta H(x)$. Then the convex program $\mathrm{SCP}(\hat{x})$, defined as
$$\max_{x \in \Delta_{d+1}^n} \sum_{v_i \in V} \tilde{\psi}(v_i) \big[ \langle v_i, x_i \rangle + \eta H(x_i) \big] \quad \text{such that} \quad \sum_{v_i \in V} \tilde{\psi}(v_i)\, x_{ij} \le \hat{x}_j \text{ for every } j \in [d+1],$$
is supply-saturating for any $\hat{x} \in \Delta_{d+1}$.

Proof. Let $x^\bullet$ be an optimal solution to $\mathrm{SCP}(\hat{x})$. The convex combination $\mathbb{E}_{\tilde{\psi}}[x^\bullet_i] \in \Delta_{d+1}$, so we must have $\sum_{j \in [d+1]} \mathbb{E}_{\tilde{\psi}}[x^\bullet_{ij}] = 1$.
Note that $\hat{x}$ also lies in $\Delta_{d+1}$ (these are the only inducible average bundles for unit-demand buyers), so $\sum_{j \in [d+1]} \hat{x}_j = 1$. It follows that all of the constraints $\mathbb{E}_{\tilde{\psi}}[x^\bullet_{ij}] \le \hat{x}_j$ are saturated.

Moreover, the feasible set $\Delta_{d+1}$ is convex, closed, and has a non-empty interior, and each of the regularized valuations is $\eta$-strongly concave with respect to the $\ell_2$ norm (this follows from the fact that the $\ell_1$ norm of any vector is at least its $\ell_2$ norm) and $(\sqrt{d+1} + V_{\max},\, 1/2)$-Hölder continuous (see Corollary D.2 for a proof). Thus we satisfy all conditions needed to apply OWel, so long as we have access to unbiased estimates of $x^*_{\tilde{\psi}}(p)$ for any price vector $p \in \mathbb{R}^{d+1}_+$.

4.2 From price perturbation to value regularization

Solving the regularized problem using OWel requires an unbiased estimate of $x^*_{\tilde{\psi}}(p)$ for any price vector $p$ that the algorithm queries, but in our problem instance the valuations are actually drawn from $\psi$ (without the entropy term). To obtain such an estimate, we give a price perturbation technique that allows us to simulate the responses of the regularized buyers: given any price vector $p$, we perturb each coordinate to obtain a noisy price vector $p'$, with the effect that the random item purchased by the unit-demand buyer, in expectation over the perturbation, equals the bundle purchased by her regularized counterpart (footnote 12).

Lemma 4.2 (Warmuth (2009)). Fix any $\eta > 0$ and any vector $u \in \mathbb{R}^{d+1}_{>0}$.
For each $j \in [d+1]$, let $G_j$ be a random number drawn independently and uniformly at random from $[0,1]$. Then
$$x^* = \operatorname*{argmax}_{x \in \Delta_{d+1}} \big[ \langle u, x \rangle + \eta H(x) \big] = \mathbb{E}\left[ \operatorname*{argmax}_{x \in \Delta_{d+1}} \Big[ \langle u, x \rangle - \eta \sum_{j \in [d+1]} x_j \ln\big(\ln(1/G_j)\big) \Big] \right].$$
The random variable $-\ln\ln(1/G_j)$ is distributed according to the standard Gumbel distribution.

One immediate technical issue is that the Gumbel distribution is unbounded, so the perturbed prices might be negative. We also need to guarantee that the price of the dummy good is $0$. To overcome this issue, we translate the perturbed price vector $p$ into a non-negative price vector $p'$ with the following procedure: set $p'_{d+1} = 0$ and $p'_j = (p_j - p_{d+1}) - \min\{0, \min_{j' \in [d]}(p_{j'} - p_{d+1})\}$. We refer to this procedure as Convert, and show that the choice made by any unit-demand buyer remains the same under the new price vector $p'$.

Lemma 4.3. Let $p \in \mathbb{R}^{d+1}$ be any real-valued vector and let $p' = \mathrm{Convert}(p)$. Then for any vector $v \in \mathbb{R}^{d+1}_{>0}$ such that $v_{d+1} = 0$,
$$\operatorname*{argmax}_{j \in [d+1]} [v_j - p_j] = \operatorname*{argmax}_{j \in [d+1]} [v_j - p'_j].$$
Furthermore, $p'_j \ge 0$ for all $j \in [d]$.

Combining the Gumbel noise addition and the procedure Convert, we can now obtain unbiased estimates of $x^*_{\tilde{\psi}}(p)$ using feedback from ReP($\psi$), the revealed preference oracle for the original problem instance. More formally, for any fixed price vector $p$, consider the following distribution $D(p)$ of random prices:

1. For each $j \in [d+1]$, let $G_j$ be a random number drawn independently and uniformly at random from $[0,1]$, and let $\tilde{p}_j = p_j + \eta \ln\big(\ln(1/G_j)\big)$ for each $j \in [d+1]$.

2. Output $p' = \mathrm{Convert}(\tilde{p})$.
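The perturbation identity can be checked by Monte Carlo on a toy utility vector (assumed numbers, not from the paper): adding $-\eta \ln\ln(1/G_j)$, i.e. $\eta$ times a standard Gumbel draw, to each utility and taking a hard argmax reproduces, in expectation, the entropy-regularized soft choice $x_j \propto \exp(u_j/\eta)$.

```python
import math
import random

random.seed(3)
u = [0.5, 0.3, 0.0]   # utilities v_j - p_j; dummy good last
eta = 0.25
n = 200_000

# Empirical frequency of the hard argmax under Gumbel perturbations.
counts = [0, 0, 0]
for _ in range(n):
    perturbed = [uj - eta * math.log(math.log(1.0 / random.random()))
                 for uj in u]
    counts[max(range(3), key=lambda j: perturbed[j])] += 1
freq = [c / n for c in counts]

# The entropy-regularized soft choice: softmax(u / eta).
m = max(u)
w = [math.exp((uj - m) / eta) for uj in u]
soft = [wi / sum(w) for wi in w]

for f, s in zip(freq, soft):
    assert abs(f - s) < 0.01
```

This is the classical Gumbel-max trick; here the perturbation is applied to the prices rather than the valuations, which matters because the seller controls only the prices.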
Given this subroutine for generating random prices, we have a procedure Sim (presented in Algorithm 1) for obtaining an unbiased estimate of $x^*_{\tilde{\psi}}(p)$ for any price vector $p$. We establish the correctness of Sim in Lemma 4.4.

Lemma 4.4. Fix any price vector $p \in \mathbb{R}^{d+1}_+$, parameter $\eta > 0$, and any distribution $\psi$ over value vectors in $\mathbb{R}^{d+1}_{>0}$. Let $x'$ be the estimate output by Sim($p, \eta$). Then $\mathbb{E}_{\psi,\mathrm{Sim}}[x'] = x^*_{\tilde{\psi}}(p)$.

Using the subroutine Sim to obtain unbiased estimates of $x^*_{\tilde{\psi}}(p)$ for different price vectors $p$, we can now instantiate OWel to solve the $\eta$-regularized problem.

(Footnote 12: The technique of simulating regularization through perturbation was also used to establish the equivalence between "Follow the Regularized Leader" and "Follow the Perturbed Leader" (Warmuth, 2009). In our setting, it is important that we can obtain this effect by perturbing the price vector, rather than the valuation vector, because we do not have control over buyer valuations.)

Algorithm 1: Simulate the response of a regularized buyer, Sim($p, \eta$)
Input: a price vector $p$ and a regularization parameter $\eta$.
Let $p'$ be a price vector drawn from $D(p)$.
Query the oracle ReP($\psi$) with $p'$ (i.e., post $p'$ to a random consumer) and obtain feedback $x'$.
Output: $x'$, an unbiased estimate of $x^*_{\tilde{\psi}}(p)$.

4.3 Wrap-up: An approximately optimal distribution over prices

Let $\hat{p}$ be the price vector output by OWel when solving the $\eta$-regularized problem, and consider the distribution over prices $\hat{D} = D(\hat{p})$. As in the divisible-goods setting, we again compare to the optimal lottery over allocations.
In particular, the optimal lottery is given by
$$\max_{x \in \Delta_{d+1}^n} \sum_{i \in [n]} \psi(v_i) \langle v_i - c, x_i \rangle \quad \text{such that} \quad \sum_{i \in [n]} \psi(v_i)\, x_{ij} \le s_j \text{ for each } j \in [d+1].$$
Let $x^\star$ and $\mathrm{OPT}_{\mathrm{lot}}$ be the optimal solution and value of the program defined above. Since any distribution over prices simply induces a lottery over allocations, we know that
$$\max_{D \,:\, x^*_\psi(D) \le s} \mathrm{SW}(D) \le \mathrm{OPT}_{\mathrm{lot}},$$
where we write $x^*_\psi(D) = \mathbb{E}_{p \sim D, v \sim \psi}[x^*_v(p)]$ to denote the expected demand over the goods. The following lemma bounds the sub-optimality of $\hat{D}$ relative to $\mathrm{OPT}_{\mathrm{lot}}$ in terms of the regularization parameter $\eta$ and the accuracy guarantee of the price vector $\hat{p}$.

Lemma 4.5. Fix any regularization parameter $\eta$. Suppose that $\hat{p}$ is an $\varepsilon$-approximately optimal price vector for the $\eta$-regularized problem. Then the distribution over prices $\hat{D} = D(\hat{p})$ satisfies $\mathrm{SW}(\hat{D}) \ge \mathrm{OPT}_{\mathrm{lot}} - \varepsilon - \eta \log(d+1)$.

Therefore, to achieve a target accuracy of $\alpha$, it suffices to instantiate OWel with accuracy parameter $\varepsilon = \alpha/2$ to solve the $\eta$-regularized problem with $\eta = \alpha/(2\log(d+1))$. Putting all the pieces together, we have our algorithm OWel-UD (formally presented in Algorithm 4), which achieves the following main result, recovering Theorem 1.2.

Theorem 4.6. For any accuracy parameter $\alpha > 0$, confidence parameter $\delta > 0$, and subset $S = \{x \in F \mid x_j \le s_j \text{ for each } j \in [d]\}$ given by a supply vector $s$: given query access to ReP, the instantiation OWel-UD($\alpha, \delta, s$) outputs a distribution $\hat{D}$ over prices that with probability at least $1-\delta$ satisfies $x^*_\psi(\hat{D}) \le s$ and $\mathrm{SW}(\hat{D}) \ge \mathrm{OPT}_{\mathrm{lot}} - \alpha$. Furthermore, both the running time of the algorithm and the number of queries to ReP are bounded by $\mathrm{poly}(d, 1/\alpha, V_{\max}, \log(1/\delta))$.
5 Limited Supply: Proof of Theorem 1.3

We turn our attention to dynamic pricing with limited supply, so as to prove Theorem 1.3.

Setting and notation. Compared to the main model described in Section 2, the limited-supply setting differs in the following ways. A problem instance is characterized by a pair $(s, T)$, where $s \in \mathbb{R}^d_+$ is the supply vector and $T$ is the time horizon (the maximal number of rounds). Initially the seller has $T \cdot s_j$ units of each good $j$. For ease of exposition, assume that at most one unit of each good can be sold in each round. Execution halts when the time horizon is exceeded, or when the remaining supply of any one good falls below $0$. The performance of a given pricing policy $\pi$ is characterized by its expected total welfare (over the entire execution), denoted $\mathrm{SW}_{\mathrm{tot}}(\pi)$. We are particularly interested in "fixed-vector" pricing policies: ones that always use the same fixed price vector $p$; the expected total welfare of such a policy is denoted $\mathrm{SW}_{\mathrm{tot}}(p)$. Likewise, "fixed-distribution" pricing policies always draw the price vector independently from the same fixed distribution $D$; the expected total welfare of such a policy is denoted $\mathrm{SW}_{\mathrm{tot}}(D)$.

The induced bundle for a given price vector $p$ is a vector $x = x(p) \in \mathbb{R}^d_+$ such that $x_j$ is the per-round expected consumption of good $j$ if price vector $p$ is chosen. A bundle $x \in \mathbb{R}^d_+$ is inducible if $x = x(p)$ for some price vector $p$.

Connection to "Bandits with Knapsacks". We represent our problem as a special case of "Bandits with Knapsacks" (Badanidiyuru et al., 2013), a general setting for multi-armed bandit problems with resource consumption (henceforth denoted BwK). In BwK, there are several resources consumed by an algorithm, with a limited supply of each. In each round the algorithm chooses from a fixed set of "arms", receives a reward, and consumes some resources.
Thus, the outcome of choosing an arm is a vector (the outcome vector) consisting of the reward and the consumption of each resource. The outcome vector is assumed to be an independent draw from some fixed but unknown distribution that depends only on the chosen arm. Dynamic pricing with limited supply is a canonical special case of BwK: arms correspond to price vectors, resources correspond to the goods (one resource per good), and the reward is the seller's utility from a given customer (typically revenue or profit; in our case, welfare). The outcome in a given round is determined by the purchased bundle.

5.1 Divisible goods: proof of Theorem 1.3(a)

We use a different, non-standard connection to BwK: arms correspond to inducible bundles, rather than price vectors. More precisely, for each inducible bundle $x$ we have an arm in BwK such that choosing this arm means choosing a particular price vector $p_x$ that induces bundle $x$. Henceforth, such an arm is termed arm-bundle $x$. The reward and resource consumption from choosing this arm-bundle are defined as those from choosing $p_x$. Note that the expected consumption of each good $j$ is simply $x_j$. An algorithm $\pi$ for such an instance of BwK, i.e., an algorithm that in each round selects an inducible bundle and observes the purchased bundle, will be called a bundling policy. Its expected total welfare is denoted $\mathrm{SW}_{\mathrm{tot}}(\pi)$. A "fixed-bundle" bundling policy chooses some fixed arm-bundle $x$ in each round; its expected total welfare is denoted $\mathrm{SW}_{\mathrm{tot}}(x)$.

Recall that choosing an arm-bundle $x$ determines the realized bundles purchased by each type of buyer (this is a consequence of Lemma 3.7). In other words, the realized bundles do not depend on the choice of $p_x$. Therefore:

Claim 5.1. Fix a problem instance. For any pricing policy $\pi'$, there exists a bundling policy $\pi$ with the same expected total welfare.

Proof.
The bundling policy $\pi$ is constructed as follows: whenever $\pi'$ chooses a price vector $p$ that induces bundle $x$, $\pi$ chooses arm-bundle $x$. Then $\pi$ and $\pi'$ have the same distribution over outcome vectors in each round $t$ (this is proved by induction on $t$).

The analysis in Badanidiyuru et al. (2013) emphasizes "fixed-distribution" bundling policies, in which the arm-bundle in each round is sampled independently from a fixed distribution $D$ over arm-bundles. Let $\mathrm{SW}_{\mathrm{tot}}(D)$ and $\mathrm{SW}(D)$ denote, respectively, the expected total welfare and the expected per-round welfare of this bundling policy. A structural result from Badanidiyuru et al. (2013), specialized to our setting, essentially reduces optimization over arbitrary bundling policies to optimization over fixed-distribution bundling policies:

Lemma 5.2 (specialized from Badanidiyuru et al. (2013)). Fix a finite set $B$ of arm-bundles. Let $\sup \mathrm{SW}_{\mathrm{tot}}(B)$ be the supremum of expected total welfare achieved by bundling policies that can only use arm-bundles from $B$. There exists a distribution $D$ over $B$ such that $T \cdot \mathrm{SW}(D) \ge \sup \mathrm{SW}_{\mathrm{tot}}(B)$ and $x_j(D) \le s_j$ for each good $j$.

Here $T \cdot \mathrm{SW}(D)$ is seen as an approximation of $\mathrm{SW}_{\mathrm{tot}}(D)$, given that $x_j(D) \le s_j$ for each $j$.

Reduction to the best fixed bundle. A distribution $D$ over arm-bundles can be replaced by the arm-bundle $x = \mathbb{E}_{x \sim D}[x]$, in the sense that $\mathrm{SW}(x) \ge \mathrm{SW}(D)$ (because SW is concave in the expected bundle; see Lemma 3.8) and $x_j(D) = x_j$ for each good $j$ (footnote 13). Let $\sup \mathrm{SW}_{\mathrm{tot}}$ be the supremum of expected total welfare over all bundling policies $\pi$. By Claim 5.1, it is also the supremum of expected total welfare over all pricing policies. We claim that for any given $\varepsilon > 0$ there exists a finite set $B$ of arm-bundles such that $\sup \mathrm{SW}_{\mathrm{tot}}(B) \ge \sup \mathrm{SW}_{\mathrm{tot}} - \varepsilon$.
This holds because $\mathrm{SW}(x)$ is a Hölder-continuous function of the bundle $x$ (see Lemma C.2), and so for any $\varepsilon$ there is a fine enough discretization of bundles that yields an $\varepsilon$-net for social welfare. Putting this together, we reduce arbitrary bundling policies to fixed-bundle policies:

Corollary 5.3. For each $\varepsilon > 0$ there exists an arm-bundle $x$ such that $T \cdot \mathrm{SW}(x) \ge \sup \mathrm{SW}_{\mathrm{tot}} - \varepsilon$ and $x_j \le s_j$ for each good $j$.

Again, $T \cdot \mathrm{SW}(x)$ is seen as an approximation of $\mathrm{SW}_{\mathrm{tot}}(x)$, given that $x_j \le s_j$ for each $j$.

(Footnote 13: A similar statement, namely that any distribution over arms is "dominated" by some arm, is false for many other special cases of BwK, including another version of dynamic pricing with limited supply (Badanidiyuru et al., 2013).)

Completing the proof. Fix a parameter $\alpha > 0$. Applying Corollary 5.3 with $\varepsilon = \alpha T/2$, and letting $p$ be a price vector that induces the resulting bundle $x$, we obtain a price vector $p$ such that $T \cdot \mathrm{SW}(p) \ge \sup \mathrm{SW}_{\mathrm{tot}} - \alpha T/2$ and $x_j(p) \le s_j$ for each good $j$. The algorithm from Theorem 1.1 can compute a price vector $p^*$ such that $\mathrm{SW}(p^*) \ge \mathrm{SW}(p) - \alpha/2$ and $x_j(p^*) \le s_j$ for each good $j$. In particular, $T \cdot \mathrm{SW}(p^*) \ge \sup \mathrm{SW}_{\mathrm{tot}} - \alpha T$. It remains to bound the difference between the expected total welfare $\mathrm{SW}_{\mathrm{tot}}(p^*)$ and the estimate $T \cdot \mathrm{SW}(p^*)$:

Lemma 5.4. Let $s_{\min} = \min_j s_j$ and assume that $T s_{\min} > 32 \log T$. Then $\mathrm{SW}_{\mathrm{tot}}(p^*) \ge T \cdot \mathrm{SW}(p^*) - O\big(\sqrt{T \log(T)/s_{\min}}\big)$.

(The lemma applies to all price vectors $p^*$ such that $x_j(p^*) \le s_j$ for each good $j$. Its proof is deferred to Section 5.3.) Putting this all together, we see that $\mathrm{SW}_{\mathrm{tot}}(p^*) \ge \sup \mathrm{SW}_{\mathrm{tot}} - \alpha T - O\big(\sqrt{T \log(T)/s_{\min}}\big)$. This completes the proof of Theorem 1.3.
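The behavior captured by Lemma 5.4 can be sanity-checked with a minimal simulation (hypothetical parameters, not from the paper): a single good with per-round supply rate $s = 0.3$ over $T = 2000$ rounds (total supply $B = sT = 600$), and a fixed price whose per-round demand is Bernoulli with mean $x = 0.25 \le s$, earning welfare $1$ per sale. The realized total welfare stays close to $T$ times the per-round expected welfare, and the stock-out never triggers.

```python
import random

random.seed(4)
T, s, x, w = 2000, 0.3, 0.25, 1.0
supply = int(s * T)   # total supply B = sT = 600 units

sales, t = 0, 0
while t < T and sales < supply:   # halt at the horizon or at stock-out
    t += 1
    if random.random() < x:       # one unit demanded this round w.p. x
        sales += 1

z_tau = w * sales                 # realized total welfare Z_tau
assert t == T                     # stock-out is a >5-sigma event when x < s
assert abs(z_tau - w * x * T) < 100   # within ~5 std devs of T * w * x
```

This only illustrates the concentration argument behind the lemma; the actual proof controls all goods simultaneously via a union bound over the events in eq. (12).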
5.2 Indivisible goods and unit demands: proof sketch of Theorem 1.3(b) Recall that we compete against a weaker benchmark: the best fixed distribution D over price vectors; more precisely, against supSWtot := sup_{D : x(D) ≤ s} SWtot(D). We can bound the deviation between SWtot(D) and T · SW(D) via the following lemma (which is stated and proved similarly to Lemma 5.4). Lemma 5.5. Let s_min = min_j s_j and assume that T·s_min > 32 log T. Let D be a distribution over price vectors such that x(D) ≤ s. Then: |SWtot(D) − T · SW(D)| ≤ O(√(T log(T)/s_min)). Fix ε > 0 and choose some distribution D such that x(D) ≤ s and SWtot(D) ≥ supSWtot − ε. Apply Theorem 1.3 to construct a distribution D∗ such that x(D∗) ≤ s and SW(D∗) ≥ SW(D) − α. By Lemma 5.5, it follows that SWtot(D) − SWtot(D∗) ≤ T(SW(D) − SW(D∗)) + O(√(T log(T)/s_min)) ≤ αT + O(√(T log(T)/s_min)). 5.3 Proof of Lemma 5.4 Let π denote the fixed-price policy with price vector p∗. For the sake of the argument, consider the execution of π on the problem instance I with time horizon T, but without the supply constraint. Let Z_t be the realized total welfare of this execution by time t. Without loss of generality, we view an execution of π in the original problem instance as an execution in the unlimited-supply instance, truncated at the round τ in which the original problem instance would halt. Thus, the total realized welfare of π in the original problem instance is Z_τ, where τ is a stopping time. Let x_j = x_j(p∗) be the expected consumption of a given good j. Let y_{j,t} be the realized total consumption of this good by time t. Let w = SW(p∗) be the expected per-round welfare for p∗.
By the Chernoff bound, letting c_0 = √(8 log T), with probability at least 1 − T^(−2) we have |y_{j,t} − t·x_j| ≤ c_0·√(t·x_j) and Z_t ≥ wt − c_0·√t for each good j and all rounds t ≤ T. (12) An execution of π on the unlimited-supply problem instance I is called clean if the event in eq. (12) holds. To prove the lemma, it suffices to show that in a clean execution, Z_τ ≥ T · SW(p∗) − O(√(T log(T)/s_min)). (13) So we will assume a clean execution from now on. Let B_j = s_j·T be the supply of good j. The stopping time τ can be expressed as τ = min_j min(T, τ_j), where τ_j = min{ rounds t : y_{j,t} > B_j }. (14) Informally, we can think of each τ_j as the stopping time for good j. Let us analyze τ_j. Claim 5.6. τ_j ≥ (B_j − c_0·√B_j)/x_j for each good j. Proof. Let ε = c_0·√B_j / x_j. It suffices to prove that for each round t ≤ B_j/x_j − ε we have y_{j,t} ≤ B_j. This is so because by eq. (12) we have y_{j,t} ≤ t·x_j + c_0·√(t·x_j) ≤ B_j − ε·x_j + c_0·√B_j ≤ B_j. Claim 5.7. min(T, τ_j) ≥ T − 2c_0·√(T/s_j) for each good j. Proof. If T ≤ (B_j − c_0·√B_j)/x_j, then the claim follows trivially from Claim 5.6. Else, we have T ≥ (B_j − c_0·√B_j)/x_j ≥ B_j/(2x_j), so τ_j ≥ T − c_0·√B_j/x_j ≥ T − 2c_0·T/√B_j = T − 2c_0·√(T/s_j). Plugging this into eq. (14), it follows that τ ≥ T − 2c_0·√(T/s_min). Since Z_τ ≥ wτ − c_0·√τ by eq. (12), the lower bound on τ implies eq. (13). This completes the proof of Lemma 5.4. 6 Conclusions and open questions We provide a polynomial-time dynamic pricing algorithm for maximizing welfare over the allocation of d goods, for buyers satisfying reasonable assumptions on their valuation functions. Prior work either required explicit assumptions on the aggregate price response function unsupported by micro-economic foundations, or had running time exponential in d.
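The clean-execution argument of Section 5.3 above can be sanity-checked by simulation: when the expected per-round consumption stays below the supply rate, the stopping time τ is (with high probability) the full horizon T. A seeded one-good sketch; all parameters are illustrative, not from the paper:

```python
import random

# Seeded sanity check of the stopping-time argument: with expected per-round
# consumption x_bar below the supply rate s, the fixed-price policy essentially
# never exhausts its supply B = s * T before the horizon.
random.seed(0)
T, x_bar, s = 10_000, 0.2, 0.5
B = s * T
y, tau = 0.0, T                  # realized consumption, stopping time
for t in range(1, T + 1):
    y += 1.0 if random.random() < x_bar else 0.0   # Bernoulli(x_bar) consumption
    if y > B:
        tau = t
        break
assert tau == T                  # a "clean" execution: supply never runs out
```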
Let us highlight two interesting directions. First: give an algorithm with a more reasonable polynomial running time. While we achieve polynomial dependence on d and α, this is mainly a proof-of-concept result: the degree of the polynomial running time of our algorithm is quite high. A (much) smaller degree is desirable, but appears beyond the reach of our current techniques. Are there practical algorithms that achieve the same guarantees? Second: extend our results to revenue optimization, a more traditional objective in the dynamic pricing literature. With our current techniques, this extension requires a major assumption on valuations, namely that buyers' valuations are uniformly homogeneous with degree m < 1: that there exists a constant m < 1 such that for every buyer i, bundle x, and scalar λ, v_i(λx) = λ^m·v_i(x) (the extension to revenue maximization subject to this assumption follows from the techniques of Roth et al. (2016)). Can revenue maximization be handled subject to weaker assumptions? Acknowledgements. We are grateful to Moshe Babaioff for valuable feedback on an early draft of this paper. This work was partially supported by NSF grant CNS-1253345, a Sloan Foundation Fellowship, and a DARPA grant. Parts of this work have been done while Zhiwei Steven Wu was visiting Microsoft Research." + }, + { + "url": "http://arxiv.org/abs/1504.01033v2", + "title": "Watch and Learn: Optimizing from Revealed Preferences Feedback", + "abstract": "A Stackelberg game is played between a leader and a follower. The leader\nfirst chooses an action, then the follower plays his best response. The goal of\nthe leader is to pick the action that will maximize his payoff given the\nfollower's best response.
In this paper we present an approach to solving for\nthe leader's optimal strategy in certain Stackelberg games where the follower's\nutility function (and thus the subsequent best response of the follower) is\nunknown.\n Stackelberg games capture, for example, the following interaction between a\nproducer and a consumer. The producer chooses the prices of the goods he\nproduces, and then a consumer chooses to buy a utility maximizing bundle of\ngoods. The goal of the seller here is to set prices to maximize his\nprofit---his revenue, minus the production cost of the purchased bundle. It is\nquite natural that the seller in this example should not know the buyer's\nutility function. However, he does have access to revealed preference\nfeedback---he can set prices, and then observe the purchased bundle and his own\nprofit. We give algorithms for efficiently solving, in terms of both\ncomputational and query complexity, a broad class of Stackelberg games in which\nthe follower's utility function is unknown, using only \"revealed preference\"\naccess to it. This class includes in particular the profit maximization\nproblem, as well as the optimal tolling problem in nonatomic congestion games,\nwhen the latency functions are unknown. Surprisingly, we are able to solve\nthese problems even though the optimization problems are non-convex in the\nleader's actions.", + "authors": "Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu", + "published": "2015-04-04", + "updated": "2015-11-18", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.GT", + "cs.LG" + ], + "main_content": "Contents. 1 Introduction; 1.1 Our Results and Techniques; 1.2 Related Work; 2 Preliminaries; 2.1 Projected Subgradient Descent; 2.2 Strong Convexity;
2.3 Tools for Zeroth-Order Optimization; 3 Profit Maximization From Revealed Preferences; 3.1 The Model and Problem Setup; 3.2 An Overview of Our Solution; 3.3 Expressing Profit as a Function of the Bundle; 3.4 Converting Bundles to Prices; 3.5 Profit Maximization; 4 General Framework of Stackelberg Games; 4.1 Inducing a Target Action of the Follower; 4.2 Optimizing Leader's Utility; 5 Optimal Traffic Routing from Revealed Behavior; 6 The Principal-Agent Problem; 6.1 Inducing the Agent's Contribution Using Noisy Observations; 6.2 Optimizing the Principal's Utility; 7 Conclusion; A A Routing Game Where Social Cost is Not Convex in The Tolls; B Missing Proofs in Section 3; B.1 Properties of CES and Cobb-Douglas Utilities; B.1.1 Constant Elasticity of Substitution (CES); B.1.2 Cobb-Douglas; C Detailed Analysis of Section 4; D Improvement with Ellipsoid in Noiseless Settings; D.1 The Ellipsoid Algorithm; D.2 Learning Prices with Ellipsoid;
D.3 Learning Tolls with Ellipsoid. 1 Introduction Consider the following two natural problems: 1. Profit Maximization via Revealed Preferences: A retailer, who sells d goods, repeatedly interacts with a buyer. In each interaction, the retailer decides how to price the d goods by choosing p ∈ R^d_+, and in response, the buyer purchases the bundle x ∈ R^d_+ that maximizes her utility v(x) − ⟨x, p⟩, where v is an unknown concave valuation function. The retailer observes the bundle purchased, and therefore his profit, which is ⟨x, p⟩ − c(x), where c is an unknown convex cost function. The retailer would like to set prices that maximize his profit after only a polynomial number of interactions with the buyer. 2. Optimal Tolling via Revealed Behavior: A municipal authority administers m roads that form a network G = (V, E). Each road e ∈ E of the network has an unknown latency function ℓ_e : R_+ → R_+ which determines the time it takes to traverse the road given a level of congestion. The authority has the power to set constant tolls τ_e ∈ R_+ on the roads in an attempt to manipulate traffic flow. In rounds, the authority sets tolls, and then observes the Nash equilibrium flow induced by the non-atomic network congestion game defined by the unknown latency functions and the tolls, together with the social cost (average total latency) of the flow. The authority would like to set tolls that minimize the social cost after only a polynomial number of rounds. Although these problems are quite different, they share at least one important feature: the retailer and the municipal authority each wish to optimize an unknown objective function given only query access to it.
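To see how tolls can steer an equilibrium flow toward the welfare optimum, consider a classic two-link Pigou-style instance (an illustrative toy, not taken from the paper): latencies ℓ1(x) = x and ℓ2(x) = 1 with one unit of demand. A toll τ on the first link shifts the equilibrium split, and τ = 1/2 induces the socially optimal flow:

```python
def equilibrium_flow(toll):
    # Two parallel links, unit demand. Latencies l1(x) = x (congestible) and
    # l2(x) = 1 (constant). At equilibrium, used routes have equal perceived
    # cost: x1 + toll = 1, so x1 = 1 - toll (clipped to [0, 1]).
    x1 = min(max(1.0 - toll, 0.0), 1.0)
    return x1, 1.0 - x1

def social_cost(toll):
    # Average total latency of the induced flow (tolls are transfers, not cost).
    x1, x2 = equilibrium_flow(toll)
    return x1 * x1 + x2 * 1.0

assert social_cost(0.0) == 1.0      # untolled equilibrium: everyone on link 1
assert social_cost(0.5) == 0.75     # toll 1/2 induces the optimal 50/50 split
assert social_cost(0.5) < social_cost(0.0)
```

The authority's difficulty in the paper is that it sees only the induced flow and its cost, never the latency functions themselves.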
That is, they have the power to choose some set of prices or tolls, and then observe the value of the objective function that results from that choice. This kind of problem (alternately called bandit or zeroth-order optimization) is well-studied, and is well understood in cases in which the unknown objective being maximized (resp. minimized) is concave (resp. convex). Unfortunately, the two problems posed above share another important feature: when posed as bandit optimization problems, the objective function being maximized (resp. minimized) is generally not concave (resp. convex). For the profit maximization problem, even simple instances lead to a non-concave objective function. Example 1. Consider a setting with one good (d = 1). The buyer's valuation function is v(x) = √x, and the retailer's cost function is c(x) = x. The buyer's utility for buying x units at price p is √x − x·p. Thus, if the price is p, a utility-maximizing buyer will purchase x∗(p) = 1/(4p²) units. The profit of the retailer is then Profit(p) = p · x∗(p) − c(x∗(p)) = 1/(4p) − 1/(4p²). Unfortunately, this profit function is not concave. Since the retailer's profit function is not concave in the prices, it cannot be optimized efficiently using generic methods for concave maximization. This phenomenon persists into higher dimensions, where it is not clear how to efficiently maximize the non-concave objective. The welfare objective in the tolling problem is also non-convex in the tolls; we give an example in Appendix A. Surprisingly, despite this non-convexity, we show that both of these problems can be solved efficiently subject to certain mild conditions. More generally, we show how to solve a large family of Stackelberg games in which the utility function of the “follower” is unknown. A Stackelberg game is played by a leader and a follower.
The leader moves first and commits to an action (e.g., setting prices or tolls as in our examples), and then the follower best responds, playing the action that maximizes her utility given the leader's action. The leader's problem is to find the action that will optimize his objective (e.g., maximizing profit, or minimizing social cost as in our examples) after the follower best responds to this action. Traditionally, Stackelberg games are solved assuming that the leader knows the follower's utility function, and thus his own utility function. But this assumption is very strong, and in many realistic settings the follower's utility function will be unknown. Our results give general conditions, and several natural examples, under which the problem of computing an optimal Stackelberg equilibrium can be solved efficiently with only revealed-preferences feedback about the follower's utility function. For clarity of exposition, we first work out our solution in detail for the special case of profit maximization from revealed preferences. We then derive and state our general theorem for optimally solving a class of Stackelberg games where the follower's utility is unknown. Finally, we show how to apply the general theorem to other problems, including the optimal tolling problem mentioned above and a natural principal-agent problem. 1.1 Our Results and Techniques The main challenge in solving our class of Stackelberg games is that for many natural examples, the leader's objective function is not concave when written as a function of his own action. For instance, in our example, the retailer's profit is not concave as a function of the price he sets. Our first key ingredient is to show that in many natural settings, the leader's objective is concave when written as a function of the follower's action. Consider again the retailer's profit maximization problem.
Recall that if the buyer's valuation function is v(x) = √x, then when she faces a price p, she will buy the bundle x∗(p) = 1/(4p²). In this simple case, we can see that setting a price of p∗(x) = 1/(2√x) will induce the buyer to purchase x units. In principle, we can now write the retailer's profit function as a function of the bundle x. In our example, the retailer's cost function is simply c(x) = x. So, Profit(x) = p∗(x) · x − c(x) = √x/2 − x. Written in terms of x, the profit function is concave! As we show, this phenomenon continues in higher dimensions, for arbitrary convex cost functions c and for a wide class of concave valuation functions satisfying certain technical conditions, including the well-studied families of CES and Cobb-Douglas utility functions. Thus, if the retailer had access to an oracle for the concave function Profit(x), we could use an algorithm for bandit concave optimization to maximize the retailer's profit. Unfortunately, the retailer does not directly get to choose the bundle purchased by the buyer and observe the profit for that bundle: he can only set prices and observe the buyer's chosen bundle x∗(p) at those prices, and the resulting profit Profit(x∗(p)). Nevertheless, we have reduced the retailer's problem to a possibly simpler one. In order to find the profit-maximizing prices, it suffices to give an algorithm which simulates access to an oracle for Profit(x) given only the retailer's query access to x∗(p) and Profit(x∗(p)). Specifically, if for a given bundle x the retailer could find prices p such that the buyer's chosen bundle x∗(p) = x, then he could simulate access to Profit(x) by setting prices p and receiving Profit(x∗(p)) = Profit(x).
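Both halves of this observation can be checked numerically for Example 1: the profit is non-concave as a function of the price (a midpoint dips below a chord for p between 4 and 8, where the function is convex), but concave as a function of the induced bundle, and the two views agree:

```python
def x_star(p):
    # Buyer's best response for v(x) = sqrt(x): argmax_x sqrt(x) - p*x = 1/(4 p^2).
    return 1.0 / (4.0 * p * p)

def profit_of_price(p):
    # Profit(p) = p * x*(p) - c(x*(p)) with c(x) = x, i.e. 1/(4p) - 1/(4p^2).
    x = x_star(p)
    return p * x - x

def profit_of_bundle(x):
    # The same profit written in terms of the induced bundle, via
    # p*(x) = 1/(2 sqrt(x)): Profit(x) = sqrt(x)/2 - x, which is concave.
    return x ** 0.5 / 2.0 - x

# NOT concave in the price: the midpoint lies below the chord on (4, 8).
a, b = 4.0, 8.0
assert profit_of_price((a + b) / 2) < (profit_of_price(a) + profit_of_price(b)) / 2

# Concave in the bundle: the midpoint lies above every chord we test.
for u, w in [(0.01, 0.2), (0.05, 0.9), (0.2, 1.0)]:
    assert profit_of_bundle((u + w) / 2) >= (profit_of_bundle(u) + profit_of_bundle(w)) / 2

# The two views agree: the price p = 2 induces the bundle x = 1/16.
assert abs(profit_of_price(2.0) - profit_of_bundle(x_star(2.0))) < 1e-12
```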
Our next key ingredient is a “tâtonnement-like” procedure that efficiently finds prices that approximately induce a target bundle x given only access to x∗(p), provided that the buyer's valuation function is Hölder continuous and strongly concave on the set of feasible bundles. Specifically, given a target bundle x, our procedure finds prices p such that |Profit(x∗(p)) − Profit(x)| ≤ ε. Thus, we can use our procedure to simulate approximate access to the function Profit(x). Our procedure requires only poly(d, 1/ε) queries to x∗(p). Using recent algorithms for bandit optimization due to Belloni et al. [BLNR15], we can maximize the retailer's profits efficiently even with only approximate access to Profit(x). When our algorithms receive noiseless feedback, we can improve the dependence on the approximation parameter ε to be only poly(log(1/ε)). A similar approach can be used to solve the optimal tolling problem assuming the unknown latency functions are convex and strictly increasing. As in the preceding example, the municipal authority's objective function (social cost) is not convex in the tolls, but is convex in the induced flow. Whenever the latency functions are strictly increasing, the potential function of the routing game is strongly convex, and so we can use our tâtonnement procedure to find tolls that induce target flows at equilibrium. Our results for maximizing profits and optimizing tolls follow from a more general method that allows the leader in a large class of continuous-action Stackelberg games to iteratively and efficiently maximize his objective function while only observing the follower's response. The class requires the following conditions: 1. The follower's utility function is strongly concave in her own actions and linear in the leader's actions. 2.
The leader's objective function is concave when written as a function of the follower's actions.[1] Finally, we show that our techniques are tolerant to two different kinds of noise. Our techniques work even if the follower only approximately maximizes his utility function, which corresponds to bounded but adversarially chosen noise, and also if unbounded but well-behaved (i.e., zero-mean and bounded-variance) noise is introduced into the system. To illustrate this noise tolerance, we show how to solve a simple d-dimensional principal-agent problem, in which the principal contracts for the production of d types of goods that are produced as a stochastic function of the agent's actions. 1.2 Related Work There is a very large literature in operations research on solving so-called “bilevel programming” problems, which are closely related to Stackelberg games. Similar to a Stackelberg game, the variables in a bilevel programming problem are partitioned into two “levels.” The second-level variables are constrained to be the optimal solution to some problem defined by the first-level variables. See [CMS05] for a survey of the bilevel programming literature. Unlike our work, this literature does not focus substantially on computational issues (many of the algorithms are not polynomial time). [KCP10] show that optimally solving certain discrete Stackelberg games is NP-hard. Even ignoring computational efficiency, this literature assumes knowledge of the objective function of the “follower.” Our work departs significantly from this literature by assuming that the leader has no knowledge of the follower's utility function. There are two other works that we are aware of that consider solving Stackelberg games when the follower's utility function is unknown.
Letchford, Conitzer, and Munagala [LCM09] give algorithms for learning optimal leader strategies with a number of queries that is polynomial in the number of pure strategies of the leader. In our setting, the leader has a continuous and high-dimensional action space, and so the results of [LCM09] do not apply. Blum, Haghtalab, and Procaccia [BHP14] consider the problem of learning optimal strategies for the leader in a class of security games. They exploit the structure of security games to learn optimal strategies for the leader in a number of queries that is polynomial in the representation size of the game (despite the fact that the number of pure strategies is exponential). The algorithm of [BHP14] is not computationally efficient; indeed, the problem they are solving is NP-hard. Neither of these techniques applies to our setting, and despite the fact that in our setting the leader has a continuous action space (which is exponentially large even under discretization), we are able to give an algorithm with both polynomial query complexity and polynomial running time. [Footnote 1: When the leader and follower are instead trying to minimize a cost function, replace “concave” with “convex” in the above.] There is also a body of related work on our main example of profit maximization. Specifically, there is a recent line of work on learning to predict from revealed preferences ([BV06, ZR12, BDM+14]). In this line, the goal is to predict buyer behavior, rather than to optimize seller prices. Following these works, Amin et al. [ACD+15] considered how to find profit-maximizing prices from revealed preferences in the special case in which the buyer has a linear utility function and a fixed budget. The technique of [ACD+15] is quite specialized to linear utility functions, and does not easily extend to more general utility functions in the profit maximization problem, nor to Stackelberg games in general.
“Revealed preferences” queries are quite similar to demand queries (see e.g. [BN09]). Demand queries are known to be sufficient to find welfare-optimal allocations, and more generally, to solve separable convex programs whose objective is social welfare. In contrast, our optimization problem is non-convex (and so the typical methodology by which demand queries are used does not apply), and our objective is not welfare. The profit maximization application can be viewed as a dynamic pricing problem in which the seller has no knowledge of the buyer's utilities. Babaioff et al. [BDKS15] study a version of this problem that is incomparable to our setting. On the one hand, [BDKS15] allow for distributions over buyers. On the other hand, [BDKS15] is limited to selling a single type of good, whereas our algorithms apply to selling bundles of many types of goods. There is also work related to our optimal tolling problem. In an elegant paper, Bhaskar et al. [BLSS14] study how one can iteratively find tolls such that a particular target flow is an equilibrium of a non-atomic routing game where the latency functions are unknown, which is a subproblem we also need to solve in the routing application. Their technique is specialized to routing games, and requires that the unknown latency functions have a known simple functional form (linear or low-degree convex polynomial). In contrast, our technique works quite generally, and in the special case of routing games does not require the latency functions to satisfy any known functional form (or even be convex). Our technique can also be implemented in a noise-tolerant way, although at the expense of having a polynomial dependence on the approximation parameter, rather than a polylogarithmic dependence (in the absence of noise, our method can also be implemented to depend only polylogarithmically on the approximation parameter).
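As a concrete illustration of the tâtonnement idea from Section 1.1, for the one-good Example 1 one can adjust the price in the direction of excess demand until the induced bundle matches a target. This is an illustrative stand-in for the paper's procedure; the step size and iteration count are arbitrary choices that happen to be stable for this instance:

```python
def x_star(p):
    # Demand of a buyer with v(x) = sqrt(x) facing price p (one good): 1/(4 p^2).
    return 1.0 / (4.0 * p * p)

def find_price_for_bundle(x_target, p0=1.0, eta=5.0, iters=500):
    # Tatonnement-style update: raise the price while demand exceeds the target,
    # lower it otherwise, using only demand queries x_star(p).
    p = p0
    for _ in range(iters):
        p += eta * (x_star(p) - x_target)
    return p

p = find_price_for_bundle(1.0 / 16.0)   # target bundle x = 1/16 is induced by p = 2
assert abs(x_star(p) - 1.0 / 16.0) < 1e-6
assert abs(p - 2.0) < 1e-4
```

Near the fixed point the update is a contraction (the step multiplies the error by roughly 1 − η/(2p³) = 11/16 here), which is why this simple loop converges; the paper's procedure achieves this in general under strong concavity of the valuation.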
Finally, our work is related in motivation to a recent line of work designed to study the sample complexity of auctions [BBHM08, CR14, HMR14, DHN14, CHN14, BMM15, MR15]. In this line of work, like in ours, the goal is to optimize an objective in a game-theoretic setting when the designer has no direct knowledge of participants' utility functions. 2 Preliminaries We will denote the set of non-negative real numbers by R_+ = {x ∈ R | x ≥ 0} and the set of positive real numbers by R_{>0} = {x ∈ R | x > 0}. For a set C ⊆ R^d and a norm ∥·∥, we will use ∥C∥ = sup_{x∈C} ∥x∥ to denote the diameter of C with respect to the norm ∥·∥. When the norm is unspecified, ∥·∥ will denote the Euclidean norm ∥·∥_2. An important concept we use is the interior of a set. In the following, we will use B_u to denote the unit ball centered at u for any u ∈ R^d. Definition 1. For any δ > 0 and any set C ⊆ R^d, the δ-interior Int_{C,δ} of C is the subset of C consisting of those points x such that the ball of radius δ centered at x is contained in C, that is: x + δB_0 = {x + δy | ∥y∥ ≤ 1} ⊆ C. The interior Int_C of C is the subset of C consisting of those points x for which there exists some δ′ > 0 such that x is in Int_{C,δ′}. We will also make use of the notions of Hölder continuity and Lipschitzness. Definition 2. A function f : C → R is (λ, β)-Hölder continuous for some λ, β ≥ 0 if for any x, y ∈ C, |f(x) − f(y)| ≤ λ∥x − y∥^β. A function f is λ-Lipschitz if it is (λ, 1)-Hölder continuous. 2.1 Projected Subgradient Descent A key ingredient in our algorithms is the ability to minimize a convex function (or maximize a concave function) given access only to the subgradients of the function (i.e.,
with a so-called “first-order” method). For concreteness, in this paper we do so using the projected subgradient descent algorithm. This algorithm has the property that it is noise-tolerant, which is important in some of our applications. However, we note that any other noise-tolerant first-order method could be used in place of gradient descent to obtain qualitatively similar results. In fact, we show in the appendix that for applications that do not require noise tolerance, we can use the Ellipsoid algorithm, which obtains an exponentially better dependence on the approximation parameter. Because we strive for generality, in the body of the paper we restrict attention to gradient descent. Let C ⊆ R^d be a compact and convex set that is contained in a Euclidean ball of radius R, centered at some point x_1 ∈ R^d. Let c : R^d → R be a convex “loss function.” Assume that c is also λ-Lipschitz, that is, |c(x) − c(y)| ≤ λ∥x − y∥_2. Let Π_C denote the projection operator onto C, Π_C(x) = argmin_{y∈C} ∥x − y∥. Projected subgradient descent is an iterative algorithm that starts at x_1 ∈ C and iterates the equations y_{t+1} = x_t − η·g_t, where g_t ∈ ∂c(x_t), and x_{t+1} = Π_C(y_{t+1}). The algorithm has the following guarantee. Theorem 3. The projected subgradient descent algorithm with η = R/(λ√T) satisfies c((1/T)·Σ_{t=1}^{T} x_t) ≤ min_{y∈C} c(y) + Rλ/√T. Alternatively, the algorithm finds a solution within ε of optimal after T = (Rλ/ε)² steps. 2.2 Strong Convexity We will make essential use of strong convexity and concavity of certain functions. Definition 4. Let φ : C → R be a function defined over a convex set C ⊆ R^d.
We say φ is σ-strongly convex if for every x, y ∈ C, φ(y) ≥ φ(x) + ⟨∇φ(x), y − x⟩ + (σ/2)·∥y − x∥²_2. We say φ is σ-strongly concave if (−φ) is σ-strongly convex. An extremely useful property of strongly convex functions is that any point in the domain that is close to the minimum in objective value is also close to the minimum in Euclidean distance. Lemma 5. Let φ : C → R be a σ-strongly convex function, and let x∗ = argmin_{x∈C} φ(x) be the minimizer of φ. Then, for any x ∈ C, ∥x − x∗∥²_2 ≤ (2/σ)·(φ(x) − φ(x∗)). Similarly, if φ is σ-strongly concave, and x∗ = argmax_{x∈C} φ(x), then for any x ∈ C, ∥x − x∗∥²_2 ≤ (2/σ)·(φ(x∗) − φ(x)). 2.3 Tools for Zeroth-Order Optimization We briefly discuss a useful tool for noisy zeroth-order optimization (also known as bandit optimization) by [BLNR15], which will be used as a blackbox algorithm in our framework. The important feature we require, satisfied by the algorithm from [BLNR15], is that the optimization procedure be able to tolerate a small amount of adversarial noise. Definition 6. Let C be a convex set in R^d. We say that C is well-rounded if there exist r, R > 0 such that B^d_2(r) ⊆ C ⊆ B^d_2(R) and R/r ≤ O(√d), where B^d_2(γ) denotes an ℓ_2 ball of radius γ in R^d. Let C be a well-rounded convex set in R^d and F, f : R^d → R be functions such that f is convex and F satisfies sup_{x∈C} |F(x) − f(x)| ≤ ε/d, (1) for some ε > 0. The function F can be seen as an oracle that gives a noisy evaluation of f at any point in C. Belloni et al.
[BLNR15] give an algorithm that finds a point x ∈ C that approximately optimizes the convex function f and only uses function evaluations of F at points x ∈ C. The set C only needs to be specified via a membership oracle that decides whether a point x is in C or not. Lemma 7 ([BLNR15], Corollary 1). Let C be a well-rounded set in R^d and f and F be functions that satisfy Equation (1). There is an algorithm ZOO(ε, C) (short for zeroth-order optimization) that makes Õ(d^4.5) calls to F and returns a point x ∈ C such that E[f(x)] ≤ min_{y∈C} f(y) + ε. (The notation Õ(·) hides the logarithmic dependence on d and 1/ε.) Naturally, the algorithm can also be used to approximately maximize a concave function. 3 Profit Maximization From Revealed Preferences 3.1 The Model and Problem Setup Consider the problem of maximizing profit from revealed preferences. In this problem, there is a producer who wants to sell a bundle x of d divisible goods to a consumer. The bundles are vectors x ∈ C, where C ⊆ R^d_+ is some set of feasible bundles that we assume is known to both the producer and the consumer. • The producer has an unknown cost function c : R^d_+ → R_+. He is allowed to set prices p ∈ R^d_+ for each good, and receives profit r(p) = ⟨p, x∗(p)⟩ − c(x∗(p)), where x∗(p) is the bundle of goods the consumer purchases at prices p. His goal is to find the profit-maximizing prices p∗ = argmax_{p ∈ R^d_+} r(p). • The consumer has a valuation function v : R^d_+ → R_+, which is unknown to the producer, and a quasi-linear utility function u(x, p) = v(x) − ⟨p, x⟩. Given prices p, the consumer will buy the bundle x∗(p) ∈ C that maximizes her utility. Thus, x∗(p) = argmax_{x∈C} u(x, p) = argmax_{x∈C} (v(x) − ⟨x, p⟩). We call x∗(p) the induced bundle at prices p.
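The first-order workhorse behind this framework, the projected subgradient descent method of Section 2.1 (Theorem 3), is short to implement. A sketch on a toy one-dimensional instance, using the step size η = R/(λ√T) and the averaged iterate from the theorem (the loss function and set below are illustrative):

```python
def projected_subgradient_descent(subgrad, project, x0, R, lam, T):
    # Projected subgradient descent as in Section 2.1, with step size
    # eta = R / (lam * sqrt(T)) from Theorem 3; returns the averaged iterate.
    eta = R / (lam * T ** 0.5)
    x, total = x0, 0.0
    for _ in range(T):
        total += x
        x = project(x - eta * subgrad(x))
    return total / T

# Minimize the 1-Lipschitz convex c(x) = |x - 0.3| over C = [0, 1].
subgrad = lambda x: 1.0 if x > 0.3 else -1.0   # a subgradient of |x - 0.3|
project = lambda x: min(max(x, 0.0), 1.0)       # projection onto [0, 1]
x_avg = projected_subgradient_descent(subgrad, project, 1.0, R=1.0, lam=1.0, T=10_000)
assert abs(x_avg - 0.3) < 0.02   # within R*lam/sqrt(T) = 0.01 plus slack
```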
In our model, in each time period $t$ the producer chooses prices $p^t$ and observes the resulting induced bundle $x^*(p^t)$ and profit $r(p^t)$. We would like to design an algorithm so that after a polynomial number of observations $T$, the profit $r(p^T)$ is nearly as large as the optimal profit $r(p^*)$. We will make several assumptions about the functions $c$ and $v$ and the set $C$, which we view as comparatively mild:

Assumption 3.1 (Set of Feasible Bundles). The set of feasible bundles $C \subseteq \mathbb{R}^d_+$ is convex and well-rounded. It also contains the set $(0, 1]^d \subseteq C$ (the consumer can simultaneously buy at least one unit of each good). Also, $\|C\|_2 \le \gamma$ (e.g. when $C = (0, 1]^d$, we have $\gamma = \sqrt{d}$). Lastly, $C$ is downward closed, in the sense that for any $x \in C$, there exists some $\delta \in (0, 1)$ such that $\delta x \in C$ (the consumer can always choose to buy less of each good).

Assumption 3.2 (Producer's Cost Function). The producer's cost function $c\colon \mathbb{R}^d_+ \to \mathbb{R}$ is convex and Lipschitz-continuous.

Assumption 3.3 (Consumer's Valuation Function). The consumer's valuation function $v\colon \mathbb{R}^d_+ \to \mathbb{R}$ is nondecreasing, Hölder-continuous, differentiable, and strongly concave over $C$. For any price vector $p \in \mathbb{R}^d_+$, the induced bundle $x^*(p) = \arg\max_{x \in C} u(x, p)$ is well-defined.

Note that without the assumption that the consumer's valuation function is concave and that the producer's cost function is convex, even with full information, their corresponding optimization problems would not be polynomial-time solvable. Our fourth assumption, homogeneity, is more restrictive, but as we observe, it is satisfied by a wide range of economically meaningful valuation functions, including CES and Cobb-Douglas utilities.
Informally, homogeneity is a scale-invariance condition: changing the units by which quantities of goods are measured should have a predictable multiplicative effect on the buyer's valuation function.

Definition 8. For $k \ge 0$, a function $v\colon \mathbb{R}^d_+ \to \mathbb{R}_+$ is homogeneous of degree $k$ if for every $x \in \mathbb{R}^d$ and for every $\sigma > 0$, $v(\sigma x) = \sigma^k v(x)$. The function $v$ is simply homogeneous if it is homogeneous of degree $k$ for some $k \ge 0$.

Our fourth assumption is simply that the buyer's valuation function is homogeneous of some degree:

Assumption 3.4. The consumer's valuation function $v$ is homogeneous.

3.2 An Overview of Our Solution

We present our solution in three main steps:

1. First, we show that the profit function can be expressed as a concave function $r(x)$ of the consumer's induced bundle $x$, rather than as a (non-concave) function of the prices.

2. Next, we show that for a given candidate bundle $x$, we can iteratively find prices $p$ such that $x \approx x^*(p)$. That is, in each time period $s$ we can set prices $p^s$ and observe the purchased bundle $x^*(p^s)$, and after a polynomial number of time periods $S$, we are guaranteed to find prices $p = p^S$ such that $x^*(p) \approx x$. Once we have found such prices, we can observe the profit $r(x^*(p)) \approx r(x)$, which allows us to simulate query access to $r(x)$.

3. Finally, we use our simulated query access to $r(x)$ as feedback to a bandit concave optimization algorithm, which iteratively queries bundles $x$ and quickly converges to the profit-maximizing bundle.

3.3 Expressing Profit as a Function of the Bundle

First, we carry out Step 1 above and demonstrate how to rewrite the profit function as a function of the bundle $x$, rather than as a function of the prices $p$. Note that for any given bundle $x \in C$, there might be multiple price vectors that induce $x$. We denote the set of price vectors that induce $x$ by
$$P^*(x) = \{p \in \mathbb{R}^d \mid x^*(p) = x\}.$$
We then define the profit of a bundle $x$ to be
$$r(x) = \max_{p \in P^*(x)} r(p) = \max_{p \in P^*(x)} \langle p, x\rangle - c(x).$$
Observe that the profit-maximizing price vector $p \in P^*(x)$ is the price vector that maximizes revenue $\langle p, x\rangle$, since the cost $c(x)$ depends only on $x$ and so is the same for every $p \in P^*(x)$. The following lemma characterizes the revenue-maximizing price vector that induces any fixed bundle $x \in C$.

Lemma 9. Let $\hat{x} \in C$ be a bundle, and $P^*(\hat{x})$ be the set of price vectors that induce bundle $\hat{x}$. Then the price vector $p = \nabla v(\hat{x})$ is the revenue-maximizing price vector that induces $\hat{x}$. That is, $\nabla v(\hat{x}) \in P^*(\hat{x})$, and for any price vector $p' \in P^*(\hat{x})$, $\langle p', \hat{x}\rangle \le \langle \nabla v(\hat{x}), \hat{x}\rangle$.

Proof. Observe that for any $x \in C$, the gradient of the consumer's utility $u(x, p) = v(x) - \langle p, x\rangle$ with respect to $x$ is $(\nabla v - p)$. If the prices are $p = \nabla v(\hat{x})$, then since $v$ is concave and $\nabla v(\hat{x}) - p = 0$, $\hat{x}$ is a maximizer of the consumer's utility function. Thus, we have $x^*(\nabla v(\hat{x})) = \hat{x}$, and so $\nabla v(\hat{x}) \in P^*(\hat{x})$.

Suppose that there exists another price vector $p' \in P^*(\hat{x})$ such that $p' \ne \nabla v(\hat{x})$. Since the function $u(\cdot, p')$ is concave in $x$ and $\hat{x} \in \arg\max_{x \in C} u(x, p')$, we know that for any $x' \in C$,
$$\langle \nabla v(\hat{x}) - p', x' - \hat{x}\rangle \le 0,$$
since otherwise there would be a feasible ascent direction, contradicting the assumption that $\hat{x}$ maximizes $u(x, p')$. By Assumption 3.1, we know there exists some $\delta < 1$ such that $\delta\hat{x} \in C$. Now consider $x' = \delta\hat{x}$; it follows that
$$\langle \nabla v(\hat{x}) - p', (1 - \delta)\hat{x}\rangle = (1 - \delta)\left(\langle \nabla v(\hat{x}), \hat{x}\rangle - \langle p', \hat{x}\rangle\right) \ge 0.$$
Therefore, $\langle p', \hat{x}\rangle \le \langle \nabla v(\hat{x}), \hat{x}\rangle$, as desired. This completes the proof.

With this characterization of the revenue-maximizing price vector, we can rewrite the profit as a function of $x$ in closed form for any $x \in C$:
$$r(x) = \langle \nabla v(x), x\rangle - c(x). \qquad (2)$$
Next, we show that $r(x)$ is a concave function of $x$ whenever the valuation $v$ satisfies Assumption 3.3 (concavity and differentiability) and Assumption 3.4 (homogeneity).

Theorem 10. If the consumer's valuation function $v$ is differentiable, homogeneous, and concave over $C$, the producer's profit function $r(x) = \langle \nabla v(x), x\rangle - c(x)$ is concave over the domain $C$.

To prove this result, we invoke Euler's theorem for homogeneous functions:

Theorem 11 (Euler's Theorem for Homogeneous Functions). Let $v\colon C \to \mathbb{R}_+$ be continuous and differentiable. Then $v$ is homogeneous of degree $k$ if and only if
$$\langle \nabla v(x), x\rangle = k \cdot v(x).$$

Proof of Theorem 10. Recall that $r(x) = \langle \nabla v(x), x\rangle - c(x)$. By the assumption that $v$ is continuous, differentiable, and homogeneous of some degree $k \ge 0$, Euler's theorem gives $r(x) = k\,v(x) - c(x)$. Because, by assumption, $v(x)$ is concave and $c(x)$ is convex, we conclude that $r(x)$ is concave.

Finally, we note that many important and well-studied classes of valuation functions satisfy our assumptions, namely differentiability, strong concavity, and homogeneity. Two classes of interest include:

• Constant Elasticity of Substitution (CES). Valuation functions of the form
$$v(x) = \left(\sum_{i=1}^d \alpha_i x_i^{\rho}\right)^{\beta},$$
where $\alpha_i > 0$ for every $i \in [d]$ and $\rho, \beta > 0$ such that $\rho < 1$ and $\beta\rho < 1$. These functions are known to be differentiable, Hölder-continuous, and strongly concave over the set $(0, H]^d$ (see Appendix B.1 for a proof).
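The identities driving Theorem 10 and Theorem 11 are easy to spot-check numerically. The sketch below (ours, with arbitrary illustrative parameters) uses a CES-style valuation $v(x) = (\sum_i x_i^{1/2})^{4/5}$, homogeneous of degree $k = 0.4$, and a convex quadratic cost:

```python
# Spot check of Euler's identity <grad v(x), x> = k*v(x) (Theorem 11) and
# of midpoint concavity of r(x) = k*v(x) - c(x) (Theorem 10), using an
# illustrative CES valuation and a convex quadratic cost (not the paper's).

RHO, BETA = 0.5, 0.8
K = RHO * BETA  # degree of homogeneity

def v(x):
    return sum(xi ** RHO for xi in x) ** BETA

def c(x):
    return 0.3 * sum(xi * xi for xi in x)  # convex cost

def grad_v(x, h=1e-6):
    # central finite differences, good enough for a sanity check
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((v(xp) - v(xm)) / (2 * h))
    return g

def r(x):
    return K * v(x) - c(x)

x, y = [0.7, 1.3], [0.4, 0.9]
# Euler's identity
assert abs(sum(gi * xi for gi, xi in zip(grad_v(x), x)) - K * v(x)) < 1e-4
# midpoint concavity of the profit function r
mid = [(xi + yi) / 2 for xi, yi in zip(x, y)]
assert r(mid) >= (r(x) + r(y)) / 2 - 1e-12
print("Euler identity and midpoint concavity hold on the sample points")
```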
Observe that $v(\sigma x) = \left(\sum_{i=1}^d \alpha_i (\sigma x_i)^{\rho}\right)^{\beta} = \sigma^{\rho\beta}\left(\sum_{i=1}^d \alpha_i x_i^{\rho}\right)^{\beta} = \sigma^{\rho\beta} v(x)$, so these functions are homogeneous of degree $k = \rho\beta$.

• Cobb-Douglas. These are valuation functions of the form
$$v(x) = \prod_{i=1}^d x_i^{\alpha_i},$$
where $\alpha_i > 0$ for every $i \in [d]$ and $\sum_{i=1}^d \alpha_i < 1$. These functions are known to be differentiable, Hölder-continuous, and strongly concave over the set $(0, H]^d$ (see Appendix B.1 for a proof). Observe that $v(\sigma x) = \prod_{i=1}^d (\sigma x_i)^{\alpha_i} = \left(\prod_{i=1}^d \sigma^{\alpha_i}\right)\left(\prod_{i=1}^d x_i^{\alpha_i}\right) = \sigma^{\sum_{i=1}^d \alpha_i} \cdot v(x)$, so these functions are homogeneous of degree $k = \sum_{i=1}^d \alpha_i$.

3.4 Converting Bundles to Prices

Next, we carry out Step 2 and show how to find prices $\hat{p}$ to induce a given bundle $\hat{x}$. Specifically, the producer has a target bundle $\hat{x} \in C$ in mind and would like to learn a price vector $\hat{p} \in \mathbb{R}^d_+$ such that the induced bundle $x^*(\hat{p})$ is "close" to $\hat{x}$; that is, $\|\hat{x} - x^*(\hat{p})\|_2 \le \varepsilon$ for some $\varepsilon > 0$. Our solution will actually only allow us to produce a price vector $\hat{p}$ such that $\hat{x}$ and $x^*(\hat{p})$ are "close in value," that is, $|u(\hat{x}, \hat{p}) - u(x^*(\hat{p}), \hat{p})| \le \delta$. However, by strong concavity of the valuation function, this will be enough to guarantee that the actual bundle is close to the target bundle. The following is just an elaboration of Assumption 3.3:

Assumption 3.5 (Quantitative version of Assumption 3.3). The valuation function $v$ is both

1. $(\lambda_{\mathrm{val}}, \beta)$-Hölder continuous over the domain $C$ with respect to the $\ell_2$ norm: for all $x, x' \in C$, $|v(x) - v(x')| \le \lambda_{\mathrm{val}} \cdot \|x - x'\|_2^{\beta}$, for some constants $\lambda_{\mathrm{val}} \ge 1$ and $\beta \in (0, 1]$; and

2.
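The two homogeneity degrees computed above can be confirmed directly; the sketch below (illustrative parameters of our choosing) checks $v(\sigma x) = \sigma^k v(x)$ for both families:

```python
# Homogeneity degrees of the two example families (Definition 8), with
# arbitrary illustrative parameters: CES has degree rho*beta and
# Cobb-Douglas has degree sum_i alpha_i.

def ces(x, alphas=(1.0, 2.0), rho=0.5, beta=0.8):
    return sum(a * xi ** rho for a, xi in zip(alphas, x)) ** beta

def cobb_douglas(x, alphas=(0.3, 0.4)):
    out = 1.0
    for a, xi in zip(alphas, x):
        out *= xi ** a
    return out

x = [0.7, 1.3]
for scale in [0.5, 2.0, 3.7]:
    sx = [scale * xi for xi in x]
    # CES: degree rho*beta = 0.4; Cobb-Douglas: degree 0.3 + 0.4 = 0.7
    assert abs(ces(sx) - scale ** (0.5 * 0.8) * ces(x)) < 1e-9
    assert abs(cobb_douglas(sx) - scale ** (0.3 + 0.4) * cobb_douglas(x)) < 1e-9
print("degrees confirmed: CES = 0.4, Cobb-Douglas = 0.7")
```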
$\sigma$-strongly concave over the interior of $C$: for all $x, x' \in C$,
$$v(x') \le v(x) + \langle \nabla v(x), x' - x\rangle - \frac{\sigma}{2}\|x - x'\|_2^2.$$

Our algorithm LearnPrice$(\hat{x}, \varepsilon)$ is given as Algorithm 1. We will prove:

Theorem 12. Let $\hat{x} \in C$ be a target bundle and $\varepsilon > 0$. Then LearnPrice$(\hat{x}, \varepsilon)$ outputs a price vector $\hat{p}$ such that the induced bundle satisfies $\|\hat{x} - x^*(\hat{p})\| \le \varepsilon$, and the number of observations it needs is no more than
$$T = d \cdot \mathrm{poly}\left(\frac{1}{\varepsilon}, \frac{1}{\sigma}, \gamma, \lambda_{\mathrm{val}}\right).$$

Algorithm 1: Learning the price vector to induce a target bundle: LearnPrice$(\hat{x}, \varepsilon)$
  Input: a target bundle $\hat{x} \in C$ and target accuracy $\varepsilon$
  Initialize: restricted price space $P = \{p \in \mathbb{R}^d_+ \mid \|p\| \le \sqrt{d}L\}$, where $L = (\lambda_{\mathrm{val}})^{1/\beta}\left(\frac{4}{\varepsilon^2\sigma}\right)^{(1-\beta)/\beta}$;
    $p^1_j = 0$ for every good $j \in [d]$; $T = \frac{32\,d\,L^2\gamma^2}{\varepsilon^4\sigma^2}$; $\eta = \frac{\sqrt{2}\gamma}{L\sqrt{dT}}$
  For $t = 1, \ldots, T$:
    Observe the bundle $x^*(p^t)$ purchased by the consumer
    Update the price vector with projected subgradient descent:
      $\tilde{p}^{t+1}_j = p^t_j - \eta\left(\hat{x}_j - x^*(p^t)_j\right)$ for each $j \in [d]$, and $p^{t+1} = \Pi_P\left[\tilde{p}^{t+1}\right]$
  Output: $\hat{p} = \frac{1}{T}\sum_{t=1}^T p^t$.

To analyze LearnPrice$(\hat{x}, \varepsilon)$, we start by defining the following convex program, whose solution is the target bundle $\hat{x}$:
$$\max_{x \in C} v(x) \qquad (3)$$
$$\text{such that } x_j \le \hat{x}_j \text{ for every good } j \in [d]. \qquad (4)$$
Since $v$ is non-decreasing, it is not hard to see that $\hat{x}$ is the optimal solution. The partial Lagrangian of this program is defined as
$$\mathcal{L}(x, p) = v(x) - \sum_{j=1}^d p_j (x_j - \hat{x}_j),$$
where $p_j$ is the dual variable for each constraint in (4), interpreted as the price of good $j$. By strong duality, we know that there is a value OPT such that
$$\max_{x \in C} \min_{p \in \mathbb{R}^d_+} \mathcal{L}(x, p) = \min_{p \in \mathbb{R}^d_+} \max_{x \in C} \mathcal{L}(x, p) = \mathrm{OPT} = v(\hat{x}). \qquad (5)$$
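To make the dynamics of Algorithm 1 concrete, here is a minimal pure-Python simulation of ours under an assumed separable valuation $v(x) = \sum_i \sqrt{x_i}$ over the box $C = [0.01, 1]^d$ (both hypothetical choices, not the paper's setting). The consumer's best response then has a closed form, but the learner sees only purchased bundles; the step size and iteration count are demo values rather than the constants of Theorem 12:

```python
# Minimal simulation of LearnPrice (Algorithm 1): projected subgradient
# descent on prices, driven only by the consumer's purchased bundles.
# Assumed setup (illustrative): v(x) = sum_i sqrt(x_i), C = [0.01, 1]^d.

LO, HI = 0.01, 1.0

def consumer_buys(p):
    # best response to prices p: argmax_x sqrt(x_i) - p_i * x_i on the box
    # (closed form x_i = 1 / (4 p_i^2), clipped); hidden from the learner
    return [min(HI, max(LO, 1.0 / (4.0 * pi * pi))) if pi > 0 else HI
            for pi in p]

def learn_price(target, T=5000, eta=0.05, p_max=10.0):
    d = len(target)
    p = [0.0] * d
    avg = [0.0] * d
    for _ in range(T):
        bought = consumer_buys(p)
        for j in range(d):
            # subgradient step on the Lagrange dual, projected to [0, p_max]
            p[j] = min(p_max, max(0.0, p[j] - eta * (target[j] - bought[j])))
            avg[j] += p[j] / T
    return avg

target = [0.25, 0.64]
p_hat = learn_price(target)
induced = consumer_buys(p_hat)
assert all(abs(a - b) < 0.05 for a, b in zip(induced, target))
# for this valuation the unique inducing prices are grad v(target) = (1, 0.625),
# and p_hat converges to (approximately) that vector
```

Note how the update mirrors the algorithm exactly: when the consumer buys more of good $j$ than targeted, its price rises, and vice versa.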
We know that $\mathrm{OPT} = v(\hat{x})$ because $\hat{x}$ is the optimal solution to (3)-(4). We can also define the Lagrange dual function $g\colon \mathbb{R}^d \to \mathbb{R}$ as
$$g(p) = \max_{x \in C} \mathcal{L}(x, p).$$
We will show that an approximately optimal price vector for $g$ approximately induces the target bundle $\hat{x}$, and that LearnPrice$(\hat{x}, \varepsilon)$ uses projected subgradient descent to find such a solution to $g$. In order to reason about the convergence rate of the algorithm, we restrict the space of prices to the following bounded set:
$$P = \left\{p \in \mathbb{R}^d_+ \;\middle|\; \|p\|_2 \le \sqrt{d}\,(\lambda_{\mathrm{val}})^{1/\beta}\left(\frac{4}{\varepsilon^2\sigma}\right)^{(1-\beta)/\beta}\right\}. \qquad (6)$$
First, we show that the minimax value of the Lagrangian remains close to OPT even if we restrict the prices to the set $P$.

Lemma 13. There exists a value R-OPT such that
$$\max_{x \in C} \min_{p \in P} \mathcal{L}(x, p) = \min_{p \in P} \max_{x \in C} \mathcal{L}(x, p) = \text{R-OPT}.$$
Moreover, $v(\hat{x}) \le \text{R-OPT} \le v(\hat{x}) + \frac{\varepsilon^2\sigma}{4}$.

Proof. Since $C$ and $P$ are both convex and $P$ is also compact, the minimax theorem [Sio58] shows that there is a value R-OPT such that
$$\max_{x \in C} \min_{p \in P} \mathcal{L}(x, p) = \min_{p \in P} \max_{x \in C} \mathcal{L}(x, p) = \text{R-OPT}. \qquad (7)$$
Since $P \subseteq \mathbb{R}^d_+$, by (5) we have $\text{R-OPT} \ge v(\hat{x})$. Thus, we only need to show that $\text{R-OPT} \le v(\hat{x}) + \alpha$, where $\alpha = \varepsilon^2\sigma/4$. Let $(x^\bullet, p^\bullet)$ be a pair of minimax strategies for (7); that is,
$$x^\bullet \in \arg\max_{x \in C} \min_{p \in P} \mathcal{L}(x, p) \quad \text{and} \quad p^\bullet \in \arg\min_{p \in P} \max_{x \in C} \mathcal{L}(x, p).$$
It suffices to show that $\mathcal{L}(x^\bullet, p^\bullet) \le v(\hat{x}) + \alpha$. Suppose not; then we have
$$v(\hat{x}) + \alpha < \mathcal{L}(x^\bullet, p^\bullet) = \min_{p \in P} \mathcal{L}(x^\bullet, p) = v(x^\bullet) - \max_{p \in P} \langle p, x^\bullet - \hat{x}\rangle \le v(x^\bullet).$$
Now consider the bundle $y$ such that $y_j = \max\{x^\bullet_j, \hat{x}_j\}$ for each $j \in [d]$. It is clear that $v(y) \ge v(x^\bullet) > v(\hat{x})$.
Let $L = (\lambda_{\mathrm{val}})^{1/\beta}\left(\frac{4}{\varepsilon^2\sigma}\right)^{(1-\beta)/\beta}$. We can then construct the following price vector $p' \in P$: $p'_j = L$ for each good $j$ with $x^\bullet_j > \hat{x}_j$, and $p'_j = 0$ for all other goods. Since we assume that $v$ is $(\lambda_{\mathrm{val}}, \beta)$-Hölder continuous with respect to the $\ell_2$ norm, we have
$$v(x^\bullet) - v(\hat{x}) \le v(y) - v(\hat{x}) \le \lambda_{\mathrm{val}}\|y - \hat{x}\|_2^{\beta} \le \lambda_{\mathrm{val}}\|y - \hat{x}\|_1^{\beta}.$$
It follows that
$$v(\hat{x}) + \alpha < \mathcal{L}(x^\bullet, p^\bullet) \le \mathcal{L}(x^\bullet, p') = v(x^\bullet) - \langle p', x^\bullet - \hat{x}\rangle = v(x^\bullet) - \sum_{j\colon x^\bullet_j > \hat{x}_j} L\,(y_j - \hat{x}_j) = v(x^\bullet) - L\|y - \hat{x}\|_1 \le v(y) - L\|y - \hat{x}\|_2.$$
First suppose that $\|y - \hat{x}\|_2 \ge 1$ or $\beta = 1$, so that $\|y - \hat{x}\|_2^{\beta} \le \|y - \hat{x}\|_2$. Since $L \ge \lambda_{\mathrm{val}}$, this means
$$v(\hat{x}) + \alpha < v(y) - L\|y - \hat{x}\|_2^{\beta} \le v(y) - \lambda_{\mathrm{val}}\|y - \hat{x}\|_2^{\beta} \le v(\hat{x}),$$
a contradiction. Next suppose that $\|y - \hat{x}\|_2 < 1$ and $\beta \in (0, 1)$. We also have that
$$\alpha < v(y) - v(\hat{x}) - L\|y - \hat{x}\|_2 \le \lambda_{\mathrm{val}}\|y - \hat{x}\|_2^{\beta} - L\|y - \hat{x}\|_2 = \lambda_{\mathrm{val}}\|y - \hat{x}\|_2^{\beta}\left(1 - \frac{L}{\lambda_{\mathrm{val}}}\|y - \hat{x}\|_2^{1-\beta}\right).$$
Since $\alpha > 0$, the factor $\left(1 - \frac{L}{\lambda_{\mathrm{val}}}\|y - \hat{x}\|_2^{1-\beta}\right)$ must also be positive, and so $\|y - \hat{x}\|_2 < \left(\frac{\lambda_{\mathrm{val}}}{L}\right)^{1/(1-\beta)}$. By the choice of our $L$,
$$\alpha < \lambda_{\mathrm{val}}\left(\frac{\lambda_{\mathrm{val}}}{L}\right)^{\beta/(1-\beta)} = \frac{\varepsilon^2\sigma}{4} = \alpha,$$
which is a contradiction. Therefore, the minimax value of (7) is no more than $v(\hat{x}) + \alpha$.

The preceding lemma shows that $\hat{x}$ is a primal optimal solution (even when prices are restricted).
Therefore, if $\hat{p} = \arg\min_{p \in P} g(p)$ are the prices that minimize the Lagrangian dual, we must have that $\hat{x} = x^*(\hat{p})$ is the induced bundle at prices $\hat{p}$. The next lemma shows that if $p'$ are prices that approximately minimize the Lagrangian dual, then the induced bundle $x^*(p')$ is close to $\hat{x}$.

Lemma 14. Let $p' \in P$ be a price vector such that $g(p') \le \min_{p \in P} g(p) + \alpha$, and let $x' = x^*(p')$ be the induced bundle at prices $p'$. Then $x'$ satisfies $\|x' - \hat{x}\| \le 2\sqrt{\alpha/\sigma}$.

Proof. Let R-OPT denote the Lagrangian value when we restrict the price space to $P$. From Lemma 13, we have that $\text{R-OPT} = \min_{p \in P} g(p) \in [v(\hat{x}), v(\hat{x}) + \alpha]$. By assumption, we also have
$$g(p') = \mathcal{L}(x', p') \le \text{R-OPT} + \alpha \le v(\hat{x}) + 2\alpha.$$
Note that $\mathcal{L}(\hat{x}, p') = v(\hat{x}) - \langle p', \hat{x} - \hat{x}\rangle = v(\hat{x})$ and $x'$ is the maximizer of $\mathcal{L}(\cdot, p')$, so it follows that
$$0 \le \mathcal{L}(x', p') - \mathcal{L}(\hat{x}, p') \le 2\alpha.$$
Since we know that $v$ is a $\sigma$-strongly concave function over $C$, the utility function $u(\cdot, p') = v(\cdot) - \langle p', \cdot\rangle$ is also $\sigma$-strongly concave over $C$. Then, by Lemma 5 and the argument above,
$$2\alpha \ge \mathcal{L}(x', p') - \mathcal{L}(\hat{x}, p') = u(x', p') - u(\hat{x}, p') \ge \frac{\sigma}{2}\|x' - \hat{x}\|^2. \qquad (8)$$
This means $\|x' - \hat{x}\| \le 2\sqrt{\alpha/\sigma}$.

Based on Lemma 14, we can reduce the problem of finding the appropriate prices to induce the target bundle to that of finding an approximately optimal solution to $\arg\min_{p \in P} g(p)$. Even though the function $g$ is unknown to the producer (because $v$ is unknown), we can still approximately optimize the function using projected subgradient descent, provided we have access to subgradients of $g$.
(Here we use the fact that if $f(\cdot)$ is a $\sigma$-strongly concave function over $C$ and $g(\cdot)$ is a concave function over $C$, then $(f+g)(\cdot)$ is a $\sigma$-strongly concave function over $C$.)

The next lemma shows that the bundle $x^*(p)$ purchased by the consumer gives a subgradient of the Lagrange dual objective function at $p$.

Lemma 15. Let $p$ be any price vector, and $x^*(p)$ be the induced bundle. Then $(\hat{x} - x^*(p)) \in \partial g(p)$.

Proof. Given $x' = \arg\max_{x \in C} \mathcal{L}(x, p)$, we know by the envelope theorem that a subgradient of $g$ can be obtained as
$$\frac{\partial g}{\partial p_j} = \hat{x}_j - x'_j \quad \text{for each } j \in [d].$$
Note that $x'$ corresponds to the induced bundle at $p$, because
$$x' = \arg\max_{x \in C} \mathcal{L}(x, p) = \arg\max_{x \in C}\left[v(x) - \langle p, x - \hat{x}\rangle\right] = \arg\max_{x \in C}\left[v(x) - \langle p, x\rangle\right] = \arg\max_{x \in C} u(x, p) = x^*(p).$$
Therefore, the vector $(\hat{x} - x^*(p))$ is a subgradient of $g$ at the price vector $p$.

Now that we know the subgradients of the function $g$ at $p$ can be easily obtained from the induced bundle purchased by the consumer, it remains to observe that Algorithm LearnPrice$(\hat{x}, \varepsilon)$ is performing projected subgradient descent on the Lagrange dual objective, and to analyze its convergence.

Proof of Theorem 12. By Lemma 14, it suffices to show that the price vector $\hat{p}$ returned by projected subgradient descent satisfies
$$g(\hat{p}) \le \min_{p \in P} g(p) + \frac{\varepsilon^2\sigma}{4}.$$
Note that the set $P$ is contained in the $\ell_2$ ball centered at $0$ with radius $\sqrt{d}L$. Also, for each $p^t$, the subgradient we obtain is bounded:
$$\|\hat{x} - x^*(p^t)\| \le \sqrt{\|\hat{x}\|^2 + \|x^*(p^t)\|^2} \le \sqrt{2}\gamma,$$
since $\|C\| \le \gamma$.
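Lemma 15 can be checked numerically: below (our sketch, using the hypothetical separable valuation $v(x) = \sum_i \sqrt{x_i}$ over the box $[0.01, 1]^d$), a finite-difference gradient of the dual $g(p) = \max_x v(x) - \langle p, x - \hat{x}\rangle$ matches $\hat{x} - x^*(p)$:

```python
# Finite-difference check of Lemma 15 (illustrative): the purchased bundle
# yields a subgradient of the Lagrange dual, grad g(p) = xhat - x*(p).
import math

LO, HI = 0.01, 1.0
xhat = [0.25, 0.64]

def best_response(p):
    # argmax of sqrt(x_i) - p_i * x_i per coordinate, clipped to the box
    return [min(HI, max(LO, 1.0 / (4.0 * pi * pi))) if pi > 0 else HI
            for pi in p]

def g(p):
    # Lagrange dual: max_x v(x) - <p, x - xhat>
    x = best_response(p)
    return sum(math.sqrt(xi) for xi in x) - sum(
        pi * (xi - xh) for pi, xi, xh in zip(p, x, xhat))

p = [0.8, 0.7]
x_star = best_response(p)
h = 1e-6
for j in range(len(p)):
    pp, pm = list(p), list(p)
    pp[j] += h
    pm[j] -= h
    numeric = (g(pp) - g(pm)) / (2 * h)
    assert abs(numeric - (xhat[j] - x_star[j])) < 1e-4
print("gradient of g at p equals xhat - x*(p)")
```

The envelope theorem is exactly what makes this work: the dependence of the maximizer on $p$ contributes nothing to the derivative at an interior optimum.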
Since we set
$$T = \frac{32\,d\,L^2\gamma^2}{\varepsilon^4\sigma^2}, \qquad \eta = \frac{\sqrt{2}\gamma}{L\sqrt{dT}},$$
we can apply the guarantee of projected subgradient descent from Theorem 3, which gives
$$g(\hat{p}) - \min_{p \in P} g(p) \le \frac{\sqrt{2d}\,L\gamma}{\sqrt{T}} = \frac{\varepsilon^2\sigma}{4}.$$
By Lemma 14, we know that the resulting bundle $x^*(\hat{p})$ satisfies $\|\hat{x} - x^*(\hat{p})\| \le \varepsilon$.

Remark 16. Since noise tolerance is not required in this setting, it is possible to approximately induce the target bundle using a number of observations that scales only poly-logarithmically in $1/\varepsilon$. We give an ellipsoid-based variant of LearnPrice in Appendix D that achieves this guarantee.

3.5 Profit Maximization

Finally, we show how to combine the algorithm LearnPrice with the zeroth-order optimization algorithm ZOO to find an approximately profit-maximizing price vector. At a high level, we will use ZOO to (approximately) optimize the profit function $r$ over the bundle space, and use LearnPrice to (approximately) induce the optimal bundle.

Before we show how to use ZOO, we verify that if we run the algorithm LearnPrice to obtain prices $\hat{p}$ that approximately induce the desired bundle $x$, and observe the revenue generated from prices $\hat{p}$, we will indeed obtain an approximation to the profit function $r(x)$. Recall from Lemma 9 that the profit function can be written as a function of the bundle, $r(x) = \langle \nabla v(x), x\rangle - c(x)$, as long as the producer uses the profit-maximizing price vector $\nabla v(x)$ to induce the bundle $x$. However, the price vector returned by LearnPrice might not be the optimal price vector for the induced bundle. In order to have an estimate of the optimal profit for each bundle, we need to guarantee that the prices returned by LearnPrice are the profit-maximizing ones. To do that, we will restrict the bundle space over which ZOO optimizes to the interior of $C$.
Now we show that for every bundle in the interior of $C$, there is a unique price vector that induces that bundle; these prices are therefore the profit-maximizing prices inducing that bundle.

Lemma 17. Let $x'$ be a bundle in $\mathrm{Int}\,C$. Then $\nabla v(x')$ is the unique price vector that induces $x'$.

Proof. Let $p'$ be a price vector such that $x^*(p') = x'$. Since $\mathrm{Int}\,C \subseteq C$, we must have
$$x' = \arg\max_{x \in \mathrm{Int}\,C}\left[v(x) - \langle p', x\rangle\right].$$
By the definition of $\mathrm{Int}\,C$, we know that there exists some $\delta > 0$ such that the ball $\delta B_{x'}$ is contained in $C$. Now consider the function $f\colon \mathbb{R}^d \to \mathbb{R}$ with $f(x) = u(x, p')$. It follows that $x'$ is a local optimum of $f$ in the neighborhood $\delta B_{x'}$. Since $f$ is continuously differentiable, we must have $\nabla f(x') = 0$ by first-order conditions. Therefore, we must have $\nabla f(x') = \nabla v(x') - p' = 0$, which implies that $p' = \nabla v(x')$.

Instead of using the interior itself, we will use a simple and efficiently computable proxy for the interior, obtained by slightly shifting and contracting $C$.

Claim 18. For any $0 < \delta < 1/2$, let the set $C_\delta = (1 - 2\delta)C + \delta\mathbf{1}$, where $\mathbf{1}$ denotes the $d$-dimensional vector with $1$ in each coordinate. Given Assumption 3.1, $C_\delta$ is contained in the $(\delta/2)$-interior of $C$; that is, $C_\delta \subseteq \mathrm{Int}_{C,\delta/2}$.

Proof. Our goal is to show that $C_\delta + (\delta/2)B_0 \subseteq C$, where $B_0$ denotes the unit ball centered at $0$. Any point in $C_\delta + (\delta/2)B_0$ can be written as $x' + (\delta/2)y'$ for $x' \in C_\delta$ and $y' \in B_0$. We will show that $x' + (\delta/2)y' \in C$. Since $x' \in C_\delta$, there exists $x \in C$ such that $x' = (1 - 2\delta)x + \delta\mathbf{1}$. Since $y' \in B_0$, there exists $y \in (0, 1]^d$ such that $\frac{1}{2}y' = 2y - \mathbf{1}$. To see this, note that $(0, 1]^d$ contains a ball of radius $1/4$ whose center is $\frac{1}{2}\cdot\mathbf{1}$.
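Claim 18 is easy to visualize in the special case $C = (0, 1]^d$; the spot check below (ours, with an arbitrary $\delta$) verifies that the extreme points of $C_\delta$, and hence by convexity all of $C_\delta$, stay at distance at least $\delta/2$ from the boundary of the box:

```python
# Numerical spot check of Claim 18 for the box C = (0, 1]^d (illustrative
# special case): points of C_delta = (1 - 2*delta)*C + delta*1 keep
# distance >= delta/2 from the boundary of C.
import itertools

d, delta = 3, 0.1
for corner in itertools.product([1e-9, 1.0], repeat=d):  # extreme points of C
    shifted = [(1 - 2 * delta) * c + delta for c in corner]
    # distance to the boundary of the box is min_i min(x_i, 1 - x_i);
    # Claim 18 needs this to be at least delta/2 (here 0.05)
    dist = min(min(x, 1 - x) for x in shifted)
    assert dist >= delta / 2 - 1e-9
print("C_delta lies in the (delta/2)-interior of the box")
```

Checking only extreme points suffices because the distance-to-boundary function of a box is concave over the box, so its minimum over the convex set $C_\delta$ is attained at an extreme point.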
By Assumption 3.1, $C$ contains $(0, 1]^d$, so $y \in C$. Therefore, for some $x, y \in C$,
$$x' + (\delta/2)y' = (1 - 2\delta)x + \delta\mathbf{1} + 2\delta y - \delta\mathbf{1} = (1 - 2\delta)x + 2\delta y \in C,$$
where membership in $C$ follows from convexity of $C$. Hence, $x' + (\delta/2)y' \in C$, as desired.

We will let ZOO operate on the set $C_\delta$ instead of $C$, and we first want to show that there is little loss in profit if we restrict the induced bundle to $C_\delta$. The following is just a formal, quantitative version of Assumption 3.2:

Assumption 3.6 (Quantitative version of Assumption 3.2). The producer's cost function $c\colon \mathbb{R}^d_+ \to \mathbb{R}$ is $\lambda_{\mathrm{cost}}$-Lipschitz over the domain $C$ with respect to the $\ell_2$ norm: for $x, x' \in C$, $|c(x) - c(x')| \le \lambda_{\mathrm{cost}}\|x - x'\|$.

Given this assumption, the profit function is also Hölder continuous.

Lemma 19. For any $x, y \in C$ such that $\|x - y\| \le 1$, the following holds:
$$|r(x) - r(y)| \le (\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}})\|x - y\|^{\beta}.$$

Proof. Recall that the revenue component of the profit function is $\langle \nabla v(x), x\rangle$. Since $v$ is a concave and homogeneous function, its homogeneity degree satisfies $k \le 1$ (see Appendix B for a proof). By Euler's theorem (Theorem 11),
$$\langle \nabla v(x), x\rangle = k \cdot v(x). \qquad (9)$$
Since $v$ is $(\lambda_{\mathrm{val}}, \beta)$-Hölder continuous over $C$, by Equation (9) we know that the revenue $\langle \nabla v(x), x\rangle$ is also $\lambda_{\mathrm{val}}$-Hölder continuous over $C$.
Furthermore, since the cost function $c$ is $\lambda_{\mathrm{cost}}$-Lipschitz over $C$, the profit function satisfies the following: for any $x, y \in C$ such that $\|x - y\| \le 1$, we have
$$|r(x) - r(y)| \le |\langle \nabla v(x), x\rangle - \langle \nabla v(y), y\rangle| + |c(x) - c(y)| \le \lambda_{\mathrm{val}}\|x - y\|^{\beta} + \lambda_{\mathrm{cost}}\|x - y\|.$$
Since $\|x - y\| \le 1$, we know that $\|x - y\|^{\beta} \ge \|x - y\|$, so
$$|r(x) - r(y)| \le (\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}})\|x - y\|^{\beta}.$$

We can bound the difference between the optimal profits in $C_\delta$ and $C$.

Lemma 20. For any $0 < \delta \le \frac{1}{3\gamma}$,
$$\max_{x \in C} r(x) - \max_{x \in C_\delta} r(x) \le (3\delta\gamma)^{\beta}(\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}}).$$

Proof. Let $x^* \in \arg\max_{x \in C} r(x)$. We know that $(1 - 2\delta)x^* + \delta\mathbf{1} \in C_\delta$, and
$$\|x^* - (1 - 2\delta)x^* - \delta\mathbf{1}\| \le \delta\|2x^* - \mathbf{1}\| \le 3\delta\gamma.$$
By Lemma 19, we then have
$$r(x^*) - r((1 - 2\delta)x^* + \delta\mathbf{1}) \le (3\delta\gamma)^{\beta}(\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}}).$$
Furthermore, we also know $\max_{x \in C_\delta} r(x) \ge r((1 - 2\delta)x^* + \delta\mathbf{1})$, which gives the claimed bound.

Now we focus on how to optimize the profit function $r$ over the set $C_\delta$. Recall that the algorithm ZOO requires approximate evaluations of the profit function $r$. Such evaluations can be implemented using our algorithm LearnPrice: for each bundle $x \in C_\delta$, run LearnPrice$(x, \varepsilon)$ to obtain a price vector $p$ such that $\|x - x^*(p)\| \le \varepsilon$; the resulting profit $r(x^*(p))$ then serves as an approximate evaluation of $r(x)$:
$$|r(x) - r(x^*(p))| \le (\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}})\varepsilon^{\beta}.$$
Algorithm 2: Learning the price vector to optimize profit: Opro$(C, \alpha)$
  Input: feasible bundle space $C$ and target accuracy $\alpha$
  Initialize:
    $\varepsilon = \min\left\{\left(\frac{\alpha}{\lambda(d + 1 + (12\gamma)^{\beta})}\right)^{1/\beta}, \frac{1}{12\gamma}\right\}$, $\delta = 4\varepsilon$, $\alpha' = d\varepsilon^{\beta}(\lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}})$,
    restricted bundle space $C_\delta = (1 - 2\delta)C + \delta\mathbf{1}$, and number of iterations $T = \tilde{O}(d^{4.5})$
  For $t = 1, \ldots, T$:
    ZOO$(\alpha', C_\delta)$ queries the profit for bundle $x^t$
    Let $p^t = $ LearnPrice$(x^t, \varepsilon)$ and observe the induced bundle $x^*(p^t)$
    Send $r(x^*(p^t))$ to ZOO$(\alpha', C_\delta)$ as an approximate evaluation of $r(x^t)$
  $\hat{x} = $ ZOO$(\alpha', C_\delta)$; $\hat{p} = $ LearnPrice$(\hat{x}, \varepsilon)$
  Output: the last price vector $\hat{p}$

Theorem 21. Let $\alpha > 0$ be the target accuracy. The instantiation Opro$(C, \alpha)$ computes a price vector $\hat{p}$ such that the expected profit satisfies
$$\mathbb{E}[r(\hat{p})] \ge \max_{p \in \mathbb{R}^d_+} r(p) - \alpha,$$
the number of times it calls the algorithm LearnPrice is bounded by $\tilde{O}(d^{4.5})$, and the total number of observations it requires from the consumer is $\mathrm{poly}(d, 1/\alpha)$.

Proof. First we show that each induced bundle $x^*(p^t)$ is in the interior $\mathrm{Int}\,C$. Note that in the algorithm we have $\varepsilon = \delta/4$. By the guarantee of LearnPrice in Theorem 12, we have $\|x^t - x^*(p^t)\| \le \varepsilon = \delta/4$. By Claim 18, we know that $x^t \in \mathrm{Int}_{C,\delta/2}$, so the ball of radius $\varepsilon$ centered at $x^t$ is contained in $C$, and hence $x^*(p^t)$ is in the interior of $C$. By Lemma 17 and Lemma 9, each vector $p^t = \nabla v(x^*(p^t))$ gives the profit-maximizing prices for the induced bundle $x^*(p^t)$, so the profit the algorithm observes is indeed $r(x^*(p^t))$.

Next, to establish the accuracy guarantee, we need to bound two sources of error. First, we bound the error from ZOO. To simplify notation, let $\lambda = \lambda_{\mathrm{val}} + \lambda_{\mathrm{cost}}$.
Recall from Lemma 19 that the approximate profit evaluation $r(x^*(p^t))$ satisfies $|r(x^t) - r(x^*(p^t))| \le \lambda\varepsilon^{\beta}$. By the accuracy guarantee in Lemma 7, the final queried bundle $\hat{x}$ satisfies
$$\mathbb{E}[r(\hat{x})] \ge \max_{x \in C_\delta} r(x) - d\lambda\varepsilon^{\beta}.$$
Since we know that $|r(\hat{x}) - r(x^*(\hat{p}))| \le \lambda\varepsilon^{\beta}$, we also have
$$\mathbb{E}[r(x^*(\hat{p}))] \ge \max_{x \in C_\delta} r(x) - (d + 1)\lambda\varepsilon^{\beta}.$$
Next, as we are restricting the bundle space to $C_\delta$, there might be a further loss of profit. Note that $\delta = 4\varepsilon \le \frac{1}{3\gamma}$, so we can bound it with Lemma 20:
$$\mathbb{E}[r(x^*(\hat{p}))] \ge \max_{x \in C} r(x) - \lambda\left[(d + 1)\varepsilon^{\beta} + (3\delta\gamma)^{\beta}\right] = \max_{x \in C} r(x) - \lambda\left[(d + 1)\varepsilon^{\beta} + (12\varepsilon\gamma)^{\beta}\right].$$
If we plug in our setting of the parameter $\varepsilon$, we recover the desired bound, since $r(x^*(\hat{p})) = r(\hat{p})$ and $\max_{x \in C} r(x) = \max_{p \in \mathbb{R}^d_+} r(p)$.

Finally, we bound the total number of observations the algorithm needs from the consumer. In each iteration, the instantiation LearnPrice$(x^t, \varepsilon)$ requires a number of observations bounded, according to Theorem 12, by
$$T' = d \cdot \mathrm{poly}\left(\frac{1}{\varepsilon}, \frac{1}{\sigma}, \gamma, \lambda_{\mathrm{val}}\right).$$
Therefore, after plugging in $\varepsilon$, the total number of observations Opro needs is bounded by $O(T' \times T) = \mathrm{poly}(d, 1/\alpha)$ (hiding the constants $\lambda_{\mathrm{cost}}, \lambda_{\mathrm{val}}, \sigma, \gamma$). In Appendix D, we give a variant of the algorithm with query complexity scaling poly-logarithmically in $1/\alpha$.

4 General Framework of Stackelberg Games

Now that we have worked out a concrete application of our method in the context of learning to maximize revenue from revealed preferences, we abstract our techniques and show how they can be used to solve a general family of Stackelberg games in which the objective of the follower is unknown to the leader.
Along the way, we also generalize our technique to operate in a setting in which the follower responds to the leader's actions by only approximately maximizing her utility function. In addition to generalizing the settings in which our approach applies, this avoids a technical concern that might otherwise arise: bundles maximizing strongly concave utility functions might be non-rational. Beyond handling the approximations to optimal bundles that would be induced by taking a rational approximation, we show our method is robust to much larger errors.

In our general framework, we consider a Stackelberg game that consists of a leader with action set $A_L$ and a follower with action set $A_F$. Each player has a utility function $U_L, U_F\colon A_L \times A_F \to \mathbb{R}$. In the corresponding Stackelberg game, the leader chooses an action $p \in A_L$, and then the follower chooses a $\zeta$-best response $x'(p)$ such that
$$U_F(p, x'(p)) \ge U_F(p, x^*(p)) - \zeta,$$
where $x^*(p) = \arg\max_{x \in A_F} U_F(p, x)$ is the follower's exact best response. Note that when $\zeta = 0$, $x'(p) = x^*(p)$.

The example of maximizing revenue from revealed preferences is a special case of this framework: the producer is the leader, and his action space consists of prices $p$; the follower is the consumer, and her action space is the bundle $x$ she purchases. The producer's utility for a pair $(p, x)$ is his revenue minus the cost of producing $x$, and the consumer's utility is her value for $x$ minus the price she pays. In general, we consider solving the leader's optimization problem: find $p \in A_L$ such that $U_L(p, x^*(p))$ is (approximately) maximized. Formally, we consider a sub-class of Stackelberg games with the following structure.

Definition 22.
An instance is a Stackelberg game S(A_L, A_F, φ) which consists of two players, the leader and the follower, such that:
• the leader has action set A_L ⊆ R^d and the follower has action set A_F ⊆ R^d, both of which are convex and compact;
• the follower's utility function U_F : A_L × A_F → R takes the form U_F(p, x) = φ(x) − ⟨p, x⟩, where φ : R^d → R is a strongly concave, differentiable function unknown to the leader;
• the leader's utility function U_L : A_L × A_F → R is an unknown function.

The optimization problem associated with the game instance is max_{p ∈ A_L} U_L(p, x*(p)).

Our first step in solving the problem is to rewrite the leader's utility function so that it can be expressed as a function only of the follower's action. For each action of the follower x ∈ A_F, the set of leader's actions that induce x is

P*(x) = {p ∈ A_L | x*(p) = x}.

Among all of the leader's actions that induce x, the optimal one is

p*(x) = argmax_{p ∈ P*(x)} U_L(p, x),

where ties are broken arbitrarily. We can then rewrite the leader's objective as a function of x alone:

ψ(x) = U_L(p*(x), x). (10)

Note that to approximately solve the leader's optimization problem, it is sufficient to find the follower's action x̂ ∈ A_F which approximately optimizes ψ(·), together with the action p̂ ∈ A_L that approximately induces x̂. Before we present the algorithm, we state the assumptions on the utility functions of the two players that we will need.

Assumption 4.1. The game S(A_L, A_F, φ) satisfies the following properties.
1. The function ψ : A_F → R defined in (10) is concave and λ_L-Lipschitz;
2. The function φ : A_F → R is non-decreasing, σ-strongly concave and λ_F-Lipschitz;
3.
The action space of the leader A_L contains the following set:

P = {p ∈ R^d_+ | ‖p‖ ≤ √d · λ_F}; (11)

4. The action space of the follower A_F has bounded diameter: ‖A_F‖ ≤ γ.

4.1 Inducing a Target Action of the Follower

We first consider the following sub-problem. Given a target action x̂ of the follower, we want to learn an action p̂ for the leader such that the induced action satisfies ‖x'(p̂) − x̂‖ ≤ ε. We now give an algorithm to learn p̂ that requires only polynomially many observations of the follower's ζ-approximate best responses.

Algorithm 3 Learning the leader's action to induce a target follower's action: LearnLead(x̂, ε)
Input: a target follower action x̂ ∈ A_F, and target accuracy ε
Initialize: restricted action space P = {p ∈ R^d_+ | ‖p‖ ≤ √d · λ_F},
  p^1_j = 0 for all j ∈ [d],
  T = (16√(2d) λ_F γ / (ε²σ − 4ζ))²,
  η = √2 γ / (√d λ_F √T)
For t = 1, . . . , T:
  Observe the induced action of the follower x*(p^t)
  Update the leader's action: p̃^{t+1}_j = p^t_j − η(x̂_j − x*(p^t)_j) for each j ∈ [d]; p^{t+1} = Π_P[p̃^{t+1}]
Output: p̂ = (1/T) Σ_{t=1}^T p^t.

Theorem 23. Let x̂ ∈ A_F be a target follower action and ε > 0. Then LearnLead(x̂, ε) outputs a leader action p̂ such that the induced follower action satisfies ‖x̂ − x'(p̂)‖ ≤ ε, and the number of observations it needs is no more than

T = O(d λ_F² γ² / (ε⁴ σ²)),

as long as ε > 2√(2ζ/σ).

4.2 Optimizing the Leader's Utility

Now that we know how to approximately induce any action of the follower using LearnLead, we are ready to give an algorithm to optimize the leader's utility function U_L.
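Before doing so, it may help to see the projected-gradient loop of LearnLead in miniature. The sketch below is an illustration only: the follower utility φ(x) = ⟨a, x⟩ − ½‖x‖² and all numeric constants are assumptions, chosen so that the exact best response x*(p) = max(a − p, 0) is available in closed form as a callback; the loop then follows the update and averaging of Algorithm 3.

```python
import math

def learn_lead(best_response, x_target, d, T, eta, p_max):
    # Projected-gradient sketch of LearnLead: use the observed follower
    # response as a (sub)gradient, project onto {p >= 0, ||p|| <= p_max},
    # and return the average iterate.
    p = [0.0] * d
    p_sum = [0.0] * d
    for _ in range(T):
        x = best_response(p)  # observe the induced follower action x*(p)
        p = [max(pj + eta * (xj - tj), 0.0)
             for pj, xj, tj in zip(p, x, x_target)]
        norm = math.sqrt(sum(pj * pj for pj in p))
        if norm > p_max:  # project back onto the ball of radius p_max
            p = [pj * p_max / norm for pj in p]
        p_sum = [sj + pj for sj, pj in zip(p_sum, p)]
    return [sj / T for sj in p_sum]

# Toy strongly concave follower (an assumption for illustration):
# phi(x) = <a, x> - 0.5*||x||^2, so x*(p) = max(a - p, 0) coordinatewise.
a = [2.0, 1.0]
best_response = lambda p: [max(aj - pj, 0.0) for aj, pj in zip(a, p)]
x_hat = [0.5, 0.25]  # target follower action
p_hat = learn_lead(best_response, x_hat, d=2, T=2000, eta=0.05, p_max=10.0)
induced = best_response(p_hat)
err = max(abs(xj - tj) for xj, tj in zip(induced, x_hat))  # small
```

In this toy instance the price inducing x̂ exactly is a − x̂ = (1.5, 0.75), and the averaged iterate lands close to it; the theorem's guarantee is of the same flavor, with the number of rounds scaling polynomially in d, λ_F, γ, 1/ε and 1/σ.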
Recall that we can write U_L as a function ψ that depends only on the follower's action. In order to obtain the approximately optimal utility ψ(x), the leader must play the optimal action p that induces the follower to play approximately x.

Assumption 4.2. For any x̂ ∈ A_F and ε > 0, the instantiation LearnLead(x̂, ε) returns p̂ such that p̂ = p*(x*(p̂)).

Whenever this assumption holds, we can use LearnLead to allow the leader to obtain utility U_L(p̂, x*(p̂)) = ψ(x*(p̂)). While Assumption 4.2 appears to be quite strong, we can often achieve it. Recall that we were able to satisfy Assumption 4.2 in our revealed-preferences application by operating in the interior of the feasible region of the follower's action space, and we can similarly do this in our principal-agent example. Moreover, it is trivially satisfied whenever the leader's objective function depends only on the follower's action, since in this case every leader action p which induces a particular follower action x is optimal. This is the case, for example, in our routing-games application in Section 5.

Now we will show how to use the algorithm ZOO to find an approximate optimal point of the function ψ. First, we will use LearnLead to provide an approximate function evaluation of ψ at each x̂ ∈ A_F: our algorithm first runs LearnLead(x̂, ε) to learn a price vector p̂, and we will use the observed function value on the induced follower's approximate best response, ψ(x'(p̂)), as an approximation of ψ(x̂). Since LearnLead guarantees that ‖x'(p̂) − x̂‖ ≤ ε, by the Lipschitz property of ψ we have

|ψ(x̂) − ψ(x'(p̂))| ≤ λ_L ε.

With these approximate evaluations, ZOO can then find a (dλ_Lε)-approximate optimizer of ψ with only Õ(d^4.5) iterations, by Lemma 7. The full algorithm is presented in Algorithm 4.
Algorithm 4 Leader learns to optimize: LearnOpt(A_F, α)
Input: follower action space A_F, and target accuracy α
Initialize: number of iterations T = Õ(d^4.5) and ε = α / (λ_L(d + 1))
For t = 1, . . . , T:
  ZOO(dελ_L, A_F) queries the objective value for action x^t ∈ A_F
  Let p^t = LearnLead(x^t, ε) and observe the induced action x'(p^t)
  Send ψ(x'(p^t)) to ZOO(dελ_L, A_F) as an approximate evaluation of ψ(x^t)
x̂ = ZOO(dελ_L, A_F)
p̂ = LearnLead(x̂, ε)
Output: the leader action p̂

Theorem 24. Let α > 0 be the target accuracy. The instantiation LearnOpt(A_F, α) computes a leader action p̂, along with its induced follower action x*(p̂), that satisfies

E[U_L(p̂, x*(p̂))] ≥ max_{p ∈ A_L} U_L(p, x*(p)) − α,

and the number of observations the algorithm requires of the follower is bounded by Õ(d^9.5 / α⁴), as long as α ≥ Ω(dλ_L √(ζ/σ)).

5 Optimal Traffic Routing from Revealed Behavior

In this section, we give the second main application of our technique discussed in the introduction: how to find tolls to induce an approximately optimal flow in a non-atomic traffic routing game when the latency functions are unknown.

A nonatomic routing game G(G, ℓ, D) is defined by a graph G = (V, E), a latency function ℓ_e on each edge e ∈ E, and the sources, destinations and demands for n commodities: D = {(s_i, t_i, k_i)}_{i ∈ [n]}. The latency function ℓ_e : R_+ → [0, 1] represents the delay on edge e as a function of the total flow on that edge. For simplicity, we assume Σ_{i=1}^n k_i = 1, and we let m denote the number of edges |E|. For each commodity i, the demand k_i specifies the volume of flow from s_i to t_i routed by (self-interested) agents.
The game is nonatomic: infinitely many agents each control only an infinitesimal amount of flow, and each agent of type i selects an action (an s_i-t_i path) so as to minimize her total latency. The aggregate decisions of the agents induce a multicommodity flow (f^i)_{i∈[n]}, with each vector f^i = (f^i_e)_{e∈E} ∈ F_i, where F_i is the flow polytope for the i-th commodity:

F_i = { f^i ∈ R^m_+ |  Σ_{(v,w)∈E} f^i_{vw} = Σ_{(u,v)∈E} f^i_{uv}  for all v ∈ V \ {s_i, t_i},  and  Σ_{(s_i,w)∈E} f^i_{s_i w} − Σ_{(u,s_i)∈E} f^i_{u s_i} = k_i }.

Let F = {f = Σ_{i=1}^n f^i | f^i ∈ F_i for each i} denote the set of feasible flows. A flow f defines a latency ℓ_e(f_e) on each edge e. Given a path P, we write ℓ_P(f) = Σ_{e∈P} ℓ_e(f_e) to denote the summed latency over all edges in the path. A Nash or Wardrop equilibrium is defined as follows:

Definition 25 (Wardrop equilibrium). A multicommodity flow f̂ is a Wardrop equilibrium of a routing game if it is feasible and, for every commodity i and all s_i-t_i paths P, Q with f̂^i_P > 0, we have ℓ_P(f̂) ≤ ℓ_Q(f̂).

Crucial to our application is the following well-known lemma, which states that a Wardrop equilibrium can be found as the solution to an optimization problem (convex whenever the latencies are non-decreasing) that minimizes a potential function associated with the routing game.

Lemma 26 ([MS96]). A Wardrop equilibrium can be computed by solving the following optimization problem:

min_{f∈F} Φ(f) := Σ_e ∫_0^{f_e} ℓ_e(x) dx.

Whenever the latency functions ℓ_e are each non-decreasing, this is a convex program. We call Φ the potential function of the routing game.

Now suppose there is a municipal authority which administers the network and wishes to minimize the social cost of the equilibrium flow:

Ψ(f) = Σ_{e∈E} f_e · ℓ_e(f_e).
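Lemma 26 can be checked directly on a toy instance. The network below is an assumption for illustration: unit demand split over two parallel s-t links with increasing latencies ℓ₁(x) = x and ℓ₂(x) = 0.5 + 0.5x; minimizing the potential by projected gradient recovers the flow at which both used links have equal latency, i.e., the Wardrop equilibrium.

```python
# Minimize Phi(f) = int_0^{f1} l1 + int_0^{f2} l2 over the one-dimensional
# flow polytope {f1 + f2 = 1, f >= 0}, parametrized by z = f1.
def l1(x): return x
def l2(x): return 0.5 + 0.5 * x

z = 0.5  # initial fraction of flow on link 1
for _ in range(500):
    grad = l1(z) - l2(1.0 - z)               # d/dz Phi(z, 1 - z)
    z = min(max(z - 0.1 * grad, 0.0), 1.0)   # projected gradient step

f1, f2 = z, 1.0 - z
# Wardrop condition: both used links carry the same latency
latency_gap = abs(l1(f1) - l2(f2))
```

Here the equilibrium is f₁ = 2/3, f₂ = 1/3, where ℓ₁(f₁) = ℓ₂(f₂) = 2/3. Adding a constant toll τ_e to each ℓ_e inside the same loop would instead compute the tolled equilibrium, which is exactly the "flow oracle" that the learning algorithm of the next subsections queries.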
The authority has the power to impose constant tolls on the edges. A toll vector τ = (τ_e)_{e∈E} ∈ R^m_+ induces a new latency function on each edge, ℓ^τ_e(f_e) = ℓ_e(f_e) + τ_e, which gives rise to a different routing game G(G, ℓ^τ, D) with a new potential function Φ^τ. In particular, the equilibrium flow f*(τ) induced by the toll vector is the Wardrop equilibrium of the tolled routing game:

f*(τ) = argmin_{f∈F} Φ^τ(f) = argmin_{f∈F} [ Σ_{e∈E} ∫_0^{f_e} (ℓ_e(x) + τ_e) dx ] = argmin_{f∈F} [ Φ(f) + Σ_{e∈E} τ_e · f_e ].

While the latency functions are unknown to the authority, his goal is to find a toll vector τ̂ such that the induced flow f*(τ̂) approximately minimizes the total congestion function Ψ.

We can formulate this problem as an instance of the type of Stackelberg game we defined in Definition 22, where the authority is the leader, and there is a single "flow" player minimizing the game's potential function, serving the role of the follower. We will refer to them as the toll player and the flow player, respectively. In our setting:
1. the toll player has action set R^m_+ and the flow player has action set F;
2. the flow player has a utility function U_F : R^m_+ × F → R of the form U_F(τ, f) = −Φ(f) − ⟨τ, f⟩;
3. the toll player has a utility function U_L : R^m_+ × F → R of the form U_L(τ, f) = −Ψ(f).

Now we will apply the tools in Section 4 to solve this problem. Before we begin, we will impose the following assumptions on the latency functions to match Assumption 4.1.
We need two types of assumptions: one set to let us find tolls that induce a target flow, and another to guarantee that once we can induce such flows (and hence implement a "flow cost oracle"), we can optimize over flows. To find tolls inducing a target flow, we require that the potential function Φ be strongly convex in the flow variables. The following conditions are sufficient to guarantee this:

Assumption 5.1. For each edge e ∈ E, ℓ_e is differentiable and has derivative bounded away from zero: there exists some σ > 0 such that for all x ∈ [0, 1], ℓ'_e(x) ≥ σ.

Recall that the potential function Φ is a function of the m variables (f_e)_{e∈E}, and its Hessian ∇²Φ at each f ∈ F is a diagonal matrix with entries ℓ'_e(f_e) ≥ σ. Therefore, ∇²Φ(f) ⪰ σI for any f ∈ F, and so under Assumption 5.1, Φ is a σ-strongly convex function over F. Note that the only condition we really require is that the potential function be strongly convex; there are weaker conditions that imply this, but we state Assumption 5.1 because of its simplicity.

Once we can implement a flow oracle, we need to be able to use a bandit convex optimization algorithm to optimize social cost over flows. Hence, we require that the social cost function be convex and Lipschitz. The following assumptions are sufficient to guarantee this:

Assumption 5.2. For each edge e ∈ E, ℓ_e is convex and (λ/m)-Lipschitz continuous over [0, 1].

Note that this guarantees that Ψ is λ-Lipschitz over F. We first show that we can use the algorithm LearnLead to learn a toll vector that induces any flow as a Wardrop equilibrium.

Lemma 27. Fix any non-atomic routing game satisfying Assumption 5.1. Let f̂ ∈ F be a target flow and ε > 0.
Then the instantiation LearnLead(f̂, ε) outputs a toll vector τ̂ such that the induced Wardrop equilibrium flow f*(τ̂) satisfies ‖f̂ − f*(τ̂)‖ ≤ ε, and the number of observations of the flow behavior it needs is no more than

O(m³ / (ε⁴ σ²)).

Proof. Before we apply Theorem 23, we still need to show that the potential function Φ of the original routing game (without tolls) is Lipschitz over F. Note that this does not require any assumptions on the latency functions ℓ_e other than that they are bounded in [0, 1]. Let f, g ∈ F; then we can write

|Φ(f) − Φ(g)| = | Σ_e ( ∫_0^{f_e} ℓ_e(x) dx − ∫_0^{g_e} ℓ_e(x) dx ) | = | Σ_{e∈E} ∫_{g_e}^{f_e} ℓ_e(x) dx | ≤ Σ_{e∈E} max{ℓ_e(f_e), ℓ_e(g_e)} · |f_e − g_e| ≤ Σ_e |f_e − g_e| ≤ √m · ‖f − g‖,

where the last inequality follows from the fact that ‖x‖₁ ≤ √m ‖x‖₂ for any x ∈ R^m. Also, observe that each flow vector in F has norm bounded by √m. Therefore, Φ is a √m-Lipschitz function, and we can instantiate Theorem 23 to obtain the result above.

Now we can instantiate Theorem 24 and show that LearnOpt can find a toll vector that induces the approximately optimal flow.

Pre-processing step. The set F is not a well-rounded convex body in R^m (it has zero volume), so we will have to apply the following standard pre-processing step to transform it into a well-rounded body. First, we find a maximal set I of linearly independent points in F. We will then embed the polytope F into the lower-dimensional subspace spanned by I, so that F becomes full-dimensional. In this subspace, F is a convex body with a relative interior.
Next, we apply the transformation of [LV06] to transform F into a well-rounded body within Span(I). [Footnote 5: See Section 5 of [LV06] for details of the rounding algorithm.] We will run ZOO over the transformed body.

Lemma 28. Let α > 0 be the target accuracy. The instantiation LearnOpt(A_F, α) computes a toll vector τ̂ such that the induced flow f̂ = f*(τ̂) is α-approximately optimal in expectation:

E[Ψ(f̂)] ≤ min_{f∈F} Ψ(f) + α.

The total number of observations we need of the flow behavior is bounded by Õ(m^11.5 / α⁴).

Remark 29. Just as with the profit-maximization example, if we do not require noise tolerance, then we can improve the dependence on the approximation parameter α to be poly-logarithmic. We show how to do this in the appendix.

6 The Principal-Agent Problem

Our general framework applies even when the leader observes only the noisy feedback that arises when the follower only approximately maximizes her utility function; this corresponds to adversarially chosen noise of bounded magnitude. In this section, we show how to handle the natural setting in which the noise being added need not be bounded, but is well behaved: specifically, it has mean 0 and bounded variance. This can be used to model actual noise in an interaction, rather than a failure to exactly maximize a utility function. As a running example as we work out the details, we will discuss a simple principal-agent problem related to our profit-maximization example.

In a principal-agent problem, the principal (the leader) defines a contract by which the agent (the follower) will be paid, as a function of work produced by the agent. The key property of principal-agent problems is that the agent is not able to deterministically produce work of a given quality.
Instead, the agent chooses (and experiences cost as a function of) a level of effort, which stochastically maps to the quality of his work. However, the effort chosen by the agent is unobservable to the principal; only the quality of the finished product is observed. We consider a simple d-dimensional principal-agent problem, in which the result of the agent's work can be evaluated along d dimensions, each of which might require a different amount of effort. Since the agent knows how effort is stochastically mapped to realizations, we abstract away the agent's choice of an "effort" vector, and instead (without loss of generality) view the agent as choosing a "target contribution" x ∈ C ⊆ R^d_+, the expected value of the agent's ultimate contribution. The agent experiences some strongly convex cost c(x) for producing a target contribution of x, but might nevertheless be incentivized to produce high-quality contributions by the contract offered by the principal. However, the contribution that is actually realized (and that the principal observes) is a stochastically perturbed version of x: x̃ = x + θ, where θ ∈ R^d is a noise vector sampled from the mean-zero Gaussian distribution N(0, I).

The principal wants to optimize over the set of linear contracts: he will choose a price vector p ∈ R^d_+ such that, in response to the agent's realized contribution x̃, the agent collects reward ⟨p, x̃⟩. His goal is to choose a price vector to optimize his expected value for the agent's contribution, minus his own costs. The agent's strongly convex cost function c : C → R_+ is unknown to the principal. If the principal's contract vector is p and the agent attempts to contribute x, then the agent's utility is U_a(p, x) = ⟨p, x + θ⟩ − c(x), and his expected utility is u_a(p, x) = E[U_a(p, x)] = ⟨p, x⟩ − c(x).
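A concrete instance may help fix ideas. Everything numeric below is a hypothetical assumption: with quadratic cost c(x) = ½‖x‖², the agent's expected best response maximizes ⟨p, x⟩ − ½‖x‖², giving x*(p) = p (in the interior of C), and for an assumed valuation vector v the principal's expected payoff ⟨v − p, x*(p)⟩ = ⟨v − p, p⟩ is maximized at p = v/2. Averaging realized (noisy) payoffs recovers the expected payoff.

```python
import random

# Hypothetical instance: c(x) = 0.5*||x||^2, so x*(p) = p and the principal's
# expected utility is <v - p, p>, maximized at p = v / 2.
random.seed(0)
v = [1.0, 0.6]  # assumed valuation vector

def expected_utility(p):
    return sum((vj - pj) * pj for vj, pj in zip(v, p))

def realized_utility(p):
    # the principal only sees a noisy contribution x~ = x*(p) + theta
    x_tilde = [pj + random.gauss(0.0, 1.0) for pj in p]
    return sum((vj - pj) * xj for vj, pj, xj in zip(v, p, x_tilde))

p_star = [vj / 2.0 for vj in v]
# averaging many realized utilities estimates the expected utility
est = sum(realized_utility(p_star) for _ in range(20000)) / 20000
```

Here expected_utility(p_star) = 0.25 + 0.09 = 0.34, and the empirical average of realized utilities concentrates around that value, which is exactly the kind of noisy evaluation the algorithms of this section must work with.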
Fixing any price p, the agent will attempt to play the induced contribution vector x*(p) = argmax_{x∈C} (⟨p, x⟩ − c(x)) in order to optimize his expected utility. The principal has value v_i for each unit of contribution in the i-th dimension, and upon observing the realized contribution x̃, his utility is

u_p(p, x̃) = ⟨v, x̃⟩ − ⟨p, x̃⟩ = ⟨v − p, x̃⟩.

The principal's goal is to find a price vector p̂ to (approximately) maximize his expected utility:

E[u_p(p, x*(p) + θ)] = E[⟨v − p, x*(p) + θ⟩] = ⟨v − p, x*(p)⟩.

This is an instantiation of our class of Stackelberg games in which the principal is the leader, with action set R^d_+ and utility function ψ(p, x) = ⟨v − p, x⟩, and the agent is the follower, with action set C and utility function φ(p, x) = ⟨p, x⟩ − c(x). Indeed, in expectation, it is merely a "procurement" version of our profit-maximization example. However, the crucial difference in this application (causing it to deviate from the general setting defined in Definition 22) is that the leader only gets to observe a noisy version of the follower's best response at each round: x̃ = x*(p) + θ. We will adapt the analysis from Section 3 and Section 4 to show that our algorithm is robust to noisy observations. We make the following assumptions, which correspond to the sets of assumptions we made in our previous applications.

Assumption 6.1. The following assumptions parallel Assumptions 4.1 and 3.1.
1. The set of feasible contributions C ⊆ R^d_+ is convex, closed, and bounded. It also contains the unit hypercube, [0, 1]^d ⊆ C (the agent can simultaneously attempt to contribute at least one unit in each dimension), and in particular contains 0 ∈ R^d (the agent can contribute nothing). Lastly, ‖C‖₂ ≤ γ;
2.
the agent's cost function c is homogeneous, 1-Lipschitz and σ-strongly convex;
3. the principal's valuation vector has norm ‖v‖ ≤ 1.

6.1 Inducing the Agent's Contribution Using Noisy Observations

We will first show that, in general, LearnLead can learn the leader's action which approximately induces any target follower action x̂, even if the algorithm only observes noisy perturbed best responses from the follower. This result holds in full generality, but we illustrate it using the principal-agent problem. First, given any target contribution x̂, consider the following convex program, similar to Section 3.4:

min_{x∈C} c(x) (12)
such that x_j ≥ x̂_j for every j ∈ [d]. (13)

The Lagrangian of the program is L(x, p) = c(x) − ⟨p, x − x̂⟩, and the Lagrangian dual objective function is g(p) = min_{x∈C} L(x, p). By the same analysis used in the proof of Lemma 14, if we find a price vector p̂ ∈ P such that g(p̂) ≥ max_{p∈P} g(p) − α, then we know that the induced contribution vector x*(p̂) satisfies ‖x*(p̂) − x̂‖ ≤ √(2α/σ).

Now we show how to (approximately) optimize the function g based on the realized contributions of the agent, which correspond to mean-zero perturbations of the agent's best response. As shown in Lemma 15, a subgradient of g at price p is (x̂ − x*(p)), but now, since the principal only observes the realized contribution vector x̃, our algorithm does not have access to subgradients. However, we can still obtain an unbiased estimate of the subgradient: the vector (x̂ − x̃) satisfies E[x̂ − x̃] = (x̂ − x*(p)), because the noise vector is drawn from N(0, I). This is sufficient to allow us to analyze LearnLead as stochastic gradient descent.
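A minimal simulation of this stochastic-gradient scheme, under the same illustrative quadratic cost c(x) = ½‖x‖² (so that x*(p) = p and all constants below are assumptions): ascending the dual with the unbiased noisy estimate built from x̃ drives the averaged price vector toward the one that induces the target contribution, despite the Gaussian noise in every observation.

```python
import math, random

random.seed(1)
d, T, eta, p_max = 2, 4000, 0.05, 10.0
x_hat = [0.8, 0.3]   # target contribution; with c(x) = 0.5*||x||^2, x*(p) = p
p = [0.0] * d
p_sum = [0.0] * d
for _ in range(T):
    # observe only a noisy best response x~ = x*(p) + theta, theta ~ N(0, I)
    x_tilde = [pj + random.gauss(0.0, 1.0) for pj in p]
    # stochastic ascent step using the unbiased estimate (x_hat - x~)
    p = [max(pj + eta * (tj - xj), 0.0)
         for pj, tj, xj in zip(p, x_hat, x_tilde)]
    norm = math.sqrt(sum(pj * pj for pj in p))
    if norm > p_max:  # project onto the restricted price space
        p = [pj * p_max / norm for pj in p]
    p_sum = [sj + pj for sj, pj in zip(p_sum, p)]
p_bar = [sj / T for sj in p_sum]
# in this toy instance x*(p_bar) = p_bar, so it should be close to x_hat
err = max(abs(pj - tj) for pj, tj in zip(p_bar, x_hat))
```

The individual iterates fluctuate with the noise, but averaging them (the output step of the algorithm) suppresses the variance, which is the content of the stochastic-gradient-descent guarantee invoked next.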
The principal does the following: initialize p¹ = 0, and at each round t ∈ [T], observe a realized contribution vector x̃^t = x*(p^t) + θ^t and update the contract prices as

p^{t+1} = Π_P[ p^t + η(x̂ − x̃^t) ],

where each θ^t ~ N(0, I), η is a learning rate, and P = {p ∈ R^d_+ | ‖p‖ ≤ √d}. Finally, the algorithm outputs the average price vector p̂ = (1/T) Σ_{t=1}^T p^t. We use the following standard theorem about the convergence guarantee of stochastic gradient descent (a more general result can be found in [NJLS09]).

Lemma 30. With probability at least 1 − β, the average vector p̂ output by stochastic gradient descent satisfies

max_{p∈P} g(p) − g(p̂) ≤ O( (√d/√T) ( γ + √d log(Td/β) ) ).

Algorithm 5 Learning the price vector from noisy observations: LearnPriceN(x̂, ε, β)
Input: a target contribution x̂ ∈ C, target accuracy ε, and confidence parameter β
Initialize: restricted price space P = {p ∈ R^d_+ | ‖p‖ ≤ √d},
  p^1_j = 0 for all j ∈ [d],
  T = Õ(dγ² / (ε⁴σ²)),
  η = √2 γ / (√d √T)
For t = 1, . . . , T:
  Observe the realized contribution of the agent x̃^t = x*(p^t) + θ, where θ ~ N(0, I)
  Update the price vector: p̃^{t+1}_j = p^t_j + η(x̂_j − x̃^t_j) for each j ∈ [d]; p^{t+1} = Π_P[p̃^{t+1}]
Output: p̂ = (1/T) Σ_{t=1}^T p^t.

Lemma 31. Let x̂ ∈ C be any target contribution vector.
Then, with probability at least 1 − β, the algorithm LearnPriceN(x̂, ε, β) outputs a contract price vector p̂ for the principal such that the induced contribution vector x*(p̂) satisfies ‖x̂ − x*(p̂)‖ ≤ ε, and the number of observations of the realized contributions of the agent it needs is no more than

T = Õ(dγ² / (ε⁴σ²)).

6.2 Optimizing the Principal's Utility

Finally, we show how to optimize the principal's utility by combining LearnPriceN and ZOO. Following the same analysis as in Lemma 9, we know that the principal's utility-maximizing price vector to induce expected contribution x̂ is ∇c(x̂). We can then rewrite the expected utility of the principal as a function of the attempted contribution of the agent:

u_p(x) = ⟨v − ∇c(x), x⟩.

Since c is a homogeneous and convex function, by Theorem 10, u_p is a concave function. Similar to Section 3.5, we will run ZOO to optimize over the interior subset C_δ = (1 − 2δ)C + δ1, so that any price vector p̂ given by LearnPriceN is the unique price that induces the agent's attempted contribution vector x*(p̂) (Lemma 17). By the same analysis as in Lemma 20, we know that there is little loss in the principal's utility from restricting the contribution vectors to C_δ.

Lemma 32. The function u_p : C → R is 2-Lipschitz, and for any 0 < δ < 1,

max_{x∈C} u_p(x) − max_{x∈C_δ} u_p(x) ≤ 6δγ.

Now we show how to use LearnPriceN to provide a noisy evaluation of u_p at each point of C_δ (the scale of δ is determined in the analysis). For each p̂ that LearnPriceN returns, the realized contribution vector we observe is x̃ = x*(p̂) + θ, so the utility experienced by the principal is u_p(p̂, x̃) = ⟨v − p̂, x̃⟩.
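The averaging step that makes this noisy evaluation usable can be checked empirically. The instance below is an assumption for illustration: the mean of s realized utilities ⟨b, x' + θ_j⟩, with b playing the role of v − p̂, deviates from the exact value ⟨b, x'⟩ by roughly ‖b‖/√s, which is the O(√(d/s))-type rate the analysis relies on.

```python
import math, random

# Empirical check: averaging s noisy utilities <b, x' + theta_j> concentrates
# around <b, x'> at rate about ||b|| / sqrt(s). Values are illustrative.
random.seed(2)
s = 10000
b = [0.5, -0.2, 0.3]        # plays the role of v - p'
x_prime = [1.0, 2.0, 0.5]   # induced expected contribution
exact = sum(bi * xi for bi, xi in zip(b, x_prime))
total = 0.0
for _ in range(s):
    noisy = [xi + random.gauss(0.0, 1.0) for xi in x_prime]
    total += sum(bi * ni for bi, ni in zip(b, noisy))
deviation = abs(total / s - exact)
# standard deviation of the average is exactly ||b|| / sqrt(s) here
scale = math.sqrt(sum(bi * bi for bi in b)) / math.sqrt(s)
```

With s = 10000 observations, the deviation is on the order of a few thousandths, so a modest number of repeated postings per price suffices for the approximate function evaluations fed to ZOO.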
We first demonstrate that u_p(p̂, x̃) gives an unbiased estimate of u_p, and that we can obtain an accurate estimate by taking the average of a small number of realized utilities. In the following, let the constant a = ln 2/(2π).

Lemma 33. Let x' ∈ C be a contribution vector such that p' = ∇c(x') is the unique price vector that induces x'. Let noise vectors θ¹, . . . , θ^s ~ N(0, I) and x̃^j = x' + θ^j for each j ∈ [s]. Then with probability at least 1 − β,

| (1/s) Σ_{j=1}^s u_p(p̂, x̃^j) − u_p(x') | ≤ √(d/s) · √((2/a) ln(2/β)).

Proof. Let b = v − p'. Then we can write

(1/s) Σ_{j=1}^s u_p(p̂, x̃^j) − u_p(x') = (1/s) Σ_{j=1}^s (⟨b, x̃^j⟩ − ⟨b, x'⟩) = (1/s) Σ_{j=1}^s ⟨b, θ^j⟩ = (1/s) Σ_{j=1}^s Σ_{i=1}^d b_i θ^j_i.

Note that each θ^j_i is sampled from the Gaussian distribution N(0, 1), and we use the fact that if X ~ N(0, σ₁²) and Y ~ N(0, σ₂²), then (bX + cY) ~ N(0, b²σ₁² + c²σ₂²). We can further derive that (1/s) Σ_{j=1}^s Σ_{i=1}^d b_i θ^j_i is a random variable with distribution N(0, ‖b‖²/s). We then use the following fact about Gaussian tails: let Y be a random variable sampled from the distribution N(0, ω²) and a = ln 2/(2π); then for all ζ > 0,

Pr[|Y| > ζ] ≤ 2 exp(−aζ²/ω²).

It follows that, with probability at least 1 − β,

| (1/s) Σ_{j=1}^s Σ_{i=1}^d b_i θ^j_i | ≤ √( ln(2/β) / (a s) ) · ‖b‖.

Finally, note that we can bound ‖b‖ = ‖v − p'‖ ≤ √(2d), so replacing ‖b‖ by √(2d) recovers our bound. Now we are ready to give the algorithm to optimize the principal's utility in Algorithm 6.
Algorithm 6 Learning the price vector to optimize utility under noisy observations: OproN(C, α, β)
Input: feasible contribution space C, target accuracy α, and confidence parameter β
Initialize: ε = α/(12γ + 3d), δ = 2ε, α' = 3dε, β' = β/(2T), s = 2d ln(2/β') / (a ε²),
  restricted contribution space C_δ = (1 − 2δ)C + δ1, and number of iterations T = Õ(d^4.5)
For t = 1, . . . , T:
  ZOO(α', C_δ) queries the utility for contribution x^t
  Let p^t = LearnPriceN(x^t, ε, β')
  For j = 1, . . . , s:
    The principal posts price p^t
    Let x̃^j(p^t) be the realized contribution, on which the principal experiences utility u(p^t, x̃^j(p^t))
  Send (1/s) Σ_{j=1}^s u(p^t, x̃^j(p^t)) to ZOO(α', C_δ) as an approximate evaluation of u_p(x^t)
x̂ = ZOO(α', C_δ)
p̂ = LearnPriceN(x̂, ε, β')
Output: the last price vector p̂

Theorem 34. Let α > 0 and 0 < β < 1/2. With probability at least 1 − β, the price vector p̂ output by OproN(C, α, β) satisfies

E[u_p(p̂, x*(p̂))] ≥ max_{p∈P} u_p(p, x*(p)) − α,

and the number of observations of realized contributions is bounded by Õ(d^9.5 / α⁴).

Proof. First, by Lemma 31 and a union bound, with probability at least 1 − β/2 we have ‖x^t − x*(p^t)‖ ≤ ε for all t ∈ [T]. We condition on this level of accuracy for the rest of the proof. By the same analysis as in Footnote 4, we know that each target contribution x*(p^t) is in the interior Int C, so we have u_p(x*(p^t)) = u_p(p^t, x*(p^t)). To establish the accuracy guarantee, we need to bound two sources of error. First, we bound the error from ZOO. Note that the target contribution x*(p^t) satisfies |u_p(x^t) − u_p(x*(p^t))| ≤ 2ε.
By Lemma 33 and our setting of s, with probability at least 1 − β' we have

| (1/s) Σ_{j=1}^s u_p(p^t, x̃^j(p^t)) − u_p(x*(p^t)) | ≤ ε.

By a union bound, this accuracy holds for all t ∈ [T] with probability at least 1 − β/2. We condition on this level of accuracy; then the average utility provides an accurate evaluation of u_p(x^t) at each queried point x^t:

| (1/s) Σ_{j=1}^s u_p(p^t, x̃^j(p^t)) − u_p(x^t) | ≤ 3ε.

By Lemma 7, we know that the vector x̂ output by ZOO satisfies

E[u_p(x̂)] ≥ max_{x∈C_δ} u_p(x) − 3dε.

Finally, by Lemma 32 and the value of ε, we also have

E[u_p(x̂)] ≥ max_{x∈C} u_p(x) − (12εγ + 3dε) = max_{x∈C} u_p(x) − α.

Note that max_{x∈C} u_p(x) = max_{p∈P} u_p(p, x*(p)), so we have shown the accuracy guarantee. In each iteration, the algorithm requires Õ(dγ²/(ε⁴σ²)) noisy observations for running LearnPriceN and s observations for estimating u_p(x*(p^t)), so the total number of observations is bounded by

Õ( d^4.5 × ( γ²d(γ + d)⁴/(σ²α⁴) + d(γ + d)²/α² ) ) = Õ(d^9.5 / α⁴),

where we hide the constants σ, γ in the last equality.

7 Conclusion

In this paper, we have given algorithms for optimally solving a large class of Stackelberg games in which the leader has only "revealed preferences" feedback about the follower's utility function, with applications both to profit maximization from revealed preferences data and to optimal tolling in congestion games. We believe this is a very natural model in which to have access to agent utility functions, and that pursuing this line of work will be fruitful. There are many interesting directions, but let us highlight one in particular.
In our pro\ufb01t maximization application, it would be very natural to consider a \u201cBayesian\u201d version of our problem. At each round, the producer sets prices, at which point a new consumer, with valuation function drawn from an unknown prior, purchases her utility maximizing bundle. The producer\u2019s goal is to \ufb01nd the prices that maximize her expected pro\ufb01t, over draws from the unknown prior. Under what conditions can we solve this problem ef\ufb01ciently? The main challenge (and the reason why it likely requires new techniques) is that the expected value of the purchased bundle need not maximize any well-behaved utility function, even if each individual consumer is maximizing a concave utility function. Acknowledgements We would like to thank Michael Kearns and Mallesh Pai for stimulating discussions about this work. In particular, we thank Mallesh for helpful discussions about principal-agent problems and conditions under which the pro\ufb01t function of the producer in the revealed preferences problem might be concave. We would also like to thank Tengyuan Liang and Alexander Rakhlin for very helpful discussions about [BLNR15]." + } + ], + "Linjun Zhang": [ + { + "url": "http://arxiv.org/abs/2102.06289v3", + "title": "When and How Mixup Improves Calibration", + "abstract": "In many machine learning applications, it is important for the model to\nprovide confidence scores that accurately capture its prediction uncertainty.\nAlthough modern learning methods have achieved great success in predictive\naccuracy, generating calibrated confidence scores remains a major challenge.\nMixup, a popular yet simple data augmentation technique based on taking convex\ncombinations of pairs of training examples, has been empirically found to\nsignificantly improve confidence calibration across diverse applications.\nHowever, when and how Mixup helps calibration is still a mystery. 
In this\npaper, we theoretically prove that Mixup improves calibration in\n\\textit{high-dimensional} settings by investigating natural statistical models.\nInterestingly, the calibration benefit of Mixup increases as the model capacity\nincreases. We support our theories with experiments on common architectures and\ndatasets. In addition, we study how Mixup improves calibration in\nsemi-supervised learning. While incorporating unlabeled data can sometimes make\nthe model less calibrated, adding Mixup training mitigates this issue and\nprovably improves calibration. Our analysis provides new insights and a\nframework to understand Mixup and calibration.",
      "authors": "Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou",
      "published": "2021-02-11",
      "updated": "2022-07-10",
      "primary_cat": "cs.LG",
      "cats": [
        "cs.LG"
      ],
      "main_content": "*Equal contribution. 1Rutgers University 2Harvard University 3National University of Singapore 4Stanford University. Correspondence to: Linjun Zhang, Zhun Deng. Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). Introduction Modern machine learning methods have dramatically improved the predictive accuracy in many learning tasks (Simonyan & Zisserman, 2014; Srivastava et al., 2015; He et al., 2016a). The deployment of AI-based systems in high risk fields such as medical diagnosis (Jiang et al., 2012) requires a predictive model to be trustworthy, which makes the topic of accurately quantifying the predictive uncertainty an
A variety of methods have been proposed for quantifying predictive uncertainty including training multiple probabilistic models with ensembling or bootstrap (Osband et al., 2016) and re-calibration of probabilities on a validation set through temperature scaling (Platt et al., 1999), which usually involves much more complicated procedures and extra computation. Meanwhile, recent work (Thulasidasan et al., 2019) has shown that models trained with Mixup (Zhang et al., 2017), a simple data augmentation technique based on taking convex combinations of pairs of examples and their labels, are signi\ufb01cantly better calibrated. However, when and how Mixup helps calibration is still not well-understood, especially from a theoretical perspective. As our \ufb01rst contribution, we demonstrate that the calibration improvement brought by Mixup is more signi\ufb01cant in the high-dimensional settings, i.e. the number of parameters is comparable to the training sample size. Figure 1 shows a motivating experiment on CIFAR-10. The Expected Calibration Error (ECE), which is a standard measure of how un-calibrated a model is, is smaller with Mixup augmentation compared to those without Mixup augmentation, especially when the model is wider or deeper. We provide a theoretical explanation for this phenomenon under several natural statistical models. In particular, our theory holds when the data distribution can be described by a Gaussian generative model, which is very \ufb02exible and includes many generative adversarial networks (GANs). In a Gaussian generative model, a function is used to map a Gaussian random variable to an input vector of some models such as neural networks. Because the function used to map a Gaussian random variable is arbitrary and can be nonlinear, our theory is applicable to a very broad class of data distributions. As our second contribution, we investigate how Mixup helps calibration in semi-supervised learning, which is relatively under-explored. 
Labeled data are usually expensive to obtain, and training models by combining a small amount of labeled data with abundant unlabeled data plays an important role in AI (Chapelle et al., 2009). In light of this, we investigate the effect of Mixup in semi-supervised learning, where we focus on the commonly used pseudo-labeling algorithm (Chapelle et al., 2009; Carmon et al., 2019). [Figure 1. Expected calibration error (ECE) calculated for a fully-connected neural network on CIFAR-10; panels: (a) Network width, (b) Network depth. In (a), we fix the depth and increase the width of the neural network, while in (b), we fix the width and increase the depth of the neural network. Mixup augmentation can reduce ECE especially for larger capacity models.] We observe experimentally that pseudo-labeling by itself can sometimes hurt calibration. However, combining Mixup with pseudo-labeling consistently improves calibration. We provide theories to explain these findings. As our third contribution, we further extend our results to Maximum Calibration Error (MCE), which demonstrates phenomena similar to those for ECE. Outline of the paper. Section 2 discusses related works and introduces the notations. In Section 3, we present our main theoretical results for ECE by showing that Mixup improves calibration for classification problems in the high-dimensional regime. Section 4 investigates the semi-supervised learning setting and demonstrates the benefit of further applying Mixup to the pseudo-labeling algorithm. In Section 5, we extend our studies of calibration to MCE. Section 6 concludes with a discussion of future work. Proofs are deferred to the Appendix. 1.1.
Related Work Mixup is a popular data augmentation scheme that has been shown to improve a model's prediction accuracy (Zhang et al., 2017; Thulasidasan et al., 2019; Guo et al., 2019). Recent theoretical analysis shows that Mixup has an implicit regularization effect that enables models to better generalize (Zhang et al., 2020). The focus of our work is not on accuracy, but on calibration. Modern learning models such as neural networks have achieved remarkable performance in optimization (Deng et al., 2020a; Ji et al., 2021b; Deng et al., 2021d; Ji et al., 2021a; Kawaguchi et al., 2022). Even though the generalization and prediction performance (Deng et al., 2020c; Zhang et al., 2020; Deng et al., 2021a) of neural networks is impressive, it has been shown that neural networks tend to be over-confident. A well-calibrated predictive model is needed in many applications of machine learning, ranging from economics (Foster & Vohra, 1997) and personalized medicine (Jiang et al., 2012) to weather forecasting (Gneiting & Raftery, 2005) and fraud detection (Bahnsen et al., 2014). The problem of producing a well-calibrated model has received increasing attention in recent years (Naeini et al., 2015; Lakshminarayanan et al., 2016; Guo et al., 2017; Zhao et al., 2020; Foster & Stine, 2004; Kuleshov et al., 2018; Wen et al., 2020; Huang et al., 2020). In real-world settings, the input distributions are sometimes shifted from the training distribution due to non-stationarity. The predictive uncertainty under such out-of-distribution conditions was studied by Ovadia et al. (2019) and Chan et al. (2020). Mixup has been empirically shown to improve the calibration of deep neural networks in both the in-domain and out-of-distribution settings (Thulasidasan et al., 2019; Tomani & Buettner, 2020). Ours is the first work to provide a theoretical explanation for this phenomenon.
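As context for the ECE curves discussed above: what experiments like Figure 1 typically report is the standard binned estimate of ECE (the theory in this paper works with the population version defined later in Eq. (1)). A minimal sketch of that binned estimator, with the bin count of 15 an arbitrary choice of ours:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """Binned ECE: partition predictions by confidence, then average
    |bin accuracy - mean bin confidence|, weighted by bin frequency."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

# perfectly calibrated toy case: 0.8-confidence predictions, 80% correct
ece = expected_calibration_error(np.full(10, 0.8),
                                 np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0]))  # ~ 0
# over-confident toy case: 0.9-confidence predictions, only 50% correct
ece_over = expected_calibration_error(np.full(10, 0.9),
                                      np.array([1, 0] * 5))                 # ~ 0.4
```

A model whose confidence matches its accuracy in every bin gets ECE near zero, while over-confident predictions inflate it, which is the gap the plots above measure.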
Semi-supervised learning is a broad field in machine learning concerned with learning from both labeled and unlabeled datasets (Chapelle et al., 2009). Prior work mostly focuses on improving the prediction accuracy and robustness with unlabeled data (Zhu et al., 2003; Zhu & Goldberg, 2009; Berthelot et al., 2019; Deng et al., 2020b; 2021c; 2020c) and adversarial robustness (Carmon et al., 2019; Deng et al., 2021b). Recently, Chan et al. (2020) found that unlabeled data improves Bayesian uncertainty calibration in some experiments, but the relationship between using unlabeled data and calibration, especially from the theoretical perspective, is still largely unknown. All of the facts above motivate our theoretical exploration in this paper. 2. Preliminaries In this section, we introduce the notations and briefly recap the mathematical formulation of Mixup and the calibration measures considered in this paper. 2.1. Notations We denote the training data set by $S = \\{(x_1, y_1), \\cdots, (x_n, y_n)\\}$, where $x_i \\in \\mathcal{X} \\subseteq \\mathbb{R}^d$ and $y_i \\in \\mathcal{Y} \\subseteq \\mathbb{R}^m$ are drawn i.i.d. from a joint distribution $P_{x,y}$. The general parameterized loss is denoted by $l(\\theta, z)$, where $\\theta \\in \\Theta \\subseteq \\mathbb{R}^p$ and $z_i = (x_i, y_i)$ denotes the input and output pair. Let $L(\\theta) = \\mathbb{E}_{z \\sim P_{x,y}} l(\\theta, z)$ denote the standard population loss and $L^{std}_n(\\theta, S) = \\sum_{i=1}^n l(\\theta, z_i)/n$ denote the standard empirical loss. In addition, we define $\\tilde{x}_{i,j}(\\lambda) = \\lambda x_i + (1 - \\lambda)x_j$, $\\tilde{y}_{i,j}(\\lambda) = \\lambda y_i + (1 - \\lambda)y_j$, and $\\tilde{z}_{i,j}(\\lambda) = (\\tilde{x}_{i,j}(\\lambda), \\tilde{y}_{i,j}(\\lambda))$ for $\\lambda \\in [0, 1]$. We use $tD_1 + (1 - t)D_2$ for $t \\in (0, 1)$ to denote the mixture distribution such that a sample coming from that distribution is drawn with probabilities $t$ and $(1 - t)$ from $D_1$ and $D_2$ respectively. In classification, the output $y_i$ is the embedding of the class of $x_i$; i.e., $y_i \\in \\{0, 1\\}^m$ is the one-hot encoding of the class (with all entries equal to zero except for the one corresponding to the class of $x_i$), where
In classi\ufb01cation, the output yi is the embedding of the class of xi; i.e., yi \u2208{0, 1}m is the one-hot encoding of the class (with all entries equal to zero except for the one corresponding to the class of xi), where \fWhen and How Mixup Improves Calibration m is the total number of classes. 2.2. Mixup Mixup is a data augmentation technique, which linearly interpolates the training sample pairs within the training data set to create a new data set Smix(\u03bb) = {(\u02dc zi,j(\u03bb))}n i,j=1, with \u03bb following a distribution D\u03bb supported on [0, 1]. Throughout the paper, we consider the most commonly used D\u03bb \u2014 the Beta distribution Beta(\u03b1, \u03b2) for \u03b1, \u03b2 > 0. Typically, in a machine learning task, one wants to learn a function f : X \u2192Y from a function class F using the training data set S \u2208(X \u00d7 Y)n. Such a function is usually parametrized as f\u03b8 with some parameter \u03b8. Let us denote the learned parameter by \u02c6 \u03b8 = M(S). In this paper, we consider learning the parameter by the Mixup training M(Smix(\u03bb)). Due to the randomness in \u03bb, we consider taking the expectation over \u03bb. For example, a mapping could either be an estimator, such as the empirical mean of input: M(S) = Pn i=1 xi/n, or be the minimizer of a loss function: M(S, \u03b8) = argmin\u03b8 Pn i=1 l(\u03b8, zi)/n. The corresponding transformed mappings obtained via Mixup are then E\u03bb\u223cD\u03bbM(Smix(\u03bb)) = Pn i,j=1 E\u03bb\u223cD\u03bbxi,j(\u03bb)/n2 and E\u03bb\u223cD\u03bbM(Smix(\u03bb), \u03b8) = Pn i,j=1 E\u03bb\u223cD\u03bbl(\u03b8, \u02dc zi,j(\u03bb))/n2 respectively. 2.3. 
Calibration for classification For a classification problem with $K$ classes, for an input $x$, a probability vector $\\hat{h}(x) = (p_1(x), \\cdots, p_K(x))^\\top \\in \\mathbb{R}^K$ is obtained from the trained model, where $p_i$ is the corresponding probability (or so-called confidence score) that $x$ belongs to class $i$, and $\\sum_{i=1}^K p_i = 1$. The output is then $\\hat{y} = \\mathrm{argmax}_i\\, p_i(x)$. The hope is that, for instance, given 1000 samples, each with confidence 0.7, around 700 examples should be classified correctly. In other words, we expect for all $v \\in [0, 1]$ that $P(\\hat{y} = y \\mid \\hat{p} = v) \\approx v$, where $\\hat{p}$, termed the prediction confidence, is the largest entry in $\\hat{h}(x)$ and $y$ is the true class of $x$. Expected Calibration Error (ECE). The most prevalent calibration metric is the Expected Calibration Error (Naeini et al., 2015), which is defined as $\\mathrm{ECE} = \\mathbb{E}_{v \\sim D_{\\hat{p}}}[|P(\\hat{y} = y \\mid \\hat{p} = v) - v|]$, (1) where $D_{\\hat{p}}$ is the distribution of $\\hat{p}$. While ECE is widely used, we note that recent works (Nixon et al., 2019; Kumar et al., 2019) found that some methods of estimating ECE in practice (such as the binning method) are sometimes undesirable and can produce a biased estimator under some specially constructed data distributions. Throughout this paper, in our theories, we mainly focus on the population version of the calibration error as defined in (1), which does not suffer from any such bias. Maximum Calibration Error (MCE). Another widely used calibration metric is the Maximum Calibration Error (Naeini et al., 2015), which is defined as $\\mathrm{MCE} = \\max_{v \\in [0,1]} |P(\\hat{y} = y \\mid \\hat{p} = v) - v|$. Again, in our theory, we will only consider this population version of MCE. A predictor $\\hat{p}$ with ECE/MCE equal to 0 is said to be perfectly calibrated. 3.
Calibration in Supervised Learning Although Mixup has been shown to improve test accuracy (Zhang et al., 2017; Guo et al., 2019; Zhang et al., 2020), there has been much less understanding of how it affects model calibration (note that models with better test accuracy are not necessarily better calibrated). In this section, we focus on investigating when and how Mixup improves calibration. 3.1. Problem set-up As a confirmation of the phenomenon suggested in Figure 1 in the introduction, our theoretical results demonstrate that Mixup indeed improves calibration, and that the improvement is especially significant in the high-dimensional regime. Here, by the high-dimensional regime, we mean that the number of parameters in the model, $p$, is comparable to the sample size $n$, i.e. $p/n > c$ for some constant $c > 0$. In other words, the improvement in calibration from using Mixup is more significant in the over-parameterized case, or when the ratio between $p$ and $n$ is a constant asymptotically larger than 0. Moreover, we also prove that Mixup helps calibration on out-of-domain data, which is critical for machine learning applications. In order to derive a tractable analysis, we first study the concrete and natural Gaussian model. The Gaussian model is a popular setting for understanding phenomena arising in more complex models, due to its tractability in theory and its ability to partially capture the essence of these phenomena. Indeed, the Gaussian model has been widely used in theoretical investigations of more complex machine learning models such as neural networks in adversarial learning (Schmidt et al., 2018; Carmon et al., 2019; Dan et al., 2020; Deng et al., 2021b). We further extend our analysis to the very flexible Gaussian generative models in Section 3.4. The Gaussian model. We consider a common model used for theoretical machine learning analysis: a mixture of two spherical Gaussians with one component per class (Carmon
et al., 2019): Definition 3.1 (Gaussian model). For $\\theta^* \\in \\mathbb{R}^p$ and $\\sigma > 0$, the $(\\theta^*, \\sigma)$-Gaussian model is defined as the following distribution over $(x, y) \\in \\mathbb{R}^p \\times \\{1, -1\\}$: $x \\mid y \\sim N(y \\cdot \\theta^*, \\sigma^2 I)$, for $i = 1, 2, \\ldots, n$, and $y$ follows the Bernoulli distribution $P(y = 1) = P(y = -1) = 1/2$. For simplicity, we first consider the case where $\\sigma$ is known, and the only unknown parameter is $\\theta^*$. The case where $\\sigma$ is unknown is a special example of the general Gaussian generative model that we will consider in Section 3.4. Algorithms. In this section, we focus on studying the following linear classifier for Gaussian classification. Specifically, the classifier follows the celebrated Fisher's rule (Johnson et al., 2002), also called linear discriminant analysis, which was also considered by Carmon et al. (2019) to study adversarial robustness. The classifier is constructed as $\\hat{C}(x) = \\mathrm{sgn}(\\hat{\\theta}^\\top x)$, (2) where $\\hat{\\theta} = \\sum_{i=1}^n x_i y_i/n$. Given $\\hat{\\theta}$ and $x$, the output $y$ obtained via the classifier $\\hat{C}$ can be equivalently defined by the following process: we first obtain the confidence vector $h(x) = (p_1(x), p_{-1}(x))^\\top$, and then output $y = \\hat{C}(x) = \\mathrm{argmax}_{k \\in \\{-1,1\\}}\\, p_k(x)$. Here, for $k \\in \\{-1, 1\\}$, the confidence score $p_k(x)$ represents an estimator of $P(y = k \\mid x)$ and therefore takes the following form: $p_k(x) = \\frac{1}{e^{-2k \\cdot \\hat{\\theta}^\\top x/\\sigma^2} + 1}$. (3) In comparison, by applying Mixup to the above algorithm, we first obtain $\\{\\tilde{x}_{i,j}(\\lambda), \\tilde{y}_{i,j}(\\lambda)\\}_{i,j=1}^n$, which leads to another classifier $\\hat{C}^{mix}(x) = \\mathrm{sgn}(\\hat{\\theta}_{mix}^\\top x)$, (4) where $\\hat{\\theta}_{mix} = \\mathbb{E}_{\\lambda \\sim D_\\lambda} \\sum_{i,j=1}^n \\tilde{x}_{i,j}(\\lambda)\\tilde{y}_{i,j}(\\lambda)/n^2$.
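The estimators in Eqs. (2)-(4) are simple enough to simulate directly. The following NumPy sketch samples from the Gaussian model of Definition 3.1 and forms both the plain Fisher estimate and an approximation of the Mixup estimate; the dimensions, seed, and the use of one sampled lambda per pair (instead of the exact expectation over lambda) are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 200, 1.0                 # high-dimensional regime: p > n
theta_star = np.ones(p) / np.sqrt(p)

# sample from the Gaussian model: x | y ~ N(y * theta_star, sigma^2 I)
y = rng.choice([-1, 1], size=n)
x = y[:, None] * theta_star + sigma * rng.standard_normal((n, p))

# Fisher's rule without Mixup: theta_hat = (1/n) sum_i x_i y_i
theta_hat = (x * y[:, None]).mean(axis=0)

# Mixup estimate over all pairs: mix inputs and labels with lambda ~ Beta(1, 1)
lam = rng.beta(1.0, 1.0, size=(n, n))
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
x_pairs = lam[..., None] * x[i] + (1 - lam[..., None]) * x[j]
y_pairs = lam * y[i] + (1 - lam) * y[j]
theta_mix = (x_pairs * y_pairs[..., None]).mean(axis=(0, 1))

def confidence(theta, x_new, sigma=1.0):
    # p_k(x) = 1 / (exp(-2k * theta^T x / sigma^2) + 1), as in Eq. (3)
    s = x_new @ theta
    p1 = 1.0 / (np.exp(-2.0 * s / sigma**2) + 1.0)
    return np.stack([p1, 1.0 - p1], axis=-1)   # columns for classes (+1, -1)
```

Running this, the Mixup estimate typically has a smaller norm than the plain Fisher estimate, so its confidence scores are pulled away from 0 and 1, consistent with the shrinkage intuition discussed later in this section.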
Here, given the randomness of $\\lambda$, we take the expectation with respect to $\\lambda$ in the same way as in the previous study (Zhang et al., 2017), though this is unnecessary in our theoretical analysis. The confidence score obtained by $\\hat{C}^{mix}$ can be obtained similarly to that in Eq. (3), with $\\hat{\\theta}$ replaced by $\\hat{\\theta}_{mix}$. 3.2. Mixup helps calibration in classification We follow the convention in high-dimensional statistics, where the parameter dimension $p$ grows along with the sample size $n$, and state our theorems in the large $n, p$ regime where both $n$ and $p$ go to infinity. Throughout the paper, we use the term \u201cwith high probability\u201d to indicate that the event happens with probability at least $1 - o(1)$, where $o(1) \\to 0$ as $n \\to \\infty$ and the randomness is taken over the training data set. In the following, we show that the condition $p/n = \\Omega(1)$ is necessary and that the improvement in calibration from Mixup is a high-dimensional phenomenon. Let us denote the ECE calculated with respect to $\\hat{C}$ and $\\hat{C}^{mix}$ by $\\mathrm{ECE}(\\hat{C})$ and $\\mathrm{ECE}(\\hat{C}^{mix})$ respectively. Our first theorem states that Mixup indeed improves calibration for the above algorithm under the Gaussian model. Theorem 3.1. Under the settings described above, there exist $c_2 > c_1 > 0$ such that when $p/n \\in (c_1, c_2)$ and $\\|\\theta\\|_2 < C$ for some universal constant $C > 0$ (not depending on $n$ and $p$), then for sufficiently large $p$ and $n$, there exist $\\alpha, \\beta > 0$ such that when the distribution $D_\\lambda$ is chosen as $Beta(\\alpha, \\beta)$, with high probability, $\\mathrm{ECE}(\\hat{C}^{mix}) < \\mathrm{ECE}(\\hat{C})$. The above theorem states that when $p$ is comparable to $n$ and $p/n$ is not too small, applying Mixup leads to better calibration than not applying Mixup. In the next theorem, we further demonstrate that the condition that \u201cp and n are comparable\u201d is necessary for Mixup to reach a smaller ECE. Theorem 3.2.
There exists a threshold $\\tau = o(1)$ such that if $p/n \\leq \\tau$ and $\\|\\theta\\|_2 < C$ for some universal constant $C > 0$, then given any constants $\\alpha, \\beta > 0$ (not depending on $n$ and $p$), when $n$ is sufficiently large, we have, with high probability, $\\mathrm{ECE}(\\hat{C}) < \\mathrm{ECE}(\\hat{C}^{mix})$. From Theorem 3.2, we can see that if $p$ is too small compared with $n$, then applying Mixup yields no gain and can even hurt calibration. Usually, in the implementation of Mixup, we first fix $\\alpha$ and $\\beta$ before training, and the above theorem reveals that in the low-dimensional regime, where $p/n$ is sufficiently close to 0, Mixup cannot help calibration with high probability. Moreover, combined with Theorem 3.3 stated below, which characterizes the monotonic relationship between $p/n$ and the improvement brought by Mixup, we can see that Mixup helps calibration more when the dimension is higher. For ease of presentation, for all $\\beta > 0$, let us define $Beta(0, \\beta)$ as the degenerate distribution which takes the value 0 with probability one. We also define $\\hat{C}^{mix}_{\\alpha,\\beta}$ as the classifier where we apply Mixup with distribution $\\lambda \\sim Beta(\\alpha, \\beta)$. Theorem 3.3. For any constant $c_{max} > 0$ and $p/n \\to c_{ratio} \\in (0, c_{max})$, when $\\theta$ is sufficiently large (still of a constant level), we have for any $\\beta > 0$, with high probability, that the change of ECE from using Mixup, characterized by $\\frac{d}{d\\alpha}\\mathrm{ECE}(\\hat{C}^{mix}_{\\alpha,\\beta})\\big|_{\\alpha \\to 0^+}$, is negative, and monotonically decreasing with respect to $c_{ratio}$. In Theorem 3.3, the derivative with respect to $\\alpha$ is interpreted as follows. Since for any $\\beta > 0$, $Beta(0, \\beta)$ is the degenerate distribution at 0, $\\hat{\\theta}_{mix}(0, \\beta)$ corresponds to the output without Mixup. Therefore, increasing $\\alpha$ from 0 to some positive value amounts to applying Mixup.
Thus, Theorem 3.3 suggests that in high-dimensions, increasing the interpolation range in Mixup decreases ECE. Intuition behind our results. In the high-dimensional regime, especially in the over-parameterized case (p > n), the models have more \ufb02exibility to set the con\ufb01dence vectors. For instance, for trained neural networks, the entries of the con\ufb01dence vectors for many data points are all close to zero except for one entry, whose value is close to 1, because the model is trained to memorize the training labels. Mixup mitigates this problem by using linear interpolation that creates one-hot encoding terms with entry values lying between (0, 1), which pushes the value of entries to diverge. This could be partially addressed in our analysis above, as the magnitude of the con\ufb01dence is closely related to \u2225\u02c6 \u03b8\u2225, i.e. when \u2225\u02c6 \u03b8\u2225is large, the con\ufb01dence scores are more likely to be close to 0 or 1. Mixup, as a form of regularization (Zhang et al., 2020), could shrink \u2225\u02c6 \u03b8\u2225and avoid too extreme con\ufb01dence scores. Additional supporting experiments. To complement our theory, we further provide more experimental evidence on popular image classi\ufb01cation data sets with neural networks. We used the fully-connected neural networks with various values of the width (i.e. the number of neurons per hidden layer) and the depth (i.e., the number of hidden layers). For the experiments on the effect of the width, we \ufb01xed the depth to be 8 and varied the width from 10 to 3000. For the experiments on the effect of the depth, the depth was varied from 1 to 24 (i.e., from 3 to 26 layers including input/output layers) by \ufb01xing the width to be 400 with data-augmentation and 80 without data-augmentation. 
We used the following standard data-augmentation operations from torchvision.transforms for both data sets: random crop (via RandomCrop(32, padding=4)) and random horizontal flip (via RandomHorizontalFlip) for each image. In this experiment, we used the standard data sets CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). We used stochastic gradient descent (SGD) with a mini-batch size of 64. We set the learning rate to 0.01 and the momentum coefficient to 0.9. We used the Beta distribution $Beta(\\alpha, \\alpha)$ with $\\alpha = 1.0$ for Mixup. The results are reported in Figures 1 and 2 with a fully-connected neural network. [Figure 2. Expected calibration error (ECE), plotted against network width and depth. Panels: (a), (b): CIFAR-10 without data augmentation; (c), (d): CIFAR-100 with data augmentation; (e), (f): CIFAR-100 without data augmentation.] Consistently across all the experiments, Mixup reduces ECE for larger capacity models and can hurt ECE for small models, which matches our theory. For reasons of space, since our focus is mainly on calibration, the empirical results regarding test accuracy for each figure are deferred to the Appendix. 3.3. Improvement for out-of-domain data In this section, we evaluate the quality of predictive uncertainty on out-of-domain inputs. It has been found empirically that in the out-of-domain setting, Mixup can also enhance the reliability of prediction and boost calibration performance compared to standard training (without Mixup) (Thulasidasan et al., 2019; Tomani & Buettner, 2020). To explain the above phenomenon, using an analysis similar to that for Theorem 3.1, we provide the following theorem. Theorem 3.4.
Let us consider the ECE evaluated on the out-of-domain Gaussian model with mean parameter $\\theta'$, that is, $P(y = 1) = P(y = -1) = 1/2$ and $x \\mid y \\sim N(y \\cdot \\theta', \\sigma^2 I)$, for $i = 1, 2, \\ldots, n$. If we have $(\\theta' - \\theta^*)^\\top \\theta^* \\leq p/(2n)$, then when $p$ and $n$ are sufficiently large, with high probability, $\\mathrm{ECE}(\\hat{C}^{mix}; \\theta', \\sigma) < \\mathrm{ECE}(\\hat{C}; \\theta', \\sigma)$, where $\\mathrm{ECE}(\\cdot\\,; \\theta', \\sigma)$ denotes the expected calibration error calculated with respect to the out-of-domain distribution, i.e., the Gaussian model with parameters $\\theta'$ and $\\sigma$, while $\\hat{C}^{mix}$ and $\\hat{C}$ are still obtained via (2) and (4) from the in-domain training data. The above theorem states the continuity of the boosting effect of Mixup under domain shift: as long as the domain shift is not too large, Mixup still helps calibration. 3.4. Gaussian generative model Now let us consider a more general class of distributions, the Gaussian generative model, which is flexible and has been commonly considered in the machine learning literature. For example, many common deep generative models such as Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) are Gaussian generative models, where the input is a Gaussian sample. Definition 3.2 (Gaussian generative model). For $\\theta^* \\in \\mathbb{R}^p$ and $g : \\mathbb{R}^p \\to \\mathbb{R}^d$ ($d \\geq p$), the $(\\theta^*, g)$-Gaussian model is defined as the following distribution over $(x, y) \\in \\mathbb{R}^d \\times \\{1, -1\\}$: $x = g(z)$, where $z \\mid y \\sim N(y \\cdot \\theta^*, I)$, for $i = 1, 2, \\ldots, n$, and $y$ follows the Bernoulli distribution $P(y = 1) = P(y = -1) = 1/2$. Now suppose we can approximately learn an $h \\in \\{h : h \\circ g \\text{ is the identity mapping on } \\mathbb{R}^p\\}$ such that the estimator $\\hat{h}$ satisfies the following condition. Assumption 3.1.
For any given $v \\in \\mathbb{R}^p$ and $k \\in \\{-1, 1\\}$, there exists a $\\theta^* \\in \\mathbb{R}^p$ such that, given $y = k$, the probability density functions of $R_1 = v^\\top \\hat{h}(x)$ and $R_2 = v^\\top h(x) = v^\\top z \\sim N(k \\cdot v^\\top \\theta^*, \\|v\\|^2)$ satisfy $p_{R_1}(u) = p_{R_2}(u) \\cdot (1 + \\delta_u)$ for all $u \\in \\mathbb{R}$, where $\\delta_u$ satisfies $\\mathbb{E}_{R_1}[|\\delta_u|] = o(1)$ as $n \\to \\infty$. As a special case of Definition 3.2, we consider the Gaussian model with unknown $\\sigma$. Estimating $\\sigma$ by $\\hat{\\sigma} = \\sqrt{\\sum_{i=1}^n \\|x_i - y_i\\hat{\\theta}\\|^2/(pn)}$ will satisfy Assumption 3.1 when $\\|\\theta^*\\| < C$ for some universal constant $C$. In practice, for more general cases, we can learn such an $h$ following the framework of GANs. For example, $\\hat{h} = \\mathrm{argmin}_h \\max_{k \\in \\{-1,1\\}} W(h(x), z \\mid y)$, where $z$ is the Gaussian mixture defined in Definition 3.2 with $\\theta^* = \\mathbf{1}_p/\\sqrt{p}$ and $\\sigma = 1$, and $W(\\cdot, \\cdot)$ denotes the Wasserstein distance. Due to the flexibility of $h$, the choice of $\\theta^*$ and $\\sigma$ will not impact the training process. Now we consider the following two classifiers: $\\hat{C}(x) = \\mathrm{sgn}(\\hat{\\theta}^\\top \\hat{h}(x))$, where $\\hat{\\theta} = \\sum_{i=1}^n \\hat{h}(x_i)y_i/n$, and $\\hat{C}^{mix}(x) = \\mathrm{sgn}(\\hat{\\theta}_{mix}^\\top \\hat{h}(x))$, where $\\hat{\\theta}_{mix} = \\sum_{i,j=1}^n \\mathbb{E}_{\\lambda \\sim D_\\lambda}(\\lambda \\hat{h}(x_i) + (1-\\lambda)\\hat{h}(x_j)) \\cdot \\tilde{y}_{i,j}(\\lambda)/n^2$. Similarly, for a generic $\\hat{\\theta}$, the confidence scores are given by $p_k(x) = 1/(e^{-2k \\cdot \\hat{\\theta}^\\top \\hat{h}(x)} + 1)$. We then have the following result, showing that under the more general Gaussian generative model, Mixup still provably improves calibration. Theorem 3.5.
Under the settings described above with Assumption 3.1, there exists c2 > c1 > 0, when p/n \u2208(c1, c2), \u02c6 h is L-Lipschitz, and \u2225\u03b8\u22252 < C for some universal constants L, C > 0 (not depending on n and p), then for suf\ufb01ciently large p and n, there exist \u03b1, \u03b2 > 0 for the Mixup distribution D\u03bb = Beta(\u03b1, \u03b2), such that, with high probability, ECE( \u02c6 Cmix) < ECE( \u02c6 C). 4. Mixup Improves Calibration in Semi-supervised Learning Data augmentation by incorporating cheap unlabeled data from multiple domains is a powerful way to improve prediction accuracy especially when there is limited labeled data. One of the commonly used semi-supervised learning algorithms is the pseudo-labeling algorithm (Chapelle et al., 2009), which \ufb01rst trains an initial classi\ufb01er \u02c6 Cinit on the labeled data, then assigns pseudo-labels to the unlabeled data using the \u02c6 Cinit. Lastly, using the combined labeled and pseudo-labeled data to perform supervised learning and obtain a \ufb01nal classi\ufb01er \u02c6 Cfinal. Previous work has shown that the pseudo-labeling algorithm has many bene\ufb01ts such as improving prediction accuracy and robustness against adversarial attacks (Carmon et al., 2019). However, as we observe from Figure 3c and 3d, incorporating unlabeled data via the pseudo-labeling algorithm does not always improve calibration; sometimes pseudo-labeling even hurts \fWhen and How Mixup Improves Calibration 0.08 0.10 0.14 0.12 0.04 0.06 0.16 0.02 0.00 0 50 100 150 200 (a) Kuzushiji-MNIST 0.08 0.05 0.07 0.06 0.01 0.02 0.03 0.04 0.00 0 100 50 150 200 (b) Fashion-MNIST 0.14 0.16 0.12 0.10 0.08 0.06 0.04 0.02 0.00 0 100 300 200 400 (c) CIFAR-10 0.40 0.30 0.25 0.20 0.15 0.10 0.05 0.00 0.35 0 50 100 150 200 (d) CIFAR-100 Figure 3. ECE calculated for ResNets on varieties of data sets. 
In (a) and (b), using only the pseudo-labeling algorithm improves calibration, while in (c) and (d) using only the pseudo-labeling algorithm hurts calibration. Further applying Mixup in the last step of the pseudo-labeling algorithm promotes calibration in both cases. The pseudo-labels (or p-labels for short) are inserted into training at the midpoint of the entire training, i.e., at epoch 100 for (a), (b), and (d) and at epoch 200 for (c). calibration. We find that further applying Mixup at the last step of the pseudo-labeling algorithm mitigates this issue and improves calibration, as shown in Figure 3. The details of the experimental setup are included in the Appendix. We justify the empirical findings above by theoretically analyzing calibration in the semi-supervised learning setting. Specifically, we assume we have $n_l$ labeled data points $\\{x_i, y_i\\}_{i=1}^{n_l}$ and $n_u$ unlabeled data points $\\{x^u_i\\}_{i=1}^{n_u}$, i.i.d. sampled from the $(\\theta^*, \\sigma)$-Gaussian model in Definition 3.1. Algorithm 1 The pseudo-labeling algorithm. Step 1: Obtain an initial classifier $\\hat{C}_{init}(x) = \\mathrm{sgn}(\\hat{\\theta}_{init}^\\top x)$, where $\\hat{\\theta}_{init} = \\sum_{i=1}^{n_l} x_i y_i / n_l$. Step 2: Apply $\\hat{C}_{init}$ to the unlabeled data set $\\{x^u_i\\}_{i=1}^{n_u}$ and obtain pseudo-labels $y^u_i = \\hat{C}_{init}(x^u_i)$ for $i \\in [n_u]$. Step 3: Obtain the final classifier $\\hat{C}_{final}(x) = \\mathrm{sgn}(\\hat{\\theta}_{final}^\\top x)$, where $\\hat{\\theta}_{final} = \\frac{1}{n_l + n_u}\\left(\\sum_{i=1}^{n_l} x_i y_i + \\sum_{i=1}^{n_u} x^u_i y^u_i\\right)$. The pseudo-labeling algorithm is the same as the one considered in Carmon et al. (2019) and is shown in Algorithm 1. We then present two theorems. The first theorem demonstrates that when the labeled data are not sufficient, then under some mild conditions the unlabeled data will help calibration. The second theorem characterizes settings where the standard pseudo-labeling algorithm (Algorithm 1) makes calibration worse and increases ECE. Theorem 4.1.
Suppose $C_1 \\sqrt{p/n_l} \\le \\|\\theta\\| \\le C_2 \\sqrt{p/n_l}$ for some universal constants $C_1 < 1/2$ and $C_2 > 2$; when $p/n_l$, $\\|\\theta\\|$, and $n_u$ are sufficiently large, we have, with high probability, $\\mathrm{ECE}(\\hat{C}_{final}) < \\mathrm{ECE}(\\hat{C}_{init})$. Meanwhile, in some cases, for instance when the labeled data are sufficient, the pseudo-labeling algorithm may hurt calibration, as shown in the following theorem. Theorem 4.2. If $C_1 < \\|\\theta\\| < C_2$ and $p < C_3$ for some constants $C_1, C_2, C_3 > 0$, and we let $n_l, n_u \\to \\infty$, then with high probability, $\\mathrm{ECE}(\\hat{C}_{init}) < \\mathrm{ECE}(\\hat{C}_{final})$. The above two theorems suggest that the pseudo-labeling algorithm is not able to robustly guarantee improvement in calibration. In the following, we show that we can mitigate this issue by applying Mixup to the last step of Algorithm 1. Specifically, we consider the following classifier with Mixup: $\\hat{C}_{mix,final}(x) = \\mathrm{sgn}(\\hat{\\theta}_{mix,final}^\\top x)$, where $\\hat{\\theta}_{mix,final} = E_{\\lambda \\sim D_\\lambda}\\left[\\frac{1}{n_l + n_u} \\sum_{i,j=1}^{n_l + n_u} x^{l,u}_{i,j}(\\lambda)\\, y^{l,u}_{i,j}(\\lambda)\\right]$. Here $\\{x^{l,u}_{i,j}(\\lambda), y^{l,u}_{i,j}(\\lambda)\\}_{i,j=1}^{n_l + n_u}$ is the data set obtained by applying Mixup to the pooled data set combining $\\{x_i, y_i\\}_{i=1}^{n_l}$ and $\\{x^u_i, y^u_i\\}_{i=1}^{n_u}$. We then have the following result, showing that Mixup helps calibration in the semi-supervised setting. Theorem 4.3. Under the setup described above, denote the ECE of $\\hat{C}_{final}$ and $\\hat{C}_{mix,final}$ by $\\mathrm{ECE}(\\hat{C}_{final})$ and $\\mathrm{ECE}(\\hat{C}_{mix,final})$, respectively. If $C_1 < \\|\\theta\\| < C_2$ for some universal constants $C_1, C_2$ (not depending on $n$ and $p$), then for sufficiently large $p$ and $n_l, n_u$, there exist $\\alpha, \\beta > 0$ such that when the Mixup distribution is $\\lambda \\sim \\mathrm{Beta}(\\alpha, \\beta)$, with high probability we have $\\mathrm{ECE}(\\hat{C}_{mix,final}) < \\mathrm{ECE}(\\hat{C}_{final})$.
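The Mixup averaging behind the estimator in Theorem 4.3 can be sketched numerically. A minimal Monte Carlo version, assuming raw feature vectors, labels in {-1, +1}, uniformly random pairing, and soft mixed labels (all illustrative choices, not the paper's exact construction):

```python
import numpy as np

# Monte Carlo sketch of a Mixup estimator for a linear classifier:
# theta_mix ~ E_{lam ~ Beta(alpha, beta)} [ mean_i x_mix_i * y_mix_i ],
# pairing each sample i with a uniformly random partner j. The pairing
# scheme and the soft labels y_mix = lam*y_i + (1-lam)*y_j are
# illustrative assumptions.
def mixup_theta(X, y, alpha=1.0, beta=1.0, n_draws=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(n_draws):
        lam = rng.beta(alpha, beta)
        j = rng.integers(0, n, size=n)        # random partner for each sample
        X_mix = lam * X + (1 - lam) * X[j]
        y_mix = lam * y + (1 - lam) * y[j]
        theta += X_mix.T @ y_mix / n
    return theta / n_draws

# the mixed-up linear classifier: C_mix(x) = sign(theta_mix . x)
```

The Beta(alpha, beta) draw matches the role of the Mixup distribution $D_\\lambda$ in the text; the Monte Carlo average stands in for the exact expectation.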
From Theorem 4.3, we can see that even though incorporating unlabeled data can sometimes make the model less calibrated, adding Mixup training (i.e., under the same conditions of either Theorem 4.1 or Theorem 4.2) consistently mitigates this issue and provably improves calibration. 5. Extension to Maximum Calibration Error Here we further investigate how Mixup helps calibration under the maximum calibration error (MCE). Conclusions similar to those for ECE in Section 3.2 can be reached for MCE, demonstrating that the effects of Mixup appear across common calibration metrics. Theorem 5.1. Under the settings described in Theorem 3.1, there exist $c_2 > c_1 > 0$ such that when $p/n \\in (c_1, c_2)$ and $\\|\\theta\\|_2 < C$ for some universal constant $C > 0$ (not depending on $n$ and $p$), then for sufficiently large $p$ and $n$ there exist $\\alpha, \\beta > 0$ such that when the distribution $D_\\lambda$ is chosen as $\\mathrm{Beta}(\\alpha, \\beta)$, with high probability, $\\mathrm{MCE}(\\hat{C}_{mix}) < \\mathrm{MCE}(\\hat{C})$. From the above theorem, we can see that Mixup can also help decrease the maximum calibration error. As with ECE, from Figure 4 we can similarly observe that when the model capacity is small, Mixup does not really help. [Figure 4 panels: MCE versus network depth on (a) CIFAR-10 and (b) CIFAR-100, with and without Mixup; numeric axis ticks omitted.] Figure 4. Maximum Calibration Error (MCE) calculated with varying network depth. Mixup augmentation can reduce MCE, especially for larger-capacity models (deeper networks), compared to the same models trained without Mixup. We here provide the following theorem to further illustrate that point. Theorem 5.2. There exists a threshold $\\tau = o(1)$ such that if $p/n \\le \\tau$ and $\\|\\theta\\|_2 < C$ for some universal constant $C > 0$, then given any constants $\\alpha, \\beta > 0$ (not depending on $n$ and $p$), when $n$ is sufficiently large we have, with high probability, $\\mathrm{MCE}(\\hat{C}) < \\mathrm{MCE}(\\hat{C}_{mix})$.
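The two calibration metrics compared throughout (ECE above, MCE in this section) are both binned statistics of the gap between confidence and accuracy: ECE averages the per-bin gaps weighted by bin mass, while MCE takes the worst bin. A standard binned estimator, sketched with illustrative defaults:

```python
import numpy as np

# Binned estimates of ECE and MCE from predicted confidences and
# correctness indicators, using equal-width bins on [0, 1]. ECE is the
# sample-weighted average per-bin gap |accuracy - confidence|; MCE is
# the worst-bin gap, so ECE <= MCE always holds.
def calibration_errors(conf, correct, n_bins=10):
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf, edges) - 1, 0, n_bins - 1)
    ece, mce = 0.0, 0.0
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue                      # skip empty bins
        gap = abs(correct[mask].mean() - conf[mask].mean())
        ece += mask.mean() * gap          # weight by fraction of samples
        mce = max(mce, gap)
    return ece, mce
```

The number of bins is a tuning choice; the theorems above concern the population versions of these quantities, while this sketch computes their finite-sample plug-ins.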
Lastly, we provide a theorem similar to Theorem 3.3 to further illustrate that Mixup helps in the high-dimensional (overparametrized) regime. Theorem 5.3. For any constant $c_{max} > 0$ and $p/n \\to c_{ratio} \\in (0, c_{max})$, when $\\|\\theta\\|$ is sufficiently large (still of a constant level), we have for any $\\beta > 0$, with high probability, that the change of MCE by using Mixup, characterized by $\\frac{d}{d\\alpha}\\mathrm{MCE}(\\hat{C}^{mix}_{\\alpha,\\beta})\\big|_{\\alpha \\to 0^+}$, is negative and monotonically decreasing with respect to $c_{ratio}$. 6. Conclusion and Discussion Mixup is a popular data augmentation scheme, and it has been empirically shown to improve calibration in machine learning. In this paper, we provide a theoretical point of view on how and when Mixup helps calibration by studying data generative models. We identify that the calibration improvement induced by Mixup is a high-dimensional phenomenon, and that the reduction in ECE becomes more substantial when the dimension is large compared to the number of samples. This suggests that Mixup can be especially helpful for calibration in the low-sample regime, where post-hoc calibration approaches like Platt scaling are not commonly used. We further study the relationship between Mixup and calibration in a semi-supervised setting where there is an abundance of unlabeled data. Using unlabeled data alone can hurt calibration in some settings, while combining Mixup with pseudo-labeling can mitigate this issue. Our work points to a few promising further directions. Since there are many variants of Mixup (Berthelot et al., 2019; Verma et al., 2019; Roady et al., 2020; Kim et al., 2020), it would be interesting to study how these extensions of Mixup affect calibration. Another interesting direction is to use the analysis and framework developed in this paper to study the semi-supervised setting where the unlabeled data come from a different domain than the target one.
It would be interesting to study how the calibration will change by leveraging the out-of-domain unlabeled data. Acknowledgements The research of Linjun Zhang is partially supported by NSF DMS-2015378. The research of Zhun Deng is supported by the Sloan Foundation grants, the NSF grant 1763665, and the Simons Foundation Collaboration on the Theory of Algorithmic Fairness. James Zou is supported by funding from NSF CAREER and the Sloan Fellowship." + }, + { + "url": "http://arxiv.org/abs/2011.03598v1", + "title": "Estimation, Confidence Intervals, and Large-Scale Hypotheses Testing for High-Dimensional Mixed Linear Regression", + "abstract": "This paper studies the high-dimensional mixed linear regression (MLR) where\nthe output variable comes from one of the two linear regression models with an\nunknown mixing proportion and an unknown covariance structure of the random\ncovariates. Building upon a high-dimensional EM algorithm, we propose an\niterative procedure for estimating the two regression vectors and establish\ntheir rates of convergence. Based on the iterative estimators, we further\nconstruct debiased estimators and establish their asymptotic normality. For\nindividual coordinates, confidence intervals centered at the debiased\nestimators are constructed.\n Furthermore, a large-scale multiple testing procedure is proposed for testing\nthe regression coefficients and is shown to control the false discovery rate\n(FDR) asymptotically. Simulation studies are carried out to examine the\nnumerical performance of the proposed methods and their superiority over\nexisting methods. The proposed methods are further illustrated through an\nanalysis of a dataset of multiplex image cytometry, which investigates the\ninteraction networks among the cellular phenotypes that include the expression\nlevels of 20 epitopes or combinations of markers.", + "authors": "Linjun Zhang, Rong Ma, T. 
Tony Cai, Hongzhe Li", "published": "2020-11-06", "updated": "2020-11-06", "primary_cat": "stat.ME", "cats": [ "stat.ME", "stat.ML" ], "main_content": "INTRODUCTION Mixed linear regression (MLR) models are widely used in analyzing heterogeneous data arising from biology, physics, economics, and business (McLachlan and Peel 2004; Grün and Leisch 2007; Netrapalli et al. 2013; Li et al. 2019; Devijver et al. 2020). In many of these modern applications, the number of covariates is comparable with, or sometimes far exceeds, the number of observed samples. In such high-dimensional settings, statistical inference methods designed for estimation and hypothesis testing of the regression coefficients in the classical low-dimensional setting are often not valid. There is a paucity of methods and fundamental theoretical understanding regarding statistical estimation and inference for high-dimensional MLR models. This motivates us to develop computationally efficient and theoretically guaranteed statistical methods for high-dimensional MLR models in analyzing large heterogeneous datasets. We consider the following high-dimensional MLR model, where the observed data are i.i.d. draws from one of two unknown linear models: $y_i = x_i^\\top \\beta_1^* + \\epsilon_i$ with probability $\\omega^*$, and $y_i = x_i^\\top \\beta_2^* + \\epsilon_i$ with probability $1 - \\omega^*$, for $i = 1, 2, \\dots, n$, with $\\beta_1^*, \\beta_2^* \\in \\mathbb{R}^p$ (1.1), where the random design variables $x_i \\in \\mathbb{R}^p$ are i.i.d. samples from $N_p(0, \\Sigma)$ with unknown covariance matrix $\\Sigma$, $\\beta_1^* \\ne \\beta_2^*$ are latent regression coefficients, $\\omega^* \\in (0, 1)$ is the unknown mixing proportion, and the noise terms $\\epsilon_i$ are i.i.d. from $N(0, \\sigma^2)$ for some $\\sigma > 0$. For identifiability, we assume $\\omega^* \\in (1/2, 1)$.
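A quick way to experiment with model (1.1) is to simulate from it directly. A minimal sketch, assuming a Gaussian design and illustrative default parameter values:

```python
import numpy as np

# Simulate n draws from the MLR model (1.1): x_i ~ N_p(0, Sigma); with
# probability omega the response follows beta1, otherwise beta2, plus
# N(0, sigma^2) noise. Defaults (omega=0.7, sigma=1.0, Sigma=I) are
# illustrative choices, not values fixed by the paper.
def sample_mlr(n, beta1, beta2, omega=0.7, sigma=1.0, Sigma=None, seed=0):
    rng = np.random.default_rng(seed)
    beta1 = np.asarray(beta1, dtype=float)
    beta2 = np.asarray(beta2, dtype=float)
    p = beta1.shape[0]
    if Sigma is None:
        Sigma = np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    z = rng.random(n) < omega                 # latent component labels
    B = np.where(z[:, None], beta1, beta2)    # row-wise coefficient choice
    y = np.sum(X * B, axis=1) + sigma * rng.normal(size=n)
    return X, y, z
```

Returning the latent labels z is convenient for checking how well an estimation procedure recovers the mixture structure, even though z is unobserved in the model itself.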
Under the high-dimensional setting where $p$ is much larger than $n$, we aim to answer the following inference questions: 1. What is an efficient algorithm for estimating the underlying regression vectors $\\beta_1^*$ and $\\beta_2^*$ when both the mixing proportion $\\omega^*$ and the random design covariance matrix $\\Sigma$ are unknown? What is the rate of convergence of the estimator? 2. How to construct asymptotically valid tests and confidence intervals for the individual coordinates of the latent regression coefficients $\\beta_1^*$ and $\\beta_2^*$, and their difference $\\beta_1^* - \\beta_2^*$? 3. For simultaneously testing the null hypotheses $H_{0j}: \\beta_{1j}^* = \\beta_{2j}^* = 0$, $j = 1, \\dots, p$, how to construct a large-scale multiple testing procedure that controls the false discovery rate (FDR) and false discovery proportion (FDP) asymptotically? 1.1 Related Works In the classical low-dimensional setting, the problems of estimation, hypothesis testing, and confidence intervals for MLR have been extensively studied in the literature. For example, Zhu and Zhang (2004) considered hypothesis testing and developed an asymptotic theory for both the maximum likelihood and the maximum modified likelihood estimators in MLR. Khalili and Chen (2007) introduced a penalized likelihood approach for variable selection in MLR. Faria and Soromenho (2010) compared three expectation-maximization (EM) algorithms that compute the maximum likelihood estimates of the coefficients of MLR. Chaganty and Liang (2013) developed a computationally efficient algorithm based on the tensor power method and obtained the rates of convergence for their proposed estimator. Bashir and Carter (2012) proposed a robust model that can achieve a high breakdown point in contaminated data for parameter estimation in MLR. Based on a new initialization step for the EM algorithm, Yi et al.
(2014) provided theoretical guarantees for coefficient estimation in MLR. Moreover, Yao and Song (2015) proposed a deconvolution method to study MLR with measurement errors. Zhong et al. (2016) proposed a non-convex continuous objective function for solving the general unbalanced $k$-component MLR. Balakrishnan et al. (2017) developed a general framework for proving rigorous guarantees on the performance of the EM algorithm and applied it to several statistical problems, including the estimation of coefficients in the symmetric MLR where the mixing proportion is known to be $\\omega^* = 1/2$. Li and Liang (2018) presented a fixed-parameter algorithm that solves MLR under Gaussian design in time that is nearly linear in the sample size and the dimension. More recently, building upon the work of Balakrishnan et al. (2017) on the symmetric MLR, McLachlan and Peel (2004) and Klusowski et al. (2019) introduced better tools for analyzing the convergence rates of the EM algorithm for estimating the coefficients. Shen and Sanghavi (2019) proposed an efficient algorithm, Iterative Least Trimmed Squares, for solving MLR with adversarial corruptions. In contrast, statistical inference for MLR in the high-dimensional setting is relatively less studied. Specifically, Städler et al. (2010) proposed an $\\ell_1$-penalized estimator and developed an efficient EM algorithm for MLR with provable convergence properties. Wang et al. (2015) and Yi and Caramanis (2015) established a general theory of the EM algorithm for statistical inference in high-dimensional latent variable models, including the high-dimensional MLR under the symmetric and spherical assumptions. In Zhu et al. (2017), a generic stochastic EM algorithm was proposed for the high-dimensional MLR, with theoretical guarantees obtained under the symmetric setting ($\\omega^* = 1/2$). More recently, Fan et al.
(2018) studied the fundamental tradeoffs between statistical accuracy and computational tractability for high-dimensional latent variable models, including testing the global null hypothesis in MLR. However, problems such as statistical inference about the individual regression coefficients and large-scale multiple testing under the general MLR, with an unknown mixing proportion and an unknown design covariance matrix, have not been addressed in the literature. 1.2 Main Contributions The main contributions of our paper are three-fold. 1. Based on a careful analysis of a high-dimensional EM algorithm, we propose iterative estimators for the regression coefficients $(\\beta_1^*, \\beta_2^*)$ without knowledge of the mixing proportion or the design covariance matrix, and obtain explicit rates of convergence of the iterative estimators under the $\\ell_2$ norm. To the best of our knowledge, this is the first result on estimation for high-dimensional MLR with both an unknown mixing proportion and an unknown design covariance matrix. 2. Further, we construct debiased estimators of the latent regression coefficients, based on the iterative estimators, and establish the asymptotic normality of their individual coordinates. The limiting distribution is then used for constructing confidence intervals and tests for the individual latent regression coefficients. 3. For the problem of large-scale testing of the hypotheses $H_{0j}: \\beta_{1j}^* = \\beta_{2j}^* = 0$, $j = 1, \\dots, p$, we propose a multiple testing procedure that is shown to control the FDR and FDP asymptotically. Strong numerical results suggest the superior empirical performance of our proposed testing procedure over existing methods.
1.3 Organization and Notation Throughout the paper, for a vector $a = (a_1, \\dots, a_n)^\\top \\in \\mathbb{R}^n$, we define the $\\ell_p$ norm $\\|a\\|_p = (\\sum_{i=1}^n a_i^p)^{1/p}$ and the $\\ell_\\infty$ norm $\\|a\\|_\\infty = \\max_{1 \\le i \\le n} |a_i|$; $a_{-j} \\in \\mathbb{R}^{n-1}$ stands for the subvector of $a$ without the $j$-th component. For vectors $a, b \\in \\mathbb{R}^n$, we denote their inner product by $\\langle a, b \\rangle = \\sum_{i=1}^n a_i b_i$. For a matrix $A \\in \\mathbb{R}^{p \\times q}$, $\\lambda_i(A)$ stands for the $i$-th largest singular value of $A$, with $\\lambda_{\\max}(A) = \\lambda_1(A)$ and $\\lambda_{\\min}(A) = \\lambda_{p \\wedge q}(A)$. $\\|A\\|_1$ denotes the matrix $\\ell_1$ norm, and $\\|A\\|_\\infty = \\max_{i,j} |A_{ij}|$. In addition, $A_{-i,-j} \\in \\mathbb{R}^{(p-1) \\times (q-1)}$ stands for the submatrix of $A$ without the $i$-th row and $j$-th column. For any positive integer $p$, we denote $[p] = \\{1, \\dots, p\\}$. Furthermore, for sequences $\\{a_n\\}$ and $\\{b_n\\}$, we write $a_n = o(b_n)$ if $\\lim_n a_n/b_n = 0$, and write $a_n = O(b_n)$, $a_n \\lesssim b_n$, or $b_n \\gtrsim a_n$ if there exists a constant $C$ such that $a_n \\le C b_n$ for all $n$. We also write $a_n = O_P(b_n)$ if there exists a constant $C$ such that $\\liminf_{n \\to \\infty} P(a_n \\le C b_n) = 1$, and $a_n = o_P(b_n)$ if $a_n/b_n \\to 0$ in probability. We write $a_n \\asymp b_n$ if $a_n \\lesssim b_n$ and $a_n \\gtrsim b_n$. For a set $A$, we denote by $|A|$ its cardinality. Lastly, $C, C_0, C_1, \\dots$ are constants that may vary from place to place. The rest of the paper is organized as follows. We propose in Section 2 the iterative algorithm and the estimators of the latent regression coefficients, and study their theoretical properties. Section 3 introduces the debiased estimators of the individual regression coefficients and obtains their asymptotic normality and the resulting confidence intervals. In Section 4, focusing on the problem of testing large-scale simultaneous hypotheses, we present our multiple testing procedure and show that it controls the FDR/FDP asymptotically.
In Section 5, the numerical performance of the proposed methods is evaluated through extensive simulations. In Section 6, the proposed procedures are illustrated by an analysis of a multiplex image cytometry dataset. Further extensions and related problems are discussed in Section 7. The proofs of the other theorems as well as technical lemmas are collected in the Supplementary Materials (Zhang et al. 2020). 2 ITERATIVE ESTIMATION VIA THE EM ALGORITHM Suppose we have $n$ observations $\\{(x_i, y_i)\\}_{i=1}^n$ generated independently from the MLR model in (1.1), and we wish to estimate and make inference on the coefficient vectors $\\beta_1^*$ and $\\beta_2^*$. In the classical setting where $p$ is fixed or much smaller than $n$, the maximum likelihood estimator (MLE) has been shown to perform well under mild conditions (Balakrishnan et al. 2017). The MLE aims to maximize the log-likelihood of the data $\\{(x_i, y_i)\\}_{i=1}^n$, which can be written as $l_n(\\theta; x, y) = \\frac{1}{n} \\sum_{i=1}^n \\log\\left[\\frac{\\omega}{\\sqrt{2\\pi}\\sigma} \\exp\\left\\{-\\frac{(y_i - \\langle x_i, \\beta_1 \\rangle)^2}{2\\sigma^2}\\right\\} + \\frac{1-\\omega}{\\sqrt{2\\pi}\\sigma} \\exp\\left\\{-\\frac{(y_i - \\langle x_i, \\beta_2 \\rangle)^2}{2\\sigma^2}\\right\\}\\right]$ (2.1), where we denote the parameter $\\theta = (\\omega, \\beta_1, \\beta_2)$ and the log-likelihood by $l_n(\\theta)$. Due to the non-convexity of $l_n(\\theta; x, y)$, searching for the MLE is computationally intractable. Moreover, in the high-dimensional setting where the dimension $p$ is much larger than the sample size $n$, the MLE is in general not well defined unless the models are carefully regularized by sparsity-type assumptions. In this paper, we propose to exploit the sparsity of the coefficient vectors. Further, we develop an EM algorithm to address the extra computational challenge for parameter estimation and uncertainty assessment.
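The mixture log-likelihood (2.1) is straightforward to evaluate numerically. A minimal sketch; working on the log scale is an implementation choice for numerical stability, not part of the paper's procedure:

```python
import numpy as np

# Average observed-data log-likelihood l_n(theta) of the two-component
# mixture in (2.1), combining the two component densities on the log
# scale via logaddexp to avoid underflow for large residuals.
def mlr_loglik(omega, beta1, beta2, sigma, X, y):
    const = -0.5 * np.log(2 * np.pi * sigma ** 2)
    la = const - (y - X @ beta1) ** 2 / (2 * sigma ** 2)  # log N(y; x'b1, s^2)
    lb = const - (y - X @ beta2) ** 2 / (2 * sigma ** 2)  # log N(y; x'b2, s^2)
    return np.logaddexp(np.log(omega) + la, np.log(1 - omega) + lb).mean()
```

This evaluates the objective only; as the text notes, maximizing it directly is intractable because of non-convexity, which is why the EM iterations of Section 2.1 are used instead.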
2.1 High-Dimensional EM Algorithm and the Iterative Estimators For ease of presentation, let us use $z_i$ to denote the hidden label of $(x_i, y_i)$; that is, $z_i = 1$ if $(x_i, y_i)$ is drawn from the first model $y_i = x_i^\\top \\beta_1^* + \\epsilon_i$, and $z_i = 2$ if the underlying truth is the second model $y_i = x_i^\\top \\beta_2^* + \\epsilon_i$. The marginal distribution of $z_i$ is given by $P(z_i = 1) = 1 - P(z_i = 2) = \\omega$. The EM algorithm is essentially an alternating maximization method, which alternates between the identification of the hidden labels $\\{z_i\\}_{i=1}^n$ and the estimation of the parameter $\\theta = (\\omega, \\beta_1, \\beta_2)$. Specifically, in the E-step of the $(t+1)$-th iteration, given the parameters $\\theta^{(t)} = (\\omega^{(t)}, \\beta_1^{(t)}, \\beta_2^{(t)})$ estimated in the previous step, the conditional probability that the $i$-th sample belongs to class 1 given the observed data $(x_i, y_i)$ can be calculated as $\\gamma_{\\theta^{(t)}}(x_i, y_i) := P_{\\theta^{(t)}}(z_i = 1 \\mid x_i, y_i) = \\frac{\\omega^{(t)} \\exp(-\\frac{(y_i - \\langle x_i, \\beta_1^{(t)} \\rangle)^2}{2\\sigma^2})}{\\omega^{(t)} \\exp(-\\frac{(y_i - \\langle x_i, \\beta_1^{(t)} \\rangle)^2}{2\\sigma^2}) + (1 - \\omega^{(t)}) \\exp(-\\frac{(y_i - \\langle x_i, \\beta_2^{(t)} \\rangle)^2}{2\\sigma^2})}$ (2.2). As a result, the conditional expectation of the log-likelihood (2.1), with respect to the conditional distribution of the labels given $(x, y)$ under the current estimate $\\theta^{(t)}$, can be calculated as $Q_n(\\theta \\mid \\theta^{(t)}) := E_{\\theta^{(t)}}[l_n(\\theta; x, y) \\mid x, y] = -\\frac{1}{2n}\\left[\\sum_{i=1}^n \\gamma_{\\theta^{(t)}}(x_i, y_i)(y_i - \\langle x_i, \\beta_1 \\rangle)^2 + \\sum_{i=1}^n (1 - \\gamma_{\\theta^{(t)}}(x_i, y_i))(y_i - \\langle x_i, \\beta_2 \\rangle)^2\\right] + \\frac{1}{n}\\sum_{i=1}^n \\left[(1 - \\gamma_{\\theta^{(t)}}(x_i, y_i)) \\log(1 - \\omega) + \\gamma_{\\theta^{(t)}}(x_i, y_i) \\log \\omega\\right]$ (2.3). Given $\\gamma_{\\theta^{(t)}}(x_i, y_i)$, i.e., the distribution of the latent labels, the M-step usually proceeds by maximizing $Q_n(\\theta \\mid \\theta^{(t)})$: $\\hat{\\theta}^{(t+1)} = \\arg\\max_\\theta Q_n(\\theta \\mid \\theta^{(t)})$.
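The E-step weights (2.2) can be computed in a few lines. A sketch that evaluates the ratio through its log-odds, an implementation choice for numerical stability rather than part of the formula itself:

```python
import numpy as np

# E-step responsibilities gamma_i = P(z_i = 1 | x_i, y_i) from (2.2).
# The ratio of Gaussian kernels is evaluated through its log-odds so
# that large residuals do not underflow both exponentials to zero.
def e_step(omega, beta1, beta2, sigma, X, y):
    la = -(y - X @ beta1) ** 2 / (2 * sigma ** 2)
    lb = -(y - X @ beta2) ** 2 / (2 * sigma ** 2)
    logit = np.log(omega) - np.log(1 - omega) + la - lb
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid of the log-odds
```

A useful sanity check: when beta1 equals beta2, the two component densities coincide and every responsibility reduces to the prior weight omega.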
However, in the high-dimensional setting, such a maximization tends to overfit the data. To handle the challenge of high dimensionality, the key ingredient of our algorithm is to add a regularization term $\\|\\beta\\|_1$ to enforce sparsity. In particular, we write $\\gamma_{\\theta,i}^{(t)} = \\gamma_{\\theta^{(t)}}(x_i, y_i)$, and let $\\hat{\\beta}_1^{(t+1)} = \\arg\\min_{\\beta_1} \\frac{1}{2n} \\sum_{i=1}^n \\gamma_{\\theta,i}^{(t)} (y_i - \\langle x_i, \\beta_1 \\rangle)^2 + \\lambda_n^{(t+1)} \\|\\beta_1\\|_1$ and $\\hat{\\beta}_2^{(t+1)} = \\arg\\min_{\\beta_2} \\frac{1}{2n} \\sum_{i=1}^n (1 - \\gamma_{\\theta,i}^{(t)})(y_i - \\langle x_i, \\beta_2 \\rangle)^2 + \\lambda_n^{(t+1)} \\|\\beta_2\\|_1$ (2.4), where $\\lambda_n^{(t+1)}$ is a tuning parameter that will be updated recursively and specified later. We also update $\\omega^{(t+1)}$ by $\\omega^{(t+1)} = \\frac{1}{n} \\sum_{i=1}^n \\gamma_{\\theta^{(t)}}(x_i, y_i)$. Given a suitable initialization, the proposed high-dimensional EM algorithm then proceeds by iterating between the E-step and the M-step, as summarized in the following Algorithm 1. Algorithm 1 EM for High-Dimensional MLR 1: Inputs: initializations $\\hat{\\omega}^{(0)}, \\hat{\\beta}_1^{(0)}, \\hat{\\beta}_2^{(0)}$, maximum number of iterations $T$, and constants $\\kappa \\in (0, 1)$, $C_\\lambda > 0$. Split the data set into $T$ subsets of size $n/T$. For $i \\in [n]$, set $\\gamma_{\\theta,i}^{(0)} = \\frac{\\hat{\\omega}^{(0)} \\exp(-\\frac{(y_i - \\langle x_i, \\hat{\\beta}_1^{(0)} \\rangle)^2}{2\\sigma^2})}{\\hat{\\omega}^{(0)} \\exp(-\\frac{(y_i - \\langle x_i, \\hat{\\beta}_1^{(0)} \\rangle)^2}{2\\sigma^2}) + (1 - \\hat{\\omega}^{(0)}) \\exp(-\\frac{(y_i - \\langle x_i, \\hat{\\beta}_2^{(0)} \\rangle)^2}{2\\sigma^2})}$. 2: for $t = 0, 1, \\dots, T-1$ do 3: E-Step: evaluate $Q_n(\\theta \\mid \\hat{\\theta}^{(t)})$ as defined in (2.3) with the $t$-th data subset.
4: M-Step: update $\\hat{\\beta}_1^{(t+1)}$ and $\\hat{\\beta}_2^{(t+1)}$ via $\\hat{\\beta}_1^{(t+1)} = \\arg\\min_{\\beta_1} \\frac{1}{2n} \\sum_{i=1}^n \\gamma_{\\theta,i}^{(t)} (y_i - \\langle x_i, \\beta_1 \\rangle)^2 + \\lambda_n^{(t+1)} \\|\\beta_1\\|_1$ and $\\hat{\\beta}_2^{(t+1)} = \\arg\\min_{\\beta_2} \\frac{1}{2n} \\sum_{i=1}^n (1 - \\gamma_{\\theta,i}^{(t)})(y_i - \\langle x_i, \\beta_2 \\rangle)^2 + \\lambda_n^{(t+1)} \\|\\beta_2\\|_1$ (2.5), with $\\lambda_n^{(t+1)} = \\kappa_\\lambda \\lambda_n^{(t)} + C_\\lambda \\sqrt{\\frac{\\log p}{n}}$ (2.6). Update $\\hat{\\omega}^{(t)}$ via $\\hat{\\omega}^{(t)} = \\frac{1}{n} \\sum_{i=1}^n \\gamma_{\\theta,i}^{(t)}$. 5: end for 6: Output $\\hat{\\beta}_1^{(T)}$ and $\\hat{\\beta}_2^{(T)}$. Remark 1. In the above algorithm, it is required that the noise level $\\sigma^2$ be known. Such an assumption is widely used in the prior literature on mixed linear regression; see Wang et al. (2015), Balakrishnan et al. (2017), Klusowski et al. (2019), and the references therein. In practice, a good estimator of $\\sigma^2$ can be substituted into the algorithm to achieve desirable empirical performance. See Section 5 for more detailed numerical justifications. 2.2 Rate of Convergence In this section, we give theoretical guarantees for estimating the coefficient vectors $\\beta_1^*$ and $\\beta_2^*$ using Algorithm 1. To begin with, we introduce the parameter space for $(\\omega^*, \\beta_1^*, \\beta_2^*)$, where we assume that $\\beta_1^*$ and $\\beta_2^*$ are both sparse vectors and $\\omega^*$ is bounded away from 0 and 1: $\\Theta(s) = \\{(\\omega, \\beta_1, \\beta_2) : \\omega \\in (c, 1-c), \\ \\|\\beta_1\\|_0, \\|\\beta_2\\|_0 \\le s, \\text{ for some } c \\in (0, 1/2)\\}$. Furthermore, we introduce the following regularity conditions on the initialization and the signal-to-noise ratio (SNR) strength.
(A1) : Initialization: \u2225\u03b2(0) 1 \u2212\u03b2\u2217 1\u22252 + \u2225\u03b2(0) 2 \u2212\u03b2\u2217 2\u2225+ |\u03c9(0) \u2212\u03c9\u2217| \u2264min{\u03c9\u2217/2, (1 \u2212\u03c9\u2217)/2, cl \u00b7 \u2206\u2217}, where \u2206\u2217= p (\u03b2\u2217 1 \u2212\u03b2\u2217 2)\u22a4\u03a3\u22121(\u03b2\u2217 1 \u2212\u03b2\u2217 2); (A2) : SNR strength: (\u03b2\u2217 1 \u2212\u03b2\u2217 2)\u22a4\u03a3\u22121(\u03b2\u2217 1 \u2212\u03b2\u2217 2) \u2265cs, where cl, cs > 0 are some universal constants that and do not grow with n or p. The (A1) suggests the initialized estimator should be closed to the truth. Such a condition is common in the literature of mixed linear regression, see Balakrishnan et al. (2017); Yi et al. (2014); Wang et al. (2015); Yi and Caramanis (2015). In practice, our initialization algorithm is discussed in Section 5. Condition (A2) has also been commonly used in the literature of mixed linear regression (Balakrishnan et al. 2017; Klusowski et al. 2019), and other hidden variable models (Wang et al. 2015; Cai et al. 2019). Theorem 1. Suppose 1/M < \u03bbmin(\u03a3) \u2264\u03bbmax(\u03a3) < M for some constant M > 1 and conditions (A1) and (A2) hold. If s log p\u00b7 log n n = o(1), and cl is suf\ufb01ciently large, that is, cl \u2265C(\u03c9\u2217, M, cl) where C(\u03c9\u2217, M, cl) is a constant depending only on (\u03c9\u2217, M, cl). Then T \u2273log n, we have with probability at least 1 \u2212p\u22121, \u2225\u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1\u22252 + \u2225\u02c6 \u03b2(T) 2 \u2212\u03b2\u2217 2\u22252 \u2272 r s log p\u00b7 log n n . Remark 2. The condition on the spectrum of \u03a3 is standard in the high-dimensional literature. For example, it has been used in Cai et al. (2016); Cai and Zhou (2012) and Javanmard and Montanari (2014a) for estimation of precision matrices, covariance matrices and regression coef\ufb01cients, respectively. 
Further, the convergence rate of the optimization error is exponentially fast, so we only need the number of iterations $T \\gtrsim \\log n$ to make the optimization error negligible compared to the statistical error. 3 DEBIASED ESTIMATORS AND THEIR ASYMPTOTIC NORMALITY The iterative estimators obtained from the high-dimensional EM algorithm (Algorithm 1) enjoy desirable properties in terms of squared error; they are, however, unsuitable to be used directly for statistical inference. In this section, we introduce the debiased estimators for the mixed linear regression coefficients $\\beta_{\\ell j}^*$ with $\\ell \\in \\{1, 2\\}$ and $j \\in [p]$, and obtain their asymptotic normality, which can then be used to perform hypothesis testing and construct confidence intervals for the individual coefficients. 3.1 Debiased Estimators Due to the $\\ell_1$ regularization in the M-step, the outputs $\\hat{\\beta}_1^{(T)}$ and $\\hat{\\beta}_2^{(T)}$ from the high-dimensional EM algorithm (Algorithm 1) are biased. To facilitate the subsequent statistical inference, we proceed by correcting their biases. Such a debiasing procedure has been widely used in high-dimensional single linear regression models (Javanmard and Montanari 2014a,b; van de Geer et al. 2014; Zhang and Zhang 2014; Ning and Liu 2017), but cannot be directly applied to the EM solutions. In the following, we first present some high-level intuition. We start with the regression coefficient $\\beta_1^*$. Note that in Algorithm 1, $\\hat{\\beta}_1^{(T)}$ is constructed only based on the $T$-th data subset, whose sample size is $n/T$ with $T \\asymp \\log n$. In the following, for notational simplicity, we simply write $n_T = n/T$.
Firstly, \u02c6 \u03b2(T) 1 satis\ufb01es the Karush-Kuhn-Tucker (KKT) condition \u22121 n nT X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u02c6 \u03b2(T) 1 \u27e9)xi + \u03bb(T) n \u2202|| \u02c6 \u03b2(T) 1 ||1 = 0, (3.1) where \u2202|| \u02c6 \u03b2(T) 1 ||1 is the subgradient of the \u21131 norm || \u00b7 ||1. Letting b \u03a3XX = 1 nT PnT i=1 \u03b3(T) \u03b8,i xix\u22a4 i and b \u03a3XY = 1 nT PnT i=1 \u03b3(T) \u03b8,i xiyi, equation (3.1) can then be rewritten as b \u03a3XX \u02c6 \u03b2(T) 1 \u2212b \u03a3XY + \u03bb(T) n \u2202|| \u02c6 \u03b2(T) 1 ||1 = 0, and as a result, b \u03a3XX( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1) + \u03bb(T) n \u2202|| \u02c6 \u03b2(T) 1 ||1 = b \u03a3XY \u2212b \u03a3XX\u03b2\u2217 1. 9 \fFollowing the debiased Lasso method in Javanmard and Montanari (2014a), suppose one has a good approximation of the \u201cinverse\u201d of b \u03a3XX, say M, then one can multiply M on the left to obtain M b \u03a3XX( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1) + \u03bb(T) n M\u2202|| \u02c6 \u03b2(T) 1 ||1 = M(b \u03a3XY \u2212b \u03a3XX\u03b2\u2217 1). Then it follows ( \u02c6 \u03b2(T) 1 + \u03bb(T) n M\u2202|| \u02c6 \u03b2(T) 1 ||1) \u2212\u03b2\u2217 1 = M(b \u03a3XY \u2212b \u03a3XX\u03b2\u2217 1) + (I \u2212M b \u03a3XX)( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1). 
(3.2) By inspection, if we let b \u03b2u 1 = \u02c6 \u03b2(T) 1 + \u03bb(T) n M\u2202|| \u02c6 \u03b2(T) 1 ||1, then \u221an( b \u03b2u 1 \u2212\u03b2\u2217 1) =\u221an(M b \u03a3XY \u2212M b \u03a3XX\u03b2\u2217 1) + \u221an(I \u2212M b \u03a3XX)( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1) =\u221an \u00141 n n X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)Mxi \u0015 + oP(1), where the second equality incorporated the assumption that M approximate the \u201cinverse\u201d of b \u03a3XX well and thus \u2225(I \u2212M b \u03a3XX)( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1)\u2225\u221e\u2264\u2225I \u2212M b \u03a3XX\u2225\u221e\u2225\u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1\u22251 is negligible. Unlike the procedure in Javanmard and Montanari (2014a), our \u02c6 \u03a3XX depends on (xi, yi) instead of only on xi\u2019s, and therefore solving a direct approximation will mess up with the subsequent asymptotic normality. We propose to solve M by the following two-step procedure. First, let \u02dc \u03a3XX = 1 n Pn i=1 xix\u22a4 i , for j \u2208[p], let \u02dc mj be the solution of minimize mj\u2208Rp m\u22a4 j \u02c6 \u03a3XXmj subject to ||\u02c6 \u03a3XXmj \u2212e(p) j ||\u221e\u2264\u00b5, \u2225mj\u2225\u2264C p log n. (3.3) where \u00b5 and C are tuning parameters that will be discussed later. Second we set mj = \u02dc mj/\u02c6 \u03c9(T), with \u02c6 \u03c9(T) = \u02c6 \u03c9(T) 1 n Pn i=1 \u03b3(T) \u03b8,i . The denominator is used because \u02dc \u03a3XX is approximately \u03c9\u2217\u00b7 \u02c6 \u03a3XX. Although \u02c6 \u03c9(T) still depends on (xi, yi), but it is close to \u03c9\u2217and the distance is negligible. Now, to \ufb01nd the asymptotic distribution of \u221anT( b \u03b2u 1 \u2212\u03b2\u2217 1), let us consider the dominating term 1 \u221anT nT X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)Mxi. 
(3.4)

By inspection, conditioning on $x$, (3.4) is an approximation of a linear transformation of the score function $\nabla_{\theta^*}l_n(\theta^*;x,y)$. Specifically, straightforward computation yields the score function
$$\nabla_{\beta_1}l_n(\theta^*;x,y)=\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma_{\theta^*,i}(y_i-\langle x_i,\beta_1^*\rangle)x_i. \quad (3.5)$$
By Theorem 1, $\theta^{(T)}$ is close to $\theta^*$, so heuristically, (3.4) is also close to a linear transformation of the score function (3.5) when $n$ is large. Moreover, since the score function $\nabla l_n(\theta;x,y)$ is asymptotically normal at the truth $\theta=\theta^*$ with covariance matrix given by the information matrix $I(\theta^*)=-\mathbb{E}_{\theta^*}\nabla^2 l_n(\theta^*;x,y)$, in order to make valid inference about the parameters, it suffices to estimate the information matrix. The following Lemma 1 provides an estimator for the information matrix and the corresponding asymptotic distribution of (3.4).

Lemma 1. Recall $\theta=(\omega,\beta_1,\beta_2)$, and denote $T_n(\theta)=-\bigl(\nabla^2_\theta Q_n(\theta\mid\theta')+\nabla^2_{\theta,\theta'}Q_n(\theta\mid\theta')\big|_{\theta,\theta'=\theta}\bigr)$. Assume the same conditions as in Theorem 1, and that $\frac{s\log p\log n\cdot(\sqrt{s}\vee\log^2 p)}{\sqrt{n}}=o(1)$. Let $T_{\beta,n}(\theta)=(T_n(\theta))_{-1,-1}\in\mathbb{R}^{2p\times 2p}$. Then for any $j\in[p]$, conditional on $X$, we have
$$\frac{\langle m_j,\frac{1}{\sqrt{n_T}}\sum_{i=1}^{n_T}\gamma_{\theta,i}^{(T)}(y_i-\langle x_i,\beta_1^*\rangle)x_i\rangle}{\sqrt{m_j^\top(T_{\beta,n}(\hat{\theta}^{(T)}))_{1,1}m_j}}\xrightarrow{d}N(0,1), \quad (3.6)$$
where $(T_{\beta,n}(\hat{\theta}^{(T)}))_{1,1}$ is defined in (3.7). Similar arguments apply to the regression coefficient $\beta_2^*$. In Algorithm 2, we summarize our proposed method for obtaining the debiased estimators $\hat{\beta}_1^u$ and $\hat{\beta}_2^u$.
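As a concrete illustration of the one-step correction above, the following minimal numpy sketch applies the debiasing to a toy weighted regression. It is a simplification for illustration, not the paper's procedure: the responsibilities $\gamma$ are all ones (collapsing the mixture to a single component), the Lasso solution is replaced by an artificially biased estimate, and $M$ is the exact inverse of the weighted Gram matrix rather than the solution of (3.3).

```python
import numpy as np

# Toy weighted regression; gamma is all ones, which collapses the mixture
# to a single component (a deliberate simplification for illustration).
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_star = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
y = X @ beta_star + 0.1 * rng.normal(size=n)
gamma = np.ones(n)

Sigma_xx = (gamma[:, None] * X).T @ X / n   # weighted Gram matrix
Sigma_xy = X.T @ (gamma * y) / n
beta_hat = beta_star + 0.05                 # stand-in for a biased Lasso solution
M = np.linalg.inv(Sigma_xx)                 # exact "inverse"; the paper solves (3.3)

# One-step correction in its plug-in form (for an exact Lasso solution this
# equals lam * M @ subgrad by the KKT condition):
beta_u = beta_hat + M @ (Sigma_xy - Sigma_xx @ beta_hat)
```

The correction removes most of the artificial bias: `beta_u` is markedly closer to `beta_star` than `beta_hat` is.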
The following theorem establishes the asymptotic normality of the individual components of these debiased estimators, which can be directly used for performing hypothesis testing or constructing confidence intervals.

Theorem 2. Assume the same conditions as in Theorem 1. We further assume that $\|\Sigma\|_2,\|\Sigma^{-1}\|_1\le L$ for some $L>0$ and $\frac{s\log p\log n\cdot(\sqrt{s}\vee\log^2 p)}{\sqrt{n}}=o(1)$, and that the tuning parameters satisfy $\mu=C'\sqrt{\frac{\log p\log n}{n}}$ and $C=L$. Then for any $j\in[p]$ and $\ell=1,2$, conditional on $x$, as $n\to\infty$,
$$\frac{\sqrt{n_T}\bigl(\hat{\beta}^u_{\ell j}-\beta^*_{\ell j}\bigr)}{\sqrt{\hat{v}_{\ell j}}}\xrightarrow{d}N(0,1).$$

Algorithm 2 De-biasing EM for High-Dimensional MLR
1: Inputs: $\gamma_{\theta,i}^{(T)}$, $\hat{\beta}_1^{(T)}$, $\hat{\beta}_2^{(T)}$ and $\lambda_n^{(T)}$ from Algorithm 1, $j\in[p]$, tuning parameter $\mu$, and coverage probability $1-\alpha$.
2: Precision matrix approximation: Let $\hat{\omega}^{(T)}=\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma_{\theta,i}^{(T)}$ and $\tilde{\Sigma}_{XX}=\frac{1}{n_T}\sum_{i=1}^{n_T}x_ix_i^\top$, and for $j\in[p]$, let $\tilde{m}_j$ be the solution of
$$\text{minimize}_{m_j\in\mathbb{R}^p}\ m_j^\top\tilde{\Sigma}_{XX}m_j\quad\text{subject to}\quad\|\tilde{\Sigma}_{XX}m_j-e_j^{(p)}\|_\infty\le\mu;\ \|m_j\|_1\le C\sqrt{\log n}.$$
Let $m_j=\tilde{m}_j/\hat{\omega}^{(T)}$.
3: De-biasing: For $\ell=1,2$, let $\hat{\beta}^u_{\ell j}=\hat{\beta}^{(T)}_{\ell j}+\lambda_n^{(T)}m_j^\top\partial\|\hat{\beta}^{(T)}_\ell\|_1$.
4: Variance estimation: For $\ell=1,2$, let $\hat{v}_{\ell j}=m_j^\top\bigl((T_n(\hat{\theta}^{(T)}))_{\ell,\ell}\bigr)m_j$, where $(T_n(\hat{\theta}))_{\ell,\ell}$ are defined in (3.7) and (3.8).
5: Output $(\hat{\beta}^u_{1j},\hat{v}_{1j})$ and $(\hat{\beta}^u_{2j},\hat{v}_{2j})$.

In MLR, it is sometimes also of interest to know in which features the associations between $y$ and $x$ differ. In other words, one would like to make inference about the differential parameter $(\beta^*_{1j}-\beta^*_{2j})$ for some given $j\in[p]$.
Towards this end, consider its natural estimator $(\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})$. It can be shown that a consistent estimator for the variance of $(\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})$ is given by
$$\tilde{v}_j=m_j^\top\Bigl((T_{\beta,n}(\hat{\theta}^{(T)}))_{1,1}+(T_{\beta,n}(\hat{\theta}^{(T)}))_{2,2}-(T_{\beta,n}(\hat{\theta}^{(T)}))_{1,2}-(T_{\beta,n}(\hat{\theta}^{(T)}))_{2,1}\Bigr)m_j,$$
where
$$(T_{\beta,n}(\hat{\theta}^{(T)}))_{1,1}=\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma^{(T)}_{\theta,i}x_ix_i^\top+\frac{2}{n_T}\sum_{i=1}^{n_T}\frac{(y_i-\langle x_i,\hat{\beta}_1^{(T)}\rangle)^2}{\eta(\hat{\theta}^{(T)})}x_ix_i^\top, \quad (3.7)$$
$$(T_{\beta,n}(\hat{\theta}^{(T)}))_{2,2}=\frac{1}{n_T}\sum_{i=1}^{n_T}(1-\gamma^{(T)}_{\theta,i})x_ix_i^\top+\frac{2}{n_T}\sum_{i=1}^{n_T}\frac{(y_i-\langle x_i,\hat{\beta}_2^{(T)}\rangle)^2}{\eta(\hat{\theta}^{(T)})}x_ix_i^\top, \quad (3.8)$$
$$(T_{\beta,n}(\hat{\theta}^{(T)}))_{2,1}=(T_{\beta,n}(\hat{\theta}^{(T)}))_{1,2}=\frac{2}{n_T}\sum_{i=1}^{n_T}\frac{(\langle x_i,\hat{\beta}_1^{(T)}\rangle-y_i)\cdot(y_i-\langle x_i,\hat{\beta}_2^{(T)}\rangle)}{\eta(\hat{\theta}^{(T)})}x_ix_i^\top, \quad (3.9)$$
and
$$\eta(\hat{\theta}^{(T)})=\sigma^2\Bigl[\hat{\omega}^{(T)}+(1-\hat{\omega}^{(T)})\exp\Bigl\{\frac{(2y_i-\langle x_i,\hat{\beta}_1^{(T)}+\hat{\beta}_2^{(T)}\rangle)\cdot\langle x_i,\hat{\beta}_2^{(T)}-\hat{\beta}_1^{(T)}\rangle}{2\sigma^2}\Bigr\}\Bigr]\times\Bigl[1-\hat{\omega}^{(T)}+\hat{\omega}^{(T)}\exp\Bigl\{-\frac{(2y_i-\langle x_i,\hat{\beta}_1^{(T)}+\hat{\beta}_2^{(T)}\rangle)\cdot\langle x_i,\hat{\beta}_2^{(T)}-\hat{\beta}_1^{(T)}\rangle}{2\sigma^2}\Bigr\}\Bigr]. \quad (3.10)$$
Similar to Theorem 2, the following theorem establishes the asymptotic normality of the estimator $(\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})$.

Theorem 3. Assume the same conditions as in Theorem 1. We further assume that $\|\Sigma^{-1}\|_1\le L$ for some $L>0$ and $\frac{s\log p\log n}{\sqrt{n}}\to 0$. Then for any $j\in[p]$, as $n\to\infty$, conditional on $x$,
$$\frac{\sqrt{n_T}\bigl((\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})-(\beta^*_{1j}-\beta^*_{2j})\bigr)}{\sqrt{\tilde{v}_j}}\xrightarrow{d}N(0,1).$$
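The variance of the differential estimator combines the four blocks of $T_{\beta,n}$ exactly as in the display for $\tilde{v}_j$. A small numpy sketch with hypothetical block matrices (not estimated from data) illustrates the sanity check that, when the cross blocks vanish, the variance of the difference reduces to the sum of the two individual variances.

```python
import numpy as np

def diff_variance(m_j, T11, T22, T12, T21):
    """Variance estimate for the debiased difference: m_j^T (T11 + T22 - T12 - T21) m_j."""
    return m_j @ (T11 + T22 - T12 - T21) @ m_j

# Hypothetical blocks: individual variances 2 and 3, zero cross blocks.
p = 3
m = np.array([1.0, 0.0, 0.0])
v = diff_variance(m, 2.0 * np.eye(p), 3.0 * np.eye(p),
                  np.zeros((p, p)), np.zeros((p, p)))
```

With uncorrelated components, `v` equals 2 + 3 = 5; nonzero cross blocks would shrink or inflate it accordingly.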
3.2 Asymptotic Confidence Intervals

Given the asymptotic normality established in Theorems 2 and 3, we are now ready to present the confidence intervals for the individual coordinates $\beta_{lj}$ for $j\in[p]$ and $l=1,2$, and for the differential parameters $(\beta^*_{1j}-\beta^*_{2j})$ for $j\in[p]$. Specifically, let
$$I^{(ind)}_{lj}=\bigl[\hat{\beta}^u_{\ell j}-z_{\alpha/2}\sqrt{\hat{v}_{\ell j}},\ \hat{\beta}^u_{\ell j}+z_{\alpha/2}\sqrt{\hat{v}_{\ell j}}\bigr],\quad\text{for }j\in[p]\text{ and }l=1,2,$$
and
$$I^{(dif)}_{j}=\bigl[(\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})-z_{\alpha/2}\sqrt{\tilde{v}_j},\ (\hat{\beta}^u_{1j}-\hat{\beta}^u_{2j})+z_{\alpha/2}\sqrt{\tilde{v}_j}\bigr],\quad\text{for }j\in[p],$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution. The following theorem provides the asymptotic guarantee for the validity of these confidence intervals.

Theorem 4. Under the conditions of Theorem 2, the confidence intervals $I^{(ind)}_{lj}$ and $I^{(dif)}_{j}$ are asymptotically valid, that is,
$$\lim_{n\to\infty}P(\beta_{lj}\in I^{(ind)}_{lj})=1-\alpha,\ \text{for }j\in[p]\text{ and }l=1,2;\qquad \lim_{n\to\infty}P(\beta_{1j}-\beta_{2j}\in I^{(dif)}_{j})=1-\alpha,\ \text{for }j\in[p].$$

4 LARGE-SCALE MULTIPLE TESTING

4.1 The Multiple Testing Procedure

In this section, we consider simultaneous testing of the following null hypotheses
$$H_{0j}:\beta^*_{1j}=\beta^*_{2j}=0,\quad 1\le j\le p.$$
Apart from identifying as many nonzero coordinates as possible, to obtain results of practical interest, we would also like to control the false discovery rate (FDR) as well as the false discovery proportion (FDP). Specifically, since each individual hypothesis $H_{0j}$ is a composite of two hypotheses, $H_{0j}=H^{(1)}_{0j}\cap H^{(2)}_{0j}$ with $H^{(\ell)}_{0j}:\beta^*_{\ell j}=0$, we can construct the standardized statistics
$$T^{(\ell)}_j=\frac{\hat{\beta}^u_{\ell j}}{\hat{v}_{\ell j}/\sqrt{n}},\quad\text{for }j=1,\ldots,p\text{ and }\ell=1,2.$$
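The intervals of Section 3.2 require only the debiased estimate, its variance estimate, and a normal quantile; the quantile is available in the Python standard library. The numbers below are made up for illustration, not outputs of the paper's algorithm.

```python
from statistics import NormalDist

def conf_int(beta_u, v_hat, alpha=0.05):
    """Two-sided (1 - alpha) interval: beta_u +/- z_{alpha/2} * sqrt(v_hat),
    where z_{alpha/2} is the upper alpha/2 quantile of the standard normal."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * v_hat ** 0.5
    return beta_u - half, beta_u + half

# hypothetical debiased estimate 0.8 with variance estimate 0.04
lo, hi = conf_int(0.8, 0.04, alpha=0.05)
```

For $\alpha=0.05$ the half-width is $1.96\times\sqrt{0.04}\approx 0.392$, so the interval is roughly $[0.408, 1.192]$.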
For a given threshold level $t>0$, each individual partial hypothesis $H^{(\ell)}_{0j}:\beta^*_{\ell j}=0$ is rejected if $|T^{(\ell)}_j|\ge t$. Hence, if we propose the test statistic $T_j=\max\{|T^{(1)}_j|,|T^{(2)}_j|\}$ for each null hypothesis $H_{0j}$ and reject $H_{0j}$ whenever $T_j\ge t$, then for each $t$ we can define
$$\mathrm{FDP}(t)=\frac{\sum_{j\in\mathcal{H}_0}I\{T_j\ge t\}}{\max\bigl\{\sum_{j=1}^{p}I\{T_j\ge t\},1\bigr\}},\qquad \mathrm{FDR}(t)=\mathbb{E}[\mathrm{FDP}(t)].$$
In order to control the FDR/FDP at a pre-specified level $0<\alpha<1$, we can set the threshold level as
$$\tilde{t}_1=\inf\Bigl\{0\le t\le b_p:\frac{\sum_{j\in\mathcal{H}_0}I\{T_j\ge t\}}{\max\bigl\{\sum_{j=1}^{p}I\{T_j\ge t\},1\bigr\}}\le\alpha\Bigr\} \quad (4.1)$$
for some $b_p$ to be determined later. In general, the ideal choice $\tilde{t}_1$ is unknown since it depends on knowledge of the true null set $\mathcal{H}_0$. Inspired by the Gaussian approximation idea proposed by Liu (2013), we first substitute the numerator in (4.1) by its upper bound
$$\sum_{j\in\mathcal{H}_0}I\{T_j\ge t\}\le\sum_{j\in\mathcal{H}_0}I\{|T^{(1)}_j|\ge t\}+\sum_{j\in\mathcal{H}_0}I\{|T^{(2)}_j|\ge t\},$$
and then use Gaussian tails to approximate the counts $\sum_{j\in\mathcal{H}_0}I\{|T^{(\ell)}_j|\ge t\}$ for $\ell=1,2$. Specifically, let $G^{(\ell)}_0(t)$ be an estimate of the proportion of the nulls falsely rejected by the test $I\{|T^{(\ell)}_j|\ge t\}$ among all the true nulls at the threshold level $t$, so that
$$G^{(\ell)}_0(t)=\frac{1}{|\mathcal{H}_0|}\sum_{j\in\mathcal{H}_0}I\{|T^{(\ell)}_j|\ge t\},\quad\ell=1,2. \quad (4.2)$$
Let $G(t)=2-2\Phi(t)$ denote the two-sided tail probability of the standard normal distribution. We will show that, asymptotically, we can use $G(t)$ to approximate $G^{(\ell)}_0(t)$ for $\ell=1,2$. Therefore, we have the following multiple testing procedure controlling the FDR and the FDP.

Procedure 1. Let $0<\alpha<1$, $b_p=\sqrt{2\log p-2\log\log p}$, and define
$$\hat{t}=\inf\Bigl\{0\le t\le b_p:\frac{pG(t)}{\max\bigl\{\sum_{j=1}^{p}I\{|T_j|\ge t\},1\bigr\}}\le\alpha/2\Bigr\}. \quad (4.3)$$
If $\hat{t}$ in (4.3) does not exist, then let $\hat{t}=\sqrt{2\log p}$.
We reject $H_{0j}$ whenever $|T_j|\ge\hat{t}$.

4.2 Theoretical Properties

For $\ell=1,2$, let $\Gamma_\ell=M^\top(\mathbb{E}_{\theta^*}[T_n(\theta^*)])_{\ell,\ell}M$, where $M$ has $m_j$ as its $j$-th column, and let $D_\ell$ be the diagonal of $\Gamma_\ell$. We define $D_\ell^{-1/2}\Gamma_\ell D_\ell^{-1/2}=(\rho^{(\ell)}_{jk})_{1\le j,k\le p}$ and denote $B_\ell(\delta)=\{(j,k):|\rho^{(\ell)}_{jk}|\ge\delta,\ j\ne k\}$ and $A_\ell(\epsilon)=B_\ell((\log p)^{-2-\epsilon})$.

(A3). Suppose that for any $\ell=1,2$, there are some $\epsilon>0$ and $q>0$ such that
$$\sum_{j,k\in\mathcal{H}_0:(j,k)\in A_\ell(\epsilon)}p^{\frac{2|\rho^{(\ell)}_{jk}|}{1+|\rho^{(\ell)}_{jk}|}+q}=O\bigl(p^2/(\log p)^2\bigr).$$

The following theorem shows the asymptotic control of the FDR and FDP of our procedure.

Theorem 5. Under the conditions of Theorem 2, if (A3) holds, then for $\hat{t}$ defined in Procedure 1,
$$\lim_{(n,p)\to\infty}\frac{\mathrm{FDR}(\hat{t})}{\alpha p_0/p}\le 1,\qquad \lim_{(n,p)\to\infty}P\Bigl(\frac{\mathrm{FDP}(\hat{t})}{\alpha p_0/p}\le 1+\epsilon\Bigr)=1 \quad (4.4)$$
for any $\epsilon>0$.

5 SIMULATION STUDIES

In this section, we evaluate the numerical performance of the proposed methods. For both estimation and large-scale multiple testing, the empirical results in various settings demonstrate the numerical advantages of the proposed procedures over alternative methods.

5.1 Estimation

For estimation, we let the dimension of the covariates $p$ range from 600 to 1000, let the sparsity $s$ vary from 10 to 30, and set the sample size $n=400$. We also set the mixture proportion $\omega^*=0.3$ and the noise level $\sigma^2=1$. The design covariates $x_i$ are generated from a multivariate Gaussian distribution with covariance matrix $\Sigma=\Sigma_M$, where $\Sigma_M$ is a $p\times p$ blockwise diagonal matrix of 10 identical unit-diagonal Toeplitz matrices whose off-diagonal entries descend from 0.4 to 0 (see the Supplementary Material for the explicit form).
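The threshold search in Procedure 1 can be sketched with the standard library alone. The statistics below are toy values chosen so that the search succeeds on the grid, not outputs of the simulation design; the infimum over $t$ is approximated by a finite grid scan, which is an implementation choice rather than the paper's specification.

```python
import math

def normal_cdf(t):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def procedure_1_threshold(T, p, alpha, grid=2000):
    """Smallest t in [0, b_p] with p*G(t) / max(#{|T_j| >= t}, 1) <= alpha/2,
    where G(t) = 2 - 2*Phi(t); falls back to sqrt(2 log p) if none exists."""
    b_p = math.sqrt(2 * math.log(p) - 2 * math.log(math.log(p)))
    for k in range(grid + 1):
        t = b_p * k / grid
        G = 2.0 - 2.0 * normal_cdf(t)
        rejections = sum(1 for Tj in T if abs(Tj) >= t)
        if p * G / max(rejections, 1) <= alpha / 2:
            return t
    return math.sqrt(2 * math.log(p))

# toy statistics: 30 strong signals among 70 nulls
T = [5.0] * 30 + [0.1] * 70
t_hat = procedure_1_threshold(T, p=100, alpha=0.1)
```

With these toy values the bound is first met around $t\approx 2.43$, so only the 30 strong statistics are rejected.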
For the two regression coefficients $\beta_1^*$ and $\beta_2^*$, for some fixed $\rho>0$, we set $\beta^*_{1j}=\rho\cdot 1\{1\le j\le s\}$ and $\beta^*_{2j}=-\rho\cdot 1\{p/2+1\le j\le p/2+s\}$, so that each of the coefficient vectors is $s$-sparse. In particular, for our proposed methods, as is of practical interest, we assume that the noise level $\sigma^2$ is unknown and also needs to be estimated at each iteration. Specifically, for any $t\ge 1$, in the M-Step of Algorithm 1, we define
$$(\hat{\sigma}^2_1)^{(t)}=\frac{1}{n}\sum_{i=1}^{n}\gamma^{(t)}_{\theta,i}(y_i-\langle x_i,\hat{\beta}_1^{(t)}\rangle)^2,\qquad (\hat{\sigma}^2_2)^{(t)}=\frac{1}{n}\sum_{i=1}^{n}(1-\gamma^{(t)}_{\theta,i})(y_i-\langle x_i,\hat{\beta}_2^{(t)}\rangle)^2,$$
and set $(\hat{\sigma}^2)^{(t)}=[(\hat{\sigma}^2_1)^{(t)}+(\hat{\sigma}^2_2)^{(t)}]/2$. The variance estimator $(\hat{\sigma}^2)^{(t)}$ is then used as a substitute for $\sigma^2$ in the subsequent E-Step. Throughout, we set $T=30$, $\kappa=0.3$ and $C=0.8$. We consider two initializations for our proposed algorithm. We start by fitting a Lasso to the mixed samples, which results in a coarse but useful variable screening. Combining the response variable $y$ and the Lasso-selected covariates, we use one of the following high-dimensional clustering methods to divide the samples. In particular, our two initializations correspond to the emgm function in the R package xLLiM and the hddc algorithm in the R package HDclassif. Once we obtain an initial two-group clustering of the samples, we can fit the Elastic Net separately using
(2015), which is implemented by the gllim function in the R package xLLiM; 2) Initial1: \ufb01t Elastic Net separately to the clusters determined by Lasso+emgm; 3) Initial2: \ufb01t Elastic Net separately to the clusters determined by Lasso+hddc; 4) MIREM1 : our proposed algorithm based on initialization Initial1; and 5) MIREM2: our proposed algorithm based on initialization Initial2. The estimation performance is evaluated using the empirical mean-squared error (EMSE): for N rounds of simulations and estimators ( \u02c6 \u03b2r 1, \u02c6 \u03b2r 2) obtained in the r-th round, we de\ufb01ne EMSE = min \u001a 1 N N X r=1 [\u2225\u02c6 \u03b2r 1 \u2212\u03b2\u2217 1\u22252 + \u2225\u02c6 \u03b2r 2 \u2212\u03b2\u2217 2\u22252], 1 N N X r=1 [\u2225\u02c6 \u03b2r 1 \u2212\u03b2\u2217 2\u22252 + \u2225\u02c6 \u03b2r 2 \u2212\u03b2\u2217 1\u22252] \u001b . In Table 1, we show the EMSEs calculated from N = 500 rounds of simulations. We observe that both MIREM1 and MIREM2 outperform the other three methods across almost all the settings. As dimension p, the sparsity s, or the signal magnitude \u03c1 increases, all the methods show increased estimation errors. In addition, comparing our proposed methods MIREM1 and MIREM2, we \ufb01nd that MIREM1 has better performance than MIREM2 in almost all the settings. The EM based GLLiM method, with 100 iterations, performs slightly better than our initializations, but our proposed MIREM algorithms, with only T = 30 iterations, have superior performance, suggesting signi\ufb01cant improvement upon the initial estimators. 5.2 Large-scale Multiple Testing and FDR Control In this section, the empirical performance of our proposed multiple testing procedure is evaluated under different settings. Speci\ufb01cally, we vary the number of covariates p from 800 to 1000, the sparsity level s from 10, 15 to 20, and set the sample size n as 300 or 400. 
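The EMSE used in Section 5.1 takes a minimum over the label swap because the mixture components are only identified up to permutation. A plain-Python sketch with two toy simulation rounds (made-up numbers, for illustration only):

```python
def emse(estimates, beta1_star, beta2_star):
    """Empirical MSE over simulation rounds, minimized over the label swap."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    N = len(estimates)
    direct  = sum(sq(b1, beta1_star) + sq(b2, beta2_star) for b1, b2 in estimates) / N
    swapped = sum(sq(b1, beta2_star) + sq(b2, beta1_star) for b1, b2 in estimates) / N
    return min(direct, swapped)

# toy rounds whose labels came out flipped relative to the truth
rounds = [([0.0, 1.0], [1.0, 0.0]), ([0.1, 0.9], [0.9, 0.1])]
val = emse(rounds, [1.0, 0.0], [0.0, 1.0])
```

Here the direct matching scores 3.62 while the swapped matching scores 0.02, so the EMSE correctly reports the small error of the flipped-but-accurate estimates.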
The two regression coef\ufb01cients \u03b2\u2217 1 and \u03b2\u2217 2, the design covariates, the mixing proportion \u03c9\u2217and the number of iterations T are the same as previous simulations with \u03c1 = 0.45. About our proposed method, in light 17 \fTable 1: Comparison of empirical mean-squared error (EMSE) of different methods with \u03c9\u2217= 0.3 and n = 400 \u03c1 = 0.45 \u03c1 = 0.85 p = 600 700 800 900 1000 600 700 800 900 1000 s = 10 GLLiM 2.86 2.95 3.17 3.20 3.26 5.09 5.11 5.12 5.13 5.16 Initial1 3.19 3.00 3.12 3.12 3.27 6.07 5.94 6.00 6.06 6.27 Initial2 2.43 2.43 2.50 2.41 2.59 5.08 5.20 5.22 5.06 4.76 MIREM1 1.40 1.40 1.42 1.42 1.43 1.18 1.18 1.18 1.21 1.23 MIREM2 1.73 1.81 1.75 1.79 1.81 2.85 2.17 2.29 2.61 2.66 s = 15 GLLiM 3.16 3.21 3.27 3.30 3.34 6.26 6.29 6.35 6.31 6.37 Initial1 4.19 4.21 4.21 4.21 4.40 9.04 9.02 9.01 8.99 8.97 Initial2 3.21 3.15 3.25 3.16 3.08 8.68 7.66 8.35 8.00 7.71 MIREM1 1.42 1.47 1.45 1.47 1.49 1.26 1.31 1.62 1.57 1.56 MIREM2 1.92 1.98 2.04 2.02 1.94 3.36 3.65 3.48 3.75 4.03 s = 20 GLLiM 3.63 3.66 3.66 3.68 3.73 7.31 7.32 7.34 7.32 7.35 Initial1 5.45 5.38 5.68 5.62 5.52 12.18 12.11 12.09 12.15 11.89 Initial2 4.35 3.92 4.11 4.39 4.18 11.80 10.39 10.61 10.81 10.59 MIREM1 1.57 1.58 1.60 1.56 1.61 1.77 2.43 2.15 1.77 2.14 MIREM2 2.49 2.45 2.32 2.34 2.41 5.14 4.91 5.19 5.23 5.31 s = 25 GLLiM 4.08 4.08 4.14 4.11 4.14 8.23 8.23 8.24 8.24 8.24 Initial1 6.96 7.04 6.91 6.99 6.88 15.53 15.29 15.22 15.08 15.24 Initial2 5.68 6.02 5.74 5.57 5.78 13.74 13.85 13.86 13.48 13.35 MIREM1 1.82 1.71 1.73 1.74 1.75 3.60 4.41 3.93 4.81 5.70 MIREM2 3.00 2.88 2.84 3.10 2.99 7.98 7.26 6.71 7.40 7.02 s = 30 GLLiM 4.51 4.52 4.53 4.58 4.56 9.06 9.07 9.04 9.05 9.08 Initial1 8.35 8.30 8.55 8.59 8.55 18.63 18.46 18.17 18.35 17.77 Initial2 7.61 7.30 7.50 7.35 7.75 16.41 17.10 16.79 16.24 15.44 MIREM1 1.99 1.94 1.95 1.91 1.95 5.36 8.34 6.76 10.50 8.79 MIREM2 3.64 3.50 3.48 3.68 3.89 8.32 9.26 9.65 9.66 9.00 of the results from the previous section, we 
will focus on MIREM1 instead of MIREM2 for its superior performance across most settings. To the best of our knowledge, there is no existing method for multiple testing in mixed linear regression models. So we compare the empirical FDRs and powers of our proposed testing procedure to the Benjamini-Yekutieli (B-Y) procedure (Benjamini and Yekutieli 2001) applied to our proposed test statistics for individual tests. To 18 \fillustrate the necessity of \ufb01tting a mixed linear regression model when the underlying model is indeed a mixture, we also evaluate the performance of the multiple testing procedure based on ordinary debiased Lasso estimators (Javanmard and Javadi 2019), denoted as dLasso, designed for the linear regression models. Table 2: Empirical powers and FDRs with \u03b1 = 0.1, \u03c9\u2217= 0.3 and n = 300 Powers FDRs p = 800 850 900 950 1000 800 850 900 950 1000 s = 10 MIREM1 0.459 0.459 0.430 0.465 0.441 0.009 0.009 0.014 0.012 0.025 B-Y 0.344 0.344 0.290 0.346 0.332 <0.001 <0.001 <0.001 <0.001 <0.001 dLasso 0.934 0.934 0.930 0.918 0.896 0.958 0.958 0.960 0.963 0.965 s = 15 MIREM1 0.582 0.592 0.563 0.623 0.609 0.022 0.024 0.028 0.028 0.030 B-Y 0.510 0.530 0.484 0.550 0.551 <0.001 <0.001 <0.001 <0.001 <0.001 dLasso 0.897 0.922 0.916 0.914 0.901 0.946 0.948 0.951 0.953 0.956 s = 20 MIREM1 0.724 0.744 0.723 0.756 0.778 0.088 0.086 0.071 0.089 0.110 B-Y 0.621 0.635 0.617 0.657 0.672 0.001 0.001 0.001 0.001 0.001 dLasso 0.882 0.909 0.897 0.894 0.896 0.936 0.937 0.942 0.945 0.947 Table 3: Empirical powers and FDRs with \u03b1 = 0.1, \u03c9\u2217= 0.3 and n = 400 Powers FDRs p = 800 850 900 950 1000 800 850 900 950 1000 s = 10 MIREM1 0.864 0.805 0.774 0.796 0.846 0.046 0.017 0.036 0.015 0.066 B-Y 0.849 0.779 0.748 0.768 0.833 0.001 <0.001 <0.001 <0.001 0.001 dLasso 0.977 0.973 0.975 0.985 0.994 0.943 0.957 0.957 0.961 0.962 s = 15 MIREM1 0.847 0.863 0.859 0.859 0.877 0.044 0.040 0.028 0.052 0.044 B-Y 0.825 0.842 0.843 0.834 0.857 0.001 0.001 0.001 0.001 
0.001 dLasso 0.964 0.968 0.968 0.965 0.973 0.945 0.949 0.952 0.954 0.956 s = 20 MIREM1 0.933 0.914 0.935 0.911 0.920 0.105 0.125 0.109 0.104 0.112 B-Y 0.905 0.873 0.900 0.877 0.889 0.001 0.002 0.001 0.001 0.001 dLasso 0.983 0.969 0.969 0.974 0.975 0.932 0.941 0.942 0.947 0.947 From Tables 2 and 3, we \ufb01nd that the B-Y procedure and our proposed multiple testing procedure are both able to control the FDR below or around the nominal level \u03b1 = 0.1, whereas the 19 \fdLasso fails to control the FDR, as a consequence of its inability to capture the mixture structure. In particular, the empirical FDRs of our proposed test procedure are closer to the nominal level \u03b1 in comparison to the rather conservative B-Y procedure, yielding improved empirical powers of our proposed method across all the settings. In particular, by inspecting the intermediate steps of the dLasso method, we found that, due to the failure to account for the mixture components, dLasso signi\ufb01cantly underestimates the standard errors of the debiased Lasso estimators and therefore the individual p-values, which explains the anticonservativeness of the dLasso method. 6 ANALYSIS OF A MULTIPLEX IMAGE CYTOMETRY DATASET In this section, we apply our proposed methods to analyze a multiplex image cytometry dataset studied by Schapiro et al. (2017). Speci\ufb01cally, our dataset contains cellular phenotypes visualized by the imaging mass cytometry (IMC). By pairing classic immunohistochemistry staining, highresolution tissue laser ablation, and mass cytometry, IMC can measure abundances of more than 40 unique metal-isotope-labeled tissue-bound antibodies simultaneously at a resolution comparable to that of \ufb02uorescence microscopy. Schapiro et al. (2017) analyzed images collected from 49 diverse breast cancer samples and 3 matched normal tissues, with each image containing cells whose number varies from 266 to 1,454. 
Among the 30 cellular phenotypes, there are expression levels of 20 different epitopes (e.g., vimentin and CD68) or combinations of markers (e.g., proliferative Ki67+ and phospho-S6+). An initial analysis of our image cytometry datasets using tSNE indicates strong evidence of population heterogeneity among the cells within each of the images (see Figure 1 for some examples). We focus on analyzing the conditional dependence network among these 30 protein epitopes and markers based on the single-cell data for each of the samples or images. It is well known that the conditional dependence network can be modeled by a Gaussian graphical model, which can be obtained using node-wise regression (Meinshausen and Bühlmann 2006; Yuan and Lin 2007). In other words, to obtain the dependence between two variables X and Y conditioning on all the other variables (Z1, ..., Zd), it suffices to regress Y on (X, Z1, ..., Zd) and assess the coefficient of X. The construction of the conditional dependence network thus requires fitting such linear regressions over all the possible variable configurations.

Figure 1: tSNE plots of cells in six randomly selected images/samples based on the expression levels of 30 epitopes or protein markers.
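The node-wise regression view corresponds to the zero pattern of the inverse covariance (precision) matrix: variables $j$ and $k$ are conditionally independent given the rest exactly when $\Theta_{jk}=0$. A toy numpy sketch with a hypothetical chain-structured precision matrix (not from the cytometry data):

```python
import numpy as np

# Chain structure X1 - X2 - X3: X1 and X3 are conditionally independent
# given X2 (Theta[0, 2] == 0) although they are marginally correlated.
Theta = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.5, 0.5],
                  [0.0, 0.5, 1.0]])
Sigma = np.linalg.inv(Theta)

# Marginal correlation between X1 and X3 is nonzero...
marg = Sigma[0, 2] / np.sqrt(Sigma[0, 0] * Sigma[2, 2])

# ...but the node-wise regression coefficient on X3 when predicting X1
# is -Theta[0, 2] / Theta[0, 0] = 0: no edge in the graph.
coef_X3 = -Theta[0, 2] / Theta[0, 0]
```

For this matrix the marginal correlation works out to 0.2 while the partial (node-wise) coefficient is exactly zero, which is precisely the distinction the network construction exploits.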
However, in our image cytometry data, heterogeneity among different cell types may induce a mixture of different dependence structures. To address this issue, we apply our proposed methods based on the sparse mixed linear regression model instead of the ordinary sparse linear regression model for network construction. As an example, we first focus on Image 210732. In addition to the global heterogeneity, we also observed that the marginal associations between many pairs of epitopes or markers contain a two-class mixture pattern, as shown in Figure 2. To obtain a conditional dependence network based on the 1,151 cells in this image, we fitted node-wise mixed linear regressions and, for each of them, performed the proposed multiple testing procedure with FDR < 10%. The final network (Figure 3, top left) was constructed such that the edges indicate the identified associations from at least one of these node-wise regressions. To better illustrate the effects of the mixture, the widths of the edges were set to be proportional to the $\ell_2$ distances between the two mixed regression coefficients, so that a thicker edge indicates a larger discrepancy between the two mixture components.
Here we present the results for Image 1941 and Image 452670, as two additional examples. With the global heterogeneity shown in Figure 1, again we obtained much denser networks from the standard Lasso based method and sparser networks from our proposed method (both with FDR < 10% for the node-wise regressions). Moreover, many marginal associations (Figure 2) with heterogeneous associations had thicker edges of the networks based on our proposed method. Our analysis also suggests that the naive application of Lasso to heterogeneous datasets can lead to false associations. 7 DISCUSSION The present paper introduced an iterative estimation procedure using an EM algorithm, a debiased approach for individual coef\ufb01cient inference based on the EM solutions, and a multiple testing procedure based on the debiased estimators for the high-dimensional mixed linear regression. Similar to many other works on EM algorithms (Balakrishnan et al. 2017; Yi et al. 2014; Wang et al. 2015; Yi and Caramanis 2015), sample splitting was used to facilitate the theoretical analysis in order to derive the estimation consistency. However, the numerical results suggest that such data splitting seems to be unnecessary for achieving the desirable results in practice. It is interesting to develop novel technical tools for analyzing the algorithms without splitting the sample. 
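The sample-splitting device mentioned above assigns a fresh batch of $n_T = n/T$ observations to each EM iteration, so the estimate entering an iteration is independent of the data used within it. A minimal sketch of this indexing (the even, in-order partition is an illustrative assumption, not the paper's exact scheme):

```python
def split_folds(n, T):
    """Partition indices 0..n-1 into T disjoint folds of size n_T = n // T;
    iteration t of the EM algorithm only touches fold t."""
    n_T = n // T
    return [list(range(t * n_T, (t + 1) * n_T)) for t in range(T)]

folds = split_folds(12, 3)  # three disjoint folds of four indices each
```

In practice the numerical results suggest the splitting is unnecessary, but it is what makes the independence arguments in the consistency proofs go through.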
Figure 2: Pairwise scatter plots of epitope expressions for cells on different images, showing mixtures of associations.

The proposed EM algorithm assumes that the noise variance is known. Such an algorithm can be naturally extended to the case where the two noise variances are different and known. A more interesting problem is to develop an algorithm for the case where the two noise variances are different and unknown. In addition, this paper focuses on the two-class mixed regression model. It would be interesting to extend the proposed algorithms to general k-class mixed regression models and analyze their performance, especially when k is unknown. In addition to the estimation, individual coefficient inference, and multiple testing problems considered in the current paper, there are several other interesting and related problems that are worth investigating.
One such related problem is testing a single regression model against a mixed regression model. This involves, for example, the construction and analysis of a goodness-of-fit test. Finally, a natural generalization of the mixed linear regression model is the mixed generalized linear model (MGLM), where the outcome variables are allowed to be categorical. Estimation and multiple testing for high-dimensional MGLM are important and challenging problems that we leave for future research.

8 PROOFS

We present in this section the proofs of Theorems 2 and 5, the results on individual coordinate inference and multiple testing. Theorem 3 can be proved using the same derivation as that of Theorem 2, and the proofs of Theorem 1 and the other technical lemmas are given in the Supplementary Materials (Zhang et al. 2020).

8.1 Proof of Theorem 2

We first state the following lemmas.

Lemma 2. Under the same conditions as in Theorem 1, for any given vector $m^*\in\mathbb{R}^p$, there exists a constant $C>0$ such that
$$\Bigl\|\mathbb{E}\Bigl[\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma^{(T)}_{\theta,i}x_ix_i^\top m^*\Bigr]-\mathbb{E}\Bigl[\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma_{\theta^*,i}x_ix_i^\top m^*\Bigr]\Bigr\|_2\le C\|m^*\|_2\|\theta^*-\hat{\theta}^{(T)}\|_2;$$
$$\Bigl\|\mathbb{E}\Bigl[\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma^{(T)}_{\theta,i}x_ix_i^\top m^*\Bigr]-\frac{1}{n_T}\sum_{i=1}^{n_T}\gamma^{(T)}_{\theta,i}x_ix_i^\top m^*\Bigr\|_\infty=\|m^*\|_2\cdot O_p\Bigl(\sqrt{\frac{\log p}{n_T}}\Bigr).$$

Lemma 3. Under the same conditions as in Theorem 2, there exists a constant $c>0$ such that $m_j^\top(T_n(\hat{\theta}^{(T)}))_{1,1}m_j\ge c$ for all $j\in[p]$.

Given the lemmas, we now proceed to the proof of Theorem 2. By symmetry, in the following we only consider the case $l=1$. Recall that $n_T=n/T$ with $T\asymp\log n$, and $\tilde{\Sigma}_{XX}=\frac{1}{n_T}\sum_{i=1}^{n_T}x_ix_i^\top$.
We \ufb01rst verify that for \u00b5 = C q log p log n n with suf\ufb01ciently large constant C, the optimization of mj is feasible, that is, there exits m\u2217 j \u2208Rp, such that ||\u02dc \u03a3XXm\u2217 j \u2212e(p) j ||\u221e\u2264\u00b5 and \u2225m\u2217 j\u22251 \u2264C\u221alog n. Take m\u2217 j = (\u03a3\u22121)j and use the fact that \u2225\u03a3\u22121\u2225\u2264L, we have some \u2225m\u2217 j\u22251 \u2264C\u221alog n. Further, since \u2225m\u2217 j\u22252 \u2264C\u221alog n and n/T \u224dn/ log n, by the Bernstein inequality and union bound, we get \u2225E[ 1 nT nT X i=1 xix\u22a4 i m\u2217 j] \u22121 nT nT X i=1 xix\u22a4 i m\u2217 j\u2225\u221e= Op( r log p n log n). Therefore, the optimization is feasible, and recall that the solution is denoted as \u02dc mj. Then we proceed to showing the asymptotic normality. For a given j, by (3.2), we have \u221anT( b \u03b2u 1j \u2212\u03b2\u2217 1j) =\u221anT(m\u22a4 j b \u03a3XY \u2212m\u22a4 j b \u03a3XX\u03b2\u2217 1) + \u221anT(e\u22a4 j \u2212m\u22a4 j b \u03a3XX)( \u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1) (8.1) =\u221anT \u0014 1 nT n X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)m\u22a4 j xi \u0015 + \u221anT\u2225e\u22a4 j \u2212m\u22a4 j b \u03a3XX\u2225\u221e\u00b7 \u2225\u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1\u22251. We then show that for mj = \u02dc mj/\u02c6 \u03c9(T), \u2225e\u22a4 j \u2212m\u22a4 j b \u03a3XX\u2225\u221eis small. Recall that b \u03a3XX = 1 nT PnT i=1 \u03b3(T) \u03b8,i xix\u22a4 i . By Lemma 2, we get \u2225E[ 1 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i mj]\u2212E[ 1 nT nT X i=1 \u03b3\u03b8\u2217,ixix\u22a4 i mj]\u22252 \u2272\u2225mj\u22252\u00b7\u2225\u03b8\u2217\u2212\u02c6 \u03b8(T)\u22252 = Op( r s log p n log n), and \u2225E[ 1 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i m\u2217 j] \u22121 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i m\u2217 j\u2225\u221e= Op( r log p n log n). 
Let zi be the class for the pair of data (xi, yi), we obtain E[ 1 nT nT X i=1 \u03b3\u03b8\u2217,ixix\u22a4 i m\u2217 j] = E[ 1 nT nT X i=1 E \u02c6 \u03b8\u2217[1(zi = 1)xix\u22a4 i m\u2217 j | xi, yi]] = \u03c9\u2217\u03a3. Therefore, we have ||\u02c6 \u03a3XXmj \u2212e(p) j ||\u221e 25 \f=||E[ 1 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i mj] \u22121 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i mj||\u221e + ||E[ 1 nT nT X i=1 \u03b3(T) \u03b8,i xix\u22a4 i mj] \u2212E[ 1 nT nT X i=1 \u03b3\u03b8\u2217,ixix\u22a4 i mj]||\u221e+ ||E[ 1 nT n X i=1 \u03b3\u03b8\u2217,ixix\u22a4 i mj] \u2212e(p) j ||\u221e =Op( r log p n log n) + Op( r s log p n log n) + \u2225\u03c9\u2217\u03a3mj \u2212e(p) j \u2225\u221e \u2264Op( r s log p n log n) + \u2225\u03c9\u2217\u03a3mj \u2212\u02c6 \u03c9(T) \u02dc \u03a3XXmj\u2225\u221e+ \u2225\u02c6 \u03c9(T) \u02dc \u03a3XXmj \u2212e(p) j \u2225\u221e \u2264Op( r s log p n log n) + \u2225\u03c9\u2217\u03a3 \u2212\u02c6 \u03c9(T) \u02dc \u03a3XX\u2225\u221e\u00b7 \u2225mj\u22251 + \u2225\u02dc \u03a3XX \u02dc mj \u2212e(p) j \u2225\u221e \u2264Op( r s log p n log n). Then (8.1) becomes \u221anT( b \u03b2u 1j \u2212\u03b2\u2217 1j) =\u221anT \u0014 1 nT n X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)m\u22a4 j xi \u0015 + \u221anT\u2225e\u22a4 j \u2212m\u22a4 j b \u03a3XX\u2225\u221e\u00b7 \u2225\u02c6 \u03b2(T) 1 \u2212\u03b2\u2217 1\u22251 (8.2) =\u27e8mj, 1 \u221anT nT X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9+ OP(s3/2 log p log n \u221an ) =\u27e8mj, 1 \u221anT nT X i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9+ oP(1). Then, by Lemma 3, we have \u221anT \u0010 b \u03b2u 1j \u2212\u03b2\u2217 1j \u0011 p \u02c6 v1j = \u27e8mj, 1 \u221an Pn i=1 \u03b3(T) \u03b8,i (yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 p \u02c6 v1j + oP(1). Finally, using Lemma 1, we obtain the desired result. 
\u221an \u0010 b \u03b2u 1j \u2212\u03b2\u2217 1j \u0011 p \u02c6 v1j d \u2192N(0, 1). 8.2 Proof of Theorem 5 We \ufb01rst consider the case when \u02c6 t, given by (4.3), does not exist. In this case, we have \u02c6 t = \u221a2 log p. Note that for j \u2208H0, we have T (1) j = \u221an \u02c6 \u03b2u 1,j \u02c6 v(1) j = \u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 \u02c6 v(1) j + \u221anRem1 \u02c6 v(1) j , 26 \fwhere Rem1 = oP(1/\u221an), and a similar expression holds for T (2) j . Then we have P \u0012 X j\u2208H0 I \u0000|Tj| \u2265 p 2 log p \u0001 \u22651 \u0013 \u2264P \u0012 X j\u2208H0 I \u0000|T (1) j | \u2265 p 2 log p \u0001 \u22651 \u0013 + P \u0012 X j\u2208H0 I \u0000|T (2) j | \u2265 p 2 log p \u0001 \u22651 \u0013 \u2264P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 \u02c6 v(1) j + \u221anRem1 \u02c6 v(1) j \u2265 p 2 log p \u0013 \u22651 \u0013 + P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 \u02c6 v(1) j + \u221anRem1 \u02c6 v(1) j \u2264\u2212 p 2 log p \u0013 \u22651 \u0013 + P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1(1 \u2212\u03b3\u03b8\u2217,i)(yi \u2212\u27e8xi, \u03b2\u2217 2\u27e9)xi\u27e9 \u02c6 v(2) j + \u221anRem2 \u02c6 v(2) j \u2265 p 2 log p \u0013 \u22651 \u0013 + P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1(1 \u2212\u03b3\u03b8\u2217,i)(yi \u2212\u27e8xi, \u03b2\u2217 2\u27e9)xi\u27e9 \u02c6 v(2) j + \u221anRem2 \u02c6 v(2) j \u2264\u2212 p 2 log p \u0013 \u22651 \u0013 . (8.3) De\ufb01ne (v(1) j )2 = Var(\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9). 
For any \u03f5 > 0, we can bound the \ufb01rst term by P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 \u02c6 v(1) j + \u221anRem1 \u02c6 v(1) j \u2265 p 2 log p \u0013 \u22651 \u0013 = P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 v(1) j \u2265\u02c6 v(1) j v(1) j p 2 log p \u2212 \u221anRem1 v(1) j \u0013 \u22651 \u0013 \u2264P \u0012 X j\u2208H0 I \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 v(1) j \u2265(1 \u2212\u03f5) p 2 log p \u2212\u03f5 \u0013 \u22651 \u0013 + P \u0012 max j\u2208H0 \f \f \f \f \u221anRem1 v(1) j \f \f \f \f \u2265\u03f5 \u0013 + P \u0012\f \f \f \f \u02c6 v(1) j v(1) j \u22121 \f \f \f \f \u2265\u03f5 \u0013 \u2264p max j\u2208H0 P \u0012\u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 v(1) j \u2265(1 \u2212\u03f5) p 2 log p \u2212\u03f5 \u0013 + P \u0012 max j\u2208H0 \f \f \f \f \u221anRem1 v(1) j \f \f \f \f \u2265\u03f5 \u0013 + P \u0012\f \f \f \f \u02c6 v(1) j v(1) j \u22121 \f \f \f \f \u2265\u03f5 \u0013 . By the proof of Theorem 2, we know that P \u0012 max j\u2208H0 \f \f \f \f \u221anRem1 v(1) j \f \f \f \f \u2265\u03f5 \u0013 \u21920, P \u0012\f \f \f \f \u02c6 v(1) j v(1) j \u22121 \f \f \f \f \u2265\u03f5 \u0013 \u21920. 27 \fIn addition, for j \u2208H0, let T (1) 0j = \u27e8mj, 1 \u221an Pn i=1 \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9 v(1) j . where E\u27e8mj, \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9/v(1) j = 0 and Var(E\u27e8mj, \u03b3\u03b8\u2217,i(yi \u2212\u27e8xi, \u03b2\u2217 1\u27e9)xi\u27e9/v(1) j ) = 1. Conditional on X by Lemma 6.1 of Liu (2013), we have sup 0\u2264t\u22644\u221alog p \f \f \f \f P(|T (1) 0j | \u2265t) G(t) \u22121 \f \f \f \f \u2264C(log p)\u22121. 
(8.4) Hereafter, unless explicitly noted, all of our discussion will be conditional on $\{x_i\}_{i=1}^n$. Now let $t = (1-\epsilon)\sqrt{2\log p} - 2\epsilon$; we have
$$P\Big(T^{(1)}_{0j} \ge (1-\epsilon)\sqrt{2\log p} - 2\epsilon\Big) \le G\big((1-\epsilon)\sqrt{2\log p} - 2\epsilon\big) + \frac{C\,G\big((1-\epsilon)\sqrt{2\log p} - 2\epsilon\big)}{\log p}.$$
Hence
$$p \max_{j\in H_0} P\Big(T_{0j} \ge (1-\epsilon)\sqrt{2\log p} - \epsilon\Big) \le C p\, G\big((1-\epsilon)\sqrt{2\log p} - 2\epsilon\big) + O(p^{-c}),$$
which goes to zero as $(n, p) \to \infty$. By symmetry, the remaining three terms in (8.3) also go to $0$. Therefore we have proved the theorem when $\hat t = \sqrt{2\log p}$. Now consider the case when $0 \le \hat t \le b_p$ holds. We have
$$\mathrm{FDP}(\hat t) = \frac{\sum_{j\in H_0} I\{|T_j| \ge \hat t\}}{\max\big\{\sum_{j=1}^p I\{|T_j| \ge \hat t\},\,1\big\}} \le \frac{\sum_{j\in H_0} I\{|T^{(1)}_j| \ge \hat t\} + \sum_{j\in H_0} I\{|T^{(2)}_j| \ge \hat t\}}{\max\big\{\sum_{j=1}^p I\{|T_j| \ge \hat t\},\,1\big\}}.$$
Note that for $\ell = 1, 2$,
$$\frac{\sum_{j\in H_0} I\{|T^{(\ell)}_j| \ge \hat t\}}{\max\big\{\sum_{j=1}^p I\{|T_j| \ge \hat t\},\,1\big\}} \le \frac{p_0 G(\hat t)}{\max\big\{\sum_{j=1}^p I\{|T_j| \ge \hat t\},\,1\big\}}\big(1 + A^{(\ell)}_p\big), \quad \text{where } A^{(\ell)}_p = \sup_{0\le t\le b_p}\Big|\frac{\sum_{j\in H_0} I\{|T^{(\ell)}_j| \ge t\}}{p_0 G(t)} - 1\Big|.$$
Note that by definition
$$\frac{p_0 G(\hat t)}{\max\big\{\sum_{j=1}^p I\{|T_j| \ge \hat t\},\,1\big\}} \le \frac{p_0\alpha}{p}.$$
The proof is complete if $A^{(\ell)}_p \to 0$ in probability; the rest of the proof is devoted to establishing this. We first show that
$$\big|T^{(\ell)}_j - T^{(\ell)}_{0j}\big| = o_P\big(1/\sqrt{\log p}\big).$$
(8.5) To see this, we notice that, under the sparsity condition s = o \u0000n1/2 log3/2 p log n \u0001 , with probability at least 1 \u2212O(p\u2212c), |T (\u2113) j \u2212T (\u2113) 0j | \u2264 \f \f \f \f \u221anRem\u2113 v(\u2113) j \f \f \f \f \u00b7 \f \f \f \f v(\u2113) j \u02c6 v(\u2113) j \f \f \f \f + \f \f \f \fT (\u2113) 0j (v(\u2113) j /\u02c6 v(\u2113) j \u22121) \f \f \f \f = o \u0012 1 \u221alog p \u0013 . By the fact that G(t + o(1/\u221alog p))/G(t) = 1 + o(1) uniformly in 0 \u2264t \u2264\u221a2 log p, it suf\ufb01ces to show that sup 0\u2264t\u2264bp \f \f \f \f P j\u2208H0 I{|T (\u2113) 0j | \u2265t} p0G(t) \u22121 \f \f \f \f \u21920 in probability. (8.6) Let z0 < z1 < ... < zdp \u22641 and ti = G\u22121(zi), where z0 = G(bp), zi = cp/p + c2/3 p ei\u03b4/p with cp = pG(bp), and dp = [log((p \u2212cp)/c2/3 p )]1/\u03b4 and 0 < \u03b4 < 1, which will be speci\ufb01ed later. We have G(ti)/G(ti+1) = 1 + o(1) uniformly in i, and t0/ p 2 log(p/cp) = 1 + o(1). Note that uniformly for 1 \u2264j \u2264m, G(ti)/G(ti\u22121) \u21921 as p \u2192\u221e. The proof of (8.6) reduces to show that max 0\u2264i\u2264dp \f \f \f \f P j\u2208H0 I{|T (\u2113) 0j | \u2265ti} p0G(ti) \u22121 \f \f \f \f \u21920 (8.7) in probability. Hereafter, we omit the dependence on the index \u2113for simplicity. In fact, for each \u03f5 > 0, we have P \u0012 max 0\u2264i\u2264dp \f \f \f \f P j\u2208H0[I{|T0j| \u2265ti} \u2212G(ti)] p0G(ti) \f \f \f \f \u2265\u03f5 \u0013 \u2264 dp X j=0 P \u0012\f \f \f \f P j\u2208H0[I{|T0j| \u2265ti} \u2212G(ti)] p0G(ti) \f \f \f \f \u2265\u03f5/2 \u0013 . Set I(t) = P j\u2208H0[I{|T0j|\u2265t}\u2212P(|T0j|\u2265t)] p0G(t) . By Markov\u2019s inequality P(|I(ti)| \u2265\u03f5/2) \u2264E[I(ti)]2 \u03f52/4 , and it suf\ufb01ces to show Pdp j=0 E[I(ti)]2 = o(1). 
To see this, by (8.4),
$$\mathbb{E}I^2(t) = \frac{\sum_{j\in H_0}\big[P(|T_{0j}|\ge t) - P^2(|T_{0j}|\ge t)\big]}{p_0^2 G^2(t)} + \frac{\sum_{j,k\in H_0,\,k\ne j}\big[P(|T_{0k}|\ge t,\,|T_{0j}|\ge t) - P(|T_{0k}|\ge t)P(|T_{0j}|\ge t)\big]}{p_0^2 G^2(t)}$$
$$\le \frac{C}{p_0 G(t)} + \frac{1}{p_0^2}\sum_{(j,k)\in A(\epsilon)\cap H_0}\frac{P(|T_{0k}|\ge t,\,|T_{0j}|\ge t)}{G^2(t)} + \frac{1}{p_0^2}\sum_{(j,k)\in A(\epsilon)^c\cap H_0}\Big[\frac{P(|T_{0k}|\ge t,\,|T_{0j}|\ge t)}{G^2(t)} - 1\Big] = \frac{C}{p_0 G(t)} + I_{11}(t) + I_{12}(t).$$
For $(j,k)\in A(\epsilon)^c\cap H_0$, applying Lemma 6.1 in Liu (2013), we have $I_{12}(t) \le C(\log p)^{-1-\xi}$ for some $\xi>0$, uniformly in $0 < t < \sqrt{2\log p}$. By Lemma 6.2 in Liu (2013), for $(j,k)\in A(\epsilon)\cap H_0$ we have
$$P(|T_{0k}|\ge t,\,|T_{0j}|\ge t) \le C(t+1)^{-2}\exp\Big(-\frac{t^2}{1+|\rho_{jk}|}\Big),$$
so that
$$I_{11}(t) \le C\frac{1}{p_0^2}\sum_{(j,k)\in A(\epsilon)\cap H_0}(t+1)^{-2}\exp\Big(-\frac{t^2}{1+|\rho_{jk}|}\Big)G^{-2}(t) \le C\frac{1}{p_0^2}\sum_{(j,k)\in A(\epsilon)\cap H_0}[G(t)]^{-\frac{2|\rho_{jk}|}{1+|\rho_{jk}|}}.$$
Note that for $0 \le t \le b_p$ we have $G(t) \ge G(b_p) = c_p/p$, so that by assumption (A3) it follows that for some $\epsilon, q > 0$,
$$I_{11}(t) \le C\sum_{(j,k)\in A(\epsilon)\cap H_0} p^{\frac{2|\rho_{jk}|}{1+|\rho_{jk}|}+q-2} = O\big(1/(\log p)^2\big).$$
By the above inequalities, we can prove (8.7) by choosing $0 < \delta < 1$ so that
$$\sum_{i=0}^{d_p}\mathbb{E}[I(t_i)]^2 \le C\sum_{i=0}^{d_p}(pG(t_i))^{-1} + C d_p\big[(\log p)^{-1-\delta} + (\log p)^{-2}\big] \le C\sum_{i=0}^{d_p}\frac{1}{c_p + c_p^{2/3}e^{i\delta}} + o(1) = o(1).$$
Lastly, as all the above arguments are conditional on $\{x_i\}_{i=1}^n$, the statements of Theorem 5 follow by averaging over the probability measure of $\{x_i\}_{i=1}^n$. FUNDING This research was supported by NIH grants R01GM123056 and R01GM129781 and NSF grant DMS-1712735. SUPPLEMENTARY MATERIALS In the Supplemental Materials, we prove all the main theorems and the technical lemmas."
+ }, + { + "url": "http://arxiv.org/abs/2010.04819v4", + "title": "How Does Mixup Help With Robustness and Generalization?", + "abstract": "Mixup is a popular data augmentation technique based on taking convex\ncombinations of pairs of examples and their labels. This simple technique has\nbeen shown to substantially improve both the robustness and the generalization\nof the trained model. However, it is not well-understood why such improvement\noccurs. In this paper, we provide theoretical analysis to demonstrate how using\nMixup in training helps model robustness and generalization. For robustness, we\nshow that minimizing the Mixup loss corresponds to approximately minimizing an\nupper bound of the adversarial loss. This explains why models obtained by Mixup\ntraining exhibits robustness to several kinds of adversarial attacks such as\nFast Gradient Sign Method (FGSM). For generalization, we prove that Mixup\naugmentation corresponds to a specific type of data-adaptive regularization\nwhich reduces overfitting. Our analysis provides new insights and a framework\nto understand Mixup.", + "authors": "Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou", + "published": "2020-10-09", + "updated": "2021-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION Mixup was introduced by Zhang et al. (2018) as a data augmentation technique. It has been empirically shown to substantially improve test performance and robustness to adversarial noise of state-of-the-art neural network architectures (Zhang et al., 2018; Lamb et al., 2019; Thulasidasan et al., 2019; Zhang et al., 2018; Arazo et al., 2019). Despite the impressive empirical performance, it is still not fully understood why Mixup leads to such improvement across the different aspects mentioned above. We \ufb01rst provide more background about robustness and generalization properties of deep networks and Mixup. 
Then we give an overview of our main contributions. Adversarial robustness. Although neural networks have achieved remarkable success in many areas such as natural language processing (Devlin et al., 2018) and image recognition (He et al., 2016a), it has been observed that neural networks are very sensitive to adversarial examples: predictions can be easily flipped by human-imperceptible perturbations (Goodfellow et al., 2014; Szegedy et al., 2013). Specifically, in Goodfellow et al. (2014), the authors use the fast gradient sign method (FGSM) to generate adversarial examples, which causes an image of a panda to be classified as a gibbon with high confidence. Although various defense mechanisms have been proposed against adversarial attacks, those mechanisms typically sacrifice test accuracy in exchange for robustness (Tsipras et al., 2018), and many of them require a significant amount of additional computation time. In contrast, Mixup training tends to improve test accuracy while also exhibiting a certain degree of resistance to adversarial examples, such as those generated by FGSM (Lamb et al., 2019). Moreover, the corresponding training time is relatively modest. As an illustration, we compare the robust test accuracy between a model trained with Mixup and a model trained with standard empirical risk minimization (ERM) under adversarial attacks generated by FGSM (Fig. 1a). (∗Equal contribution. Published as a conference paper at ICLR 2021.) [Figure 1: Illustrative examples of the impact of Mixup on robustness and generalization. (a) Adversarial robustness on the SVHN data under FGSM attacks. (b) Generalization gap between test and train loss. More details regarding the experimental setup are included in Appendix C.1, C.2.] The model trained with the Mixup loss has much better robust accuracy. The robustness of Mixup under other attacks has also been studied empirically in Lamb et al. (2019). Generalization. Generalization theory has been a central focus of learning theory (Vapnik, 1979; 2013; Bartlett et al., 2002; Bartlett & Mendelson, 2002; Bousquet & Elisseeff, 2002; Xu & Mannor, 2012), but it still remains a mystery for many modern deep learning algorithms (Zhang et al., 2016; Kawaguchi et al., 2017). For Mixup, from Fig. 1b we observe that Mixup training results in better test performance than standard empirical risk minimization. This is mainly due to its good generalization property, since the training errors are small for both Mixup training and empirical risk minimization (experiments with training-error results are included in the appendix). While there have been many enlightening studies establishing generalization theory for modern machine learning algorithms (Sun et al., 2015; Neyshabur et al., 2015; Hardt et al., 2016; Bartlett et al., 2017; Kawaguchi et al., 2017; Arora et al., 2018; Neyshabur & Li, 2019), few existing studies have characterized the generalization behavior of Mixup training in theory. Our contributions. In this paper, we theoretically investigate how Mixup improves both adversarial robustness and generalization. We begin by relating the loss function induced by Mixup to the standard loss with additional adaptive regularization terms. Based on the derived regularization terms, we show that Mixup training minimizes an upper bound on the adversarial loss, which leads to robustness against single-step adversarial attacks. For generalization, we show how the regularization terms can reduce overfitting and lead to better generalization behavior than that of standard training.
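The FGSM attack used in this comparison takes a single step in the sign of the input gradient. A minimal sketch for a linear logistic model, a simplification of the deep-network experiments reported here (all names are ours):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def logistic_loss(theta, x, y):
    # l(theta, (x, y)) = log(1 + exp(theta^T x)) - y * theta^T x
    z = sum(t * xi for t, xi in zip(theta, x))
    return math.log(1.0 + math.exp(z)) - y * z

def fgsm(theta, x, y, eps):
    # One-step FGSM: x_adv = x + eps * sign(grad_x loss).
    # For the logistic loss, grad_x loss = (sigmoid(theta^T x) - y) * theta.
    z = sum(t * xi for t, xi in zip(theta, x))
    g = sigmoid(z) - y
    return [xi + eps * sign(g * t) for xi, t in zip(x, theta)]
```

Because the logistic loss is convex in $\theta^\top x$, this single signed-gradient step can only increase the loss (weakly), which is why single-step attacks of this kind are the natural baseline in the robustness comparison above.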
Our analyses provide insights and a framework for understanding the impact of Mixup. Outline of the paper. Section 2 introduces the notation and problem setup. In Section 3, we present our main theoretical results, including the regularization effect of Mixup and the subsequent analysis showing that such regularization improves adversarial robustness and generalization. Section 4 concludes with a discussion of future work. Proofs are deferred to the Appendix. 1.1 RELATED WORK Since its advent, Mixup training (Zhang et al., 2018) has been shown to substantially improve generalization and single-step adversarial robustness across a wide range of tasks, in both supervised (Lamb et al., 2019; Verma et al., 2019a; Guo et al., 2019) and semi-supervised settings (Berthelot et al., 2019; Verma et al., 2019b). This has motivated a recent line of work developing a number of variants of Mixup, including Manifold Mixup (Verma et al., 2019a), Puzzle Mix (Kim et al., 2020), CutMix (Yun et al., 2019), Adversarial Mixup Resynthesis (Beckham et al., 2019), and PatchUp (Faramarzi et al., 2020). However, theoretical understanding of the underlying mechanism of why Mixup and its variants perform well on generalization and adversarial robustness is still limited. Some of the theoretical tools we use in this paper are related to Wang & Manning (2013) and Wager et al. (2013), where the authors use a second-order Taylor approximation to derive a regularized loss function for Dropout training. This technique was then extended to derive more properties of Dropout, including the inductive bias of Dropout (Helmbold & Long, 2015), the regularization effect in matrix factorization (Mianjy et al., 2018), and the implicit regularization in neural networks (Wei et al., 2020). This technique has recently been applied to Mixup in a parallel and independent work (Carratino et al., 2020) to derive regularization terms.
Compared with the results in Carratino et al. (2020), our derived regularization enjoys a simpler form and therefore enables the subsequent analysis of adversarial robustness and generalization. We clarify the detailed differences in Section 3. To the best of our knowledge, our paper is the first to provide a theoretical treatment connecting the regularization, adversarial robustness, and generalization of Mixup training. 2 PRELIMINARIES In this section, we state our notation and briefly recap the definition of Mixup. Notations. We denote the general parameterized loss by $l(\theta, z)$, where $\theta \in \Theta \subseteq \mathbb{R}^d$ and $z = (x, y)$ is the input-output pair. We consider a training dataset $S = \{(x_1, y_1), \cdots, (x_n, y_n)\}$, where $x_i \in \mathcal{X} \subseteq \mathbb{R}^p$ and $y_i \in \mathcal{Y} \subseteq \mathbb{R}^m$ are drawn i.i.d. from a joint distribution $\mathcal{P}_{x,y}$. We further denote $\tilde x_{i,j}(\lambda) = \lambda x_i + (1-\lambda)x_j$ and $\tilde y_{i,j}(\lambda) = \lambda y_i + (1-\lambda)y_j$ for $\lambda \in [0,1]$, and let $\tilde z_{i,j}(\lambda) = (\tilde x_{i,j}(\lambda), \tilde y_{i,j}(\lambda))$. Let $L(\theta) = \mathbb{E}_{z\sim\mathcal{P}_{x,y}} l(\theta, z)$ denote the standard population loss and $L^{std}_n(\theta, S) = \frac{1}{n}\sum_{i=1}^n l(\theta, z_i)$ denote the standard empirical loss. For two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$, we use $p\mathcal{D}_1 + (1-p)\mathcal{D}_2$ for $p \in (0,1)$ to denote the mixture distribution such that a sample is drawn with probabilities $p$ and $1-p$ from $\mathcal{D}_1$ and $\mathcal{D}_2$, respectively. For a parameterized function $f_\theta(x)$, we use $\nabla f_\theta(x)$ and $\nabla_\theta f_\theta(x)$ to denote the gradients with respect to $x$ and $\theta$, respectively. For two vectors $a$ and $b$, we use $\cos(a, b)$ to denote $\langle a, b\rangle/(\|a\|\cdot\|b\|)$. Mixup. Generally, for classification, the output $y_i$ is the embedding of the class of $x_i$, i.e.
the one-hot encoding obtained by taking $m$ as the total number of classes and letting $y_i \in \{0,1\}^m$ be the binary vector with all entries equal to zero except the one corresponding to the class of $x_i$. In particular, if we take $m = 1$, this degenerates to binary classification. For regression, $y_i$ can be any real number or vector. The Mixup loss is defined in the following form:
$$L^{mix}_n(\theta, S) = \frac{1}{n^2}\sum_{i,j=1}^{n}\mathbb{E}_{\lambda\sim\mathcal{D}_\lambda}\, l(\theta, \tilde z_{ij}(\lambda)), \quad (1)$$
where $\mathcal{D}_\lambda$ is a distribution supported on $[0,1]$. Throughout the paper, we consider the most commonly used $\mathcal{D}_\lambda$, the Beta distribution $Beta(\alpha, \beta)$ for $\alpha, \beta > 0$. 3 MAIN RESULTS In this section, we first introduce a lemma that characterizes the regularization effect of Mixup. Based on this lemma, we then derive our main theoretical results on adversarial robustness and generalization error bounds in Sections 3.2 and 3.3, respectively. 3.1 THE REGULARIZATION EFFECT OF MIXUP As a starting point, we demonstrate how Mixup training is approximately equivalent to optimizing a regularized version of the standard empirical loss $L^{std}_n(\theta, S)$. Throughout the paper, we consider the following class of loss functions for the prediction function $f_\theta(x)$ and target $y$:
$$\mathcal{L} = \{\, l(\theta, (x,y)) \mid l(\theta, (x,y)) = h(f_\theta(x)) - y f_\theta(x) \text{ for some function } h \,\}. \quad (2)$$
This function class $\mathcal{L}$ includes many commonly used losses, including the loss functions induced by generalized linear models (GLMs), such as linear regression and logistic regression, and also the cross-entropy loss for neural networks. In the following, we introduce a lemma stating that Mixup training with $\lambda \sim \mathcal{D}_\lambda = Beta(\alpha, \beta)$ induces a regularized loss function with the weight of each regularizer specified by a mixture of Beta distributions, $\tilde{\mathcal{D}}_\lambda = \frac{\alpha}{\alpha+\beta} Beta(\alpha+1, \beta) + \frac{\beta}{\alpha+\beta} Beta(\beta+1, \alpha)$.
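In practice, the double sum in Eq. (1) is estimated by mixing random pairs of examples within a minibatch. A minimal sketch of the augmentation step (names are ours):

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=1.0, beta=1.0, rng=random):
    # Draw lambda ~ Beta(alpha, beta) and return the mixed example
    # (lambda*x1 + (1-lambda)*x2, lambda*y1 + (1-lambda)*y2) of Eq. (1).
    lam = rng.betavariate(alpha, beta)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

Averaging the loss of such mixed examples over many random pairs and draws of $\lambda$ gives a Monte Carlo estimate of $L^{mix}_n(\theta, S)$.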
Lemma 3.1. Consider the loss function $l(\theta, (x,y)) = h(f_\theta(x)) - y f_\theta(x)$, where $h(\cdot)$ and $f_\theta(\cdot)$ for all $\theta \in \Theta$ are twice differentiable. We further denote by $\tilde{\mathcal{D}}_\lambda$ the uniform mixture of two Beta distributions, $\frac{\alpha}{\alpha+\beta}Beta(\alpha+1,\beta) + \frac{\beta}{\alpha+\beta}Beta(\beta+1,\alpha)$, and by $\mathcal{D}_X$ the empirical distribution of the training dataset $S = (x_1, \cdots, x_n)$. The corresponding Mixup loss $L^{mix}_n(\theta, S)$, as defined in Eq. (1) with $\lambda \sim \mathcal{D}_\lambda = Beta(\alpha, \beta)$, can be rewritten as
$$L^{mix}_n(\theta, S) = L^{std}_n(\theta, S) + \sum_{i=1}^{3} R_i(\theta, S) + \mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}\big[(1-\lambda)^2\phi(1-\lambda)\big],$$
where $\lim_{a\to 0}\phi(a) = 0$ and
$$R_1(\theta, S) = \frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}[1-\lambda]}{n}\sum_{i=1}^n \big(h'(f_\theta(x_i)) - y_i\big)\nabla f_\theta(x_i)^\top \mathbb{E}_{r_x\sim\mathcal{D}_X}[r_x - x_i],$$
$$R_2(\theta, S) = \frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}[(1-\lambda)^2]}{2n}\sum_{i=1}^n h''(f_\theta(x_i))\,\nabla f_\theta(x_i)^\top \mathbb{E}_{r_x\sim\mathcal{D}_X}\big[(r_x - x_i)(r_x - x_i)^\top\big]\nabla f_\theta(x_i),$$
$$R_3(\theta, S) = \frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}[(1-\lambda)^2]}{2n}\sum_{i=1}^n \big(h'(f_\theta(x_i)) - y_i\big)\,\mathbb{E}_{r_x\sim\mathcal{D}_X}\big[(r_x - x_i)^\top\nabla^2 f_\theta(x_i)(r_x - x_i)\big].$$
By putting the higher-order terms of the approximation into $\phi(\cdot)$, this result shows that Mixup is related to regularizing $\nabla f_\theta(x_i)$ and $\nabla^2 f_\theta(x_i)$, the first and second directional derivatives with respect to $x_i$. Throughout the paper, our theory is mainly built upon analysis of the quadratic approximation of $L^{mix}_n(\theta, S)$, which we further denote as
$$\tilde L^{mix}_n(\theta, S) := L^{std}_n(\theta, S) + \sum_{i=1}^3 R_i(\theta, S). \quad (3)$$
Comparison with related work. The result in Lemma 3.1 relies on the second-order Taylor expansion of the loss function in Eq. (1).
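When $h$ is quadratic and $f_\theta$ is linear, the remainder $\phi$ vanishes, so the identity in Lemma 3.1 holds exactly and can be checked numerically. A minimal sketch for $h(z) = z^2/2$, $f_\theta(x) = \theta^\top x$, and $\lambda \sim Beta(1,1)$, in which case $\tilde{\mathcal{D}}_\lambda = Beta(2,1)$ so that $\mathbb{E}[1-\lambda] = 1/3$ and $\mathbb{E}[(1-\lambda)^2] = 1/6$ (the code and names are ours, not the paper's):

```python
def mixup_loss_exact(theta, X, Y):
    # Exact Mixup loss (Eq. (1)) with lambda ~ Beta(1,1) = Uniform[0,1]
    # for the quadratic loss l = h(f) - y*f with h(z) = z*z/2, f = theta^T x.
    # The integrand is quadratic in lambda, so Simpson's rule is exact.
    n = len(X)

    def loss(x, y):
        z = sum(t * a for t, a in zip(theta, x))
        return z * z / 2 - y * z

    total = 0.0
    for xi, yi in zip(X, Y):
        for xj, yj in zip(X, Y):
            def at(lam):
                x = [lam * a + (1 - lam) * b for a, b in zip(xi, xj)]
                return loss(x, lam * yi + (1 - lam) * yj)
            total += (at(0.0) + 4 * at(0.5) + at(1.0)) / 6  # exact E over lambda
    return total / n ** 2

def mixup_loss_regularized(theta, X, Y):
    # L_std + R1 + R2 of Lemma 3.1 (R3 = 0 since f is linear):
    # here h'(z) = z, h''(z) = 1, grad f = theta.
    n, p = len(X), len(theta)
    xbar = [sum(x[k] for x in X) / n for k in range(p)]
    std = r1 = r2 = 0.0
    for xi, yi in zip(X, Y):
        z = sum(t * a for t, a in zip(theta, xi))
        std += z * z / 2 - yi * z
        r1 += (z - yi) * sum(t * (b - a) for t, b, a in zip(theta, xbar, xi))
        # theta^T E_{r_x}[(r_x - x_i)(r_x - x_i)^T] theta
        r2 += sum(sum(t * (b - a) for t, b, a in zip(theta, xj, xi)) ** 2
                  for xj in X) / n
    return std / n + (1 / 3) * r1 / n + (1 / 6) * r2 / (2 * n)
```

For this loss class the two quantities agree to floating-point precision, illustrating that the discrepancy seen for logistic losses comes only from the third- and higher-order terms absorbed into $\phi$.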
Similar approximations have been proposed before to study the regularization effect of Dropout training; see Wang & Manning (2013); Wager et al. (2013); Mianjy et al. (2018); Wei et al. (2020). Recently, Carratino et al. (2020) independently used a similar approximation to study the regularization effect of Mixup. However, the regularization terms derived in Carratino et al. (2020) are much more complicated than those in Lemma 3.1. For example, for GLMs, our technique yields the regularization term shown in Lemma 3.3, which is much simpler than those in Corollaries 2 and 3 of Carratino et al. (2020). One technical step we use here to simplify the regularization expression is to equate Mixup with input perturbation; see the proof in the Appendix for more details. This simpler expression enables us to study the robustness and generalization of Mixup in the subsequent sections. Validity of the approximation. In the following, we present numerical experiments supporting the approximation in Eq. (3). Following the setup of the numerical validations in Wager et al. (2013); Carratino et al. (2020), we experimentally show that the quadratic approximation is generally very accurate. Specifically, we train a logistic regression model (as one example of a GLM, which we study later) and a two-layer neural network with ReLU activations. We use the two-moons dataset (Buitinck et al., 2013). Fig. 2 shows the training and test losses for two models trained with different loss functions: the original Mixup loss and the approximate Mixup loss. Both models had the same random initialization scheme. Throughout training, we compute the test and training loss of each model using its own loss function. The empirical results show that the approximate Mixup loss is quite close to the original Mixup loss. 3.2 MIXUP AND ADVERSARIAL ROBUSTNESS Having introduced $\tilde L^{mix}_n(\theta, S)$ in Eq. (3), we are now ready to state our main theoretical results.
In this subsection, we illustrate how Mixup helps adversarial robustness. We prove that minimizing $\tilde L^{mix}_n(\theta, S)$ is equivalent to minimizing an upper bound of the second-order Taylor expansion of an adversarial loss. [Figure 2: Comparison of the original Mixup loss with the approximate Mixup loss function; panels show logistic regression and a two-layer ReLU neural network.] Throughout this subsection, we study the logistic loss function $l(\theta, z) = \log(1 + \exp(f_\theta(x))) - y f_\theta(x)$, where $y \in \mathcal{Y} = \{0, 1\}$. In addition, let $g$ be the logistic function, $g(s) = e^s/(1+e^s)$, and consider the case where $\theta$ lies in the data-dependent space $\Theta$, defined as $\Theta = \{\theta \in \mathbb{R}^d : y_i f_\theta(x_i) + (y_i - 1) f_\theta(x_i) \ge 0 \text{ for all } i = 1, \ldots, n\}$. Notice that $\Theta$ contains the set of all $\theta$ with zero training error:
$$\Theta \supseteq \{\theta \in \mathbb{R}^d : \text{the label prediction } \hat y_i = 1\{f_\theta(x_i) \ge 0\} \text{ is equal to } y_i \text{ for all } i = 1, \ldots, n\}. \quad (4)$$
In many practical cases, the training error (0-1 loss) becomes zero in finite time although the training loss does not. Equation (4) shows that the condition $\theta \in \Theta$ is satisfied in finite time in such practical cases with zero training error. Logistic regression. As a starting point, we study logistic regression with $f_\theta(x) = \theta^\top x$, in which case the number of parameters coincides with the data dimension, i.e., $p = d$. For a given $\varepsilon > 0$, we consider the adversarial loss with $\ell_2$-attack of size $\varepsilon\sqrt{d}$, that is, $L^{adv}_n(\theta, S) = \frac{1}{n}\sum_{i=1}^n \max_{\|\delta_i\|_2 \le \varepsilon\sqrt{d}} l(\theta, (x_i + \delta_i, y_i))$. We first present the following second-order Taylor approximation of $L^{adv}_n(\theta, S)$. Lemma 3.2.
The second-order Taylor approximation of $L^{adv}_n(\theta, S)$ is $\frac{1}{n}\sum_{i=1}^n \tilde l_{adv}(\varepsilon\sqrt{d}, (x_i, y_i))$, where for any $\eta > 0$, $x \in \mathbb{R}^p$, and $y \in \{0,1\}$,
$$\tilde l_{adv}(\eta, (x, y)) = l(\theta, (x,y)) + \eta\,\big|g(x^\top\theta) - y\big|\cdot\|\theta\|_2 + \frac{\eta^2}{2}\cdot g(x^\top\theta)\big(1 - g(x^\top\theta)\big)\cdot\|\theta\|_2^2. \quad (5)$$
By comparing $\tilde l_{adv}(\delta, (x,y))$ and $\tilde L^{mix}_n(\theta, S)$ applied to logistic regression, we prove the following. Theorem 3.1. Suppose that $f_\theta(x) = x^\top\theta$ and there exists a constant $c_x > 0$ such that $\|x_i\|_2 \ge c_x\sqrt{d}$ for all $i \in \{1, \ldots, n\}$. Then, for any $\theta \in \Theta$, we have
$$\tilde L^{mix}_n(\theta, S) \ge \frac{1}{n}\sum_{i=1}^n \tilde l_{adv}\big(\varepsilon_i\sqrt{d}, (x_i, y_i)\big) \ge \frac{1}{n}\sum_{i=1}^n \tilde l_{adv}\big(\varepsilon_{mix}\sqrt{d}, (x_i, y_i)\big),$$
where $\varepsilon_i = R_i c_x \mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}[1-\lambda]$ with $R_i = |\cos(\theta, x_i)|$, and $\varepsilon_{mix} = R\cdot c_x \mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_\lambda}[1-\lambda]$ with $R = \min_{i\in\{1,\ldots,n\}}|\cos(\theta, x_i)|$. Theorem 3.1 shows that $\tilde L^{mix}_n(\theta, S)$ is an upper bound of the second-order Taylor expansion of the adversarial loss with $\ell_2$-attack of size $\varepsilon_{mix}\sqrt{d}$. Note that $\varepsilon_{mix}$ depends on $\theta$; one can think of the final radius as being evaluated at the minimizer of $\tilde L^{mix}_n(\theta, S)$. Therefore, minimizing the Mixup loss results in a small adversarial loss. Our analysis suggests that Mixup by itself can improve robustness against small attacks, which tend to be single-step attacks (Lamb et al., 2019). An interesting direction for future work is to explore whether combining Mixup with adversarial training can provide robustness against larger and more sophisticated attacks, such as iterative projected gradient descent and other multi-step attacks.
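The quality of the approximation in Eq. (5) can be checked directly: for a linear model the loss depends only on the scalar $z = \theta^\top(x+\delta)$ and is convex in it, so the exact worst-case $\ell_2$ perturbation of radius $\eta$ sits at an endpoint $z \pm \eta\|\theta\|_2$. A minimal sketch comparing the exact adversarial loss with $\tilde l_{adv}$ (names are ours):

```python
import math

def h(z):
    return math.log(1.0 + math.exp(z))

def clean_loss(theta, x, y):
    z = sum(t * a for t, a in zip(theta, x))
    return h(z) - y * z

def exact_adv_loss(theta, x, y, eta):
    # Exact sup over ||delta||_2 <= eta: the loss is convex in
    # z = theta^T (x + delta), so the maximum is at z +/- eta * ||theta||_2.
    z = sum(t * a for t, a in zip(theta, x))
    r = eta * math.sqrt(sum(t * t for t in theta))
    return max(h(z + r) - y * (z + r), h(z - r) - y * (z - r))

def taylor_adv_loss(theta, x, y, eta):
    # Second-order approximation l~_adv of Eq. (5), with eta = eps * sqrt(d).
    z = sum(t * a for t, a in zip(theta, x))
    g = 1.0 / (1.0 + math.exp(-z))  # logistic function g(z)
    norm = math.sqrt(sum(t * t for t in theta))
    return (clean_loss(theta, x, y)
            + eta * abs(g - y) * norm
            + 0.5 * eta ** 2 * g * (1.0 - g) * norm ** 2)
```

For small attack radii the two quantities agree up to a third-order error, which is the sense in which $\tilde l_{adv}$ stands in for the adversarial loss in Theorem 3.1.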
[Figure 3: The behaviors of the values of $R$ and $R_i$ during training for linear models (a) and an artificial neural network with ReLU (ANN) (b). Subplots (c) and (d) show the histogram of $(R_1, R_2, \ldots, R_n)$ for the ANN before (epoch 0) and after (epoch 400) training. $R$ and $R_i$ control the radii of the adversarial attacks that Mixup training protects against.] Remark 3.1. Note that Theorem 3.1 also implies adversarial robustness against $\ell_\infty$ attacks of size $\varepsilon$, since for any attack $\delta$, $\|\delta\|_\infty \le \varepsilon$ implies $\|\delta\|_2 \le \varepsilon\sqrt{d}$, and therefore $\max_{\|\delta\|_\infty\le\varepsilon} l(\theta, (x+\delta, y)) \le \max_{\|\delta\|_2\le\sqrt{d}\,\varepsilon} l(\theta, (x+\delta, y))$. In the following, we provide more discussion of the range of $R = \min_{i\in\{1,\ldots,n\}}|\cos(\theta, x_i)|$. We first show that, under additional regularity conditions, we can obtain a high-probability lower bound that does not depend on the sample size. We then numerically demonstrate, at the end of this subsection, that $R$ tends to increase during training for both linear models and neural networks. A constant lower bound for logistic regression. We now show how to obtain a constant lower bound under some additional conditions. Assumption 3.1. Let us denote by $\hat\Theta_n \subseteq \Theta$ the set of minimizers of $\tilde L^{mix}_n(\theta, S)$. We assume there exists a set $\Theta^*$ (see footnote 1) such that for all $n \ge N$, where $N$ is a positive integer, $\hat\Theta_n \subseteq \Theta^*$ with probability at least $1 - \delta_n$, where $\delta_n \to 0$ as $n \to \infty$.
Moreover, there exists a \u03c4 \u2208(0, 1) such that p\u03c4 = P ({x \u2208X : | cos(x, \u03b8)| \u2265\u03c4 for all \u03b8 \u2208\u0398\u2217}) \u2208(0, 1]. Such condition generally holds for regular optimization problems, where the minimizers are not located too dispersedly in the sense of solid angle (instead of Euclidean distance). More speci\ufb01cally, if we normalize all the minimizers\u2019 \u21132 norm to 1, this assumption requires that the set of minimizers should not be located all over the sphere. In addition, Assumption 3.1 only requires that the probability p\u03c4 and the threshold \u03c4 to be non-zero. In particular, if the distribution of x has positive mass in all solid angles, then when the set of minimizers is discrete, this assumption holds. For more complicated cases in which the set of minimizers consists of sub-manifolds, as long as there exists a solid angle in X that is disjoint with the set of minimizers, the assumption still holds. Theorem 3.2. Under Assumption 3.1, for f\u03b8(x) = x\u22a4\u03b8, if there exists constants bx, cx > 0 such that cx \u221a d \u2264\u2225xi\u22252 \u2264bx \u221a d for all i \u2208{1, . . . , n}. Then, with probability at least 1 \u2212\u03b4n \u2212 2 exp(\u2212np2 \u03c4/2), there exists constants \u03ba > 0, \u03ba2 > \u03ba1 > 0, such that for any \u03b8 \u2208\u02c6 \u0398n, we have \u02dc Lmix n (\u03b8, S) \u22651 n n X i=1 \u02dc ladv(\u02dc \u03b5mix \u221a d, (xi, yi)) where \u02dc \u03b5mix = \u02dc RcxE\u03bb\u223c\u02dc D\u03bb[1 \u2212\u03bb] and \u02dc R = min n p\u03c4 \u03ba1 2\u03ba2\u2212p\u03c4 (\u03ba2\u2212\u03ba1), q 4\u03bap\u03c4 2\u2212p\u03c4 +4\u03bap\u03c4 o \u03c4. Neural networks with ReLU / Max-pooling. The results in the above subsection can be extended to the case of neural networks with ReLU activation functions and max-pooling. 
Speci\ufb01cally, we 1Under some well-separation and smoothness conditions, we would expect all elements in \u02c6 \u0398n will fall into a neighborhood Nn of minimizers of ES \u02dc Lmix n (\u03b8, S), and Nn will shrink as n increases, i.e. Nn+1 \u2282Nn. One can think \u0398\u2217is a set containing all Nn for n \u2265N. 6 \fPublished as a conference paper at ICLR 2021 consider the logistic loss, l(\u03b8, z) = log(1 + exp(f\u03b8(x))) \u2212yf\u03b8(x) with y \u2208{0, 1}, where f\u03b8(x) represents a fully connected neural network with ReLU activation function or max-pooling: f\u03b8(x) = \u03b2\u22a4\u03c3 \u0000WN\u22121 \u00b7 \u00b7 \u00b7 (W2\u03c3(W1x) \u0001 . Here, \u03c3 represents nonlinearity via ReLU and max pooling, each Wi is a matrix, and \u03b2 is a column vector: i.e., \u03b8 consists of {Wi}N\u22121 i=1 and \u03b2. With the nonlinearity \u03c3 for ReLU and max-pooling, the function f\u03b8 satis\ufb01es that f\u03b8(x) = \u2207f\u03b8(x)\u22a4x and \u22072f\u03b8(x) = 0 almost everywhere, where the gradient is taken with respect to input x. Under such conditions, similar to Lemma 3.2, the adversarial loss function Pn i=1 max\u2225\u03b4i\u22252\u2264\u03b5 \u221a d l(\u03b8, (xi + \u03b4i, yi))/n can be written as Lstd n (\u03b8, S)+\u03b5mix \u221a d( 1 n n X i=1 |g(f\u03b8(xi))\u2212yi|\u2225\u2207f\u03b8(xi)\u22252)+ \u03b52 mixd 2 ( 1 n n X i=1 |h\u2032\u2032(f\u03b8(xi))|\u2225\u2207f\u03b8(xi)\u22252 2). (6) With a little abuse of notations, we also denote \u02dc ladv(\u03b4, (x, y)) = l(\u03b8, (x, y)) + \u03b4|g(f\u03b8(x)) \u2212y|\u2225\u2207f\u03b8(x)\u22252 + \u03b42d 2 |h\u2032\u2032(f\u03b8(x))|\u2225\u2207f\u03b8(x)\u22252 2. The following theorem suggests that minimizing the Mixup loss in neural nets also lead to a small adversarial loss. Theorem 3.3. 
Assume that $f_\theta(x_i) = \nabla f_\theta(x_i)^\top x_i$ and $\nabla^2 f_\theta(x_i) = 0$ (both satisfied by the ReLU and max-pooling activation functions), and that there exists a constant $c_x > 0$ such that $\|x_i\|_2 \ge c_x\sqrt{d}$ for all $i \in \{1, \ldots, n\}$. Then, for any $\theta \in \Theta$, we have
$$\tilde L^{mix}_n(\theta, S) \ge \frac{1}{n}\sum_{i=1}^n \tilde l_{adv}(\varepsilon_i\sqrt{d}, (x_i, y_i)) \ge \frac{1}{n}\sum_{i=1}^n \tilde l_{adv}(\varepsilon_{mix}\sqrt{d}, (x_i, y_i)),$$
where $\varepsilon_i = R_i c_x E_{\lambda\sim\tilde D_\lambda}[1-\lambda]$, $\varepsilon_{mix} = R \cdot c_x E_{\lambda\sim\tilde D_\lambda}[1-\lambda]$, $R_i = |\cos(\nabla f_\theta(x_i), x_i)|$, and $R = \min_{i\in\{1,\ldots,n\}} |\cos(\nabla f_\theta(x_i), x_i)|$.

A similar constant lower bound can be derived in the setting of neural networks; due to limited space, please see the detailed discussion in the appendix.

On the value of $R = \min_i R_i$ via experiments. For both linear models and neural networks, after the training accuracy reaches 100%, the logistic loss is further minimized as $|f_\theta(x_i)|$ increases. Since $|f_\theta(x_i)| = |\nabla f_\theta(x_i)^\top x_i| = \|\nabla f_\theta(x_i)\|_2\|x_i\|_2 R_i$, this suggests that $R_i$ and $R$ tend to increase after the training accuracy reaches 100% (e.g., $\nabla f_\theta(x_i) = \theta$ in the case of linear models). We confirm this phenomenon in Fig. 3. In the figure, $R$ is initially small but tends to increase after the training accuracy reaches 100%, as expected. For example, for the ANN, the value of $R$ was initially $2.27\times 10^{-5}$ but increased to $6.11\times 10^{-2}$ after training. Fig. 3 (c) and (d) also show that $R_i$ for each data point tends to increase during training, and that the values of $R_i$ for many points are much larger than the pessimistic lower bound $R$: e.g., whereas $R = 6.11\times 10^{-2}$, we have $R_i > 0.8$ for several data points in Fig. 3 (d).
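This experiment (whose full setup is described next) can be reproduced in a condensed form. The sketch below is a simplified illustration, not the paper's exact configuration: it covers only the linear model (so $\nabla f_\theta(x_i) = \theta$), uses plain full-batch gradient descent instead of SGD with momentum, and fixes a random seed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 100
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))                 # x_i ~ N(0, I)
y = (X @ theta_star > 0).astype(float)      # y_i = 1{x_i^T theta* > 0}

def min_abs_cos(theta, X):
    # R = min_i |cos(theta, x_i)|: for a linear model grad f = theta,
    # so this is the quantity controlling the certified attack radius.
    c = (X @ theta) / (np.linalg.norm(X, axis=1) * np.linalg.norm(theta))
    return np.abs(c).min()

theta = 0.01 * rng.normal(size=d)
R_init = min_abs_cos(theta, X)
for _ in range(500):                        # plain full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    theta -= 0.1 * X.T @ (p - y) / n
acc = ((X @ theta > 0) == (y > 0)).mean()
R_final = min_abs_cos(theta, X)
```

Tracking `R_init` and `R_final` over training is how one would plot the curves of Fig. 3(a); the histogram panels (c) and (d) correspond to the full vector of per-point cosines before and after training.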
For this experiment, we generated 100 data points as xi \u223cN(0, I) and yi = 1{x\u22a4 i \u03b8\u2217> 0} where xi \u2208R10 and \u03b8\u2217\u223cN(0, I). We used SGD to train linear models and ANNs with ReLU activations and 50 neurons per each of two hidden layers. We set the learning rate to be 0.1 and the momentum coef\ufb01cient to be 0.9. We turned off weight decay so that R is not maximized as a result of bounding \u2225\u2207f\u03b8(xi)\u2225, which is a trivial case from the discussion above. 3.3 MIXUP AND GENERALIZATION In this section, we show that the data-dependent regularization induced by Mixup directly controls the Rademacher complexity of the underlying function classes, and therefore yields concrete generalization error bounds. We study two models \u2013 the Generalized Linear Model (GLM) and two-layer ReLU nets with squared loss. Generalized linear model. A Generalized Linear Model is a \ufb02exible generalization of ordinary linear regression, where the corresponding loss takes the following form: l(\u03b8, (x, y)) = A(\u03b8\u22a4x) \u2212y\u03b8\u22a4x, 7 \fPublished as a conference paper at ICLR 2021 where A(\u00b7) is the log-partition function, x \u2208Rp and y \u2208R. For instance, if we take A(\u03b8\u22a4x) = log(1 + e\u03b8\u22a4x) and y \u2208{0, 1}, then the model corresponds to the logistic regression. In this paragraph, we consider the case where \u0398, X and Y are all bounded. By further taking advantage of the property of shift and scaling invariance of GLM, we can further simplify the regularization terms in Lemma 3.1 and obtain the following results. Lemma 3.3. Consider the centralized dataset S, that is, 1/n Pn i=1 xi = 0. and denote \u02c6 \u03a3X = 1 nxix\u22a4 i . 
For a GLM, if $A(\cdot)$ is twice differentiable, then the regularization term obtained by the second-order approximation of $\tilde L^{mix}_n(\theta, S)$ is given by
$$\frac{1}{2n}\Big[\sum_{i=1}^n A''(\theta^\top x_i)\Big]\cdot E_{\lambda\sim\tilde D_\lambda}\Big[\frac{(1-\lambda)^2}{\lambda^2}\Big]\,\theta^\top\hat\Sigma_X\theta, \qquad (7)$$
where $\tilde D_\lambda = \frac{\alpha}{\alpha+\beta}\mathrm{Beta}(\alpha+1,\beta) + \frac{\beta}{\alpha+\beta}\mathrm{Beta}(\beta+1,\alpha)$.

Given the above regularization term, we are ready to investigate the corresponding generalization gap. Following approaches similar to Arora et al. (2020), we shed light on the generalization problem by investigating the following function class, which is closely related to the dual problem of Eq. (7):
$$W_\gamma := \{x \mapsto \theta^\top x,\ \text{such that } \theta \text{ satisfies } E_x A''(\theta^\top x)\cdot\theta^\top\Sigma_X\theta \le \gamma\},$$
where $\gamma > 0$ and $\Sigma_X = E[x_i x_i^\top]$. Further, we assume that the distribution of $x$ is $\rho$-retentive for some $\rho \in (0, 1/2]$, that is, for any non-zero vector $v \in \mathbb{R}^d$,
$$\big(E_x[A''(x^\top v)]\big)^2 \ge \rho\cdot\min\{1, E_x(v^\top x)^2\}.$$
Such an assumption has been similarly made in Arora et al. (2020) and is satisfied by general GLMs when $\theta$ has bounded $\ell_2$ norm. We then have the following theorem.

Theorem 3.4. Assume that the distribution of $x_i$ is $\rho$-retentive, and let $\Sigma_X = E[xx^\top]$. Then the empirical Rademacher complexity of $W_\gamma$ satisfies
$$\mathrm{Rad}(W_\gamma, S) \le \max\Big\{\Big(\frac{\gamma}{\rho}\Big)^{1/4}, \Big(\frac{\gamma}{\rho}\Big)^{1/2}\Big\}\cdot\sqrt{\frac{\mathrm{rank}(\Sigma_X)}{n}}.$$

The above bound on Rademacher complexity directly implies the following generalization gap of Mixup training. Corollary 3.1.
Suppose A(\u00b7) is LA-Lipchitz continuous, X, Y and \u0398 are all bounded, then there exists constants L, B > 0, such that for all \u03b8 satisfying ExA\u2032\u2032(\u03b8\u22a4x)\u00b7\u03b8\u22a4\u03a3X\u03b8 \u2a7d\u03b3 (the regularization induced by Mixup), we have L(\u03b8) \u2a7dLstd n (\u03b8, S) + 2L \u00b7 LA \u00b7 max{(\u03b3 \u03c1)1/4, (\u03b3 \u03c1)1/2} \u00b7 r rank(\u03a3X) n ! + B r log(1/\u03b4) 2n , with probability at least 1 \u2212\u03b4. Remark 3.2. This result shows that the Mixup training would adapt to the intrinsic dimension of x and therefore has a smaller generalization error. Speci\ufb01cally, if we consider the general ridge penalty and consider the function class Wridge \u03b3 := {x \u2192\u03b8\u22a4x, \u2225\u03b8\u22252 \u2a7d\u03b3}, then the similar technique would yield a Rademacher complexity bound Rad(W\u03b3, S) \u2264max{(\u03b3/\u03c1)1/4, (\u03b3/\u03c1)1/2} \u00b7 p p/n, where p is the dimension of x. This bound is much larger than the result in Theorem 3.4 when the intrinsic dimension rank(\u03a3X) is small. Non-linear cases. The above results on GLM can be extended to the non-linear neural network case with Manifold Mixup (Verma et al., 2019a). In this section, we consider the two-layer ReLU neural networks with the squared loss L(\u03b8, S) = 1 n Pn i=1(yi \u2212f\u03b8(xi))2, where y \u2208R and f\u03b8(x) is a two-layer ReLU neural network, with the form of f\u03b8(x) = \u03b8\u22a4 1 \u03c3 \u0000Wx \u0001 + \u03b80. where W \u2208Rp\u00d7d, \u03b81 \u2208Rd, and \u03b80 denotes the bias term. Here, \u03b8 consists of W, \u03b80 and \u03b81. If we perform Mixup on the second layer (i.e., mix neurons on the hidden layer as proposed by Verma et al. (2019a)), we then have the following result on the induced regularization. 8 \fPublished as a conference paper at ICLR 2021 Lemma 3.4. 
Denote \u02c6 \u03a3\u03c3 X as the sample covariance matrix of {\u03c3(Wxi)}n i=1, then the regularization term obtained by the second-order approximation of \u02dc Lmix n (\u03b8, S) is given by E\u03bb\u223c\u02dc D\u03bb[(1 \u2212\u03bb)2 \u03bb2 ]\u03b8\u22a4 1 \u02c6 \u03a3\u03c3 X\u03b81, (8) where \u02dc D\u03bb \u223c \u03b1 \u03b1+\u03b2 Beta(\u03b1 + 1, \u03b2) + \u03b2 \u03b1+\u03b2 Beta(\u03b2 + 1, \u03b1). To show the generalization property of this regularizer, similar to the last section, we consider the following distribution-dependent class of functions indexed by \u03b8: WNN \u03b3 := {x \u2192f\u03b8(x), such that \u03b8 satisfying \u03b8\u22a4 1 \u03a3\u03c3 X\u03b81 \u2a7d\u03b3}, where \u03a3\u03c3 X = E[\u02c6 \u03a3\u03c3 X] and \u03b1 > 0. We then have the following result. Theorem 3.5. Let \u00b5\u03c3 = E[\u03c3(Wx)] and denote the generalized inverse of \u03a3\u03c3 X by \u03a3\u03c3\u2020 X . Suppose X, Y and \u0398 are all bounded, then there exists constants L, B > 0, such that for all f\u03b8 in WNN \u03b3 (the regularization induced by Manifold Mixup), we have, with probability at least 1 \u2212\u03b4, L(\u03b8) \u2a7dLstd n (\u03b8, S) + 4L \u00b7 s \u03b3 \u00b7 (rank(\u03a3\u03c3 X) + \u2225\u03a3\u03c3\u2020/2 X \u00b5\u03c3\u22252) n + B r log(1/\u03b4) 2n . 4 CONCLUSION AND FUTURE WORK Mixup is a data augmentation technique that generates new samples by linear interpolation of multiple samples and their labels. The Mixup training method has been empirically shown to have better generalization and robustness against attacks with adversarial examples than the traditional training method, but there is a lack of rigorous theoretical understanding. In this paper, we prove that the Mixup training is approximately a regularized loss minimization. The derived regularization terms are then used to demonstrate why Mixup has improved generalization and robustness against onestep adversarial examples. 
One interesting future direction is to extend our analysis to other Mixup variants, for example, Puzzle Mix (Kim et al., 2020) and Adversarial Mixup Resynthesis (Beckham et al., 2019), and investigate if the generalization performance and adversarial robustness can be further improved by these newly developed Mixup methods. ACKNOWLEDGMENTS The research of Linjun Zhang is supported by NSF DMS-2015378. The research of James Zou is supported by NSF CCF 1763191, NSF CAREER 1942926 and grants from the Silicon Valley Foundation and the Chan-Zuckerberg Initiative. The research of Kenji Kawaguchi is partially supported by the Center of Mathematical Sciences and Applications at Harvard University. This work is also in part supported by NSF award 1763665." + }, + { + "url": "http://arxiv.org/abs/1309.0961v4", + "title": "Exactly scale-free scale-free networks", + "abstract": "Many complex natural and physical systems exhibit patterns of interconnection\nthat conform, approximately, to a network structure referred to as scale-free.\nPreferential attachment is one of many algorithms that have been introduced to\nmodel the growth and structure of scale-free networks. With so many different\nmodels of scale-free networks it is unclear what properties of scale-free\nnetworks are typical, and what properties are peculiarities of a particular\ngrowth or construction process. We propose a simple maximum entropy process\nwhich provides the best representation of what are typical properties of\nscale-free networks, and provides a standard against which real and\nalgorithmically generated networks can be compared. As an example we consider\npreferential attachment and find that this particular growth model does not\nyield typical realizations of scale-free networks. In particular, the widely\ndiscussed \"fragility\" of scale-free networks is actually found to be due to the\npeculiar \"hub-centric\" structure of preferential attachment networks. 
We\nprovide a method to generate or remove this latent hub-centric bias --- thereby\ndemonstrating exactly which features of preferential attachment networks are\natypical of the broader class of scale-free networks. We are also able to\nstatistically demonstrate whether real networks are typical realizations of\nscale-free networks, or networks with that particular degree distribution;\nusing a new surrogate generation method for complex networks, exactly analogous\nthe the widely used surrogate tests of nonlinear time series analysis.", + "authors": "Linjun Zhang, Michael Small, Kevin Judd", + "published": "2013-09-04", + "updated": "2014-11-17", + "primary_cat": "physics.soc-ph", + "cats": [ + "physics.soc-ph", + "cs.SI", + "nlin.AO" + ], + "main_content": "Introduction The notion of scale-free networks has been around for a while [1]. The introduction of the preferential attachment (PA) algorithm for generating random scale-free graphs was a signi\ufb01cant step in understanding the properties of scalefree networks, and the physical processes that create them [2]. PA has spawned a good deal of subsequent algorithms and analysis. The purpose of this paper is to highlight the straightforward fact that not all scale-free networks have the same properties, and that algorithms, like PA (for example), do not capture the full richness of scale-free networks, nor do they necessarily display properties that may be termed typical of all scale-free networks. To achieve our aim, we \ufb01rst brie\ufb02y recall in this introduction the principal processes that have been proposed to describe and generate scale-free networks, and indicate de\ufb01ciencies of these processes when employed as models of typical scale-free networks. We then propose a maximum entropy process that provides an unbiased sample of the set of all scale-free networks. 
A maximum entropy process provides a better representation of expected properties of scale-free networks, both in terms of richness and typicality. A maximum entropy process also provides a unbiased standard against which other processes that generate scalefree networks can be compared. While a great deal of this maximum entropy process reduces to simple edge-switching, it is the application of this process to achieve maximum entropy realisations that is important. In Sec. 2 we make a careful comparison of PA with our unbiased standard to illustrate how PA has a signi\ufb01cant bias in the structural properties of the networks it generates. We demonstrate it has a hub-centric bias. In Sec. 3 we use a surrogate data approach to examine a selection of real-world networks 2 \fclaimed to be scale-free, and analyze in what sense these networks are typical of scale-free networks and how they di\ufb00er. 1.1. Scale-free networks from processes In this section we brie\ufb02y review the notion of scale-free networks, and the principal models of the physical processes that generate scale-free networks. These models can be broadly divided into growth models and con\ufb01guration models. Scale-free networks are usually identi\ufb01ed by the histogram of node degrees having a power-law tail. Many naturally occurring networks1 have been identi\ufb01ed as having a scale-free property: citation and collaboration networks [3, 4], airline networks [5], protein-protein interaction [6], metabolic pathways [7], the world-wide web and internet [8]. To understand the formation of scale-free networks various models have been proposed to mimic the physical or conceptual processes that build and shape these networks. One of the \ufb01rst, proposed by Barab\u00b4 asi and Albert, is preferential attachment [2], which is a restatement of the process described by de Solla Price [1] as a model of observed scale-free networks of citations [3]. 
Preferential attachment is an unchanging, additive growth process, where nodes of a \ufb01xed degree are added to the network, with links preferentially attached to existing nodes depending on their degree; usually proportional to the degree. Although this was suggested as a model of the process that grows the world-wide web (seen as hyperlinks between webpages), it was recognized that there is an additional aging process (AOL is replaced by Facebook, yahoo looses popularity to Google) so that attachment preferences change as the network growth proceeds [9, 10, 11]. Other processes can be introduced into a network growth model, such as shu\ufb04ing parts of the network, and deletion of links and nodes [9, 12], such that 1For a sample of the data we use here, see http://vlado.fmf.uni-lj.si/pub/networks/data/ . 3 \fthese actions do not change the underlying scale-free property of the networks produced. Moreover, scale-free networks can be produced by processes that are not growth processes. The most commonly considered of these are the socalled con\ufb01guration models. Con\ufb01guration models proceed by choosing nodes to have prescribed degrees, then connecting these together to form a scale-free network [13]; although care is needed to ensure the networks obtained satisfy other expected properties, such as, being simple (no self-links or multiple edges between nodes) and connected [14, 15, 12]. In addition to the problems with multplie edges and self-links [12], con\ufb01guration models do not provide a wellfounded sampling \u2014 in the sense of achieving maximum entropy realisations [16]. 1.2. Not all scale-free networks are the same With so many di\ufb00erent processes generating scale-free networks, the question arises: do they generate the same type of networks? The simple answer is no. Many di\ufb00erences in the scale-networks generated by di\ufb00erent processes have been noted; some particular di\ufb00erences are outlined in the following. 
Consider a preferential attachment process PA(m, A) that attaches nodes of degree m with the probability of linking a new node to a node of degree k is k + A \u22121, where A \u22651 is a constant called the initial attractiveness [17]. The minimum degree m clearly e\ufb00ects the robustness of the networks generated, and it e\ufb00ects other properties too [12]. It can be shown that for PA(m, A) the power-law tail has an exponent \u03b3 = 2 + A/m. This type of preferential attachment process restricts the power-law exponent to \u03b3 > 2. Many real-world networks exhibit \u03b3 < 2. In such cases the graphs have been termed dense [18] and generating such graphs has been seen as problematic. This led Del Genio and colleagues to conclude eponymously to the contrary. However, we recently provided an algorithm that easily \ufb01nd dense graph \u2014 the apparent paradox with the claim of [18] is resolved when one observes that [12] searches over the space of all networks while [18] starts with the constraint of viable graphs of a \ufb01xed degree sequence. Hence, both [19] and [12] proposed that these network 4 \fgrowth models can be modi\ufb01ed to shape networks and allow such power-laws. When other network statistics are considered (such as diameter, node-degree assortativity, clustering coe\ufb03cients), then the di\ufb00erences between scale-free networks found in the real-world and generated by di\ufb00erent models becomes more apparent. For example, consider the node-degree assortativity of scale-free networks generated by preferential attachment. These networks have low assortativity, but many natural networks are found to have very high assortativity [20]. 
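The PA(m, A) process defined at the start of this passage (each arrival attaches $m$ links, choosing an existing node of degree $k$ with probability proportional to $k + A - 1$) can be simulated in a few lines. This is a minimal sketch, assuming a small clique as the seed graph and freezing the attachment weights within each arrival step; both are common but unspecified-here choices.

```python
import random

def preferential_attachment(N, m=2, A=1.0, seed=0):
    """Grow a PA(m, A) network on N nodes and return the list of node
    degrees. Each arriving node attaches m links, choosing an existing
    node of degree k with probability proportional to k + A - 1.
    Assumptions (not fixed by the text): the seed graph is a clique on
    m + 1 nodes, and the m targets of one arrival are sampled with the
    weights frozen at the start of that step."""
    rng = random.Random(seed)
    deg = [m] * (m + 1)                      # seed clique: m+1 nodes of degree m
    for _ in range(m + 1, N):
        weights = [k + A - 1.0 for k in deg]
        targets = set()
        while len(targets) < m:              # m distinct attachment targets
            targets.add(rng.choices(range(len(deg)), weights=weights)[0])
        for t in targets:
            deg[t] += 1
        deg.append(m)                        # the new node itself has degree m
    return deg
```

Since the tail exponent satisfies $\gamma = 2 + A/m$, the choice $m = 2$, $A = 1$ should give a tail exponent near 2.5 for large $N$.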
Although these natural networks are not small-world (although they are scale-free typically estimates of the exponent are much smaller than observed for most processes that generate scale-free networks: \u03b3 < 2), it can be shown that other processes can generate scale-free networks with high node-degree assortativity [21]. Newman [22] observed systematic biases in the assortativity of realworld scale-free networks: technological networks tended to be dis-assortative, social networks assortative. He also showed that networks generated by preferential attachment have a de\ufb01nite bias compared to real-world networks; on the other hand, altruistic attachment does not [23]. Other issues with the assortativity of preferential attachment networks [9, 24] and scale-free networks more generally [25] have been observed previously. It has been noted that growth models have systematic biases (of network statistics) relative to con\ufb01guration models [26, 17]. However, while con\ufb01guration models can successfully alleviate that bias, it is not clear that they provide a sampling of the appropriate distribution. In particular, networks of growth models usually only attain the scale-free property asymptotically, and small networks display systematic biases [24, 20]; con\ufb01guration models remove this particular growth-based bias, but it is not clear at what expense. For this reason we seek a maximum entropy realisation from the ensemble of viable graphs [27]. 1.3. Maximum entropy processes In this section we aim to de\ufb01ne a process that generates scale-free networks in as pure form as possible, that is, without any of these biases. This process 5 \fis a maximum entropy process, which can serve as a standard to which other processes can be compared to both recognize and understand the nature of their particular biases. As previously noted, scale-free is usually taken to mean that the node-degree histogram has a power-law tail. 
We need to make this de\ufb01nition more precise. Let G be the set of connected simple graphs with N nodes, and let n(G) = (n1, . . . , nN\u22121), nk \u2208Z, be the degree histogram of G \u2208G, that is, nk is the number of nodes of degree k. De\ufb01nition 1. If p = (p1, . . . , pN\u22121), where pk \u2208[0, 1] and PN\u22121 k=1 pk = 1, then p has a power-law tail if pk \u221d(k \u2212\u03b1)\u2212\u03b3 for k \u2265\u03b2, where \u03b1, \u03b2 \u2208Z, \u03b3 \u2208R, \u03b1, \u03b2 \u22650 and \u03b3 > 1. In this de\ufb01nition \u03b1 shifts the power-law tail relative to k, \u03b2 is where the power-law tail is deemed to begin, and \u03b3 is the power-law of the tail. For the special case of preferential attachment PA(m, A), the key parameters \u03b1, \u03b2 and \u03b3 are implicitly de\ufb01ned by m and A. De\ufb01nition 2. G \u2208G is a scale-free graph if n(G) \u2243Np, where p has a powerlaw tail. Although Defn. 2 is apparently what most researchers appear to mean by scale-free, what approximately equal means in n(G) \u2243Np is not always clear. It is usually taken to mean that the form (k \u2212\u03b1)\u2212\u03b3 \ufb01ts n(G), in the sense of least-squares curve-\ufb01tting of log nk against log k, or a multinomial \ufb01t of n against p. Of interest are processes generating independent random scale-free graphs. A stationary stochastic process P generating independent random graphs in G is equivalent to selecting graphs according to a \ufb01xed probability mass function P : G \u2192[0, 1], where P G\u2208G P(G) = 1. Write G \u223cP to denote that G \u2208G is selected according to process P. (Since the process is completely de\ufb01ned by its probability mass function, we use the same symbol P to denote the process and its probability mass function.) 6 \fDe\ufb01nition 3. A process P generates scale-free graphs (on average), if E[ n(G) | G \u223cP ] = Np, (1) where p has a power-law tail. 
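Definitions 1 and 3 can be made concrete with a short sketch: build a power-law probability mass function $p$ and draw a degree histogram $n$ with $E[n] = Np$ by multinomial sampling. Letting the tail run over the whole range $\beta, \ldots, N-1$ is an assumption made here for illustration, and the function names are ours.

```python
import bisect
import random

def power_law_pmf(N, gamma, alpha=0, beta=1):
    """Definition 1: p_k proportional to (k - alpha)^(-gamma) for k >= beta,
    with p_k = 0 below beta (requires beta > alpha)."""
    w = [0.0] * N
    for k in range(beta, N):
        w[k] = (k - alpha) ** (-gamma)
    total = sum(w)
    return [x / total for x in w]

def sample_histogram(p, N, seed=0):
    """Definition 3: multinomial draw of a degree histogram n with E[n] = N p,
    via inverse-CDF sampling of each of the N node degrees."""
    rng = random.Random(seed)
    cum, acc = [], 0.0
    for pk in p:
        acc += pk
        cum.append(acc)
    n = [0] * len(p)
    for _ in range(N):
        k = bisect.bisect_right(cum, rng.random())
        n[min(k, len(p) - 1)] += 1           # guard against round-off at 1.0
    return n
```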
Such a process generates scale-free graphs, according to Defn. 2, with the notion of approximately equal, n(G) \u2243Np, implicitly de\ufb01ned by P. For example, one can compute the variance E \" N\u22121 X k=1 (nk \u2212Npk)2 \f \f \f \f \f G \u223cP . # , or compute the probability of a histogram n(G) deviating from Np in some norm by more than a threshold \u03f5, that is, Pr(\u2225n(G) \u2212Np\u2225> \u03f5 | G \u223cP). Preferential attachment PA(m, A) is one example of such a process; a network is grown by random preferential attachment until a graph with N nodes is obtained. This process implicitly de\ufb01nes the probability mass function for G \u2208G, which depends on m and on how the seed network is chosen. Other growth models (with link ageing, initial attractiveness, shu\ufb04ing) will usually obtain di\ufb00erent probability mass functions. A di\ufb03culty with growth processes, including PA(m, A), is that although they satisfy Defn. 3 of scale-free processes, the power-law \u03b3 is implicitly de\ufb01ned. In some cases (although not for PA) it is not clear that \u03b3 is necessarily the same for all graphs generated for \ufb01xed growth parameters. Depending on the initial seed graph, it is possible to imagine growth processes which would converge to di\ufb00erent power-laws. In contrast, one of the advantages of con\ufb01guration models is the mean histogram Np is prescribed: the process con\ufb01gures nodes with pre-de\ufb01ned degrees into a network [13]. Even if two processes generate scale-free graphs by Defn. 3, or have the same power-law \u03b3, or have the same expected histogram Np, the expected value of other graph statistics such as assortativity, clustering, diameter, may be quite di\ufb00erent \u2014 as noted above. If we are to understand the nature of scale-free graphs, and the nature of the processes that generate them, then it is useful to 7 \festablish some benchmark or standard against which processes are compared. 
To some extent PA(m, A) has served that purpose, principally because it is a simple growth process. Unfortunately, it is not a satisfactory standard, for several reasons already indicated: it has a limited range of $\gamma$ (typically $\gamma$ is constrained to be between 2 and 3; for $\alpha \gg m$ and $N \to \infty$ one obtains $\gamma \to 3$); $p$ and $\gamma$ are implicitly defined; and it is subject to well-known biases in several basic network measures (such as assortativity). We propose that the best standard against which to compare processes that generate scale-free networks is a maximum entropy process ME(p) with prescribed mean histogram $Np$. A maximum entropy process samples all graphs with mean histogram $Np$ with the least bias toward, or emphasis of, structures and features that are not common to all such graphs.

Definition 4. The maximum entropy process ME(p) samples $n$ by the multinomial distribution of $p$, and uniformly samples graphs with the histogram $n$. That is, $\Pr(G \mid G \sim P_{ME(p)}) = \Pr(G \mid n)\Pr(n \mid p)$, where
$$\Pr(G \mid n) = \frac{1}{|\{G \in \mathcal{G} \mid n(G) = n\}|} \qquad (2)$$
and
$$\Pr(n \mid p) = N!\prod_{k=1}^{N-1} \frac{p_k^{n_k}}{n_k!}. \qquad (3)$$

The process ME(p) generates connected graphs with maximum entropy for the specified degree distribution: it partitions all graphs in $\mathcal{G}$ into equivalence classes by histogram $n$ and samples uniformly within these classes (2), hence achieving maximum entropy within each equivalence class. Moreover, the sampling of $n$ uses a multinomial distribution (3), which is the maximum entropy process for selecting a histogram with frequencies $p$. Hence, even though the degrees of nodes within a graph are not independent, the degrees of nodes of successive randomly generated graphs behave as though they were independent. That is, maximum entropy is a consequence of selecting the degree histogram by multinomial sampling and then sampling uniformly within that degree histogram. This is equivalent to the work presented by Bianconi [27, 16] for selecting maximum entropy
This is equivalent to the work presented by Bianconi [27, 16] for selecting maximum entropy 8 \ffor a speci\ufb01ed ensemble. For us, degree distribution and connectivity constraints dictate the ensemble of interest. For details on how to e\ufb03ciently implement ME(p), see Algorithm 1 in Appendix A. A maximum entropy process ME(p) can be implemented by as a Monte Carlo Markov Chain (MCMC) [12], however, a naive MCMC implementation could converge slowly and not be e\ufb00ective for large graphs [28, 29, 30]. The key ideas of Algorithm 1 are that the equivalence class of graphs with the same degree histogram can be sampled uniformily by a process of edge switching [31, 32], and a cannonical graph with a given degree histogram can be constructed by the Havel-Hakimi process [33, 34]. Combining these key ingredients with a test for connectivity [12] provides a su\ufb03cient algorithm. Following Bianconi [27, 16] we de\ufb01ne the maximum entropy scale free network ensemble as follows. De\ufb01nition 5. Let SF(\u03b3, m) denote the maximum entropy process ME(p), where p has a power-law tail \u03b3, with \u03b1 = \u03b2 = m and pk = 0 for k < m. In the following SF(\u03b3, m) will serve as our standard against which scale-free graphs and scale-free graph generators are compared. 2. A hub of hubs is more than a rich club In this section we use algorithm 1 to explore what are the typical properties of scale-free networks, and moreover, infer that PA networks exhibit an atypical \u201chub-centric\u201d structure. Power-law distributions alone imply that the degree distribution has a long tail and some nodes in the tail must have high degree, but the presence of \u201chubs\u201d implies something more and are often said to \u201chold the network together\u201d. What we observe in the following is something more than this. 
There are three properties to which we refer with the term \u201chubcentric\u201d (which we de\ufb01ne later in an algorithmic sense): (i) the distribution of hubs throughout the network, (ii) their interconnection, and (iii) their connection with low-degree nodes. The presence or absence of these three properties 9 \fdetermine, to a very great extent, many of the global properties of a scale-free network. Moreover, we observe that PA networks exhibit properties which are atypical of the broader class. However, before moving to the conclusion that the di\ufb00erence between these two types of networks is due to the hub-centric properties of PA networks, we \ufb01rst present a numerical study of the various main measures of network geometry: assortativity, clustering coe\ufb03cient and diameter. We apply these measures to various di\ufb00erent families of scale-free networks to highlight the wide variation which is possible. We will also explore other prominent features \u2014 most importantly: (i) the robustness to targeted worst-case attacks (i.e. \u201cattack vulnerability\u201d [35]) on hubs; and (ii) a detailed motif analysis. For our comparison experiments we generate four types of PA(m, A) networks, with m = 1, 2, 3, 4 and A = 0. For given N and m we generate a sample from PA(m, A) of M graphs Gi, i = 1, . . . , M. From a degree histogram n(G) we use the following formula [36] to estimate the power-law tail exponent \u03b3 of G, as b \u03b3(G) = 1 + N N\u22121 X k=1 nk log k m \u22121 2 !\u22121 . (4) Letting \u03b3 = 1 M PM i=1 b \u03b3(Gi) we use Algorithm 1 and generate an sample of M graphs from SF(\u03b3, m). Results reported in the following are for N = 2000, m = 1, 2, 3, 4, and M = 40. Computation with other values produced comparable results. In the following subsections we examine each of the network properties outlined above, starting with motif rank and then robustness. 2.1. 
2.1. Comparison of motif rank

A motif, defined as a small connected subgraph that recurs in a graph, is a basic building block, or functional unit, of complex networks [37]. In real-world networks (e.g. gene regulatory networks), the relative frequencies of different motifs often represent different functions of the network [38, 39], and they also shed light on the structure of the whole network. We restrict our analysis here to the four-node motifs and classify them into three groups (based on the number of edges in the constituent motifs, see Fig. 1): {A, D}, {B, E}, {C, F}. Measuring the relative frequency of each of these groups allows us to determine the proportion of building blocks typical of sparse, moderate, and dense networks.

Figure 1: Each panel shows the frequencies of six types of four-node motifs, for networks with m ∈ {1, 2, 3, 4}. On the left are the results for preferential attachment; on the right, the uniform sampling algorithm. For each network, PA(m, A) or SF(γ, m) (denoted in the figure as UniS) with specified m, we compute the frequency of each of the six possible arrangements of four-node path-connected subgraphs (labelled A to F) and plot the absolute (logarithmic) frequency of occurrence of each. The order from most to least frequent is also provided.

The result of the motif analysis is shown in Fig. 1. There is an apparent difference between the motif distributions of these two types of networks: as m increases, the motif frequencies of the PA networks become larger, and the "dense" motifs of type C occur more frequently than the less dense type E, suggesting that the PA networks are denser than the corresponding SF(γ, m) networks.
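The grouping of four-node motifs by edge count can be reproduced by brute-force enumeration on small graphs. A sketch, assuming an adjacency-set representation (function name ours; only practical for modest N, since it enumerates all 4-subsets):

```python
from itertools import combinations

def four_node_edge_counts(adj):
    """Bin every connected induced 4-node subgraph by its edge count
    (3 = sparsest tree-like motifs, 6 = the 4-clique)."""
    counts = {3: 0, 4: 0, 5: 0, 6: 0}
    for quad in combinations(list(adj), 4):
        edges = sum(1 for u, v in combinations(quad, 2) if v in adj[u])
        if edges < 3:
            continue  # cannot be connected on 4 nodes
        # connectivity check on the induced subgraph
        seen, frontier = {quad[0]}, [quad[0]]
        while frontier:
            u = frontier.pop()
            for v in quad:
                if v in adj[u] and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        if len(seen) == 4:
            counts[edges] += 1
    return counts
```

Binning by edge count corresponds to the three groups above: 3 edges for the sparse motifs {A, D}, 4 for {B, E}, and 5-6 for the dense motifs {C, F}.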
This would be a natural consequence of the PA networks becoming more hub-centric (more cross-links between hubs) as m increases, while the hubs of the SF(γ, m) networks remain more evenly distributed (fewer cross-links between hubs, and correspondingly fewer "dense" motifs).

2.2. Robustness and attack vulnerability

Due to the long tail of the power-law distribution, scale-free networks are often claimed to be simultaneously robust to the random loss of nodes (i.e. "error tolerance") and fragile to targeted worst-case attacks [35]. The robustness is seen to be a consequence of the extremely large number of ("unimportant") low-degree nodes; the fragility is due to the extremely important role of the hubs. This property is also called the "Achilles' heel" of PA networks, or the "robust yet fragile" feature. Intuitively, the inhomogeneous connectivity caused by the power-law distribution would seem to confer both properties. However, from our analysis, SF(γ, m) networks generated by sampling uniformly from the family of all scale-free networks do not exhibit this second property. Again, we may attribute this to the hub-centric nature of PA networks. We quantify the robustness to targeted attacks by specifically removing the most highly connected (or important) nodes until the network is disconnected; we then take the number of nodes removed as the measure. For illustration, we use degree and betweenness centrality (BC) of a vertex as measures with which to target nodes for removal. Roughly, BC is the number of geodesics (shortest paths) passing through a vertex:

  BC(v) = Σ_{i≠j, i≠v, j≠v} g_{ivj} / g_{ij} ,

where g_{ij} is the total number of shortest paths between nodes i and j, and g_{ivj} is the number of those paths that pass through v. The results are plotted in Fig. 2.
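The attack measure just described (remove the currently highest-degree node, recompute degrees, repeat until the network disconnects, and report the number of removals) can be sketched in a few lines. Names are ours, and degree is used as the targeting criterion; betweenness centrality would be analogous.

```python
from collections import deque

def attack_robustness(adj):
    """Number of highest-degree nodes (degrees recomputed after each
    removal) that must be removed before the graph disconnects."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy

    def connected(g):
        start = next(iter(g))
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == len(g)

    removed = 0
    while len(adj) > 1:
        target = max(adj, key=lambda u: len(adj[u]))
        for v in adj.pop(target):
            adj[v].discard(target)
        removed += 1
        if not connected(adj):
            return removed
    return removed
```

On a star graph this returns 1 (removing the hub disconnects everything), illustrating the extreme fragility of hub-centric structure.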
The case of targeted attack is trivial for m = 1, in which case the networks are highly likely to be trees. Hence, we restrict our analysis to the cases m = 2, 3; these results are shown in Fig. 2. From Fig. 2 it is safe to conclude that the SF(γ, m) network is much more robust than the corresponding PA network when facing a targeted attack. This is a consequence of the fact that the PA networks are more hub-centric, while the SF(γ, m) networks are not. The fragility of the PA networks is due to the placement of hubs within the network, not the scale-free-ness of the network per se.

Figure 2: Numerical estimates of histograms showing the number of nodes which must be removed to induce collapse of the network giant component, for minimum degrees m = 1, 2, 3 (panels (a)-(c)). Nodes are selected for removal via either node degree or node betweenness centrality (here we illustrate results for removal via node degree; results for betweenness centrality are visibly indistinguishable). Results for PA are plotted as a solid blue line and results for SF(γ, m) as a red dashed line. Histograms are computed with logarithmically equally spaced bins; the same binning is used for both network construction methods. We find that SF(γ, m) networks are, for m > 1, an order of magnitude more robust to targeted attack than typical PA networks.

2.3. Numerical statistics of networks

In this section we present the differences between the PA networks and SF(γ, m) networks by computing four widely quoted numerical network properties:

• diameter: the maximal shortest path length.
• global clustering coefficient: the number of closed triplets divided by the number of connected triples of vertices, a measure of the degree to which nodes in a graph tend to cluster together.

• local clustering coefficient: for each vertex, the number of links between the vertices within its neighborhood divided by the number of links that could possibly exist between them, which quantifies how close its neighbors are to being a clique.

• assortativity: the Pearson correlation coefficient of degree between pairs of linked nodes, which measures the preference for a network's nodes to attach to others of similar degree.

These four statistics are summarized in Fig. 3. We select the cases m = 2 and m = 3 to visualize the distributions of these statistics for the pair of network families in Fig. 4. Although not shown, we note that the case m = 1 is similar to m = 2, while m = 4 is similar to m = 3. From the distributions of the network statistics depicted there, we find that PA networks are atypical. In particular, PA networks have: (i) more negative assortativity than the corresponding SF(γ, m) networks; (ii) a clustering coefficient that increases with m; and (iii) in the cases m = 3 and m = 4, two clustering coefficient curves that separate from each other. All these discrepancies point to the fact that as m increases, the PA network becomes more and more hub-centric, while the SF(γ, m) network remains highly uniform, with the high-degree nodes evenly distributed throughout. We now introduce algorithms to modify the specific aspects of the network which contribute to this hub-centric property.
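Of these statistics, assortativity is the least standard to compute by hand. As a minimal sketch (our own implementation, not the authors'): treat each undirected edge as two ordered degree pairs and take the Pearson correlation.

```python
import math

def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each edge;
    each undirected edge contributes both orientations.  Undefined
    (zero variance) for regular graphs."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

A star graph, the extreme hub-centric case, gives assortativity −1: every edge joins the highest-degree node to a degree-one node.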
Figure 3: Boxplots of the four network statistics described in the text and Table 1, computed for m = 1, 2, 3, 4. Each boxplot depicts the maximum and minimum (extrema of the box), the upper (75%) and lower (25%) quartiles (the darker box), and the median (solid dark line within the inner box). The left-most box within each pair summarizes the SF(γ, m) networks, the right one the PA networks. Note that, for example, assortativity has an increasingly negative bias for PA networks, while for SF(γ, m) it approaches 0 as m is increased.

Table 1: Summary of numerical statistics (mean, with standard deviation in parentheses) for the four pairs of network families (m = 1, 2, 3, 4). Details of the computation and comparison of the statistical distributions are provided in the main text and relevant figures.

              diameter       global cc         local cc        assortativity    motif rank
m=1 PA        19.47 (1.71)   0                 0.67 (0.01)     -0.095 (0.017)   BECFDA
m=1 SF(γ,m)   15.55 (4.33)   0.0037 (0.0033)   0.73 (0.03)     -0.18 (0.05)     FECBDA
m=2 PA        7.05 (0.22)    0.0059 (0.0005)   0.061 (0.011)   -0.14 (0.014)    FCEBDA
m=2 SF(γ,m)   9.45 (1.53)    0.014 (0.004)     0.093 (0.14)    -0.095 (0.052)   FECBDA
m=3 PA        5.28 (0.45)    0.011 (0.0004)    0.12 (0.014)    -0.19 (0.015)    FECBDA
m=3 SF(γ,m)   7.57 (0.54)    0.0087 (0.0021)   0.017 (0.014)   -0.038 (0.019)   FCEBDA
m=4 PA        4.93 (0.27)    0.014 (0.0004)    0.18 (0.017)    -0.24 (0.019)    FECBDA
m=4 SF(γ,m)   6.67 (0.47)    0.0072 (0.0019)   0.011 (0.011)   -0.021 (0.017)   FCEBDA

Figure 4: Distribution of the statistics of networks with m = 2 (upper four panels) and m = 3 (lower four panels). Diameter is a straight histogram count; the probability densities in the other panels are computed with a Gaussian kernel estimator. We see systematic bias and decreased variance for preferential attachment across all indicators, exactly as one would expect. Most importantly, PA networks are typically smaller (diameter) and more strongly disassortative (see [25]). The SF(γ, m) networks are labelled in the figure as UniS, for uniform random sampling.

Note that the hub-centric structure is a global property of the network: modifying only a small portion of a PA network (as in the case of the so-called "rich-club" phenomenon [40]) is not sufficient. When we apply only the modification described in [40] to manipulate rich-club connections, the PA and SF(γ, m) network statistical distributions remain disparate. Therefore, the modification scheme we propose in the following applies across the entire network structure, from super-rich nodes and hubs to the poorest nodes. To maintain conciseness and focus, we present only the brief idea of our modification scheme here; for details see Algorithm 2 of Appendix B. In the following paragraphs we outline two modification schemes: one to remove what we call the hub-centric features of PA networks, and a second to add these features to SF(γ, m) realisations. The aim of these computations is to show that these modifications alone are sufficient to align the corresponding distributions of network structural properties (assortativity and so on). The modification scheme for an initial PA network is the following. First, we cut the links within the group of rich nodes and between the rich nodes and the group of poor nodes as far as possible while preserving the connectivity of the network. Then, we use the algorithm described in [40] among the giant nodes (which we define as nodes whose degree is even larger than that of the typical rich nodes) to reconstruct the structure of the rich nodes.
That is, we obtain networks with minimal interconnection between hubs and minimal connection from hubs to low-degree nodes, but a similar rich-club structure (the interconnection among the super hubs). After setting the thresholds α1 = 60%, α2 = 5% and α3 = 0.5% in Alg. 2 (Appendix B), we apply this modification to the cases m = 3 and m = 4. The result for m = 3 is shown in Fig. 5; for conciseness, the result for m = 4, which is similar, is omitted. From Fig. 5, the range of the probability distribution functions for the modified graphs overlaps with the unmodified target distribution SF(γ, m), indicating that our modification provides a good fit to the distribution of PA networks once the hub-centric links are accounted for. Moreover, we compare the modified curves of the PA networks against the curves of the SF(γ, m) networks. For this purpose, we add the following random factor: we choose α1 randomly from the interval [50%, 80%]; the result for m = 3 is also depicted in Fig. 5. The results for m = 4 are similar but omitted for conciseness.

Figure 5: (Modification on PA) The probability distribution function curves of the statistics (maximum diameter, global clustering coefficient, local clustering coefficient, assortativity) for SF(γ, m) networks (labelled UniS, solid lines), PA networks (bold dotted gray lines), modified PA networks (termed PA´, bold dotted black lines) and modified fitted PA networks (termed PA´´, dotted black lines), for m = 3.

From the calculations of Fig. 5 we see that the modification of the PA scheme (removing the hub-centric properties of such networks) produces network statistics whose distributions are similar to those of the unmodified uniform sampling scheme (for the local clustering coefficient and assortativity [25], at least) or bracket the expected distribution for SF(γ, m) (for the maximum diameter). Although this bracketing (by modifying the hub-centric nature of the PA network we go from smaller than SF(γ, m) to larger than SF(γ, m)) does not immediately yield indistinguishable statistical distributions, it is clear that this is due only to the unselective manner in which we chose the threshold parameters for this algorithm: changing these parameters effects a continuous change in the statistical distributions. This is sufficient to make our case that these hub-centric groups of nodes are what causes the PA networks to be atypical. We do note, however, that the parametric changes we have explored are insufficient to modify the PA network to reproduce the full distribution of observed global clustering coefficients for the SF(γ, m) networks: there appears to be still further unexplored richness in the variety of these networks, namely scale-free networks with close to zero clustering (very tree-like networks [12]). To further support our claim, we also present a reverse modification algorithm on SF(γ, m) networks that generates interconnection among hubs, so that the adjusted curves of the statistics for the SF(γ, m) networks approach those of the PA networks. Leaving the details to Appendix B (Alg. 3), we again introduce the idea briefly here. First, we delete the edges among the poor nodes and relink these edges to rich nodes with probability proportional to the degree of the rich nodes.
Second, we use the algorithm described in [40] to connect a club of super-rich ("giant") nodes (the definition is similar to that in the first modification). In this way we generate the hub-centric structure in the networks. Setting α1 = 80%, α2 = 5% and α3 = 0.5% for Alg. 3, the result for m = 3 is shown in Fig. 6; the calculation for m = 4 is similar and omitted for conciseness.

Figure 6: (Modification on SF(γ, m)) The probability distribution function curves of the statistics (maximum diameter, global clustering coefficient, local clustering coefficient, assortativity) for SF(γ, m) networks (labelled UniS, solid lines), PA networks (bold dotted gray lines) and modified SF(γ, m) networks (termed UniS', dotted lines) in the case m = 3.

From Fig. 6, after generating hubs by artificially reconstructing the structure of the networks, the numerical analysis of the SF(γ, m) networks indicates statistical distributions of structural properties which approach those of PA networks, supporting our claim that the hub-centric structure in PA networks makes them atypical of random scale-free graphs. By modifying the properties of a uniformly sampled network we are able to produce distributions of maximum diameter and global clustering coefficient very similar to those observed for PA. That is, we add hub-centric structure to the SF(γ, m) network and achieve statistical features similar to those typical of PA networks. We do not achieve such good agreement for assortativity and the local clustering coefficient. Nonetheless, taken together, Figs. 5 and 6 show that matching distributions can be achieved by either adding or removing the hub-centric features in each of the four statistics we examine here. Hence, the features which we modify with the algorithms described in Appendix B are exactly the properties of PA networks that are atypical of the broader distribution of properties of scale-free networks sampled by a maximum entropy process. The only caveat is that the two algorithms are not exactly inverses of one another; in particular, measures of global clustering hint at further unexplored diversity in the global structure of typical scale-free networks (as the networks become progressively more tree-like, even for m > 1).

2.4. Hub-centric structure in PA networks

To conclude this section, we now collate the results of our analysis. We see that the hubs of a PA network "hold" that network together. The strength of interconnection among hubs (and of their connection with low-degree nodes) may be explained by preferential attachment itself. In that growth model, once nodes and links are added, they are never altered again. This, we claim, causes inhomogeneity in PA networks. The largest hubs will always (with probability approaching one) be the earliest nodes in the growing network, and these nodes are necessarily closely interlinked. Conversely, the last nodes added to the network will have minimal degree, and yet these nodes will (with very high probability) be directly connected to the hubs. In fact, it is clear that the last nodes added (those with the lowest degree) are the most likely to be connected directly to the hubs; this is a consequence of the fact that the attachment bias favouring the original rich nodes increases as the network grows, loosely speaking, that the "rich get richer".
These are precisely the properties we alluded to earlier in our description of "hub-centric" networks, and these are precisely the properties which are adjusted by the modification Algorithms 2 and 3 of Appendix B. The consequences of this hub-centric structure of PA networks are two-fold. First, since PA is an elegant and intuitive way to generate graphs, there may exist generation procedures in the real world which are similar to preferential attachment, and we can then use this claim to identify the hub-centric structure in such growth networks. This observation has potential practical use, for example in hub detection for the control of disease transmission, or in controlling a network by manipulating its hubs. Second, the claim also indicates that there is a systematic bias in PA networks. This bias will lead to difficulty when using PA networks to explain real-world networks which do not result from such a constrained growth process, even when the degree sequence of each network follows a power-law distribution.

3. Surrogate Networks: An application to particular putative scale-free systems

In this section, we introduce a variant of the surrogate data test, originally proposed for nonlinear time series analysis, to interpret real-world networks. We proceed by generating an ensemble of random networks that are similar to the observed data-based network (in that they share the same degree distribution) and yet, at the same time, random. By comparing the properties of the real network with the corresponding distribution over SF(γ, m) networks, we can determine whether the particular network is typical. Our observations here are both a consequence of the previous section and a motivation for the development of a network surrogate test. Just as with surrogate data methods for time series, results will depend on the choice of test statistic.
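Once a test statistic is chosen, the surrogate comparison reduces to a rank test of the observed value against the ensemble, exactly as in surrogate data testing for time series. A minimal two-sided version is sketched below; this is our own illustration (the paper reports boxplots rather than p-values).

```python
def surrogate_pvalue(observed, surrogate_values):
    """Two-sided rank p-value: how extreme is the observed statistic
    relative to M surrogate realisations (with add-one correction)."""
    M = len(surrogate_values)
    n_ge = sum(1 for s in surrogate_values if s >= observed)
    n_le = sum(1 for s in surrogate_values if s <= observed)
    return min(1.0, 2.0 * (min(n_ge, n_le) + 1) / (M + 1))
```

With M = 40 surrogates, as used here, an observed statistic lying entirely outside the ensemble gives p = 2/41 ≈ 0.049, so rejection at the 5% level is possible.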
We will see that certain test statistics indicate clear and systematic deviation between the maximum entropy realisations and particular experimental systems. This can be interpreted in two ways. First, as a straightforward hypothesis test: rejection on the basis of any particular statistic indicates that the observed network is not typical of the maximum entropy class. Second, we can be more particular and apply this method to determine which specific features of the observed network are atypical, providing a more sensitive classification scheme dependent on individual features of the network. In what follows we will see that this is very often the case: it is particular features of the networks that differ, rather than all aspects simultaneously. We start with a brief discussion of robustness. As we saw above, PA networks are vulnerable to targeted attack, while SF(γ, m) networks have no such evident "Achilles' heel". Recent research into the structure of several important complex networks shows that, even if their degree distributions have approximately power-law tails, the networks in question are robust to targeted attack to some degree: the most highly connected nodes do not necessarily represent an "Achilles' heel". In particular, recent results on modeling the router-level Internet have shown that the core of that network is constructed from a mesh of high-bandwidth, low-connectivity routers, and [41] concludes that although power-law degree distributions imply the presence of high-degree vertices, they do not imply that such nodes form a crucial "hub".
A related line of research into the structure of biological metabolic networks has shown that claims of SF structure fail to capture the most essential biochemical as well as "robust yet fragile" features of cellular metabolism, and in many cases completely misinterpret the relevant biology; for example, [42] indicates that the removal of high-degree nodes leaves the biosynthetic pathways fully intact. Hence, real-world scale-free networks do indeed exhibit an absence of the much-touted "robustness" properties, and our model provides an explanation for that absence. In the following, we probe the limits of the explanatory power of PA networks via numerical statistics. We find that many real complex networks appear more "uniform" under our surrogate test. Based on this result, we also propose a simple classification of real-world networks. Such a classification could provide new insight into the interior structure of real networks, helping to identify which properties make a given network interesting and special. Our surrogate tests are constructed as follows: for a real-world network whose degree distribution follows a power law, we estimate its minimum vertex degree m and power-law exponent γ using the algorithm described in [36]. We then use the relation γ = 2 + A/m from Section 1 to estimate its initial attractiveness, and generate PA networks with the corresponding γ and m. We then substitute γ and m into Algorithm 1 to generate the SF(γ, m) networks. For each set of parameter values, we generate 40 networks for SF(γ, m) and PA respectively. In Fig. 7, we draw boxplots of the numerical statistics of the real networks, together with their corresponding PA and SF(γ, m) networks.

Figure 7: Network characteristics: (a) diameter; (b) global clustering coefficient; (c) local clustering coefficient; and (d) assortativity. Boxplot analysis for collaboration and information networks, depicting the maximum, minimum, upper quartile (75%), median and lower quartile (25%) of the data. The networks are: 1. CS PhD collaboration; 2. Erdős collaboration; 3. a symmetrized snapshot of the structure of the Internet at the level of autonomous systems; 4. a metabolic pathway network; 5. the S. cerevisiae protein-protein interaction network; 6. US airport connections.

From Fig. 7 we can conclude that, in terms of interpreting real networks, SF(γ, m) provides a significantly better model for the data from the Internet topology and the metabolic and protein interaction networks. For the CS PhD collaboration and US airport connection networks, SF(γ, m) is slightly better than PA. However, for the Erdős collaboration network, neither SF(γ, m) nor PA is good, although the statistics of the PA networks are closer to those of the Erdős network than the SF(γ, m) networks are. To examine the Erdős collaboration network more closely, we plot the motif rank and various other statistics of that network in Fig. 8.

Figure 8: The first panel shows the motif distribution, and the lower panels the curves of the statistics, for the SF(γ, m) networks (solid lines, labelled UniS), PA networks (dotted lines) and the Erdős collaboration network (bold mixed lines). The motif rank in ascending order is shown below the graph.

Fig. 8 suggests that the hub-centric structure in the Erdős collaboration network is much more significant than PA allows: the Erdős collaboration network is a densely connected core along with loosely coupled radial branches reaching out from that core. Erdős practiced what he preached: he was a weaver of social networks and thus a builder of social capital. Moreover, this collaboration network is specifically Erdős-centric, focussed on that one unique hub and its connections. Hence, the nodes closer to Erdős benefit most and become the strongest hubs (after Erdős himself) in the resultant network. By comparison, it is helpful to identify which properties differ most markedly between the real network and a random surrogate sample. The cases where SF(γ, m) provides the better representation would suggest, for instance, that the structure of protein interaction networks is more uniform because the cellular ecosystem necessitates such stability, and that, in the case of the Internet at the level of autonomous systems, the system is balanced and distributed in a rather deliberate way. Both these systems have been "engineered" for robustness. Finally, we note that neither the SF(γ, m) nor the PA networks perform well in reproducing the diameter of the biological networks. It is curious that biological networks present abnormally large diameters; we speculate that this more spread-out, tree-like structure is a consequence of the components of biological systems having specific functional roles.
Based on our results, we also note (via analysis of the robustness, motifs and numerical statistics of a network, and comparison with its corresponding PA and SF(γ, m) networks) that one can classify real networks in terms of hub structure: if a network's properties are closer to those of PA networks, we can conclude that it is hub-centric and typically the result of a growth process; conversely, networks that are not PA-like are more likely to be purposefully widely distributed.

4. Conclusion and discussion

We have proposed a new algorithm which allows us to draw maximum-entropy random samples of finite-size graphs whose degree histogram is a probabilistic realization of a specified degree distribution. We focus on putative scale-free networks as an illustrative example and use this method to generate random networks with power-law-tailed degree distributions of arbitrary exponent γ and minimum degree m. This provides a simple alternative to the various generative processes in the literature, with the benefit that it makes no particular biasing assumptions (such as preferential attachment). To emphasize the need for this algorithm, we compare our results to distributions of networks obtained from the widely applied PA scheme of [2]. While there are many generative algorithms for scale-free networks, PA is still the most widely used. PA provides an excellent model of the growth of a network; however, we find that many real-world networks do not conform to this model. This is not a new observation, but it is still an important point to emphasise: growth (and preferential attachment in particular) is not an adequate model to explain all observed scale-free networks. We propose particular structural modification algorithms to uncover exactly how PA is atypical of what one would expect under maximum entropy.
We found that the high-degree nodes in PA networks are hub-centric, and that this hub-centric-ness has a greater influence on the overall structure than is attributable to the high-degree nodes alone. In particular, we attribute the so-called "robust yet fragile" feature found in many scale-free networks to the distribution of hubs achieved via preferential attachment. Our observations on the hub-centric-ness of preferential attachment scale-free networks help us assess the contribution of hubs in real-world networks to the overall network structure. The surrogate network generation method we describe allows us to provide a robust numerical measure of this rich variation among scale-free complex networks.

Appendix A. Implementing ME(p)

A maximum entropy process ME(p) can be implemented as a Markov chain Monte Carlo (MCMC) scheme [12]. However, a naive MCMC implementation could converge slowly and be ineffective for large graphs [28, 29, 30]. Here we state an alternative algorithm; the key ingredients are edge switching [31, 32], connectivity testing [12], and constructing graphs by the Havel-Hakimi process [33, 34]. For G ∈ G let d(G) = (d_1, . . . , d_N) denote the degree sequence, that is, d_i > 0 is the degree of node i. An arbitrary degree sequence d ∈ Z^N is called graphical if there exists G ∈ G such that d(G) = d.

Algorithm 1. Let p be a target probability mass function (with power-law tail).
1. Generate a sample n from the multinomial distribution of p.
2. Generate a uniformly sampled random degree sequence d from n.
3. Use the Havel-Hakimi process to test whether d is graphical and to construct the canonical graph G with d(G) = d. If d is not graphical, return to step 2.
4. Test whether G is connected; if not, return to step 2.
5. Apply edge switching to G to uniformly sample the equivalence class of n(G).
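The Havel-Hakimi graphicality test used in step 3 (described in detail below) is compact to code; here is a sketch of the test alone, with the canonical-graph construction omitted (names are ours).

```python
def is_graphical(d):
    """Havel-Hakimi test: repeatedly satisfy the highest-degree node by
    connecting it to the next-highest-degree nodes, then recurse on the
    reduced sequence.  Returns True iff some simple graph realises d."""
    if any(x < 0 for x in d):
        return False
    d = sorted(d, reverse=True)
    while d and d[0] > 0:
        k = d.pop(0)          # take the largest remaining degree
        if k > len(d):
            return False      # not enough other nodes to attach to
        for i in range(k):
            d[i] -= 1         # consume one stub of each of the k largest
            if d[i] < 0:
                return False
        d.sort(reverse=True)
    return True
```

For example, [3, 3, 3, 3] (the 4-clique) is graphical, while [1, 1, 1] is not, since its degree sum is odd.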
Although this algorithm is generally efficient for large graphs, when m = 1 and γ is sufficiently large (γ > γ_1, [12]) the Havel-Hakimi process often constructs disconnected graphs, and step 3 becomes a bottleneck. Fortunately, in this case an MCMC algorithm [12] which employs edge switching [31, 32] is efficient. Step 1 requires a sample n from the multinomial distribution of p, and step 2 a uniformly sampled degree sequence d for n. Let q_k = Σ_{i=1}^{k} p_i, q_0 = 0, and I_k = [q_{k−1}, q_k]. Choose uniform random variates x_i ∈ (0, 1), i = 1, . . . , N. Then let d_i = k if x_i ∈ I_k, and n_k = |{x_i ∈ I_k}|. With step 3 we must determine whether a given d is graphical. Havel [33] and Hakimi [34] independently developed a test for d being graphical. The following Havel-Hakimi test implicitly constructs a canonical graph G if d is graphical. Let N ≥ 2 and d̂ = d.
1. Choose i such that d̂_i > 0.
2. If d̂ does not have at least d̂_i entries d̂_j > 0, j ≠ i, then d is not graphical.
3. Subtract 1 from the d̂_i entries d̂_j, j ≠ i, of highest degree. Set d̂_i = 0.
4. If d̂ = 0, then d is graphical; otherwise return to step 1.
We note that there is another efficient test of graphicality [43], but Havel-Hakimi is sufficient for our implementation. The canonical realization of G ∈ G for a graphical d is implied by step 3 of the test: node i is connected to those nodes j selected in step 3. The graph G can be built as the test proceeds by constructing its adjacency matrix A. Step 4 of Algorithm 1 requires testing whether a graph G is connected; this can be done by applying Dijkstra's algorithm from any single node. Step 5 requires modifying a graph G by edge switching. Let A be the adjacency matrix of G, and let i, j, k, l be distinct nodes such that A_ij = A_kl = 1 and A_il = A_kj = 0. Then the edges are switched by setting A_ij = A_kl = 0 and A_il = A_kj = 1.
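A single edge-switch proposal can be sketched as follows, using adjacency sets rather than the matrix A (function name ours; rejected proposals are simply retried by the caller, and connectivity is checked separately as in Algorithm 1).

```python
import random

def try_edge_switch(adj, rng):
    """Attempt one degree-preserving switch: pick edges (i,j), (k,l) on
    four distinct nodes with the crossing edges (i,l), (k,j) absent,
    then rewire.  Returns True iff a switch was performed."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    (i, j), (k, l) = rng.choice(edges), rng.choice(edges)
    if len({i, j, k, l}) < 4 or l in adj[i] or j in adj[k]:
        return False  # proposal rejected; degrees unchanged either way
    adj[i].remove(j); adj[j].remove(i)
    adj[k].remove(l); adj[l].remove(k)
    adj[i].add(l); adj[l].add(i)
    adj[k].add(j); adj[j].add(k)
    return True
```

Every accepted switch leaves each node's degree, and hence the histogram n(G), unchanged, which is exactly why repeated switching samples within the equivalence class.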
Edge switching does not change n(G), and if repeated sufficiently often the resulting graph is uniformly sampled from its equivalence class [32, 44]. Appendix B. Modification algorithms In this section we provide the detailed modification algorithms described in the main text. These algorithms are presented here to separate them from the more central SF(\u03b3, m) network generation algorithm (Alg. 1). The first modification algorithm (Alg. 2) changes the distribution of PA networks\u2019 statistics, by \u201cspreading hubs\u201d, to make it closer to that of the SF(\u03b3, m) networks. Moreover, after adding random factors to the first modification, we can fit the curve of the SF(\u03b3, m) networks. We also present the reverse modification algorithm (Alg. 3), which changes the curve of SF(\u03b3, m) networks to approach that of PA networks by \u201cconcentrating hubs\u201d. The following algorithm decentralizes hubs by spreading the hubs of a network. Algorithm 2. 1. Start with a simply connected network G (presumably generated with a PA process). 2. Select three percentages \u03b1_1, \u03b1_2, \u03b1_3 for the definition of the poor, rich, and giant nodes in the algorithm. 3. Sort the degree sequence, select the \u03b1_1 fraction of lowest degree, and define the corresponding vertices as the poor nodes. Similarly, select the \u03b1_2 and \u03b1_3 fractions of highest degree, and define them as the rich and giant nodes, respectively. 4. Delete all the links between the poor nodes and rich nodes. 5. Add the minimal number of links among the rich nodes to make them connected, and define them as a \u201cclub\u201d. 6. Check the poor nodes sequentially; if one is not linked to the club, randomly choose one member of the club, link this member to the poor node, and add the poor node to the club, until the club includes all the vertices of G. 7. Randomly pick 2 linked giant nodes g_1, g_2, and 2 linked non-giant nodes v_1, v_2 which are not linked to g_1, g_2. 8.
Apply the edge-switching method among v_1, v_2, g_1, g_2 [40]. (At least one of the possible edge switches will result in a connected graph.) 9. Repeat Step 7 and Step 8 until there are no links among the giant nodes. In brief, Steps 4-6 cut the links between the group of rich nodes and the group of poor nodes, and Steps 7-9 change the structure among the rich nodes, since it is this group of rich nodes that possesses most of the edges. Steps 7-9 preserve the degree sequence. The result is that the PA networks become less hub-centric. We also note that, in Step 5, there is often no need to add links, since the group of rich nodes is often already connected. We choose \u03b1_1 = 60%, \u03b1_2 = 5%, \u03b1_3 = 0.5% for our modification algorithm, and apply it to the cases m = 3 and m = 4. The result with m = 3 is shown in Fig. 5; the other results are similar but omitted for conciseness. The following algorithm concentrates hubs by modifying the initial network to make the hubs more strongly centralized. Algorithm 3. 1. Start with a simply connected network G (presumably generated with the SF(\u03b3, m) process). 2. Select three percentages \u03b1_1, \u03b1_2, \u03b1_3 for the definition of the poor, rich, and giant nodes in the algorithm. 3. Sort the degree sequence, select the \u03b1_1 fraction of lowest degree, and define the corresponding vertices as the poor nodes. Similarly, select the \u03b1_2 and \u03b1_3 fractions of highest degree, and define them as the rich and giant nodes, respectively. 4. Traverse the poor nodes of G. (a) For each poor node v_i, if there exists another poor node among its adjacent nodes, then randomly select one such node and delete the link between them. Otherwise go to Step 5. (b) For v_i, select one node from the group of rich nodes with probability proportional to its degree, and link v_i with it. 5. Randomly pick 2 unlinked giant nodes g_1, g_2, and 2 unlinked non-giant nodes v_1, v_2 which are connected with g_1 and g_2. 6.
Apply the edge-switching method among v_1, v_2, g_1, g_2 [40]. (At least one of the possible edge switches will result in a connected graph.) 7. Repeat Step 5 and Step 6 until the subgraph induced by the giant nodes is complete. This reverse modification algorithm on SF(\u03b3, m) networks aims to generate interconnected hubs. Step 4 deletes the edges between the poor nodes and rich nodes as far as possible under the constraint of connectivity and relinks these edges among the rich nodes, and Step 5 forces the giant nodes to connect with each other. Hence, we can generate the hub-centric structure in networks, which results in the adjusted curves of the statistics for SF(\u03b3, m) networks approaching those of the PA networks. Setting \u03b1_1 = 80%, \u03b1_2 = 5%, and \u03b1_3 = 0.5% for this modification, the result for m = 3 is shown in Fig. 6. Acknowledgements MS is supported by an Australian Research Council Future Fellowship (FT110100896) and Discovery Project (DP140100203). LZ was supported by the UWA-USTC Research Training Programme." + } + ], + "Jonathan Ullman": [ + { + "url": "http://arxiv.org/abs/1407.1571v3", + "title": "Private Multiplicative Weights Beyond Linear Queries", + "abstract": "A wide variety of fundamental data analyses in machine learning, such as\nlinear and logistic regression, require minimizing a convex function defined by\nthe data. Since the data may contain sensitive information about individuals,\nand these analyses can leak that sensitive information, it is important to be\nable to solve convex minimization in a privacy-preserving way.\n A series of recent results show how to accurately solve a single convex\nminimization problem in a differentially private manner. However, the same data\nis often analyzed repeatedly, and little is known about solving multiple convex\nminimization problems with differential privacy.
For simpler data analyses,\nsuch as linear queries, there are remarkable differentially private algorithms\nsuch as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS\n2010) that accurately answer exponentially many distinct queries. In this work,\nwe extend these results to the case of convex minimization and show how to give\naccurate and differentially private solutions to *exponentially many* convex\nminimization problems on a sensitive dataset.", + "authors": "Jonathan Ullman", + "published": "2014-07-07", + "updated": "2015-03-14", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS" + ], + "main_content": "Contents:\n1 Introduction\n1.1 Our Results\n1.2 Techniques\n1.3 Connection to Generalization Error in Adaptive Data Analysis\n2 Preliminaries\n2.1 Datasets, Histograms, and Differential Privacy\n2.2 Convex Minimization (CM) Queries and Accuracy\n3 Online Private Multiplicative Weights for CM Queries\n3.1 The Online Sparse Vector Algorithm\n3.2 The Algorithm\n3.3 Accuracy Analysis\n3.4 Privacy Analysis\n3.4.1 Composition of Differential Privacy\n3.4.2 Proof of Theorem 3.9\n4 Applications of Theorem 3.8\n4.1 Interpreting Theorem 3.8\n4.2 Applications\n4.2.1 Lipschitz and Bounded Loss Functions\n4.2.2 Generalized Linear Models\n4.2.3 Strongly Convex Loss Functions\n4.3 Running Time and Discussion of Computational Complexity\nAcknowledgements" + }, + { + "url": "http://arxiv.org/abs/1207.6945v3", + "title": "Answering n^{2+o(1)} Counting Queries with Differential Privacy is Hard", + "abstract": "A central problem in differentially private data analysis is how to design\nefficient algorithms capable of answering large numbers of counting queries on\na sensitive database. Counting queries are queries of the form \"What fraction\nof individual records in the database satisfy the property q?\" We prove that if\none-way functions exist, then there is no algorithm that takes as input a\ndatabase D in ({0,1}^d)^n, and k = n^{2+o(1)} arbitrary efficiently computable\ncounting queries, runs in time poly(d, n), and returns an approximate answer to\neach query, while satisfying differential privacy. We also consider the\ncomplexity of answering \"simple\" counting queries, and make some progress in\nthis direction by showing that the above result holds even when we require that\nthe queries are computable by constant depth (AC-0) circuits.\n Our result is almost tight in the sense that nearly n^2 counting queries can\nbe answered efficiently while satisfying differential privacy.
Moreover,\nsuper-polynomially many queries can be answered in exponential time.\n We prove our results by extending the connection between differentially\nprivate counting query release and cryptographic traitor-tracing schemes to the\nsetting where the queries are given to the sanitizer as input, and by\nconstructing a traitor-tracing scheme that is secure in this setting.", + "authors": "Jonathan Ullman", + "published": "2012-07-30", + "updated": "2013-10-01", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CC" + ], + "main_content": "Introduction Consider a database D \u2208 ({0, 1}^d)^n, in which each of the n rows corresponds to an individual\u2019s record, and each record consists of d binary attributes. The goal of privacy-preserving data analysis is to enable rich statistical analyses on the database while protecting the privacy of the individuals. It is especially desirable to achieve differential privacy [DMNS06], which guarantees that no individual\u2019s data has a significant influence on the information released about the database. Some of the most basic statistics on a database are counting queries, which are queries of the form, \u201cWhat fraction of individual records in D satisfy some property q?\u201d In particular, we would like to construct differentially private sanitizers that, given a database D and k counting queries q_1, . . . , q_k from a family Q, output an approximate answer to each of the queries. [\u2217Supported by NSF grants CNS-0831289, CNS-1237235, and a gift from Google, Inc.] We would like the number of queries, k, to be as large as possible, and the set of feasible queries, Q, to be as general as possible. Ideally, Q would contain all counting queries. Moreover, we would like the algorithm to run as efficiently as possible. Some of the earliest work in differential privacy [DMNS06] gave an efficient sanitizer\u2014the so-called Laplace Mechanism.
The Laplace Mechanism answers any set of k arbitrary efficiently computable counting queries by perturbing the answers with appropriately calibrated random noise, providing good accuracy (say, within \u00b10.01 of the true answer) as long as k \u2272 n^2. The ability to approximately answer n^2 counting queries is quite powerful, especially in settings where data is abundant and n is large. However, being limited to n^2 queries can be restrictive in settings where data is expensive or otherwise difficult to acquire, and n is small. It can also be restrictive when the budget of queries is shared between multiple analysts. Fortunately, a remarkable result of Blum et al. [BLR08] (with subsequent developments in [DNR+09, DRV10, RR10, HR10, GRU12, HLM12]) showed that differentially private algorithms are not limited to n^2 queries. They showed how to approximately answer arbitrary counting queries even when k is exponentially larger than n. Unfortunately, their algorithm, and all subsequent algorithms capable of answering more than n^2 arbitrary counting queries, run in time (at least) poly(2^d, n, k). The result of Blum et al. raises the exciting possibility of an efficient algorithm that can privately compute approximate answers to large numbers of counting queries. Unfortunately, Dwork et al. [DNR+09] gave evidence that efficient sanitizers are inherently less powerful than their computationally unbounded counterparts. They study the problem of constructing differentially private one-shot sanitizers that, given a database D, produce a summary from which approximate answers to every query in Q can be computed, while both the sanitizer and the summary run in time much less than the size of Q. Dwork et al.
constructed a family of 2^{\u00d5(\u221an)} queries for which there is no efficient (time poly(d, n)) one-shot sanitizer (under certain cryptographic assumptions), even though there is an inefficient (time poly(2^d, n, |Q|)) one-shot sanitizer even if |Q| is nearly 2^n. For any family Q, constructing an efficient one-shot sanitizer is one way of constructing an efficient sanitizer that answers any polynomial number of queries from Q. Thus, hardness results for one-shot sanitizers rule out a particular way of constructing efficient sanitizers. However, ultimately a polynomial-time analyst will only be able to ask a polynomial number of queries, and hardness results for one-shot sanitizers still leave hope that there might be an efficient sanitizer that can answer many more arbitrary counting queries than the Laplace Mechanism. Unfortunately, we show that this is not the case\u2014there is no efficient, differentially private algorithm that takes a database D \u2208 ({0, 1}^d)^n and \u0398\u0303(n^2) arbitrary, efficiently computable counting queries as input and outputs an approximate answer to each of the queries. One way to summarize our results is that, unless we restrict the set Q of allowable queries or allow exponential running time, the Laplace Mechanism is essentially the best possible algorithm for answering counting queries with differential privacy. 1.1 Our Results and Techniques As discussed above, in this paper we give new hardness results for answering counting queries while satisfying differential privacy. To make the statement of our results more concrete, we will assume that the counting queries are given to the sanitizer as input in the form of circuits that, on input an individual record x \u2208 {0, 1}^d, decide whether or not the record x satisfies the property q. (It may require super-polynomial time just to evaluate an arbitrary counting query, which would rule out efficiency for reasons that have nothing to do with privacy. For this discussion, we restrict attention to queries that are efficiently computable, so the queries are not the bottleneck in the computation.) We say the queries are efficiently computable if the corresponding circuits are of size poly(d, n). Theorem 1.1. Assuming the existence of one-way functions, there exists a k = n^{2+o(1)} such that there is no algorithm that, on input a database D \u2208 ({0, 1}^d)^n and k efficiently computable counting queries, runs in time poly(d, n) and returns an approximate answer to each query to within \u00b10.49, while satisfying differential privacy. In particular, Theorem 1.1 applies to interactive sanitizers, which are sanitizers that receive (possibly adaptively chosen) queries one at a time. Many positive results achieve this stronger notion of sanitization. In particular, the Laplace Mechanism is an efficient interactive sanitizer that answers \u03a9\u0303(n^2) queries, and there exist interactive sanitizers that can answer nearly 2^n queries in time poly(2^d, n) per query interactively [RR10, HR10, GRU12]. We also show that the same theorem holds even for queries that are computable by unbounded fan-in circuits of depth 6 over the basis {\u2227, \u2228, \u00ac} (a subset of the well-studied class AC^0), albeit under a stronger (but still plausible) cryptographic assumption. Theorem 1.2. Under the assumptions described in Section 5.6, there exists a k = n^{2+o(1)} such that there is no algorithm that, on input a database D \u2208 ({0, 1}^d)^n and k efficiently computable depth-6 queries (circuits), runs in time poly(d, n) and returns an approximate answer to each query to within \u00b10.49, while satisfying differential privacy.
Theorem 1.2 should be contrasted with the results of Hardt, Rothblum, and Servedio [HRS12] as well as Thaler, Ullman, and Vadhan [TUV12], which give efficient sanitizers for answering n^{\u03a9(\u221ak)} \u226b n^2 monotone k-way conjunction queries, a much simpler class than polynomial-size depth-6 circuits. We now describe our techniques. The Connection with Traitor-Tracing We prove our results by building on the connection between differentially private sanitizers for counting queries and traitor-tracing schemes discovered by Dwork et al. [DNR+09]. Traitor-tracing schemes were introduced by Chor, Fiat, and Naor [CFN94] for the purpose of identifying pirates who violate copyright restrictions. Roughly speaking, a (fully collusion-resilient) traitor-tracing scheme allows a sender to generate keys for n users so that 1) the sender can broadcast encrypted messages that can be decrypted by any user, and 2) any efficient pirate decoder capable of decrypting messages can be traced to at least one of the users who contributed a key to it, even if an arbitrary coalition of the users combined their keys in an arbitrary efficient manner to construct the decoder. Dwork et al. show that the existence of traitor-tracing schemes implies hardness results for one-shot sanitizers. Very informally, they argue as follows: Suppose a coalition of users takes their keys and builds a database D \u2208 ({0, 1}^d)^n where each record contains one of their user keys. The family Q will contain a query q_c for each possible ciphertext c. The query q_c asks \u201cWhat fraction of the records (user keys) in D would decrypt the ciphertext c to the message 1?\u201d Every user can decrypt, so if the sender encrypts a message m \u2208 {0, 1} as a ciphertext c, then every user will decrypt c to m. Thus the answer to the counting query, q_c, will be m.
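The argument above, that the counting query q_c evaluates to the encrypted bit m whenever every record decrypts correctly, can be illustrated with a toy scheme. The XOR "decryption" and all names below are our own illustrative stand-ins with no cryptographic security; they are not the scheme from this paper:

```python
def toy_decrypt(user_key, ciphertext):
    """Toy XOR 'decryption' (purely illustrative): the pad is the inner
    product of the key and the randomness r, mod 2."""
    r, masked_bit = ciphertext
    pad = sum(k & b for k, b in zip(user_key, r)) % 2
    return masked_bit ^ pad

def counting_query(predicate, database):
    # q(D): the fraction of records of D satisfying the predicate.
    return sum(predicate(x) for x in database) / len(database)

# Each database record is a user key; here every user holds a key that
# decrypts correctly, so q_c(D) equals the encrypted bit m itself.
secret = [1, 0, 1, 1]
database = [secret] * 5
r = [1, 1, 0, 1]
m = 1
ciphertext = (r, m ^ (sum(a & b for a, b in zip(secret, r)) % 2))
q_c = lambda record: toy_decrypt(record, ciphertext)
answer = counting_query(q_c, database)
```

Because every record decrypts c to m, the sanitizer's (accurate) answer to q_c reveals the plaintext, which is the lever the tracing argument uses.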
[Footnote 2: A monotone k-way conjunction query on a database D \u2208 ({0, 1}^d)^* is specified by a set of positions S \u2286 [d], |S| = k \u2264 d, and asks \u201cWhat fraction of records in D have a 1 in every position in S?\u201d] Suppose there were an efficient one-shot sanitizer for Q. Then the coalition could use it to efficiently produce a summary of the database D that enables one to efficiently compute an approximate answer to every query q_c, which would also allow one to efficiently decrypt the ciphertext. Such a summary can be viewed as an efficient pirate decoder, and thus the tracing algorithm can use the summary to trace one of the users in the coalition. However, if there is a way to identify one of the users in the database from the summary, then the summary is not differentially private. In order to instantiate their result, they need a traitor-tracing scheme. Since Q contains a query for every ciphertext, the parameter to optimize is the length of the ciphertexts. Using the fully collusion-resilient traitor-tracing scheme of Boneh, Sahai, and Waters [BSW06], which has ciphertexts of length \u00d5(\u221an), they obtain a family of queries of size 2^{\u00d5(\u221an)} for which there is no efficient one-shot sanitizer. Dwork et al. also discovered a partial converse\u2014proving hardness of one-shot sanitization for a smaller family of queries requires constructing traitor-tracing schemes with shorter ciphertexts, which is a seemingly difficult open problem. Our Approach In our setting of sanitization (rather than one-shot sanitization, as studied by Dwork et al. [DNR+09]), we don\u2019t expect to answer every query in Q, only a much smaller set of queries requested by the analyst. At first glance, this should make answering the queries much easier, and thus make it more difficult to demonstrate hardness.
However, the attacker does have the power to choose the queries which he wants answered, and can choose queries that are most difficult to sanitize. Our first observation is that in the traitor-tracing scenario, the tracing algorithms only query the pirate decoder on a polynomial number of ciphertexts, which are randomly chosen and depend on the particular keys that were instantiated for the scheme. For many schemes, even \u00d5(n^2) queries are sufficient. Thus it would seem that the tracing algorithm could simply decide which queries it will make, give those queries as input to the sanitizer, and then use the answers to those queries to identify a user and violate differential privacy. However, this intuition ignores an important issue. Many traitor-tracing schemes (including [BSW06]) can only trace stateless pirate decoders, which essentially commit to a response to each possible query (or a distribution over responses) once and for all. For one-shot sanitizers, the private summary is necessarily stateless, and thus the result of Dwork et al. can be instantiated with any scheme that allows tracing of stateless pirate decoders. However, an arbitrary sanitizer might give answers that depend on the sequence of queries. Thus, in order to prove our results, we will need a traitor-tracing scheme that can trace stateful pirate decoders. The problem of tracing stateful pirates is quite natural even without the implications for private data analysis. Indeed, this problem has been studied in the literature, originally by Kiayias and Yung [KY01]. They considered pirates that can abort and record history. However, their solution, and all others known, does not apply to our specific setting due to a certain \u201cwatermarking assumption\u201d that doesn\u2019t apply when proving hardness-of-sanitization (see discussion below).
To address this problem, we also refine the basic connection between traitor-tracing schemes and differential privacy by showing that, in many respects, fairly weak traitor-tracing schemes suffice to establish the hardness of preserving privacy. In particular, although the pirate decoder obtained from a sanitizer may be stateful and record history, the accuracy requirement of the sanitizer means that the corresponding pirate decoder cannot abort, which will make it easier to construct a traitor-tracing scheme for these kinds of pirates. Indeed, we construct such a scheme to establish Theorem 1.1. The scheme also has weakened requirements in other respects, having nothing to do with the statefulness of the pirate or the tracing algorithm. These weakened requirements allow us to reduce the complexity of the decryption, which means that the queries used by the attacker do not need to be arbitrary polynomial-size circuits, but instead can be circuits of constant depth, which allows us to establish Theorem 1.2. Another technical issue arises in that all k queries must be given to the sanitizer at once, whereas tracing algorithms typically are allowed to query the pirate interactively. However, we are able to show that the scheme we construct can be traced using one round of queries. See Sections 3.1 and 4 for a precise statement of the kind of traitor-tracing scheme that suffices and Section 5 for our construction. Our construction is based on a well-known fully collusion-resilient traitor-tracing scheme [CFN94], but with a modified tracing algorithm. The tracing algorithm uses fingerprinting codes [BS98, Tar08], which have been employed before in the context of traitor-tracing and content distribution, but our tracing algorithm is different from all those we are aware of.
The resulting scheme allows for tracing with a minimal number of non-adaptively chosen queries, achieves tracing without context-specific watermarking assumptions, and simplifies the decryption circuit (at the expense of weakening the security parameters and functionality). The restriction to non-aborting pirates may not be so natural in the setting of content distribution, which may explain why the scheme was not previously known. 1.2 Related Work In addition to the hardness results for one-shot sanitization [DNR+09], which apply to arbitrary one-shot sanitizers, there are several hardness-of-sanitization results for restricted classes of sanitizers. Dwork et al. proved stronger hardness results for sanitizers whose output is a \u201csynthetic database\u201d\u2014roughly, a new database (of the same dimensions) that approximately preserves the answer to some set of queries. Their results were extended by Ullman and Vadhan [UV11], who showed that it is hard to generate a private synthetic database that is accurate for essentially any non-trivial family of queries, even 2-literal conjunctions. Gupta et al. [GHRU11] considered algorithms that can only access the database by making a polynomial number of \u201cstatistical queries\u201d (essentially counting queries). They showed that such algorithms cannot be a one-shot sanitizer (even ignoring privacy constraints) that approximately answers certain simple families of counting queries with high accuracy. Finally, Dwork, Naor, and Vadhan [DNV12] gave information-theoretic lower bounds for stateless sanitizers, which take k queries as input, but whose answers to each query do not depend on the other k \u2212 1 input queries. They showed that (even computationally unbounded) stateless sanitizers can answer at most \u00d5(n^2) queries with non-trivial accuracy, while satisfying differential privacy.
The Laplace Mechanism is a stateless sanitizer that answers \u03a9\u0303(n^2) queries, and thus their result is tight in this respect. Although their result is information-theoretic, and considers a highly restricted type of sanitizer, their techniques are related to ours. We elaborate on this connection in the appendix. 2 Preliminaries Differentially Private Algorithms Let a database D \u2208 ({0, 1}^d)^n be a collection of n rows (x^(1), . . . , x^(n)) \u2208 ({0, 1}^d)^n. We say that two databases D, D\u2032 \u2208 ({0, 1}^d)^n are adjacent if they differ only on a single row, and we denote this by D \u223c D\u2032. Definition 2.1 (Differential Privacy [DMNS06]). Let M: ({0, 1}^d)^n \u2192 R be a randomized algorithm that takes a database as input (where n and d are varying parameters). M is (\u03b5, \u03b4)-differentially private if for every two adjacent databases D \u223c D\u2032 and every subset S \u2286 R, Pr[M(D) \u2208 S] \u2264 e^\u03b5 Pr[M(D\u2032) \u2208 S] + \u03b4. If M is (\u03b5, \u03b4)-differentially private for some functions \u03b5 = \u03b5(n) = O(1), \u03b4 = \u03b4(n) = o(1/n), we will drop the parameters \u03b5 and \u03b4 and say that M is differentially private. The choice of \u03b5 = O(1), \u03b4 = o(1/n) is a fairly weak setting of the privacy parameters, and most known constructions of differentially private mechanisms for various tasks give quantitatively stronger privacy guarantees. These parameters are essentially the weakest possible, as (\u03b5, \u03b4)-differential privacy is not a satisfactory privacy guarantee for \u03b5 = \u03c9(1) or \u03b4 = \u2126(1/n). That our lower bounds apply to the parameters specified in Definition 2.1 makes our results stronger. Sanitizers for Counting Queries Since an algorithm that always outputs \u22a5 satisfies Definition 2.1, we also need to specify what it means for the sanitizer to be useful.
In this paper we study sanitizers that give accurate answers to counting queries. A counting query on {0, 1}^d is defined by a predicate q: {0, 1}^d \u2192 {0, 1}. Abusing notation, we define the evaluation of the query q on a database D = (x^(1), . . . , x^(n)) \u2208 ({0, 1}^d)^n to be q(D) = (1/n) \u2211_{i=1}^{n} q(x^(i)). We will use Q^(d) to denote a set of counting queries on {0, 1}^d and Q = \u222a_{d \u2208 N} Q^(d). We are interested in sanitizers that take a sequence of queries from some set Q as input. Formally, a sanitizer is a function M: ({0, 1}^d)^n \u00d7 (Q^(d))^k \u2192 R^k (where, again, n, d, and k are varying parameters). Notice that we assume that M outputs k real-valued answers. Think of the j-th component of the output of M as an answer to the j-th query. For the results in this paper, this assumption will be without loss of generality. (In certain settings, M(D, q_1, . . . , q_k) is allowed to output a \u201csummary\u201d z \u2208 R for some range R. In this case, we would also require that there exists an \u201cevaluator\u201d E: R \u00d7 Q \u2192 R that takes a summary and a query and returns an answer E(z, q) = a that approximates q(D). The extra generality is used to allow M to run in less time than the number of queries it is answering (e.g. releasing a fixed family of queries), but this is not relevant for our range of parameters where k = \u00d5(n^2). Indeed, a generic sanitizer M that outputs a summary in R can be converted to a generic sanitizer with output in R^k simply by running M(D, q_1, . . . , q_k) to obtain z \u2208 R and then computing a_1 = E(z, q_1), . . . , a_k = E(z, q_k). This conversion increases the running time by a factor of k, which is the minimum time required to read the input queries.) Definition 2.1 extends naturally to sanitizers by requiring that for every q_1, . . . , q_k \u2208 Q, the sanitizer M_{q_1,...,q_k}(\u00b7) = M(\u00b7, q_1, . . . , q_k) is (\u03b5, \u03b4)-differentially private as a function of the input database. Now we formally define what it means for a sanitizer to give accurate answers. Definition 2.2 (Accuracy). Let D be a database and q_1, . . . , q_k be a set of counting queries. A sequence of answers a_1, . . . , a_k is \u03b1-accurate for q_1, . . . , q_k on D if \u2200j \u2208 [k], |q_j(D) \u2212 a_j| \u2264 \u03b1. Let Q be a set of counting queries, k \u2208 N, and \u03b1, \u03b2 \u2208 [0, 1] be parameters. A sanitizer M is (\u03b1, \u03b2, Q, k)-accurate if for every database D \u2208 ({0, 1}^d)^n and every sequence of queries q_1, . . . , q_k \u2208 Q^(d), Pr_{M\u2019s coins}[M(D, q_1, . . . , q_k) is \u03b1-accurate for D and q_1, . . . , q_k] \u2265 1 \u2212 \u03b2. If M is (\u03b1, \u03b2, Q, k)-accurate for any (constant) \u03b1 < 1/2 and \u03b2 = \u03b2(n) = o(1/n^2), we will drop \u03b1 and \u03b2 and say that M is (Q, k)-accurate. The choice of \u03b1 < 1/2, \u03b2 = o(1/n^2) is also significantly weaker than what can be achieved by known constructions of sanitizers. These parameters are also essentially the weakest parameters possible, as a mechanism that answers 1/2 to every query achieves \u03b1 = 1/2, \u03b2 = 0 for any number of arbitrary queries. Also, if there is a mechanism that achieves (\u03b1, \u03b2, Q, k)-accuracy for \u03b2 < 1/2, then there is another mechanism that achieves (\u03b1, o(1/n^2), Q, k)-accuracy with only an O(log n) loss in the privacy parameters and the efficiency of the mechanism. That our lower bound applies to the parameters specified in Definition 2.2 makes our results stronger. Efficiency of Sanitizers Simply put, a sanitizer is efficient if it runs in time polynomial in the length of its input. To make the statement more precise, we need to specify how the queries are given to the sanitizer as input. Notice that specifying an arbitrary counting query q: {0, 1}^d \u2192 {0, 1} requires 2^d bits.
In this case, a sanitizer whose running time is polynomial in the time required to specify the query is not especially efficient. Thus, we restrict attention to queries that are efficiently computable and have a succinct representation. In this work, we fix the representation to be Boolean circuits over the basis $\{\wedge, \vee, \neg\}$ with possibly unbounded fan-in. In this representation, any query can be evaluated in time $|q|$, where $|\cdot|$ denotes the size of the circuit computing $q$. We also want to consider the case where the queries are computable by circuits of low depth. For a constant $h \in \mathbb{N}$, we use $\mathcal{Q}^{(d)}_{\mathrm{depth}\text{-}h}$ to denote the set of all counting queries on $\{0,1\}^d$ specified by circuits of depth $h$. Finally, we use $\mathcal{Q}^{(d)}_{\mathrm{all}}$ to denote the set of all counting queries on $\{0,1\}^d$.

Definition 2.3 (Efficient Sanitizers). A sanitizer $M$ is efficient if, on input a database $D \in (\{0,1\}^d)^n$ and $k$ queries $q_1, \dots, q_k \in \mathcal{Q}^{(d)}_{\mathrm{all}}$, $M$ runs in time $\mathrm{poly}(d, n, k, |q_1| + \cdots + |q_k|)$. For every $h \in \mathbb{N}$, a sanitizer $M$ is efficient for depth-$h$ queries if, on input a database $D \in (\{0,1\}^d)^n$ and $k$ queries $q_1, \dots, q_k \in \mathcal{Q}^{(d)}_{\mathrm{depth}\text{-}h}$, $M$ runs in time $\mathrm{poly}(d, n, k, |q_1| + \cdots + |q_k|)$.

For comparison with our results, we recall the properties of some known mechanisms, stated in our terminology and for our choice of parameters:

Theorem 2.4 (Laplace Mechanism [DN03, BDMN05, DMNS06]). There exists a sanitizer $M_{\mathrm{Lap}}$ that is 1) differentially private, 2) efficient, and 3) $(\mathcal{Q}^{(d)}_{\mathrm{all}}, \tilde{\Omega}(n^2))$-accurate.

Theorem 2.5 ("Advanced Query Release Mechanisms" [BLR08, DNR+09, DRV10, HR10, GRU12, HLM12]). There exists a sanitizer $M_{\mathrm{Adv}}$ that is 1) differentially private and 2) $(\mathcal{Q}^{(d)}_{\mathrm{all}}, 2^{\tilde{\Omega}(n/\sqrt{d})})$-accurate. For queries $q_1, \dots, q_k \in \mathcal{Q}^{(d)}_{\mathrm{all}}$, $M_{\mathrm{Adv}}$ runs in time $\mathrm{poly}(2^d, n, k, |q_1| + \cdots + |q_k|)$.
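For intuition, the mechanism behind Theorem 2.4 can be sketched as follows. The noise scale $k/(\varepsilon n)$ used here is an assumed simplification: it comes from the $1/n$ sensitivity of a counting query and basic $k$-fold composition, not from the tuned constructions of the cited works.

```python
import math
import random

# An illustrative sketch of a Laplace-noise sanitizer for counting queries.
# Each counting query has sensitivity 1/n, so adding independent Laplace
# noise of scale k/(eps*n) to each of k answers gives eps-differential
# privacy by basic composition (assumed simplification).
def laplace_mechanism(database, queries, eps, rng):
    n = len(database)
    scale = len(queries) / (eps * n)  # per-query noise scale
    answers = []
    for q in queries:
        true_answer = sum(q(x) for x in database) / n
        # Sample Laplace(0, scale) noise by inverting its CDF.
        u = rng.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        answers.append(true_answer + noise)
    return answers
```

With a reasonably large database and few queries, the answers concentrate around the true query values, matching the regime where the Laplace mechanism is accurate.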
⁴Given a sanitizer $M$ that answers every query accurately with probability $1/2 + \Omega(1)$, one can obtain a mechanism $M'$ that answers every query accurately with probability $1 - \beta$: $M'$ runs $M$ independently $r = O(\log(1/\beta))$ times and answers each query with the median of the $r$ answers for that query.

As we mentioned above, these mechanisms can achieve stronger quantitative privacy and accuracy guarantees (in terms of $\varepsilon, \delta$ for privacy and $\alpha, \beta$ for accuracy) with only a small degradation in the number of queries. Also, notice that both of these mechanisms provide accuracy guarantees that are independent of the complexity of the queries (although the running time of the mechanism will depend on the complexity of the queries). Our hardness results apply to sanitizers that only provide accuracy for queries of size $\mathrm{poly}(d, n)$.

3 Traitor-Tracing Schemes

In this section we define traitor-tracing schemes. Throughout, we will use the subscript TT (as in $A_{\mathrm{TT}}$) to denote algorithms associated with traitor-tracing schemes.

3.1 Traitor-Tracing Schemes

We now give a definition of a traitor-tracing scheme, heavily tailored to the task of proving hardness results for generic sanitizers. We will sacrifice some consistency with the standard definitions; see below for further discussion of the ways in which our definition departs from the standard definition of traitor-tracing. In some cases, the non-standard aspects of the definition are necessary to establish our results, and in others they are for convenience. Despite these differences, we will henceforth refer to schemes satisfying our definition simply as traitor-tracing schemes. Intuitively, a traitor-tracing scheme is a form of broadcast encryption, in which a sender can broadcast an encrypted message that can be decrypted by each of a large set of users.
The standard notion of security for such a scheme would require that an adversary who does not have any of the keys cannot decrypt the message. A traitor-tracing scheme has the additional property that, given any decoder capable of decrypting the message (which must in a very loose sense "know" at least one of the keys), there is a procedure for determining which user's key is being used. Moreover, we want the scheme to be "collusion resilient," in that even if a coalition of users gets together and combines their keys in some way to produce a decoder, there is still a procedure that identifies at least one member of the coalition.

We now describe the syntax of a traitor-tracing scheme more formally. For functions $n, k_{\mathrm{TT}}: \mathbb{N} \to \mathbb{N}$, an $(n, k_{\mathrm{TT}})$-traitor-tracing scheme is a tuple of algorithms $(\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$. The parameter $n$ specifies the number of users of the scheme, and the parameter $k_{\mathrm{TT}}$ specifies the number of queries that the tracing algorithm makes to the pirate decoder. We allow all the algorithms to be randomized except for $\mathrm{Dec}_{\mathrm{TT}}$.⁵

• The algorithm $\mathrm{Gen}_{\mathrm{TT}}$ takes a security parameter $\kappa$ and returns a sequence of $n = n(\kappa)$ user keys $\vec{sk} = (sk^{(1)}, \dots, sk^{(n)}) \in (\{0,1\}^{\kappa})^n$. Formally, $\vec{sk} = (sk^{(1)}, \dots, sk^{(n)}) \leftarrow_R \mathrm{Gen}_{\mathrm{TT}}(1^{\kappa})$.
• The algorithm $\mathrm{Enc}_{\mathrm{TT}}$ takes a sequence of $n$ user keys $\vec{sk}$ and a message bit $b \in \{0,1\}$, and generates a ciphertext $c \in \mathcal{C} = \mathcal{C}(\kappa)$. Formally, $c \leftarrow_R \mathrm{Enc}_{\mathrm{TT}}(\vec{sk}, b)$.
• The algorithm $\mathrm{Dec}_{\mathrm{TT}}$ takes any single user key $sk$ and a ciphertext $c \in \mathcal{C}$, runs in time $\mathrm{poly}(\kappa, n(\kappa))$, and deterministically returns a message bit $\hat{b} \in \{0,1\}$. Formally, $\hat{b} = \mathrm{Dec}_{\mathrm{TT}}(sk, c)$.

⁵It would not substantially affect our results if $\mathrm{Dec}_{\mathrm{TT}}$ were randomized, but we will assume that $\mathrm{Dec}_{\mathrm{TT}}$ is deterministic for ease of presentation.
• The algorithm $\mathrm{Trace}_{\mathrm{TT}}$ takes as input a set of user keys $\vec{sk} \in (\{0,1\}^{\kappa})^{n(\kappa)}$ and an oracle $P: (\mathcal{C}(\kappa))^{k_{\mathrm{TT}}(\kappa)} \to \{0,1\}^{k_{\mathrm{TT}}(\kappa)}$, makes one $k_{\mathrm{TT}}$-tuple of queries $(c_1, \dots, c_{k_{\mathrm{TT}}})$ to its oracle (where $k_{\mathrm{TT}} = k_{\mathrm{TT}}(\kappa)$), and returns the name of a user $i \in [n(\kappa)]$. Formally, $i \leftarrow_R \mathrm{Trace}^P_{\mathrm{TT}}(\vec{sk})$.

Intuitively, think of the oracle $P$ as being given some subset of keys $\vec{sk}_S = (sk^{(i)})_{i \in S}$ for a non-empty set $S \subseteq [n]$, and $\mathrm{Trace}_{\mathrm{TT}}$ as attempting to identify a user $i \in S$. Clearly, if $P$ ignores its input and always returns 0, $\mathrm{Trace}_{\mathrm{TT}}$ cannot have any hope of success, so we must assume that $P$ is capable of decrypting ciphertexts.

Definition 3.1 (Available Pirate Decoder). Let the tuple of algorithms $\Pi_{\mathrm{TT}} = (\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$ be an $(n, k_{\mathrm{TT}})$-traitor-tracing scheme. Let $P$ be a (possibly randomized) algorithm. We say that $P$ is a $k_{\mathrm{TT}}$-available pirate decoder if for every $\kappa \in \mathbb{N}$, every set of user keys $\vec{sk} = (sk^{(1)}, \dots, sk^{(n)}) \in (\{0,1\}^{\kappa})^n$, every $S \subseteq [n]$ such that $|S| \ge n-1$, and every $c_1, \dots, c_{k_{\mathrm{TT}}} \in \mathcal{C}(\kappa)$,
$$\Pr_{(\hat{b}_1, \dots, \hat{b}_{k_{\mathrm{TT}}}) \leftarrow_R P(\vec{sk}_S, c_1, \dots, c_{k_{\mathrm{TT}}})}\left[\exists j \in [k_{\mathrm{TT}}],\ b \in \{0,1\}:\ \left(\forall i \in S,\ \mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c_j) = b\right) \wedge \left(\hat{b}_j \ne b\right)\right] \le o\!\left(\frac{1}{n(\kappa)^2}\right).$$
In other words, if every user key $sk^{(i)}$ (for $i \in S$) decrypts $c_j$ to 1 (resp. 0), then $P(\vec{sk}_S, \cdot)$ decrypts $c_j$ to 1 (resp. 0), with high probability.

We can now define a secure $(n, k_{\mathrm{TT}})$-traitor-tracing scheme:

Definition 3.2 (Traitor-Tracing Scheme). Let the tuple of algorithms $\Pi_{\mathrm{TT}} = (\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$ be an $(n, k_{\mathrm{TT}})$-traitor-tracing scheme, where $k_{\mathrm{TT}}: \mathbb{N} \to \mathbb{N}$ is a function.
We say that $\Pi_{\mathrm{TT}}$ is a secure $(n, k_{\mathrm{TT}})$-traitor-tracing scheme if for every $S \subseteq [n(\kappa)]$ such that $|S| \ge n(\kappa) - 1$, and for every (possibly randomized) algorithm $P$ that 1) runs in time $\mathrm{poly}(\kappa, n(\kappa), k_{\mathrm{TT}}(\kappa))$ and 2) is a $k_{\mathrm{TT}}$-available pirate decoder, we have
$$\Pr_{\substack{\vec{sk} \leftarrow_R \mathrm{Gen}_{\mathrm{TT}}(1^{\kappa}) \\ P\text{'s, } \mathrm{Trace}_{\mathrm{TT}}\text{'s coins}}}\left[\mathrm{Trace}^{P(\vec{sk}_S, \cdot)}_{\mathrm{TT}}(\vec{sk}) \notin S\right] = o\!\left(\frac{1}{n(\kappa)}\right).$$

Remarks About Our Definition of Traitor-Tracing. The traitor-tracing schemes we consider are somewhat different from those previously studied in the literature.

• We do not require the encryption or tracing algorithms to use public keys. In the typical application of traitor-tracing schemes to content distribution, these would be desirable features; however, they are not necessary for proving hardness of sanitization.
• We only require that the tracing algorithm succeeds with probability $1 - o(1/n)$. Typically one would require that the tracing algorithm succeeds with probability $1 - n^{-\omega(1)}$.
• We do not give the pirate decoder access to an encryption oracle. In other words, we do not require CPA security. Most traitor-tracing schemes in the literature are public-key, making this distinction irrelevant. Here, we only need an encryption scheme that is secure for an a priori bounded number of messages.
• We allow the pirate decoder to be stateful, but in an unusual way. We require (roughly) that if any of the queries are ciphertexts generated by $\mathrm{Enc}(\vec{sk}, b)$, then the pirate decoder answers $b$ to those queries, regardless of the other queries issued. In many models, the pirate is allowed to abort and answer $\perp$ if it detects that it is being traced. However, we do allow our pirate to correlate its answers to different queries, subject to this accuracy constraint.
We also allow the pirate to see all the queries made by the tracer at once, which is more power than is typically given to the pirate.

Roughly, the first three modifications allow us to find a candidate scheme with very simple decryption, and the fourth modification allows us to trace stateful pirates even in the setting of bit-encryption.

3.2 Decryption Function Families

For Theorem 1.2, we are interested in traitor-tracing schemes where $\mathrm{Dec}_{\mathrm{TT}}$ is a "simple" function of the user key (for every ciphertext $c \in \mathcal{C}$).

Definition 3.3 (Decryption Function Family). Let $(\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}})$ be a traitor-tracing scheme where $\mathrm{Gen}_{\mathrm{TT}}$ produces keys in $\{0,1\}^{\kappa}$ and $\mathrm{Enc}_{\mathrm{TT}}$ produces ciphertexts in $\mathcal{C} = \mathcal{C}(\kappa)$. For every $c \in \mathcal{C}$, we define the $c$-decryption function $q_c: \{0,1\}^{\kappa} \to \{0,1\}$ to be $q_c(sk) = \mathrm{Dec}_{\mathrm{TT}}(sk, c)$. We define the decryption function family $\mathcal{Q}^{(\kappa)}_{\mathrm{Dec}_{\mathrm{TT}}} = \{q_c\}_{c \in \mathcal{C}(\kappa)}$. In what follows, we will say that $\Pi_{\mathrm{TT}}$ is a traitor-tracing scheme with decryption function family $\mathcal{Q}^{(\kappa)}_{\mathrm{Dec}_{\mathrm{TT}}}$.

4 Attacking Efficient Sanitizers

In this section we prove our main result, showing that the existence of traitor-tracing schemes (as in Definition 3.2) implies that efficient sanitizers cannot answer too many counting queries while satisfying differential privacy.

Theorem 4.1 (Attacking Efficient Sanitizers). Assume that there exists an $(n(\kappa), k_{\mathrm{TT}}(\kappa))$-secure traitor-tracing scheme $\Pi_{\mathrm{TT}} = (\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$ with decryption function family $\mathcal{Q}^{(\kappa)} = \mathcal{Q}^{(\kappa)}_{\mathrm{Dec}_{\mathrm{TT}}}$. Then there does not exist any sanitizer $M: (\{0,1\}^d)^n \times (\mathcal{Q}^{(d)})^{k_{\mathrm{TT}}(d)} \to \mathbb{R}^{k_{\mathrm{TT}}(d)}$ that is simultaneously 1) differentially private, 2) efficient, and 3) $(\mathcal{Q}, k_{\mathrm{TT}}(d))$-accurate for $\mathcal{Q} = \bigcup_{d \in \mathbb{N}} \mathcal{Q}^{(d)}$.

In the typical setting of parameters, $n(\kappa) = \mathrm{poly}(\kappa)$, $k_{\mathrm{TT}}(\kappa) = \tilde{\Theta}(n^2)$, and decryption can be implemented by circuits of size $\mathrm{poly}(n) = \mathrm{poly}(\kappa)$.
Theorem 4.1 then states that there is no sanitizer $M$ that takes a database $D \in (\{0,1\}^d)^{\mathrm{poly}(d)}$, runs in $\mathrm{poly}(d)$ time, and accurately answers $\tilde{\Theta}(n^2)$ queries implemented by circuits of size $\mathrm{poly}(d)$, while satisfying differential privacy.

The main difference between Theorem 4.1 and the result of Dwork et al. [DNR+09] is that we only assume the existence of a sanitizer for $k_{\mathrm{TT}}(d)$ queries from $\mathcal{Q}^{(d)} = \mathcal{Q}^{(d)}_{\mathrm{Dec}_{\mathrm{TT}}}$, whereas Dwork et al. assume the existence of a one-shot sanitizer that answers every query in $\mathcal{Q}^{(d)}$. To offset the weaker assumption on the sanitizer, we assume that the traitor-tracing scheme is secure against certain stateful pirate decoders (as in Definition 3.1), whereas Dwork et al. only need to trace stateless pirates. Theorem 4.1 also explicitly allows the traitor-tracing scheme to have the relaxed functionality and security properties discussed at the end of Section 3, although it is implicit in Dwork et al. that the relaxed properties are sufficient to prove hardness results.

We now sketch the proof. Every function $q_c \in \mathcal{Q}^{(d)}$ is viewed as a query $q_c(x)$ on a database row $x \in \{0,1\}^d$. Assume there is an efficient sanitizer $M$ that is $(\mathcal{Q}^{(d)}, k_{\mathrm{TT}}(d))$-accurate for these queries. The fact that $M$ is accurate for these queries will imply that (after small modifications) $M$ is a $k_{\mathrm{TT}}$-available pirate decoder (Definition 3.1). Here is where we differ from Dwork et al., who assume that $M$ accurately answers all queries in $\mathcal{Q}^{(d)}$, in which case $M$ can be viewed as a stateless pirate decoder (but must solve a harder sanitization problem). We complete the proof as in Dwork et al. Consider two experiments: in the first, we construct an $n$-row database $D$ by running $\mathrm{Gen}_{\mathrm{TT}}(1^d)$ to obtain $n$ user keys and putting one in each row of $D$. Then we run $\mathrm{Trace}_{\mathrm{TT}}$ on $M(D, \cdot)$ and obtain a user $i$.
Since $M$ is accurate, and $\Pi_{\mathrm{TT}}$ is secure, we will have that $i \in [n]$ with probability close to 1, and thus there is an $i^* \in [n]$ such that $i = i^*$ with probability $\gtrsim 1/n$. In the second experiment, we construct a database $D'$ exactly as in the first, except that we exclude the key $sk^{(i^*)}$. Since $D$ and $D'$ differ in only one row, differential privacy requires that $\mathrm{Trace}_{\mathrm{TT}}$, run with oracle $M(D', \cdot)$, still outputs $i^*$ with probability $\Omega(1/n)$. However, in this experiment $sk^{(i^*)}$ is no longer given to the pirate decoder, and thus security of $\Pi_{\mathrm{TT}}$ says that $\mathrm{Trace}_{\mathrm{TT}}$, run with this oracle, must output $i^*$ with probability $o(1/n)$. Thus we obtain a contradiction.

Proof. Let $\Pi_{\mathrm{TT}} = (\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$ be the assumed traitor-tracing scheme, and assume there exists an efficient, differentially private, $(\mathcal{Q}^{(d)}, k_{\mathrm{TT}}(d))$-accurate sanitizer $M$. We define the pirate decoder $P_M$ as follows:

Algorithm 1: The pirate decoder $P_M$.
Input: a set of user keys $\vec{sk}_S \in (\{0,1\}^d)^{|S|}$ and a set of ciphertexts $c_1, \dots, c_{k_{\mathrm{TT}}}$ ($k_{\mathrm{TT}} = k_{\mathrm{TT}}(d)$).
Construct circuits specifying the queries $q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}} \in \mathcal{Q}^{(d)}_{\mathrm{Dec}_{\mathrm{TT}}}$.
Construct a database $D = (sk^{(i)})_{i \in S} \in (\{0,1\}^d)^{|S|}$.
Let $a_1, \dots, a_{k_{\mathrm{TT}}} \leftarrow_R M(D, q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}})$.
Round the answers $a_1, \dots, a_{k_{\mathrm{TT}}} \in [0,1]$ to the nearest bit to obtain $\hat{b}_1, \dots, \hat{b}_{k_{\mathrm{TT}}} \in \{0,1\}$ (i.e. $\hat{b}_j = \lceil a_j \rfloor$).
Output: $\hat{b}_1, \dots, \hat{b}_{k_{\mathrm{TT}}}$.

Since $M$ is efficient, its running time is $\mathrm{poly}(d, n(d), k_{\mathrm{TT}}(d), |q_{c_1}| + \cdots + |q_{c_{k_{\mathrm{TT}}}}|)$, which is $\mathrm{poly}(d, n(d), k_{\mathrm{TT}}(d))$. Recall that the size of the circuits (queries) $q_c \in \mathcal{Q}^{(d)}_{\mathrm{Dec}_{\mathrm{TT}}}$ is $\mathrm{poly}(d, n)$. In this case $P_M$ runs in time $\mathrm{poly}(d, n(d), k_{\mathrm{TT}}(d))$ as well, since constructing the queries can be done in time polynomial in their size, and the final rounding step can be done in time $\mathrm{poly}(k_{\mathrm{TT}}(d))$. Next, we claim that if $M$ is accurate for $\mathcal{Q}^{(d)}$, then $P_M$ is an available pirate decoder.

Claim 4.2.
If $M$ is $(\mathcal{Q}_{\mathrm{Dec}_{\mathrm{TT}}}, k_{\mathrm{TT}})$-accurate, then $P_M$ is a $k_{\mathrm{TT}}$-available pirate decoder.

Proof of Claim 4.2. Let $\vec{sk}$ be a set of user keys for $\Pi_{\mathrm{TT}}$ and let $S \subseteq [n]$ be a subset of the users such that $|S| \ge n-1$. Suppose $c \in \mathcal{C}(d)$ and $b \in \{0,1\}$ are such that for every $i \in S$, $\mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c) = b$. Then we have that, for $D$ as in $P_M$,
$$q_c(D) = \frac{1}{|S|} \sum_{i \in S} q_c(sk^{(i)}) = \frac{1}{|S|} \sum_{i \in S} \mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c) = b.$$
Let $c_1, \dots, c_{k_{\mathrm{TT}}}$ be a set of ciphertexts, and let $q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}}$ and $a_1, \dots, a_{k_{\mathrm{TT}}}$ be as in $P_M$. The accuracy of $M$ (with constant error $\alpha < 1/2$) guarantees that
$$\Pr\left[\exists j \in [k_{\mathrm{TT}}],\ \left|a_j - q_{c_j}(D)\right| \ge 1/2\right] = o(1/|S|^2).$$
Since $|S| \ge n-1$, $o(1/|S|^2) = o(1/n^2)$. Assuming $a_1, \dots, a_{k_{\mathrm{TT}}}$ is accurate up to error $\alpha < 1/2$ for $q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}}$, $a_j$ will be rounded to exactly $q_{c_j}(D)$ whenever $q_{c_j}(D) \in \{0,1\}$. That is,
$$\Pr\left[\exists j \in [k_{\mathrm{TT}}],\ b \in \{0,1\}:\ \left(\forall i \in S,\ \mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c_j) = b\right) \wedge \left(\hat{b}_j \ne b\right)\right] = o\!\left(\frac{1}{n(\kappa)^2}\right).$$
Thus, $P_M$ is $k_{\mathrm{TT}}$-available. This completes the proof of the claim.

Since $P_M$ is a $k_{\mathrm{TT}}$-available pirate decoder, and $\Pi_{\mathrm{TT}}$ is an $(n, k_{\mathrm{TT}})$-secure traitor-tracing scheme, running $\mathrm{Trace}_{\mathrm{TT}}$ on $P_M$ will almost always return some user $i \in [n]$. Thus there must be some user $i^*$ that $\mathrm{Trace}_{\mathrm{TT}}$ returns with probability $\gtrsim 1/n$. Specifically, for every $\kappa \in \mathbb{N}$, there exists $i^*(\kappa) \in [n(\kappa)]$ such that
$$\Pr_{\substack{\vec{sk} \leftarrow_R \mathrm{Gen}_{\mathrm{TT}}(1^{\kappa}) \\ P_M\text{'s, } \mathrm{Trace}_{\mathrm{TT}}\text{'s coins}}}\left[\mathrm{Trace}^{P_M(\vec{sk}, \cdot)}_{\mathrm{TT}}(\vec{sk}) = i^*(\kappa)\right] \ge \frac{1}{n(\kappa)} - o\!\left(\frac{1}{n(\kappa)}\right). \tag{1}$$
Let $S(\kappa) = [n(\kappa)] \setminus \{i^*(\kappa)\}$. Now we claim that if $M$ is differentially private, then $\mathrm{Trace}_{\mathrm{TT}}$ will output $i^*(\kappa)$ with significant probability, even when $P_M$ is not given the key of user $i^*(\kappa)$.

Claim 4.3.
If $M$ is differentially private (for $\varepsilon = O(1)$, $\delta = o(1/n)$), then
$$\Pr_{\substack{\vec{sk} \leftarrow_R \mathrm{Gen}_{\mathrm{TT}}(1^{\kappa}) \\ P_M\text{'s, } \mathrm{Trace}_{\mathrm{TT}}\text{'s coins}}}\left[\mathrm{Trace}^{P_M(\vec{sk}_S, \cdot)}_{\mathrm{TT}}(\vec{sk}) = i^*(\kappa)\right] \ge \Omega\!\left(\frac{1}{n(\kappa)}\right).$$

Proof of Claim 4.3. Fix any $\kappa$ and let $k_{\mathrm{TT}} = k_{\mathrm{TT}}(\kappa)$, $i^* = i^*(\kappa)$, and $S = S(\kappa)$ as above. Let $D = \vec{sk}$ and $D_{-i^*} = \vec{sk}_S$. Take $T$ to be the set of responses $\hat{b}_1, \dots, \hat{b}_{k_{\mathrm{TT}}}$ such that $\mathrm{Trace}_{\mathrm{TT}}(\vec{sk})$, after querying its oracle on ciphertexts $c_1, \dots, c_{k_{\mathrm{TT}}}$ and receiving responses $\hat{b}_1, \dots, \hat{b}_{k_{\mathrm{TT}}}$, outputs $i^*$ ($T$ depends on the coins of $\mathrm{Gen}_{\mathrm{TT}}$ and $\mathrm{Trace}_{\mathrm{TT}}$). By differential privacy, we have that
$$\Pr\left[M(D, q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}}) \in T\right] \le e^{O(1)} \cdot \Pr\left[M(D_{-i^*}, q_{c_1}, \dots, q_{c_{k_{\mathrm{TT}}}}) \in T\right] + o\!\left(\frac{1}{n}\right).$$
Note that the queries constructed by $P_M$ depend only on $c_1, \dots, c_{k_{\mathrm{TT}}}$, not on $\vec{sk}_S$. Also note that the final rounding step does not depend on the input at all. Thus, for every $T \subseteq \{0,1\}^{k_{\mathrm{TT}}}$,
$$\Pr\left[P_M(\vec{sk}, c_1, \dots, c_{k_{\mathrm{TT}}}) \in T\right] \le e^{O(1)} \cdot \Pr\left[P_M(\vec{sk}_S, c_1, \dots, c_{k_{\mathrm{TT}}}) \in T\right] + o\!\left(\frac{1}{n}\right). \tag{2}$$
The claim follows by combining with (1).

To complete the proof, notice that the probability in Claim 4.3 is exactly the probability that $\mathrm{Trace}_{\mathrm{TT}}$ outputs the user $i^*$ when given the oracle $P_M(\vec{sk}_S, \cdot)$, for $S = [n] \setminus \{i^*\}$. However, the fact that $P_M$ is efficient, and $\Pi_{\mathrm{TT}}$ is a secure traitor-tracing scheme, implies that this probability is $o(1/n)$. Thus we have obtained a contradiction. This completes the proof of the theorem.

5 Constructions of Traitor-Tracing Schemes

In this section we show how to construct traitor-tracing schemes that satisfy Definition 3.2, and thus can be used to instantiate Theorem 4.1.
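The heart of the reduction above is how the sanitizer is repackaged as a pirate decoder (Algorithm 1): treat the coalition's keys as database rows, ask the sanitizer the decryption queries, and round each real answer to the nearest bit. A minimal sketch; `exact_sanitizer` is a hypothetical stand-in that returns exact query averages, which any $\alpha$-accurate sanitizer with $\alpha < 1/2$ approximates closely enough for the rounding to recover the same bits:

```python
# Sketch of the rounding step of the pirate decoder P_M: when all coalition
# keys decrypt c_j to a bit b, q_{c_j}(D) = b exactly, so any answer within
# alpha < 1/2 of b rounds back to b.
def pirate_decode(sanitizer, coalition_keys, queries):
    answers = sanitizer(list(coalition_keys), queries)
    return [1 if a >= 0.5 else 0 for a in answers]

# Hypothetical stand-in sanitizer computing exact averages (no privacy).
exact_sanitizer = lambda db, qs: [sum(q(x) for x in db) / len(db) for q in qs]

keys = [(1, 0), (1, 1), (1, 0)]  # toy "keys" as database rows
bits = pirate_decode(exact_sanitizer, keys, [lambda x: x[0], lambda x: x[1]])
print(bits)  # [1, 0]: all keys agree on query 1; query 2 averages to 1/3
```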
First we informally describe a simple construction that requires the tracing algorithm to make a sub-optimal number of queries, but will hopefully give the reader more intuition about the construction and how it differs from previous constructions of traitor-tracing schemes. Then we give precise definitions of the encryption schemes (Section 5.2) and fingerprinting codes (Section 5.3) required for our construction. We then present the final construction more formally (Section 5.4) and prove its security. Finally, we use the weakened security requirements of the encryption scheme to show that our traitor-tracing scheme can be instantiated so that decryption is computable by constant-depth circuits (Section 5.6).

5.1 A Simple Construction

Our construction is a variant of the most basic traitor-tracing scheme [CFN94]. Start with any encryption scheme $(\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$. Generate an independent key $sk^{(i)} \leftarrow_R \mathrm{Gen}$ for each user (we will ignore the security parameter in this informal description). To encrypt a bit $b \in \{0,1\}$, we encrypt it under each user's key independently and concatenate the ciphertexts. That is,
$$\mathrm{Enc}_{\mathrm{TT}}(\vec{sk}, b) = (\mathrm{Enc}(sk^{(1)}, b), \dots, \mathrm{Enc}(sk^{(n)}, b)).$$
Clearly each user can decrypt the ciphertext by applying $\mathrm{Dec}$, as long as she knows which part of the ciphertext to decrypt.

Now we describe how an available pirate decoder for this scheme can be traced. As with all traitor-tracing schemes, we will form ciphertexts that different users would decrypt differently, assuming they decrypt as intended using the algorithm $\mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, \cdot)$. We can do so with the following algorithm:
$$\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i) = (\mathrm{Enc}(sk^{(1)}, 1), \dots, \mathrm{Enc}(sk^{(i)}, 1), \mathrm{Enc}(sk^{(i+1)}, 0), \dots, \mathrm{Enc}(sk^{(n)}, 0))$$
for $i = 0, 1, \dots, n$. The algorithm forms a ciphertext that users $1, \dots, i$ will decrypt to 1 and users $i+1, \dots, n$ will decrypt to 0.

The tracing algorithm generates a random sequence of indices $i_1, \dots, i_{k_{\mathrm{TT}}} \in \{0, 1, \dots, n\}$, for $k_{\mathrm{TT}} = (n+1)s$, such that each element of $\{0, 1, \dots, n\}$ appears exactly $s$ times, where $s$ is a parameter to be chosen later. Then, for every $j$ it generates a ciphertext $c_j \leftarrow_R \mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i_j)$. Next, it queries $P(\vec{sk}_S, c_1, \dots, c_{k_{\mathrm{TT}}})$. Given the output of the pirate, the tracing algorithm computes
$$P_i = \frac{1}{s} \sum_{j:\, i_j = i} P(\vec{sk}_S, c_1, \dots, c_{k_{\mathrm{TT}}})_j \quad \text{for } i = 0, 1, \dots, n.$$
Finally, the tracing algorithm outputs any $i^*$ such that $P_{i^*} - P_{i^*-1} \ge 1/n$.

Now we explain why this algorithm successfully traces efficient available pirate decoders. Notice that if we choose $c$ according to $\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, 0)$, then every user decrypts $c$ to 0, so $P_0 = 0$. Similarly, $P_n = 1$. Thus, there exists $i^*$ such that $P_{i^*} - P_{i^*-1} \ge 1/n$. Next, we argue that $i^*$ is in $S$ except with small probability. Notice that $\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i^*)$ and $\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i^*-1)$ differ only in the message encrypted under key $sk^{(i^*)}$, so if $i^* \notin S$, this key is unknown to the pirate decoder.
The security of the encryption scheme (made precise in Definition 5.2) guarantees that if $sk^{(i^*)}$ is unknown to an efficient pirate, then we can replace $k_{\mathrm{TT}}$ uses of $\mathrm{Enc}(sk^{(i^*)}, 1)$ with $\mathrm{Enc}(sk^{(i^*)}, 0)$, and this change will only affect the success probability of the pirate by $o(1/n)$. But after we make this replacement, $\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i^*)$ and $\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, i^*-1)$ are (perfectly, information-theoretically) indistinguishable to the pirate. Since the sequence of indices $i_1, \dots, i_{k_{\mathrm{TT}}}$ is random, the pirate has no information about which elements $i_j$ equal $i^*$ and which equal $i^*-1$. Thus, if the pirate wants to make $P_{i^*}$ larger than $P_{i^*-1}$, for some $i^* \notin S$, she can do no better than to "guess." If we take $s = \tilde{O}(n^2)$ and apply a Chernoff bound, it turns out that for every $i \notin S$, $P_i - P_{i-1} = o(1/n)$. This conclusion also holds after we take into account the security loss of the encryption scheme, which is $o(1/n)$. Thus, the scheme we described is a secure traitor-tracing scheme in the sense of Definition 3.2.

In arguing that the scheme is secure, we used the fact that $P_0 = 0$ and $P_n = 1$ no matter what other queries are made to the pirate. In many applications, this assumption would not be reasonable. However, when the pirate is derived from an accurate sanitizer, this condition will be satisfied. For this scheme, the tracer makes $(n+1)s = \tilde{O}(n^3)$ queries. Before proceeding, we will explain how to reduce the number of queries from $\tilde{O}(n^3)$ to $\tilde{O}(n^2)$. The high-level argument that the scheme is secure used two facts:

1. By the availability of the pirate decoder, if every user in $S$ would decrypt a ciphertext $c$ to $b$, then the pirate decrypts $c$ to $b$ (in the above, $P_0 = 0$, $P_n = 1$).

2. Because of the encryption, a pirate decoder without user $i$'s key "doesn't know" how user $i$ would decrypt each ciphertext.
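The mechanics of this simple construction can be sketched end to end. The per-user "cipher" below is a one-bit XOR pad reused across ciphertexts, which is not a secure encryption scheme; it only illustrates $\mathrm{TrEnc}_{\mathrm{TT}}$ and the gap-finding tracer, and for simplicity the toy pirate is queried one ciphertext at a time. All names are illustrative, not the paper's.

```python
import random
from collections import defaultdict

def gen_tt(n, rng):
    # One pad bit per user; an insecure stand-in for per-user keys.
    return [rng.randint(0, 1) for _ in range(n)]

def tr_enc_tt(keys, i):
    # Users 1..i (0-indexed u < i) decrypt to 1, the rest decrypt to 0.
    return [pad ^ (1 if u < i else 0) for u, pad in enumerate(keys)]

def dec_tt(keys, u, c):
    # User u reads only component u of the concatenated ciphertext.
    return c[u] ^ keys[u]

def trace(n, s, keys, pirate, rng):
    # Query the pirate s times per index, average its answers per index,
    # and output an i* with P_{i*} - P_{i*-1} >= 1/n (one must exist,
    # since P_0 = 0 and P_n = 1 for an available pirate).
    indices = [i for i in range(n + 1) for _ in range(s)]
    rng.shuffle(indices)
    totals = defaultdict(float)
    for i in indices:
        totals[i] += pirate(tr_enc_tt(keys, i))
    P = [totals[i] / s for i in range(n + 1)]
    return next(i for i in range(1, n + 1) if P[i] - P[i - 1] >= 1.0 / n)

# A pirate that decrypts honestly using only user 2's key (0-indexed u = 1):
# it answers 1 exactly when the index i is at least 2, so the gap sits at i = 2.
rng = random.Random(0)
keys = gen_tt(4, rng)
pirate = lambda c: dec_tt(keys, 1, c)
print(trace(4, 3, keys, pirate, rng))  # 2
```

The traced index 2 identifies the colluding user, independent of the random pads and of the shuffle, because the pirate's answer depends only on the bit its key decrypts.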
Systems leveraging these two properties to identify a colluding user are called fingerprinting codes [BS98], and have been studied extensively. In fact, the tracing algorithm we described is identical to the tracing algorithm we define in Section 5.4, but instantiated with the fingerprinting code of Boneh and Shaw [BS98], which has length $\tilde{O}(n^3)$. Tardos [Tar08] constructed shorter fingerprinting codes, with length $\tilde{O}(n^2)$, which we use to reduce the number of queries needed to trace. Next we define the precise security requirement we need from the underlying encryption scheme, and then we give a formal definition of fingerprinting codes.

5.2 Encryption Schemes

We will build our traitor-tracing scheme from a suitable encryption scheme. An encryption scheme is a tuple of efficient algorithms $(\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$. All the algorithms may be randomized except for $\mathrm{Dec}$. The scheme has the following syntactic properties:

• The algorithm $\mathrm{Gen}$ takes a security parameter $\kappa$, runs in time $\mathrm{poly}(\kappa)$, and returns a private key $sk \in \{0,1\}^{\kappa}$. Formally, $sk \leftarrow_R \mathrm{Gen}(1^{\kappa})$.
• The algorithm $\mathrm{Enc}$ takes a private key and a message bit $b \in \{0,1\}$, runs in time $\mathrm{poly}(\kappa)$, and generates a ciphertext $c \in \mathcal{C} = \mathcal{C}(\kappa)$. Formally, $c \leftarrow_R \mathrm{Enc}(sk, b)$.
• The algorithm $\mathrm{Dec}$ takes a private key $sk \in \{0,1\}^{\kappa}$ and a ciphertext $c \in \mathcal{C}(\kappa)$, runs in time $\mathrm{poly}(\kappa)$, and returns a message bit $\hat{b}$.

First we define (perfectly) correct decryption.⁶

Definition 5.1 (Correctness). An encryption scheme $(\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$ is (perfectly) correct if for every $b \in \{0,1\}$ and every $\kappa \in \mathbb{N}$,
$$\Pr_{sk \leftarrow_R \mathrm{Gen}(1^{\kappa})}\left[\mathrm{Dec}(sk, \mathrm{Enc}(sk, b)) = b\right] = 1.$$

We require that our schemes have the following $k_{\mathrm{Enc}}$-message security property.

Definition 5.2 (Security of Encryption). Let $\varepsilon_{\mathrm{Enc}}: \mathbb{N} \to [0,1]$, $k_{\mathrm{Enc}}: \mathbb{N} \to \mathbb{N}$, and $T_{\mathrm{Enc}}: \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ be functions.
An encryption scheme $\Pi_{\mathrm{Enc}} = (\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$ is $(\varepsilon_{\mathrm{Enc}}, k_{\mathrm{Enc}}, T_{\mathrm{Enc}})$-secure if for every $T_{\mathrm{Enc}}(\kappa, k_{\mathrm{Enc}}(\kappa))$-time algorithm $A_{\mathrm{Enc}}$ and every $b = (b_1, \dots, b_{k_{\mathrm{Enc}}}),\ b' = (b'_1, \dots, b'_{k_{\mathrm{Enc}}}) \in \{0,1\}^{k_{\mathrm{Enc}}}$ (for $k_{\mathrm{Enc}} = k_{\mathrm{Enc}}(\kappa)$),
$$\left|\Pr_{sk \leftarrow_R \mathrm{Gen}(1^{\kappa})}\left[A_{\mathrm{Enc}}(\mathrm{Enc}(sk, b_1), \dots, \mathrm{Enc}(sk, b_{k_{\mathrm{Enc}}})) = 1\right] - \Pr_{sk \leftarrow_R \mathrm{Gen}(1^{\kappa})}\left[A_{\mathrm{Enc}}(\mathrm{Enc}(sk, b'_1), \dots, \mathrm{Enc}(sk, b'_{k_{\mathrm{Enc}}})) = 1\right]\right| \le \varepsilon_{\mathrm{Enc}}(\kappa).$$
Notice that we do not require $\Pi_{\mathrm{Enc}}$ to be secure against adversaries that are given $\mathrm{Enc}(sk, \cdot)$ as an oracle. That is, we do not require CPA security.

Definition 5.3 (Encryption Scheme). We say that a tuple of algorithms $\Pi_{\mathrm{Enc}} = (\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$ is an $(\varepsilon_{\mathrm{Enc}}, k_{\mathrm{Enc}}, T_{\mathrm{Enc}})$-encryption scheme if it satisfies correctness and $(\varepsilon_{\mathrm{Enc}}, k_{\mathrm{Enc}}, T_{\mathrm{Enc}})$-security.

5.3 Fingerprinting Codes

As we alluded to above, our tracing algorithm will use a fingerprinting code, introduced by Boneh and Shaw [BS98]. A fingerprinting code is a pair of efficient (possibly randomized) algorithms $(\mathrm{Gen}_{\mathrm{FP}}, \mathrm{Trace}_{\mathrm{FP}})$ with the following syntax.

• The algorithm $\mathrm{Gen}_{\mathrm{FP}}$ takes a number of users $n$ as input and outputs a codebook of $n$ codewords of length $\ell_{\mathrm{FP}} = \ell_{\mathrm{FP}}(n)$, $W = (w^{(1)}, \dots, w^{(n)}) \in (\{0,1\}^{\ell_{\mathrm{FP}}})^n$. Formally, $W \leftarrow_R \mathrm{Gen}_{\mathrm{FP}}(1^n)$. We will think of $W \in \{0,1\}^{n \times \ell_{\mathrm{FP}}}$ as a matrix with each row containing a codeword.
• The algorithm $\mathrm{Trace}_{\mathrm{FP}}$ takes an $n$-user codebook $W$ and a word $w' \in \{0,1\}^{\ell_{\mathrm{FP}}}$ and returns an index $i \in [n]$. Formally, $i = \mathrm{Trace}_{\mathrm{FP}}(W, w')$.

⁶It would not substantially affect our results if $\mathrm{Dec}$ were allowed to fail with negligible probability; however, we will assume perfect correctness for ease of presentation.
Given a non-empty subset $S \subseteq [n]$ and a set of codewords $W_S = (w^{(i)})_{i \in S}$, we define the set of feasible codewords to be
$$F(W_S) = \left\{ w' \in \{0,1\}^{\ell_{\mathrm{FP}}} \ \middle|\ \forall j \in [\ell_{\mathrm{FP}}]\ \exists i \in S:\ w'_j = w^{(i)}_j \right\}.$$
Informally, if all users in $S$ have a 0 (resp. 1) in the $j$-th symbol of their codeword, then they must produce a word with 0 (resp. 1) as the $j$-th symbol. We also define the critical positions to be the set of indices for which this constraint is binding. That is,
$$\mathrm{Crit}(W_S) = \left\{ j \in [\ell_{\mathrm{FP}}] \ \middle|\ \forall i, i' \in S:\ w^{(i)}_j = w^{(i')}_j \right\}.$$
The security of a fingerprinting code asserts that an adversary who is given a subset $W_S$ of the codewords should not be able to produce an element of $F(W_S)$ that does not trace to a user $i \in S$. More formally:

Definition 5.4 (Secure Fingerprinting Code). Let $\varepsilon_{\mathrm{FP}}: \mathbb{N} \to [0,1]$ and $\ell_{\mathrm{FP}}: \mathbb{N} \to \mathbb{N}$ be functions. A pair of algorithms $(\mathrm{Gen}_{\mathrm{FP}}, \mathrm{Trace}_{\mathrm{FP}})$ is an $(\varepsilon_{\mathrm{FP}}, \ell_{\mathrm{FP}})$-fingerprinting code if $\mathrm{Gen}_{\mathrm{FP}}(1^n)$ outputs a codebook $W \in \{0,1\}^{n \times \ell_{\mathrm{FP}}(n)}$, and furthermore, for every (possibly inefficient) algorithm $A_{\mathrm{FP}}$ and every non-empty $S \subseteq [n]$,
$$\Pr_{W \leftarrow_R \mathrm{Gen}_{\mathrm{FP}}(1^n)}\left[A_{\mathrm{FP}}(W_S) \in F(W_S) \wedge \mathrm{Trace}_{\mathrm{FP}}(W, A_{\mathrm{FP}}(W_S)) \notin S\right] \le \varepsilon_{\mathrm{FP}}(n),$$
where the two executions of $A_{\mathrm{FP}}$ are understood to be the same.

Tardos [Tar08] gave a construction of fingerprinting codes of essentially optimal length, improving on the original construction of Boneh and Shaw [BS98].

Theorem 5.5 ([Tar08]). For every function $\varepsilon_{\mathrm{FP}}: \mathbb{N} \to [0,1]$, there exists an $(\varepsilon_{\mathrm{FP}}, O(n^2 \log(n/\varepsilon_{\mathrm{FP}})))$-fingerprinting code. In particular, there exists an $(o(1/n^2), O(n^2 \log n))$-fingerprinting code.

5.4 The Traitor-Tracing Scheme

We are now ready to state the construction more formally. The key generation, encryption, and decryption algorithms are as described in the sketch (Section 5.1), and are stated below.
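The sets $F(W_S)$ and $\mathrm{Crit}(W_S)$ defined above are easy to compute directly. A small sketch with codewords as bit tuples (names illustrative):

```python
# Feasibility and critical positions for a coalition's codewords W_S.
def feasible(coalition_words, w_prime):
    # w' is in F(W_S) iff every position of w' matches some coalition codeword.
    return all(any(w[j] == w_prime[j] for w in coalition_words)
               for j in range(len(w_prime)))

def critical_positions(coalition_words):
    # Crit(W_S): positions where every coalition codeword agrees.
    first = coalition_words[0]
    return [j for j in range(len(first))
            if all(w[j] == first[j] for w in coalition_words)]

W_S = [(0, 1, 0), (0, 1, 1)]
print(critical_positions(W_S))   # [0, 1]
print(feasible(W_S, (0, 1, 1)))  # True
print(feasible(W_S, (1, 1, 0)))  # False: no coalition word has 1 in position 0
```

In the critical positions the coalition is forced to copy the shared symbol, which is exactly what the tracing argument in the next subsection exploits.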
Algorithm 2: The algorithms $(\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}})$ for $\Pi_{\mathrm{TT}}$.
Let an encryption scheme $\Pi_{\mathrm{Enc}} = (\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$ and a function $n: \mathbb{N} \to \mathbb{N}$ be parameters of the scheme. Assume that $n(\kappa) \le 2^{\kappa/2}$ for every $\kappa \in \mathbb{N}$.
$\mathrm{Gen}_{\mathrm{TT}}(1^{\kappa})$: For every user $i = 1, \dots, n(\kappa)$, let $\overline{sk}^{(i)} \leftarrow_R \mathrm{Gen}(1^{\kappa/2})$ and let $sk^{(i)} = (\overline{sk}^{(i)}, i)$ (padded with zeros to have length exactly $\kappa$). Output $\vec{sk} = (sk^{(1)}, \dots, sk^{(n)})$. (We will sometimes use $sk^{(i)}$ and $\overline{sk}^{(i)}$ interchangeably.)
$\mathrm{Enc}_{\mathrm{TT}}(sk^{(1)}, \dots, sk^{(n)}, b)$: For every user $i$, let $c^{(i)} \leftarrow_R \mathrm{Enc}(sk^{(i)}, b)$. Output $c = (c^{(1)}, \dots, c^{(n)})$.
$\mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c)$: Output $\hat{b} = \mathrm{Dec}(sk^{(i)}, c^{(i)})$.

Algorithm 3: The tracing algorithm $\mathrm{Trace}_{\mathrm{TT}}$ for $\Pi_{\mathrm{TT}}$ and the subroutine $\mathrm{TrEnc}_{\mathrm{TT}}$.
Let a length-$\ell_{\mathrm{FP}} = \ell_{\mathrm{FP}}(n)$ fingerprinting code $\Pi_{\mathrm{FP}} = (\mathrm{Gen}_{\mathrm{FP}}, \mathrm{Trace}_{\mathrm{FP}})$ be a parameter of the scheme, and let $\Pi_{\mathrm{Enc}} = (\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec})$ be the encryption scheme used above.
$\mathrm{TrEnc}_{\mathrm{TT}}(sk^{(1)}, \dots, sk^{(n)}, W)$: Let $n \times k$ be the dimensions of $W$. For every $i \in [n]$, $j \in [k]$, let $c^{(i)}_j \leftarrow_R \mathrm{Enc}(sk^{(i)}, W_{i,j})$. For every $j \in [k]$, let $c_j = (c^{(1)}_j, \dots, c^{(n)}_j)$. Output $c_1, \dots, c_k$. (Notice that $\mathrm{Dec}(sk^{(i)}, c^{(i)}_j) = W_{i,j}$.)
$\mathrm{Trace}^P_{\mathrm{TT}}(\vec{sk})$: Let $n$ be the number of user keys and $\ell_{\mathrm{FP}} = \ell_{\mathrm{FP}}(n)$. Let $W \leftarrow_R \mathrm{Gen}_{\mathrm{FP}}(1^n)$. Let $\hat{b}_1, \dots, \hat{b}_{\ell_{\mathrm{FP}}} \leftarrow_R P(\mathrm{TrEnc}_{\mathrm{TT}}(\vec{sk}, W))$ and let $w' = \hat{b}_1 \| \cdots \| \hat{b}_{\ell_{\mathrm{FP}}}$. Output $i \leftarrow_R \mathrm{Trace}_{\mathrm{FP}}(W, w')$.

5.5 Security of $\Pi_{\mathrm{TT}}$

In this section we prove that our construction $\Pi_{\mathrm{TT}} = (\mathrm{Gen}_{\mathrm{TT}}, \mathrm{Enc}_{\mathrm{TT}}, \mathrm{Dec}_{\mathrm{TT}}, \mathrm{Trace}_{\mathrm{TT}})$ is an $(n, \ell_{\mathrm{FP}}(n))$-secure traitor-tracing scheme. It can be verified from the specification of the scheme that it has the desired syntactic properties, that it generates $n(\kappa)$ user keys, and that the tracing algorithm makes $\ell_{\mathrm{FP}}(n(\kappa))$ non-adaptive queries to its oracle.

Now we show how an available pirate decoder for this scheme can be traced. As in the sketch (Section 5.1), we want to generate a set of ciphertexts that different users decrypt in different ways. Specifically, given a fingerprinting code $W \in \{0,1\}^{n \times \ell_{\mathrm{FP}}}$ (represented as a matrix with $w^{(i)}$ in the $i$-th row), we want to generate a set of ciphertexts $c_1, \dots, c_{\ell_{\mathrm{FP}}}$ such that user $i$, if she decrypts as intended using $\mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, \cdot)$, will decrypt $c_j$ to $w^{(i)}_j$. That is, $\mathrm{Dec}_{\mathrm{TT}}(sk^{(i)}, c_j) = w^{(i)}_j$. $\mathrm{Trace}_{\mathrm{TT}}$ will query the pirate decoder on these ciphertexts, treat the responses as a word $w'$, run the tracing algorithm for the fingerprinting code on $w'$, and use the output of $\mathrm{Trace}_{\mathrm{FP}}$ as its own output.

If $P$ is available, its output will be a feasible codeword for $W_S$. To see this, recall that if every user $i \in S$ decrypts $c_j$ to the same bit, then an available pirate decoder $P(\vec{sk}_S, \cdot)$ decrypts $c_j$ to that bit. However, the critical positions of $W_S$ are exactly those positions $j$ for which every user $i \in S$ has the same symbol in position $j$. Thus, the codeword returned by the pirate is feasible, and the fingerprinting code's tracing algorithm can identify a user in $S$.

The catch in this argument is that $\mathrm{TrEnc}_{\mathrm{TT}}$ takes all of $W$ as input, whereas an attacker for the fingerprinting code is only allowed to see $W_S$, and thus cannot simulate $\mathrm{TrEnc}_{\mathrm{TT}}$ in a security reduction. However, if $P$ only has keys $\vec{sk}_S$, and $i \notin S$, then an efficient $P$ cannot decrypt the $i$-th component of a ciphertext $c = (c^{(1)}, \dots, c^{(n)})$. But these are the only components that depend on $w^{(i)}$. So $w^{(i)}$ is computationally hidden from $P$ anyway, and we could replace that codeword with a string of zeros without significantly affecting the success probability of $P$. Formalizing this intuition yields a valid attacker for the fingerprinting code, and hence a contradiction.

Theorem 5.6 (From Encryption to Traitor-Tracing).
Let \u03a0Enc be any (\u03b5Enc, kEnc, TEnc)-secure encryption scheme, and \u03a0FP be a (\u03b5FP, \u2113FP)-\ufb01ngerprinting code. Let n, kTT : N \u2192N be any functions such that for every \u03ba \u2208N, n(\u03ba) \u22642\u03ba/2 and 1. the encryption scheme and \ufb01ngerprinting code have su\ufb03ciently strong security, n(\u03ba) \u00b7 \u03b5Enc(\u03ba) + \u03b5FP(n(\u03ba)) = o \u0012 1 n(\u03ba)2 \u0013 , 2. the encryption scheme is secure for su\ufb03ciently many queries, kEnc(\u03ba) \u2265kTT(\u03ba) = \u2113FP(n(\u03ba)), 3. the encryption scheme is secure against adversaries whose running time is as long as the pirate decoder\u2019s, for every a > 0, TEnc(\u03ba/2, kTT(\u03ba)) \u2265(\u03ba + n(\u03ba) + kTT(\u03ba))a. Then \u03a0TT instantiated with \u03a0Enc and \u03a0FP is an (n, kTT)-traitor-tracing scheme. Proof. Suppose there exists a poly(\u03ba, n(\u03ba), kTT(\u03ba))-time pirate decoder P that violates the security of \u03a0TT. That is, for every \u03ba \u2208N, there exists S = S(\u03ba) \u2286[n(\u03ba)], |S| \u2265n(\u03ba) \u22121, such that Pr \u20d7 sk\u2190RGenTT(1\u03ba) \u0014 Trace P( \u20d7 skS(\u03ba),\u00b7) TT ( \u20d7 sk) \u0338\u2208S \u0015 = \u2126 \u0012 1 n(\u03ba) \u0013 where the probability is also taken over the coins of P and TraceTT. Since there are only n(\u03ba) such sets, for a randomly chosen i \u2190R [n(\u03ba)], we have Pr \u20d7 sk\u2190RGenTT(1\u03ba) i\u2190R[n(\u03ba)] \u0014 Trace P( \u20d7 skS\u2212i,\u00b7) TT ( \u20d7 sk) \u0338\u2208S \u0015 = \u2126 \u0012 1 n(\u03ba)2 \u0013 . 18 \fBoth of these probabilities are also taken over the coins of P and TraceTT. We will show that such a pirate decoder must either violate the security of the encryption scheme or violate the security of the \ufb01ngerprinting code.
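To make the syntax of Algorithms 2 and 3 concrete, here is a toy Python sketch. The inner cipher is only a placeholder with the right Gen/Enc/Dec interface (writing the pad into the ciphertext is obviously insecure); a real instantiation would use any scheme satisfying Definition 5.2, and the function names are ours:

```python
import secrets

# Toy stand-in for the underlying scheme Pi_Enc. NOT secure: the pad is
# stored in the ciphertext. It only exercises the structure of Pi_TT.
def Gen():
    return secrets.randbits(1)

def Enc(sk, b):
    pad = secrets.randbits(1)
    return (pad, pad ^ sk ^ b)

def Dec(sk, c):
    pad, body = c
    return pad ^ sk ^ body

def GenTT(n):
    # One independent key per user; sk^(i) carries the user index i.
    return [(Gen(), i) for i in range(n)]

def EncTT(keys, b):
    # Encrypt the same bit under every user's key.
    return [Enc(sk, b) for (sk, _i) in keys]

def DecTT(user_key, c):
    # User i reads only the i-th ciphertext component.
    sk, i = user_key
    return Dec(sk, c[i])

def TrEncTT(keys, W):
    # Tracing ciphertexts: c_j encrypts column j of the codebook W
    # component-wise, so user i decrypts c_j to W[i][j].
    n, k = len(W), len(W[0])
    return [[Enc(keys[i][0], W[i][j]) for i in range(n)] for j in range(k)]
```

The last invariant, DecTT(sk(i), cj) = W[i][j], is exactly the property the tracing argument relies on.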
Given a matrix W \u2208{0, 1}(n)\u00d7\u2113FP(n), we de\ufb01ne W\u2212i \u2208{0, 1}(n\u22121)\u00d7\u2113FP to be W with the i-th codeword removed and f W\u2212i \u2208{0, 1}n\u00d7\u2113FP(n) to be W with the i-th codeword replaced with \u20d7 0\u2113FP(n). We also use S\u2212i as a shorthand for [n] \\ {i} Consider the following algorithm AP FP Algorithm 4 The \ufb01ngerprinting security adversary. AP FP(S\u2212i, W\u2212i): Let n be the number of users for the \ufb01ngerprinting code and \u03ba be such that n(\u03ba) = n Generate keys \u20d7 sk \u2190R GenTT(1\u03ba) and ciphertexts (c1, . . . , c\u2113FP) \u2190R TrEncTT( \u20d7 sk, f W\u2212i) Output w\u2032 = (b b1, . . . ,b b\u2113FP) \u2190R P( \u20d7 sk\u2212i, c1, . . . , c\u2113FP) (Note that f W\u2212i is just W\u2212i with a row of zeros added, so the attacker is well-de\ufb01ned.) Since the \ufb01ngerprinting code is secure, for a randomly chosen i \u2190R [n] (in fact, for every i \u2208[n]), Pr W \u2190RGenFP(1n) i\u2190R[n] \u0002 AP FP(S\u2212i, W\u2212i) \u2208F(W\u2212i) \u2227TraceFP(W, AP FP(S\u2212i, W\u2212i)) = i \u0003 \u2264\u03b5FP(n) (3) This condition could hold simply because AFP outputs an infeasible codeword with high probability, not because we are successfully tracing a user in S. The next claim states that if P is an available pirate decoder, then this is not the case. Claim 5.7. Let kTT = kTT(\u03ba) = \u2113FP(n(\u03ba)) for every \u03ba \u2208N. If P is a kTT-available pirate decoder, then for every \u03ba \u2208N, every i \u2208[n(\u03ba)], and every W \u2208{0, 1}n\u00d7\u2113FP(n) (for n = n(\u03ba)) Pr \u0002 AP FP(S\u2212i, W\u2212i) \u0338\u2208F(W\u2212i) \u0003 = o \u0012 1 n(\u03ba)2 \u0013 Proof of Claim 5.7. If P is kTT-useful, then, by de\ufb01nition, for every \u20d7 sk = (sk(1), . . . , sk(n)), every i \u2286[n], and every c1, . . . 
, ckTT, if every user i\u2032 \u0338= i decrypts some cj to the same bit bj, then so does P( \u20d7 sk\u2212i, \u00b7) (with high probability). That is, for b b1, . . . ,b bkTT \u2190R P( \u20d7 sk\u2212i, c1, . . . , ckTT), Pr h \u2203j \u2208[kTT], b \u2208{0, 1} \u0010\u0010 \u2200i\u2032 \u0338= i, DecTT(sk(i\u2032), cj) = b \u0011 \u2227 \u0010 b bj \u0338= b \u0011\u0011i = o \u0012 1 n(\u03ba)2 \u0013 (4) Consider any critical position j \u2208Crit(W\u2212i). These are the positions for which every user i\u2032 \u0338= i has the same bit w(i\u2032) j = bj. It\u2019s easy to see from the de\ufb01nition of TrEncTT (and the correctness of \u03a0Enc) that if c1, . . . , ckTT \u2190R TrEncTT( \u20d7 sk, f W\u2212i) then every user i\u2032 \u0338= i will decrypt cj to bj. Thus, with probability close to 1, for every critical position j, the j-th output of P( \u20d7 sk\u2212i, c1, . . . , ckTT) will be equal to bj, which implies w\u2032 = (b b1, . . . ,b b\u2113FP) is feasible. Since P outputs feasible codewords with high probability, we obtain Pr W \u2190RGenFP(1n) i\u2190R[n] \u0002 TraceFP(W, AP FP(S\u2212i, W\u2212i)) = i \u0003 \u2264\u03b5FP(n(\u03ba)) + o \u0012 1 n(\u03ba)2 \u0013 (5) 19 \fby combining the previous claim with (3). There are only two di\ufb00erences between the success of the pirate decoder in fooling TraceTT and the success of the \ufb01ngerprinting adversary in fooling TraceFP (in the experiment described in (5)): The \ufb01rst is that in the traitor-tracing security condition, P is given \u20d7 sk\u2212i for a \ufb01xed i \u2208[n], whereas the \ufb01ngerprinting adversary is given W\u2212i for a random i \u2190R [n]. This di\ufb00erence only a\ufb00ects the error by a factor of n. 
That is, for every i \u2208[n] Pr h TraceP( \u20d7 sk\u2212i,\u00b7) TT ( \u20d7 sk) = i i \u2264n \u00b7 Pr i\u2190R[n] h TraceP( \u20d7 sk\u2212i,\u00b7) TT ( \u20d7 sk) = i i The second di\ufb00erence is that in TraceTT, the ciphertexts given to the pirate are generated by TrEncTT( \u20d7 sk, W) whereas in AFP the ciphertexts are generated by TrEncTT( \u20d7 sk, f W\u2212i). But these ciphertexts only di\ufb00er in the i-th component, and sk(i) is unknown to P, so this does not a\ufb00ect the behavior of the pirate decoder signi\ufb01cantly. This fact is established in the following claim. Claim 5.8. If \u03a0Enc is (\u03b5Enc, kEnc, TEnc)-secure for kEnc, TEnc as in the statement of the Theorem, then for every poly(\u03ba, n(\u03ba), kTT(\u03ba)) pirate decoder P, \f \f \f \f \f Pr W \u2190RGenFP(1n) \u20d7 sk\u2190RGenTT,i\u2190R[n] h TraceFP(W, P( \u20d7 sk\u2212i, TrEncTT( \u20d7 sk, W))) = i i \u2212 Pr W \u2190RGenFP(1n) \u20d7 sk\u2190RGenTT,i\u2190R[n] h TraceFP(W, P( \u20d7 sk\u2212i, TrEncTT( \u20d7 sk, f W\u2212i))) = i i \f \f \f \f \f \u2264\u03b5Enc(\u03ba) Proof of Claim 5.8. Let \u03a0Enc = (Gen, Enc, Dec) be the encryption scheme. The main observation required to prove the claim is that the two experiments we want to relate can both be simulated without sk(i), given challenges for the encryption scheme (De\ufb01nition 5.2). Fix a codebook W \u2190R GenFP(1n). Now consider two distributions on ciphertexts (of \u03a0Enc): In either case, generate a random key sk(i) \u2190R Gen(1\u03ba) \u2022 In the \ufb01rst case c(i) 1 \u2190R Enc(sk(i), w(i) 1 ), . . . , c(i) \u2113FP \u2190R Enc(sk(i), w(i) \u2113FP) \u2022 In the second case sk(i) \u2190R Gen(1\u03ba) and c(i) 1 \u2190R Enc(sk(i), 0), . . . , c(i) \u2113FP \u2190R Enc(sk(i), 0) Suppose we receive a set of \u2113FP ciphertexts from one of these two distributions. Note that GenTT chooses keys for each user independently, and TrEncTT generates ciphertext components for each user independently. 
So we can generate keys \u20d7 sk\u2212i, and ciphertext components for users other than i independently, and use the challenge ciphertexts in place of the ciphertext components for user i, without knowing sk(i). Suppose we simulate TrEncTT( \u20d7 sk, W) in this way. Notice that if the challenge ciphertexts come from the \ufb01rst distribution, then simulated ciphertexts will be distributed exactly as in TrEncTT( \u20d7 sk, W), and if the challenge ciphertexts come from the second distribution, then the simulated ciphertexts will be distributed exactly as in TrEncTT( \u20d7 sk, f W\u2212i). But, if the claim were false, then we would have found an adversary for the encryption scheme that can distinguish between the two distributions with advantage greater than \u03b5Enc(\u03ba). It is easy to see that if the pirate decoder is e\ufb03cient, then so will the adversary for the encryption scheme (since TraceFP, Gen, Enc are all assumed to be e\ufb03cient. We conclude that if the claim is false, then AEnc violates the security of \u03a0Enc. 20 \fWe now complete the proof of the theorem by combining Equation (5) and Claim 5.8. 5.6 Decryption Function Family of \u03a0TT Recall that the two goals of constructing a new traitor-tracing scheme were to trace stateful pirates and to reduce the complexity of decryption. We addressed tracing of stateful pirates in the previous section, and now we turn to the complexity of decryption. We do so by instantiating the traitortracing scheme with various encryption schemes and making two observations: 1) The type of encryption schemes we require are su\ufb03ciently weak that there already exist plausible candidates with a very simple decryption operation, and 2) Decryption for the traitor-tracing scheme is not much more complex than decryption for the underlying encryption scheme. We summarize the second point with the following simple lemma. Lemma 5.9 (Decryption Function Family for \u03a0TT). 
Let \u03a0TT be as de\ufb01ned, with \u03a0Enc as its underlying encryption scheme. Let (sk, i) = sk \u2208{0, 1}\u03ba and c = (c(1), . . . , c(n)) \u2208C(\u03ba) be any user key and ciphertext for \u03a0TT. Then DecTT,c(sk) = DecTT,c(sk, i) = _ i\u2032\u2208[n] \u00001i\u2032(i) \u2227Decc(i\u2032)(sk) \u0001 Here, the function 1x(y) takes the value 1 if y = x and 0 otherwise. The lemma follows directly from the construction of DecTT. Also note that the function 1i\u2032 : {0, 1}\u2308log n\u2309\u2192{0, 1} is just a conjunction of \u2308log n\u2309bits (a single gate of fan-in O(log n)), and we need to compute n of these functions. In addition to computing 1i\u2032 and Decc(i\u2032), there are n conjunctions and a single outer disjunction. Thus we add an additional n + 1 gates, compute decryption n times, and increase the depth by 2. Hence, an intuitive summary of the lemma is that if Dec can be implemented by circuits of size s and depth h, DecTT can be implemented by circuits of size n \u00b7 (s + O(log n)) = e O(ns) and depth h + 2. This summary will be precise enough to state our main results. By combining Lemma 5.9 with Theorem 5.6, we easily obtain the following corollary. Corollary 5.10 (One-way Functions Imply Traitor-Tracing w/ Poly-Time Decryption). Let n = n(\u03ba) be any polynomial in \u03ba. Assuming the existence of (non-uniformly secure) one-way functions, there exists an (n, e O(n2))-secure traitor-tracing scheme with decryption function family QDecTT,\u03ba consisting only of circuits of size poly(\u03ba) Proof. The existence of one-way functions implies the existence of an encryption scheme \u03a0Enc that is (1/\u03baa, \u03baa, \u03baa)-secure for every constant a > 0 and su\ufb03ciently large \u03ba with decryption function QDec,\u03ba consisting only of circuits of size t(\u03ba) = poly(\u03ba) for every \u03ba \u2208N. 
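The decryption formula of Lemma 5.9 is just a selection circuit: a disjunction over users i\u2032 of "index matches" AND "component decrypts to 1". A small sketch evaluating it over a hypothetical per-component decryptor `dec` (names are ours):

```python
def dec_tt(sk, i, c, dec):
    # Lemma 5.9 as a formula: OR over i' of ([i = i'] AND Dec_{c^(i')}(sk)).
    # Exactly one indicator fires, selecting the i-th ciphertext component.
    return any((i == ip) and dec(sk, c[ip]) for ip in range(len(c)))
```

Each indicator is a single fan-in-O(log n) gate, so the whole wrapper adds n + 1 extra gates and depth 2 on top of n copies of Dec, matching the size/depth accounting in the text.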
From Lemma 5.9, it is easy to see that if \u03a0TT uses \u03a0Enc as its encryption scheme, then QDecTT,\u03ba consists only of circuits of size e O(n(\u03ba)t(\u03ba/2)) = poly(\u03ba). Theorem 1.1 in the introduction follows by combining Theorem 4.1 with Corollary 5.10. We will now consider the possibility of constructing a traitor-tracing scheme where the decryption functionality can be implemented by circuits of constant depth, and thus obtaining hardness results for generic sanitizers that are e\ufb03cient for constant-depth queries (Theorem 1.2). First, we summarize our observation that the traitor-tracing scheme almost preserves the depth of the decryption function. 21 \fCorollary 5.11 (Encryption with Constant-Depth Decryption Implies Traitor-Tracing w/ Constant-Depth Decryption). Let n = n(\u03ba) be any polynomial in \u03ba. If there exists an encryption scheme, (Gen, Enc, Dec), that is (o(1/n2), \u03c9(n4), na)-secure for every a > 0 and has decryption family Q(\u03ba) Dec consisting of circuits of size poly(\u03ba) and depth h, then there exists an (n, e O(n2))-secure traitor-tracing scheme with decryption function family Q(\u03ba) DecTT consisting of circuits of size e O(n) \u00b7 poly(\u03ba) and depth h + 2. The corollary is clear from Lemma 5.9 and Theorem 5.6. The corollary is not interesting without an encryption scheme that can be decrypted by constant-depth circuits. However, we observe that such a scheme (meeting our relaxed security criteria) can be constructed from a su\ufb03ciently good local pseudorandom generator (PRG). A recent result of Applebaum [App12] gave the \ufb01rst plausible candidate construction of a local PRG for the range of parameters we need, giving plausibility to the assumption that such PRGs (and, as we show, traitor-tracing schemes with constant-depth decryption) exist. We note that local PRGs actually imply encryption schemes with local decryption, which is stronger than just constant-depth decryption.
Although it may be signi\ufb01cantly easier to construct encryption schemes that only have constant-depth decryption, we are not aware of any other ways of constructing such a scheme. De\ufb01nition 5.12 (Local Pseudorandom Generator). An e\ufb03cient algorithm G: {0, 1}\u03ba \u2192{0, 1}sPRG(\u03ba) is an \u03b5PRG-pseudorandom generator if for every poly(sPRG(\u03ba))-time adversary APRG \f \fPr [APRG(G(U\u03ba)) = 1] \u2212Pr \u0002 APRG(UsPRG(\u03ba)) = 1 \u0003\f \f \u2264\u03b5PRG(\u03ba) If, in addition, each bit of the output depends only on some set of L bits of the input, then G is a (\u03b5PRG, L)-local pseudorandom generator. It is a well-known result in cryptography that pseudorandom generators imply encryption schemes satisfying De\ufb01nition 5.2 (for certain ranges of parameters). We will use a particular construction whose decryption can be computed in constant-depth whenever the underlying PRG is locally-computable (or, more generally, computable by constant-depth circuits). The construction is the standard \u201ccomputational one-time pad\u201d; however, we give a construction to verify that the decryption can be computed by constant-depth circuits. Algorithm 5 An encryption scheme \u03a0LocalEnc that can be decrypted in constant depth. Gen(1\u03ba): Let s \u2190R {0, 1}\u03ba and output sk = s Enc(sk, b): Let r \u2190R {1, 2, . . . , sPRG(\u03ba)} and output c = (r, G(sk)r \u2295b) Dec(sk, c): Let (r\u2032, b\u2032) = c and output: b = G(sk)r\u2032 \u2295b\u2032 Lemma 5.13 (Local PRGs \u2192Encryption). If there exists a (\u03b5PRG(\u03ba), L)-local pseudorandom generator G: {0, 1}\u03ba \u2192{0, 1}sPRG(\u03ba), then there exists an (\u03b5Enc = \u03b5PRG + k2 Enc/sPRG, kEnc)-Secure Encryption Scheme (Gen, Enc, Dec) with decryption function family QDec,\u03ba consisting of circuits of size poly(\u03ba) and depth 4. 22 \fProof.
The security follows from standard arguments: If we choose a random s \u2190R {0, 1}\u03ba, then G(s) is indistinguishable from uniform up to error \u03b5PRG. If we generate kEnc encryptions with key s, and no two encryptions use the same choice of r, then the output is indistinguishable from encryptions using uniform random bits in place of G(s). If we use uniform random bits in place of G, then the message is information-theoretically hidden. The probability that no two encryptions out of kEnc use the same choice of r is at most k2 Enc/sPRG, so we lose this term in the security of the encryption scheme. Let 1i(j) be the indicator variable for the condition j = i. For every c = (r, b) \u2208C, we can write Dec(r,b)(s) = _ i\u2208[sPRG(\u03ba)] (1i(r) \u2227(Gi(s) \u2295b)) . Observe that, since Gi is a function of L bits of the input, it can be computed by a size-2L DNF (depth-2 circuit), thus Gi(s) \u2295b can be computed by a size 2L + 1, depth-3 circuit. The indicator 1i can be computed by a conjunction of \u2308log2 sPRG(\u03ba)\u2309bits, which is a size-\u2308log2 sPRG(\u03ba)\u2309, depth-1 circuit. The outer disjunction increases the depth by one level and the size by 1. Putting it all together, we have shown that Decr,b(s) can be computed by depth-4 circuits of size e O(2LsPRG(\u03ba)) = poly(sPRG(\u03ba)). Combining Corollary 5.11 with Lemma 5.13 easily yields the following corollary. Corollary 5.14 (Local Pseudorandom Generators Imply traitor-tracing w/ AC0 Decryption). Let n = n(\u03ba) be any polynomial in \u03ba. Assuming the existence of a (o(1/n2), n7, L)-local pseudorandom generator for some constant L \u2208N, there exists an (n, e O(n2))-secure traitor-tracing scheme with decryption function family QDecTT,\u03ba consisting of circuits of size e O(n) \u00b7 poly(\u03ba) and depth 6. Theorem 1.2 in the introduction follows by combining Theorem 4.1 with Corollary 5.14. 
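Algorithm 5 can be sketched as follows. The `toy_local_prg` below is only a stand-in with the right syntax and L-locality (each output bit XORs L fixed seed positions); it is not pseudorandom, and a real instantiation would use a local PRG such as Applebaum's candidate. All names are ours:

```python
import secrets

def toy_local_prg(seed, s_out, L=3):
    # Toy L-local map: output bit j is the XOR of L fixed seed positions.
    # Right syntax and locality only; NOT a pseudorandom generator.
    kappa = len(seed)
    return [sum(seed[(7 * j + t) % kappa] for t in range(L)) % 2
            for j in range(s_out)]

def gen(kappa):
    return [secrets.randbits(1) for _ in range(kappa)]

def enc(sk, b, s_out=64):
    r = secrets.randbelow(s_out)               # fresh output position
    return (r, toy_local_prg(sk, s_out)[r] ^ b)

def dec(sk, c, s_out=64):
    r, masked = c
    return toy_local_prg(sk, s_out)[r] ^ masked
```

Since each PRG output bit depends on only L seed bits, `dec` corresponds to the small constant-depth circuit built in the proof of Lemma 5.13.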
Acknowledgements We thank Cynthia Dwork for suggesting that we look further at the connection between traitortracing and di\ufb00erential privacy. We thank Salil Vadhan for helpful discussions about the connection between traitor-tracing and di\ufb00erential privacy, and about the presentation of this work. We also thank Dan Boneh, Moritz Hardt, Hart Montgomery, Ananth Raghunathan, Aaron Roth, Guy Rothblum, and Thomas Steinke for helpful discussions." + } + ], + "Zhun Deng": [ + { + "url": "http://arxiv.org/abs/2309.13786v2", + "title": "Distribution-Free Statistical Dispersion Control for Societal Applications", + "abstract": "Explicit finite-sample statistical guarantees on model performance are an\nimportant ingredient in responsible machine learning. Previous work has focused\nmainly on bounding either the expected loss of a predictor or the probability\nthat an individual prediction will incur a loss value in a specified range.\nHowever, for many high-stakes applications, it is crucial to understand and\ncontrol the dispersion of a loss distribution, or the extent to which different\nmembers of a population experience unequal effects of algorithmic decisions. We\ninitiate the study of distribution-free control of statistical dispersion\nmeasures with societal implications and propose a simple yet flexible framework\nthat allows us to handle a much richer class of statistical functionals beyond\nprevious work. Our methods are verified through experiments in toxic comment\ndetection, medical imaging, and film recommendation.", + "authors": "Zhun Deng, Thomas P. Zollo, Jake C. Snell, Toniann Pitassi, Richard Zemel", + "published": "2023-09-25", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Learning-based predictive algorithms are widely used in real-world systems and have significantly impacted our daily lives. 
However, many algorithms are deployed without sufficient testing or a thorough understanding of likely failure modes. This is especially worrisome in high-stakes application areas such as healthcare, finance, and autonomous transportation. In order to address this critical challenge and provide tools for rigorous system evaluation prior to deployment, there has been a rise in techniques offering explicit and finite-sample statistical guarantees that hold for any unknown data distribution and black-box algorithm, a paradigm known as distribution-free uncertainty quantification (DFUQ). In (Angelopoulos et al., 2021), a framework is proposed for selecting a model based on bounds on expected loss produced using validation data. Subsequent work (Snell et al., 2023) goes beyond expected loss to provide distribution-free control for a class of risk measures known as quantile-based risk measures (QBRMs) (Dowd and Blake, 2006). This 1 arXiv:2309.13786v2 [cs.LG] 6 Mar 2024 \fZ. DENG, T.P. ZOLLO, J.C. SNELL, T. PITASSI & R. ZEMEL. STATISTICAL DISPERSION CONTROL includes (in addition to expected loss): median, value-at-risk (VaR), and conditional value-at-risk (CVaR) (Rockafellar et al., 2000). For example, such a framework can be used to get bounds on the 80th percentile loss or the average loss of the 10% worst cases. While this is important progress towards the sort of robust system verification necessary to ensure the responsible use of machine learning algorithms, in some scenarios measuring the expected loss or value-at-risk is not enough. As models are increasingly deployed in areas with long-lasting societal consequences, we should also be concerned with the dispersion of error across the population, or the extent to which different members of a population experience unequal effects of decisions made based on a model\u2019s prediction. 
For example, a system for promoting content on a social platform may offer less appropriate recommendations for the long tail of niche users in service of a small set of users with high and typical engagement, as shown in (Lazovich et al., 2022). This may be undesirable from both a business and societal point of view, and thus it is crucial to rigorously validate such properties in an algorithm prior to deployment and understand how the outcomes disperse. To this end, we offer a novel study providing rigorous distribution-free guarantees for a broad class of functionals including key measures of statistical dispersion in society. We consider both differences in performance that arise between different demographic groups as well as disparities that can be identified even if one does not have reliable demographic data or chooses not to collect them due to privacy or security concerns. Well-studied risk measures that fit into our framework include the Gini coefficient (Yitzhaki, 1979) and other functions of the Lorenz curve as well as differences in group measures such as the median (Bhutta et al., 2020). See Figure 1 for a further illustration of loss dispersion. Figure 1: Example illustrating how two predictors (here h1 and h2) with the same expected loss can induce very different loss dispersion across the population. Left: The loss CDF produced by each predictor is bounded from below and above. Middle: The Lorenz curve is a popular graphical representation of inequality in some quantity across a population, in our case expressing the cumulative share of the loss experienced by the best-off \u03b2 proportion of the population. CDF upper and lower bounds can be used to bound the Lorenz curve (and thus Gini coefficient, a function of the shape of the Lorenz curve). Under h2 the worst-off population members experience most of the loss.
Right: Predictors with the same expected loss may induce different median loss for (possibly protected) subgroups in the data, and thus we may wish to bound these differences. In order to provide rigorous guarantees for socially important measures that go beyond expected loss or other QBRMs, we provide two-sided bounds for quantiles and nonlinear functionals of quantiles. Our framework is simple yet flexible and widely applicable to a rich class of nonlinear functionals of quantiles, including Gini coefficient, Atkinson index, and group-based measures of 2 \fZ. DENG, T.P. ZOLLO, J.C. SNELL, T. PITASSI & R. ZEMEL. STATISTICAL DISPERSION CONTROL inequality, among many others. Beyond our method for controlling this richer class of functionals, we propose a novel numerical optimization method that significantly tightens the bounds when data is scarce, extending earlier techniques (Moscovich et al., 2016; Snell et al., 2023). We conduct experiments on toxic comment moderation, detecting genetic mutations in cell images, and online content recommendation, to study the impact of our approach to model selection and tailored bounds. To summarize our contributions, we: (1) initiate the study of distribution-free control of societal dispersion measures; (2) generalize the framework of Snell et al. (2023) to provide bounds for nonlinear functionals of quantiles; (3) develop a novel optimization method that substantially tightens the bounds when data is scarce; (4) apply our framework to high-impact NLP, medical, and recommendation applications. 2. Problem setup We consider a black-box model that produces an output Z on every example. Our algorithm selects a predictor h, which maps an input Z \u2208Z to a prediction h(Z) \u2208\u02c6 Y. A loss function \u2113: \u02c6 Y \u00d7 Y \u2192R quantifies the quality of a prediction \u02c6 Y \u2208\u02c6 Y with respect to the target output y \u2208Y. Let (Z, Y ) be drawn from an unknown joint distribution P over Z \u00d7 Y. 
We define the random variable Xh := \u2113(h(Z), Y ) as the loss induced by h on P. The cumulative distribution function (CDF) of the random variable Xh is F h(x) := P(Xh \u2264x). For brevity, we sometimes use X and F when we do not need to explicitly consider h. We define the inverse of a CDF (also called inverse CDF) F as F \u2212(p) = inf{x : F(x) \u2265p} for any p \u2208R. Finally, we assume access to a set of validation samples (Z, Y )1:n = {(Z1, Y1), . . . , (Zn, Yn)} for the purpose of achieving distribution-free CDF control with mild assumptions on the loss samples X1:n. We emphasize that the \u201cdistribution-free\u201d requirement is on (Z, Y )1:n instead of X1:n, because the loss studied on the validation dataset is known to us and we can take advantage of properties of the loss such as boundedness. 3. Statistical dispersion measures for societal applications In this section, we motivate our method by studying some widely-used measures of societal statistical dispersion. There are key gaps between the existing techniques for bounding QBRMs and those needed to bound many important measures of statistical dispersion. We first define a QBRM: Definition 1 (Quantile-based Risk Measure) Let \u03c8(p) be a weighting function such that \u03c8(p) \u22650 and R 1 0 \u03c8(p) dp = 1. The quantile-based risk measure defined by \u03c8 is R\u03c8(F) := Z 1 0 \u03c8(p)F \u2212(p)dp. A QBRM is a linear functional of F \u2212, but quantifying many common group-based risk dispersion measures (e.g. Atkinson index) also involves forms like nonlinear functions of the (inverse) CDF or nonlinear functionals of the (inverse) CDF, and some (like maximum group differences) further involve nonlinear functions of functionals of the loss CDF. Thus a much richer framework for achieving bounds is needed here. 3 \fZ. DENG, T.P. ZOLLO, J.C. SNELL, T. PITASSI & R. ZEMEL. 
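On a validation sample, the empirical inverse CDF F^-(p) = inf{x : F(x) >= p} and the QBRM integral of Definition 1 can be approximated directly; a minimal Python sketch (function names are ours, and the integral is a simple Riemann sum over a grid):

```python
import math

def empirical_quantile(losses):
    # F^-(p) = inf{x : F(x) >= p} for the empirical CDF of `losses`.
    xs = sorted(losses)
    n = len(xs)
    def F_inv(p):
        k = max(0, math.ceil(p * n) - 1)
        return xs[min(k, n - 1)]
    return F_inv

def qbrm(losses, psi, grid=1000):
    # Riemann approximation of R_psi(F) = int_0^1 psi(p) F^-(p) dp,
    # where psi is nonnegative and integrates to 1.
    F_inv = empirical_quantile(losses)
    return sum(psi((k + 0.5) / grid) * F_inv((k + 0.5) / grid)
               for k in range(grid)) / grid
```

With psi = 1 this recovers the mean; psi(p) = 1[p >= alpha] / (1 - alpha) recovers CVaR at level alpha.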
STATISTICAL DISPERSION CONTROL For clarity, we use J as a generic term to denote either the CDF F or its inverse F \u2212depending on the context, and summarize the building blocks as below: (i) nonlinear functions of J, i.e. \u03be(J); (ii) functionals in the form of integral of nonlinear functions of J, i.e. R \u03c8(p)\u03be(J(p))dp for a weight function \u03c8; (iii) composed functionals as nonlinear functions of functionals \u03be(T(J)) for the functional T(J) with forms in (ii). 3.1. Standard measures of dispersion We start by introducing some classic non-group-based measures of dispersion. Those measures usually quantify wealth or consumption inequality within a social group (or a population) instead of quantifying differences among groups. Note that for all of these measures we only consider non-negative losses X, and assume that R 1 0 F \u2212(p)dp > 0 1. Gini family of measures. Gini coefficient (Yitzhaki, 1979; Yitzhaki and Schechtman, 2013) is a canonical measure of statistical dispersion, used for quantifying the uneven distribution of resources or losses. It summarizes the Lorenz curve introduced in Figure 4. From the definition of Lorenz curve, the greater its curvature is, the greater inequality there exists; the Gini coefficient is measuring the ratio of the area that lies between the line of equality (the 45\u25e6line) and the Lorenz curve to the total area under the line of equality. Definition 2 (Gini coefficient) For a non-negative random variable X, the Gini coefficient is G(X) := E|X \u2212X\u2032| 2EX = R 1 0 (2p \u22121)F \u2212(p)dp R 1 0 F \u2212(p)dp , where X\u2032 is an independent copy of X. G(X) \u2208[0, 1], with 0 indicating perfect equality. Because of the existence of the denominator in the Gini coefficient calculation, unlike in QBRM we need both an upper and a lower bound for F \u2212(see Section 4.1.1). In the appendix, we also introduce the extended Gini family. Atkinson index. 
The Atkinson index (Atkinson et al., 1970; Lazovich et al., 2022) is another renowned dispersion measure defined on the non-negative random variable X (e.g., income, loss), and improves over the Gini coefficient in that it is useful in determining which end of the distribution contributes most to the observed inequality by choosing an appropriate inequality-aversion parameter \u03b5 \u22650. For instance, the Atkinson index becomes more sensitive to changes at the lower end of the income distribution as \u03b5 increases. Definition 3 (Atkinson index) For a non-negative random variable X, for any \u03b5 \u22650, the Atkinson index is defined as the following if \u03b5 \u0338= 1: A(\u03b5, X) := 1 \u2212(E[X1\u2212\u03b5]) 1 1\u2212\u03b5 E[X] = 1 \u2212 \u0000 R 1 0 (F \u2212(p))1\u2212\u03b5dp \u0001 1 1\u2212\u03b5 R 1 0 F \u2212(p)dp . 1. This assumption is included for concise description, but not necessary and set 0/0 = 0. 4 \fZ. DENG, T.P. ZOLLO, J.C. SNELL, T. PITASSI & R. ZEMEL. STATISTICAL DISPERSION CONTROL And for \u03b5 = 1, A(1, X) := lim\u03b5\u21921 A(\u03b5, X), which will converge to a form involving the geometric mean of X. A(\u03b5, X) \u2208[0, 1], and 0 indicates perfect equality. The form of Atkinson index includes a nonlinear function of F \u2212, i.e. (F \u2212)1\u2212\u03b5, but this type of nonlinearity is easy to tackle since the function is monotonic w.r.t. the range of F \u2212(see Section 4.2.1). Remark 4 The reason we study the CDF of X and not X1\u2212\u03b5 is that it allows us to simultaneously control the Atkinson index for all \u03b5\u2019s. In addition, there are many other important measures of dispersion involving more complicated types of nonlinearity such as the quantile of extreme observations and mean of range. Those measures are widely used in forecasting weather events or food supply. We discuss and formulate these dispersion measures in the appendix. 3.2. 
Group-based measures of dispersion

Another family of dispersion measures aims to minimize differences in performance across possibly overlapping groups in the data defined by (protected) attributes like race and gender. Under equal opportunity (Hardt et al., 2016), false negative rates are made commensurate, while equalized odds (Hardt et al., 2016) aims to equalize both false positive and false negative rates among groups. More general attempts to induce fairly-dispersed outcomes include CVaR fairness (Williamson and Menon, 2019) and multi-calibration (Hébert-Johnson et al., 2018; Kearns et al., 2018). Our framework offers the flexibility to control a wide range of measures of a group CDF $F_g$, i.e. $T(F_g)$, as well as the variation of $T(F_g)$ between groups. As an illustration of the importance of such bounds, Bhutta et al. (2020) find that the median white family in the United States has eight times as much wealth as the median black family; this motivates a dispersion measure based on the difference in group medians.

Absolute/quadratic difference of risks and beyond. The simplest way to measure the dispersion of a risk measure (like the median) between two groups is through quantities such as $|T(F_g) - T(F_{g'})|$ or $[T(F_g) - T(F_{g'})]^2$. More generally, one can study $\xi(T(F_g) - T(F_{g'}))$ for a general nonlinear function $\xi$. These types of dispersion measures are widely used in algorithmic fairness (Hardt et al., 2016; Le Boudec, 2005).

CVaR-fairness risk measure and its extensions. Williamson and Menon (2019) further consider a distribution $P_g$ for each group and a distribution $P_{\mathrm{Idx}}$ over group indices. Letting $\mathrm{CVaR}_{\alpha, P_Z}(Z) := \mathbb{E}_{Z \sim P_Z}[Z \mid Z > \alpha]$ for any distribution $P_Z$, they define the following dispersion for the expected loss of group $g$ (i.e. $\mu_g := \mathbb{E}_{X \sim P_g}[X]$), for $\alpha \in (0, 1)$:
$$D_{CV,\alpha}(\mu_g) := \mathrm{CVaR}_{\alpha, P_{\mathrm{Idx}}}\big(\mu_g - \mathbb{E}_{g \sim P_{\mathrm{Idx}}}[\mu_g]\big).$$
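As a concrete reference point, the following minimal sketch (our own illustration with hypothetical helper names, not code from the paper) computes this CVaR-based dispersion for a finite family of group risks under a uniform $P_{\mathrm{Idx}}$, using the Rockafellar–Uryasev representation of CVaR; for an empirical distribution, the inner minimum over $\rho$ is attained at one of the sample points.

```python
import numpy as np

def cvar_dispersion(group_risks, alpha):
    """CVaR-fairness dispersion D_{CV,alpha} for finitely many groups under a
    uniform distribution over group indices, via the Rockafellar-Uryasev form
    min_rho { rho + E[(T - rho)_+] / (1 - alpha) } - E[T]."""
    t = np.asarray(group_risks, dtype=float)
    objective = lambda rho: rho + np.mean(np.maximum(t - rho, 0.0)) / (1.0 - alpha)
    # the objective is piecewise linear and convex in rho, with kinks only at
    # the sample points, so the minimum is attained at one of them
    return float(min(objective(rho) for rho in t) - t.mean())
```

When all groups have identical risk the dispersion is zero; otherwise it is positive, since CVaR dominates the mean.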
A natural extension is $D_{CV,\alpha}(T(F_g))$ for a general functional $T(F_g)$, which can be written more explicitly (Rockafellar et al., 2000) as:
$$D_{CV,\alpha}(T(F_g)) = \min_{\rho \in \mathbb{R}} \Big\{ \rho + \frac{1}{1-\alpha}\cdot\mathbb{E}_{g \sim P_{\mathrm{Idx}}}\big[T(F_g) - \rho\big]_+ \Big\} - \mathbb{E}_{g \sim P_{\mathrm{Idx}}}[T(F_g)].$$

The function $[T(F_g) - \rho]_+$ is a nonlinear function of $T(F_g)$, but it is monotonic when $\rho$ is fixed, and its composition with the expectation operator remains monotonic, which can be easily dealt with.

Uncertainty quantification of risk measures. Berkhouch et al. (2019) study the problem of uncertainty in risk assessment, which has important consequences for societal measures. They formulate a deviation-based approach to quantify uncertainty for risks, which includes forms like $\rho_\xi(P_{\mathrm{Idx}}) := \mathbb{E}_{g \sim P_{\mathrm{Idx}}}[\xi(T(F_g))]$ for different types of nonlinear functions $\xi$. Examples include variance uncertainty quantification, $\mathbb{E}_{g \sim P_{\mathrm{Idx}}}\big(T(F_g) - \mathbb{E}_{g \sim P_{\mathrm{Idx}}} T(F_g)\big)^2$, and $\mathbb{E}_\psi[\xi(F^-(\alpha))] := \int_0^1 \xi(F^-(\alpha))\psi(\alpha)d\alpha$, which quantifies how sensitive the $\alpha$-VaR value is with respect to its parameter $\alpha$, for a non-negative weight function $\psi$.

4. Distribution-free control of societal dispersion measures

In this section, we introduce a simple yet general framework for obtaining rigorous upper bounds on the statistical dispersion measures discussed in the previous section. We provide a high-level summary of the framework here, and leave detailed derivations and most examples to the appendix. Our discussion focuses on quantities related to inverse CDFs, but similar results can be obtained for CDFs.
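Before turning to the bounds, the standard measures of Section 3 can be made concrete with a minimal empirical sketch (our own illustration, not code from the paper) of the Gini coefficient and Atkinson index for a sample of non-negative losses:

```python
import numpy as np

def gini(x):
    """Empirical Gini coefficient G(X) = E|X - X'| / (2 E X), computed via the
    order-statistic identity sum_i (2i - n - 1) X_(i) / (n^2 * mean)."""
    x = np.sort(np.asarray(x, dtype=float))
    n, mean = x.size, x.mean()
    if mean == 0:
        return 0.0  # the 0/0 = 0 convention from the text
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (n ** 2 * mean))

def atkinson(x, eps):
    """Empirical Atkinson index A(eps, X); requires strictly positive losses
    for eps >= 1 (log / negative powers)."""
    x = np.asarray(x, dtype=float)
    if eps == 1.0:
        # limit eps -> 1: one minus the ratio of geometric mean to mean
        return 1.0 - float(np.exp(np.mean(np.log(x))) / x.mean())
    ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - float(ede / x.mean())
```

Both measures return 0 under perfect equality and move toward 1 as the loss distribution becomes more concentrated on a few individuals.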
In short, our framework involves two steps: produce upper and lower bounds on the CDF (and thus the inverse CDF) of the loss distribution, and use these to calculate bounds on a chosen target risk measure. First, we describe our extension of the one-sided bounds in (Snell et al., 2023) to the two-sided bounds necessary to control many societal dispersion measures of interest. Then we describe how these CDF bounds can be post-processed to provide control of risk measures defined by nonlinear functions and functionals of the CDF. Finally, we offer a novel optimization method for tightening the bounds for a chosen, possibly complex, risk measure.

4.1. Methods to obtain two-sided confidence bounds for CDFs

For loss values $\{X_i\}_{i=1}^n$, let $X_{(1)} \le \dots \le X_{(n)}$ denote the corresponding order statistics. For the uniform distribution over $[0,1]$, i.e. $U(0,1)$, let $U_1, \dots, U_n \sim_{iid} U(0,1)$, with corresponding order statistics $U_{(1)} \le \dots \le U_{(n)}$. We will also make use of the following:

Proposition 5 For the CDF $F$ of $X$, if there exist two CDFs $F_U, F_L$ such that $F_U \succeq F \succeq F_L$, then $F_L^- \succeq F^- \succeq F_U^-$.

We use $(\hat F^\delta_{n,L}, \hat F^\delta_{n,U})$ to denote a $(1-\delta)$-confidence bound pair ($(1-\delta)$-CBP), which satisfies $\mathbb{P}(\hat F^\delta_{n,U} \succeq F \succeq \hat F^\delta_{n,L}) \ge 1 - \delta$.

We extend the techniques developed in (Snell et al., 2023), wherein one-sided (lower) confidence bounds on the uniform order statistics are used to bound $F$. This is done by considering a one-sided minimum goodness-of-fit (GoF) statistic of the form $S := \min_{1 \le i \le n} s_i(U_{(i)})$, where $s_1, \dots, s_n : [0,1] \to \mathbb{R}$ are right-continuous monotone nondecreasing functions. Thus,
$\mathbb{P}\big(\forall i : F(X_{(i)}) \ge s_i^-(s_\delta)\big) \ge 1 - \delta$, for $s_\delta = \inf_r\{r : \mathbb{P}(S \ge r) \ge 1 - \delta\}$. Given this step function defined by $s_1, \dots, s_n$, it is easy to construct $\hat F^\delta_{n,L}$ via conservative completion of the CDF. Snell et al. (2023) found that a Berk-Jones bound could be used to choose appropriate $s_i$'s, and is typically much tighter than using the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality to construct a bound.

4.1.1. A REDUCTION APPROACH TO CONSTRUCTING UPPER BOUNDS OF CDFS

We now show how to leverage this approach to produce two-sided bounds. The following lemma reduces the construction of a CDF upper bound to that of lower bounds.

Lemma 6 For $0 \le L_1 \le L_2 \le \dots \le L_n \le 1$, if $\mathbb{P}(\forall i : F(X_{(i)}) \ge L_i) \ge 1 - \delta$, then
$$\mathbb{P}\Big(\forall i : \lim_{\epsilon \to 0^+} F(X_{(i)} - \epsilon) \le 1 - L_{n-i+1}\Big) \ge 1 - \delta.$$
Furthermore, let $R(x)$ be defined as $1 - L_n$ if $x < X_{(1)}$; $1 - L_{n-i+1}$ if $X_{(i)} \le x < X_{(i+1)}$ for $i \in \{1, 2, \dots, n-1\}$; and $1$ if $X_{(n)} \le x$. Then $F \preceq R$.

Thus, we can simultaneously obtain $(\hat F^\delta_{n,L}, \hat F^\delta_{n,U})$ by setting $L_i = s_i^-(s_\delta)$ and applying (different) conservative CDF completions. In practice, the CDF upper bound can be produced via post-processing of the lower bound. One clear advantage of this approach is that it avoids the need to independently produce a pair of bounds where each bound must hold with probability $1 - \delta/2$.

4.2. Controlling statistical dispersion measures

Having described how to obtain the CDF upper and lower bounds $(\hat F^\delta_{n,L}, \hat F^\delta_{n,U})$, we next turn to using them to control various important risk measures such as the Gini coefficient and group differences.

4.2.1.
CONTROL OF NONLINEAR FUNCTIONS OF CDFS

First we consider bounding $\xi(F^-)$, where $\xi$ maps $F^-$ to another function on $\mathbb{R}$.

Control for a monotonic function. We start with the simplest case, where $\xi$ is monotonic on the range of $X$. For example, if $\xi$ is an increasing function and, with probability at least $1-\delta$, $\hat F^\delta_{n,U} \succeq F \succeq \hat F^\delta_{n,L}$, then by Proposition 5 we have that $\xi(\hat F^{\delta,-}_{n,L}) \succeq \xi(F^-) \succeq \xi(\hat F^{\delta,-}_{n,U})$ holds with probability at least $1-\delta$. This property can be used to bound the Gini coefficient or the Atkinson index by controlling the numerator and the denominator separately, as integrals of monotonic functions of $F^-$.

Example 1 (Gini coefficient) Given a $(1-\delta)$-CBP $(\hat F^\delta_{n,L}, \hat F^\delta_{n,U})$ with $\hat F^\delta_{n,L} \succeq 0$,² we can bound the Gini coefficient as follows. Notice that
$$G(X) = \frac{\int_0^1 (2p-1)F^-(p)dp}{\int_0^1 F^-(p)dp} = \frac{\int_0^1 2p\,F^-(p)dp}{\int_0^1 F^-(p)dp} - 1.$$
Given $F^-(p) \ge 0$ (since we only consider non-negative losses, $X$ is always non-negative), we have, with probability at least $1-\delta$,
$$G(X) \le \frac{\int_0^1 2p\,\hat F^{\delta,-}_{n,L}(p)dp}{\int_0^1 \hat F^{\delta,-}_{n,U}(p)dp} - 1.$$

² This can easily be achieved by truncation at 0. Moreover, the construction of $\hat F^\delta_{n,L}$ in Section A.2 always satisfies this constraint.

Control for absolute and polynomial functions. Many societal dispersion measures involve absolute-value functions, e.g., the Hoover index or maximum group differences. We must also control polynomial functions of inverse CDFs, such as in the CDFs of extreme observations. For any polynomial function $\phi(s) = \sum_{k=0}^K \alpha_k s^k$: if $k$ is odd, $s^k$ is monotonic w.r.t.
$s$; if $k$ is even, $s^k = |s|^k$. Thus, we can group the terms $\alpha_k s^k$ according to the sign of $\alpha_k$ and the parity of $k$, and flexibly use the upper and lower bounds already established for the absolute-value function and for monotonic functions to obtain an overall upper bound.

Example 2 If we have $(T^\delta_L(F_g), T^\delta_U(F_g))$ such that $T^\delta_L(F_g) \le T(F_g) \le T^\delta_U(F_g)$ holds for all groups $g$ under consideration, then we can provide high-probability upper bounds for $\xi(T(F_{g_1}) - T(F_{g_2}))$ for any polynomial or absolute-value function $\xi$. For example, with probability at least $1-\delta$,
$$|T(F_{g_1}) - T(F_{g_2})| \le \max\big\{|T^\delta_U(F_{g_1}) - T^\delta_L(F_{g_2})|,\ |T^\delta_L(F_{g_1}) - T^\delta_U(F_{g_2})|\big\}.$$

Control for a general function. To handle general nonlinearities, we introduce the class of functions of bounded total variation. Roughly speaking, a function of bounded total variation on an interval has bounded total oscillation on that interval. This is a very rich class, including all continuously differentiable and Lipschitz continuous functions. The following theorem shows that such functions can always be decomposed into two monotonic functions.

Theorem 7 For $(\hat F^\delta_{n,L}, \hat F^\delta_{n,U})$, if $\xi$ is a function of bounded total variation on the range of $X$, then there exist increasing functions $f_1, f_2$ with explicit and computable forms such that, with probability at least $1-\delta$,
$$\xi(F^-) \preceq f_1(\hat F^{\delta,-}_{n,L}) - f_2(\hat F^{\delta,-}_{n,U}).$$

As an example, recall that Berkhouch et al. (2019) study forms like $\int_0^1 \xi(F^-(\alpha))\psi(\alpha)d\alpha$ to quantify how sensitive the $\alpha$-VaR value is w.r.t. its parameter $\alpha$. For nonlinear functions beyond polynomials, consider $\xi(x) = e^x + e^{-x}$: this can readily be bounded since it is a sum of monotone functions.
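The group-difference bound in Example 2 amounts to one line of arithmetic; a minimal sketch (helper name ours):

```python
def group_diff_upper_bound(t_l1, t_u1, t_l2, t_u2):
    """Upper bound on |T(F_g1) - T(F_g2)| given per-group bounds
    T_L <= T(F_g) <= T_U, as in Example 2: the worst case is attained at
    opposite corners of the rectangle of values consistent with the bounds."""
    return max(abs(t_u1 - t_l2), abs(t_l1 - t_u2))
```

Any pair of true values consistent with the per-group bounds has an absolute difference no larger than this quantity, so the bound inherits the $1-\delta$ guarantee of the per-group bounds.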
4.2.2. CONTROL OF NONLINEAR FUNCTIONALS OF CDFS AND BEYOND

Finally, we show how the techniques from the previous section can be applied to handle the general form
$$\int_0^1 \xi\big(T(F^-(\alpha))\big)\psi(\alpha)d\alpha,$$
where $\psi$ can be a general function (not necessarily non-negative) and $\xi$ can be a functional of the inverse CDF. To control $\xi(T(F^-))$, we can first obtain two-sided bounds for $T(F^-)$ when $T(F^-)$ is in the class of QBRMs or of the form $\int \psi(p)\xi_2(F^-(p))dp$ for some nonlinear function $\xi_2$ (as in (Berkhouch et al., 2019)). We can also generalize the weight functions in QBRMs from non-negative to general weight functions, by observing that $\psi$ can be decomposed into two non-negative functions, $\psi = \max\{\psi, 0\} - \max\{-\psi, 0\}$. We can then provide upper bounds for terms like $\int \max\{\psi, 0\}\,\xi(F^-(p))dp$ by adopting an upper bound for $\xi(F^-)$.

4.3. Numerical optimization towards tighter bounds for statistical functionals

Having described our framework for obtaining CDF bounds and controlling rich families of risk measures, we return to the question of how to produce the CDF bounds. One drawback of directly using the bound returned by Berk-Jones is that it is not weight-function aware, i.e., it does not leverage knowledge of the target risk measures. This motivates the following numerical optimization method, which shows significant improvement over previous bounds, including the DKW and Berk-Jones bounds (as well as the truncated version proposed in (Snell et al., 2023)). Our key observation is that for any $0 \le L_1 \le \dots \le L_n \le 1$, we have
$$\mathbb{P}\big(\forall i,\ F(X_{(i)}) \ge L_i\big) \ge n! \int_{L_n}^1 dx_n \int_{L_{n-1}}^{x_n} dx_{n-1} \cdots \int_{L_1}^{x_2} dx_1,$$
where the right-hand-side integral is a function of $\{L_i\}_{i=1}^n$ whose partial derivatives can be calculated exactly by the package of (Moscovich et al., 2016).

Consider controlling $\int_0^1 \psi(p)F^-(p)dp$ as an example. For any $\{L_i\}_{i=1}^n$ satisfying $\mathbb{P}(\forall i,\ F(X_{(i)}) \ge L_i) \ge 1-\delta$, one can use conservative CDF completion to obtain $\hat F^\delta_{n,L}$, i.e.
$$\int_0^1 \psi(p)\xi\big(\hat F^{\delta,-}_{n,L}(p)\big)dp = \sum_{i=1}^{n+1} \xi(X_{(i)}) \int_{L_{i-1}}^{L_i} \psi(p)dp,$$
where $L_{n+1} = 1$, $L_0 = 0$, and $X_{(n+1)} = \infty$ or a known upper bound on $X$. We then aim to obtain a small value of
$$\sum_{i=1}^{n+1} \xi(X_{(i)}) \int_{L_{i-1}}^{L_i} \psi(p)dp \quad \text{such that} \quad \mathbb{P}\big(\forall i,\ F(X_{(i)}) \ge L_i\big) \ge 1-\delta \ \text{ and } \ 0 \le L_1 \le \dots \le L_n \le 1. \tag{1}$$

We optimize this problem with gradient descent, followed by a simple post-processing procedure to make sure the obtained $\{\hat L_i\}_{i=1}^n$ strictly satisfy the constraints. In practice, we re-parameterize $\{L_i\}_{i=1}^n$ with a network $\phi_\theta$ that maps $n$ random seeds to the $L_i$'s, and transfer the optimization from $\{L_i\}_{i=1}^n$ to $\theta$. We find that a simple neural network with 3 fully-connected hidden layers of dimension 64 is enough for good performance and robust to hyper-parameter settings.

Post-processing for a rigorous guarantee on the constraints. The constraint $n! \int_{L_n(\theta)}^1 dx_n \int_{L_{n-1}(\theta)}^{x_n} dx_{n-1} \cdots \int_{L_1(\theta)}^{x_2} dx_1 \ge 1-\delta$ may not be satisfied after the above optimization, because we may use surrogates such as Lagrangian forms during the optimization process. To make sure the constraint is strictly satisfied, we can do simple post-processing.
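The weighted-sum objective over order statistics and the shift-based post-processing just mentioned can be sketched as follows. All names here are our own illustration; `coverage_prob_mc` is a Monte Carlo stand-in for the non-crossing probability that the paper computes exactly via the package of Moscovich et al. (2016).

```python
import numpy as np

def bound_value(x_sorted, L, xi, psi_antideriv, x_top):
    """Objective (1): sum_{i=1}^{n+1} xi(X_(i)) * (Psi(L_i) - Psi(L_{i-1})),
    with L_0 = 0, L_{n+1} = 1 and X_(n+1) = x_top (a known upper bound on X);
    psi_antideriv is an antiderivative of the weight function psi."""
    vals = np.append(np.asarray(x_sorted, dtype=float), x_top)
    lv = np.concatenate(([0.0], np.asarray(L, dtype=float), [1.0]))
    return float(np.sum(xi(vals) * (psi_antideriv(lv[1:]) - psi_antideriv(lv[:-1]))))

def coverage_prob_mc(L, n_sims=100_000, seed=0):
    """Monte Carlo estimate of P(forall i: U_(i) >= L_i) for uniform order
    statistics, i.e. the coverage probability P(forall i: F(X_(i)) >= L_i)."""
    rng = np.random.default_rng(seed)
    U = np.sort(rng.random((n_sims, len(L))), axis=1)
    return float(np.mean(np.all(U >= np.asarray(L), axis=1)))

def rigorous_shift(L, delta, coverage_prob, tol=1e-4):
    """Post-processing: binary-search the smallest gamma >= 0 such that the
    shifted levels max(L - gamma, 0) satisfy the coverage constraint; coverage
    is nondecreasing in gamma, so binary search applies."""
    L = np.asarray(L, dtype=float)
    feasible = lambda g: coverage_prob(np.clip(L - g, 0.0, 1.0)) >= 1.0 - delta
    lo, hi = 0.0, float(L.max())  # gamma = max(L) zeroes all levels, hence feasible
    if feasible(lo):
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi
```

In the paper's pipeline the exact coverage probability (and its gradients) would replace the Monte Carlo estimate, and `bound_value` would be the quantity driven down by gradient descent over the re-parameterized levels.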
Denote the $L_i$'s obtained by optimizing (1) as $L_i(\hat\theta)$. We then look for $\gamma^* \in [0, L_n(\hat\theta)]$ such that
$$\gamma^* = \inf\big\{\gamma : n!\,\upsilon\big(L_1(\hat\theta) - \gamma, \dots, L_n(\hat\theta) - \gamma, 1\big) \ge 1-\delta,\ \gamma \ge 0\big\},$$
where $\upsilon$ denotes the ordered integral above. Notice that there is always a feasible solution, and we can use binary search to efficiently find (a good approximation of) $\gamma^*$.

5. Experiments

With our experiments, we aim to examine the contributions of our methodology in two areas: bound formation and responsible model selection.

5.1. Learn then calibrate for detecting toxic comments

Using the CivilComments dataset (Borkan et al., 2019), we study the application of our approach to toxic comment detection under group-based fairness measures. CivilComments is a large dataset of online comments labeled for toxicity as well as for mention of protected sensitive attributes such as gender, race, and religion. Our loss function is the Brier score, a proper scoring rule that measures the accuracy of probabilistic predictions, and we work in the common setting where a trained model is calibrated post hoc to produce confidence estimates that are more faithful to ground-truth label probabilities. We use a pre-trained toxicity model and apply a Platt scaling model controlled by a single parameter to optimize confidence calibration. Our approach is then used to select from a set of hypotheses determined by varying the scaling parameter in the range [0.25, 2] (where scaling parameter 1 recovers the original model). See the Appendix for more details on the experimental settings and our bound optimization technique.

5.1.1. BOUNDING COMPLEX EXPRESSIONS OF GROUP DISPERSION

First, we investigate the full power of our framework by applying it to a complex statistical dispersion objective.
Our overall loss objective considers both the expected mean loss across groups and the maximum difference between group medians:
$$\mathcal{L} = \mathbb{E}_g[T_1(F_g)] + \lambda \sup_{g,g'} |T_2(F_g) - T_2(F_{g'})|,$$
where $T_1$ is the expected loss and $T_2$ is a smoothed version of the median (centered around $\beta = 0.5$ with spread parameter $a = 0.1$). Groups are defined by intersectional attributes: $g \in G = \{$black female, white female, black male, white male$\}$. We use 100 and 200 samples from each group, and select among 50 predictors. For each group, we use our numerical optimization framework to optimize a bound on $O = T_1(F_g) + T_2(F_g)$ using the predictor (and accompanying loss distribution) chosen under the Berk-Jones method. Results are shown in Table 1. We compare our numerically-optimized bound (NN-Opt.) to the bound given by Berk-Jones, as well as to an application of the DKW inequality to lower-bounding a CDF.

Our framework enables us to choose a predictor that fits our specified fairness criterion, and produces reasonably tight bounds given the small sample size and the $1/\sqrt{n}$ convergence rate. Moreover, there is a large gain in tightness from numerical optimization in the case $n = 100$, especially with respect to the bound on the maximum difference in median losses (0.076 vs. 0.016). These results show that a single bound can be flexibly optimized to improve on multiple objectives at once via our numerical method, a key innovation for optimizing bounds that reflect complex societal concerns like differences in group medians (Bhutta et al., 2020).

Table 1: Applying our full framework to control an objective considering expected group loss as well as a maximum difference in group medians, for n = 100 and n = 200 samples.

                      n = 100                            n = 200
Method           Exp. Grp.  Max Diff.  Total       Exp. Grp.  Max Diff.  Total
DKW              0.36795    0.90850    1.27645     0.32236    0.96956    1.29193
BJ               0.34532    0.07549    0.42081     0.31165    0.00666    0.31831
NN-Opt. (ours)   0.32669    0.01612    0.34281     0.30619    0.00292    0.30911
Empirical        0.20395    0.00004    0.20399     0.20148    0.00010    0.20158

5.1.2. OPTIMIZING BOUNDS ON MEASURES OF GROUP DISPERSION

Having studied the effects of applying the full framework, we further investigate whether our numerical optimization method can be used to obtain tight and flexible bounds on functionals of interest. First, $\beta$-CVaR is a canonical tail measure, and we bound the loss for the worst-off $1-\beta$ proportion of predictions (with $\beta = 0.75$). Next, we bound a specified interval of the VaR ($[0.5, 0.9]$), which is useful when a range of quantiles of interest is known but flexibility to answer different queries within that range is important. Finally, we consider a worst-quantile weighting function $\psi(p) = p$, which penalizes higher loss values at higher quantiles, and we study a smoothed delta function around $\beta = 0.5$, a more robust version of the median measure.

We focus on producing bounds using only 100 samples from a particular intersectionally-defined protected group, in this case black females, and all measures are optimized with the same hyperparameters. The bounds produced via numerical optimization (NN-Opt.) are compared to the bounds in (Snell et al., 2023) (DKW having previously been shown to produce weak CDF bounds), including the typical Berk-Jones bound as well as a truncated version tailored to particular quantile ranges. See Table 2 and the Appendix for results. The numerical optimization method induces much tighter bounds than Berk-Jones on all measures, and also improves over the truncated Berk-Jones where it is applicable.
Further, whereas the truncated Berk-Jones bound gives trivial control outside of $[\beta_{\min}, \beta_{\max}]$, the numerically-optimized bound not only retains a reasonable bound on the entire CDF, but even improves on Berk-Jones with respect to the bound on expected loss in all cases. For example, after adapting to CVaR, the numerically-optimized bound gives a bound on the expected loss of 0.23, versus 0.25 for Berk-Jones and 0.50 for truncated Berk-Jones. Thus numerical optimization produces both the best bound in the range of interest and the best bound across the rest of the distribution, showing the value of adapting the bound to the particular functional and loss distribution while still retaining the distribution-free guarantee.

Table 2: Optimizing bounds on measures for protected groups.

Method                 CVaR     VaR-Interval  Quantile-Weighted  Smoothed-Median
Berk-Jones             0.91166  0.38057       0.19152            0.00038
Truncated Berk-Jones   0.86379  0.34257       –                  –
NN-Opt. (ours)         0.85549  0.32656       0.17922            0.00021

5.2. Investigating bounds on standard measures of dispersion

Next, we explore the application of our approach to responsible model selection under non-group-based fairness measures, and show how using our framework leads to a more balanced distribution of loss across the population. Further details for both experiments can be found in the Appendix.

5.2.1. CONTROLLING BALANCED ACCURACY IN DETECTION OF GENETIC MUTATION

RxRx1 (Taylor et al., 2019) is a task where the input is a 3-channel image of cells obtained by fluorescent microscopy, the label indicates which of 1,139 genetic treatments the cells received, and a batch effect creates a challenging distribution shift across domains. Using a model trained on the train split of the RxRx1 dataset, we evaluate our method on an out-of-distribution validation set to highlight the distribution-free nature of the bounds.
We apply a threshold to the model output in order to produce prediction sets, i.e., sets of candidate labels for a particular task instance. Prediction sets are scored with a balanced accuracy metric that equally weights sensitivity and specificity, and our overall objective is $\mathcal{L} = T_1(F) + \lambda T_2(F)$, where $T_1$ is the expected loss, $T_2$ is the Gini coefficient, and $\lambda = 0.2$. We choose among 50 predictors (i.e., model plus threshold) and use 2500 population samples to produce our bounds. Results are shown in Figure 2.

Figure 2: Left: Bounds on the expected loss, scaled Gini coefficient, and total objective across different hypotheses. Right: Lorenz curves induced by choosing a hypothesis based on the expected loss bound versus the bound on the total objective. The y-axis shows the cumulative share of the loss incurred by the best-off $\beta$ proportion of the population; a perfectly fair predictor would produce a distribution along the line $y = x$.

The left plot shows how the bounds on the expected loss $T_1$, scaled Gini coefficient $\lambda T_2$, and total objective $\mathcal{L}$ vary across the different hypotheses (i.e., model and threshold combinations for producing prediction sets). The bold points indicate the optimal threshold choice for each quantity. The right plot shows the Lorenz curves (a typical graphical expression of Gini) of the loss distributions induced by choosing a hypothesis based on the expected loss bound versus the bound on the total objective. Incorporating the bound on the Gini coefficient in hypothesis selection leads to a more equal loss distribution. Taken together, these figures illustrate how the ability to bound a non-group-based dispersion measure like the Gini coefficient can lead to less skewed outcomes across a population, a key goal in societal applications.

5.2.2.
PRODUCING RECOMMENDATION SETS FOR THE WHOLE POPULATION

Using the MovieLens dataset (Harper and Konstan, 2015), we test whether tighter control of another important non-group-based dispersion measure, the Atkinson index (with $\epsilon = 0.5$), leads to a more even distribution of loss across the population. We train a user/item embedding model, and compute a loss that balances precision and recall for each set of user recommendations. Results are shown in Figure 3. Tighter control of the Atkinson index leads to a more evenly dispersed distribution of loss across the population, even for subgroups defined by protected attributes like age and gender that are withheld for privacy or security reasons.

Figure 3: We select two hypotheses $h_0$ and $h_1$ with different bounds on the Atkinson index, produced using 2000 validation samples, and once again visualize the Lorenz curves induced by each. Tighter control on the Atkinson index leads to a more equal distribution of the loss (especially across the middle of the distribution, which aligns with the choice of $\epsilon$), highlighting the utility of being able to target such a metric in conservative model selection.

6. Related work

The field of distribution-free uncertainty quantification has its roots in conformal prediction (Shafer and Vovk, 2008). The coverage guarantees of conformal prediction have recently been extended and generalized to controlling the expected values of loss functions beyond coverage (Angelopoulos et al., 2021; Bates et al., 2021). The framework proposed by Snell et al. (2023) offers the ability to select predictors under criteria beyond expected loss, covering a rich class of quantile-based risk measures (QBRMs) such as CVaR and intervals of the VaR; they also introduce a method for achieving tighter bounds on certain QBRMs by focusing the statistical power of the Berk-Jones bound on a particular quantile range. Note that these measures cannot cover the range of dispersion measures studied in this work.
There is a rich literature studying both standard and group-based statistical dispersion measures, and their use in producing fairer outcomes in machine learning systems. Some work in fairness has aimed at achieving coverage guarantees across groups (Romano et al., 2020a,b), but to our knowledge there has been no prior work on controlling loss functions beyond coverage, such as the many loss functions aimed at characterizing fairness, which can be expressed as group-based measures (cf. Section 3.2). Other recent fairness work has adapted inequality measures from economics. Williamson and Menon (2019) aim to enforce that outcomes are not too different across groups defined by protected attributes, and introduce a convex notion of group CVaR; Romano et al. (2020a) propose a DFUQ method for equalizing coverage between groups. Lazovich et al. (2022) study distributional inequality measures like the Gini and Atkinson indices, since demographic group information is often unavailable, while Do et al. (2021) use the notion of Lorenz efficiency to generate rankings that increase the utility of both the worst-off users and producers.

7. Conclusion

In this work, we focus on a rich class of statistical dispersion measures, both standard and group-based, and show how these measures can be controlled. In addition, we offer a novel numerical optimization method for achieving tighter bounds on these quantities. We investigate the effects of applying our framework via several experiments, and show that our methods lead to fairer model selection and tighter bounds.
Limitations include: the assumption that test data is from the same distribution as validation data; the scope of hypotheses/predictors we can select from is limited (due to theoretical constraints); and the generated bounds may not be tight, depending on the amount of available validation data and unavoidable limits of the techniques used to produce the bounds. Nonetheless we believe our study offers a significant step towards the sort of thorough and transparent validation that is critical for applying machine learning algorithms to applications with societal implications." + }, + { + "url": "http://arxiv.org/abs/2303.04379v1", + "title": "HappyMap: A Generalized Multi-calibration Method", + "abstract": "Multi-calibration is a powerful and evolving concept originating in the field\nof algorithmic fairness. For a predictor $f$ that estimates the outcome $y$\ngiven covariates $x$, and for a function class $\\mathcal{C}$, multi-calibration\nrequires that the predictor $f(x)$ and outcome $y$ are indistinguishable under\nthe class of auditors in $\\mathcal{C}$. Fairness is captured by incorporating\ndemographic subgroups into the class of functions~$\\mathcal{C}$. Recent work\nhas shown that, by enriching the class $\\mathcal{C}$ to incorporate appropriate\npropensity re-weighting functions, multi-calibration also yields\ntarget-independent learning, wherein a model trained on a source domain\nperforms well on unseen, future, target domains(approximately) captured by the\nre-weightings.\n Formally, multi-calibration with respect to $\\mathcal{C}$ bounds\n$\\big|\\mathbb{E}_{(x,y)\\sim \\mathcal{D}}[c(f(x),x)\\cdot(f(x)-y)]\\big|$ for all\n$c \\in \\mathcal{C}$. In this work, we view the term $(f(x)-y)$ as just one\nspecific mapping, and explore the power of an enriched class of mappings. 
We\npropose \\textit{HappyMap}, a generalization of multi-calibration, which yields\na wide range of new applications, including a new fairness notion for\nuncertainty quantification (conformal prediction), a novel technique for\nconformal prediction under covariate shift, and a different approach to\nanalyzing missing data, while also yielding a unified understanding of several\nexisting seemingly disparate algorithmic fairness notions and\ntarget-independent learning approaches.\n We give a single \\textit{HappyMap} meta-algorithm that captures all these\nresults, together with a sufficiency condition for its success.", + "authors": "Zhun Deng, Cynthia Dwork, Linjun Zhang", + "published": "2023-03-08", + "updated": "2023-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY", + "stat.ME", + "stat.ML" + ], + "main_content": "Introduction Prediction algorithms score individuals or individual instances, assigning to each a score in [0, 1] typically interpreted as a probability, for example, the probability that it will rain tomorrow. The predictions are calibrated if, for all v \u2208[0, 1], among those instances assigned the value v, a v fraction have a positive outcome. Calibration has been viewed as the sine qua non of prediction for decades [7]. \u2217Author names are listed in alphabetical order. \u00a9 Zhun Deng, Cynthia Dwork, and Linjun Zhang; licensed under Creative Commons License CC-BY 4.0 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Editor: Yael Tauman Kalai; Article No. 54; pp. 
54:1–54:26. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany. arXiv:2303.04379v1 [cs.LG] 8 Mar 2023.

54:2 HappyMap: A Generalized Multicalibration Method

The requirement that a predictor be simultaneously calibrated on each of two or more disjoint groups, meaning that it is calibrated on each group when viewed in isolation, was first proposed as a fairness condition by Kleinberg, Mullainathan, and Raghavan [31]. Inspired by [31], and in an attempt to bridge the gap between individual fairness [14], which demands that similar individuals be treated similarly but requires task-specific measures of similarity, and group fairness, which can be specious [14], Hébert-Johnson, Kim, Reingold, and Rothblum proposed multicalibration, which requires calibration on a (possibly large) pre-specified collection of arbitrarily intersecting sets that can be identified within a specified class of computations [22]. A related, independent work of Kearns, Neel, Roth, and Wu considered the analogous setting for Boolean-valued classifiers. They argued that including intersecting groups can prevent fairness gerrymandering, and developed multi-parity [28]. The area has blossomed in theory and in application. For example, multicalibration has been used for fair ranking [15], and for providing an indistinguishability-based interpretation of individual probabilities, i.e., probabilities for non-repeatable events [16]. Multicalibration has also been shown to yield omniprediction, meaning that for every “nice” loss function $\ell$, the scores assigned by the multicalibrated predictor can be post-processed, with no additional training, to be competitive, on the training distribution, with the best predictor in $\mathcal{C}$ [19].

▶ Definition 1 (Multicalibration [22] as presented in [30]). Let $\mathcal{C} \subseteq \{[0,1] \times \mathcal{X} \to \mathbb{R}\}$ be a collection of functions.
For a given distribution D supported on X × Y, a predictor f : X → [0, 1] is (C, α)-multicalibrated over D if for all c ∈ C, |E_{(x,y)∼D}[c(f(x), x) · (f(x) − y)]| ≤ α. (1) Fairness is captured by incorporating demographic subgroups into the class of functions C. By enriching the class C to incorporate appropriate propensity re-weighting functions, multicalibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future target domains (approximately) captured by the re-weightings [30]. In this work, we view the term (f(x) − y) in Equation (1) as just one specific mapping s(f, y) : R × Y → R, and explore the power of an enriched class of mappings. To this end, we propose HappyMap, a generalization of multicalibration, which yields a wide range of new applications, including a new algorithm for fair uncertainty quantification, a novel technique for conformal prediction under distributional (a.k.a. covariate) shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success. Roughly speaking, the requirement is that the mapping have an anti-derivative satisfying a smoothness-like assumption (see Section 3); we say such a mapping is happy. Loosely speaking, the anti-derivative serves as a potential function, which yields an upper bound on the number of iterations of our algorithm. Summary of Contributions. 1.
We propose HappyMap, a generalization of multicalibration obtained by enriching the class of mappings (alternatives for the term (f(x) − y) in Equation (1)), as discussed above; we provide a HappyMap meta-algorithm with running time and sample complexity comparable to the other multicalibration algorithms in the literature, and give sufficient conditions for its success (Section 3). 2. We demonstrate the flexibility of HappyMap by first applying it to obtain generalized versions of fair uncertainty quantification [37] (Section 4). 3. Furthermore, we apply HappyMap to problems in target-independent learning that lie beyond the statistical estimation problems considered in [30], obtaining target-independent statistical inference and uncertainty quantification. Our approach also yields a fruitful new perspective on analyzing missing data, giving new solutions to this problem (Section 5). A High-Level Perspective. The seminal work of Hébert-Johnson et al. [22] has long tantalized us with the suggestion of a new paradigm for machine learning. We coin the term micro-learning to describe gradient descent, a learning paradigm based on loss minimization, consisting of a sequence of local model updates. In contrast, the multiaccuracy and multicalibration algorithms of [22] produce predictors via interactions (through weak agnostic learning) with auditors who find large problem areas without the intercession of a loss function, and make correspondingly large model updates; this suggested to us what we call macro-learning. This perspective has a compelling transparency story ("the learning algorithm finds large errors and fixes them"), which is particularly appealing when the auditors are interrogating the treatment of demographic subgroups. The fact that macro-learning can be used as post-processing [22] tells us that the two paradigms can work in concert.
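To ground Definition 1 before the generalization, the following minimal audit (a sketch with synthetic data; the distribution, group definitions, and function names are ours, not the paper's) estimates the multicalibration violation max over a finite class of group-indicator check functions of |E[c(x) · (f(x) − y)]|:

```python
import numpy as np

def multicalibration_violation(f_vals, y, checks):
    """Empirical version of Definition 1: the largest |E[c(x) * (f(x) - y)]|
    over a finite class of check functions, each represented by its values
    on the sample (here: 0/1 group-membership indicators)."""
    residual = f_vals - y
    return max(abs(np.mean(c * residual)) for c in checks)

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(size=n)
y = (rng.uniform(size=n) < x).astype(float)     # true P(y = 1 | x) = x

# Two intersecting synthetic subgroups, encoded as indicator vectors.
groups = [(x < 0.5).astype(float), (x > 0.3).astype(float)]

calibrated = multicalibration_violation(x, y, groups)            # f(x) = x
biased = multicalibration_violation(np.full(n, 0.9), y, groups)  # f(x) = 0.9
```

The perfectly calibrated predictor f(x) = x passes the audit on both (intersecting) groups, while the constant predictor is flagged on each; Definition 2 below replaces the residual f(x) − y with a general mapping s.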
The concept of s-Happy Multicalibration proposed in this paper advances the broad vision of macro-learning. 2 Preliminaries 2.1 Notation For d ∈ N+, any convex set S ⊆ R^d, and any vector v ∈ R^d, we use Π_S(v) to denote the projection of v onto S under Euclidean distance. We sometimes overload the probability notation P for density functions; for instance, for a continuous random vector x, P(x) denotes the density evaluated at x. We denote by X ∈ X the feature vector (typically X = R^d) and by Y ∈ Y (typically Y = R for regression problems and Y = {0, 1} for classification problems) the response that we are trying to predict. For a joint distribution D of (x, y) ∈ X × Y, we denote the marginal distributions of x and y by D_X and D_Y, respectively. For two positive sequences {a_k} and {b_k}, we write a_k = O(b_k) (or a_k ≲ b_k) if lim_{k→∞}(a_k/b_k) < ∞, and a_k = o(b_k) if lim_{k→∞}(a_k/b_k) = 0. Õ(·) suppresses logarithmic factors. We write a_k = Θ(b_k) (or a_k ≍ b_k) if a_k ≲ b_k and b_k ≲ a_k. We use O_p(·) to denote stochastic boundedness: a random variable Z_k = O_p(a_k) for a real sequence a_k if for every ε > 0 there exist M, N > 0 such that for k > N, P(|Z_k/a_k| > M) ≤ ε. For two numbers a < b, we use U(a, b) to denote the uniform distribution on [a, b]. For μ ∈ R and σ > 0, we use N(μ, σ²) to denote the normal distribution with mean μ and variance σ². 3 s-Happy Multicalibration We now formally state our generalization of multicalibration. ▶ Definition 2 (s-Happy Multicalibration). Let C ⊆ {R × X → R} be a collection of functions.
For a given distribution D supported on X × Y and a mapping s : R × Y → R, a predictor f : X → R is (C, α)-s-happily multicalibrated over D and s if for all c ∈ C, |E_{(x,y)∼D}[c(f(x), x) · s(f(x), y)]| ≤ α. (2) When D and s are clear from the context, we will simply say f is (C, α)-happily multicalibrated. Sometimes we will constrain the range of f to a certain convex set O, for example [0, 1]; in this case we say f : X → O is (C, α)-s-happily multicalibrated when it satisfies Eq. (2). When c(f(x), x) is independent of f and depends only on x, we will simply write c(x). s-Happy Multicalibration (Equation (2)) captures several multi-group fairness notions in the literature. For example, if we let c(f(x), x) be independent of f and take s(f(x), y) = f(x) − y, we recover multi-accuracy [29, 22]; if we take c(f(x), x) = c̃(x) · w(f(x)) for some functions c̃ and w, and s(f(x), y) = f(x) − y, we recover the low-degree multicalibration notion of [20]; if we take s(f(x), y) to be a non-negative loss function and c(x) to be indicator functions of different groups, we recover the minimax group fairness of [10]. The HappyMap Meta-Algorithm. We now describe the HappyMap meta-algorithm and prove its key properties. For simplicity, we describe the population version of our algorithm, where we are allowed to access E_{(x,y)∼D}[c(f(x), x)s(f(x), y)]. In practice, we estimate these quantities from training data; we describe the sample version of Algorithm 1 in Appendix B.1 and derive the corresponding required sample complexity.
In keeping with the multi-group fairness literature [22, 29], we can either use a fresh sample per iteration or, when samples are limited, apply techniques from adaptive data analysis [11, 12, 13] to re-use the same samples in each iteration, as suggested in [22]. We consider the general case and aim to return an f : X → O, for a given convex set O, which is (C, α)-happily multicalibrated. Without loss of generality, we assume the class C is symmetric, in the sense that c and −c are both in C (we can always augment C by including −c for each c ∈ C, so this is without loss of generality). The algorithm is invoked with an initial predictor f0, which can be trivial (f0(x) equal to a constant for all x ∈ X) or can be an artisanal predictor imbued with extensive domain knowledge. Algorithm 1 HappyMap. Input: tolerance α > 0, step size η > 0, bound T ∈ N+ on the number of iterations, initial predictor f0(·), distribution D, convex set O ⊆ R, mapping s : R × Y → R. Set t = 0. While t < T and there exists c_t ∈ C with E_{(x,y)∼D}[c_t(f_t(x), x)s(f_t(x), y)] > α: let c_t be an arbitrary element of C satisfying the condition in the while statement; for all x ∈ X, set f_{t+1}(x) = Π_O[f_t(x) − η · c_t(f_t(x), x)]; set t = t + 1. Output: f_t(·). As in previous work [22, 37, 30, 28], we do not address the complexity of the weak learner whose job it is to search for functions c_t ∈ C satisfying the condition of the while loop, or to report that none exists. The problem is at least as hard as agnostic learning [22, 28]. See [5] for discussion of this issue and its implications for fairness. Henceforth, our unit of computational complexity will be an invocation of the weak learner, i.e., an iteration of the while loop. Analysis. Theorem 3 below summarizes the theoretical guarantees for Algorithm 1.
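The loop of Algorithm 1 can be sketched empirically as follows (our own illustration, not the paper's code: the weak learner is replaced by exhaustive search over a small finite class, population expectations by sample means, and the projection Π_O by clipping):

```python
import numpy as np

def happy_map(f0, x, y, checks, s, eta, alpha, max_iter=10_000, lo=0.0, hi=1.0):
    """Empirical sketch of the HappyMap meta-algorithm (Algorithm 1).
    `checks` is a finite symmetric class of functions c(f_vals, x); the weak
    learner is an exhaustive search over this class."""
    f = f0.copy()
    for _ in range(max_iter):
        viols = [(np.mean(c(f, x) * s(f, y)), c) for c in checks]
        v, c = max(viols, key=lambda t: t[0])
        if v <= alpha:                                # no check exceeds tolerance
            break
        f = np.clip(f - eta * c(f, x), lo, hi)        # projection onto O = [lo, hi]
    return f

# Toy run with s(f, y) = f - y (recovering multi-accuracy) on synthetic data.
rng = np.random.default_rng(1)
x = rng.uniform(size=20_000)
y = (rng.uniform(size=x.size) < x).astype(float)      # P(y = 1 | x) = x

g1, g2 = (x < 0.5).astype(float), (x >= 0.5).astype(float)
checks = [lambda f, xv: g1, lambda f, xv: -g1,        # symmetric: +/- each group
          lambda f, xv: g2, lambda f, xv: -g2]
s = lambda f, yv: f - yv

f = happy_map(np.full(x.size, 0.5), x, y, checks, s, eta=0.01, alpha=0.01)
worst = max(np.mean(c(f, x) * s(f, y)) for c in checks)
```

Starting from the trivial constant predictor, each iteration makes one "macro" update on the worst-violating group until all checks fall below α.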
As is common in the literature, the proof of termination relies on a potential function argument (see, e.g., [4, 22]). The key new ingredient is the notion of a happy mapping, which provides the conditions under which we can find the necessary potential function. We require some assumptions, which we state and discuss before stating the theorem. Three Assumptions. (a) For all c ∈ C and all f, E_{x∼D_X}[c²(f(x), x)] ≤ B, where D_X is the marginal distribution of D on x. (b) There exists a potential function L : R × Y → R, constants C^l_L and C^u_L, and a positive constant κ_L > 0, such that (1) for all f : X → R, C^l_L ≤ E_{(x,y)∼D}[L(f(x), y)]; (2) C^u_L ≥ E_{(x,y)∼D}[L(f0(x), y)] for the initialization f0; and (3) for all f, f̃ : X → R, E_{(x,y)∼D}[L(f(x), y) − L(f̃(x), y)] ≥ E_{(x,y)∼D}[(f(x) − f̃(x))s(f(x), y)] − κ_L E_{x∼D_X}[(f(x) − f̃(x))²]. (c) For all y ∈ Y and v ∈ R, L(Π_O[v], y) ≤ L(v, y). Assumption (a) is routine, and Assumption (c) says that the potential function decreases upon projection with respect to its first coordinate. One concrete example is the case in which O = Y = [0, 1] and L(v, y) = |v − y|². We now turn to Assumption (b), focusing on how to construct the potential function L. First note that the assumption is closely related to smoothness, a widely used concept in optimization. We use the following fact. Fact. If L(·, ·) : R × Y → R is 2κ_L-smooth with respect to its first coordinate, i.e., L(·, ·) is continuously differentiable with respect to its first coordinate and the corresponding partial derivative is 2κ_L-Lipschitz, then for all x ∈ X, y ∈ Y, and f, f̃ : X → R, L(f(x), y) − L(f̃(x), y) ≥ ∂_u L(u, y)|_{u=f(x)} · (f(x) − f̃(x)) − κ_L (f(x) − f̃(x))².
(3) Thus, if s(f(x), y) is differentiable with respect to its first coordinate and |∂_u s(u, y)| ≤ κ_L, we can take the potential function to be the anti-derivative of s, specifically L(f(x), y) = ∫_g^{f(x)} s(u, y) du for any g ∈ R, as long as we can ensure ∫_g^{f(x)} s(u, y) du ≥ C^l_L for all f(x) and y. Then, taking expectations over both sides of this equation and of (3), we can satisfy (b). In fact, even when the function s(u, y) is not differentiable everywhere with respect to u, we can still follow the general idea above to construct L, and Assumption (b) can still be satisfied with respect to the expectation of L. For example, in Section 5.2, when s(u, y) = 1{u ≤ y} − (1 − δ), taking L(f(x), y) = (1 − δ) · f(x) − min(f(x) − y, 0) satisfies the desired condition. ▶ Theorem 3. Under Assumptions (a)-(c) above, and if C is symmetric, then Algorithm 1 with a suitably chosen η = O(α/(κ_L B)) converges in T = O((C^u_L − C^l_L)κ_L B/α²) iterations and outputs a function f satisfying |E_{(x,y)∼D}[c(f(x), x)s(f(x), y)]| ≤ α for all c ∈ C. Proof of Theorem 3. First, since the class C is symmetric, it suffices to prove that the output f_t(·) of Algorithm 1 satisfies E_{(x,y)∼D}[c(f_t(x), x)s(f_t(x), y)] ≤ α for all c ∈ C; symmetry then also gives |E_{(x,y)∼D}[c(f_t(x), x)s(f_t(x), y)]| ≤ α for all c ∈ C. By our assumption, there exists a potential function L, constants C^l_L and C^u_L, and a positive constant κ_L, such that E_{(x,y)∼D}[L(f_t(x), y)] − E_{(x,y)∼D}[L(f_{t+1}(x), y)] ≥ E_{(x,y)∼D}[(f_t(x) − f_{t+1}(x))s(f_t(x), y)] − κ_L E_{(x,y)∼D}[(f_t(x) − f_{t+1}(x))²].
As a result, we have E_{(x,y)∼D}[L(f_t(x), y)] − E_{(x,y)∼D}[L(f_{t+1}(x), y)] ≥ E_{(x,y)∼D}[L(f_t(x), y)] − E_{(x,y)∼D}[L(f_t(x) − η c_t(f_t(x), x), y)] ≥ η E_{(x,y)∼D}[c_t(f_t(x), x) · s(f_t(x), y)] − κ_L E_{(x,y)∼D}[(η c_t(f_t(x), x))²]. The first inequality follows from Assumption (c) and the second from Assumption (b). Given E_{x∼D_X}[c²(f(x), x)] ≤ B, if there exists c_t ∈ C with E_{(x,y)∼D}[c_t(f_t(x), x)s(f_t(x), y)] > α, then E_{(x,y)∼D}[L(f_t(x), y)] − E_{(x,y)∼D}[L(f_{t+1}(x), y)] ≥ η E_{(x,y)∼D}[c_t(f_t(x), x) · s(f_t(x), y)] − κ_L E_{(x,y)∼D}[(η c_t(f_t(x), x))²] ≥ ηα − κ_L η² B. Taking η = α/(2κ_L B), we have E_{(x,y)∼D}[L(f_t(x), y)] − E_{(x,y)∼D}[L(f_{t+1}(x), y)] ≥ α²/(4κ_L B). Since C^u_L ≥ E_{(x,y)∼D}[L(f0(x), y)] and E_{(x,y)∼D}[L(f(x), y)] ≥ C^l_L for all f by assumption, and each update makes progress of at least α²/(4κ_L B), there are at most 4κ_L B(C^u_L − C^l_L)/α² updates. ◀ In the following sections, we will always assume C is symmetric (recall that symmetry is without loss of generality). Also, for simplicity, we assume C is closed with respect to the L2-norm. 4 Application: Algorithmic Fairness in Prediction Intervals Conformal prediction is a popular approach to uncertainty quantification in prediction models. Continuing in this vein, Romano et al. proposed a new group fairness criterion, equalized coverage [37], in which the goal is to construct a prediction interval I(x) that covers y with comparable probability across all protected groups of interest.
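Before specializing further, here is a quick finite-difference sanity check (our own sketch, with illustrative constants) of the Section 3 anti-derivative recipe: for the mapping s(l, y) = (1 − δ) − 1{l ≤ y} used in this section, L(l, y) = (1 − δ)·l − min(l − y, 0) has ∂L/∂l = s(l, y) away from the kink at l = y (for the opposite-sign mapping 1{l ≤ y} − (1 − δ) of Section 5.2, the same L works with derivative −s):

```python
# Numerically verify that L is an anti-derivative of s away from l = y.
delta, y = 0.1, 0.5                               # illustrative constants
L = lambda l: (1 - delta) * l - min(l - y, 0.0)   # candidate potential
s = lambda l: (1 - delta) - float(l <= y)         # mapping from Section 4

h = 1e-6
pairs = []
for l in (0.2, 0.8):                  # one point on each side of the kink
    grad = (L(l + h) - L(l - h)) / (2 * h)        # central finite difference
    pairs.append((grad, s(l)))
```

On each side of the kink the numerical derivative matches s exactly, which is the smoothness-like condition (with κ_L controlled by the conditional density of y) that Theorem 3 needs.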
More precisely, given a collection of disjoint protected demographic groups A ⊂ 2^X, the set-valued function I mapping X to subsets of R provides equalized coverage if P(y ∈ I(x) | x ∈ A) ≥ 1 − δ for all A ∈ A. (4) In this section, we focus for simplicity on one-sided prediction intervals; by applying our results twice, or to a non-conformity score, we achieve two-sided intervals (see Corollary 5 and Remark 6 below). Moreover, as usual in the multicalibration literature, our results hold even for the case of arbitrarily intersecting population subgroups. A somewhat different version of the intersectional case was also studied in [21, 26]; later, we will explain how HappyMap can be used for this as well. In the one-sided version of (4), we let I(x) = (l_δ(x), ∞) be a one-sided (1 − δ)-prediction interval, and study the following coverage criterion: P(x ∈ A) · |P(y ≥ l_δ(x) | x ∈ A) − (1 − δ)| ≤ α for all A ∈ A, (5) which can be rewritten as |E[1{x ∈ A} · (1{y ≥ l_δ(x)} − (1 − δ))]| ≤ α for all A ∈ A. A few remarks about the problem definition are in order. First, there is a trivial deterministic solution to the original equalized coverage problem in Equation (4), whether or not the groups are disjoint: just set I(x) = R (this does not mean that previous algorithms are trivial!). Similarly, while our (one- or two-sided) version in Equation (5) rules this out, it admits the trivial randomized solution in which we take l_δ(x) = −∞ with probability 1 − δ and l_δ(x) = ∞ with probability δ. One approach to ruling out both trivial solutions is to require a stronger condition, multivalidity, proposed in [21, 26]. Adopting our notation, instead of asking for a small |P(y ≥ l_δ(x) | x ∈ A) − (1 − δ)| as in Eq.
(6), multivalidity requires |P(y ≥ l_δ(x) | x ∈ A, l_δ(x)) − (1 − δ)| to be small. Analogous to the relationship between multicalibration and multi-accuracy, multivalidity is stronger than our requirement in Equation (6). As discussed below, we can use HappyMap for this as well. In Theorem 9, we will further provide a different argument that our algorithm is doing something nontrivial, even when we do not enforce multivalidity. In the following, we generalize Eq. (5) to obtain the Intersectional Equalized Coverage requirement: sup_{c∈C} |E[c(x)(1{y ≥ l_δ(x)} − (1 − δ))]| ≤ α, (6) where C denotes an arbitrary pre-specified collection of functions (including indicator functions of pre-specified sub-populations, and also more general continuous functions, as is typical in the multicalibration literature). For example, C might be the functions that can be computed by decision trees of a fixed depth. Applying HappyMap with s(l, y) = (1 − δ) − 1{l ≤ y} yields the following result. ▶ Theorem 4. Suppose that y | x is a continuous random variable (our analysis extends directly to the case where y is stored in finite precision and α is taken larger than the precision error), the conditional density of y given x is upper bounded by K_p, and |E[y]| < C for some universal constant C > 0. In addition, suppose that E_{x∼D_X}[c²(x)] ≤ B for all c ∈ C. Then, for a suitably chosen η = O(α/(K_p B)) and using the potential function L(l, y) = (1 − δ) · l − min(l − y, 0), HappyMap (Algorithm 1) converges in T = O(K_p B²/α²) steps and outputs a function l_δ(·) satisfying sup_{c∈C} |E[c(x)(1{y ≥ l_δ(x)} − (1 − δ))]| ≤ α. Proof of Theorem 4.
In order to apply Theorem 3, it suffices to verify that the potential function L(l, y) = (1 − δ)l − min(l − y, 0) satisfies the smoothness-like condition E[L(l, y) − L(l′, y)] ≥ E[(l(x) − l′(x))s(l(x), y)] − κ_L E[(l(x) − l′(x))²]. In particular, we have L(l, y) − L(l′, y) = (1 − δ)(l − l′) + min(l′ − y, 0) − min(l − y, 0) = (1 − δ)(l − l′) + (l′ − l)1{l − y < 0, l′ − y < 0} − (l − y)1{l′ − y > 0 > l − y} + (l′ − y)1{l′ − y < 0 < l − y} = (l − l′)((1 − δ) − 1{l − y < 0}) + (l′ − y)(1{l′ < y < l} − 1{l′ > y > l}). We have |(l′ − y)(1{l′ < y < l} − 1{l′ > y > l})| ≤ |l − l′| · (1{l′ < y < l} + 1{l′ > y > l}), since on either event y lies between l′ and l, so that |l′ − y| ≤ |l − l′|; in the algorithm, a perturbation of size |l − l′| = η|c(x)| is what can change the sign of l′ − y. Since the conditional density of y given x is upper bounded by K_p, we then have E[|l′ − y| · (1{l′ < y < l} + 1{l′ > y > l})] ≤ E[|l − l′| · P(y ∈ (min(l, l′), max(l, l′)) | x)] ≤ K_p E[|l − l′|²]. In addition, we verify that E[L(l, y)] has a uniform lower bound, arguing by cases.
If l − y < 0 (i.e., l < y), we have L(l, y) = (1 − δ) · l − min(l − y, 0) = (1 − δ) · l − l + y = y − δ · l > (1 − δ) · y; if l − y > 0 (i.e., l > y), we have L(l, y) = (1 − δ) · l − min(l − y, 0) = (1 − δ) · l > (1 − δ) · y. Since |E[y]| < C for some universal constant C > 0 by assumption, we have E[L(l, y)] ≥ −(1 − δ)C. ◀ The following corollary is immediate from Theorem 4. ▶ Corollary 5. If we apply the algorithm above (i.e., HappyMap specialized to our current setting) to the two cutoff values δ/2 and 1 − δ/2, obtaining l_{δ/2}(x) and l_{1−δ/2}(x) respectively, then the two-sided prediction interval I(x) = [l_{δ/2}(x), l_{1−δ/2}(x)] satisfies sup_{c∈C} |E[c(x)(1{y ∈ I(x)} − (1 − δ))]| ≤ 2α. ▶ Remark 6. In the conformal prediction literature, there is another commonly used way to construct two-sided prediction intervals, based on non-conformity scores. More specifically, the non-conformity score m(x, y) is a metric that measures how the response y fails to "conform" to a prediction h(x), where h is an arbitrary fixed prediction function. For example, a popular choice of non-conformity score for regression tasks is m(x, y) = |y − h(x)|. Additional choices can be found in [40, 1]. Given a non-conformity score m(x, y), the set {y : m(x, y) ≤ f(x)} naturally yields a two-sided interval for y. To this end, applying our method directly to (x, ỹ) with ỹ := m(x, y) will produce a valid (1 − δ) prediction interval. Recall that multivalidity [21, 26] bounds |P(y ≥ l_δ(x) | x ∈ A, l_δ(x)) − (1 − δ)| for all A and all values of l_δ(x).
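A self-contained numerical sketch of this section's specialization (synthetic data; the groups, step size, and tolerance are illustrative choices of ours, not the paper's): starting from a deliberately poor constant lower bound, HappyMap-style updates with s(l, y) = (1 − δ) − 1{l ≤ y} over a symmetric class of group indicators drive the coverage of (l(x), ∞) to roughly 1 − δ on every group:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.uniform(size=n)
y = x + rng.normal(scale=0.1, size=n)

delta, eta, alpha = 0.1, 0.002, 0.01
groups = [np.ones(n), (x < 0.5).astype(float), (x >= 0.5).astype(float)]

l = np.full(n, -1.0)                      # deliberately poor lower bound
for _ in range(50_000):
    s_vals = (1 - delta) - (l <= y)       # s(l, y) = (1 - delta) - 1{l <= y}
    means = [np.mean(g * s_vals) for g in groups]
    i = int(np.argmax(np.abs(means)))     # worst check (sign handles +/- c)
    if abs(means[i]) <= alpha:
        break
    l = l - eta * np.sign(means[i]) * groups[i]

cov = [np.mean(y[g > 0] >= l[g > 0]) for g in groups]
```

Because the class contains each group together with its negation, the stopping condition forces P(y ≥ l(x) | group) to within roughly α/P(group) of 1 − δ for every group, matching Eq. (5).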
Analogous to the relationship between multicalibration and multi-accuracy, multivalidity is generally stronger than the requirement in Equation (6), resulting in potentially much longer prediction intervals (a more detailed discussion is deferred to Section E.3). By considering c of the form c(l_δ(x), x) = 1{l_δ(x) ∈ I} for I ∈ I, where I is the collection of small bins I = {[−C, −C + λ], [−C + λ, −C + 2λ], ..., [C − 2λ, C − λ], [C − λ, C]} for some discretization level λ and C an upper bound on |y|, s-Happy Multicalibration recovers multivalidity, and applying Algorithm 1 yields a new algorithm that achieves multivalidity. The extension of (4) to intersecting sets is also considered in [18]. They define the set collection A ex post facto, after the training data have been collected, to be all sufficiently large subsets of the training set, and then enumerate over all these (exponentially many) sets. In contrast, as is typical in the multicalibration literature, we name the sets a priori and rely on weak learning, which can be more efficient than exhaustive search when the collection A has special structure, even if the collection is infinite. HappyMap produces a prediction interval that is fair with respect to a large collection of groups. In the following, we present a result analyzing the utility, i.e., the width of the constructed prediction interval, when HappyMap is applied as post-processing of a high-quality, but not necessarily group-fair, initial conformal map l0. We are agnostic regarding the source of l0: it may be given to us, or we may obtain it by splitting the sample and building a high-quality but fairness-unaware conformal mapping using any standard method. To facilitate the analysis, we first introduce the following two definitions on quantiles.
These definitions are standard and follow the literature on quantile estimation; see [6, 41] and the references therein. ▶ Definition 7. For a distribution Q with supp(Q) ⊂ [−C, C] for a universal constant C > 0, denote the τ-quantile of Q by F_τ(Q) := {t ∈ R : Q((−∞, t]) ≥ τ and Q([t, ∞)) ≥ 1 − τ}. In case F_τ(Q) is an interval, we write t_min(Q) = min F_τ(Q) and t_max(Q) = max F_τ(Q). Q is said to have a τ-quantile of type q ∈ (1, ∞) if there exist constants α_Q > 0 and b_Q > 0 such that for all s ∈ [0, α_Q], Q((t_min − s, t_min)) ≥ b_Q s^{q−1} and Q((t_max, t_max + s)) ≥ b_Q s^{q−1}. Since we are interested not in a single distribution Q on R but in distributions P on X × R, the following definition extends the previous one to such P. ▶ Definition 8. Let p ∈ (0, ∞], q ∈ (1, ∞), and let P be a distribution on X × R. P is said to have a τ-quantile of p-average type q if there exists a set Ω_X ⊂ X such that P(x ∈ Ω_X) = 1, supp(P(· | x)) ⊂ [−C, C] for a universal constant C > 0, P(· | x) has a τ-quantile of type q for all x ∈ Ω_X, and, with α_{P(·|x)} and b_{P(·|x)} as defined in Definition 7, E[|b_{P(·|x)}^{−p}(x) · α_{P(·|x)}^{(1−q)p}(x)|] < ∞. We then have the following result, showing that applying HappyMap comes at nearly no cost in the width of the prediction interval: if the input l0 is close to the optimal quantile function, then the HappyMap algorithm preserves this approximation. ▶ Theorem 9. Suppose the conditions in Theorem 4 hold, and that P, the distribution on X × Y, has a δ-quantile of p-average type q. Assume pq/(p + 1) ≥ 2. Let l*_δ(x) ∈ F_δ(P(· | x)) be any element of the δ-quantile of Y | X = x.
If an input l0 satisfies E_{x∼P_X}[(l0(x) − l*_δ(x))²] ≤ β, then there exists a universal constant C such that the output l_δ of the HappyMap Algorithm 1 satisfies E[(l_δ(x) − l*_δ(x))²] ≤ Cβ. Proof of Theorem 9. Recall that we use the potential function L(l, y) = (1 − δ)l − min(l − y, 0) when applying HappyMap to the current problem. According to the derivations in the proof of Theorem 4, |E[L(l0, y)] − E[L(l*_δ, y)]| ≤ |E[(l0 − l*_δ)((1 − δ) − 1{l*_δ − y < 0})]| + K_p · E[|l0 − l*_δ|²]. Since l*_δ(x) = arg min_l E[L(l, y) | x] and P(l*_δ(x) < y | X = x) = 1 − δ, the first term on the right vanishes, so E[L(l0, y)] − E[L(l*_δ, y)] ≤ K_p · E[|l0(x) − l*_δ(x)|²] ≤ K_p β. Moreover, by the proof of Theorem 4, each iteration does not increase the potential, so E[L(l_δ, y)] ≤ E[L(l0, y)], implying E[L(l_δ, y)] − E[L(l*_δ, y)] ≤ K_p β. Then, by Theorem 2.7 in [41] and the fact that the function L2 norm is dominated by the function L_{pq/(p+1)} norm when pq/(p + 1) ≥ 2, we have E[|l_δ(x) − l*_δ(x)|²] ≤ C′ · (E[L(l_δ, y)] − E[L(l*_δ, y)]) ≤ C′ K_p β. ◀ The above theorem indicates that when the conditional distribution Y | X = x is unimodal and symmetric (similar assumptions have been used in the conformal prediction literature; see, e.g., [32, 33]), if we have a good initialization l0 with approximation error β = o(1), then the prediction interval constructed as in either Corollary 5 or Remark 6 converges to the width of the optimal (1 − δ) prediction interval [l*_{δ/2}(x), l*_{1−δ/2}(x)] that we would use if we knew the distribution Y | X = x exactly.
We note that β = o(1) can potentially be achieved by first splitting the data into two halves: l0 is trained on the first half and post-processed by HappyMap on the second half. Such data-splitting is common in the conformal prediction literature [32, 38, 33]. 5 Application: Target-independent Learning Kim et al. [30] demonstrated that, with an appropriate collection C of propensity reweighting functions, training a (C, α)-multicalibrated predictor on source data D_so yields efficient estimation of statistics of interest on data drawn from previously unseen target distributions D_ta. In this section, we use HappyMap to extend these results to problems in target-independent learning that lie beyond the statistical estimation problems considered in [30]: target-independent prediction and uncertainty quantification. Our approach also yields a fruitful new perspective on analyzing missing data, giving new solutions to this problem. Let us use Z = {so, ta} to indicate sampling from the source and target distributions, respectively. As in [30], we consider a joint distribution D_joint over (x, y, z) triples. The source and target populations D_so and D_ta can be viewed as the joint distribution on (x, y) conditioned on z = so and z = ta, respectively. We similarly use the notation D_X^z and D_Y^z to denote the marginal distributions obtained by conditioning on z ∈ Z. As in [30], we assume that a certain functional relationship between x and y is the same on the distributions D_so and D_ta, which is known as "ignorability" in the causal inference literature and "covariate shift" in machine learning. Formally: ▶ Assumption 10 (Covariate shift assumption). For a triplet (x, y, z) drawn from D_joint, P_{(x,y,z)∼D_joint}(x, y, z) = P(x)P(y | x)P(z | x). Based on Assumption 10, we have P_{(x,y)∼D_so}(y | x) = P_{(x,y)∼D_ta}(y | x).
By convention and without loss of generality, throughout this section we assume a uniform prior over Z = {so, ta}, i.e., P(z = ta) = P(z = so). A common tool for covariate shift studies is propensity score reweighting. The propensity score is defined as e(x) = P(z = so | x), and the propensity score ratio (1 − e(x))/e(x) = P(z = ta | x)/P(z = so | x) can be used to convert an expectation over D_X^so into an expectation over D_X^ta. However, without observing samples from D_ta at training time, we cannot estimate the propensity score ratio. Kim et al. [30] proposed multicalibrating with respect to the class C = {(1 − σ(x))/σ(x) : σ ∈ Σ}, for a family of functions Σ that (it is hoped) captures the propensity score ratios of interest. They showed that multicalibration with respect to this class (in fact, even multi-accuracy with respect to this class) ensures that the resulting predictor provides estimation accuracy competitive with the best propensity reweighting function in the class C, a notion they call universal adaptability. In the realizable case, when the propensity ratio for the unseen target domain is in the class C, f is guaranteed to yield a good estimate of the statistic of interest on the target domain. 5.1 Universally Adaptive Predictors under ℓ2 Loss As a warm-up exercise, which does not require the full power of HappyMap, we consider universal adaptivity of predictors f : X → [0, 1] under ℓ2 loss. This is more complex than statistical estimation under covariate shift, and it requires a more complicated class of functions c(f(x), x). Formally, our goal is to obtain a prediction f with small estimation error E_{(x,y)∼D_ta}[(f(x) − E_{(x,y)∼D_ta}[y | x])²] for Y = [0, 1]. We note that this quantity is commonly used as a measure of the quality of a prediction function.
By Assumption 10 (covariate shift), E_{(x,y)∼D_so}[y | x] = E_{(x,y)∼D_ta}[y | x], yielding E_{(x,y)∼D_ta}[(f(x) − E_{(x,y)∼D_ta}[y | x])²] = E_{(x,y)∼D_ta}[(f(x) − E_{(x,y)∼D_ta}[y | x])(f(x) − y)] = E_{(x,y)∼D_ta}[(f(x) − E_{(x,y)∼D_so}[y | x])(f(x) − y)]. Using the propensity score ratio, we have E_{(x,y)∼D_ta}[(f(x) − E_{(x,y)∼D_so}[y | x])(f(x) − y)] = E_{(x,y)∼D_so}[((1 − e(x))/e(x))(f(x) − E_{(x,y)∼D_so}[y | x])(f(x) − y)]. At training time, we cannot see samples from D_ta, so we cannot estimate e(·) from samples as in the classical reweighting approach. However, following [30], one can use a class {(1 − σ(x))/σ(x) : σ ∈ Σ} to represent the propensity score ratios of interest, and try to find an f : X → [0, 1] with a small error: sup_{σ∈Σ} E_{(x,y)∼D_so}[((1 − σ(x))/σ(x))(f(x) − E_{(x,y)∼D_so}[y | x])(f(x) − y)]. This almost gives us the form we need in Eq. (2); it remains only to deal with E_{(x,y)∼D_so}[y | x]. This is accomplished by introducing a class P = {p : X → [0, 1]} containing, it is hoped, a good approximation of E_{(x,y)∼D_so}[y | x], and trying to find an f : X → [0, 1] such that |sup_{σ∈Σ, p∈P} E_{(x,y)∼D_so}[((1 − σ(x))/σ(x))(f(x) − p(x))(f(x) − y)]| ≤ α, (7) for some small value α. In other words, we take C = {c(f(x), x) = ±((1 − σ(x))/σ(x))(f(x) − p(x)) : σ ∈ Σ, p ∈ P}, where σ and p take values in (0, 1). We define the approximation errors β1(p) = (E_{(x,y)∼D_so}[(p(x) − E_{(x,y)∼D_so}[y | x])⁴])^{1/2} and β2(σ) = (E_{(x,y)∼D_so}[((1 − σ(x))/σ(x) − (1 − e(x))/e(x))⁴])^{1/2}. Although we do not use the full power of HappyMap, since we have stayed with the original mapping (f(x) − y), we can nonetheless apply Theorem 3 to obtain: ▶ Theorem 11.
Assume σ(·) ∈ (c1, c2), where 0 < c1 < c2 < 1. Then we have |c(f(x), x)| ≤ 2(1 − c1)/c1 =: B. Let β = inf_{σ∈Σ, p∈P} √(2B^2 β1(p) + 2β2(σ)). Suppose we run Algorithm 1 with a suitably chosen η = O(α/B); then the algorithm converges in T = O(2B/α^2) iterations, using the potential function L(f(x), y) = (1/2)(f(x) − y)^2 and O = [0, 1], which results in |E_{(x,y)∼D_so}[((1 − e(x))/e(x))(f_t(x) − E_{(x,y)∼D_so}[y|x])(f_t(x) − y)]| ≤ α + β, for the output f_t(·) of Algorithm 1. The proof of Theorem 11 is deferred to Section D.1. ITCS 2023 54:12 HappyMap: A Generalized Multicalibration Method 5.2 Universally Adaptive Conformal Prediction In this section, we apply HappyMap to achieve universally adaptive conformal prediction, wherein we train a conformal prediction model on source data while ensuring validity on unseen target distributions, in the covariate shift setting. Recall that the goal of standard conformal prediction is to obtain a (1 − δ) prediction interval for y. In this section, for simplicity of presentation, we consider the one-sided interval (l(x), ∞) and focus on the case in which the response y is continuous.6 Letting D_ta denote the unseen target distribution, our goal is to construct l(·) such that P_{(x,y)∼D_ta}(l(x) ≤ y) is close to the desired level (1 − δ): |P_{(x,y)∼D_ta}(l(x) ≤ y) − (1 − δ)| ≤ α. Note that the above inequality implies P_{(x,y)∼D_ta}(l(x) ≤ y) ≥ 1 − δ − α, which is the more standard requirement used in the conformal prediction literature. By Assumption 10, one can rewrite this probability: P_{(x,y)∼D_ta}(l(x) ≤ y) = E_{(x,y)∼D_so}[((1 − e(x))/e(x)) 1{l(x) ≤ y}]. Now, in the same spirit as in the previous section (specifically, Eq.
(7)), we seek l(·) such that |sup_{σ∈Σ} E_{(x,y)∼D_so}[((1 − σ(x))/σ(x)) · (1{l(x) ≤ y} − (1 − δ))]| ≤ α, (8) for unseen D_ta and for some α > 0. To this end, we apply HappyMap (Algorithm 1) with C = {c(x) = (1 − σ(x))/σ(x) : σ ∈ Σ} and the mapping s(l, y) = 1{l ≤ y} − (1 − δ). ▶ Theorem 12. Suppose that y | x is a continuous random variable, the conditional density of y given x is upper bounded by K_p, and |E[y]| < C for some universal constant C > 0. Assume E_x[c^2(x)] ≤ B for all c ∈ C, and consider the HappyMap meta-algorithm (Algorithm 1) with s(l, y) = 1{l ≤ y} − (1 − δ). Then for any target distribution with marginal density p_ta(x) over x, letting β satisfy inf_{c∈C} E[(c(x) − p_ta(x)/p_so(x))^2] ≤ β^2, there exists a choice of η = O(α/(K_p B)) such that Algorithm 1 terminates in T = O(LB/α^2) iterations, using the potential function L(l, y) = (1 − δ) · l − min(l − y, 0). The resulting function l_δ satisfies |P_{(x,y)∼D_ta}(l_δ(x) ≤ y) − (1 − δ)| ≤ α + β. The proof of Theorem 12 follows from a combination of the universal adaptivity argument and the analysis of Theorem 4, and is deferred to Section D.2. ▶ Remark 13. To our knowledge, Tibshirani et al. [42] were the first to consider conformal prediction under covariate shift. Their method relies on exact knowledge of (1 − e(x))/e(x). In contrast, we achieve a valid asymptotic coverage guarantee without exact knowledge of (1 − e(x))/e(x), provided something close to this score is in Σ. Finally, we note that we can extend our results to universally adaptive equalized coverage and universal multivalidity if we enrich C.
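The change-of-measure identity P_{(x,y)∼D_ta}(l(x) ≤ y) = E_{(x,y)∼D_so}[((1 − e(x))/e(x)) 1{l(x) ≤ y}] behind Eq. (8) can be checked on synthetic data. In the sketch below, the Gaussian covariate shift and the linear lower bound l(x) are illustrative assumptions, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200_000
x_so = rng.normal(0.0, 1.0, n)
x_ta = rng.normal(0.5, 1.0, n)
y_so = 2.0 * x_so + rng.normal(0.0, 1.0, n)  # same y | x on both domains
y_ta = 2.0 * x_ta + rng.normal(0.0, 1.0, n)

def ratio(x):
    # p_ta(x) / p_so(x) for N(0.5, 1) over N(0, 1): exp(0.5 x - 0.125).
    return np.exp(0.5 * x - 0.125)

# One-sided bound aiming at (1 - delta) = 90% coverage: l(x) = 2x - z_{0.9}.
l = lambda x: 2.0 * x - 1.2816

cov_target = np.mean(l(x_ta) <= y_ta)                     # coverage on target
cov_reweighted = np.mean(ratio(x_so) * (l(x_so) <= y_so)) # reweighted source
```

Because y | x is shared across domains (Assumption 10), the reweighted source quantity tracks the true target coverage without ever touching target samples.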
For example, by including in C both {c(x) = (1 − σ(x))/σ(x) : σ ∈ Σ} and the sub-population indicator functions, we can achieve equalized coverage under covariate shift. 6 As in Section 4, we can handle the two-sided case with two applications of our technique (Corollary 5). 5.3 Prediction with Missing Data s-Happy Multi-calibration can also be applied to the study of missing data, a common issue in statistical data analysis [35, 43]. Suppose we consider learning tasks that predict a response y ∈ [0, 1] based on features x ∈ X. Let X denote the complete data matrix, where the i-th row x_i holds the features of the i-th observation, and let the response vector be Y = (y_1, ..., y_n)^⊤. Suppose the data are i.i.d., with (x_i, y_i) ∼ D. In the missing data problem, some entries of X are missing, so we define x^(1)_i to be the components of x_i that are observed for observation i, and x^(0)_i the components of x_i that are missing. In addition, we let R_i be the indicator of complete cases; that is, R_i = 1 iff x_i is fully observed (i.e. x^(1)_i = x_i). There are three major missing data mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). We say a dataset is MCAR if the distribution of the missingness indicators R is independent of X. For MAR, R depends on X only through its observed components, i.e., R_i ⊥⊥ x^(0)_i | x^(1)_i. For MNAR, the distribution of R may depend on the missing components of X. Our method, described next, applies to all three mechanisms. Suppose our goal is to learn a predictor function f such that the test mean squared error (MSE) E_{(x,y)∼D}[(y − f(x))^2] is small. There are two principal methods in the literature: weighting and imputation.
The weighting approach [34] minimizes the following loss function, evaluated on the complete data: argmin_f Σ_{i=1}^n 1{R_i = 1} w(x_i)(y_i − f(x_i))^2, (9) where w(x_i) approximates P(x_i)/P(x_i | R_i = 1). We adapt the weighting approach to our framework, and use C as a class of functions in which some c(f(x), x) ∈ C approximates (P(x)/P(x | R_i = 1))(E[y|x] − f(x)), since E_{(x,y)∼D}[(y − f(x))^2] can be equivalently written as E_{(x,y)∼D|R=1}[(P(x)/P(x | R_i = 1))(E[y|x] − f(x)) · (y − f(x))]. Instead of minimizing the loss function as in (9), we aim to find an f such that sup_{c∈C} E_{(x,y)∼D|R=1}[c(x)(y − f(x))] ≤ α, where C is a class of functions that approximates (P(x)/P(x | R_i = 1))(E[y|x] − f(x)). Applying HappyMap with such a function class C and s(f, y) = f − y, we obtain the following result. ▶ Theorem 14. Suppose inf_{c∈C} √(E[(c(f(x), x) − (P(x)/P(x | R_i = 1))(E[y|x] − f(x)))^2]) ≤ β, and sup_{c∈C} E[c^2(f(x), x)] ≤ B. Consider the HappyMap meta-algorithm (Algorithm 1) with s(f, y) = f − y. Then there exists a suitable choice of η = O(α/B) such that Algorithm 1 terminates in T = O(LB/α^2) iterations, using the potential function L(f(x), y) = (1/2)(f(x) − y)^2 and O = [0, 1]. The resulting function f satisfies E_{(x,y)∼D}[(y − f(x))^2] ≤ α + β. The proof of Theorem 14 is deferred to Section D.3. ▶ Remark 15. The analysis above considered learning a predictor with small mean squared error in the missing data setting. The same idea can be applied to other settings with missing data, including constructing prediction intervals when part of the observed data are missing, and enforcing the multi-group fairness notions studied in Section 4 with missing data.
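The role of the weights in Eq. (9) can be seen on synthetic missing data. The sketch below (a logistic MAR mechanism and a population-mean estimand, both illustrative choices rather than the paper's setup) compares a naive complete-case average with an inverse-probability-weighted one, whose weights are proportional to P(x)/P(x | R = 1), i.e. 1/P(R = 1 | x):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100_000
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

# MAR mechanism: the observation probability depends on x only.
p_obs = 1.0 / (1.0 + np.exp(-x))
R = rng.binomial(1, p_obs).astype(bool)

mean_true = np.mean(y)
mean_cc = np.mean(y[R])                  # naive complete-case mean: biased
w = 1.0 / p_obs[R]                       # proportional to P(x) / P(x | R = 1)
mean_ipw = np.sum(w * y[R]) / np.sum(w)  # self-normalized weighted estimate
```

The complete-case average over-represents large x (which are observed more often) and drifts well above the population mean, while the reweighted average recovers it.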
6 Discussion and Conclusion In this paper, we propose s-Happy Multi-calibration, a generalization of multi-calibration, unifying many existing algorithmic fairness notions and target-independent learning approaches, and yielding a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data. At a higher level, we advance the field of macro-learning. An interesting future direction is to extend s-Happy Multi-calibration to (low-dimensional) representation functions. As argued recently in the deep learning community, pre-training or representation learning is a powerful tool for improving final prediction accuracy.7 The hope is that a suitable extension of s-Happy Multi-calibration for representation learning will further facilitate target-independent learning by using an extended HappyMap that post-processes the representation learned through pre-training. In addition, our current target-independent learning requires the covariate shift assumption (that is, the conditional distribution y | x is invariant across different environments). An interesting question is to study how to use s-Happy Multi-calibration for target-independent learning when there are other types of invariance, e.g., as in invariant risk minimization (IRM) [2, 8, 9]. The power of the "multi-X"/macro-learning framework, including s-Happy Multi-calibration, stems from choosing a rich collection C. However, a rich C means a harder task for the weak agnostic learner, and increased sample complexity. This motivates our investigation of reduced function class complexity (Section F).
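The HappyMap template that the applications above instantiate — repeatedly audit E[c(f(x), x) s(f(x), y)] over a class C and patch f whenever some c certifies a violation of level α — can be sketched as a toy loop. Everything below (the data, the two-function class, the step size) is illustrative, with the standard mapping s(f, y) = f − y and projection onto O = [0, 1]; it is a simplified stand-in for Algorithm 1, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20_000
x = rng.uniform(-1.0, 1.0, n)
truth = 0.5 + 0.3 * x                  # E[y | x]
y = rng.binomial(1, truth).astype(float)

# Tiny illustrative audit class C of functions c(f(x), x).
cs = [lambda f, x: np.ones_like(x), lambda f, x: x]

def happymap(f, alpha=0.02, eta=0.02, T=500):
    """Patch f until no c in C has |E[c(f(x), x) (f(x) - y)]| above alpha."""
    for _ in range(T):
        vals = [np.mean(c(f, x) * (f - y)) for c in cs]
        i = int(np.argmax(np.abs(vals)))
        if abs(vals[i]) <= alpha:
            break
        # Signed gradient-style patch on the squared-loss potential,
        # projected back onto O = [0, 1].
        f = np.clip(f - eta * np.sign(vals[i]) * cs[i](f, x), 0.0, 1.0)
    return f

f0 = np.full(n, 0.9)                   # badly miscalibrated initial predictor
f1 = happymap(f0)
final_audit = max(abs(np.mean(c(f1, x) * (f1 - y))) for c in cs)
```

Each patch moves f along the direction of the violated audit function, so the squared-loss potential decreases and the residual eventually decorrelates from every c in the class.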
To fully realize the potential of macro-learning it would be helpful to understand, even from the perspective of heuristics, the trade-o\ufb00 between the power of C and the computational complexity of the weak learner." + }, + { + "url": "http://arxiv.org/abs/2211.03994v1", + "title": "Reinforcement Learning with Stepwise Fairness Constraints", + "abstract": "AI methods are used in societally important settings, ranging from credit to\nemployment to housing, and it is crucial to provide fairness in regard to\nalgorithmic decision making. Moreover, many settings are dynamic, with\npopulations responding to sequential decision policies. We introduce the study\nof reinforcement learning (RL) with stepwise fairness constraints, requiring\ngroup fairness at each time step. Our focus is on tabular episodic RL, and we\nprovide learning algorithms with strong theoretical guarantees in regard to\npolicy optimality and fairness violation. Our framework provides useful tools\nto study the impact of fairness constraints in sequential settings and brings\nup new challenges in RL.", + "authors": "Zhun Deng, He Sun, Zhiwei Steven Wu, Linjun Zhang, David C. Parkes", + "published": "2022-11-08", + "updated": "2022-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CY" + ], + "main_content": "Introduction Decision making systems trained with real-world data are deployed ubiquitously in our daily life, for example, in regard to credit, education, and medical care. However, those decision systems may demonstrate discrimination against disadvantaged groups due to the biases in the data [16]. In order to mitigate this issue, many have proposed to impose fairness constraints [16,20] on the decision, such that certain statistical parity properties are achieved. Despite the fact that fair learning has been extensively studied, most of this work is in the static setting without considering the sequential feedback e\ufb00ects of decisions. 
At the same time, in many scenarios, algorithmic decisions may incur changes in the underlying features or quali\ufb01cation status of individuals, which further feeds back to the decision making process; for example, banks\u2019 decision may induce borrowers to react, for example changing their FICO score by closing credit cards. When there exist sequential feedback e\ufb00ects, even ignoring one-step feedback e\ufb00ects can harm minority groups [30]. In response, [36] advocates to study a discrete-time sequential decision process, where responses to the decisions made at each time step are accompanied by changes in the features and quali\ufb01cations of the population in the next time step. In particular, they study and show the drawback of myopic optimization together with requiring fairness at each time step, which we refer to as stepwise fairness constraints. While imposing stepwise fairness constraints is a natural way to incorporate fairness into a Markov decision process (MDP), it makes more sense from the perspective of a decision maker to consider the long-term reward. Thus, in this paper, we study stepwise fairness constraints together with optimal, sequential-decision making, taking the perspective of a forward-looking decision maker. On one hand, our work could be viewed as a Fair Partially Observable Markov Decision Process (F-POMDP) framework to promote fair sequential decision making 1. On the other hand, our work also provides a computational tool for studying the quantities of interests, such as well-being of groups, in a natural sequential decision making setting. In particular, we initiate both the theoretical and experimental studies of reinforcement learning, i.e. optimizing long-term reward in a partially observable Markov decision process (POMDP), under stepwise fairness constraints. 
We consider an episodic setting, which models for example economic and societal activities that exhibit seasonality; e.g., new mortgage applicants who apply for loans from banks more often in the spring and summer season every year, or graduate school admission, which usually starts in \u2217Harvard University, zhundeng@g.harvard.edu. \u2020Harvard University, he sun@g.harvard.edu. \u2021Carnegie Mellon University, zstevenwu@cmu.edu. \u00a7Rutgers University, linjun.zhang@rutgers.edu. \u00b6Harvard University, parkes@eecs.harvard.edu. 1The problem of long-term well-being of groups brought up in [30] is due to a misalignment of the institute\u2019s reward function and well-being measurements, and cannot be solved in any existing literature. It also doesn\u2019t prevent stepwise fairness constraints from being a way to build fair MDP. 1 arXiv:2211.03994v1 [cs.LG] 8 Nov 2022 \fthe autumn and completes around December every year. Similar to [30, 36], we mainly consider two types of fairness notions, demographic parity and equalized opportunity, and for a POMDP framework that has discrete actions and a discrete state space. These are illustrative of other stepwise fairness constraints that could be adopted. We take a model-based learning approach, and provide practical optimization algorithms that enjoy strong theoretical guarantees in regard to policy optimality and fairness violations as the number of episodes increases. We summarize our contributions as below: 1. Theoretically, we demonstrate how to use sampled trajectories of individuals to solve RL with fairness constraints and provide theoretical guarantees to ensure vanishing regrets in reward and fairness violation as the number of episodes increases. 2. Experimentally, we implement the \ufb01rst algorithm for tabular episodic RL with stepwise fairness constraints. 
1.1 Related work With the rapid development of machine learning [11, 12, 14, 15, 18, 22, 23], there is increasing interest in the study of decision making problems in the context of people [2, 6, 19, 32, 33]. Hardt et al. [19] model a classification problem as a sequential game (Stackelberg competition) between two players, where the first player has the ability to commit to his strategy before the second player responds. They characterize the equilibrium and obtain near-optimal, computationally efficient learning algorithms. Shavit and Moses [32] study an algorithmic decision-maker who incentivizes people to act in certain ways in order to receive a better decision. Ball et al. [2] study a model of predictive scoring, where there is a sender agent being scored, a receiver agent who wants to predict the quality of the sender, and an intermediary who observes multiple, potentially mutable features of the sender. There is also a growing literature on algorithmic fairness [4, 5, 9, 10, 13, 16, 29, 30, 35]. Liu et al. [30], for example, characterize the delayed impact of standard fairness criteria under a feedback model with a single period of adaptation. They use a one-step feedback model to capture the sequential dynamics of the environment. However, these papers do not consider fairness in a more general sequential decision process. There is also a line of literature on fair bandits, but not on more general (PO)MDP problems [21, 25]. Fairness considerations in reinforcement learning have also gained significant attention [7, 12, 14, 17, 24, 26, 31, 34]. In particular, Creager et al. [7] use causal directed acyclic graphs as a unifying framework for fairness. D'Amour et al. [17] use simulation to study the fairness of algorithms and show that neither static nor single-step analyses are enough to understand the long-term consequences of a decision system. Jabbari et al.
[24] de\ufb01ne fairness constraints to require that an algorithm never prefers one action over another if the long-term reward of choosing the latter action is higher, whereas we consider groupwise notions of fairness. Mandal et al. [31] adopt a welfare-based, axiomatic approach, and give a regret bound for the Nash Social, Minimum and generalized Gini Welfare. In contrast with our work, the fairness concepts are not group-based but rather based on the value contributed from di\ufb00erent agents in the system. Similar to this paper, Zhang et al. [37] study the dynamics of population quali\ufb01cation and algorithmic decisions under a partially observed Markov decision problem setting, but whereas they only consider myopic policies, we frame this as a general reinforcement learning policies. 2 Preliminaries We consider a binary decision setting, with training examples that consist of triplets (x, y, \u03d1), where x \u2208X is a feature vector, \u03d1 \u2208\u039b is a protected group attribute such as race or gender, and the label y \u2208{0, 1}.For simplicity, we only consider binary sensitive attributes \u039b = {\u03b1, \u03b2}, but our method can also be generalized to deal with multiple sensitive attributes (see Appendix B). For k \u2208N+, we use [k] to denote the set {1, 2, \u00b7 \u00b7 \u00b7 , k}. Based on feature x, a decision maker makes a decision a \u2208A = {0, 1} (e.g., make a loan or not). We will also denote an individual\u2019s state as s = (x, y), and let S = X \u00d7 Y. After a decision is made, a possibly group-dependent reward, which may be stochastic, r\u03d1 : (s, a) 7\u2192R is obtained by the decision maker. A 2 \fconcrete example of a reward function, with r+, r\u2212> 0, is r\u03d1(s, a) = \uf8f1 \uf8f2 \uf8f3 r+, if y = 1, a = 1; \u2212r\u2212, if y = 0, a = 1; 0, if a = 0. Here, the decision maker gains r+ by accepting a quali\ufb01ed individual and incurs a cost r\u2212by accepting an unquali\ufb01ed individual. 
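The piecewise reward example above translates directly into code. A minimal sketch, with the state represented as a pair s = (x, y) and the values of r+ and r− as illustrative defaults:

```python
def reward(s, a, r_plus=1.0, r_minus=0.5):
    """Decision-maker reward: gain r+ for accepting a qualified individual
    (y = 1, a = 1), pay r- for accepting an unqualified one (y = 0, a = 1),
    and receive 0 on rejection (a = 0)."""
    x, y = s  # the feature x does not enter this particular reward
    if a == 0:
        return 0.0
    return r_plus if y == 1 else -r_minus
```

Keeping the reward a function of (s, a) rather than of x alone mirrors the setup in which qualification status y is hidden from the policy but still drives the decision maker's payoff.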
2.1 Sequential setting Our model as a partially observable Markov decision process (POMDP) mainly follows [36], but they consider a fair, myopically-optimizing policy while we consider long-term rewards as in typical RL settings. Following [30, 36], the decision maker is interested in the expected reward achieved across time for a random individual drawn from the population. Each random individual has their group membership sampled according to p\u03b1 = P(\u03d1 = \u03b1) and p\u03b2 = P(\u03d1 = \u03b2), and interacts with the decision maker over multiple time steps. At each time step h, the sampled individual with attribute \u03d1 has feature x\u03d1 h = x\u03d1 \u2208X along with a hidden quali\ufb01cation status y\u03d1 h = y\u03d1 \u2208{0, 1}. One example is that the feature x\u03d1 h is determined by the hidden quali\ufb01cation status with x\u03d1 h \u223cp(\u00b7|y\u03d1). We call s\u03d1 h = (x\u03d1 h, y\u03d1 h) the state of the individual at time h. The initial state s\u03d1 1 is sampled from p\u03d1. At each time step h, the decision maker adopts a decision a\u03d1 h based on the observed feature x\u03d1 by following a group-dependent policy \u03c0\u03d1 h(x\u03d1), i.e. a\u03d1 h \u223c\u03c0\u03d1 h(x\u03d1 h), where \u03c0\u03d1 h : X \u2192\u2206(A), and \u2206(A) is the set of distributions on A.2 The decision maker receives reward r\u03d1(s\u03d1 h, a\u03d1 h), and r\u03d1 \u2208[l, u], where \u2212\u221e< l \u2212u < \u221e. Without loss of generality, let us assume r\u03d1 \u2208[0, 1]. The mean of stochastic reward function is denoted by r\u2217\u03d1. After the decision is made, the individual is informed of the decision and their quali\ufb01cation status, and their quali\ufb01cation status, y\u03d1 h+1, and features x\u03d1 h+1, may then undergo a stochastic transition. 
We assume that this transition follows a time-invariant but group-dependent transition kernel, which we denote as p\u2217\u03d1(s\u03d1 h+1|s\u03d1 h, a\u03d1 h), where p\u2217\u03d1 : S \u00d7 A 7\u2192\u2206(S), and \u2206(S) is the set of distributions on S. As explained in [36], in addition to thinking about a single, randomly chosen individual repeatedly interacting with the decision maker, this also immediately models a \ufb01nite population of randomly chosen individuals, some from each group, and with all individuals in a group subject to the same, group-contingent decision policies. In addition, we have a reward function pair, (r\u03b1, r\u03b2), which may be stochastic. 2.2 Fair policies The goal of the decision maker is to \ufb01nd a policy, \u03c0 = (\u03c0\u03b1, \u03c0\u03b2), to maximize the total, expected reward over an episode for a random individual, while satisfying stepwise fairness constraints, i.e. imposing a certain type of fairness constraint on states and actions for each time step. A random individual in group \u03d1, and with decision policy \u03c0\u03d1, follows stochastic state, action, reward sequence s\u03d1 1, a\u03d1 1, r\u03d1(s\u03d1 1, a\u03d1 1), s\u03d1 2, a\u03d1 2, r\u03d1(s\u03d1 2, a\u03d1 2), s\u03d1 3, \u00b7 \u00b7 \u00b7 . Let E\u03c0,p[\u00b7], P\u03c0,p[\u00b7] be the expectation and probability of a random variable de\ufb01ned with respect to this stochastic process. We denote the expected reward for a random individual at time step h as, R\u2217 h(p\u2217, \u03c0) = X \u03d1\u2208{\u03b1,\u03b2} p\u03d1 \u00b7 E\u03c0\u03d1,p\u2217\u03d1[r\u2217\u03d1(s\u03d1 h, a\u03d1 h)]. Note here that E\u03c0\u03d1,p\u2217\u03d1[\u00b7] refers to the expected value for an individual in group \u03d1, i.e., conditioned on the individual being sampled in this group. 
Our ideal goal is to obtain the policy \u03c0\u2217that solves the following 2Following [36], we use group-dependent policies so that the formulation can be more generalized. Our technique can also be used for group-independent policies if this is required. 3 \foptimization problem: max \u03c0 H X h=1 R\u2217 h(p\u2217, \u03c0) s.t. \u2200h \u2208[H], faircon({\u03c0\u03d1, p\u2217\u03d1, s\u03d1 h, a\u03d1 h}\u03d1={\u03b1,\u03b2}), where faircon corresponds to a particular fairness concept. We consider two group fairness de\ufb01nitions, and also discuss how the approach can be extended to additional fairness concepts (see Section 6.2). Speci\ufb01cally, we consider the following two fairness concepts: (i) RL with demographic parity (DP). For this case, faircon is P\u03c0\u03b1,p\u2217\u03b1[a\u03b1 h = 1] = P\u03c0\u03b2,p\u2217\u03b2[a\u03b2 h = 1], which means at each time step h, the decision for individuals from di\ufb00erent groups is statistically independent of the sensitive attribute. (ii) RL with equalized opportunity (EqOpt). For this case, faircon is P\u03c0\u03b1,p\u2217\u03b1[a\u03b1 h = 1|y\u03b1 h = 1] = P\u03c0\u03b2,p\u2217\u03b2[a\u03b2 h = 1|y\u03b2 h = 1], which means at each time step h, the decision for a random individual from each of the two di\ufb00erent groups is statistically independent of the sensitive attribute conditioned on the individual being quali\ufb01ed. In each case, P\u03c0\u03b1,p\u2217\u03b1[\u00b7] refers to the probability for an individual in group \u03d1, i.e., conditioned on the individual being sampled in this group. The above optimization problems are feasible under technical Assumption 4.1, presented later, and we assume feasibility throughout the paper (see also Appendix C). Remark 2.1. 
Since we are modelling the behavior of a randomly drawn individual from the population, the objective should be viewed as finding a policy pair π = (π^α, π^β) that optimizes the long-term reward of the decision maker while ensuring fairness over the population. 2.3 Episodic RL protocol This is a learning setting, and we study an episodic sequential decision setting where a learner repeatedly interacts with an environment across K > 0 independent episodes. Such a scenario is natural in a number of practical settings, as discussed in the introduction. Throughout the paper, we consider the tabular case, i.e., we assume finite cardinality for S and A.3 Without loss of generality, we can assume that the initial state of an episode is a fixed state s0 (the next state can be sampled randomly, and state s0 does not contribute to any reward or fairness considerations; see [3] for a detailed explanation). At episode k ∈ [K], denote the policy pair π_k = (π^α_k, π^β_k) = {(π^α_{k,h}, π^β_{k,h})}_{h=1}^H, where H is the horizon. An individual sampled from group G^ϑ starts from state s^ϑ_{k,1}; thus, we can consider the starting state pair (s^α_{k,1}, s^β_{k,1}) = (s^α_0, s^β_0) for the trajectories of the different groups (the initial state depends on the group from which the individual is drawn). At each time step h ∈ [H], the decision maker selects an action a^ϑ_{k,h} ∼ π^ϑ_{k,h}(x^ϑ_{k,h}). Here, although the policy only uses the x component, it is convenient to write it as a function of s. The decision maker receives reward r^ϑ(s^ϑ_{k,h}, a^ϑ_{k,h}), and the state of the individual at the next time step is drawn according to s^ϑ_{k,h+1} ∼ p^{∗ϑ}(·|s^ϑ_{k,h}, a^ϑ_{k,h}). Remark 2.2. The policy is based only on the feature vector x. However, we are able to access y in the training data.
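The stepwise constraints of Section 2.2 can be audited empirically on sampled per-step decisions. A minimal sketch (the array layout — one vector of actions and one of qualification labels per group at a given time step — is an illustrative assumption):

```python
import numpy as np

def dp_gap(a_alpha, a_beta):
    """Demographic parity gap at one step:
    |P(a = 1 | group alpha) - P(a = 1 | group beta)|."""
    return abs(np.mean(a_alpha) - np.mean(a_beta))

def eqopt_gap(a_alpha, y_alpha, a_beta, y_beta):
    """Equalized opportunity gap: the same quantity restricted to
    qualified individuals (y = 1) in each group."""
    a_alpha, y_alpha = np.asarray(a_alpha), np.asarray(y_alpha)
    a_beta, y_beta = np.asarray(a_beta), np.asarray(y_beta)
    return abs(np.mean(a_alpha[y_alpha == 1]) - np.mean(a_beta[y_beta == 1]))

gap_dp = dp_gap([1, 1, 0, 0], [1, 0, 0, 0])  # 0.25
```

Estimates like these, computed per time step h, are exactly the empirical counterparts of the constraint violations C^DP_reg and C^EqOpt_reg tracked later in the paper.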
3 Learning Algorithms Before we formally state our algorithms, we need to gather data to estimate the unknown quantities such as reward function mean pair r\u2217= (r\u2217\u03b1, r\u2217\u03b2) and transition probability pair p\u2217. In addition, we will incorporate an exploration bonus to further modify the estimation. 3In the example of credit score and loan payment in [30], the credit score is discretized and served as X here and the action space A and quali\ufb01cation status space Y are both {0, 1}. 4 \fData gathering and estimation. In order to analyze the policy e\ufb00ect on the population, we model the behavior of a randomly drawn individual who interacts with the environment across H steps. Here, we demonstrate how to aggregate individuals\u2019 data for each episode and estimate quantities of interest. Speci\ufb01cally, at episode k \u2208[K], for each group \u03d1, we assume n\u03d1 k individuals are drawn, according to p\u03b1 and p\u03b2. Throughout the paper, we assume n\u03d1 k \u22651, for each \u03d1 and k. A decision is made independently for each individual at each step, using a group-speci\ufb01c policy, leading to a stochastic transition in the state of the individual. In Appendix D, we will further discuss how to gather data when allowing individuals who opt in or out during an episode. We use the counting method to obtain estimates of the statistics of interest. For the i-th individual in episode k, their status and action at time step h is denoted as s\u03d1,(i) k,h and a\u03d1,(i) k,h . 
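The counting method described above can be sketched as follows. The trajectory layout (a list of (s, a, r, s') tuples per episode, pooled over the sampled individuals of one group) is an illustrative assumption, and the max{1, ·} convention guards the divisions for unvisited pairs:

```python
import numpy as np

def estimate_model(trajectories, n_states, n_actions):
    """Tabular counts -> empirical transition and mean-reward estimates.

    trajectories: list of episodes, each a list of (s, a, r, s_next) tuples.
    Returns (visit counts N, transition estimate p_hat, reward estimate r_hat).
    """
    N = np.zeros((n_states, n_actions))
    N_trans = np.zeros((n_states, n_actions, n_states))
    R_sum = np.zeros((n_states, n_actions))
    for episode in trajectories:
        for s, a, r, s_next in episode:
            N[s, a] += 1
            N_trans[s, a, s_next] += 1
            R_sum[s, a] += r
    N = np.maximum(N, 1)               # the max{1, .} convention
    p_hat = N_trans / N[:, :, None]
    r_hat = R_sum / N
    return N, p_hat, r_hat

trajs = [[(0, 1, 1.0, 1), (1, 0, 0.0, 0)], [(0, 1, 0.0, 0)]]
N, p_hat, r_hat = estimate_model(trajs, n_states=2, n_actions=2)
```

In the paper's notation this estimator would be run once per group ϑ, pooling all individuals' trajectories from episodes 1 through k − 1.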
For ϑ ∈ {α, β}, let p_k = (p^α_k, p^β_k) and r_k = (r^α_k, r^β_k), and define N^ϑ_k(s, a) = max{1, Σ_{t∈[k−1], h∈[H], i∈[n^ϑ_t]} 1{s^{ϑ,(i)}_{t,h} = s, a^{ϑ,(i)}_{t,h} = a}}, p^ϑ_k(s′|s, a) = (1/N^ϑ_k(s, a)) Σ_{t∈[k−1], h∈[H], i∈[n^ϑ_t]} 1{s^{ϑ,(i)}_{t,h} = s, a^{ϑ,(i)}_{t,h} = a, s^{ϑ,(i)}_{t,h+1} = s′}, and r̂^ϑ_k(s, a) = (1/N^ϑ_k(s, a)) Σ_{t∈[k−1], h∈[H], i∈[n^ϑ_t]} r^ϑ(s, a) 1{s^{ϑ,(i)}_{t,h} = s, a^{ϑ,(i)}_{t,h} = a}. Exploration bonus method. In RL, it is standard to introduce optimism in order to encourage exploration of under-explored states. Specifically, for ϑ ∈ {α, β}, we add a bonus term b̂^ϑ_k to the estimated reward function r̂^ϑ_k, obtaining r^ϑ_k(s, a) = r̂^ϑ_k(s, a) + b̂^ϑ_k(s, a), where b̂^ϑ_k(s, a) assigns larger values to under-explored pairs (s, a). We specify how to choose b̂^ϑ_k in Section 4.1, and denote R_{k,h}(p, π) = Σ_{ϑ∈{α,β}} p^ϑ · E_{π^ϑ,p^ϑ}[r^ϑ_k(s^ϑ_{k,h}, a^ϑ_{k,h})]. For the purpose of analysis, we treat the p^ϑ's as known constants for simplicity; for example, these proportions might be provided by census data. Practical optimization for DP. In reality, since we do not have access to p^∗ and r^{∗ϑ}, we need to solve a surrogate optimization problem and hope that its optimal policy performs similarly to the ideal optimal policy under suitable performance criteria. In the following, we provide a simple algorithm for RL with demographic parity. It is based on optimization under p^ϑ_k and r^ϑ_k: max_{π∈Π_k} Σ_{h=1}^H R_{k,h}(p_k, π), s.t.
\u2200h \u2208[H], |P\u03c0\u03b1,p\u03b1 k (a\u03b1 k,h = 1) \u2212P\u03c0\u03b2,p\u03b2 k(a\u03b2 k,h = 1)| \u2264\u02c6 ck,h, where we have \u03a0k = {(\u03c0\u03b1, \u03c0\u03b2) : \u03c0\u03d1(a = 1|x) \u2265\u03b7DP k , \u2200x, a, h, \u03d1}. \u03a0k can ensure the reachability from any x to the decision a = 1. Here {\u03b7DP k }k is a sequence of real numbers and {\u02c6 ck,h}k,h are relaxations. Intuitively, if we set \u03b7DP k and \u02c6 ck,h to be decreasing and vanishing as k increases, we would expect the above optimization to approach the ideal optimization problem as k increases. We formalize this in Section 4.1. Practical optimization for EqOpt. For equalized opportunity, and similar to the case of DP, we have max \u03c0\u2208\u03a0k H X h=1 Rk,h(pk, \u03c0), s.t. \u2200h \u2208[H], |P\u03c0\u03b1,p\u03b1 k (a\u03b1 k,h = 1|y\u03b1 k,h = 1) \u2212P\u03c0\u03b2,p\u03b2 k(a\u03b2 k,h = 1|y\u03b2 k,h = 1)| \u2264\u02c6 dk,h, where we have \u03a0k = {(\u03c0\u03b1, \u03c0\u03b2) : \u03c0\u03d1(a = 1|x) \u2265\u03b7EqOpt k , \u2200x, a, h, \u03d1}. Here {\u03b7EqOpt k }k is a sequence of real numbers and { \u02c6 dk,h}k,h are relaxations. We formalize this in Section 4.1. 5 \fAlgorithm. We can solve these optimization problems through occupancy measures, and they each become di\ufb00erent kinds of quadratically constrained linear programs (QCLP) (see Appendix A). Although QCLP is generally NP-hard, many methods based on relaxations and approximations such as semi-de\ufb01nite program (SDP) have been extensively discussed. We use Gurobi to solve these relaxed optimization problems. 4 Theoretical Analysis In order to track the performance, we consider the following regrets. 
For policy pairs {\u03c0k}K k=1, for reward regret in episode k, we track: Rtype reg (k) = 1 H H X h=1 \u0010 R\u2217 h(p\u2217, \u03c0\u2217type) \u2212 k X t=1 R\u2217 h(p\u2217, \u03c0k) \u0011 , where \u03c0\u2217type is the optimal policy pair of RL with constraint types mentioned above and type \u2208{DP, EqOpt}. For simplicity, we will omit the supscript \u201ctype\u201d when it is clear from the context and use \u03c0\u2217= (\u03c0\u2217\u03b1, \u03c0\u2217\u03b2). For the fairness constraints, we consider the violation for each type of constraint in episode k as the following, CDP reg (k) = 1 H H X h=1 \f \fP\u03c0\u03b1 k ,p\u2217\u03b1(a\u03b1 k,h = 1) \u2212P\u03c0\u03b2 k ,p\u2217\u03b2(a\u03b2 k,h = 1) \f \f, and CEqOpt reg (k) = 1 H H X h=1 \f \fP\u03c0\u03b1 k ,p\u2217\u03b1(a\u03b1 k,h = 1 \f \fy\u03b1 k,h = 1) \u2212P\u03c0\u03b2 k ,p\u2217\u03b2(a\u03b2 k,h = 1|y\u03b2 k,h = 1) \f \f. The theoretical guarantees hold for any episode, not just the last episode. 4.1 Choice of various of quantities In this part, we provide a formal theoretical guarantee for the performance of our algorithms under the previously mentioned criteria with suitably chosen quantities {\u02c6 b\u03d1 k}k, {\u02c6 ck}k, and { \u02c6 dk}k, for each episode k. Q and V functions. Two of the mostly used concepts in RL are Q and V functions. Speci\ufb01cally, Q functions track the expected reward when a learner starts from state s \u2208S. Meanwhile, V functions are the corresponding expected Q functions of the selected action. For a reward function r and a transition function p, the Q and V functions are de\ufb01ned as: Q\u03c0,p r (s, a, h) = r(s, a) + X s\u2032\u2208S p(s\u2032|s, a)V \u03c0,p r (s\u2032, h + 1), V \u03c0,p r (s, h) = Ea\u223c\u03c0(\u00b7|s)[Q\u03c0,p r (s, a, h)], where we set V \u03c0,p r (s, H + 1) = 0. Choice of \u02c6 b\u03d1 k. For {\u02c6 b\u03d1 k}k, similar to [3], we need {\u02c6 b\u03d1 k}k to be valid. De\ufb01nition 4.1 (Validity). 
A bonus \u02c6 b\u03d1 k is valid for episode k if for all (s, a) \u2208S \u00d7 A and h \u2208[H], \f \f \f\u02c6 r\u03d1 k(s, a) \u2212r\u2217\u03d1 k (s, a) + X s\u2032\u2208S \u0010 p\u03d1 k(s\u2032|s, a) \u2212p\u2217\u03d1(s\u2032|s, a) \u0011 V \u03c0\u2217\u03d1,p\u2217\u03d1 r\u2217\u03d1 (s\u2032, h + 1) \f \f \f \u2264\u02c6 b\u03d1 k(s, a). Following the exploration-bonus setting [3], we set \u02c6 b\u03d1 k = min n 2H, 2H q 2 ln(16SAHk2/\u03b4) N \u03d1 k (s,a) o , and have the following lemma. Lemma 4.1. With probability at least 1 \u2212\u03b4, for \u02c6 b\u03d1 k(s, a) = min n 2H, 2H s 2 ln(16SAHk2/\u03b4) N \u03d1 k (s, a) o , the bonus \u02c6 b\u03d1 k(s, a) is valid for every episode k simultaneously. Choice of \u02c6 ck,h and \u02c6 dk,h. For {\u02c6 ck,h}k,h and { \u02c6 dk,h}k,h, we require them to be compatible. De\ufb01nition 4.2 (Compatibility). The sequence {\u02c6 ck,h}k,h is compatible if for all h \u2208[H], k \u2208[K], |P\u03c0\u2217\u03b1,p\u03b1 k (ah = 1) \u2212P\u03c0\u2217\u03b2,p\u03b2 k(ah = 1)| \u2264\u02c6 ck,h. The sequence { \u02c6 dk,h}k,h is compatible if for all h \u2208[H], k \u2208[K], |P\u03c0\u2217\u03b1,p\u03b1 k (ah = 1|yh = 1) \u2212P\u03c0\u2217\u03b2,p\u03b2 k(ah = 1|yh = 1)| \u2264\u02c6 dk,h. Brie\ufb02y speaking, we hope that when p\u2217\u03d1 is replaced by p\u03d1 k, \u02c6 ck,h and \u02c6 dk,h can still control the fairness constraints violation. Let us use S to denote |S| and A to denote |A|. Lemma 4.2. Denote N \u03d1,min k = mins,a N \u03d1 k (s, a). For any {\u03f5k}K k=1, with probability at least 1 \u2212\u03b4, we take \u02c6 ck,h = X \u03d1\u2208{\u03b1,\u03b2} H s 2S ln(16SAHk2/(\u03f5k\u03b4)) N \u03d1,min k + 2\u03f5kHS. Then, the sequence {\u02c6 ck,h}k,h is compatible. Similarly, for \u02c6 dk,h, we have the following lemma. Lemma 4.3. Denote p\u03d1,min k = mins,a p\u03d1 k(y = 1|s, a).
For any {\u03f5k}K k=1, with probability at least 1 \u2212\u03b4, we take \u02c6 dk,h = X \u03d1\u2208{\u03b1,\u03b2} 3H r 2S ln(32SAk2/(\u03f5k\u03b4)) N \u03d1,min k + 3\u03f5kHS p\u03d1,min k \u0012 p\u03d1,min k \u2212 r 4 ln 2+2 ln(4SAk2/\u03b4) N \u03d1,min k \u0013 if p\u03d1,min k > r 4 ln 2+2 ln(4SAk2/\u03b4) N \u03d1,min k ; Otherwise, we set \u02c6 dk,h = 1. Then, the sequence { \u02c6 dk,h}k,h is compatible. 4.2 Main theorems In this subsection, we provide our formal theoretical guarantees for the reward regret and fairness constraints violation. We require technical Assumption 4.1. Assumption 4.1. (a). For all (s, a, s\u2032) \u2208S \u00d7 A \u00d7 S, there exists a universal constant C > 0, such that p\u2217\u03d1(s\u2032|s, a) \u2265C for \u03d1 = {\u03b1, \u03b2}. (b). For all (x, a) \u2208X \u00d7 A, there exists a universal constant \u02dc C, such that \u03c0\u2217\u03d1(a|x) \u2265\u02dc C. Assumption 4.1 implies irreducibility of the Markov process and ensures feasibility of our optimization problems (see Appendix C). Recall that at episode k \u2208[K], for each group \u03d1, we have n\u03d1 k \u22651 individuals drawn for each group \u03d1. And as a concrete exempli\ufb01ed choice of \u03b7k and \u03f5k, we take \u03b7k = k\u22121 3 and \u03f5k = (kHS)\u22121. 7 \fReward regret. For the reward regret, for either demographic parity or equalized opportunity, we can provide the following theoretical guarantee. Recall for two positive sequences {aj} and {bj}, we write aj = O(bj) if limj\u2192\u221e(aj/bj) < \u221e. Theorem 4.1. For type \u2208{DP, EqOpt}, with probability at least 1 \u2212\u03b4, there exists a threshold T = O \u0012\u0010 H ln(SA/\u03b4) nk \u00113\u0013 , such that for all k \u2265T, Rtype reg (k) = O \u0010 Hk\u22121 3 p HS ln(S2AH2k3/\u03b4) \u0011 . 
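The quantities above are all closed-form, so they can be computed directly. Here is a minimal sketch (function names and toy values are ours, not the paper's code) that plugs in the constants exactly as written in Lemma 4.1 and Lemma 4.2, together with the step-averaged DP violation metric C^DP_reg:

```python
import math

def bonus(S, A, H, k, delta, N_sa):
    """Exploration bonus of Lemma 4.1:
    b_k(s,a) = min(2H, 2H * sqrt(2 ln(16 S A H k^2 / delta) / N_k(s,a)))."""
    return min(2 * H, 2 * H * math.sqrt(2 * math.log(16 * S * A * H * k**2 / delta) / N_sa))

def c_hat(S, A, H, k, delta, eps_k, N_min):
    """Relaxation of Lemma 4.2, summed over the two groups; N_min maps each
    group theta to its minimal visit count N^{theta,min}_k:
    c_{k,h} = sum_theta H * sqrt(2 S ln(16 S A H k^2 / (eps_k delta)) / N^{theta,min}_k) + 2 eps_k H S."""
    log_term = math.log(16 * S * A * H * k**2 / (eps_k * delta))
    return sum(H * math.sqrt(2 * S * log_term / n) for n in N_min.values()) + 2 * eps_k * H * S

def dp_violation(p_alpha, p_beta):
    """C^DP_reg(k) = (1/H) * sum_h |P^alpha(a_h = 1) - P^beta(a_h = 1)|,
    given the per-step action probabilities of the two groups."""
    return sum(abs(a - b) for a, b in zip(p_alpha, p_beta)) / len(p_alpha)

S, A, H, delta = 5, 2, 8, 0.05
# with few visits the bonus is capped at 2H; more data shrinks it
assert bonus(S, A, H, k=1, delta=delta, N_sa=1) == 2 * H
assert bonus(S, A, H, k=4, delta=delta, N_sa=10_000) < bonus(S, A, H, k=4, delta=delta, N_sa=100)

# the exemplified choice eps_k = (k H S)^{-1} from Section 4.1
k = 4
eps_k = 1 / (k * H * S)
print(c_hat(S, A, H, k, delta, eps_k, {"alpha": 500, "beta": 800}))
```

With this choice, one can verify numerically that the relaxation shrinks as the minimal visit counts grow, matching the intuition that the constraint tightens with more data.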
By Theorem 4.1, our algorithms for each of the group fairness notions can ensure vanishing reward regrets when k goes to in\ufb01nity, which implies that the performance in regard to regret reward improves as the number of episodes increases. Fairness constraints violation. For fairness violation, we have the following Theorem 4.2. Theorem 4.2. For type \u2208{DP, EqOpt}, with probability at least 1 \u2212\u03b4, there exists a threshold T = O \u0012\u0010 H ln(SA/\u03b4) nk \u00113\u0013 , such that for all k \u2265T, Ctype reg (k) \u2264O \u0010 k\u22121 3 p SH ln(S2HAk3/\u03b4) \u0011 . By Theorem 4.2, our algorithms for each of the group fairness notions can ensure vanishing fairness violation when k goes to in\ufb01nity, which implies the performance in regard to fairness violations improves as the number of episodes increases. 5 Experiments Settings. We take H = 8 for each episode, and update our policy every k = 2l episodes, where l = 3, 4, . . . , 18. We update our policy in this non-linear way for reasons of computational cost, i.e., to reduce the number of optimization problems we need to solve while still collecting a large quantity of data. We choose the relaxation parameters \u02c6 ck,h and \u02c6 dk,h as de\ufb01ned in the previous sections. After each policy update, we use 8,000 episodes to evaluate the new policy. We also produce con\ufb01dence intervals by repeating each experiment \ufb01ve times. Estimation and Optimization Process. We estimate transition probabilities and rewards using the counting method outlined above. These estimates are used as the input for our algorithm. The optimization problems are non-convex, and the detailed optimization formulations are included in Appendix F. We use Gurobi to solve, and set the optimality-value tolerance, feasibility tolerance, and solving time limit as 1e-3, 1e-6, and 300 seconds, respectively. We use the barrier algorithm for all problems. FICO Data. 
We adapt FICO score data4 to form our data generating process. The FICO score dataset contains data from non-Hispanic White and Black cohorts; scores take values j \u2208X = {0, . . . , 4}, which represent the normalized scores {0, 25, 50, 75, 100} and can be viewed as the feature. In particular, the FICO data provides empirical distributions for credit scores of di\ufb00erent sensitive groups, i.e. \u02c6 PFICO(x\u03d1 = j), along with an empirical quali\ufb01cation distribution conditioned on each score level, \u02c6 PFICO(y\u03d1 = y|x\u03d1 = j), where y \u2208{0, 1}. We simulate the data generating process according to the empirical distributions stated above. For the population dynamics, we model the initial score distribution as P(x\u03d1 0 = j) = \u02c6 PFICO(x\u03d1 0 = j), according to the empirical FICO distribution. For the initial quali\ufb01cation distribution conditioned on score, P(y\u03d1 0 = 1|x\u03d1 0 = j) = \u02c6 PFICO(y\u03d1 = 1|x\u03d1 = j). Then, we generate the underlying group-dependent and time-invariant transition kernel p\u2217\u03d1 in the following way: we \ufb01rst set a distribution for p\u2217\u03d1(x\u2032 = j\u2032|x = j, y = w, a = v) for j\u2032, j \u2208X and w, v \u2208{0, 1}. Then, we set p\u2217\u03d1(x\u2032, y\u2032|x, y, a) as p\u2217\u03d1(x\u2032|x, y, a)\u02c6 PFICO(y\u03d1 = y\u2032|x\u03d1 = x\u2032). More details about the data generating process are speci\ufb01ed in Appendix F. 4https://docs.responsibly.ai/notebooks/demo-fico-analysis.html Figure 1: FICO data results, with panels (a) DP FICO Pareto, (b) DP FICO estConstr, (c) DP FICO return, (d) EqOpt FICO Pareto, (e) EqOpt FICO estConstr, and (f) EqOpt FICO return. Panels 1a and 1d show Pareto frontiers; 1b and 1c show the constraint violation level over the training dynamics; 1e and 1f show the average episodic return over the training dynamics. In the Pareto plots, the point with a cross marker is the result for the \ufb01nal episode, and the text close to each point gives the penalty parameters. Reward function.
We choose a score-dependent reward, conditioned on the quali\ufb01cation level and decision as the following: r\u03d1(xh, yh, ah) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \u03b2\u03d1 1 xh, if yh = 1, ah = 1; \u2212\u03b2\u03d1 2 xh, if yh = 0, ah = 1; 0, if ah = 0, where \u03b2\u03d1 1 , \u03b2\u03d1 2 \u2208(0, 1). Our reward function is chosen based on the following considerations: in real life, the decision maker may give a higher loan amount to a candidate with a higher credit score. As a result, the decision maker bene\ufb01ts more when a quali\ufb01ed candidate has higher score, i.e. higher reward value (e.g. higher total repayment interests) gained from a quali\ufb01ed candidate with higher score because of higher loan amount; the decision maker also su\ufb00ers more (more negative reward gained) if the candidate with higher score is not quali\ufb01ed, because a larger amount of loan is not repaid. In our experiment, we set \u03b2\u03b1 1 = 0.1, \u03b2\u03b2 1 = 0.9, \u03b2\u03b1 2 = 0.9, \u03b2\u03b2 2 = 0.1. This di\ufb00erence in reward function per group further re\ufb02ects a decision maker who may, potentially irrationally and unfairly, bene\ufb01t or penalize through their reward function for di\ufb00erent groups, even for two individuals who otherwise have the same score x and quali\ufb01cation y (perhaps the decision maker worries that the model is not equally accurate or calibrated per group). Baselines. We use the following policies as baselines: 1. Sequentially optimal policies without fairness constraints. 2. Sequentially optimal policies with an objective that includes a fairness penalty term, which serves as an alternative method for our proposed optimization problem. 
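The FICO-based data generating process and the score-dependent reward described above can be sketched as follows; the Dirichlet draws stand in for the empirical FICO distributions, and all array names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, A = 5, 2, 2  # score levels {0,...,4}, qualification {0,1}, decision {0,1}

# p_x[x, y, a, x'] : placeholder score-transition distribution p*(x'|x, y, a)
p_x = rng.dirichlet(np.ones(X), size=(X, Y, A))
# q_y[x', y'] : placeholder for the empirical P_FICO(y'|x')
q_y = rng.dirichlet(np.ones(Y), size=X)

# joint kernel p*(x', y'|x, y, a) = p*(x'|x, y, a) * P_FICO(y'|x')
p_star = p_x[..., :, None] * q_y[None, None, None, :, :]
assert np.allclose(p_star.sum(axis=(-2, -1)), 1.0)  # a valid kernel over (x', y')

def reward(x, y, a, beta1, beta2):
    """Score-dependent reward: beta1*x if (y=1, a=1); -beta2*x if (y=0, a=1); 0 if a=0."""
    if a == 0:
        return 0.0
    return beta1 * x if y == 1 else -beta2 * x

# group-specific parameters from the experiments (beta^alpha_1, beta^alpha_2), etc.
betas = {"alpha": (0.1, 0.9), "beta": (0.9, 0.1)}
```

Composing the score transition with the score-conditional qualification distribution automatically yields a properly normalized kernel over the pair (x', y'), which the assertion checks.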
In this case, our optimization objective, for a particular kind of fairness constraint, is: max \u03c0\u2208\u03a0k H X h=1 \u0002 Rk,h(pk, \u03c0) + FP({\u03c0\u03d1, p\u03d1 k, s\u03d1 h, a\u03d1 h}\u03d1; \u03bb) \u0003 , 9 \fwhere FP({\u03c0\u03d1, p\u03d1 k, s\u03d1 h, a\u03d1 h}\u03d1; \u03bb) is the fairness penalty and \u03bb is the penalty parameter. (a). Demographic parity. For this case, we choose FP({\u03c0\u03d1, p\u03d1 k, s\u03d1 h, a\u03d1 h}\u03d1; \u03bb) as \u03bb(P\u03c0\u03b1,p\u03b1 k (a\u03b1 k,h = 1) \u2212P\u03c0\u03b2,p\u03b2 k(a\u03b2 k,h = 1))2. (b). Equalized opportunity. For this case, we choose FP({\u03c0\u03d1, p\u03d1 k, s\u03d1 h, a\u03d1 h}\u03d1; \u03bb) as \u03bb \u0010 P\u03c0\u03b1,p\u03b1 k (a\u03b1 k,h = 1|y\u03b1 k,h = 1) \u2212P\u03c0\u03b2,p\u03b2 k(a\u03b2 k,h = 1|y\u03b2 k,h = 1) \u00112 Experimental results. Figure 1a shows the Pareto frontier in terms of episodic total return and episodic step-average fairness violation for demographic parity, and Figure 1d shows the counterpart for equal opportunity. Figure 1b and 1c demonstrate the training dynamics of di\ufb00erent algorithms for demographic parity, and Figures 1e and 1f demonstrate the counterpart for equal opportunity. Our proposed method outperforms the baselines in terms of Pareto frontiers, and converges to a stable level in terms of fairness violation over the training episodes. Overall, from the Pareto frontier, we can observe that our method reaches smaller fairness violation level for non-hispanic white and black individuals in both fairness notions, under a \ufb01xed reward level for the decision maker. In addition, from the con\ufb01dence intervals, we can see that our algorithm has a much narrower con\ufb01dence band than the other baseline. 6 Discussion In this section, we discuss possible extensions of our framework and future directions. 
6.1 Extension of stepwise fairness notions In our paper, we mainly consider two popular types of fairness criteria for stepwise fairness constraints, i.e. demographic parity and equalized opportunity. However, our techniques can also be extended in future work to additional types of fairness criteria. In particular, we can consider a family of fairness constraints, as formally introduced in [1]. They consider constraints in the form M\u00b5(a) \u2264c for matrix M and vector c, where the j-th coordinate of \u00b5 is \u00b5j(a) = E[g(x, y, a, \u03d1)|Ej] for j \u2208J , and M \u2208R|K|\u00d7|J |, c \u2208RK. Here, K = A \u00d7 Y \u00d7 {+, \u2212} (+, \u2212impose positive/negative sign so as to recover | \u00b7 | in constraints), for Y = {0, 1}, and J = (\u039b \u222a{\u2217}) \u00d7 {0, 1}. Ej is an event de\ufb01ned with respect to (x, y, \u03d1) and \u2217denotes the entire probability space. This formulation includes demographic parity and equalized opportunity as special cases. 6.2 Aggregated fairness notions Despite the fact that we consider stepwise fairness constraints, our techniques can also be extended in future work to aggregate fairness notions that consider the entire episodic process. Speci\ufb01cally, we could consider aggregate equalized opportunity (also called aggregate true positive rate in [8]): H X h=1 P(ah = 1|yh = 1, \u03d1) P(yh = 1|\u03d1) PH h=1 P(yh = 1|\u03d1) , which can be viewed as a weighted sum of equalized opportunity across steps. This should be relatively straightforward to handle through our techniques. 10 \f6.3 Non-episodic, in\ufb01nite horizon Markov decision processes Another natural direction is to extend our framework to non-episodic in\ufb01nite horizon. Taking DP as an example, we could consider max \u03c0\u2208\u03a0k \u221e X h=1 \u03b3hRh(p, \u03c0), s.t. \u2200h \u2208[H], \u03b3h|P\u03c0\u03b1,p\u03b1(a\u03b1 h = 1) \u2212P\u03c0\u03b2,p\u03b2(a\u03b2 h = 1)| \u2264\u02c6 ch. 
This will involve using a more advanced version of concentration inequality for Markov chain, and we leave this to future work." + }, + { + "url": "http://arxiv.org/abs/2206.02792v1", + "title": "FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data", + "abstract": "Algorithmic fairness plays an important role in machine learning and imposing\nfairness constraints during learning is a common approach. However, many\ndatasets are imbalanced in certain label classes (e.g. \"healthy\") and sensitive\nsubgroups (e.g. \"older patients\"). Empirically, this imbalance leads to a lack\nof generalizability not only of classification, but also of fairness\nproperties, especially in over-parameterized models. For example,\nfairness-aware training may ensure equalized odds (EO) on the training data,\nbut EO is far from being satisfied on new users. In this paper, we propose a\ntheoretically-principled, yet Flexible approach that is\nImbalance-Fairness-Aware (FIFA). Specifically, FIFA encourages both\nclassification and fairness generalization and can be flexibly combined with\nmany existing fair learning methods with logits-based losses. While our main\nfocus is on EO, FIFA can be directly applied to achieve equalized opportunity\n(EqOpt); and under certain conditions, it can also be applied to other fairness\nnotions. We demonstrate the power of FIFA by combining it with a popular fair\nclassification algorithm, and the resulting algorithm achieves significantly\nbetter fairness generalization on several real-world datasets.", + "authors": "Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie J. 
Su, James Zou", + "published": "2022-06-06", + "updated": "2022-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.CY", + "stat.ML" + ], + "main_content": "Introduction 0.0 0.1 0.2 0.3 0.4 0.5 Classi\ufb01cation Error 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Fairness (EO) Violation Train Test 0.0460 0.0465 0.0470 0.56 0.58 0.60 Figure 1: Each marker corresponds to a su\ufb03ciently well-trained ResNet-10 model trained on an imbalanced image classi\ufb01cation dataset CelebA ([44]). The generalization of fairness constraints (EO) is of magnitudes poorer compared that onclassi\ufb01cation error. Machine learning systems are becoming increasingly vital in our daily lives. The growing concern that they may inadvertently discriminate against minorities and other protected groups when identifying or allocating resources has attracted numerous attention from various communities both inside and outside academia. While signi\ufb01cant e\ufb00orts have been devoted in understanding and correcting biases in classical models such as logistic regressions and supported vector machines (SVM), see, e.g., [1, 29], those derived tools are far less e\ufb00ective on modern over-parameterized models such as neural networks (NN). Over-parameterization can lead to poor generalization, as extensive e\ufb00orts in both theoretical and empirical studies have exhibited ([62, 50, 4]). \u2217Corresponding authors: zhundeng@g.harvard.edu, zjiayao@upenn.edu. 1 arXiv:2206.02792v1 [cs.LG] 6 Jun 2022 \fFurthermore, in large models, it is also di\ufb03cult for measures of fairness (such as equalized odds to be introduced shortly) to generalize, as shown in Fig. 1. Here we \ufb01nd that su\ufb03ciently trained ResNet-10 models generalize well on classi\ufb01cation error but poorly on fairness constraints\u2014the gap in equalized odds between the test and training data is more than ten times larger than the gap for classi\ufb01cation error between test and training. 
Figure 2: Relative frequency of subgroups (sensitive attribute, either Male or Female, and label class, either + or \u2013) in the AdultIncome, CelebA, and DutchConsensus datasets; the subgroups are very imbalanced in many popular datasets across di\ufb00erent domains. In parallel, another outstanding challenge for generalization with real-world datasets is that they are often imbalanced across label and demographic groups (see Fig. 2 for imbalance in three commonly used datasets across various domains). This inherent nature of real-world data greatly hinders the generalization of classi\ufb01ers that are unaware of this innate imbalance, especially when the performance measure places substantial emphasis on minority classes or subgroups without su\ufb03cient samples (e.g., when considering the average classi\ufb01cation error for each label class). Although generalization with imbalanced data has been extensively studied and mitigation strategies have been proposed [10, 47, 30, 2, 31, 40], it\u2019s unclear how well fairness properties generalize. It is also an open challenge how to improve the generalization of fairness in over-parameterized models. Our contributions. In this paper, we initiate the study of this open challenge of fairness generalization for supervised classi\ufb01cation tasks in imbalanced datasets. Inspired by recent works on regularizing the minority classes more strongly than the frequent classes by imposing class-dependent margins [10] in standard supervised learning, we design a theoretically-principled, Flexible and Imbalance-Fairness-Aware (FIFA) approach that takes both classi\ufb01cation error and fairness constraints violation into account when training the model. Our proposed method FIFA can be \ufb02exibly combined with many fair learning methods with logits-based losses such as the soft margin loss [43] by encouraging larger margins for minority subgroups.
While our method appears to be motivated for over-parameterized models such as neural networks, it nonetheless also helps simpler models such as logistic regressions. Experiments on both large datasets using over-parameterized models as well as smaller datasets using simpler models demonstrate the e\ufb00ectiveness, and \ufb02exibility of our approach in ensuring a better fairness generalization while preserving good classi\ufb01cation generalization. Related work. Over-parameterized models such as neural network has achieved great performance in modern learning, however, the nature of optimization [17, 22, 34, 33, 36, 64], robustness [18, 21, 19], and generalization [19, 20, 63, 57, 58] of neural networks remains mysterious. Even worse, the inherent imbalance in certain label classes bring great trouble for neural networks to generalize in supervised learning, which has attracted signi\ufb01cant interest in the machine learning communities. Several methods including resampling, reweighting, and data augmentation have been developed and deployed in practice [47, 30, 2, 31, 40, 11, 28, 51, 56, 16, 9]. Theoretical analyses of those methods include margin-based approaches [42, 35, 37, 10]. Somewhat tangentially, an outstanding and emerging problem faced by modern models with real-world data is algorithmic fairness [25, 3, 14, 54, 45, 49, 13, 8], where practical algorithms are developed for pre-processing [27], in-processing [61, 26, 59, 23, 46, 48, 41], and post-processing [29, 38, 12] steps. Nonetheless, there are several challenges when applying fairness algorithms in practice [5, 52, 32, 45]. Speci\ufb01cally, as hinted in Fig. 1, the fairness generalization guarantee, especially in over-parameterized models and large datasets, is not well-understood, leading to various practical concerns. 
We remark that although [39] claims it is necessary to use multiplicative instead of additive logits adjustments, their motivating example is di\ufb00erent from ours and they studied SVM with \ufb01xed and speci\ufb01ed budgets for all inputs. In another closely related work, [15], classical learning theory is being studied without addressing issues raised by imbalanced data. To the best of our knowledge, this is the \ufb01rst work tackling the open challenge of fairness generalization with imbalanced data. 2 \f2 Background Notation. For any k \u2208N+, we use [k] to denote the set {1, 2, \u00b7 \u00b7 \u00b7 , k}. For a vector v, let vi be the i-th coordinate of v. We use 1 to denote the indicator function. For a set S, we use |S| to denote the cardinality of S. For two positive sequences {ak} and {bk}, we write ak = O(bk) (or an \u2272bn), and ak = o(bk), if limk\u2192\u221e(ak/bk) < \u221eand limk\u2192\u221e(ak/bk) = 0, respectively. We use P for probability and E for expectation, and we use \u02c6 P and \u02c6 E for empirical probability and expectation. For the two distributions D1 and D2, we use pD1 + (1 \u2212p)D2 for p \u2208(0, 1) to denote the mixture distribution such that a sample is drawn with probabilities p and (1 \u2212p) from D1 and D2 respectively. We use Np(\u00b5, \u03a3) to denote p-dimensional Gaussian distribution with mean \u00b5 and variance \u03a3. Fairness notions. Throughout the paper, we consider datasets consisting of triplets of the form (x, y, a), where x \u2208X is a feature vector, a \u2208A is a sensitive attribute such as race and gender, and y \u2208Y is the corresponding label. The underlying random triplets corresponding to (x, y, a) is denoted as (X, Y, A). Our goal is to learn a predictor h \u2208H : X 7\u2192Y, where h(X) is a prediction of the label Y of input X. In this paper, we mainly consider equalized odds (EO) [29] that has been widely used in previous literature on fairness. 
But our method could also be directly used to equalized opportunity (EqOpt) given that EqOpt is quite similar to EO. In addition, under certain conditions, our method could also be used to demographic parity (DP), which we will mainly discuss in the Appendix. Speci\ufb01cally, the notions mentioned above are de\ufb01ned as follows. (i). Equalized odds (EO) and Equalized opportunity (EqOpt). A predictor h satis\ufb01es equalized odds if h(X) is conditionally independent of the sensitive attribute A given Y : P(h(X) = y|Y = y, A = a) = P(h(X) = y|Y = y). If Y = {0, 1} and we only require P(h(X) = 1|Y = 1, A = a) = P(h(X) = 1|Y = 1), we say h satis\ufb01es equalized opportunity. (ii). Demographic parity (DP). A predictor h satis\ufb01es demographic parity if h(X) is statistically independent of the sensitive attribute A: P(h(X) = Y |A = a) = P(h(X) = Y ). 3 Theory-inspired derivation While we will formally introduce our new approach in Section 4, this section gives an informal derivation, with an emphasis on insights. We design an imbalance-fairness-aware approach that can be \ufb02exibly combined with fair learning methods with logits-based losses. Consider the supervised k-class classi\ufb01cation problem, where a model f : X 7\u2192Rk provides k scores, and the label is assigned as the class label with the highest score. The corresponding predictor h(x) = arg maxi f(x)i if there are no ties. Let us use Pi = P(X|Y = i) to denote the conditional distribution when the class label is i for i \u2208[k] and Pbal to denote the balanced distribution PIdx, where Idx is uniformly drawn from [k]. Similarly, let us use Pi,s = P(X|Y = i, A = s) to denote the conditional distribution when Y = i and A = s. The corresponding empirical distributions induced by the training data are \u02c6 Pi, \u02c6 Pbal and \u02c6 Pi,s. For the training dataset {(xj, yj, aj)}j, let Si = {j : yj = i}, Si,a = {j : yj = i, aj = a}, and the corresponding sample sizes be ni and ni,a, respectively. 
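The fairness notions above can be estimated empirically from a finite sample. As a minimal sketch (the helper name is ours), the equalized-odds gap can be computed as the largest deviation of any subgroup's conditional accuracy from the class-wide rate:

```python
import numpy as np

def eo_violation(y_pred, y_true, attr):
    """Empirical equalized-odds gap:
    max over (y, a) of |P(h(X)=y | Y=y, A=a) - P(h(X)=y | Y=y)|.
    Restricting y to the positive class gives the equalized-opportunity gap."""
    y_pred, y_true, attr = map(np.asarray, (y_pred, y_true, attr))
    gaps = []
    for y in np.unique(y_true):
        on_y = y_true == y
        base = np.mean(y_pred[on_y] == y)          # P(h(X)=y | Y=y)
        for a in np.unique(attr):
            grp = on_y & (attr == a)
            if grp.any():                          # skip empty subgroups
                gaps.append(abs(np.mean(y_pred[grp] == y) - base))
    return float(max(gaps))

# a perfect predictor has zero EO violation
assert eo_violation([0, 1, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]) == 0.0
```

Evaluating this quantity separately on training and test splits makes the fairness generalization gap (as in Fig. 1) directly measurable.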
Although Pi, Pbal and Pi,s are all distributions on X, we sometimes use notations like (x, y) \u223cPi to denote the distribution of (x, i), where x \u223cPi . Our goal is to ensure Lbal[f] = P(x,y)\u223cPbal[f(x)y < maxl\u0338=y f(x)l] and the fairness violation error to be as small as possible. In order to do that, we need to take the margin of subgroups divided according to sensitive attributes in each label class (so called demographic subgroups in di\ufb00erent classes) into account. Margin trade-o\ufb00between classes of equalized odds. In the setting of standard classi\ufb01cation with imbalanced training datasets such as in [10, 51], the aim is to reach a small balanced test error 3 \fLbal[f]. However, in a fair classi\ufb01cation setting, our aim is not only to reach a small Lbal[f], but also to satisfy certain fairness constraints at test time. Speci\ufb01cally, for EO, the aim is: min f Lbal[f] s.t. \u2200y \u2208Y, a \u2208A, P(h(X) = y|Y = y, A = a) = P(h(X) = y|Y = y), where we recall that h(\u00b7) = arg maxi f(\u00b7)i. We remark here that in addition to the class-balanced loss Lbal[f], we can also consider the loss function that is balanced across all demographic subgroups in di\ufb00erent classes, the derivation is similar and we omit it here. Recall our motivating example in Figure 1. Whether the fairness violation error is small at test time should also be taken into account. Thus, our performance criterion for optimization should be: Lbal[f] + \u03b1Lfv, (1) where Lfv is a measure of fairness constraints violation that we will specify later, and \u03b1 is a weight parameter chosen according to how much we care about the fairness constraints violation. For simplicity, we start with Y = {0, 1} and A = {a1, a2}. In the Appendix, we will further discuss the case when there are multiple classes and multiple demographic groups. 
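To make the combined criterion in (1) concrete, here is a minimal sketch (function names ours) of the class-balanced error L_bal[f] and the weighted objective L_bal[f] + alpha * L_fv, where the fairness-violation term is passed in as a scalar:

```python
import numpy as np

def balanced_error(y_pred, y_true):
    """L_bal[f]: classification error averaged over label classes, so each
    class is weighted equally regardless of its sample size."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    errs = [np.mean(y_pred[y_true == y] != y) for y in np.unique(y_true)]
    return float(np.mean(errs))

def criterion(y_pred, y_true, fairness_violation, alpha):
    """The combined objective (1): L_bal[f] + alpha * L_fv."""
    return balanced_error(y_pred, y_true) + alpha * fairness_violation

# one mistake on the (rarer) negative class counts for half of its class error
print(balanced_error([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.25
```

Setting alpha = 0 recovers the standard imbalanced-learning objective, while larger alpha places more weight on the fairness-violation term.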
We also assume the training data is well-separated that all the training samples are perfectly classi\ufb01ed and fairness constraints are perfectly satis\ufb01ed. The setting has been considered in [10] and can be satis\ufb01ed if the model class is rich, for instance, for over-parameterized models such as neural networks. Speci\ufb01cally, if all the training samples are classi\ufb01ed perfectly by h, not only P(x,y)\u223c\u02c6 Pbal(h(x) \u0338= y) = 0 is satis\ufb01ed, we also have that P(x,y)\u223c\u02c6 Pi,aj (h(x) \u0338= y) = 0 for all i \u2208Y and aj \u2208A. We remark here that P(h(X) = i|Y = i, A = a) = 1 \u2212P(x,y)\u223cPi,a(h(x) \u0338= y). Denote the margin for class i by \u03b3i = minj\u2208Si \u03b3(xj, yj), where \u03b3(x, y) = f(x)y \u2212maxl\u0338=y f(x)l. One natural way to choose Lfv is to take P i\u2208Y |P(h(X) = i|Y = i, A = a1) \u2212P(h(X) = i|Y = i, A = a2)|. Then, our performance criterion for optimization in (A.4) is: M[f] = Lbal[f] + \u03b1 X i\u2208Y |P(h(X) = i|Y = i, A = a1) \u2212P(h(X) = i|Y = i, A = a2)| . By a similar margin-based generalization arguments in [10, 35], we proved the following theorem. Theorem 3.1 (Informal) With high probability over the randomness of the training data, for Y = {0, 1}, A = {a1, a2}, and for some proper complexity measure of class F, for any f \u2208F, M[f] \u2272 X i\u2208Y 1 \u03b3i s C(F) ni + X i\u2208Y,a\u2208A \u03b1 \u03b3i,a s C(F) ni,a \u2264 X i\u2208Y 1 \u03b3i s C(F) ni + X i\u2208Y,a\u2208A \u03b1 \u03b3i s C(F) ni,a , (2) where \u03b3i is the margin of the i-th class\u2019s sample set Si and \u03b3i,a is the margin of demographic subgroup\u2019s sample set Si,a. 
Optimizing the upper bound in (A.5) with respect to margins in the sense that g(\u03b30, \u03b31) \u2264g(\u03b30\u2212\u03b4, \u03b31+\u03b4) for g(\u03b30, \u03b31) = P i\u2208Y 1 \u03b3i\u221ani + \u03b1 P i\u2208Y,a\u2208A 1 \u03b3i\u221ani,a and all \u03b4 \u2208[\u2212\u03b31, \u03b30], we obtain \u03b30/\u03b31 = \u02dc n\u22121/4 0 /\u02dc n\u22121/4 1 , where the adjusted sample size \u02dc ni = ni\u03a0a\u2208Ani,a ( p \u03a0a\u2208Ani,a + \u03b1 P a\u2208A \u221anini,a)2 4 \ffor i \u2208{0, 1}. From Theorem 3.1, we see how sample sizes of each subgroups are taken into account and how they a\ufb00ect the optimal ratio between class margins. Based on this theorem, we will propose our theoretical framework in Section 4. A closely related derivation has been used in [10], but their focus is only on the classi\ufb01cation error and its generalization. As we will show in Example 3.1, when fairness constraints are also considered, their methods could sometimes perform poorly with respect to the generalization of those constraints. We remark here that if we do not consider the fairness constraints violation, then \u03b1 = 0, and the e\ufb00ective sample sizes degenerate to \u02dc ni = ni. For illustration, we demonstrate the advantage of applying our approach to select margins over directly using the margin selection in [10] by considering Gaussian models, which is widely used in machine learning theory [53, 64, 21]. Speci\ufb01cally, our training data follow distribution: x | y = 0 \u223c P2 i=1 \u03c00,aiNp(\u00b5i, I), x | y = 1 \u223cP2 i=1 \u03c01,aiNp(\u00b5i + \u03b2\u2217, I). Here, in class j, subgroup ai is drawn with probability \u03c0j,ai, then, given the sample is from subgroup ai in class j, the data is distributed as a Gaussian random vector. Recall the corresponding training dataset indices of subgroup ai in class j is denoted as Sj,ai, and |Sj,ai| = nj,ai. 
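The adjusted sample size and the resulting class-margin ratio can be computed directly from Theorem 3.1; a minimal sketch (names and the toy counts are ours):

```python
import math

def adjusted_n(n_i, n_ia, alpha):
    """Adjusted sample size from Theorem 3.1:
    n_i * prod_a n_{i,a} / (sqrt(prod_a n_{i,a}) + alpha * sum_a sqrt(n_i * n_{i,a}))^2."""
    prod = math.prod(n_ia)
    denom = (math.sqrt(prod) + alpha * sum(math.sqrt(n_i * n) for n in n_ia)) ** 2
    return n_i * prod / denom

# with alpha = 0 (no fairness term) the adjustment vanishes: n_tilde_i = n_i
assert math.isclose(adjusted_n(100, [70, 30], alpha=0.0), 100.0)

# optimal class-margin ratio gamma_0 / gamma_1 = n_tilde_0^{-1/4} / n_tilde_1^{-1/4}
n0 = adjusted_n(900, [700, 200], alpha=1.0)
n1 = adjusted_n(100, [70, 30], alpha=1.0)
ratio = n0 ** -0.25 / n1 ** -0.25  # < 1: the larger class gets the smaller margin
```

As the sanity check shows, the fairness weight alpha only shrinks the effective sample sizes; the direction of the trade-off (larger classes get smaller margins) is unchanged.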
Consider the case \u03b1 = 1 , \u03c00,a1 = \u03c00,a2, and the following class of classi\ufb01ers: F = n 1{\u03b2\u2217\u22a4x > c} : c \u2208R o , which is a linear classi\ufb01er class that contains classi\ufb01ers di\ufb00er from each other by a translation in a particular direction. Example 3.1 Given function f and set S, let dist(f, S) = minx,s\u2208S \u2225f(x) \u2212s\u22252. Consider two classi\ufb01ers \u02dc f, f \u2208F such that dist( \u02dc f, S0)/ dist( \u02dc f, S1) = \u02dc n\u22121/4 0 /\u02dc n\u22121/4 1 and dist(f \u2032, S0)/ dist(f \u2032, S1) = n\u22121/4 0 /n\u22121/4 1 . Suppose \u2225\u03b2\u2217\u2225\u226b\u221ap log n, \u2225\u00b5i\u2225< C, (\u00b5\u2217 1 \u2212\u00b5\u2217 2)\u22a4\u03b2 = 0, and \u03c01,a2 \u2264c1\u03c01,a1 for a su\ufb03ciently small c1 > 0, then when n0, n1 are su\ufb03ciently large, with high probability we have M[ \u02dc f] < M[f]. Remark 3.1 (1). We provide analyses for the 0-1 loss as our ultimate goal is to strike a balance between good test accuracy and small fairness constraints violation. If we use surrogates such as the softmax-cross-entropy loss for the 0-1 loss in training, our theoretical analyses still stand since we always adjust margins based on the 0-1 loss as our interests are in quantities such as test accuracy. (2). Our analysis can be readily applied to EqOpt and DP constraints under certain conditions. We provide analyses and experiments for DP in the Appendix. 4 Flexible combination with logits-based losses f \u03b41,a1 \u03b3 1 Label Class Sensitive Attribute 0 1 a2 a1 Figure 3: Illustration of \u03b4i,a and the margin \u03b3 of classi\ufb01er: \u03b41,a1 is set to be non-negative and \u03b41,a2 is set to be zero as the subgroup (1, a2) is closer to the decision boundary than (1, a1). For ease of exposition, we focus solely on the EO constraint hereafter and discuss other constraints in the Appendix. 
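The Gaussian data model of Example 3.1 is easy to simulate; the following sketch (all names and parameter values ours) draws one class as a two-subgroup mixture, with class 1 shifted by beta*:

```python
import numpy as np

def sample_class(n, pis, mus, shift=None, rng=None):
    """Draw n points from the two-subgroup Gaussian mixture of Section 3:
    subgroup a_i is chosen w.p. pis[i], then x ~ N(mus[i] (+ shift for class 1), I)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pis), size=n, p=pis)   # subgroup labels a_i
    centers = np.asarray(mus, dtype=float)[idx]
    if shift is not None:
        centers = centers + shift               # class-1 mean offset beta*
    return centers + rng.standard_normal(centers.shape), idx

p = 3
mus = [np.zeros(p), np.ones(p)]
beta_star = np.full(p, 5.0)  # a large separation, mimicking ||beta*|| >> sqrt(p log n)
x0, a0 = sample_class(200, [0.5, 0.5], mus, rng=np.random.default_rng(0))
x1, a1 = sample_class(200, [0.9, 0.1], mus, shift=beta_star, rng=np.random.default_rng(1))
```

The imbalanced mixture weights for class 1 (0.9 vs 0.1) mirror the condition pi_{1,a2} <= c1 * pi_{1,a1} in the example.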
Inspired by the margin trade-o\ufb00 characterized in Section 3, we propose our FIFA approach for Flexible Imbalance-Fairness-Aware classi\ufb01cation that can be easily combined with di\ufb00erent types of logits-based losses, and further incorporated into any existing fairness algorithms such as those discussed in Section 5. Let us recall that in Theorem 3.1, \u03b3i,a = \u03b3i + \u03b4i,a and \u03b4i,a \u22650 (since \u03b3i = min{\u03b3i,a1, \u03b3i,a2}, also see Fig. 3 for illustration), hence the middle term of Eq. (3) can be further upper bounded by the last term in Eq. (A.5). The \ufb01nal upper bound in Eq. (A.5) is indeed su\ufb03cient for obtaining the margin trade-o\ufb00between classes. Nonetheless, if we want to further enforce margins for each demographic subgroup in each class, we need to use the re\ufb01ned bound. Speci\ufb01cally, in Section 3, we have identi\ufb01ed a way to select \u03b30/\u03b31, based on which we propose to enforce margins for each demographic subgroup\u2019s training set Si,a of the form \u03b3i,a = C \u02dc n1/4 i + \u03b4i,a, (3) 5 \fwhere \u03b4i,a and C are all non-negative tuning parameters. In light of the trade-o\ufb00between the class margins \u03b30/\u03b31 = \u02dc n\u22121/4 0 /\u02dc n\u22121/4 1 , we can set \u03b3i of the form C/\u02dc n\u22121/4 i . Given \u03b3i,a \u2265\u03b3i, a natural choice for margins for subgroups is Eq. (3). How to select \u03b4i,a? Knowing the form of margins from the preceding discussions, an outstanding question remains: how to select \u03b4i,a for imbalanced datasets? Let Y = {0, 1} and A = {a1, a2}, within each class i, we identify Si,a with the largest cardinality |Si,a| and set the corresponding \u03b4i,a = 0. The remaining \u03b4i,A\\a are tuned as a non-negative parameter. As a further illustration, without loss of generality, assume for all i, |Si,a1| \u2265|Si,a2|. Thus selected {\u03b4i,a}i,a ensures the upper bound in the middle of Eq. 
(A.5) is tighter in the sense that for any δ > 0,

$$\sum_{i \in \mathcal{Y}} \Big( \frac{1}{\gamma_i \sqrt{n_{i,a_1}}} + \frac{1}{(\gamma_i + \delta)\sqrt{n_{i,a_2}}} \Big) \le \sum_{i \in \mathcal{Y}} \Big( \frac{1}{(\gamma_i + \delta)\sqrt{n_{i,a_1}}} + \frac{1}{\gamma_i \sqrt{n_{i,a_2}}} \Big).$$

In the Appendix, we present how to choose the δ_{i,a}'s when there are multiple demographic groups. Briefly speaking, results similar to the above inequality are proved by an application of the rearrangement inequality. Simple as it is, the high-level view is meaningful: the decision boundary of a fair predictor should be farther away from the less-frequent subgroup than from the more-frequent subgroup, to ensure better fairness generalization.

Flexible imbalance-fairness-aware (FIFA) approach. We now demonstrate how to apply the above motivations to design better margin losses. Loosely speaking, we consider a logits-based loss ℓ((x, y); f) = ℓ(f(x)_y, {f(x)_i}_{i∈Y\y}), which is non-increasing with respect to its first coordinate when the second coordinate is fixed. Such losses include (i) the 0-1 loss: 1{f(x)_y < max_{i∈Y\y} f(x)_i}; (ii) the hinge loss: max{max_{i∈Y\y} f(x)_i − f(x)_y, 0}; and (iii) the softmax-cross-entropy loss: −log( e^{f(x)_y} / (e^{f(x)_y} + Σ_{i≠y} e^{f(x)_i}) ). Our flexible imbalance-fairness-aware (FIFA) approach modifies the above losses by enforcing margins of the form in Eq. (3). Specifically, we use the following loss function during training:

$$\ell_{\mathrm{FIFA}}((x, y, a); f) = \ell\big(f(x)_y - \Delta_{y,a},\ \{f(x)_i\}_{i \in \mathcal{Y} \setminus y}\big), \qquad (4)$$

where Δ_{i,a} = C/ñ_i^{1/4} + δ_{i,a}. We remark that ℓ_FIFA((x, y, a); f) is used only during the training phase, where we allow access to the sensitive attribute a. At test time, we only need f but not a.

5 Example: combining FIFA with reduction-based fair algorithms

In this section, we demonstrate the power of our approach by combining it with a popular reduction-based fair classification algorithm [1] as an example.
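Before moving on, the margin rule in Eq. (3) and the modified loss in Eq. (4) can be sketched in code. The following is our illustrative implementation, not the authors' released code; the helper names `fifa_margins` and `fifa_cross_entropy` are hypothetical, and we take ñ_i to be the class-i sample count.

```python
import numpy as np

def fifa_margins(counts, C, delta):
    """Per-(class, group) margins Delta[i, a] = C / n_i^(1/4) + delta[i, a].

    counts[i][a] is |S_{i,a}|; here we take the class size
    \tilde n_i = sum_a |S_{i,a}|.  Within each class, the largest subgroup
    should get delta = 0 and the others a non-negative tuning value."""
    margins = {}
    for i, group_counts in counts.items():
        n_i = sum(group_counts.values())          # class size \tilde n_i
        for a in group_counts:
            margins[(i, a)] = C / n_i ** 0.25 + delta.get((i, a), 0.0)
    return margins

def fifa_cross_entropy(logits, y, a, margins):
    """Softmax-cross-entropy with the true-class logit shifted down by the
    subgroup margin Delta_{y,a}, as in Eq. (4)."""
    z = logits.astype(float).copy()
    z[y] -= margins[(y, a)]       # enforce the margin on the true class
    z -= z.max()                  # numerical stability
    return -z[y] + np.log(np.exp(z).sum())
```

A larger margin on the true class makes the loss strictly larger for the same logits, so training is pushed to place that subgroup farther from the decision boundary.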
In Section 6, we show that incorporating our approach brings a significant gain in terms of both combined loss and fairness generalization, compared with directly applying their method to vanilla models trained with the softmax-cross-entropy loss. The reduction approach proposed in [1] has two versions: (i) exponentiated gradient (ExpGrad), which produces a randomized classifier; and (ii) grid search (GridS), which produces a deterministic classifier. Our approach can be combined with both.

Exponentiated gradient (ExpGrad). We first briefly describe the algorithm. For Y = {0, 1}, by [1], EO constraints can be rewritten as Mµ(h) ≤ c for certain M and c, where µ_j(h) = E[h(X) | E_j] for j ∈ J, M ∈ ℝ^{|K|×|J|}, and c ∈ ℝ^{|K|}. Here, K = A × Y × {+, −} (the signs +, − are imposed so as to recover the absolute values |·| in the constraints) and J = (A ∪ {*}) × {0, 1}, with E_{(a,y)} = {A = a, Y = y} and E_{(*,y)} = {Y = y}. Let err(h) = P(h(X) ≠ Y). Instead of considering min_{h∈H} err(h) such that Mµ(h) ≤ c, ExpGrad obtains the best randomized classifier by sampling a classifier h ∈ H from a distribution over H. Formally, this optimization can be formulated as

$$\min_{Q \in \Delta_{\mathcal H}} \mathrm{err}(Q) \quad \text{such that} \quad M\mu(Q) \le c,$$

where err(Q) = Σ_{h∈H} Q(h) err(h) and µ(Q) = Σ_{h∈H} Q(h) µ(h); here Q is a distribution over H, and Δ_H is the collection of distributions on H. We further use êrr(Q) and µ̂(Q) to denote the empirical versions, and we also allow relaxation of the constraints by using ĉ = c + ε, where ĉ_k = c_k + ε_k for relaxations ε_k ≥ 0.
By classic optimization theory, this can be transformed into a saddle-point problem, and [1] aims to solve the following primal and dual problems simultaneously for L(Q, λ) = êrr(Q) + λ^⊤(Mµ̂(Q) − ĉ):

$$(P):\ \min_{Q \in \Delta_{\mathcal H}}\ \max_{\lambda \in \mathbb{R}^{|K|}_+,\, \|\lambda\|_1 \le B} L(Q, \lambda), \qquad (D):\ \max_{\lambda \in \mathbb{R}^{|K|}_+,\, \|\lambda\|_1 \le B}\ \min_{Q \in \Delta_{\mathcal H}} L(Q, \lambda).$$

To summarize, ExpGrad takes training data {(x_i, y_i, a_i)}_{i=1}^n, a function class H, constraint parameters M, ĉ, a bound B, an accuracy tolerance ν > 0, and a learning rate η as inputs, and outputs (Q̂, λ̂) such that L(Q̂, λ̂) ≤ L(Q, λ̂) + ν for all Q ∈ Δ_H, and L(Q̂, λ̂) ≥ L(Q̂, λ) − ν for all λ ∈ ℝ^{|K|}_+ with ∥λ∥_1 ≤ B; such a pair (Q̂, λ̂) is called a ν-approximate saddle point. As implemented in [1], H roughly consists of h(x) = 1{f(x)_1 ≥ f(x)_0} for f ∈ F (in fact, a smoothed version is considered in [1]), which gives

$$\widehat{\mathrm{err}}(Q) = \sum_{h \in \mathcal H} \hat P\big(h(X) \ne Y\big)\, Q(h) = \sum_{h \in \mathcal H} \hat P\big(f(X)_Y < f(X)_{\{0,1\}\setminus Y}\big)\, Q(h).$$

To combine our approach, we instead optimize

$$\widehat{\mathrm{err}}_{\mathrm{new}}(Q) = \sum_{h \in \mathcal H} \hat P\big(f(X)_Y - \Delta_{Y,A} \le f(X)_{\{0,1\}\setminus Y}\big)\, Q(h) \quad \text{such that} \quad M \hat\mu_{\mathrm{new}}(Q) \le \hat c,$$

where µ̂_new(Q) = Σ_{h∈H} Q(h) µ̂_new(f) and µ̂_new,j(f) = P̂(f(X)_Y − Δ_{Y,A} > f(X)_{{0,1}\Y} | E_j). We can modify ExpGrad to solve the primal and dual problems simultaneously for L_new(Q, λ) = êrr_new(Q) + λ^⊤(Mµ̂_new(Q) − ĉ). In practice, while Section 3 is motivated by deterministic classifiers, FIFA works for the randomized version too: the modified ExpGrad can be viewed as encouraging a distribution Q that puts more weight on classifiers with a certain type of margin trade-off between classes.
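The saddle-point dynamics above can be sketched in a few lines. This is a simplified abstraction of the exponentiated-gradient scheme of [1] (not the FairLearn implementation): the λ-player does multiplicative-weights updates on the constraint violations, while the Q-player best-responds to the current Lagrangian; here `err` and `G` are assumed to be precomputed over a finite candidate set H.

```python
import numpy as np

def expgrad(err, G, B=10.0, eta=0.5, T=200):
    """Toy exponentiated-gradient saddle-point loop.

    err[h]  : (modified) empirical error of candidate classifier h
    G[k, h] : constraint violation (M mu_new(h) - c_hat)_k
    Returns the average distribution Q over the |H| candidates."""
    K, H = G.shape
    theta = np.zeros(K)
    picks = []
    for _ in range(T):
        # lambda-player: multiplicative weights on the K constraints,
        # with total mass capped at B so that ||lambda||_1 <= B
        w = np.exp(theta)
        lam = B * w / (1.0 + w.sum())
        # Q-player best response: the classifier minimizing the Lagrangian
        h_best = int(np.argmin(err + lam @ G))
        picks.append(h_best)
        theta += eta * G[:, h_best]   # step in the violation direction
    return np.bincount(picks, minlength=H) / T
```

The returned Q mixes over iterates, mirroring how ExpGrad's randomized classifier averages the Q-player's best responses.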
Moreover, the modified algorithm enjoys a convergence guarantee similar to that of the original one.

Theorem 5.1 Let ρ = max_f ∥Mµ̂_new(f) − ĉ∥_∞. For η = ν/(2ρ²B), the modified ExpGrad returns a ν-approximate saddle point of L_new in at most 4ρ²B² log(|K| + 1)/ν² iterations.

Grid search (GridS). When the number of constraints is small, e.g., when there are only a few sensitive attributes, one may directly perform a grid search over the λ vectors to identify the deterministic classifier that attains the best trade-off between accuracy and fairness. In practice, GridS is preferred for larger models due to its memory efficiency, since ExpGrad needs to store all intermediate models to compute the randomized classifier at prediction time, which is less feasible for over-parameterized models. We describe our flexible approach in Algorithm 1, which combines with the GridS used in practice in the official code base FairLearn ([7]).

Remark 5.1 As stated above, this algorithm is just one example that can be combined with our approach. FIFA can also be applied to many other popular algorithms, such as fair representations [46] and the decision-boundary approach [60]. We discuss them in more detail in the Appendix.

6 Experiments

Algorithm 1 FIFA Combined Grid Search
Input: fairness algorithm GridS, training data set {x_i, y_i, a_i}_{i=1}^n, fairness tolerance ε, margins {Δ_{y,a}}_{y,a}, a classifier h(·; θ).
Output: the learned classifier h*.
1: Load the training data into GridS.
2: GridS produces a set of reduction labels ŷ_train and a set of sample weights w_train based on the type of fairness constraint and the tolerance ε.
3: for t = 1, 2, . . . , T do
4:   Compute the FIFA loss ℓ_FIFA via (4) using the reduction labels ŷ_train (in mini-batches).
5:   Update θ in h using back-propagation.
6:   Log training metrics using the true labels {y_i}_{i=1}^n and attributes {a_i}_{i=1}^n.
7: end for

We now apply our flexible approach to several datasets for classification with a sensitive attribute. Although our method is proposed for over-parameterized models, it can also boost the performance of small models. Depending on the specific dataset and model architecture, we use either the grid search or the exponentiated gradient method developed by [1] as the fairness algorithm to enforce the fairness constraints, while adding our FIFA loss in the inner training loop. Note that our method can be combined with other fairness algorithms as well.

Datasets. We choose one large image dataset and two simpler datasets, using the official train-test split of each. More details and statistics are in the Appendix. (i) CelebA ([44]): the task is to predict whether the person in the image has blond hair, where the sensitive attribute is the gender of the person. (ii) AdultIncome ([24]): the task is to predict whether the income is above 50K per year, where the sensitive attribute is the gender. (iii) DutchConsensus ([55]): the task is to predict whether an individual has a prestigious occupation, and the sensitive attribute is the gender. Both the AdultIncome and DutchConsensus datasets are also used in [1].

Figure 4: Histogram (cumulative density) of the hyper-parameter C in the sweeps for FIFA and LDAM. Vertical lines mark the values corresponding to the best-performing models in Table 1.

Method. For computational feasibility (ExpGrad needs to store all intermediate models at prediction time), we combine grid search with FIFA for the CelebA dataset with ResNet-18, and use both grid search and exponentiated gradient on AdultIncome with logistic regression. Besides C and δ_{i,a}, we also treat α as a tuning parameter (in Eq.
(A.5)). We then perform hyper-parameter sweeps over C, δ_{i,a}, and α for FIFA, and over the corresponding grids (if used) for vanilla training (fairness algorithms combined with the vanilla softmax-cross-entropy loss). The sweeps are run on the wandb platform [6], where all hyper-parameters except those on the grid are searched using its built-in Bayesian backend. All models for the same dataset are trained for a fixed number of epochs at which the training accuracies converge. Batch training with size 128 is used for CelebA, and full-batch training is used for AdultIncome. More details are included in the Appendix. We compare FIFA with directly applying fair algorithms to networks trained with the softmax-cross-entropy loss, which is the most natural baseline. As a special case of FIFA, when δ_{i,a} = 0 for all i, a and α = 0, the FIFA loss degenerates to the LDAM loss proposed in [10], which is not as fairness-aware as FIFA, so we briefly discuss it in Table 1 as well. FIFA additionally tunes δ_{i,a} and α; to ensure a fair comparison, we set the same coverage for the common hyper-parameter C in the sweeps, as shown in Fig. 4.

Evaluation and generalization. When evaluating a model, we are mostly interested in the generalization performance measured by a combined loss that takes into consideration both fairness violation and balanced error. We define the combined loss as

$$L_{cb}[f] = \tfrac{1}{2} L_{bal}[f] + \tfrac{1}{2} L_{fv}[f],$$

which favors classifiers that perform equally well in terms of classification and fairness. We consider both the value of the combined loss evaluated on the test set S_test and the generalization gap; for a loss L, the generalization gap is defined as

$$\mathrm{gap}[L, f] = \big| L[f](S_{test}) - L[f](S_{train}) \big|.$$
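The evaluation metrics can be sketched as follows. The paper's precise definitions of L_bal and L_fv are given in its earlier sections; the versions below are standard choices (mean of per-class error rates, and the largest per-class accuracy gap across groups as an equalized-odds violation) and may differ from the paper in detail. Every (class, group) cell is assumed non-empty.

```python
import numpy as np

def balanced_error(y_true, y_pred):
    """L_bal: mean of per-class error rates."""
    classes = np.unique(y_true)
    return float(np.mean([(y_pred[y_true == c] != c).mean() for c in classes]))

def eo_violation(y_true, y_pred, attr):
    """L_fv: max over classes y of the gap in P(correct | Y=y, A=a)
    across groups a (one common equalized-odds violation measure)."""
    gaps = []
    for y in np.unique(y_true):
        accs = [(y_pred[(y_true == y) & (attr == a)] == y).mean()
                for a in np.unique(attr)]
        gaps.append(max(accs) - min(accs))
    return float(max(gaps))

def combined_loss(y_true, y_pred, attr):
    """L_cb[f] = 0.5 * L_bal[f] + 0.5 * L_fv[f]."""
    return 0.5 * balanced_error(y_true, y_pred) + 0.5 * eo_violation(y_true, y_pred, attr)

def generalization_gap(loss_test, loss_train):
    """gap[L, f] = |L[f](S_test) - L[f](S_train)|."""
    return abs(loss_test - loss_train)
```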
Table 1: Grid search with EO constraint on the CelebA dataset [44] using ResNet-18; best results with respect to test combined loss among sweeps of hyper-parameters are shown. As an interesting special case of our FIFA method, we note that although the LDAM method improves the performance compared with vanilla GS, it is not as effective as our method.

Fairness Tolerance ε          |        0.01            |        0.05            |        0.10
Method                        | FIFA   LDAM   Vanilla  | FIFA   LDAM   Vanilla  | FIFA   LDAM   Vanilla
Combined Loss        Train    | 7.37%  5.22%  7.14%    | 5.46%  5.47%  8.84%    | 5.92%  8.48%  8.90%
                     Test     | 6.71%  7.29%  14.01%   | 6.34%  7.38%  13.05%   | 6.54%  7.34%  16.71%
                     Gap      | 0.66%  2.07%  6.87%    | 0.88%  1.91%  4.21%    | 0.62%  1.14%  7.82%
Fairness Violation   Train    | 5.31%  2.32%  6.69%    | 2.63%  2.57%  9.45%    | 3.11%  6.93%  11.37%
                     Test     | 2.75%  5.39%  20.29%   | 3.29%  5.57%  17.92%   | 2.65%  2.96%  26.15%
                     Gap      | 2.57%  3.07%  13.59%   | 0.66%  3.00%  8.47%    | 0.46%  3.97%  14.78%

Figure 5: Loss on the test set (panels (a) L_cb[f](S_test) and (b) L_fv[f](S_test)) and generalization gaps (panels (c) gap[L_cb, f] and (d) gap[L_fv, f]) of the combined loss and the fairness loss on the CelebA dataset. We repeat the experiment 20 times using the hyper-parameters corresponding to the best-performing models in Table 1. The solid blue line marks grid search combined with vanilla training, whereas the dashed orange line marks grid search combined with the FIFA loss. We also plot 95% confidence bands based on the 20 repeated runs. We observe that our method FIFA has significantly better generalization performance in terms of both smaller losses on the test set and narrower generalization gaps.
6.1 Effectiveness of FIFA on over-parameterized models

In this subsection, we thoroughly analyze the results of applying FIFA to over-parameterized models on the CelebA dataset using ResNet-18. We use the grid search algorithm with fairness-violation tolerance parameter ε ∈ {0.01, 0.05, 0.1} (with a slight abuse of notation) for all constraints. We perform sweeps over the hyper-parameters C ∈ [0, 0.01], α ∈ [0, 0.01], and δ_{0,Male}, δ_{1,Male} ∈ [0, 0.01], with δ_{0,Female} = δ_{1,Female} = 0. As a special case that may be of interest, when α = 0 and δ_{0,Male} = δ_{1,Male} = 0, the FIFA loss coincides with the LDAM loss proposed in [10], with the single common hyper-parameter C ∈ [0, 0.01]. We log the losses on the whole training and test sets. We summarize our main findings below and give more details in the Appendix, including experiments with DP constraints, delayed re-weighting (DRW, [10]), and re-weighting methods.

Logits-based methods improve fairness generalization. We summarize the best results for each method under different tolerance parameters ε in Table 1. Both FIFA and LDAM significantly improve the test performance in terms of both combined loss and fairness violation for all three choices of ε, while having comparable training performance. This implies the effectiveness and necessity of using logits-based methods to ensure better fairness generalization.

FIFA accommodates both fairness generalization and dataset imbalance. Although both logits-based methods improve generalization, as seen in Table 1, our method FIFA has significantly better generalization performance than LDAM, especially in terms of fairness violation. For example, when ε = 0.01 and 0.05, FIFA achieves a test fairness violation that is at least 2% smaller than LDAM's. This further demonstrates the importance of our theoretical motivations.
Table 2: Exponentiated gradient with EO constraint on the AdultIncome and DutchConsensus datasets using logistic regression (as a one-layer neural net); best results with respect to test combined loss among sweeps of hyper-parameters are shown.

             |            AdultIncome                     |           DutchConsensus
             | Combined Loss       Fairness Violation     | Combined Loss       Fairness Violation
ε    Method  | Train     Test      Train     Test         | Train     Test      Train    Test
0.01 FIFA    | 15.7217%  13.4812%  7.8863%   2.7776%      | 12.8013%  13.1686%  3.7220%  4.3532%
     Vanilla | 13.9561%  14.3001%  6.7861%   6.7475%      | 12.8444%  13.2267%  3.7935%  4.4323%
0.05 FIFA    | 13.5634%  13.5491%  5.7088%   4.9315%      | 12.8820%  13.2228%  3.8525%  4.4323%
     Vanilla | 14.4697%  14.8647%  7.5962%   7.8575%      | 12.8818%  13.2236%  3.8525%  4.4323%
0.10 FIFA    | 13.5857%  13.9043%  6.1217%   5.9689%      | 12.8717%  13.1748%  3.8326%  4.3532%
     Vanilla | 15.5342%  15.9387%  9.7514%   10.0750%     | 12.8757%  13.2099%  3.8392%  4.4059%

Figure 6: Pareto frontiers of the balanced loss (L_bal) and fairness loss (L_fv) on CelebA using ResNet-18, with grid search combined with FIFA and with the vanilla softmax-cross-entropy loss, respectively; panels (a)-(c) correspond to ε = 0.01, 0.05, 0.10. Best-performing hyper-parameters from Table 1 are used, where each configuration is repeated 20 times independently.
Here blue and orange markers correspond to vanilla and FIFA respectively, and circular and cross markers correspond to training and testing metrics respectively. We observe that our FIFA method significantly lowers the Pareto frontier compared with the vanilla method, implying that FIFA mitigates the fairness generalization issues seen in Figure 1.

Improvements in generalization are two-fold for FIFA. When it comes to generalization, two related notions are often used: the absolute performance on the test set, and the generalization gap between the training and test sets. We compute the generalization gap in Table 1 for both the combined loss and the fairness violation. We observe that FIFA generally dominates LDAM and vanilla in terms of both test performance and generalization gap. We further illustrate this behavior in Fig. 5, where we give 95% confidence bands over the randomness in training. Our FIFA outperforms vanilla by a large margin in terms of both generalization notions, and the improvements are mostly due to better fairness generalization. In fact, as suggested by the similarity in the shapes of the curves between Fig. 5c and Fig. 5d, fairness generalization dominates classification generalization, and thus improvements in fairness generalization manifest more prominently overall.

Towards a more efficient Pareto frontier. Recall our motivating example in Fig. 1, where a sufficiently well-trained ResNet-10 demonstrates stellar classification and fairness performance on the training set but poor fairness generalization on the test set. In Fig. 6 we plot the Pareto frontier of balanced classification error (L_bal) and fairness violation (L_fv) for all three choices of ε. We observe that the models trained by combining grid search with the FIFA loss achieve frontiers that are lower and more centered than those trained with vanilla losses and grid search.
This reaffirms that our FIFA approach mitigates the fairness generalization problem decently well.

6.2 Effectiveness of FIFA on smaller datasets and models

We use logistic regression (implemented as a one-layer neural net) for the AdultIncome and DutchConsensus datasets, with a sweeping procedure similar to that described in Section 6.1, except that we set δ_{0,Female}, δ_{1,Female} ∈ [0, 0.01] and δ_{0,Male} = δ_{1,Male} = 0 for the AdultIncome dataset, and δ_{0,Male}, δ_{1,Female} ∈ [0, 0.01] and δ_{0,Female} = δ_{1,Male} = 0 for the DutchConsensus dataset.

Results. We tabulate the best-performing models (in terms of test combined loss) among the sweeps in Table 2 and include more details in the Appendix. From Table 2, we observe the same message as in Section 6.1: our method FIFA outperforms vanilla on both datasets across all three tolerance parameters ε, illustrating the effectiveness of FIFA in boosting fairness generalization on smaller datasets and models as well. Nonetheless, since these datasets are much simpler, the improvements are not as large as for CelebA and larger models.

7 Conclusions

Generalization (especially in over-parameterized models) has always been an important and difficult problem in machine learning research. In this paper, we set out the first exposition in the study of the generalization of fairness constraints, which has previously been overlooked. Our theoretically backed FIFA approach is shown to mitigate the poor fairness generalization observed in vanilla models, large or small. In this paper, the analysis is based on margin upper bounds; we leave a more fine-grained analysis of the margins to future work. Our work is motivated by the societal impact of improving fairness across subpopulations. Methods like FIFA should be used with care and may not fully mitigate bias; better training data is still needed in many settings.
Acknowledgements The research of Z. Deng is supported by the Sloan Foundation grants, the NSF grant 1763665, and the Simons Foundation Collaboration on the Theory of Algorithmic Fairness. J. Zhang is supported by ONR Contract N00015-19-1-2620. W.J. Su is supported in part by NSF through CCF-1934876, an Alfred Sloan Research Fellowship, and the Wharton Dean\u2019s Research Fund. We would like to thank Dan Roth and the Cognitive Computation Group at the University of Pennsylvania for stimulating discussions and for providing computational resources. J. Zhang also thanks Chenwei Wu from Duke University for helpful discussions." + }, + { + "url": "http://arxiv.org/abs/2106.10189v1", + "title": "Adversarial Training Helps Transfer Learning via Better Representations", + "abstract": "Transfer learning aims to leverage models pre-trained on source data to\nefficiently adapt to target setting, where only limited data are available for\nmodel fine-tuning. Recent works empirically demonstrate that adversarial\ntraining in the source data can improve the ability of models to transfer to\nnew domains. However, why this happens is not known. In this paper, we provide\na theoretical model to rigorously analyze how adversarial training helps\ntransfer learning. We show that adversarial training in the source data\ngenerates provably better representations, so fine-tuning on top of this\nrepresentation leads to a more accurate predictor of the target data. We\nfurther demonstrate both theoretically and empirically that semi-supervised\nlearning in the source data can also improve transfer learning by similarly\nimproving the representation. Moreover, performing adversarial training on top\nof semi-supervised learning can further improve transferability, suggesting\nthat the two approaches have complementary benefits on representations. 
We\nsupport our theories with experiments on popular data sets and deep learning\narchitectures.", + "authors": "Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Zou", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Transfer learning is a popular methodology for obtaining well-performing machine learning models in settings where high-quality labeled data is scarce Donahue et al. [2014], Sharif Razavian et al. [2014]. The general idea of transfer learning is to take a pre-trained model from a source domain, where labeled data is abundant, and adapt it to a new target domain. Because the target data distribution often differs from the source setting, standard transfer learning fine-tunes the model using a small amount of labeled data from the target domain. In many applications, the fine-tuning is performed only on the last few layers of the network, if the amount of target data is limited or if one only has access to a representation (i.e., intermediate layers) produced by the model rather than the full model. Transfer learning has demonstrated substantial empirical success, and there is an exciting literature investigating different approaches to making transfer learning more effective Huh et al. [2016], Kolesnikov et al. [2019]. Recent experiments empirically demonstrated an intriguing phenomenon: models trained using adversarially robust optimization on the source data transfer better to target data than non-adversarially trained models. We illustrate this phenomenon in Figure 1, which replicates the findings in Salman et al. [2020]. Here two models are trained on the full ImageNet and on 10% of ImageNet using different levels of adversarial training, where ε is the ℓ2 magnitude of the adversarial attack. Following Salman et al.
[2020], we fine-tuned the last layer of the models using data from CIFAR-10 and plot the final accuracy on the target CIFAR-10. Adversarial training (ε > 0) significantly improves the transfer performance compared to the model without adversarial training (ε = 0). Additional experiments demonstrating this effect are provided in Salman et al. [2020], Utrera et al.; however, it is still an open question how adversarial training on the source helps transfer learning. ∗Equal contribution. 1Harvard University, zhundeng@g.harvard.edu, kkawaguchi@fas.harvard.edu. 2Rutgers University, linjun.zhang@rutgers.edu. 3Stanford University, kailasv@stanford.edu. 4Stanford University, jamesz@stanford.edu. arXiv:2106.10189v1 [cs.LG] 18 Jun 2021

Figure 1: Transfer accuracy improves with adversarial training on the source task. We plot target-task (CIFAR-10) accuracy across different levels of ℓ2-adversarial training on the source task (ImageNet). The value of ε corresponds to the size of the adversarial attack; i.e., ε = 0 indicates no adversarial training. The two curves correspond to training the source model using all of ImageNet and a 10% subsample of ImageNet.

As our first contribution, we initiate the study of how adversarial training helps fixed-feature transfer learning from a theoretical perspective. Our analysis shows that adversarial training on the source learns a better representation, such that fine-tuning on top of this representation leads to better performance on the target. Interestingly, we show that the robust representation can help transfer learning even when the source performance declines due to adversarial training. To the best of our knowledge, this is the first rigorous analysis of the effect of adversarial training on transfer learning.
As our second contribution, we extend our analysis to show that semi-supervised learning using pseudo-labeling can similarly lead to better representations for transfer learning. We support our theory with empirical experiments. Moreover, our experiments demonstrate for the first time that performing adversarial training on top of pseudo-labeling in the source can further boost transfer-learning performance. This suggests that the two data augmentation techniques of adversarial training and pseudo-labeling have complementary benefits for representations. As a third technical contribution, we generalize the techniques in prior papers for analyzing transfer learning in regression to classification settings, where adversarial training and pseudo-labeling are more commonly used. Together, our results provide a useful and tractable framework to understand factors that improve transfer learning.

Related Work. Adversarially robust optimization has been a major focus in machine learning security [Biggio and Roli, 2018, Dalvi et al., 2004, Lowd and Meek, 2005, Goodfellow et al., 2014, Carlini and Wagner, 2017, Nguyen et al., 2015]. A series of works has been proposed to increase adversarial robustness, both empirically Madry et al. [2017], Miyato et al. [2018], Balaji et al. [2019] and theoretically Cohen et al. [2019], Lecuyer et al. [2019], Raghunathan et al. [2018], Liu et al. [2020], Chiang et al. [2020], Kurakin et al. [2016], Hendrycks et al. [2019], Deng et al. [2020b], Zhang et al. [2020]. Meanwhile, other works quantify the trade-off between adversarial robustness and standard accuracy [Zhang et al., 2019, Schmidt et al., 2018, Carmon et al., 2019, Stanforth et al., 2019, Deng et al., 2020a]. Recently, Utrera et al. [2020], Salman et al.
[2020] empirically studied the transfer performance of adversarially robust networks, but it is still not clear why adversarial training leads to better transfer from a theoretical perspective. Transfer learning has been used in a variety of applications, ranging from medical imaging Raghu et al. [2019] and natural language processing Houlsby et al. [2019], Conneau and Kiela [2018] to object detection Lim [2012], Shin et al. [2016]. On the theoretical side, the early work of Baxter [2000], Ben-David and Schuller [2003], Maurer et al. [2016] studied the test accuracy of the target task in the multi-task learning setting. Recent work Tripuraneni et al. [2020b,a], Du et al. [2020] focused more on representation learning and provides a theoretical framework for studying linear representations in the regression setting. In this work, we provide a counterpart to theirs and study the classification setting. Prior works in semi-supervised learning largely focus on improving prediction accuracy with unlabeled data [Zhu et al., 2003, Zhu and Goldberg, 2009, Berthelot et al., 2019]. Works have also shown that semi-supervised learning can improve adversarial robustness [Carmon et al., 2019, Deng et al., 2021]. Several works have identified that using unlabeled data can empirically improve transfer learning [Zhou et al., 2018, Zhong et al., 2020, Mokrii et al., 2021], but a rigorous theoretical understanding of why this happens is lacking.

2 Preliminaries and model setup

Notation. We use [m] for {1, 2, ···, m} for any m ∈ ℕ₊, and for any set S we let |S| denote the cardinality of S. For a matrix M, we denote by σ_k(M) the k-th singular value of M. We use O_{m×l} to denote
For two real matrices E, F \u2208Om\u00d7l, we denote the subspaces spanned by the column vectors of E and F by E and F correspondingly. The subspace distance between E and F is de\ufb01ned as \u2225sin \u0398(E, F)\u2225F [Yu et al., 2015], where \u0398(E, F) = diag(cos\u22121 \u03c31(E\u22a4F), \u00b7 \u00b7 \u00b7 , cos\u22121 \u03c3l(E\u22a4F)). For a vector v, we use \u2225v\u2225q to denote the \u2113q norm. Let \u2272and \u2273denote \u201cless than\u201d and \u201cgreater than\u201d up to a universal constant respectively. a \u226ab to denote b \u2265C \u00b7 a for a su\ufb03ciently large universal constant C. Our use of O(\u00b7), \u2126(\u00b7), o(\u00b7) follows the standard literature of computer science. With some abuse of notation, we also write a = \u0398(b) if a = O(b) and a = \u2126(b) for a, b \u2208R. Data generating processes. We assume there are T source tasks. For each task t \u2208[T], we have corresponding training data set of size nt, i.e. St = {(x(t) 1 , y(t) 1 ), \u00b7 \u00b7 \u00b7 , (x(t) nt , y(t) nt )}, where x(t) i \u2208X \u2286Rp and y(t) i \u2208{\u22121, 1} are i.i.d. drawn from a joint distribution P(t) x,y. We further denote n = mint\u2208[T ] nt. In other words, n is the smallest size of source data sets. The goal of transfer learning is to learn from multiple source tasks in the hope of learning a common representation such that for a target task with distribution P(T +1) x,y , we only need few data points to learn extra structures beyond the common representation and the learned model still achieves good prediction performance. With this spirit, we assume that for t \u2208[T + 1], {(x(t) i , y(t) i )}nt i are i.i.d. drawn from P(t) x,y, such that x(t) i = \u03b7(t) i + y(t) i \u00b5t, (1) for i.i.d. noise \u03b7(t) i that is independent of y(t) i , where \u00b5t = Bat \u2208Rp, at \u2208Rr and B \u2208Rp\u00d7r is an orthonormal matrix representing the projection onto a subspace, i.e. B\u22a4B = Ir. 
Here, B is the common structure shared among all the source tasks and the target task, and the at's are task-specific parameters. Although this model is simple, the analysis is already highly nontrivial, and it captures the essence of the problem in transfer learning. In fact, similar models have been considered in Tripuraneni et al. [2020a], Du et al. [2020]. Specifically, we consider the case where r ≪ p. The data can thus be viewed as generated by mapping a low-dimensional signal into a high-dimensional space, which coincides with the fact that commonly used real image data sets lie in lower dimensional manifolds. In addition, we assume the noise term η(t) i is of zero mean and is ρ2 t-sub-gaussian, i.e. E[exp(λv⊤η(t) i )] ≤ exp(λ2ρ2 t/2) for all v ∈ Sp−1 and λ ∈ R. Throughout this paper, we consider ρt = Θ(1) for all t ∈ [T]. Remark 1. (i). The sub-gaussian assumption is quite flexible since many commonly used data sets such as image sets are all bounded, which implies sub-gaussianity. (ii). Different from the regression settings considered in previous theoretical work on transfer learning Du et al. [2020], Tripuraneni et al. [2020a], we focus on classification settings, in which adversarial training is more commonly studied. Loss functions. The loss functions considered in this paper take the following form: for each task t ∈ [T + 1], ℓ(x, y, w(t) 2 , W1) = −yf (t)(x), (2) where f (t)(x) is a two-layer linear neural network parametrized by W1 and w(t) 2 , i.e. f (t)(x) = w(t)⊤ 2 W ⊤ 1 x, with W1 ∈ Op×r, w(t) 2 ∈ Rr×1 and ∥w(t) 2 ∥ ≤ 1. Here, we mainly consider the case where W1 is well-specified, i.e. with the same dimension as B. Our argument can be further extended to the case where r is unknown by first estimating r; details are left to the appendix.
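A sketch of a sampler for model (1), using Gaussian noise as one sub-gaussian choice consistent with the assumptions (an illustration, not code from the paper):

```python
import numpy as np

def make_task_data(B, a_t, n, rho=1.0, rng=None):
    """Draw n samples from model (1): x = eta + y * B a_t, with y uniform on {-1, +1}.

    B (p x r) has orthonormal columns; eta is isotropic Gaussian noise of scale
    rho, one sub-gaussian choice consistent with the assumptions above.
    Returns X of shape (n, p) and y of shape (n,).
    """
    rng = np.random.default_rng(rng)
    p = B.shape[0]
    y = rng.choice([-1.0, 1.0], size=n)
    mu = B @ a_t  # task mean, lies in the column space of B
    X = y[:, None] * mu[None, :] + rho * rng.standard_normal((n, p))
    return X, y
```

With rho = 0 the samples reduce to x_i = y_i µ_t, which is convenient for sanity checks.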
We put a norm constraint on w(t) 2 since otherwise the minimizer always has infinite norm. The loss function in (2) along with its variants has been commonly used in the theoretical machine learning community Schmidt et al. [2018], Deng et al. [2021]. Despite its simple form, it has been consistently useful for shedding light upon complex phenomena. Meanwhile, even under this natural setting, it is highly non-trivial to demonstrate the effect of adversarial training in transfer learning. Roughly speaking, as in most transfer learning settings, W1 is assumed to be the common weights shared among the models for all source tasks, so as to learn a "good" common representation. For each individual task t, the parameter w(t) 2 performs task-specific linear classification. We leave the detailed discussion of how to combine all the source tasks to obtain a "good" W1 to Section 3. Further, we denote the empirical loss for task t as L̂(St, w(t) 2 , W1) = Σnt i=1 −y(t) i ⟨W1w(t) 2 , x(t) i ⟩/nt. The expected loss for task t is L(P(t) x,y, w(t) 2 , W1) = −E(x,y)∼P(t) x,y [y⟨W1w(t) 2 , x⟩]. Problem Setup. In fixed-representation transfer learning, the first step is to learn the common representation in the model architecture using data from the source tasks. The representation (e.g. the penultimate layer of a neural network) is then fixed. Finally, the target data is used to train or fine-tune a small model on top of the representation. Following this popular practice, in our model setting, we use the data of the T source tasks {St}T t=1 to obtain an estimator Ŵ1. Then, we use the data of the target task ST+1 to obtain an estimator ŵ(T+1) 2 of the task-specific parameter.
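The losses just defined are linear in the score vector W1w2, and since E[yx] = µ under model (1), the population loss reduces to −⟨W1w2, µ⟩, whose minimum over ∥w2∥ ≤ 1 and orthonormal W1 is −∥µ∥ (attained at W1w2 = µ/∥µ∥). A small sketch of these quantities (an illustration under the paper's model, not code from the paper):

```python
import numpy as np

def empirical_loss(X, y, W1, w2):
    """Empirical loss L_hat(S, w2, W1) = (1/n) sum_i -y_i <W1 w2, x_i>."""
    return float(-(y * (X @ (W1 @ w2))).mean())

def population_loss(mu, W1, w2):
    """Population loss under model (1): since E[y x] = mu, L = -<W1 w2, mu>."""
    return float(-(W1 @ w2) @ mu)

def excess_risk(mu_target, W1_hat, w2_hat):
    """Gap to the best predictor; the minimum population loss is -||mu||."""
    return population_loss(mu_target, W1_hat, w2_hat) + float(np.linalg.norm(mu_target))
```

A predictor aligned with µ attains zero excess risk; an orthogonal one pays ∥µ∥.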
Our evaluation criterion is the excess risk: R(Ŵ1, ŵ(T+1) 2 ) = L(P(T+1) x,y , ŵ(T+1) 2 , Ŵ1) − min ∥w2∥≤1,W1∈Op×r L(P(T+1) x,y , w2, W1). (3) 3 Adversarial Training Helps Representation Learning In this section, we demonstrate how adversarial training can learn a better representation and therefore lead to smaller excess risk. We first describe our algorithm and demonstrate its near-optimality in representation learning via a minimax lower bound. We then demonstrate, for settings where the data has varying signal-to-noise ratios or sparsity structures, how ℓ2- or ℓ∞-adversarial training can help improve the representation learning. 3.1 Representation learning algorithm Recall that the loss function for each task is ℓ(x, y, w(t) 2 , W1) = −yw(t)⊤ 2 W ⊤ 1 x, where W1 ∈ Op×r is a common structure in the model architecture shared among all the source tasks and the target task. In the spirit of transfer learning, the goal is to jointly learn W1 from the source tasks and then use the data from the target task to learn its task-specific parameter w(T+1) 2 . Here, W1 essentially aims to recover the common structure B in the data generating process Eq. (1) (or more rigorously, the column space of B), so that the obtained estimator Ŵ1 satisfies ∥sin Θ(Ŵ1, B)∥F → 0. Note that in our two-layer linear neural network structure, optimizing w2 and W1 simultaneously for a single task has an identifiability issue: the loss value will not change if we multiply an orthonormal matrix Λ ∈ Rr×r to W1 and Λ−1 to w2. However, we can still jointly learn a good estimator Ŵ1 to recover B following a method similar to Tripuraneni et al. [2020a] via singular value decomposition (SVD).
In particular, we first simultaneously optimize w(t) 2 and W1 for each individual task t ∈ [T], which is equivalent to optimizing a single parameter βt = W ⊤ 1 w(t) 2 (since W1 is an orthonormal matrix, the norm of βt is still upper bounded by 1). Then, we apply SVD to the matrix consisting of the optimizers β̂t's to obtain Ŵ1. In the final step, we use ST+1 to learn w(T+1) 2 . Algorithm 1 Learning for Linear Representations Input: {St}T+1 t=1 Step 1: Optimize the loss function on each individual source task t ∈ [T] and obtain β̂t = argmin∥βt∥≤1 (1/nt) Σnt i=1 −y(t) i ⟨βt, x(t) i ⟩. Step 2: Ŵ1ΣV ⊤ ← top-r SVD of [β̂1, β̂2, · · · , β̂T ], where Σ is an r × r diagonal matrix consisting of singular values, and V is a T × r matrix consisting of orthonormal columns. Step 3: ŵ(T+1) 2 ← argmin∥w(T+1) 2 ∥≤1 (1/nT+1) ΣnT+1 i=1 −y(T+1) i ⟨Ŵ1w(T+1) 2 , x(T+1) i ⟩. Return Ŵ1, ŵ(T+1) 2 . Next, we provide a lemma about representation learning in the two-layer linear neural network model under the assumption below. Combining this lemma with a minimax lower bound, we will show that adversarial training cannot have any gain in representation or transfer learning without extra special data structures, which motivates our subsequent theory. To facilitate the presentation, let us define M = [a1/∥a1∥, a2/∥a2∥, · · · , aT /∥aT ∥]. Assumption 1 (Task normalization and diversity). For all tasks, ∥at∥ = Θ(1) for all t ∈ [T + 1] and σr(M ⊤M/T) = Ω(1/r). Remark 2. Throughout the paper, we consider the low-rank case, where r is smaller than T and p.
Meanwhile, since ∥M∥2 F = T = Σr i=1 σ2 i (M), this assumption implies that the condition number σ1(M)/σr(M) = O(1), which roughly means that {ai/∥ai∥}T i=1 cover all the directions of Rr evenly. Loosely speaking, if we denote µ̂T+1 = ΣnT+1 i=1 x(T+1) i y(T+1) i /nT+1, under some regularity conditions, with high probability R(Ŵ1, ŵ(T+1) 2 ) = L(P(T+1) x,y , ŵ(T+1) 2 , Ŵ1) − min ∥w2∥≤1,W1∈Op×r L(P(T+1) x,y , w2, W1) ≲ ∥sin Θ(Ŵ1, B)∥F (representation error) + ∥B⊤µ̂T+1 − B⊤µT+1∥ (task-specific error). (4) The task-specific error is easy to deal with given Eq. (4); we mainly focus on providing a lemma to characterize the representation error. Lemma 1. Under Assumption 1, if n > c1 max{pr2/T, r2 log(1/δ)/T, r2} for some universal constant c1 > 0 and 2r ≤ min{p, T}, then for Ŵ1 obtained in Algorithm 1, with probability at least 1 − O(n−100), ∥sin Θ(Ŵ1, B)∥F ≲ r(√(1/n) + √(p/(nT)) + √(log n/(nT))). Application of Lemma 1 gives us the following corollary about the excess risk R(Ŵ1, ŵ(T+1) 2 ). Corollary 1. Under Assumption 1, if n > c1 max{pr2/T, r2 log(1/δ)/T, r2, rnT+1} for some universal constant c1 > 0 and 2r ≤ min{p, T}, then for Ŵ1 obtained in Algorithm 1, with probability at least 1 − O(n−100), R(Ŵ1, ŵ(T+1) 2 ) ≲ √((r + log n)/nT+1) + √(r2p/(nT)). Remark 3. Lemma 1 and Corollary 1 provide counterparts of the bounds on subspace distance and excess risk studied in Tripuraneni et al. [2020a], Du et al. [2020] under regression models. Since they use squared losses, our bounds differ from theirs by square roots; squaring our bounds gives results with rates similar to those in previous work.
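Algorithm 1 is easy to instantiate for this model: with the linear loss, Step 1 has the closed-form solution β̂t = µ̂t/∥µ̂t∥, where µ̂t = (1/nt) Σi y(t) i x(t) i, since the objective is simply −⟨βt, µ̂t⟩. A NumPy sketch of the full pipeline (an illustration, not the paper's code):

```python
import numpy as np

def algorithm1(source_data, target_data, r):
    """Sketch of Algorithm 1 for the linear loss. Each dataset is an (X, y) pair.

    Step 1 has the closed form beta_t = mu_hat_t / ||mu_hat_t||, where
    mu_hat_t = (1/n_t) sum_i y_i x_i, since the objective is -<beta, mu_hat_t>.
    """
    betas = []
    for X, y in source_data:
        mu_hat = (y[:, None] * X).mean(axis=0)
        betas.append(mu_hat / np.linalg.norm(mu_hat))
    Beta = np.column_stack(betas)                     # p x T matrix of beta_t's
    U, _, _ = np.linalg.svd(Beta, full_matrices=False)
    W1_hat = U[:, :r]                                 # top-r left singular vectors
    # Step 3: closed-form task-specific parameter on the target task.
    Xt, yt = target_data
    v = W1_hat.T @ (yt[:, None] * Xt).mean(axis=0)
    w2_hat = v / np.linalg.norm(v)
    return W1_hat, w2_hat
```

On noiseless synthetic data every β̂t lies exactly in the column space of B, so the SVD step recovers that subspace.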
If we do not use data from the source tasks, we instead obtain an excess risk bound of order √(p/nT+1), which is significantly larger than the one in Corollary 1 when r + log n ≪ p and pr2 ≪ nT, as happens in our low-rank situation with abundant source task data. Meanwhile, we provide the following minimax lower bound to justify the near-optimality of our algorithm in learning the representation in general cases. Proposition 1. Consider the parameter space Ξ = {A ∈ Rp×r, B ∈ Rp×r : σr(A⊤A/T) ≳ 1, B⊤B = Ir}. If nT ≳ rp, we then have inf Ŵ1 supΞ E∥sin Θ(B, Ŵ1)∥F ≳ √(rp/(nT)). Remark 4. For high dimensional data such that p is much larger than T and log n, the lower bound in Proposition 1 matches the upper bound in Lemma 1 up to a factor √r. Since r is considered a small constant in our settings, we can see that in general cases, when there are no additional structural assumptions, our algorithm already attains the near-optimal rate in representation learning. However, in later sections, when we introduce additional structural assumptions such as varying signal-to-noise ratios and sparsity structures among tasks, which commonly arise in real applications, we will show that adversarial training can improve representation learning and further lead to smaller excess risk. 3.2 How ℓ2-adversarial training improves representation learning for transfer In this subsection, we consider the benefit of ℓ2-adversarial training. Specifically, if the signal-to-noise ratios vary among tasks in the sense that the ∥at∥'s have different scales, ℓ2-adversarial training can lead to a sharper representation estimation error than standard training. In contrast, Lemma 1 and Proposition 1 demonstrate that in the case of uniform signal-to-noise ratios, adversarial training cannot have any gain over standard training.
From a high-level perspective, signal-to-noise ratios determine the difficulty of classification. For tasks with small signal-to-noise ratios, adversarial attacks make classification even harder (increasing bias), but they also make these tasks less competitive (decreasing variance). Thus, adversarial training biases the model toward learning the representation from the tasks with large signal-to-noise ratios. Assumption 2 (Varying signal-to-noise ratios). The T source tasks can be divided into two disjoint sets. The first set is S1 = {t ∈ [T] : ∥at∥ = Θ(1)}, and the second set is S2 = {t ∈ [T] : ∥at∥ = Ω(αT )}, where αT → ∞ as T → ∞, and S1 ∪ S2 = [T]. In addition, |S2|/T = Θ(1). For the matrix M = [a1/∥a1∥, a2/∥a2∥, · · · , aT /∥aT ∥], we further denote by MS1 the sub-matrix of M whose columns consist of at/∥at∥ for t ∈ S1. For instance, if S1 = {1, 2, 3}, then MS1 = [a1/∥a1∥, a2/∥a2∥, a3/∥a3∥]. We define MS2 similarly. Assumption 3 (Task diversity). For the T source tasks, min{σr(M ⊤ S2MS2/T), σr(M ⊤M/T)} = Ω(1/r). Remark 5. Assumption 2 indicates that if we have more source tasks (larger T), more tasks with large signal-to-noise ratios would show up. Similar to Assumption 1, Assumption 3 requires that the columns of both M and MS2 cover Rr evenly. Now, we consider the adversarial training algorithm for the ℓq-attack with q = 2, ∞. Algorithm 2 Adversarial Learning for Linear Features Input: {St}T+1 t=1 , q Step 1: Optimize the adversarial loss function on each individual source task t ∈ [T] and obtain β̂adv t = argmin∥βt∥≤1 max∥δi∥q≤ε (1/nt) Σnt i=1 −y(t) i ⟨βt, x(t) i + δi⟩.
Step 2: Ŵ adv 1 ΣadvV adv⊤ ← top-r SVD of [β̂adv 1 , β̂adv 2 , · · · , β̂adv T ], where Σadv is an r × r diagonal matrix consisting of singular values, and V adv is a T × r matrix consisting of orthonormal columns. Step 3: ŵadv,(T+1) 2 ← argmin∥w(T+1) 2 ∥≤1 (1/nT+1) ΣnT+1 i=1 −y(T+1) i ⟨Ŵ adv 1 w(T+1) 2 , x(T+1) i ⟩. Return Ŵ adv 1 , ŵadv,(T+1) 2 . The following theorem shows that even when the β̂adv t 's obtained by ℓ2-adversarial training have large excess risk on each source task, the Ŵ adv 1 extracted from [β̂adv 1 , β̂adv 2 , · · · , β̂adv T ] can transfer knowledge from multiple source tasks better and result in a smaller excess risk on the target task. Theorem 1. Under Assumptions 2 and 3, for ∥aT+1∥ = α = Ω(1), suppose n > c1 max{r2, r/αT } · max{p log T, log n/T, 1} and n > c2(ααT )2rnT+1 for universal constants c1, c2, and 2r ≤ min{p, T}. There exists a universal constant c3 such that if we choose ε ∈ [maxt∈S1 ∥at∥ + c3√(p log T/n), mint∈S2 ∥at∥ − c3√(p log T/n)] (this set is non-empty if T, n are large enough), then for Ŵ adv 1 , ŵadv,(T+1) 2 obtained in Algorithm 2 with q = 2, with probability at least 1 − O(n−100), ∥sin Θ(Ŵ adv 1 , B)∥F ≲ (αT )−1(√(r2/n) + √(pr2/(nT)) + √(r2 log n/(nT))), and the excess risk R(Ŵ adv 1 , ŵadv,(T+1) 2 ) ≲ α(√((r + log n)/nT+1) + (αT )−1√(r2p/(nT))). Similar to Assumption 2, here α can also be a function of the target task data size nT+1. ℓ2-adversarial training vs.
standard training: Under the exact same conditions as in Theorem 1, a simple modification of Lemma 1 leads to ∥sin Θ(Ŵ1, B)∥F ≲ √(r2/n) + √(pr2/(nT)) + √(r2 log n/(nT)) and R(Ŵ1, ŵ(T+1) 2 ) ≲ α√((r + log n)/nT+1) + √(r2p/(nT)) with high probability. We can see that adversarial training leads to a better representation and an improved excess risk when αT is growing. Such a scenario arises when the source data consist of a large diversity of tasks with varying difficulty of classification. Our proof indeed reveals that adversarial training helps the model focus on learning the representation from easy-to-classify tasks, and therefore improves the convergence rate of representation learning. The gain in representation learning further leads to smaller rates of R(Ŵ adv 1 , ŵadv,(T+1) 2 ) compared with R(Ŵ1, ŵ(T+1) 2 ). 3.3 How ℓ∞-adversarial training improves representation learning for transfer In this subsection, we further consider the benefit of ℓ∞-adversarial training. It is well recognized that commonly used real data sets, such as MNIST and CIFAR-10, lie in lower dimensional manifolds compared with their ambient dimensions. After certain transformations Baraniuk [2007], Candès et al. [2006], this is equivalent to having sparsity structure in the coordinates. We demonstrate that if there are underlying sparsity structures in the mean parameters µt = Bat for t ∈ [T], then ℓ∞-adversarial training leads to sharper bounds on the representation error and excess risk. To facilitate the discussion, let us use µt,j to denote the j-th coordinate of µt. Assumption 4 (Structural sparsity). For an integer s such that 0 < s < p, we assume that for all t ∈ [T], the sign(µt,j)'s are i.i.d.
with P(sign(µt,j) = 0) = 1 − ηs, P(sign(µt,j) = 1) = P(sign(µt,j) = −1) = ηs/2. We also refer to s as the sparsity level. Assumption 4 guarantees that the sparsity of each column is upper bounded by O(s log T) with high probability. Similar assumptions have been commonly used in the high-dimensional statistics literature Bayati and Montanari [2011], Su et al. [2017]. For ℓ∞-adversarial training, we provide the bounds obtained through adversarial training below. Theorem 2. Under Assumptions 1 and 4, suppose n > c1 · r2 max{s2 log2 T/T, rnT+1, 1} for some universal constant c1 > 0 and 2r ≤ min{p, T}. There exists a universal constant c2 such that if we choose ε > c2√(log p/n), then for Ŵ adv 1 , ŵadv,(T+1) 2 obtained in Algorithm 2 with q = ∞, with probability at least 1 − O(n−100) − O(T −100), ∥sin Θ(Ŵ adv 1 , B)∥F ≲ r(√(1/n) + √(s2/(nT))) · log(T + p), and the excess risk R(Ŵ adv 1 , ŵadv,(T+1) 2 ) ≲ (√((r + log n)/nT+1) + r√(s2/(nT))) · log(T + p). (5) ℓ∞-adversarial training vs. standard training: Under the exact same conditions as in Theorem 2, again, a simple modification of Lemma 1 shows that without adversarial training, with high probability, we have ∥sin Θ(Ŵ1, B)∥F ≲ r(√(1/n) + √(p/(nT)) + √(log n/(nT))) and the excess risk R(Ŵ1, ŵ(T+1) 2 ) ≲ √((r + log n)/nT+1) + r√(p/(nT)). Theorem 2 shows that ℓ∞-adversarial training is able to learn significantly better representations when s2 ≪ p. This scenario is common in image classification, where the label of an image only depends on a small set of features. Our proof reveals that ℓ∞-adversarial training helps remove the redundant features in the classification tasks and therefore improves the representation learning and the subsequent downstream prediction on the target domain.
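For the linear loss, the inner maximization in Step 1 of Algorithm 2 has closed forms: an ℓ2 attack adds ε∥β∥2 to the empirical loss, while an ℓ∞ attack adds ε∥β∥1. The sketch below (our own derivation for this toy model, not code from the paper) makes the two mechanisms concrete: the ℓ2 penalty zeroes out whole tasks whose estimated signal ∥µ̂t∥ falls below ε, while the ℓ1 penalty soft-thresholds away weak coordinates.

```python
import numpy as np

def _mu_hat(X, y):
    # Empirical signal estimate: mean of y_i * x_i.
    return (y[:, None] * X).mean(axis=0)

def l2_adversarial_beta(X, y, eps):
    """Step 1 of Algorithm 2 under the l2 attack.

    max_{||d||_2 <= eps} -y<beta, x + d> = -y<beta, x> + eps*||beta||_2, so the
    adversarial empirical loss is -<beta, mu_hat> + eps*||beta||_2; over
    ||beta||_2 <= 1 it is minimized by mu_hat/||mu_hat|| if ||mu_hat|| > eps,
    and by 0 otherwise (low-signal tasks are effectively dropped).
    """
    mu = _mu_hat(X, y)
    norm = np.linalg.norm(mu)
    return mu / norm if norm > eps else np.zeros_like(mu)

def linf_adversarial_beta(X, y, eps):
    """Step 1 of Algorithm 2 under the l_inf attack.

    The inner max instead adds eps*||beta||_1; minimizing over ||beta||_2 <= 1
    soft-thresholds mu_hat at eps and renormalizes, zeroing weak (redundant)
    coordinates.
    """
    mu = _mu_hat(X, y)
    st = np.sign(mu) * np.maximum(np.abs(mu) - eps, 0.0)
    norm = np.linalg.norm(st)
    return st / norm if norm > 0 else st
```

This mirrors the two regimes analyzed above: ε between the two signal scales selects the strong tasks (Theorem 1), and ε above the coordinate noise level selects the sparse support (Theorem 2).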
4 Pseudo-Labeling and Adversarial Training In the previous section, we have shown that combining abundant data from source tasks with robust training can help learn a good classifier for the target task. Sometimes, however, even the sources have limited labeled data. In that case, data augmentation by incorporating unlabeled source data, which are easier to obtain, can be a powerful way to improve prediction accuracy. One of the most commonly used semi-supervised learning algorithms is the pseudo-labeling algorithm [Chapelle et al., 2009]. In this section, we explore how using pseudo-labeling in the source data can improve transfer learning and how adversarial training can further boost that improvement, both empirically and theoretically. [Figure 2 ((a) ℓ2 norm training; (b) ℓ∞ norm training): Comparison of target task (CIFAR-10) accuracy for models trained on the source task (ImageNet) using (i) a 10% sample of data from the source task, (ii) the 10% sample with ground truth labels and the remaining 90% with pseudo-labels, and (iii) 100% of the source task data with ground truth labels. Models trained on the source task with (a) ℓ2-adversarial training and (b) ℓ∞-adversarial training both exhibit similar behavior. The x-axis refers to the magnitude, ε, used in adversarial training; larger values indicate allowing more difficult adversarial examples, and 0 corresponds to no adversarial training. The ε value in (b) is scaled up by 255 so it corresponds to pixel difference on a [0, 255] scale.] Experiments. We perform an empirical study of image classification.
Our source task is image classification on ImageNet Russakovsky et al. [2015]; our target tasks are image classification on CIFAR-10 Krizhevsky and Hinton [2009]. To simulate the pseudo-labeling setup, we sample 10% of ImageNet, train a ResNet-18 model on this sample (without adversarial training), and generate pseudo-labels for the remaining 90%. We then train a new source model using all of the labeled and pseudo-labeled source data, with and without adversarial training. We use a public library for adversarial training Engstrom et al. [2019]. The high-level approach for adversarial training is as follows: at each iteration, take a small number of gradient steps to generate adversarial examples from an input batch; then update the network weights using the loss gradients from the adversarial batch. In Figure 2, we plot the target task accuracy of models trained on our source task in 3 different settings, across different levels of adversarial training. Models in Figure 2 (a) and (b) are trained on the source task with ℓ2- and ℓ∞-adversarial training, respectively. We compare models trained on the source task using: 1) a fixed 10% sample of ImageNet; 2) the 10% sample with ground truth labels and the remaining 90% with generated pseudo-labels; and 3) all of ImageNet with its ground truth labels. Adversarial training boosts transfer performance in all 3 settings. Pseudo-labels also boost transfer performance. In the ε = 0 setting (i.e. no adversarial training), the highest target accuracy is obtained by using labeled examples with pseudo-labels (green points). Moreover, adversarial training with pseudo-labels also increases performance; at the optimal setting for adversarial training, the difference from using pseudo-labels instead of ground truth labels is only 1.5%. In Table 1 we investigate how the amount of pseudo-labeled data affects performance.
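The high-level adversarial-training recipe just described can be sketched, for the toy linear model of Section 2 rather than the paper's ResNet setup, as a small PGD-style loop (function name and hyperparameters here are illustrative, not from the paper):

```python
import numpy as np

def adversarial_train_linear(X, y, eps, lr=0.2, pgd_steps=3, epochs=100, rng=None):
    """Minimal PGD-style adversarial training loop for the linear score <beta, x>.

    This is only a sketch of the high-level recipe described above (a few inner
    gradient steps to craft adversarial examples, then a weight update); the
    paper's experiments use deep networks and a public adversarial-training
    library, not this toy loop.
    """
    rng = np.random.default_rng(rng)
    beta = rng.standard_normal(X.shape[1])
    beta /= np.linalg.norm(beta)
    for _ in range(epochs):
        # Inner maximization: ascend the loss in delta, projected to the l2 ball.
        bhat = beta / max(np.linalg.norm(beta), 1e-12)
        delta = np.zeros_like(X)
        for _ in range(pgd_steps):
            delta += (eps / pgd_steps) * (-y[:, None] * bhat[None, :])
            norms = np.linalg.norm(delta, axis=1, keepdims=True)
            delta *= np.minimum(1.0, eps / np.maximum(norms, 1e-12))
        # Outer minimization: one gradient step on beta, then project to the unit ball.
        grad = -(y[:, None] * (X + delta)).mean(axis=0)
        beta = beta - lr * grad
        beta /= max(np.linalg.norm(beta), 1.0)
    return beta
```

On data whose signal clearly exceeds ε, the loop converges to a direction aligned with the true mean.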
We train models in 2 settings: with adversarial and with non-adversarial (standard) training on the source task. The adversarial training corresponds to ℓ2-adversarial training with ε = 1. Across all settings, we observe that robust training improves performance, and adding more pseudo-labeled data improves performance with diminishing returns. Theoretical illustration. We further support the above experimental observations with theory. We denote the unlabeled input data for each source task t ∈ [T] as Xu t = {xu,(t) i }nu t i=1. The algorithm we analyze is as follows: Table 1: Effect of the amount of pseudo-labels on transfer task performance (measured with accuracy). At 0%, we just use 10% of the data from the source task; at 900%, we use all remaining 90% of the data with pseudo-labels (this is 9 times the train set size). Adversarial training corresponds to using ℓ2-adversarial training with ε = 1 on the source task. Results on additional datasets are in the Appendix.
Source Task | Target Task | +0% Pseudo-labels | +200% | +500% | +900%
ImageNet | CIFAR-10 | 73.0% | 73.8% | 77.1% | 78.8%
ImageNet (w/ adv. training) | CIFAR-10 | 82.8% | 85.7% | 87.5% | 87.8%
ImageNet | CIFAR-100 | 51.0% | 52.9% | 55.3% | 58.4%
ImageNet (w/ adv. training) | CIFAR-100 | 62.6% | 65.2% | 68.1% | 69.5%
Algorithm 3 Natural and Adversarial Learning for Linear Features with Pseudo-labeling Input: {St}T+1 t=1 , {Xu t }T t=1, q Step 1: Train an initial classifier: w(t) init = argmin∥w∥≤1 (1/nt) Σnt i=1 −y(t) i ⟨w, x(t) i ⟩ Step 2: Obtain pseudo-labels: yu,(t) i = sgn(⟨w(t) init, xu,(t) i ⟩) Step 3: Obtain the augmented data set St,aug by combining St and {(xu,(t) i , yu,(t) i )}nu t i=1 Step 4: (Ŵ1,aug, ŵ(T+1) 2,aug ) ← Algorithm 1(St,aug, ST+1); (Ŵ adv 1,aug, ŵadv,(T+1) 2,aug ) ← Algorithm 2(St,aug, ST+1, q) Return Ŵ1,aug, ŵ(T+1) 2,aug , Ŵ adv 1,aug, ŵadv,(T+1) 2,aug Theorem 3. Denote ñ = mint∈[T] nu t and assume ñ > c1 max{pr2/T, r2 log(1/δ)/T, r2, n} for some constant c1 > 0. Assume σr(M ⊤M/T) = Ω(1/r) and nc2 ≳ ñ ≳ n for some c2 > 1, n ≳ (T + d), mint∈[T] ∥at∥ = Θ(log2 n), and η(t) i ∼ Np(0, ρ2 t Ip) for ρt = Θ(1). For Ŵ1,aug obtained in Algorithm 3, with probability 1 − O(n−100), ∥sin Θ(Ŵ1,aug, B)∥F ≲ r(√(1/ñ) + √(p/(ñT)) + √(log n/(ñT))). Comparing the result above with Lemma 1, we theoretically justify that by incorporating unlabeled data, we are able to learn a better representation when ñ ≫ n. In the following, we show that adversarial training, together with pseudo-labeling, can further boost this improvement. Theorem 4. Under the same conditions as those in Theorem 3, (a).
For the ℓ2 attack, under the same assumptions as in Theorem 1, and additionally ñ > c1 max{r2, r/αT } max{p log T, log n/T, 1} for a universal constant c1, choosing ε ∈ [maxt∈S1 ∥at∥ + c3√(p log T/ñ), mint∈S2 ∥at∥ − c3√(p log T/ñ)], we have with probability at least 1 − O(n−100), ∥sin Θ(Ŵ adv 1,aug, B)∥F ≲ (αT )−1(√(r2/ñ) + √(pr2/(ñT)) + √(r2 log n/(ñT))); (6) (b). For the ℓ∞ attack, under the same assumptions as in Theorem 2, and additionally ñ > C1 · r2 max{s2 log2 T/T, 1} for a universal constant C1, there exists a universal constant c2 such that if we choose ε > c2√(log p/ñ), then with probability at least 1 − O(n−100) − O(T −100), ∥sin Θ(Ŵ adv 1,aug, B)∥F ≲ r(√(1/ñ) + √(s2/(ñT))) · log(T + p). Similar to the interpretations of Theorems 1 and 2, Theorem 4 suggests that adversarial training can boost representation learning either (i) when the signal-to-noise ratio is varying (ℓ2-adversarial training helps in this case) or (ii) when there are many redundant features in classification (ℓ∞-adversarial training helps in this case). As in the earlier analysis, we can obtain similar upper bounds on the excess risks as those in Theorems 1 and 2 by using Eq. (4). 5 Discussion In this paper, we provide the first theoretical framework to explain how adversarial training on the source data improves transfer learning. We show that adversarial training helps learn a more robust representation, and therefore boosts the predictive performance on the target task. Additionally, we extend our analysis to the semi-supervised setting and show that adversarial training, together with pseudo-labeling, have complementary benefits and can further improve the transfer.
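Steps 1-3 of Algorithm 3 admit a short sketch for one task under the linear model (an illustration; the paper's experiments instead pseudo-label ImageNet with a ResNet-18):

```python
import numpy as np

def pseudo_label_augment(X_lab, y_lab, X_unlab):
    """Steps 1-3 of Algorithm 3 for one task (a sketch).

    Fit the closed-form linear classifier on the labeled data (step 1),
    pseudo-label the unlabeled pool with its sign predictions (step 2),
    and return the augmented dataset (step 3).
    """
    mu_hat = (y_lab[:, None] * X_lab).mean(axis=0)
    w_init = mu_hat / np.linalg.norm(mu_hat)   # step 1 (closed form for this loss)
    y_pseudo = np.sign(X_unlab @ w_init)       # step 2
    y_pseudo[y_pseudo == 0] = 1.0              # break ties arbitrarily
    X_aug = np.vstack([X_lab, X_unlab])        # step 3
    y_aug = np.concatenate([y_lab, y_pseudo])
    return X_aug, y_aug
```

Step 4 would then feed the augmented sets into Algorithm 1 or Algorithm 2.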
Societal impacts and limitations Transfer learning helps learn a well-performing machine learning model with only a small amount of labeled data from the target task. Our work contributes to this field by providing insights into factors that improve transfer learning. A limitation of our work is that, when developing the theory, we have to make some standard assumptions on the data generating distribution, assumptions that were also made in several other theory papers. While the model is simple, it captures the essence of the problem studied in the paper and is the first tractable framework to study how adversarial training helps fixed-feature transfer learning. The analyses here are already challenging, and they are supported by our experiments." + }, + { + "url": "http://arxiv.org/abs/2010.13988v2", + "title": "Toward Better Generalization Bounds with Locally Elastic Stability", + "abstract": "Algorithmic stability is a key characteristic to ensure the generalization\nability of a learning algorithm. Among different notions of stability, \\emph{uniform stability} is arguably the most popular one, which yields\nexponential generalization bounds. However, uniform stability only considers\nthe worst-case loss change (or so-called sensitivity) by removing a single data\npoint, which is distribution-independent and therefore undesirable. There are\nmany cases that the worst-case sensitivity of the loss is much larger than the\naverage sensitivity taken over the single data point that is removed,\nespecially in some advanced models such as random feature models or neural\nnetworks. Many previous works try to mitigate the distribution independent\nissue by proposing weaker notions of stability, however, they either only yield\npolynomial bounds or the bounds derived do not vanish as sample size goes to\ninfinity. 
Given that, we propose \\emph{locally elastic stability} as a weaker\nand distribution-dependent stability notion, which still yields exponential\ngeneralization bounds. We further demonstrate that locally elastic stability\nimplies tighter generalization bounds than those derived based on uniform\nstability in many situations by revisiting the examples of bounded support\nvector machines, regularized least square regressions, and stochastic gradient\ndescent.", + "authors": "Zhun Deng, Hangfeng He, Weijie J. Su", + "published": "2020-10-27", + "updated": "2021-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "main_content": "Introduction A central question in machine learning is how the performance of an algorithm on the training set carries over to unseen data. Continued efforts to address this question have given rise to numerous generalization error bounds on the gap between the population risk and empirical risk, using a 1Harvard University 2Department of Computer and Information Science, University of Pennsylvania 3Wharton Statistics Department, University of Pennsylvania. Correspondence to: Zhun Deng . Proceedings of the 38 th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s). variety of approaches from statistical learning theory (Vapnik, 1979; 2013; Bartlett & Mendelson, 2002; Bousquet & Elisseeff, 2002). Among these developments, algorithmic stability stands out as a general approach that allows one to relate certain speci\ufb01c properties of an algorithm to its generalization ability. Ever since the work of Devroye & Wagner (1979), where distribution-independent exponential generalization bounds for the concentration of the leave-one-out estimate are proposed, various results for different estimates are studied. 
Lugosi & Pawlak (1994) study the smooth estimates of the error for the deleted estimate developed in terms of a posterior distribution, and Kearns & Ron (1999) propose error stability, which provides sanity-check bounds for more general classes of learning rules regarding the deleted estimate. For general learning rules, Bousquet & Elisseeff (2002) propose the notion of uniform stability, which extends Lugosi & Pawlak (1994)'s work and yields exponential generalization bounds. Loosely speaking, Bousquet & Elisseeff (2002) show that an algorithm would generalize well to new data if this algorithm is uniformly stable in the sense that its loss function is not sensitive to the deletion of a single data point. To date, uniform stability is perhaps the most popular stability notion. Despite many recent developments, most results on stability and generalization can be divided into two categories if not counting sanity-check bounds. The first category includes stability notions such as hypothesis stability, which only yield sub-optimal polynomial generalization bounds. The second category includes stability notions based on uniform stability and its variants, which yield optimal exponential generalization bounds. Nevertheless, the stability notions in the second category either stop short of providing distribution-dependent bounds or, worse, the bounds do not vanish even when the training sample size tends to infinity (Abou-Moustafa & Szepesvári, 2019). Recognizing these facts, in this paper, we aim to relax the uniform stability notion and propose a weaker and distribution-dependent stability notion, which yields exponential generalization bounds that are consistent in the sense that the bounds vanish to zero as the training sample size tends to infinity. 1.1.
A Motivating Example To further motivate our study, note that there are many cases where the worst-case sensitivity of the loss is much larger than the average sensitivity, especially in random feature models or neural networks. (arXiv:2010.13988v2 [cs.LG] 13 Jul 2021. Generalization Bounds with Locally Elastic Stability.) Figure 1 (panels: (a) sensitivity of neural networks; (b) sensitivity of a random feature model; (c) sensitivity of a linear model): class-level sensitivity approximated by influence functions for neural networks (based on a pre-trained 18-layer ResNet), a random feature model (based on a randomly initialized 18-layer ResNet), and a linear model on CIFAR-10. The vertical axis denotes the classes in the test data and the horizontal axis denotes the classes in the training data. The class-level sensitivity from class a in the training data to class b in the test data is defined as C(c_a, c_b) = (1/(|S_a| × |S̃_b|)) Σ_{z_i∈S_a} Σ_{z∈S̃_b} |l(θ̂, z) − l(θ̂_{−i}, z)|, where S_a denotes the set of examples from class a in the training data and S̃_b denotes the set of examples from class b in the test data. As a concrete example, from Figure 1 we can observe that the sensitivity of neural networks and random feature models depends highly on the label information. To be precise, consider training two models, respectively, on the CIFAR-10 dataset (Krizhevsky, 2009) and on the dataset obtained by removing one training example, say an image of a plane, from CIFAR-10. Figure 1 shows that the difference between the loss function values of the two models depends on the label of the test image at which the loss function is evaluated: the difference between the loss function values, or sensitivity for short, is significant if the test image is another plane, and small if the test image is from a different class, such as car or cat.
Concretely, the average plane-to-plane difference is about seven times the average plane-to-cat difference. The dependence on whether the two images belong to the same class results in a pronounced diagonal structure in Figure 1(a), which is consistent with the phenomenon of local elasticity in deep learning training (He & Su, 2020; Chen et al., 2020). In particular, this structural property of the loss function differences clearly demonstrates that, for neural networks and random feature models, uniform stability fails to capture how sensitive the loss function is in the population sense, which is considerably smaller than the worst-case sensitivity. 1.2. Our Contribution As our first contribution, we introduce a new notion of algorithmic stability, referred to as locally elastic stability, to take into account the message conveyed by Figure 1. This new stability notion imposes a data-dependent bound on the sensitivity of the loss function, as opposed to the constant bound that uniform stability and many of its relaxations use. The second contribution of this paper is to develop a generalization bound for any locally elastically stable algorithm. This new generalization bound is obtained by a fine-grained analysis of the empirical risk, where using McDiarmid's inequality as in Bousquet & Elisseeff (2002) no longer works. Specifically, we expect the empirical sum of the sensitivities obtained by deleting different samples to be close to the expected sensitivity taken over the deleted sample. However, conditioning on that event, the dependency among input examples invalidates McDiarmid's inequality. To overcome this difficulty, we develop novel techniques that allow us to obtain a sharper analysis of some important quantities. Our results show that the generalization error is, loosely speaking, upper bounded by the expectation of the sensitivity function associated with locally elastic stability over the population of training examples.
Assuming uniform stability, however, classical generalization bounds are mainly determined by the largest possible sensitivity over all pairs of training examples. We further demonstrate that our bounds are tighter than those derived based on uniform stability in many situations by revisiting the examples of bounded support vector machines (SVM), regularized least squares regressions, and stochastic gradient descent (SGD). Although further exploration is required to make the new bounds applicable to deep learning models in practice, the insights from this new stability notion should shed light on the development of future approaches toward demystifying the generalization ability of modern neural networks. 1.3. Related Work Ever since Kearns & Ron (1999) and Bousquet & Elisseeff (2002) proposed the notions of uniform stability and hypothesis stability, a copious line of works has been devoted to extending and elaborating on their frameworks. In Mukherjee et al. (2006), Shalev-Shwartz et al. (2010) and Kutin & Niyogi (2002), the authors show there exist cases where stability is the key necessary and sufficient condition for learnability but uniform convergence is not. On one hand, error stability is not strong enough to guarantee generalization (Kutin & Niyogi, 2012). On the other hand, hypothesis stability guarantees generalization but only provides polynomial tail bounds. Fortunately, uniform stability guarantees generalization and further provides exponential tail bounds. In Feldman & Vondrak (2018), the authors develop the generalization bound for the cases where the uniform stability parameter is of order Ω(1/√m), where m is the sample size. In subsequent work, Feldman & Vondrak (2019) prove a nearly tight high-probability bound for any uniformly stable algorithm. In Bousquet et al.
(2020), the authors provide sharper bounds than Feldman & Vondrak (2019) and also provide general lower bounds which can be applied to certain generalized concentration inequalities. There are also works seeking to relax uniform stability, such as Abou-Moustafa & Szepesvári (2019), but their bound still has a small term that does not vanish even with an infinite sample size and a vanishing stability parameter. In addition, researchers demonstrate that many popular optimization methods, such as SGD, satisfy algorithmic stability. In Hardt et al. (2015), the authors show that SGD satisfies uniform stability. Lei & Ying (2020) further relax the smoothness and convexity assumptions, and others discuss the nonconvex case for SGD in more detail (Kuzborskij & Lampert, 2018; Madden et al., 2020). Kuzborskij & Lampert (2018) recently propose another notion of data-dependent stability for SGD. Our work can be viewed as a relaxation of uniform stability, and SGD will be shown to satisfy our new notion of algorithmic stability. 2. Locally Elastic Stability We first collect some notation that is used throughout this paper, which mostly follows that of Bousquet & Elisseeff (2002). Denote the space of examples by Z. One instance of Z is X × Y, where X and Y are the input space and label space, respectively. For a distribution D on Z, the training set S = {z_1, ..., z_m} consists of examples from D. For a function class F, a learning algorithm A : Z^m → F takes the training set S as input and outputs a function A_S ∈ F. For any m-sized training set S, let S_{−i} = {z_1, ..., z_{i−1}, z_{i+1}, ..., z_m} be derived by removing the i-th element from S and S^i = {z_1, ..., z_{i−1}, z′_i, z_{i+1}, ..., z_m} be derived by replacing the i-th element of S with another example z′_i. For any input z, we consider a loss function l(f, z). We are particularly interested in the loss l(f, z) when the function f = A_S.
Now, we formally introduce the notion of locally elastic stability below. Let β_m(·, ·) be a sequence of functions indexed by m ≥ 2 that each maps any pair z, z′ ∈ Z to a positive value. Definition 2.1 (Locally Elastic Stability). An algorithm A has locally elastic stability β_m(·, ·) with respect to the loss function l if, for all m, the inequality |l(A_S, z) − l(A_{S−i}, z)| ≤ β_m(z_i, z) holds for all S ∈ Z^m, 1 ≤ i ≤ m, and z ∈ Z. In words, the change in the loss function due to the removal of any z_i is bounded by a function depending on both z_i and the data point z where the loss is evaluated. In this respect, locally elastic stability is data-dependent. In general, β_m(·, ·) is not necessarily symmetric with respect to its two arguments. To further appreciate this definition, we compare it with uniform stability, which is perhaps one of the most popular algorithmic stability notions. Definition 2.2 (Uniform Stability (Bousquet & Elisseeff, 2002)). Let β^U_m be a sequence of scalars. An algorithm A has uniform stability β^U_m with respect to the loss function l if |l(A_S, z) − l(A_{S−i}, z)| ≤ β^U_m (1) holds for all S ∈ Z^m, 1 ≤ i ≤ m, and z ∈ Z. First of all, by definition one can set β^U_m = sup_{z′,z} β_m(z′, z). Furthermore, a simple comparison between the two notions immediately reveals that locally elastic stability offers a finer-grained definition of the loss function sensitivity. The gain is significant particularly in the case where the worst possible value of |l(A_S, z) − l(A_{S−i}, z)| is much larger than its typical realizations. 2.1. Estimation Using Influence Functions In the introduction, we motivated the proposal of locally elastic stability by showing the class-level sensitivity for a random feature model and neural networks in Figure 1.
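For small models trained in closed form, the quantity |l(A_S, z) − l(A_{S−i}, z)| appearing in both definitions can be computed exactly by leave-one-out retraining. A minimal sketch (ridge regression with squared loss; all data and constants are illustrative, not from the paper):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """A_S: minimizer of (1/m) sum_j (x_j^T th - y_j)^2 + lam * ||th||^2 (closed form)."""
    m, d = X.shape
    return np.linalg.solve(X.T @ X / m + lam * np.eye(d), X.T @ y / m)

def sensitivity(X, y, i, z, lam=0.1):
    """|l(A_S, z) - l(A_{S-i}, z)|: loss change at z = (x, y) when z_i is removed."""
    zx, zy = z
    loss = lambda th: (zx @ th - zy) ** 2
    keep = np.arange(len(y)) != i
    return abs(loss(ridge_fit(X, y, lam)) - loss(ridge_fit(X[keep], y[keep], lam)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
z = (rng.normal(size=5), 0.0)

betas = np.array([sensitivity(X, y, i, z) for i in range(100)])
beta_uniform = betas.max()   # what uniform stability must bound (worst removed point)
beta_typical = betas.mean()  # the typical sensitivity locally elastic stability tracks
```

On such toy data, beta_typical usually sits well below beta_uniform, which is exactly the gap between the two definitions that the paper exploits.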
In this subsection, we elaborate more on the experimental results and the corresponding approximation method. The examples we considered demonstrate small β_m(z_i, z) for most z's in Z for any training example z_i. The fact that β_m(z_i, z) is small for most z's is important for obtaining a sharper generalization bound with locally elastic stability than with uniform stability. Specifically, consider a function f that is parameterized by θ, and write l(θ, z) instead of l(f, z) for the loss. Writing f = f_θ, the algorithm A aims to output f_θ̂ where θ̂ = arg min_{θ∈Θ} Σ_{j=1}^m l(θ, z_j)/m (we temporarily ignore the issue of uniqueness of minimizers here). Then A_S, defined previously, is exactly f_θ̂. Denoting θ̂_{−i} = arg min_{θ∈Θ} Σ_{j≠i} l(θ, z_j)/m, we aim to quantitatively estimate |l(θ̂, z) − l(θ̂_{−i}, z)| for all i's. However, quantifying the above quantity for all i's is computationally prohibitive in practice for neural networks, and still burdensome even for the random feature model. Table 1 (comparison between locally elastic stability and uniform stability for the neural networks and the random feature model in Figure 1): neural networks — sup_{z′∈S,z∈Z} β_m(z′, z) = 3.05, sup_{z′∈Z} E_z β_m(z′, z) = 0.02, ratio 153; random feature model — 1.73, 0.04, ratio 43. In order to alleviate the computational issue, we adopt influence functions from Koh & Liang (2017) and consider the same simplified model as in Koh & Liang (2017): an N-layer neural network whose first N − 1 layers are pre-trained.
Given that model, when the loss function l(θ, z) is strictly convex in θ for all z, such as the commonly used cross-entropy loss and the squared loss with an ℓ2 penalty on θ, we have the following approximation: β_m(z_i, z) := |l(θ̂, z) − l(θ̂_{−i}, z)| ≈ (1/m) |∇_θ l(θ̂, z)⊤ H_θ̂^{−1} ∇_θ l(θ̂, z_i)|, (2) where H_θ̂ = Σ_{j=1}^m ∇² l(θ̂, z_j)/m is the Hessian. We remark that it is very common in transfer learning to pre-train the first N − 1 layers, and this is different from the random feature model, where the first N − 1 layers are chosen to be independent of the data. We further consider training the full N-layer neural network by analyzing the sensitivity of the loss step-wise for SGD in the Appendix. In Figure 1, we demonstrate the class-level sensitivity approximated by influence functions for neural networks (based on a pre-trained 18-layer ResNet (He et al., 2016)) and a random feature model (based on a randomly initialized 18-layer ResNet) on CIFAR-10 (Krizhevsky, 2009). The results indicate that for random feature models and neural networks with a training example z_i, if z is from the same class as z_i, then β_m(z_i, z) is large; if z is from a different class, β_m(z_i, z) is small. Recognizing the long-tail property of class frequencies for image datasets in practice, this leads to small β_m(z_i, z)'s for most z's for any training example z_i. We close this section by providing empirical evidence to justify our statement that "β_m(z_i, z) is small for most z's for any training example z_i." Specifically, we compare sup_{z′∈S,z∈Z} β_m(z′, z) and sup_{z′∈Z} E_z β_m(z′, z) for both neural networks and the random feature model, and the results are shown in Table 1.
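The approximation in (2) can be sanity-checked numerically on a toy problem. The sketch below swaps the paper's pre-trained ResNet for an ℓ2-regularized linear model with squared loss, where the Hessian is constant and exact leave-one-out retraining is cheap; all sizes and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, lam = 200, 5, 0.5
X = rng.normal(size=(m, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=m)

def fit(Xs, ys):
    # theta-hat: minimizer of (1/k) sum_j [(x_j^T th - y_j)^2 + lam * ||th||^2]
    k = len(ys)
    return np.linalg.solve(Xs.T @ Xs / k + lam * np.eye(d), Xs.T @ ys / k)

def grad(th, x, yv):
    # gradient of l(th, z) = (x^T th - y)^2 + lam * ||th||^2
    return 2 * x * (x @ th - yv) + 2 * lam * th

theta = fit(X, y)
H = 2 * X.T @ X / m + 2 * lam * np.eye(d)   # H = (1/m) sum_j Hessian of l at theta
zx, zy = rng.normal(size=d), 0.0            # a fixed test point z
loss = lambda th: (zx @ th - zy) ** 2 + lam * th @ th

exact, approx = [], []
for i in range(30):
    keep = np.arange(m) != i
    exact.append(abs(loss(theta) - loss(fit(X[keep], y[keep]))))   # retraining
    approx.append(abs(grad(theta, zx, zy)
                      @ np.linalg.solve(H, grad(theta, X[i], y[i]))) / m)

corr = np.corrcoef(exact, approx)[0, 1]     # near 1 on this quadratic problem
```

Because the objective here is quadratic, the influence-function estimate tracks the exact leave-one-out loss change very closely; for deep networks (as in Koh & Liang, 2017) it is only a first-order approximation.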
It is worth noticing that the dependence on whether the two images belong to the same class results in a pronounced diagonal structure in Figures 1(a) and 1(b); in contrast, linear models do not exhibit such a strong dependence on the class of images, as evidenced by the absence of a diagonal structure in Figure 1(c). We believe the above phenomenon is one of the reasons that neural networks generalize well, and our newly proposed stability notion provides a new direction toward understanding the generalization behavior of neural networks. 2.2. Connection with Local Elasticity Locally elastic stability has a profound connection with a phenomenon identified by He & Su (2020), where the authors consider the question: how does updating the weights of a neural network using the gradient induced at one image (say, a tiger) impact the prediction at another image? In response to this question, He & Su (2020) observe that the impact is significant if the two images have the same membership (e.g., the test image is another tiger) or share features (e.g., a cat), and the impact is small if the two images are not semantically related (e.g., a plane).1 In contrast, this phenomenon is generally not present in kernel methods, and Chen et al. (2020) argue that this absence is in part responsible for the ineffectiveness of neural tangent kernels compared to real-world neural networks in terms of generalization. Related observations have been made in Chatterjee (2020) and Fort et al. (2019). This phenomenon, which He & Su (2020) refer to as local elasticity, would imply the characteristic of neural networks that we observe in Figure 1. Intuitively, from local elasticity we would expect that if we remove an image of a cat from the training set S, the loss after training, evaluated at a test image of a plane, would not be affected much compared with the loss obtained by training on the original training set (assuming the same randomness from sampling).
Conversely, the final loss would be affected considerably if the test image is another tiger. Our Definition 2.1 formalizes the intuition of local elasticity by incorporating the membership dependence into the sensitivity of the loss function, hence the name locally elastic stability. The improvement brought by locally elastic stability is further enhanced by the diversity of real-life data. First, the number of classes in tasks resembling practical applications is often very large. For example, the ImageNet dataset contains more than 1000 classes (Deng et al., 2009). For most pairs z′, z, their class memberships are different, leading to a relatively small value of β_m(z′, z) compared to the uniform upper bound adopted in uniform stability. Moreover, the long-tail property of real-life images suggests that the class of cats, for example, consists of many cats with different appearances and non-vanishing frequencies (Zhu et al., 2014) (further elaborations are included in the Appendix). Combining the observations above, we would expect that for any fixed training example z′, β_m(z′, z) would be small for most z sampled from the distribution D.1 (1: To be complete, this phenomenon does not appear in initialized neural networks and becomes pronounced only after several epochs of training on the dataset.) Therefore, the use of a uniform upper bound on the sensitivity is too pessimistic. 3. Generalization Bounds In this section, we present our generalization bound for locally elastically stable algorithms and compare it to those implied by classical algorithmic stability notions. Assumptions. We assume the space Z is bounded. In addition, from the approximation shown in (2), for many problems one has |l(θ̂, z) − l(θ̂_{−i}, z)| = O(1/m).
Moreover, Bousquet & Elisseeff (2002) show that β^U_m in uniform stability satisfies β^U_m = O(1/m) for many problems, including bounded SVM, k-local rules, and general regularization algorithms. This fact suggests that it is reasonable to expect β_m(z′, z) = O_{z′,z}(1)/m for locally elastic stability. More specifically, we have the following assumption. Assumption 3.1. For any z, z′ ∈ Z, the function β_m(·, ·) satisfies β_m(z′, z) = β(z′, z)/m for some function β(·, ·) that is independent of m. In addition, β(·, z), as a function of its first argument, is L-Lipschitz continuous for all z ∈ Z, and there exists M_β > 0 such that |β(·, ·)| ≤ M_β. In essence, β_m(z′, z) = β(z′, z)/m is equivalent to assuming that sup_m m β_m(z′, z) is finite for all z′, z. The boundedness assumption on β(·, ·) holds if β is a continuous function, in conjunction with the boundedness of Z. In relating this assumption to uniform stability in Definition 2.2, we can take β^U_m = M_β/m. Now, we are ready to state our main theorem. For convenience, write ∆(A_S) as a shorthand for the defect E_z l(A_S, z) − Σ_{j=1}^m l(A_S, z_j)/m, where the expectation E_z is over the randomness embodied in z ∼ D. In particular, E_z l(A_S, z) depends on A_S. Theorem 3.1. Let A be an algorithm that has locally elastic stability β_m(·, ·) with respect to the loss function l, which satisfies 0 ≤ l ≤ M_l for a constant M_l. Under Assumption 3.1, for any given η > 0 and any 0 < δ < 1, for sufficiently large m, with probability at least 1 − δ, we have ∆(A_S) ≤ 2 sup_{z′∈Z} E_z β(z′, z)/m + 2(2 sup_{z′∈Z} E_z β(z′, z) + η + M_l) √(2 log(2/δ)/m). We remark that (1).
the parameter η in Theorem 3.1 is used to control the deviation sup_{z′∈Z} |Σ_{j≠k} β(z′, z_j)/m − E_z β(z′, z)|. As shown in Lemma A.4 in the Appendix, we only need η > 2M_β/m. Thus, as stated in our theorem, for any given η > 0, as long as the sample size is large enough, i.e., m > 2M_β/η, all the claims involving η hold. In the subsequent discussions, for instance in Section 4.1, we can set η = sup_{z′∈Z} E_z β(z′, z) and Theorem 3.1 still holds. (2). Theorem 3.1 holds for m larger than a bound depending on δ, η, d, L, M_β, and M_l. One coarse and sufficient condition, provided in the Appendix, is that m is large enough that log(C′m)/m ≤ η²/(64M_β²), 2M² log(2/δ)/(M̃²m) ≤ η²/(128M_β²), (2M/M̃)√(2 log(2/δ)/m) ≤ η²/(128M_β²), and m > 2M_β/η, for constants C′ (depending on d), M̃, M_β, η; this can be achieved once we notice that log(C′m)/m → 0 as m → ∞. In this theorem, the bound on the defect ∆(A_S) tends to 0 as m → ∞. The factor √(log(2/δ)) results from the fact that this locally elastic stability-based bound is an exponential bound. Notably, the bound depends on locally elastic stability through sup_{z′∈Z} E_z β(z′, z), which is closely related to error stability (Kearns & Ron, 1999). See more discussion in Section 4. Remark 3.1. In passing, we make a brief remark on the novelty of the proof. An important step in our proof is to take advantage of the fact that |Σ_{j≠k} β(z′, z_j)/m − E_z β(z′, z)| is small with high probability. Conditioning on this event, however, the z_j's are no longer an i.i.d. sample from D.
The dependence among input examples would unfortunately invalidate McDiarmid's inequality, which is a key technique in proving generalization bounds for uniform stability. To overcome this difficulty, we develop new techniques to obtain a more careful analysis of some estimates. More details can be found in the Appendix. 4. Comparisons with Other Notions of Algorithmic Stability Having established the generalization bound for locally elastic stability, we compare our results with some classical notions (Bousquet & Elisseeff, 2002). As will be shown in this section, error stability is not sufficient to guarantee generalization, hypothesis stability only yields polynomial bounds, and uniform stability only considers the largest loss change at z ∈ Z from removing z_i from S. In contrast, locally elastic stability not only provides exponential bounds, as uniform stability does, but also takes into account the varying sensitivity of the loss. This fine-grained perspective can be used to improve the generalization bounds derived from uniform stability when the average loss change from removing z_i from S, over different z's in Z, is much smaller than the worst-case loss change. 4.1. Uniform Stability Following Bousquet & Elisseeff (2002), for an algorithm A having uniform stability β^U_m (see Definition 2.2) with respect to the loss function l, if 0 ≤ l(·, ·) ≤ M_l, for any δ ∈ (0, 1) and sample size m, with probability at least 1 − δ, ∆(A_S) ≤ 2β^U_m + (4mβ^U_m + M_l) √(log(1/δ)/(2m)). Notice that if an algorithm A satisfies locally elastic stability with β_m(·, ·), then it has uniform stability with parameter β^U_m := sup_{z′∈S,z∈Z} β(z′, z)/m. We can identify sup_{z′∈S,z∈Z} β(z′, z) with M_β in Assumption 3.1.
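To see the gap between the two bounds numerically, one can plug a stylized class-structured sensitivity into Theorem 3.1 (with η = sup_{z′} E_z β(z′, z)) and into the uniform-stability bound above. The constants below are illustrative assumptions (the same-class/cross-class magnitudes loosely mimic Table 1); this is a sketch, not the paper's experiment:

```python
import math

def uniform_bound(M_beta, M_l, m, delta):
    # Bousquet & Elisseeff (2002) bound with beta_U_m = M_beta / m
    return 2 * M_beta / m + (4 * M_beta + M_l) * math.sqrt(math.log(1 / delta) / (2 * m))

def locally_elastic_bound(S, M_l, m, delta):
    # Theorem 3.1 with eta = S = sup_{z'} E_z beta(z', z)
    return 2 * S / m + 2 * (3 * S + M_l) * math.sqrt(2 * math.log(2 / delta) / m)

# 10 equally likely classes; hypothetical same-class / cross-class sensitivities.
b_same, b_diff, n_classes = 3.05, 0.02, 10
M_beta = b_same                                        # sup_{z', z} beta(z', z)
S = b_same / n_classes + b_diff * (1 - 1 / n_classes)  # sup_{z'} E_z beta(z', z)

m, delta, M_l = 10_000, 0.1, 1.0
ub = uniform_bound(M_beta, M_l, m, delta)
leb = locally_elastic_bound(S, M_l, m, delta)
```

Because the locally elastic bound depends on the average sensitivity S rather than the worst case M_beta, it comes out smaller here even though its constants are larger; the gap widens as the cross-class sensitivity shrinks.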
To get a better handle on the tightness of our new generalization bound, we revisit some classic examples in Bousquet & Elisseeff (2002) and demonstrate the superiority of our bounds over uniform stability bounds in certain cases. In order to have a clear presentation, let us briefly recap the assumptions and concepts used in Bousquet & Elisseeff (2002). Assumption 4.1. Any loss function l considered in this section is associated with a cost function c_l, such that for a hypothesis f with respect to an example z = (x, y), the loss function is defined as l(f, z) = c_l(f(x), y). Definition 4.1. A loss function l defined on Y^X × Y is σ-admissible with respect to Y^X if the associated cost function c_l is convex with respect to its first argument and the following condition holds: for any y_1, y_2 ∈ Y and any y′ ∈ Y, |c_l(y_1, y′) − c_l(y_2, y′)| ≤ σ‖y_1 − y_2‖_Y, where ‖·‖_Y is the corresponding norm on Y. Reproducing kernel Hilbert space. A reproducing kernel Hilbert space (RKHS) H is a Hilbert space of functions in which point evaluation is a continuous linear functional and which satisfies, for any h ∈ H and any x ∈ X, h(x) = ⟨h, K(x, ·)⟩, where K is the corresponding kernel of H. In particular, by the Cauchy–Schwarz inequality, for any h ∈ H and any x ∈ X, |h(x)| ≤ ‖h‖_K √(K(x, x)), where ‖·‖_K is the norm induced by the kernel K on H. We denote √(K(x, x)) by κ(x). Notice that for a reproducing kernel Hilbert space, K must be a positive semi-definite kernel, so κ(x) ≥ 0. In order to derive locally elastic stability bounds, we introduce the following lemma, which is a variant of Theorem 22 in Bousquet & Elisseeff (2002). Lemma 4.1. Let H be a reproducing kernel Hilbert space with kernel K such that for any x ∈ X, K(x, x) ≤ κ² < ∞.
The loss function l is σ-admissible with respect to H, and the learning algorithm is defined by A_S = arg min_{h∈H} (1/m) Σ_{j=1}^m l(h, z_j) + λ‖h‖²_K, where λ is a positive constant. Then A_S has uniform stability β^U_m and locally elastic stability β_m(z_i, z) such that β^U_m ≤ σ²κ²/(2λm) and β_m(z_i, z) ≤ σ²κ(x_i)κ(x)/(2λm). Now, we are ready to investigate how the locally elastic bounds improve over the uniform stability bounds in the bounded SVM regression and regularized least squares regression studied in Bousquet & Elisseeff (2002). We remark here that, following the same settings as Bousquet & Elisseeff (2002), though the algorithms in the examples below minimize a regularized version of the loss, the generalization gap studied above is still ∆(A_S) = E_z l(A_S, z) − Σ_{j=1}^m l(A_S, z_j)/m. Example 4.1 (Stability of bounded SVM regression). Assume K is a bounded kernel, such that K(x, x) ≤ κ² for all x ∈ X, and Y = [0, B] for a real positive number B. Consider the loss function, for τ > 0, l(f, z) = |f(x) − y|_τ, which equals 0 if |f(x) − y| ≤ τ and |f(x) − y| − τ otherwise. The learning algorithm is defined by A_S = arg min_{h∈H} (1/m) Σ_{j=1}^m l(h, z_j) + λ‖h‖²_K. Noting that 0 ≤ l(f, z) ≤ B and σ = 1 in our case (2: there are several small typos in the original Examples 1 and 3 in Bousquet & Elisseeff (2002) with respect to the range of l(f, z), which we correct in our examples) and using Lemma 4.1, we obtain the following bound via uniform stability: ∆(A_S) ≤ κ²/(λm) + (2κ²/λ + B) √(log(1/δ)/(2m)). (B1) In addition, we obtain the following bound via locally elastic stability by choosing η = κ E_x κ(x)/λ: ∆(A_S) ≤ κ E_x κ(x)/(λm) + (3κ E_x κ(x)/λ + 2B) √(2 log(2/δ)/m). (B2)
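A quick numerical sketch of how (B1) and (B2) trade off. The kernel constants below are illustrative stand-ins: kappa bounds κ(x) = √K(x, x) over X, and E_kappa stands for E_x κ(x); none of the numbers come from the paper:

```python
import math

def bound_B1(kappa, B, lam, m, delta):
    # uniform-stability bound (B1) for bounded SVM regression
    return kappa**2 / (lam * m) + (2 * kappa**2 / lam + B) * math.sqrt(
        math.log(1 / delta) / (2 * m))

def bound_B2(kappa, E_kappa, B, lam, m, delta):
    # locally elastic bound (B2), which replaces one kappa with E_x kappa(x)
    return kappa * E_kappa / (lam * m) + (3 * kappa * E_kappa / lam + 2 * B) * math.sqrt(
        2 * math.log(2 / delta) / m)

m, delta, lam, B = 10_000, 0.1, 1.0, 1.0
kappa = 10.0           # worst-case value of kappa(x) over X
E_kappa_small = 0.1    # ||x|| concentrated near zero: (B2) wins
E_kappa_large = 10.0   # ||x|| always at the boundary: (B1) can win

b1 = bound_B1(kappa, B, lam, m, delta)
b2_tight = bound_B2(kappa, E_kappa_small, B, lam, m, delta)
b2_loose = bound_B2(kappa, E_kappa_large, B, lam, m, delta)
```

The comparison hinges entirely on how far E_x κ(x) sits below its supremum, which is the distribution-dependence the text discusses next.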
For simplicity, we consider K to be the bilinear kernel K(x, x′) = ⟨x, x′⟩ (similar analysis can be extended to other kernels such as polynomial kernels), with the norm of every x ∈ X bounded by B′. Then κ = B′ and E_x κ(x) = E_x ‖x‖₂. Apparently, the first term on the RHS of (B2) is smaller than the first term on the RHS of (B1), so we focus on comparing the second terms of the two inequalities. For δ < 0.5, we have log(2/δ) ≤ 2 log(1/δ). Applying this inequality to (B2), we can simplify the expressions, and if we further have (2κ²/λ + B) ≥ 2√2 (3κ E_x κ(x)/λ + 2B), (3) then the bound obtained in (B2) is tighter than the one in (B1). If the scales of κ E_x κ(x)/λ and B are relatively small compared with κ²/λ, (3) clearly holds. Notice that the first requirement, κ E_x κ(x)/λ being relatively small compared with κ²/λ, is distribution-dependent and can easily be met if the distribution of ‖x‖ is concentrated around zero. If we further have that B′² is large enough compared with Bλ, the bound in (B2) is tighter than the one in (B1). In particular, if x follows a distribution such that P(‖x‖ ≤ B′/6) ≥ 23/24, and B′² ≥ 8√2 Bλ, δ < 0.5, then the bound obtained in (B2) is tighter than the one in (B1). Moreover, our locally elastic bound is significantly tighter than the one obtained via uniform stability if B′² ≫ Bλ. Example 4.2 (Stability of regularized least squares regression). Consider Y = [0, B] and denote by H the reproducing kernel Hilbert space induced by kernel K. The regularized least squares regression algorithm is defined by A_S = arg min_{h∈H} (1/m) Σ_{j=1}^m l(h, z_j) + λ‖h‖²_K, where l(f, z) = (f(x) − y)².
Then, with Lemma 4.1, we obtain the following bound via uniform stability: ∆(A_S) ≤ 4κ²B²/(λm) + (8κ²B²/λ + B²) √(log(1/δ)/(2m)). (B3) Meanwhile, we can obtain the following bound via locally elastic stability: ∆(A_S) ≤ 4κ E_x κ(x) B²/(λm) + (12κ E_x κ(x) B²/λ + 2B²) √(2 log(2/δ)/m). (B4) Similarly, for simplicity, let us consider K to be the bilinear kernel K(x, x′) = ⟨x, x′⟩, with the norm of every x ∈ X bounded by B′, so that κ = B′ and E_x κ(x) = E_x ‖x‖₂. In the same spirit as Example 4.1, if x follows a distribution such that P(‖x‖ ≤ B′/4) ≥ 3/4 and we suppose B′⁴ ≥ λ, δ < 0.5, then the bound obtained in (B4) is tighter than the one in (B3). As in Example 4.1, our locally elastic bound is significantly tighter than the one obtained via uniform stability if B′⁴ ≫ λ. 4.2. Hypothesis Stability. For a training set S with m examples, an algorithm A has hypothesis stability β^H_m with respect to the loss function l if E_{S,z} |l(A_S, z) − l(A_{S−i}, z)| ≤ β^H_m holds for all S ∈ Z^m and 1 ≤ i ≤ m, where β^H_m is a sequence of scalars. If 0 ≤ l(·, ·) ≤ M_l, for any δ ∈ (0, 1) and sample size m, Bousquet & Elisseeff (2002) show that with probability at least 1 − δ, ∆(A_S) ≤ √((M_l² + 12 M_l m β^H_m)/(2mδ)). For β^H_m = O(1/m), hypothesis stability only provides a tail bound of order O(1/√(mδ)) (a polynomial tail bound), while locally elastic stability provides tail bounds of order O(√(log(1/δ)/m)) (an exponential tail bound). In addition, an algorithm A satisfying locally elastic stability β_m(·, ·) by definition satisfies hypothesis stability with parameter E_{z′,z} β_m(z′, z). 4.3. Error Stability.
For a training set S with m examples, an algorithm A has error stability β^E_m with respect to the loss function l if |E_z[l(A_S, z)] − E_z[l(A_{S−i}, z)]| ≤ β^E_m for all S ∈ Z^m and 1 ≤ i ≤ m. Error stability is closely related to locally elastic stability in the sense that β^E_m can take the value sup_{z′∈Z} E_z β_m(z′, z). However, as pointed out by Kutin & Niyogi (2012), this notion is too weak to guarantee generalization, in the sense that there exists an algorithm A whose error stability parameter goes to 0 as the sample size m tends to infinity while the generalization gap does not go to 0. 5. Locally Elastic Stability and Stochastic Gradient Descent Figure 2 (panels: (a) neural networks, epoch 0; (b) neural networks, epoch 10; (c) neural networks, epoch 50; (d) random feature model, epoch 0; (e) random feature model, epoch 50; (f) random feature model, epoch 250): exact stepwise characterization of class-level sensitivity for neural networks and random feature models trained with different numbers of epochs by SGD on CIFAR-10. The class-level sensitivity for a stepwise update of SGD is C′(c_a, c_b) = (1/(|S_a| · |S̃_b|)) Σ_{z_i∈S_a} Σ_{z∈S̃_b} |l(θ̂_t − η∇_θ l(θ̂_t, z_i), z) − l(θ̂_t, z)|, where S_a denotes the set of examples with class a in the training data and S̃_b denotes the set of examples with class b in the test data. In Hardt et al. (2016), the authors demonstrate that SGD satisfies uniform stability under the standard Lipschitz and
The SGD algorithm consists of multiple steps of stochastic gradient updates $\\hat{\\theta}_{t+1} = \\hat{\\theta}_t - \\eta_t\\nabla_\\theta l(\\hat{\\theta}_t, z_{i_t})$, where we allow the learning rate to change over time, $\\eta_t$ is the learning rate at time $t$, and the index $i_t$ is picked uniformly at random from $\\{1, \\cdots, m\\}$. Throughout this subsection, we develop our results for a $T$-step SGD. For a randomized algorithm $A$ like SGD, we can extend the definition of locally elastic stability just as Hardt et al. (2016) do for uniform stability (Definition 2.2). As shown in Figure 2, we further demonstrate the step-wise characterization of class-level sensitivity for neural networks (based on a pre-trained ResNet-18) and random feature models (based on a randomly initialized ResNet-18) trained for different numbers of epochs by SGD on CIFAR-10. Definition 5.1. A randomized algorithm $A$ is $\\beta_m(\\cdot, \\cdot)$-locally elastic stable if for all datasets $S \\in Z^m$, we have $|\\mathbb{E}_A[l(A_S, z)] - \\mathbb{E}_A[l(A_{S^{-i}}, z)]| \\le \\beta_m(z_i, z)$, where the expectation is over the randomness embedded in the algorithm $A$. For SGD, the algorithm $A$ outputs functions $A_S$ and $A_{S^{-i}}$, parameterized by $\\hat{\\theta}_T$ and $\\hat{\\theta}^{-i}_T$ respectively, and we further study whether there is a function $\\beta_m(\\cdot, \\cdot)$ such that $|\\mathbb{E}[l(\\hat{\\theta}_T, z)] - \\mathbb{E}[l(\\hat{\\theta}^{-i}_T, z)]| \\le \\beta_m(z_i, z)$, where the expectation is taken with respect to the randomness coming from uniformly choosing the index at each iteration. Under similar settings as in Hardt et al. (2016), we develop estimates of the locally elastic stability parameters separately for the convex, strongly convex, and non-convex cases. Due to the space constraint, we only show our results here for the convex and non-convex cases and defer the treatment of the strongly convex case to the Appendix. Proposition 5.1 (Convex Optimization).
Assume that the loss function $l(\\cdot, z)$ is $\\alpha$-smooth and convex for all $z \\in Z$. In addition, $l(\\cdot, z)$ is $L(z)$-Lipschitz with $L(z) < \\infty$ for all $z \\in Z$: $|l(\\theta, z) - l(\\theta', z)| \\le L(z)\\|\\theta - \\theta'\\|$ for all $\\theta, \\theta'$. We further assume $L = \\sup_{z \\in Z} L(z) < \\infty$. Suppose that we run SGD with step sizes $\\eta_t \\le 2/\\alpha$ for $T$ steps. Then, $$|\\mathbb{E}[l(\\hat{\\theta}_T, z)] - \\mathbb{E}[l(\\hat{\\theta}^{-i}_T, z)]| \\le \\frac{(L + L(z_i))L(z)}{m}\\sum_{t=1}^T \\eta_t.$$ Proposition 5.2 (Non-convex Optimization). Assume that the loss function $l(\\cdot, z)$ is non-negative and bounded for all $z \\in Z$. Without loss of generality, we assume $0 \\le l(\\cdot, z) \\le 1$. In addition, we assume $l(\\cdot, z)$ is $\\alpha$-smooth. We further assume $l(\\cdot, z)$ is $L(z)$-Lipschitz with $L(z) < \\infty$ for all $z \\in Z$ and $L = \\sup_{z \\in Z} L(z) < \\infty$. Suppose that we run SGD for $T$ steps with a monotonically non-increasing learning rate $\\eta_t \\le c/t$ for some constant $c > 0$. Then, $$|\\mathbb{E}[l(\\hat{\\theta}_T, z)] - \\mathbb{E}[l(\\hat{\\theta}^{-i}_T, z)]| \\le \\gamma_m\\phi_\\alpha(m, T, z_i, z),$$ where $\\phi_\\alpha(m, T, z_i, z) = (c(L(z_i) + L)L(z)T^{\\alpha c})^{\\frac{1}{\\alpha c + 1}}$ and $\\gamma_m = (1 + 1/(\\alpha c))/(m - 1)$. From the propositions above, we see that SGD has locally elastic stability with a parameter of the form $\\beta(\\cdot, \\cdot)/m$, where $\\beta(\\cdot, \\cdot)$ is independent of $m$. This is consistent with our assumptions regarding the form of $\\beta_m(\\cdot, \\cdot)$ in Section 3. We remark that unlike Hardt et al. (2016), our results use $\\hat{\\theta}^{-i}_T$ instead of $\\hat{\\theta}^{i}_T$ in order to be consistent with our definition in Definition 2.1, where $\\hat{\\theta}^{i}_T$ is the parameter obtained by training on $S^i$ (replacing the $i$th element of $S$ with another example instead of removing the $i$th element as in $S^{-i}$).
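The per-example bound of Proposition 5.1 can be evaluated directly and compared with its uniform-stability counterpart, which replaces both $L(z_i)$ and $L(z)$ by the worst case $L$. A minimal sketch (the Lipschitz values and step sizes are illustrative assumptions):

```python
def elastic_sgd_bound(L_zi, L_z, L_sup, etas, m):
    # Proposition 5.1: (L + L(z_i)) * L(z) / m * sum_t eta_t
    return (L_sup + L_zi) * L_z * sum(etas) / m

def uniform_sgd_bound(L_sup, etas, m):
    # uniform-stability analogue with worst-case constants: 2 L^2 / m * sum_t eta_t
    return 2 * L_sup ** 2 * sum(etas) / m

m, T = 1000, 500
etas = [0.1 / (t + 1) for t in range(T)]   # step sizes satisfying eta_t <= 2/alpha
L_sup = 1.0                                # worst-case Lipschitz constant L
# a typical pair of examples whose per-example Lipschitz constants are much smaller than L:
tight = elastic_sgd_bound(0.1, 0.1, L_sup, etas, m)
loose = uniform_sgd_bound(L_sup, etas, m)
print(tight < loose)
```

This is the quantitative content of the locally elastic view: whenever typical $L(z_i)L(z)$ is far below $L^2$, the per-example bound is far tighter than the worst-case one.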
This setting requires us to provide new techniques. Specifically, we construct new coupling sequences to obtain an upper bound on $|\\mathbb{E}[l(\\hat{\\theta}_T, z)] - \\mathbb{E}[l(\\hat{\\theta}^{-i}_T, z)]|$ (see more in the Appendix). Comparison with results in Hardt et al. (2016) By using $L(z)$ instead of $L$, if $\\mathbb{E}L(z) \\ll L$, which holds for most common models in practice, we would expect to obtain a sharper generalization bound for SGD compared with the one derived using uniform stability in Hardt et al. (2016), according to the discussion in Section 4. Due to limited space, let us only compare Proposition 5.2 with Theorem 3.12 in Hardt et al. (2015) with a simple example as an illustration. With some abuse of notation, we still use $\\Delta(A_S)$ to denote $\\mathbb{E}_z\\mathbb{E}[l(\\hat{\\theta}_T, z)] - \\sum_{i=1}^m \\mathbb{E}[l(\\hat{\\theta}^{-i}_T, z)]/m$. In Theorem 3.12 in Hardt et al. (2015), via uniform stability, under the assumptions in Proposition 5.2, one can obtain the following bound: $$\\Delta(A_S) \\le 2\\beta^U_m + (4m\\beta^U_m + 1)\\sqrt{\\frac{\\log(1/\\delta)}{2m}}, \\quad (B5)$$ where $\\beta^U_m = \\frac{1 + 1/(\\alpha c)}{m - 1}\\left(2cL^2 T^{\\alpha c}\\right)^{\\frac{1}{\\alpha c + 1}}$. Meanwhile, via locally elastic stability, one can obtain $$\\Delta(A_S) \\le \\frac{2\\sup_{z' \\in Z}\\mathbb{E}_z\\beta(z', z)}{m} + 2\\left(2\\sup_{z' \\in Z}\\mathbb{E}_z\\beta(z', z) + 1\\right)\\sqrt{\\frac{2\\log(2/\\delta)}{m}}, \\quad (B6)$$ where $\\beta(z', z) = \\frac{1 + 1/(\\alpha c)}{m - 1}\\left(c(L(z') + L)L(z)T^{\\alpha c}\\right)^{\\frac{1}{\\alpha c + 1}}$. Example 5.1. Let us take $l(\\theta, z) = z^2 e^{-\\theta^2}$, where $\\theta$ and $z$ are scalars with $z \\in [0, 1]$ and $\\theta \\in \\mathbb{R}$. The loss function is $\\alpha$-smooth in $\\theta$ with $\\alpha = 2$. Meanwhile, $\\frac{d}{d\\theta}l(\\theta, z) = -2z^2\\theta e^{-\\theta^2}$. The fact that $\\theta e^{-\\theta^2} \\le e^{-1/2}/\\sqrt{2}$ leads to $L(z) = \\sqrt{2}e^{-1/2}z^2$ and $L = \\sqrt{2}e^{-1/2}$.
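The Lipschitz constant claimed in Example 5.1 can be checked numerically: the gradient magnitude $2z^2|\\theta|e^{-\\theta^2}$ is maximized at $\\theta = 1/\\sqrt{2}$, which gives exactly $L(z) = \\sqrt{2}e^{-1/2}z^2$. A quick grid-search sanity check (not part of the paper):

```python
import math

def grad_magnitude(theta, z):
    # |d/dtheta z^2 exp(-theta^2)| = 2 z^2 |theta| exp(-theta^2)
    return 2 * z ** 2 * abs(theta) * math.exp(-theta ** 2)

def lipschitz_claim(z):
    # L(z) = sqrt(2) e^{-1/2} z^2, from theta e^{-theta^2} <= e^{-1/2} / sqrt(2)
    return math.sqrt(2) * math.exp(-0.5) * z ** 2

z = 0.7
grid_max = max(grad_magnitude(t / 1000.0, z) for t in range(-5000, 5001))
print(abs(grid_max - lipschitz_claim(z)) < 1e-4)   # grid max matches the claimed L(z)
```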
In the same spirit as Example 4.1, if we choose learning rate $\\eta_t \\le 1/t$ and $\\delta < 0.5$, then as long as $\\sup_{z' \\in Z}\\mathbb{E}_z\\beta(z', z) < \\frac{\\sqrt{2}}{8}\\beta^U_m$ and $\\beta^U_m \\ge 2\\sqrt{2} - 1/2$, the bound obtained in (B6) is tighter than the one in (B5). It is easy to see that these conditions are easily satisfied if $T$ is large enough and $\\mathbb{E}[(L(z))^{\\frac{1}{3}}] < L^{\\frac{1}{3}}$. Therefore, if we further have the condition that $z$ lies in a small vicinity of 0 with high probability (for example, $P(|z| \\le \\frac{1}{2}) > \\frac{2}{3}$), then the bound obtained in (B6) is tighter than the one in (B5). In particular, if the training time $T$ is long enough, the bound obtained in (B6) is significantly tighter than the one in (B5). 6. Conclusion and Future Work In this work, we introduce a new notion of algorithmic stability, which is a relaxation of uniform stability yet still gives rise to exponential generalization bounds. It also provides a promising direction for obtaining useful theoretical bounds that demystify the generalization ability of modern neural networks through the lens of local elasticity (He & Su, 2020). However, as shown in Theorem 3.1, we currently still require the sample size $m$ to be large enough for our theoretical results to hold. Whether that requirement can be removed is worthy of further investigation. In addition, our bound depends on the constant $M_l$, which is typically very large in practice when the bound is applied to neural networks. Thus, an interesting question is to examine whether this constant can be improved. Acknowledgements We are grateful to Cynthia Dwork and Vitaly Feldman for inspiring discussions and constructive comments. This work was supported in part by NSF through CAREER DMS-1847415, CCF-1763665 and CCF-1934876, an Alfred Sloan Research Fellowship, the Wharton Dean's Research Fund, and Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA)."
+ }, + { + "url": "http://arxiv.org/abs/2010.10650v1", + "title": "Towards Understanding the Dynamics of the First-Order Adversaries", + "abstract": "An acknowledged weakness of neural networks is their vulnerability to\nadversarial perturbations to the inputs. To improve the robustness of these\nmodels, one of the most popular defense mechanisms is to alternatively maximize\nthe loss over the constrained perturbations (or called adversaries) on the\ninputs using projected gradient ascent and minimize over weights. In this\npaper, we analyze the dynamics of the maximization step towards understanding\nthe experimentally observed effectiveness of this defense mechanism.\nSpecifically, we investigate the non-concave landscape of the adversaries for a\ntwo-layer neural network with a quadratic loss. Our main result proves that\nprojected gradient ascent finds a local maximum of this non-concave problem in\na polynomial number of iterations with high probability. To our knowledge, this\nis the first work that provides a convergence analysis of the first-order\nadversaries. Moreover, our analysis demonstrates that, in the initial phase of\nadversarial training, the scale of the inputs matters in the sense that a\nsmaller input scale leads to faster convergence of adversarial training and a\n\"more regular\" landscape. Finally, we show that these theoretical findings are\nin excellent agreement with a series of experiments.", + "authors": "Zhun Deng, Hangfeng He, Jiaoyang Huang, Weijie J. Su", + "published": "2020-10-20", + "updated": "2020-10-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Neural networks have achieved remarkable success in many \ufb01elds such as image recognition [14] and natural language processing [6]. However, it has been recognized that neural networks are not robust against adversarial examples \u2013 prediction labels can be easily manipulated by human imperceptible perturbations [12,22]. 
In response, many defense mechanisms have been proposed against adversarial attacks, such as input de-noising [13], randomized smoothing [15], gradient regularization [20], and adversarial training [19]. Among these, one of the most popular techniques is adversarial training, which adds adversarial examples into the training set as a way of improving robustness. As opposed to earlier work that only adds adversarial examples a few times during the training phase, more recently, [19] proposed to formulate adversarial training through the lens of robust optimization, showing substantial improvement. More precisely, robust optimization for a loss function $L$ in its simplest setting takes the form $$\\min_{\\theta \\in \\Theta} \\mathbb{E}_{(x,y) \\sim D} \\max_{\\|\\delta\\|_p \\le \\varepsilon} L(\\theta, x + \\delta, y),$$ where $\\theta \\in \\Theta$ is the parameter and $(x, y)$ are the input and label following some unknown joint distribution $D$. The inner maximization problem is to find an adversarial example, where $\\delta$ is an adversarial perturbation with an $l_p$ norm constraint for some $1 \\le p \\le \\infty$. *Harvard University, zhundeng@g.harvard.edu †University of Pennsylvania, hangfeng@seas.upenn.edu ‡Institute for Advanced Study, jiaoyang@ias.edu §University of Pennsylvania, suw@wharton.upenn.edu arXiv:2010.10650v1 [cs.LG] 20 Oct 2020 For neural networks, the inner maximization problem is typically non-concave, and the most commonly used method in practice is the first-order method of projected gradient ascent. However, as pointed out by [23], the degree to which it solves the inner maximization problem has not been thoroughly understood.
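In practice the inner maximization is approximated by projected gradient ascent. A minimal sketch for the $l_\\infty$ ball follows; the quadratic toy loss and all constants are illustrative assumptions, not the paper's model:

```python
import numpy as np

def pgd_attack_linf(grad_fn, x, eps, eta, steps):
    # approximate argmax over the l_inf ball of radius eps via projected gradient ascent
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + eta * grad_fn(x + delta)   # ascent step on the loss
        delta = np.clip(delta, -eps, eps)          # project back onto the l_inf ball
    return delta

# toy loss L(u) = ||u||^2 / 2, whose gradient is u itself
x = np.array([0.5, -0.2])
delta = pgd_attack_linf(lambda u: u, x, eps=0.1, eta=0.5, steps=50)
print(np.max(np.abs(delta)) <= 0.1 + 1e-12)        # perturbation stays inside the ball
```

For the $l_2$ constraint analyzed in this paper, the `np.clip` projection is replaced by rescaling $\\delta$ onto the ball of radius $\\varepsilon$ whenever its norm exceeds $\\varepsilon$.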
While there are several papers providing great theoretical insights to the convergence of adversarial training, they either formulate the inner maximization problem as maximizing the \ufb01rst order taylor expansion of the loss [23], or treat the inner maximization problem abstractly as a general function of data and study the convergence in the neural tangent kernel regime [9]. In our paper, we make the \ufb01rst step to analyze the dynamics of projected gradient ascent of neural networks. The \ufb01rst question is about the e\ufb00ectiveness of projected gradient ascent. To prove the e\ufb00ectiveness, we need to consider the time cost of using projected gradient ascent in the inner maximization problem. In [19], one claim is that using projected gradient ascent can \ufb01nd the adversaries rapidly. That claim is very important since adversarial training usually takes much longer than usual training due to the inner maximization problem. Speci\ufb01cally, if we use gradient method in the alternative optimization problem for both inner maximization and outer minimization, and denote the number of epochs taken to \ufb01nd adversaries with given weights by n1, number of updates of weights by n2, then the epochs taken by the adversarial training is n1n2. To make the time cost of adversarial training bearable, the fact that n1 is not large plays a key role here. Another issue about e\ufb00ectiveness is whether the projected gradient ascent can truly \ufb01nd a local maximum and not be stuck at a saddle point. In [19], Madry et al. claims the loss (as a function of model parameters) typically has many local maximums with very similar values. So, if the projected gradient ascent truly \ufb01nds a local maximum, the e\ufb00ectiveness of the adversarial training is trustworthy. We summarize our \ufb01rst question below. Questions 1. Does projected gradient ascent truly \ufb01nd a local maximum rapidly? 
The second question we try to explore is whether the scale of inputs matters. In the adversarial training, \u03b5\u2019s scale is usually in proportional to the scale of input x: \u03b5 = rE\u2225x\u2225p. For adversarial attacks on images, the ratio r is supposed to be small, so as to re\ufb02ect the fact that the attacks are visually imperceptible. For \ufb01xed r > 0, \u03b5 and the input scale E\u2225x\u2225p are closely related \u2013 a smaller input scale implies a smaller \u03b5. In the implementation of image recognition using neural networks, people usually rescale the image pixels to [0, 1] or [\u22121, 1]. While that seems not a\ufb00ecting regular optimization, it may a\ufb00ect adversarial training. So, we have the following question. Questions 2. When we \ufb01x the ratio r, do smaller input scales (implying smaller \u03b5) help optimization of adversarial training? If the answer to Question 2 is positive, it will be helpful in the future applications to rescale the inputs to a smaller scale. Both questions above have not been studied yet due to the highly non-concave landscape of adversaries. 1.1 Our contributions Our analysis provides answers to Question 1 and 2 for the initial phase of the adversarial training, i.e. the weights are drawn from Xavier initialization [11]. Even for this simple case, nothing has been discussed theoretically so far. In Section 3 and 4, we provide the answer to Question 1 by showing projected gradient ascent indeed can \ufb01nd a local maximum rapidly by providing a convergence theorem. Theorem 1.1 (Informal). Projected gradient ascent can obtain an approximate local maximum, which is close to a true local maximum on the sphere in polynomial number of iterations when the learning rate is small enough. If we further allow learning rate shrinking with time, projected gradient ascent can converge to a local maximum. 
In Section 5, we answer Question 2 by showing that a smaller input scale helps, from the perspectives of both the landscape and the convergence of trajectories. Our theory shows that a smaller input scale helps the trajectory converge faster under a bad initialization. Besides, a smaller input scale makes the local maxima concentrate better, which can partially explain why the local maxima share similar loss values [19]. Lastly, we verify these claims by extensive numerical experiments. Our work mainly focuses on the initial phase of adversarial training, which may be a good start towards understanding first-order adversaries. 1.2 Related work Adversarial attack and defense Besides projected gradient ascent, several other methods have been proposed to generate adversarial examples, such as FGSM [12], l-BFGS [22], and the C&W attack [4]. Some attacks also target black-box models; to defend against those attacks, many defense mechanisms have been proposed [3,5]. However, many of these defense models have been evaded by new attacks [2], with [19] a notable exception. Besides, a line of work focuses on providing certified robustness and robustness verification [24,25,27], which also provides useful theoretical insights. Adversarial training The first work to propose adversarial training is [12], in which the authors advocate adding adversarial examples during training to improve robustness. In [19], the authors use projected gradient ascent to find adversaries and reach state-of-the-art performance. However, as mentioned before, running the projected gradient method is slow, and some work [21] intends to address this problem. Besides, the introduction of adversarial training has also motivated a line of theoretical work, such as [1,18,26]. However, none of these address the inner maximization problem solved by projected gradient ascent. Non-convex optimization Non-convex optimization is notoriously hard to analyze.
However, some existing work provides valuable guidance. In [10], the authors analyze the dynamics of noisy gradient descent in the non-convex setting. Follow-up work, including [7], shows that gradient descent can take a very long time to escape saddle points while noisy gradient descent does not, and [17] shows that noisy gradient descent can converge to a second-order stationary point very fast. In our setting, we do not need extra noise, but can still obtain a good convergence result. 2 Preliminaries Notations Throughout the paper, we use $[n]$ to denote $\\{1, 2, \\cdots, n\\}$ and use $\\|\\cdot\\|_p$ to denote the $l_p$ norm. In particular, for the $l_2$ norm, we use $\\|\\cdot\\|$ and $\\|\\cdot\\|_2$ interchangeably. For any function $L: \\mathbb{R}^d \\mapsto \\mathbb{R}$, $\\nabla L$ and $\\nabla^2 L$ denote the gradient vector and Hessian matrix respectively. We use $B$ to denote a ball and $S$ to denote a sphere. We denote $\\angle[u, v] = u^T v/(\\|u\\|\\|v\\|)$, which is the cosine of the angle between the two vectors $u$ and $v$. For a function $h(x)$, we sometimes use the shorthand $\\partial h(x)$ for the gradient $\\partial h(x)/\\partial x$. Setup Recall that adversarial learning aims to solve the robust optimization of a loss function $L$: $$\\min_{\\theta \\in \\Theta} \\mathbb{E}_{(x,y) \\sim D} \\max_{\\|\\delta\\|_p \\le \\varepsilon} L(\\theta, x + \\delta, y),$$ where $\\theta \\in \\Theta$ is the parameter and $(x, y) \\in \\mathbb{R}^d \\times \\mathbb{R}$ is a $d$-dimensional input and scalar output pair following a joint distribution $D$. The corresponding empirical version for samples $\\{x_i, y_i\\}_{i=1}^n$ is $$\\min_{\\theta \\in \\Theta} \\frac{1}{n}\\sum_{i=1}^n \\max_{\\forall i \\in [n], \\|\\delta_i\\|_p \\le \\varepsilon} L(\\theta, x_i + \\delta_i, y_i). \\quad (1)$$ For fixed $\\theta$, problem (1) decomposes into $n$ separate optimization problems: for each $x_i$, we need to obtain a corresponding $\\delta_i$. In this paper, we focus on studying the convergence rate of finding adversaries, i.e.
maximizing \u03b4 \u2208Rd when the constraint is the l2-norm and the loss is the quadratic loss of shallow neural network: max \u03b4 L(\u03b8, x + \u03b4, y) = (y \u2212f(a, W , \u03b4 + x))2, s.t. \u2225\u03b4\u22252 2 \u2a7d\u03b52. (2) Here, f is a two-layer neural network: f(a, W , \u03b4 + x) = m X r=1 ar\u03c3(wT r (x + \u03b4)). In the above equation, a = (a1, a2, \u00b7 \u00b7 \u00b7 , am)T is an m-dimensional vector, W = (w1, \u00b7 \u00b7 \u00b7 , wm) is an m \u00d7 d-matrix and \u03b8 = (aT , Vec(W )T )T , where Vec(\u00b7) is the vectorization operator. We use \u03c3 to denote the softplus activation function such that \u03c3(x) = log(1 + ex). We study the projected gradient ascent: \u03b4t+1 = PB(0,\u03b5) h \u03b4t + \u03b7 \u2202L(\u03b4t) \u2202\u03b4t i , t \u2a7e0, where B(0, \u03b5) is a ball centered at 0 with radius \u03b5 in Euclidean distance, and P is the projection operator. \u03b40 is uniformly sampled in the ball B(0, \u03b5). In this paper, we always consider the problem under the following settings unless we state explicitly otherwise. 1. wr\u2019s are i.i.d drawn from d-dimensional Gaussian N(0, \u03ba2I), where 0 < \u03ba \u2a7d1 controls the magnitude of initialization. 2. ar\u2019s are i.i.d drawn from Bernoulli distribution, which take \u00b1\u03b3 with 1/2 probability. 3. There exist L, U > 0 such that L < |y \u2212f(a, W , \u03b4 + x)| < U for all \u03b4 \u2208B(0, \u03b5). 4. \u03b40 is initialized by drawing from a uniform distribution over B\u25e6(0, \u03b5), where B\u25e6stands for the interior of the ball B. In this paper, we will take the parameters according to Xavier initialization, which means \u03ba = d\u22121/2 and \u03b3 = m\u22121/2. Remark 1. We study the case when the weights are drawn from commonly used distributions for initialization. Our analysis can be viewed as studying the dynamics of \ufb01nding adversaries in the initial phase of training. 
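The setup above can be instantiated directly. The sketch below implements the two-layer softplus network and the $l_2$-ball projected gradient ascent on $\\delta$, with Xavier-style scales $\\kappa = d^{-1/2}$ and $\\gamma = m^{-1/2}$; the dimensions, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 200
kappa, gamma = d ** -0.5, m ** -0.5          # Xavier-style initialization scales
W = rng.normal(0.0, kappa, size=(m, d))      # w_r ~ N(0, kappa^2 I)
a = gamma * rng.choice([-1.0, 1.0], size=m)  # a_r = +/- gamma with probability 1/2

def f(delta, x):
    # two-layer net: f = sum_r a_r * softplus(w_r . (x + delta))
    return a @ np.log1p(np.exp(W @ (x + delta)))

def ascent_grad(delta, x, y):
    # gradient in delta of the quadratic loss (y - f)^2
    pre = W @ (x + delta)
    sig = 1.0 / (1.0 + np.exp(-pre))         # softplus derivative (sigmoid)
    return -2.0 * (y - f(delta, x)) * (W.T @ (a * sig))

def project_ball(delta, eps):
    nrm = np.linalg.norm(delta)
    if nrm <= eps:
        return delta
    return delta * (eps / nrm)

x = rng.normal(size=d)
y, eps, eta = 1.0, 0.5, 0.1
delta = project_ball(0.1 * rng.normal(size=d), eps)   # init inside B(0, eps)
for _ in range(100):
    delta = project_ball(delta + eta * ascent_grad(delta, x, y), eps)
print(np.linalg.norm(delta) <= eps + 1e-9)
```

Consistent with the theory, for typical runs the iterate ends up on (or very near) the sphere of radius $\\varepsilon$.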
3 Main Results We present our main results on the convergence of projected gradient descent (PGD) in this section. Since the objective of optimization is \u03b4, we use L(\u03b4) for loss and we denote the constraint as c(\u03b4) = \u2225\u03b4\u22252 \u2212\u03b52. For convenience, we consider the minimization version: L(\u03b4) = \u2212(y \u2212f(a, W , \u03b4 + x))2. The original problem in (2) is equivalent to: min \u03b4 L(\u03b4), s.t. c(\u03b4) \u2a7d0. Then, the iterative optimization algorithm used becomes the projected gradient descent (PGD) \u03b4t+1 = PB(0,\u03b5) h \u03b4t \u2212\u03b7 \u2202L(\u03b4t) \u2202\u03b4t i , t \u2a7e0. Here, we provide the formal statement of our main results. 4 \fTheorem 3.1 (Main Theorem). Suppose m = \u2126(d5/2), there exists \u03b5max(m) = \u0398 \u0000(log m)\u22122\u0001 and \u03b7max(m, \u03b5) = min{\u0398 \u0000(log m)\u22122\u0001 , \u03b52}, if \u03b5 < \u03b5max(m), for any \u03b7 < \u03b7max(m, \u03b5), with high probability, in O(\u03b7\u22122) iterations, projected gradient descent will output a point \u03b4t on the sphere which is O(\u03b71/2) close to some local minimum \u03b4\u2217. Remark 2. Our width requirement is much smaller compared to the results with respect to neural tangent kernels [8,16]. The latter one requires m = O(poly(n)), where n is the samples size. Notice the scale of \u03b5 only requires to be upper bounded by O(poly((log m)\u22121), under that requirement, the activation function will be activated along the update of \u03b4 with constant probability when \u2225x\u2225is small. Corollary 3.1 (Shrinking learning rate). 
Under the assumptions of Theorem 3.1, for \u02dc t satisfying \u03b4\u02dc t \u2208\u03b5Sd\u22121 and the tangent component of \u2202f(\u03b4\u02dc t) (for every point on the sphere, the tangent component of a vector is its projection to the tangent plane at that point) being smaller than \u03b71/2, let Ds := \u2225\u03b4\u02dc t+s \u2212\u03b4\u2217\u22252, if we shrink the learning rate after \u02dc t , in a way that \u03b4\u02dc t+s+1 = PB(0,\u03b5) h \u03b4\u02dc t+s \u2212\u03b7s\u2202L(\u03b4\u02dc t+s) i , t \u2a7e0, s \u2a7e0, for \u03b70 < \u03b7, as long as \u03b7s \u21920 as s \u2192\u221eand \u03a0k i=0(1 \u2212\u03b3\u03b7i/2) \u21920 as k \u2192\u221e, we will have Ds \u21920. Furthermore, if \u03b7s\u03a0k i=0(1 \u2212\u03b2\u03b7s+i 2 ) \u2a7d\u03b7s+k+1 (3) for all s, k \u2208N, where \u03b2 is a constant depending on (d, m, \u03b5, \u03b7) and can be calculated explicitly, then for all s \u2208N, Ds \u2a7dO(\u03b7s). Remark 3. One concrete example satisfying Eq. (3) is the following one: if \u03b7s = 2/(\u03b2s + \u03b2z) for large enough integer z, Ds \u2a7dO( 1 z + s). 3.1 Our interpretation Our results state that for a wide enough one hidden layer neural network, if the attack size \u03b5 is small, then we can choose small enough learning rate, such that the trajectory of PGD can quickly reach a point that is very close to one of the minimizers. Besides, the minimizer is located on the sphere with high probability. The theory can partially explain the observation in [19]: it does not take too many iterations to \ufb01nd an adversary, which is the key to guarantee the time cost of robust optimization modest. Also, our theory is consistent with the observation that the PGD will end up on the sphere for most samples in the implementation of adversarial training. 4 Proof Sketch In this section, we brie\ufb02y sketch our proof. We show with high probability, the gradient is non-vanishing in the ball. 
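The concrete schedule in Remark 3 can be sanity-checked through the contraction recursion $D_{s+1} = (1 - \\beta\\eta_s/2)D_s$ suggested by the corollary: with $\\eta_s = 2/(\\beta(s + z))$, the product telescopes to $D_s = D_0(z-1)/(z+s-1) = O(1/(z+s))$. A small numerical check (the values of $\\beta$, $z$, and $D_0$ are illustrative placeholders):

```python
def simulate_decay(beta, z, D0, steps):
    # iterate D_{s+1} = (1 - beta * eta_s / 2) * D_s with eta_s = 2 / (beta * (s + z))
    D = D0
    for s in range(steps):
        eta = 2.0 / (beta * (s + z))
        D *= 1.0 - beta * eta / 2.0          # contraction factor 1 - 1/(s + z)
    return D

beta, z, D0 = 4.0, 10, 1.0
D_s = simulate_decay(beta, z, D0, steps=90)
# telescoping gives D_s = D0 * (z - 1) / (z + s - 1), i.e. O(1/(z + s))
print(abs(D_s - D0 * (z - 1) / (z + 90 - 1)) < 1e-9)
```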
Meanwhile, on the sphere, there is no saddle points. Besides, the trajectory will not get stuck near local maximums and can converge to a local minimum in polynomial number of iterations. Lemma 4.1 (Dynamics in the ball). For m = \u2126(d5/2), there exists \u03b5max(m) = \u0398 \u0000(log m)\u22121/2\u0001 and \u03b7max(m, \u03b5) = min{\u0398 \u0000(log m)\u22121\u0001 , \u03b52}, if \u03b5 < \u03b5max(m), \u03b7 < \u03b7max(m, \u03b5) , with high probability, whenever \u03b4t+1 \u2208B\u25e6(0, \u03b5) L(\u03b4t+1) \u2212L(\u03b4t) \u2a7d\u2212\u2126(\u03b7). 5 \fThe above lemma shows the trajectory is very unlikely to terminate in the ball since the (t + 1)-th step can make progress if \u03b4t+1 \u2208B\u25e6(0, \u03b5). Next, we focus on studying the dynamics on the sphere. For constrained optimization, we can locally transform it into an unconstrained problem by introducing Lagrangian multipliers: L(\u03b4, \u03bb) = L(\u03b4) \u2212\u03bbc(\u03b4). Under some regularity conditions, we can obtain the Lagrangian multiplier \u03bb\u2217(\u00b7): \u03bb\u2217(\u03b4) = argmin\u03bb \u2225\u2202L(\u03b4) \u2212\u03bb\u2202c(\u03b4)\u2225. There are two key quantities. The \ufb01rst quantity can be viewed as an approximate gradient when we have constraints, which we will denote as \u0393: \u0393(\u03b4) = \u2202L(\u03b4, \u03bb)|(\u03b4,\u03bb\u2217(\u03b4)) = \u2202L(\u03b4) \u2202\u03b4 \u2212\u03bb\u2217(\u03b4)\u2202c(\u03b4) \u2202\u03b4 . Another important quantity can be viewed as the approximate Hessian of constraint optimization: \u039e(\u03b4) = \u22022L(\u03b4, \u03bb)|(\u03b4,\u03bb\u2217(\u03b4)) = \u22022L(\u03b4) \u2202\u03b42 \u2212\u03bb\u2217(\u03b4)\u22022c(\u03b4) \u2202\u03b42 . For \u03b4, \u03b4\u2032 \u2208\u03b5Sd\u22121, if \u22022L(\u03b4, \u03bb\u2217) is \u03c1-Lipschitz, i.e. 
\u2225\u22022L(\u03b4a, \u03bb\u2217) \u2212\u22022L(\u03b4b, \u03bb\u2217)\u2225\u2a7d\u03c1\u2225\u03b4a \u2212\u03b4b\u2225for all \u03b4a, \u03b4b \u2208B(0, \u03b5), we can obtain L(\u03b4, \u03bb\u2217) \u2a7dL(\u03b4\u2032, \u03bb\u2217) + \u2202L(\u03b4\u2032, \u03bb\u2217)T (\u03b4 \u2212\u03b4\u2032) + 1 2(\u03b4 \u2212\u03b4\u2032)T \u22022L(\u03b4\u2032, \u03bb\u2217)(\u03b4 \u2212\u03b4\u2032) + \u03c1 6\u2225\u03b4 \u2212\u03b4\u2032\u22253. Since \u03b4, \u03b4\u2032 are on the sphere, we know L(\u03b4, \u03bb\u2217) = L(\u03b4) and L(\u03b4\u2032, \u03bb\u2217) = L(\u03b4\u2032), we have L(\u03b4) \u2a7dL(\u03b4\u2032) + \u0393(\u03b4\u2032)T (\u03b4 \u2212\u03b4\u2032) + 1 2(\u03b4 \u2212\u03b4\u2032)T \u039e(\u03b4\u2032)(\u03b4 \u2212\u03b4\u2032) + \u03c1 6\u2225\u03b4 \u2212\u03b4\u2032\u22253. (4) Further, we denote T (\u03b4) as the tangent space at \u03b4 on the sphere, and PT (\u03b4) is the operator for projection to the tangent space T (\u03b4). The projected gradient descent can be approximated in the manner stated in the following lemma. Lemma 4.2 (Approximation of PGD). For any \u02c6 v \u2208Sd\u22121, let \u02dc \u03b41 = \u03b40 + \u03b7\u02c6 v and \u02dc \u03b42 = \u03b40 + \u03b7PT0 \u00b7 \u02c6 v \u2225PB(0,\u03b5)(\u02dc \u03b41) \u2212\u02dc \u03b42\u2225\u2a7d4\u03b72 \u03b5 . It is worth noting that \u0393(\u03b4) is actually the tangent component of \u2202f(\u03b4) \u0393(\u03b4) = PT (\u03b4) \u00b7 \u2202f(\u03b4). As a result, for \u03b4t \u2208\u03b5Sd\u22121 \u2225\u03b4t+1 \u2212(\u03b4t \u2212\u03b7\u0393(\u03b4t))\u2225\u2a7d4\u03b72 \u03b5 . (5) We can use the above Eq. (4) and (5) to calculate the progress at each step. Thus, in order to analyze the progress, we only need to carefully analyze \u0393 and \u039e. In the following paragraph, we discuss \u0393 and \u039e case by case. 
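The tangent component $\\Gamma(\\delta) = P_{T(\\delta)} \\cdot \\partial f(\\delta)$ and the cosine $\\angle[\\partial f(\\delta), \\delta]$ used throughout the proof sketch can be computed as follows (the gradient vector here is a generic placeholder):

```python
import numpy as np

def tangent_component(grad, delta):
    # Gamma(delta): remove the component of the gradient along the spherical normal delta/||delta||
    n = delta / np.linalg.norm(delta)
    return grad - (grad @ n) * n

def cos_angle(u, v):
    # cosine of the angle between u and v; values near +-1 indicate a fixed point of PGD on the sphere
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

delta = np.array([3.0, 4.0])     # a point on the sphere of radius 5
grad = np.array([1.0, 1.0])      # stand-in for the gradient of f at delta
gamma = tangent_component(grad, delta)
print(abs(gamma @ delta) < 1e-12)   # the tangent component is orthogonal to the normal
```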
For each point on the sphere, we loosely de\ufb01ne \u201cnear\u201d and \u201caway from\u201d local optimums by looking into the angle between the gradient and the spherical normal vector. If the gradient is parallel to the spherical normal vector at a point on the sphere, then the point is a \ufb01xed point for projected gradient descent. It is either a local optimum or a saddle point. We will show such points are not saddle points under some regularity conditions. Since c(\u03b4) = \u2225\u03b4\u22252 \u2212\u03b52, the unit spherical normal vector is \u03b4/\u2225\u03b4\u2225at each point on the sphere and the cosine value of the angle we are looking at is \u2220[\u2202f(\u03b4), \u03b4]. If \u2220[\u2202f(\u03b4), \u03b4] is close to \u00b11, then such \u03b4 is close to a critical point. 6 \fLemma 4.3 (Away from critical points on the sphere). For m = \u2126(d5/2), there exists a threshold \u03b5max(m) = \u0398((log m)\u22121), if \u03b5 < \u03b5max, with high probability, for any \u03b4 \u2208\u03b5Sd\u22121 and any 0 \u2a7d\u03b2 \u2a7d1 such that \u2220[\u2202f(\u03b4), \u03b4] \u2a7d\u03b2, we have \u2225\u0393(\u03b4)\u2225\u2a7e p 1 \u2212\u03b22\u2225\u2202f(\u03b4)\u2225\u2a7eLBl p 1 \u2212\u03b22, where Bl is of order \u0398(1). Recall L is the lower bound such that |y \u2212f(a, W , \u03b4 +x)| > L for all \u03b4 \u2208B(0, \u03b5). The above lemma shows if the trajectory is away from critical points, each step can decrease the loss value by \u2212\u2126(\u03b7) since \u03b4t+1 \u2248\u03b4t \u2212\u03b7\u0393(\u03b4t) and L(\u03b4t+1) \u2a7dL(\u03b4t) + \u0393(\u03b4t)T (\u03b4t+1 \u2212\u03b4t) + O(\u2225\u03b4t+1 \u2212\u03b4t\u22252). The hard case is when the trajectory is near a critical point on the sphere. We will \ufb01rst show that the critical points on the sphere are not saddle points under some regularity conditions. Lemma 4.4 (Near critical points on the sphere). 
For m = \u2126(d5/2), there exists a threshold \u03b5max(m) = \u0398((log m)\u22121), if \u03b5 < \u03b5max, with high probability, there exists universal constants \u03c6, \u03b3 > 0, for any \u03b4 \u2208\u03b5Sd\u22121, such that \u2220[\u2202f(\u03b4), \u03b4] \u2a7e\u03c6, then for all \u2225v\u2225= 1, sgn \u0000(y \u2212u)\u03b4T \u2202f(\u03b4) \u0001 \u00b7 vT \u039ev \u2a7e\u03b3. Lemma 4.4 implies \u039e is either positive de\ufb01nite or negative de\ufb01nite near a critical point, thus, none of the critical points on the sphere are saddle points. Since near a local minimum, the trajectory can converge to that local minimum by traditional analysis technique, the only thing left to deal with is when the trajectory is near local maximums. The following lemma states the trajectory will not be stuck near any local maximum with high probability. We denote the set \u2206\u2212 \u03b7 = {\u03b4 : \u2220[\u2202f(\u03b4), \u03b4] \u2a7d\u22121 + \u221a\u03b7/(LBl), \u03b4 \u2208\u03b5Sd\u22121} and \u2206+ \u03b7 = {\u03b4 : \u2220[\u2202f(\u03b4), \u03b4] \u2a7e1 \u2212\u221a\u03b7/(LBl), \u03b4 \u2208\u03b5Sd\u22121}. Notice that \u2220[\u2202f(\u03b4), \u03b4] = \u00b11 when the spherical normal vector is parallel to the gradient at \u03b4. Thus, for small \u03b7, the two sets are the collections of points near local maximums and local minimums respectively. Lemma 4.5 (Trajectory and local optimums). For learning rate \u03b7 such that \u03b7 < min{1, LBl}, if arccos \u0000\u2220[\u2202f(\u03b4), \u2202f(\u03b4\u2032)] \u0001 + arccos s (LBl)2 \u2212\u03b7 (LBl)2 ! \u2a7d\u03c0 4 (6) for all \u03b4, \u03b4\u2032 \u2208\u03b5Sd\u22121, the trajectory initialized by drawing from a uniform distribution over B\u25e6(0, \u03b5) will never reach \u2206\u2212 \u03b7 . Meanwhile, if there exists t\u2217such that \u03b4t\u2217\u2208\u2206+ \u03b7 , then for all t \u2a7et\u2217, \u03b4t \u2208\u2206+ \u03b7 . 
From the discussion above, it is easy to see that Lemma 4.5 holds for small enough ε and η. The lemma states that the trajectory will not get stuck near local maxima, and that ‖Γ(δ)‖ ≥ √η whenever δ ∈ εS^{d−1} and δ ∉ Δ⁺_η. This ensures L(δ_{t+1}) − L(δ_t) ≤ −Ω(η²) for δ_t, δ_{t+1} ∈ εS^{d−1}. As a result, the trajectory makes steady progress until it reaches Δ⁺_η. From there, traditional techniques for convex optimization apply and give the final convergence result.

5 Implications and Extensions

So far, we have derived the theory for finding adversaries in the initial phase of adversarial training. Through our theoretical analysis, we also identify several interesting phenomena concerning the scale of the input x. In this section, we briefly discuss the implications of our theory for experiments and show how to extend our arguments to general losses.

Figure 1: Landscapes and trajectories on simulated data (panels (a)–(i): input scales 0.01, 1.0, 100 crossed with perturbation ratios 0.1, 1.0, 10). We compare the landscapes and trajectories for three input scales and three perturbation ratios. If the input scale is small enough (0.01 here), the landscape has only one local minimum and PGD can easily escape the local maximum within a few steps, even with a perturbation ratio as large as 10. On the other hand, if the input scale is not small enough, the landscape is less regular, with many local minima, and it takes many steps to escape from a local maximum.
For large input scales, we have to reduce the perturbation ratio to make the landscape more regular and to make escaping from local maxima faster. Our simulations are based on two-dimensional inputs and two-layer neural networks; more details can be found in the supplementary materials.

5.1 Scale, Landscape and Convergence

In this subsection, we state the high-level conclusions; the details of the theoretical results are left to the supplementary materials. As stated in the introduction, the scale of ε is usually formulated in proportion to the scale of the input x. In the empirical optimization (1)

min_{θ∈Θ} (1/n) Σ_{i=1}^n max_{‖δ_i‖_p ≤ ε, ∀i∈[n]} L(θ, x_i + δ_i, y_i),

ε takes the form

ε = r · (Σ_{i=1}^n ‖x_i‖_p)/n

for small r > 0, where r stands for a small constant ratio. In this section, we shed some light on Question 2, which we restate here.

Figure 2: Trajectories from local maxima to local minima on real-world data (panels (a)–(c): input scales 0.1, 10, 1000). We show the adversarial losses of the points on the trajectories from local maxima for three input scales and five perturbation ratios. For a fixed perturbation ratio, a smaller input scale makes escaping from local maxima easier. If the input scale is small enough (0.1 here), PGD easily escapes the local maxima even with a perturbation ratio as large as 10, as shown in Fig. 2(a). If the input scale is not small enough, escaping from a local maximum is easier with a smaller perturbation ratio, as shown in Fig. 2(b). If the input scale is too large, escaping from a local maximum is difficult even with a perturbation ratio as small as 0.001, as shown in Fig. 2(c). These results are consistent with those on the simulated data in Fig. 1. The experiments are based on the real-world MNIST dataset and a practical multi-layer CNN.
More details can be found in the supplementary materials.

When we fix the ratio r, do smaller input scales (implying smaller ε) help the optimization of adversarial training? Our answer is positive, at least in the sense that the input scale matters in the initial phase of adversarial training. We answer this question experimentally and theoretically from the perspectives of landscapes and of convergence of trajectories.

5.1.1 Smaller input scales imply more regular landscapes

In our proofs, the concentration results for all quantities, such as sup_{δ∈B(0,ε)} ‖∂f(a, W, δ + x)/∂δ‖ and min_{δ,δ′∈B(0,ε)} ∠[∂f(δ), ∂f(δ′)], depend only on the scale of ε, since in the initial phase a and W are drawn from initialization distributions that are independent of the inputs. This implies that, for a fixed ratio r, a smaller input scale results in a smaller ε, so all the concentration results hold with higher probability. Even if the ratio r is large, meaning the adversarial attack is more aggressive, the concentration results still hold. Moreover,

min_{δ,δ′} ∠[∂f(δ), ∂f(δ′)] → 1  as ε → 0,

which means the angle between ∂f(δ) and ∂f(δ′) is very small when ε is small. Besides, for δ ∈ εS^{d−1}, δ is a local optimum if and only if δ is parallel to ∂f(δ). Combining these facts, it is natural to expect the local minima to be closer to each other when a smaller ε is chosen. In fact, there is a threshold τ_ε > 0 such that when ε is smaller than τ_ε, there is only one minimum on the sphere.

Theorem 5.1 (Informal).
Under the settings of Theorem 3.1, there exists a threshold τ_ε > 0 such that for ε < τ_ε, there is only one local minimum on the sphere with high probability.

Theorem 5.1 implies that in the initial phase of adversarial training, a smaller input scale ‖x‖ ensures that there is only a single local minimum on the sphere, which is also the global minimum. Combined with the previous results, projected gradient descent is able to reach the global minimum with high probability.

Figure 3: The dynamics of trajectories from local maxima to local minima during the adversarial training process (panels (a)–(c): epochs 0, 10, 100). We use the same setting as in Figure 2 and fix the input scale at 0.1. After a few epochs of adversarial training, escaping from local maxima is still easy for the small input scale, as shown in Fig. 3(b). However, escaping from local maxima becomes significantly harder after many training epochs, as shown in Fig. 3(c). This effect is more significant for large input scales than for small ones.

In Figure 1, we can see that for a fixed r, a smaller input scale makes the landscape more regular; for instance, the upper-left panel has only one local minimum. For a large input scale, the landscape becomes very complex (see subfigure (i)) unless we use a very small perturbation ratio r (see subfigure (g)).

5.1.2 Smaller input scales help convergence

Another interesting discovery is inspired by Lemma 4.5 in the previous section. If ε is not small enough, Eq. (6) in Lemma 4.5 fails to hold. Thus, when the initial adversary δ_0 ∈ B°(0, ε) is close to one of the local maxima on the sphere, the trajectory of projected gradient descent may reach the region Δ⁻_η on the sphere, which contains points close to local maxima.
Starting too close to a local maximum results in very small progress in the loss decay at each step, so reaching a local minimum takes much longer. As an illustration, judging from the decay rate of the loss function in Figures 2 and 3, a smaller input scale leads to faster loss-value decay in the initial phase of adversarial training.

5.2 General losses

Previously, we derived the theory for the quadratic loss. In this subsection, we extend the theory to general losses of (x, y) ∈ R^d × R of the form L(y, f(θ, x + δ)), where f is still the two-layer neural network discussed previously:

f(a, W, δ + x) = Σ_{r=1}^m a_r σ(w_rᵀ(x + δ)).

Taking derivatives with respect to δ:

∂L/∂δ = (∂L/∂f) · (∂f/∂δ),  ∂²L/∂δ² = (∂L/∂f) · (∂²f/∂δ²) + (∂²L/∂f²) · (∂f/∂δ)(∂f/∂δ)ᵀ.

The only difference in deriving the theory for general losses compared to quadratic losses lies in the different form of ∂L/∂δ. As long as ∂L/∂f satisfies L < |∂L/∂f| < U for all δ ∈ B(0, ε) for some L, U > 0, and |∂²L/∂f²| is upper bounded by some constant B > 0, all our previous conclusions stand without changing the scales of ε and η. Rather than going through the details, we leave them to interested readers to check. In the remainder of this subsection, we focus on whether the above assumptions are reasonable. Generally, the loss chosen in the optimization has the following property: L(y, f) = 0 if and only if y = f.
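The chain rule for ∂L/∂δ above can be verified numerically. A minimal sketch with a tiny two-layer network and the quadratic loss, checking the analytic gradient against central finite differences (σ = tanh is our illustrative choice of activation; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 8, 4
a, W = rng.normal(size=m), rng.normal(size=(m, d))
x, y = rng.normal(size=d), 1.3

f = lambda delta: a @ np.tanh(W @ (x + delta))     # f(a, W, delta + x)
L = lambda delta: 0.5 * (y - f(delta))**2          # quadratic loss L(y, f)

delta = 0.01 * rng.normal(size=d)
# chain rule: dL/ddelta = (dL/df) * df/ddelta
dL_df = f(delta) - y
df_ddelta = W.T @ (a * (1 - np.tanh(W @ (x + delta))**2))
analytic = dL_df * df_ddelta
# central finite-difference gradient for comparison
h = 1e-6
numeric = np.array([(L(delta + h * e) - L(delta - h * e)) / (2 * h)
                    for e in np.eye(d)])
assert np.allclose(analytic, numeric, atol=1e-6)
```

The same pattern, with a different ∂L/∂f factor, covers any smooth loss L(y, f), which is the point of this subsection.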
The final goal of optimization is to make L(y, f) small. In the initial phase, since we initialize the parameters randomly, we expect f(θ, x + δ) to be “far from” the label y; in other words, |L(y, f)| is lower bounded by some positive constant L. Then, by continuity of the loss function, if ε is small, the change in |L(y, f)| is expected to be small. As a result, it is reasonable to assume that ∂L/∂f satisfies L < |∂L/∂f| < U for all δ ∈ B(0, ε) for some L, U > 0. Also, with smoothness assumptions on L over f and on f over the input x, since δ ranges over a compact set, |∂²L/∂f²| is upper bounded.

We wrap up this subsection with another concrete example besides the quadratic loss, the cross-entropy loss:

L(y, f) = −y log(exp(f)/(1 + exp(f))) − (1 − y) log(1/(1 + exp(f))).

Then

∂L/∂f = exp(f)/(1 + exp(f)) − y,  ∂²L/∂f² = exp(f)/(1 + exp(f))².

As discussed above, in the initial phase the estimated probability exp(f)/(1 + exp(f)) is usually not equal to the true probability (here the true probability y is either 0 or 1), and with small ε > 0 we expect ∂L/∂f to satisfy L < |∂L/∂f| < U for all δ ∈ B(0, ε) for some L, U > 0. Meanwhile, clearly 0 ≤ ∂²L/∂f² ≤ 1.

6 Conclusions and Future Work

In this paper, we theoretically characterize the dynamics of finding adversaries in two-layer fully connected neural networks in the initial phase of training, and we discuss the experimental implications of the theory. The main takeaway is that in the initial phase of adversarial training, the projected gradient method is trustworthy, and a smaller input scale helps adversarial training perform better.
In the future, we hope to extend our theory to higher layer neural networks and to the full dynamics involving weight updates. When considering the full dynamics, as the adversarial training process goes on, the weights become more and more dissimilar to gaussian vectors. Usually, as the adversarial training goes on, L(y, f) will goes to 0, so we can expect the convergence rate on \ufb01nding adversaries will be slower since \u2202L \u2202\u03b4 = \u2202L \u2202f \u00b7 \u2202f \u2202\u03b4 and \u2202L \u2202\u03b4 should be close to 0. The landscape of adversaries in the later phase of training will become very complicated due to the intervention of \u03b4 and \u03b8. More importantly, using \ufb01rst order optimization method is possible to result in a cyclic dynamic. It is also interesting to explore how to get rid of the cyclic dynamic problem in the future. 7 Acknowledgements This work is in part supported by NSF award 1763665." + }, + { + "url": "http://arxiv.org/abs/2010.01247v1", + "title": "Interpreting Robust Optimization via Adversarial Influence Functions", + "abstract": "Robust optimization has been widely used in nowadays data science, especially\nin adversarial training. However, little research has been done to quantify how\nrobust optimization changes the optimizers and the prediction losses comparing\nto standard training. In this paper, inspired by the influence function in\nrobust statistics, we introduce the Adversarial Influence Function (AIF) as a\ntool to investigate the solution produced by robust optimization. The proposed\nAIF enjoys a closed-form and can be calculated efficiently. To illustrate the\nusage of AIF, we apply it to study model sensitivity -- a quantity defined to\ncapture the change of prediction losses on the natural data after implementing\nrobust optimization. We use AIF to analyze how model complexity and randomized\nsmoothing affect the model sensitivity with respect to specific models. 
We\nfurther derive AIF for kernel regressions, with a particular application to\nneural tangent kernels, and experimentally demonstrate the effectiveness of the\nproposed AIF. Lastly, the theories of AIF will be extended to distributional\nrobust optimization.", + "authors": "Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang", + "published": "2020-10-03", + "updated": "2020-10-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction Robust optimization is a classic \ufb01eld of optimization theory that seeks to achieve a certain measure of robustness against uncertainty in the parameters or inputs involved [4,5]. Recently, it has been used to address a concern in deep neural networks \u2014 the deep neural networks are vulnerable to adversarial perturbations [14,31]. In supervised learning, given input x, output y and a certain loss function l, adversarial training through robust optimization for a model M is formulated as min \u03b8M\u2208\u0398 Ex,y max \u03b4\u2208R(x) l(\u03b8M, x + \u03b4, y, M), (1) where R(x) is some constrained set, which is usually taken as a small neighborhood of x in robust optimization. For example, in image recognition [17], an adversarial attack should be small so that it is visually imperceptible. Although adversarial training through robust optimization has achieved great success in defending against adversarial attacks [25], the in\ufb02uence of such adversarial training on predictions is under-explored, even for a simple model M. In particular, let us de\ufb01ne the regular optimizer and the robust optimizer respectively: \u03b8M min := arg min \u03b8M\u2208\u0398 Ex,yl(\u03b8M, x, y, M), \u03b8M \u03b5,min := arg min \u03b8M\u2208\u0398 Ex,y max \u03b4\u2208R(x,\u03b5) l(\u03b8M, x + \u03b4, y, M). 
(2)

∗Harvard University, zhundeng@g.harvard.edu
†Harvard University, dwork@seas.harvard.edu
‡Harvard University, jialiangwang@g.harvard.edu
§Rutgers University, lz412@stat.rutgers.edu

arXiv:2010.01247v1 [cs.LG] 3 Oct 2020

It is unclear how E_{x,y} l(θ^M_{ε,min}, x, y, M), the prediction loss on the original data with the robust optimizer, performs compared to the optimal prediction loss E_{x,y} l(θ^M_{min}, x, y, M). The difficulty in studying this question is the underlying NP-hardness of solving robust optimization: even for simple models, say quadratic models, the robust optimization problem is NP-hard if the constraint set is polyhedral [26]. To address this problem, drawing inspiration from the influence function in robust statistics [8,15,16,18], which characterizes how the prediction loss changes when a small fraction of the data points is contaminated, we propose the Adversarial Influence Function (AIF) to investigate the influence of robust optimization on the prediction loss. Taking advantage of small perturbations, the AIF has a closed-form expression and can be calculated efficiently. Moreover, the AIF enables us to analyze the prediction error without implementing robust optimization, which typically takes a long time due to the computational burden of searching for adversaries.

The rest of the paper is organized as follows. Section 2 lays out the setup and notation. Section 3 defines model sensitivity, which is used to understand how robust optimization affects predictions. To efficiently approximate the model sensitivity, Section 4 introduces the AIF. Further, in Section 5, we show several case studies, applying the proposed AIF to theoretically analyze the relationship between model sensitivity, model complexity, and randomized smoothing. In Section 6, we extend the AIF theory to kernel regressions and distributional robust optimization.
1.1 Related work

Adversarial training and robust optimization. Since [14] proposed adversarial training, many innovative methods have been invented to improve its performance, such as [1,23,29,33]. Earlier work only added adversarial examples in a few rounds during training, and many of those defenses have been evaded by new attacks [2]. In [25], the authors proposed to use projected gradient ascent and obtained the state-of-the-art result. They further pointed out that adversarial training can be formulated through the lens of robust optimization. Robust optimization has deep roots in engineering [32], but many robust optimization problems are NP-hard [26], and solving such problems heavily relies on high-speed computers and their exponentially increasing FLOPS rates [27]. Our adversarial influence function may bridge the gap between theoretical analysis and engineering implementation of robust optimization to a certain degree, and improve our understanding of robust optimization.

Robust statistics. Robust statistics has recently been applied to machine learning with impressive success. [20] used the influence function to understand the predictions of a black-box model. [9,24] and [6] used the influence function for model selection and cross-validation in kernel methods. Recently, [3] extended the influence function to the adversarial setting and investigated the adversarial robustness of multivariate M-estimators. We remark that their adversarial influence function differs from ours: theirs focuses on the influence on parameter inference, while ours focuses on the influence of robust optimization on prediction.

2 Setup and Notation

In this paper, we consider the task of mapping an m-dimensional input x ∈ X ⊆ R^m to a scalar output y ∈ Y, with joint distribution (x, y) ∼ P_{x,y} and marginal distributions x ∼ P_x, y ∼ P_y.
We have training dataset (Xt, Y t) = {(xt 1, yt 1), \u00b7 \u00b7 \u00b7 , (xt nt, yt nt)} and evaluation dataset (Xe, Y e) = {(xe 1, ye 1), \u00b7 \u00b7 \u00b7 , (xe ne, ye ne)}. For a given model architecture M, the loss function is denoted as l(\u03b8M, x, y, M) with parameter \u03b8M \u2208\u0398 \u2286Rd (we will omit M in l sometimes if not causing confusions). For robust optimization, we focus on studying the constraint set R(x, \u03b5) = {\u03c9 \u2208X : \u2225\u03c9 \u2212x\u2225p \u2264\u03b5 \u00b7 Ex\u223cPx\u2225x\u2225p} with small \u03b5, where \u2225\u00b7 \u2225p is the lp norm. Such type of constraint set is also called lp-attack in adversarial learning, which implies the adversaries are allowed to observe the whole dataset and are able to contaminate each data point xi a little bit. This is commonly used in adversarial training for image classi\ufb01cations in 2 \fmachine learning and the constant factor Ex\u223cPx\u2225x\u2225p is for scale consideration.1 Further, we denote the empirical version of the minimizers for regular optimization and robust optimizers in Eq. (2): \u02c6 \u03b8M min := arg min \u03b8M\u2208\u0398 1 nt nt X i=1 l(\u03b8M, xt i, yt i, M), \u02c6 \u03b8M \u03b5,min := arg min \u03b8M\u2208\u0398 1 nt nt X i=1 max \u03b4i\u2208\u02c6 R(xt i,\u03b5) l(\u03b8M, xt i + \u03b4i, yt i, M), where \u02c6 R(xt i, \u03b5) = {u \u2208X : \u2225u \u2212xt i\u2225p \u2264\u03b5\u02c6 Ext\u2225x\u2225p}, with \u02c6 Ext being the expectation with respect to the empirical probability distribution of xt. We use sgn(x) to denote the sign function: sgn(x) = 1 if x > 0, sgn(x) = 0 if x = 0, and \u22121 otherwise. We also use [n] to denote the set {1, 2 \u00b7 \u00b7 \u00b7 , n}. Further, we use the notion op and Op, where for a sequence of random variables Xn, Xn = op(an) means Xn/an \u21920 in probability, and Xn = Op(bn) means that for any \u03b5 > 0, there is a constant K, such that P(|Xn| \u2264K \u00b7 bn) \u22651 \u2212\u03b5. 
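The constraint set R(x, ε) above is an l_p ball around x whose radius scales with the average input norm. A minimal sketch of the projection onto this set for the p = 2 case (function and variable names are ours, for illustration):

```python
import numpy as np

def project_onto_R(omega, x, eps, X):
    """Project omega onto R(x, eps) = {w : ||w - x||_2 <= eps * avg ||x||_2}.

    X holds the sample used for the empirical scale factor E_hat ||x||_2.
    """
    radius = eps * np.mean(np.linalg.norm(X, axis=1))
    diff = omega - x
    n = np.linalg.norm(diff)
    return omega if n <= radius else x + diff * (radius / n)

X = np.array([[3.0, 4.0], [0.0, 5.0]])   # average l2 norm = 5
x = X[0]
w = project_onto_R(x + np.array([2.0, 0.0]), x, eps=0.1, X=X)
# radius = 0.1 * 5 = 0.5, so the point is pulled back to distance 0.5 from x
assert np.isclose(np.linalg.norm(w - x), 0.5)
```

This projection is the building block of projected-gradient-style inner maximization under the l₂ attack; points already inside R(x, ε) are left unchanged.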
3 Model Sensitivity In order to quantify how robust optimization a\ufb00ects predictions, we \ufb01rst de\ufb01ne the model sensitivity with respect to the robust optimization. De\ufb01nition 3.1 (\u03b5-sensitivity/adversarial cost). For a given model M, the \u03b5-sensitivity/adversarial cost is de\ufb01ned as S\u03b5(M) := Ex,yl(\u03b8M \u03b5,min, x, y, M) \u2212Ex,yl(\u03b8M min, x, y, M). The \u03b5-sensitivity/adversarial cost quanti\ufb01es how robust optimization increases the expected loss, and this loss also indicates the additional cost of being adversarially robust. Besides this straightforward interpretation, one can also interpret S\u03b5(M) as a trade-o\ufb00between the prediction loss and robustness for model architecture M \u2014 the optimizer \u03b8M \u03b5,min is more adversarially robust but in\ufb02ates the prediction loss comparing to \u03b8M min. For \ufb01xed \u03b5, an architecture M with small \u03b5-sensitivity implies that such an architecture can achieve adversarial robustness by robust optimization without sacri\ufb01cing the performance on the original data too much. We also say an architecture M with smaller \u03b5-sensitivity is more stable. Since \u03b8M min is the minimizer of Ex,yl(\u03b8M, x, y, M) over \u03b8M, if we further have \u03b8M min \u2208\u0398\u25e6, where \u0398\u25e6denotes the interior of \u0398 and l is twice di\ufb00erentiable, by Taylor expansion, we would have S\u03b5(M) =1 2(\u2206\u03b8M \u03b5,min)T Ex,y\u22072l(\u03b8M min, x, y, M)\u2206\u03b8M \u03b5,min + o(\u2225\u2206\u03b8M \u03b5,min\u22252 2), where \u2206\u03b8M \u03b5,min = \u03b8M \u03b5,min \u2212\u03b8M min, and the remainder is negligible if \u03b5 is small enough. 
Given the training set (Xt, Y t) and the evaluation set (Xe, Y e), we de\ufb01ne the empirical \u03b5-sensitivity: \u02c6 S\u03b5(M) \u22481 2(\u2206\u02c6 \u03b8M \u03b5,min)T E\u02c6 Pxe,ye\u22072l(\u02c6 \u03b8M min, x, y, M)\u2206\u02c6 \u03b8M \u03b5,min, (3) by omitting the remainder o(\u2225\u2206\u02c6 \u03b8M \u03b5,min\u22252 2), where \u2206\u02c6 \u03b8M \u03b5,min = \u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min. Notice that Eq. (3) involves \u2206\u02c6 \u03b8M \u03b5,min, the solution of robust optimization, which, even for simple models with loss 1For standard MNIST, the average l2 norm of x is 9.21 with dimension m = 28 \u00d7 28. The attack size does not have to be small, but \u03b5, as the ratio of the magnitude of adversarial attacks and average magnitude of images, is small. 3 \ffunctions (such as linear regression with quadratic loss), does not have a closed-form expression and is computationally heavy to obtain. In the following sections, we will address this problem by introducing AIF, which provides an e\ufb03cient way to approximate and analyze S\u03b5(M). For simplicity of illustration, we remove the superscripts t, e and use generic notation (X, Y ) = {(x1, y1), \u00b7 \u00b7 \u00b7 , (xn, yn)} for general dataset in the following sections when there is no ambiguity. 4 Adversarial In\ufb02uence Function Unless explicitly stated, we mainly consider the case where the empirical risk Pn i=1 l(\u03b8M, xt i, yt i; M) is twice di\ufb00erentiable and strongly convex in this paper. A relaxation of such conditions will be discussed in Section 4.1. 
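For a quadratic loss, the second-order expansion behind the empirical ε-sensitivity in Eq. (3) is exact, so the quadratic-form approximation can be checked directly. A minimal sketch for linear regression, using an arbitrary stand-in perturbation in place of the actual robust optimizer (which, as noted above, has no closed form):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 3
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -3.4, 1.0]) + 0.1 * rng.normal(size=n)

def loss(theta):
    r = X @ theta - y
    return np.mean(r**2) / 2            # l(theta, x, y) = (x^T theta - y)^2 / 2

theta_min = np.linalg.lstsq(X, y, rcond=None)[0]   # regular optimizer
H = X.T @ X / n                                     # empirical Hessian of the loss
theta_rob = theta_min + np.array([0.01, -0.02, 0.005])  # stand-in for the robust optimizer

d_theta = theta_rob - theta_min
S_hat = 0.5 * d_theta @ H @ d_theta     # quadratic-form approximation, as in Eq. (3)
assert np.isclose(S_hat, loss(theta_rob) - loss(theta_min))  # exact for quadratic loss
```

Because the gradient vanishes at the least-squares minimizer and the loss is quadratic in θ, the Taylor remainder is exactly zero here; for general smooth losses the remainder is only o(‖Δθ‖²).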
In order to approximate \u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min, for small \u03b5, we use \u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min \u2248\u03b5\u03b1 \u00b7 d(\u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min) d\u03b5\u03b1 |\u03b5=0+ = \u03b5\u03b1 \u00b7 d\u02c6 \u03b8M \u03b5,min d\u03b5\u03b1 \f \f \u03b5=0+ for approximation, where \u03b1 > 0 is the smallest positive real number such that the limit lim\u03b5\u21920+ (\u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min)/\u03b5\u03b1 is nonzero. Throughout this section, all the cases we consider later have \u03b1 = 1, while more general cases will be discussed in Section 6.2. Formally, we de\ufb01ne the adversarial in\ufb02uence function as follows. De\ufb01nition 4.1 (Adversarial In\ufb02uence Function). For a given model M, the adversarial in\ufb02uence function (AIF) is de\ufb01ned as I(M) := d\u03b8M \u03b5,min d\u03b5 \f \f \u03b5=0. (4) The AIF measures the changing trend of the optimizer under robust optimization in the limiting sense. With the help of AIF, we then approximate S\u03b5(M) by S\u03b5(M) \u22481 2\u03b52I(M)T Ex,y\u22072l(\u03b8M min, x, y, M)I(M) \f \f \u03b5=0 when \u03b5 is small. Next we provide a speci\ufb01c characterization of the empirical adversarial in\ufb02uence functions. We denote \u02c6 I(M) = d\u02c6 \u03b8M \u03b5,min/d\u03b5|\u03b5=0 as the empirical version of AIF. Besides, we denote the perturbation vector as \u2206= (\u03b4T 1 , \u00b7 \u00b7 \u00b7 , \u03b4T n )T . Further, for given (X, Y ) and M, we de\ufb01ne g(\u03b8M, \u2206) = 1/n Pn i=1 l(\u03b8M, xi +\u03b4i, yi; M) when we only consider the optimization over (\u03b8M, \u2206). Theorem 4.1. 
Suppose X, Y and \u0398 are compact spaces, the loss function l(\u03b8, x, y) is three times continuously di\ufb00erentiable on (\u03b8, x) \u2208\u0398 \u00d7 X for any given y \u2208Y, and the empirical Hessian matrix \u02c6 H\u02c6 \u03b8M min = 1/n Pn i=1 \u22072 \u03b8l(\u02c6 \u03b8M min, xi, yi) is positive de\ufb01nite. Further, we assume the empirical risk Pn i=1 l(\u03b8M, xt i, yt i; M) is twice di\ufb00erentiable and strongly convex and g(\u00b7, \u2206) is di\ufb00erentiable for every \u2206, \u2207\u03b8g(\u03b8M, \u2206) is continuous on \u0398 \u00d7 X, \u02c6 \u03b8M min lies in the interior of \u0398, and \u2207xl(\u02c6 \u03b8M min, xi, yi, M) \u0338= 0 for all i \u2208[n], then we have \u02c6 I(M) = \u2212\u02c6 H\u22121 \u02c6 \u03b8M min\u03a6, (5) where \u03a6 = 1/n Pn i=1 \u2207x,\u03b8l(\u02c6 \u03b8M min, xi, yi)Ex\u223c\u02c6 Px\u2225x\u2225p\u03c6i and \u03c6i = (\u03c81, \u03c82, \u00b7 \u00b7 \u00b7 , \u03c8m)T , with \u03c8k = bq\u22121 k (Pm k=1 bq k) 1 p sgn \u0010 \u2202 \u2202x\u00b7,k l(\u02c6 \u03b8M min, xi, yi, M) \u0011 . 4 \fHere, we have bk = \f \f \u2202 \u2202x\u00b7,k l(\u02c6 \u03b8M min, xi, yi, M) \f \f, x\u00b7,k is the k-th coordinate of vector x, for instance, xj = (xj,1, xj,2, \u00b7 \u00b7 \u00b7 , xj,m)T ; p \u22650 and q \u22650 are conjugate such that 1/p + 1/q = 1. Remark 1. The compactness condition is easy to satisfy. Since for any distributions D and integer n, we can take a su\ufb03ciently large constant R > 0, which is allowed to depend on n, such that all n samples are contained in the ball B(0, R) with high probability. Besides, if the input x is of high dimension, the computational bottleneck is mainly on inverting the empirical Hessian. We can use techniques such as conjugate gradients and stochastic estimation suggested in [20] to reduce the computational cost. 
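The per-sample direction φ_i in Theorem 4.1 reduces to familiar attack directions for common choices of p: the l₂-normalized gradient for p = 2, and the coordinate-wise sign of the gradient (the FGSM direction) for p = ∞. A small sketch of the formula for ψ (the function name is ours):

```python
import numpy as np

def psi(grad_x, p):
    """psi_k = b_k^{q-1} sgn(dl/dx_k) / (sum_k b_k^q)^{1/p}, with 1/p + 1/q = 1,
    where b_k = |dl/dx_k|, as in Theorem 4.1."""
    b = np.abs(grad_x)
    if np.isinf(p):                      # p = infinity  =>  q = 1, scale factor = 1
        return np.sign(grad_x)
    q = p / (p - 1.0)
    return b**(q - 1) * np.sign(grad_x) / np.sum(b**q)**(1.0 / p)

g = np.array([0.5, -2.0, 1.5])
# p = 2: psi is the l2-normalized gradient
assert np.allclose(psi(g, 2), g / np.linalg.norm(g))
# p = inf: psi is the coordinate-wise sign of the gradient
assert np.allclose(psi(g, np.inf), np.sign(g))
```

In other words, ψ is the steepest-ascent direction of the loss in the input space under the chosen l_p geometry, which is what the adversary exploits to first order.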
The above theorem provides a closed-form expression for the \ufb01rst order AIF, and therefore a closed-form approximation of the model sensitivity S\u03b5(M). One nice property of such an approximation is that it does not depend on optimization algorithms, but only depends on the model M and the distribution of (x, y). This attribute makes model sensitivity an inherent property of model M and data distribution, making it a potential new rule for model selection. Model sensitivity can help us pick those models whose prediction result will not be greatly a\ufb00ected after robust optimization. We show the e\ufb00ectiveness of approximation by AIF in Figure 1. We plot two error curves for \u2206\u02c6 I(n, \u03b5) := \u2225(\u02c6 \u03b8M \u03b5,min \u2212\u02c6 \u03b8M min)/\u03b5 \u2212\u02c6 I(M)\u22252 and \u2206\u02c6 S(n, \u03b5) := \u2225\u02c6 S\u03b5(M)/\u03b52 \u2212 (\u02c6 I(M))T E\u02c6 Pxe,ye\u22072l(\u02c6 \u03b8M min, x, y, M)\u02c6 I(M)\u22252, where the sample size is n. Theoretically, we expect \u2206\u02c6 I(n, \u03b5) and \u2206\u02c6 S(n, \u03b5) go to 0 as \u03b5 goes to 0. In all the experiments in the paper, we use projected gradient descent (PGD) for robust optimization to obtain \u02c6 \u03b8M \u03b5,min. In Figure 1, we can see that as \u03b5 become smaller, \u2206\u02c6 I(n, \u03b5) and \u2206\u02c6 S(n, \u03b5) gradually go to 0. We remark here that we do not let \u03b5 be exactly 0 in our experiments, since PGD cannot obtain the exact optimal solutions for \u02c6 \u03b8M min and \u02c6 \u03b8M \u03b5,min. The existing system error will become dominating if \u03b5 is too small and return abnormally large value after divided by \u03b5. This also motivates us to introduce the AIF to have an accurate approximation. The model we use is a linear regression model with 500 inputs drawn from a two-dimensional standard Gaussian, i.e. x \u223cN(0, I). We \ufb01t y with y = 2x1 \u22123.4x2 + \u03b7 and \u03b7 \u223c0.1 \u00b7 N(0, I). 
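The simulated setup just described is easy to reproduce; a minimal sketch (assuming ordinary least squares for the regular optimizer θ̂^M_min; the robust optimizer would additionally require PGD over the δ_i):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, 2))                  # x ~ N(0, I), two-dimensional
beta = np.array([2.0, -3.4])
y = X @ beta + 0.1 * rng.standard_normal(n)      # y = 2 x1 - 3.4 x2 + eta, eta ~ 0.1 N(0, I)

theta_min = np.linalg.lstsq(X, y, rcond=None)[0]  # regular optimizer theta_min
assert np.allclose(theta_min, beta, atol=0.05)    # recovers the true coefficients
```

With θ̂^M_min in hand, Δ_Î(n, ε) and Δ_Ŝ(n, ε) are obtained by comparing the PGD solution θ̂^M_{ε,min} against the closed-form AIF of Theorem 4.1 for a decreasing sequence of ε.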
Figure 1: Effectiveness of AIF and model sensitivity for the linear regression model. From the monotone relationship between ε and Δ_Î(n, ε), Δ_Ŝ(n, ε), we verify the effectiveness of AIF and model sensitivity. Here, the sample size is n = 500.

Remark 2. It is straightforward to derive asymptotic normality for the AIF via the central limit theorem [11], which can be used to construct confidence intervals for I(M). Specifically, denote

ζ_i := −H^{−1}_{θ̂^M_min} ∇_{θ,x} l(θ̂^M_min, x_i, y_i) E_{x∼P̂_X}‖x‖_p φ_i,  μ̂_n := (1/n) Σ_{i=1}^n ζ_i,  Σ̂_n := (1/n) Σ_{i=1}^n (ζ_i − μ̂_n)(ζ_i − μ̂_n)ᵀ.

Then, by classic statistical theory,

√n Σ̂_n^{−1/2}(Î(M) − μ̂_n) →_D N(0, I_d)

as n goes to infinity, where N(0, I) denotes the standard multivariate normal distribution and →_D denotes convergence in distribution.

4.1 Non-convex, non-convergence cases

In the previous discussions, we considered the case where the empirical loss is strongly convex. Now we briefly discuss non-convex and non-convergence cases.

Well-separated condition. In the proof of Theorem 4.1, we actually only need θ̂^M_min to be the global minimum, with a positive definite empirical Hessian at θ̂^M_min; the landscape is allowed to have many local minima.
The uniqueness assumption can also be formulated in a more elementary way: if we assume the smoothness of loss function l over X \u00d7\u0398, compactness of \u0398 and we only have one global minimum for E(x,y)\u223cPx,yl(\u03b8M, x, y, M) which lies in the interior of \u0398, with positive definite Hessian matrix, and it is well-separated, which means that \u2200\u03c9 > 0, there exists \u03ba > 0, such that \u2200\u03b8M , if \u2225\u03b8M \u2212\u03b8M min\u2225> \u03c9, we have |Ex,yl(\u03b8M, x, y, M) \u2212Ex,yl(\u03b8M min, x, y, M)| > \u03ba. By classic statistical theory, \u02c6 \u03b8M min will be a global minimum if sample size is large enough. The well-separated condition relaxes the convexity condition in Theorem 4.1. However, the validity of Theorem 4.1 still requires the condition that \u02c6 \u03b8M min is the global minimum of the empirical risk, which in practice, is hard to \ufb01nd. Another alternative relaxation is to use a surrogate loss. Surrogate losses. In practice, we may obtain \u02dc \u03b8M min by running SGD with early stopping or on non-convex objectives, and get a solution \u02c6 \u03b8M min which is di\ufb00erent from \u02dc \u03b8M min. As in [20], we can form a convex quadratic approximation of the loss around \u02dc \u03b8M min, i.e., \u02dc l(\u03b8M, x, y) =l(\u02dc \u03b8M min, x, y) + \u2207\u03b8l(\u02dc \u03b8M min, x, y)(\u03b8M \u2212\u02dc \u03b8M min) + 1 2(\u03b8M \u2212\u02dc \u03b8M min)T \u0010 \u22072 \u03b8l(\u02dc \u03b8M min, x, y) + \u03bbI \u0011 (\u03b8M \u2212\u02dc \u03b8M min), where \u03bb is a damping term to remove the negative eigenvalues of the Hessian. One can show the results of Theorem 4.1 hold with this surrogate loss. 5 Case studies of Adversarial In\ufb02uence Functions To illustrate the usage of adversarial in\ufb02uence functions, we use it to explore the relationship between model complexity, randomized smoothing and model sensitivity. 
5.1 Model Complexity and Model Sensitivity Throughout this paper, we use the term "model complexity" as a general term referring to 1) the number of features included in the predictive model, and 2) the model capacity, such as whether the model is linear, non-linear, and so on. As observed in the prior literature [12, 21, 25], model complexity is closely related to adversarial robustness: when the model capacity increases, the $\varepsilon$-sensitivity/adversarial cost first increases and then decreases. However, this phenomenon is only empirical and lacks theoretical justification. In this subsection, we theoretically explore how model complexity affects model sensitivity/adversarial cost by studying specific models with different model capacities and different numbers of features included in the predictive model. 5.1.1 Model Capacity and Model Sensitivity We start with the relationship between model capacity and model sensitivity via two simple and commonly used models, with the dimension of the inputs fixed. Linear regression models (L) and quadratic models (Q) We consider the class of linear models $L = \{f_\beta(x) = \beta^T x : x, \beta \in \mathbb{R}^m\}$ and the class of quadratic models $Q = \{f_{\beta,A}(x) = \beta^T x + x^T A x : x, \beta \in \mathbb{R}^m, A \in \mathbb{R}^{m\times m}\}$. Clearly, the class of quadratic models has larger model capacity and is more flexible than that of linear models. In the following theorem, we show that larger model capacity does not necessarily lead to smaller sensitivity. Theorem 5.1. We fit the data $(x_i, y_i)$ by $L$ and $Q$. For simplicity of presentation, assume the sample sizes of both the training and testing samples are $n$. Suppose the underlying true generating process is $y = x^T\beta^*_1 + (\beta^{*T}_2 x)^2 + \xi$, where $x \sim N(0, \sigma_x^2 I_m) \in \mathbb{R}^m$, $\xi \sim N(0, \sigma_\xi^2)$ and independent of $x$. For the $l_2$ or $l_\infty$ attack, I.
when $(\|\beta^*_2\|_2^2\sigma_x^2 - \sqrt{2/\pi}\,\sigma_\xi)^2 > \frac{1+2m\sigma_x^2}{\max\{\sigma_x^2, 1\}}\cdot\frac{2}{\pi}\sigma_\xi^2$, we have $\hat{S}_\varepsilon(L) > \hat{S}_\varepsilon(Q) + O_p\big(\varepsilon^2\sqrt{m^2/n}\big)$; II. when $(\|\beta^*_2\|_2^2\sigma_x^2 + \sqrt{2/\pi}\,\sigma_\xi)^2 < \frac{1}{\min\{1, \frac{3}{4}\sigma_x^2\}}\big(1 + m\sigma_x^2 - 2\sigma_x^2\log m\big)\cdot\frac{3}{2\pi}\sigma_\xi^2$, then $\hat{S}_\varepsilon(L) < \hat{S}_\varepsilon(Q) + O_p\big(\varepsilon^2\sqrt{m^2/n}\big)$. From Theorem 5.1, unlike adversarial robustness, we can see that model sensitivity does not have a monotonic relationship with model capacity. Such a monotonic relationship only holds when the model has high complexity (when $\|\beta^*_2\|$ is large). Therefore, when $n$ is sufficiently large, the result implies that larger model capacity does not necessarily lead to a model with smaller sensitivity. 5.1.2 Number of features and model sensitivity Another important aspect of model complexity is the number of features included in the predictive model. Many model selection techniques, such as LASSO, AIC and BIC, have been developed over the years. Given the newly introduced concept of model sensitivity, it is interesting to take model sensitivity into consideration during model selection. For example, if for a specific model, including more features results in smaller model sensitivity, then for the sake of adversarial robustness we should include more features even if this leads to feature redundancy. For instance, the following results study, when $x_i$ follows some structure such as $\mathrm{Cov}(x_i) = \sigma_x^2 I_m$ for some constant $\sigma_x$, the relationship between model sensitivity and the number of features included in linear models. Theorem 5.2. Suppose that the data $(x_i, y_i)$ are i.i.d. samples drawn from a joint distribution $P_{x,y}$. Denote the sample sizes of the training and testing samples by $n_{\mathrm{train}}$ and $n_{\mathrm{test}}$ respectively.
Let $m$ be the dimension of the input $x$, and $\beta^L_{\min} = \arg\min_\beta \mathbb{E}_{P_{x,y}}(y - \beta^T x)^2$. Define $\eta^L_i = y_i - \beta^{L\top}_{\min} x_i$, and assume $\mathbb{E}[x_i\cdot\mathrm{sgn}(\eta^L_i)] = 0$ and $\mathrm{Cov}(x_i) = \sigma_x^2 I_m$. Then for the $\ell_2$ attack, $\hat{S}_\varepsilon(L) = \varepsilon^2\big(\mathbb{E}_{x\sim\hat{P}_x}\|x\|_2\big)^2\cdot\big(\mathbb{E}|\eta^L_i|\big)^2\cdot\sigma_x^{-2} + O_p\Big(\varepsilon^2\sqrt{\tfrac{1}{n_{\mathrm{train}}} + \tfrac{m^2}{n_{\mathrm{test}}}}\Big)$. Given this theorem, we now consider a specific case where we apply this result to a random effect model. Corollary 5.1. Consider the random effect model $y = \beta^\top x + \xi$, where $x\in\mathbb{R}^M$, $\beta_1,\ldots,\beta_M \overset{\mathrm{i.i.d.}}{\sim} N(0,1)$, and $\xi\sim N(0,\sigma_\xi^2)$. Further, we assume $x$ is a random design with distribution $x_1,\ldots,x_n \overset{\mathrm{i.i.d.}}{\sim} N(0,\sigma_x^2 I_M)$. Then, when we include only $m$ features in the linear predictive model, the resulting model sensitivity is $\hat{S}_\varepsilon(L) = \frac{4\varepsilon^2}{\pi\sigma_x^2}\frac{\Gamma^2(\frac{m+1}{2})}{\Gamma^2(\frac{m}{2})}\cdot\big((M-m)\sigma_x^2 + \sigma_\xi^2\big) + O_p\Big(\varepsilon^2\sqrt{\tfrac{1}{n_{\mathrm{train}}} + \tfrac{m^2}{n_{\mathrm{test}}}}\Big)$, (6) where $\Gamma(\cdot)$ is the Gamma function, $\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt$. Since $\frac{\Gamma^2(\frac{m+1}{2})}{\Gamma^2(\frac{m}{2})} \asymp \frac{m}{2}$, there is a universal constant $C$ such that $\hat{S}_\varepsilon(L) \asymp Cm\big((M-m)\sigma_x^2 + \sigma_\xi^2\big) = -C\sigma_x^2 m^2 + C(M\sigma_x^2 + \sigma_\xi^2)m$. This also implies that larger model capacity does not necessarily lead to a model with smaller sensitivity. Specifically, when $m$ is small, including more features in the linear model results in larger model sensitivity. In contrast, when $m$ is large, i.e., in the high-complexity regime, including more features leads to smaller model sensitivity. Next, we consider a broader class of functions: general regression models.
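To see the inverted-U behavior that Corollary 5.1 predicts, one can evaluate the leading term of Eq. (6) numerically. The sketch below computes the exact Gamma ratio via `math.lgamma`; the values of $M$, $\sigma_x^2$, $\sigma_\xi^2$ and $\varepsilon$ are illustrative assumptions, not the paper's experimental settings:

```python
import math

def sensitivity_leading_term(m, M, sigma_x2, sigma_xi2, eps=1.0):
    # Leading term of Eq. (6): (4 eps^2 / (pi sigma_x^2)) *
    # Gamma^2((m+1)/2) / Gamma^2(m/2) * ((M - m) sigma_x^2 + sigma_xi^2).
    log_gamma_ratio = 2.0 * (math.lgamma((m + 1) / 2) - math.lgamma(m / 2))
    return (4 * eps**2 / (math.pi * sigma_x2)) * math.exp(log_gamma_ratio) \
        * ((M - m) * sigma_x2 + sigma_xi2)

# Sensitivity first rises, then falls, as features are added (M = 50 here).
vals = [sensitivity_leading_term(m, M=50, sigma_x2=1.0, sigma_xi2=0.1)
        for m in range(1, 51)]
```

Because the Gamma ratio grows like $m/2$, the leading term is a downward-opening quadratic in $m$, matching the small-$m$ increase and high-complexity decrease described above.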
General regression models (GL) In general regression models, suppose we use a $d$-dimensional basis $v^{GL}(x) = (v^{GL}_1(x), \ldots, v^{GL}_d(x))^T \in \mathbb{R}^d$ to approximate $y$ ($d$ can be a function of $m$), and obtain the coefficients by solving $\hat{\theta}^{GL}_{\min} = \arg\min_\theta \frac{1}{2n}\sum_{i=1}^n (y_i - \theta^T v^{GL}(x_i))^2$, where the loss function is $l(\theta, x_i, y_i, GL) = \frac{1}{2}(y_i - \theta^T v^{GL}(x_i))^2$. By Theorem 4.1, it is straightforward to obtain $\hat{I}(GL) = -\hat{H}^{-1}_{\hat{\theta}^{GL}_{\min}}\Phi = -\mathrm{Cov}(v^{GL}(x))^{-1}\Phi + O_P(\sqrt{d/n})$, where $\mathrm{Cov}(v^{GL}(x))$ is the covariance matrix of $v^{GL}(x)$ and $\Phi = \sum_{i=1}^n \Big[\frac{|(\hat{\theta}^{GL}_{\min})^T v^{GL}(x_i) - y_i|}{n\|(\hat{\theta}^{GL}_{\min})^T \frac{\partial v^{GL}(x_i)}{\partial x}\|}\,\frac{\partial v^{GL}(x_i)}{\partial x}\Big(\frac{\partial v^{GL}(x_i)}{\partial x}\Big)^T \hat{\theta}^{GL}_{\min} + \frac{v^{GL}(x_i)}{n\|(\hat{\theta}^{GL}_{\min})^T \frac{\partial v^{GL}(x_i)}{\partial x}\|}\,\mathrm{sgn}\big((\hat{\theta}^{GL}_{\min})^T v^{GL}(x_i) - y_i\big)\Big]$. Thus, $\hat{S}_\varepsilon(GL) = \varepsilon^2\cdot\Phi^\top\mathrm{Cov}(v(x))^{-1}\Phi + O_P(\varepsilon^2\sqrt{d/n})$. (7) Notice that the linear regression model is a special case if we take $v(x) = x$. However, the expression of model sensitivity for general regression models is very complex and hard to analyze directly most of the time. Instead of directly studying Eq. (7), we further simplify the expression by providing an upper bound to shed some light. Theorem 5.3. Suppose that the data $(x_i, y_i)$ are i.i.d. samples drawn from a joint distribution $P_{x,y}$. Let $m$ be the dimension of the input $x$, and $\theta^{GL}_{\min} = \arg\min_\theta \mathbb{E}_{P_{x,y}}(y - \theta^T v^{GL}(x))^2$. Let $\eta^{GL}_i = y_i - (\theta^{GL}_{\min})^T v^{GL}(x_i)$ and assume $\mathbb{E}[x_i\cdot\mathrm{sgn}(\eta^{GL}_i)] = 0$. Then $\hat{S}_\varepsilon(GL) \le \varepsilon^2\big(\mathbb{E}_{x\sim\hat{P}_x}\|x\|_2\big)^2\cdot\frac{1}{\lambda_{\min}(\mathbb{E}[v(x_i)v(x_i)^\top])}\cdot\mathbb{E}\Big[\big\|\big(\tfrac{\partial}{\partial x}v^{GL}(x_i)\big)^T \tfrac{\partial}{\partial x}v^{GL}(x_i)\big\|_2\Big]\cdot\mathbb{E}[|\eta^{GL}_i|]^2 + O_p(\varepsilon^2\sqrt{d/n})$.
The following example illustrates how our upper bound can be used to demonstrate the trend of change between model sensitivity and the number of features included. Example 5.1. Suppose $v(x) = (x^T, (\frac{x}{2}\odot\frac{x}{2})^T)^T$, and $x$ consists of random features such that each coordinate of $x$ is drawn i.i.d. from the uniform distribution on $(-1, 1)$. Let $y = x^T\beta^*_1 + \beta^{*T}_2(\frac{x}{2}\odot\frac{x}{2}) + \xi$, where $\xi\sim N(0,\sigma_\xi^2)$ and independent of $x$. As a result, the eigenvalue satisfies $\lambda_{\min}\mathbb{E}[v^{GL}(x_i)v^{GL}(x_i)^\top] \ge \frac{1}{5}$ and $\mathbb{E}\big[\big\|\big(\frac{\partial}{\partial x}v^{GL}(x_i)\big)^T\frac{\partial}{\partial x}v^{GL}(x_i)\big\|_2\big] = 1$, regardless of the number of features $m$. Besides, $\mathbb{E}|\eta^{GL}_i|$ decreases as $m$ increases, and thus the upper bound decreases as $m$ increases. In the experiments in Figure 2(a), we show the trend for $\hat{S}_\varepsilon(GL)$ by taking sample size $n = 5000$ and $\sigma_\xi = 0.1$. We take the average result over 1000 repetitions. (a) Illustration of the relationship between the feature number and model sensitivity for the model in Example 5.1. (b) Effectiveness of AIF for kernel regression with NTK on MNIST. Figure 2: a) Experimentally, the general trend for $\hat{S}_\varepsilon(GL)$ with respect to $m$ is decreasing (though not strictly for every $m$), as the upper bound suggests. b) The monotonic trend of $\varepsilon$ is still clearly observed, though the values are larger than in the previous example in Figure 1 due to the high dimensionality of MNIST. 5.2 Randomized Smoothing and Model Sensitivity As the last case study of AIF, we investigate the effect of randomized smoothing [7], a technique inspired by differential privacy, on adversarial robustness. Randomized smoothing has achieved impressive empirical success as a defense mechanism against $l_2$ adversarial attacks.
The core technique is adding isotropic noise $\vartheta\sim N(0,\sigma_r^2 I)$ to the inputs so that for any output range $O$, $P\big(\frac{1}{n}\sum_{i=1}^n l(\theta^M, x_i + \vartheta_i, y_i, M)\in O\big)$ is close to $P\big(\frac{1}{n}\sum_{i=1}^n l(\theta^M, x_i + \delta_i + \vartheta_i, y_i, M)\in O\big)$ for constrained $\|\delta_i\|_2$. The following theorem provides insight into how randomized smoothing affects model sensitivity for linear regression models. Theorem 5.4. Use the same notation as in Theorem 5.2. Suppose that the data $(x_i, y_i)$ are i.i.d. samples drawn from a joint distribution $P_{x,y}$, and $\mathbb{E}[x_i\cdot\mathrm{sgn}(\eta^L_i)] = 0$, $\mathrm{Cov}(x_i) = \sigma_x^2 I_m$, and $\mathrm{Var}(\eta^L_i) = \sigma_\eta^2$. When we fit $y$ with $\tilde{x} = x + \vartheta$, where $\vartheta$ is distributed as $N(0, \sigma_r^2 I_m)$, then $\frac{\hat{S}_\varepsilon(L_{\mathrm{noise}})}{\hat{S}_\varepsilon(L)} = \frac{\sigma_x^2/\sigma_\xi^2}{\sigma_x^2+\sigma_r^2}\Big(\frac{2\sigma_r^2\sigma_x^2}{\sigma_x^2+\sigma_r^2}\|\beta^L_{\min}\|_2^2 + \sigma_\xi^2\Big) + O_p\Big(\sqrt{\tfrac{m}{n}}\Big)$. Here, $L_{\mathrm{noise}}$ denotes the linear model with randomized smoothing by adding input noise. This theorem tells us that when $\sigma_r$ is large, we have $\hat{S}_\varepsilon(L_{\mathrm{noise}}) \le \hat{S}_\varepsilon(L)$ asymptotically, and $\hat{S}_\varepsilon(L_{\mathrm{noise}})$ becomes smaller with larger $\sigma_r$. In other words, randomized smoothing helps reduce sensitivity in this case. 6 Further Extensions In this section, we extend the theory of AIF to kernel regression and distributionally robust optimization. First, we derive the AIF for kernel regression in Section 6.1. In particular, we are interested in how well AIF characterizes the change of optimizers with neural tangent kernels (NTK), whose equivalence to infinitely wide neural networks has been well established in the recent literature [10, 19]. In Section 6.2, we further extend our theory to compute the AIF for distributionally robust optimization.
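A quick way to inspect the leading term of the ratio in Theorem 5.4 is to evaluate it directly. The helper below is a sketch (the parameter values used underneath are hypothetical, not the paper's), and it reproduces the qualitative claim that the ratio shrinks as $\sigma_r^2$ grows:

```python
def smoothing_sensitivity_ratio(sigma_x2, sigma_xi2, sigma_r2, beta_norm2):
    """Leading term of S_eps(L_noise) / S_eps(L) from Theorem 5.4.

    sigma_x2, sigma_xi2, sigma_r2 : variances sigma_x^2, sigma_xi^2, sigma_r^2.
    beta_norm2 : ||beta^L_min||_2^2 (a hypothetical value in the demo below).
    """
    s = sigma_x2 + sigma_r2
    return (sigma_x2 / sigma_xi2) / s * (
        2 * sigma_r2 * sigma_x2 / s * beta_norm2 + sigma_xi2
    )

# With sigma_x^2 = sigma_xi^2 = ||beta||^2 = 1, larger smoothing noise
# drives the ratio toward zero, i.e. smoothing reduces sensitivity.
ratios = [smoothing_sensitivity_ratio(1.0, 1.0, r2, 1.0)
          for r2 in (1.0, 10.0, 100.0)]
```

For large $\sigma_r^2$ the prefactor decays like $1/\sigma_r^2$ while the bracket stays bounded, which is the asymptotic reduction stated after the theorem.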
6.1 AIF of kernel regression We consider the kernel regression model in the following form: $\hat{L}_n(\theta, X, Y) := \frac{1}{n}\sum_{i=1}^n\big(y_i - \sum_{j=1}^n K(x_i, x_j)\theta_j\big)^2 + \lambda\|\theta\|_2^2$, (8) where $\theta = (\theta_1,\cdots,\theta_n)^T$ and $\lambda > 0$. Now let us denote $g(\theta, \Delta) = \hat{L}_n(\theta, X+\Delta, Y)$; we will calculate the empirical adversarial influence function $\hat{I}(K)$ for the kernel $K$. Notice that in kernel regression, the loss term $\big(y_i - \sum_{j=1}^n K(x_i, x_j)\theta_j\big)^2$ includes all the data points in one single term, which differs from the summation form of the loss function in Theorem 4.1. Fortunately, the technique used to prove Theorem 4.1 can still be applied here with slight modification. We obtain the following corollary for the adversarial influence function $\hat{I}(K)$ in kernel regression. Corollary 6.1. Suppose $X$, $Y$ and $\Theta$ are compact spaces, and $\hat{L}_n$ is three times continuously differentiable on $\Theta\times X$. Suppose $g(\cdot, \Delta)$ is differentiable for every $\Delta$, $\nabla_\theta g(\theta, \Delta)$ is continuous on $\Theta\times X$, and the minimizer $\hat{\theta}_{\min}$ lies in the interior of $\Theta$, with non-zero $\nabla_{x_i}\hat{L}_n(\hat{\theta}_{\min}, X, Y)$ for all $i\in[n]$. Then we have $\hat{I}(K) = -\big(\sum_{i=1}^n K(x_i)K(x_i)^T + n\lambda I\big)^{-1}\Big(\sum_{k,i=1}^n\big(K(x_i)^T\hat{\theta}_{\min} + K(x_i)\hat{\theta}_{\min}^T - y_i\big)K_{x_i,x_k}\beta_k\Big)$. In the above formula, $K(x_i) = \big(K(x_i,x_1), K(x_i,x_2),\cdots,K(x_i,x_n)\big)^T$ and $K_{x_i,x_k} = \Big(\frac{\partial K(x_i,x_1)}{\partial x_k},\cdots,\frac{\partial K(x_i,x_n)}{\partial x_k}\Big)^T$.
And the $z$-th coordinate of $\beta_k$ is $\beta_{k,z} = \frac{c_z^{q-1}}{(\sum_{z=1}^m c_z^q)^{1/p}}\,\mathrm{sgn}\big(\nabla_{x_k}\hat{L}_n(\hat{\theta}_{\min}, z)\big)\,\mathbb{E}_{x\sim\hat{P}_x}\|x\|_p$, with $c_z = |\nabla_{x_k}\hat{L}_n(\hat{\theta}_{\min}, z)|$, where $\nabla_{x_k}\hat{L}_n(\hat{\theta}_{\min}, z)$ is short for the $z$-th coordinate of $\nabla_{x_k}\hat{L}_n(\hat{\theta}_{\min}, X, Y)$: $\nabla_{x_k}\hat{L}_n(\theta, X, Y) = \frac{2}{n}\sum_{i=1}^n\big(K(x_i)^T\hat{\theta}_{\min} - y_i\big)K^T_{x_i,x_k}\hat{\theta}_{\min}$. Neural tangent kernels The intimate connection between kernel regression and overparametrized two-layer neural networks has been studied in the literature; see [10, 19]. In this section, we apply Corollary 6.1 to two-layer neural networks in the over-parametrized setting. Specifically, we consider a two-layer ReLU-activated neural network with $b$ neurons in the hidden layer: $f_{W,a}(x) = \frac{1}{\sqrt{b}}\sum_{r=1}^b a_r\sigma(w_r^T x)$, where $x\in\mathbb{R}^m$ denotes the input, $w_1,\ldots,w_b\in\mathbb{R}^m$ are weight vectors in the first layer, and $a_1,\ldots,a_b\in\mathbb{R}$ are weights in the second layer. Further, we denote $W = (w_1,\ldots,w_b)\in\mathbb{R}^{m\times b}$ and $a = (a_1,\ldots,a_b)^T\in\mathbb{R}^b$. Suppose we have $n$ samples $S = \{(x_i,y_i)\}_{i=1}^n$ and assume $\|x_i\|_2 = 1$ for simplicity. We train the neural network by randomly initialized gradient descent on the quadratic loss over the data $S$. In particular, we initialize the parameters randomly: $w_r\sim N(0,\kappa^2 I)$, $a_r\sim U(-1,1)$ for all $r\in[b]$. Jacot et al. [2018] showed that such a network converges to the solution produced by kernel regression with the so-called Neural Tangent Kernel (NTK) matrix: $\mathrm{NTK} = \Big[\frac{x_i^\top x_j\,(\pi - \arccos(x_i^\top x_j))}{2\pi}\Big]_{i,j\in[n]}$. In Figure 2(b), we experimentally demonstrate the effectiveness of the AIF approximation in kernel regression with the neural tangent kernel on MNIST.
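For concreteness, the NTK matrix above can be computed in a few lines; this is a sketch for unit-norm inputs, not the paper's experimental code:

```python
import numpy as np

def ntk_matrix(X):
    """NTK matrix for the two-layer ReLU network described above.

    X : (n, m) array whose rows are unit-norm inputs x_i.
    Entry (i, j) is  x_i.x_j * (pi - arccos(x_i.x_j)) / (2*pi).
    """
    G = np.clip(X @ X.T, -1.0, 1.0)  # Gram matrix; clip guards arccos rounding
    return G * (np.pi - np.arccos(G)) / (2 * np.pi)
```

On the diagonal $x_i^\top x_i = 1$, so every diagonal entry equals $1\cdot(\pi - 0)/(2\pi) = 1/2$, a convenient sanity check.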
The estimation is based on the average over 300 randomly drawn examples from MNIST, repeated 10 times. 6.2 Distributional adversarial influence function Another popular way to formulate adversarial attacks is through distributionally robust optimization (DRO), where instead of perturbing $x$ within a certain distance, one perturbs $(x, y)$ in a distributional sense. For a model $M$, the corresponding distributionally robust optimization with respect to the $u$-Wasserstein distance $W_u$, $u\in[1,\infty)$, and the $l_p$-norm is formulated as $\min_{\theta^M}\mathrm{OPT}(\varepsilon;\theta^M)$, where $\mathrm{OPT}(\varepsilon;\theta^M) := \max_{\tilde{P}_{x,y}: W_u(\tilde{P}_{x,y}, P_{x,y})\le\varepsilon}\mathbb{E}_{\tilde{P}_{x,y}}\, l(\theta^M, x, y; M)$. Here, $W_u(D,\tilde{D}) = \inf\big\{\big(\int\|x-y\|_p^u\,d\gamma(x,y)\big)^{1/u} : \gamma\in\Pi(D,\tilde{D})\big\}$ for two distributions $D$, $\tilde{D}$, where $\Pi(D,\tilde{D})$ is the set of couplings of $D$ and $\tilde{D}$. However, it is not clear whether $\theta^{M,\mathrm{DRO}}_{\varepsilon,\min} := \arg\min_{\theta^M}\mathrm{OPT}(\varepsilon\,\mathbb{E}_{\hat{P}_x}\|x\|_p;\theta^M)$ is well-defined, since the optimizer may not be unique. Moreover, the corresponding sample version of the optimizer $\theta^{M,\mathrm{DRO}}_{\varepsilon,\min}$ is not easy to obtain via regular optimization methods if we just replace the distribution $P_{x,y}$ by its empirical distribution, since it is hard to obtain the corresponding worst-case form of $\tilde{P}_{x,y}$. As a result, we focus on defining an empirical distributional adversarial influence function for a special approximation algorithm and state its limit. Interested readers are referred to the following result from [30] and [13] for properly finding an approximation of $\tilde{P}_{x,y}$. Lemma 6.1 (A variation of Corollary 2(iv) in [13]). Suppose for all $y$, $l(\theta^M, x, y; M)$ is $L$-Lipschitz as a function of $x$. Define $\mathrm{EMP}(\varepsilon) := \max_{\delta_1,\cdots,\delta_n}\frac{1}{n}\sum_{i=1}^n l(\theta^M, x_i+\delta_i, y_i, M)$, subject to $\big(\frac{1}{n}\sum_{i=1}^n\|\delta_i\|_p^u\big)^{1/u}\le\varepsilon$.
Then we have $\mathrm{EMP}(\varepsilon) \ge \mathrm{OPT}(\varepsilon;\theta^M) - LD/n$, where $D$ bounds the maximum deviation of a single point. Lemma 6.1 provides a direction for defining an algorithm-dependent empirical DAIF $\hat{I}_{\mathrm{DRO}}(M)$, which we define similarly as before. For a given model $M$, the corresponding empirical distributional adversarial influence function is defined as $\hat{I}_{\mathrm{DRO}}(M) := \frac{d\hat{\theta}^{M,\mathrm{DRO}}_{\varepsilon,\min}}{d\varepsilon}\Big|_{\varepsilon=0^+}$, subject to $\hat{\theta}^{M,\mathrm{DRO}}_{\varepsilon,\min}\in\arg\min_{\theta^M\in\Theta}\mathrm{EMP}\big(\varepsilon\,\mathbb{E}_{\hat{P}_x}\|x\|_p\big)$. We use $\in\arg\min$ here since there may not be a unique minimizer, but the limit $\hat{I}_{\mathrm{DRO}}(M)$ is still unique and well-defined. Similarly, we can provide a closed form of the distributional adversarial influence function. Theorem 6.1. Under the settings of Theorem 4.1, $\hat{I}_{\mathrm{DRO}}(M) = -\hat{H}^{-1}_{\hat{\theta}^M_{\min}}\varrho\, n^{\frac{1-u}{u}}$, (9) where $\varrho = \nabla_{x,\theta} l(\hat{\theta}^M_{\min}, x_J, y_J)\,\mathbb{E}_{\hat{P}_x}\|x\|_p\,\phi_J$, $\phi_i$ is defined as in Theorem 4.1, and $J$ is the index $J = \arg\max_i\|\nabla_x L(\hat{\theta}^M_{\min}, x_i, y_i)\|_q$. We remark that from Eq. (9) we can see that if $u > 1$, more training data results in a smaller norm of $\hat{I}_{\mathrm{DRO}}(M)$, since there is a factor $n^{(1-u)/u}$. 7 Conclusions and Future Work To achieve adversarial robustness, robust optimization has been widely used in the training of deep neural networks, while its theoretical aspects remain under-explored. In this work we first propose the AIF to quantify the influence of robust optimization theoretically. The proposed AIF is then used to efficiently approximate the model sensitivity, which is usually NP-hard to compute in practice. We then apply the AIF to study the relationship between model sensitivity and model complexity.
Moreover, we apply the AIF to randomized smoothing and find that adding noise to the input during training helps reduce model sensitivity. Further, the theory is extended to kernel regression models and distributionally robust optimization. Based on the newly introduced tool AIF, we suggest two main directions for future research. First, one can study how to use the AIF to select the model with the greatest adversarial robustness. Due to the computational efficiency of AIF, it is a natural idea to use the AIF for model selection. Such an idea can be used for tuning-parameter selection in statistical models such as high-dimensional regression and factor analysis, and further extended to neural network depth and width selection. Second, the AIF can be extended to study more phenomena in adversarial training, for instance, the relationship between low-dimensional representations and adversarial robustness. Recently, [22, 28] empirically observed that using learned low-dimensional representations as the input to neural networks is substantially more adversarially robust, but a theoretical exploration of this phenomenon is still lacking. 8 Acknowledgments This work is in part supported by NSF award 1763665 and NSF DMS-2015378." + }, + { + "url": "http://arxiv.org/abs/2006.11478v1", + "title": "Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations", + "abstract": "We investigate the power of censoring techniques, first developed for\nlearning {\\em fair representations}, to address domain generalization. We\nexamine {\\em adversarial} censoring techniques for learning invariant\nrepresentations from multiple \"studies\" (or domains), where each study is drawn\naccording to a distribution on domains. The mapping is used at test time to\nclassify instances from a new domain.
In many contexts, such as medical\nforecasting, domain generalization from studies in populous areas (where data\nare plentiful), to geographically remote populations (for which no training\ndata exist) provides fairness of a different flavor, not anticipated in\nprevious work on algorithmic fairness.\n We study an adversarial loss function for $k$ domains and precisely\ncharacterize its limiting behavior as $k$ grows, formalizing and proving the\nintuition, backed by experiments, that observing data from a larger number of\ndomains helps. The limiting results are accompanied by non-asymptotic\nlearning-theoretic bounds. Furthermore, we obtain sufficient conditions for\ngood worst-case prediction performance of our algorithm on previously unseen\ndomains. Finally, we decompose our mappings into two components and provide a\ncomplete characterization of invariance in terms of this decomposition. To our\nknowledge, our results provide the first formal guarantees of these kinds for\nadversarial invariant domain generalization.", + "authors": "Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur", + "published": "2020-06-20", + "updated": "2020-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In gene expression analysis, as well as in much of high-throughput biology analyses on human populations, variation between studies can arise from the intrinsic biological heterogeneity of the populations being studied, or from technological di\ufb00erences in data acquisition. In turn both these types of variation can be shared across studies or not. 
For example, an algorithm for predicting whether a tumor will recur, trained on data obtained from the local population via a speci\ufb01c data-collection and processing method at a research hospital A, will typically not perform equally well on data collected at a research hospital B, using di\ufb00erent data-collection techniques and serving a potentially di\ufb00erent local population. \u2217Author names listed in alphabetical order. Corresponding authors: zhundeng@g.harvard.edu, dwork@seas.harvard.edu, pragya@seas.harvard.edu. \fOf course, theoretically, B can train its own algorithm. This \u201csiloization\u201d is suboptimal for several reasons, from reduced statistical power, to wasteful allocation of research investment due to duplication of e\ufb00ort, or even reluctance to fund or publish such duplication. It is preferable to combine the data sources, potentially with some smart provision for domain variation. However, there will always be a new C to which the resulting algorithm will need to be applied \u2013 maybe the competition across town who did not collaborate at the development stage, or maybe a small, geographically isolated, population, far from any major medical research center. The ultimate goal, in the development of models with biomedical applications, is to provide accurate predictions for fully independent samples, originating from institutions and processed by laboratories that did not generate the training datasets. How can we transfer prediction capability to a new population? This is a problem of domain generalization, the subject of intense study for nearly two decades [1,4,7,9,11\u201316,19\u201322,25,26,28]. Under the assumption that there is a common signal that provides a high quality predictor g\u2217for all populations, and given labeled training data from several populations, can this signal be learned even when it does not necessarily yield the best predictor for any given population? 
When does the presence of multiple training datasets improve the accuracy of this learning procedure? Using tools developed for \ufb01nding \u201dfair\u201d representations of individuals in which sensitive attributes such as sex or race have been censored [8, 17, 18, 29], we proceed from the following intuition: treating the domain as a sensitive attribute and training on multiple, highly diverse, populations, the learning algorithm is forced to disregard the idiosyncratic in favor of the universal, that is, to \ufb01nd a prediction rule based on a signal that is shared among all domains. This work \u2013 domain generalization to unseen populations \u2013 provides a new dimension of fairness, transferring the bene\ufb01t of federal research dollars from preeminent bench to geographically remote bedside, not anticipated in earlier work on learning fair representations. Approach. We model the problem through the lens of a hierarchical Bayesian approach that is extensively used in applications. Let X \u2286Rd be the covariate space and Y = {0, 1} the outcome space. Let D denote a collection of probability distributions on X \u00d7 Y and \u00b5 be a distribution supported on D. The observed data arises through a hierarchical scheme\u2014\ufb01rst, domains D1, \u00b7 \u00b7 \u00b7 , Dk are sampled i.i.d. from \u00b5, and then random samples Si = {xi,j, yi,j}ni j=1 are drawn from each Di. We seek to train a classi\ufb01er on the observed samples that performs well on any distribution from D, even those from which no data have been observed. To this end, we adopt an adversarial censored learning approach. 
Simplifying slightly, for a mapping \u03c6 from the input covariate space to a representation space Z, a discriminator \u03c8k that attempts to guess the source domain of \u03c6\u22121(z) for z \u2208Z, and a classi\ufb01er f, we de\ufb01ne an empirical adversarial loss function that increases with misclassi\ufb01cations by f and correct guesses by the discriminator (Equation 3). Our approach then tries to \ufb01nd the classi\ufb01er f and encoding \u03c6 that minimizes this adversarial loss for the observed data. Our algorithm is adapted from [18], where it was used for the purposes of fair representation learning. To study the performance of the proposed approach on a newly coming domain Du \u2208D, it is crucial to understand the behavior of our adversarial loss in the limit of large k and ni\u2019s. However, the structure of the discriminator changes with growing k. Thus, a crucial challenge lies in pinning down whether our loss admits a limit, and if so, what should be the limiting value? Additionally, even if we can characterize this limit, how would the proposed algorithm perform on an arbitrary Du \u2208D? This paper explores these key questions in detail. \fContributions. We obtain a precise characterization of the limit of our adversarial loss (Section 3.1). We address the challenges incurred by the dependence of the discriminator on k via a highly non-trivial geometric argument. We then provide non-asymptotic generalization error bounds for the empirical loss around its population counterpart; the form of the population version is naturally determined using the prior limiting result. We further establish consistency of loss function optimizers \u02c6 f\u03bb, \u02c6 \u03c6\u03bb, in the sense that, these converge (under an appropriate limit) to the corresponding optimizers of the population loss. 
Section 3.2 provides a characterization of the prediction performance of our algorithm on unseen domains that lie within bounded $\mathcal{H}$-divergence [3] of the seen ones. Section 3.3 decomposes our mappings $\phi$ into two components, and provides a complete characterization of invariant mappings (which defeat the discriminator) in terms of this decomposition. Extensive experimental results are summarized in Section 4. Related Work. There is a rich literature of related work in computational learning theory. For lack of space we confine our discussion to a handful of works in domain generalization. In the earliest, kernel-based works on domain generalization [4, 20], the learned classifier $f$ receives at test time not just a single $x$ drawn from a test distribution $D_T$, but (especially in [4]) a large, unlabeled sample from $D_T$ together with a single additional test sample to be classified. To our knowledge, [20] is the first to assume a latent distribution on domains (as do we). Three works are particularly aligned with our philosophical approach. [2] comes from a line of work, initiated in [24], on causal inference and predictive robustness, relying on a notion of probabilistic invariance. (See [5] for a survey.) [2] seeks data representations that elicit predictors satisfying certain invariance properties across the domains. This is framed as a penalized risk minimization problem, which is then solved using stochastic gradient descent. The theoretical guarantees rely on linearity assumptions [2, Theorem 8]. Inspired by [10], adversarial networks were introduced for fair representation learning in [8, 18] and for domain generalization in [15, 16]. [15] uses an autoencoder and introduces a Laplace prior on representations to encourage domain generalization.
[16] employs an adversarial architecture very similar to ours, expanded with a subnetwork that seeks to minimize the discrepancy between P(X|Y ) across the di\ufb00erent domains, addressing di\ufb00erences in base rates among the training distributions. We provide theoretical insights not featured in [15,16]. 2 Formal setup Recall our setting from Section 1. Throughout, we assume that D contains \ufb01nitely many probability distributions, i.e. D = {D\u2217 1, D\u2217 2, \u00b7 \u00b7 \u00b7 , D\u2217 N}, and DX is the corresponding set of marginal distributions induced on X. De\ufb01ne D1:k to be the set of seen domains {D1, . . . , Dk}, and assign them distinct ID\u2019s {1, . . . , k}. Let S1:k denote the collection of observed samples {S1, . . . , Sk}. Note that repeated sampling is possible here; for instance, we may have D1 = D2 = D\u2217 1. De\ufb01ne SX i := {xi,j}ni j=1 and g(SX i ) := {g(xi,j)}ni j=1, for any function g on X. For any function g and distribution D, we use g(D) to denote the distribution of g(z), where z \u223cD. For any distribution D that admits a density function pD, let SuppD := {x|pD(x) > 0}. Finally, for any function f : Rm \u2192Rn, we represent f(\u00b7) using the n-dimensional vector (f (1)(\u00b7), . . . , f (n)(\u00b7))\u22a4. Algorithm. The samples S1:k are \ufb01rst passed through an encoder that produces a representation {\u03c6(SX i )}k i=1 of the input covariates. Here, \u03c6 is a representation mapping that belongs to some function class \u03a6 = {g|g : Rd \u2192Rs}. The output from the encoder is subsequently passed \fthrough a discriminator \u03c8k of the form \u03c8k(\u00b7) = W\u03b6(\u00b7) + B, (1) where \u03b6 : Rs \u2192Rp lies in some function class \u03a5, W \u2208Rk\u00d7p and B \u2208Rk. We further denote W = (w1, w2, \u00b7 \u00b7 \u00b7 , wk)\u22a4, B = (b1, \u00b7 \u00b7 \u00b7 , bk)\u22a4, where wi \u2208Rp, bi \u2208R. 
Thus, the discriminator comprises a base structure \u03b6 followed by a linear transformation, and e\ufb00ectively maps each input in Rs to k unnormalized weights. For each input \u03c6(xi,j), the \u2113-th entry in the normalized version of the output \u03c8k(\u03c6(xi,j)) \u2208Rk should be viewed as the discriminator\u2019s estimate of the probability that the pre-image \u03c6\u22121(\u03c6(xi,j)) was drawn from the seen domain with ID \u2113. Finally, de\ufb01ne \u03c0k(\u00b7) to be the operation that maps an input vector w to the index of the entry with maximal weight. If multiple entries achieve the maximal weight, \u03c0k chooses uniformly among the corresponding indices. Simultaneously, a predictor is trained on the encoded representations {\u03c6(SX i )}k i=1 and produces labels in the outcome space. Denote the predictor class by F = {f | f : Rs 7\u2192Y}. Loss function. The encoder, discriminator and predictor will be simultaneously trained using a loss function that comprises two components: (a) the loss corresponding to the predictor Lpred(D1:k, f, \u03c6) = (1/k) Pk i=1 P(x,y)\u223cDi(f(\u03c6(x)) \u0338= y), (b) the loss corresponding to the discriminator or adversary Ladv(D1:k, \u03c6, \u03c8k) = Pk i=1 Px\u223cDX i (\u03c0k \u25e6\u03c8k(\u03c6(x)) = i). The form of these loss functions is inspired from [18]. De\ufb01ne L(D1:k, f, \u03c6, \u03c8k; \u03bb) = Lpred(D1:k, f, \u03c6) + \u03bbLadv(D1:k, \u03c6, \u03c8k), (2) where \u03bb > 0 is a tuning parameter, and the corresponding empirical version L(S1:k, f, \u03c6, \u03c8k; \u03bb) = 1 k k X i=1 1 ni ni X j=1 1{f(\u03c6(xi,j)) \u0338= yi,j}+\u03bb k X i=1 1 ni ni X j=1 1{\u03c0k\u25e6\u03c8k(\u03c6(xi,j)) = i}, (3) where 1 is the indicator function. We seek to optimize the aforementioned loss to obtain ( \u02c6 f\u03bb, \u02c6 \u03c6\u03bb) = arg inf f\u2208F,\u03c6\u2208\u03a6 sup \u03c8k\u2208\u03a8k L(S1:k, f, \u03c6, \u03c8k; \u03bb). 
(4) The in\ufb01mum aims to maximize accuracy of the predictor, whereas the supremum ensures the performance of the discriminator is minimized. The \ufb01nal predictor for any test datapoint x \u223cD where D \u223c\u00b5, is then given by \u02c6 y := \u02c6 f\u03bb(\u02c6 \u03c6\u03bb(x)). Remark 2.1. Recall the de\ufb01nition of H-Divergence [3]: let H be a class of binary classi\ufb01ers, then H-divergence between distributions D and D\u2032 over Rd is de\ufb01ned as DH(D, D\u2032) = sup h\u2208H |Px\u223cD(h(x) = 1) \u2212Px\u223cD\u2032(h(x) = 1)|. In the case of k = 2, if we choose H = {\u03c02 \u25e6\u03c82(\u03c6(\u00b7)) : \u03c82 \u2208\u03a82, \u03c6 \u2208\u03a6}, then we can see that 2 X i=1 Px\u223cDX i (\u03c02 \u25e6\u03c82(\u03c6(x)) = i) = 1 + Px\u223cDX 1 (\u03c02 \u25e6\u03c82(\u03c6(x)) = 1) \u2212Px\u223cDX 2 (\u03c02 \u25e6\u03c82(\u03c6(x)) = 1). As a result, sup \u03c82\u2208\u03a82 2 X i=1 Px\u223cDX i (\u03c02 \u25e6\u03c82(\u03c6(x)) = i) = 1 + dH(\u03c6(DX 1 ), \u03c6(DX 2 )). \fOur loss is a natural generalization of Hdivergence and has a straightforward interpretation \u2013 how best can the discriminator distinguish the images from the k domains. We elucidate this further in Section 3.1. For simplicity, throughout the paper, we only consider function classes for which the in\ufb01mum and supremum can be achieved, and therefore replace inf, sup in (4) by min, max respectively. In our experiments, the function classes F, \u03a6, \u03a8k are taken to be neural networks with speci\ufb01c architectures. 3 Theoretical Results 3.1 Learning theoretic analysis To obtain a learning theoretic analysis of L(S1:k, f, \u03c6, \u03c8k; \u03bb), it is crucial to understand its limiting behavior when the sample sizes ni and number of seen domains k diverge. It is clear that when every ni \u2192\u221e, L(S1:k, f, \u03c6, \u03c8k; \u03bb) \u2192L(D1:k, f, \u03c6, \u03c8k; \u03bb). 
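The empirical objective (3) can be sketched directly, assuming the predictions and discriminator guesses have already been computed; all arrays below are toy values used only to exercise the formula:

```python
import numpy as np

def empirical_loss(pred, true, disc_guess, lam):
    """Empirical loss (3): per-domain-averaged 0-1 prediction error plus
    lambda times the sum over domains of the adversary's empirical accuracy
    at guessing each sample's domain ID. `pred`, `true`, `disc_guess` are
    lists with one array per seen domain i = 0, ..., k-1."""
    k = len(pred)
    pred_term = np.mean([np.mean(pred[i] != true[i]) for i in range(k)])
    adv_term = sum(np.mean(disc_guess[i] == i) for i in range(k))
    return pred_term + lam * adv_term

# Tiny worked example with k = 2 domains of 2 samples each:
# prediction errors average to 0.25; the adversary is right on 1.0 + 0.5
# of the two domains, so with lambda = 1 the loss is 0.25 + 1.5 = 1.75.
pred = [np.array([1, 0]), np.array([1, 1])]
true = [np.array([1, 1]), np.array([1, 1])]
disc_guess = [np.array([0, 0]), np.array([1, 0])]
loss = empirical_loss(pred, true, disc_guess, lam=1.0)
```

Note the asymmetry in (3): the prediction term is averaged over the k domains while the adversarial term is summed, which the sketch reproduces.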
Furthermore, Lpred(D1:k, f, \u03c6) \u2192 ED\u223c\u00b5[P(x,y)\u223cD(f(\u03c6(x)) \u0338= y)] as k \u2192\u221e. Thus, we focus on studying the limiting behavior of max\u03c8k\u2208\u03a8k Ladv(D1:k, \u03c6, \u03c8k) when k diverges. Denote the density function of \u03b6(\u03c6(D\u2217X i )) by \u03c1\u03b6,\u03c6 i (\u00b7). Assumption 3.1 (Continuity). Every \u03b6 \u2208\u03a5 and \u03c6 \u2208\u03a6 is continuous almost everywhere (a.e.). Besides, for all D\u2217X i \u2208D,\u03b6 \u2208\u03a5 and \u03c6 \u2208\u03a6, \u03c1\u03b6,\u03c6 i (\u00b7) is continuous a.e. and Supp\u03b6(\u03c6(D\u2217X i )) has non-zero volume in Rp. Notice fully connected neural networks with ReLU activation functions are continuous a.e., since the ReLU activation function is discontinuous only at 0. Theorem 3.1. Suppose \u00b5 is a probability distribution supported on a \ufb01nite set of distributions D := {D\u2217 1, D\u2217 2, \u00b7 \u00b7 \u00b7 , D\u2217 N}, each of which is a distribution over X \u00d7Y. Further, let D1, \u00b7 \u00b7 \u00b7 , Dk iid \u223c\u00b5 with corresponding study IDs {1, . . . , k}. Then under Assumption 3.1, for any \u03c6 \u2208\u03a6, lim k\u2192\u221emax \u03c8k\u2208\u03a8k k X i=1 Px\u223cDX i (\u03c0k \u25e6\u03c8k(\u03c6(x)) = i) = sup \u222aiAi=Rp,Ai\u2229Aj=\u2205,\u03b6\u2208\u03a5 N X i=1 Px\u223cD\u2217X i (\u03b6(\u03c6(x)) \u2208Ai). (5) The probabilities on the LHS (left hand side) are taken w.r.t. marginal covariate distributions of the seen domains DX i , which may contain repeats from D. But the RHS contains probabilities w.r.t. the corresponding marginals of all distributions in D. 
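For a fixed discriminator, the adversary's objective on the LHS of (5) can be estimated by Monte Carlo; evaluating it at any single ψ gives a lower bound on the supremum. The sketch below uses a deterministic argmax for π_k (rather than the paper's uniform tie-breaking) and illustrative encodings:

```python
import numpy as np

def adversary_accuracy(encoded, psi):
    """Monte Carlo estimate of the LHS of (5) for a fixed discriminator:
    sum over domains i of the empirical probability that the argmax guess
    pi_k(psi(z)) equals i. `encoded` is a list where encoded[i] is an
    (n_i, s) array of encodings phi(x) for samples from domain i."""
    total = 0.0
    for i, Z in enumerate(encoded):
        guesses = np.argmax(psi(Z), axis=1)  # pi_k applied row-wise
        total += np.mean(guesses == i)
    return total

# Illustrative case: two perfectly separated domains and an identity
# discriminator, so the adversary identifies every sample correctly.
encoded = [np.tile([1.0, 0.0], (5, 1)), np.tile([0.0, 1.0], (5, 1))]
score = adversary_accuracy(encoded, lambda Z: Z)
```

When the encodings of the domains coincide, the same estimate drops toward 1, matching the characterization of the minimizer discussed below (5).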
Speaking intuitively, Theorem 3.1 says that, for every \u03c6 \u2208\u03a6, as k grows so that (1) the encodings \u03c6(DX i ) contain repeated instances of every element from D, and (2) the structure of the last layer of the discriminator changes with k, the chance that the adversary accurately guesses the IDs of the encoded inputs is the same as the chance that the encoding \u03c6(\u00b7) itself maps the true distributions D\u22c6 i X to N disjoint parts of the space. Theorem 3.1 provides further insights into our loss function (2) and the behavior of our algorithm. Since the result holds for any \u03c6 \u2208\u03a6, when k is large our algorithm e\ufb00ectively \ufb01nds ( \u02c6 f\u03bb, \u02c6 \u03c6\u03bb) \u2248arg min f,\u03c6 {Lpred(D1:k, f, \u03c6) + \u03bb sup \u222aiAi=Rp,Ai\u2229Aj=\u2205,\u03b6\u2208\u03a5 N X i=1 Px\u223cD\u2217X i (\u03b6(\u03c6(x)) \u2208Ai)}. \fThe second term is minimized for an encoding \u03c6(\u00b7) that maps the distributions D\u22c6 i X to similar images, so that the adversary \ufb01nds it di\ufb03cult to guess the true IDs of the input covariates. (The prediction part of the loss discourages the trivial mapping \u2200x, \u03c6(x) = z for some arbitrary z.) The limit in (5) may be viewed as a measure of dissimilarity of the set {\u03c6(D\u22c6 i X )}N i=1. In fact, consider a setting where the supremum over \u03b6 \u2208\u03a5 in the RHS of (5) is achieved and that the maximizer, \u03b6\u22c6= argmax\u03b6\u2208\u03a5 N X i=1 Px\u223cD\u2217X i (\u03b6(\u03c6(x)) \u2208Ai) is unique. Then, it is not hard to see that sup \u222aiAi=Rp,Ai\u2229Aj=\u2205 N X i=1 Px\u223cD\u2217X i (\u03b6\u22c6(\u03c6(x)) \u2208Ai) \u2a7e1, where equality holds i\ufb00\u03b6\u22c6(\u03c6(D\u2217X 1 )) = \u00b7 \u00b7 \u00b7 = \u03b6\u22c6(\u03c6(D\u2217X N )). 
If \u03a5 contains the identity mapping, then sup \u222aiAi=Rp,Ai\u2229Aj=\u2205 N X i=1 Px\u223cD\u2217X i (\u03b6\u22c6(\u03c6(x)) \u2208Ai) = 1 i\ufb00\u03c6(D\u2217X 1 ) = \u00b7 \u00b7 \u00b7 = \u03c6(D\u2217X N ). That is, the limit in (5) is minimized i\ufb00{\u03c6(D\u22c6 i X )}N i=1 are identical. To the best of our knowledge, such a precise understanding of the adversarial loss, as illuminated by Theorem 3.1, has so far eluded prior literature, and may be of independent interest for invariant representation learning [27]. The fact that the structure of \u03c8k changes with k presents signi\ufb01cant challenges for the proof. We address this issue with a highly non-trivial geometric argument (Section A, Supplementary). Speaking informally, we cover the space Rp by grid cells such that, as k increases, the number of cells grows and each cell becomes increasingly re\ufb01ned. As the cells grow \ufb01ner, each one can be associated with an element from D according to the distribution among {\u03b6(\u03c6(D\u22c6 i X ))}N i=1 whose density in the cell is largest (ignoring ties). Then the \ufb01nal layer can be chosen such that, for every cell, the adversary assigns the highest weight to the corresponding distribution. Remark 3.1. For N = 2, the limit can be related to the total variation distance since sup \u222aiAi=Rp,Ai\u2229Aj=\u2205 2 X i=1 Px\u223cD\u2217X i (\u03b6\u2217(\u03c6(x)) \u2208Ai) = TV (\u03b6\u2217(\u03c6(D\u2217X 1 )), \u03b6\u2217(\u03c6(D\u2217X 2 ))) + 1. Non-asymptotic generalization bounds. We now turn to bounding the generalization error of L(S1:k, f, \u03c6, \u03c8k; \u03bb). To this end, a few key quantities are introduced next. De\ufb01ne L(D, f, \u03c6; \u03bb) = ED\u223c\u00b5[P(x,y)\u223cD(f(\u03c6(x)) \u0338= y)] + sup \u222aiAi=Rp,Ai\u2229Aj=\u2205,\u03b6\u2208\u03a5 \u03bb N X i=1 Px\u223cD\u2217X i (\u03b6(\u03c6(x)) \u2208Ai). (6) De\ufb01nition 3.1 (Grid Cells). 
For n \u2208N+, B \u2208R+, de\ufb01ne G(n, B) to be the set G(n, B) = {Ii1 \u00d7 Ii2 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Iip, ij \u2208{1, \u00b7 \u00b7 \u00b7 , n} \u2200j}, where Iij = [\u2212B + 2(ij \u22121)B/n, \u2212B + 2ijB/n]. \fThe elements in G(n, B) form a partition of [\u2212B, B]p and the intersection of every pair of elements has volume 0 in Rp. Now, let Hk be a collection of distributions in D that receive high \u00b5-probability in the following sense, Hk := {Di \u2208D : \u00b5(Di) \u2a7e1/k1/4}. De\ufb01ne T \u03b6,\u03c6 i to be the set of points in Rp where the density \u03c1\u03b6,\u03c6 i is maximized (up to ties), that is, T \u03b6,\u03c6 i := {z \u2208Rp : \u03c1\u03b6,\u03c6 i (z) > \u03c1\u03b6,\u03c6 j (z), for all j < i and \u03c1\u03b6,\u03c6 i (z) \u2a7e\u03c1\u03b6,\u03c6 j (z), for j \u2a7ei}. It is easy to see that {T \u03b6,\u03c6 i }N i=1 form a partition of Rp. Furthermore, let M\u03b6,\u03c6 i,1 (n, B) denote the collection of grid cells on the boundary of T \u03b6,\u03c6 i , and M\u03b6,\u03c6 i,2 (n, B) denote those in the interior. M\u03b6,\u03c6 i,1 (n, B) = {g \u2208G(n, B)|g \u2288T \u03b6,\u03c6 i , g \u2229T \u03b6,\u03c6 i \u0338= \u2205} M\u03b6,\u03c6 i,2 (n, B) = {g \u2208G(n, B)|g \u2286T \u03b6,\u03c6 i }. Assumption 3.2 (Boundedness). There exists a constant B\u03c1 and function B(\u00b7) s.t. for any \u03b5 > 0, sup\u03b6,\u03c6 PN i=1 Px\u223cD\u2217X i (\u2225\u03b6(\u03c6(x))\u22252 \u2a7eB(\u03b5)) \u2a7d\u03b5 and supz,\u03b6,\u03c6,i |\u03c1\u03b6,\u03c6 i (z)| \u2a7dB\u03c1. Assumption 3.3 (Bounded VC-dimensions). Assume that the function classes \u039b = {f \u25e6\u03c6|f \u2208 F, \u03c6 \u2208\u03a6} and \u039e = {1{w\u22a4 1 \u03b6(\u03c6(x))+b1 > w\u22a4 2 \u03b6(\u03c6(x))+b2}|wi \u2208Rp, bi \u2208R, \u03b6 \u2208\u03a5, \u03c6 \u2208\u03a6, i = 1, 2} have VC-dimensions V\u039b and V\u039e respectively. Note that in Assumption 3.3, the VC dimension condition on \u039e is on two nodes instead of k. 
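Definition 3.1 can be made concrete with a short enumeration; the sketch below simply lists the n^p cells of G(n, B) as tuples of per-coordinate intervals:

```python
import itertools

def grid_cells(n, B, p):
    """Enumerate G(n, B) from Definition 3.1: the n^p axis-aligned cells
    partitioning [-B, B]^p. Each cell is a tuple of p closed intervals
    (l_j, u_j) with l_j = -B + 2(i_j - 1)B/n and u_j = -B + 2 i_j B/n."""
    intervals = [(-B + 2 * (i - 1) * B / n, -B + 2 * i * B / n)
                 for i in range(1, n + 1)]
    return list(itertools.product(intervals, repeat=p))

cells = grid_cells(n=3, B=1.0, p=2)  # 3^2 = 9 cells tiling [-1, 1]^2
```

Adjacent cells share boundary faces, which is why the text notes that pairwise intersections have volume 0 in R^p.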
Theorem 3.2. Consider the setting of Theorem 3.1, and de\ufb01ne mk := \u2308k 3 4\u2212 q (k log(|Hk|) + k 3 4)/ \u221a 2\u2309. Under Assumptions 3.1-3.3, there exists a universal constant c, s.t. for any t1, t2 > 0, w.p. at least 1 \u2212e\u2212k1/4 \u2212Pk\u22121 i=0 4e\u2212nit2 1 \u22122Ne\u22122kt2 2, max f\u2208F,\u03c6\u2208\u03a6 | max \u03c8k\u2208\u03a8k L(S1:k, f, \u03c6, \u03c8k; \u03bb) \u2212L(D, f, \u03c6; \u03bb)| \u2a7d(1 + k\u03bb)t1 + 2\u03bb \u221a k + N \u00b7 t2 + I + II + III, (7) where VC(k) = kV\u039e(log(V\u039e))2, and I = \u03bb max{N \u2212|Hk|, 0}, II = 2\u03bbB\u03c1 \u0010 B( 1 \u221a k) \u0011p \u230am1/p k \u230bp X i\u2208Hk sup \u03b6,\u03c6 |M\u03b6,\u03c6 i,1 (\u230am1/p k \u230b, B( 1 \u221a k ))|, III = 2c k k X i=1 k q VC(k) log( ni VC(k) ) + q V\u039b log( ni V\u039b) \u221ani . We now proceed to analyze the bound in (7). Note that III vanishes when mini ni = \u2126(k\u03b1) for \u03b1 \u2a7e2, whereas I is small when k is much larger than N. For II, note that for k much larger than N, log(|Hk|) is negligible compared to \u221a k, so that mk = \u2126(k3/4). Now for \ufb01xed B, when k is large, G(\u230am1/p k \u230b, B) shrinks in volume. In settings where the union of the grid cells in M\u03b6,\u03c6 i,2 (\u230am1/p k \u230b, B) approximates T \u03b6,\u03c6 i well enough with growing k, M\u03b6,\u03c6 i,1 (\u230am1/p k \u230b, B) contains negligible number of grid cells compared to M\u03b6,\u03c6 i,2 (\u230am1/p k \u230b, B), leading to X i\u2208Hk sup \u03b6,\u03c6 |M\u03b6,\u03c6 i,1 (\u230am1/p k \u230b, B)| = o(\u230am1/p k \u230bp). \fWe defer the readers to the Supplementary Section A for speci\ufb01c examples demonstrating this phenomenon. This continues to hold when B is replaced by B(1/ \u221a k) if the latter grows slowly with increasing k. Since B(\u00b7) is related to the tails of the distributions \u03b6(\u03c6(D\u2217X i )), we are able to control this term in speci\ufb01c examples. 
For instance, if all distributions in D are sub-Gaussian with sub-gaussian norm bounded by some constant \u03c3max, then B(1/ \u221a k) = O(\u221alog k). Together, this means that II is also small when k is su\ufb03ciently large. Thus, Theorem 3.2 demonstrates that observing samples from more domains helps in generalization. Consistency. Theorem 3.2 provides conditions on k and ni, i \u2208[k], under which the empirical loss function, when evaluated at the estimates ( \u02c6 f\u03bb, \u02c6 \u03c6\u03bb), will be close to its population counterpart w.h.p. Here we seek to establish that these estimates, in fact, well approximate the minimizers of the population loss. Since we impose no assumptions on the speci\ufb01c distributional forms of the seen domains, this is hard to prove in such generality. We will therefore establish this under a curvature condition on the population loss that is slightly weaker than strong convexity. Assumption 3.4 (Well-separation). Denote M\u2217 F,\u03a6 \u2286F \u00d7 \u03a6 to be the set of minimizers of L(D, f, \u03c6; \u03bb). For a metric dist(\u00b7, \u00b7) on the function class F \u00d7\u03a6, there exists a function U(\u00b7; \u03bb) : R \u2192R+ satisfying lim\u03b5\u21920 U(\u03b5; \u03bb) \u21920, such that for any \u03b5 > 0 inf \u03be\u2208F\u00d7\u03a6: infz\u2208M\u2217 F,\u03a6 dist(\u03be,z)\u2a7eU(\u03b5;\u03bb) |L(D, \u03be; \u03bb) \u2212 min f\u2208F,\u03c6\u2208\u03a6 L(D, f, \u03c6; \u03bb)| \u2a7e\u03b5. Theorem 3.3. Under Assumption 3.1-3.4, almost surely, inf z\u2208M\u2217 F,\u03a6 dist(( \u02c6 f\u03bb, \u02c6 \u03c6\u03bb), z) \u2a7dU(2\u0393; \u03bb), where \u0393 equals the RHS of (7). 3.2 Generalization to unseen domains Theorem 3.3 establishes that, under the aforementioned conditions, our proposed classi\ufb01er \u02c6 f\u03bb(\u02c6 \u03c6\u03bb(\u00b7)) minimizes the population loss L(D, f, \u03c6; \u03bb). However, this loss is a penalized version of the expected prediction error under \u00b5. 
Naturally, the results from the preceding section fail to capture the behavior of our classi\ufb01er on an arbitrary domain from D. We now address this problem, showing that such a worst case characterization is possible if elements in D are well-represented under \u00b5\u2014that is, every domain in D is close to at least one domain that receives relatively high \u00b5-probability. Assumption 3.5 (Well-represented). There exists constants 0 < pl < 1 and \u03b4 > 0, s.t. for any D \u2208D with \u00b5(D) > 0, \u2203D\u2032 \u2208D with \u00b5(D\u2032) \u2a7epl and dH(D, D\u2032) \u2a7d\u03b4, where H = F \u00d7 \u03a6. Theorem 3.4. Under Assumptions 3.1, 3.3 and 3.5, w.p. at least 1 \u2212exp(\u2212k2p2 l /2)/pl \u2212 Pk i=1 4e\u2212nit2 over the randomness in S1:k and D1:k, for any Du \u2208D and all f \u2208F, \u03c6 \u2208\u03a6, P(x,y)\u223cDu(f(\u03c6(x)) \u0338= y) \u2a7d2 pl \u0010 \u02c6 \u03b2(f, \u03c6) + t + c s V\u039b log(ni/V\u039b) ni \u0011 + \u03b4, where \u02c6 \u03b2(f, \u03c6) = Pk i=1 Pni j=1 I{f(\u03c6(xi,j)) \u0338= yi,j}/(kni). Moreover, this holds even if |D| is countably in\ufb01nite. \f3.3 Characterization of invariant representation mappings De\ufb01nition 3.2. An element \u03c6 \u2208\u03a6 is said to be an invariant representation mapping for a collection of k domains \u02dc D1, . . . , \u02dc Dk \u2208D and for some \u01eb > 0, if sup\u03c8k\u2208\u03a8k Pk i=1 Px\u223c\u02dc DX i (\u03c0k \u25e6 \u03c8k(\u03c6(x)) = i) \u2a7d\u03b5. Recall that the range of any \u03c6 \u2208\u03a6 is Rs so that \u03c6(\u00b7) may be expressed in the form (\u03c6(1)(\u00b7), \u03c6(2)(\u00b7), \u00b7 \u00b7 \u00b7 , \u03c6(s)(\u00b7))\u22a4. Suppose that the space containing \u03c6(i)(\u00b7) is separable, that is, there exists basis functions {\u03b2j}m j=1 (m can be in\ufb01nity), such that \u2200i \u2208{1, \u00b7 \u00b7 \u00b7 , s}, \u03c6(i)(x) = m X j=1 \u03b1(i) j (\u03c6)\u03b2j(x). 
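A toy numerical sketch of this separable expansion, with the coefficients assembled into a matrix as done next in the text; the basis functions and coefficient values below are illustrative, not taken from the paper:

```python
import numpy as np

# s = 2 output coordinates expanded in m = 3 basis functions beta_j.
def Gamma(x):
    """Stacked basis functions (beta_1(x), beta_2(x), beta_3(x))^T."""
    return np.array([1.0, x, x ** 2])

M_phi = np.array([[1.0, 2.0, 0.0],   # row i holds the coefficients alpha_j^{(i)}(phi)
                  [0.0, 1.0, 1.0]])

def phi(x):
    """phi(x) = M_phi Gamma(x); coordinate-wise phi^{(i)}(x) = sum_j alpha_j^{(i)} beta_j(x)."""
    return M_phi @ Gamma(x)

M_minus = np.linalg.pinv(M_phi)  # the Moore-Penrose inverse M_phi^-
```

The pseudoinverse `M_minus` plays the role of M_φ^- in the pullback sets M_φ^- I_i(ψ_k) appearing in the characterization that follows.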
On defining the matrix M_φ = {α_j^{(i)}(φ)}_{ij}, we have φ(x) = M_φ · (β_1(x), β_2(x), · · · , β_m(x))^⊤, and we let Γ(x) = (β_1(x), β_2(x), · · · , β_m(x))^⊤. Let M_φ^− denote the Moore-Penrose inverse of M_φ. Finally, for any ψ_k ∈ Ψ_k, define I_i(ψ_k) = {z : ψ_k^{(i)}(z) > max_{j≠i} ψ_k^{(j)}(z)}. With this decomposition we can now characterize invariant mappings. Theorem 3.5. For any ε > 0, if for all ψ_k ∈ Ψ_k, ∪_i I_i(ψ_k) = R^s, then φ ∈ Φ satisfies Definition 3.2 iff there exists f ∈ Ker(M_φ) s.t. Σ_{i=1}^k P_{x∼D̃_i}(Γ(x) + f(x) ∈ M_φ^− I_i(ψ_k)) ⩽ ε. Above, the condition ∪_i I_i(ψ_k) = R^s is necessary to ensure that there are no ties among the k weights produced by ψ_k. Note that our previous results do not require this condition. 4 Experiments We assessed the performance of our approach on several datasets: (a) synthetic data based on those in biomedical studies [23], (b) colored MNIST [6], (c) PACS [14]. Our experiments confirm the conclusion (Section 3.1) that observing more domains improves generalization performance on an unseen one. For (a), we compared with logistic regression and random forest, whereas (b) and (c) were benchmarked against the state-of-the-art algorithms IRM [2] and CIDDG [16]. Our code was adapted from the LAFTR code base [18], but the decoder was dropped to be consistent with our theoretical setting1. The prediction loss was taken to be binary cross-entropy. 1Extending the theory to include the decoder is an interesting direction for future work. Synthetic Data. We consider synthetic data settings with k = 4 and k = 10. In each case, to sample a data point from a domain D_i, a pair (x_j, y_j), x_j ∈ R^30, y_j ∈ {0, 1}, is generated with x_j ∼ N(µ_i, Σ_i).
The outcome y_j is generated so that a part of the relationship between x_j and y_j remains invariant across domains, while the other part varies. To operationalize this, we select a random subset A of covariates, a base rate b_i, and a set of functions {f_inv, f_1, . . . , f_k}. We then sample y_j ∼ Ber(b_i) and accept (x_j, y_j) if y_j = 1(f_inv(x_{j,A}, ε_{j,i}) > 0) = 1(f_i(x_{j,A^c}) > 0), where ε_{j,i} ∼ F_i is an additional small error term. Here, the parameters µ_i, Σ_i, b_i, F_i, f_i vary between the domains whereas f_inv and A remain invariant; ε_{j,i} ensures that the invariant signal between domains is not strong compared to the domain-specific one. Table 1 reports the performance of our algorithm on a new unseen domain of the same form as above, but with different parameters µ_{k+1}, Σ_{k+1}, b_{k+1}, F_{k+1}, f_{k+1}. (Training involved 5000 samples from each seen domain.) Observe that the test accuracies increase from k = 4 to k = 10. We uniformly outperform both baselines by a notable margin.

Table 1: Test domain classification accuracy on (a) synthetic data where each of the functions f_inv, f_1, . . . , f_k, f_{k+1} contains a linear component and an interaction term; (b) similar synthetic data with responses generated differently (Section B, Supplementary), where each of the aforementioned functions now contains the logical OR of two linear functions.

Algorithm           | (a) 4-Domain | (a) 10-Domain | (b) 4-Domain | (b) 10-Domain
RVR                 | 90.6%        | 95.6%         | 86.1%        | 93.4%
Logistic Regression | 82.3%        | 86.2%         | 82.6%        | 86.7%
Random Forest       | 79.4%        | 89.0%         | 85.0%        | 88.3%

Colored MNIST. The colored MNIST data was generated from the MNIST database of handwritten digits [6]; here, the digit color acts as a spurious signal and the digit shape acts as the invariant signal. We perform binary classification on several versions of this dataset, following experimental setups similar to [2].
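A minimal sketch of how one such colored-MNIST training domain can be generated, with hypothetical correlation parameters a (shape-label agreement) and b (color-label agreement, per the domain's own color rule); the encoding 0 = red, 1 = green and all parameter values are illustrative:

```python
import numpy as np

def color_mnist_domain(digits, a, b, red_means_zero, rng):
    """One colored-MNIST-style training domain (a sketch): the label agrees
    with digit shape (0-4 -> 0, 5-9 -> 1) w.p. a, and the digit color
    agrees with this domain's color-label rule w.p. b."""
    shape_label = (digits >= 5).astype(int)
    labels = np.where(rng.random(len(digits)) < a, shape_label, 1 - shape_label)
    # Domain rule: red <-> label 0 if red_means_zero, else red <-> label 1.
    rule_color = labels if red_means_zero else 1 - labels
    colors = np.where(rng.random(len(digits)) < b, rule_color, 1 - rule_color)
    return labels, colors

rng = np.random.default_rng(0)
# With a = b = 1.0 the correlations are deterministic: digit 0 -> label 0,
# red; digit 7 -> label 1, green (under this domain's red-means-zero rule).
labels, colors = color_mnist_domain(np.array([0, 7]), a=1.0, b=1.0,
                                    red_means_zero=True, rng=rng)
```

Flipping `red_means_zero` between the two training domains is what makes color a domain-specific, spurious signal while shape stays invariant.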
In Table 3, "A%-shape B%-color" refers to a setting with two training domains (10,000 samples each), both containing A% correlation between digit shape and labels: digits 0-4 receive label 0 w.p. A% (1 o.w.), and digits 5-9 receive label 1 w.p. A% (0 o.w.). In addition, there is a B% domain-specific correlation between digit color and labels: domain 1 (resp. domain 2) associates the color red with label 0 (resp. 1) and green with label 1 (resp. 0) w.p. B%. The last setting in Table 3 contains 6 training domains with shape-label correlation similar to that of the first, but the color-label correlation varies largely across domains, with each one consisting of mixtures of 2-3 different digit colors. The unseen test domains contain either single-color digits or a random mixture of red-green digit colors assigned independently of the label, and have the same shape-label correlation as the corresponding training data. Our algorithm beats IRM and CIDDG in multiple settings, and performs comparably in others. Finally, Table 2 reports our performance when only 3 of the 6 domains from this setting are used as the training data (same test data). Once again, test accuracy improves remarkably with more seen domains.

Table 2: Multi-domain comparison of test accuracy on colored MNIST with 100% digit-label correlation and varying color-label correlations. The second column is the same as Table 3, row 3.

Algorithm | 3-Domain | 6-Domain
RVR       | 86.1%    | 97.7%

Table 3: Test accuracy on several colored MNIST settings. Target denotes the digit color of the test domain. Details of the color-label correlation for row 3 can be found in Section B, Supplementary.

Setting                      | Target    | k | RVR    | IRM   | CIDDG
1. 100%-shape 90%-color      | purple    | 2 | 97.5%  | 94.3% | 95.7%
2. 75%-shape 80%-color       | red-green | 2 | 69.7%* | 69.0% | 71.1%
3. 100%-shape, unequal color | white     | 6 | 97.7%  | 94.7% | 96.9%

PACS.
To conclude, we examine our performance on PACS, an image-style dataset made of Photo, Art, Cartoon, and Sketch images, which has been repeatedly used [15] to benchmark domain generalization algorithms. We specifically consider images labeled giraffe (label 0) or elephant (label 1), which yields 384, 540, 803, and 1493 samples respectively. Each domain alternates as the target domain, while the algorithm trains on the rest. Once again, our algorithm beats both baselines across the board (Table 4), and by a significant margin in most settings.

Table 4: Test accuracy on two types of images obtained from PACS

Target | RVR   | IRM   | CIDDG
P      | 70.7% | 57.6% | 62.1%
A      | 66.7% | 64.2% | 59.9%
C      | 80.8% | 75.0% | 73.8%
S      | 54.3% | 54.0% | 53.4%

5 Discussion

One natural question is whether we can extend the theoretical results to cross-entropy or other notions of loss. Next, our theoretical analysis does not yet fully capture our intuition regarding the conditions under which we believe our approach will succeed. Our work forges a new path to address a major problem in biomedical research, where high-dimensional datasets are frequently encountered and predictive algorithms are increasingly used to inform personalized medical care. While strategies to ensure generalizability of these algorithms beyond the populations studied during training have been lacking, we are encouraged by our experimental results and have initiated engagement with the biomedical applications that inspired this work.

Acknowledgements

This work was supported in part by the Center for Research on Computation and Society (Harvard SEAS), the Harvard Data Science Initiative, NSF CCF-1763665, NIH/NCI 5T32CA00933739, and NSF-DMS 1810829."
+ }, + { + "url": "http://arxiv.org/abs/2006.08476v2", + "title": "Improving Adversarial Robustness via Unlabeled Out-of-Domain Data", + "abstract": "Data augmentation by incorporating cheap unlabeled data from multiple domains\nis a powerful way to improve prediction especially when there is limited\nlabeled data. In this work, we investigate how adversarial robustness can be\nenhanced by leveraging out-of-domain unlabeled data. We demonstrate that for\nbroad classes of distributions and classifiers, there exists a sample\ncomplexity gap between standard and robust classification. We quantify to what\ndegree this gap can be bridged via leveraging unlabeled samples from a shifted\ndomain by providing both upper and lower bounds. Moreover, we show settings\nwhere we achieve better adversarial robustness when the unlabeled data come\nfrom a shifted domain rather than the same domain as the labeled data. We also\ninvestigate how to leverage out-of-domain data when some structural\ninformation, such as sparsity, is shared between labeled and unlabeled domains.\nExperimentally, we augment two object recognition datasets (CIFAR-10 and SVHN)\nwith easy to obtain and unlabeled out-of-domain data and demonstrate\nsubstantial improvement in the model's robustness against $\\ell_\\infty$\nadversarial attacks on the original domain.", + "authors": "Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou", + "published": "2020-06-15", + "updated": "2021-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Robustness to adversarial attacks has been a major focus in machine learning security (Biggio & Roli, 2018; Dalvi et al., 2004; Lowd & Meek, 2005), and \u2217Equal contribution. Proceedings of the 24th International Conference on Arti\ufb01cial Intelligence and Statistics (AISTATS) 2021, San Diego, California, USA. PMLR: Volume 130. Copyright 2021 by the author(s). 
has been intensively studied in the past few years (Goodfellow et al., 2014; Carlini & Wagner, 2017b; Nguyen et al., 2015). However, the theoretical understanding of adversarial robustness is still far from satisfactory. Recently, Schmidt et al. (2018) demonstrated that sample complexity may be one of the obstacles to achieving high robustness under standard learning, which is a large challenge since in many real-world applications, labeled examples are few and expensive. To address this challenge, recent works (Carmon et al., 2019; Stanforth et al., 2019) showed that adversarial robustness can be improved by leveraging unlabeled data that come from the same distribution/domain as the original labeled training samples. Nevertheless, this approach is still limited by the difficulty of ensuring that the unlabeled data come from exactly the same distribution as the labeled data. For example, gathering a large number of unlabeled images that follow the same distribution as CIFAR-10 is challenging, since one would have to carefully match the same lighting conditions, backgrounds, etc. Meanwhile, out-of-domain unlabeled data can be much easier and cheaper to collect. For instance, we used the Bing search engine to query a small number of keywords and, within hours, generated a new 500k dataset of noisy CIFAR-10 categories; we call this Cheap-10 (available at https://tinyurl.com/mere5j0x). Despite being fast and easy to collect, we show that using Cheap-10 can substantially improve the adversarial robustness of the original CIFAR-10 classifier. Our contributions In this paper, we investigate how such widely-available, out-of-domain unlabeled data could improve robustness in the original domain. We analyze the behavior of standard and robust classification under a flexible generative model with Gaussian seeds and a non-linear classifier class.
Our model and classifier classes can be viewed as an extension of the Gaussian model and linear classifier class proposed in Schmidt et al. (2018). We show that in this more general setting, the sample complexity gap between standard and robust classification still exists. That is, to achieve the same amount of accuracy, the sample complexity of robust training is significantly larger than that of standard training. We also demonstrate the necessity of this gap by providing a minimax-type lower bound result. Luckily, we show that using unlabeled out-of-domain data can substantially improve robust accuracy as long as the unlabeled domain is not too different from the original domain or if they share some (unknown) structural information, such as similar sparse features. Interestingly, we further show settings where using out-of-domain unlabeled data can produce even better robust accuracy than using in-domain unlabeled data. We support our theory with experiments on three benchmark image recognition tasks, CIFAR-10, CINIC-10 and SVHN, for empirical ℓ∞ robustness and certified ℓ2 robustness. In both CIFAR-10 and CINIC-10, adding our easily generated Cheap-10 unlabeled data produces substantially higher robust accuracy than using just the CIFAR-10 or CINIC-10 data. On SVHN, we systematically characterize the tradeoff between the amount of noise in the unlabeled data and the robustness gain from adding such data. Related works. After the successful implementation of white-box and black-box adversarial examples (Goodfellow et al., 2014; Biggio & Roli, 2018; Moosavi-Dezfooli et al., 2016), several heuristic defense methods were introduced and broken one after another (Zantedeschi et al., 2017; Athalye et al., 2018; Guo et al., 2017; Biggio et al., 2013; Carlini & Wagner, 2017a; Athalye et al., 2017).
A line of work has focused on certified robustness (Cohen et al., 2019; Lecuyer et al., 2019; Raghunathan et al., 2018; Liu et al., 2020; Chiang et al., 2020), which offers appealing guarantees but relatively limited empirical performance. Most recent efforts on training empirically robust models are based on adversarial training (Madry et al., 2017; Zhang et al., 2019; Kurakin et al., 2016; Hendrycks et al., 2019). Theoretically, some works justify the efficiency of adversarial training (Deng et al., 2020b), and, to explain why it is difficult to achieve satisfactory performance in robust learning, others characterize the obstacles to robustness from the perspective of computational cost (Bubeck et al., 2018; Degwekar et al., 2019). Meanwhile, other works quantify the trade-off between adversarial robustness and standard accuracy (Deng et al., 2020a) and show that data augmentation such as Mixup can mitigate this trade-off (Zhang et al., 2020). In addition, works such as Schmidt et al. (2018) explain the obstacle by showing that the sample complexity of robust learning can be significantly larger than that of standard learning. They investigated the Gaussian model, which is a special case of our Gaussian generative model. Some recent works (Carmon et al., 2019; Stanforth et al., 2019) propose using semi-supervised learning methods, which have a rich literature (Laine & Aila, 2016; Miyato et al., 2018; Sajjadi et al., 2016), to bridge that sample gap. Their theoretical results all assume the unlabeled data are drawn from the same marginal distribution as the labeled data. We show that to bridge the sample complexity gap, it is sufficient to have well-behaved unlabeled out-of-domain data.
We substantially extend the previous results to more general models and classifier classes, and also take the first step toward quantifying when and how unlabeled data coming from a shifted distribution can help improve adversarial robustness. Experimentally, previous works augmented CIFAR-10 with Tiny Images, a dataset that is curated and very similar to CIFAR-10. We introduce a new dataset, Cheap-10, obtain comparable results, and demonstrate the power of incorporating out-of-domain data. Other related works include Zhai et al. (2019), who demonstrate a PCA-based procedure to incorporate unlabeled data to gain robustness, and Najafi et al. (2019), who consider combining distributionally robust optimization and semi-supervised learning. 2 Set-up Consider the classification task of mapping the input x ∈ X ⊆ R^{s1} to the label y ∈ {±1}. We have n labeled training samples from an original domain D, with a joint distribution P_{x,y} over (x, y) pairs and marginal distribution P_x over x. Meanwhile, we have another ñ unlabeled samples from a different domain D̃, with a distribution P̃_x over x. In this work, we study the possible advantages and limitations of performing semi-supervised learning with data from D and D̃ to train a classifier for the domain D. Specifically, we apply the pseudo-labeling approach used in Carmon et al. (2019) as follows. First, we perform supervised learning on the labeled data from domain D to obtain a classifier f0. We then apply this classifier on D̃ and generate pseudo-labels for the unlabeled data: {(x, f0(x)) | x ∈ D̃}, which are further used to train a final model. The classification error metrics we consider are defined as follows. Definition 1 ((Robust) classification error). Let P be a distribution over X × {±1}.
The classification error $\beta$ of a classifier $g : \mathbb{R}^{s_1} \mapsto \{\pm 1\}$ is defined as $\beta_g = \mathbb{P}_{(x,y)\sim P}(g(x) \neq y)$, and the robust classification error is $\beta^R_g = \mathbb{P}_{(x,y)\sim P}(\exists u \in C(x) : g(u) \neq y)$, for some constraint set $C$. Throughout the paper, we take the constraint set $C$ to be the $\ell_p$-ball $B_p(x, \varepsilon) := \{u \in \mathcal{X} \mid \|u - x\|_p \leq \varepsilon\}$ with $p = \infty$.

Zhun Deng*, Linjun Zhang*, Amirata Ghorbani, James Zou

In addition, we consider a certain type of data generating process for domain $D$: the Gaussian generative model, which is frequently used in generative modeling in machine learning. This model is more general than the one analyzed in Schmidt et al. (2018), which only considered symmetric Gaussian mixtures. Our Gaussian generative model takes a sample from a Gaussian mixture as input and then passes it through a nonlinear (possibly high-dimensional) mapping.

Gaussian generative model. For a function $\rho : \mathbb{R}^{s_1} \mapsto \mathbb{R}^{s_2}$, given $z \in \mathbb{R}^{s_1}$, the samples from $D$ are drawn i.i.d. from a distribution over $(x, y) \in \mathbb{R}^{s_2} \times \{\pm 1\}$ such that
$$x = \rho(yz), \qquad (1)$$
where $z \sim N(\mu, \sigma^2 I_{s_1})$ and $y \sim \mathrm{Bern}(\tfrac{1}{2})$, for $\mu \in \mathbb{R}^{s_1}$, $\sigma \in \mathbb{R}$.

Remark 1. The Gaussian generative model is very flexible and includes many recent machine learning models. For example, many common deep generative models such as VAEs and GANs are Gaussian generative models: in their case, the input is a Gaussian sample $z$ and $\rho$ is parametrized by a neural network. Therefore our results are quite generally applicable.

Classifier class. The classifier class we consider in this paper is of the following form:
$$\mathcal{G} = \left\{ g \mid g(x) = \mathrm{sgn}\left(w^\top(\vartheta(x) - b)\right),\ (w, b) \in \mathbb{R}^d \times \mathbb{R}^d \right\}, \qquad (2)$$
where $\vartheta$ is a basis function and $\vartheta : \mathbb{R}^{s_2} \mapsto \mathbb{R}^d$.
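For intuition, the robust error in Definition 1 has a closed form for the classifiers in Eq. (2) under $\ell_\infty$ perturbations: $\mathrm{sgn}(w^\top(x - b))$ can be flipped by some $u \in B_\infty(x, \varepsilon)$ exactly when the margin $y \cdot w^\top(x - b)$ is at most $\varepsilon \|w\|_1$. The following NumPy sketch (our own illustration, taking $\vartheta$ to be the identity) computes both error metrics on synthetic mixture data:

```python
import numpy as np

def robust_error_linf(w, b, X, y, eps):
    """Robust error of f(x) = sgn(w^T (x - b)) under l_inf attacks of
    radius eps: the worst-case perturbation shifts w^T x by eps * ||w||_1,
    so a point is robustly correct iff its margin exceeds that amount."""
    margin = y * (X @ w - w @ b)
    return float(np.mean(margin <= eps * np.abs(w).sum()))

rng = np.random.default_rng(0)
d, n, eps = 20, 5000, 0.1
mu = np.ones(d) / np.sqrt(d)                        # class-mean direction
y = rng.choice([-1, 1], size=n)
X = y[:, None] * mu + 0.5 * rng.standard_normal((n, d))
w, b = mu, np.zeros(d)

std_err = float(np.mean(y * (X @ w - w @ b) <= 0))  # standard error
rob_err = robust_error_linf(w, b, X, y, eps)        # always >= std_err
```

Since the robust-error event contains the standard-error event, `rob_err` dominates `std_err` for any classifier and any $\varepsilon \geq 0$.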
We remark here that this classifier class is more general than the linear classifier class considered in Schmidt et al. (2018). For a broad class of kernels, by Mercer's theorem, the corresponding kernel classification belongs to $\mathcal{G}$ with a certain basis function $\vartheta$. Throughout the paper, we use $\theta = (w, b)$ to denote the parameters.

Remark 2. The Gaussian generative model and the classifier class we consider in this paper form a hierarchical structure: a random seed $z \in \mathbb{R}^{s_1}$ is mapped by a generative function $\rho$ to the input space $x \in \mathbb{R}^{s_2}$, which is further mapped to $\mathbb{R}^d$ by $\vartheta(x)$ when implementing classification.

Notation and terminology. We let $\phi = \vartheta \circ \rho$ and denote $f_{w,b}(x) = \mathrm{sgn}(w^\top(\vartheta(x) - b))$. In Section 3, the results will be mainly described in terms of $\phi$. Besides, let $\beta(w, b) = \mathbb{P}_{(x,y)\sim P}(f_{w,b}(x) \neq y)$ and $\beta^R(w, b) = \mathbb{P}_{(x,y)\sim P}(\exists u \in C(x) : f_{w,b}(u) \neq y)$ for a constraint set $C$. In particular, we write $\beta^{\varepsilon,\infty}$ when $C$ is the $\ell_\infty$ ball with radius $\varepsilon$. Meanwhile, we use $\|\cdot\|_{\psi_2}$ for the sub-Gaussian norm (due to space limits, the rigorous definition is deferred to the appendix). We call the conditional distribution of $x$ given $y = 1$ the positive distribution, and that given $y = -1$ the negative distribution. For distributions $P_1$ and $P_2$ over $x$, we say a distribution $P$ over $x$ is a uniform mixture of $P_1$ and $P_2$ if it equals $P_1$ and $P_2$ with probability $1/2$ each. For a sequence of random variables $\{X_n\}$ and a sequence of positive numbers $\{a_n\}$, we write $X_n = O_P(a_n)$ if there exists a constant $C$ such that $\mathbb{P}(X_n \leq C a_n) \to 1$ as $n \to \infty$. For real-valued sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n \lesssim b_n$ if $a_n \leq c\, b_n$ for some universal constant $c \in (0, \infty)$, and $a_n \gtrsim b_n$ if $a_n \geq c'\, b_n$ for some universal constant $c' \in (0, \infty)$.
We say $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $a_n \gtrsim b_n$. In this paper, $c, C, c_0, c_1, c_2, \cdots$ refer to universal constants, and their specific values may vary from place to place.

3 Theoretical Results

We demonstrate for Gaussian generative models that combining unlabeled data from a reasonably well-behaved shifted domain leads to a classifier with better robust accuracy on the original domain $D$ than is achievable using only the labeled data from $D$. We further analyze how different the shifted domain can be from $D$ before the unlabeled data hurts the robust accuracy on $D$. Finally, we show that if the data from a shifted domain share a certain unknown sparsity structure with the data from the original domain, performing semi-supervised learning also helps in obtaining a classifier with higher robust accuracy on the original domain.

Assumptions. Throughout this section, our theory relies on the following assumptions unless we state otherwise explicitly: 1) $\vartheta(\cdot)$ is $L_1$-Lipschitz continuous in $\ell_2$-norm, i.e. $\|\vartheta(a) - \vartheta(b)\| \leq L_1 \|a - b\|$, and $L'_1$-Lipschitz continuous in $\ell_\infty$-norm; 2) $\rho(\cdot)$ is $L_2$-Lipschitz continuous in $\ell_2$-norm and $L'_2$-Lipschitz continuous in $\ell_\infty$-norm; 3) $\|\mathbb{E}\,\phi(z) - \mathbb{E}\,\phi(-z)\| = 2\alpha\sqrt{d}$ for $z \sim N(\mu, \sigma^2 I_{s_1})$ and some constant $\alpha > 0$. The last condition, on the magnitude of the separation, is added for simplicity of presentation; such a magnitude choice is also used in Schmidt et al. (2018); Carmon et al. (2019).

3.1 Supervised learning in Gaussian generative models

We first consider the supervised setting where only the labeled data are used. In this setting, we prove the following two theorems, which demonstrate the sample complexity gap between the standard error and the robust error.
Analogous results for Gaussian mixture models were shown in Schmidt et al. (2018); our results cover the more general Gaussian generative model setting.

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data

Supervised learning algorithm: in this section, for simplicity of presentation, we use $2n$ to denote the size of the labeled training data. For Gaussian generative models, we focus on the following method. We first estimate $w$ and $b$ by $\hat{w} = \frac{1}{n}\sum_{i=1}^{n} y_i \vartheta(x_i)$ and $\hat{b} = \frac{1}{n}\sum_{i=n+1}^{2n} \vartheta(x_i)$. The final classifier is then constructed as $f_{\hat{w},\hat{b}}(x) = \mathrm{sgn}(\hat{w}^\top(\vartheta(x) - \hat{b}))$. Here half of the labeled data ($n$ samples) is used to fit $\hat{w}$ and the other half to fit $\hat{b}$, so that their estimation errors are independent, which simplifies the analysis. The following theorem shows that this method achieves high standard accuracy.

Theorem 1 (Standard accuracy). For a Gaussian generative model with $\sigma \lesssim d^{1/4}$, the method described above obtains a classifier $f_{\hat{w},\hat{b}}$ such that for $d$ sufficiently large, with high probability, the classification error $\beta(\hat{w}, \hat{b})$ is at most 1% even with $n = 1$.

Meanwhile, we have the following lower bound showing that the increased sample complexity is essential if we are interested in the robust error.

Theorem 2 (Sample complexity gap for robust accuracy). Let $A_n$ be any learning algorithm, i.e. a function from $n$ samples to a binary classifier $g_n$. Let $\sigma \asymp d^{1/4}$, $\varepsilon \geq 0$, and let $\mu \in \mathbb{R}^{s_1}$ be drawn from the prior distribution $N(0, I_{s_1})$. We draw $2n$ samples from the $(\mu, \sigma)$-Gaussian generative model. Then the expected robust classification error $\beta^{\varepsilon,\infty}_{g_n}$ is at least $(1 - 1/d)/2$ if $n \lesssim \frac{\varepsilon^2 \sqrt{d}}{\log d}$.
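The supervised estimators $(\hat{w}, \hat{b})$ are simple sample averages and can be sketched directly. Below is a NumPy toy version, with $\vartheta$ taken to be the identity (an illustrative simplification of the general setting, with hypothetical parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 100, 50
mu = np.full(d, 0.5)   # class mean; separation ||2*mu|| is of order sqrt(d)

# 2n labeled samples x_i = y_i * mu + noise; the basis map is the identity
y = rng.choice([-1, 1], size=2 * n)
X = y[:, None] * mu + rng.standard_normal((2 * n, d))

# first half fits w_hat, second half fits b_hat, so their estimation
# errors are independent, as in the paper's supervised algorithm
w_hat = np.mean(y[:n, None] * X[:n], axis=0)
b_hat = np.mean(X[n:], axis=0)   # estimates the midpoint of the class means

f = lambda Z: np.sign(Z @ w_hat - w_hat @ b_hat)

# evaluate standard accuracy on fresh test data
y_te = rng.choice([-1, 1], size=10000)
X_te = y_te[:, None] * mu + rng.standard_normal((10000, d))
acc = float(np.mean(f(X_te) == y_te))
```

Consistent with Theorem 1, the standard accuracy is already high with a small number of labeled samples, because the class separation grows like $\sqrt{d}$ while the estimation noise does not.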
Taken together, these two theorems demonstrate that a substantially larger number of labeled samples (from the same domain) is necessary in order to achieve decent robust accuracy in that domain.

3.2 Improving learning via out-of-domain data

We next investigate how to improve the robust accuracy of a classifier by incorporating unlabeled out-of-domain data.

Semi-supervised learning on out-of-domain data. Let us denote the samples from the shifted domain by $\{\tilde{x}_i\}_{i=1}^{2\tilde{n}}$; they are incorporated via the following semi-supervised learning algorithm.

Semi-supervised learning algorithm: we use the $\hat{w}$ and $\hat{b}$ obtained in supervised learning to label $\{\tilde{x}_i\}_{i=1}^{2\tilde{n}}$ via $\hat{g}(x) = \mathrm{sgn}(\hat{w}^\top(\vartheta(x) - \hat{b}))$ and obtain the corresponding pseudo-labels $\{\tilde{y}_i\}_{i=1}^{2\tilde{n}}$. We denote the sample sizes for each label class by $\tilde{n}_1 = \sum_{i=1}^{2\tilde{n}} \mathbf{1}(\tilde{y}_i = 1)$ and $\tilde{n}_2 = \sum_{i=1}^{2\tilde{n}} \mathbf{1}(\tilde{y}_i = -1)$ respectively. Then we estimate $w$ and $b$ respectively by
$$\tilde{w} = \frac{1}{2\tilde{n}_1} \sum_{\tilde{y}_i = 1} \vartheta(\tilde{x}_i) - \frac{1}{2\tilde{n}_2} \sum_{\tilde{y}_i = -1} \vartheta(\tilde{x}_i), \qquad \tilde{b} = \frac{1}{2\tilde{n}_1} \sum_{\tilde{y}_i = 1} \vartheta(\tilde{x}_i) + \frac{1}{2\tilde{n}_2} \sum_{\tilde{y}_i = -1} \vartheta(\tilde{x}_i).$$
Given the pseudo-labels, these two estimators depend only on the shifted-domain data. They are slightly different from those in the supervised setting, since the shifted-domain data are not necessarily mixed uniformly. The classifier is then constructed as $f_{\tilde{w},\tilde{b}}(x) = \mathrm{sgn}(\tilde{w}^\top(\vartheta(x) - \tilde{b}))$. For simplicity of the theoretical analysis, we do not merge the original and out-of-domain datasets to obtain $\tilde{w}$ and $\tilde{b}$. However, as we show in Section 4, merging both datasets for robust training leads to better empirical performance.
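The pseudo-labeling step and the class-balanced re-estimation of $(w, b)$ from shifted-domain data can be sketched as follows (NumPy; $\vartheta$ is again the identity, and the shifted mean $\tilde{\mu}$, the mixing probability $q$, and the idealized supervised estimates are hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_tilde = 100, 2000
mu = np.full(d, 0.5)        # labeled-domain mean direction
mu_tilde = 0.8 * mu         # a nearby shifted-domain mean (hypothetical)

# shifted-domain unlabeled data: mixture of N(mu_tilde, I) and
# N(-mu_tilde, I) with mixing probability q (not necessarily 1/2)
q = 0.4
s = rng.choice([1, -1], size=n_tilde, p=[q, 1 - q])
X_tilde = s[:, None] * mu_tilde + rng.standard_normal((n_tilde, d))

# suppose (w_hat, b_hat) came from the supervised step; for brevity we
# plug in the idealized values w_hat = mu, b_hat = 0
w_hat, b_hat = mu, np.zeros(d)
y_tilde = np.sign(X_tilde @ w_hat - w_hat @ b_hat)   # pseudo-labels

pos, neg = X_tilde[y_tilde == 1], X_tilde[y_tilde == -1]
# class-balanced estimators: w ~ half the difference of the class means,
# b ~ their midpoint, so unequal mixing weights do not bias the direction
w_tilde = 0.5 * (pos.mean(axis=0) - neg.mean(axis=0))
b_tilde = 0.5 * (pos.mean(axis=0) + neg.mean(axis=0))

# the recovered direction should align with the true shifted direction
cos = w_tilde @ mu_tilde / (np.linalg.norm(w_tilde) * np.linalg.norm(mu_tilde))
```

Averaging each pseudo-label class separately is what makes the estimators robust to $q \neq 1/2$; a plain pooled average would shrink toward the majority class.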
Recall that $\phi = \vartheta \circ \rho$; since the semi-supervised learning algorithm only involves $y$ and $\vartheta(x)$, we can equivalently view the input distribution as $\phi(yz)$ for $z \sim N(0, I_{s_1})$, $y \sim \mathrm{Bern}(1/2)$, and the classifier class as $\mathcal{G}' = \{g \mid g(x) = \mathrm{sgn}(w^\top(x - b)),\ (w, b) \in \mathbb{R}^d \times \mathbb{R}^d\}$ (such linearization is the common purpose of kernel tricks). For simplicity of exposition, our later statements use this equivalent setting and simply consider the distributions of $\vartheta(\tilde{x})$.

Theorem 3 (Robust accuracy). Recall that in the Gaussian generative model, the marginal distribution of the input $x$ of the labeled domain is a uniform mixture of two distributions with means $\mu_1 = \mathbb{E}[\phi(z)]$ and $\mu_2 = \mathbb{E}[\phi(-z)]$ respectively, where $z \sim N(0, \sigma^2 I_{s_1})$. Suppose the marginal distribution of the input of the unlabeled domain is a mixture of two sub-Gaussian distributions with means $\tilde{\mu}_1$ and $\tilde{\mu}_2$ and mixing probabilities $q$ and $1 - q$, and that
$$\left\| \mathbb{E}\left[ \vartheta(\tilde{x}_i) - \mathbb{E}[\vartheta(\tilde{x}_i)] \,\middle|\, a^\top \vartheta(\tilde{x}_i) = b \right] \right\| \lesssim \sqrt{d} + |b|$$
for any fixed unit vector $a$. Assume the sub-Gaussian norms of both labeled and unlabeled data are upper bounded by a universal quantity $\sigma_{\max} \asymp d^{1/4}$, that $\|\tilde{\mu}_1 - \tilde{\mu}_2\|_2 \asymp \sqrt{d}$, that $c < q < 1 - c$ for some constant $0 < c < 1/2$, and that
$$d_\nu = \max\left\{ \frac{\|\tilde{\mu}_1 - \mu_1\|}{\|\tilde{\mu}_1 - \tilde{\mu}_2\|},\ \frac{\|\tilde{\mu}_2 - \mu_2\|}{\|\tilde{\mu}_1 - \tilde{\mu}_2\|} \right\} < c_0,$$
for some constant $c_0 \leq 1/4$. Then the robust classification error is at most 1% when $d$ is sufficiently large, $n \geq C$ for some constant $C$ (not depending on $d$ and $\varepsilon$), and $\tilde{n} \gtrsim \varepsilon^2 \sqrt{d} \log d$.

Remark 3. We remark here that $\sigma_{\max}$, the upper bound of the sub-Gaussian norm of the Gaussian generative model, is at most $L_1 L_2 \sigma$.
Compared with Theorem 2, which shows that a sample complexity of order $\varepsilon^2 \sqrt{d}/\log d$ is necessary to achieve small robust error, the above theorem shows that, by incorporating the same order of similar unlabeled data (up to a logarithmic factor), which is generally cheaper, one can achieve the same robust accuracy. We further note that the sub-Gaussian assumption in Theorem 3 is quite relaxed. For example, any dataset whose feature values are bounded is automatically sub-Gaussian. This includes all image data, since pixel values are bounded.

Remark 4. Moreover, using the same technique, we can extend our theoretical results to a much more general family of distributions in $\mathbb{R}^d$ whose tails are bounded by any strictly decreasing function $g$. Define
$$\mathcal{D}_g(\mu, \sigma^2) = \left\{ X \in \mathbb{R}^d : \forall v \in \mathbb{R}^d \text{ with } \|v\|_2 = 1,\ \mathrm{Var}(X_j) \leq \sigma^2,\ \mathbb{P}\left(|v^\top(X - \mu)| > \sigma \cdot t\right) \leq g(t) \right\}.$$
For example, letting $g(t) = C_1 e^{-C_2 t^2}$ reduces $\mathcal{D}_g$ to the family of sub-Gaussian distributions. The in-domain distribution is now assumed to be $x_1, \ldots, x_n \overset{\text{i.i.d.}}{\sim} \frac{1}{2}\mathcal{D}_g(\mu_1, \sigma^2) + \frac{1}{2}\mathcal{D}_g(\mu_2, \sigma^2)$, and the out-of-domain distribution is assumed to be $\tilde{x}_1, \ldots, \tilde{x}_{\tilde{n}} \overset{\text{i.i.d.}}{\sim} q \cdot \mathcal{D}_g(\tilde{\mu}_1, \tilde{\sigma}^2) + (1 - q) \cdot \mathcal{D}_g(\tilde{\mu}_2, \tilde{\sigma}^2)$, where $q \in (0, 1/2)$. We remark here that this extension allows the in-domain and out-of-domain distributions to be very different as long as they are all in the family $\mathcal{D}_g(\cdot, \cdot)$. We present the following proposition for this extension.

Proposition 1.
Under assumptions similar to those in Theorem 3, namely that $\left\| \mathbb{E}\left[ \tilde{x}_i - \mathbb{E}[\tilde{x}_i] \mid a^\top \tilde{x}_i = b \right] \right\| \lesssim \sqrt{d} + |b|$ for any fixed unit vector $a$, $\tilde{\sigma} \leq \sigma_{\max} \asymp d^{1/4}$, $\|\tilde{\mu}_1 - \tilde{\mu}_2\|_2 \asymp \sqrt{d}$, $c < q < 1 - c$ for some constant $0 < c < 1/2$, and $d_\nu = \max\left\{ \frac{\|\tilde{\mu}_1 - \mu_1\|}{\|\tilde{\mu}_1 - \tilde{\mu}_2\|}, \frac{\|\tilde{\mu}_2 - \mu_2\|}{\|\tilde{\mu}_1 - \tilde{\mu}_2\|} \right\} < c_0$ for some constant $c_0 \leq 1/4$, the robust classification error is at most 1% when $d$ is sufficiently large, $n \geq C$ for some constant $C$ (not depending on $d$ and $\varepsilon$), and $\tilde{n} \gtrsim \varepsilon^2 \cdot \left(g^{-1}(1/(d \log d))\right)^2 \cdot \sqrt{d}$.

Connections to statistical measures. A key quantity in Theorem 3 is $d_\nu$, which quantifies the difference between the labeled and unlabeled domains. In this section, we connect some commonly used statistical measures to $d_\nu$ via more specific examples. We establish connections to the Wasserstein distance, maximal information, and H-divergence; due to space limits, we only present the result for the Wasserstein distance here and defer the others to the appendix. Throughout this paragraph, we consider the labeled domain to have positive distribution $P_1$, negative distribution $P_2$, and $y \sim \mathrm{Bern}(1/2)$. The marginal distribution of the shifted domain is assumed to be a uniform mixture of $\tilde{P}_1$ and $\tilde{P}_2$.

Wasserstein distance: the Wasserstein distance induced by a metric $\rho$ between distributions $P_1$ and $P_2$ over $\mathbb{R}^d$ is defined as
$$W_\rho(P_1, P_2) = \sup_{\|f\|_{\mathrm{Lip}} \leq 1} \left[ \int f \, dP_1 - \int f \, dP_2 \right],$$
where $\|f\|_{\mathrm{Lip}} \leq 1$ indicates the class of $f : \mathbb{R}^d \mapsto \mathbb{R}$ such that for any $x, x' \in \mathbb{R}^d$, $|f(x) - f(x')| \leq \rho(x, x')$.
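The quantity $d_\nu$ is directly computable from the four (population or estimated) means. The sketch below checks the resulting bound in a simple mean-shift construction of our own, where each shifted distribution is a translate of the original, so the Wasserstein distance between the pair equals the distance $\tau$ between their means:

```python
import numpy as np

def d_nu(mu1, mu2, mu1_t, mu2_t):
    """Domain-difference measure d_nu = max_i ||mu_i~ - mu_i|| / ||mu1~ - mu2~||."""
    gap = np.linalg.norm(mu1_t - mu2_t)
    return max(np.linalg.norm(mu1_t - mu1),
               np.linalg.norm(mu2_t - mu2)) / gap

d = 50
mu1, mu2 = np.ones(d), -np.ones(d)
tau = 1.0                              # size of the mean shift
shift = tau * np.eye(d)[0]             # shift both means by tau in l2
mu1_t, mu2_t = mu1 + shift, mu2 + shift

# if W(P_i, P_i~) <= tau then ||mu_i - mu_i~|| <= tau, hence
# d_nu <= tau / ||mu1~ - mu2~||
val = d_nu(mu1, mu2, mu1_t, mu2_t)
```

Here both means are translated by the same vector, so the bound holds with equality; shrinking $\tau$ shrinks $d_\nu$ proportionally, matching the qualitative message of the proposition.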
Let us consider $\rho(x, x') = \|x - x'\|$.

Proposition 2. Under the assumption that $\max\{W_\rho(P_1, \tilde{P}_1), W_\rho(P_2, \tilde{P}_2)\} \leq \tau$ for some $\tau \geq 0$, we have $\|\mu_i - \tilde{\mu}_i\| \leq \tau$ for $i = 1, 2$. As a result, $d_\nu \leq \frac{\tau}{\|\tilde{\mu}_1 - \tilde{\mu}_2\|}$. If we further have $\tau \leq \|\mu_1 - \mu_2\|/2$, then $d_\nu \leq \tau/(\|\mu_1 - \mu_2\| - 2\tau)$.

As we can see, as the Wasserstein distance gets smaller, the quantity $d_\nu$ decreases.

Data from a shifted domain can work even better. Theorem 3 demonstrates that the sample complexity gap of Section 3.1 can be bridged via out-of-domain data. Next, we show that in certain settings, one can achieve even better adversarial robustness when the unlabeled data come from a shifted domain rather than from the same domain as the labeled data. To illustrate this phenomenon, let us analyze a specific instance of our model, the Gaussian model proposed in Schmidt et al. (2018).

Theorem 4. Suppose the labeled domain has positive distribution $N(\mu, \sigma^2 I_d)$ and negative distribution $N(-\mu, \sigma^2 I_d)$ with $\mu \in \mathbb{R}^d$ and $y \sim \mathrm{Bern}(1/2)$. Samples $(x_1, y_1), \cdots, (x_n, y_n)$ are drawn i.i.d. from the labeled domain. Suppose we have unlabeled inputs $x_{n+1}, \cdots, x_{n+\tilde{n}}$ from the same domain, and also unlabeled shifted-domain inputs $\tilde{x}_1, \cdots, \tilde{x}_{\tilde{n}}$ drawn from a uniform mixture of $N(\tilde{\mu}, \sigma^2 I_d)$ and $N(-\tilde{\mu}, \sigma^2 I_d)$ with $\tilde{\mu} \in \mathbb{R}^d$. Denote the parameters $\theta = (w, b)$ of the classifiers obtained through the semi-supervised algorithm by $\hat{\theta}_{\text{same}}$ and $\hat{\theta}_{\text{shifted}}$ when we use $\{x_i\}_{i=n+1}^{n+\tilde{n}}$ and $\{\tilde{x}_i\}$ respectively.
If we let $\mu = 2\varepsilon \mathbf{1}_d$ and $\tilde{\mu} = \varepsilon \mathbf{1}_d$, where $\mathbf{1}_d$ is the $d$-dimensional all-ones vector, then when $(1/d + 1/\tilde{n}) \cdot \sigma^2/\varepsilon^2 \to 0$ as $d, \tilde{n} \to \infty$, we have $\beta^{\varepsilon,\infty}(\hat{\theta}_{\text{shifted}}) \leq \beta^{\varepsilon,\infty}(\hat{\theta}_{\text{same}})$.

This seemingly surprising result can be explained intuitively. Heuristically, when one tries to minimize the robust error, the robust optimizer behaves similarly to a regularized version of the standard optimizer. In our semi-supervised setting, the shifted-domain data also act as regularization. This intuition is rigorously justified in the proof. Further, we illustrate the results of Theorem 4 with experiments on synthetic data; the experimental set-up and results are presented in the appendix, where we find that the robust error obtained by incorporating out-of-domain unlabeled data is smaller than that obtained by incorporating the same amount of unlabeled data from the same domain as the labeled data.

Too irrelevant unlabeled data hurts robustness. In the results above, we demonstrated that incorporating unlabeled data from a shifted domain can improve robust accuracy in the original domain if the shifted domain is not too different from the original (as measured by $d_\nu$). Here we show that there is no free lunch: if the shifted domain is too different from the original, then incorporating its unlabeled data through pseudolabeling can decrease the robust accuracy in the original domain.

Theorem 5. Suppose the labeled domain's positive and negative distributions are two symmetric sub-Gaussian distributions with means $\mu_1$ and $\mu_2 = -\mu_1$, and $y \sim \mathrm{Bern}(1/2)$. The distribution of the unlabeled domain $P'$ is a mixture of two sub-Gaussian distributions with means $\tilde{\mu}_1$ and $\tilde{\mu}_2$.
Let $\Xi = \{\tilde{\mu}_1, \tilde{\mu}_2 : d_\nu \geq \frac{1}{2}\}$. Then for $P'$ with $(\tilde{\mu}_1, \tilde{\mu}_2) \in \Xi$, with high probability, the worst-case robust misclassification error $\beta^{\varepsilon,\infty}(\tilde{w}, \tilde{b})$ of the preceding semi-supervised learning procedure satisfies
$$\sup_{P' : (\tilde{\mu}_1, \tilde{\mu}_2) \in \Xi} \beta^{\varepsilon,\infty}(\tilde{w}, \tilde{b}) \geq 49\%.$$

Shifted domain with unknown sparsity. In Theorem 3, we showed how reasonably close shifted-domain data help improve adversarial robustness. Sometimes, however, the shifted domain is not so close to the original domain in terms of $d_\nu$, yet the two still share some similarity; for instance, both domains can share structural commonalities. Here, we consider the case where the two distributions have a common salient feature set; that is, the labeled and unlabeled domains share discriminant features, though the corresponding coefficients can be far apart. This setting is common in practice. For example, when one tries to classify images of different kinds of cats, the discriminant features include the eyes, ears, shapes, etc. These discriminant features also apply when one aims to classify dogs, though the weights on these features might be very different. Specifically, we consider a labeled domain whose positive and negative distributions are $N(\mu_1, \sigma^2 I_d)$ and $N(\mu_2, \sigma^2 I_d)$, with labels $y \sim \mathrm{Bern}(1/2)$. The samples $(x_1, y_1), \ldots, (x_n, y_n)$ are drawn i.i.d. from this labeled domain. Suppose we have unlabeled samples $\tilde{x}_1, \ldots, \tilde{x}_{\tilde{n}}$ drawn from a uniform mixture of $N(\tilde{\mu}, \sigma^2 I_d)$ and $N(-\tilde{\mu}, \sigma^2 I_d)$ with $\tilde{\mu} \in \mathbb{R}^d$.
Here, we assume the two domains share support information, that is, $\mathrm{supp}(\mu_1 - \mu_2) = \mathrm{supp}(\tilde{\mu}_1 - \tilde{\mu}_2)$, though the distance between $\mu_1 - \mu_2$ and $\tilde{\mu}_1 - \tilde{\mu}_2$ is not necessarily small. For such a case, we propose the following algorithm to help improve adversarial robustness.

Algorithm of unknown sparsity: we first apply the high-dimensional EM algorithm (Cai et al., 2019) to estimate $\mathrm{supp}(\tilde{\mu}_1 - \tilde{\mu}_2)$ from the unlabeled data. This high-dimensional EM algorithm is an extension of the traditional EM algorithm in which the M-step is replaced by a regularized maximization; a detailed description can be found in the appendix. After implementing the high-dimensional EM to estimate the support $\hat{S}$ from the unlabeled data, we project the labeled data onto this support $\hat{S}$, thereby reducing the dimension. Finally, we apply the supervised algorithm to the labeled samples with reduced dimensionality to obtain the estimates $\hat{w}_{\text{sparse}} = \frac{1}{n}\sum_{i=1}^{n} y_i [\vartheta(x_i)]_{\hat{S}}$ and $\hat{b}_{\text{sparse}} = \frac{1}{n}\sum_{i=n+1}^{2n} [\vartheta(x_i)]_{\hat{S}}$. The following theorem provides a theoretical guarantee on the robust classification error of the $\hat{\theta}_{\text{sparse}}$ produced by this algorithm.

Theorem 6. Assume the conditions of Theorem 3 on the parameters $\mu_1, \mu_2, \tilde{\mu}_1, \tilde{\mu}_2, \sigma$, and $q$. Suppose $|\mathrm{supp}(\mu_1 - \mu_2)| = |\mathrm{supp}(\tilde{\mu}_1 - \tilde{\mu}_2)| = m$ and $\min_{\tilde{\mu}_{1,j} - \tilde{\mu}_{2,j} \neq 0} |\tilde{\mu}_{1,j} - \tilde{\mu}_{2,j}| \geq \sigma\sqrt{2m \log d / \tilde{n}}$, where $\tilde{\mu}_{i,j}$ is the $j$-th entry of the vector $\tilde{\mu}_i$. If $n \gtrsim \varepsilon^2 \log d \sqrt{m}$, we have
$$\beta^{\varepsilon,\infty}(\hat{w}_{\text{sparse}}, \hat{b}_{\text{sparse}}) \leq 10^{-3} + O_P\!\left(\frac{1}{n} + \frac{1}{d}\right).$$
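A simple thresholding surrogate illustrates the support-then-project idea: estimate the support from the unlabeled data, restrict the labeled data to it, and run the supervised estimator in the reduced dimension. This is our own simplified sketch, not the paper's EM-based algorithm: for the symmetric mixture $\pm\tilde{\mu}$ the coordinate means vanish, so we detect support coordinates through their inflated second moments instead.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n, n_tilde = 500, 10, 40, 4000

supp = np.arange(m)                     # shared (unknown) support
mu = np.zeros(d); mu[supp] = 2.0        # labeled-domain mean
mu_t = np.zeros(d); mu_t[supp] = 4.0    # shifted-domain mean, same support

# unlabeled shifted data: uniform mixture of N(mu_t, I) and N(-mu_t, I)
s = rng.choice([-1, 1], size=n_tilde)
X_t = s[:, None] * mu_t + rng.standard_normal((n_tilde, d))

# support estimate: coordinates whose second moment exceeds the noise
# level 1 by a few standard errors; stands in for the high-dim EM step
second_moment = (X_t ** 2).mean(axis=0)
S_hat = np.flatnonzero(second_moment > 1.0 + 4.0 * np.sqrt(2.0 / n_tilde))

# labeled data, projected onto the estimated support, then the
# supervised split-sample estimators in the reduced dimension
y = rng.choice([-1, 1], size=2 * n)
X = y[:, None] * mu + rng.standard_normal((2 * n, d))
w_sp = np.mean(y[:n, None] * X[:n][:, S_hat], axis=0)
b_sp = np.mean(X[n:][:, S_hat], axis=0)

y_te = rng.choice([-1, 1], size=5000)
X_te = y_te[:, None] * mu + rng.standard_normal((5000, d))
acc = float(np.mean(np.sign(X_te[:, S_hat] @ w_sp - w_sp @ b_sp) == y_te))
```

Because estimation now happens in roughly $m$ dimensions rather than $d$, a small labeled sample already yields an accurate classifier, which is the mechanism behind the $O(\sqrt{d}) \to O(\sqrt{m})$ reduction.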
Compared with Theorem 2, which shows that a sample complexity of $O(\varepsilon^2\sqrt{d}/\log d)$ is necessary to achieve small robust error, the above theorem suggests that by utilizing the shared structural information from the unlabeled domain, one can reduce the sample complexity from $O(\sqrt{d})$ to $O(\sqrt{m})$. Corresponding simulation results are given in the appendix.

4 Experiments

In this section, we provide empirical support for our theory and show that using unlabeled data from shifted domains can consistently improve robust accuracy on three widely-used benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009), CINIC-10 (Darlow et al., 2018) and SVHN (Netzer et al., 2011).

Datasets (# of samples)            | Robust Acc. (%) | Clean Acc. (%)
CIFAR-10 (50k)                     | 48.4±0.7        | 86.1±0.6
CIFAR-10 (50k) + Cheap-10 (470k)   | 58.8±0.4        | 87.8±0.3
CINIC-10 (260k)                    | 54.9±0.4        | 85.2±0.3
CINIC-10 (260k) + Cheap-10 (470k)  | 57.5±0.2        | 86.1±0.3

Figure 1: (a) ℓ∞ robustness. Each row shows the test accuracy on clean and adversarially perturbed images (ε = 8/255) when the original datasets are used versus when there is an additional unlabeled source of data (Cheap-10). The robustness performance when we use the out-of-domain dataset is significantly better than with the original training set alone, in agreement with our theory. (b) SVHN dataset. Dashed lines stand for the baseline performance of using only the labeled data. Each point on the x-axis shows a different model that is robustly trained using the original data and the unlabeled set of images with additive Gaussian noise with std (σ) equal to the axis value. The dashed lines indicate the clean and robust accuracy achieved using only the SVHN data. Adding noisy data improved robust accuracy. (c) ℓ2 certified robustness.
Each point in the plot shows the percentage of test images that are certified to be classified correctly at that ℓ2 radius. Adding out-of-domain datasets consistently improves the certified radii.

Datasets. The CIFAR-10 dataset has a training set of 50k images and a test set of 10k images. The CINIC-10 dataset is a subset of ImageNet (Russakovsky et al., 2015) containing objects similar to the CIFAR-10 objects; it has 260k images (after removing CIFAR-10 test images that are in CINIC-10). As our source of unlabeled data, we use the Cheap-10 dataset that we created to be a benchmark for using very cheap unlabeled out-of-domain data (available at https://tinyurl.com/mere5j0x). We created Cheap-10 by searching keywords related to CIFAR-10 objects on the Bing image search engine (https://www.bing.com/images/). A more detailed pipeline for creating Cheap-10 is described in the appendix. The important thing to note about Cheap-10 is that it is very fast to generate (hours) and can be quite noisy due to the lack of expert curation. It is therefore a good illustration of the power of cheap, out-of-domain data. A model trained on the original CIFAR-10 data has 68% accuracy on predicting Cheap-10 labels; the number is 75% for a model trained on CINIC-10 data. Both results indicate that Cheap-10 is a related out-of-domain dataset with respect to both CIFAR-10 and CINIC-10. The SVHN dataset has 73k training and 26k test images. For the SVHN task, the original dataset contains an extra set of 531k training images. We use these extra images as our source of unlabeled data and synthetically push the data out of domain by adding random Gaussian noise.

Methods. For each task, we first train a classification model on the original labeled data using the cross-entropy loss function. We then use the trained model to assign pseudo-labels to unlabeled images. We next aggregate the two datasets to train a robust model using robust training. Following Carmon et al.
(2019), we sample half of each batch from the original data and the other half from the additional pseudo-labeled data during training. We use the robustness regularization loss introduced in Zhang et al. (2019). For a maximum allowed $\ell_p$-norm perturbation of size $\varepsilon$, we use the training loss function
$$L(x, y; \theta) = -\log p_\theta(y \mid x) + \beta \max_{\hat{x} \in B_p(x, \varepsilon)} D_{\mathrm{KL}}\!\left(p_\theta(\cdot \mid x) \,\|\, p_\theta(\cdot \mid \hat{x})\right),$$
where the regularization parameter $\beta$ balances the loss between accurate classification and stability within the $\varepsilon$ $\ell_p$-norm ball. We approximate the maximization in the second term as follows:

• Similar to Madry et al. (2017), for $\ell_\infty$ perturbations, we focus on empirical robustness of the models and use an inner loop of projected gradient descent for the maximization.

• Following Carmon et al. (2019), for $\ell_2$ perturbations, we focus on certified robustness and use the idea of stability training (Zheng et al., 2016; Li et al., 2018). We replace the maximization with large additive noise draws: $\mathbb{E}_{\hat{x} \sim N(x, \sigma^2 I)} D_{\mathrm{KL}}(p_\theta(\cdot \mid x) \,\|\, p_\theta(\cdot \mid \hat{x}))$. The idea is to obtain a model that is robust to large random perturbations. Using the method of Cohen et al. (2019), at test time we can find a safe radius of certified robust prediction for each sample.

As our first experiment, we focused on empirical robustness against $\ell_\infty$ perturbations. We used a Wide ResNet 28-10 architecture (Zagoruyko & Komodakis, 2016). Following the literature, for $\ell_\infty$ perturbations we set $\varepsilon = 8/255$. Results for empirical robustness against $\ell_\infty$ perturbations are shown in Fig. 1(a). The clean accuracy is the model's performance on non-perturbed images. The robust accuracy is the model's performance on adversarially perturbed images.
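The PGD inner loop used for the $\ell_\infty$ maximization can be written in a few lines. Below is a NumPy sketch on a linear model with logistic loss, our own minimal stand-in for the network with illustrative step size; for a linear model the loss gradient with respect to $x$ has constant sign, so signed-gradient PGD provably reaches the exact worst case:

```python
import numpy as np

def pgd_linf(x, y, w, eps, steps=40, alpha=None):
    """Projected gradient ascent on the logistic loss of f(x) = w^T x,
    constrained to the l_inf ball of radius eps around x."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        # grad_x of log(1 + exp(-y w^T x)) is -y * sigmoid(-y w^T x) * w;
        # for signed-gradient PGD only its sign matters: sign(-y * w)
        x_adv += alpha * np.sign(-y * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back to the ball
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x, y, eps = np.array([0.3, -0.1, 0.2]), 1, 0.1
x_adv = pgd_linf(x, y, w, eps)
# for a linear model the margin drops by exactly eps * ||w||_1
```

The step size $2.5\varepsilon/\text{steps}$ lets the iterate traverse the ball and saturate at the boundary, a common heuristic; for deep networks the gradient sign is not constant across steps, which is why multiple iterations matter there.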
We use one of the strongest known adversarial attack methods, iterative projected gradient descent (PGD), to create the $\ell_\infty$ perturbations. We fine-tuned the attack hyperparameters and found that using 40 iterations results in the smallest robust accuracy. More details are in the appendix. We find that using Cheap-10 consistently improves the robust accuracy. Note that, for CIFAR-10, although Cheap-10 was created in a few hours, it produced significantly better robust accuracy (58.8 ± 0.4%) than using only the original data, and similarly for CINIC-10. This strategy of using cheap noisy data to improve robustness compares favorably to state-of-the-art existing defenses applied to CIFAR-10: the TRADES method (55.4%, Zhang et al. (2019)) and Adversarial Pretraining (57.4%, Hendrycks et al. (2019)).

As our second experiment, we use the SVHN dataset and a Wide ResNet 16-8 as our model architecture. SVHN has a training set of 73k real digit images and an extended set of 531k images that comes with the dataset. The extra set is a synthetically generated set of digits made to mimic the original dataset closely. We use the extended set as our source of unlabeled data. The model we trained (normal training) on the original training set has an accuracy of 96.8% on the SVHN test set and an accuracy of 98.4% on the extra training set; this means that the extra data are very similar to the original SVHN dataset. To push the unlabeled data out of domain, we add four different levels of additive Gaussian noise to the images. We focus on $\ell_\infty$ perturbations with $\varepsilon = 4/255$. Fig. 1(b) describes the results. The dashed lines are the baselines for not having any additional unlabeled data: they show the clean and robust accuracies when only the original training set is used. It can be observed that adding the unlabeled data consistently improves robust accuracy.
As the unlabeled data distribution gets more distant from the SVHN data, the improvement achieved from adding the extra set of unlabeled images becomes smaller.

As our final experiment, we focus on certified robustness. For stability training, we used $\sigma = 0.25$. Fig. 1(c) shows the percentage of images that are certified to be classified correctly at each $\ell_2$ radius. First, we use the CIFAR-10 dataset as the labeled data and the Cheap-10 dataset as the unlabeled data. Second, we use the original SVHN training set as the labeled data and the extra set of SVHN images with additive Gaussian noise (std = 0.15) as the unlabeled data source. The figure demonstrates that adding cheap out-of-domain data consistently improves certified robustness compared to only using the original training set. More implementation details are described in the appendix.

5 Further Discussions

Incorporating cheap unlabeled data is a popular way to improve prediction performance in machine learning. In this work, we show that it can also substantially improve adversarial robustness, even when the unlabeled data come from a different domain. We prove our theoretical results for Gaussian generative models, which are very flexible (e.g., they include common deep generative models such as GANs and VAEs). Moreover, our theory is supported by our experiments using a new dataset, Cheap-10. This suggests that the vast amount of noisy out-of-domain data is a relatively untapped resource that could substantially improve the reliability of many machine learning tasks. In this work, we showed that, in general, the adversarial robustness of a semi-supervised algorithm improves when the out-of-domain distribution is similar to the labeled data distribution, and is hurt if the out-of-domain distribution is too different. One possible extension of our work is to use the aggregation idea in Li et al.
(2020) to deal with the challenging setting where the similarity between the out-of-domain distribution and the labeled data distribution is unknown a priori. Such an extension will make the results applicable to more general settings. Further, our theoretical results and analysis also lay the foundation for studying the adversarial robustness of other tasks, such as multi-class classification and linear/kernel regression in the semi-supervised setting when the unlabeled data come from a different domain. Zhun Deng ∗, Linjun Zhang ∗, Amirata Ghorbani, James Zou The focus of this work is on the effects of out-of-domain unlabeled data, and we use the popular and simple pseudo-labeling method to capture the key insights. An interesting direction of future work is to investigate how to improve robustness with other semi-supervised learning methods. For example, one could apply several iterations of pseudo-labeling to improve label quality."
  },
  {
    "url": "http://arxiv.org/abs/1906.01354v1",
    "title": "Architecture Selection via the Trade-off Between Accuracy and Robustness",
    "abstract": "We provide a general framework for characterizing the trade-off between\naccuracy and robustness in supervised learning. We propose a method and define\nquantities to characterize the trade-off between accuracy and robustness for a\ngiven architecture, and provide theoretical insight into the trade-off.\nSpecifically we introduce a simple trade-off curve, define and study an\ninfluence function that captures the sensitivity, under adversarial attack, of\nthe optima of a given loss function. We further show how adversarial training\nregularizes the parameters in an over-parameterized linear model, recovering\nthe LASSO and ridge regression as special cases, which also allows us to\ntheoretically analyze the behavior of the trade-off curve.
In experiments, we\ndemonstrate the corresponding trade-off curves of neural networks and how they\nvary with respect to factors such as number of layers, neurons, and across\ndifferent network structures. Such information provides a useful guideline to\narchitecture selection.",
    "authors": "Zhun Deng, Cynthia Dwork, Jialiang Wang, Yao Zhao",
    "published": "2019-06-04",
    "updated": "2019-06-04",
    "primary_cat": "cs.LG",
    "cats": [
      "cs.LG",
      "stat.ML"
    ],
    "main_content": "Introduction Designing effective defense mechanisms against adversarial examples is vital for machine learning [2, 4, 11, 6, 20]. Most state-of-the-art defense mechanisms are based on choosing suitable parameters for a given architecture in order to achieve a balance between accuracy and robustness [12, 18] (we use the word “architecture” to describe a model architecture such as linear regression or a convolutional neural network). Empirical studies have shown that there may exist an inherent tension between adversarial robustness and native accuracy [21]. However, we have yet to understand how to characterize the trade-off between robustness and accuracy for a given architecture and the underlying theories behind it. Our paper attempts to uncover those theories. 1.1 Characterizing the Trade-off Why is it important to characterize the inherent trade-off of an architecture? Because many questions remain open: for example, is accuracy always at the cost of robustness? How is the trade-off related to the number of neurons and layers? Besides, such a characterization may serve as a potential guide for architecture selection. Recent works show that neural networks are vulnerable to adversarial examples, even though the perturbations to the data are imperceptible to human eyes [3, 8, 13, 14]. Researchers have developed many defense mechanisms [23, 15].
Among them, several mechanisms involve choosing suitable parameters for a given neural network architecture, sacrificing a certain degree of accuracy in order to be more robust. For example, instead of choosing θ∗ := arg min_{θ∈Θ} E_{P0}[l(θ, x, y)] (notice that l depends on which architecture is being used), where l is a loss function and P0 is a distribution on the space X × Y, [12] chooses: θ∗_{ε,p} := arg min_{θ∈Θ} E_{P0}[max_{δ∈S} l(θ, x + δ, y)], where S is an ε-ball in lp-norm. [19] claims that architecture is a more important factor than size concerning robustness. It is natural to ask: what is the role of architecture with respect to the trade-off between accuracy and robustness? With different θ’s, the accuracy and robustness differ – promoting robustness may decrease accuracy. However, that trade-off is not so severe under some architectures. [16] shows that some architectures, such as the variational autoencoder, can yield a robust model even with the parameters chosen by ordinary optimization, which can be very accurate while still being relatively robust. Thus, a general guideline seems very useful – a pre-screening procedure to tell a researcher which architectures may be more promising to start with. Thus, we use the Trade-off Curve (TOC) to characterize the trade-off of an architecture by looking into the smallest native loss it can achieve, the smallest adversarial loss it can achieve, and the trade-off between accuracy and robustness as the parameters change. 1.2 Understanding the Trade-off We attempt to understand the trade-off separately in two regimes – whether ε is infinitesimal or not. For infinitesimal ε, we use the influence function, an efficient tool from robust statistics.
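The min-max objective above can be made concrete with a small sketch. Here we adversarially train a linear classifier under an l∞ ball, using the hinge loss so that the inner maximization has the closed form δ = −εy·sgn(θ); the data model, loss choice, and step sizes are our illustrative assumptions, not the setup used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 2, 0.2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                  # labels in {-1, +1}

def hinge_grad(theta, X, y):
    """Subgradient of the mean hinge loss max(0, 1 - y x^T theta)."""
    mask = (y * (X @ theta) < 1).astype(float)
    return -(mask * y) @ X / len(y)

theta = np.zeros(d)
for _ in range(500):
    # inner max: for a linear model the worst l_inf perturbation of x_i
    # is -eps * y_i * sign(theta), so it can be plugged in exactly
    X_adv = X - eps * y[:, None] * np.sign(theta)[None, :]
    theta -= 0.1 * hinge_grad(theta, X_adv, y)

clean_acc = np.mean(np.sign(X @ theta) == y)
```

Note that the adversarial term effectively shrinks the coordinates of θ that carry no signal, a small-scale preview of the regularization effect analyzed in Section 3.2.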
We adopt its spirit and develop an influence function of adversarial attack (IFA) in order to capture how the parameters change as the input data are adversarially perturbed. It can further give us insight into what factors affect the sensitivity of the parameters with respect to adversarial attack. Using the IFA, we can obtain a closed-form approximation to the trade-off quantity we define later. When ε is not infinitesimal, we illustrate how over-parameterization affects the trade-off. We show that, even in a simple natural data model, over-parameterization can cause a gap between accuracy and robustness no matter how small the perturbation is. We further explore how adversarial training imposes regularization on the loss function and how adversarial training is related to classic statistical model selection. Understanding adversarial training is beneficial for theoretically characterizing the trade-off curve. 1.3 Related Work There has been some other work exploring the trade-off between accuracy and robustness, which brings us great insight. [21] provides experimental evidence that accuracy may be at odds with robustness and uses a simple example as a proof of concept for theoretical analysis. In [19], the authors thoroughly benchmark 18 ImageNet models using multiple robustness metrics; they mainly focus on experimentally studying the trade-off. A closely related work by Zhang et al. [24] identifies a trade-off between robustness and accuracy, but their aim is mainly to use the trade-off as a guiding principle in the design of defenses against adversarial examples for a fixed architecture, while the trade-off we identify is mainly for characterizing different architectures. 2 How to Characterize the Trade-off Consider the supervised learning problem from some input space X (e.g., images) to an output space Y (e.g., labels). Let P0 denote the underlying distribution of (x, y) ∈ X × Y.
We also assume we are given a suitable loss function l(θ, x, y; A), for example, the cross-entropy loss for a neural network, where θ is the parameter and we specify the architecture A by explicitly writing it out in the loss function expression. For fixed θ, we use the expected loss on clean data at θ to quantify accuracy, and use the expected adversarial loss to quantify robustness. We do not directly use accuracy here since the loss is usually negatively correlated with accuracy, and is easier to analyze theoretically. Specifically, we denote α(θ) := E_{P0}[l(θ, x, y; A)], β(θ) := E_{P0}[max_{δ∈S} l(θ, x + δ, y; A)], where we adopt the most commonly used formulation of S – an ε-ball in lp-norm for 1 ≤ p ≤ ∞. Then, for each θ, we can quantify accuracy and robustness by α(θ) and β(θ), respectively. In order to characterize the trade-off of an architecture, the ideal way is to examine (α(θ), β(θ)) for every possible θ, which is impossible to implement. A better way is to examine the Pareto frontier, since we actually only care about those parameters which are not sub-optimal. Here a parameter is “sub-optimal” if another parameter obtains the same performance under adversarial attack while having better performance on clean data. Trade-off Curve: We start by choosing the two end points of the trade-off curve. We denote θ∗ as one of the minimizers of α(θ) that has the lowest value of adversarial loss. Similarly, we denote θ∗_ε as one of the minimizers of β(θ) that has the lowest value of native loss. We build a Cartesian coordinate system whose horizontal axis represents the expected native loss and whose vertical axis represents the expected loss under adversarial attack.
Then, we denote the point (α(θ∗), β(θ∗)) as P1 and (α(θ∗_ε), β(θ∗_ε)) as P2. For each θ, its coordinates are (α(θ), β(θ)), and by the nature of θ∗ and θ∗_ε, α(θ) ≥ α(θ∗) and β(θ) ≥ β(θ∗_ε) for all θ ∈ Θ. The Pareto frontier is defined as the boundary where each point (α(θ_b), β(θ_b)) lying on it has the property: for any θ ∈ Θ, if β(θ) = β(θ_b), then α(θ) ≥ α(θ_b); if α(θ) = α(θ_b), then β(θ) ≥ β(θ_b), as Figure 1 shows. (Figure 1: Illustration of the Pareto Frontier.) The inherent goodness of an architecture is fully characterized by the efficient frontier, since this curve is the collection of “good” points. Any point (α(θ), β(θ)) has corresponding points on the frontier which perform no worse than it in both accuracy and robustness. The methodology above requires us to solve the following optimization: min_θ β(θ), s.t. α(θ) = γ, for α(θ∗) ≤ γ ≤ α(θ∗_ε). In implementation, we sample the points on the curve and then link them. Thus, we only need to solve the following class of optimizations: min_θ ξβ(θ) + (1 − ξ)α(θ) (1) for ξ ∈ (0, 1), since it can be proven that for all ξ ∈ (0, 1), the solution to the optimization in (1) lies on the trade-off curve. However, we should be more careful when dealing with the endpoints, since (α(θ), β(θ)) obtained by simply optimizing β(θ) or α(θ) may not lie on the frontier. Thus, we choose to optimize (1) with ξ very close to 0 and 1 to approximate both end points of the trade-off curve.
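As a sketch of how the curve is traced in practice, the snippet below sweeps ξ in the scalarized objective (1) for a toy one-dimensional linear model, where the inner adversarial maximum for squared loss under an l∞ attack of size ε has the closed form (|y − θx| + ε|θ|)²; the data model y = 2x, the attack size, and the grid search are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x                        # toy data model y = theta* x
eps = 0.5

def alpha(t):                      # expected native (clean) loss
    return np.mean((y - t * x) ** 2)

def beta(t):                       # worst-case l_inf loss; the inner max
    return np.mean((np.abs(y - t * x) + eps * abs(t)) ** 2)

thetas = np.linspace(-1.0, 3.0, 2001)
curve = []
for xi in (0.001, 0.25, 0.5, 0.75, 0.999):   # scalarization weights
    obj = [xi * beta(t) + (1 - xi) * alpha(t) for t in thetas]
    t_xi = thetas[int(np.argmin(obj))]
    curve.append((alpha(t_xi), beta(t_xi)))
```

By the standard scalarization argument, as ξ increases the resulting points move along the frontier: the native loss α is nondecreasing and the adversarial loss β is nonincreasing.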
To sum up, we can simply vary ξ and link the obtained points to get an approximation to our trade-off curve. In order to further investigate the trade-off of a given architecture theoretically, we take the two endpoints θ∗_ε and θ∗, and we further quantify the trade-off using the following quantity: ∆(A) := E_{P0}[l(θ∗_ε, x, y; A)] − E_{P0}[l(θ∗, x, y; A)]. The meaning of the above quantity is: if we choose a model under an architecture and make it as robust as possible, how much worse will we do on clean data compared to the model selected to be as accurate as possible. If this quantity is small, we would expect the architecture not to suffer a great trade-off under adversarial attack. With smoothness and compactness conditions, we can obtain ∆(A) = E_{P0}[l(θ∗_ε, x, y; A)] − E_{P0}[l(θ∗, x, y; A)] = (1/2)(θ∗_ε − θ∗)^T E_{P0}[∇²l(θ∗, x, y; A)](θ∗_ε − θ∗) + o(∥θ∗_ε − θ∗∥²). Thus, understanding the behavior of θ∗_ε − θ∗ is key to understanding the trade-off. Such a characterization is given in the following section. 3 Theoretical Results to Understand the Trade-off In this section, we attempt to explore the underlying theory of the trade-off under adversarial attack. We mainly focus on the following two aspects: the sensitivity of the minimizer, and the role of adversarial training in the over-parameterized case, which in turn will help us understand the trade-off. Suppose we have a training data set (X^t, Y^t) := ({x^t_i}^{n_t}_{i=1}, {y^t_i}^{n_t}_{i=1}).
We de\ufb01ne \u02c6 \u03b1, \u02c6 \u03b2 as follows: \u02c6 \u03b1(\u03b8, Xt, Y t; A) := 1 n n X i=1 l(\u03b8, xt i, yt i; A), \u02c6 \u03b2(\u03b8, Xt, Y t; A) := 1 n n X i=1 l(\u03b8, xt i + \u03b4i, yt i; A), where \u2225\u03b4i\u2225p \u2264\u03b5 and {\u03b4i}n i=1 are functions of \u03b8 that maximizes \u02c6 \u03b2(\u03b8, Xt, Y t; A) for a given \u03b8. We temporally assume \u02c6 \u03b1 and \u02c6 \u03b2 both have unique global minimizer (but are not necessarily convex), and will relax the assumption later in subsection 3.1. The uniqueness assumption can also be formulated in a more elementary way: for example, if we assume smoothness of loss function \u03b1 over X \u00d7 \u0398, compactness of \u0398 and we only have one global minimum for \u03b1 which lies in the interior of \u0398, with positive de\ufb01nite Hessian matrix, and it is well-separated, which means \u2200\u03c9 > 0, there exists \u03ba > 0, such that \u2200\u03b8 , if \u2225\u03b8 \u2212\u03b8\u2217\u2225> \u03c9, we have |\u03b1(\u03b8, Xt, Y t; A) \u2212\u03b1(\u03b8\u2217, Xt, Y t; A)| > \u03ba, we can obtain that \u02c6 \u03b8 is also a global minimum if sample size is large enough. When the attack size \u03b5 is small, we would also have that \u02c6 \u03b8\u03b5 is unique. Then, we can denote \u02c6 \u03b8 := arg min \u03b8\u2208\u0398 \u02c6 \u03b1(\u03b8, Xt, Y t; A), \u02c6 \u03b8\u03b5 := arg min \u03b8\u2208\u0398 \u02c6 \u03b2(\u03b8, Xt, Y t; A). This gives us the sample version of the trade-off between accuracy and robustness of a given architecture A: \u02c6 \u2206(A) = 1 2(\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8)T H\u02c6 \u03b8(Xe, Y e)(\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8) + o(\u2225\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8\u22252 2), where H\u02c6 \u03b8(Xe, Y e) = 1/n \u2032 Pn \u2032 i=1 \u22072 \u03b8l(\u02c6 \u03b8, xe i, ye i ; A) denotes the hessian matrix of loss on evaluation data set (Xe, Y e) := ({xe i}n \u2032 i=1, {ye i }n \u2032 i=1). 
3.1 In\ufb02uence Function of Adversarial Attack We want to theoretically understand what is the effect on optimal loss function when our input data are adversarially perturbed a little bit. In order to do that, as we mentioned above, understanding \u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8 is the key since we can see from the expression of \u02c6 \u2206(A) , \u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8 is an important component. In the spirit of the in\ufb02uence function, we use the following concept,which we call the in\ufb02uence function of adversarial attack (IFA), to capture the idea of studying models through their training data. Notice that our concept is different from the one proposed by [1] called adversarial in\ufb02uence function (AIF), since in their setting the adversary is trying to maliciously interfere with parameter estimation while in our setting the adversary is trying to interfere with prediction power. For simplicity, we omit the supscript for X and Y here, and only use X, Y to illustrate our point. We de\ufb01ne IFA := lim \u03b5\u21920 d\u02c6 \u03b8\u03b5 d\u03b5 . 4 \fFor given X and Y , we can also view 1 n Pn i=1 l(\u03b8, xi + \u03b4i, yi; A) as g(\u03b8, P), where P := \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed \u2212\u03b4T 1 \u2212 \u2212\u03b4T 2 \u2212 . . . \u2212\u03b4T n \u2212 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8. We can obtain: Theorem 1. 
If X, Y and \u0398 are compact, \u02c6 \u03b1 is three times continuously differentiable on \u0398 \u00d7 X, the Hessian matrix H\u02c6 \u03b8(X, Y ) is positive de\ufb01nite, and g(\u00b7, P) is differentiable for every P and \u2207\u03b8g(\u03b8, P) and \u02c6 \u03b8 lies in the interior of \u0398, with non-zero \u2207x \u02c6 \u03b1(\u02c6 \u03b8, X, Y ), then we have IFA = \u2212H\u22121 \u02c6 \u03b8 (X, Y )\u03a6, where \u03a6 = 1 n Pn i=1 \u2207x,\u03b8l(\u02c6 \u03b8, xi, yi)\u03c6i and \u03c6i = (\u03c81, \u03c82, \u00b7 \u00b7 \u00b7 , \u03c8m)T , where \u03c8k = bq\u22121 k (Pm k=1 bq k) 1 p sgn( \u2202 \u2202x\u00b7,k l(\u02c6 \u03b8, xi, yi)). Here, we have bk = | \u2202 \u2202x\u00b7,k l(\u02c6 \u03b8, xi, yi)|, x\u00b7,k is the k-th coordinate of vector x, for instance, xj = (xj,1, xj,2, \u00b7 \u00b7 \u00b7 , xj,m)T . d is the dimension of \u03b8 and m is the dimension of xk. This theorem can be used as a theoretical tool to understand how sensitive the optima of loss function is under adversarial attack and we can use \u03b5\u00b7IFA to approximate \u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8. Notice that as long as the unique global minimum \u03b8\u2217is well-separated, by continuity, if the sample size is large enough, the Hessian H\u02c6 \u03b8 is positive de\ufb01nite. Besides, the theorem can be extended beyond the case \u2207x \u02c6 \u03b1(\u02c6 \u03b8, X, Y ) \u0338= 0. In those cases, we need to derive higher order in\ufb02uence function. We leave it to future work. Example 1 (Shallow Neural Network). Consider the architecture studied in [5], which has the following form: f(x, W) = Pk j=1 aj\u03c3(wT j x), where aj\u2019s are constant scalars and wj is a vector with the same dimension of x, both are m. W is the weight matrix whose j-th column is wj. \u03c3 is the quadaratic activation function \u03c3(z) = z2. 
The loss function is taken as L(W, X, Y ) = 1 n n X i=1 (yi \u2212f(xi, W))2 + \u00b5 2 \u2225W\u22252 F , where \u2225\u00b7 \u2225F is the Frobenius norm and \u00b5 > 0. Let us denote \u02c6 W = arg minW L(W, X, Y ) and assume \u2207W L( \u02c6 W, X, Y ) = 0, \u22072 W L( \u02c6 W, X, Y ) \u227b0. In that case, \u03b8 = (wT 1 , wT 2 , \u00b7 \u00b7 \u00b7 , wT k )T and we can obtain IFA accordingly: IFA = ( 1 n n X i=1 (yi \u2212f(xi, \u02c6 W) \u2212\u2207\u03b8f(xi, \u02c6 W)\u2207\u03b8f(xi, \u02c6 W)T + \u00b5I)\u22121\u03a6, \u03a6 = 1 n n X i=1 (yi \u2212f(xi, \u02c6 W))\u2207xi,\u03b8f(xi, \u02c6 W)\u03c6i \u2212\u2207\u03b8f(xi, \u02c6 W)\u2207xf(xi, \u02c6 W)T \u03c6i. More detailed expression is attached in the supplementary material. Recall when \u03b5 is small, the trade-off we de\ufb01ned yields \u02c6 \u2206(A) \u22481 2(\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8)T H\u02c6 \u03b8(Xe, Y e)(\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8) \u22481 2\u03a6T H\u22121 \u02c6 \u03b8 (Xt, Y t)H\u02c6 \u03b8(Xe, Y e)H\u22121 \u02c6 \u03b8 (Xt, Y t)\u03a6\u03b52. where we further by approximate (\u02c6 \u03b8\u03b5 \u2212\u02c6 \u03b8) by \u2212H\u22121 \u02c6 \u03b8 (Xt, Y t)\u03a6\u03b5. If we denote \u03bbe min and \u03bbe max as the minimuml and maximal eigenvalue of H\u02c6 \u03b8(Xe, Y e), \u03bbt min and \u03bbt max correspondingly with respect to H\u02c6 \u03b8(Xt, Y t), and we further assume H\u02c6 \u03b8(Xe, Y e) is positive de\ufb01nite, we have bounds for \u03a6T H\u22121 \u02c6 \u03b8 (Xt, Y t)H\u02c6 \u03b8(Xe, Y e)H\u22121 \u02c6 \u03b8 (Xt, Y t)\u03a6: \u03bbe min (\u03bbt max)2 \u03a6T \u03a6 \u2264\u03a6T H\u22121 \u02c6 \u03b8 (Xt, Y t)H\u02c6 \u03b8(Xe, Y e)H\u22121 \u02c6 \u03b8 (Xt, Y t)\u03a6 \u2264 \u03bbe max (\u03bbt min)2 \u03a6T \u03a6. As data size increases, we would expect \u03bbt min/\u03bbe min and \u03bbt max/\u03bbe max goes to 1. 
We can see the trade-off is closely related to the spectrum of Hessian matrix and \u03a6 and enjoys a quadratic form. 5 \f3.1.1 More General Cases If by running SGD on \u02c6 \u03b1, we obtain \u02dc \u03b8, it is possible that \u02dc \u03b8 is neither a global minimum, nor a unique global minimum. If we hope to theoretically analyze the sensitivity of the optimal loss function under adversarial attack at \u02dc \u03b8, we can do as [9] to study a surrogate loss. Speci\ufb01cally, we form a convex quadratic approximation of the loss around \u02dc \u03b8, i.e., \u02dc \u03b1(\u03b8, X, Y ; A) := \u02c6 \u03b1(\u02dc \u03b8, X, Y ; A) + \u2207\u03b8 \u02c6 \u03b1(\u02dc \u03b8, X, Y ; A)T (\u03b8 \u2212\u02dc \u03b8) + 1 2(\u03b8 \u2212\u02dc \u03b8)T (H\u02dc \u03b8(X, Y ) + \u00b5I)(\u03b8 \u2212\u02dc \u03b8). Here \u00b5 is a damping term to make H\u02dc \u03b8 is positive de\ufb01nite, which corresponds to adding L2 regularization on \u03b8. To sum up, in subection 3.1, we study the sensitivity of minimizer of loss function under adversarial attack, where the attack size is in\ufb01nitesimal. Besides, we assume there are enough data. The cases of larger attack size and over-parameterization are hard to analyze, thus, we choose to study linear model to shed some light on adversarial training in subsection 3.2. 3.2 Regularization by Adversarial Training in Over-Parameterized Case It has been believed the adversarial training helps regularize the model. We dedicate this paragraph to theoretically illustrate the regularization imposed by adversarial training in over-parametrized linear model, which can help us character the corresponding trade-off curve. For simplicity, we consider the over-parametrized linear model, where the data generating process follows Y = X\u03b8\u2217, where d > n. Here X = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed \u2212xT 1 \u2212 \u2212xT 2 \u2212 . . . \u2212xT n\u2212 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8, Y = \uf8eb \uf8ec \uf8ec \uf8ed y1 y2 . 
. . yn \uf8f6 \uf8f7 \uf8f7 \uf8f8. yi is a scalar, and xi and \u03b8\u2217are both vectors of dimension d. We consider the loss \u2225Y \u2212X\u03b8\u22252 2. In this case, it is easy to see the generalization error can be very large if we just do the simplest standard training. That is because if we minimize \u2225Y t \u2212Xt\u03b8\u22252 2, since d > n, it is possible that the design matrix Xt has multiple sub-matrices with rank n, so that there are in\ufb01nite number of minimizers. Example 2. Let the design matrix be a Bernoulli matrix, with i.i.d. entries with probability 0.5 equals 1, and 0.5 equals \u22121. Let d > n. With high probability, there exists in\ufb01nitely many of minimizers, and for any B > 0, there exists a minimizer \u02c6 \u03b8B such that the loss on evaluation data \u2225Y e \u2212Xe\u02c6 \u03b8B\u22252 2 > B. However, if we do adversarial training, it can regularize the model. We \ufb01rstly study the l\u221e-bounded case, where we require \u2225\u03b4i\u2225\u221e\u2264\u03b5. We denote the set of n \u00d7 d matrices, in which the in\ufb01nity norm of each row is bounded by \u03b5 as \u039e. Then, given a data set (X, Y ), the adversarial training can be written as the following optimization: min \u03b8\u2208\u0398 max P \u2208\u039e 1 n\u2225Y \u2212(X + P)\u03b8\u22252 2. (2) Theorem 2. Assume Y = X\u03b8\u2217, the optimization in (2) is equivalent to the following optimization: min \u03b8\u2208\u0398 \u2225\u03b8\u22251, s.t. 1 n n X i=1 (yi \u2212xT i \u03b8)2 \u2264b2, for some b such that 0 \u2264b \u2264\u03b5\u2225\u03b8\u2217\u22251, which is equivalent to LASSO: min\u03b8\u2208\u0398 1 n Pn i=1(yi \u2212xT i \u03b8)2 + \u03bb\u2225\u03b8\u22251 for some \u03bb. Under some regularity conditions, LASSO can have really good behavior, which can help us see the power of adversarial training. 
Speci\ufb01cally, we assume: \u03b8\u2217has support S \u22821, 2, \u00b7 \u00b7 \u00b7 , p and |S| = s, (s-sparsity) (3) 1 n\u2225X\u2206\u22252 2 \u2265\u03c4\u2225\u2206\u22252 2 for all \u2206\u2208C\u03b6(S), (restricted eigenvalue condition) (4) 6 \fwhere S is the support of \u03b8\u2217, C\u03b6(S) := {\u2206\u2208Rp|\u2225\u2206Sc\u22251 \u2264\u03b6\u2225\u2206S\u22251} and \u2206S is a vector has the same entries with \u2206on S and 0 elsewhere. Regularity condition (3) is very commonly seen in over-parameterized models, which requires the true parameters actually sit on a low-dimensional manifold. For (4), some commonly seen random matrices can satisfy it with high probabilty. For example, the random matrix with i.i.d. N(0, 1) entries which satis\ufb01es restricted isometry property is one way to certify the restricted eigenvalue properties with high probability. Corollary 1. If we further assume (3) and (4) with \u03b6 \u22651, if \u02c6 \u03b8\u03b5 is a minimizer of (2), we can obtain \u2225\u02c6 \u03b8\u03b5 \u2212\u03b8\u2217\u2225\u2264 b \u221a\u03c4 b \u2264\u03b5\u2225\u03b8\u2217\u2225 \u221a\u03c4 . (5) From (5), we can see, unlike directly minimize \u2225Y \u2212X\u03b8\u22252 2, \u02c6 \u03b8\u03b5 has to be close to \u03b8\u2217. Similarly, the arguments above can be extended beyond l\u221e-bounded attack. Let \u03a0 be the set of n \u00d7 d matrices whose rows are all bounded by \u03f5 in lp norm. The adversarial training is to optimize: min \u03b8\u2208\u0398 max P \u2208\u03a0 1 n\u2225Y \u2212(X + P)\u03b8\u22252 2, (6) and we can transform the optimization in (6) to min \u03b8\u2208\u0398 \u2225\u03b8\u2225q, s.t. 1 n n X i=1 (yi \u2212xT i \u03b8)2 \u2264b2, for some b such that 0 \u2264b \u2264\u03b5\u2225\u03b8\u2217\u2225q, where 1 p + 1 q = 1. When q = 2, it is equivalent to a ridge regression. In this section, we can see how adversarial training can be related to classic method in statistics. 
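The equivalence in Theorem 2 hinges on the inner maximization evaluating to (1/n)Σ_i (|y_i − x_iᵀθ| + ε∥θ∥₁)². This can be sanity-checked numerically, since for each row the worst-case perturbation sits at a vertex of the l∞ ball; the small dimensions and random data below are illustrative assumptions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, d, eps = 5, 3, 0.3
X = rng.normal(size=(n, d))
theta = rng.normal(size=d)
y = X @ rng.normal(size=d)         # some response vector

# closed form of max_P (1/n) ||Y - (X + P) theta||^2 from Theorem 2
closed = np.mean((np.abs(y - X @ theta) + eps * np.abs(theta).sum()) ** 2)

# brute force: the worst perturbation of each row is a vertex of the
# l_inf ball, so enumerate all 2^d sign patterns per row
brute = np.mean([
    max((y[i] - (X[i] + eps * np.array(s)) @ theta) ** 2
        for s in product((-1.0, 1.0), repeat=d))
    for i in range(n)
])
```

The two quantities agree to machine precision, because the adversary perturbs each row independently and the squared residual is a convex function of the perturbation, so its maximum over the ball is attained at a vertex.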
It is worth exploring in the future the underlying theory about regularization imposed by adversarial training. We end up this subsection by giving the following characterization of the trade-off curve: Theorem 3. Under lp-bounded attack with size \u03b5, if Y = X\u03b8\u2217 min \u03b8 \u03be \u02c6 \u03b2(\u03b8, X, Y ; A) + (1 \u2212\u03be)\u02c6 \u03b1(\u03b8, X, Y ; A) (7) can be transformed into min \u03b8\u2208\u0398 \u2225\u03b8\u2225q, s.t. 1 n n X i=1 (yi \u2212xT i \u03b8)2 \u2264b2, for some b such that 0 \u2264b \u2264\u03b5\u221a\u03be\u2225\u03b8\u2217\u2225q. In the case p = \u221e, if we further assume (3) and (4) with \u03b6 \u22651, if \u02c6 \u03b8\u03be is a minimizer of the sample version of optimization (1), we can obtain \u2225\u02c6 \u03b8\u03be \u2212\u03b8\u2217\u2225\u2264\u03be\u03b5\u2225\u03b8\u2217\u2225 \u221a\u03c4 . (8) 4 Numerical Experiments We study the trade-off curves varying the following neural network parameters: the number of layers (depth), the number of \ufb01lters (width) and neural network architectures (simple convolutional, VGG [17] and ResNet-50 [7]); and the following attack parameters: size of the attack and attacks norms (l2 and l\u221e). We use VGG-16 with 64 \ufb01lters in the \ufb01rst layer as the default architecture. We use 0.3 as the default attack size for l2 attacks and 0.015 for l\u221eattacks. We do all our experiments on CIFAR-10, which has 60000 images of 32 \u00d7 32 \u00d7 3 [10], and we normalize the input images to [0, 1] in all experiments. We study two trade-off curves: one for test set loss and the other for test set accuracy. We set \u03be to be [0.001, 0.25, 0.50, 0.75, 0.999]. As explained in section 2, we use 0.001 and 0.999 instead of 0 and 1 for end points. Figures 2 and 3 show all testing results. In Figure 2, we \ufb01x the neural network architecture to be VGG-16 and change other parameters. 
Overall, we observe that there indeed exists an inherent tension between robustness and accuracy. Besides, we found that as we increase the capacity of networks (e.g., larger depth or width), we achieve higher accuracy on both adversarial examples and legitimate ones, but the adversarial accuracy we need to sacrifice to reduce native loss also increases. (Figure 2: (A) VGG-16 with different width under l2 attack; (B) VGG with different depth under l2 attack – we use the original VGG-16 and VGG-19 architectures in [17] and cut VGG-16 at the 4th conv layer, combined with a fully connected layer, to obtain VGG-5; (C) VGG-16 under l2 and l∞ attack; (D) VGG-16 with different attack size under l2 attack.) As we increase the attack size, the adversarial accuracy we need to sacrifice for native accuracy increases. In addition, we found that compared with l2 attacks, the trade-off between adversarial accuracy and native accuracy under l∞ attacks is more severe. Figure 3(E) shows the results across different types of networks and 3(F) shows the results for the simple convolutional architecture with different width under l∞ attack. As we can see, the best adversarial accuracy using VGG-16 is almost the same as that of ResNet-50 even though the capacities are very different, which confirms the observation by Su et al.
[19] that network type is a more important factor than size with respect to robustness. (Figure 3: (E) comparison between different types of networks (VGG-16 and ResNet-50) under l2 attack; (F) simple convolutional neural network with different width under l∞ attack.) 5 Conclusion We identify a trade-off between accuracy and robustness and explore the underlying theory behind it. Specifically, we derive a theoretical tool to understand how the optimal loss function changes when the input data are adversarially perturbed. Besides, we show how adversarial learning helps regularize the linear model. We also empirically visualize the trade-off and relate it to previous works. Supplementary Material Proof of Theorem 1 In order to prove Theorem 1, let us first state Danskin's theorem. Lemma 1 (Danskin). Let B be a nonempty compact topological space and h : R^d × B → R be such that h(·, δ) is differentiable for every δ ∈ B and ∇_θ h(θ, δ) is continuous on R^d × B. Also, let δ∗(θ) = {δ ∈ arg max_{δ∈B} h(θ, δ)}. Then the corresponding max-function ξ(θ) = max_{δ∈B} h(θ, δ) is locally Lipschitz continuous, directionally differentiable, and its directional derivatives satisfy ξ′(θ, r) = sup_{δ∈δ∗(θ)} r^T ∇_θ h(θ, δ). In particular, if for some θ ∈ R^d the set δ∗(θ) = {δ∗_θ} is a singleton, the max-function is differentiable at θ and ∇ξ(θ) = ∇_θ h(θ, δ∗_θ).
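Danskin's theorem can be illustrated numerically: for h(θ, δ) = θδ − δ² over δ ∈ [−1, 1], the maximizer is δ∗(θ) = θ/2 whenever |θ| ≤ 2, so the lemma predicts ∇ξ(θ) = ∂h/∂θ at δ∗, i.e. exactly δ∗(θ). The toy check below is our own illustration, not part of the proof:

```python
import numpy as np

def h(theta, delta):
    return theta * delta - delta ** 2

def xi(theta, grid=np.linspace(-1.0, 1.0, 200001)):
    """Max-function xi(theta) = max over delta in [-1, 1] of h(theta, delta)."""
    return float(np.max(h(theta, grid)))

theta0 = 0.8
delta_star = theta0 / 2.0            # unique maximizer for |theta| <= 2
danskin_grad = delta_star            # d/dtheta h(theta, delta) = delta at delta*
fd_grad = (xi(theta0 + 1e-5) - xi(theta0 - 1e-5)) / 2e-5   # finite difference
```

The finite-difference slope of the max-function matches the Danskin prediction, even though ξ is defined through an inner maximization.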
By this lemma, we can easily obtain the following lemma: Lemma 2. For any \u02dc \u03b8 that minimize \u03be(\u03b8) and lying in the interior, we can obtain \u2207\u03b8h(\u02dc \u03b8, \u03b4) = 0. Proof. Since \u02dc \u03b8 minimizes \u03be(\u03b8) and lies in the interior of \u0398 , we can obtain \u03be \u2032(\u02dc \u03b8, r) = 0 for any direction vector r. If there is a \u03b4 \u2208\u03b4\u2217(\u02dc \u03b8), such that \u2207\u03b8h(\u02dc \u03b8, \u03b4) \u0338= 0, then we take r = \u2207\u03b8h(\u02dc \u03b8, \u03b4)/\u2225\u2207\u03b8h(\u02dc \u03b8, \u03b4)\u2225, we have \u03be \u2032(\u02dc \u03b8, r) = sup \u03b4\u2208\u03b4\u2217(\u02dc \u03b8) rT \u2207\u03b8h(\u02dc \u03b8, \u03b4) \u2265\u2225\u2207\u03b8h(\u02dc \u03b8, \u03b4)\u22252 > 0, which is contradictory to the fact \u2207\u03b8h(\u02dc \u03b8, \u03b4) = 0. Now we are ready to give the formal proof. In order for simplicity, we here use l(\u03b8, x, y) instead of l(\u03b8, x, y; A). With lemma 2, we can obtain that 1 n n X i=1 \u2207\u03b8l(\u03b8, xi + \u03b4i, yi)|\u03b8=\u02c6 \u03b8\u03b5 = 0. With Taylor expansion and under the assumption of theorem 2, we can obtain 0 = 1 n n X i=1 [\u2207\u03b8l(\u02c6 \u03b8\u03b5, xi, yi) + \u2207x,\u03b8l(\u02c6 \u03b8\u03b5, xi, yi)\u03b4i + O(\u2225\u03b4\u22252 2)]. Here the assumption of compactness and continuity can help us to write the remainder 1 2\u03b4T i H\u02dc \u03b8\u03b4i into O(\u2225\u03b4\u22252 2) since we can bound every entry of H\u02dc \u03b8. We use the same property again and again and we will not reiterate it. Now, let us perform taylor expansion on \u2207\u03b8l(\u02c6 \u03b8\u03b5, xi, yi) and \u2207x,\u03b8l(\u02c6 \u03b8\u03b5, xi, yi). 
∇_θ l(θ̂_ε, x_i, y_i) = ∇_θ l(θ̂, x_i, y_i) + ∇_θ^2 l(θ̂, x_i, y_i)(θ̂_ε − θ̂) + O(‖θ̂_ε − θ̂‖_2^2) and ∇_{x,θ} l(θ̂_ε, x_i, y_i) = ∇_{x,θ} l(θ̂, x_i, y_i) + O(‖θ̂_ε − θ̂‖_2). Thus, by simple algebraic manipulation, θ̂_ε − θ̂ + O(‖θ̂_ε − θ̂‖_2^2) = (−(1/n) Σ_{i=1}^n ∇_θ^2 l(θ̂, x_i, y_i))^{−1} ((1/n) Σ_{i=1}^n ∇_{x,θ} l(θ̂, x_i, y_i) δ_i + O(‖θ̂_ε − θ̂‖_2 ‖δ_i‖_2)). Dividing both sides by ε, the limit of the right-hand side as ε → 0 exists provided that lim_{ε→0} δ_i/ε exists. Thus, ‖θ̂_ε − θ̂‖/ε cannot go to infinity as ε → 0; in other words, the IFA must exist. Notice that δ_i here should actually be δ_{ε,i}, which converges to the δ_i ∈ S maximizing ∇_x l(θ̂, x_i, y_i) · δ_i. By Hölder's inequality, δ_{i,k} = (b_k^{q−1} / (Σ_{k=1}^m b_k^q)^{1/p}) sgn(∂l(θ̂, x_i, y_i)/∂x_{·,k}) ε, where b_k = |∂l(θ̂, x_i, y_i)/∂x_{·,k}|. Thus, our whole argument stands, including the existence of the limit. Therefore, as ε → 0, we obtain dθ̂_ε/dε = (−(1/n) Σ_{i=1}^n ∇_θ^2 l(θ̂, x_i, y_i))^{−1} Φ, where Φ = (1/n) Σ_{i=1}^n ∇_{x,θ} l(θ̂, x_i, y_i) φ_i and φ_i = (ψ_1, ψ_2, · · · , ψ_m)^T, where ψ_k = b_k^{q−1} / (Σ_{k=1}^m b_k^q)^{1/p}. Proof of Theorem 2 The proof of Theorem 3 is almost the same, using Hölder's inequality.
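The Hölder step above, which gives the closed form δ_{i,k} = (b_k^{q−1}/(Σ_k b_k^q)^{1/p}) sgn(·) ε, can be sanity-checked numerically. The sketch below (ours, not from the paper; the gradient vector and budget are made up) verifies that this δ attains sup_{‖δ‖_p ≤ ε} ⟨g, δ⟩ = ε‖g‖_q with ‖δ‖_p = ε, where 1/p + 1/q = 1:

```python
import math

def holder_maximizer(g, eps, p):
    # delta_k = b_k^{q-1} / (sum_k b_k^q)^{1/p} * sgn(g_k) * eps, b_k = |g_k|,
    # matching the closed form for delta_{i,k} in the proof
    q = p / (p - 1.0)  # Hoelder conjugate exponent
    b = [abs(gk) for gk in g]
    denom = sum(bk ** q for bk in b) ** (1.0 / p)
    return [eps * math.copysign(bk ** (q - 1) / denom, gk) for gk, bk in zip(g, b)]

g, eps, p = [3.0, -1.0, 2.0], 0.5, 2.0
q = p / (p - 1.0)
delta = holder_maximizer(g, eps, p)
inner = sum(gk * dk for gk, dk in zip(g, delta))
# attains the Hoelder bound eps * ||g||_q with ||delta||_p = eps
assert abs(inner - eps * sum(abs(gk) ** q for gk in g) ** (1.0 / q)) < 1e-9
assert abs(sum(abs(dk) ** p for dk in delta) ** (1.0 / p) - eps) < 1e-9
```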
Let us focus on calculating max_{P∈Ξ} (1/n)‖Y − (X + P)θ‖_2^2. We have (1/n)‖Y − (X + P)θ‖_2^2 = (1/n)(‖Y − Xθ‖_2^2 − 2(Y − Xθ)^T Pθ + ‖Pθ‖_2^2). Since Pθ = (Σ_{j=1}^p p_{1j}θ_j, Σ_{j=1}^p p_{2j}θ_j, · · · , Σ_{j=1}^p p_{nj}θ_j)^T, by choosing p_{ij} = sgn(θ_j) sgn(x_i^T θ − y_i) ε we obtain the maximum max_{P∈Ξ} (1/n)‖Y − (X + P)θ‖_2^2 = (1/n) Σ_{i=1}^n (|y_i − x_i^T θ| + ε‖θ‖_1)^2. Thus, the optimization problem in (2) can be rewritten as min_{θ∈Θ} (1/n) Σ_{i=1}^n (|y_i − x_i^T θ| + ε‖θ‖_1)^2. Let θ̂_ε be a minimizer of the above optimization problem. Since plugging in θ* yields the loss ε^2‖θ*‖_1^2, θ̂_ε satisfies (1/n) Σ_{i=1}^n (|y_i − x_i^T θ̂_ε| + ε‖θ̂_ε‖_1)^2 ≤ ε^2‖θ*‖_1^2. It is then easy to see that θ̂_ε is also the solution of the following oracle optimization problem: min_{θ∈Θ} ‖θ‖_1, s.t. (1/n) Σ_{i=1}^n (y_i − x_i^T θ)^2 ≤ (1/n) Σ_{i=1}^n (y_i − x_i^T θ̂_ε)^2. Thus, θ̂_ε is the solution to the following optimization: min_{θ∈Θ} ‖θ‖_1, s.t. (1/n) Σ_{i=1}^n (y_i − x_i^T θ)^2 ≤ b^2, for some 0 ≤ b ≤ ε‖θ*‖_1, which is equivalent to the LASSO: min_{θ∈Θ} (1/n) Σ_{i=1}^n (y_i − x_i^T θ)^2 + λ‖θ‖_1 for some λ. Proof of Corollary 1 Since θ* is a feasible solution, we know ‖θ̂_ε‖_1 ≤ ‖θ*‖_1, so by Theorem 7.1 in [22] we obtain θ̂_ε − θ* ∈ C_1(S) ⊂ C_ζ(S).
Thus, we can obtain by the regularity condition (4) that τ‖θ̂_ε − θ*‖_2^2 ≤ b^2 ≤ ε^2‖θ*‖_1^2. (9) Calculation of Example 1 ∇_θ f(x, W) = (2a_1 w_1^T x x^T, · · · , 2a_k w_k^T x x^T)^T, ∇_x f(x, W) = Σ_{j=1}^k 2a_j w_j^T x w_j. We can obtain ∇_θ^2 f(x, W) = (2a_1 w_1^T x x^T w_1, · · · , 2a_k w_k^T x x^T w_k)^T and ∇_{x,θ} f(x, W) = (B_1; . . . ; B_k), a block matrix whose j-th block B_j has rows 2a_j x_{·,1}(w_j + e_1)^T, 2a_j x_{·,2}(w_j + e_2)^T, . . . , 2a_j x_{·,m}(w_j + e_m)^T. Here, e_j = (0, 0, · · · , 1, 0, · · · , 0)^T is the vector whose j-th entry is 1 and whose other entries are 0. ∇_{x,θ} l(θ, x, y) = 2(y − f(x, W)) ∇_{x,θ} f(x, W) − 2∇_θ f(x, W) ∇_x f(x, W)^T, ∇_x l(θ, x, y) = 2(y − f(x, W)) ∇_x f(x, W), and ψ_k = (|y − f(x, W)|^{q−1} |∂f(x, W)/∂x_{·k}|^{q−1} / Σ_{k=1}^m |y − f(x, W)|^{q−1} |∂f(x, W)/∂x_{·k}|^{q−1}) sgn((y − f(x, W)) ∂f(x, W)/∂x_{·k})." + } + ], + "Kenji Kawaguchi": [ + { + "url": "http://arxiv.org/abs/2305.18887v1", + "title": "How Does Information Bottleneck Help Deep Learning?", + "abstract": "Numerous deep learning algorithms have been inspired by and understood via\nthe notion of information bottleneck, where unnecessary information is (often\nimplicitly) minimized while task-relevant information is maximized. However, a\nrigorous argument for justifying why it is desirable to control information\nbottlenecks has been elusive. In this paper, we provide the first rigorous\nlearning theory for justifying the benefit of information bottleneck in deep\nlearning by mathematically relating information bottleneck to generalization\nerrors. 
Our theory proves that controlling information bottleneck is one way to\ncontrol generalization errors in deep learning, although it is not the only or\nnecessary way. We investigate the merit of our new mathematical findings with\nexperiments across a range of architectures and learning settings. In many\ncases, generalization errors are shown to correlate with the degree of\ninformation bottleneck: i.e., the amount of the unnecessary information at\nhidden layers. This paper provides a theoretical foundation for current and\nfuture methods through the lens of information bottleneck. Our new\ngeneralization bounds scale with the degree of information bottleneck, unlike\nthe previous bounds that scale with the number of parameters, VC dimension,\nRademacher complexity, stability or robustness. Our code is publicly available\nat: https://github.com/xu-ji/information-bottleneck", + "authors": "Kenji Kawaguchi, Zhun Deng, Xu Ji, Jiaoyang Huang", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CV", + "cs.IT", + "math.IT" + ], + "main_content": "Introduction The information bottleneck principle (Tishby et al., 1999; Slonim & Tishby, 2000) has been a great concept in balancing the trade-off between the complexity of representation and the power of predicting. It is based on the notion of *Equal contribution. Author ordering determined by coin flip. 1NUS 2Columbia University 3Mila 4University of Pennsylvania. Correspondence to: Kenji Kawaguchi . Proceedings of the 40 th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). Figure 1. Illustration of X, Y and Z. This paper studies the relationship between performances of deep neural networks and the mutual information between X and Z. 
Our theory proves that controlling this mutual information is one way to control performance in deep learning, although it is not the only or necessary way. minimal sufficient statistics for extracting information about target Y ∈ Y into representation Z = φ(X) ∈ Z from input X ∈ X. An information bottleneck imposes regularization at the representation Z by minimizing the mutual information between X and Z, I(X; Z), while maximizing the mutual information between Y and Z, I(Y; Z). In practice, I(X; Z) is often minimized implicitly, e.g., as a result of stochastic gradient descent (SGD) or an architecture choice (Shwartz-Ziv & Tishby, 2017). An explicit minimization of I(X; Z) has also been adopted in the machine learning literature as a regularization technique (Alemi et al., 2016; 2018), where the mutual information is either estimated by averaging log probabilities of latent representations over empirical samples or replaced by a tractable upper bound (Kirsch et al., 2020; Kolchinsky & Tracey, 2017; Alemi et al., 2016). More generally, the notion of bottlenecks on representation expressivity has been used in work on structural inductive biases (Goyal & Bengio, 2022). Consequently, understanding the connection between the information bottleneck regularizer I(X; Z) and the generalization ability of machine learning models has become an active area of research. Given its importance, Shwartz-Ziv et al. (2019) provided the following conjecture: Conjecture 1. (Informal version (Shwartz-Ziv et al., 2019)) With probability at least 1 − δ over the training data s = {(x_i, y_i)}_{i=1}^n drawn from the same distribution as a random variable pair (X, Y), for the generalization error
Δ(s) = E_{X,Y}[ℓ(f^s(X), Y)] − (1/n) Σ_{i=1}^n ℓ(f^s(x_i), y_i), there is a bound of the following form: Δ(s) ≤ √( (2^{I(X;Z_l^s)} + log(2/δ)) / (2n) ), (1) where f^s is the full model obtained by training and Z_l^s = φ_l^s(X) is the output of an intermediate l-layer encoder φ_l^s of the model, i.e., the representation obtained after passing through the first l layers. However, this appealing conjecture cannot be applied to explain the success of the information bottleneck principle in practice. First, the proof of the bound in this conjecture is incomplete. More importantly, as pointed out by Hafez-Kolahi et al. (2020), there is a critical drawback in the formulation of this conjecture: Shwartz-Ziv et al. (2019) implicitly assume the independence of Z_l^s and s in the arguments of this conjecture, which means that they treated the encoder φ_l^s as fixed and independent of the training data s. Indeed, Hafez-Kolahi et al. (2020) constructed a counterexample to show that the conjecture is invalid when the encoder φ_l^s is also learned with the training data s. This is because minimizing I(X; Z_l^s) alone does not sufficiently constrain the complexity of φ_l^s, allowing it to arbitrarily overfit to the training data with a large generalization gap, in contradiction to the inequality (1). In other words, when selecting the encoder's parameters is part of the learning problem, measuring compression via I(X; Z_l^s) does not capture the degree of overfitting of the encoder's parameters. Accordingly, as a first step towards proving a sample complexity bound via information bottleneck, Hafez-Kolahi et al. (2020) focused on the input layer and proved the following input compression bound for binary classification: if Y = {0, 1} and ℓ is the 0–1 loss, then for any δ > 0, with probability at least 1 − δ over the training dataset s, they roughly prove Δ(s) = Õ(√(2^{H(X)} / n))
, (2) up to a factor 2^{1/ϵ} for some constant ϵ satisfying ϵ = Ω(√((2^{6H(X)/ϵ} + log(1/δ) + 2)/n)). However, despite the popularity of the information bottleneck principle and its active usage in practice, this is still far from being a valid sample complexity bound, as noted by Hafez-Kolahi et al. (2020). To the best of our knowledge, in the current literature, much of the work on information bottleneck assumes its benefits, but no rigorous and valid sample complexity bounds have been proposed to justify why it is desirable to control information bottlenecks. In this paper, we make the first step to fill in this gap and provide an answer to the following open problem: "How does information bottleneck help deep learning from the perspective of statistical learning theory?" As our first contribution, we resolve this open question by providing novel and complete proofs for end-to-end learning of intermediate representations (Theorem 2). To the best of our knowledge, we provide the first rigorous generalization bound for information bottleneck in the case of learning representations, showing that simplicity in both the representation and the representation function are factors that support generalization. As our second contribution, an intermediate step and byproduct of our novel proof for Theorem 2 not only completes the proof of Conjecture 1, where Z is treated as a fixed random variable independent of the training data s, but also significantly improves the previous bound in the conjecture. We show that the generalization error is roughly (with high probability) Õ(√((I(X; Z_l^s|Y) + 1)/n)) as n → ∞. This not only improves the numerator of the bound from an exponential dependence to a linear dependence on mutual information, but also improves I(X; Z_l^s) to the smaller quantity I(X; Z_l^s|Y). More importantly, it is of independent interest and applicable to cases in transfer learning and unsupervised learning.
Finally, in Section 5, we consolidate our theoretical findings with comprehensive experiments on our bounds and related generalization-prediction metrics, finding that empirical estimates of the main factors in our bounds are strong predictors of the generalization gap. 2. Preliminaries In this section, we describe the notation we use and the settings we mainly consider. Notation. We are given a training dataset s = ((x_i, y_i))_{i=1}^n ∼ P^⊗n of n samples, where x_i ∈ X and y_i ∈ Y are i.i.d. draws from a joint distribution P over X × Y. We want to analyze the generalization gap, i.e., the gap between the expected loss and the training loss, which is defined as Δ(s) := E_{(X,Y)∼P}[ℓ(f^s(X), Y)] − (1/n) Σ_{i=1}^n ℓ(f^s(x_i), y_i), where ℓ : R^{m_y} × Y → R_{≥0} is a bounded per-sample loss, and f^s : X → R^{m_y} represents a deep neural network learned with a given training dataset s. Here, X and Y are the corresponding random variables for x_i and y_i, with (X, Y) ∼ P. We use the symbol ∘ to represent the composition of functions and the notation [D+1] = {1, 2, . . . , D+1}. We define the random variable of the output of the l-th layer by Z_l^s = φ_l^s(X), (3) where φ_l^s is the map for the first l layers, with φ_l^s(x) ∈ Z_l^s. That is, for any layer index l ∈ [D + 1], we can decompose the neural network f^s as f^s = g_l^s ∘ φ_l^s, (4) where g_l^s is the map for the rest of the layers after the first l layers. For convenience, we refer to φ_l^s as the encoder and to g_l^s as the decoder, though it is unnecessary to have an explicit structure of an encoder and a decoder. Here, the case of l = 1 corresponds to the input layer, where φ_1^s(x) = x and g_1^s(x) = f^s(x). The case of l = D + 1 corresponds to the output layer, where φ_{D+1}^s(x) = f^s(x) and g_{D+1}^s(q) = q.
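The decomposition f^s = g_l^s ∘ φ_l^s in Eq. (4) can be made concrete with a toy network (a sketch of ours, not the paper's code; the weights and the split point l = 2 are made up):

```python
# A 3-layer ReLU network split at layer l into an encoder phi_l (first l layers)
# and a decoder g_l (remaining layers), so that f(x) = g_l(phi_l(x)).
def relu(v):
    return [max(0.0, u) for u in v]

def dense(W, b, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) + bi for row, bi in zip(W, b)]

# toy weights for layers h_1, h_2, h_3 (each entry: ((W, b), activation))
layers = [
    (([[1.0, -1.0], [0.5, 2.0]], [0.1, -0.2]), relu),
    (([[2.0, 0.0], [-1.0, 1.0]], [0.0, 0.3]), relu),
    (([[1.0, 1.0]], [0.0]), lambda v: v),  # linear output layer
]

def forward(v, blocks):
    for (W, b), act in blocks:
        v = act(dense(W, b, v))
    return v

l = 2
phi_l = lambda v: forward(v, layers[:l])    # encoder: first l layers
g_l = lambda z: forward(z, layers[l:])      # decoder: the rest
x = [0.7, -1.3]
assert forward(x, layers) == g_l(phi_l(x))  # f(x) = (g_l o phi_l)(x)
```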
f^s can also be decomposed as f^s = h_{D+1}^s ∘ h_D^s ∘ h_{D−1}^s ∘ · · · ∘ h_1^s, where h_l^s represents the computation of the l-th layer; i.e., φ_l^s = h_l^s ∘ h_{l−1}^s ∘ · · · ∘ h_1^s and g_l^s = h_{D+1}^s ∘ h_D^s ∘ · · · ∘ h_{l+1}^s. We use A to denote the learning algorithm that returns the output functions of each layer; i.e., A(s) = {h_l^s}_{l=1}^{D+1}. Then, by taking a subset of the output coordinates, we define Ã_l(s) = {h_k^s}_{k=1}^l. Finally, by composing the outputs of Ã_l, we define A_l(s) = h_l^s ∘ h_{l−1}^s ∘ · · · ∘ h_1^s = φ_l^s ∈ M_l (where the {M_l}'s are families of functions). Define the maximum loss R(f^s) = sup_{(x,y)∈X×Y} ℓ(f^s(x), y). We then define the random variable of the encoder of the l-th layer by φ_l^s = A_l(s). (5) Presumption. Following the previous work (Shwartz-Ziv et al., 2019), we consider the setting of |X| < ∞ and |M_l| < ∞. This is the natural setting with digital computers (e.g., using floating point). In this setting, the mutual information quantities that we consider are all finite and thus all bounds are non-trivial. Similar restrictions are commonly considered in theory work involving mutual information to avoid the issue of infinite mutual information; for example, Xu & Raginsky (2017) consider a countable hypothesis space. We follow the above setting in our main results, but we also show that those requirements can be relaxed; see more details in Section 4 and Appendix E.2. 3. Main Results In this section, we establish sample complexity bounds to connect information bottlenecks and generalization errors. We start by completing and improving the previous results in the setting where the encoder φ_l^s is treated as fixed and independent of the training data s (Shwartz-Ziv et al., 2019; Hafez-Kolahi et al., 2020) in Section 3.1.
It will serve as an important intermediate step toward our final result, where we extend the argument to deal with the main case of our interest – learning the encoder φ_l^s with s – in Section 3.2. 3.1. Encoder independent of the training data The following theorem shows that we can indeed control the expected loss by minimizing the conditional mutual information I(X; Z_l^s|Y) and the training loss if the encoder φ_l^s is fixed and independent of the training data s.1 Even though this simplified case is just an intermediate step towards our final results, Theorem 1 is still useful and of independent interest. For example, it is applicable when the encoder is learned with data independent of s, such as in certain cases in transfer learning and unsupervised learning. Theorem 1. Let l ∈ {1, . . . , D}. Suppose that φ_l^s is fixed independently of the training dataset s. Then, for any δ > 0, with probability at least 1 − δ over the training data s, the following holds: Δ(s) ≤ G_3^l √( (I(X; Z_l^s|Y) ln(2) + G_2^l) / n ) + G_1^l(0)/√n, (6) where G_1^l(0) = Õ(1), G_2^l = Õ(1), and G_3^l = Õ(1) as n → ∞. The formulas of G_1^l(0), G_2^l, and G_3^l are given in Appendix E.1. Theorem 1 rigorously completes the proof of Conjecture 1, with significant improvements, for which we provide a more detailed explanation in the following paragraph. Explanation of Theorem 1. There are two significant improvements in our bound (6) when compared with the previous bound (1). First, we reduce the exponential dependence to a linear dependence by replacing the exponential growth rate 2^{I(X;Z_l^s)} with the linear growth rate I(X; Z_l^s). Second, we replace I(X; Z_l^s) with I(X; Z_l^s|Y), which is the expected mutual information between X and Z_l^s conditioned on Y.
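Since Z_l^s is a function of X, the quantities above satisfy the chain-rule decomposition I(X; Z_l^s) = I(X; Z_l^s|Y) + I(Y; Z_l^s). The following toy computation (ours, not from the paper; the discrete joint distribution and the parity representation are made up) verifies this identity numerically:

```python
import math
from collections import defaultdict

# joint distribution p(x, y) on a toy discrete space; z = phi(x) is a
# deterministic "representation" (here: the parity of x)
p_xy = {(0, 0): 0.2, (1, 0): 0.1, (2, 0): 0.15, (0, 1): 0.05, (1, 1): 0.3, (2, 1): 0.2}
phi = lambda x: x % 2

def mi(joint):  # I(A;B) from a dict {(a, b): prob}
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

p_xz, p_yz, p_y = defaultdict(float), defaultdict(float), defaultdict(float)
for (x, y), p in p_xy.items():
    p_xz[(x, phi(x))] += p
    p_yz[(y, phi(x))] += p
    p_y[y] += p

def cond_mi():  # I(X;Z|Y) = sum_y p(y) * I(X;Z | Y=y)
    total = 0.0
    for y0, py in p_y.items():
        joint = defaultdict(float)
        for (x, y), p in p_xy.items():
            if y == y0:
                joint[(x, phi(x))] += p / py
        total += py * mi(joint)
    return total

# chain rule for Z = phi(X):  I(X;Z) = I(X;Z|Y) + I(Y;Z)
assert abs(mi(p_xz) - (cond_mi() + mi(p_yz))) < 1e-9
```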
To see why this is an improvement, notice that I(X; Z_l^s|Y) ≤ I(X; Z_l^s), since we can decompose I(X; Z_l^s) into two components by using the chain rule as in Federici et al. (2020): I(X; Z_l^s) = I(X; Z_l^s|Y) + I(Y; Z_l^s). Here, I(X; Z_l^s|Y) ≥ 0 is the superfluous information that we want to minimize so as to maximize the predictive information I(Y; Z_l^s) ≥ 0. Therefore, the spirit of the information bottleneck, regularizing I(X; Z_l^s) while maximizing I(Y; Z_l^s), is an indirect way to regularize I(X; Z_l^s|Y). Accordingly, instead of regularizing I(X; Z_l^s), recent works have also started to consider regularizing I(X; Z_l^s|Y) (Fischer, 2020; Federici et al., 2020; Lee et al., 2021). In terms of theory, replacing I(X; Z_l^s) with I(X; Z_l^s|Y) is qualitatively significant because I(X; Z_l^s) cannot be zero while maintaining the label-relevant information I(Y; Z_l^s), unlike I(X; Z_l^s|Y). For practical use of our bound, one can consider the case of fixed-feature learning, where the representation is learned on another dataset independent of s. This is widely used in transfer learning and pre-training, where an l-layer representation is learned from a large publicly available dataset (e.g., ImageNet) and the task-specific, possibly private dataset s is then used to fine-tune a few extra layers of the neural network while fixing the representation. 3.2. Encoder learned with the training data In the previous section, we have proven an improved version of Conjecture 1 for the setting with a fixed encoder φ_l^s.
However, the typical usage of the information bottleneck principle is to minimize I(X; Z_l^s) = I(X; φ_l^s(X)) over the parameters of the encoder φ_l^s along with a discriminative objective. Thus, to support the typical usage of the information bottleneck principle, we need to extend the results to the setting of learning the encoder φ_l^s with s. In this setting, the bound in Conjecture 1 is no longer valid, as discussed in Section 1. However, the question of proving a sample complexity bound with the information bottleneck in this typical setting is challenging and remains open. In this section, we present our main theorem, which answers this open problem. Our result reconciles the information bottleneck regularizer I(X; Z_l^s|Y) with the mutual information of the encoder and the training dataset, I(φ_l^S; S). Theorem 2 (Main Theorem). Let D ⊆ {1, 2, . . . , D + 1}. Then, for any δ > 0, with probability at least 1 − δ over the training set s, the following generalization bound holds: Δ(s) ≤ min_{l∈D} Q_l, (7) where for l ≤ D, Q_l = G_3^l √( ((I(X; Z_l^s|Y) + I(φ_l^S; S)) ln(2) + Ĝ_2^l) / n ) + G_1^l(ζ)/√n; and for l = D + 1, Q_l = R(f^s) √( (I(φ_l^S; S) ln(2) + Ǧ_2^l) / (2n) ). Here, S ∼ P^⊗n, G_1^l(ζ) = Õ(√(I(φ_l^S; S) + 1)), Ĝ_2^l = Õ(1), Ǧ_2^l = Õ(1), and G_3^l = Õ(1) as n → ∞. The formulas of G_1^l(ζ), Ĝ_2^l, Ǧ_2^l, and G_3^l are given in Appendix E.1. In Theorem 2, we denote by S ∼ P^⊗n the random variable following the same distribution as that of the training dataset s ∼ P^⊗n. This notation is required here because the bound in Theorem 2 is equivalent to P_{s∼P^⊗n}[Δ(s) ≤ g(I(X; Z_l^s|Y), I(φ_l^S; S))] ≥ 1 − δ for some function g. Here, the probability P_{s∼P^⊗n} is taken with respect to s.
The instantiations s within this P_{s∼P^⊗n} are used in Δ(s) and I(X; Z_l^s|Y), but not in I(φ_l^S; S), since I(φ_l^S; S) only depends on the distribution P^⊗n instead of the instantiations. That is, the reason why we need S here is similar to the reason why we use j in the expression Σ_{i=1}^n g_1(i, Σ_{j=1}^n g_2(j)) (for some functions g_1, g_2), even though Σ_{j=1}^n g_2(j) = Σ_{i=1}^n g_2(i) in terms of its value; here, i and j correspond to s and S, respectively. In Theorem 2, the randomness of I(X; Z_l^s|Y) and that of I(φ_l^S; S) are different. The mutual information I(X; Z_l^s|Y) is calculated over the randomness of the distribution of the input X conditioned on Y, after fixing a realization of the training data s (for each fixed draw of s from P_{s∼P^⊗n}). In contrast, the mutual information I(φ_l^S; S) is computed over the randomness of the distribution of the training dataset S ∼ P^⊗n. Remark 1. One can consider the parameterization of the encoder as φ_l^S = φ_{l,θ_l^S}, where θ_l^S is the parameter vector that is learned with S and contains all parameters of the layers up to the l-th layer. In that case, Theorem 2 holds when replacing φ_l^S with θ_l^S. Explanation of Theorem 2. Theorem 2 provides the first rigorous sample complexity bound for the information bottleneck in the setting of training the encoder φ_l^s with the same training data s. Here, Z_l^s is the random variable of the l-th layer's representation, with dependence on the given training dataset s, and D is the number of all layers, including the input layer and excluding the output layer; i.e., Z_1^s is the input layer, Z_D^s is the last hidden layer, and Z_{D+1}^s is the output layer. Here, I(φ_l^S; S) measures the effect of overfitting the encoder, which is necessary to avoid the counterexample (Hafez-Kolahi et al., 2020, Example 3.1). The main factor in the above theorem is I(X; Z_l^s|Y) + I(φ_l^S; S).
This term captures a novel relationship that has not been studied in any previous sample complexity bound. Specifically, it captures the relationship between "how much information from the input X the trained encoder φ_l^s retains, i.e., I(X; Z_l^s|Y)" and "how much information from the training dataset S is used to train the encoder φ_l^S, i.e., I(φ_l^S; S)". Theorem 2 is applicable when the encoder is trained with s and potentially additional data independent of s: e.g., supervised learning, semi-supervised learning, unsupervised learning, and transfer learning. For example, Theorem 2 captures the benefit of transfer learning in both terms I(X; Z_l^s|Y) and I(φ_l^S; S), since the encoder φ_l^S is expected to have less dependence on S (the target data) (for some l ≤ D) in transfer learning, which tends to decrease I(φ_l^S; S). Finally, we note that in the formula of Ĝ_2^l, we have a linear dependence on H(Z_l^s|X, Y) ln(2) (see Appendix E.1). However, we have H(Z_l^s|X, Y) = 0 if the function φ_l^s is deterministic, which is the typical case for deep neural networks, because φ_l^s is the function used at inference or test time as opposed to training time (when dropout, for example, can be used). When the function φ_l^s is stochastic at test time, we have H(Z_l^s|X, Y) ≈ 0 when the injected noise is small, and more generally H(Z_l^s|X, Y) = O(1) as n → ∞. 4. Extensions Our results thus far focus on the case of |X| < ∞, which is already general enough to cover realistic implementations on a computer and is commonly considered in previous theory work (Shwartz-Ziv et al., 2019). Indeed, our presumption ensures that the mutual information is finite and thus the bounds provided are non-trivial. In the case of |X| = ∞, the mutual information can be infinite and thus requires a separate treatment.
In this section, we show how to generalize our arguments to the case of |X| = ∞. 4.1. Neural networks with ReLU activation functions First, we show that finite mutual information can be obtained in some cases even when |X| = ∞. Specifically, the following proposition shows that a (deterministic) neural network with ReLU activation functions can have finite mutual information under continuous input distributions. Proposition 1. For a given neural network with ReLU activation functions, there are infinitely many continuous distributions over X such that the corresponding I(X, Z|Y) is finite. 4.2. Modification for valid bounds in the case of infinite mutual information The mutual information for the information bottleneck is finite in many practical cases, including the cases of discrete domains X with any models and of continuous domains X with stochastic models, as well as the case in Proposition 1 with ReLU. However, it is infinite in some special cases, for example, continuous domains X with deterministic neural networks with certain types of injective activations such as sigmoid (instead of ReLU) (Amjad & Geiger, 2019). This subsection demonstrates that our bounds can be modified to produce finite bounds even in any special case where the mutual information is infinite. Our results (Theorems 1–2 with Corollary 1) also resolve the known issue of the arbitrariness of the mutual information under different binning methods (Saxe et al., 2019). Consider an arbitrary (continuous or discrete) domain X and an arbitrary encoder φ̃_l^s such that φ̃_l^s(x) ∈ Z̃_l^s and the set Z̃_l^s is potentially (uncountably or countably) infinite. Define the corresponding model f̃^s by f̃^s = g_l^s ∘ φ̃_l^s and Z̃_l^s = φ̃_l^s(X).
We formalize an arbitrary binning method E_l[φ̃_l^s] for computing the mutual information (Chelombiev et al., 2019) as follows: for any (l, φ̃_l^s), let E_l[φ̃_l^s] : Z̃_l^s → Z_l^s ⊆ Z̃_l^s be a function such that |Z_l^s| < ∞. Set φ_l^s = E_l[φ̃_l^s] ∘ φ̃_l^s; i.e., it follows that Z_l^s = E_l[φ̃_l^s] ∘ Z̃_l^s and f^s = g_l^s ∘ E_l[φ̃_l^s] ∘ φ̃_l^s. Let Q̂_l and min_{l∈D} Q_l be the right-hand sides of Eq. (6) and Eq. (7) in Theorems 1–2 with this choice of encoder φ_l^s; i.e., Q̂_l and Q_l contain I(X; Z_l^s|Y) instead of I(X; Z̃_l^s|Y). Here, I(X; Z_l^s|Y) is the mutual information computed by the binning method E_l[φ̃_l^s], while I(X; Z̃_l^s|Y) is the true mutual information of f̃^s. Let C_l be a nonnegative real number such that P(|ℓ((g_l^s ∘ φ̃_l^s)(X), Y) − ℓ((g_l^s ∘ E_l[φ̃_l^s] ∘ φ̃_l^s)(X), Y)| ≤ C_l) = 1. Corollary 1 shows that even when the mutual information I(X; Z̃_l^s|Y) of the original model f̃^s is infinite, Theorems 1–2 provide finite bounds on the original model f̃^s using the finite mutual information I(X; Z_l^s|Y) returned by a binning method E_l[φ̃_l^s]: Corollary 1. Suppose that C_l < ∞. Then, Theorems 1–2 hold true also when we replace (Theorem 1) Eq. (6) with Δ(s) ≤ Q̂_l + 2C_l < ∞, and (Theorem 2) Eq. (7) with Δ(s) ≤ min_{l∈D} Q_l + 2C_l < ∞. The assumption of the finiteness of C_l is satisfied in common scenarios. For example, let L be the Lipschitz constant of the function q ↦ ℓ(g_l^s(q), Y) w.r.t. some metric d_E almost surely (Fazlyab et al., 2019; Latorre et al., 2019; Aziznejad et al., 2020; Pauli et al., 2021). Set E_l[φ̃_l^s] such that the radius of each bin w.r.t.
the metric d_E is at most ϵ/(√n L) for some ϵ > 0. We can then set C_l = ϵ/√n. In Corollary 1, the arbitrariness of binning methods is resolved: e.g., increasing the bin size ϵ can decrease the mutual information, but it also increases the value of C_l = ϵ/√n. Thus, there is always a trade-off, and we cannot arbitrarily change the values of our bounds by choosing different binning methods. Similarly, for the case of infinite mutual information, we prove the validity of general methods of computing mutual information, including those of injecting noise and kernel density estimation, in Appendix E.2. 5. Experiments We conduct empirical experiments to investigate the following questions: (i) Does the information bottleneck regularizer I(X; Z_l^s) alone reliably predict generalization when the encoder φ_l^s is learned with s? (ii) Does the main factor in our bound in Theorem 2, min_{l∈[D]} I(S; θ_l^S) + I(X; Z_l^s|Y) with Remark 1, predict generalization more accurately than I(X; Z_l^s) alone (or I(X; Z_l^s|Y) alone)? (iii) How does varying the layer l within the network affect the values of I(S; θ_l^S) and I(X; Z_l^s) and their predictive ability? Table 1. Pearson correlation coefficient between metrics and the generalization gap in loss for constrained models trained for 5-class classification on 2D inputs; positive values denote positive correlations (θ_l denotes the parameters of layer l, and θ_l^S the parameters up to layer l): Num. params.: -0.0294; Π_l ‖θ_l^s‖_F: -0.0871; Ĭ(X; Z_l^s): 0.3712; Ĭ(X; Z_l^s|Y): 0.3842; Ĭ(S; θ_{D+1}^S): 0.0091; Ĭ(S; θ_l^S): 0.0211; Ĩ(S; θ_l^S) + Ĭ(X; Z_l^s): 0.3928; Ĩ(S; θ_l^S) + Ĭ(X; Z_l^s|Y): 0.4130. 5.1.
On the Representation Compression Bound As discussed in Sections 3, I(X; Zs l ) is generally not a reliable predictor of generalization because feature compression does not prevent the overfitting of the representation function\u2019s parameters. We investigate this further by designing a learning algorithm that trains models under various hyperparameter settings with the constraint that estimated I(X; Zs l ) is approximately constant. Following previous work such as Galloway et al. (2022), we use correlation analysis to empirically evaluate how strongly metrics predict generalization. The inference problem studied was 5 class classification on clustered 2D inputs (Fig. 3). The model architecture was a 5 layer MLP with deterministic weights and feature layer l was fixed to the penultimate layer. Given training dataset s, each model q\u03b8s was optimized with the cross-entropy loss min\u03b8s \u22121 |s| P (x,y)\u2208s(log(1/k) Pk j=1 q\u03b8s(y|zj)) s.t. \u02c6 I(X; Zs l ) = \u03c1, where features zj \u223cq\u03b8s(Zs l |x), and q\u03b8s(Zs l |x) is a multivariate Normal distribution with mean and variance computed by the MLP. Here, \u02c6 I(X; Zs l ) = 1 |s| X (x,y)\u2208s 1 k k X j=1 log q\u03b8s(zj|x) 1 |s| P (x\u2032,y\u2032)\u2208s q\u03b8s(zj|x\u2032) is a Monte-Carlo sampling based estimator of I(X; Zs l ), and constraint \u03c1 was set to 1.5, approximately half the value of \u02c6 I(X; Zs l ) attained without constraining \u02c6 I(X; Zs l ). The neural network infers a distribution over a stochastic latent features so that \u02c6 I(X; Zs l ) can be regularized and evaluated directly during training; in Section 5.2 we consider the case of deterministic features without regularization of \u02c6 I(X; Zs l ). Whereas \u02c6 I(X; Zs l ) is a sampling based estimator, we also use the upper-bound based estimator: \u02d8 I(X; Zs l ) = 1 |s| P (x,y)\u2208s 1 k Pk j=1(log q\u03b8s(zj|x) \u2212 ( 1 |s| P (x\u2032,y\u2032)\u2208s log q\u03b8s(zj|x\u2032))). 
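The Monte-Carlo estimator of I(X; Z) described above can be sketched as follows (the one-dimensional mean function and the value of sigma are illustrative stand-ins for the paper's MLP encoder, not its actual architecture):

```python
import numpy as np

def gauss_logpdf(z, mu, sigma):
    """Log-density of a diagonal Gaussian N(mu, sigma^2 I)."""
    return -0.5 * np.sum(((z - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2), axis=-1)

def mc_mi_estimate(X, enc_mean, sigma, k=8, seed=0):
    """hat{I}(X;Z) = (1/|s|) sum_x (1/k) sum_j log q(z_j|x) / ((1/|s|) sum_x' q(z_j|x'))."""
    rng = np.random.default_rng(seed)
    mus = np.stack([np.atleast_1d(enc_mean(x)) for x in X])    # (n, d) encoder means
    n, d = mus.shape
    total = 0.0
    for i in range(n):
        z = mus[i] + sigma * rng.standard_normal((k, d))       # z_j ~ q(Z | x_i)
        log_q = gauss_logpdf(z, mus[i], sigma)                 # numerator, shape (k,)
        log_all = gauss_logpdf(z[:, None, :], mus[None, :, :], sigma)  # (k, n)
        log_marg = np.log(np.mean(np.exp(log_all), axis=1))    # empirical mixture marginal
        total += np.mean(log_q - log_marg)
    return total / n

X = np.linspace(-3, 3, 20)
est = mc_mi_estimate(X, lambda x: 2.0 * x, sigma=0.1)
print(est)   # close to ln(20): the 20 inputs are almost perfectly distinguishable
```

Because the marginal is a mixture over the n training inputs, each summand is at most ln(n), so the estimate is capped at ln(n); driving sigma up makes the inputs indistinguishable and collapses the estimate toward zero.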
We define \u02c6 I(X; Zs l |Y ) and \u02d8 I(X; Zs l |Y ) accordingly by conditioning these quantities on Y : \u02c6 I(X; Zs l |Y ) = 1 |s| P c\u2208C P (x,y)\u2208sc 1 k Pk j=1 log q\u03b8s(zj|x) 1 |sc| P (x\u2032,y\u2032)\u2208sc q\u03b8s(zj|x\u2032), and \u02d8 I(X; Zs l |Y ) = 1 |s| P c\u2208C P (x,y)\u2208sc 1 k Pk j=1(log q\u03b8s( zj|x) \u2212( 1 |sc| P (x\u2032,y\u2032)\u2208sc log q\u03b8s(zj|x\u2032))). For the computation of I(S; \u03b8S l ), the learning algorithm is defined by the posterior distribution over network parameters P(\u03b8S l |S = s), which was modelled using SWAG (Maddox et al., 2019; Mandt et al., 2017), chosen for its popularity and simplicity. We denote the estimator of I(S; \u03b8S l ) using SWAG by \u02d8 I(S; \u03b8S l ), where datasets S were drawn from the set of training datasets. To account for different scales of different estimation procedures, we tested rescaling \u02d8 I(S; \u03b8S l ) by the average value of \u02c6 I(X; Zs l |Y ), denoting rescaled values by \u02dc I(S; \u03b8S l ) (see Appendix A for more details). 216 models were trained over varying architectures, weight decay rates, dataset draws, and random seeds. Model parameters were optimized end-to-end using the reparameterization trick (Kingma et al., 2015) with dual gradient descent for MI constraints (Bertsekas, 2014) (Appenix A). For each model, we measured the generalization gap between the test set and train set losses. We found that combining model compression and representation compression yielded the best predictor of generalization overall, and that this outperformed using representation compression alone (Table 1, 5). Additional experimental results on MNIST and Fashion MNIST datasets are given in Appendix B, showing that this conclusion also holds for stochastic feature networks in cases when I(X; Zs l ) is unconstrained. 5.2. 
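A toy sketch of the I(S; theta) side of the bound: SWAG fits a Gaussian to the posterior P(theta | S = s) for each training dataset s, after which the mutual information between a uniformly drawn dataset and the parameters can be Monte-Carlo estimated. The diagonal fit and the synthetic posteriors below are our simplifications, not the paper's SWAG configuration:

```python
import numpy as np

def gauss_logpdf(theta, mu, sigma):
    return -0.5 * np.sum(((theta - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2), axis=-1)

def dataset_param_mi(posteriors, k=64, seed=0):
    """Monte-Carlo estimate of I(S; theta) when S is uniform over the training
    datasets and each P(theta | S=s) is a diagonal Gaussian (mu, sigma)."""
    rng = np.random.default_rng(seed)
    m = len(posteriors)
    total = 0.0
    for mu, sigma in posteriors:
        theta = mu + sigma * rng.standard_normal((k, len(mu)))
        log_q = gauss_logpdf(theta, mu, sigma)
        log_all = np.stack(
            [gauss_logpdf(theta, mu2, s2) for mu2, s2 in posteriors], axis=1)
        total += np.mean(log_q - np.log(np.mean(np.exp(log_all), axis=1)))
    return total / m

# five datasets whose posteriors barely overlap -> MI near ln(5)
far = [(np.array([10.0 * i, 0.0]), np.array([0.1, 0.1])) for i in range(5)]
# identical posteriors -> the learned parameters carry no information about S
same = [(np.zeros(2), np.ones(2))] * 5
print(dataset_param_mi(far), dataset_param_mi(same))
```

The two extremes bracket the quantity: fully dataset-dependent parameters give ln(m) for m datasets, while dataset-independent parameters give zero, which is the sense in which I(S; theta) measures overfitting of the parameters.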
Image Classification with DNNs To investigate a common setting, we trained 540 deep neural networks on CIFAR10 without explicitly constraining MI, over varying preactivation ResNet architectures (He et al., 2016), weight decay rates, batch sizes, dataset draws and random seeds. To study representation compression by estimating MI with deterministically computed features, noise is customarily injected purely for analysis purposes (Saxe et al., 2019). We tested adaptive kernel density estimation (KDE) (Chelombiev et al., 2019), which models the latent represenation of an input as a unimodal Gaussian centred at the deterministic feature, with variance \u03c32 l determined by scaling a base value according to maximum observed activation value in the layer. We also tested selecting \u03c32 l by maximum likelihood estimation (MLE) of observed features under the constraint that estimated MI decreases with layer, 6 \fHow Does Information Bottleneck Help Deep Learning? Spearman corr. Pearson corr. Kendall corr. Generalization gap: Loss Error Loss Error Loss Error 1 D PD l=1 \u02d8 I(X; Zs l ) 0.8481 0.7410 0.2116 0.1831 0.6425 0.5436 minl\u2208[D] \u02d8 I(X; Zs l ) 0.7145 0.5602 0.7203 0.5719 0.4461 0.3404 1 D PD l=1 \u02d8 I(X; Zs l |Y ) 0.8481 0.7406 0.2140 0.1853 0.6427 0.5435 minl\u2208[D] \u02d8 I(X; Zs l |Y ) 0.7004 0.5434 0.7062 0.5560 0.4386 0.3305 \u02d8 I(S; \u03b8S D+1) 0.4688 0.3112 0.2512 0.0775 0.2121 0.1208 minl\u2208[D] \u02d8 I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) 0.8434 0.7313 0.8437 0.7195 0.6270 0.5332 \u00af I(S; \u03b8S D+1) 0.5370 0.3800 0.2924 0.1218 0.2442 0.1526 minl\u2208[D] \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) 0.8632 0.7576 0.8511 0.7562 0.6626 0.5664 Table 2. Correlation results across metrics for CIFAR10 models. Each value is in [-1, 1] and > 0 indicates positive correlation. Best metric highlighted. More results can be found in Appendix D. Spearman corr. Pearson corr. Kendall corr. 
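A sketch of the noise-injection estimate for deterministic features described above: model Z | x as a unimodal Gaussian centred at the deterministic feature h(x), with sigma scaled from the maximum observed activation in the adaptive-KDE spirit. The base scale 0.1 and the random features are our own choices for illustration:

```python
import numpy as np

def kde_mi(features, sigma, k=8, seed=0):
    """Model Z | x as N(h(x), sigma^2 I) around the deterministic feature h(x)
    and Monte-Carlo-estimate I(X; Z) against the empirical mixture marginal.
    The Gaussian normalising constant is shared across terms, so it cancels."""
    rng = np.random.default_rng(seed)
    H = np.asarray(features, dtype=float)
    n, d = H.shape
    total = 0.0
    for i in range(n):
        z = H[i] + sigma * rng.standard_normal((k, d))
        sq = np.sum((z[:, None, :] - H[None, :, :]) ** 2, axis=-1)   # (k, n)
        log_q = -sq / (2 * sigma ** 2)
        total += np.mean(log_q[:, i] - np.log(np.mean(np.exp(log_q), axis=1)))
    return total / n

H = np.random.default_rng(1).normal(size=(50, 4))    # deterministic features h(x_i)
sigma = 0.1 * np.abs(H).max()                        # adaptive-KDE-style scale choice
print(kde_mi(H, sigma))                              # finite, at most ln(50)
```

The injected variance plays the role of the bin size in the binning scheme: a larger sigma blurs features together and lowers the estimated information.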
Generalization gap: Loss Error Loss Error Loss Error 1 D PD l=1 \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) 0.4429 0.2908 0.2783 0.1059 0.2349 0.1426 maxl\u2208[D] \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) 0.5711 0.4204 0.2993 0.1311 0.2886 0.1945 minl\u2208[D] \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) 0.8632 0.7576 0.8511 0.7562 0.6626 0.5664 \u00af I(S; \u03b8S 1 ) + \u02d8 I(X; Zs 1|Y ) 0.6476 0.5292 0.1557 0.1331 0.4307 0.3504 \u00af I(S; \u03b8S D) + \u02d8 I(X; Zs D|Y ) 0.5711 0.4204 0.2993 0.1311 0.2886 0.1945 Table 3. Correlation results for \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) for CIFAR10 models across different layer summarization methods. Figure 2. (Left) Results for minl\u2208[D] \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) for unconstrained models trained on CIFAR10. Dashed line denotes best polynomial fit with degree 2. (Right) Metrics averaged over models for each layer index. Star denotes minimum point for each metric. Values are normalized by subtracting the minimum and dividing by the range. which follows from the information processing inequality. We report the results in this section for MLE and in Appendix D.4 for adaptive KDE. Representations were taken from D = 5 layers in the model, ranging from the input to the output of the penultimate layer. Again, SWAG was used to model the posterior P(\u03b8S l |S = s). Since SWAG approximates the stationary distribution of SGD from a fixed initialization as a unimodal Gaussian (Mandt et al., 2017), we also tested averaging over initializations to obtain a richer posterior model, and denote the estimator of MI from this model as \u00af I(S; \u03b8S l ), defined in Appendix D.2. To construct multiple training datasets, we sampled 5 training sets of size 15K from the CIFAR10 training set, and each test set was the original 10K test set. 
The generalization gap was positively correlated with metrics measuring representation compression, but more correlated with metrics that combined both representation and model compression (Table 2). By increasing the value of layer index l of the encoder, MI between the encoder and training dataset increased, while MI between the representation and input decreased (Fig. 2), capturing a trade-off between these two measures of compression. For selection of hyperparameters \u03c32 l , MLE (Fig. 2, Table 2, 3, 19, 17) outperformed adaptive KDE (Table 20, 18), however regardless of which scheme was used, the best metric that combined representation compression and model compression outperformed the best metric for representation compression or model compression individually. minl\u2208[D] \u00af I(S; \u03b8S l ) + \u02d8 I(X; Zs l |Y ) performed best overall (Fig. 2). Taking the minimum over layers (Theorem 2) outperformed other layer summarization methods (Table 3). 7 \fHow Does Information Bottleneck Help Deep Learning? 5.3. Feature binning experiments To test whether the importance of model compression would hold when I(X; Zs l ) was estimated by binning deterministic features into discrete categories (Section 4.2), we conducted toy clustering experiments (Appendix 5.1) on deterministic feature models. The binning implementation of Saxe et al. (2019) was reused to discretize the activity of each node into 10 buckets and the information bottleneck was implemented using the surrogate objective of Kirsch et al. (2020, Eq. 1). 216 models were trained across MLP architectures and hyperparameters and SWAG was used to estimate the posterior. Model details and results are given in appendix C. We found that imposing the information bottleneck regularizer decreased the generalization prediction performance of feature compression metrics \u02c6 I(X; Zs l ) and \u02c6 I(X; Zs l |Y ), and combining model compression with feature compression metrics increased performance. 6. 
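The correlation analysis used throughout the tables can be reproduced directly from definitions (tie handling is omitted for brevity; this is a sketch of the statistics, not the paper's evaluation code, and the metric/gap values below are made up for illustration):

```python
import numpy as np

def pearson(a, b):
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    """Rank correlation: Pearson correlation of the ranks (no tie correction)."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return pearson(rank(np.asarray(a)), rank(np.asarray(b)))

def kendall(a, b):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, s = len(a), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((a[i] - a[j]) * (b[i] - b[j]))
    return float(s / (n * (n - 1) / 2))

metric = np.array([0.2, 1.1, 0.7, 2.5, 1.9])   # e.g. min_l I(S;theta_l)+I(X;Z_l|Y)
gap = np.array([0.1, 0.5, 0.3, 1.2, 0.8])      # measured generalization gap
print(pearson(metric, gap), spearman(metric, gap), kendall(metric, gap))
```

Spearman and Kendall depend only on the ordering of the models, which is why they can be high even when the metric-gap relationship is monotone but nonlinear, whereas Pearson rewards a linear fit.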
Proof Sketch In this section, we further provide proof sketches of Theorems 1-2 for readers interested in having an overview of the proofs. The complete proofs are in Appendix F. 6.1. Proof Sketch of Theorem 1 Fix l \u2208[D] and \u03d5s l independently of the training dataset s. Let T be the standard typical set of Zs l in information theory. As we will see in the later sketch, we combine deterministic decompositions and probabilistic bounds with respect to the randomness of new fresh samples X and datasets s. The usages of probabilistic bounds for different sample spaces enable the exponential improvement over the previous bounds. Step 1. Decompose the generalization gap into two terms as \u2206(s) = A+B, where A corresponds to the case of Zs l \u2208T, while B is for the case of Zs l / \u2208T. We will bound A and B separately. Step 2. By standard argument from information theory, we have P(Zs l / \u2208T | Y = y) \u2264O( 1 \u221an) where the probability is with respect to X, with which we can roughly argue 2 that B \u2264O( 1 \u221an). Step 3. To bound A, we argue that A is bounded by a concentration gap of a special multinomial distribution over the elements of T, which is bounded roughly as A = O( p (ln |T|)/n) (with high probability with respect to S), by using a recent statistical result on multinomial distributions (Kawaguchi et al., 2022a, Lemma 3 & Propo2Although this requires a refinement of the standard argument. In Appendix F, we refine the argument using the McDiarmid\u2019s inequality and a further decomposition of B. sition 3). Then, the standard argument from information theory approximately bounds the size of the typical set as |T| \u22642I(X;Zs l |Y )+CT for some CT > 0, approximately resulting in: with high probability A = O r ln |T| n ! = \u02dc O r (I(X; Zs l |Y ) + 1) n ! . Step 4. Finally, By combining the above bounds on A and B, we approximately conclude the result. 6.2. 
Proof Sketch of Theorem 2 Based on the result of Theorem 1 for fixed l \u2208[D] and \u03d5s l , we further generalize it for flexible l and learnable \u03d5s l . Our proof is based on the tricky usage of probabilistic bounds for different sample spaces in Theorem 1\u2019s proof. Step 1. Let l \u2208[D]. We first find a hypothesis space \u03a6l \u03b4 such that P(\u03d5S l / \u2208\u03a6l \u03b4) \u2264\u03b4 and |\u03a6l \u03b4| \u22642I(\u03d5S l ;S)+C\u03b4 for some C\u03b4 \u22650. We then construct the corresponding hypothesis space H by H = \u222a\u03d5l\u2208\u03a6l \u03b4H\u03d5l where H\u03d5l = {gl \u25e6\u03d5l | gl : Zl \u2192Rmy}. Step 2. We then obtain the sample complexity bound for each H\u03d5l (for each \u03d5l \u2208\u03a6l \u03b4) by using the result of Theorem 1. For each \u03d5l \u2208\u03a6l \u03b4 that is fixed independently of s; i.e., P(\u2200f \u2208H\u03d5l, B(f) \u2264Jl(\u03b4)) \u22651 \u2212\u03b4 where B(f) = EX,Y [\u2113(f(X), Y )] \u22121 n Pn i=1 \u2113(f(xi), yi) and Jl(\u03b4) is the right-hand side of Eq. (6). Then, by taking union bound with the equal weighting over all \u03d5l \u2208\u03a6l \u03b4, we have P(\u2200f \u2208H, B(f) \u2264J\u03b4,l) \u22651 \u2212\u03b4, where J\u03b4,l = Jl(\u03b4/(2I(\u03d5S l ;S)+C\u03b4)). Step 3. We now want to show that this bound holds for B(f s) instead of B(f) for f \u2208H. This is achieved if f s \u2208H. Since P(f S \u2208H) \u22651 \u2212\u03b4 from the construction of H and P(A \u2229B) \u2264P(B) for any events A and B, the following holds: P(B(f S) \u2264J\u03b4,l) \u2265P(f S \u2208H)P(B(f S) \u2264J\u03b4,l | f S \u2208H) \u2265P(f S \u2208H)(1 \u2212\u03b4) \u22651 \u22122\u03b4. Therefore, by replacing \u03b4 with \u03b4/2, we have P(B(f S) \u2264 J\u03b4/2,l) \u22651 \u2212\u03b4. Step 4. 
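Steps 1-3 of the sketch above can be restated compactly in display form (this only rewrites the sketch's inequalities; the constants are as in the text):

```latex
\begin{align*}
\text{Step 1:}\quad & \mathbb{P}\big(\phi^S_l \notin \Phi^l_\delta\big) \le \delta,
  \qquad |\Phi^l_\delta| \le 2^{I(\phi^S_l;S)+C_\delta}, \\
\text{Step 2:}\quad & \mathbb{P}\big(\forall f \in \mathcal{H},\; B(f) \le J_{\delta,l}\big) \ge 1-\delta,
  \qquad J_{\delta,l} = J_l\!\big(\delta/2^{I(\phi^S_l;S)+C_\delta}\big), \\
\text{Step 3:}\quad & \mathbb{P}\big(B(f^S) \le J_{\delta,l}\big)
  \ge \mathbb{P}\big(f^S \in \mathcal{H}\big)\,(1-\delta) \ge (1-\delta)^2 \ge 1-2\delta .
\end{align*}
```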
For the case of l = D + 1, the proof is significantly simplified because an entire model is an encoder as f = \u03d5D+1; i.e., we replace the result of Theorem 1 with Hoeffding\u2019s inequality to conclude that P(\u2200f \u2208 H\u03d5D+1, B(f) \u2264JD+1(\u03b4)) \u22651 \u2212\u03b4 where JD+1(\u03b4) = R(f) p (ln(1/\u03b4))/(2n). Using the same steps as the case of l \u2208[D], we prove that P(B(f S) \u2264J\u03b4/2,D+1) \u22651 \u2212\u03b4, where J\u03b4,D+1 = JD+1(\u03b4/(2I(\u03d5S l ;S)+C\u03b4)). By taking 8 \fHow Does Information Bottleneck Help Deep Learning? union bounds over l \u2208D \u2286{1, 2, . . . , D + 1}, we conclude P(\u2200l \u2208D, B(f S) \u2264J\u03b4/(2|D|),l) = P(B(f S) \u2264 minl\u2208D J\u03b4/(2|D|),l) \u22651 \u2212\u03b4. Step 5. Organizing the expression of J\u03b4/(2|D|),l yields the right-hand side of eq. (7), which proves Theorem 2. 7. Related Works The implicit minimization of mutual information I(X; Z) has been studied with the motivation of understanding why deep learning works through the lens of information bottlenecks (Shwartz-Ziv & Tishby, 2017). The previous work assumes the benefit of minimizing I(X; Z) and questioned whether the training of deep neural networks implicitly result in the minimization of I(X; Z). In contrast, we studied the benefit of (implicitly or explicitly) controlling I(X; Z). In this paper, we consider the generalization gap in deep learning (Nagarajan & Kolter, 2019; Zhang et al., 2021a;b; Kawaguchi et al., 2022a;b; Hu et al., 2022), which is different from generalization gaps studied in the field of information bottleneck. 
In the field of information bottleneck, previous studies have analyzed a generalization gap between the true mutual information and its empirical estimate (Shamir et al., 2010; Tishby & Zaslavsky, 2015) and the generalization gap on C(q\u03b8(Z|X), q(Y |Z)) (Vera et al., 2018) where C(p, q) is the cross entropy of q relative to p, q\u03b8(Z|X) is a randomized encoder with learnable parameters \u03b8, and q(Y |Z) is a simple count-based decoder with no learnable parameter. Unlike ours, their bounds on C(q\u03b8(Z|X), q(Y |Z)) scale with |X| ln n \u221an + |Z| \u221an (due to their dependence on 1 PX(xmin) \u2265|X| and |U| = |Z| in their notation). This dependence on |X| makes their bounds inapplicable as it requires the number of samples n \u226b|X|2. Here, the cross entropy C(q\u03b8(Z|X), q(Y |Z)) studied in the previous work is also different from the cross-entropy loss of deep learning, C(p(Y |X), q\u03b8(Y |X)), where p(Y |X) is a target distribution and q\u03b8(Y |X) represents a deep neural network with learnable parameters \u03b8. Therefore, we could not rely on any of these previous results from the field of information bottleneck. Another related yet different topic of information theory is to use I(f S; S) to compute generalization bounds (Xu & Raginsky, 2017; Bassily et al., 2018; Hellstr\u00a8 om & Durisi, 2020; Steinke & Zakynthinou, 2020). However, these previous bounds are not about information bottleneck as these do not utilize I(X; Zs l |Y ) (or I(X; Zs l )) and only uses I(f S; S), the mutual information between the training dataset S and the entire model f S = \u03d5S D+1. Thus, the previous bounds cannot provide insights or justifications on the information bottleneck principle unlike our bounds. Moreover, in Section 5, we demonstrate the advantage of I(X; Zs l |Y ) + I(\u03d5S l ; S) in our bound over I(f S; S) in the previous bounds. 
Here, notice that I(\u03d5S l ; S) \u0338= I(f S; S) for any l \u0338= D + 1, and we always have I(\u03d5S 1 ; S) \u2264\u00b7 \u00b7 \u00b7 \u2264 I(\u03d5S D; S) \u2264I(\u03d5S D+1; S) = I(f S; S) (e.g., see Fig. 2). See Appendix E.3 for more comparison with these previous bounds, which are not of information bottlenecks although they use mutual information. The various interesting properties of information bottleneck are discussed in (Achille & Soatto, 2018). However, they do not provide generalization bounds with information bottlenecks. Among others, they discuss a connection between I(X; Zs l ) and I(\u02dc \u03b8; S), where I represents an over-simplified version of mutual information in which everything is ignored except an artificially added noise; i.e., I(\u02dc \u03b8; S) \u225c\u22121 2 Pd i=1 log \u03b1i (this definition is given in Remark 4.2 of Achille & Soatto, 2018) where \u03b1i is the variance of the Gaussian noise \u03f5 that is artificially multiplied to the learned weights \u03b8 after training as \u02dc \u03b8 = \u03f5\u03b8. In other words, I completely ignores the dependence of \u03b8 on S, although it is the main factor in the question of generalization. For example, if we set \u03b1i = 1, we have I(\u02dc \u03b8; S) = 0 always, despite the fact that I(\u03b8; S) \u0338= 0 when \u03b8 is learned from S. This shows that the previous paper did not consider the mutual information between \u03b8 and S; it used the entropy of the artificially multiplied Gaussian noise \u03f5 as mutual information. Thus the meaningful factors of mutual information are ignored in the previous work. 
There are generalization bounds via different approaches including VC-dimension (Vapnik, 1999; Bartlett et al., 2019), Rademacher complexity (Bartlett & Mendelson, 2002; Truong, 2022; Tr\u00a8 auble et al., 2023), stability (Bousquet & Elisseeff, 2002; Deng et al., 2021), robustness (Xu & Mannor, 2012; Liu et al., 2021; Kawaguchi et al., 2022a; Liu et al., 2022), and PAC-Bayes (Dziugaite et al., 2021; Lotfi et al., 2022). Unlike these previous studies, we provided the first generalization bound via the information bottleneck. 8. Conclusion This study completed the proof of the previous conjecture with near-exponential improvements for the setting of fixed representations, proved the first rigorous generalization bound for the setting of learning representations, and empirically strengthened the findings with supporting experiments. This paper makes a contribution on technical aspects relevant for current and future methods of deep learning through the lens of information bottlenecks. Whereas information bottleneck is explicit in various algorithms (e.g., Federici et al., 2020; Sun et al., 2022; Li et al., 2022a;b; Su et al., 2023), it is also interesting to motivate future methods based on implicit effects of architectures (e.g., transformer and convolution) and training (e.g., SGD and self-supervised objectives) on information bottlenecks. 9 \fHow Does Information Bottleneck Help Deep Learning?" + }, + { + "url": "http://arxiv.org/abs/2206.13497v4", + "title": "Robustness Implies Generalization via Data-Dependent Generalization Bounds", + "abstract": "This paper proves that robustness implies generalization via data-dependent\ngeneralization bounds. As a result, robustness and generalization are shown to\nbe connected closely in a data-dependent manner. Our bounds improve previous\nbounds in two directions, to solve an open problem that has seen little\ndevelopment since 2010. The first is to reduce the dependence on the covering\nnumber. 
The second is to remove the dependence on the hypothesis space. We\npresent several examples, including ones for lasso and deep learning, in which\nour bounds are provably preferable. The experiments on real-world data and\ntheoretical models demonstrate near-exponential improvements in various\nsituations. To achieve these improvements, we do not require additional\nassumptions on the unknown distribution; instead, we only incorporate an\nobservable and computable property of the training samples. A key technical\ninnovation is an improved concentration bound for multinomial random variables\nthat is of independent interest beyond robustness and generalization.", + "authors": "Kenji Kawaguchi, Zhun Deng, Kyle Luh, Jiaoyang Huang", + "published": "2022-06-27", + "updated": "2022-08-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "math.PR", + "stat.ML" + ], + "main_content": "Introduction Robust optimization (Ben-Tal & Nemirovski, 1998; Bertsimas et al., 2011; Gabrel et al., 2014) is an in\ufb02uential paradigm for dealing with noisy or uncertain data. Many optimization problems are sensitive to perturbations in their parameters. Using powerful concepts derived from convexity and duality, robust optimization aims to \ufb01nd a solution to these optimization problems that is feasible with respect to all possible realizations of noisy or unknown parameters. Essentially, this criterion solves the optimization problem for the worst-case choice of the possible parameters. Robust optimization has been successfully applied in a variety of \ufb01elds, e.g., machine learning, to deal with inaccurately 1National University of Singapore 2Harvard University 3University of Colorado Boulder 4New York University. Correspondence to: Kenji Kawaguchi . Proceedings of the 39 th International Conference on Machine Learning (ICML), Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). 
observed training samples and strengthen the robustness of deep learning (Bhattacharyya et al., 2004; Globerson & Roweis, 2006; Deng et al., 2021b; Rice et al., 2021; Robey et al., 2021; Pedraza et al., 2022; Chen et al., 2022). Inspired by robust optimization, Xu & Mannor (2010; 2012) showed that robust algorithms generalize to unseen data well for various models including deep neural networks. Thus, the notion of robustness provides an alternative view in the topic of generalization (Vapnik, 1998; Bartlett & Mendelson, 2002; Bousquet & Elisseeff, 2002; Kawaguchi et al., 2017; Arora et al., 2019; Kawaguchi & Huang, 2019; Deng et al., 2021a; Hu et al., 2021; Pham et al., 2021; Zhang et al., 2021a;b). A learning algorithm A is said to be robust if the loss \u2113 of the hypothesis AS (returned by the learning algorithm A under the training dataset S) behaves similarly on two samples that are near each other: De\ufb01nition 1. A learning algorithm A is (K, \u03f5(\u00b7))-robust, for K \u2208N and \u03f5(\u00b7) : Zn \u2192R, if Z can be partitioned into K disjoint sets, denoted by {Ck}K k=1, such that the following holds for all S \u2208Zn: \u2200s \u2208S, \u2200z \u2208Z, \u2200k \u2208[K], if s, z \u2208 Ck, then |\u2113(AS, s) \u2212\u2113(AS, z)| \u2264\u03f5(S). Here, a training dataset S = (zi)n i=1 consists of n samples and \u2113: H \u00d7 Z \u2192R\u22650 is the per-sample loss, where H is a hypothesis space and zi \u2208Z is the i-th training data point. That is, a learning algorithm A is a mapping from S \u2208Zn to AS \u2208H. Using De\ufb01nition 1, Xu & Mannor (2010; 2012) proved that the generalization error of hypothesis AS has an upper bound that scales proportionally to \u03f5(S) + p K/n. 
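Definition 1 can be probed empirically on a toy problem. The code below (the partition, hypothesis, and loss are our own illustrative choices, not from the paper) measures the largest within-cell loss deviation between a training point and a probe point, i.e. the smallest eps(S) witnessing robustness for that fixed partition:

```python
import numpy as np

def robustness_constant(loss_fn, h, S, Z_probe, cell):
    """Max over training points s and probe points z sharing a partition cell
    of |loss(h, s) - loss(h, z)|: an empirical eps(S) for this partition."""
    eps = 0.0
    for s in S:
        for z in Z_probe:
            if cell(z) == cell(s):
                eps = max(eps, abs(loss_fn(h, s) - loss_fn(h, z)))
    return eps

rng = np.random.default_rng(0)
S = [(x, min(1.0, max(0.0, x + 0.05 * rng.standard_normal())))
     for x in rng.uniform(0, 1, 30)]                            # noisy y ~ x
Z_probe = [(x, y) for x in np.linspace(0, 0.99, 25) for y in np.linspace(0, 0.99, 25)]
cell = lambda z: (int(z[0] * 5), int(z[1] * 5))                 # K = 25 grid cells
loss_fn = lambda w, z: min(1.0, (w * z[0] - z[1]) ** 2)         # bounded loss
print(robustness_constant(loss_fn, 1.0, S, Z_probe, cell))
```

For this Lipschitz loss and 0.2-wide cells, the deviation is provably at most 0.8, illustrating how a fine partition (large K) buys a small eps(S), the tradeoff at the heart of the robustness bound.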
This bound is consequential in the theory of invariant classifiers (Sokolic et al., 2017a), adversarial examples (Cisse et al., 2017), majority voting (Xu & Mannor, 2012; Devroye et al., 2013), support vector machines (Xu & Mannor, 2012; Xu et al., 2009; Qi et al., 2013), lasso (Xu & Mannor, 2012; Hastie et al., 2019), principal component analysis (Xu & Mannor, 2012; Jolliffe & Cadima, 2016), deep neural networks (Xu & Mannor, 2012; Sokolic et al., 2017b; Cisse et al., 2017; Sener & Savarese, 2017; Gouk et al., 2021; Jia et al., 2019), metric learning (Bellet & Habrard, 2015; Shi et al., 2014), facial recognition (Ding et al., 2015; Tao et al., 2016), matrix completion (Luo et al., 2015), spectral clustering (Liu et al., 2017), domain adaptation (Redko et al., 2020), numerical analysis (Shen et al., 2020) and stochastic algorithms (Zahavy et al., 2016). The bound based on algorithmic robustness (Definition 1) has gained considerable interest in the community and has been discussed in the literature as listed above, partially because the dependence on the robustness term \u03f5(S) is natural and intuitive. However, the square root dependence on the partition size (or covering number) K is problematic, because K can be prohibitively large in many applications, especially in high-dimensional data where the covering number can be exponential in the dimension (Van Der Vaart et al., 1996; Vershynin, 2018). Indeed, the K dependence is one of the chief disadvantages of the robust algorithm framework and Xu & Mannor (2010; 2012) conjectured that it would be possible to reduce the dependency on K via adaptive partitioning but remarked that extending their proof to achieve this is complex, leaving this issue as an open problem.
The proof of the algorithmic robustness bound relies on the concentration results for multinomial random variables, in particular the \u21131 deviations (Xu & Mannor, 2012; Wellner et al., 2013). Spurred by applications in learning theory, the concentration of multinomial random variables has been an active area of research by itself beyond algorithmic robustness (Weissman et al., 2003; Devroye, 1983; Agrawal & Jia, 2017; Qian et al., 2020), where a particular attention has been paid to the dependence of the bound on K \u2014 the number of possible outcomes in the de\ufb01nition of the multinomial random variable. In the robust algorithmic context, K corresponds exactly to K in De\ufb01nition 1. In a paper previously published in NeurIPS (Agrawal & Jia, 2017), a signi\ufb01cant improvement in the \u221a K dependence was claimed which was later refuted by Qian et al. (2020) with the refutation being acknowledged by the authors (Agrawal & Jia, 2020). Thus, to date, there has been no success in reducing the \u221a K term reported in the literature despite its importance and several previous attempts. Importantly, Qian et al. (2020) established a lower bound that already scales as \u221a K; that is, we have matching upper and lower bounds in terms of K. Thus, it may seem that the open question posed in (Xu & Mannor, 2012) has been settled negatively and any attempts to reduce the \u221a K dependence is futile. However, similar to other lower bounds in machine learning, the lower bound given in (Qian et al., 2020) only means that there exists a worst-case distribution for which the (lower) bound cannot be further improved. It is plausible that this worst-case distribution is neither representative nor commonplace. Thus, by incorporating information from the training data, it may be possible to extract the properties of the underlying distribution, which may allow one to reduce the \u221a K dependence. 
In fact, by probing beyond the worst-case analysis, we show that non-uniform and purely data-dependent bounds can greatly outperform these previous bounds (that are implicitly derived for the worst-case distributions). Here, a bound is said to be non-uniform if the bound differs for different data distributions. Unlike the standard data-dependence through the outcome of the learning algorithm AS (e.g., in the robustness term \u03f5(S)), a bound is said to be purely data-dependent if the bound contains a term that is independent of the algorithm A and differs for different training data S. We summarize our main contributions below: 1. In Section 3, we address the open problem of reducing the \u221a K dependence without making any additional assumptions about the data distribution. The key insight (and challenge) here is to prove a purely data-dependent bound where the \u221a K dependence is replaced by an easily computable quantity that is a function of the training samples. This allows us to reduce the \u221a K dependence without presuming strong prior knowledge of the probability space and the learning algorithm. 2. A crucial technical innovation is a series of non-uniform and purely data-dependent bounds for multinomial random variables that greatly improve the classical bounds under certain conditions. A representative of our new bounds is stated in Section 3 (and others are presented in Appendices A and B). These bounds are likely of independent interest in the broader literature beyond robustness and generalization. 3. In addition to our main theorems, we provide abundant numerical simulations and several theoretical examples in which our bounds are provably superior in Sections 4 and 5. As a consequence of our improvements to algorithmic robustness, we can deduce immediate improvements to the many applications of algorithmic robustness listed above, ranging from invariant classifiers to numerical analysis and stochastic learning algorithms. 2.
Preliminaries This section introduces notation and previous results. 2.1. Notation For an integer n, we use [n] to denote the set of integers 1, . . . , n. For a \ufb01nite set B, we let |B| represent the number of elements in B. For a set S equipped with metric \u03c1, we de\ufb01ne \u02c6 S as an \u03b5-cover of S if for all s \u2208S, there exists \u02c6 s \u2208\u02c6 S such that \u03c1(s, \u02c6 s) \u2264\u03b5. We then de\ufb01ne the \u03b5-covering number as N(\u03b5, S, \u03c1) = min{| \u02c6 S| : \u02c6 S is an \u03b5-cover of S}. We use 1(\u00b7) as an indicator function, and \u2225\u00b7\u2225p is the standard p-norm for a vector. \fRobustness Implies Generalization via Data-Dependent Generalization Bounds 2.2. Problem Setting and Background In this study, we are interested in bounding the expected loss Ez[\u2113(AS, z)], where Ez denotes the expectation with respect to the sampling distribution. This is a quantity that cannot be computed or accessed. Accordingly, we obtain an upper bound by using the training loss 1 n Pn i=1 \u2113(AS, zi), which is a computable quantity, and by invoking other computable terms. A previous study (Xu & Mannor, 2012) used algorithmic robustness (De\ufb01nition 1) to achieve the following result: Proposition 1. (Xu & Mannor, 2012) Assume that for all h \u2208H and z \u2208Z, the loss is upper bounded by B i.e., \u2113(h, z) \u2264B. If the learning algorithm A is (K, \u03f5(\u00b7))-robust (with {Ck}K k=1), then for any \u03b4 > 0, with probability at least 1 \u2212\u03b4 over an iid draw of n samples S = (zi)n i=1, the following holds: Ez[\u2113(AS, z)] (1) \u22641 n n X i=1 \u2113(AS, zi) + \u03f5(S) + B r 2K ln 2 + 2 ln(1/\u03b4) n . For example, in the special case of supervised learning, the sample space can be decomposed as Z = X \u00d7 Y, where X is the input space and Y is the label space. However, note that Z can differ from the original space of the data points. 
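The epsilon-covering number N(eps, S, rho) defined above can be upper-bounded constructively with a greedy net (a standard construction; the point set below is illustrative):

```python
import numpy as np

def greedy_cover(points, eps):
    """Greedy epsilon-net under the l2 metric: every point ends up within eps
    of some returned center, so len(centers) upper-bounds N(eps, S, rho)."""
    centers = []
    for p in points:
        if all(np.linalg.norm(p - c) > eps for c in centers):
            centers.append(p)
    return centers

points = [np.array([float(i)]) for i in range(10)]   # S = {0, 1, ..., 9} on the line
print(len(greedy_cover(points, 0.5)), len(greedy_cover(points, 100.0)))  # prints: 10 1
```

The greedy centers are also 2*eps-separated, so the construction simultaneously lower-bounds the eps-packing number, which is the usual way covering and packing estimates are paired.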
For example, if the original data point is \u02dc z, we can use z = g(\u02dc z) for any \ufb01xed-function g. The previous paper (Xu & Mannor, 2012) also proves the same upper bound on 1 n Pn i=1 \u2113(AS, zi) \u2212Ez[\u2113(AS, z)], instead of Ez[\u2113(AS, z)] \u22121 n Pn i=1 \u2113(AS, zi). However, the empirical loss 1 n Pn i=1 \u2113(AS, zi) can be minimized during training; hence, we are typically interested in the upper bound on Ez[\u2113(AS, z)] \u22121 n Pn i=1 \u2113(AS, zi). The focus on this quantity. The relationship between algorithmic robustness and the multinomial distribution is apparent when we consider independent samples from the sample space of {Ck}K k=1. Then, the number of samples from each class, Ck, is multinomially distributed with pk = P(z \u2208Ck). The actual values of pk are not available to us. Therefore, it is natural that the concentration of the multinomial values around these expectations is required in the argument. The concentration of a multinomial random variable is of interest in theoretical probability and practical use in applied \ufb01elds such as statistics and computer science (Van Der Vaart et al., 1996). Consequently, several concentration bounds have been proposed in the literature (Weissman et al., 2003; Devroye, 1983; Agrawal & Jia, 2017; Qian et al., 2020; Van Der Vaart et al., 1996), for example: Proposition 2 (Bretagnolle-Huber-Carol inequality). (Van Der Vaart et al., 1996, Proposition A.6.6) If X1, . . . , XK are multinomially distributed with parameters n and p1, . . . , pK, then for any \u03b4 > 0, with probability at least 1 \u2212\u03b4, K X k=1 \f \f \f \fpk \u2212Xk n \f \f \f \f \u2264 r 2K ln 2 + 2 ln(1/\u03b4) n (2) Crucially, the bounds in the literature are uniform in the parameters pk, meaning that the right-hand side of the inequality is true for any set of parameters. 
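Proposition 2 is easy to sanity-check by simulation (the uniform p and sample sizes are our choices): the l1 deviation of the empirical frequencies should exceed the stated bound in at most a delta fraction of trials:

```python
import numpy as np

def bhc_violation_rate(p, n, delta=0.05, trials=2000, seed=0):
    """Fraction of multinomial draws whose l1 deviation sum_k |p_k - X_k/n|
    exceeds sqrt((2K ln 2 + 2 ln(1/delta)) / n); should be at most delta."""
    rng = np.random.default_rng(seed)
    K = len(p)
    bound = np.sqrt((2 * K * np.log(2) + 2 * np.log(1 / delta)) / n)
    X = rng.multinomial(n, p, size=trials)
    dev = np.abs(X / n - np.asarray(p)).sum(axis=1)
    return float(np.mean(dev > bound))

p = np.ones(8) / 8
print(bhc_violation_rate(p, n=200))   # well below delta = 0.05
```

In practice the observed violation rate is far below delta, hinting at the looseness for non-worst-case distributions that the data-dependent bounds of Section 3 exploit.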
A key step in our reduction of the √K dependence in algorithmic robustness is a non-uniform (and purely data-dependent) sharpening of the above concentration bound, which may be of independent interest beyond algorithmic robustness.

3. Main Theorems

In this section, we record our improvements to Proposition 1 along with our upgraded bounds for the multinomial distribution. We discuss our main contributions and relegate the complete proofs of theoretical results to the appendices.

3.1. Algorithmic Robustness

One of the main contributions of this study is the following refinement of the algorithmic robustness bound:

Theorem 1. If the learning algorithm A is (K, ε(·))-robust (with {C_k}_{k=1}^K), then for any δ > 0, with probability at least 1 − δ over an iid draw of n samples S = (z_i)_{i=1}^n, the following holds:

E_z[\ell(A_S, z)] \le \frac{1}{n}\sum_{i=1}^n \ell(A_S, z_i) + \epsilon(S) + \zeta(A_S)\left( (\sqrt{2}+1)\sqrt{\frac{|T_S|\ln(2K/\delta)}{n}} + \frac{2|T_S|\ln(2K/\delta)}{n} \right), \quad (3)

where I_k^S := {i ∈ [n] : z_i ∈ C_k}, ζ(A_S) := max_{z ∈ Z} ℓ(A_S, z), and T_S := {k ∈ [K] : |I_k^S| ≥ 1}.

Theorem 1 is a significant improvement over the previous bound (1) of Proposition 1, as (3) has a far better dependence on K: in terms of K, we have reduced √K to √(ln K). Overall, ignoring the log factor, K in the previous bound is replaced by |T_S| in ours. Here, |T_S| is the number of distinct classes C_k that actually appear in the single specific training dataset S; thus, |T_S| ≤ K by definition, and it is shown that |T_S| ≪ K in many general cases later, in Proposition 3 and Sections 4.2 and 5. For example, our experimental results in Section 5 indicate that |T_S| ≪ K in many natural settings, where we see an exponential discrepancy across a variety of real-world datasets and theoretical models.
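The quantity |T_S| used in Theorem 1 is directly computable from a sample and a partition. A minimal sketch (the partition into equal bins of [0, 1) is our illustrative choice):

```python
def occupied_cells(samples, cell_index, K):
    """T_S: the set of partition cells that contain at least one training sample."""
    return {cell_index(z) for z in samples if 0 <= cell_index(z) < K}

# Partition [0, 1) into K = 1000 equal bins; the samples cluster in a few bins.
K = 1000
samples = [0.101, 0.102, 0.55, 0.553, 0.5531, 0.9]
T_S = occupied_cells(samples, lambda z: int(z * K), K)
assert len(T_S) <= len(samples)  # |T_S| <= n always holds
assert len(T_S) < K              # and here far fewer than K cells are occupied
```

Note that |T_S| ≤ min(n, K) by construction, which is why replacing K with |T_S| can never loosen the bound.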
Intuitively, |T_S| is likely to be significantly less than K when there are many sparsely populated classes C_k, since it is then improbable that many of these classes are represented in the sample data. Our theoretical and experimental results demonstrate that such scenarios are prevalent in the field. The following proposition shows that |T_S| is indeed independent of K and only scales as ln n under a general mild condition on p_k, proving that |T_S| ≪ K and |T_S| ≪ n in a general case:

Proposition 3. Under the assumptions of Theorem 1, denote p_k = P(z ∈ C_k) where p_1 ≥ p_2 ≥ · · · ≥ p_K. If there are constants α, β, C > 0 such that p_k decays as p_k ≤ C e^{−(k/β)^α}, and ln n ≥ max{1, 2/α}, then with probability at least 1 − δ, the following holds:

|T_S| \le \begin{cases} \beta(\ln n)^{1/\alpha} + C(e-1)\frac{\beta}{\alpha} + \log(1/\delta) & \text{if } \alpha \ge 1, \\ (1 + 2C(e-1))\beta(\ln n)^{1/\alpha} + \log(1/\delta) & \text{if } \alpha < 1. \end{cases}

In Proposition 3, α controls the rate of decay of p_k. For real-world datasets, we expect the data distribution to concentrate on a lower-dimensional manifold or around a small number of modes. In such settings, we expect the probabilities p_k = P(z ∈ C_k) (arranged in decaying order) to decay fast. If α = ∞, the p_k concentrate on β (unknown) bins and we have |T_S| ≤ β. If α < ∞, we have p_k ≠ 0 for all k ∈ [K], but |T_S| is still upper bounded by β times a constant, up to a logarithmic factor, and is independent of K. Proposition 3 also demonstrates that even with perfect prior knowledge of the data distribution, |T_S| can be much smaller than K, because |T_S| adapts to the training data while the K cells need to cover all parts on which the distribution has positive mass. Without such perfect knowledge, |T_S| can be smaller than K by an even more significant margin.
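The regime of Proposition 3 is easy to simulate. The sketch below (our illustrative parameters, not the paper's experiments) samples from cell probabilities decaying as p_k ∝ exp(−(k/β)^α) over a huge number of cells K and confirms that the number of occupied cells |T_S| stays tiny and independent of K, in line with the O(β(ln n)^{1/α}) prediction.

```python
import math
import random

def sample_TS(K, beta, alpha, n, rng):
    """|T_S| when cell probabilities decay as exp(-(k/beta)^alpha) over K cells."""
    w = [math.exp(-((k + 1) / beta) ** alpha) for k in range(K)]
    Z = sum(w)
    cum, s = [], 0.0
    for x in w:
        s += x / Z
        cum.append(s)
    seen = set()
    for _ in range(n):
        u = rng.random()
        lo, hi = 0, K - 1  # binary search over the cdf
        while lo < hi:
            mid = (lo + hi) // 2
            if u <= cum[mid]:
                hi = mid
            else:
                lo = mid + 1
        seen.add(lo)
    return len(seen)

rng = random.Random(1)
K, beta, alpha, n = 10_000, 5.0, 2.0, 1_000
ts = sample_TS(K, beta, alpha, n, rng)
# Proposition 3 predicts |T_S| = O(beta * (ln n)^(1/alpha)), independent of K;
# with beta = 5 and alpha = 2 that is on the order of 5 * sqrt(ln 1000) ~ 13.
assert ts < 100 < K
```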
A crucial aspect of Theorem 1 is that T_S depends exclusively on the training sample data and not on the actual background distribution. Accordingly, our result is of practical value in statistical learning settings, where information about the actual distribution can only be obtained through the training sample data. Although the main breakthrough in Theorem 1 is the reduced K dependence, there is also a substantially refined dependence on the upper bound of the loss value: the replacement of B with ζ(A_S), where ζ(A_S) ≤ B; i.e., we replace a maximum over the entire hypothesis space with the single hypothesis returned by the algorithm. This can be a significant advantage for common loss functions, such as the square loss and the cross-entropy loss. Note that B in the previous bound is defined to be larger than ℓ(h, z) for all h ∈ H, meaning that B depends on the entire hypothesis space H. In contrast, ζ(A_S) in our bound depends only on the single actual hypothesis A_S returned by the specific algorithm for each dataset S. With a more refined analysis, we also prove a stronger (yet more complicated) version of Theorem 1:

Theorem 2. If the learning algorithm A is (K, ε(·))-robust (with {C_k}_{k=1}^K), then for any δ > 0, with probability at least 1 − δ over an iid draw of n examples S = (z_i)_{i=1}^n, the following holds:

E_z[\ell(A_S, z)] \le \frac{1}{n}\sum_{i=1}^n \ell(A_S, z_i) + \epsilon(S) + Q_1\sqrt{\frac{\ln(2K/\delta)}{n}} + \frac{2Q_2\ln(2K/\delta)}{n}, \quad (4)

where

Q_1 := \sum_{k \in T_S} \left( \alpha_{T_S^c}(A_S) + \sqrt{2}\,\alpha_k(A_S) \right)\sqrt{\frac{|I_k^S|}{n}}, \qquad Q_2 := \alpha_{T_S^c}(A_S)|T_S| + \sum_{k \in T_S} \alpha_k(A_S),

with T_S := {k ∈ [K] : |I_k^S| ≥ 1}, I_k^S := {i ∈ [n] : z_i ∈ C_k}, α_k(h) := E_z[ℓ(h, z) | z ∈ C_k], α_{T_S^c}(A_S) := max_{k ∈ T_S^c} α_k(A_S), and T_S^c := [K] \ T_S. Note that Σ_{k ∈ T_S} α_k(A_S) ≤ |T_S| ζ(A_S) and Σ_{k ∈ T_S} √(|I_k^S|/n) ≤ √|T_S| by the Cauchy-Schwarz inequality.
Thus, Theorem 2 is always at least as strong as Theorem 1. Furthermore, Theorem 2 significantly upgrades Theorem 1 approximately when Σ_{k ∈ T_S} α_k(A_S) ≪ |T_S| ζ(A_S) or Σ_{k ∈ T_S} √(|I_k^S|/n) ≪ √|T_S|. Otherwise put, if the maximum expected loss over the classes is much larger than the typical expected loss, or if the distribution of samples across the classes is lopsided, Theorem 2 is an even tighter bound. The complete proofs of Theorems 1 and 2 are provided in Sections C and E of the Appendix. We remark that our proof of Theorem 1 establishes a stronger statement, in which ζ(A_S) is replaced by max_{k ∈ [K]} E_z[ℓ(A_S, z) | z ∈ C_k] (≤ ζ(A_S)). This formulation may have advantages over that in Theorem 1 if the problem context reveals more information about the conditional expectation.

3.2. Concentration Bounds for the Multinomial Distribution

Let the vector X = (X_1, . . . , X_K) follow a multinomial distribution with parameters n and p = (p_1, . . . , p_K). As shown in the proof of Theorem 1, the key technical hurdle is to avoid an explicit √K dependence as we upper bound a quantity of the following form:

\sum_{i=1}^K a_i(X)\left( p_i - \frac{X_i}{n} \right), \quad (5)

where a_i is an arbitrary function with a_i(X) ≥ 0 for all i ∈ {1, . . . , K}. Importantly, the a_i(X) are functions of X_1, . . . , X_K, which makes this problem particularly challenging and further complicates the non-trivial correlations already present among the X_i. This difficulty is avoided in (Qian et al., 2020; Bellet & Habrard, 2015; Xu & Mannor, 2012) by using the global maximum of the loss function, at the cost of the √K dependence. Allowing a_i(X) to depend on X is critical in our analysis and underpins the improvement from B in Proposition 1 to ζ(A_S) in Theorem 1, in addition to the improvement from √K to √(ln K).
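The extra terms of Theorem 2 are all computable from per-class quantities. The sketch below (illustrative numbers; `alpha_unseen` stands in for the unknown α_{T_S^c}(A_S) bound on unseen classes and is an assumption of this sketch, not something the paper supplies) assembles the Q_1/Q_2 penalty from per-class conditional losses and counts.

```python
import math

def theorem2_penalty(alpha, counts, n, K, delta, alpha_unseen):
    """Assemble the Q1/Q2 penalty terms of Theorem 2.

    alpha[k]     = E[loss | z in C_k] for each class (known here by assumption);
    counts[k]    = |I_k^S|, the number of training samples in class C_k;
    alpha_unseen = a bound on the conditional loss over unseen classes T_S^c."""
    T = [k for k, c in enumerate(counts) if c >= 1]
    Q1 = sum(
        (alpha_unseen + math.sqrt(2) * alpha[k]) * math.sqrt(counts[k] / n)
        for k in T
    )
    Q2 = alpha_unseen * len(T) + sum(alpha[k] for k in T)
    log_term = math.log(2 * K / delta)
    return Q1 * math.sqrt(log_term / n) + 2 * Q2 * log_term / n

pen = theorem2_penalty(
    alpha=[0.2, 0.1, 0.05, 0.0], counts=[50, 30, 20, 0],
    n=100, K=4, delta=0.05, alpha_unseen=0.3,
)
assert pen > 0
```

Classes with zero conditional loss or zero counts contribute nothing to Q_1, which is exactly the mechanism behind the improvement over the uniform bound B.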
One example of our new non-uniform multinomial bounds is the following lemma.

Lemma 1. For any δ > 0, with probability at least 1 − δ,

\sum_{i=1}^K a_i(X)\left( p_i - \frac{X_i}{n} \right) \le \left( \sum_{i=1}^K a_i(X)\sqrt{p_i} \right)\sqrt{\frac{2\ln(K/\delta)}{n}}.

Comparing Lemma 1 to Proposition 2, in some range of parameters we have essentially replaced √K with Σ_i √p_i. If p_i = 1/K for all i, then we recover (2). Conversely, in the other extreme case, if p_1 ≈ 1 and the remaining p_i are near zero, then Σ_i √p_i ≈ 1. Thus, our result interpolates between these cases, and there is a wide range of possible distributions for which our bound is significantly better than Proposition 2. While Lemma 1 is of independent interest, it is likely difficult to use directly in machine learning because it depends on the probability distribution p, which is typically unknown. To overcome this issue, we further refine Lemma 1 and remove the dependence on p. To keep the notation consistent, we write p_k = P(z ∈ C_k) and X_k = Σ_{i=1}^n 1{z_i ∈ C_k}, with {C_k}_{k=1}^K being arbitrary in this subsection. Since {C_k}_{k=1}^K is arbitrary here, this still represents general multinomial distributions. One of the main contributions of this study is the following refinement of the concentration bound for general multinomial distributions:

Theorem 3. For any δ > 0, with probability at least 1 − δ,

\sum_{i=1}^K a_i(X)\left( p_i - \frac{X_i}{n} \right) \le \tilde{Q}\sqrt{\frac{|T_S|\ln(2K/\delta)}{n}} + a_{T_S^c}(X)\frac{2|T_S|\ln(2K/\delta)}{n}, \quad (6)

where Q̃ = a_{T_S}(X)√2 + a_{T_S^c}(X), a_{T_S}(X) = max_{i ∈ T_S} a_i(X) and a_{T_S^c}(X) = max_{i ∈ T_S^c} a_i(X) with T_S^c = [K] \ T_S. Further results of this nature and their technical proofs can be found in Sections A and B of the Appendix.

3.3. Discussion and Extensions

3.3.1. PROOF IDEAS AND CHALLENGES

The proof of Theorem 1 can be divided into three phases. In the first, we prove (a stronger version of) Lemma 1.
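The interpolation behavior of the Σ_i √p_i factor in Lemma 1 is easy to verify numerically. A minimal sketch (our illustrative distributions):

```python
import math

def lemma1_factor(p):
    """The sum_i sqrt(p_i) factor that replaces sqrt(K) in Lemma 1."""
    return sum(math.sqrt(pi) for pi in p)

K = 100
uniform = [1.0 / K] * K
spiked = [0.99] + [0.01 / (K - 1)] * (K - 1)

# Uniform probabilities recover sqrt(K), the worst case of Proposition 2 ...
assert abs(lemma1_factor(uniform) - math.sqrt(K)) < 1e-9
# ... while a spiked distribution gives a factor far smaller than sqrt(K).
assert lemma1_factor(spiked) < math.sqrt(K) / 2
```

For the spiked distribution the factor is roughly 2, versus √K = 10, which is the kind of gap that makes the non-uniform bound worthwhile.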
Next, we invoke Lemma 1 to prove Theorem 3. Finally, we deduce Theorem 1 from Theorem 3. Thus, the first obstacle is to establish Lemma 1, which supplants Proposition 2. After this key step, the next challenge lies in going from Lemma 1 to Theorem 3, which requires us to remove the Σ_i √p_i term in Lemma 1 without incurring the √K dependence. For example, if we naively bound Σ_i √p_i with the Cauchy-Schwarz inequality, we have that

\sum_{i=1}^K \sqrt{p_i} \le \sqrt{\sum_{i=1}^K p_i}\,\sqrt{\sum_{i=1}^K 1} = \sqrt{K}.

Similarly, if we apply Jensen's inequality, which relies on the concavity of the square root function on this domain, we find that

\frac{1}{K}\sum_{i=1}^K \sqrt{p_i} \le \sqrt{\frac{1}{K}\sum_{i=1}^K p_i} = \sqrt{\frac{1}{K}},

which again implies that Σ_{i=1}^K √p_i ≤ √K. Thus, both approaches reproduce the √K dependence, illustrating the challenge of removing the Σ_i √p_i term without it. Our novel observation is to first decompose the sum as

\sum_{i=1}^K \left( p_i - \frac{X_i}{n} \right) = \sum_{i \in T_S} \left( p_i - \frac{X_i}{n} \right) + \sum_{i \notin T_S} p_i = \sum_{i \in T_S} \left( p_i - \frac{X_i}{n} \right) + \left( 1 - \sum_{i \in T_S} p_i \right), \quad (7)

and to upper bound the second term 1 − Σ_{i ∈ T_S} p_i by using a lower bound on Σ_{i ∈ T_S} (p_i − X_i/n). That is, if Σ_{i ∈ T_S} (p_i − X_i/n) ≥ −c for some c > 0, then 1 − Σ_{i ∈ T_S} p_i ≤ 1 + c − Σ_{i ∈ T_S} X_i/n = c. The second line of (7) is a conceptually crucial step, where we convert the question of upper bounding Σ_{i ∉ T_S} p_i into that of lower bounding Σ_{i ∈ T_S} p_i. If we directly upper bound Σ_{i ∉ T_S} p_i, it incurs a cost of √(K − |T_S|), resulting in a √K dependence. This step of decomposing the sum and converting the upper bound into a lower bound is designed to avoid the √K dependence.
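The decomposition identity (7) can be checked numerically on any multinomial outcome. A minimal sketch (illustrative probabilities and counts; the identity uses Σ_i p_i = 1 and Σ_i X_i = n):

```python
def decomposition_check(p, counts, n):
    """Verify identity (7): the full sum over cells equals the sum over
    occupied cells T_S plus the unseen probability mass 1 - sum_{i in T_S} p_i."""
    T = [i for i, c in enumerate(counts) if c >= 1]
    lhs = sum(pi - ci / n for pi, ci in zip(p, counts))
    occupied = sum(p[i] - counts[i] / n for i in T)
    unseen = 1.0 - sum(p[i] for i in T)
    return lhs, occupied + unseen

p = [0.5, 0.3, 0.15, 0.05]
counts = [6, 3, 1, 0]  # n = 10 samples; the last cell is unseen
lhs, rhs = decomposition_check(p, counts, 10)
assert abs(lhs - rhs) < 1e-12
```

The point of the identity is that the unseen mass 1 − Σ_{i ∈ T_S} p_i is controlled through the occupied cells alone, sidestepping the √(K − |T_S|) cost of bounding the unseen cells directly.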
Now, the problem has been reduced to finding tight lower and upper bounds on these quantities, which is still non-trivial and is described in Appendices A and B.

3.3.2. PSEUDO-ROBUSTNESS

Xu & Mannor (2012) also introduced a more general notion of pseudo-robustness, which relaxes algorithmic robustness by only requiring the nearness of the loss values to hold for a subset of the training samples:

Definition 2. A learning algorithm A is (K, ε(·), n̂(·))-pseudo-robust, for K ∈ N, ε(·) : Z^n → R, and n̂(·) : Z^n → {1, . . . , n}, if Z can be partitioned into K disjoint sets, denoted by {C_k}_{k=1}^K, such that for all S ∈ Z^n, there exists a subset of training samples Ŝ with |Ŝ| = n̂(S) for which the following holds: for all s ∈ Ŝ, all z ∈ Z, and all k = 1, . . . , K, if s, z ∈ C_k, then |ℓ(A_S, s) − ℓ(A_S, z)| ≤ ε(S).

For pseudo-robustness, we have proved the following analog of Theorem 1:

Theorem 4. If the learning algorithm A is (K, ε(·), n̂(·))-pseudo-robust (with {C_k}_{k=1}^K), then for any δ > 0, with probability at least 1 − δ over an iid draw of n examples S = (z_i)_{i=1}^n, the following holds:

E_z[\ell(A_S, z)] \le \frac{1}{n}\sum_{i=1}^n \ell(A_S, z_i) + \frac{\hat{n}(S)}{n}\epsilon(S) + \frac{n - \hat{n}(S)}{n}\hat{\zeta}(A, S) + \zeta(A_S)\left( (\sqrt{2}+1)\sqrt{\frac{|T_S|\ln(2K/\delta)}{n}} + \frac{2|T_S|\ln(2K/\delta)}{n} \right),

where ζ̂(A, S) := max_{(k,i) ∈ [K]×[n]} |α_k(A_S) − ℓ(A_S, z_i)|, ζ(A_S) := max_{k ∈ [K]} E_z[ℓ(A_S, z) | z ∈ C_k], and T_S := {k ∈ [K] : |I_k^S| ≥ 1} with I_k^S := {i ∈ [n] : z_i ∈ C_k}.

Theorem 2 admits an analogous generalization; a precise statement and proof can be found in Appendix F. These theorems offer corresponding improvements to the pseudo-robustness bounds attained in (Xu & Mannor, 2012).

4.
Examples

Our contributions augment the abstract framework of algorithmic robustness. We do not use application-dependent information, nor do we append any restrictive assumptions; therefore, we can deduce improvements to the many applications that employ the structure of algorithmic robustness. After presenting known examples in Section 4.1, we provide new theoretical comparisons via examples in Section 4.2.

4.1. Robust Algorithms

Although our Theorems 1 and 2 are applicable to a wide range of applications, this section provides only a few simple examples from (Xu & Mannor, 2012) to which they can be applied. When we have the decomposition z ∈ Z = X × Y, with X as the input space and Y as the output space, we let z^(x) ∈ X and z^(y) ∈ Y denote the X and Y components of a sample point z, respectively, with z = (z^(x), z^(y)). We also write S = (s_1, . . . , s_n).

Example 1 (Xu & Mannor, 2012) (Lipschitz continuous functions). Broad classes of learning problems are set in spaces with natural metrics. If the loss function is Lipschitz, which is a simple and natural condition, then the algorithm is robust. More precisely, if Z is compact with respect to a metric ρ and ℓ(A_S, ·) is Lipschitz continuous with Lipschitz constant c(S), that is, |ℓ(A_S, z) − ℓ(A_S, z′)| ≤ c(S)ρ(z, z′) for all z, z′ ∈ Z, then A is (N(γ/2, Z, ρ), c(S)γ)-robust for all γ > 0.

Example 2 (Xu & Mannor, 2012) (Lasso). Lasso is a workhorse of modern machine learning (Hastie et al., 2019). We assume that Z is compact, and we use the loss function ℓ(A_S, z) = |z^(y) − A_S(z^(x))|. Lasso can be formulated as the optimization problem

\text{minimize}_w \quad \frac{1}{n}\sum_{i=1}^n (s_i^{(y)} - w^\top s_i^{(x)})^2 + c\|w\|_1.

This algorithm is (N(ν/2, Z, ‖·‖_∞), ν((1/n) Σ_{i=1}^n (s_i^(y))^2)/c + ν)-robust for all ν > 0.

Example 3.
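Example 1 can be checked numerically: for a Lipschitz loss, the within-cell loss variation over any cell of diameter γ is at most c(S)·γ. A minimal sketch (the 1-Lipschitz loss on [0, 1] and the cell grid are our illustrative choices):

```python
def robustness_eps(loss, cells, grid):
    """Largest within-cell variation of the loss over a discretized domain.

    For a c-Lipschitz loss and cells of diameter gamma, Example 1 guarantees
    this quantity is at most c * gamma."""
    worst = 0.0
    for lo, hi in cells:
        vals = [loss(z) for z in grid if lo <= z <= hi]
        if vals:
            worst = max(worst, max(vals) - min(vals))
    return worst

loss = lambda z: abs(z - 0.3)  # 1-Lipschitz on [0, 1]
gamma = 0.1
cells = [(k * gamma, (k + 1) * gamma) for k in range(10)]
grid = [i / 1000 for i in range(1001)]
eps = robustness_eps(loss, cells, grid)
assert eps <= 1.0 * gamma + 1e-9  # epsilon(S) <= c(S) * gamma, here c(S) = 1
```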
(Xu & Mannor, 2012) (Principal Component Analysis). Let Z ⊂ R^m be a set with maximum ℓ_2 norm bounded by B. If we use the loss function ℓ((w_1, . . . , w_d), z) = Σ_{j=1}^d (w_j^⊤ z)^2, then finding the first d principal components via the optimization problem

\text{maximize} \quad \sum_{i=1}^n \sum_{j=1}^d (w_j^\top s_i)^2 \quad \text{subject to} \quad \|w_j\|_2 = 1 \text{ and } w_i^\top w_j = 0 \text{ for } i \ne j,

is (N(γ/2, Z, ‖·‖_2), 2dγB)-robust for all γ > 0.

Theorems 1 and 2 can be used as a black-box mathematical tool for many more of the existing applications cataloged in the introduction.

4.2. Theoretical Comparisons

Here, we further demonstrate the advantage of using our bounds in Section 3 over the bounds provided by Xu & Mannor (2012) and the bounds obtained via uniform stability (Bousquet & Elisseeff, 2002), using Lasso, least squares regression, neural networks, and discrete-valued neural communication as examples. The first example demonstrates that when the data are embedded with high probability on a low-dimensional manifold in the data space, our bound is much stronger than that of (Xu & Mannor, 2012):

Example 4 (Lasso). Recall that in Example 2, Lasso is (N(ν/2, Z, ‖·‖_∞), ν((1/n) Σ_{i=1}^n (s_i^(y))^2)/c + ν)-robust for all ν > 0. Consider z^(y) ∈ R and z^(x) ∈ R^d. Given any ν > 0, let z follow a distribution D_z such that z^(x) = (x^(1)⊤, x^(2)⊤)⊤, where x^(1) ∼ N(0, I_p)|_{[−1,1]^p} (a truncated Gaussian on [−1, 1]^p), x^(2) ∼ N(μ, σ^2 I_r)|_{[−1,1]^r} with r = d − p, and z^(y) = w^{*⊤} z^(x), where ‖w^*‖_1 ≤ 1. For sufficiently small σ, we can check that the ν-covering of the data space [−1, 1]^d satisfies Proposition 3, with β = (2/ν)^p and α = 2.
As a consequence, we have |T_S| = Θ((2/ν)^{p+1}) with probability at least 1 − δ. Since N(ν/2, Z, ‖·‖_∞) = Θ((2/ν)^{d+1}), there exist D, N such that for any d > D and n > N with d ≫ p, there exists (μ, σ) such that the bound in Theorem 1 is much tighter than that in Proposition 1, as |T_S| = Θ((2/ν)^{p+1}) ≪ Θ((2/ν)^{d+1}) = N(ν/2, Z, ‖·‖_∞). See Appendix G for more details.

In the next example, we can see that our bound is much tighter than the bound obtained via uniform stability when there are outliers in the data:

Example 5 (Regularized least squares regression). We refer to the example in (Bousquet & Elisseeff, 2002) for regularized least squares regression. Specifically, z^(y) ∈ [0, B] and z^(x) ∈ [0, 1]. The regularized least squares regression is defined as A_S = argmin_w (1/n) Σ_{i=1}^n ℓ(w, z_i) + λ|w|^2, where ℓ(w, z) = (w · z^(x) − z^(y))^2 and w ∈ R. For this example, Bousquet & Elisseeff (2002) observe that 0 ≤ ℓ(w, z) ≤ √(B/λ), and establish the stability bound β ≤ 2B^2/(λn). Now, consider the following distribution on z: z^(y) = w^* · z^(x) + ε1(|ε| < B), where z^(x) follows a continuous distribution on [0, 1] (for instance, the uniform distribution on [0, 1]) and ε ∼ N(0, σ^2). Without loss of generality, let w^* = 1. In this example, we may observe large outliers with small probability. One can check that, for a suitably chosen σ, Proposition 3 holds with β = 2/ν and α = 2. Thus, there exists a threshold N such that for any n > N, there exist ν > 0, σ > 0, and δ > 0 such that, with probability at least 1 − δ, |T_S| = Θ(2/ν). Thus, if B^2/λ ≫ 2/ν, the bound in Theorem 1 is a far more precise bound than that obtained via uniform stability.
For details of the proof, we refer the reader to Appendix G. Although Example 5 compares our robustness bound and the uniform stability bound, we emphasize that robustness and stability are very different properties with distinct strengths and weaknesses: one setting may prefer the robustness framework while another favors the stability approach. For example, a learning algorithm may be robust but not stable, e.g., lasso regression (Xu et al., 2009), and vice versa. Accordingly, this paper focuses on fundamental advancements of the robustness framework and of the statistical bound for general multinomial distributions. Furthermore, the following examples illustrate some of the immediate improvements for robust margin deep neural networks and discrete-valued neural communication:

Example 6 (Robust margin deep neural networks). The previous paper (Sokolic et al., 2017b) uses Proposition 1 (in our paper) with the ε-covering of X for robust margin neural networks. Our new theorems (Theorem 1 or Theorem 2) immediately improve their bounds by replacing K = 2^{k+1}(CM)^k/γ_b^k in their bounds with |T_S|. In our paper, the comparison of K vs. |T_S| for the ε-covering of X is shown in Figure 3 for real-life datasets. By plugging these values of K and |T_S| into the previous bounds and our versions, we obtain exponential improvements over the previous bounds for robust margin deep neural networks.

Example 7 (Discrete-valued neural communication). The bound in Theorem 3 of the recent paper on discrete-valued neural communication (Liu et al., 2021) scales at the rate of LG (which is the size of the discrete bottleneck). Since their proof uses Proposition 2 (in our paper) to bound the left-hand side of (6) (in our Theorem 3), applying our Theorem 3 yields an improvement by replacing LG with |T_S| for discrete-valued neural communication.

5.
Experiments

This section establishes the advantage of our new bounds via experiments using both synthetic data and real-world data. We generated synthetic data by sampling from beta distributions and Gaussian mixture distributions with a variety of hyperparameters. For real-world data, we adopted the standard benchmark datasets: MNIST (LeCun et al., 1998), CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), Fashion-MNIST (FMNIST) (Xiao et al., 2017), Kuzushiji-MNIST (KMNIST) (Clanuwat et al., 2019), and Semeion (Srl & Brescia, 1994). The value of ε(S) is exactly the same for the previous bound and our bound in all the experiments.

[Figure 1: The values of K versus |T_S| with synthetic data and the ε-covering of the original space, for Beta(0.1, 0.1), Beta(0.1, 10), Gauss mix (0.01), and Gauss mix (1.0). The plot shows the mean of 10 random trials. The value of K increases exponentially as d increases linearly, whereas |T_S| does not.]

[Figure 2: The values of K versus |T_S| with synthetic data and the inverse image of the ε-covering in randomly projected spaces, for the same four distributions. The plot shows the mean and one standard deviation of 10 random trials. We still have |T_S| < K with random projections to reduce K.]
To choose the partition {C_k}_{k=1}^K, in addition to other examples, we used the ε-covering of the original input space X as our primary example, as this is the default option in (Xu & Mannor, 2012). The data space is normalized such that X ⊆ [0, 1]^d for the dimensionality d of each input data point. Accordingly, we used the infinity norm and a diameter of 0.1 for the ε-covering in all experiments. See Appendix H for more details on the experimental setup.

5.1. Synthetic data

Figure 1 shows the values of K and |T_S| for the synthetic data with the partition {C_k}_{k=1}^K being the ε-covering of X. Here, Beta(α, β) indicates the beta distribution with hyperparameters α and β, and Gauss mix (σ) means the mixture of five Gaussian distributions with standard deviation σ. Appendix H presents more results with different distributions, showing the same qualitative behavior in all cases. While the ε-covering of the original input space X is the default example from the previous paper (Xu & Mannor, 2012), in Figure 1 we see that K grows rapidly as d increases. Therefore, to reduce K significantly, we also propose utilizing the inverse image of the ε-covering in a randomly projected space. That is, given a random matrix A, we use the ε-covering of the space of u = Ax to define the pre-partition {C̃_k}_{k=1}^K. Then, the partition {C_k}_{k=1}^K is defined by C_k = {x ∈ X : Ax ∈ C̃_k}. We randomly generated a matrix A ∈ R^{3×d} in each trial. Figure 2 shows the values of K versus |T_S| for the synthetic data with the partition {C_k}_{k=1}^K being the inverse image of the ε-covering in randomly projected spaces. As can be seen, even in the case where K is reduced via random projection, we have |T_S| ≪ K. Thus, in both cases, our bounds are significantly tighter than the previous bound for these synthetic data.

5.2.
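Both partition choices described above are straightforward to implement. The sketch below (illustrative data and parameters; the 2-dimensional synthetic manifold is our construction, not the paper's exact setup) counts |T_S| under the grid partition of [0, 1]^d with cell diameter 0.1 and under the inverse image of a grid in a random 3-dimensional projection u = Ax.

```python
import math
import random

def ts_via_grid(xs, diam):
    """|T_S| for the partition given by an infinity-norm grid with cell size diam."""
    return len({tuple(math.floor(v / diam) for v in x) for x in xs})

def ts_via_projection(xs, A, diam):
    """|T_S| for the inverse image of a grid in the projected space u = Ax."""
    proj = [[sum(a * v for a, v in zip(row, x)) for row in A] for x in xs]
    return len({tuple(math.floor(u / diam) for u in p) for p in proj})

rng = random.Random(0)
d, n, diam = 10, 500, 0.1
# Data concentrated near a 2-dimensional slice of [0, 1]^d.
xs = []
for _ in range(n):
    a, b = rng.random(), rng.random()
    xs.append([a, b] + [0.5 + 0.001 * rng.random() for _ in range(d - 2)])

K_full = int(1 / diam) ** d  # cells of the full grid: 10^10
assert ts_via_grid(xs, diam) <= 100 < K_full  # only the 2 free coordinates vary

A = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(3)]
assert ts_via_projection(xs, A, diam) <= n  # |T_S| <= n always
```

In both cases |T_S| is bounded by n and, for data near a low-dimensional structure, by the number of cells that structure intersects, mirroring the behavior reported in Figures 1 and 2.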
Real-world data

Figure 3 shows the values of K versus |T_S| for the real-world data with the partition {C_k}_{k=1}^K being the ε-covering of X. All the training data points of each dataset were used. As can be seen, we have |T_S| ≪ K for the real-world data. To reduce the value of K, we additionally propose the following new method: as unlabeled data are available in many applications, we propose to use them to help define the partition {C_k}_{k=1}^K. The key idea here is that the choice of partition {C_k}_{k=1}^K has to be independent of the labeled data used in the training loss in Theorems 1 and 2, but it can depend on the unlabeled data. In other words, given a set of unlabeled data points {x̄_k}_{k=1}^K, the partition {C_k}_{k=1}^K is defined by clustering with the unlabeled data as C_k = {x ∈ X : k = argmin_{k′ ∈ [K]} ‖x − x̄_{k′}‖_2}. Following the literature on semi-supervised learning, we split the training data points into labeled data points (500 for Semeion and 5000 for all other datasets) and unlabeled data points (the remainder of the training data).

[Figure 3: The values of K versus |T_S| with real-world data (MNIST, CIFAR-10, CIFAR-100, SVHN, FMNIST, KMNIST, Semeion) and the ε-covering. The values of |T_S| are extremely small compared to those of K in all datasets.]

[Figure 4: The values of K versus |T_S| with real-world data and the clustering using unlabeled data. With clustering to reduce K, we still have |T_S| < K. Here, |T_S| was close to zero for Semeion.]

Figure 4 shows the values of K versus |T_S| for the real-world data with the partition {C_k}_{k=1}^K being the clustering with the unlabeled data.
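The unlabeled-data clustering partition described above reduces, per point, to a nearest-centroid assignment. A minimal sketch (toy 2-D points of our choosing):

```python
def nearest_centroid_partition(x, centroids):
    """Cell index of x under C_k = {x : k = argmin_k' ||x - xbar_k'||_2}."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(centroids)), key=lambda k: d2(x, centroids[k]))

# Unlabeled points define the K = 4 centroids; |T_S| is then measured on the
# labeled points only, keeping the partition independent of the training loss.
unlabeled = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
labeled = [[0.1, 0.1], [0.05, 0.0], [0.9, 0.95]]
T_S = {nearest_centroid_partition(x, unlabeled) for x in labeled}
assert T_S == {0, 1}  # only two of the four cells are occupied
```

Because the centroids come from the unlabeled split, the partition stays independent of the labeled samples appearing in the training loss, which is the condition Theorems 1 and 2 require.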
As can be seen, even in this case with the significantly reduced K, we still have |T_S| ≪ K. Figure 5 shows the values of K vs. |T_S| for real-life datasets with the partition being the inverse image of the ε-covering in randomly projected spaces. The random projection was conducted in the same manner as in Figure 2, without using unlabeled data. The projection reduced the value of K significantly, and yet we still have |T_S| ≪ K. Thus, in all three cases, our bounds are significantly tighter than the previous bound for these real-world data.

6. Conclusion

Since its introduction in 2010, algorithmic robustness has been a popular approach for analyzing learning algorithms (Xu & Mannor, 2010; 2012; Bellet & Habrard, 2015; Jolliffe & Cadima, 2016). In the original manuscript, which initiated the study of algorithmic robustness, Xu & Mannor (2010; 2012) pointed out that one disadvantage of their method is the dependence of the bound on the covering number of the sample space. They posed to the community the open problem of finding a mechanism to improve this dependence. Despite the popularity of the framework and several unsuccessful attempts, no significant progress has been made in this regard (Qian et al., 2020; Agrawal & Jia, 2017; 2020).

[Figure 5: The values of K versus |T_S| with real-world data (MNIST, CIFAR-10, CIFAR-100, SVHN, FMNIST, KMNIST, Semeion) and random projection. With random projection to reduce K, we still have |T_S| < 30 < K = 100 < n ≈ 60,000 for the real-life datasets. Here, n is the full training data size of each dataset: e.g., n = 60,000 for MNIST.]

In this study, we provide tighter bounds for algorithmic robustness and general multinomial distributions. Our results establish natural and easily verified conditions under which the dependence on K can be greatly reduced.
Additionally, we demonstrate that the expected loss can be controlled by examining the single hypothesis returned by an algorithm, whereas in (Xu & Mannor, 2012), the entire hypothesis space has to be analyzed. This is a considerable gain for several common loss functions. Our bound is both practical and effective in the machine learning setting, as it depends only on the training samples. Furthermore, we provided theoretical and numerical examples in which our bounds proved superior to those of Xu & Mannor (2012) and those that follow from uniform stability (Bousquet & Elisseeff, 2002). Our experimental simulations show that on common datasets and popular theoretical models, our bound is exponentially better than the algorithmic robustness bound (Xu & Mannor, 2012). These improvements to the foundations of algorithmic robustness have immediate impacts on applications ranging from metric learning to invariant classifiers. The main limitation of our approach is that we cannot know the values of the bounds until the training data are specified; i.e., |T_S| in our bound is data-dependent, whereas K in the previous bound is data-independent. This data-dependence might not be preferable in some applications, where we may want to compute a bound before seeing the training data.

Acknowledgements

The research of J.H. is supported by the Simons Foundation as a Junior Fellow at the Simons Society of Fellows, and NSF grant DMS-2054835. The research of Z.D. is supported by Sloan Foundation grants, NSF grant 1763665, and the Simons Foundation Collaboration on the Theory of Algorithmic Fairness." }, { "url": "http://arxiv.org/abs/2106.14836v4", "title": "Understanding Dynamics of Nonlinear Representation Learning and Its Application", "abstract": "Representations of the world environment play a crucial role in artificial\nintelligence.
It is often inefficient to conduct reasoning and inference\ndirectly in the space of raw sensory representations, such as pixel values of\nimages. Representation learning allows us to automatically discover suitable\nrepresentations from raw sensory data. For example, given raw sensory data, a\ndeep neural network learns nonlinear representations at its hidden layers,\nwhich are subsequently used for classification (or regression) at its output\nlayer. This happens implicitly during training through minimizing a supervised\nor unsupervised loss in common practical regimes of deep learning, unlike the\nneural tangent kernel (NTK) regime. In this paper, we study the dynamics of\nsuch implicit nonlinear representation learning, which is beyond the NTK\nregime. We identify a pair of a new assumption and a novel condition, called\nthe common model structure assumption and the data-architecture alignment\ncondition. Under the common model structure assumption, the data-architecture\nalignment condition is shown to be sufficient for the global convergence and\nnecessary for the global optimality. Moreover, our theory explains how and when\nincreasing the network size does and does not improve the training behaviors in\nthe practical regime. Our results provide practical guidance for designing a\nmodel structure: e.g., the common model structure assumption can be used as a\njustification for using a particular model structure instead of others. We also\nderive a new training framework based on the theory. 
The proposed framework is\nempirically shown to maintain competitive (practical) test performances while\nproviding global convergence guarantees for deep residual neural networks with\nconvolutions, skip connections, and batch normalization with standard benchmark\ndatasets, including CIFAR-10, CIFAR-100, and SVHN.", + "authors": "Kenji Kawaguchi, Linjun Zhang, Zhun Deng", + "published": "2021-06-28", + "updated": "2022-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "math.OC", + "stat.ML" + ], + "main_content": "Introduction LeCun, Bengio, and Hinton (LeCun et al., 2015) described deep learning as one of the hierarchical nonlinear representation learning approaches: Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. (p. 436) In applications such as computer vision and natural language processing, the success of deep learning can be attributed to its ability to learn hierarchical nonlinear representations by automatically changing nonlinear features and kernels during training based on the given data. This is in contrast to the neural tangent kernel (NTK) regime, extremely wide neural networks, and classical machine-learning methods, where representations or equivalently nonlinear features and kernels are (approximately) fixed during training. 
Deep learning in practical regimes, which has the ability to learn nonlinear representation (Bengio et al., 2013), has had a profound impact in many areas, including object recognition in computer vision (Rifai et al., 2011; Hinton et al., 2006; Bengio et al., 2007; Ciregan et al., 2012; Krizhevsky et al., 2012), style transfer (Gatys et al., 2016; Luan et al., 2017), image super-resolution (Dong et al., 2014), speech recognition (Dahl et al., 2010; Deng et al., 2010; Seide et al., 2011; Mohamed et al., 2011; Dahl et al., 2011; Hinton et al., 2012), machine translation (Schwenk et al., 2012; Le et al., 2012), paraphrase detection (Socher et al., 2011a), word sense disambiguation (Bordes et al., 2012), and sentiment analysis (Glorot et al., 2011; Socher et al., 2011b). However, we do not yet know the precise condition that makes deep learning tractable in the practical regime of representation learning. To initiate a study towards such a condition, we consider the following problem setup that covers deep learning in the practical regime and other nonlinear representation learning methods in general. We are given a training dataset ((x_i, y_i))_{i=1}^n of n samples, where x_i \u2208 X \u2286 R^{m_x} and y_i \u2208 Y \u2286 R^{m_y} are the i-th input and the i-th target, respectively. We would like to learn a predictor (or hypothesis) from a parametric family H = {f(\u00b7, \u03b8) : R^{m_x} \u2192 R^{m_y} | \u03b8 \u2208 R^d} by minimizing the objective L: L(\u03b8) = (1/n) \u2211_{i=1}^n \u2113(f(x_i, \u03b8), y_i), (1) where \u2113: R^{m_y} \u00d7 Y \u2192 R_{\u22650} is the loss function that measures the discrepancy between the prediction f(x_i, \u03b8) and the target y_i for each sample. In this paper, we allow y_i = x_i (for i = 1, . . . , n) to include the setting of unsupervised learning. The function f is also allowed to represent a wide range of machine learning models. 
For a (deep) neural network, f(x_i, \u03b8) represents the pre-activation output of the last layer of the network, and the parameter vector \u03b8 \u2208 R^d contains all the trainable parameters, including weights and bias terms of all layers. For example, one of the simplest models is the linear model in the form of f(x, \u03b8) = \u03c6(x)\u22a4\u03b8, where \u03c6 : X \u2192 R^d is a fixed function and \u03c6(x) is a nonlinear representation of the input data x. This is a classical machine learning model where much of the effort goes into the design of the hand-crafted feature map \u03c6 via feature engineering (Turner et al., 1999; Zheng and Casari, 2018). In this linear model, we do not learn the representation \u03c6(x) because the feature map \u03c6 is fixed without dependence on the model parameter \u03b8 that is optimized with the dataset ((x_i, y_i))_{i=1}^n. Similarly to many definitions in mathematics, where an intuitive notion in a special case is formalized into a definition for a more general case, we now abstract and generalize this intuitive notion of the representation \u03c6(x) of the linear model to that of all differentiable models as follows: Definition 1. Given any x \u2208 X and differentiable function f, we define \u2202f(x,\u03b8)/\u2202\u03b8 to be the gradient representation of the data x under the model f at \u03b8. This definition recovers the standard representation \u03c6(x) in the linear model, as \u2202f(x,\u03b8)/\u2202\u03b8 = \u2202(\u03c6(x)\u22a4\u03b8)/\u2202\u03b8 = \u03c6(x), and is applicable to all differentiable nonlinear models in representation learning. Moreover, this definition captures the key challenge of understanding the dynamics of nonlinear representation learning well, as illustrated below. 
Using the notation d\u03b8_t/dt = \u2206_t, the dynamics of the model f(x, \u03b8_t) over the time t can be written as (d/dt) f(x, \u03b8_t) = (\u2202f(x, \u03b8_t)/\u2202\u03b8_t) (d\u03b8_t/dt) = (\u2202f(x, \u03b8_t)/\u2202\u03b8_t) \u2206_t. (2) Here, we can see that the dynamics are linear in \u2206_t if there is no gradient representation learning, i.e., \u2202f(x,\u03b8_t)/\u2202\u03b8_t \u2248 \u2202f(x,\u03b8_0)/\u2202\u03b8_0, which is indeed the case for the neural tangent kernel (NTK) regime and extremely wide neural networks. However, with representation learning in the practical regime, the gradient representation \u2202f(x,\u03b8_t)/\u2202\u03b8_t changes depending on t (and \u2206_t), resulting in dynamics that are nonlinear in \u2206_t. Therefore, the definition of the gradient representation can distinguish fundamentally different dynamics in machine learning. In this paper, we initiate the study of the dynamics of learning gradient representations that are nonlinear in \u2206_t. That is, we focus on the regime where the gradient representation \u2202f(x,\u03b8_t)/\u2202\u03b8_t at the end of training time t differs greatly from the initial representation \u2202f(x,\u03b8_0)/\u2202\u03b8_0, unlike the NTK regime where we have \u2202f(x,\u03b8_t)/\u2202\u03b8_t \u2248 \u2202f(x,\u03b8_0)/\u2202\u03b8_0. This practical regime of representation learning was studied in the past for the case where the function \u03c6(x) \u21a6 f(x, \u03b8) is affine for some fixed feature map \u03c6 (Saxe et al., 2014; Kawaguchi, 2016; Laurent and Brecht, 2018; Bartlett et al., 2019; Zou et al., 2020; Kawaguchi, 2021; Xu et al., 2021). Unlike any previous studies, we focus on the problem setting where the function \u03c6(x) \u21a6 f(x, \u03b8) is nonlinear and non-affine, with the effect of nonlinear (gradient) representation learning having \u2202f(x,\u03b8_t)/\u2202\u03b8_t \u2249 \u2202f(x,\u03b8_0)/\u2202\u03b8_0. 
The results of this paper avoid the curse of dimensionality by studying the global convergence of the gradient-based dynamics instead of the dynamics of global optimization (Kawaguchi et al., 2016) and Bayesian optimization (Kawaguchi et al., 2015). Importantly, we do not require any wide layer or large input dimension throughout this paper. Our main contributions are summarized as follows: 1. In Section 2, we identify a pair of a novel assumption and a new condition, called the common model structure assumption and the data-architecture alignment condition. Under the common model structure assumption, the data-architecture alignment condition is shown to be a necessary condition for the globally optimal model and a sufficient condition for the global convergence. The condition depends on both data and architecture. Moreover, we empirically verify and deepen this new understanding. When we apply representation learning in practice, we often have overwhelming options regarding which model structure to use. Our results provide practical guidance for choosing or designing a model structure via the common model structure assumption, which is indeed satisfied by many representation learning models used in practice. 2. In Section 3, we discard the assumption of the data-architecture alignment condition. Instead, we derive a novel training framework, called the Exploration-Exploitation Wrapper (EE Wrapper), which satisfies the data-architecture alignment condition time-independently a priori. The EE Wrapper is then proved to have global convergence guarantees under the safe-exploration condition. The safe-exploration condition is what allows us to explore various gradient representations safely without getting stuck in states where we cannot provably satisfy the data-architecture alignment condition. 
The safe-exploration condition is shown to hold true for ResNet-18 with standard benchmark datasets, including MNIST, CIFAR-10, CIFAR-100, Semeion, KMNIST and SVHN, time-independently. 3. In Subsection 3.4, the Exploration-Exploitation Wrapper is shown not to degrade the practical performance of ResNet-18 on the standard datasets MNIST, CIFAR-10, CIFAR-100, Semeion, KMNIST and SVHN. To our knowledge, the present paper provides the first practical algorithm with global convergence guarantees that does not degrade the practical performance of ResNet-18 on these standard datasets, using convolutions, skip connections, and batch normalization without any extremely wide layer whose width is larger than the number of data points. To the best of our knowledge, we are not aware of any similar algorithms with global convergence guarantees in the regime of learning nonlinear representations without degrading practical performances. It is empirically known that increasing the network size tends to improve the training behaviors. Indeed, the size of networks correlates well with the training error in many cases in our experiments (e.g., Figure 1 b). However, the size and the training error do not correlate well in some experiments (e.g., Figure 1 c). Our new theoretical results explain that the training behaviors correlate more directly with the data-architecture alignment condition instead. The seeming correlation with the network size is caused by another correlation between the network size and the data-architecture alignment condition. This is explained in more detail in Section 2.3. 2 Understanding Dynamics via Common Model Structure and Data-Architecture Alignment In this section, we identify the common model structure assumption and study the data-architecture alignment condition for the global convergence in nonlinear representation learning. 
We begin by presenting an overview of our results in Subsection 2.1, deepen our understanding via experiments in Subsection 2.2, discuss implications of our results in Subsection 2.3, and establish mathematical theories in Subsection 2.4. 2.1 Overview We introduce the common model structure assumption in Subsection 2.1.1 and define the data-architecture alignment condition in Subsection 2.1.2. Using the assumption and the condition, we present the global convergence result in Subsection 2.1.3. 2.1.1 Common Model Structure Assumption Through examinations of representation learning models used in applications, we identified and formalized one of their common properties as follows: Assumption 1. (Common Model Structure Assumption) There exists a subset S \u2286 {1, 2, . . . , d} such that f(x_i, \u03b8) = \u2211_{k=1}^d 1{k \u2208 S} \u03b8_k (\u2202f(x_i,\u03b8)/\u2202\u03b8_k) for any i \u2208 {1, . . . , n} and \u03b8 \u2208 R^d. Assumption 1 is satisfied by common machine learning models, such as kernel models and multilayer neural networks, with or without convolutions, batch normalization, pooling, and skip connections. For example, consider a multilayer neural network of the form f(x, \u03b8) = Wh(x, u) + b, where h(x, u) is the output of its last hidden layer and the parameter vector \u03b8 consists of the parameters (W, b) at the last layer and the parameters u in all other layers, as \u03b8 = vec([W, b, u]). Here, for any matrix M \u2208 R^{m\u00d7\u00afm}, we let vec(M) \u2208 R^{m\u00afm} be the standard vectorization of the matrix M by stacking columns. Then, Assumption 1 holds because f(x, \u03b8) = \u2211_{k=1}^d 1{k \u2208 S} \u03b8_k (\u2202f(x,\u03b8)/\u2202\u03b8_k), where S is defined by {\u03b8_k : k \u2208 S} = {vec([W, b])_k : k = 1, 2, . . . , \u03be} with vec([W, b]) \u2208 R^\u03be. Since h is arbitrary in this example, the common model structure assumption holds, for example, for any multilayer neural network with a fully-connected last layer. 
In general, because the nonlinearity at the output layer can be treated as a part of the loss function \u2113 while preserving the convexity of q \u21a6 \u2113(q, y) (e.g., cross-entropy loss with softmax), this assumption is satisfied by many machine learning models, including ResNet-18 and all models used in the experiments in this paper (as well as all linear models). Moreover, Assumption 1 is automatically satisfied in the next section by using the EE Wrapper. 2.1.2 Data-Architecture Alignment Condition Given a target matrix Y = (y_1, y_2, . . . , y_n)\u22a4 \u2208 R^{n\u00d7m_y} and a loss function \u2113, we define the modified target matrix Y^\u2113 = (y^\u2113_1, y^\u2113_2, . . . , y^\u2113_n)\u22a4 \u2208 R^{n\u00d7m_y} by Y^\u2113 = Y for the squared loss \u2113, and by (Y^\u2113)_{ij} = 2Y_{ij} \u2212 1 for the (binary and multi-class) cross-entropy losses \u2113 with Y_{ij} \u2208 {0, 1}. Given an input matrix X = (x_1, x_2, . . . , x_n)\u22a4 \u2208 R^{n\u00d7m_x}, the output matrix fX(\u03b8) \u2208 R^{n\u00d7m_y} is defined by fX(\u03b8)_{ij} = f(x_i, \u03b8)_j \u2208 R. For any matrix M \u2208 R^{m\u00d7\u00afm}, we let Col(M) \u2286 R^m be its column space. With these notations, we are now ready to introduce the data-architecture alignment condition: Definition 2. (Data-Architecture Alignment Condition) Given any dataset (X, Y), differentiable function f, and loss function \u2113, the data-architecture alignment condition is said to be satisfied at \u03b8 if vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8))/\u2202\u03b8). The data-architecture alignment condition depends on both the data (through the target Y and the input X) and the architecture (through the model f). It is satisfied only when the data and architecture align well with each other. For example, in the case of the linear model f(x, \u03b8) = \u03c6(x)\u22a4\u03b8 \u2208 R, the condition can be written as vec(Y^\u2113) \u2208 Col(\u03a6(X)), where \u03a6(X) \u2208 R^{n\u00d7d} and \u03a6(X)_{ij} = \u03c6(x_i)_j. 
In Definition 2, fX(\u03b8) is the matrix of the pre-activation outputs of the last layer. Thus, in the case of classification tasks with a nonlinear activation at the output layer, fX(\u03b8) and Y are not in the same space, which is the reason why we use Y^\u2113 here instead of Y. Importantly, the data-architecture alignment condition does not make any requirement on the rank of the Jacobian matrix \u2202vec(fX(\u03b8))/\u2202\u03b8 \u2208 R^{nm_y\u00d7d}: i.e., the rank of \u2202vec(fX(\u03b8))/\u2202\u03b8 is allowed to be smaller than nm_y and d. Thus, for example, the data-architecture alignment condition can be satisfied, depending on the given data and architecture, even if the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is zero, in both cases of over-parameterization (e.g., d \u226b n) and under-parameterization (e.g., d \u226a n). This is further illustrated in Subsection 2.2 and discussed in Subsection 2.3. We note that we further discard the assumption of the data-architecture alignment condition in Section 3, as it is automatically satisfied by using the EE Wrapper. 2.1.3 Global Convergence Under the common model structure assumption, the data-architecture alignment condition is shown to be what lets us avoid the failure of the global convergence and suboptimal local minima. More concretely, we prove a global convergence guarantee under the data-architecture alignment condition as well as the necessity of the condition for the global optimality: Theorem 1. (Informal Version) Let Assumption 1 hold. Then, the following two statements hold for gradient-based dynamics: (i) The global optimality gap bound decreases per iteration towards zero at the rate of O(1/\u221a|T|) for any T such that the data-architecture alignment condition is satisfied at \u03b8_t for t \u2208 T. 
(ii) For any \u03b8 \u2208 R^d, the data-architecture alignment condition at \u03b8 is necessary to have the globally optimal model fX(\u03b8) = \u03b7Y^\u2113 at \u03b8 for any \u03b7 \u2208 R. Theorem 1 (i) guarantees the global convergence without the need to satisfy the data-architecture alignment condition at every iteration or at the limit point. Instead, it shows that the bound on the global optimality gap decreases towards zero per iteration whenever the data-architecture alignment condition holds. Theorem 1 (ii) shows that the data-architecture alignment condition is necessary for the global optimality. Intuitively, this is because the expressivity of a model class satisfying the common model structure assumption is restricted such that it is required to align the architecture to the data in order to contain the globally optimal model fX(\u03b8) = \u03b7Y^\u2113 (for any \u03b7 \u2208 R). To better understand the statement of Theorem 1 (i), consider a counterexample with a dataset consisting of the single point (x, y) = (1, 0), the model f(x, \u03b8) = \u03b8^4 \u2212 10\u03b8^2 + 6\u03b8 + 100, and the squared loss \u2113(q, y) = (q \u2212 y)^2. In this example, we have L(\u03b8) = f(x, \u03b8)^2, which has multiple suboptimal local minima of different values. Then, via gradient descent, the model converges to the closest local minimum and, in particular, does not necessarily converge to a global minimum. Indeed, this example violates the common model structure assumption (Assumption 1) (although it satisfies the data-architecture alignment condition), showing the importance of the common model structure assumption along with the data-architecture alignment. This also illustrates the non-triviality of Theorem 1 (i), in that the data-architecture alignment is not sufficient, and we needed to understand what types of model structures are commonly used in practice and formalize the understanding as the common model structure assumption. 
To further understand the importance of the common model structure assumption in Theorem 1, let us now consider the case where we do not require the assumption. Indeed, we can guarantee the global convergence without the common model structure assumption if we ensure that the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is nonzero. This can be proved by the following derivation. Let \u03b8 be an arbitrary stationary point of L. Then, we have 0 = \u2202L(\u03b8)/\u2202\u03b8 = (1/n) \u2211_{i=1}^n (\u2202\u2113(q,y_i)/\u2202q |_{q=f(x_i,\u03b8)}) \u2202f(x_i,\u03b8)/\u2202\u03b8, which implies that (\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 v = 0, (3) where v = vec((\u2202\u2113(q,u_1)/\u2202q |_{q=f(x_1,\u03b8)})\u22a4, . . . , (\u2202\u2113(q,u_n)/\u2202q |_{q=f(x_n,\u03b8)})\u22a4) \u2208 R^{nm_y}. Therefore, if the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is nonzero, then we have v = 0: i.e., \u2202\u2113(q,u_i)/\u2202q |_{q=f(x_i,\u03b8)} = 0 for all i \u2208 {1, 2, . . . , n}. Using the convexity of the map q \u21a6 \u2113(q, y) (which is satisfied by the squared loss and cross-entropy loss), this implies that for any q_1, q_2, . . . , q_n \u2208 R^{m_y}, L(\u03b8) = (1/n) \u2211_{i=1}^n \u2113(f(x_i, \u03b8), y_i) (4) \u2264 (1/n) \u2211_{i=1}^n (\u2113(q_i, y_i) \u2212 (\u2202\u2113(q, u_i)/\u2202q |_{q=f(x_i,\u03b8)})(q_i \u2212 f(x_i, \u03b8))) (5) \u2264 (1/n) \u2211_{i=1}^n \u2113(q_i, y_i). (6) Since f(x_1), f(x_2), . . . , f(x_n) \u2208 R^{m_y}, this implies that any stationary point \u03b8 is a global minimum if the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is nonzero, without the common model structure assumption (Assumption 1). 
Indeed, in the above example with the model f(x, \u03b8) = \u03b8^4 \u2212 10\u03b8^2 + 6\u03b8 + 100, the common model structure assumption is violated but we still have the global convergence if the minimum eigenvalue is nonzero: e.g., f(x, \u03b8) = y = 0 at any stationary point \u03b8 such that the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is nonzero. In contrast, Theorem 1 allows the global convergence even when the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is zero, by utilizing the common model structure assumption. The formal version of Theorem 1 is presented in Subsection 2.4 and is proved in Appendix A in the Supplementary Information. Before proving the statement, we first examine the meaning and implications of our results through illustrative examples in the next two subsections. Table 1: The change of the gradient representation during training, \u2225M(\u03b8_T\u0302) \u2212 M(\u03b8_0)\u2225_F^2, where M(\u03b8) := (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 and T\u0302 is the last time step. (a) Fully-connected network: Two moons: 2.09\u00d710^11; Sine wave: 3.95\u00d710^9. (b) Convolutional network (seed#1 / seed#2): Semeion: mc = 4: 8.09\u00d710^12 / 5.19\u00d710^12; mc = 2: 9.82\u00d710^12 / 3.97\u00d710^12; mc = 1: 2.97\u00d710^12 / 5.41\u00d710^12. Random: mc = 4: 3.73\u00d710^12 / 1.64\u00d710^12; mc = 2: 3.43\u00d710^7 / 4.86\u00d710^12; mc = 1: 1.40\u00d710^7 / 8.57\u00d710^11. 2.2 Illustrative Examples in Experiments Theorem 1 suggests that the data-architecture alignment condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) has the ability to distinguish the success and failure cases, even when the minimum eigenvalue of the matrix (\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t)(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t)\u22a4 is zero for all t \u2265 0. 
In this subsection, we conduct experiments to further verify and deepen this theoretical understanding. We employ a fully-connected network having four layers with 300 neurons per hidden layer, and a convolutional network, LeNet (LeCun et al., 1998), with five layers. For the fully-connected network, we use the two-moons dataset (Pedregosa et al., 2011) and a sine wave dataset. To create the sine wave dataset, we randomly generated the input x_i from the uniform distribution on the interval [\u22121, 1] and set y_i = 1{sin(20x_i) < 0} \u2208 R for all i \u2208 [n] with n = 100. For the convolutional network, we use the Semeion dataset (Srl and Brescia, 1994) and a random dataset. The random dataset was created by randomly generating each pixel of the input image x_i \u2208 R^{16\u00d716\u00d71} from the standard normal distribution and by sampling y_i uniformly from {0, 1} for all i \u2208 [n] with n = 1000. We set the activation functions of all layers to be the softplus \u00af\u03c3(z) = ln(1 + exp(\u03c2z))/\u03c2 with \u03c2 = 100, which approximately behaves as the ReLU activation, as shown in Appendix C in the Supplementary Information. See Appendix B in the Supplementary Information for more details of the experimental settings. The results of the experiments are presented in Figure 1. In each subfigure, the training errors and the values of Q_T are plotted over time T. Here, Q_T counts the number of time steps t \u2208 {0, 1, . . . , T} at which vec(Y^\u2113) does not satisfy the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t), and is defined by Q_T = \u2211_{t=0}^{T} 1{ vec(Y^\u2113) \u2209 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) }. (7) Figure 1 (a) shows the results for the fully-connected network. 
For the two-moons dataset, the network achieved zero training error with Q_T = 0 for all T (i.e., vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_T))/\u2202\u03b8_T) for all T). Figure 1: Training error and the value of Q_T over time steps T, for (a) the fully-connected network (two moons and sine wave), (b) the convolutional network with seed #1, and (c) the convolutional network with seed #2 (Semeion and random datasets at mc = 1, 2, 4). The legend of (b) is shown in (c). The value of Q_T measures the number of Y^\u2113 not satisfying the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t). This figure validates our theoretical understanding that the bound on the global optimality gap decreases at any iteration when Q_T does not increase at the iteration: i.e., when the Q_T value is flat. The minimum eigenvalue of the matrix M(\u03b8_t) = (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is zero at all iterations in Figure 1 (b) and (c) for all cases with mc = 1. For the sine wave dataset, it obtained high training errors with Q_T = T for all T (i.e., vec(Y^\u2113) \u2209 Col(\u2202vec(fX(\u03b8_T))/\u2202\u03b8_T) for all T). This is consistent with our theory. Our theory explains that what makes a dataset easy or hard to fit is whether the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) is satisfied or not. Figure 1 (b) and (c) shows the results for the convolutional networks with two random initial points using two different random seeds. 
In the subfigures, we report the training behaviors with different network sizes mc = 1, 2 and 4: i.e., the number of convolutional filters per convolutional layer is 8 \u00d7 mc and the number of neurons per fully-connected hidden layer is 128 \u00d7 mc. As can be seen, with the Semeion dataset, the networks of all sizes achieved zero error with Q_T = 0 for all T. With the random dataset, the deep networks yielded zero training error whenever Q_T is not linearly increasing over time, or equivalently whenever the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_T))/\u2202\u03b8_T) holds at sufficiently many steps T. This is consistent with our theory. Finally, we also confirmed that the gradient representation \u2202f(x,\u03b8_t)/\u2202\u03b8_t changed significantly from the initial one \u2202f(x,\u03b8_0)/\u2202\u03b8_0 in our experiments. That is, the values of \u2225M(\u03b8_T) \u2212 M(\u03b8_0)\u2225_F^2 were significantly large and tended to increase as T increases, where the matrix M(\u03b8) \u2208 R^{nm_y\u00d7nm_y} is defined by M(\u03b8) = (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4. Table 1 summarizes the values of \u2225M(\u03b8_T) \u2212 M(\u03b8_0)\u2225_F^2 at the end of the training. 2.3 Implications In Section 2.1.3, we showed that an uncommon model structure f(x, \u03b8) = \u03b8^4 \u2212 10\u03b8^2 + 6\u03b8 + 100 does not satisfy Assumption 1 and that Assumption 1 is not required for global convergence if the minimum eigenvalue is nonzero. However, in practice, we typically use machine learning models that satisfy Assumption 1 instead of the model f(x, \u03b8) = \u03b8^4 \u2212 10\u03b8^2 + 6\u03b8 + 100, and the minimum eigenvalue is zero in many cases. In this context, Theorem 1 provides the justification for common practice in nonlinear representation learning. 
Furthermore, Theorem 1 (i) contributes to the literature by identifying the common model structure assumption (Assumption 1) and the data-architecture alignment condition (Definition 2) as novel and practical conditions that ensure the global convergence even when the minimum eigenvalue becomes zero. Moreover, Theorem 1 (ii) shows that this condition is not arbitrary in the sense that it is also necessary to obtain the globally optimal models. Furthermore, the data-architecture alignment condition is strictly more general than the condition of the minimum eigenvalue being nonzero, in the sense that the latter implies the former but not vice versa. Our new theoretical understanding based on the data-architecture alignment condition can explain and deepen the previously known empirical observation that increasing the network size tends to improve the training behaviors. Indeed, the size of networks seems to correlate well with the training error to a certain degree in Figure 1 (b). However, the size and the training error do not correlate well in Figure 1 (c). Our new theoretical understanding explains that the training behaviors correlate more directly with the data-architecture alignment condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) instead. The seeming correlation with the network size is indirect and caused by another correlation between the network size and the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t). That is, the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) tends to hold more often when the network size is larger, because the matrix \u2202vec(fX(\u03b8_t))/\u2202\u03b8_t is of size nm_y \u00d7 d, where d is the number of parameters: i.e., by increasing d, we can enlarge the column space Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t) and thereby increase the chance of satisfying the condition vec(Y^\u2113) \u2208 Col(\u2202vec(fX(\u03b8_t))/\u2202\u03b8_t). 
Note that the minimum eigenvalue of the matrix M(\u03b8_t) = (\u2202vec(fX(\u03b8))/\u2202\u03b8)(\u2202vec(fX(\u03b8))/\u2202\u03b8)\u22a4 is zero at all iterations in Figure 1 (b) and (c) for all cases of mc = 1. Thus, Figure 1 (b) and (c) also illustrates the fact that, while having the zero minimum eigenvalue of the matrix M(\u03b8_t), the dynamics can achieve the global convergence under the data-architecture alignment condition. Moreover, because the multilayer neural network in the lazy training regime (Kawaguchi and Sun, 2021) achieves zero training errors for all datasets, Figure 1 additionally illustrates the fact that our theoretical and empirical results apply to models outside of the lazy training regime and can distinguish \u2018good\u2019 datasets from \u2018bad\u2019 datasets given a learning algorithm. In sum, our new theoretical understanding has the ability to explain and distinguish the successful case and the failure case based on the data-architecture alignment condition for the common machine learning models. Because the data-architecture alignment condition is dependent on data and architecture, Theorem 1, along with our experimental results, shows why and when the global convergence in nonlinear representation learning is achieved based on the relationship between the data (X, Y) and the architecture f. This new understanding is used in Section 3 to derive a practical algorithm, and is expected to be a basis for many future algorithms. 2.4 Details and Formalization of Theorem 1 This subsection presents the precise mathematical statements that formalize the informal description of Theorem 1. In the following subsections, we divide the formal version of Theorem 1 into Theorem 2, Theorem 3, and Proposition 1. 2.4.1 Preliminaries Let (\u03b8_t)_{t=0}^\u221e be the sequence defined by \u03b8_{t+1} = \u03b8_t \u2212 \u03b1_t \u00afg_t with an initial parameter vector \u03b8_0, a learning rate \u03b1_t, and an update vector \u00afg_t. 
The analysis in this section relies on the following assumption on the update vector \u00afg_t: Assumption 2. There exist \u00afc, c > 0 such that c\u2225\u2207L(\u03b8_t)\u2225^2 \u2264 \u2207L(\u03b8_t)\u22a4\u00afg_t and \u2225\u00afg_t\u2225^2 \u2264 \u00afc\u2225\u2207L(\u03b8_t)\u2225^2 for any t \u2265 0. Assumption 2 is satisfied by using \u00afg_t = D_t\u2207L(\u03b8_t), where D_t is any positive definite symmetric matrix with eigenvalues in the interval [c, \u221a\u00afc]. If we set D_t = I, we have gradient descent and Assumption 2 is satisfied with c = \u00afc = 1. This section also uses the standard assumption of differentiability and Lipschitz continuity: Assumption 3. For every i \u2208 [n], the function \u2113_i : q \u21a6 \u2113(q, y_i) is differentiable and convex, the map f_i : \u03b8 \u21a6 f(x_i, \u03b8) is differentiable, and \u2225\u2207L(\u03b8) \u2212 \u2207L(\u03b8\u2032)\u2225 \u2264 L\u2225\u03b8 \u2212 \u03b8\u2032\u2225 for all \u03b8, \u03b8\u2032 in the domain of L for some L \u2265 0. The assumptions on the loss function in Assumption 3 are satisfied by standard loss functions, including the squared loss, logistic loss, and cross-entropy loss. Although the objective function L is non-convex and non-invex, the function q \u21a6 \u2113(q, y_i) is typically convex. For any matrix Y\u2217 = (y\u2217_1, y\u2217_2, . . . , y\u2217_n)\u22a4 \u2208 R^{n\u00d7m_y}, we define L\u2217(Y\u2217) = (1/n) \u2211_{i=1}^n \u2113(y\u2217_i, y_i). (8) For example, for the squared loss \u2113, the value of L\u2217(Y^\u2113) is at most the global minimum value of L, as L\u2217(Y^\u2113) \u2264 L(\u03b8), \u2200\u03b8 \u2208 R^d, (9) since L\u2217(Y^\u2113) = (1/n) \u2211_{i=1}^n \u2225y_i \u2212 y_i\u2225_2^2 = 0 \u2264 L(\u03b8) \u2200\u03b8 \u2208 R^d. This paper also uses the notation [k] = {1, 2, . . . , k} for any k \u2208 N_+ and \u2225\u00b7\u2225 = \u2225\u00b7\u2225_2 (Euclidean norm). 
Finally, we note that for any \u03b7 \u2208R, the condition of vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8)) \u2202\u03b8 ) is necessary to learn a near global optimal model fX(\u03b8) = \u03b7Y\u2113: Proposition 1. Suppose Assumption 1 holds. If vec(Y\u2113) / \u2208Col( \u2202vec(fX(\u03b8)) \u2202\u03b8 ), then fX(\u03b8) \u0338= \u03b7Y\u2113for any \u03b7 \u2208R. Proof. All proofs of this paper are presented in Appendix A in the Supplementary Information. 2.4.2 Global Optimality at the Limit Point The following theorem shows that every limit point \u02c6 \u03b8 of the sequence (\u03b8t)t achieves a loss value L(\u02c6 \u03b8) no worse than inf\u03b7\u2208R L\u2217(\u03b7Y \u2217) for any Y \u2217such that vec(Y \u2217) \u2208 Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208[\u03c4, \u221e) with some \u03c4 \u22650: Theorem 2. Suppose Assumptions 1\u20133 hold. Assume that the learning rate sequence (\u03b1t)t satis\ufb01es either (i) \u03f5 \u2264\u03b1t \u2264c(2\u2212\u03f5) L\u00af c for some \u03f5 > 0, or (ii) limt\u2192\u221e\u03b1t = 0 and P\u221e t=0 \u03b1t = \u221e. Then, for any Y \u2217\u2208Rn\u00d7my, if there exists \u03c4 \u22650 such that vec(Y \u2217) \u2208 Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208[\u03c4, \u221e), every limit point \u02c6 \u03b8 of the sequence (\u03b8t)t satis\ufb01es L(\u02c6 \u03b8) \u2264L\u2217(\u03b7Y \u2217), \u2200\u03b7 \u2208R. (10) For example, for the squared loss \u2113(q, y) = \u2225q \u2212y\u22252, Theorem 2 implies that every limit point \u02c6 \u03b8 of the sequence (\u03b8t)t is a global minimum as L(\u02c6 \u03b8) \u2264L(\u03b8), \u2200\u03b8 \u2208Rd, (11) if vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for t \u2208[\u03c4, \u221e) with some \u03c4 \u22650. This is because L(\u02c6 \u03b8) \u2264 L\u2217(Y\u2113) \u2264L(\u03b8) \u2200\u03b8 \u2208Rd from Theorem 2 and equation (9). 
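The column-space condition in Proposition 1 and Theorem 2 can be checked numerically by projecting the target vector onto the column space via least squares. The sketch below is a minimal illustration with a random stand-in Jacobian; the function name `in_column_space` and all matrices are our own, not from the paper.

```python
import numpy as np

def in_column_space(J, v, tol=1e-8):
    # v lies in Col(J) iff the least-squares projection of v onto Col(J)
    # reproduces v up to numerical tolerance.
    coef, *_ = np.linalg.lstsq(J, v, rcond=None)
    residual = np.linalg.norm(J @ coef - v)
    return residual <= tol * max(1.0, np.linalg.norm(v))

rng = np.random.default_rng(1)
J = rng.standard_normal((6, 3))     # stand-in for d vec(f_X(theta)) / d theta
v_in = J @ rng.standard_normal(3)   # constructed to lie in Col(J)
v_out = rng.standard_normal(6)      # a generic vector; almost surely outside Col(J)

inside = in_column_space(J, v_in)
outside = in_column_space(J, v_out)
```

In the notation of Theorem 2, `J` plays the role of ∂vec(fX(θt))/∂θt and `v_in` the role of vec(Y*).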
In practice, one can easily satisfy all the assumptions in Theorem 2 except for the condition that vec(Y \u2217) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208[\u03c4, \u221e). Accordingly, we will now weaken this condition by analyzing optimality at each iteration so that the condition is veri\ufb01able in experiments. 12 \f2.4.3 Global Optimality Gap at Each Iteration The following theorem states that under standard settings, the sequence (\u03b8t)t\u2208T converges to a loss value no worse than inf\u03b7\u2208R L\u2217(\u03b7Y \u2217) at the rate of O(1/ p |T |) for any T and Y \u2217such that vec(Y \u2217) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for t \u2208T : Theorem 3. Suppose Assumptions 1 and 3 hold. Let (\u03b1t, \u00af gt) = ( 2\u03b1 L , \u2207L(\u03b8t)) with an arbitrary \u03b1 \u2208(0, 1). Then, for any T \u2286N0 and Y \u2217\u2208Rn\u00d7my such that vec(Y \u2217) \u2208 Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208T , it holds that min t\u2208T L(\u03b8t) \u2264L\u2217(\u03b7Y \u2217) + 1 p |T | s L\u03b6\u03b7L(\u03b8t0) 2\u03b1(1 \u2212\u03b1), (12) for any \u03b7 \u2208R, where t0 = min{t : t \u2208T }, \u03b6\u03b7 := 4 maxt\u2208T max(\u2225\u03bd(\u03b8t)\u22252, \u2225\u02c6 \u03b2(\u03b8t, \u03b7)\u22252), \u02c6 \u03b2(\u03b8, \u03b7) := \u03b7(( \u2202vec(fX(\u03b8)) \u2202\u03b8 )\u22a4\u2202vec(fX(\u03b8)) \u2202\u03b8 )\u2020( \u2202vec(fX(\u03b8)) \u2202\u03b8 )\u22a4vec(Y \u2217), and \u03bd(\u03b8)k = \u03b8k for all k \u2208S and \u03bd(\u03b8)k = 0 for all k / \u2208S with the set S being de\ufb01ned in Assumption 1. For the squared loss \u2113, Theorem 3 implies the following for any T \u22651: for any T \u2286[T] such that vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208T , we have min t\u2208[T] L(\u03b8t) \u2264inf \u03b8\u2208Rd L(\u03b8) + O(1/ p |T |). 
(13) This is because L\u2217(Y\u2113) \u2264inf\u03b8\u2208Rd L(\u03b8) from equations (9) and mint\u2208[T] L(\u03b8t) \u2264mint\u2208T L(\u03b8t) from T \u2286[T]. Similarly, for the binary and multi-class cross-entropy losses, Theorem 3 implies the following for any T \u22651: for any T \u2286[T] such that vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t \u2208T , we have that for any \u03b7 \u2208R, min t\u2208[T] L(\u03b8t) \u2264L\u2217(\u03b7Y\u2113) + O(\u03b72/ p |T |). (14) Given any desired \u03f5 > 0, since L\u2217(\u03b7Y\u2113) \u21920 as \u03b7 \u2192\u221e, setting \u03b7 to be suf\ufb01ciently large obtains the desired \u03f5 value as mint\u2208T L(\u03b8t) \u2264\u03f5 in equation (14) as p |T | \u2192\u221e. 3 Application to the Design of Training Framework The results in the previous section show that the bound on the global optimality gap decreases per iteration whenever the data-architecture alignment condition holds. Using this theoretical understanding, in this section, we propose a new training framework with prior guarantees while learning hierarchical nonlinear representations without assuming the data-architecture alignment condition. As a result, we made signi\ufb01cant improvements over the most closely related study on global convergence guarantees (Kawaguchi and Sun, 2021). In particular, whereas the related study requires a wide layer with a width larger than n, our results reduce the requirement to a layer with a width larger than \u221an. For example, the MNIST dataset has n = 60000 and hence previous studies require 60000 neurons at a layer, whereas we only require \u221a 60000 \u2248245 13 \fneurons at a layer. Our requirement is consistent and satis\ufb01ed by the models used in practice that typically have from 256 to 1024 neurons for some layers. We begin in Subsection 3.1 with additional notations, and then present the training framework in Subsection 3.2 and convergence analysis in Subsection 3.3. 
We conclude in Subsection 3.4 by providing empirical evidence to support our theory. 3.1 Additional Notations We denote by \u03b8(l) \u2208Rdl the vector of all the trainable parameters at the l-th layer for l = 1, . . . , H where H is the depth or the number of trainable layers (i.e., one plus the number of hidden layers). That is, the H-th layer is the last layer containing the trainable parameter vector \u03b8(H) at the last layer. For any pair (l, l\u2032) such that 1 \u2264l \u2264l\u2032 \u2264H, we de\ufb01ne \u03b8(l:l\u2032) = [\u03b8\u22a4 (l), . . . , \u03b8\u22a4 (l\u2032)]\u22a4\u2208Rdl:l\u2032: for example, \u03b8(1:H) = \u03b8 and \u03b8(l:l\u2032) = \u03b8(l) if l = l\u2032. We consider a family of training algorithms that update the parameter vector \u03b8 as follows: for each l = 1, . . . , H, \u03b8t+1 (l) = \u03b8t (l) \u2212gt (l), gt (l) \u223cG(l)(\u03b8t, t) (15) where the function G(l) outputs a distribution over the vector gt (l) and differs for different training algorithms. For example, for (mini-batch) stochastic gradient descent (SGD), gt (l) represents a product of a learning rate and a stochastic gradient with respect to \u03b8(l) at the time t. We de\ufb01ne G = (G(1), . . . , G(H)) to represent a training algorithm. For an arbitrary matrix M \u2208Rm\u00d7m\u2032, we let M\u2217j be its j-th column vector in Rm, Mi\u2217be its i-th row vector in Rm\u2032, and rank(M) be its matrix rank. We de\ufb01ne M \u25e6M \u2032 to be the Hadamard product of any matrices M and M \u2032. For any vector v \u2208Rm, we let diag(v) \u2208Rm\u00d7m be the diagonal matrix with diag(v)ii = vi for i \u2208[m]. We denote by Im the m \u00d7 m identity matrix. 3.2 Exploration-Exploitation Wrapper In this section, we introduce the Exploration-Exploitation (EE) wrapper A. The EE wrapper A is not a stand-alone training algorithm. 
Instead, the EE wrapper A takes any training algorithm G as its input, and runs the algorithm G in a particular way to guarantee global convergence. We note that the exploitation phase in the EE wrapper does not optimize the last layer; instead, it optimizes hidden layers, whereas the exploration phase optimizes all layers. The EE wrapper allows us to learn a representation ∂f(x,θt)/∂θt that differs significantly from the initial representation ∂f(x,θ0)/∂θ0, without making assumptions on the minimum eigenvalue of the matrix ∂vec(fX(θ))/∂θ (∂vec(fX(θ))/∂θ)⊤, by leveraging the data-architecture alignment condition. The data-architecture alignment condition is ensured by the safe-exploration condition (defined in Section 3.3.1), which is time-independent and holds in practical common architectures (as demonstrated in Section 3.4). 3.2.1 Main Mechanisms Algorithm 1 outlines the EE wrapper A. During the exploration phase in lines 3-7 of Algorithm 1, the EE wrapper A freely explores hierarchical nonlinear representations to be learned without any restrictions.
Algorithm 1 A: Exploration-Exploitation (EE) wrapper
1: Inputs: a training algorithm G and a base model f̄.
2: Modify the base model f̄ to the model f
3: Exploration phase:
4: Initialize the parameter vector θ0 of the model f
5: for t = 0, 1, . . . , τ − 1 do
6:   θt+1(l) = θt(l) − gt(l), gt(l) ∼ G(l)(θt, t), ∀l ∈ [H]
7: end for
8: Exploitation phase:
9: Set θτ = θτ−1 + εδ where δ ∼ Nd(0, I)
10: Replace σj by σR(j) with R(j) = θτ(H−1,j)
11: for t = τ, τ + 1, . . . do
12:   θt+1(H−1) = θt(H−1) − g̃t, g̃t ∼ G̃(H−1)(θt, t)
13: end for
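The control flow of Algorithm 1 can be sketched in a few lines. The following is a minimal, illustrative skeleton, not the authors' implementation: the activation replacement of line 10 is omitted, and `step_G` / `step_G_tilde` are abstract stand-ins for the base algorithms G and G̃.

```python
import numpy as np

def ee_wrapper(theta0, step_G, step_G_tilde, tau, T, eps, rng):
    """Minimal sketch of Algorithm 1 (names are illustrative).

    theta0       : dict {layer index l: parameter vector theta_(l)}, l = 1..H
    step_G       : step_G(theta, t) -> dict of update vectors g_(l) for all l
    step_G_tilde : step_G_tilde(theta, t) -> update vector for layer H-1 only
    """
    theta = {l: v.copy() for l, v in theta0.items()}
    H = max(theta)
    # Exploration phase (lines 3-7): every layer is updated by G.
    for t in range(tau):
        g = step_G(theta, t)
        for l in theta:
            theta[l] = theta[l] - g[l]
    # Line 9: perturb the parameters once with Gaussian noise of scale eps.
    for l in theta:
        theta[l] = theta[l] + eps * rng.standard_normal(theta[l].shape)
    # Exploitation phase (lines 11-13): only layer H-1 is updated by G-tilde.
    for t in range(tau, T):
        theta[H - 1] = theta[H - 1] - step_G_tilde(theta, t)
    return theta

# Toy usage: gradient descent on L(theta) = sum_l ||theta_(l)||^2, H = 3.
rng = np.random.default_rng(2)
theta0 = {l: rng.standard_normal(4) for l in (1, 2, 3)}
lr = 0.1
step_G = lambda th, t: {l: lr * 2.0 * th[l] for l in th}
step_G_tilde = lambda th, t: lr * 2.0 * th[2]            # layer H-1 = 2
theta = ee_wrapper(theta0, step_G, step_G_tilde, tau=50, T=200, eps=0.01, rng=rng)
```

In the toy run, only layer H−1 keeps shrinking after the perturbation, so its final norm is far below those of the frozen layers, mirroring the two-phase structure of the wrapper.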
Then, during the exploitation phase in lines 8\u201312, it starts exploiting the current knowledge to ensure vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ) for all t to guarantee global convergence. The value of \u03c4 is the hyper-parameter that controls the time when it transitions from the exploration phase to the exploitation phase. In the exploitation phase, the wrapper A only optimizes the parameter vector \u03b8t (H\u22121) at the (H \u22121)-th hidden layer, instead of the parameter vector \u03b8t (H) at the last layer or the H-th layer. Despite this, the EE wrapper A is proved to converge to global minima of all layers in Rd. The exploitation phase still allows us to signi\ufb01cantly change the representations as M(\u03b8t) \u0338\u2248M(\u03b8\u03c4) for t > \u03c4. This is because we optimize the hidden layers instead of the last layer without any signi\ufb01cant over-parameterizations. The exploitation phase uses an arbitrary optimizer \u02dc G with the update vector \u02dc gt \u223c \u02dc G(H\u22121)(\u03b8t, t) with \u02dc gt = \u03b1t\u02c6 gt \u2208RdH\u22121. During the two phases, we can use the same optimizers (e.g., SGD for both G and \u02dc G) or different optimizers (e.g., SGD for G and L-BFGS for \u02dc G). 3.2.2 Model Modi\ufb01cation This subsection de\ufb01nes the details of the model modi\ufb01cation at line 2 of Algorithm 1. Given any base network \u00af f, the wrapper A \ufb01rst checks whether or not the last two layers of the given network \u00af f are fully connected. If not, one or two fully-connected last layers are added such that the output of the network \u00af f can be written by \u00af f(x, \u00af \u03b8) = \u00af W (H)\u00af \u03c3( \u00af W (H\u22121)z(x, \u03b8(1:H\u22122))). Here, z(x, \u03b8(1:H\u22122)) is the output of the (H\u22122)-th layer, and the function z is arbitrary and can represent various deep networks. 
Moreover, \u00af \u03c3 is a nonlinear activation function, and \u00af W (H\u22121) and \u00af W (H) are the weight matrices of the last two layers. The wrapper A then modi\ufb01es these last two layers as follows. In the case of my = 1, the model \u00af f is modi\ufb01ed to f(x, \u03b8) = W (H)\u03c3(W (H\u22121), z(x, \u03b8(1:H\u22122))), (16) where W (H\u22121) \u2208RmH\u00d7mH\u22121 and W (H) \u2208Rmy\u00d7mH are the weight matrices of the last two layers. The nonlinear activation \u03c3 is de\ufb01ned by \u03c3(q, q\u2032) = \u02dc \u03c3(qq\u2032) \u25e6(qq\u2032), where \u02dc \u03c3 15 \fis some nonlinear function. For example, we can set \u02dc \u03c3(q) = 1 1+e\u2212\u03c2\u2032q (sigmoid) with any hyper-parameter \u03c2\u2032 > 0, for which it holds that as \u03c2\u2032 \u2192\u221e, f(x, \u03b8) \u2192W (H)relu(W (H\u22121)z(x, \u03b8(1:H\u22122))). (17) We generalize equation (16) to the case of my \u22652 as: f(x, \u03b8)j = W (H) j\u2217\u03c3j(W (H\u22121,j), z(x, \u03b8(1:H\u22122))), (18) for j \u2208[my] where W (H\u22121,j) \u2208RmH\u00d7mH\u22121 is the weight matrix at the (H \u22121)-th layer, and \u03c3j = \u03c3 until line 10. At line 10, the wrapper A replaces \u03c3j by \u03c3R(j) where \u03c3R(j)(q, q\u2032) = \u02dc \u03c3(R(j)q\u2032) \u25e6(qq\u2032) with R(j) = \u03b8\u03c4 (H\u22121,j) and \u03b8(H\u22121,j) = (W (H\u22121,j))\u22a4. To consider the bias term, we include the constant neuron to the output of the (H \u22121)th layer as z(x, \u03b8(1:H\u22122)) = [\u00af z(x, \u03b8(1:H\u22122))\u22a4, 1]\u22a4\u2208RmH\u22121 where \u00af z(x, \u03b8(1:H\u22122)) is the output without the constant neuron. 3.3 Convergence Analysis In this subsection, we establish global convergence of the EE wrapper A without using assumptions from the previous section. Let \u03c4 be an arbitrary positive integer and \u03b5 be an arbitrary positive real number. Let (\u03b8t)\u221e t=0 be a sequence generated by the EE wrapper A. 
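To see equation (17) concretely, the sketch below implements the modified head of equation (16) with the sigmoid gate σ̃ and checks numerically that, for a large gate parameter ς′, it is close to W(H) relu(W(H−1) z). All dimensions and weights are random illustrative stand-ins.

```python
import numpy as np

def sigma_tilde(q, varsigma):
    # Sigmoid gate 1/(1 + exp(-varsigma * q)); argument clipped for numerical stability.
    return 1.0 / (1.0 + np.exp(-np.clip(varsigma * q, -500.0, 500.0)))

def modified_head(z, W_Hm1, W_H, varsigma):
    # f = W_H ( sigma_tilde(W_Hm1 z) * (W_Hm1 z) ), cf. equation (16) with my = 1.
    pre = W_Hm1 @ z
    return W_H @ (sigma_tilde(pre, varsigma) * pre)

rng = np.random.default_rng(3)
z = rng.standard_normal(5)              # stand-in for z(x, theta_(1:H-2))
W_Hm1 = rng.standard_normal((7, 5))
W_H = rng.standard_normal((1, 7))

# As varsigma -> infinity, the gated head approaches W_H relu(W_Hm1 z) (equation (17)).
relu_out = W_H @ np.maximum(W_Hm1 @ z, 0.0)
gap = float(abs(modified_head(z, W_Hm1, W_H, varsigma=1e4) - relu_out)[0])
```

The gap shrinks as ς′ grows because the sigmoid gate σ̃(ς′q) converges pointwise to the indicator of q > 0.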
We de\ufb01ne \u02c6 L(\u03b8(H\u22121)) = L(\u03b8\u03c4 (1:H\u22122), \u03b8(H\u22121), \u03b8\u03c4 (H)) and B\u00af \u03f5 = min\u03b8(H\u22121)\u2208\u0398\u00af \u03f5 \u2225\u03b8(H\u22121) \u2212 \u03b8\u03c4 (H\u22121)\u2225where \u0398\u00af \u03f5 = argmin\u03b8(H\u22121) max( \u02c6 L(\u03b8(H\u22121)), \u00af \u03f5) for any \u00af \u03f5 \u22650. 3.3.1 Safe-exploration Condition The mathematical analysis in this section relies on the safe-exploration condition, which is what allows us to safely explore deep nonlinear representations in the exploration phase without getting stuck in the states of vec(Y\u2113) / \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ). The safeexploration condition is veri\ufb01able, time-independent, data-dependent and architecturedependent. The veri\ufb01ability and time-independence makes the assumption strong enough to provide prior guarantees before training. The data-dependence and architecturedependence make the assumption weak enough to be applicable for a wide range of practical settings. For any q \u2208RmH\u22121\u00d7mH, we de\ufb01ne the matrix-valued function \u03c6(q, \u03b8(1:H\u22122)) \u2208 Rn\u00d7mHmH\u22121 by \u03c6(q, \u03b8(1:H\u22122)) = \uf8ee \uf8ef \uf8f0 \u02dc \u03c3(z\u22a4 1 q\u22171)z\u22a4 1 \u00b7 \u00b7 \u00b7 \u02dc \u03c3(z\u22a4 1 q\u2217mH)z\u22a4 1 . . . ... . . . \u02dc \u03c3(z\u22a4 n q\u22171)z\u22a4 n \u00b7 \u00b7 \u00b7 \u02dc \u03c3(z\u22a4 n q\u2217mH)z\u22a4 n \uf8f9 \uf8fa \uf8fb, where zi = z(xi, \u03b8(1:H\u22122)) \u2208RmH\u22121 and \u02dc \u03c3(z\u22a4 i q\u2217k)z\u22a4 i \u2208R1\u00d7mH\u22121 for all i \u2208[n] and k \u2208[mH]. Using this function, the safe-exploration condition is formally stated as: Assumption 4. (Safe-exploration condition) There exist a q \u2208RmH\u22121\u00d7mH and a \u03b8(1:H\u22122) \u2208Rd1:H\u22122 such that rank(\u03c6(q, \u03b8(1:H\u22122))) = n. 
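The matrix φ(q, θ(1:H−2)) and the rank check of Assumption 4 can be reproduced in a few lines. The sketch below uses a random stand-in for the feature matrix whose rows are the z_i; it also illustrates why the nonlinearity of σ̃ matters: with a constant gate, φ collapses to [Z · · · Z], whose rank is at most mH−1.

```python
import numpy as np

def phi(q, Z, gate):
    """Build the n x (mH * mH1) matrix phi(q, theta) of Section 3.3.1.

    Z    : (n, mH1) array; row i is a stand-in for z_i = z(x_i, theta_(1:H-2)).
    q    : (mH1, mH) array.
    gate : scalar function playing the role of sigma_tilde.
    """
    n = Z.shape[0]
    mH = q.shape[1]
    rows = [np.concatenate([gate(Z[i] @ q[:, k]) * Z[i] for k in range(mH)])
            for i in range(n)]
    return np.array(rows)

rng = np.random.default_rng(4)
n, mH1, mH = 8, 5, 3            # mH * mH1 = 15 >= n; no single wide layer needed
Z = rng.standard_normal((n, mH1))
q = rng.standard_normal((mH1, mH))

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
rank_nonlinear = np.linalg.matrix_rank(phi(q, Z, sigmoid))       # generically n
rank_constant = np.linalg.matrix_rank(phi(q, Z, lambda t: 1.0))  # collapses to rank(Z)
```

Here mH · mH−1 = 15 ≥ n = 8 even though every layer width is below n, matching the discussion that the safe-exploration condition only requires a layer of size mH mH−1 ≥ n.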
16 \fThe safe-exploration condition asks for only the existence of one parameter vector in the network architecture such that rank(\u03c6(q, \u03b8(1:H\u22122))) = n. It is not about the training trajectory (\u03b8t)t. Since the matrix \u03c6(q, \u03b8(1:H\u22122)) is of size n \u00d7 mHmH\u22121, the safeexploration condition does not require any wide layer of size mH \u2265n or mH\u22121 \u2265n. Instead, it requires a layer of size mHmH\u22121 \u2265n. This is a signi\ufb01cant improvement over the most closely related study (Kawaguchi and Sun, 2021) where the wide layer of size mH \u2265n was required. Note that having mHmH\u22121 \u2265n does not imply the safe-exploration condition. Instead, mHmH\u22121 \u2265n is a necessary condition to satisfy the safe-exploration condition, whereas mH \u2265n or mH\u22121 \u2265n was a necessary condition to satisfy assumptions in previous papers, including the most closely related study (Kawaguchi and Sun, 2021). The safe-exploration condition is veri\ufb01ed in experiments in Subsection 3.4. 3.3.2 Additional Assumptions We also use the following assumptions: Assumption 5. For any i \u2208[n], the function \u2113i : q 7\u2192\u2113(q, yi) is differentiable, and \u2225\u2207\u2113i(q) \u2212\u2207\u2113i(q\u2032)\u2225\u2264L\u2113\u2225q \u2212q\u2032\u2225for all q, q\u2032 \u2208R. Assumption 6. For each i \u2208[n], the functions \u03b8(1:H\u22122) 7\u2192z(xi, \u03b8(1:H\u22122)) and q 7\u2192\u02dc \u03c3(q) are real analytic. Assumption 5 is satis\ufb01ed by using standard loss functions such as the squared loss \u2113(q, y) = \u2225q \u2212y\u22252 and cross entropy loss \u2113(q, y) = \u2212Pdy k=1 yk log exp(qk) P k\u2032 exp(qk\u2032). The assumptions of the invexity and convexity of the function q 7\u2192\u2113(q, yi) in Subsections 3.3.3\u20133.3.4 also hold for these standard loss functions. 
Using L\u2113in Assumption 5, we de\ufb01ne \u02c6 L = L\u2113 n \u2225Z\u22252 where Z \u2208Rn is de\ufb01ned by Zi = maxj\u2208[my] \u2225[diag(\u03b8\u03c4 (H,j)) \u2297 ImH\u22121](\u03c6(\u03b8\u03c4 (H\u22121,j), \u03b8\u03c4 (1:H\u22122))i\u2217)\u22a4\u2225with \u03b8(H,j) = (W (H) j\u2217)\u22a4. Assumption 6 is satis\ufb01ed by using any analytic activation function such as sigmoid, hyperbolic tangents and softplus activations q 7\u2192ln(1 + exp(\u03c2q))/\u03c2 with any hyperparameter \u03c2 > 0. This is because a composition of real analytic functions is real analytic and the following are all real analytic functions in \u03b8(1:H\u22122): the convolution, af\ufb01ne map, average pooling, skip connection, and batch normalization. Therefore, the assumptions can be satis\ufb01ed by using a wide range of machine learning models, including deep neural networks with convolution, skip connection, and batch normalization. Moreover, the softplus activation can approximate the ReLU activation for any desired accuracy: i.e., ln(1 + exp(\u03c2q))/\u03c2 \u2192relu(q) as \u03c2 \u2192\u221e, where relu represents the ReLU activation. 3.3.3 Global Optimality at the Limit Point The following theorem proves the global optimality at limit points of the EE wrapper with a wide range of optimizers, including gradient descent and modi\ufb01ed Newton methods: Theorem 4. Suppose Assumptions 4\u20136 hold and that the function \u2113i : q 7\u2192\u2113(q, yi) is invex for any i \u2208[n]. Assume that there exist \u00af c, c > 0 such that c\u2225\u2207\u02c6 L(\u03b8t (H\u22121))\u22252 \u2264 17 \f\u2207\u02c6 L(\u03b8t (H\u22121))\u22a4\u02c6 gt and \u2225\u02c6 gt\u22252 \u2264\u00af c\u2225\u2207\u02c6 L(\u03b8t (H\u22121))\u22252 for any t \u2265\u03c4. 
Assume that the learning rate sequence (\u03b1t)t\u2265\u03c4 satis\ufb01es either (i) \u03f5 \u2264\u03b1t \u2264 c(2\u2212\u03f5) \u02c6 L\u00af c for some \u03f5 > 0, or (ii) limt\u2192\u221e\u03b1t = 0 and P\u221e t=\u03c4 \u03b1t = \u221e. Then with probability one, every limit point \u02c6 \u03b8 of the sequence (\u03b8t)t is a global minimum of L as L(\u02c6 \u03b8) \u2264L(\u03b8) for all \u03b8 \u2208Rd. 3.3.4 Global Optimality Gap at Each Iteration We now present global convergence guarantees of the EE wrapper A with gradient decent and SGD: Theorem 5. Suppose Assumptions 4\u20136 hold and that the function \u2113i : q 7\u2192\u2113(q, yi) is convex for any i \u2208[n]. Then, with probability one, the following two statements hold: (i) (Gradient descent) if \u02c6 gt = \u2207\u02c6 L(\u03b8t (H\u22121)) and \u03b1t = 1 \u02c6 L for t \u2265\u03c4, then for any \u00af \u03f5 \u22650 and t > \u03c4, L(\u03b8t) \u2264inf \u03b8\u2208Rd max(L(\u03b8), \u00af \u03f5) + B2 \u00af \u03f5 \u02c6 L 2(t \u2212\u03c4). (ii) (SGD) if E[\u02c6 gt|\u03b8t] = \u2207\u02c6 L(\u03b8t (H\u22121)) (almost surely) with E[\u2225\u02c6 gt\u22252] \u2264G2, and if \u03b1t \u22650, P\u221e t=\u03c4(\u03b1t)2 < \u221eand P\u221e t=\u03c4 \u03b1t = \u221efor t \u2265\u03c4, then for any \u00af \u03f5 \u22650 and t > \u03c4, E[L(\u03b8t\u2217)] \u2264inf \u03b8\u2208Rd max(L(\u03b8), \u00af \u03f5) + B2 \u00af \u03f5 + G2 Pt k=\u03c4 \u03b12 k 2 Pt k=\u03c4 \u03b1k (19) where t\u2217\u2208argmink\u2208{\u03c4,\u03c4+1,...,t} L(\u03b8k). In Theorem 5 (ii), with \u03b1t \u223cO(1/ \u221a t), the optimality gap becomes E[L(\u03b8t\u2217)] \u2212inf \u03b8\u2208Rd max(L(\u03b8), \u00af \u03f5) = \u02dc O(1/ \u221a t). 3.4 Experiments This section presents empirical evidence to support our theory and what is predicted by a well-known hypothesis. 
We note that there is no related work or algorithm that can guarantee global convergence in the setting of our experiments, where the model has convolutions, skip connections, and batch normalization without any wide layer (of width larger than n). Moreover, unlike previous studies that propose new methods, our training framework works by modifying any given method. 3.4.1 Sine Wave Dataset We have seen in Subsection 2.2 that gradient descent gets stuck at sub-optimal points for the sine wave dataset. Using the same setting as in Subsection 2.2 with ε = 0.01, τ = 2000, and G̃ = G, we confirm in Figure 2 that the EE wrapper A can modify gradient descent to avoid sub-optimal points and converge to global minima, as predicted by our theory. Figure 3 shows the change of the gradient representation, ‖M(θT) − M(θ0)‖²F, for each time step T. As can be seen, the values of ‖M(θT) − M(θ0)‖²F are large for both methods. Notably, the EE wrapper A of the base case significantly increases the value of ‖M(θT) − M(θ0)‖²F even in the exploitation phase after τ = 2000, as we are optimizing the hidden layer. See Appendix B in the Supplementary Information for more details of the experiments for the sine wave dataset. 3.4.2 Image datasets The standard convolutional ResNet with 18 layers (He et al., 2016) is used as the base model f̄. We use ResNet-18 for the illustration of our theory because it is used in practice and it has convolution, skip connections, and batch normalization without any width larger than the number of data points. This setting is not covered by any of the previous theories for global convergence. We set the activation to be the softplus function q ↦ ln(1 + exp(ςq))/ς with ς = 100 for all layers of the base ResNet.
This approximates the ReLU activation well, as shown in Appendix C in the Supplementary Information. We employ the cross-entropy loss and σ̃(q) = 1/(1 + e−q). We use a standard algorithm, SGD, with its standard hyper-parameter setting for the training algorithm G with G̃ = G: i.e., we let the mini-batch size be 64, the weight decay rate be 10−5, the momentum coefficient be 0.9, the learning rate be αt = 0.1, and the last epoch T̂ be 200 (with data augmentation) and 100 (without data augmentation). The hyper-parameters ε and τ = τ0T̂ were selected from ε ∈ {10−3, 10−5} and τ0 ∈ {0.4, 0.6, 0.8} by only using training data. That is, we randomly divided each training dataset (100%) into a smaller training set (80%) and a validation set (20%) for a grid search over the hyper-parameters. See Appendix B in the Supplementary Information for the results of the grid search and details of the experimental setting. This standard setting satisfies Assumptions 5-6, leaving Assumption 4 to be verified. Verification of Assumption 4. Table 2 summarizes the verification results of the safe-exploration condition. Because the condition only requires the existence of a pair (θ, q) satisfying the condition, we verified it by using a randomly sampled q from the standard normal distribution and a θ returned by a common initialization scheme (He et al., 2015). As mH−1 = 513 (512 + the constant neuron for the bias term) for the standard ResNet, we set mH = ⌈2(n/mH−1)⌉ throughout all the experiments with the ResNet.
Figure 2: Training errors for three random trials.
For each
Figure 3: Changes of the representations, where M(θ) := ∂vec(fX(θ))/∂θ (∂vec(fX(θ))/∂θ)⊤.
dataset, the rank condition was verified twice by the two standard methods: one from (Press et al., 2007) and another from (Golub and Van Loan, 1996). Test Performance. One well-known hypothesis is that the success of deep-learning methods partially comes from their ability to automatically learn deep nonlinear representations suitable for making accurate predictions from data (e.g., LeCun et al., 2015). As the EE wrapper A keeps this ability of representation learning, the hypothesis suggests that the test performance of the EE wrapper A of a standard method is approximately comparable with that of the standard method. Unlike typical experimental studies, our objective here is to confirm this prediction, instead of showing improvements over a previous method. We empirically confirmed the prediction in Tables 3 and 4, where the numbers indicate the mean test errors (with standard deviations in parentheses) over five random trials. As expected, the values of ‖M(θT̂) − M(θ0)‖²₂ were also large: e.g., 4.64 × 10¹² for the standard method and 3.43 × 10¹² for the wrapper A of the method with the Semeion dataset. Training Behavior. Figure 4 shows that the EE wrapper A can improve training loss values of the standard SGD algorithm in the exploitation phase without changing its
Table 2: Verification of the safe-exploration condition (Assumption 4) with mH = ⌈2(n/mH−1)⌉, where n is the number of training data points, mH is the width of the last hidden layer, and mH−1 is the width of the penultimate hidden layer.
Dataset      n      mH−1   mH    Assumption 4
MNIST        60000  513    234   Verified
CIFAR-10     50000  513    195   Verified
CIFAR-100    50000  513    195   Verified
Semeion      1000   513    4     Verified
KMNIST       60000  513    234   Verified
SVHN         73257  513    286   Verified
Table 3: Test error (%): data augmentation.
Dataset      Standard       A(Standard)
MNIST        0.40 (0.05)    0.30 (0.05)
CIFAR-10     7.80 (0.50)    7.14 (0.12)
CIFAR-100    32.26 (0.15)   28.38 (0.42)
Semeion      2.59 (0.57)    2.56 (0.55)
KMNIST       1.48 (0.07)    1.36 (0.11)
SVHN         4.67 (0.05)    4.43 (0.11)
Table 4: Test error (%): no data augmentation.
Dataset      Standard       A(Standard)
MNIST        0.52 (0.16)    0.49 (0.02)
CIFAR-10     15.15 (0.87)   14.56 (0.38)
CIFAR-100    54.99 (2.29)   46.13 (1.80)
Figure 4: Training loss with data augmentation; panels (a) MNIST, (b) CIFAR-10, (c) CIFAR-100.
Table 5: Total wall-clock time in a local GPU workstation
Dataset      Standard          A(Standard)
Semeion      364.60 (0.94)     356.82 (0.67)
CIFAR-10     3616.92 (10.57)   3604.5 (6.80)
Table 6: Test errors (%) of A(Standard) with G̃ = L-BFGS.
(a) with data augmentation
ε \ τ0    0.4    0.6    0.8
10−3      0.26   0.38   0.37
10−5      0.37   0.32   0.37
(b) without data augmentation
ε \ τ0    0.4    0.6    0.8
10−3      0.36   0.43   0.42
10−5      0.42   0.35   0.35
hyper-parameters, because G̃ = G in these experiments. In the figure, the plotted lines indicate the mean values over five random trials and the shaded regions show error bars with one standard deviation. Computational Time. The EE wrapper A runs the standard SGD G in the exploration phase and the SGD G̃ = G only on the subset of the weights θ(H−1) in the exploitation phase.
Thus, the computational time of the EE wrapper A is similar to that of the SGD in the exploration phase, and it tends to be faster than the SGD in the exploitation phase. To confirm this, we measured computational time with the Semeion and CIFAR-10 datasets under the same computational resources (e.g., without running other jobs in parallel) in a local workstation for each method. The mean wall-clock time (in seconds) over five random trials is summarized in Table 5, where the numbers in parentheses are standard deviations. It shows that the EE wrapper A is slightly faster than the standard method, as expected. Effect of Learning Rate and Optimizer. We also conducted experiments on the effects of learning rates and optimizers using the MNIST dataset with data augmentation. Using the best learning rate from {0.2, 0.1, 0.01, 0.001} for each method (with G̃ = G = SGD), the mean test errors (%) over five random trials were 0.33 (0.03) for the standard base method, and 0.27 (0.03) for the A wrapper of the standard base method (the numbers in parentheses are standard deviations). Moreover, Table 6 reports preliminary results on the effect of optimizers with G̃ set to the Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) (with G = the standard SGD). By comparing Tables 3 and 6, we can see that using a different optimizer in the exploitation phase can potentially lead to performance improvements. A comprehensive study of this phenomenon is left to future work. 4 Conclusion Despite the nonlinearity of the dynamics and the non-invexity of the objective, we have rigorously proved convergence of training dynamics to global minima for nonlinear representation learning. Our results apply to a wide range of machine learning models, allowing both under-parameterization and over-parameterization.
For example, our results are applicable to the case where the minimum eigenvalue of the matrix \u2202vec(fX(\u03b8t)) \u2202\u03b8t ( \u2202vec(fX(\u03b8t)) \u2202\u03b8t )\u22a4is zero for all t \u22650. Under the common model structure assumption, models that cannot achieve zero error for all datasets (except some \u2018good\u2019 datasets) are shown to achieve global optimality with zero error exactly when the dynamics satisfy the data-architecture alignment condition. Our results provide guidance for choosing and designing model structure and algorithms via the common model structure assumption and data-architecture alignment condition. The key limitation in our analysis is the differentiability of the function f. For multilayer neural networks, this is satis\ufb01ed by using standard activation functions, such as softplus, sigmoid, and hyperbolic tangents. Whereas softplus can approximate ReLU arbitrarily well, the direct treatment of ReLU in nonlinear representation learning is left to future work. Our theoretical results and numerical observations uncover novel mathematical properties and provide a basis for future work. For example, we have shown global convergence under the data-architecture alignment condition vec(Y\u2113) \u2208Col( \u2202vec(fX(\u03b8t)) \u2202\u03b8t ). The EE wrapper A is only one way to ensure this condition. There are many other ways to ensure the data-architecture alignment condition and each way can result in a new algorithm with guarantees." + }, + { + "url": "http://arxiv.org/abs/2104.05785v2", + "title": "A Recipe for Global Convergence Guarantee in Deep Neural Networks", + "abstract": "Existing global convergence guarantees of (stochastic) gradient descent do\nnot apply to practical deep networks in the practical regime of deep learning\nbeyond the neural tangent kernel (NTK) regime. 
This paper proposes an\nalgorithm, which is ensured to have global convergence guarantees in the\npractical regime beyond the NTK regime, under a verifiable condition called the\nexpressivity condition. The expressivity condition is defined to be both\ndata-dependent and architecture-dependent, which is the key property that makes\nour results applicable for practical settings beyond the NTK regime. On the one\nhand, the expressivity condition is theoretically proven to hold\ndata-independently for fully-connected deep neural networks with narrow hidden\nlayers and a single wide layer. On the other hand, the expressivity condition\nis numerically shown to hold data-dependently for deep (convolutional) ResNet\nwith batch normalization with various standard image datasets. We also show\nthat the proposed algorithm has generalization performances comparable with\nthose of the heuristic algorithm, with the same hyper-parameters and total\nnumber of iterations. Therefore, the proposed algorithm can be viewed as a step\ntowards providing theoretical guarantees for deep learning in the practical\nregime.", + "authors": "Kenji Kawaguchi, Qingyun Sun", + "published": "2021-04-12", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "math.OC", + "stat.ML" + ], + "main_content": "Introduction The pursuit of global convergence guarantee has been one of the important aspects of optimization theory. However, ensuring global convergence is notoriously hard for \ufb01rstorder optimization algorithms used to train deep neural networks (Goodfellow, Bengio, and Courville 2016). Recently, some progress has been made on understanding the optimization aspect of overparametrized neural networks. 
Overparametrized neural networks can be trained to have zero training error, interpolating all the training data points, and have recently been shown to enjoy global convergence guarantees in theoretical regimes (Li and Liang 2018; Soltanolkotabi, Javanmard, and Lee 2018; Kawaguchi and Huang 2019; Daniely 2019; Bresler and Nagaraj 2020; Montanari and Zhong 2020; Bubeck et al. 2020). These studies open up an insightful direction leading to the understanding of the optimization aspect of deep learning. However, there is still a significant gap between theory and practice. Copyright \u00a9 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In applications such as computer vision, speech, and natural language, a major reason for the success of deep learning in practice is its ability to learn representations with multiple levels of abstraction during training, as explained by LeCun, Bengio, and Hinton (2015). In contrast, the special types of neural networks studied in previous theories with global convergence guarantees are not allowed to learn representations during training, as their neural tangent kernels are approximately unchanged during training. Indeed, such special neural networks without the capability to learn representations are considered to have limitations compared to those with that capability (Wei et al. 2019; Chizat, Oyallon, and Bach 2019; Yehudai and Shamir 2019). Furthermore, the set of neural networks studied by previous theories does not yet include the practical deep neural networks used in practice with good generalization performances (Kawaguchi, Kaelbling, and Bengio 2017; Poggio et al. 2017). In this work, we propose a two-phase method to modify a base algorithm such that the modified algorithm enables practical deep neural networks to learn representations while having global convergence guarantees for all layers under verifiable conditions.
Our global convergence guarantees are applicable to a wide range of practical deep neural networks, including deep convolutional networks with skip connections and batch normalization. For example, the verifiable conditions for global convergence guarantees are shown to be satisfied by both fully connected deep neural networks and deep residual neural networks (ResNets) with convolutional layers. Our main contributions can be summarized as:
\u2022 We propose a novel algorithm that turns any given first-order training algorithm into a two-phase training algorithm.
\u2022 We prove that the resulting two-phase training algorithms find global minima for all layers of deep neural networks, under the expressivity condition.
\u2022 The condition for global convergence is verified theoretically for fully connected networks with the last hidden layer being wide (as the number of training data points) and all other hidden layers being narrow (as the input dimension).
\u2022 The condition for global convergence is verified numerically for the deep (convolutional) ResNet with batch normalization on various standard datasets.
\u2022 We compare the standard training algorithm (SGD with momentum) and the two-phase version of it with the same hyperparameters and total iterations. The two-phase version is shown to preserve the practical generalization performances of the standard training while providing global convergence guarantees.
*Equal contribution
2 Related work In this section, we discuss related studies and their relationships with the contributions of this paper. Over-parameterization Over-parameterization has been shown to help optimization of neural networks. More concretely, over-parameterization can remove suboptimal local minima (Soudry and Carmon 2016) and improve the quality of random initialization (Safran and Shamir 2016).
Furthermore, gradual over-parameterization (i.e., gradually increasing the number of parameters) is recently shown to improve steadily the quality of local minima (Kawaguchi, Huang, and Kaelbling 2019). The extreme over-parameterization that requires the number of neurons to approach in\ufb01nity is used to prove global convergence (Mei, Montanari, and Nguyen 2018; Mei, Misiakiewicz, and Montanari 2019; Chizat and Bach 2018; Dou and Liang 2020; Wei et al. 2019; Fang et al. 2020). Polynomial degrees of over-parameterization are also utilized for global convergence in the lazy training regime. Neural tangent kernel and lazy training It was shown that neural networks under lazy training regime (with a speci\ufb01c scaling and initialization) is nearly a linear model \ufb01tted with random features induced by the neural tangent kernel (NTK) at random initialization. Accordingly, in the lazy training regime, which is also called the NTK regime, neural networks provably achieve globally minimum training errors. The lazy training regime is studied for both shallow (with one hidden layer) and deep neural networks and convolutional networks in previous studies (Zou et al. 2020; Li and Liang 2018; Jacot, Gabriel, and Hongler 2018; Du et al. 2019, 2018; Chizat, Oyallon, and Bach 2019; Arora et al. 2019b; Allen-Zhu, Li, and Liang 2019; Fang et al. 2020; Montanari and Zhong 2020). Lazy training and degree of overparametrization The global convergence guarantee in the lazy training regime was \ufb01rst proven by using the signi\ufb01cant overparametrization that requires the number of neurons per layer to be large polynomials in the number of data points (Li and Liang 2018; Soltanolkotabi, Javanmard, and Lee 2018). Later, the requirement on the degree of over-parametrization has been improved to a small polynomial dependency (Kawaguchi and Huang 2019; Bresler and Nagaraj 2020). Furthermore, for two-layer networks with random i.i.d. weights and i.i.d. 
input data, the requirement was reduced to the number of training data points divided by the input dimension up to log factors, which is the optimal order in theory (Daniely 2019; Montanari and Zhong 2020; Bubeck et al. 2020). Beyond lazy training regime However, it has been noted that neural networks in many real-world applications has weight parameters trained beyond the lazy training regime, so that the learned features have better expressive power than random features (Yehudai and Shamir 2019; Ghorbani et al. 2019; Arora et al. 2019b,a). Accordingly, a series of studies have demonstrated that the lazy training perspective of neural networks is not enough for understanding the success of deep learning (Wei et al. 2019; Chizat, Oyallon, and Bach 2019; Yehudai and Shamir 2019). Indeed, there are also previous works for the regime beyond the lazy training (Kawaguchi 2016; Kawaguchi and Bengio 2019; Jagtap, Kawaguchi, and Karniadakis 2020; Jagtap, Kawaguchi, and Em Karniadakis 2020). To overcome the weakness of lazy training, in this work, we present a novel method to use learned representation with a learned neural tangent kernel, instead of standard lazy training that use almost the random initialized neural tangent kernel. Our experiments on multiple ML benchmark datasets show empirically that our twophase training method achieves comparable generalization performances with standard SGD training. Relation to this paper Unlike previous work on the lazy training regime that use the NTK at random initialization, we allow the NTK to change signi\ufb01cantly during training, to learn features and representation. In terms of the degree of overparametrization, the results in this paper achieve the linear order (in the number of training data points) without the assumptions of the i.i.d. weights and i.i.d random input. Our results are also applicable for deep neural networks in practical settings without degrading the generalization performances. 
On the other hand, this paper further shows that the study of the lazy training regime is also useful for understanding the new two-phase training algorithm. Thus, we hope that the proposed two-phase training algorithm becomes a bridge between the practice and theory of the neural tangent kernel. 3 Model In this paper, we consider the empirical risk minimization problem. Let ((xi, yi))_{i=1}^n be a training dataset of n samples, where Sx = {xi}_{i=1}^n and Sy = {yi}_{i=1}^n are the sets of training inputs and target outputs, with xi \u2208X \u2286R^{mx} and yi \u2208Y \u2286R^{my}. Let \u2113: R^{my} \u00d7 Y \u2192R\u22650 be the loss of each sample that measures the difference between the prediction f(xi, w) and the target yi. The goal of empirical risk minimization is to find a prediction function f(\u00b7 ; w) : R^{mx} \u2192R^{1\u00d7my} by minimizing L(w) = (1/n) \u2211_{i=1}^n \u2113(f(xi, w)\u22a4, yi), where w \u2208R^d is the parameter vector that contains all the trainable parameters, including the weights and bias terms of all layers of the deep neural network. We define w_(l) \u2208R^{dl} to be the vector of all trainable parameters at the l-th layer. For any pair (r, q) such that 1 \u2264r \u2264q \u2264H+1, let w_(r:q) = [w_(r)\u22a4, . . . , w_(q)\u22a4]\u22a4\u2208R^{dr:q}, where w_(r:q) = w_(r) when r = q. With this notation, we can write w = w_(1:H+1). Here, f(x, w) represents the pre-activation output of the last layer of a neural network for a given (x, w). Then the output over all data points is fX(w) = [f(x1, w)\u22a4, . . . , f(xn, w)\u22a4]\u22a4\u2208R^{n\u00d7my}. The pre-activation of the last layer is an affine map given by fX(w) = h^(H)_X W^(H+1) + b^(H+1), where W^(H+1) \u2208R^{mH\u00d7my} and b^(H+1) \u2208R^{1\u00d7my} are the weight matrix and the bias term at the last layer. Here, h^(H)_X = h^(H)_X(w_(1:H)) \u2208R^{n\u00d7mH} is the matrix that contains the outputs at the last hidden layer. In order to consider layers with batch normalization, we allow h^(H)_X(w_(1:H)) and f(xi, w) for each i \u2208{1, . . . , n} to depend on all data points x1, . . . , xn. Here, w_(1:H) represents the vector of the trainable parameters of all hidden layers, including the parameters of batch normalization. 4 Algorithm We now describe the two-phase method as a modification of any given first-order base algorithm. The modified algorithm is proven to have a global convergence guarantee under verifiable conditions, as shown in the next two sections. The base algorithm can be any given first-order algorithm, including both batch and stochastic algorithms, such as gradient descent and stochastic gradient descent with momentum and adaptive step sizes. The description of the algorithm is presented in Algorithm 1, where \u03b7t \u2299gt represents the Hadamard product of \u03b7t and gt. Here, gt represents the parameter-update rule that corresponds to the given base algorithm. For example, if we use (mini-batch) stochastic gradient descent as the base algorithm, gt represents the (mini-batch) stochastic gradient of the loss function with respect to w at time t. The first phase of the training algorithm is the same as the base algorithm. Then a random Gaussian perturbation is added to all but the last layer's weights. After the random perturbation, the second training phase starts. In the second training phase, the base algorithm is modified to preserve the rank of the NTK at the time \u03c4 after the random perturbation, as: rank(K(wk)) \u2265rank(K(w\u03c4)) for all k = \u03c4, \u03c4 + 1, . . . , t, where the NTK matrix K(w) \u2208R^{nmy\u00d7nmy} is defined by K(w) = (\u2202vec(fX(w)\u22a4)/\u2202w)(\u2202vec(fX(w)\u22a4)/\u2202w)\u22a4. As two examples, the rank can be preserved by lazy training of all layer weights or by training only the parameters in the last layer in the second phase.
In the next section, we will develop the global convergence theory for Algorithm 1. 5 Theoretical analysis In this section, we prove that the parameter wt in Algorithm 1 converges to a global minimum w* of all layers under the expressivity condition. As a concrete example, we prove that fully-connected neural networks with softplus nonlinear activations and a moderately wide last hidden layer satisfy the expressivity condition for all distinguishable training datasets. All proofs in this paper are deferred to the Appendix.
Algorithm 1: Two-phase modification A of a base algorithm with global convergence guarantees
1: Inputs: an initial parameter vector w0, a time \u03c4, and a base algorithm with update sequence (gt)t and learning rate sequence (\u03b7t)t.
2: (First training phase)
3: for t = 0, 1, . . . , \u03c4 \u22121 do
4: Update parameters: wt+1 = wt \u2212\u03b7t \u2299gt
5: (Random perturbation) Add noise at time \u03c4: w\u03c4_(1:H) \u2190w\u03c4_(1:H) + \u03b4, where the noise vector \u03b4 = (\u03b41, . . . , \u03b4H) \u2208R^{d1:H} is sampled from a non-degenerate Gaussian distribution: \u03b4h \u223cN(0, \u03c3h^2 Idh) for h = 1, . . . , H.
6: (Second training phase)
7: for t = \u03c4, \u03c4 + 1, . . . do
8: Update parameters: wt+1 = wt \u2212\u03b7t \u2299gt, where the learning rate (\u03b7t)t>\u03c4 is modified to satisfy the rank-preserving condition: for k = \u03c4, \u03c4 + 1, . . . , t, rank(K(wk)) \u2265rank(K(w\u03c4)).
5.1 Expressivity condition Making the right assumption is often the most critical step in the theoretical analysis of deep learning. The assumption needs to be both weak enough to be useful in practice and strong enough to prove the desired conclusions. It is often challenging to find an assumption with the right theory-practice trade-off, as typical assumptions that lead to the desired conclusions are not weak enough to hold in practice, which contributes to the gap between theory and practice.
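The rank-preserving condition in Algorithm 1 is stated through the NTK matrix K(w). The following is a small sketch, not the paper's code, of forming K(w) = (\u2202vec(fX(w))/\u2202w)(\u2202vec(fX(w))/\u2202w)\u22a4 for a hypothetical scalar-output one-hidden-layer model, with the Jacobian taken by central finite differences, and checking its rank.

```python
import numpy as np

def softplus(z, s=1.0):
    # smooth, real-analytic activation; s -> infinity approaches ReLU
    return np.logaddexp(0.0, s * z) / s

def f(x, w, m=5):
    # hypothetical scalar-output net with one hidden layer of m units
    W1 = w[:2 * m].reshape(2, m)
    W2 = w[2 * m:].reshape(m, 1)
    return float(softplus(x @ W1) @ W2)

rng = np.random.default_rng(0)
n, m = 4, 5
X = rng.normal(size=(n, 2))
w = rng.normal(size=2 * m + m)

# Jacobian d f_X(w) / d w via central finite differences
eps = 1e-5
J = np.zeros((n, w.size))
for j in range(w.size):
    e = np.zeros_like(w)
    e[j] = eps
    J[:, j] = [(f(x, w + e) - f(x, w - e)) / (2 * eps) for x in X]

K = J @ J.T                        # NTK matrix (n x n for scalar output)
ntk_rank = np.linalg.matrix_rank(K)
print(K.shape, ntk_rank)
```

Tracking this rank along an optimization trajectory is one way to monitor the condition rank(K(wk)) \u2265 rank(K(w\u03c4)) numerically.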
We aim to find the right trade-off by proposing a data-architecture-dependent, time-independent, and verifiable condition called the expressivity condition as a cornerstone for global convergence results. The expressivity condition guarantees the existence of parameters that can interpolate all the training data. Assumption 1. (Expressivity condition) There exists w_(1:H) such that \u03d5(w_(1:H)) \u0338= 0, where \u03d5(w_(1:H)) := det([h^(H)_X(w_(1:H)), 1n][h^(H)_X(w_(1:H)), 1n]\u22a4). In the expressivity condition, the map h^(H)_X depends on both the architecture and the dataset. Such dependency is essential for the theory-practice trade-off; i.e., we obtain the desired conclusion, yet only for a certain class of pairs of dataset and architecture. We verify the expressivity condition in our experiments. The expressivity condition is also verifiable, as demonstrated below. 5.2 Real analyticity To prove global convergence, we also require the function h^(H)_X to be real analytic. Since a composition of real analytic functions is real analytic, we only need to check whether each operation is real analytic. Convolution, affine maps, average pooling, and shortcut skip connections are all real analytic functions. Therefore, the composition of these layers preserves real analyticity. We now prove that the batch normalization function is also real analytic. The batch normalization applied to an output z of an arbitrary coordinate can be written as BN_{\u03b3,\u03b2}(z) = \u03b3(z \u2212\u00b5)/\u221a(\u03c3^2 + \u03f5) + \u03b2. Here, \u00b5 and \u03c3^2 also depend on the other samples, as \u00b5 = (1/|S|) \u2211_{i\u2208S} zi and \u03c3^2 = (1/|S|) \u2211_{i\u2208S} (zi \u2212\u00b5)^2, where S is an arbitrary subset of {1, 2, . . . , n} such that z \u2208{zi : i \u2208S}. Then, the following statement holds: Proposition 1. The batch normalization function (z, \u03b2, \u03b3) \u21a6BN_{\u03b3,\u03b2}(z) is real analytic. We also require the activation function to be analytic.
For example, the sigmoid, hyperbolic tangent, and softplus activations \u03c3(z) = ln(1 + exp(\u03c2z))/\u03c2 are all real analytic functions, for any hyperparameter \u03c2 > 0. The softplus activation can approximate the ReLU activation to any desired accuracy, as \u03c3(x) \u2192relu(x) when \u03c2 \u2192\u221e. Therefore, the function h^(H)_X is real analytic for a large class of neural networks (with batch normalization), such as the standard deep residual networks (He et al. 2016) with the real analytic approximation of the ReLU activation via softplus. 5.3 Global convergence In the following, we assume that the loss function satisfies Assumption 2. Assumption 2. (Use of common loss criteria) For any i \u2208{1, . . . , n}, the function \u2113i : q \u21a6\u2113(q, yi) \u2208R\u22650 is differentiable and convex, and \u2207\u2113i is L\u2113-Lipschitz: i.e., \u2225\u2207\u2113i(q) \u2212\u2207\u2113i(q\u2032)\u2225\u2264L\u2113\u2225q \u2212q\u2032\u2225 for all q, q\u2032. Assumption 2 is satisfied by standard loss functions such as the squared loss \u2113(q, y) = \u2225q \u2212y\u2225_2^2 and the cross-entropy loss \u2113(q, y) = \u2212\u2211_{k=1}^{dy} yk log(exp(qk)/\u2211_{k\u2032} exp(qk\u2032)). Although the objective function L : w \u21a6L(w) used to train a neural network is non-convex in w, the loss criterion \u2113i : q \u21a6\u2113(q, yi) is often convex in q. Before we state the main theorem, we define the following notation. Let w* \u2208R^d be a global minimum over all layers; i.e., w* is a global minimum of L. Define \u03bd = [0_{d1:H}\u22a4, 1_{dH+1}\u22a4]\u22a4, where 0_{d1:H} \u2208R^{d1:H} is the column vector with all entries being zeros and 1_{dH+1} \u2208R^{dH+1} is the column vector with all entries being ones. Let R^2 = min_{\u00afw*_(H+1) \u2208Q} E[\u2225\u00afw*_(H+1) \u2212w\u03c4_(H+1)\u2225^2], where Q = argmin_{w_(H+1)} L([(w\u03c4_(1:H))\u22a4, (w_(H+1))\u22a4]\u22a4). Now we are ready to state one of our main theorems. Theorem 1. Suppose H \u22652 and that Assumptions 1 and 2 hold. Assume that the function h^(H)_X is real analytic. Then, with probability one over a randomly sampled \u03b4, the following two statements hold: (i) (Gradient descent) if gt = \u2207_{wt_(H+1)}L(wt) and \u03b7t = (1/LH)\u03bd for t \u2265\u03c4, with LH = (L\u2113/n) \u2211_{i=1}^n \u2225[h^(H)(xi, w\u03c4_(1:H)), 1]\u2225_2^2, then for any t > \u03c4, L(wt) \u2212L(w*) \u2264R^2 LH/(2(t \u2212\u03c4)). (ii) (SGD) if E[gt|wt] = \u2207_{wt_(H+1)}L(wt) (almost surely), E[\u2225gt\u2225^2] \u2264G^2, and \u03b7t = \u00af\u03b7t \u03bd for t \u2265\u03c4 with \u00af\u03b7t \u2208R satisfying \u00af\u03b7t \u22650, \u2211_{t=\u03c4}^\u221e \u00af\u03b7t^2 < \u221e, and \u2211_{t=\u03c4}^\u221e \u00af\u03b7t = \u221e, then for any t > \u03c4, E[L(wt*)] \u2212L(w*) \u2264(R^2 + G^2 \u2211_{k=\u03c4}^t \u00af\u03b7k^2)/(2 \u2211_{k=\u03c4}^t \u00af\u03b7k), where t* \u2208argmin_{k\u2208{\u03c4,\u03c4+1,...,t}} L(wk). In particular, Theorem 1 part (ii) shows that if we choose \u00af\u03b7t \u223cO(1/\u221at), we have lim_{t\u2192\u221e} (\u2211_{k=\u03c4}^t \u00af\u03b7k^2)/(\u2211_{k=\u03c4}^t \u00af\u03b7k) = 0, and the optimality gap becomes E[L(wt*)] \u2212L(w*) = \u02dcO(1/\u221at). 5.4 Example As a concrete example that satisfies all the conditions of Theorem 1, we consider fully-connected deep networks using the softplus activation with a wide last hidden layer. In the case of fully-connected networks, the output of the last hidden layer can be simplified to h^(H)_X(w_(1:H))_{ij} = h^(H)(xi, w_(1:H))_j \u2208R, (1) where h^(l)(xi, w_(1:l)) \u2208R^{1\u00d7ml} is defined by h^(l)(xi, w_(1:l)) = \u03c3(h^(l\u22121)(xi, w_(1:l\u22121))W^(l) + b^(l)) (2) for l = 1, 2, . . . , H, with h^(0)(xi, w_(1:0)) := xi\u22a4\u2208R^{1\u00d7mx}. Here, W^(l) \u2208R^{ml\u22121\u00d7ml} and b^(l) \u2208R^{1\u00d7ml} are the weight matrix and the bias term of the l-th hidden layer. Also, ml represents the number of neurons at the l-th hidden layer.
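The forward pass in equations (1)-(2) can be sketched directly. The snippet below is an illustration, not the paper's code: narrow intermediate layers of width mx and a last hidden layer of width mH \u2265 n, with random weights, followed by a check of the rank condition behind Assumption 1 for the resulting features.

```python
import numpy as np

def softplus(z, s=1.0):
    return np.logaddexp(0.0, s * z) / s

rng = np.random.default_rng(3)
n, m_x, H, m_H = 5, 3, 3, 6            # m_H >= n, as in Theorem 1's example
X = rng.normal(size=(n, m_x))

h = X                                  # h^(0) = the inputs
for l in range(1, H + 1):
    m_l = m_H if l == H else m_x       # narrow until the last hidden layer
    W = rng.normal(size=(h.shape[1], m_l))   # W^(l)
    b = rng.normal(size=(1, m_l))            # b^(l)
    h = softplus(h @ W + b)            # equation (2)

A = np.hstack([h, np.ones((n, 1))])    # [h^(H)_X(w_(1:H)), 1_n]
full_row_rank = np.linalg.matrix_rank(A) == n
print(h.shape == (n, m_H), full_row_rank)
```

With generic random weights, the augmented feature matrix comes out with full row rank n here, which is exactly what the expressivity condition asks for.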
Since h^(H)_X is a composition of affine functions and real analytic activation functions (i.e., the softplus activation \u03c3), the function h^(H)_X is real analytic. In Theorem 2, we show that the expressivity condition is also satisfied for fully-connected networks, for training datasets that satisfy the following input distinguishability assumption. Assumption 3. (Input distinguishability) \u2225xi\u2225^2 \u2212xi\u22a4xj > 0 for any xi, xj \u2208Sx with i \u0338= j. Theorem 2. Suppose Assumption 3 holds. Assume that h^(H)_X is defined by equations (1)-(2) with the softplus activation \u03c3 and H \u22652 such that min(m1, . . . , mH\u22121) \u2265min(mx, n) and mH \u2265n. Then, Assumption 1 holds true. In Theorem 2, the case of min(m1, . . . , mH\u22121) = mx is allowed. That is, all of the 1, 2, . . . , (H\u22121)-th hidden layers are allowed to be narrow (instead of wide), with the number of neurons being mx, which is typically smaller than n. A previous paper recently proved that gradient descent finds a global minimum in a lazy training regime (i.e., the regime where the NTK remains approximately unchanged during training) with d = \u02dc\u2126(my n + mx H^2 + H^5) (Kawaguchi and Huang 2019). In contrast, Theorem 2 only requires d \u2265(my + mx)n + mx^2 H and allows the NTK to change significantly during training. Assumption 3 used in Theorem 2 can be easily satisfied, for example, by normalizing the input features x1, . . . , xn so that \u2225xi\u2225 = \u2225xj\u2225. With the normalization, the condition is satisfied as long as \u2225xi \u2212xj\u2225 > 0 for i \u0338= j, since (1/2)\u2225xi \u2212xj\u2225^2 = \u2225xi\u2225^2 \u2212xi\u22a4xj. In general, normalization is not necessary; for example, orthogonality of xi and xj along with xi \u0338= 0 satisfies the condition. 5.5 Global convergence with lazy training in the second phase In the previous section, we did not assume the rank preservation condition.
Instead, we considered the special learning rate \u03b7t in the second phase to keep submatrices of the kernel matrix K(w) unchanged during the second phase (t \u2265\u03c4). In this section, we show that Algorithm 1 can still ensure global convergence with a standard uniform learning rate \u03b7t = (2\u00af\u03b7/L)1d, as long as the rank preservation condition is satisfied. Theorem 3. Let \u03b7t = (2\u00af\u03b7/L)1d with \u00af\u03b7 \u2208R for t \u2265\u03c4. Suppose that H \u22652 and the following three assumptions hold: \u2022 Assumption 1 (expressivity condition); \u2022 Assumption 2, along with \u2225\u2207L(w) \u2212\u2207L(w\u2032)\u2225\u2264L\u2225w \u2212w\u2032\u2225 for all w, w\u2032 in the domain of L; \u2022 (rank preserving condition) rank(K(wk)) \u2265rank(K(w\u03c4)) for all k \u2208{\u03c4 + 1, \u03c4 + 2, . . . , t}. Then, the following statement holds for any t > \u03c4: L(wt*) \u2212L(w*) \u2264(1/\u221a((t \u2212\u03c4) + 1)) \u00b7 \u221a(L \u00afR^2 (L(w\u03c4) \u2212L(w*))/(2\u00af\u03b7(1 \u2212\u00af\u03b7))), where t* \u2208argmin_{k\u2208{\u03c4,\u03c4+1,...,t}} L(wk) and \u00afR = max_{\u03c4\u2264k\u2264t} min_{\u02c6\u03c9k\u2208\u00afQk} \u2225(\u03bd \u2299wk) \u2212\u02c6\u03c9k\u2225 with \u00afQk = argmin_{\u02c6\u03c9\u2208R^d} (1/n) \u2211_{i=1}^n \u2113(\u2211_{j=1}^d \u02c6\u03c9j \u2202f(xi, wk)\u22a4/\u2202wj, yi). Theorem 3 shows that using lazy training that preserves the rank of the NTK matrix during the second phase can ensure global convergence for Algorithm 1. Therefore, the proposed two-phase training algorithm provides a new perspective on the lazy training regime. That is, the NTK in Algorithm 1 is allowed to change significantly during the first training phase t < \u03c4 to learn features or representations beyond the random features induced by the data-independent NTK at initialization.
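The second-phase behaviour analyzed in Theorem 1(i) can be illustrated numerically. The sketch below assumes the squared loss \u2113(q, y) = \u2225q \u2212y\u2225^2 (so L\u2113 = 2) and lets a fixed random feature matrix play the role of [h^(H)(xi, w\u03c4_(1:H)), 1]; the objective is then convex in the last layer, and gradient descent with step size 1/LH decreases the loss monotonically. This is an illustration of the rate's setting, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_H = 12, 6
Phi = np.hstack([rng.normal(size=(n, m_H)), np.ones((n, 1))])  # features
y = rng.normal(size=(n, 1))
w = np.zeros((m_H + 1, 1))                    # last-layer parameters

L_H = (2.0 / n) * np.sum(np.sum(Phi ** 2, axis=1))   # L_l = 2 for squared loss
losses = []
for t in range(300):
    r = Phi @ w - y
    losses.append(float(np.mean(np.sum(r ** 2, axis=1))))
    w -= (1.0 / L_H) * (2.0 / n) * Phi.T @ r         # GD step with size 1/L_H
monotone = all(a >= b - 1e-12 for a, b in zip(losses, losses[1:]))
print(losses[0] > losses[-1], monotone)
```

The monotone decrease follows from smooth convexity of the last-layer problem with step size at most the inverse gradient-Lipschitz constant.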
Our two-phase method allows the lazy training with the learned data-dependent NTK at time \u03c4, which is often a better representation of practical dataset than the NTK at random initialization. The property of the lazy training now depends on the quantities at time \u03c4 instead of time t = 0. For example, if we conduct the lazy training with over-parameterization, then the number of neurons required per layer depends on the residual error and the minimum eigenvalue of NTK at time \u03c4, instead of time t = 0. This could potential lead to global convergence theory with weaker assumptions. Thus, the two-phase algorithm opens up a new direction of future research for applying the lazy training theory to the datadependent NTK obtained at the end of \ufb01rst phase training. We can de\ufb01ne the domain of L to be a sublevel set around an initial point to satisfy the Lipschitz continuity. 6 Proof idea and key challenges For global convergence for optimization of deep neural networks, recent results rely on different assumptions, such as over-parameterization and initial conditions on gradient dynamics. Those different assumptions are essentially used in proofs to enforce the full rankness of the NTK matrix and the corresponding feature matrix during training. If the feature matrix is of full rank, then the global convergence is ensured (and the convergence rate depends on the minimum eigenvalue of the NTK matrix). In order to apply our theory for practical settings, we want data-dependency in two key aspects. First, we want the feature matrix to be data-dependent and to change signi\ufb01cantly during training, in order to learn data-dependent features. In contrast, the various assumptions in previous works essentially make the feature matrix to be approximately dataindependent. Second, we want the global convergence results to hold data-dependently for a certain class of practical datasets. 
Instead, previous global convergence results need to hold data-independently, or for linearly separable datasets or synthetic datasets generated by simple models (e.g., Gaussian mixtures). Because of these differences, new proof strategies are needed instead of adopting previous assumptions and their proof ideas. 6.1 Proof for general networks The first step in our proof is to show that the feature matrix is of full rank with probability one over the random entries of the parameter vector. The global convergence then follows from the full rankness (as shown in the complete proof in the Appendix). A challenge in the proof for the general case is to make the right assumption, as discussed above. If we assume significant over-parameterization, then proving the full rankness is relatively easy, but it limits the applicability. Indeed, we want to allow deep networks to have narrow layers. Although the parameter vector after the random perturbation has independent components, the entries of the feature matrix are dependent on each other. Indeed, the entries of the feature matrix are the outputs of nonlinear and non-convex functions of the entries of the parameter vector. Therefore, we cannot use elementary facts from linear algebra and random matrix theory with i.i.d. entries to prove the full rankness of the feature matrix. Instead, we take advantage of a fact about the zero set of a real analytic function: if a function is real analytic and not identically zero, then the Lebesgue measure of its zero set is zero (Mityagin 2015). To utilize this fact, we define a function \u03d5(w_(1:H)) = det(h^(H)_X(w_(1:H)) h^(H)_X(w_(1:H))\u22a4). We then observe that \u03d5 is real analytic, since h^(H)_X is assumed to be real analytic and a composition of real analytic functions is real analytic.
Furthermore, since the rank of h^(H)_X(w_(1:H)) and the rank of its Gram matrix are equal, we have that {w_(1:H) \u2208R^{d1:H} : rank(h^(H)_X(w_(1:H))) \u0338= n} = {w_(1:H) \u2208R^{d1:H} : \u03d5(w_(1:H)) = 0}. Since \u03d5 is analytic, if \u03d5 is not identically zero (\u03d5 \u0338= 0), the Lebesgue measure of its zero set {w_(1:H) \u2208R^{d1:H} : \u03d5(w_(1:H)) = 0} is zero. Therefore, if \u03d5(w_(1:H)) \u0338= 0 for some w_(1:H) \u2208R^{d1:H}, the Lebesgue measure of the set {w_(1:H) \u2208R^{d1:H} : rank(h^(H)_X(w_(1:H))) \u0338= n} is zero. Then, from the full rankness of the feature (sub)matrix h^(H)_X(w_(1:H)), we can ensure global convergence, as in the previous papers with over-parameterization and the neural tangent kernel. Based on the above proof idea, as long as there exists a w_(1:H) \u2208R^{d1:H} such that \u03d5(w_(1:H)) \u0338= 0, we can conclude the desired global convergence. The key observation is that this condition is a time-independent and easily verifiable condition in practice. This condition is defined as Assumption 1. We verify that Assumption 1 holds numerically in experiments for deep ResNets and theoretically for fully-connected networks. We complete the proof in the Appendix. 6.2 Proof for fully-connected networks Without our result for the general case, a challenge in proving the global convergence of fully-connected networks lies in dealing with narrow hidden layers; i.e., the case of min(m1, . . . , mH\u22121) = mx < n. In the case of min(m1, . . . , mH\u22121) \u2265n, it is easy to see that a k-th layer with mk \u2265n can preserve rank n for the corresponding matrices. In the case of min(m1, . . . , mH\u22121) = mx < n, however, it cannot preserve rank n, and deriving the global convergence is non-trivial. With our result for the general case, however, our only remaining task is to show the existence of a w_(1:H) \u2208R^{d1:H} such that \u03d5(w_(1:H)) \u0338= 0.
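This measure-zero argument can be probed numerically: over repeated random draws of the parameters, an analytic feature map gives \u03d5 \u0338= 0 (equivalently, a full-rank feature matrix) every time, matching the "with probability one" statement. The tanh feature map below is a hypothetical stand-in for h^(H)_X, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d_in, m = 5, 3, 7
X = rng.normal(size=(n, d_in))

trials = 20
full_rank_count = 0
for _ in range(trials):
    h = np.tanh(X @ rng.normal(size=(d_in, m)))   # one random parameter draw
    phi = np.linalg.det(h @ h.T)                  # Gram determinant, phi(w)
    if phi > 0 and np.linalg.matrix_rank(h) == n:
        full_rank_count += 1
print(full_rank_count == trials)
```

Every draw lands outside the (measure-zero) zero set of \u03d5, which is exactly why a single random sample suffices as a practical check of Assumption 1.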
Accordingly, we complete the proof in the appendix by constructing such a w_(1:H) \u2208R^{d1:H} for fully-connected networks with narrow layers. 7 Experiments In this section, we study the empirical aspects of our method. The network model we work with is the standard (convolutional) pre-activation ResNet with 18 layers (He et al. 2016). Figure 2: ReLU versus Softplus with \u03c2 = 100. To satisfy all the assumptions in our theory, we added a fully-connected last hidden layer of cn neurons with a small constant c = 1.1 and set the nonlinear activation functions of all layers to be the softplus \u03c3(z) = ln(1 + exp(\u03c2z))/\u03c2 with \u03c2 = 100, a real analytic function that approximates the ReLU activation. This approximation is of high accuracy, as shown in Figure 2. As discussed above, Proposition 1 implies that the function h^(H)_X for the ResNet is real analytic. The loss function we work with is the cross-entropy loss, which satisfies Assumption 2. Therefore, the only assumption in Theorem 1 that is left to verify is Assumption 1. In the following subsection, we numerically verify Assumption 1. 7.1 Verification of expressivity condition Assumption 1 assumes that the network satisfies the expressivity condition, which only requires the existence of a w_(1:H) such that \u03d5(w_(1:H)) \u0338= 0. Here, \u03d5(w_(1:H)) \u0338= 0 is implied by rank([h^(H)_X(w_(1:H)), 1n]) = n. In other words, if we find one w_(1:H) with rank([h^(H)_X(w_(1:H)), 1n]) = n, then Assumption 1 is ensured to hold true. A simple way to find such a w_(1:H) is to randomly sample a single w_(1:H) and check the condition rank([h^(H)_X(w_(1:H)), 1n]) = n. Table 1, column 3, summarizes the results of the verification of Assumption 1 for various datasets. Here, we used a randomly sampled w_(1:H) returned from the default initialization of the ResNet with version 1.4.0 of PyTorch (Paszke et al.
2019) by setting the random seed to 1. This initialization is based on the implementation of (He et al. 2015). The condition rank([h^(H)_X(w^(1:H)), 1_n]) = n was checked using numpy.linalg.matrix_rank in NumPy version 1.18.1 with the default option (i.e., without any arguments except the matrix [h^(H)_X(w^(1:H)), 1_n]), which uses the standard method from (Press et al. 2007). 7.2 Performance In the following, we compare the generalization performance of the two-phase training algorithm across different hyper-parameter choices and against the baseline algorithm. Experimental setting We fixed all hyper-parameters of the base algorithm a priori across all datasets by using a standard hyper-parameter setting of SGD (following the setting of Kawaguchi and Lu 2020), instead of aiming for state-of-the-art test errors with a possible issue of overfitting to test and validation datasets (Dwork et al. 2015; Rao, Fung, and Rosales 2008). Concretely, we fixed the mini-batch size to 64, the weight decay rate to 10^{-5}, the momentum coefficient to 0.9, the first-phase learning rate to η_t = 0.01, and the second-phase learning rate to η_t = 0.01 × [0^⊤_{d_{1:H}}, 1^⊤_{d_{H+1}}]^⊤ so as to train only the last layer. The last epoch T was fixed a priori as T = 100 without data augmentation and T = 400 with data augmentation. Choice of τ and δ We now discuss the choice of the hyper-parameters for the time of transition τ and for the size of the noise δ. Instead of potentially overfitting hyper-parameters Table 1: Test errors (%) of base and A(base) with guarantee, where the operator A maps any given first-order training algorithm to the two-phase version of that algorithm with theoretical guarantees. The numbers indicate the mean test errors (and standard deviations in parentheses) over five random trials.
The column 'Augmentation' shows 'No' for no data augmentation and 'Yes' for data augmentation. The expressivity condition (Assumption 1) was numerically verified for all datasets.

Dataset | # of training data | Expressivity Condition | Augmentation | Base | A(base) with guarantee
MNIST | 60000 | Verified | No | 0.41 (0.02) | 0.38 (0.04)
MNIST | 60000 | Verified | Yes | 0.34 (0.03) | 0.28 (0.04)
CIFAR-10 | 50000 | Verified | No | 13.99 (0.17) | 13.57 (0.32)
CIFAR-10 | 50000 | Verified | Yes | 7.01 (0.18) | 6.84 (0.16)
CIFAR-100 | 50000 | Verified | No | 41.43 (0.43) | 40.78 (0.22)
CIFAR-100 | 50000 | Verified | Yes | 27.92 (0.36) | 27.41 (0.58)
SVHN | 73257 | Verified | No | 4.51 (0.04) | 4.50 (0.09)
SVHN | 73257 | Verified | Yes | 4.32 (0.06) | 4.16 (0.16)

Table 2: Test errors (%) of A(base) with guarantee for Kuzushiji-MNIST with different hyper-parameters τ = τ_0 T and δ = δ_0 ε. The numbers indicate the mean test errors (and standard deviations in parentheses) over three random trials. The expressivity condition (Assumption 1) was numerically verified to hold for Kuzushiji-MNIST as well.

δ_0 ↓ / τ_0 → | 0.4 | 0.5 | 0.6 | 0.8
0.0001 | 2.10 (0.14) | 2.06 (0.07) | 2.02 (0.09) | 2.01 (0.05)
0.001 | 2.06 (0.07) | 2.11 (0.12) | 1.92 (0.06) | 2.01 (0.12)
0.01 | 2.25 (0.11) | 2.25 (0.17) | 2.25 (0.11) | 2.09 (0.08)

to each dataset, we used a different dataset, Kuzushiji-MNIST (Clanuwat et al. 2019), to fix all the hyper-parameters of Algorithm 1 across all other datasets. That is, we used Kuzushiji-MNIST with different hyper-parameter values τ_0 = 0.4, 0.5, 0.6, 0.8 and δ_0 = 0.0001, 0.001, 0.01, where τ = τ_0 T and δ = δ_0 ε. Here, ε ~ N(0, I_d), where I_d is the d × d identity matrix. Based on the results from Kuzushiji-MNIST in Table 2, we fixed τ_0 = 0.6 and δ_0 = 0.001 for all datasets. Generalization and optimization comparison We now compare the performance of the base algorithm and its two-phase modified version.
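The schedule just described (transition at τ = τ_0 T, a small last-layer perturbation δ = δ_0 ε with ε ~ N(0, I) at the transition, and a second phase that updates only the last layer via the masked learning rate) can be sketched on a toy model. Everything below (the tiny tanh network, its dimensions, the learning rate) is illustrative, not the paper's ResNet setup.

```python
import numpy as np

# Toy sketch of the two-phase schedule: transition at tau = tau0*T, a small
# last-layer perturbation delta = delta0*eps with eps ~ N(0, I) at the
# transition, then second-phase updates of the last layer only (masked lr).
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))
y = rng.standard_normal(64)
w_hidden = 0.1 * rng.standard_normal((10, 20))
w_last = 0.1 * rng.standard_normal(20)

def loss_and_grads(wh, wl):
    h = np.tanh(X @ wh)
    r = h @ wl - y
    loss = 0.5 * np.mean(r ** 2)
    g_last = h.T @ r / len(y)
    g_hidden = X.T @ (np.outer(r, wl) * (1 - h ** 2)) / len(y)
    return loss, g_hidden, g_last

T, tau0, delta0, lr = 200, 0.6, 0.001, 0.01  # tau0, delta0 as chosen above
tau = int(tau0 * T)
losses = []
for t in range(T):
    if t == tau:  # perturb the last layer at the transition
        w_last = w_last + delta0 * rng.standard_normal(w_last.shape)
    loss, g_h, g_l = loss_and_grads(w_hidden, w_last)
    losses.append(loss)
    w_last = w_last - lr * g_l
    if t < tau:  # phase 1 trains all layers; phase 2 only the last layer
        w_hidden = w_hidden - lr * g_h
print(losses[-1] < losses[0])
```

Freezing everything but the last layer in the second phase turns the problem into a convex least-squares problem in w_last, which is what makes the global convergence analysis of the second phase tractable.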
For the generalization aspect, as shown in the last three columns of Table 1, the modified algorithm improved the test errors consistently over the four datasets, both with and without data augmentation. This suggests that the modified version of the base algorithm is competitive with the base algorithm in generalization performance. For the optimization aspect, Figure 3 shows that the two-phase training algorithm indeed improves the training loss values of the base algorithm in the second phase without changing any hyper-parameters (e.g., learning rate and momentum) of the base algorithm, as expected from our theory. These results suggest that the two-phase algorithm can provide global convergence guarantees for a given base algorithm without hurting generalization performance. Figure 3: Training loss of base and A(base) with guarantee on (a) CIFAR-10 and (b) SVHN: the plots show the mean values over five random trials. 8 Conclusion In this paper, we proposed a two-phase method that modifies any given first-order optimization algorithm to have global convergence guarantees without degrading the practical performance of the given algorithm. The conditions for global convergence are mathematically proven to hold for fully-connected deep networks with a wide last hidden layer (while all other layers are allowed to be narrow). The conditions are also numerically verified for a deep ResNet with batch normalization on various standard classification datasets. The two-phase training method opens up a new research direction: studying the use of the novel NTK regime with representations learned from data, unlike the standard NTK regime near random initialization.
Extending our theoretical analysis on a larger class of NTK with learned representation for global convergence and for generalization performance would be an interesting future direction. As the global optimal parameters can often achieve near zero training loss, we could expect models trained by our modi\ufb01ed algorithm to have the bene\ufb01ts from terminal phase of training (Papyan, Han, and Donoho 2020) potentially for better generalization performance, robustness, and interpretability. Verifying these bene\ufb01ts would be sensible directions for future work. An extension of our theory and algorithm to implicit deep learning (Bai, Kolter, and Koltun 2019; El Ghaoui et al. 2019; Kawaguchi 2021) would be another interesting future direction. \fAcknowledgments This work is partially supported by the Center of Mathematical Sciences and Applications at Harvard University. The authors thank Yiqiao Zhong, Mengyuan Yan for discussion." + }, + { + "url": "http://arxiv.org/abs/2102.07346v2", + "title": "On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers", + "abstract": "A deep equilibrium model uses implicit layers, which are implicitly defined\nthrough an equilibrium point of an infinite sequence of computation. It avoids\nany explicit computation of the infinite sequence by finding an equilibrium\npoint directly via root-finding and by computing gradients via implicit\ndifferentiation. In this paper, we analyze the gradient dynamics of deep\nequilibrium models with nonlinearity only on weight matrices and non-convex\nobjective functions of weights for regression and classification. Despite\nnon-convexity, convergence to global optimum at a linear rate is guaranteed\nwithout any assumption on the width of the models, allowing the width to be\nsmaller than the output dimension and the number of data points. 
Moreover, we\nprove a relation between the gradient dynamics of the deep implicit layer and\nthe dynamics of trust region Newton method of a shallow explicit layer. This\nmathematically proven relation along with our numerical observation suggests\nthe importance of understanding implicit bias of implicit layers and an open\nproblem on the topic. Our proofs deal with implicit layers, weight tying and\nnonlinearity on weights, and differ from those in the related literature.", + "authors": "Kenji Kawaguchi", + "published": "2021-02-15", + "updated": "2021-02-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "math.OC", + "stat.ML" + ], + "main_content": "INTRODUCTION A feedforward deep neural network consists of a stack of H layers, where H is the depth of the network. The value for the depth H is typically a hyperparameter and is chosen by network designers (e.g., ResNet-101 in He et al. 2016). Each layer computes some transformation of the output of the previous layer. Surprisingly, several recent studies achieved results competitive with the state-ofthe-art performances by using the same transformation for each layer with weight tying (Dabre & Fujita, 2019; Bai et al., 2019b; Dehghani et al., 2019). In general terms, the output of the l-th layer with weight tying can be written by z(l) = h(z(l\u22121); x, \u03b8) for l = 1, 2, . . . , H \u22121, (1) where x is the input to the neural network, z(l) is the output of the l-th layer (with z(0) = x), \u03b8 represents the trainable parameters that are shared among different layers (i.e., weight tying), and z(l\u22121) 7\u2192h(z(l\u22121); x, \u03b8) is some continuous function that transforms z(l\u22121) given x and \u03b8. With weight tying, the memory requirement does not increase as the depth H increases in the forward pass. However, the ef\ufb01cient backward pass to compute gradients for training the network usually requires to store the values of the intermediate layers. 
Accordingly, the overall computational requirement typically increases as the \ufb01nite depth H increases even with weight tying. Instead of using a \ufb01nite depth H, Bai et al. (2019a) recently introduced the deep equilibrium model that is equivalent to running an in\ufb01nitely deep feedforward network with weight tying. Instead of running the layer-by-layer computation in equation (1), the deep equilibrium model uses root\ufb01nding to directly compute a \ufb01xed point z\u2217= liml\u2192\u221ez(l), where the limit can be ensured to exist by a choice of h. We can train the deep equilibrium model with gradient-based optimization by analytically backpropagating through the \ufb01xed point using implicit differentiation (e.g., Griewank & Walther, 2008; Bell & Burke, 2008; Christianson, 1994). With numerical experiments, Bai et al. (2019a) showed that the deep equilibrium model can improve performance over previous state-ofthe-art models while signi\ufb01cantly reducing memory consumption. 1 arXiv:2102.07346v2 [cs.LG] 18 Feb 2021 \fPublished as a conference paper at ICLR 2021 Despite the remarkable performances of deep equilibrium models, our theoretical understanding of its properties is yet limited. Indeed, immense efforts are still underway to mathematically understand deep linear networks, which have \ufb01nite values for the depth H without weight tying (Saxe et al., 2014; Kawaguchi, 2016; Hardt & Ma, 2017; Laurent & Brecht, 2018; Arora et al., 2018; Bartlett et al., 2019; Du & Hu, 2019; Arora et al., 2019a; Zou et al., 2020b). In deep linear networks, the function h at each layer is linear in \u03b8 and linear in x; i.e., the map (x, \u03b8) 7\u2192h(z(l\u22121); x, \u03b8) is bilinear. Despite this linearity, several key properties of deep learning are still present in deep linear networks. For example, the gradient dynamics is nonlinear and the objective function is nonconvex. 
Accordingly, understanding gradient dynamics of deep linear networks is considered to be a valuable step towards the mathematical understanding of deep neural networks (Saxe et al., 2014; Arora et al., 2018; 2019a). In this paper, inspired by the previous studies of deep linear networks, we initiate a theoretical study of gradient dynamics of deep equilibrium linear models as a step towards theoretically understanding general deep equilibrium models. As we shall see in Section 2, the function h at each layer is nonlinear in θ for deep equilibrium linear models, whereas it is linear for deep linear networks. This additional nonlinearity is essential to enforce the existence of the fixed point z*. The additional nonlinearity, the infinite depth, and weight tying are the three key properties of deep equilibrium linear models that are absent in deep linear networks. Because of these three differences, we cannot rely on the previous proofs and results in the literature of deep linear networks. Furthermore, we analyze gradient dynamics, whereas Kawaguchi (2016); Hardt & Ma (2017); Laurent & Brecht (2018) studied the loss landscape of deep linear networks. We also consider a general class of loss functions for both regression and classification, whereas Saxe et al. (2014); Arora et al. (2018); Bartlett et al. (2019); Arora et al. (2019a); Zou et al. (2020b) analyzed gradient dynamics of deep linear networks in the setting of the square loss. Accordingly, we employ different approaches in our analysis and derive qualitatively and quantitatively different results when compared with previous studies. In Section 2, we provide theoretical and numerical observations that further motivate us to study deep equilibrium linear models. In Section 3, we mathematically prove convergence of gradient dynamics to global minima and the exact relationship between the gradient dynamics of deep equilibrium linear models and that of the adaptive trust region method.
Section 5 gives a review of related literature, which strengthens the main motivation of this paper along with the above discussion (in Section 1). Finally, Section 6 presents concluding remarks on our results, the limitations of this study, and future research directions. 2 PRELIMINARIES We begin by defining the notation. We are given a training dataset ((x_i, y_i))_{i=1}^n of n samples, where x_i ∈ X ⊆ R^{m_x} and y_i ∈ Y ⊆ R^{m_y} are the i-th input and the i-th target output, respectively. We would like to learn a hypothesis (or predictor) from a parametric family H = {f_θ : R^{m_x} → R^{m_y} | θ ∈ Θ} by minimizing the objective function L (called the empirical loss) over θ ∈ Θ: L(θ) = Σ_{i=1}^n ℓ(f_θ(x_i), y_i), where θ is the parameter vector and ℓ : R^{m_y} × Y → R_{≥0} is the loss function that measures the difference between the prediction f_θ(x_i) and the target y_i for each sample. For example, when the parametric family of interest is the class of linear models H = {x ↦ Wφ(x) | W ∈ R^{m_y × m}}, the objective function L can be rewritten as L_0(W) = Σ_{i=1}^n ℓ(Wφ(x_i), y_i), (2) where the feature map φ is an arbitrary fixed function that is allowed to be nonlinear and is chosen by model designers to transform an input x ∈ R^{m_x} into the desired features φ(x) ∈ R^m. We use vec(W) ∈ R^{m_y m} to represent the standard vectorization of a matrix W ∈ R^{m_y × m}. Instead of linear models, our interest in this paper lies in deep equilibrium models. The output z* of the last hidden layer of a deep equilibrium model is defined by z* = lim_{l→∞} z^(l) = lim_{l→∞} h(z^(l−1); x, θ) = h(z*; x, θ), (3) where the last equality follows from the continuity of z ↦ h(z; x, θ) (i.e., the limit commutes with the continuous function).
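The fixed-point characterization in equation (3), and implicit differentiation through it, can be sketched numerically. The contraction h below is a hypothetical illustration of our own choosing (not a model from the paper), with made-up dimensions: at z* = h(z*; θ), the implicit function theorem gives dz*/dθ = (I − ∂h/∂z)⁻¹ ∂h/∂θ, which we verify against finite differences of the iterated map.

```python
import numpy as np

# Sketch of implicit differentiation through a fixed point, for a hypothetical
# contraction h(z; theta) = tanh(W z + theta) chosen purely for illustration.
rng = np.random.default_rng(0)
m = 4
W = rng.standard_normal((m, m))
W = 0.5 * W / np.linalg.norm(W, 2)  # spectral norm 0.5 => z -> h(z) contracts

def fixed_point(theta, iters=200):
    z = np.zeros(m)
    for _ in range(iters):
        z = np.tanh(W @ z + theta)
    return z

theta = rng.standard_normal(m)
z_star = fixed_point(theta)
J = np.diag(1.0 - np.tanh(W @ z_star + theta) ** 2)  # tanh' at the equilibrium
# Implicit function theorem: (I - J W) dz*/dtheta = J
dz_implicit = np.linalg.solve(np.eye(m) - J @ W, J)

# Check against finite differences of the (effectively unrolled) fixed point.
eps = 1e-6
dz_fd = np.column_stack([
    (fixed_point(theta + eps * e) - fixed_point(theta - eps * e)) / (2 * eps)
    for e in np.eye(m)
])
print(np.max(np.abs(dz_implicit - dz_fd)) < 1e-5)  # → True
```

The point of the implicit route is that the gradient is obtained from a single linear solve at the equilibrium, with no need to store the intermediate iterates.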
Thus, z* can be computed by solving the equation z* = h(z*; x, θ) without running the infinitely deep layer-by-layer computation. The gradients with respect to the parameters are computed analytically via backpropagation through z* using implicit differentiation. 2.1 DEEP EQUILIBRIUM LINEAR MODELS A deep equilibrium linear model is an instance of the family of deep equilibrium models and is defined by setting the function h at each layer as follows: h(z^(l−1); x, θ) = γσ(A)z^(l−1) + φ(x), (4) where θ = (A, B) with two trainable parameter matrices A ∈ R^{m × m} and B ∈ R^{m_y × m}. Along with a positive real number γ ∈ (0, 1), the nonlinear function σ is used to ensure the existence of the fixed point and is defined by σ(A)_{ij} = exp(A_{ij}) / Σ_{k=1}^m exp(A_{kj}). The class of deep equilibrium linear models is given by H = {x ↦ B(lim_{l→∞} z^(l)(x, A)) | A ∈ R^{m × m}, B ∈ R^{m_y × m}}, where z^(l)(x, A) = γσ(A)z^(l−1) + φ(x). Therefore, the objective function for deep equilibrium linear models can be written as L(A, B) = Σ_{i=1}^n ℓ(B(lim_{l→∞} z^(l)(x_i, A)), y_i). (5) The outputs of deep equilibrium linear models f_θ(x) = B(lim_{l→∞} z^(l)(x, A)) are nonlinear and non-multilinear in the optimization variable A. This is in contrast to linear models and deep linear networks. From the optimization viewpoint, linear models Wφ(x) are called linear because they are linear in the optimization variables W. Deep linear networks W^(H) W^(H−1) ⋯ W^(1) x are multilinear in the optimization variables (W^(1), W^(2), ..., W^(H)) (this also holds when we replace x by φ(x)). This difference creates a challenge in the analysis of deep equilibrium linear models.
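The equilibrium of this model can be made concrete: since each column of σ(A) sums to 1, the induced 1-norm of γσ(A) equals γ < 1, so the iteration z ← γσ(A)z + φ(x) is a contraction (cf. Proposition 1), and its fixed point has the closed form z* = (I − γσ(A))⁻¹φ(x). Below is a minimal sketch with made-up dimensions, not the paper's code.

```python
import numpy as np

# Minimal sketch of the equilibrium of a deep equilibrium linear model.
# Each column of sigma(A) sums to 1, so ||gamma * sigma(A)||_1 = gamma < 1:
# the iteration contracts, and z* = (I - gamma*sigma(A))^{-1} phi(x).
rng = np.random.default_rng(0)
m, gamma = 6, 0.9
A = rng.standard_normal((m, m))
phi_x = rng.standard_normal(m)  # stands in for phi(x)

sigma_A = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)  # column softmax

z = np.zeros(m)
for _ in range(500):  # layer-by-layer iteration z <- gamma*sigma(A)z + phi(x)
    z = gamma * sigma_A @ z + phi_x

z_closed = np.linalg.solve(np.eye(m) - gamma * sigma_A, phi_x)
print(np.max(np.abs(z - z_closed)) < 1e-8)  # → True
```

The closed form makes explicit why the model output B z* is nonlinear (rather than multilinear) in A: A enters through the matrix inverse (I − γσ(A))⁻¹.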
Following previous works on the gradient dynamics of different machine learning models (Saxe et al., 2014; Ji & Telgarsky, 2020), we consider the process of learning deep equilibrium linear models via gradient flow: dA_t/dt = −∂L/∂A(A_t, B_t), dB_t/dt = −∂L/∂B(A_t, B_t), ∀t ≥ 0, (6) where (A_t, B_t) represents the model parameters at time t with an arbitrary initialization (A_0, B_0). Throughout this paper, a feature map φ and a real number γ ∈ (0, 1) are given and arbitrary (except in experimental observations), and we omit their universal quantifiers for brevity. 2.2 PRELIMINARY OBSERVATION FOR ADDITIONAL MOTIVATION Our analysis is chiefly motivated as a step towards mathematically understanding general deep equilibrium models (as discussed in Sections 1 and 5). In addition to the main motivation, this section provides supplementary motivations through theoretical and numerical preliminary observations. In general deep equilibrium models, the limit lim_{l→∞} z^(l) is not ensured to exist (see Appendix C). In this view, the class of deep equilibrium linear models is one instance where the limit is guaranteed to exist for any values of the model parameters, as stated in Proposition 1: Proposition 1. Given any (x, A), the sequence (z^(l)(x, A))_l in Euclidean space R^m converges. Proof. We use the nonlinearity σ to ensure the convergence in our proof in Appendix A.5. Proposition 1 shows that we can indeed define the deep equilibrium linear model with lim_{l→∞} z^(l) = z*(x, A). Therefore, understanding this model is a sensible starting point for a theory of general deep equilibrium models. As our analysis has been mainly motivated by theory, it would be of additional value to discuss whether the model would also make sense in practice, at least potentially in the future. Consider an (unknown) underlying data distribution P(x, y) = P(y|x)P(x).
Intuitively, if the mean of P(y|x) is approximately given by a (true, unknown) deep equilibrium linear model, then it would make sense to use the parametric family of deep equilibrium linear models to have the right inductive bias in practice. To confirm this intuition, we conducted numerical simulations. To generate datasets, we first drew uniformly at random 200 input images for the input data points x_i from a standard image dataset (CIFAR-10, CIFAR-100, or Kuzushiji-MNIST; Krizhevsky & Hinton, 2009; Clanuwat et al., 2019). We then generated targets as y_i = B*(lim_{l→∞} z^(l)(x_i, A*)) + δ_i, where δ_i are drawn i.i.d. from N(0, 1). Each entry of the true (unknown) matrices A* and B* was independently drawn from the standard normal distribution. For each dataset generated in this way, we used stochastic gradient descent (SGD) to train linear models, fully-connected feedforward deep neural networks with ReLU nonlinearity (DNNs), and deep equilibrium linear models. Figure 1: Preliminary observations for additional motivation to theoretically understand deep equilibrium linear models. The figure shows test and train losses versus the number of epochs for linear models, deep equilibrium linear models (DELMs), and deep neural networks with ReLU (DNNs), on (a, b) modified Kuzushiji-MNIST, (c, d) modified CIFAR-10, and (e, f) modified CIFAR-100. For all models, we fixed φ(x) = x. See Appendix D for more details of the experimental settings. The results of this numerical test are presented in Figure 1. In the figure, the plotted lines indicate the mean values over five random trials, whereas the shaded regions represent error bars with one standard deviation. The plots for linear models and deep equilibrium linear models are shown with the best and worst learning rates (separately for each model, in terms of the final test errors at epoch 5000) from the set of learning rates S_LR = {0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005}. The plots for DNNs are shown with the best learning rates (separately for each depth H) from the set S_LR.
As can be seen, all models performed approximately the same at the initial points, but deep equilibrium linear models outperformed both linear models and DNNs in test errors after training, confirming our intuition above. Moreover, we confirmed qualitatively the same behaviors on four more datasets, as well as for DNNs with and without bias terms, in Appendix D. These observations additionally motivated us to study deep equilibrium linear models to obtain our main results in the next section. The purpose of these experiments is to provide a secondary motivation for our theoretical analyses. 3 MAIN RESULTS In this section, we establish mathematical properties of the gradient dynamics for deep equilibrium linear models by directly analyzing their trajectories. We prove linear convergence to the global minimum in Section 3.1 and further analyze the dynamics from the viewpoint of trust regions in Section 3.2. 3.1 CONVERGENCE ANALYSIS We begin in Section 3.1.1 with a presentation of the concept of the Polyak-Łojasiewicz (PL) inequality and additional notation. The PL inequality is used to regularize the choice of the loss functions ℓ in our main convergence theorem for a general class of losses in Section 3.1.2. We conclude in Section 3.1.3 by providing concrete examples of the convergence theorem with the square loss and the logistic loss, where the PL inequality is no longer required, as it is proven to be satisfied by these loss functions. 3.1.1 THE POLYAK-ŁOJASIEWICZ INEQUALITY AND ADDITIONAL NOTATION In our context, the notion of the PL inequality is formally defined as follows: Definition 1.
The function L_0 is said to satisfy the Polyak-Łojasiewicz (PL) inequality with radius R ∈ (0, ∞] and parameter κ > 0 if (1/2)‖∇L_0^vec(vec(W))‖²₂ ≥ κ(L_0^vec(vec(W)) − L*_{0,R}) for all ‖W‖₁ < R, where L_0^vec(vec(·)) := L_0(·) and L*_{0,R} := inf_{W : ‖W‖₁ < R} L_0(W). For R > 0 sufficiently large (such that it covers the domain of L_0), Definition 1 becomes equivalent to the definition of the PL inequality in the optimization literature (e.g., Polyak, 1963; Karimi et al., 2016). See Appendix C for additional explanations on the equivalence. In general, the non-convex objective function L of deep equilibrium linear models does not satisfy the PL inequality. Therefore, we cannot assume the inequality on L. However, in order to obtain linear convergence for a general class of loss functions ℓ, we need some assumption on ℓ: otherwise, we can choose a loss ℓ that violates convergence. Accordingly, we regularize the choice of the loss ℓ through the PL inequality on the function L_0 : W ↦ Σ_{i=1}^n ℓ(Wφ(x_i), y_i). The PL inequality with a radius R ∈ (0, ∞] (Definition 1) leads to the notion of the global minimum value in the domain corresponding to the radius in our analysis: L*_R = inf_{A ∈ R^{m×m}, B ∈ B_R} L(A, B), where B_R = {B ∈ R^{m_y × m} | ‖B‖₁ < (1 − γ)R}. With R = ∞, this recovers the global minimum value L* in the unconstrained domain as L*_R = L* := inf_{A ∈ R^{m×m}, B ∈ R^{m_y × m}} L(A, B). Furthermore, if a global minimum (A*, B*) ∈ R^{m×m} × R^{m_y × m} exists, there exists R̄ < ∞ such that for any R ∈ [R̄, ∞), we have B* ∈ B_R and thus L*_R = L*. In other words, if a global minimum exists, using a (sufficiently large) finite radius R < ∞ suffices to obtain L*_R = L*.
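To see why a PL inequality yields linear (i.e., exponential) rates of the kind appearing in the results below, consider the textbook one-line argument for a gradient flow dθ_t/dt = −∇L(θ_t) on a function L satisfying the PL inequality with parameter κ (a simplified sketch; the theorems below handle the composite dynamics in (A, B) and the extra factor λ_T):

```latex
\frac{d}{dt}\bigl(L(\theta_t) - L^*\bigr)
  = -\,\lVert \nabla L(\theta_t)\rVert_2^2
  \le -2\kappa \bigl(L(\theta_t) - L^*\bigr)
\quad\Longrightarrow\quad
L(\theta_T) - L^* \le \bigl(L(\theta_0) - L^*\bigr)\, e^{-2\kappa T},
```

where the implication follows from Grönwall's inequality; the rate e^{−2κλ_T T} in Theorem 1 has exactly this form.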
We close this subsection by introducing additional notation. For a real symmetric matrix M, we use \u03bbmin(M) to represent its smallest eigenvalue. For an arbitrary matrix M \u2208Rd\u00d7d\u2032, we let rank(M) be its rank, \u2225M\u2225p be its matrix norm induced by the vector p-norm, \u03c3min(M) be its smallest singular value (i.e., the min(d, d\u2032)-th largest singular value), M\u2217j be its j-th column vector in Rd, and Mi\u2217be its i-th row vector in Rd\u2032. For d \u2208N>0, we denote by Id the identify matrix in Rd\u00d7d. We de\ufb01ne the Jacobian matrix Jk,t \u2208Rm\u00d7m of the vector-valued function A\u2217k 7\u2192\u03c3(A)\u2217k by (Jk,t)ij = \u2202\u03c3(A)ik \u2202Ajk |A=At for all t \u22650 and k = 1, . . . , m. Finally, we de\ufb01ne the feature matrix \u03a6 \u2208Rm\u00d7n by \u03a6ki = \u03c6(xi)k for k = 1, . . . , m and i = 1, . . . , n . 3.1.2 MAIN CONVERGENCE THEOREM Using the PL inequality only on the loss function \u2113through L0 (De\ufb01nition 1), we present our main theorem \u2014 a guarantee on linear convergence to global minimum for the gradient dynamics of the non-convex objective L for deep equilibrium linear models: Theorem 1. Let \u2113: Rmy \u00d7 Y \u2192R\u22650 be arbitrary such that the function q 7\u2192\u2113(q, yi) is differentiable for any i \u2208{1, . . . , n} (with an arbitrary my \u2208N>0 and an arbitrary Y). 
Then, for any T > 0, R \u2208(0, \u221e] and \u03ba > 0 such that \u2225Bt\u22251 < (1 \u2212\u03b3)R for all t \u2208[0, T] and L0 satis\ufb01es the PL inequality with the radius R and the parameter \u03ba, the following holds: L(AT , BT ) \u2264L\u2217 R + \u0000L(A0, B0) \u2212L\u2217 0,R \u0001 e\u22122\u03ba\u03bbT T , (7) where \u03bbT := inft\u2208[0,T ] \u03bbmin(Dt) > 0 and Dt is a positive de\ufb01nite matrix de\ufb01ned by Dt := m X k=1 \u0002 (U \u2212\u22a4 t )\u2217k(U \u22121 t )k\u2217\u2297 \u0000Imy + \u03b32BtU \u22121 t Jk,tJ\u22a4 k,tU \u2212T t B\u22a4 t \u0001\u0003 , (8) with Ut := Im \u2212\u03b3\u03c3(At). Furthermore, \u03bbT \u2265 1 m(1+\u03b3)2 for any T \u22650 (limT \u2192\u221e\u03bbT \u2265 1 m(1+\u03b3)2 ). Proof. The additional nonlinearity \u03c3 creates a complex interaction among m hidden neurons. This interaction is dif\ufb01cult to be factorized out for the gradients of L with respect to A. This is different from but analogous to the challenge to deal with nonlinear activations in the loss landscape of (nonoverparameterized) deep nonlinear networks, for which previous works have made assumptions of sparse connections to factorize the interaction (Kawaguchi et al., 2019). In contrast, we do not rely on sparse connections. Instead, we observe that although it is dif\ufb01cult to factorize this complex interaction (due to the nonlinearity \u03c3) in the space of loss landscape, we can factorize it in the space of gradient dynamics. See Appendix A.1 for the proof overview and the complete proof. Theorem 1 shows that in the worst case for \u03bbT , the optimality gap decreases exponentially towards zero as L(AT , BT ) \u2212L\u2217 R \u2264C0e \u2212 2\u03ba m(1+\u03b3)2 T , where C0 = L(A0, B0) \u2212L\u2217 0,R. 
Therefore, for any 5 \fPublished as a conference paper at ICLR 2021 desired accuracy \u03f5 > 0, setting C0e \u2212 2\u03ba m(1+\u03b3)2 T \u2264\u03f5 and solving for T yield that L(AT , BT ) \u2212L\u2217 R \u2264\u03f5 for any T \u2265m(1 + \u03b3)2 2\u03ba log L(A0, B0) \u2212L\u2217 0,R \u03f5 . (9) Theorem 1 also states that the rate of convergence improves further depending on the quality of the matrix Dt (de\ufb01ned in equation (8)) in terms of its smallest eigenvalue over the particular trajectory (At, Bt) up to the speci\ufb01c time t \u2264T; i.e., \u03bbT = inft\u2208[0,T ] \u03bbmin(Dt). This opens up the direction of future work for further improvement of the convergence rate through the design of initialization (A0, B0) to maximize \u03bbT for trajectories generated from a speci\ufb01c initialization scheme. 3.1.3 EXAMPLES: SQUARE LOSS AND LOGISTIC LOSS The main convergence theorem in the previous subsection is stated for any radius R \u2208(0, \u221e] and parameter \u03ba > 0 that satisfy the conditions on \u2225Bt\u22251 and the PL inequality (see Theorem 1). The values of these variables are not completely speci\ufb01ed there as they depend on the choice of the loss functions \u2113. In this subsection, we show that these values can be speci\ufb01ed further and the condition on PL inequality can be discarded by considering a speci\ufb01c choice of loss functions \u2113. In particular, by using the square loss for \u2113, we prove that we can set R = \u221eand \u03ba = 2\u03c3min(\u03a6)2: Corollary 1. Let \u2113(q, yi) = \u2225q \u2212yi\u22252 2 where yi \u2208Rmy for i = 1, 2, . . . , n (with an arbitrary my \u2208N>0). Assume that rank(\u03a6) = min(n, m). Then for any T > 0, L(AT , BT ) \u2264L\u2217+ (L(A0, B0) \u2212L\u2217 0) e\u22124\u03c3min(\u03a6)2\u03bbT T , where \u03c3min(\u03a6) > 0, L\u2217 0 := infW \u2208Rmy\u00d7m L0(W), and \u03bbT := inft\u2208[0,T ] \u03bbmin(Dt) \u2265 1 m(1+\u03b3)2 . Proof. 
This statement follows from Theorem 1. The conditions on \u2225Bt\u22251 and the PL inequality (in Theorem 1) are now discarded by using the property of the square loss \u2113. See Appendix A.3 for the complete proof. In Corollary 1, the global linear convergence is established for the square loss without the notion of the radius R as we set R = \u221e. Even with the square loss, the objective function L is non-convex. Despite the non-convexity, Corollary 1 shows that for any desired accuracy \u03f5 > 0, L(AT , BT ) \u2212L\u2217\u2264\u03f5 for any T \u2265m(1 + \u03b3)2 4\u03c3min(\u03a6)2 log L(A0, B0) \u2212L\u2217 0 \u03f5 . (10) Corollary 1 allows both cases of m \u2264n and m > n. In the case of over-parameterization m > n, the covariance matrix \u03a6\u03a6\u22a4\u2208Rm\u00d7m (or XX\u22a4with \u03c6(x) = x) is always rank de\ufb01cient because rank(\u03a6\u03a6\u22a4) = rank(\u03a6) \u2264n < m. This implies that the Hessian of L0 is always rank de\ufb01cient, because the Hessian of L0 is \u22072Lvec 0 (vec(W)) = 2[\u03a6\u03a6\u22a4\u2297Imy] \u2208Rmym\u00d7mym (see Appendix A.3 for its derivation) and because rank([\u03a6\u03a6\u22a4\u2297Imy]) = rank(\u03a6\u03a6\u22a4) rank(Imy) \u2264myn < mym. Since the strong convexity on a twice differentiable function requires its Hessian to be of full rank, this means that the objective L0 for linear models is not strongly convex in the case of overparameterization m > n. Nevertheless, we establish the linear convergence to global minimum for deep equilibrium linear models in Corollary 1 for both cases of m > n and m \u2264n by using Theorem 1. For the logistic loss for \u2113, the following corollary proves the global convergence at a linear rate: Corollary 2. Let \u2113(q, yi) = \u2212yi log( 1 1+e\u2212q ) \u2212(1 \u2212yi) log(1 \u2212 1 1+e\u2212q ) + \u03c4\u2225q\u22252 2 with an arbitrary \u03c4 \u22650 where yi \u2208{0, 1} for i = 1, 2, . . . , n. Assume that rank(\u03a6) = m. 
Then for any T > 0 and R \u2208(0, \u221e] such that \u2225Bt\u22251 < (1 \u2212\u03b3)R for all t \u2208[0, T], the following holds: L(AT , BT ) \u2264L\u2217 R + \u0000L(A0, B0) \u2212L\u2217 0,R \u0001 e\u22122(2\u03c4+\u03c1(R))\u03c3min(\u03a6)2\u03bbT T , where \u03c3min(\u03a6) > 0, \u03bbT := inft\u2208[0,T ] \u03bbmin(Dt) \u2265 1 m(1+\u03b3)2 , and \u03c1(R) := inf W :\u2225W \u22251 0, L(AT , BT ) \u2264L\u2217+ (L(A0, B0) \u2212L\u2217 0) e\u22124\u03c4\u03c3min(\u03a6)2\u03bbT T , for the logistic loss. For any \u03c4 > 0, this implies that for any desired accuracy \u03f5 > 0, L(AT , BT ) \u2212L\u2217\u2264\u03f5 for any T \u2265m(1 + \u03b3)2 4\u03c4\u03c3min(\u03a6)2 log L(A0, B0) \u2212L\u2217 0 \u03f5 . (11) In practice, we may want to set \u03c4 > 0 to regularize the parameters (for generalization) and to ensure the existence of global minima (for optimization and identi\ufb01ability). That is, if we set \u03c4 = 0 instead, the global minima may not exist in any bounded space, due to the property of the logistic loss. This is consistent with Corollary 2 in that if \u03c4 = 0, equation (11) does not hold and we must consider the convergence to the global minimum value L\u2217 R de\ufb01ned in a bounded domain with a radius R < \u221e. In the case of \u03c4 = 0 and R < \u221e, Corollary 2 implies that for desired accuracy \u03f5 > 0, L(AT , BT ) \u2212L\u2217 R \u2264\u03f5 for any T \u2265 m(1 + \u03b3)2 2\u03c1(R)\u03c3min(\u03a6)2 log L(A0, B0) \u2212L\u2217 0,R \u03f5 , (12) where we have \u03c1(R) > 0 because R < \u221e. Therefore, Corollary 2 establish the linear convergence to global minimum with both cases of \u03c4 > 0 and \u03c4 = 0 for the logistic loss. 3.2 UNDERSTANDING DYNAMICS THROUGH TRUST REGION NEWTON METHOD In this subsection, we analyze the dynamics of deep equilibrium linear models in the space of the hypothesis, f\u03b8t : x 7\u2192Bt \u0000liml\u2192\u221ez(l)(x, At) \u0001 . 
For any functions g and \u00af g with a domain X \u2286 Rmx, we write g = \u00af g if g(x) = \u00af g(x) for all x \u2208X. The following theorem shows that the dynamics of deep equilibrium linear models f\u03b8t can be written as d dtf\u03b8t = 1 \u03b4t Vt\u03c6 where 1 \u03b4t is scalar and Vt follows the dynamics of a trust region Newton method of shallow models with the (non-standard) adaptive trust region Vt. This suggests potential bene\ufb01ts of deep equilibrium linear models in two aspects: when compared to shallow models, it can sometimes accelerate optimization via the effect of the implicit trust region method (but not necessarily as the trust region method does not necessarily accelerate optimization) and induces novel implicit bias for generalization via the non-standard implicit trust region Vt. Theorem 2. Let \u2113: Rmy \u00d7 Y \u2192R\u22650 be arbitrary such that the function q 7\u2192\u2113(q, yi) is differentiable for any i \u2208{1, . . . , n} with my = 1 and (an arbitrary Y). Then for any time t \u22650, there exist a real number \u00af \u03b4t > 0 such that for any \u03b4t \u2208(0, \u00af \u03b4t], d dtf\u03b8t = 1 \u03b4t Vt\u03c6, vec(Vt) \u2208argmin v\u2208Vt Lt 0(v), (13) where Vt := {v \u2208Rm : \u2225v\u2225Gt \u2264\u03b4t\u2225d dt vec(BtU \u22121 t )\u2225Gt}, Gt := Ut \u0000S\u22121 t \u2212\u03b4tFt \u0001 U \u22a4 t \u227b0, and Lt 0(v) := Lvec 0 (vec(BtU \u22121 t )) + \u2207Lvec 0 (vec(BtU \u22121 t ))\u22a4v + 1 2v\u22a4\u22072Lvec 0 (vec(BtU \u22121 t ))v. Here, Ft := Pn i=1 \u22072\u2113i(f\u03b8t(xi))(liml\u2192\u221ez(l)(xi, At))(liml\u2192\u221ez(l)(xi, At))\u22a4with \u2113i(q) := \u2113(q, yi) and St := Im + \u03b32 diag(vS t ) with vS t \u2208Rm and (vS t )k := \u2225J\u22a4 k,t(BtU \u22121 t )\u22a4\u22252 2 \u2200k. Proof. This is proven with the Karush\u2013Kuhn\u2013Tucker (KKT) conditions for the constrained optimization problem: minimizev\u2208Vt Lt 0(v). See Appendix A.2. 
When many global minima exist, a difference in the gradient dynamics can lead to a significant discrepancy in the learned models: i.e., two different gradient dynamics can find significantly different global minima with different behaviors for generalization and test accuracies (Kawaguchi et al., 2017). In machine learning, this is an important phenomenon called implicit bias (inductive bias induced implicitly through gradient dynamics) and is the subject of an emerging active research area (Gunasekar et al., 2017; Soudry et al., 2018; Gunasekar et al., 2018; Woodworth et al., 2020; Moroshko et al., 2020). As can be seen in Theorem 2, the gradient dynamics of deep equilibrium linear models f\u03b8t differs from that of linear models Wt\u03c6 with any adaptive learning rates, fixed preconditioners, and existing variants of Newton methods. This is consistent with our experiments in Section 2.2 and Appendix D, where the dynamics of deep equilibrium linear models resulted in learned predictors with higher test accuracies when compared to linear models with any learning rates. In this regard, Theorem 2 provides a partial explanation (and a starting point of the theory) for the observed generalization behaviors, whereas Theorem 1 (with Corollaries 1 and 2) provides the theory for the global convergence observed in the experiments. Theorem 2, along with our experimental results, suggests the importance of theoretically understanding the implicit bias of the dynamics with the time-dependent trust region. In Appendix B, we show that Theorem 2 suggests a new type of implicit bias towards a simple function as a result of infinite depth, whereas understanding this implicit bias in more detail is left as an open problem for future work. 4 EXPERIMENTS In this section, we conduct experiments to further verify and demonstrate our theory.
To compare with the previous findings, we use the same synthetic data as in the previous work (Zou et al., 2020b): i.e., we randomly generate xi \u2208 R^{10} from the standard normal distribution and set yi = \u2212xi + 0.1\u03c2i for all i \u2208 {1, 2, . . . , n} with n = 1000, where \u03c2i is independently generated from the standard normal distribution. We set \u03c6(x) = x and use the square loss \u2113(q, yi) = \u2225q \u2212 yi\u2225_2^2. As in the previous work, we consider random initialization and identity initialization (Zou et al., 2020b) and report the results in Figure 2 (a). As can be seen in the figure, deep equilibrium linear models converge to the global minimum value with all initializations and random trials, whereas linear ResNet converges to a suboptimal value with identity initialization. This is consistent with our theory for deep equilibrium linear models and the previous work for ResNet (Zou et al., 2020b). We repeated the same experiment by instead generating (xi)k independently from the uniform distribution on the interval [\u22121, 1] for all i \u2208 {1, . . . , n} and k \u2208 {1, . . . , m} with n = 1000 and m = 10. Figure 2 (b) shows the results of this experiment with the uniform distribution and confirms the global convergence of deep equilibrium linear models again with all initializations and random trials. In this case, linear ResNet with identity initialization also converged to the global minimum value. These observations are consistent with Corollary 1, where deep equilibrium linear models are guaranteed to converge to the global minimum value without any condition on the initialization. We now consider the rate of the global convergence. In Corollary 1, we can set \u03bbT = 1/(m(1+\u03b3)^2) to get a guarantee for the global linear convergence rate for all initializations in theory. However, in practice, this is a pessimistic convergence rate, and we may want to choose \u03bbT depending on an initialization.
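The synthetic-data setup described above is easy to reproduce. The following is a minimal sketch (ours, not the paper's code; it assumes NumPy, and the function name `make_synthetic_data` is our own):

```python
import numpy as np

def make_synthetic_data(n=1000, d=10, noise_scale=0.1, seed=0):
    """Data as described above: x_i ~ N(0, I_d) and y_i = -x_i + 0.1 * zeta_i,
    where zeta_i is also drawn from the standard normal distribution."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    Y = -X + noise_scale * rng.standard_normal((n, d))
    return X, Y

X, Y = make_synthetic_data()
# With phi(x) = x and the square loss, the global minimum of L0 over linear
# models is the least-squares solution, which should be close to -I here.
W_star, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

With n = 1000 samples and noise scale 0.1, the least-squares estimate recovers the ground-truth map x \u21a6 \u2212x up to small noise, which is the "global minimum value" the training curves in Figure 2 are compared against.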
To demonstrate this, using the same data as that in Figure 2 (a), Figure 2 (c) reports the numerical training trajectory along with theoretical upper bounds with initialization-independent \u03bbT = 1/(m(1+\u03b3)^2) and initialization-dependent \u03bbT = inf_{t\u2208[0,T]} \u03bbmin(Dt). As can be seen in Figure 2 (c), the theoretical bound with initialization-dependent \u03bbT demonstrates a faster and more accurate convergence rate. A qualitatively identical observation is reported for the logistic loss in Appendix D.2. [Figure 2; three panels plotting train loss against the number of iterations: (a) Gaussian data, (b) Uniform data, (c) Theoretical bounds.] Figure 2: (a)-(b): Convergence performances for deep equilibrium linear models (DELMs) with identity initialization and random initialization of three random trials, and linear ResNet with identity initialization. (c): the numerical training trajectory of DELMs with random initialization along with theoretical upper bounds with initialization-independent \u03bbT and initialization-dependent \u03bbT. 5 RELATED WORK The theoretical study of gradient dynamics of deep networks with some linearized component is a highly active area of research. Recently, Bartlett et al. (2019); Du & Hu (2019); Arora et al. (2019a); Zou et al. (2020b) analyzed gradient dynamics of deep linear networks and proved global convergence rates for the square loss under certain assumptions on the dataset, initialization, and network structures.
For example, the dataset is assumed to be whitened (i.e., \u03a6\u03a6\u22a4= Im or XX\u22a4= Imx) and the initial loss is assumed to be smaller than the loss of any rank-de\ufb01cient solution by Arora et al. (2019a): the input and output layers are assumed to represent special transformations and are \ufb01xed during training by Zou et al. (2020b). Deep networks are also linearized implicitly in the neural tangent kernel (NTK) regime with significant over-parameterization m \u226bn (Yehudai & Shamir, 2019; Lee et al., 2019). By signi\ufb01cantly increasing model parameters (or more concretely the width m), we can ensure deep features or corresponding NTK to stay nearly the same during training. In other words, deep networks in this regime are approximately linear models with random features corresponding to the NTK at random initialization. Because of this implicit linearization, deep networks in the NTK regime are shown to achieve globally minimum training errors by interpolating all training data points (Zou et al., 2020a; Li & Liang, 2018; Jacot et al., 2018; Du et al., 2019; 2018; Chizat et al., 2019; Arora et al., 2019b; Allen-Zhu et al., 2019; Lee et al., 2019; Fang et al., 2020; Montanari & Zhong, 2020). These previous studies have signi\ufb01cantly advanced our theoretical understanding of deep learning through the study of deep linear networks and implicitly linearized deep networks in the NTK regime. In this context, this paper is expected to contribute to the theoretical advancement through the study of a new and signi\ufb01cantly different type of deep models \u2014 deep equilibrium linear models. In deep equilibrium linear models, the function at each layer A 7\u2192h(z(l\u22121); x, \u03b8) is nonlinear due to the additional nonlinearity \u03c3: A 7\u2192h(z(l\u22121); x, \u03b8) := \u03b3\u03c3(A)z(l\u22121) + \u03c6(x). 
In contrast, for deep linear networks, the function at each layer W (l) 7\u2192h(l)(z(l\u22121); x, W (l)) := W (l)z(l\u22121) is linear (it is linear also with skip connection). Furthermore, the nonlinearity \u03c3 is not an element-wise function, which poses an additional challenge in the mathematical analysis of deep equilibrium linear models. The nonlinearity \u03c3, the in\ufb01nite depth, and weight tying in deep equilibrium linear models necessitated us to develop new approaches in our proofs. The differences in the models and proofs naturally led to qualitatively and quantitatively different results. For example, we do not require any of over-parameterization m \u226bn, interpolation of all training data points, and any assumptions mentioned above for deep linear networks. Unlike previous papers, we also related the dynamics of deep equilibrium linear models to that of a trust region Newton method of shallow models with Gt-quadratic norm. This suggested potential bene\ufb01ts of deep equilibrium linear models. Our theory is consistent with our numerical observations. 6 CONCLUSION For deep equilibrium linear models, despite the non-convexity, we have rigorously proven convergence of gradient dynamics to global minima, at a linear rate, for a general class of loss functions, including the square loss and logistic loss. Moreover, we have proven the relationship between the gradient dynamics of deep equilibrium linear models and that of the adaptive trust region method. These results apply to models with any con\ufb01guration on the width of hidden layers, the number of data points, and input/output dimensions, allowing rank-de\ufb01cient covariance matrices as well as both under-parameterization and over-parameterization. 
The crucial assumption for our analysis is the differentiability of the function q 7\u2192\u2113(q, yi), which is satis\ufb01ed by standard loss functions, such as the square loss, the logistic loss, and the smoothed hinge loss \u2113(q, yi) = (max{0, 1 \u2212yiq})k with k \u22652. However, it is not satis\ufb01ed by the (non-smoothed) hinge loss \u2113(q, yi) = max{0, 1 \u2212yiq}, the treatment of which is left to future work. Future work also includes corresponding theoretical analyses with stochastic gradient descent. Our theoretical results (in Section 3) and numerical observations (in Section 2.2 and Appendix D) uncover the special properties of deep equilibrium linear models, providing a basis of future work for theoretical studies of implicit bias and for further empirical investigations of deep equilibrium models. In our proofs, the treatments of the additional nonlinearity \u03c3, the in\ufb01nite depth, and weight tying are especially unique, and we expect our new proof techniques to be proven useful in further studies of gradient dynamics for deep models. 9 \fPublished as a conference paper at ICLR 2021" + }, + { + "url": "http://arxiv.org/abs/1907.04371v5", + "title": "Ordered SGD: A New Stochastic Optimization Framework for Empirical Risk Minimization", + "abstract": "We propose a new stochastic optimization framework for empirical risk\nminimization problems such as those that arise in machine learning. The\ntraditional approaches, such as (mini-batch) stochastic gradient descent (SGD),\nutilize an unbiased gradient estimator of the empirical average loss. In\ncontrast, we develop a computationally efficient method to construct a gradient\nestimator that is purposely biased toward those observations with higher\ncurrent losses. 
On the theory side, we show that the proposed method minimizes\na new ordered modification of the empirical average loss, and is guaranteed to\nconverge at a sublinear rate to a global optimum for convex loss and to a\ncritical point for weakly convex (non-convex) loss. Furthermore, we prove a new\ngeneralization bound for the proposed algorithm. On the empirical side, the\nnumerical experiments show that our proposed method consistently improves the\ntest errors compared with the standard mini-batch SGD in various models\nincluding SVM, logistic regression, and deep learning problems.", + "authors": "Kenji Kawaguchi, Haihao Lu", + "published": "2019-07-09", + "updated": "2020-02-01", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.OC" + ], + "main_content": "Introduction Stochastic Gradient Descent (SGD), as the workhorse training algorithm for most machine learning applications including deep learning, has been extensively studied in recent years (e.g., see a recent review by Bottou et al. 2018). At every step, SGD draws one training sample uniformly at random from the training dataset, and then uses the (sub-)gradient of the Proceedings of the 23rdInternational Conference on Arti\ufb01cial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy. PMLR: Volume 108. Copyright 2020 by the author(s). loss over the selected sample to update the model parameters. The most popular version of SGD in practice is perhaps the mini-batch SGD (Bottou et al., 2018; Dean et al., 2012), which is widely implemented in the state-of-the-art deep learning frameworks, such as TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2017) and CNTK (Seide and Agarwal, 2016). Instead of choosing one sample per iteration, mini-batch SGD randomly selects a mini-batch of the samples, and uses the (sub-)gradient of the average loss over the selected samples to update the model parameters. 
Both SGD and mini-batch SGD utilize uniform sampling during the entire learning process, so that the stochastic gradient is always an unbiased gradient estimator of the empirical average loss over all samples. On the other hand, it appears to practitioners that not all samples are equally important, and indeed most of them could be ignored after a few epochs of training without a\ufb00ecting the \ufb01nal model (Katharopoulos and Fleuret, 2018). For example, intuitively, the samples near the \ufb01nal decision boundary should be more important to build the model than those far away from the boundary for classi\ufb01cation problems. In particular, as we will illustrate later in Figure 1, there are cases when those far-away samples may corrupt the model by using average loss. In order to further explore such structures, we propose an e\ufb03cient sampling scheme on top of the mini-batch SGD. We call the resulting algorithm ordered SGD, which is used to learn a di\ufb00erent type of models with the goal to improve the testing performance. The above motivation of ordered SGD is related to that of importance sampling SGD, which has been extensively studied recently in order to improve the convergence speed of SGD (Needell et al., 2014; Zhao and Zhang, 2015; Alain et al., 2015; Loshchilov and Hutter, 2015; Gopal, 2016; Katharopoulos and Fleuret, 2018). However, our goals, algorithms and theoretical results are fundamentally di\ufb00erent from those in the previous studies on importance sampling SGD. Indeed, all aforementioned studies are aimed to accelerate the *equal contribution arXiv:1907.04371v5 [stat.ML] 1 Feb 2020 \fOrdered SGD: A New Stochastic Optimization Framework for Empirical Risk Minimization minimization process for the empirical average loss, whereas our proposed method turns out to minimize a new objective function by purposely constructing a biased gradient. 
Our main contributions can be summarized as follows: i) we propose a computationally e\ufb03cient and easily implementable algorithm, ordered SGD, with principled motivations (Section 3), ii) we show that ordered SGD minimizes an ordered empirical risk with sublinear rate for convex and weakly convex (non-convex) loss functions (Section 4), iii) we prove a generalization bound for ordered SGD (Section 5), and iv) our numerical experiments show ordered SGD consistently improved mini-batch SGD in test errors (Section 6). 2 Empirical Risk Minimization Empirical risk minimization is one of the main tools to build a model in machine learning. Let D = ((xi, yi))n i=1 be a training dataset of n samples where xi \u2208X \u2286Rdx is the input vector and yi \u2208Y \u2286Rdy is the target output vector for the i-th sample. The goal of empirical risk minimization is to \ufb01nd a prediction function f(\u00b7 ; \u03b8) : Rdx \u2192Rdy, by minimizing L(\u03b8) := 1 n n X i=1 Li(\u03b8) + R(\u03b8), (1) where \u03b8 \u2208Rd\u03b8 is the parameter vector of the prediction model, Li(\u03b8) := \u2113(f(xi; \u03b8), yi) with the function \u2113: Rdy \u00d7 Y \u2192R\u22650 is the loss of the i-th sample, and R(\u03b8) \u22650 is a regularizer. For example, in logistic regression, f(x; \u03b8) = \u03b8T x is a linear function of the input vector x, and \u2113(a, y) = log(1 + exp(\u2212ya)) is the logistic loss function with y \u2208{\u22121, 1}. For a neural network, f(x; \u03b8) represents the pre-activation output of the last layer. 3 Algorithm In this section, we introduce ordered SGD and provide an intuitive explanation of the advantage of ordered SGD by looking at 2-dimension toy examples with linear classi\ufb01ers and small arti\ufb01cial neural networks (ANNs). Let us \ufb01rst introduce a new notation q-argmax as an extension to the standard notation argmax: De\ufb01nition 1. Given a set of n real numbers (a1, a2, . . . , an), an index subset S \u2286{1, 2, . . . 
, n}, and a positive integer number q \u2264|S|, we de\ufb01ne q-argmaxj\u2208S aj such that Q \u2208q-argmaxj\u2208S aj is a set of q indexes of the q largest values of (aj)j\u2208S; i.e., q-argmaxj\u2208S aj = argmaxQ\u2286S,|Q|=q P i\u2208Q ai. Algorithm 1 Ordered Stochastic Gradient Descent (ordered SGD) 1: Inputs: an initial vector \u03b80 and a learning rate sequence (\u03b7k)k 2: for t = 1, 2, . . . do 3: Randomly choose a mini-batch of samples: S \u2286 {1, 2, . . . , n} such that |S| = s. 4: Find a set Q of top-q samples in S in term of loss values: Q \u2208q-argmaxi\u2208SLi(\u03b8t). 5: Compute a subgradient \u02dc gt of the top-q samples LQ(\u03b8t): \u02dc gt \u2208\u2202LQ(\u03b8t) where LQ(\u03b8t) = 1 q P i\u2208Q Li(\u03b8t)+R(\u03b8t) and \u2202LQ is the set of subgradient1of function LQ. 6: Update parameters \u03b8: \u03b8t+1 = \u03b8t \u2212\u03b7t\u02dc gt Algorithm 1 describes the pseudocode of our proposed algorithm, ordered SGD. The procedures of ordered SGD follow those of mini-batch SGD except the following modi\ufb01cation: after drawing a mini-batch of size s, ordered SGD updates the parameter vector \u03b8 based on the (sub-)gradient of the average loss over the top-q samples in the mini-batch in terms of individual loss values (lines 4 and 5 of Algorithm 1). This modi\ufb01cation is used to purposely build and utilize a biased gradient estimator with more weights on the samples having larger losses. As it can be seen in Algorithm 1, ordered SGD is easily implementable, requiring to change only a single line or few lines on top of a minibatch SGD implementation. Figure 1 illustrates the motivation of ordered SGD by looking at two-dimensional toy problems of binary classi\ufb01cation. To avoid an extra freedom due to the hyper-parameter q, we employed a single \ufb01xed procedure to set the hyper-parameter q in the experiments for Figure 1 and other experiments in Section 6, which is further explained in Section 6. 
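As a concrete illustration of lines 3-6 of Algorithm 1, here is a minimal NumPy sketch for a least-squares linear model (the helper names are ours, not from the paper's implementation; the regularizer R is omitted, i.e., R = 0):

```python
import numpy as np

def sq_losses(theta, X, Y):
    # Per-sample square losses L_i(theta) for a linear model.
    return 0.5 * (X @ theta - Y) ** 2

def sq_grad(theta, X, Y):
    # Average gradient of the square loss over the given samples.
    return X.T @ (X @ theta - Y) / len(X)

def ordered_sgd_step(theta, X, Y, s, q, lr, rng):
    batch = rng.choice(len(X), size=s, replace=False)  # line 3: mini-batch S
    top = batch[np.argsort(sq_losses(theta, X[batch], Y[batch]))[-q:]]  # line 4: q-argmax
    return theta - lr * sq_grad(theta, X[top], Y[top])  # lines 5-6, with R = 0

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
theta_true = np.ones(5)
Y = X @ theta_true               # noiseless, so the minimum loss is zero
theta = np.zeros(5)
for _ in range(3000):
    theta = ordered_sgd_step(theta, X, Y, s=32, q=8, lr=0.05, rng=rng)
```

Setting q = s recovers a plain mini-batch SGD step; with q < s, each update is driven only by the currently worst-fit samples in the batch.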
The details of the experimental settings for Figure 1 are presented in Section 6 and in Appendix C. It can be seen from Figure 1 that ordered SGD adapts better to imbalanced data distributions compared with mini-batch SGD. It can better capture the information of the smaller sub-clusters that contribute less to the empirical average loss L(\u03b8): e.g., the small sub-clusters in the middle of Figures 1a and 1b, as well as the small inner ring structure in Figures 1c and 1d (the two inner rings contain only 40 data points while the two outer rings contain 960 data points). The smaller sub-clusters are informative for training a classifier when they are not outliers or by-products of noise. A sub-cluster of data points would be less likely to be an outlier as the size of the sub-cluster increases. The value of q in ordered SGD can control the size of sub-clusters that a classifier should be sensitive to. With smaller q, the output model becomes more sensitive to smaller sub-clusters. Footnote 1: The sub-gradient for a (non-convex) \u03c1-weakly convex function LQ at \u03b8t is defined as {g | LQ(\u03b8) \u2265 LQ(\u03b8t) + \u27e8g, \u03b8 \u2212 \u03b8t\u27e9 \u2212 (\u03c1/2)\u2225\u03b8 \u2212 \u03b8t\u2225^2, \u2200\u03b8} (Rockafellar and Wets, 2009). \fKenji Kawaguchi*, Haihao Lu* [Figure 1 panels: (a) with linear classifier; (b) with linear classifier; (c) with small ANN; (d) with tiny ANN.] Figure 1: Decision boundaries of mini-batch SGD predictors (top row) and ordered SGD predictors (bottom row) with 2D synthetic datasets for binary classification. In these examples, ordered SGD predictors correctly classify more data points than mini-batch SGD predictors, because an ordered SGD predictor can focus more on a smaller yet informative subset of data points, instead of focusing on the average loss dominated by a larger subset of data points.
In an extreme case with q = 1 and n = s, ordered SGD minimizes the maximal loss (Shalev-Shwartz and Wexler, 2016), which is highly sensitive to every smallest sub-cluster of each single data point. 4 Optimization Theory In this section, we answer the following three questions: (1) what objective function does ordered SGD solve as an optimization method, (2) what is the convergence rate of ordered SGD for minimizing the new objective function, and (3) what is the asymptotic structure of the new objective function. Similarly to the notation of order statistics, we first introduce the notation of ordered indexes: given a model parameter \u03b8, let L(1)(\u03b8) \u2265 L(2)(\u03b8) \u2265 \u00b7\u00b7\u00b7 \u2265 L(n)(\u03b8) be the decreasing values of the individual losses L1(\u03b8), . . . , Ln(\u03b8), where (j) \u2208 {1, . . . , n} (for all j \u2208 {1, . . . , n}). That is, {(1), . . . , (n)} as a permutation of {1, . . . , n} defines the order of sample indexes by loss values. Throughout this paper, whenever we encounter ties on the values, we employ a tie-breaking rule in order to ensure the uniqueness of such an order (Footnote 2: In the case of ties, the order is defined by the order of the original indexes (1, 2, . . . , n) of L1(\u03b8), . . . , Ln(\u03b8); i.e., if Li1(\u03b8) = Li2(\u03b8) and i1 < i2, then i1 appears before i2 in the sequence ((1), (2), . . . , (n))). Theorem 1 shows that ordered SGD is a stochastic first-order method for minimizing the new ordered empirical loss Lq(\u03b8). Theorem 1. Consider the following objective function: Lq(\u03b8) := (1/q) \u2211_{j=1}^{n} \u03b3j L(j)(\u03b8) + R(\u03b8), (2) where the parameter \u03b3j depends on the tuple (n, s, q) and is defined by \u03b3j := \u2211_{l=0}^{q\u22121} (j\u22121 choose l) (n\u2212j choose s\u2212l\u22121) / (n choose s).
(3) Then, ordered SGD is a stochastic first-order method for minimizing Lq(\u03b8) in the sense that \u02dcgt used in ordered SGD is an unbiased estimator of a (sub-)gradient of Lq(\u03b8). Although the order of the individual losses changes with different \u03b8, Lq is a well-defined function. For any given \u03b8, the order of the individual losses is fixed and Lq(\u03b8) has a unique value, which means Lq(\u03b8) is a function of \u03b8. All proofs in this paper are deferred to Appendix A. As we can see from Theorem 1, the objective function minimized by ordered SGD (i.e., Lq(\u03b8)) depends on the hyper-parameters of the algorithm through the values of \u03b3j. Therefore, it is of practical interest to obtain a deeper understanding of how the hyper-parameters (n, s, q) affect the objective function Lq(\u03b8) through \u03b3j. The next proposition presents the asymptotic value of \u03b3j (when n \u2192 \u221e), which shows that a rescaled \u03b3j converges to the cumulative distribution function of a Beta distribution: Proposition 1. Denote z = j/n and \u03b3(z) := \u2211_{l=0}^{q\u22121} z^l (1 \u2212 z)^{s\u2212l\u22121} \u00b7 s!/(l!(s\u2212l\u22121)!). Then, it holds that lim_{j,n\u2192\u221e, j/n=z} \u03b3j = \u03b3(z)/n. Moreover, it holds that 1 \u2212 \u03b3(z)/s is the cumulative distribution function of Beta(z; q, s \u2212 q). [Figure 2 panels: (a) (s, q) = (10, 3); (b) (s, q) = (100, 30); (c) (s, q) = (100, 60).] Figure 2: \u02c6\u03b3(z) and \u03b3(z) for different (n, s, q), where \u02c6\u03b3 is a rescaled version of \u03b3j: \u02c6\u03b3(j/n) = n\u03b3j.
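The weights \u03b3j of equation (3) can be computed directly. The following small sketch (ours, using Python's `math.comb`) also checks two properties implied by Theorem 1: the weights sum to q (every mini-batch selects exactly q samples), and they decay monotonically in the loss rank j:

```python
from math import comb

def gamma_weight(j, n, s, q):
    """gamma_j from equation (3): the probability that the sample with the
    j-th largest loss is drawn into the mini-batch and lands in its top q."""
    return sum(comb(j - 1, l) * comb(n - j, s - l - 1) for l in range(q)) / comb(n, s)

n, s, q = 20, 10, 3
w = [gamma_weight(j, n, s, q) for j in range(1, n + 1)]
assert abs(sum(w) - q) < 1e-12                 # weights sum to q
assert all(a >= b for a, b in zip(w, w[1:]))   # monotone decay in rank j
# With q = s, ordered SGD reduces to mini-batch SGD: every gamma_j equals s/n.
assert all(abs(gamma_weight(j, n, s, s) - s / n) < 1e-12 for j in range(1, n + 1))
```

The q = s check matches the discussion in Section 5 of the paper, where ordered SGD with q = s coincides with standard mini-batch SGD and the weights become uniform.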
To better illustrate the structure of \u03b3j in the nonasymptotic regime, Figure 2 plots \u02c6 \u03b3(z) and \u03b3(z) for di\ufb00erent values of (n, s, q) where \u02c6 \u03b3(z) is a rescaled version of \u03b3j de\ufb01ned by \u02c6 \u03b3(j/n) = n\u03b3j (and the value of \u02c6 \u03b3(\u00b7) between j/n and (j + 1)/n is de\ufb01ned by linear interpolation for better visualization). As we can see from Figure 2, \u02c6 \u03b3(z) monotonically decays. In each sub\ufb01gure, with \ufb01xed s, q, the cli\ufb00gets smoother and \u02c6 \u03b3(z) converges to \u03b3(z) as n increases. Comparing Figures 2a and 2b, we can see that as s, q and n all increase proportionally, the cli\ufb00gets steeper. Comparing Figures 2b and 2c, we can see that with \ufb01xed n and q, the cli\ufb00shifts to the right as q increases. As a direct extension of Theorem 1, we can now obtain the computational guarantees of ordered SGD for minimizing Lq(\u03b8) by taking advantage of the classic convergence results of SGD: Theorem 2. Let (\u03b8t)T t=0 be a sequence generated by ordered SGD (Algorithm 1). Suppose that Li(\u00b7) is G1Lipschitz continuous for i = 1, . . . , n, and R(\u00b7) is G2Lipschitz continuous. Suppose that there exists a \ufb01nite \u03b8\u2217\u2208argmin\u03b8 Lq(\u03b8) and Lq(\u03b8\u2217) is \ufb01nite. Then, the following two statements hold: (1) (Convex setting). If Li(\u00b7) and R(\u00b7) are both convex, for any step-size \u03b7t, it holds that min 0\u2264t\u2264n E[Lq(\u03b8t) \u2212Lq(\u03b8\u2217)] \u22642(G2 1 + G2 2) PT t=0 \u03b72 t + \u2225\u03b8\u2217\u2212\u03b80\u22252 2 PT t=0 \u03b7t . (2) (Weakly convex setting) Suppose that Li(\u00b7) is \u03c1weakly convex (i.e., Li(\u03b8) + \u03c1 2\u2225\u03b8\u22252 is convex) and R(\u00b7) is convex. Recall the de\ufb01nition of Moreau envelope: L\u03bb q (\u03b8) := min\u03b2{Lq(\u03b2) + 1 2\u03bb\u2225\u03b2 \u2212\u03b8\u22252}. 
Denote \u00af \u03b8T as a random variable taking value in {\u03b80, \u03b81, . . . , \u03b8T } according to the probability distribution P(\u00af \u03b8T = \u03b8t) = \u03b7t PT t=0 \u03b7t . Then for any constant \u02c6 \u03c1 > \u03c1, it holds that E[\u2225\u2207L1/\u02c6 \u03c1 q (\u00af \u03b8T )\u22252] \u2264 \u02c6 \u03c1 \u02c6 \u03c1 \u2212\u03c1 \u0010 L1/\u02c6 \u03c1 q (\u03b80) \u2212Lq(\u03b8\u2217) \u0011 + \u02c6 \u03c1(G2 1 + G2 2) PT t=0 \u03b72 t PT t=0 \u03b7t . Theorem 2 shows that in particular, if we choose \u03b7t \u223c O(1/ \u221a t), the optimality gap mint Lq(\u03b8t) \u2212Lq(\u03b8\u2217) and E[\u2225\u2207L1/\u02c6 \u03c1 q (\u00af \u03b8T )\u22252] decay at the rate of \u02dc O(1/ \u221a t) (note that limT \u2192\u221e PT t=0 \u03b72 t PT t=0 \u03b7t = 0 with \u03b7t \u223cO(1/ \u221a t)). The Lipschitz continuity assumption in Theorem 2 is a standard assumption for the analysis of stochastic optimization algorithms. This assumption is generally satis\ufb01ed with logistic loss, hinge loss and Huber loss without any constraints on \u03b8t, and with square loss when one can presume that \u03b8t stays in a compact space (which is typically the case being interested in practice). For the weakly convex setting, E\u2225\u2207\u03d51/2\u03c1(\u03b8k)\u22252 (appeared in Theorem 2 (2)) is a natural measure of the near-stationarity for a non-di\ufb00erentiable weakly convex function \u03d5 : \u03b8 7\u2192\u03d5(\u03b8) (Davis and Drusvyatskiy, 2018). The weak convexity (also known as negative strong convexity or almost convexity) is a standard assumption for analyzing non-convex optimization problem in optimization literature (Davis and Drusvyatskiy, 2018; Allen-Zhu, 2017). With a standard loss criterion such as logistic loss, the individual objective Li(\u00b7) with a neural network using sigmoid or tanh activation functions is weakly convex (neural network with ReLU activation function is not weakly convex and falls out of our setting). 
5 Generalization Bound This section presents the generalization theory for ordered SGD. To make the dependence on a training dataset D explicit, we define $L(\theta; D) := \frac{1}{n}\sum_{i=1}^n L_i(\theta; D)$ and $L_q(\theta; D) := \frac{1}{q}\sum_{j=1}^m \gamma_j L_{(j)}(\theta; D)$ by rewriting $L_i(\theta; D) = L_i(\theta)$ and $L_{(j)}(\theta; D) = L_{(j)}(\theta)$, where $((j))_{j=1}^n$ defines the order of sample indexes by the loss value, as stated in Section 4. Denote $r_i(\theta; D) = \sum_{j=1}^n 1\{i = (j)\}\gamma_j$, where $(j)$ depends on $(\theta, D)$. Given an arbitrary set $\Theta \subseteq \mathbb{R}^{d_\theta}$, we define $\mathcal{R}_n(\Theta)$ as the (standard) Rademacher complexity of the set $\{(x, y) \mapsto \ell(f(x; \theta), y) : \theta \in \Theta\}$: $\mathcal{R}_n(\Theta) = E_{\bar D, \xi}\left[\sup_{\theta \in \Theta} \frac{1}{n}\sum_{i=1}^n \xi_i \ell(f(\bar x_i; \theta), \bar y_i)\right]$, where $\bar D = ((\bar x_i, \bar y_i))_{i=1}^n$, and $\xi_1, \dots, \xi_n$ are independent uniform random variables taking values in $\{-1, 1\}$ (i.e., Rademacher variables). Given a tuple $(\ell, f, \Theta, \mathcal{X}, \mathcal{Y})$, define $M$ as the least upper bound on the difference of individual loss values: $|\ell(f(x; \theta), y) - \ell(f(x'; \theta), y')| \le M$ for all $\theta \in \Theta$ and all $(x, y), (x', y') \in \mathcal{X} \times \mathcal{Y}$. For example, $M = 1$ if $\ell$ is the 0-1 loss function. Theorem 3 presents a generalization bound for ordered SGD: Theorem 3. Let $\Theta$ be a fixed subset of $\mathbb{R}^{d_\theta}$. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over an iid draw of $n$ examples $D = ((x_i, y_i))_{i=1}^n$, the following holds for all $\theta \in \Theta$: $E_{(x,y)}[\ell(f(x; \theta), y)] \le L_q(\theta; D) + 2\mathcal{R}_n(\Theta) + M\frac{s}{q}\sqrt{\frac{\ln(1/\delta)}{2n}} - Q_n(\Theta; s, q)$, (4) where $Q_n(\Theta; s, q) := E_{\bar D}\left[\inf_{\theta \in \Theta} \sum_{i=1}^n \left(\frac{r_i(\theta; \bar D)}{q} - \frac{1}{n}\right)\ell(f(\bar x_i; \theta), \bar y_i)\right] \ge 0$.
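To see concretely why the correction term $Q_n(\Theta; s, q)$ drops out when the per-sample weights are uniform (the $q = s$ case discussed next), here is a small Python illustration with made-up loss values:

```python
def q_term(weights, losses):
    """Inner sum of Q_n: sum_i (r_i/q - 1/n) * loss_i for one fixed dataset."""
    n = len(losses)
    return sum((w - 1.0 / n) * l for w, l in zip(weights, losses))

losses = [0.3, 1.2, 0.8, 0.1, 2.0]          # hypothetical per-sample losses
uniform = [1.0 / len(losses)] * len(losses)  # r_i/q = 1/n when q = s
print(abs(q_term(uniform, losses)) < 1e-12)  # True: the term vanishes

# Putting more weight on the largest losses makes the term positive,
# which is what tightens the bound for ordered SGD relative to SGD.
skewed = [0.0, 0.5, 0.0, 0.0, 0.5]
print(q_term(skewed, losses) > 0)            # True
```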
The expected error $E_{(x,y)}[\ell(f(x; \theta), y)]$ on the left-hand side of Equation (4) is a standard objective for generalization, whereas the right-hand side is an upper bound that depends on the algorithm parameters $q$ and $s$. Let us first look at the asymptotic case when $n \to \infty$. Let $\Theta$ be constrained such that $\mathcal{R}_n(\Theta) \to 0$ as $n \to \infty$, which has been shown to be satisfied for various models and sets $\Theta$ (Bartlett and Mendelson, 2002; Mohri et al., 2012; Bartlett et al., 2017; Kawaguchi et al., 2017). With $s/q$ being bounded, the third term on the right-hand side of Equation (4) disappears as $n \to \infty$. Thus, it holds with high probability that $E_{(x,y)}[\ell(f(x; \theta), y)] \le L_q(\theta; D) - Q_n(\Theta; s, q) \le L_q(\theta; D)$, where $L_q(\theta; D)$ is minimized by ordered SGD as shown in Theorem 1 and Theorem 2. From this viewpoint, ordered SGD minimizes the expected error for generalization when $n \to \infty$. A special case of Theorem 3 recovers the standard generalization bound of the empirical average loss (e.g., Mohri et al., 2012). That is, if $q = s$, ordered SGD becomes the standard mini-batch SGD and Equation (4) becomes $E_{(x,y)}[\ell(f(x; \theta), y)] \le L(\theta; D) + 2\mathcal{R}_n(\Theta) + M\sqrt{\frac{\ln(1/\delta)}{2n}}$, (5) which is the standard generalization bound (e.g., Mohri et al., 2012). This is because if $q = s$, then $\frac{r_i(\theta; \bar D)}{q} = \frac{1}{n}$ and hence $Q_n(\Theta; s, q) = 0$. For the purpose of a simple comparison of ordered SGD and (mini-batch) SGD, consider the case where we fix a single subset $\Theta \subseteq \mathbb{R}^{d_\theta}$. Let $\hat\theta_q$ and $\hat\theta_s$ be the parameter vectors obtained by ordered SGD and (mini-batch) SGD respectively as the results of training.
Then, when $n \to \infty$, with $s/q$ being bounded, the upper bound on the expected error for ordered SGD (the right-hand side of Equation 4) is (strictly) less than that for (mini-batch) SGD (the right-hand side of Equation 5) if $Q_n(\Theta; s, q) + L(\hat\theta_s; D) - L_q(\hat\theta_q; D) > 0$, or if $L(\hat\theta_s; D) - L_q(\hat\theta_q; D) > 0$. For a given model $f$, whether Theorem 3 provides a non-vacuous bound depends on the choice of $\Theta$. In Appendix B, we discuss this effect as well as a standard way to derive various data-dependent bounds from Theorem 3. 6 Experiments In this section, we empirically evaluate ordered SGD with various datasets, models and settings. To avoid an extra degree of freedom due to the hyper-parameter $q$, we introduce a single fixed setup of adaptive values of $q$ as the default setting: $q = s$ at the beginning of training, $q = \lfloor s/2 \rfloor$ once train acc $\ge$ 80%, $q = \lfloor s/4 \rfloor$ once train acc $\ge$ 90%, $q = \lfloor s/8 \rfloor$ once train acc $\ge$ 95%, and $q = \lfloor s/16 \rfloor$ once train acc $\ge$ 99.5%, where train acc represents training accuracy. The value of $q$ was automatically updated at the end of each epoch based on this simple rule. This rule was derived from the intuition that in the early stage of training all samples are informative for building a rough model, while the samples around the decision boundary (with larger losses) are more helpful for building the final classifier in the later stage. In the figures and tables of this section, we refer to ordered SGD with this rule as 'OSGD', and ordered SGD with a fixed value $q = \bar q$ as 'OSGD: $q = \bar q$'. Experiment with fixed hyper-parameters.
For this experiment, we fixed all hyper-parameters a priori across all different datasets and models by using a standard hyper-parameter setting of mini-batch SGD, instead of aiming for state-of-the-art test errors for each dataset with a possible issue of over-fitting to test and validation datasets (Dwork et al., 2015; Rao et al., 2008). We fixed the mini-batch size $s$ to be 64, the weight decay rate to be $10^{-4}$, the initial learning rate to be 0.01, and the momentum coefficient to be 0.9. \fOrdered SGD: A New Stochastic Optimization Framework for Empirical Risk Minimization Table 1: Test errors (%) of mini-batch SGD and ordered SGD (OSGD). The last column labeled "Improve" shows relative improvements (%) from mini-batch SGD to ordered SGD. In the other columns, the numbers indicate the mean test errors (and standard deviations in parentheses) over ten random trials. The first column shows 'No' for no data augmentation, and 'Yes' for data augmentation.
Data Aug | Datasets | Model | mini-batch SGD | OSGD | Improve
No | Semeion | Logistic model | 10.76 (0.35) | 9.31 (0.42) | 13.48
No | MNIST | Logistic model | 7.70 (0.06) | 7.35 (0.04) | 4.55
No | Semeion | SVM | 11.05 (0.72) | 10.25 (0.51) | 7.18
No | MNIST | SVM | 8.04 (0.05) | 7.66 (0.07) | 4.60
No | Semeion | LeNet | 8.06 (0.61) | 6.09 (0.55) | 24.48
No | MNIST | LeNet | 0.65 (0.04) | 0.57 (0.06) | 11.56
No | KMNIST | LeNet | 3.74 (0.08) | 3.09 (0.14) | 17.49
No | Fashion-MNIST | LeNet | 8.07 (0.16) | 8.03 (0.26) | 0.57
No | CIFAR-10 | PreActResNet18 | 13.75 (0.22) | 12.87 (0.32) | 6.41
No | CIFAR-100 | PreActResNet18 | 41.80 (0.40) | 41.32 (0.43) | 1.17
No | SVHN | PreActResNet18 | 4.66 (0.10) | 4.39 (0.11) | 5.95
Yes | Semeion | LeNet | 7.47 (1.03) | 5.06 (0.69) | 32.28
Yes | MNIST | LeNet | 0.43 (0.03) | 0.39 (0.03) | 9.84
Yes | KMNIST | LeNet | 2.59 (0.09) | 2.01 (0.13) | 22.33
Yes | Fashion-MNIST | LeNet | 7.45 (0.07) | 6.49 (0.19) | 12.93
Yes | CIFAR-10 | PreActResNet18 | 8.08 (0.17) | 7.04 (0.12) | 12.81
Yes | CIFAR-100 | PreActResNet18 | 29.95 (0.31) | 28.31 (0.41) | 5.49
Yes | SVHN | PreActResNet18 | 4.45 (0.07) | 4.00 (0.08) | 10.08
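The adaptive rule for $q$ described in the previous paragraph (halving $q$ via floor division as training accuracy crosses thresholds) can be transcribed directly into Python; the max(1, ...) guard for very small batches is our own assumption, not stated in the text:

```python
def adaptive_q(s, train_acc):
    """Default OSGD schedule for q given mini-batch size s and training accuracy."""
    if train_acc >= 0.995:
        return max(1, s // 16)
    if train_acc >= 0.95:
        return max(1, s // 8)
    if train_acc >= 0.90:
        return max(1, s // 4)
    if train_acc >= 0.80:
        return max(1, s // 2)
    return s

print(adaptive_q(64, 0.50))  # 64
print(adaptive_q(64, 0.92))  # 16
```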
See Appendix C for more details of the experimental settings. The code to reproduce all the results is publicly available at: [the link is hidden for anonymous submission]. Table 1 compares the test performance of ordered SGD and mini-batch SGD for different models and datasets. Table 1 consistently shows that ordered SGD improved over mini-batch SGD in test errors. The table reports the mean and the standard deviation of test errors (i.e., 100 $\times$ the average of 0-1 losses on the test dataset) over 10 random experiments with different random seeds. The table also summarises the relative improvements of ordered SGD over mini-batch SGD, defined as $100 \times$ ((mean test error of mini-batch SGD) $-$ (mean test error of ordered SGD)) / (mean test error of mini-batch SGD). Logistic model refers to the linear multinomial logistic regression model, SVM refers to the linear multiclass support vector machine, LeNet refers to a standard variant of LeNet (LeCun et al., 1998) with ReLU activations, and PreActResNet18 refers to pre-activation ResNet with 18 layers (He et al., 2016). Figure 3 shows the test error and the average training loss of mini-batch SGD and ordered SGD versus the number of epochs. As shown in the figure, ordered SGD with a fixed $q$ value also outperformed mini-batch SGD in general. In the figures, the reported training losses refer to the standard empirical average loss $\frac{1}{n}\sum_{i=1}^n L_i(\theta)$ measured at the end of each epoch. When compared to mini-batch SGD, ordered SGD had lower test errors while having higher training losses in Figures 3a, 3d and 3g, because ordered SGD optimizes the ordered empirical loss instead. This is consistent with our motivation and theory of ordered SGD in Sections 3, 4 and 5. Qualitatively similar behaviors were also observed in all of the 18 problems, as shown in Appendix C. Moreover, ordered SGD is a computationally efficient algorithm.
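The "Improve" column follows directly from this formula; a one-line Python check against two rows of Table 1:

```python
def relative_improvement(err_sgd, err_osgd):
    """Relative improvement (%) of ordered SGD over mini-batch SGD."""
    return 100.0 * (err_sgd - err_osgd) / err_sgd

# Reproduces Table 1, e.g. Semeion + logistic model and MNIST + logistic model.
print(round(relative_improvement(10.76, 9.31), 2))  # 13.48
print(round(relative_improvement(7.70, 7.35), 2))   # 4.55
```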
Table 2 shows the wall-clock time in four illustrative experiments, whereas Table 4 in Appendix C summarizes the wall-clock time in all experiments. The wall-clock time of ordered SGD measures the time spent on all computations of ordered SGD, including the extra computation of finding the top-$q$ samples in a mini-batch (line 4 of Algorithm 1). The extra computation is generally negligible and can be completed in $O(s \log q)$ or $O(s)$ time by using a sorting/selection algorithm. The ordered SGD algorithm can be faster than mini-batch SGD because ordered SGD only computes the (sub-)gradient $\tilde g_t$ of the top-$q$ samples (in line 5 of Algorithm 1). [Figure 3: Test error and training loss (in log scales) versus the number of epochs; panels (a) MNIST & Logistic, (b) MNIST & LeNet, (c) KMNIST, (d) CIFAR-10 are without data augmentation, and panels (e) Semeion & LeNet, (f) KMNIST, (g) CIFAR-100, (h) SVHN are with data augmentation. The lines indicate the mean values over 10 random trials, and the shaded regions represent intervals of the sample standard deviations.] Table 2: Average wall-clock time (seconds) per epoch with data augmentation. PreActResNet18 was used for CIFAR-10, CIFAR-100, and SVHN, while LeNet was used for MNIST and KMNIST.
Datasets | mini-batch SGD | OSGD
MNIST | 14.44 (0.54) | 14.77 (0.41)
KMNIST | 12.17 (0.33) | 11.42 (0.29)
CIFAR-10 | 48.18 (0.58) | 46.40 (0.97)
CIFAR-100 | 47.37 (0.84) | 44.74 (0.91)
SVHN | 72.29 (1.23) | 67.95 (1.54)
As shown in Tables 2 and 4, ordered SGD was faster than mini-batch SGD for all larger models with PreActResNet18. This is because the computational reduction of the back-propagation in ordered SGD can dominate the small extra cost of finding the top-$q$ samples in larger problems. Experiment with different q values. Figure 4 shows the effect of different fixed $q$ values for CIFAR-10 with PreActResNet18.
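The top-$q$ selection step (line 4 of Algorithm 1) can be sketched with the standard library in $O(s \log q)$ time; this is an illustrative sketch, not the authors' implementation:

```python
import heapq

def top_q_indices(losses, q):
    """Indices of the q largest per-sample losses in a mini-batch of size s.
    heapq.nlargest runs in O(s log q), matching the cost quoted above."""
    return [i for i, _ in heapq.nlargest(q, enumerate(losses), key=lambda p: p[1])]

losses = [0.2, 1.5, 0.7, 3.1, 0.05, 2.2]   # made-up per-sample losses
print(top_q_indices(losses, 3))            # [3, 5, 1]
```

The gradient step then back-propagates only through the selected samples, which is the source of the speed-up reported in Table 2.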
Ordered SGD improved the test errors of mini-batch SGD with different fixed $q$ values. [Figure 4: Effect of different q values with CIFAR-10.] We also report the same observation with different datasets and models in Appendix C. Experiment with different learning rates and mini-batch sizes. Figures 5 and 6 in Appendix C consistently show the improvement of ordered SGD over mini-batch SGD with different learning rates and mini-batch sizes. Table 3: Test errors (%) obtained by using the best learning rate of mini-batch SGD with various data augmentation methods for CIFAR-10.
Data Aug | mini-batch SGD | OSGD | Improve
Standard | 6.94 | 6.46 | 6.92
RE | 3.24 | 3.06 | 5.56
Mixup | 3.31 | 3.05 | 7.85
Experiment with the best learning rate, mixup, and random erasing. Table 3 summarises the experimental results with the data augmentation methods of random erasing (RE) (Zhong et al., 2017) and mixup (Zhang et al., 2017; Verma et al., 2019) using the CIFAR-10 dataset. For this experiment, we purposefully adopted the setting that favors mini-batch SGD. That is, for both mini-batch SGD and ordered SGD, we used hyper-parameters tuned for mini-batch SGD. For RE and mixup, we used the same tuned hyper-parameter settings (including learning rates) and the codes as those in the previous studies that used mini-batch SGD (Zhong et al., 2017; Verma et al., 2019) (with WRN-28-10 for RE and with PreActResNet18 for mixup). For standard data augmentation, we first searched for the best learning rate of mini-batch SGD based on the test error (purposefully over-fitting to the test dataset for mini-batch SGD) by using a grid search over learning rates of 1.0, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001. Then, we used the best learning rate of mini-batch SGD for ordered SGD (instead of using the best learning rate of ordered SGD for ordered SGD).
As shown in Table 3, ordered SGD with hyper-parameters tuned for mini-batch SGD still outperformed fine-tuned mini-batch SGD under the different data augmentation methods. 7 Related work and extension Although there is no direct predecessor of our work, the following fields are related to this paper. Other mini-batch stochastic methods. The proposed sampling strategy and our theoretical analyses are generic and can be extended to other (mini-batch) stochastic methods, including Adam (Kingma and Ba, 2014), stochastic mirror descent (Beck and Teboulle, 2003; Nedic and Lee, 2014; Lu, 2017; Lu et al., 2018; Zhang and He, 2018), and proximal stochastic subgradient methods (Davis and Drusvyatskiy, 2018). Thus, our results open up a research direction for further studying the proposed stochastic optimization framework with different base algorithms such as Adam and AdaGrad. To illustrate this, we present ordered Adam and report the numerical results in Appendix C. Importance Sampling SGD. Stochastic gradient descent with importance sampling has been an active research area for the past several years (Needell et al., 2014; Zhao and Zhang, 2015; Alain et al., 2015; Loshchilov and Hutter, 2015; Gopal, 2016; Katharopoulos and Fleuret, 2018). In the convex setting, Zhao and Zhang (2015) and Needell et al. (2014) show that the optimal sampling distribution for minimizing $L(\theta)$ is proportional to the per-sample gradient norm. However, maintaining the norm of the gradient for individual samples can be computationally expensive when the dataset size $n$ or the parameter vector size $d_\theta$ is large, particularly for many applications of deep learning. These importance sampling methods are inherently different from ordered SGD in that importance sampling is used to reduce the number of iterations for minimizing $L(\theta)$, whereas ordered SGD is designed to learn a different type of model by minimizing the new objective function $L_q(\theta)$. Average Top-k Loss.
The average top-k loss is introduced by Fan et al. (2017) as an alternative to the empirical average loss L(\u03b8). The ordered loss function Lq(\u03b8) di\ufb00ers from the average top-k loss as shown in Section 4. Furthermore, our proposed framework is fundamentally di\ufb00erent from the average top-k loss. First, the algorithms are di\ufb00erent \u2013 the stochastic method proposed in Fan et al. (2017) utilizes duality of the function and is unusable for deep neural networks (and other non-convex problems), while our proposed method is a modi\ufb01cation of mini-batch SGD that is usable for deep neural networks (and other non-convex problems) and scales well for large problems. Second, the optimization results are di\ufb00erent, and in particular, the objective functions are di\ufb00erent and we have convergence analysis for weakly convex (non-convex) functions. Finally, the focus of generalization property is di\ufb00erent \u2013 Fan et al. (2017) focuses on the calibration for binary classi\ufb01cation problem, while we focus on the generalization bound that works for general classi\ufb01cation and regression problems. Random-then-Greedy Procedure. Ordered SGD randomly picks a subset of samples and then greedily utilizes a part of the subset, which is related to the random-then-greedy procedure proposed recently in the di\ufb00erent topic \u2013 the greedy weak learner for gradient boosting (Lu and Mazumder, 2018). 8 Conclusion We have presented an e\ufb03cient stochastic \ufb01rst-order method, ordered SGD, for learning an e\ufb00ective predictor in machine learning problems. We have shown that ordered SGD minimizes a new ordered empirical loss Lq(\u03b8), based on which we have developed the optimization and generalization properties of ordered SGD. The numerical experiments con\ufb01rmed the e\ufb00ectiveness of our proposed algorithm. \fKenji Kawaguchi*, Haihao Lu*" + } + ], + "T. 
Tony Cai": [ + { + "url": "http://arxiv.org/abs/2401.12272v1", + "title": "Transfer Learning for Nonparametric Regression: Non-asymptotic Minimax Analysis and Adaptive Procedure", + "abstract": "Transfer learning for nonparametric regression is considered. We first study\nthe non-asymptotic minimax risk for this problem and develop a novel estimator\ncalled the confidence thresholding estimator, which is shown to achieve the\nminimax optimal risk up to a logarithmic factor. Our results demonstrate two\nunique phenomena in transfer learning: auto-smoothing and super-acceleration,\nwhich differentiate it from nonparametric regression in a traditional setting.\nWe then propose a data-driven algorithm that adaptively achieves the minimax\nrisk up to a logarithmic factor across a wide range of parameter spaces.\nSimulation studies are conducted to evaluate the numerical performance of the\nadaptive transfer learning algorithm, and a real-world example is provided to\ndemonstrate the benefits of the proposed method.", + "authors": "T. Tony Cai, Hongming Pu", + "published": "2024-01-22", + "updated": "2024-01-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Introduction Transfer learning, a technique that utilizes knowledge gained from related source domains to improve performance in a target domain, has gained widespread popularity in machine learning due to its successes across a range of applications, including natural language processing (Daum\u00b4 e III, 2009), computer vision (Tzeng et al., 2017), and epidemiology (Apostolopoulos and Mpesiana, 2020). See the recent survey papers on transfer learning (Weiss et al., 2016; Zhuang et al., 2020) for more examples and in-depth discussions. Transfer learning has received significant recent attention in statistics, due to its empirical successes. 
It has been studied in a decision-theoretical framework for a variety of supervised learning problems, such as classification (Cai and Wei, 2021; Reeve et al., 2021), high-dimensional linear regression (Li et al., 2022a), and generalized linear models (Li et al., 2021), as well as unsupervised learning problems, such as Gaussian graphical models (Li et al., 2022b). Minimax optimal rates of convergence have been established and data-driven adaptive algorithms have been developed. In this paper we consider transfer learning for nonparametric regression. Formally, in the target domain one observes independent and identically distributed (i.i.d.) samples $(X_i, Y_i) \stackrel{iid}{\sim} Q$, $i = 1, \dots, n_Q$, with $Y_i = f(X_i) + z_i$, where $f(\cdot)$ is an unknown function of interest and $\{z_1, \dots, z_{n_Q}\}$ are random noises satisfying $E(z_i | X_i) = 0$. Different from the conventional setting, in transfer learning one also has auxiliary data from the source domains. For ease of presentation, we first focus on the case of a single source domain and discuss the case of multiple source domains later. In the single source domain setting, in addition to the samples from the target domain, one observes i.i.d. samples $(X'_i, Y'_i) \stackrel{iid}{\sim} P$, $i = 1, \dots, n_P$, from the source domain with $Y'_i = g(X'_i) + z'_i$, where $\{z'_i\}$ are i.i.d. random noises satisfying $E(z'_i | X'_i) = 0$ and $g$ is an unknown function. [Footnote: The research was supported in part by NSF Grant DMS-2015259 and NIH grant R01GM129781. MSC 2010 subject classifications: Primary 62G08; secondary 62L12. Keywords and phrases: Adaptivity, nonparametric regression, optimal rate of convergence, transfer learning. These authors contributed equally to this work. arXiv:2401.12272v1 [stat.ML], 22 Jan 2024.]
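A toy simulation of this target/source sampling model can make the setup concrete; the choices of $f$, the polynomial shift, and the noise level below are ours, purely for illustration:

```python
import math
import random

def simulate_posterior_drift(n_Q, n_P, sigma=0.1, seed=0):
    """Draw target and source samples sharing the same covariate distribution,
    with the source mean g differing from the target mean f by a polynomial."""
    rng = random.Random(seed)
    f = lambda x: math.sin(2 * math.pi * x)   # target mean function (assumed)
    psi = lambda x: 0.5 + 0.3 * x             # degree-1 polynomial shift (assumed)
    g = lambda x: f(x) + psi(x)               # source mean function
    target = [(x, f(x) + rng.gauss(0, sigma))
              for x in (rng.random() for _ in range(n_Q))]
    source = [(x, g(x) + rng.gauss(0, sigma))
              for x in (rng.random() for _ in range(n_P))]
    return target, source

target, source = simulate_posterior_drift(50, 200)
print(len(target), len(source))  # 50 200
```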
In the context of transfer learning for nonparametric regression, the joint distribution P of (X\u2032 i, Y \u2032 i ) from the source domain and the joint distribution Q of (Xi, Yi) from the target domain are different but related. Two popular settings that have been considered in the literature are covariate shift and posterior drift. In the case of covariate shift, the conditional mean functions from the target and source domains, f and g, are the same, but the marginal distributions of the covariates, Xi and X\u2032 i, are different (Shimodaira, 2000; Huang et al., 2006; Wen et al., 2014). On the other hand, the posterior drift model assumes that the mean functions, f and g, may be different, but the marginal distributions of the covariates, Xi and X\u2032 i, are the same. The posterior drift model is a general framework that can be applied to many practical problems, including robotics control (Vijayakumar et al., 2002; Nguyen-Tuong et al., 2008a,b; Yeung and Zhang, 2009; Cao et al., 2010) and air quality prediction (Mei et al., 2014; Wang et al., 2016). In this paper, we focus on the posterior drift model, where we assume that the difference between the mean functions, f and g, can be well approximated by a polynomial function of a given order in L1 distance. This distance is referred to as the bias strength, denoted by \u03f5. It controls the similarity between f and g up to a polynomial of a given order. The smaller the bias strength, the more similar f and g are, and vice versa. When the bias strength is zero, f and g differ by a polynomial. In this special case, transfer learning can be highly beneficial. We refer to g as a \u201cperfect reference\u201d of f when the bias strength is zero, and an \u201cimperfect reference\u201d otherwise. 
There are two natural and important goals for transfer learning: to accurately quantify the contribution of observations from the source domain to the regression task in the target domain, and to develop an optimal transfer learning algorithm. The answer to the first objective depends on various factors, including the sample sizes $n_P$ and $n_Q$, the smoothness of $f$ and $g$, and the bias strength $\epsilon$. In this paper, we investigate the non-asymptotic minimax risk of this problem and propose a data-driven, adaptive transfer learning algorithm to achieve optimal results. 1.1 Main Results and Our Contribution We first establish the minimax optimal rate of convergence for transfer learning for nonparametric regression in the posterior drift setting. Suppose $f$ is $\beta_Q$-smooth and $g$ is $\beta_P$-smooth (which will be defined precisely later). Let $\mathcal{F}$ denote the set of distribution pairs $(Q, P)$ defined in (2) in Section 2. It is shown that the minimax risk of transfer learning satisfies $C_L \cdot \left( n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} + \left(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\right) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{1}{n_Q} \right) \le \inf_{\hat f} \sup_{(Q,P)\in\mathcal{F}} E\|\hat f - f\|_2^2 \le C_U \cdot \left( n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} \ln^4(n_{\max}) + \ln^8(n_Q) \cdot \left(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\right) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{\ln^4(n_Q)}{n_Q} \right)$, for some positive constants $C_L$ and $C_U$ not depending on $\epsilon$, $n_P$ or $n_Q$, where $n_{\max} = \max(n_P, n_Q)$ and $\beta_{\max} = \max(\beta_Q, \beta_P)$. In the special case of observing the data from the target domain only, i.e., $n_P = 0$, the minimax risk for estimating $f$ is of order $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}}$. Comparing this with the transfer learning risk, we can conclude when and by how much transfer learning helps the target task.
The necessary and sufficient condition for transfer learning to improve the estimation performance is that the bias strength is smaller than $n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$ and either the mean function from the source domain is smoother or the sample size of the source domain is larger. If we fix $f$ and $g$, and thus the bias strength $\epsilon$, and let $n_Q$ go to infinity, then this condition fails when $n_Q$ is sufficiently large unless $\epsilon = 0$. This means that in order for transfer learning to work asymptotically, $g$ has to be a perfect reference. However, with any fixed and finite sample size, the non-asymptotic analysis above shows that transfer learning can help even with an imperfect reference function $g$, as long as the bias strength $\epsilon$ is smaller than the phase transition threshold $n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$. There are some interesting phenomena from the minimax analysis in the case where the bias strength $\epsilon$ is sufficiently small. 1. If the function $g$ is rougher than $f$, i.e., $\beta_P < \beta_Q$, then the minimax risk does not depend on $\beta_P$ and is the same as the minimax risk when $\beta_P = \beta_Q$. This means that transfer learning can be effective even if $g$ is less smooth than $f$. 2. If estimation of $f$ and $g$ is considered separately using the data from the target domain and from the source domain alone, the usual minimax risks for estimating $f$ and $g$ are proportional to $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}}$ and $n_P^{-\frac{2\beta_P}{2\beta_P+d}}$ respectively. Then if $\beta_P < \beta_Q$ and $n_P \gg n_Q$, or $\beta_P > \beta_Q$ and $n_P \ll n_Q$, the minimax risk for transfer learning can be much smaller than either of these two minimax risks, provided $\epsilon$ is sufficiently small. This phenomenon sheds new light on the understanding of transfer learning in that even when the task from the source domain is harder, i.e., $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}} \ll n_P^{-\frac{2\beta_P}{2\beta_P+d}}$, it may still help the task in the target domain.
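Ignoring constants and logarithmic factors, the condition above can be written as a small predicate; this is an illustrative rate-level sketch of the stated condition, not a quantitative decision rule:

```python
def transfer_helps(n_Q, n_P, beta_Q, beta_P, d, eps):
    """Rate-level check: bias strength below the phase-transition threshold
    n_Q^{-beta_Q/(2*beta_Q+d)}, and the source is either smoother or larger."""
    threshold = n_Q ** (-beta_Q / (2.0 * beta_Q + d))
    return eps < threshold and (beta_P > beta_Q or n_P > n_Q)

print(transfer_helps(1000, 100000, 1.0, 1.0, 1, 0.01))  # True: large source, small bias
print(transfer_helps(1000, 500, 1.0, 1.0, 1, 0.01))     # False: source neither smoother nor larger
```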
A novel transfer learning algorithm is developed and shown to attain the minimax optimal risk, possibly up to a logarithmic factor. However, the algorithm relies on knowledge of the smoothness parameters $\beta_Q$ and $\beta_P$. To address this, we propose a data-driven algorithm that adaptively achieves the minimax risk, up to a logarithmic factor, over a wide range of parameter spaces. Simulation studies are conducted to evaluate the performance of the adaptive transfer learning algorithm and to validate the phenomena discussed; the numerical results further support our theoretical analysis. The proposed method is then applied to a wine quality dataset (Cortez et al., 2009) to compare the performance of direct local polynomial regression on red wine data to that of the transfer learning algorithm applied to both red and white wine data with varying numbers of observations. The results show that the transfer learning method improves performance. The results and algorithms can also be extended to the setting of multiple source distributions. Suppose there are $K$ source distributions $(P_1, \dots, P_K)$ and one target distribution $Q$. Each source distribution $P_j$ corresponds to a mean function $g_j$, and the difference between $f$ and each $g_j$ can be well approximated by a polynomial function of a given order. We establish the non-asymptotic minimax risk and construct an adaptive procedure that simultaneously attains the optimal risk, up to a logarithmic factor, over a large collection of parameter spaces. 1.2 Related Literature The problem of transfer learning for nonparametric regression in the posterior drift setting has been studied in Wang et al. (2016), under the assumption that the mean functions from both domains have the same Sobolev smoothness and the difference belongs to a smoother Sobolev class.
An upper bound for the performance of transfer learning was obtained; however, no lower bound was provided, leaving it unclear whether the upper bound is sharp. Additionally, their upper bound can only be achieved when the smoothness is known, which is not typically the case in practice, and no adaptation to smoothness was considered. In contrast, in the present paper, we allow the mean functions to have different Hölder smoothness and assume that the difference function can be approximated in $L_1$ distance by a polynomial function. We prove matching upper and lower bounds, up to a logarithmic factor, to quantify what transfer learning can achieve. In the covariate shift setting, transfer learning for nonparametric regression has also been considered in Huang et al. (2006); Wen et al. (2014). For nonparametric transfer learning, much attention has been given to classification, with the general problem being studied in Ben-David et al. (2007), Blitzer et al. (2008), and Mansour et al. (2009). Theoretical results and adaptive procedures have been developed in both the posterior drift setting (Cai and Wei, 2021; Reeve et al., 2021) and the covariate shift setting (Shimodaira, 2000; Sugiyama et al., 2007). The transfer learning problem we consider here is related to the classical nonparametric regression literature, where only observations from the target domain are available. In particular, our algorithm uses local polynomial regression as a basic tool, which has been well studied in the conventional setting (see, for example, Stone (1977), Cleveland (1979), Tsybakov (1986), Fan and Gijbels (1992), Fan (1993), and Xiao et al. (2003)).
A matching lower bound is then established. The problem of adaptation to smoothness is considered in Section 4. A data-driven procedure is proposed and shown to be adaptively rate-optimal. Simulation studies and a real application are carried out in Sections 5 and 6 to investigate the numerical performance of the adaptive algorithm. Section 7 generalizes the methods and theoretical analysis to the multiple source distributions setting, and Section 8 discusses possible future directions. For reasons of space, the proofs are given in the Supplementary Material (Cai and Pu, 2022). The following notation will be used in the paper. For any function $h : [0,1]^d \to \mathbb{R}$, let $\|h\|_1 = \int_{[0,1]^d} |h(x)|\,dx$, $\|h\| = \|h\|_2 = \left(\int_{[0,1]^d} h^2(x)\,dx\right)^{1/2}$ and $\|h\|_\infty = \sup_{x\in[0,1]^d} |h(x)|$. For any $a, b \in \mathbb{R}$, we define $a \vee b = \max(a, b)$ and $a \wedge b = \min(a, b)$. For any $\beta > 0$, let $w(\beta)$ denote the largest integer that is strictly smaller than $\beta$. Let Poly$(M, T)$ denote the polynomial functions whose degree is smaller than or equal to $T$ and whose coefficients have absolute values upper bounded by $M$. For any multi-index $t = (t_1, \dots, t_d)$ and vector $x = (x_1, \dots, x_d)$, let $x^t = \prod_{i=1}^d x_i^{t_i}$. For any multi-index $t = (t_1, \dots, t_d)$, define $|t| = \sum_{j=1}^d t_j$. Define the multi-index class $\Lambda(k) = \{t : |t| \le k\}$. For two functions $h_1(n), h_2(n) > 0$, we write $h_1(n) = O(h_2(n))$ if $\limsup_{n\to\infty} h_1(n)/h_2(n) < \infty$; $h_1(n) = \tilde O(h_2(n))$ if there exists a constant $C > 0$ such that $h_1(n) = O(h_2(n) \cdot \ln^C(n))$; $h_1(n) = \Omega(h_2(n))$ if $\liminf_{n\to\infty} h_1(n)/h_2(n) > 0$; $h_1(n) = \Theta(h_2(n))$ if $h_1(n) = O(h_2(n))$ and $h_2(n) = O(h_1(n))$; and $h_1(n) = \tilde\Theta(h_2(n))$ if $h_1(n) = \tilde O(h_2(n))$ and $h_2(n) = \tilde O(h_1(n))$. 2 Problem Formulation We begin by formally defining the Hölder smoothness as follows. Definition 1.
(Hölder class) For any positive numbers $\beta, L$, the Hölder function class $\mathcal{H}(\beta, L)$ is defined to be the set of all functions $h$ with continuous partial derivatives of order $w(\beta)$ that satisfy
$$\max_{|t| \le w(\beta)} \sup_{x \in [0,1]^d} |D^t h(x)| + \max_{|t| = w(\beta)} \sup_{x \ne y \in [0,1]^d} \frac{|D^t h(x) - D^t h(y)|}{\|x - y\|^{\beta - |t|}} \le L.$$

We model the target mean function $f$ and the source mean function $g$ as Hölder smooth functions and assume, for some constants $\beta_Q, \beta_P, L_Q, L_P > 0$, that $f \in \mathcal{H}(\beta_Q, L_Q)$ and $g \in \mathcal{H}(\beta_P, L_P)$. In this case, we call $f$ $\beta_Q$-smooth and $g$ $\beta_P$-smooth.

Definition 2. For any $\epsilon > 0$, $M > 0$, and positive integer $T$, the class of functions $\Psi(\epsilon, M, T)$ is defined as the set of all functions $h$ for which there exists a polynomial $\psi \in \mathrm{Poly}(M, T)$ such that
$$\|\psi - h\|_1 = \int_{[0,1]^d} |\psi(x) - h(x)|\,dx \le \epsilon.$$

We assume $f - g \in \Psi(\epsilon, M, T)$. This assumption requires that $g$ be close in $L_1$ distance to $f$ plus a polynomial of degree $T$. In this paper we only consider polynomials with coefficients bounded in absolute value by a constant $M$; it is possible to generalize $\psi$ to an arbitrary polynomial function, and a discussion of this generalization is deferred to Section 8.

Definition 3. For any $u_1, u_2 > 0$, the class of sub-Gaussian random variables $\mathcal{G}(u_1, u_2)$ is defined as the set of all random variables $Z$ such that for any $t > 0$,
$$\mathbb{P}\big(|Z| \ge t\big) \le u_1 \cdot e^{-u_2 t^2}.$$

For any $x \in [0,1]^d$ we assume the random noises $z_1 \mid X_1 = x$ and $z'_1 \mid X'_1 = x$ to be sub-Gaussian with constants $u_1, u_2$. This assumption ensures that the outcome is not heavy-tailed. We also assume that the marginal distributions of $X_1$ and $X'_1$ have density functions $f^Q_{\mathrm{den}}$ and $f^P_{\mathrm{den}}$, respectively, which are bounded below and above by constants $C_L$ and $C_U$.
Given these definitions, the parameter space is defined by
$$\mathcal{F}(\beta_Q, \beta_P, \epsilon, u_1, u_2, M, T, L_P, L_Q, C_L, C_U) = \Big\{ (Q, P) : f \in \mathcal{H}(\beta_Q, L_Q),\ g \in \mathcal{H}(\beta_P, L_P),\ f - g \in \Psi(\epsilon, M, T),\ z_1 \mid X_1,\ z'_1 \mid X'_1 \in \mathcal{G}(u_1, u_2),\ C_L \le f^Q_{\mathrm{den}}, f^P_{\mathrm{den}} \le C_U \Big\}.$$
This space will be denoted by $\mathcal{F}(\beta_Q, \beta_P, \epsilon)$ when there is no confusion. The minimax estimation risk over this parameter space is then defined as
$$R_{\beta_Q, \beta_P, \epsilon, u_1, u_2, M, T, L_P, L_Q, C_L, C_U}(n_Q, n_P) = \inf_{\hat f} \sup_{(Q,P) \in \mathcal{F}(\beta_Q, \beta_P, \epsilon)} \mathbb{E}\,\big(\hat f(X) - f(X)\big)^2.$$
For simplicity we write this as $R_{\beta_Q, \beta_P, \epsilon}$ or $R(n_Q, n_P)$ when there is no confusion. It is interesting to understand when and by how much transfer learning can improve the estimation accuracy in the target domain. This question can be answered by comparing the transfer learning minimax risk $R_{\beta_Q, \beta_P, \epsilon}$ with the minimax risk using only data from the target domain, which is well known to be of order $n_Q^{-2\beta_Q/(2\beta_Q + d)}$.

3 Minimax Risk

We consider in this section the minimax risk for the transfer learning problem. We begin with a brief review of local polynomial regression in Section 3.1, which serves as a basic tool for nonparametric regression in our algorithm. We then formally present the algorithm in Section 3.2. The minimax risk is established in Section 3.3, and a discussion of interesting phenomena is given in Section 3.4.

3.1 Local Polynomial Regression

Local polynomial regression has been widely recognized for its empirical success and desirable theoretical properties (Cleveland and Devlin, 1988; Fan and Gijbels, 1992; Fan, 1993). In particular, local polynomial regression achieves the minimax optimal rate over a Hölder ball with properly tuned parameters (Györfi et al., 2002). For observations $D = \{(X^{(i)}, Y^{(i)}) : i = 1, \dots, n\}$, degree $l$, and bandwidth $b$, the local polynomial regression estimate is defined as follows.
Divide $[0,1)^d$ into $b^{-d}$ hypercubes $\{\prod_{i=1}^d [b \cdot a_i,\, b \cdot a_i + b) : a_i = 0, \dots, 1/b - 1\}$ (we assume $1/b$ is an integer). For each hypercube $B(a_1, \dots, a_d) = \prod_{i=1}^d [b \cdot a_i,\, b \cdot a_i + b)$, let the observations whose covariates fall into this hypercube be $\{(X_{i_1}, Y_{i_1}), \dots, (X_{i_r}, Y_{i_r})\}$, and let $a_{\mathrm{mid}} = (b\,a_1 + b/2, \dots, b\,a_d + b/2)$ be the center of the hypercube. The local polynomial regression estimate $\hat f_{\mathrm{lpr}}$ on $B(a_1, \dots, a_d)$ is given by
$$\hat f_{\mathrm{lpr}}(x; D, l, b) = \sum_{t \in \Lambda(l)} c^{\mathrm{lpr}}_t \Big(\frac{x - a_{\mathrm{mid}}}{b/2}\Big)^t, \qquad \forall x \in B(a_1, \dots, a_d),$$
where the coefficients $\{c^{\mathrm{lpr}}_t : t \in \Lambda(l)\}$ are given by
$$\{c^{\mathrm{lpr}}_t : t \in \Lambda(l)\} = \operatorname*{arg\,min}_{\{c_t : t \in \Lambda(l)\}} \sum_{m=1}^r \Big(Y_{i_m} - \sum_{t \in \Lambda(l)} c_t \Big(\frac{X_{i_m} - a_{\mathrm{mid}}}{b/2}\Big)^t\Big)^2.$$
The confidence upper and lower limits are constructed as
$$\hat f^{\mathrm{ub}}_{\mathrm{lpr}}(x; D, l, b, \beta) = \hat f_{\mathrm{lpr}}(x; D, l, b) + \sqrt{\ln(|D|)} \cdot \Big(b^{\beta} + \frac{\ln^2(|D|)}{\sqrt{|D| \cdot b^d}}\Big),$$
$$\hat f^{\mathrm{lb}}_{\mathrm{lpr}}(x; D, l, b, \beta) = \hat f_{\mathrm{lpr}}(x; D, l, b) - \sqrt{\ln(|D|)} \cdot \Big(b^{\beta} + \frac{\ln^2(|D|)}{\sqrt{|D| \cdot b^d}}\Big).$$
The length of the confidence interval is then
$$L_{\mathrm{CI,lpr}}(D, l, b, \beta) = 2 \sqrt{\ln(|D|)} \cdot \Big(b^{\beta} + \frac{\ln^2(|D|)}{\sqrt{|D| \cdot b^d}}\Big).$$

3.2 The Confidence Thresholding (CT) Algorithm

We now present our main transfer learning algorithm. Since we have data from both the source and target domains, the most important and challenging step of this algorithm is integrating the information from the two domains. This step is called confidence thresholding. We first present the confidence thresholding procedure and then give the details of the full algorithm.

3.2.1 The Confidence Thresholding Estimator

We first introduce the confidence thresholding estimator, which is designed to estimate a function when one has access to two different estimates.
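Before moving on, the binned local polynomial fit of Section 3.1 can be sketched in code. This is a minimal illustration for $d = 1$, not the authors' implementation; the function names and the dense least-squares solver are our own choices:

```python
import numpy as np

def local_poly_fit(X, Y, l, b):
    """Binned local polynomial fit on [0,1) (d = 1): within each bin
    [a*b, a*b + b), regress Y on powers of the rescaled covariate
    z = (x - center) / (b/2), as in Section 3.1."""
    n_bins = int(round(1.0 / b))
    coefs = np.zeros((n_bins, l + 1))
    for a in range(n_bins):
        mask = (X >= a * b) & (X < a * b + b)
        if not mask.any():
            continue
        center = a * b + b / 2.0
        z = (X[mask] - center) / (b / 2.0)
        V = np.vander(z, N=l + 1, increasing=True)   # columns 1, z, ..., z^l
        coefs[a], *_ = np.linalg.lstsq(V, Y[mask], rcond=None)
    return coefs

def local_poly_predict(x, coefs, b):
    """Evaluate the piecewise polynomial at a point x in [0,1)."""
    a = min(int(x / b), len(coefs) - 1)
    center = a * b + b / 2.0
    z = (x - center) / (b / 2.0)
    return sum(c * z ** k for k, c in enumerate(coefs[a]))
```

On noiseless linear data the degree-1 fit is exact within each bin, which is a quick sanity check of the rescaling.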
Suppose we have two different estimators $\hat h_1$ and $\hat h_2$ for some unknown function $h$: $\hat h_1$ converges slowly, while $\hat h_2$ converges faster but is slightly biased, meaning that $\hat h_2$ converges to a function that is different from, but close to, $h$ in $L_1$ distance. The confidence thresholding estimator is constructed from $\hat h_1$ and $\hat h_2$ as follows. Let $e_1$ be an upper bound on the $L_\infty$ norm of $\hat h_1 - h$. Then $[\hat h_1(x) - e_1,\ \hat h_1(x) + e_1]$ is a "confidence interval" for $h(x)$ for all $x \in [0,1)^d$. There are three possible cases for the relationship between this confidence interval and $\hat h_2(x)$. If $\hat h_2(x)$ is greater than the upper endpoint of the confidence interval, then that upper endpoint is a better estimate of $h(x)$ than $\hat h_2(x)$, and it is used as the estimate. If $\hat h_2(x)$ lies inside the confidence interval, then $\hat h_2(x)$ is acceptable and we use $\hat h_2(x)$. If $\hat h_2(x)$ is smaller than the lower endpoint of the confidence interval, then that lower endpoint is used as the estimate. We call the resulting estimator the confidence thresholding estimator:
$$\hat\mu_{\mathrm{ct}}(\hat h_1(x), \hat h_2(x), e_1) = \hat h_1(x) + \operatorname{sgn}\big(\hat h_2(x) - \hat h_1(x)\big) \cdot \big(|\hat h_2(x) - \hat h_1(x)| \wedge e_1\big). \tag{3}$$
See Figure 1 for an illustration of the confidence thresholding estimator.

Figure 1: An illustration of the confidence thresholding estimator. In the left panel, the blue dashed line is $\hat h_1$, the green line is $\hat h_2$, and the two red lines are the confidence upper bound $\hat h_1 + e_1$ and lower bound $\hat h_1 - e_1$. In the right panel, the black line is the confidence thresholding estimator $\hat\mu_{\mathrm{ct}}(\hat h_1(x), \hat h_2(x), e_1)$.

The following lemma justifies the convergence of $\hat\mu_{\mathrm{ct}}$. Lemma 1.
Suppose for a function $h : [0,1]^d \to \mathbb{R}$ we have two estimates $\hat h_1$ and $\hat h_2$. Suppose for some $e_1, e_2, e'_2 > 0$ we have $\|h - \hat h_1\|_\infty \le e_1$ and $\|h + \tilde h - \hat h_2\|_\infty \le e_2 \le e_1$, where the function $\tilde h : [0,1]^d \to \mathbb{R}$ satisfies $\|\tilde h\|_1 \le e'_2$. Let $\hat h(x) = \hat\mu_{\mathrm{ct}}(\hat h_1(x), \hat h_2(x), e_1)$ for all $x \in [0,1]^d$. Then
$$\|\hat h - h\|_2 \le \big(e_2 + \sqrt{4 e_1 \cdot e'_2}\big) \wedge 2 e_1.$$

With this lemma we can compare the confidence thresholding estimator with the two original estimators. In the setting of the lemma, the $L_2$ error of $\hat h_1$ is bounded by $e_1$, while the $L_2$ error of $\hat h_2$ can be arbitrarily large, since $\|\tilde h\|_2$ can be arbitrarily large. The $L_2$ error of the confidence thresholding estimator is bounded by $2 e_1$, so it is at least as good as $\hat h_1$ up to a constant. Moreover, when $e'_2 \ll e_1$ and $e_2 \ll e_1$, the confidence thresholding estimator outperforms both of the original estimators.

3.2.2 The Confidence Thresholding (CT) Algorithm

We now present the CT algorithm in detail. It uses the confidence thresholding estimator and fits local polynomial regression twice to produce two preliminary estimators, which play the roles of $\hat h_1$ and $\hat h_2$ in the confidence thresholding estimator. We begin with sample splitting, randomly dividing $D_Q$ into two equal-sized subsets $D_{Q,1}$ and $D_{Q,2}$. We first fit local polynomial regression on $D_{Q,1}$ with a bandwidth $\tilde b_Q$ determined by the smoothness parameter $\tilde\beta_Q$ and obtain an estimate $\hat f_{\mathrm{ref}}$. We also construct a confidence interval and compute its length, denoted $L_{\mathrm{CI}}$. In the confidence thresholding estimator, $\hat f_{\mathrm{ref}}$ serves as $\hat h_1$ and $L_{\mathrm{CI}}/2$ serves as $e_1$. We then fit another local polynomial regression to mimic $\hat h_2$; it can be fitted on either $D_{Q,1}$ or $D_P$, because $\hat h_2$ is allowed to be biased.
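The thresholding rule (3), which this construction applies pointwise, amounts to clipping $\hat h_2$ to the band $[\hat h_1 - e_1,\ \hat h_1 + e_1]$. A one-line NumPy sketch (the function name is ours):

```python
import numpy as np

def mu_ct(h1, h2, e1):
    """Confidence thresholding estimator of Eq. (3): move from h1 toward h2,
    but by at most e1 -- i.e., clip h2 to the band [h1 - e1, h1 + e1]."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return h1 + np.sign(h2 - h1) * np.minimum(np.abs(h2 - h1), e1)
```

Note that the expression is algebraically equivalent to `np.clip(h2, h1 - e1, h1 + e1)`, which makes the three-case description above transparent.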
To get faster convergence, we fit this second local polynomial regression with a larger bandwidth $\tilde b$; call this estimate $\hat f_{\mathrm{raw}}$. Note that $f - g$ is close to a polynomial in $L_1$ distance, so $\hat f_{\mathrm{raw}}$ plus some polynomial $\psi$ should be close to $f$. If $\psi$ were known, the confidence thresholding estimator $\hat f_{\mathrm{ct}}(\cdot; \psi) = \hat\mu_{\mathrm{ct}}(\hat f_{\mathrm{ref}}, \hat f_{\mathrm{raw}} + \psi, L_{\mathrm{CI}}/2)$ could be used to estimate $f(x)$. However, since $\psi$ is unknown, we first estimate it and then plug the estimate into the confidence thresholding estimator. The estimator $\hat\psi$ is obtained by minimizing the empirical mean squared error on the validation set $D_{Q,2}$:
$$\hat\psi = \operatorname*{arg\,min}_{\psi \in \mathrm{Poly}(\sqrt{\ln(n_Q)},\, l)} \ \sum_{i = \lfloor n_Q/2 \rfloor + 1}^{n_Q} \big[Y_i - \hat f_{\mathrm{ct}}(X_i; \psi)\big]^2.$$
Finally, we truncate the estimate $\hat\mu_{\mathrm{ct}}(\hat f_{\mathrm{ref}}(x), \hat f_{\mathrm{raw}} + \hat\psi, L_{\mathrm{CI}}/2)$, since $f$ is bounded. The CT algorithm is summarized in Algorithm 1.

Remark 1. The bandwidths $\tilde b_Q$ and $\tilde b$ are chosen such that $\tilde b_Q \propto n_Q^{-1/(2\tilde\beta_Q + d)}$ and $\tilde b \propto (n_Q \vee n_P)^{-1/(2\tilde\beta_{\max} + d)}$, since $n^{-1/(2\beta + d)}$ is the optimal bandwidth for estimating a $\beta$-smooth function from $n$ observations (Györfi et al., 2002).

3.3 Minimax Risk

The following theorem gives an upper bound for the risk of the CT algorithm. Recall $\beta_{\max} = \beta_Q \vee \beta_P$ and $n_{\max} = n_P \vee n_Q$.

Theorem 1 (Minimax upper bound).
Suppose in Algorithm 1 that $\tilde\beta_Q = \beta_Q$, $\tilde\beta_P = \beta_P$, and $l \ge w(\beta_{\max}) \vee T$. Then the risk of the algorithm satisfies
$$R(\hat f) = \sup_{(Q,P) \in \mathcal{F}(\beta_Q, \beta_P, \epsilon)} \mathbb{E}\big(\hat f(X) - f(X)\big)^2 \le C_U \cdot \Big( n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} \ln^4(n_{\max}) + \ln^8(n_Q) \cdot \big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{\ln^4(n_Q)}{n_Q} \Big),$$
for some constant $C_U > 0$ that depends only on $\beta_Q, \beta_P, u_1, u_2, M, T, d, l, L_P, L_Q, C_L, C_U$, and not on $n_Q, n_P, \epsilon$.

Algorithm 1: $A_1(\tilde\beta_Q, \tilde\beta_P, l)$
Input: Hölder smoothness parameters $\tilde\beta_Q, \tilde\beta_P$ and polynomial degree $l$.
1. Split $D_Q$ into $D_{Q,1} = \{(X_1, Y_1), \dots, (X_{\lfloor n_Q/2 \rfloor}, Y_{\lfloor n_Q/2 \rfloor})\}$ and $D_{Q,2} = \{(X_{\lfloor n_Q/2 \rfloor + 1}, Y_{\lfloor n_Q/2 \rfloor + 1}), \dots, (X_{n_Q}, Y_{n_Q})\}$.
2. Calculate $\tilde\beta_{\max} = \max(\tilde\beta_Q, \tilde\beta_P)$, $n_{Q,1} = |D_{Q,1}|$, $n_P = |D_P|$, $\tilde n_{\max} = \max(n_{Q,1}, n_P)$, $\tilde b = 1/\big\lfloor \tilde n_{\max}^{1/(2\tilde\beta_{\max}+d)} \big\rfloor$, and $\tilde b_Q = 1/\big\lfloor n_{Q,1}^{1/(2\tilde\beta_Q+d)} \big\rfloor$.
3. Fit local polynomial regression on $D_{Q,1}$ with bandwidth $\tilde b_Q$, and calculate the estimate and confidence interval length: $\hat f_{\mathrm{ref}}(\cdot) = \hat f_{\mathrm{lpr}}(\cdot; D_{Q,1}, l, \tilde b_Q)$, $L_{\mathrm{CI}} = L_{\mathrm{CI,lpr}}(D_{Q,1}, l, \tilde b_Q, \tilde\beta_Q)$.
4. If $n_{Q,1} > n_P$, fit local polynomial regression on $D_{Q,1}$ with bandwidth $\tilde b$: $\hat f_{\mathrm{raw}}(\cdot) = \hat f_{\mathrm{lpr}}(\cdot; D_{Q,1}, l, \tilde b)$; otherwise, fit it on $D_P$: $\hat f_{\mathrm{raw}}(\cdot) = \hat f_{\mathrm{lpr}}(\cdot; D_P, l, \tilde b)$.
5. Estimate $\hat\psi$ by
$$\hat\psi = \operatorname*{arg\,min}_{\psi \in \mathrm{Poly}(\sqrt{\ln(n_Q)},\, l)} \ \sum_{i = n_{Q,1}+1}^{n_Q} \big[Y_i - \hat f_{\mathrm{ct}}(X_i; \psi)\big]^2, \qquad \text{where } \hat f_{\mathrm{ct}}(x; \psi) = \hat\mu_{\mathrm{ct}}\big(\hat f_{\mathrm{ref}}(x), \hat f_{\mathrm{raw}}(x) + \psi(x), L_{\mathrm{CI}}/2\big).$$
6. Truncate the estimate at $n_Q$: $\hat f(x) = \operatorname{sgn}\big(\hat f_{\mathrm{ct}}(x; \hat\psi)\big) \cdot \big(|\hat f_{\mathrm{ct}}(x; \hat\psi)| \wedge n_Q\big)$.

The next theorem provides a lower bound for the minimax risk and shows that the CT algorithm is minimax optimal up to a logarithmic factor.

Theorem 2 (Minimax lower bound). There exists a constant $C_L > 0$ that depends only on $\beta_Q, \beta_P, u_1, u_2, M, T, d, L_P, L_Q$, and not on $n_Q, n_P, \epsilon$, such that
$$R_{\beta_Q, \beta_P, \epsilon} \ge C_L \cdot \Big( n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} + \big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{1}{n_Q} \Big).$$

Theorems 1 and 2 together show that the non-asymptotic minimax risk of transfer learning for nonparametric regression, $R_{\beta_Q, \beta_P, \epsilon}$, is proportional to
$$n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} + \big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{1}{n_Q}. \tag{4}$$
Comparing this risk with the minimax risk of nonparametric regression using observations from the target domain only, which is proportional to $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}}$, we can see when and by how much transfer learning improves the estimation accuracy for $f$. The necessary and sufficient condition for improvement is that the bias strength satisfies $\epsilon \ll n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$ and either the source domain has a smoother mean function ($\beta_P > \beta_Q$) or many more observations ($n_P \gg n_Q$). The second term $\big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$ in (4) represents the influence of the bias strength on the difficulty of the problem. It has two phase transition points. The first is $n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$: if the bias strength exceeds it, the minimax risk (4) is as large as the minimax risk of regression with the target domain data only, which is proportional to $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}}$.
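To see the moving parts of Algorithm 1 together, here is a deliberately simplified, self-contained sketch: $d = 1$, degree-0 bin means in place of local polynomials, and $\psi$ restricted to constant shifts. All function names, bandwidths, and the shift grid are our illustrative choices, not the paper's:

```python
import numpy as np

def binned_mean(X, Y, b):
    """Degree-0 local fit: bin means on [0,1) with bandwidth b."""
    nb = int(round(1.0 / b))
    idx = np.minimum((X / b).astype(int), nb - 1)
    m = np.array([Y[idx == a].mean() if (idx == a).any() else 0.0
                  for a in range(nb)])
    return lambda x: m[np.minimum((np.asarray(x) / b).astype(int), nb - 1)]

def ct_fit(XQ, YQ, XP, YP, e1, shifts):
    """Miniature CT pipeline: f_ref on half the target data (plays h1),
    f_raw on the larger source sample (plays h2), psi chosen on the target
    validation half, and f_raw + psi clipped to [f_ref - e1, f_ref + e1]
    as in Eq. (3)."""
    half = len(XQ) // 2
    f_ref = binned_mean(XQ[:half], YQ[:half], b=0.1)
    f_raw = binned_mean(XP, YP, b=0.05)    # larger sample -> finer bins
    Xv, Yv = XQ[half:], YQ[half:]

    def f_ct(x, psi):
        return np.clip(f_raw(x) + psi, f_ref(x) - e1, f_ref(x) + e1)

    psi_hat = min(shifts, key=lambda p: np.mean((Yv - f_ct(Xv, p)) ** 2))
    return (lambda x: f_ct(x, psi_hat)), psi_hat
```

When $g = f - c$ for a constant $c$ (a degree-0 "polynomial bias"), the validation search recovers the shift and the clipped estimator tracks $f$ even though $\hat f_{\mathrm{raw}}$ alone is biased.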
If the bias strength is smaller than it, then whether the minimax risk (4) is smaller than $n_Q^{-\frac{2\beta_Q}{2\beta_Q+d}}$ does not depend on the bias strength. In other words, $n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}$ is the maximum tolerable bias strength for transfer learning to help, and it quantifies the robustness of the model. The second phase transition point is $n_Q^{-\frac{\beta_Q+d}{2\beta_Q+d}} \vee n_Q^{\frac{\beta_Q}{2\beta_Q+d}} \cdot n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}}$. Whether the bias strength exceeds this point determines whether the bias term dominates the risk; if the bias strength is below it, transfer learning works as if there were no bias. The first term in (4) is the minimax rate for nonparametric regression over a $\beta_{\max}$-smooth Hölder class with $n_{\max}$ observations. This suggests that transfer learning can benefit from larger sample sizes and improved smoothness, regardless of whether these advantages are present in the same domain: essentially, transfer learning allows sample size and smoothness to be transferred to a common domain.

3.4 Discussion

We now take a closer look at the minimax risk in cases where the bias strength is small enough not to be the dominant term. We explore two distinctive phenomena exhibited by the minimax risk: auto-smoothing and super-acceleration.

• Auto-smoothing: When $\beta_P < \beta_Q$, the minimax rate is
$$n_{\max}^{-\frac{2\beta_Q}{2\beta_Q+d}} + \big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{1}{n_Q},$$
which does not depend on $\beta_P$. This implies that even if $g$ is highly irregular ($\beta_P \approx 0$), it is still possible to estimate $f$ as if $g$ were a $\beta_Q$-smooth function. The CT algorithm relies only on $\tilde\beta_Q$ and $\tilde\beta_{\max}$, so it is not affected by $\tilde\beta_P$ when $\tilde\beta_P \le \tilde\beta_Q$. This aligns with the auto-smoothing phenomenon observed in the minimax theory.
• Super-acceleration: In transfer learning, a common question is whether, and to what extent, observations from the source domain can significantly improve estimation accuracy in the target domain. If the source domain has a smoother mean function but a smaller sample size, i.e. $\beta_P > \beta_Q$ and $n_P < n_Q$, and the bias strength $\epsilon$ is sufficiently small, then the minimax risk for transfer learning is
$$R_{\beta_Q, \beta_P, \epsilon} = n_Q^{-\frac{2\beta_P}{2\beta_P+d}},$$
which is smaller than both the minimax risk for estimating $f$ using data from the target domain alone and the minimax risk for estimating $g$ using data from the source domain alone. We refer to this phenomenon as super-acceleration. It provides new insight into transfer learning by demonstrating that the source domain can significantly enhance performance on the target domain even if the task in the source domain is harder (based on data from the source domain alone). Similarly, super-acceleration also occurs if the source domain has a rougher mean function but more observations. On the other hand, if the source domain has both a smoother mean function and a larger sample size, it is not surprising that transfer learning improves the convergence rate; in this case $f$ can be estimated as accurately as $g$, since the minimax risk for transfer learning is of order $n_P^{-\frac{2\beta_P}{2\beta_P+d}}$ when $\epsilon$ is sufficiently small. There are other transfer learning results, for different tasks, where the best one can do on the target domain matches the performance of the corresponding task on the source domain; we refer to this kind of acceleration as normal acceleration. The following table summarizes the different cases.
                 β_Q > β_P            β_Q = β_P            β_Q < β_P
n_Q ≫ n_P       no acceleration      no acceleration      super-acceleration
n_Q ∝ n_P       no acceleration      no acceleration      normal acceleration
n_Q ≪ n_P       super-acceleration   normal acceleration  normal acceleration

4 Adaptive Confidence Thresholding Algorithm

Section 3 establishes the non-asymptotic minimax risk for estimation over the parameter space $\mathcal{F}(\beta_Q, \beta_P, \epsilon)$ and the optimality of the CT algorithm. However, the CT algorithm requires knowledge of the smoothness parameters $\beta_Q$ and $\beta_P$, which are typically unknown. A natural and important question is whether it is possible to construct a data-driven algorithm that adaptively achieves the optimal risk simultaneously over a wide range of parameter spaces. In this section we develop such an algorithm, called the adaptive confidence thresholding (ACT) algorithm, which is based on the CT algorithm and consists of three main steps:

• Step 1: Constructing a set of candidate smoothness parameter pairs. Since the CT algorithm depends only on $\tilde\beta_Q$ and $\tilde\beta_{\max}$, we construct a finite set $S_Q \subset \mathbb{R}$ of candidate values for $\tilde\beta_Q$ and a finite set $S_{\max}$ of candidate values for $\tilde\beta_{\max}$; $\tilde\beta_Q$ and $\tilde\beta_{\max}$ are chosen from $S_Q$ and $S_{\max}$, respectively. Both $S_Q$ and $S_{\max}$ are arithmetic sequences, with common differences roughly proportional to $1/\ln(n_Q)$ and $1/\ln(n_{\max})$, respectively.

• Step 2: Selecting the best pair of smoothness parameters. For each pair $(\tilde\beta_Q, \tilde\beta_{\max})$ we construct an estimator $\hat f_{\mathrm{ct}}$ as in the CT algorithm, and we select the best pair $(\beta^*_Q, \beta^*_{\max})$ by minimizing the empirical MSE on the validation data $D_{Q,2}$.

• Step 3: Plugging the selected smoothness parameters into the CT algorithm. Run the CT algorithm with $(\beta^*_Q, \beta^*_{\max})$ as the smoothness parameters.

The ACT algorithm is summarized in Algorithm 2.
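Steps 1 and 2 are, in miniature, a validation-set grid search over candidate smoothness values, each of which induces a bandwidth. The following self-contained sketch ($d = 1$, degree-0 bin-mean fits, illustrative names) shows only this selection logic, not the full ACT estimator:

```python
import numpy as np

def binned_fit(X, Y, b):
    """Degree-0 local fit: bin means on [0,1) with bandwidth b."""
    nb = int(round(1.0 / b))
    idx = np.minimum((X / b).astype(int), nb - 1)
    m = np.array([Y[idx == a].mean() if (idx == a).any() else 0.0
                  for a in range(nb)])
    return lambda x: m[np.minimum((np.asarray(x) / b).astype(int), nb - 1)]

def act_select(X_train, Y_train, X_val, Y_val, betas):
    """Steps 1-2 of ACT in miniature (d = 1): each candidate smoothness beta
    induces a bandwidth ~ n^(-1/(2*beta + 1)); pick the beta whose fitted
    estimator minimizes the empirical MSE on the validation half."""
    n = len(X_train)
    best_beta, best_mse, best_fit = None, np.inf, None
    for beta in betas:
        b = 1.0 / max(1, int(n ** (1.0 / (2 * beta + 1))))
        fit = binned_fit(X_train, Y_train, b)
        mse = np.mean((Y_val - fit(X_val)) ** 2)
        if mse < best_mse:
            best_beta, best_mse, best_fit = beta, mse, fit
    return best_beta, best_fit
```

An overly large candidate smoothness yields an overly wide bandwidth (and a badly biased fit), so the validation MSE steers the selection toward a workable bandwidth without knowing the true smoothness.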
Note that in Step 2 we minimize the empirical mean squared error on the validation data $D_{Q,2}$ to select the best polynomial $\psi$ for each pair $(\tilde\beta_Q, \tilde\beta_{\max})$, and in Step 3 we minimize the same empirical mean squared error on the same validation data to select the best pair $(\tilde\beta_Q, \tilde\beta_{\max})$. These two steps can therefore be combined into a single step in which we minimize the empirical mean squared error on the validation data over all choices of $(\tilde\beta_Q, \tilde\beta_{\max})$ and polynomial $\psi$.

Theorem 3 (Adaptive upper bound). Suppose in Algorithm 2 that $l \ge w(\beta_{\max}) \vee T$. Then the risk of the algorithm satisfies
$$R(\hat f_{\mathrm{ada}}) = \sup_{(Q,P) \in \mathcal{F}(\beta_Q, \beta_P, \epsilon)} \mathbb{E}\big(\hat f_{\mathrm{ada}}(X) - f(X)\big)^2 \le C^{\mathrm{ada}}_U \cdot \Big( n_{\max}^{-\frac{2\beta_{\max}}{2\beta_{\max}+d}} \cdot \ln^4(n_{\max}) + \ln^8(n_Q) \cdot \big(\epsilon \wedge n_Q^{-\frac{\beta_Q}{2\beta_Q+d}}\big) \cdot n_Q^{-\frac{\beta_Q}{2\beta_Q+d}} + \frac{\ln^4(n_Q)}{n_Q} \Big),$$
for some constant $C^{\mathrm{ada}}_U > 0$ not depending on $n_Q, n_P, \epsilon$. Therefore the data-driven estimator simultaneously achieves the minimax risk, up to a logarithmic factor, over a large collection of parameter spaces.

Remark 2. Theorem 3 holds when the intermediate quantity $L_{\mathrm{CI,lpr}}(D_{Q,1}, l, \tilde b_Q, \tilde\beta_Q)$ is set to $C \cdot \sqrt{\ln(|D_{Q,1}|)} \cdot \big(\tilde b_Q^{\tilde\beta_Q} + \frac{\ln^2(|D_{Q,1}|)}{\sqrt{|D_{Q,1}| \cdot \tilde b_Q^d}}\big)$ for any constant $C > 0$; in Algorithm 2, $C$ is taken to be $2$. In practice, it may be beneficial to tune the constant $C$. This can easily be integrated into the algorithm by tuning $C$ along with the parameters $\tilde\beta_Q$ and $\tilde\beta_{\max}$ on the second half of the dataset.

5 Simulation

In this section, we evaluate the performance of the ACT algorithm through simulations and compare it to existing methods. The numerical results further support our theoretical analysis.
Recall that the minimax risk (4) is affected by both the sample size and the bias strength. To demonstrate their impact on empirical performance, we conduct two series of experiments: in the first we fix all other parameters and vary the sample size, and in the second we fix all other parameters and vary the bias strength. In all experiments, we set the dimension to $d = 1$, the covariate distributions on both the source and target domains to the uniform distribution on $[0,1]$, and the random noise on both domains to normal with mean zero and standard deviation $1/3$. We evaluate all algorithms using the mean squared error (MSE), the expected squared $L_2$ distance between the estimator and the true mean function, computed by averaging over 2000 random repetitions.

In the first series of experiments, we investigate the influence of sample size by fixing the bias strength and varying the sample size. Specifically, we let the mean functions be
$$f(x) = \sin(10\pi x) + x^{3/2} - 0.1x + (0.1 - |x - 0.5|)_+, \qquad g(x) = \sin(10\pi x) + x^{3/2}.$$
Thus $g$ differs from $f$ by a linear function and a small spike of width $0.2$ and height $0.1$. In this case $f \in \mathcal{H}(1, 40)$ and $g \in \mathcal{H}(3/2, 40)$. The sample size of the target domain is fixed at $200$, and the sample sizes of the source domain are taken to be $(300, 600, 1200, 2400, 4800)$. In this series of experiments, $g$ is smoother than $f$ and $n_P$ is greater than $n_Q$, so $g$ is easier to estimate. We compare the performance of the ACT algorithm to that of local polynomial regression using only data from the target domain, with the bandwidth for local polynomial regression chosen by five-fold cross-validation. By comparing ACT with local polynomial regression, we can gauge the improvement gained through transfer learning for various sample sizes from the source domain.
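The data-generating process of this first series can be written down directly (a sketch; the variable names are ours, and we draw one of the five source sample sizes):

```python
import numpy as np

def f_target(x):
    # Target mean: sin(10*pi*x) + x^(3/2) - 0.1x + a spike of width 0.2, height 0.1
    return (np.sin(10 * np.pi * x) + x ** 1.5 - 0.1 * x
            + np.maximum(0.1 - np.abs(x - 0.5), 0.0))

def g_source(x):
    # Source mean: differs from f by a linear piece and the spike
    return np.sin(10 * np.pi * x) + x ** 1.5

def draw(n, mean_fn, rng):
    """Uniform covariates on [0,1], Gaussian noise with sd 1/3."""
    X = rng.uniform(0.0, 1.0, n)
    Y = mean_fn(X) + rng.normal(0.0, 1.0 / 3.0, n)
    return X, Y

rng = np.random.default_rng(0)
XQ, YQ = draw(200, f_target, rng)    # target sample, n_Q = 200
XP, YP = draw(2400, g_source, rng)   # one of the source sizes
```

Away from the spike, $f - g$ reduces to the linear term $-0.1x$, which is exactly the kind of polynomial difference the CT/ACT machinery is designed to absorb.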
In the second series of experiments, we investigate the impact of bias strength by fixing the sample sizes and varying the bias strength. Specifically, we let the mean functions be
$$f(x) = \sin(10\pi x) + x^{3/2}, \qquad g(x) = \sin(10\pi x) + x^{3/2} - 0.1x + \Big(3 - \frac{6}{l_{\mathrm{wid}}} |x - 0.5|\Big)_+,$$
where $l_{\mathrm{wid}}$ is taken to be $0, 0.005, 0.01, 0.015, 0.02$ in the five experiments. In this case $g$ equals $f$ plus a linear function plus a spike of width $l_{\mathrm{wid}}$ and height $3$. Note that $\epsilon$ is proportional to $l_{\mathrm{wid}}$ here; the resulting bias strengths are $(0, 0.00375, 0.0075, 0.01125, 0.015)$. For each of the latter four cases, $l_{\mathrm{wid}} \in (0.005, 0.01, 0.015, 0.02)$, we have $g \in \mathcal{H}(1, 6/l_{\mathrm{wid}} + 40)$, and in the first case, where $l_{\mathrm{wid}} = 0$, we have $g \in \mathcal{H}(3/2, 40)$. The sample sizes of the target and source domains are fixed at $200$ and $600$, respectively. We compare the performance of ACT with local polynomial regression using only the observations from the target domain to study the effect of transfer learning at varying bias strengths. Additionally, we compare with local polynomial regression using only the observations from the source domain to estimate $g$; these comparisons help illustrate the super-acceleration phenomenon. Both local polynomial regressions are fitted with bandwidths selected by five-fold cross-validation.

Figure 2a presents the results of the first series of experiments: the MSEs of local polynomial regression with cross-validation and of the ACT algorithm for various sample sizes. As noted, in the first series $g$ is smoother and has more observations, making it easier to estimate. The plot clearly demonstrates the gap in performance between the ACT algorithm and local polynomial regression, as predicted by the theory.
Additionally, the plot indicates that the ACT algorithm's performance improves as the source sample size increases, although this improvement levels off when $n_P$ is large ($n_P > 2400$). This is also consistent with the minimax theory: the minimax risk (4) in this case is proportional to
$$n_P^{-3/4} + \big(\epsilon \wedge n_Q^{-1/3}\big) \cdot n_Q^{-1/3} + n_Q^{-1},$$
which decreases as $n_P$ grows when $n_P$ is moderate, and stays fixed once $n_P$ is large enough that $n_P^{-3/4}$ is dominated by the other two terms.

Figure 2b illustrates the results of the second series of experiments: the MSEs of local polynomial regression with cross-validation, for both $f$ and $g$, and of the ACT algorithm, at different bias strengths. We first compare the ACT algorithm and local polynomial regression for estimating $f$ using observations from the target domain only. The results show a clear gap in MSE between the two methods when the bias strength is small enough ($\epsilon < 0.01125$). As the bias strength increases, the MSE of ACT grows and eventually becomes as large as that of local polynomial regression when the bias strength is large enough ($\epsilon \ge 0.015$). These findings are consistent with the minimax theory, which predicts that transfer learning improves performance when the bias strength is small, with the benefit shrinking as it increases. To further illustrate the different types of acceleration, we also compare with local polynomial regression for estimating $g$. In the special case $\epsilon = 0$, where $g$ is as smooth as $f$, normal acceleration is observed, as discussed in Section 3: ACT performs worse than estimating $g$ with local polynomial regression but better than estimating $f$ with local polynomial regression. In the general case where $\epsilon > 0$, $g$ is rougher than $f$ but has more observations.
The theory predicts a super-acceleration phenomenon when the bias strength is small enough. Indeed, when the bias strength is small but nonzero ($0.00375 \le \epsilon \le 0.01125$), the ACT algorithm outperforms local polynomial regression for estimating both $f$ and $g$, validating the theoretical predictions.

Figure 2: MSEs of different regression methods; (a) experiments with different $n_P$, (b) experiments with different $\epsilon$. Blue: MSE of local polynomial regression on the target domain. Red: MSE of ACT on the target domain. Green: MSE of local polynomial regression on the source domain.

6 Application

In this section, we demonstrate an application of the adaptive estimator using the wine quality data from Cortez et al. (2009). The dataset comprises both red and white wine quality records, which share the same features and outcome (wine quality). The aim is to build a regression model that predicts wine quality from the features. The white wine dataset serves as the source domain and the red wine dataset as the target domain; the objective is to investigate whether using the white wine dataset can enhance the prediction of red wine quality. As in Section 5, we compare the performance of local polynomial regression applied directly to the red wine dataset with that of our transfer learning algorithm. Both algorithms are based on local polynomial regression, which is suitable for low-dimensional problems. However, the original dataset has 13 features, too many for local polynomial regression at the given sample size. To address this, we select the most influential feature, "alcohol," using feature importance ranking with random forests (Breiman, 2001), and use only this feature.

Figure 3: MSEs of different regression methods. Red: MSE of local polynomial regression on the target domain. Green: MSE of ACT.
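The screening step just described can be illustrated as follows. The paper uses random-forest feature importance (Breiman, 2001); to keep this sketch dependency-free, we substitute a simple absolute-correlation ranking on synthetic stand-in data, so all names and data here are hypothetical:

```python
import numpy as np

def rank_features(X, y):
    """Rank columns of X by absolute Pearson correlation with y --
    a lightweight stand-in for the random-forest importance ranking
    used in the paper to pick the single feature 'alcohol'."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).mean(axis=0) / (Xc.std(axis=0) * yc.std() + 1e-12)
    order = np.argsort(-np.abs(corr))
    return order, corr

# Synthetic stand-in for the wine data: column 2 plays the role of "alcohol".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 0.9 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.5, size=500)
order, corr = rank_features(X, y)
```

Keeping only `X[:, order[0]]` then reduces the 13-dimensional problem to the single-feature setting in which local polynomial regression is practical.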
Both local polynomial regression and the transfer learning algorithm have tuning parameters, so for a fair comparison we use half of the training samples as a validation dataset to tune the parameters of both algorithms. The degrees of all local polynomials used in both algorithms are set to $1$, as is the degree of the polynomial used to approximate $f - g$ in the ACT algorithm. We let $n_P = 4898$ and $n_Q \in (100, 200, 300, 400)$. The remaining observations in the target domain serve as test data for evaluating both algorithms via the MSE. Figure 3 shows the MSEs of the two algorithms for different target sample sizes. As more observations from the target domain are used, the relative contribution of the source dataset decreases; however, the proposed adaptive estimator consistently outperforms naive local polynomial regression. This shows that in this application, performance on the target task can be significantly improved by transfer learning when the source domain has many more observations.

7 Multiple Source Domains

We have so far focused on the single source domain setting. In practical applications, it is common to have data from multiple source domains. In this section, we expand our analysis to the scenario of utilizing data from multiple source domains, generalizing the procedures and results from the single source domain case. We consider the following model, in which observations from multiple source distributions $P_1, \dots, P_K$ and one target distribution $Q$ are available. Suppose there are $n_{P_j}$ observations $\{(X'_{1,j}, Y'_{1,j}), \dots, (X'_{n_{P_j},j}, Y'_{n_{P_j},j})\}$ from $P_j$ for each $j = 1, \dots, K$, and $n_Q$ observations $\{(X_1, Y_1), \dots, (X_{n_Q}, Y_{n_Q})\}$ from $Q$, with all observations independent. Analogously to the single-source model, let
$$Y_i = f(X_i) + z_i, \quad i = 1, \dots, n_Q, \qquad Y'_{i,j} = g_j(X'_{i,j}) + z'_{i,j}, \quad i = 1, \dots, n_{P_j},\ j = 1, \dots, K.$$
. , nPj, j = 1, . . . , K. where {zi} and {z\u2032 i,j} are i.i.d. zero mean random noises. The parameter space is defined as follows: F(\u03b2Q, \u03b2P , \u03f5, u1, u2, M, T, \u2212 \u2192 LP , LQ) = \u001a (Q, P1, . . . , PK) : f \u2208H(\u03b2Q, LQ), gj \u2208H(\u03b2P , LPj), f \u2212gj \u2208\u03a8(\u03f5, M, T), z1|X1, z\u2032 1,j|X\u2032 1,j \u2208G(u1, u2), j = 1, . . . , K \u001b , where \u2212 \u2192 LP = (LP1, . . . , LPK). This space will be denoted by F for simplicity when there is no confusion. We establish the minimax risk in this section, We also construct a datadriven algorithm, which is an extension of Algorithm 2, that adaptively achieves the minimax risk up to a logarithmic factor. For reasons of space, the algorithm is given in the Supplementary Material (Cai and Pu, 2022). Theorem 4 (lower bound). Let \u03b2max = max(\u03b2Q, \u03b2P ) and nP = PK j=1 nP,j. Let all assumptions be satisfied, there exists some constant C > 0 that only depends on \u03b2Q, \u03b2P , u1, u2, M, K, d, LP , LQ and not on nQ, nP,1, . . . , nP,K, \u03f5 such that R\u03b2Q,\u03b2P ,\u03f5 \u2265C \u00b7 \u0012 n \u2212 2\u03b2max 2\u03b2max+d max + (\u03f5 \u2227n \u2212 \u03b2Q 2\u03b2Q+d Q ) \u00b7 n \u2212 \u03b2Q 2\u03b2Q+d Q + 1 nQ \u0013 . Theorem 5 (adaptive upper bound). Suppose in Algorithm 2, l \u2265w(\u03b2max)\u2228T, then the risk of this algorithm satisfies R(nQ; nP ) \u2264C \u00b7 \u0012 n \u2212 2\u03b2max 2\u03b2max+d max ln4(nmax)+ ln8(nQ) \u00b7 (\u03f5 \u2227n \u2212 \u03b2Q 2\u03b2Q+d Q ) \u00b7 n \u2212 \u03b2Q 2\u03b2Q+d Q + ln4(nQ) nQ \u0013 , for some constant C > 0 that only depends on \u03b2Q, \u03b2P , u1, u2, M, K, d and not on nQ, nP,1, . . . , nP,K, \u03f5. 
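The multiple-source data-generating model above is easy to simulate. The sketch below is our own illustration (the choices of $f$, the shifts defining each $g_j$, the noise level, and the sample sizes are all arbitrary); it draws one target sample from $Q$ and $K$ source samples from $P_1, \ldots, P_K$, with every mean shift $f - g_j$ kept small, playing the role of $\epsilon$:

```python
import numpy as np

# Illustrative simulation of the multiple-source posterior drift model
# (f, g_j, noise level, and sample sizes are our own choices, not the paper's):
#   Y_i = f(X_i) + z_i on the target Q, and Y'_{i,j} = g_j(X'_{i,j}) + z'_{i,j}
# on each source P_j, where each offset f - g_j is small in magnitude.
rng = np.random.default_rng(0)
K, d = 3, 1
n_Q, n_P = 100, 1000
eps = 0.05  # bound on the magnitude of the mean shift f - g_j

f = lambda x: np.sin(2 * np.pi * x[..., 0])  # target regression function
X_Q = rng.uniform(0, 1, size=(n_Q, d))
Y_Q = f(X_Q) + 0.1 * rng.standard_normal(n_Q)

sources = []
for j in range(K):
    # g_j differs from f by a constant c_j with |c_j| <= eps
    g_j = lambda x, c=eps * (j + 1) / K: f(x) + c
    X_P = rng.uniform(0, 1, size=(n_P, d))
    Y_P = g_j(X_P) + 0.1 * rng.standard_normal(n_P)
    sources.append((X_P, Y_P))

print(len(sources), X_Q.shape, Y_Q.shape)
```

Any transfer learning estimator for this setting would then pool the $K$ source samples with the target sample; the constant shifts used here are the simplest members of the polynomial class $\Psi(\epsilon, M, T)$.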
8 Discussion

We studied in the present paper transfer learning for nonparametric regression under the posterior drift model and established the minimax risk, which quantifies when and how much data from the source domains can improve the performance of nonparametric regression in the target domain. A novel, data-driven algorithm is developed and shown to be adaptively minimax optimal, up to a logarithmic factor, over a wide range of parameter spaces. The minimax risk of this problem exhibits interesting and novel phenomena. The "auto-smoothing" phenomenon demonstrates that transfer learning can smooth the mean function of the source domain when it is rougher than that of the target domain. The "super-acceleration" phenomenon shows that even if the task of the source domain is more difficult, it may still be beneficial for the regression task in the target domain in certain cases. Further research on other transfer learning problems could yield similar phenomena. We use the $L_1$ norm to measure bias strength in this paper, but it is easy to generalize to all $L_p$ norms, because the $L_1$ norm is smaller than or equal to every $L_p$ norm for $p \ge 1$. Additionally, polynomial functions are used to approximate the difference between the mean functions of the source and target domains, but it could be interesting to consider other collections of functions in the future. These functions should be easier to estimate than the source and target mean functions; examples could include infinitely differentiable functions or general Hölder functions with smoothness larger than $\beta_{\max}$. In this paper, we consider the common support $\Xi$ of the covariates of the source and target domains to be a hypercube of dimension $d$ with edges of length 1, and develop methods and theory for this case. These results can also be generalized to other types of supports.
Specifically, by using linear transformations, our results can be extended to all hypercube-shaped supports. Additionally, we can further generalize our results to more general types of supports by making an assumption on the measure of points not contained in a grid of hypercubes with edge length \u03b4. If this measure is bounded by O(\u03b4\u03b6) for some \u03b6 > 0, our methods and theory can be applied to that support. Examples of supports that satisfy this assumption include all bounded convex sets with \u03b6 = 1. Our methods and upper bounds can be adapted to these other supports by considering only the hypercubes contained within them and ignoring the remaining points. The risk of the generalized algorithm is then upper bounded by a constant times the corresponding upper bound for the hypercube support plus O(\u03b4\u03b6). When \u03b6 is large enough in relation to the smoothness parameters of the problem, this upper bound matches the lower bound." + }, + { + "url": "http://arxiv.org/abs/2401.03820v1", + "title": "Optimal Differentially Private PCA and Estimation for Spiked Covariance Matrices", + "abstract": "Estimating a covariance matrix and its associated principal components is a\nfundamental problem in contemporary statistics. While optimal estimation\nprocedures have been developed with well-understood properties, the increasing\ndemand for privacy preservation introduces new complexities to this classical\nproblem. In this paper, we study optimal differentially private Principal\nComponent Analysis (PCA) and covariance estimation within the spiked covariance\nmodel.\n We precisely characterize the sensitivity of eigenvalues and eigenvectors\nunder this model and establish the minimax rates of convergence for estimating\nboth the principal components and covariance matrix. 
These rates hold up to\nlogarithmic factors and encompass general Schatten norms, including spectral\nnorm, Frobenius norm, and nuclear norm as special cases.\n We introduce computationally efficient differentially private estimators and\nprove their minimax optimality, up to logarithmic factors. Additionally,\nmatching minimax lower bounds are established. Notably, in comparison with\nexisting literature, our results accommodate a diverging rank, necessitate no\neigengap condition between distinct principal components, and remain valid even\nif the sample size is much smaller than the dimension.", + "authors": "T. Tony Cai, Dong Xia, Mengyue Zha", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.IT", + "math.IT", + "stat.ME", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction The covariance structure plays a fundamental role in multivariate analysis, and Principal Component Analysis (PCA) is a widely recognized technique known for its efficacy in dimension reduction and feature extraction (Anderson, 2003). PCA is particularly adept in settings where the data is high-dimensional but the underlying signal displays a low-dimensional structure. The estimation of covariance matrices and principal components finds applications across a diverse spectrum, encompassing tasks such as image recognition, data compression, clustering, risk management, portfolio allocation, mean tests, independence tests, and correlation analysis. Methodologies and theoretical advancements, including minimax optimality, for covariance matrix estimation and PCA, have been well-established in both low-dimensional and high-dimensional settings. See, for example, Koltchinskii and Lounici (2017); Vershynin (2012); Srivastava and Vershynin (2013); Bickel and Levina (2008); Cai et al. (2010); Ravikumar et al. (2011); Johnstone (2001); Cai et al. (2013, 2015); Zhang et al. (2022). 
For a survey on optimal estimation of high-dimensional covariance structures, see Cai et al. (2016). Amidst the increasing availability of large datasets containing sensitive personal information, privacy concerns in statistical data analysis have gained heightened prominence. The utilization of personal information in statistical analyses raises apprehensions about the potential compromise of individual privacy. Consequently, there is a growing emphasis on developing methodologies and techniques that offer robust privacy guarantees while still facilitating accurate statistical insights. This motivates a comprehensive exploration of the optimal tradeoff between privacy and accuracy in fundamental statistical problems, including PCA and covariance matrix estimation. Differential privacy (DP), a concept introduced by Dwork et al. (2006), provides a framework for safeguarding individual privacy in statistical analysis. DP has become a commonly accepted standard in both industrial and governmental applications (Erlingsson et al., 2014; Ding et al., 2017; Apple Differential Privacy Team, 2017; Abowd, 2016; Abowd et al., 2020). The goal of the present paper is to develop methods and optimality results for PCA and covariance matrix estimation within the framework of the spiked covariance model under DP constraints.

1.1 Problem formulation

We begin by formally introducing the spiked covariance model and the general formulation of the privacy-constrained estimation problems. The spiked covariance structure (Johnstone, 2001; Johnstone and Lu, 2009) naturally arises from factor models with homoscedastic noise and has found diverse applications in signal processing, chemometrics, econometrics, population genetics, and various other fields. See, for example, Fan et al. (2008); Kritchman and Nadler (2008); Onatski (2012); Patterson et al. (2006).
The spiked covariance model assumes that the population covariance matrix can be decomposed as
$$\Sigma = U\Lambda U^\top + \sigma^2 I_p, \qquad (1)$$
where $U \in O_{p,r}$ and $\Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_r)$ represent the leading eigenvectors and eigenvalues (excluding $\sigma^2$), respectively. Here, $O_{p,r}$ denotes the set of $p \times r$ matrices satisfying $U^\top U = I_r$. The spiked covariance model is convenient for studying the distribution of sample eigenvalues and eigenvectors, which play a critical role in the statistical inference of $\Sigma$ and its eigenvectors. For instance, Donoho et al. (2018) studied the optimal shrinkage of sample eigenvalues in the spiked covariance model. In particular, Cai et al. (2015) and Zhang et al. (2022) established the minimax optimal rates
$$\inf_{\hat U} \sup_{\Sigma \in \Theta(\lambda, \sigma^2)} \mathbb{E}\|\hat U\hat U^\top - UU^\top\| \asymp \Big(\frac{\sigma^2}{\lambda} + \sqrt{\frac{\sigma^2}{\lambda}}\Big)\sqrt{\frac{p}{n}}; \qquad \inf_{\hat\Sigma} \sup_{\Sigma \in \Theta(\lambda, \sigma^2)} \mathbb{E}\|\hat\Sigma - \Sigma\| \asymp \lambda\sqrt{\frac{r}{n}} + \sqrt{\sigma^2(\lambda + \sigma^2)}\sqrt{\frac{p}{n}}, \qquad (2)$$
where the infimum is taken over all possible estimators using the data set $X = (X_1, \cdots, X_n)$ consisting of $n$ observations sampled independently from the spiked covariance model (1), the parameter set $\Theta(\lambda, \sigma^2)$ is defined in (4) with $\lambda$ being the magnitude of the eigenvalues, and $\|\cdot\|$ denotes the matrix spectral norm. The concept of differential privacy was first introduced in Dwork et al. (2006). For a given dataset $X$ and any $\varepsilon > 0$ and $\delta \in [0, 1)$, a randomized algorithm $A$ that maps $X$ into $\mathbb{R}^{d_1 \times d_2}$ is called $(\varepsilon, \delta)$-differentially private ($(\varepsilon, \delta)$-DP) over the dataset $X$ if
$$\mathbb{P}\big(A(X) \in Q\big) \le e^{\varepsilon}\,\mathbb{P}\big(A(X') \in Q\big) + \delta,$$
for all measurable subsets $Q \subset \mathbb{R}^{d_1 \times d_2}$ and all neighboring data sets $X'$.
In the standard definition, a dataset $X'$ is a neighbor of $X$ if they differ by only one datum, i.e., one observation in $X$ is replaced by some other, possibly arbitrary, datum. In the context of PCA and covariance matrix estimation, as observations in $X$ are independently sampled from a common distribution, a neighboring dataset $X'$ is obtained by replacing one datum in $X$ with an independent copy. This facilitates exploration of the statistical properties of the sample data. Under the $(\varepsilon, \delta)$-DP constraint, our goal is to investigate the cost of privacy in PCA and covariance matrix estimation. This includes designing minimax optimal $(\varepsilon, \delta)$-DP estimators of the principal components and covariance matrix and establishing the privacy-constrained minimax lower bounds.

1.2 Main contribution

In this paper, we establish the minimax optimal rates for PCA and covariance matrix estimation in the spiked model under DP constraints. These rates, up to logarithmic terms, are given by:
$$\inf_{\hat U \in \mathcal{U}_{\varepsilon,\delta}} \sup_{\Sigma \in \Theta(\lambda, \sigma^2)} \frac{\mathbb{E}\|\hat U\hat U^\top - UU^\top\|_q}{r^{1/q}} \asymp \Big(\frac{\sigma^2}{\lambda} + \sqrt{\frac{\sigma^2}{\lambda}}\Big)\Big(\sqrt{\frac{p}{n}} + \frac{p\sqrt{r}}{n\varepsilon}\Big) \wedge 1;$$
$$\inf_{\hat\Sigma \in \mathcal{M}_{\varepsilon,\delta}} \sup_{\Sigma \in \Theta(\lambda, \sigma^2)} \frac{\mathbb{E}\|\hat\Sigma - \Sigma\|_q}{r^{1/q}} \asymp \Big(\lambda\Big(\sqrt{\frac{r}{n}} + \frac{r^{3/2}}{n\varepsilon}\Big) + \sqrt{\sigma^2(\lambda + \sigma^2)}\Big(\sqrt{\frac{p}{n}} + \frac{\sqrt{r}\,p}{n\varepsilon}\Big)\Big) \wedge \lambda, \qquad (3)$$
where the infimum is taken over all possible $(\varepsilon, \delta)$-DP algorithms, denoted by $\mathcal{U}_{\varepsilon,\delta}$ for the principal components and $\mathcal{M}_{\varepsilon,\delta}$ for the covariance matrix. The expectation is taken with respect to the randomness of both the data and the differentially private algorithm. These rates hold in Schatten-$q$ norms for all $q \in [1, \infty]$, including the spectral norm ($q = \infty$), Frobenius norm ($q = 2$), and nuclear norm ($q = 1$) as special cases.
The rank $r$ can grow with $p$ as long as $r \le p/2$, and the sample size can be much smaller than $p$ as long as the signal-to-noise ratio (SNR) satisfies $\lambda/\sigma^2 \ge C_1(\sqrt{p/n} + p/n)$. This condition is minimal, since no consistent estimation is possible when it does not hold. To our knowledge, this represents the first comprehensive presentation of minimax optimal rates for PCA and covariance matrix estimation under DP constraints. Our contributions are manifold. Methodologically, we introduce $(\varepsilon, \delta)$-DP estimators for PCA and covariance matrices that are computationally efficient. Specifically, we employ the Gaussian mechanism for the sample spectral projector in differentially private PCA. Notably, our DP estimator for the covariance matrix incorporates a novel design to handle unknown orthogonal rotations. These estimators are shown to achieve minimax optimality, up to logarithmic factors. Theoretically, we provide a comprehensive understanding of the minimax optimal rates for PCA and covariance estimation under privacy constraints, valid across all Schatten norms. The derivation of the minimax lower bounds employs Fano's lemma with a differential privacy constraint and the construction of well-separated spectral projectors based on the packing complexity of Grassmannians (Koltchinskii and Xia, 2015; Zhang and Xia, 2018). Differentially private PCA and covariance estimation are challenging because it is difficult to characterize a sharp sensitivity bound for the eigenvectors. Our main technical contribution lies in a precise characterization of the sensitivity of the sample spectral projector $\hat U\hat U^\top$, quantifying its deviation when one datum $X_i$ is replaced by an independent copy $X_i'$. A key technical tool is an explicit spectral representation formula for $\hat U\hat U^\top$ adapted from Xia (2021). We derive a similar formula specifically for the spiked covariance model, which is of independent interest.
Based on this sharp sensitivity analysis, we apply the Gaussian mechanism to achieve the upper bounds in (3), up to logarithmic terms.

1.3 Related work

Minimax optimal rates under $(\varepsilon, \delta)$-DP guarantees have been established for several statistical problems, such as mean estimation, linear regression, pairwise comparisons, matrix completion, factorization, generalized linear models (GLMs), and sparse GLMs (Cai et al., 2021, 2023; Chien et al., 2021; Wang et al., 2023; Cai et al., 2023). Additionally, optimality results have also been developed under local privacy constraints. For example, Duchi et al. (2018) established minimax rates for mean estimation, GLMs, and nonparametric density estimation, while Rohde and Steinberger (2020) developed minimax theory for estimating linear functionals under local privacy. It is worth noting that local privacy is a stronger notion of privacy than $(\varepsilon, \delta)$-DP, and it may not be compatible with high-dimensional problems (Duchi et al., 2018). Differentially private PCA algorithms were proposed in Blum et al. (2005); Chaudhuri et al. (2011); Dwork et al. (2014) based on the perturbation mechanism, treating each datum $X_i$ as a fixed vector and investigating the sensitivity of the sample eigenvectors. However, their deterministic sensitivity analysis disregards the statistical properties of the sample data, resulting in suboptimal error rates when the $X_i$'s are i.i.d. sampled from a common distribution, such as under the spiked covariance model. Recently, Liu et al. (2022) introduced an online PCA algorithm with DP, providing a much sharper upper bound for differentially private PCA under the spiked covariance model. The online Oja's algorithm in Liu et al. (2022) consumes one datum at a time, allowing for an explicit representation formula for the updated estimate of the eigenvectors and enabling a study of their sensitivity.
However, their bound is valid only for the rank-one case ($r = 1$) and is minimax optimal only when $\lambda \ge \sigma^2$. The optimality of their algorithm for general rank $r$ or $\lambda \ll \sigma^2$ remains unclear. Moreover, the minimax optimal rates for estimating $\Sigma$ under privacy constraints are still unknown.

1.4 Organization of the paper

The rest of the paper is organized as follows. In Section 2, we introduce the Gaussian mechanism and study the sensitivity of the empirical spectral projector under the spiked covariance model. We present a DP algorithm for estimating the spectral projector and the spiked covariance matrix in the same section. The upper bounds for our proposed DP algorithms are proven in Section 3, where an explicit spectral representation formula under the spiked covariance model is also developed. Section 4 establishes a differentially private Fano's lemma and the minimax lower bounds. Extensions to the settings with a diverging condition number and sub-Gaussian distributions are discussed in Section 5. The proofs of the main results and some of the key technical lemmas are presented in Section 6. The proofs of additional results and technical lemmas are given in Appendices A and B.

2 Methodology: Gaussian Mechanism and Sensitivity

Our differentially private PCA and covariance estimation method relies on a precise characterization of the sensitivity of both eigenvalues and eigenvectors under the spiked covariance model. We first focus on Gaussian PCA for technical convenience, with a broader discussion of general sub-Gaussian PCA provided in Section 5. For brevity, let $X := (X_1, \cdots, X_n)$ represent the $p \times n$ matrix collecting all i.i.d. observations $X_i$ sampled from a centered normal distribution $N(0, \Sigma)$.
The sensitivity of the eigenvectors and eigenvalues denotes their perturbation when an observation $X_i$ is replaced by an independent copy $X_i'$, expressed briefly as $X^{(i)} := (X_1, \cdots, X_{i-1}, X_i', X_{i+1}, \cdots, X_n)$. Here, $X$ and $X^{(i)}$ form a pair of neighboring datasets (Dwork et al., 2006). Notably, the sensitivity is contingent on the covariance matrix $\Sigma$. Throughout this paper, we consider the spiked covariance matrix model where $\Sigma$ belongs to the following parameter space:
$$\Theta(p, r, \lambda, \sigma^2) = \Big\{\Sigma = U\Lambda U^\top + \sigma^2 I_p : U \in O_{p,r},\ \Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_r),\ c_0\lambda \le \lambda_r \le \lambda_1 \le C_0\lambda\Big\}, \qquad (4)$$
where $I_p$ is the identity matrix and $O_{p,r}$ refers to the set of matrices with orthonormal columns, i.e., matrices satisfying $U^\top U = I_r$. Thus, our focus is on spiked covariance matrices with a bounded condition number, a common assumption in the existing literature (Cai et al., 2013; Chaudhuri et al., 2011; Liu et al., 2022). However, our methodology remains valid, and the theoretical framework can be extended to the case of an unbounded condition number, as discussed in Section 5. For simplicity, we write $\Theta(\lambda, \sigma^2)$ without explicitly stating the dimension $p$ and rank $r$. Let $\mathcal{P}$ denote the family of normal distributions $N(0, \Sigma)$ with population covariance matrix $\Sigma \in \Theta(\lambda, \sigma^2)$. Without loss of generality, we assume that $\sigma^2$ is known. Formally, the sensitivity and the Gaussian mechanism are described as follows, without proofs; see, for example, Dwork et al. (2006) for more details. Here, $\|\cdot\|_F$ stands for the matrix Frobenius norm.

Lemma 1 (sensitivity and Gaussian mechanism). Let $X$ be a given data set and $X'$ be any neighboring data set of $X$, i.e., $X$ and $X'$ differ by at most one observation.
The sensitivity of a function $f$ that maps $X$ into $\mathbb{R}^{d_1 \times d_2}$ is defined by
$$\Delta_f := \sup_{\text{neighboring } (X, X')} \|f(X) - f(X')\|_F. \qquad (5)$$
Then, for any $\varepsilon > 0$ and $\delta \in [0, 1)$, the randomized algorithm $A$ defined by $A(X) = f(X) + Z$, where $Z$ has i.i.d. $N\big(0, 2\Delta_f^2\varepsilon^{-2}\log(1.25/\delta)\big)$ entries, is $(\varepsilon, \delta)$-DP over the dataset $X$.

The definition of sensitivity in Lemma 1 relies on the pair of neighboring data sets. Here, $X$ is simply the data matrix where each column represents one observation. While $X$ and $X'$ differ by only one observation, the sensitivity can still be unbounded if no restriction is placed on the difference, e.g., if one observation of $X$ is replaced by infinity. Since $X$ consists of i.i.d. columns under the spiked covariance model, we assume throughout this paper that a neighboring data set $X'$ is obtained by replacing some column of $X$ with its i.i.d. copy.

2.1 Differentially private estimation by the Gaussian mechanism

Our DP estimators of the principal components and covariance matrix are built on the Gaussian mechanism. Here, we assume that the rank $r$ and nuisance variance $\sigma^2$ are known for simplicity. Let $\hat U$ be the top-$r$ eigenvectors of the sample covariance matrix $\hat\Sigma := n^{-1}\sum_{i=1}^n X_iX_i^\top$ and denote by $\hat U\hat U^\top$ the sample spectral projector. By Lemma 1, differentially private PCA can be obtained by adding Gaussian noise $Z$ to $\hat U\hat U^\top$, provided that the entrywise variance of $Z$ dominates the sensitivity of $\hat U\hat U^\top$. While publishing $\hat U\hat U^\top + Z$ protects privacy, it is certainly not a preferable estimator of the principal components, as it generally lacks validity as a spectral projector. We therefore take the eigenvectors of $\hat U\hat U^\top + Z$ as the ultimate estimator. This choice maintains differential privacy, as post-processing of a differentially private algorithm retains differential privacy according to well-established results, as discussed in Dwork et al.
(2006).

Algorithm 1: Differentially private PCA and covariance estimation.
Input: data matrix $X = (X_1, \cdots, X_n) \in \mathbb{R}^{p \times n}$; eigenvector and eigenvalue sensitivities $\Delta_1, \Delta_2 > 0$; rank $r$; nuisance variance $\sigma^2$; privacy budget $\varepsilon > 0$, $\delta \in (0, 1)$.
Output: $(\varepsilon, \delta)$-DP estimates of $U$ and $\Sigma$.
1. Compute the sample covariance matrix and its top-$r$ eigenvectors: $\hat\Sigma \leftarrow \frac{1}{n}\sum_{i=1}^n X_iX_i^\top$ and $\hat U \leftarrow \mathrm{SVD}_r(\hat\Sigma)$.
2. Compute $(\varepsilon/2, \delta/2)$-DP PCA by adding artificial Gaussian noise: $\tilde U \leftarrow \mathrm{SVD}_r\big(\hat U\hat U^\top + Z\big)$, where $Z_{ij} = Z_{ji} \sim N\big(0, \frac{8\Delta_1^2}{\varepsilon^2}\log\frac{2.5}{\delta}\big)$ i.i.d. for all $1 \le i \le j \le p$.
3. Compute $(\varepsilon/2, \delta/2)$-DP estimates of the eigenvalues up to rotations: $\tilde\Lambda \leftarrow \tilde U^\top\big(\hat\Sigma - \sigma^2 I_p\big)\tilde U + E$, where $E_{ij} = E_{ji} \sim N\big(0, \frac{8\Delta_2^2}{\varepsilon^2}\log\frac{2.5}{\delta}\big)$ i.i.d. for all $1 \le i \le j \le r$.
4. Compute the $(\varepsilon, \delta)$-DP covariance estimate: $\tilde\Sigma \leftarrow \tilde U\tilde\Lambda\tilde U^\top + \sigma^2 I_p$.
Return: $\tilde U$ and $\tilde\Sigma$.

Our proposed differentially private PCA and covariance estimation procedures are given in Algorithm 1. The sensitivities $\Delta_1$ and $\Delta_2$ are determined by Lemma 3 and Lemma 4 in Section 2.2, respectively. However, $\tilde U$ and $\hat U$ are close only up to an orthogonal rotation. As a result, our algorithm chooses to add Gaussian noise to $\tilde U^\top\hat\Sigma\tilde U$ instead of to the empirical eigenvalues $\hat\Lambda := (\hat\lambda_1, \cdots, \hat\lambda_r)^\top$. The added noise level depends on the sensitivity of $\tilde U^\top\hat\Sigma\tilde U$, within which $\tilde U$ is already differentially private. It thus suffices to study the upper bound $\|\tilde U^\top(\hat\Sigma - \hat\Sigma^{(i)})\tilde U\|_F \le \|\hat\Sigma - \hat\Sigma^{(i)}\|_F$, which will be established in Lemma 4.
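The steps of Algorithm 1 can be transcribed into a short NumPy sketch (the helper names, the row-major data layout, and the toy inputs below are ours; the sensitivities $\Delta_1$ and $\Delta_2$ must be supplied by the caller, e.g., from the formulas in Lemma 2):

```python
import numpy as np

# A sketch of Algorithm 1 (our own transcription, not the authors' code).
# Rows of X are observations; delta1 and delta2 are the eigenvector and
# eigenvalue sensitivities supplied by the caller.
def sym_gauss(dim, tau, rng):
    """Symmetric Gaussian noise matrix with Z_ij = Z_ji ~ N(0, tau^2)."""
    A = rng.normal(0.0, tau, size=(dim, dim))
    return np.triu(A) + np.triu(A, 1).T

def dp_pca_cov(X, r, sigma2, delta1, delta2, eps, delta, rng):
    n, p = X.shape
    S = X.T @ X / n                               # sample covariance
    U_hat = np.linalg.eigh(S)[1][:, -r:]          # top-r eigenvectors
    # Step 2: (eps/2, delta/2)-DP spectral projector, then re-extract a basis.
    tau1 = np.sqrt(8 * delta1**2 * np.log(2.5 / delta)) / eps
    U_tilde = np.linalg.eigh(U_hat @ U_hat.T + sym_gauss(p, tau1, rng))[1][:, -r:]
    # Step 3: (eps/2, delta/2)-DP eigenvalue block, up to rotation.
    tau2 = np.sqrt(8 * delta2**2 * np.log(2.5 / delta)) / eps
    Lam_tilde = U_tilde.T @ (S - sigma2 * np.eye(p)) @ U_tilde + sym_gauss(r, tau2, rng)
    # Step 4: assemble the (eps, delta)-DP covariance estimate.
    Sigma_tilde = U_tilde @ Lam_tilde @ U_tilde.T + sigma2 * np.eye(p)
    return U_tilde, Sigma_tilde

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))  # placeholder data for a shape check
U_tilde, Sigma_tilde = dp_pca_cov(X, r=2, sigma2=1.0, delta1=0.05,
                                  delta2=0.5, eps=1.0, delta=1e-5, rng=rng)
print(U_tilde.shape, Sigma_tilde.shape)
```

Note that, exactly as in the text, the noise in step 3 is added to $\tilde U^\top(\hat\Sigma - \sigma^2 I_p)\tilde U$ rather than to the raw eigenvalues, since $\tilde U$ is only determined up to rotation.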
Our approach to differentially privately estimating the main covariance term involves privatizing the eigenvectors and eigenvalues separately. This separation is driven by the observation that the relative sensitivity of the eigenvalues is significantly larger than that of the eigenvectors. Note that a natural estimator of $U(\Lambda + \sigma^2 I_r)U^\top$ is $\hat U\hat U^\top\hat\Sigma\hat U\hat U^\top$. It is possible to characterize the sensitivity of this estimator by directly studying the bound $\|\hat U\hat U^\top\hat\Sigma\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\hat\Sigma^{(i)}\hat U^{(i)}\hat U^{(i)\top}\|_F$. However, the sensitivity of the eigenvalues would then be the dominating factor and would force us to add unnecessarily large noise to a $p \times p$ matrix. This would deliver a statistically sub-optimal estimator of the spiked covariance matrix. The estimated eigenvectors $\tilde U$ are $(\varepsilon/2, \delta/2)$-DP and the estimated eigenvalues $\tilde\Lambda$ are $(\varepsilon/2, \delta/2)$-DP with high probability. By the composition property of differentially private algorithms, the estimator $\tilde U\tilde\Lambda\tilde U^\top$ is $(\varepsilon, \delta)$-DP. This conclusion is formally stated in the following lemma. Recall that $\tilde r = (r\lambda + p\sigma^2)/(\lambda + \sigma^2)$ is the effective rank of $\Sigma$. Here, $\lambda$ is regarded as the signal strength.

Lemma 2. Let the data matrix $X = (X_1, \cdots, X_n)$ consist of i.i.d. columns sampled from $N(0, \Sigma)$ with $\Sigma \in \Theta(\lambda, \sigma^2)$, $\varepsilon > 0$, $\delta \in (0, 1)$, and assume $n \ge C_1(r\log n + \log^2 n)$, $2r + C_1\log n \le p$, and $\lambda/\sigma^2 \ge C_1(p/n + \sqrt{p/n})$ for some large absolute constant $C_1 > 0$. If we choose
$$\Delta_1 = C_2\Big(\frac{\sigma^2}{\lambda} + \sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{\sqrt{p(r + \log n)}}{n} \quad\text{and}\quad \Delta_2 = C_3\,\frac{\lambda(r + \log n) + \sigma^2 p}{n},$$
for some large absolute constants $C_2, C_3 > 0$, then Algorithm 1 is $(\varepsilon, \delta)$-DP with probability at least $1 - e^{-c_1 p} - 4n^{-9} - 10^{-20\tilde r}$ for some absolute constant $c_1 > 0$.
Compared with the existing literature (Chaudhuri et al., 2011; Liu et al., 2022), our algorithm does not truncate the observations, so $\|X_i\|$ is essentially unbounded and, as a result, our algorithm is differentially private with high probability. Note that the probability terms $n^{-9}$ and $10^{-20\tilde r}$ in Lemma 2 can be replaced by $n^{-C_3}$ and $10^{-C_4\tilde r}$ for any absolute constants $C_3, C_4 > 0$. While we believe Algorithm 1 can easily be modified, e.g., by an additional trimming procedure, to ensure $(\varepsilon, \delta)$-differential privacy almost surely, this would inevitably introduce more logarithmic factors into the upper bounds, and we do not pursue this goal further in this paper. The sensitivities $\Delta_1$ and $\Delta_2$ play a critical role in guaranteeing the differential privacy of Algorithm 1; they are developed in the next section. The conditions $r\log n + \log^2 n = O(n)$ and $2r + \log n = O(p)$ are mild. The SNR condition $\lambda/\sigma^2 \ge C_1(p/n + \sqrt{p/n})$ is typical in the existing literature on the spiked covariance matrix model; see, e.g., Nadler (2008); Zhang et al. (2022) and the references therein.

2.2 Sensitivity analysis

In this section, we analyze the sensitivities of the sample eigenvectors and eigenvalues under the spiked covariance model. The data matrix $X = (X_1, \cdots, X_n) \sim N(0, \Sigma)^{\otimes n}$ for some $\Sigma \in \Theta(\lambda, \sigma^2)$. Similarly, its neighboring data matrix $X^{(i)} = (X_1, \cdots, X_i', \cdots, X_n) \sim N(0, \Sigma)^{\otimes n}$. Define the sample covariance matrices by
$$\hat\Sigma := \frac{1}{n}\sum_{i=1}^n X_iX_i^\top \quad\text{and}\quad \hat\Sigma^{(i)} := \frac{1}{n}\Big(X_i'X_i'^\top + \sum_{j \ne i} X_jX_j^\top\Big).$$
Denote by $\hat U \in O_{p,r}$ and $\hat U^{(i)} \in O_{p,r}$ the top-$r$ left eigenvectors of $\hat\Sigma$ and $\hat\Sigma^{(i)}$, respectively. The sensitivity of the sample eigenvectors characterizes the deviation between $\hat U$ and $\hat U^{(i)}$ caused by replacing the $i$-th observation by its i.i.d.
copy. Since eigenvectors are determined only up to an orthogonal rotation (note that we allow the eigengap $|\lambda_i - \lambda_j|$ to be zero), a commonly used metric for measuring the distance between eigenvectors is the projection distance, defined by $\|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_F$. The primary challenge in differentially private PCA lies in characterizing a precise upper bound for $\|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_F$. In most existing works (Blum et al., 2005; Chaudhuri et al., 2011; Dwork et al., 2014), the data matrix $X$ is assumed to be fixed and its columns are all bounded, denoted as $\|X_i\| \le \gamma$, where we slightly abuse the notation by letting $\|\cdot\|$ denote the $\ell_2$-norm for vectors, and $\gamma$ is a deterministic value. This immediately implies an upper bound $\|\hat\Sigma - \hat\Sigma^{(i)}\| \le 2\gamma^2/n$, and the sensitivity of $\hat U\hat U^\top$ is then guaranteed by the Davis-Kahan theorem. However, this approach becomes invalid when the observations are unbounded and sub-optimal when the observations are randomly sampled from a common distribution. A more recent work by Liu et al. (2022) aimed to exploit the statistical properties of i.i.d. samples to achieve a sharper bound for differentially private PCA. That work focused on the rank-one case ($r = 1$) and on Oja's algorithm, well known for online PCA, which iteratively updates the estimate with one additional observation. The online fashion of Oja's algorithm in the rank-one case allows for an explicit representation of the eigenvector estimator, enabling a sharp upper bound on the sensitivity to be derived. Consequently, nearly optimal differentially private PCA for the case $r = 1$ was achieved. However, it remains unclear how this approach can be extended to the rank-$r$ case, and what the minimax optimal convergence rates are.
We take a fundamentally different approach by directly focusing on $\|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_F$. This task presents two challenges: the spectral projector $\hat U\hat U^\top$ is a complicated function of the data matrix $X$, and a sharp perturbation analysis is required for a set of $r$ empirical eigenvectors. Fortunately, we leverage an explicit spectral representation formula adapted from Xia (2021) and successfully establish a precise upper bound for $\|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_F$. For brevity of notation, we assume $n \ge C_1(r\log n + \log^2 n)$ and $2r + C_1\log n \le p$. We define $\tilde r := \mathrm{tr}(\Sigma)/\|\Sigma\| = (r\lambda + p\sigma^2)/(\lambda + \sigma^2)$ as the effective rank. It is clear that $r \le \tilde r \le p$.

Lemma 3. There exist absolute constants $c_1, C_1, C_2, C_3 > 0$ such that if $\lambda/\sigma^2 \ge C_1(p/n + \sqrt{p/n})$, then with probability at least $1 - e^{-c_1 p} - 3n^{-9} - 10^{-20\tilde r}$,
$$\max_{i \in [n]} \|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_F \le C_2\Big(\frac{\sigma^2}{\lambda} + \sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{\sqrt{p(r + \log n)}}{n}. \qquad (6)$$
The $\log n$ term in the upper bound (6) is due to the maximization over $n$. Nevertheless, the bound is much smaller than that achieved by the deterministic analysis in Blum et al. (2005); Chaudhuri et al. (2011); Dwork et al. (2014). Indeed, a direct application of the Davis-Kahan theorem yields an upper bound $O\big(\|\hat\Sigma - \hat\Sigma^{(i)}\|\sqrt{r}/\lambda\big)$, which is at least of order $O\big((r\lambda + p\sigma^2)\sqrt{r}/(n\lambda)\big)$ with high probability. The significant improvement is due to a sharp spectral characterization showing that the difference $\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}$ is mainly contributed by the term $\|U^\top(X_iX_i^\top - X_i'X_i'^\top)U_\perp\|_F/(n\lambda)$. Here, $U_\perp \in O_{p,p-r}$ denotes the orthogonal complement of $U$ such that $(U, U_\perp)$ is an orthogonal matrix.
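The quantity bounded in Lemma 3 can be probed numerically. The sketch below (our own, with arbitrary dimensions and signal strength) replaces one observation by an i.i.d. copy and measures the resulting change in the sample spectral projector:

```python
import numpy as np

# Hypothetical numerical probe of the eigenvector sensitivity studied in
# Lemma 3: swap X_1 for an independent copy and measure the change in the
# sample spectral projector, ||Uhat Uhat^T - Uhat^(i) Uhat^(i)T||_F.
rng = np.random.default_rng(1)
p, r, n = 40, 2, 1000
sigma2, lam = 1.0, 25.0

U, _ = np.linalg.qr(rng.standard_normal((p, r)))
Sigma = U @ (lam * np.eye(r)) @ U.T + sigma2 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)  # rows are X_i

def top_r_projector(data):
    S = data.T @ data / n
    vecs = np.linalg.eigh(S)[1][:, -r:]
    return vecs @ vecs.T

P = top_r_projector(X)
X_swapped = X.copy()
X_swapped[0] = rng.multivariate_normal(np.zeros(p), Sigma)  # i.i.d. replacement
P_swapped = top_r_projector(X_swapped)

sens = np.linalg.norm(P - P_swapped, "fro")
print(f"one-datum projector sensitivity: {sens:.4f}")
```

In this regime the observed change is tiny, consistent with the $n^{-1}$ scaling of the bound (6) rather than with the much larger worst-case Davis-Kahan bound.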
The proof of Lemma 3 is technically involved and deferred to Section 6.2. It is worth noting that the original spectral representation formula developed in Xia (2021) is inapplicable here because $\Sigma$ is not exactly rank-$r$. Interestingly, we establish a similar spectral representation formula exclusively for spiked covariance matrices, which may be of independent interest; see Lemma 5 in Section 3.1. The sensitivity of the eigenvalues is also needed for constructing a differentially private covariance estimator. Let $\lambda_k(\hat\Sigma)$ and $\lambda_k(\hat\Sigma^{(i)})$ denote the $k$-th largest eigenvalues of $\hat\Sigma$ and $\hat\Sigma^{(i)}$, respectively. Compared to the eigenvectors, the sensitivity of the eigenvalues can be easily characterized by the Hoffman-Wielandt inequality. The proof of Lemma 4 is deferred to Section 6.3.

Lemma 4. There exist absolute constants $c_1, C_1, C_2 > 0$ such that with probability at least $1 - n^{-10}$,
\[
\sum_{k=1}^p \big|\lambda_k(\hat\Sigma) - \lambda_k(\hat\Sigma^{(i)})\big|^2 \le C_2\Big(\frac{\lambda(r+\log n) + \sigma^2 p}{n}\Big)^2, \tag{7}
\]
for all $i \in [n]$.

We can regard $\big(\sum_{k=1}^p(\lambda_k(\hat\Sigma)-\lambda_k(\hat\Sigma^{(i)}))^2\big)^{1/2}/\lambda$ and $\|\hat U\hat U^\top - \hat U^{(i)}\hat U^{(i)\top}\|_{\rm F}/\sqrt{r}$ as the relative sensitivities of the eigenvalues and the eigenvectors, respectively. Lemmas 3 and 4 show that the relative sensitivity of the eigenvalues can be considerably larger than that of the eigenvectors. This insight implies that, when designing a differentially private optimal estimation procedure for the population covariance matrix, it is advisable to privatize the eigenvalues and the eigenvectors separately, as elaborated in Algorithm 1.

3 Upper Bounds with Differential Privacy

3.1 Spectral representation formula

Our key technical tool is the following spectral representation formula. Recall that $\hat U$ and $U$ denote the top-$r$ eigenvectors of $\hat\Sigma$ and $\Sigma$, respectively.
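Before turning to the spectral representation formula, the eigenvalue-sensitivity mechanics behind Lemma 4 — the Hoffman-Wielandt inequality applied to the rank-two update $\hat\Sigma - \hat\Sigma^{(i)}$ — can be checked numerically. A minimal sketch; the matrices below are generic stand-ins, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 30
A = rng.standard_normal((p, p)); A = (A + A.T) / 2  # symmetric "covariance" proxy

# Rank-2 symmetric update, mimicking the replacement of one observation:
# B = A - x x^T / p + y y^T / p, so A - B has rank at most 2.
x, y = rng.standard_normal(p), rng.standard_normal(p)
B = A - np.outer(x, x) / p + np.outer(y, y) / p

wA = np.linalg.eigvalsh(A)  # sorted eigenvalues (ascending)
wB = np.linalg.eigvalsh(B)

lhs = np.sum((wA - wB) ** 2)                 # total eigenvalue displacement
fro2 = np.linalg.norm(A - B, "fro") ** 2     # Hoffman-Wielandt bound
op2 = 2 * np.linalg.norm(A - B, 2) ** 2      # rank-2 refinement: ||M||_F^2 <= 2 ||M||^2
```

The chain `lhs <= fro2 <= op2` is exactly the inequality chain used in the proof of Lemma 4 (Section 6.3).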
Denote the deviation matrix by $\hat\Delta = \hat\Sigma - \Sigma$, so that $\hat\Sigma = \Sigma + \hat\Delta$ is viewed as a perturbation of the "signal" matrix $\Sigma$. The spectral representation formula was first introduced in Xia (2021), which, however, requires the "signal" matrix to be exactly rank-$r$. This is certainly not the case here since $\Sigma$ is full-rank. We therefore develop a spectral representation formula exclusively for the perturbation of a spiked covariance matrix. The formula is actually deterministic. Let the symmetric matrix $\Delta \in \mathbb{R}^{p\times p}$ be an arbitrary perturbation, and denote by $\hat U$ the top-$r$ eigenvectors of $\Sigma + \Delta$, where $\Sigma = U\Lambda U^\top + \sigma^2 I_p$ with $\Lambda = {\rm diag}(\lambda_1,\cdots,\lambda_r)$. We are interested in an explicit representation formula for the spectral projector $\hat U\hat U^\top$ in terms of $\Delta$. Let $Q_\perp := U_\perp U_\perp^\top = I_p - UU^\top$ denote the orthogonal projection onto the complement of $U$. For all $t \ge 1$, we define $Q_{-t} := U\Lambda^{-t}U^\top$, and we slightly abuse notation by denoting $Q_0 := Q_\perp = U_\perp U_\perp^\top$.

Lemma 5. Suppose that $2\|\Delta\| \le \lambda_r$. Then
\[
\hat U\hat U^\top - UU^\top = \sum_{k\ge 1} \mathcal{S}_{\Sigma,k}(\Delta),
\]
where the $k$-th order term $\mathcal{S}_{\Sigma,k}(\Delta)$ is a sum of $\binom{2k}{k}$ terms defined by
\[
\mathcal{S}_{\Sigma,k}(\Delta) = \sum_{s:\, s_1+\cdots+s_{k+1}=k} (-1)^{1+\tau(s)}\cdot Q_{-s_1}\Delta Q_{-s_2}\cdots\Delta Q_{-s_{k+1}},
\]
where $s = (s_1,\ldots,s_{k+1})$ contains non-negative indices and $\tau(s) = \sum_{j=1}^{k+1}\mathbb{I}(s_j > 0)$. A simple upper bound on the $k$-th order term is
\[
\|\mathcal{S}_{\Sigma,k}(\Delta)\| \le \binom{2k}{k}\Big(\frac{\|\Delta\|}{\lambda_r}\Big)^k.
\]

Based on Lemma 5, the leading term, i.e., the first-order term, of $\hat U\hat U^\top - UU^\top$ is contributed by $\Lambda^{-1}U^\top\Delta U_\perp$ and $U_\perp^\top\Delta U\Lambda^{-1}$.
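The dominance of the first-order term can be illustrated numerically: for a small symmetric perturbation $\Delta$, the residual after subtracting $Q_{-1}\Delta Q_\perp + Q_\perp\Delta Q_{-1}$ is an order of magnitude smaller than the projector difference itself. A minimal sketch with illustrative parameters (equal spikes, so $\Lambda = \lambda I_r$):

```python
import numpy as np

rng = np.random.default_rng(2)
p, r, lam, sigma2 = 20, 2, 10.0, 1.0  # illustrative parameters
U, _ = np.linalg.qr(rng.standard_normal((p, r)))
Sigma = lam * U @ U.T + sigma2 * np.eye(p)

G = rng.standard_normal((p, p))
D = 0.02 * (G + G.T)                   # small symmetric perturbation Delta

_, V = np.linalg.eigh(Sigma + D)
P_hat = V[:, -r:] @ V[:, -r:].T        # projector of the perturbed matrix

P = U @ U.T
Q_perp = np.eye(p) - P
Q_m1 = P / lam                         # U Lam^{-1} U^T, since Lam = lam * I_r here
first_order = Q_m1 @ D @ Q_perp + Q_perp @ D @ Q_m1

err0 = np.linalg.norm(P_hat - P, "fro")                # raw projector difference
err1 = np.linalg.norm(P_hat - P - first_order, "fro")  # after first-order correction
```

The residual `err1` shrinks relative to `err0` at the rate $\|\Delta\|/\lambda$, matching the geometric decay of the higher-order terms in Lemma 5.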
The higher-order terms can be sharply controlled by exploiting the statistical properties of $\Delta$ when the observations are i.i.d. sampled.

3.2 Upper bounds

In this section, we present the upper bounds for our $(\varepsilon,\delta)$-DP estimators $\tilde U\tilde U^\top$ and $\tilde\Sigma$. Let $\|\cdot\|_q$ denote the matrix Schatten-$q$ norm for any $q \in [1,\infty]$, e.g., the spectral norm $\|\cdot\|$ if $q=\infty$, the Frobenius norm $\|\cdot\|_{\rm F}$ if $q=2$, and the nuclear norm $\|\cdot\|_*$ if $q=1$. A simple fact by the triangle inequality,
\[
\|\tilde U\tilde U^\top - UU^\top\|_q \le \|\tilde U\tilde U^\top - \hat U\hat U^\top\|_q + \|\hat U\hat U^\top - UU^\top\|_q,
\]
leads to the following theorem.

Theorem 1. Suppose that $n \ge C_1(r\log n+\log^2 n)$, $2r+C_1\log n \le p$, and $\lambda/\sigma^2 \ge C_1(p/n+\sqrt{p/n})$ for some large absolute constant $C_1 > 0$. If we choose
\[
\Delta_1 = C_2\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{\sqrt{p(r+\log n)}}{n},
\]
then there exist absolute constants $c_1, C_4 > 0$ such that, for any $\varepsilon > 0$ and $\delta \in (0,1)$, Algorithm 1 outputs an $(\varepsilon,\delta)$-DP estimator $\tilde U\tilde U^\top$ satisfying
\[
\frac{\|\tilde U\tilde U^\top - UU^\top\|_q}{r^{1/q}} \le C_4\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\Big(\sqrt{\frac{p}{n}} + \frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big),
\]
with probability at least $1 - e^{-c_1p} - 3n^{-9} - 10^{-20\tilde r}$. Moreover,
\[
\frac{\mathbb{E}\|\tilde U\tilde U^\top - UU^\top\|_q}{r^{1/q}} \le C_4\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\Big(\sqrt{\frac{p}{n}} + \frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big).
\]
Here, $q$ can be any number in $[1,\infty]$ and $\tilde r$ denotes the effective rank of $\Sigma$.

Basically, the upper bound consists of two parts: the first represents the statistical error rate, and the second is the cost of the privacy constraint. It is well known that the first term is minimax optimal (Nadler, 2008; Cai et al., 2015; Koltchinskii and Lounici, 2017).
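The eigenvector step behind Theorem 1 can be sketched as a Gaussian mechanism on the empirical projector: perturb $\hat U\hat U^\top$ by symmetric Gaussian noise calibrated to the sensitivity $\Delta_1$, then re-extract a rank-$r$ projector. This is a hedged sketch of the idea rather than the paper's exact Algorithm 1; the constant in $\Delta_1$ is set to one and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, r, lam, sigma2 = 5000, 30, 2, 50.0, 1.0  # illustrative parameters
eps, delta = 1.0, 1e-5

U, _ = np.linalg.qr(rng.standard_normal((p, r)))
Sigma = lam * U @ U.T + sigma2 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S_hat = X.T @ X / n
_, V = np.linalg.eigh(S_hat)
P_hat = V[:, -r:] @ V[:, -r:].T                 # empirical rank-r projector

# Sensitivity Delta_1 following the shape of Lemma 3 (absolute constant set to 1)
delta1 = (sigma2 / lam + np.sqrt(sigma2 / lam)) * np.sqrt(p * (r + np.log(n))) / n

# Gaussian mechanism: symmetric noise with i.i.d. N(0, 8*Delta_1^2*eps^-2*log(2.5/delta))
tau = np.sqrt(8 * np.log(2.5 / delta)) * delta1 / eps
G = rng.standard_normal((p, p)) * tau
Z = np.triu(G) + np.triu(G, 1).T                # symmetric, i.i.d. upper triangle
_, V2 = np.linalg.eigh(P_hat + Z)
P_tilde = V2[:, -r:] @ V2[:, -r:].T             # private rank-r projector

err = np.linalg.norm(P_tilde - U @ U.T, "fro")
```

Re-extracting the top-$r$ eigenspace of the noisy projector keeps the output an exact rank-$r$ projector while paying only a Davis-Kahan-type cost of order $\|Z\|$.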
The second term decays at the rate $O\big(p/(n\varepsilon)\cdot\log^{1/2}\delta^{-1}\big)$ with respect to the sample size, dimension, and privacy-related parameters, which is typical of differentially private algorithms (Cai et al., 2023; Liu et al., 2022). In Section 4, we shall develop matching minimax lower bounds showing that the rates in Theorem 1 are minimax optimal up to the $\log n$ and $\log(2.5/\delta)$ terms. It is worth mentioning that the $\log n$ term appearing in the privacy-related rate is due to the requirement of differential privacy, which applies to each of the $n$ observations. This $\log n$ term seems to be present in the upper bounds of most differentially private algorithms; see, e.g., Cai et al. (2021, 2023); Dwork et al. (2014) and references therein. A slight difference here is that the $\log n$ term appears not as a multiplicative factor but as an additive term. If $r \ge \log n$, the logarithmic factor can be ignored and the rate becomes minimax optimal except for the $\log\delta^{-1}$ factor. Note that the probability guarantee in Theorem 1 depends on the effective rank $\tilde r$. The rationale is that if the nuisance variance $\sigma^2$ is very small, e.g., $\sigma^2 = 0$, the distribution $N(0, U\Lambda U^\top + \sigma^2 I_p)$ effectively degenerates to a distribution supported on an $r$-dimensional subspace, and the concentration phenomenon for an $r$-dimensional distribution can be weaker than that for a $p$-dimensional one. We now present the performance bound for our differentially private covariance estimator $\tilde\Sigma$.

Theorem 2. Suppose that $n \ge C_1(r\log n + \log^2 n)$, $2r + C_1\log n \le p$, and $\lambda/\sigma^2 \ge C_1(p/n + \sqrt{p/n})$ for some large absolute constant $C_1 > 0$.
If we choose
\[
\Delta_1 = C_2\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{\sqrt{p(r+\log n)}}{n} \quad\text{and}\quad \Delta_2 = C_2\,\frac{\lambda(r+\log n)+\sigma^2 p}{n},
\]
then there exist absolute constants $c_1, C_4 > 0$ such that, for any $\varepsilon > 0$ and $\delta\in(0,1)$, Algorithm 1 outputs an $(\varepsilon,\delta)$-DP estimator $\tilde\Sigma$ satisfying
\[
\frac{\|\tilde\Sigma-\Sigma\|_q}{r^{1/q}} \le C_4\bigg(\lambda\Big(\sqrt{\frac{r}{n}}+\frac{\sqrt{r}(r+\log n)}{n\varepsilon}\cdot\sqrt{\log\frac{2.5}{\delta}}\Big) + \sqrt{\sigma^2(\lambda+\sigma^2)}\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big)\bigg),
\]
with probability at least $1 - 10^{-19r} - 3n^{-9} - e^{-c_1p}$. Moreover, the same upper bound holds for $\mathbb{E}\|\tilde\Sigma-\Sigma\|_q/r^{1/q}$.

By Theorem 2, the privacy-irrelevant error rate
\[
\lambda\sqrt{\frac{r}{n}} + \sqrt{\sigma^2(\lambda+\sigma^2)}\,\sqrt{\frac{p}{n}}
\]
matches the minimax optimal rate of spiked covariance estimation in the existing literature (Cai et al., 2015, 2010). For ease of discussion, let us focus on the error rate in spectral norm. There are two terms related to the cost of privacy:
\[
\lambda\cdot\frac{\sqrt{r}(r+\log n)}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}} \quad\text{and}\quad \sqrt{\sigma^2(\lambda+\sigma^2)}\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big),
\]
where the second term is approximately of order $\lambda\|\tilde U\tilde U^\top - UU^\top\|$, contributed by the cost of estimating the eigenvectors. The first term grows at the rate $O(r^{3/2})$ with respect to the rank and is contributed by the cost of estimating the eigenvalues. Due to the unknown orthogonal rotation measuring the alignment between $\hat U$ and $\tilde U$, a privacy cost is also paid for the $r\times r$ unknown rotation matrix. Minimax lower bounds developed in Section 4 demonstrate the optimality of these bounds up to the $\log n$ and $\log(2.5/\delta)$ related terms.
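Putting the two privatized pieces together, the covariance estimator can be sketched as follows: privatize the projector with noise scaled to $\Delta_1$, privatize the $r\times r$ eigenvalue block separately with noise scaled to $\Delta_2$, and reassemble with the (assumed known) $\sigma^2$. All constants and parameter values below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, r, lam, sigma2 = 5000, 30, 2, 50.0, 1.0   # illustrative parameters
eps, delta = 1.0, 1e-5

def sym_noise(d, tau, rng):
    """Symmetric matrix with i.i.d. N(0, tau^2) entries on and above the diagonal."""
    G = rng.standard_normal((d, d)) * tau
    return np.triu(G) + np.triu(G, 1).T

U, _ = np.linalg.qr(rng.standard_normal((p, r)))
Sigma = lam * U @ U.T + sigma2 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S_hat = X.T @ X / n

# Step 1: private top-r eigenvectors via noisy projector (sensitivity Delta_1)
delta1 = (sigma2 / lam + np.sqrt(sigma2 / lam)) * np.sqrt(p * (r + np.log(n))) / n
tau1 = np.sqrt(8 * np.log(2.5 / delta)) * delta1 / eps
_, V = np.linalg.eigh(S_hat)
P_hat = V[:, -r:] @ V[:, -r:].T
_, V2 = np.linalg.eigh(P_hat + sym_noise(p, tau1, rng))
U_tilde = V2[:, -r:]

# Step 2: private r x r eigenvalue block, privatized separately (sensitivity Delta_2)
delta2 = (lam * (r + np.log(n)) + sigma2 * p) / n
tau2 = np.sqrt(8 * np.log(2.5 / delta)) * delta2 / eps
Lam_tilde = U_tilde.T @ (S_hat - sigma2 * np.eye(p)) @ U_tilde + sym_noise(r, tau2, rng)

# Reassemble; sigma^2 is assumed known, as in the theorem
Sigma_tilde = U_tilde @ Lam_tilde @ U_tilde.T + sigma2 * np.eye(p)
rel_err = np.linalg.norm(Sigma_tilde - Sigma, 2) / lam
```

Privatizing the $r\times r$ block rather than the full $p\times p$ matrix is what keeps the eigenvalue-related privacy cost at the smaller $O(r^{3/2})$ rate discussed above.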
4 Minimax Lower Bounds

In this section, we establish the minimax lower bounds for PCA and covariance matrix estimation under the constraint of differential privacy. Our main technical tool is a version of Fano's lemma with a privacy constraint.

4.1 DP-constrained Fano's Lemma

Several techniques have been developed to establish minimax lower bounds under the constraint of differential privacy. Notable examples include the fingerprint method (Kamath et al., 2019), Le Cam's method under differential privacy (Barber and Duchi, 2014), differentially private Fano's lemma (Acharya et al., 2021), and the recently introduced Score Attack method (Cai et al., 2023). Le Cam's method and Fano's lemma construct a multitude of hypotheses that are difficult to distinguish, while the fingerprint method and Score Attack design a test statistic with a prior distribution. For our purposes, we employ the differentially private Fano's lemma, as detailed in Lemma 6, whose proof is provided in Section A.3. For technical reasons, the lemma is only valid for $(\varepsilon,0)$-DP algorithms. Here, $\mathrm{KL}(\cdot,\cdot)$ and $\mathrm{TV}(\cdot,\cdot)$ denote the Kullback-Leibler divergence and the total variation distance between two distributions.

Lemma 6. Let $\mathcal{P} := \{P : P = \mu^{(1)}\times\cdots\times\mu^{(n)}\}$ be a family of product measures indexed by a parameter from a pseudo-metric space $(\Theta,\rho)$. Denote by $\theta(P)\in\Theta$ the parameter associated with the distribution $P$.
Let $\mathcal{Q} = \{P_1,\cdots,P_N\}\subset\mathcal{P}$ contain $N$ probability measures, and suppose there exist constants $\rho_0, l_0, t_0 > 0$ such that for all $i\neq i'\in[N]$,
\[
\rho\big(\theta(P_i),\theta(P_{i'})\big) \ge \rho_0, \qquad \mathrm{KL}(P_i\,\|\,P_{i'}) \le l_0, \qquad \text{and} \qquad \sum_{k\in[n]}\mathrm{TV}\big(\mu_i^{(k)},\mu_{i'}^{(k)}\big) \le t_0,
\]
where $P_i = \mu_i^{(1)}\times\cdots\times\mu_i^{(n)}$ and $P_{i'} = \mu_{i'}^{(1)}\times\cdots\times\mu_{i'}^{(n)}$. Then,
\[
\inf_{A\in\mathcal{A}_{\varepsilon,\delta}(\mathcal{P})}\ \sup_{P\in\mathcal{P}}\ \mathbb{E}_A\,\rho\big(A,\theta(P)\big) \ge \max\bigg\{\frac{\rho_0}{2}\Big(1-\frac{l_0+\log 2}{\log N}\Big),\ \frac{\rho_0}{4}\Big(1\wedge\frac{N-1}{\exp(4\varepsilon t_0)}\Big)\Big(1-\frac{2\delta e^{4\varepsilon t_0}}{e^{\varepsilon}-1}\Big)\bigg\}, \tag{8}
\]
where the infimum is taken over all $(\varepsilon,\delta)$-DP randomized algorithms, defined by
\[
\mathcal{A}_{\varepsilon,\delta}(\mathcal{P}) := \{A : X\mapsto\Theta \text{ and } A \text{ is } (\varepsilon,\delta)\text{-differentially private for all } X\sim P\in\mathcal{P}\}.
\]

Lemma 6 provides a powerful tool for developing minimax lower bounds in estimation problems under the constraint of differential privacy. Basically, if one can construct a sufficiently large set of distributions that are pairwise close in both Kullback-Leibler divergence and total variation distance, then a minimax lower bound follows as long as the underlying parameters are well separated. The first term on the RHS of (8) is derived from the classic Fano's lemma without privacy constraint and serves as a lower bound for the statistical error rate. This term is a well-established result in information theory obtained via the framework of hypothesis testing and has been extensively employed in the statistics literature. The second term on the RHS of (8) characterizes the price one needs to pay for differential privacy. It is noteworthy that the cost of privacy is determined by $t_0$, the sum of the marginal total variation distances.
Intuitively, if the marginal total variation distances between $P_i = \mu_i^{(1)}\times\cdots\times\mu_i^{(n)}$ and $P_{i'} = \mu_{i'}^{(1)}\times\cdots\times\mu_{i'}^{(n)}$ are small, it becomes challenging to identify the distribution from which the dataset is drawn. Therefore, the cost of privacy is expected to be low when $t_0$ is small. Moreover, if we assume that $X = (X_1,\cdots,X_n)\sim P_i$, then the cost of privacy resulting from replacing $X_k\sim\mu_i^{(k)}$ by $X_k'\sim\mu_{i'}^{(k)}$ should be upper bounded in terms of $\mathrm{TV}(\mu_i^{(k)},\mu_{i'}^{(k)})$.

4.2 Minimax lower bounds

In this section, we apply Lemma 6 to establish the minimax lower bounds for differentially private PCA and covariance estimation under the spiked covariance model. Denote the family of normal distributions with a spiked covariance matrix by
\[
\mathcal{P}(\lambda,\sigma^2) := \big\{N(0,\Sigma) : \Sigma = U\Lambda U^\top + \sigma^2 I_p \in \Theta(\lambda,\sigma^2)\big\}.
\]
By definition, each distribution $P\in\mathcal{P}(\lambda,\sigma^2)$ is indexed by the pair of eigenvalues $\Lambda$ and eigenvectors $U\in\mathbb{O}_{p,r}$. We first focus on the minimax lower bounds for estimating the spectral projector $UU^\top$. The minimax lower bounds are established in all Schatten-$q$ norms for $q\in[1,\infty]$.

Theorem 3. Let the $p\times n$ data matrix $X$ have i.i.d. columns sampled from a distribution $P = N(0, U\Lambda U^\top + \sigma^2 I_p)\in\mathcal{P}(\lambda,\sigma^2)$. Then, there exists an absolute constant $c_1 > 0$ such that
\[
\inf_{\tilde U\in\mathcal{U}_{\varepsilon,\delta}}\ \sup_{P\in\mathcal{P}(\lambda,\sigma^2)}\ \frac{\mathbb{E}\|\tilde U\tilde U^\top - UU^\top\|_q}{r^{1/q}} \ge c_1\bigg(\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r}}{n\varepsilon}\Big)\bigg)\wedge 1,
\]
where the infimum is taken over all possible $(\varepsilon,\delta)$-DP algorithms, denoted by $\mathcal{U}_{\varepsilon,\delta}$, and the expectation is taken with respect to both $\tilde U$ and $P$.
It is worth noting the two terms in the minimax lower bound in spectral norm ($q=\infty$):
\[
\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\sqrt{\frac{p}{n}} \quad\text{and}\quad \Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{p\sqrt{r}}{n\varepsilon}. \tag{9}
\]
The first term concerns the statistical error of PCA without privacy constraint. The error bound is free of the rank $r$, which is typical of spectral norm error rates, and the rate matches the existing minimax optimal rate of PCA for the spiked covariance model; see, e.g., Cai et al. (2015); Zhang and Xia (2018); Yu et al. (2015). The second term is the price paid for differential privacy. Interestingly, the second term depends on the rank $r$ even though the spectral norm is considered here. The technical explanation is that the sensitivity of the empirical spectral projector increases as the number of PCs grows. Comparing the two terms in (9), we observe that if $\varepsilon\ge(rp/n)^{1/2}$, the cost of privacy is dominated by the statistical error. A minimax lower bound for rank-one PCA has been established in (Liu et al., 2022, Theorem 5.3). Their rate in spectral norm also has two terms:
\[
\sqrt{\frac{\sigma^2}{\lambda+\sigma^2}}\cdot\sqrt{\frac{p}{n}} \quad\text{and}\quad \sqrt{\frac{\sigma^2}{\lambda+\sigma^2}}\cdot\frac{p}{n\varepsilon}.
\]
Their rate matches ours when $r=1$ and $\lambda\ge\sigma^2$. On the other hand, if $\lambda\ll\sigma^2$, our minimax lower bound is much stronger. Moreover, our minimax lower bounds hold for a diverging rank as long as $2r\le p$. We now shift our focus to the minimax lower bound for differentially private estimation of the spiked covariance matrix. Here, we assume $\sigma^2$ is known, so it suffices to estimate the signal part $U\Lambda U^\top$. As a result, the minimax lower bound is essentially determined jointly by the lower bounds for estimating the eigenvalues and the eigenvectors.

Theorem 4. Let the $p\times n$ data matrix $X$ have i.i.d. columns sampled from a distribution $P = N(0, U\Lambda U^\top + \sigma^2 I_p)\in\mathcal{P}(\lambda,\sigma^2)$.
Then, there exists an absolute constant $c_1 > 0$ such that
\[
\inf_{\tilde\Sigma\in\mathcal{M}_{\varepsilon,\delta}}\ \sup_{P\in\mathcal{P}(\lambda,\sigma^2)}\ \frac{\mathbb{E}\|\tilde\Sigma-\Sigma\|_q}{r^{1/q}} \ge c_1\bigg(\lambda\Big(\sqrt{\frac{r}{n}}+\frac{r^{3/2}}{n\varepsilon}\Big) + \sqrt{\sigma^2(\lambda+\sigma^2)}\Big(\sqrt{\frac{p}{n}}+\frac{\sqrt{r}\,p}{n\varepsilon}\Big)\bigg)\wedge\lambda,
\]
where the infimum is taken over all possible $(\varepsilon,\delta)$-DP algorithms, denoted by $\mathcal{M}_{\varepsilon,\delta}$, and the expectation is taken with respect to both $\tilde\Sigma$ and $P$. Here, $q$ can be any number in $[1,\infty]$. Without loss of generality, let us discuss the two terms in the spectral norm distance:
\[
\lambda\Big(\sqrt{\frac{r}{n}}+\frac{r^{3/2}}{n\varepsilon}\Big) \quad\text{and}\quad \sqrt{\sigma^2(\lambda+\sigma^2)}\Big(\sqrt{\frac{p}{n}}+\frac{\sqrt{r}\,p}{n\varepsilon}\Big). \tag{10}
\]
The second term is contributed by the differentially private estimation error of PCA in the form of $\lambda\|\tilde U\tilde U^\top - UU^\top\|$. The first term dominates if the signal strength is exceedingly large, or more precisely, when $\lambda/\sigma^2\gg p/r$. In this case, we can simply regard $\sigma=0$, and the stochastic error mainly comes from the randomness of a low-dimensional distribution. Basically, it suffices to consider minimax optimal estimation over the smaller family of normal distributions $\{N(0,\lambda UU^\top+\lambda I_r) : U\in\mathbb{O}_{r,r/4}\}$. By replacing $\sigma^2\leftarrow\lambda$, $r\leftarrow r/4$, and $p\leftarrow r$, the second term reduces to the first term in (10). Without the privacy constraint, the first term also matches the existing optimal rate for covariance estimation under the spiked covariance model (Cai et al., 2015, 2010).

5 Extensions

For the sake of clarity, we have so far assumed that the spiked eigenvalues are of uniform order and that the distributions are Gaussian. In this section, we extend our analysis to provide upper bounds for differentially private PCA and covariance estimation without these specific conditions.

5.1 Diverging condition number

Suppose that $X_1,\cdots,X_n$ i.i.d.
$\sim N(0,\Sigma)$, where $\Sigma = U\Lambda U^\top + \sigma^2 I_p$ with spiked eigenvalues $\Lambda = {\rm diag}(\lambda_1,\cdots,\lambda_r)$. Denote by $\kappa_0 := \lambda_1/\lambda_r$ the ratio of the largest and smallest spiked eigenvalues. The proof of Corollary 1 is almost identical to that of Theorems 1 and 2 and is thus omitted. We only present the upper bounds on the expected error in Schatten norms, but high-probability bounds hold similarly.

Corollary 1. Suppose that $n\ge C_1(\kappa_0^2 r\log n+\log^2 n)$, $2r+C_1\log n\le p$, and $\lambda_r/\sigma^2\ge C_1(\kappa_0 p/n+\sqrt{p/n})$ for some large absolute constant $C_1>0$. If we choose
\[
\Delta_1 = C_2\Big(\frac{\sigma^2}{\lambda_r}+\sqrt{\frac{\kappa_0\sigma^2}{\lambda_r}}\Big)\frac{\sqrt{p(r+\log n)}}{n} \quad\text{and}\quad \Delta_2 = C_2\,\frac{\lambda_1(r+\log n)+\sigma^2 p}{n},
\]
then there exists an absolute constant $C_4>0$ such that, for any $\varepsilon>0$ and $\delta\in(0,1)$, Algorithm 1 outputs $(\varepsilon,\delta)$-DP estimators $\tilde U\tilde U^\top$ and $\tilde\Sigma$ satisfying
\[
\frac{\mathbb{E}\|\tilde U\tilde U^\top-UU^\top\|_q}{r^{1/q}} \le C_4\Big(\frac{\sigma^2}{\lambda_r}+\sqrt{\frac{\kappa_0\sigma^2}{\lambda_r}}\Big)\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\log^{1/2}\Big(\frac{2.5}{\delta}\Big)\Big)
\]
and
\[
\frac{\mathbb{E}\|\tilde\Sigma-\Sigma\|_q}{r^{1/q}} \le C_4\bigg(\lambda_1\Big(\sqrt{\frac{r}{n}}+\frac{\sqrt{r}(r+\log n)}{n\varepsilon}\cdot\sqrt{\log\frac{2.5}{\delta}}\Big) + \sqrt{\sigma^2(\lambda_1+\sigma^2)}\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big)\bigg)
\]
for all $q\in[1,\infty]$.

5.2 Sub-Gaussian distributions

Suppose that $X$ follows a sub-Gaussian distribution satisfying, for any $u\in\mathbb{R}^p$,
\[
\mathbb{E}\exp\Big\{\frac{\langle X,u\rangle^2}{u^\top\Sigma u}\Big\} \le 2,
\]
where $\Sigma\in\Theta(\lambda,\sigma^2)$. For ease of exposition, we focus on the case of a bounded condition number. Interestingly, the sensitivity of the eigenvectors and eigenvalues is identical to that under Gaussian distributions.

Corollary 2.
Suppose that $n\ge C_1\big(r\log(p+n)\log^2 r+\log^2 n\big)$, $2r+C_1\log n\le p$, and $\lambda/\sigma^2\ge C_1(p/n+\sqrt{p/n})\log(p+n)$ for some large absolute constant $C_1>0$. If we choose
\[
\Delta_1 = C_2\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\frac{\sqrt{p(r+\log n)}}{n} \quad\text{and}\quad \Delta_2 = C_2\,\frac{\lambda(r+\log n)+\sigma^2 p}{n},
\]
then there exists an absolute constant $C_4>0$ such that, for any $\varepsilon>0$ and $\delta\in(0,1)$, Algorithm 1 outputs $(\varepsilon,\delta)$-DP estimators $\tilde U\tilde U^\top$ and $\tilde\Sigma$ satisfying
\[
\frac{\mathbb{E}\|\tilde U\tilde U^\top-UU^\top\|_q}{r^{1/q}} \le C_4\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\Big(\sqrt{\frac{p\log p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\log^{1/2}\Big(\frac{2.5}{\delta}\Big)\Big),
\]
and
\[
\frac{\mathbb{E}\|\tilde\Sigma-\Sigma\|_q}{r^{1/q}} \le C_4\bigg(\lambda\Big(\sqrt{\frac{r}{n}}+\frac{\sqrt{r}(r+\log n)}{n\varepsilon}\cdot\sqrt{\log\frac{2.5}{\delta}}\Big) + \sqrt{\sigma^2(\lambda+\sigma^2)}\Big(\sqrt{\frac{p\log p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big)\bigg)
\]
for all $q\in[1,\infty]$.

As shown by Corollary 2, the upper bounds for differentially private sub-Gaussian PCA and covariance estimation are almost the same as those for Gaussian distributions, implying that these bounds are minimax optimal. However, some additional logarithmic factors appear in the upper bounds and in the signal-to-noise ratio condition when controlling the higher-order terms in the spectral perturbation.

6 Proofs of Main Results

In this section, we prove the main results and some of the key lemmas. Additional technical results are given in Appendix A. We begin by stating several technical lemmas that will be used frequently in the subsequent proofs. Due to space constraints, their proofs are given in Appendix B.

6.1 Technical lemmas

Lemma 7 is a well-known dimension-free concentration inequality for the sample covariance matrix developed by Koltchinskii and Lounici (2017). Here, $\|\cdot\|$ denotes the spectral norm of a matrix and the $\ell_2$-norm of a vector.

Lemma 7 (Koltchinskii and Lounici (2017)).
Suppose $X_1,\cdots,X_n$ are i.i.d. sampled from $N(0,\Sigma)$ and $\hat\Sigma := \sum_{i=1}^n X_iX_i^\top/n$. Then,
\[
\mathbb{E}\|\hat\Sigma-\Sigma\| \asymp \bigg(\sqrt{\frac{\operatorname{tr}(\Sigma)\|\Sigma\|}{n}}\ \vee\ \frac{\operatorname{tr}(\Sigma)}{n}\bigg).
\]
Moreover, there exists an absolute constant $C_1>0$ such that, for all $t\ge1$, with probability at least $1-e^{-t}$,
\[
\Big|\|\hat\Sigma-\Sigma\|-\mathbb{E}\|\hat\Sigma-\Sigma\|\Big| \le C_1\|\Sigma\|\bigg(\frac{t}{n}+\sqrt{\frac{t}{n}}\Big(1+\sqrt{\frac{\operatorname{tr}(\Sigma)/\|\Sigma\|}{n}}\Big)\bigg).
\]

The following lemma characterizes the concentration of the norm of a Gaussian random vector.

Lemma 8. Let $X\sim N(0,\Sigma)$, and let the eigenvalues of $\Sigma$ be $\lambda_1\ge\cdots\ge\lambda_p\ge0$. Then, there exist absolute constants $C_1, C_2, c_1>0$ such that
\[
\mathbb{P}\bigg(\big|\|X\|^2-\operatorname{tr}(\Sigma)\big| \le C_1\Big(u\sum_{i=1}^p\lambda_i^2\Big)^{1/2}+C_2\lambda_1 u\bigg) \ge 1-e^{-c_1u},
\]
for any $u>0$. Under the spiked covariance model $\Sigma\in\Theta(\lambda,\sigma^2)$ and the condition that $p\ge C_6\log n$ for some absolute constant $C_6>0$, we have
\[
\mathbb{P}\bigg(\Big\{\max_{i\in[n]}\|X_i\|^2+\|X_i'\|^2 \le C_3(r\lambda+p\sigma^2)+C_4\sqrt{(r\lambda^2+p\sigma^4)\log n}+C_5(\lambda+\sigma^2)\log n\Big\}
\]
\[
\bigcap\Big\{\max_{i\in[n]}\|U^\top X_i\|^2+\max_{i\in[n]}\|U^\top X_i'\|^2 \le C_3 r(\lambda+\sigma^2)+C_4\sqrt{r(\lambda^2+\sigma^4)\log n}+C_5(\lambda+\sigma^2)\log n\Big\}
\]
\[
\bigcap\Big\{\max_{i\in[n]}\|U_\perp^\top X_i\|^2+\max_{i\in[n]}\|U_\perp^\top X_i'\|^2 \le C_3 p\sigma^2\Big\}\bigg) \ge 1-n^{-10},
\]
where $C_3, C_4, C_5>0$ are absolute constants. Let $\mathcal{E}_0$ denote the above event. Moreover, $\mathbb{E}\|X_i\|^2\le C_3(r\lambda+p\sigma^2)$, $\mathbb{E}\|U^\top X_i\|^2\le C_3 r(\lambda+\sigma^2)$, and $\mathbb{E}\|U_\perp^\top X_i\|^2\le C_3 p\sigma^2$.

Denote $\Delta := \hat\Sigma-\Sigma$ and $\Delta^{(i)} := \hat\Sigma^{(i)}-\Sigma$.
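Lemma 7's dimension-free rate can be sanity-checked with a small Monte Carlo experiment; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, r, lam, sigma2 = 400, 40, 3, 8.0, 1.0   # illustrative parameters
U, _ = np.linalg.qr(rng.standard_normal((p, r)))
Sigma = lam * U @ U.T + sigma2 * np.eye(p)

# Lemma 7's predicted rate (up to absolute constants)
pred = max(np.sqrt(np.trace(Sigma) * np.linalg.norm(Sigma, 2) / n),
           np.trace(Sigma) / n)

errs = []
for _ in range(30):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    errs.append(np.linalg.norm(X.T @ X / n - Sigma, 2))
ratio = np.mean(errs) / pred
```

The observed average spectral error tracks the predicted rate within a constant factor, which is all the $\asymp$ statement promises.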
We shall frequently use several concentration bounds related to $\Delta$ and $\Delta^{(i)}$ throughout the proof. For the reader's convenience, these bounds are collected in the following lemma. Recall that $\tilde r = \operatorname{tr}(\Sigma)/\|\Sigma\|$ is the effective rank of $\Sigma$.

Lemma 9. Suppose that $\Sigma\in\Theta(\lambda,\sigma^2)$, $n\ge C_1(r+\log^2 n)$, $2r+C_1\log n\le p$, and $\lambda/\sigma^2\ge C_1 p/n$ for some absolute constant $C_1>0$. There exists an absolute constant $C_2>0$ such that the event
\[
\mathcal{E}_\Delta := \bigg\{\|\Delta\|+\max_{i\in[n]}\|\Delta^{(i)}\| \le C_2\sqrt{\frac{(\lambda+\sigma^2)(r\lambda+p\sigma^2)}{n}}\bigg\} \tag{11}
\]
holds with probability $\mathbb{P}(\mathcal{E}_\Delta)\ge1-n^{-10}-10^{-20\tilde r}$. Meanwhile, the upper bound in (11) also holds for $\mathbb{E}\|\Delta\|$. There exist absolute constants $c_2, C_3>0$ such that the event
\[
\mathcal{E}_1 := \bigg\{\|U^\top\Delta U_\perp\|+\max_{i\in[n]}\|U^\top\Delta^{(i)}U_\perp\| \le C_3\sqrt{\frac{\sigma^2(\lambda+\sigma^2)p}{n}}\bigg\} \tag{12}
\]
\[
\bigcap\bigg\{\max_{i\in[n]}\|U^\top(X_iX_i^\top/n)U_\perp\|+\|U^\top(X_i'X_i'^\top/n)U_\perp\| \le C_3\frac{\sqrt{\sigma^2(\lambda+\sigma^2)p(r+\log n)}}{n}\bigg\}
\]
\[
\bigcap\bigg\{\max_{i\in[n]}\Big\|U^\top\Big(\frac{1}{n}\sum_{j\neq i}X_jX_j^\top\Big)U_\perp U_\perp^\top X_i\Big\| \le C_3\sigma\cdot\sqrt{\frac{\sigma^2(\lambda+\sigma^2)p(r+\log n)}{n}}\bigg\}
\]
holds with probability $\mathbb{P}(\mathcal{E}_1)\ge1-e^{-c_1p}-2n^{-9}$. Meanwhile, the first upper bound in (12) also holds for $\mathbb{E}\|U^\top\Delta U_\perp\|$.

The following perturbation bound for the principal subspace will be useful.

Lemma 10. Suppose that $\lambda_r\ge(4+\delta)\|\Delta\|$ for some $\delta>0$. Then
\[
\|\hat U\hat U^\top-UU^\top\| \le 2\|\Lambda^{-1}U^\top\Delta U_\perp\|+\frac{6(4+\delta)\|\Delta\|\|U^\top\Delta U_\perp\|}{\delta\lambda_r^2}.
\]

6.2 Proof of Lemma 3

Let the events $\mathcal{E}_0, \mathcal{E}_\Delta, \mathcal{E}_1$ be defined as in Section 6.1.
The following analysis proceeds mainly on the event $\mathcal{E}_* := \mathcal{E}_0\cap\mathcal{E}_\Delta\cap\mathcal{E}_1$, which occurs with probability at least $1-e^{-c_1p}-3n^{-9}-10^{-20\tilde r}$. On the event $\mathcal{E}_*$, and under the conditions that $\Sigma\in\Theta(p,r,\lambda,\sigma^2)$, $n\ge C_1(r\log n+\log^2 n)$, $2r+C_1\log n\le p$, and $\lambda/\sigma^2\ge C_1(p/n+\sqrt{p/n})$ for a large absolute constant $C_1>0$, we have $\lambda_r\ge5(\|\Delta\|\vee\|\Delta^{(i)}\|)$. Therefore, we are able to apply Lemma 5 to obtain
\[
\hat U\hat U^\top-UU^\top = \sum_{k\ge1}\mathcal{S}_{\Sigma,k}(\Delta) \quad\text{and}\quad \hat U^{(i)}\hat U^{(i)\top}-UU^\top = \sum_{k\ge1}\mathcal{S}_{\Sigma,k}(\Delta^{(i)}).
\]
The explicit formula for the spectral projectors implies that
\[
\|\hat U\hat U^\top-\hat U^{(i)}\hat U^{(i)\top}\|_{\rm F} = \Big\|\sum_{k\ge1}\mathcal{S}_{\Sigma,k}(\Delta)-\sum_{k\ge1}\mathcal{S}_{\Sigma,k}(\Delta^{(i)})\Big\|_{\rm F} \le \|\mathcal{S}_{\Sigma,1}(\Delta)-\mathcal{S}_{\Sigma,1}(\Delta^{(i)})\|_{\rm F}+\Big\|\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta)-\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta^{(i)})\Big\|_{\rm F}. \tag{13}
\]
We now bound the first-order term $\|\mathcal{S}_{\Sigma,1}(\Delta)-\mathcal{S}_{\Sigma,1}(\Delta^{(i)})\|_{\rm F}$ and the higher-order term $\|\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta)-\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta^{(i)})\|_{\rm F}$ separately.

Step 1: bounding the first-order term. By the definitions of $\mathcal{S}_{\Sigma,1}(\Delta)$ and $\mathcal{S}_{\Sigma,1}(\Delta^{(i)})$,
\[
\max_{i\in[n]}\|\mathcal{S}_{\Sigma,1}(\Delta)-\mathcal{S}_{\Sigma,1}(\Delta^{(i)})\|_{\rm F} \le \max_{i\in[n]}\|Q_{-1}(\Delta-\Delta^{(i)})Q_\perp\|_{\rm F}+\|Q_\perp(\Delta-\Delta^{(i)})Q_{-1}\|_{\rm F}
\]
\[
\le \frac{2}{n}\max_{i\in[n]}\Big(\|\Lambda^{-1}U^\top X_iX_i^\top U_\perp\|+\|\Lambda^{-1}U^\top X_i'X_i'^\top U_\perp\|\Big) \le C_3\sqrt{\frac{\sigma^2(\lambda+\sigma^2)}{\lambda^2}}\cdot\frac{\sqrt{p(r+\log n)}}{n}, \tag{14}
\]
where the last inequality holds on $\mathcal{E}_*$ for all $i\in[n]$ based on Lemma 9.

Step 2: bounding the higher-order terms.
Let $\mathcal{I}_k$ be the index set for the terms in $\mathcal{S}_{\Sigma,k}$:
\[
\mathcal{I}_k = \Big\{s=(s_1,\ldots,s_{k+1}):\ \sum_{m=1}^{k+1}s_m=k,\ s_m\ge0,\ \forall m\in[k+1]\Big\},
\]
with cardinality $|\mathcal{I}_k|=\binom{2k}{k}$. We define
\[
\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)}) := Q_{-s_1}\Delta^{(i)}Q_{-s_2}\cdots Q_{-s_l}(\Delta-\Delta^{(i)})Q_{-s_{l+1}}\cdots Q_{-s_k}\Delta Q_{-s_{k+1}},
\]
for $k\ge2$, $s=(s_1,\cdots,s_{k+1})\in\mathcal{I}_k$, and $l\in[k]$. Since $|\mathcal{I}_k|=\binom{2k}{k}$, the higher-order terms can be bounded as follows:
\[
\Big\|\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta)-\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta^{(i)})\Big\|_{\rm F} = \Big\|\sum_{k\ge2}\sum_{s\in\mathcal{I}_k}\sum_{l\in[k]}\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)})\Big\|_{\rm F} \le \sum_{k\ge2}\binom{2k}{k}\max_{s\in\mathcal{I}_k}\sum_{l\in[k]}\|\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)})\|_{\rm F}. \tag{15}
\]
It suffices to upper bound $\|\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)})\|_{\rm F}$ for any $s\in\mathcal{I}_k$. Denote
\[
D_{\max} := C_2\sqrt{\frac{(\lambda+\sigma^2)(r\lambda+p\sigma^2)}{n}},
\]
the upper bound appearing in the event $\mathcal{E}_\Delta$, so that $\|\Delta\|+\max_{i\in[n]}\|\Delta^{(i)}\|\le D_{\max}$ on the event $\mathcal{E}_\Delta$.

Lemma 11. Under the conditions of Lemma 3, the following bound holds on the event $\mathcal{E}_*$ for all $k\ge2$ and $s\in\mathcal{I}_k$:
\[
\sum_{l\in[k]}\|\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)})\|_{\rm F} \le C_6\,\frac{(3+k)k}{2}\Big(\frac{D_{\max}}{\lambda_r}\Big)^{k-2}\cdot\frac{\sigma^2}{\lambda}\Big(\sqrt{\frac{p}{n}}+\frac{p}{n}\Big)\cdot\sqrt{\frac{\sigma^2(\lambda+\sigma^2)}{\lambda^2}}\,\frac{\sqrt{p(r+\log n)}}{n},
\]
where $C_6>0$ is an absolute constant.
We now continue from (15) and get, for all $i\in[n]$,
\[
\Big\|\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta)-\sum_{k\ge2}\mathcal{S}_{\Sigma,k}(\Delta^{(i)})\Big\|_{\rm F} \le \sum_{k\ge2}\binom{2k}{k}\max_{s\in\mathcal{I}_k}\sum_{l\in[k]}\|\mathcal{T}_{\Sigma,k,s,l}(\Delta-\Delta^{(i)})\|_{\rm F}
\]
\[
\lesssim \sum_{k\ge2}\binom{2k}{k}\frac{(3+k)k}{2}\Big(\frac{D_{\max}}{\lambda_r}\Big)^{k-2}\cdot\frac{\sigma^2}{\lambda}\Big(\sqrt{\frac{p}{n}}+\frac{p}{n}\Big)\cdot\sqrt{\frac{\sigma^2(\lambda+\sigma^2)}{\lambda^2}}\,\frac{\sqrt{p(r+\log n)}}{n}
\lesssim \frac{\sigma^2}{\lambda}\Big(\sqrt{\frac{p}{n}}+\frac{p}{n}\Big)\cdot\sqrt{\frac{\sigma^2(\lambda+\sigma^2)}{\lambda^2}}\,\frac{\sqrt{p(r+\log n)}}{n}, \tag{16}
\]
where the last inequality holds if $\lambda\ge C_7D_{\max}$ for a large enough constant $C_7>0$. Combining (14) and (16) together with the condition $\lambda/\sigma^2\ge C_3(\sqrt{p/n}+p/n)$, we get on the event $\mathcal{E}_*$ that
\[
\max_{i\in[n]}\|\hat U\hat U^\top-\hat U^{(i)}\hat U^{(i)\top}\|_{\rm F} \lesssim \sqrt{\frac{\sigma^2(\lambda+\sigma^2)}{\lambda^2}}\,\frac{\sqrt{p(r+\log n)}}{n}.
\]

6.3 Proof of Lemma 4

Recall that, by the definitions of $\Delta$ and $\Delta^{(i)}$, we can write $\hat\Sigma^{(i)} = \hat\Sigma-\frac{1}{n}X_iX_i^\top+\frac{1}{n}X_i'X_i'^\top$. By the Hoffman-Wielandt inequality, we have
\[
\sum_{k=1}^p\big(\lambda_k(\hat\Sigma)-\lambda_k(\hat\Sigma^{(i)})\big)^2 \le \|\hat\Sigma-\hat\Sigma^{(i)}\|_{\rm F}^2 \le 2\|\hat\Sigma-\hat\Sigma^{(i)}\|^2 \le \frac{4}{n^2}\big(\|X_i\|^2+\|X_i'\|^2\big)^2 \le C_4\Big(\frac{\lambda(r+\log n)+p\sigma^2}{n}\Big)^2,
\]
where the second inequality uses the fact that $\hat\Sigma-\hat\Sigma^{(i)}$ has rank at most two, and the last inequality holds on the event $\mathcal{E}_0$ defined in Lemma 8 and under the condition $p\ge C_1\log n$. This completes the proof.

6.4 Proof of Theorem 1

It suffices to bound $\|\tilde U\tilde U^\top-\hat U\hat U^\top\|_q$ and $\|\hat U\hat U^\top-UU^\top\|_q$. Without loss of generality, we start with $q=\infty$, i.e., the upper bound in spectral norm. Recall that $\hat U\hat U^\top$ denotes the spectral projector onto the top-$r$ eigenvectors of $\Sigma+\Delta$.
Based on Lemma 9,
\[
\|\Delta\| \le C_2\sqrt{\frac{(\lambda+\sigma^2)(r\lambda+p\sigma^2)}{n}}
\]
with probability at least $1-n^{-10}-10^{-20\tilde r}$. On this event, we have $\lambda\ge5\|\Delta\|$ since $n\ge C_1r$ and $\lambda/\sigma^2\ge C_1(p/n+\sqrt{p/n})$. Applying Lemma 10, we get, on this event,
\[
\|\hat U\hat U^\top-UU^\top\| \le 2\|\Lambda^{-1}U^\top\Delta U_\perp\|+C_3\frac{\|\Delta\|\|U^\top\Delta U_\perp\|}{\lambda^2} \le C_4\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\sqrt{\frac{p}{n}},
\]
where the last inequality holds with probability at least $1-2n^{-9}-e^{-c_1p}$ by Lemma 9. Moreover,
\[
\mathbb{E}\|\hat U\hat U^\top-UU^\top\| \le C_4\Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\sqrt{\frac{p}{n}}\ \wedge\ 1.
\]
By the Davis-Kahan theorem, $\|\tilde U\tilde U^\top-\hat U\hat U^\top\|\lesssim\|Z\|\wedge1$, where $Z$ is a symmetric matrix with i.i.d. entries having distribution $N\big(0,\,8\Delta_1^2\varepsilon^{-2}\log\frac{2.5}{\delta}\big)$. By (Vershynin, 2020, Theorem 4.4.5),
\[
\|Z\| \lesssim \Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\cdot\frac{p\sqrt{r+\log n}}{n\varepsilon}\log^{1/2}\Big(\frac{2.5}{\delta}\Big),
\]
with probability at least $1-O(e^{-c_1p})$, and the same bound also holds for $\mathbb{E}\|Z\|$. Combining the above two bounds, we conclude that, with probability at least $1-e^{-c_1p}-3n^{-9}-10^{-20\tilde r}$,
\[
\|\tilde U\tilde U^\top-UU^\top\| \lesssim \Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\cdot\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big),
\]
and the same bound also holds for $\mathbb{E}\|\tilde U\tilde U^\top-UU^\top\|$. On the same event, we can also get the upper bound in nuclear norm distance:
\[
\|\tilde U\tilde U^\top-UU^\top\|_* \lesssim \Big(\frac{\sigma^2}{\lambda}+\sqrt{\frac{\sigma^2}{\lambda}}\Big)\cdot r\Big(\sqrt{\frac{p}{n}}+\frac{p\sqrt{r+\log n}}{n\varepsilon}\sqrt{\log\frac{2.5}{\delta}}\Big),
\]
and the same bound holds for $\mathbb{E}\|\tilde U\tilde U^\top-UU^\top\|_*$.
For general $q \in [1, \infty]$, we can apply the interpolation inequality
$$\|\widetilde U \widetilde U^\top - UU^\top\|_q \le \|\widetilde U \widetilde U^\top - UU^\top\|_*^{1/q} \cdot \|\widetilde U \widetilde U^\top - UU^\top\|^{(q-1)/q},$$
which completes the proof.

6.5 Proof of Theorem 2

Recall that we assume $\sigma^2$ is known. By the definition of $\widetilde \Sigma$ stated in Algorithm 1, for any $q \in [1, \infty]$, the Schatten-$q$ norm is bounded as
$$\|\widetilde \Sigma - \Sigma\|_q = \|\widetilde U \widetilde \Lambda \widetilde U^\top - U \Lambda U^\top\|_q = \big\| \widetilde U \big( \widetilde U^\top (\widehat \Sigma - \sigma^2 I_p) \widetilde U + E \big) \widetilde U^\top - U U^\top (\Sigma - \sigma^2 I_p) U U^\top \big\|_q \le \big\| \widetilde U \widetilde U^\top (\widehat \Sigma - \sigma^2 I_p) \widetilde U \widetilde U^\top - U U^\top (\Sigma - \sigma^2 I_p) U U^\top \big\|_q + \|E\|_q.$$
Without loss of generality, we begin with $q = \infty$ and bound the spectral norm. Since $E$ is an $r \times r$ symmetric matrix with i.i.d. entries $N\big(0,\, 8\Delta_2^2 \varepsilon^{-2} \log(2.5/\delta)\big)$, by (Vershynin, 2020, Theorem 4.4.5), we get
$$\|E\| \le C_4\, \frac{\lambda \sqrt{r}\,(r + \log n) + \sigma^2 p \sqrt{r}}{n \varepsilon} \cdot \sqrt{\log \frac{2.5}{\delta}}, \qquad (17)$$
with probability at least $1 - 10^{-20r}$, for some absolute constant $C_4 > 0$. Moreover, the same bound holds for $\mathbb{E}\|E\|$. Observe that
$$\big\| \widetilde U \widetilde U^\top (\widehat \Sigma - \sigma^2 I_p) \widetilde U \widetilde U^\top - U U^\top (\Sigma - \sigma^2 I_p) U U^\top \big\| \le \big\| \widetilde U \widetilde U^\top - U U^\top \big\| \big\| \widehat \Sigma - \sigma^2 I_p \big\| + \big\| U U^\top (\widehat \Sigma - \Sigma) \big\| + \big\| \Sigma - \sigma^2 I_p \big\| \big\| \widetilde U \widetilde U^\top - U U^\top \big\|. \qquad (18)$$
Since $\|\widehat \Sigma - \sigma^2 I_p\| \le \|\Sigma - \sigma^2 I_p\| + \|\widehat \Sigma - \Sigma\| \lesssim \lambda$, where the last inequality holds in event $\mathcal{E}_\Delta$ defined in Lemma 9 and under the conditions $n \ge C_1 r$ and $\lambda/\sigma^2 \ge C_1 \big( p/n + \sqrt{p/n} \big)$.
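The Schatten-norm interpolation inequality invoked above follows from $\|M\|_q^q = \sum_i s_i^q \le \big(\sum_i s_i\big)\, s_{\max}^{q-1} = \|M\|_* \|M\|^{q-1}$, where $s_i$ are the singular values of $M$. A quick numerical sanity check (numpy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
s = np.linalg.svd(M, compute_uv=False)    # singular values, descending

def schatten(s, q):
    # Schatten-q norm from the singular values
    return np.sum(s ** q) ** (1.0 / q)

q = 3.0
lhs = schatten(s, q)                                         # ||M||_q
rhs = np.sum(s) ** (1 / q) * np.max(s) ** ((q - 1) / q)      # ||M||_*^{1/q} ||M||^{(q-1)/q}
assert lhs <= rhs + 1e-9
```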
By (18), we get \r \re U e U \u22a4(b \u03a3 \u2212\u03c32Ip)e U e U \u22a4\u2212UU \u22a4(\u03a3 \u2212\u03c32Ip)UU \u22a4\r \r \u2272\u03bb\u2225e U e U \u22a4\u2212UU \u22a4\u2225+ \u2225U \u22a4(b \u03a3 \u2212\u03a3)\u2225 \u2272 p \u03c32(\u03bb + \u03c32) \u0012rp n + p p (r + log n) n\u03b5 r log 2.5 \u03b4 \u0013 + r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n \u2272 p \u03c32(\u03bb + \u03c32) \u0012rp n + p p (r + log n) n\u03b5 r log 2.5 \u03b4 \u0013 + \u03bb r r n, (19) where the last inequality is due to Theorem 1 and Lemma 9. Combining (17) and (19), we get \u2225e \u03a3 \u2212\u03a3\u2225\u2272\u03bb \u0012r r n + \u221ar(r + log n) n\u03b5 \u00b7 r log 2.5 \u03b4 \u0013 + p \u03c32(\u03bb + \u03c32) \u0012rp n + p p (r + log n) n\u03b5 r log 2.5 \u03b4 \u0013 , with probability at least 1 \u221210\u221220r \u22123n\u22129 \u221210\u221220\u02dc r \u2212e\u2212c1p. Since \u02dc r \u2265r, we can simplify 10\u221220r + 10\u221220\u02dc r \u226410\u221219r. Moreover, E \r \re U e U \u22a4(b \u03a3 \u2212\u03c32Ip)e U e U \u22a4\u2212UU \u22a4(\u03a3 \u2212\u03c32Ip)UU \u22a4\r \r \u2272E \r \r\u0000e U e U \u22a4\u2212UU \u22a4\u0001\r \r\r \r(b \u03a3 \u2212\u03c32Ip) \r \r + E\u2225UU \u22a4(b \u03a3 \u2212\u03a3)\u2225+ E\u2225\u03a3 \u2212\u03c32Ip\u2225\u2225e U e U \u22a4\u2212UU \u22a4\u2225 \u2272E1/2\r \r\u0000e U e U \u22a4\u2212UU \u22a4\u0001\r \r2E1/2\r \r(b \u03a3 \u2212\u03c32Ip) \r \r2 + r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n + p \u03c32(\u03bb + \u03c32) \u0012rp n + p p (r + log n) n\u03b5 r log 2.5 \u03b4 \u0013 . 
26 \fNote that E\u2225e U e U \u22a4\u2212UU \u22a4\u22252 \u22642E\u2225e U e U \u2212e U e U \u22a4\u22252 + 2E\u2225e U e U \u22a4\u2212UU \u22a4\u22252 \u2272 \u0012\u03c32 \u03bb + r \u03c32 \u03bb \u0013 \u00b7 \u0012rp n + p\u221ar + log n n\u03b5 r log 2.5 \u03b4 \u0013 , where the last inequality is due to the classical concentration of operator norm of a Gaussian random matrix, e.g., Koltchinskii and Xia (2016). and \u2225b \u03a3 \u2212\u03c32Ip\u22252 \u22642\u2225b \u03a3 \u2212\u03a3\u22252 + 2\u2225\u03a3 \u2212\u03c32Ip\u22252 \u2272\u03bb + r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n \u2272\u03bb, where the first inequality can be obtained by integrating the probability bound in Lemma 7. Therefore, we conclude that E\u2225e \u03a3\u2212\u03a3\u2225\u2272\u03bb \u0012r r n+ \u221ar(r + log n) n\u03b5 \u00b7 r log 2.5 \u03b4 \u0013 + p \u03c32(\u03bb + \u03c32) \u0012rp n+p p (r + log n) n\u03b5 r log 2.5 \u03b4 \u0013 . Similar bounds can also be derived for the nuclear norm distance \u2225e \u03a3 \u2212\u03a3\u2225\u2217and the general Schatten-q norm \u2225e \u03a3 \u2212\u03a3\u2225q. The detailed proof is skipped. 6.6 Proof of Theorem 3 Some preliminary results on the KL-divergence and total variation distance between Gaussian distributions are required. Let N(\u00b51, \u03a31) and N(\u00b52, \u03a32) be two p-dimensional multivariate Gaussians, then KL (N (\u00b51, \u03a31) \u2225N (\u00b52, \u03a32)) = 1 2 \u0012 Tr \u0000\u03a3\u22121 2 \u03a31 \u2212Ip \u0001 + (\u00b52 \u2212\u00b51)\u22a4\u03a3\u22121 2 (\u00b52 \u2212\u00b51) + log \u0012det \u03a32 det \u03a31 \u0013\u0013 . Suppose Ui, Uj \u2208Op,r satisfying \r \rUiU \u22a4 i \u2212UjU \u22a4 j \r \r F \u2264\u03b50. Let \u03bb, \u03c32 \u22650 be constants and define \u03a3i = \u03bbUiU \u22a4 i + \u03c32Ip and \u03a3j = \u03bbUjU \u22a4 j + \u03c32Ip, respectively. 
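The Gaussian KL formula above can be checked numerically against the spiked-model simplification $\frac12\big(\frac{\lambda}{\sigma^2} - \frac{\lambda}{\lambda + \sigma^2}\big)\big[r - \mathrm{Tr}(U_j U_j^\top U_i U_i^\top)\big]$ derived next. A small numpy sketch with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
p, r, lam, sig2 = 10, 3, 2.0, 0.5   # illustrative sizes, not the paper's regimes

def rand_orth(m, k):
    # orthonormal k-frame via QR of a Gaussian matrix
    q, _ = np.linalg.qr(rng.standard_normal((m, k)))
    return q

Ui, Uj = rand_orth(p, r), rand_orth(p, r)
Si = lam * Ui @ Ui.T + sig2 * np.eye(p)
Sj = lam * Uj @ Uj.T + sig2 * np.eye(p)

# general closed form for KL(N(0, Si) || N(0, Sj)); the mean term vanishes
kl_general = 0.5 * (np.trace(np.linalg.solve(Sj, Si)) - p
                    + np.log(np.linalg.det(Sj) / np.linalg.det(Si)))
# spiked-model simplification used in the proof
kl_spiked = 0.5 * (lam / sig2 - lam / (lam + sig2)) * (r - np.trace(Uj @ Uj.T @ Ui @ Ui.T))
assert abs(kl_general - kl_spiked) < 1e-8
```

The log-determinant term vanishes here because $\Sigma_i$ and $\Sigma_j$ share the same spectrum.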
Then it is easy to check that KL \u0000N(0, \u03a31)\u2225N(0, \u03a3j) \u0001 = 1 2 \u0012 Tr \u0000\u03a3\u22121 j \u03a3i \u2212Ip \u0001 + log \u0012det \u03a3j det \u03a3i \u0013\u0013 = 1 2 \u0012 \u03bb \u03c32 \u2212 \u03bb \u03bb + \u03c32 \u0013 \u0002 r \u2212Tr \u0000UjU \u22a4 j UiU \u22a4 i \u0001\u0003 \u2a7d1 2 \u03bb2 \u03c32(\u03c32 + \u03bb)\u03b52 0, and further by Pinsker\u2019s inequality, we have TV (N(0, \u03a3i), N(0, \u03a3j)) \u2264 r 1 2 KL (N(0, \u03a3i)\u2225N(0, \u03a3j)) \u22641 2\u03b50 s \u03bb2 \u03c32(\u03c32 + \u03bb). 27 \fIn order to apply Fano\u2019s lemma, we need to construct a large subset of Op,r within which the elements are well-separated. Towards that end, we apply existing results of the packing number of Grassmannians. Indeed, by (Pajor, 1998, Proposition 8) and (Koltchinskii and Xia, 2015, Lemma 5), for any q \u2208[1, \u221e], there exists an absolute constant c\u2032 > 0 and a subset S(p\u2212r) q \u2282Op\u2212r,r such that for any Vi \u0338= Vj \u2208S(p\u2212r) q , \u2225ViV \u22a4 i \u2212VjV \u22a4 j \u2225q \u2265c\u2032r1/q and the cardinality of S(p\u2212r) q is at least 2r(p\u2212r). Here, \u2225\u00b7 \u2225q denotes the Schatten-q norm of a matrix. In particular, spectral norm is Schatten-\u221enorm, Frobenius norm is Schatten-2 norm, and nuclear norm is Schatten-1 norm. Let \u03b50 > 0 be a small number to be decided later. Now, for each V \u2208S(p\u2212r) q , we define U = \uf8eb \uf8ed p 1 \u2212\u03b52 0Ir p \u03b52 0V \uf8f6 \uf8f8 such that U \u2208Rp\u00d7r and U \u22a4U = Ir. This means that, for any V \u2208S(p\u2212r) q , we can construct a U \u2208Op,r. 
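The lifting $V \mapsto U$ in this packing construction can be verified directly: the lifted frame has orthonormal columns, and the lift scales frame distances exactly by $\varepsilon_0$, i.e., $\|U_i - U_j\|_F = \varepsilon_0 \|V_i - V_j\|_F$. A numpy sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
p, r, eps0 = 12, 3, 0.3

def rand_orth(m, k):
    q, _ = np.linalg.qr(rng.standard_normal((m, k)))
    return q

def lift(V, eps0):
    # U = [ sqrt(1 - eps0^2) I_r ; eps0 V ] for V with orthonormal columns
    k = V.shape[1]
    return np.vstack([np.sqrt(1 - eps0**2) * np.eye(k), eps0 * V])

Vi, Vj = rand_orth(p - r, r), rand_orth(p - r, r)
Ui, Uj = lift(Vi, eps0), lift(Vj, eps0)

assert np.allclose(Ui.T @ Ui, np.eye(r))   # the lifted frame is orthonormal
# the top blocks cancel, so ||Ui - Uj||_F = eps0 ||Vi - Vj||_F exactly
assert np.isclose(np.linalg.norm(Ui - Uj), eps0 * np.linalg.norm(Vi - Vj))
```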
This defines a subset S(p) q \u2282Op,r with Card \u0000S(p) q \u0001 \u22652r(p\u2212r) such that for any Ui \u0338= Uj \u2208S(p) q , \u2225UiU \u22a4 i \u2212UjU \u22a4 j \u2225q \u2265 q \u03b52 0(1 \u2212\u03b52 0)\u2225Vi \u2212Vj\u2225q \u2273 q \u03b52 0(1 \u2212\u03b52 0)\u2225ViV \u22a4 i \u2212VjV \u22a4 j \u2225q \u2273 q \u03b52 0(1 \u2212\u03b52 0)r1/q and, meanwhile, \u2225UiU \u22a4 i \u2212UjU \u22a4 j \u2225F \u2272\u2225Ui \u2212Uj\u2225F \u2264\u03b50\u2225Vi \u2212Vj\u2225F \u2264 \u221a 2r\u03b50. We then consider a family of distributions as P \u0000S(p) q , \u03bb, \u03c32\u0001 = {N(0, \u03a3)\u2297n : \u03a3 = \u03bbUU \u22a4+ \u03c32Ip, U \u2208S(p) q } \u2282P(\u03bb, \u03c32), whose cardinality N := Card \u0010 P \u0000S(p) q , \u03bb, \u03c32\u0001\u0011 \u22652r(p\u2212r). For i \u0338= i\u2032 \u2208[N], the probability measures Pi = N(0, \u03a3i)\u2297n and Pi\u2032 = N(0, \u03a3i\u2032)\u2297n in P \u0000S(p) q , \u03bb, \u03c32\u0001 satisfy X k\u2208[n] TV (N(0, \u03a3i), N(0, \u03a3i\u2032)) \u2272n 2 \u221ar\u03b50 s \u03bb2 \u03c32(\u03c32 + \u03bb), and max i\u0338=j\u2208[N] KL \u0000N(0, \u03a3i)\u2297n\u2225N(0, \u03a3i\u2032)\u2297n\u0001 \u2272n 2 \u03bb2 \u03c32(\u03c32 + \u03bb)\u03b52 0r. 28 \fTo invoke Lemma 6, we define the metric \u03c1 : Op,r\u00d7Op,r 7\u2192R+ as \u03c1(Ui, Uj) := \u2225UiU \u22a4 i \u2212UjU \u22a4 j \u2225q for any q \u2208[1, \u221e] and take \u03c10 \u224d\u03c4\u03b50r1/q, l0 = c0 n 2 \u03bb2 \u03c32(\u03c32 + \u03bb)\u03b52 0r and t0 = c0 n 2 \u221ar\u03b50 s \u03bb2 \u03c32(\u03c32 + \u03bb) for some small absolute constant c0, \u03c4 > 0. Then, by Lemma 6, for any (\u03b5, \u03b4)-DP estimator e U, sup P\u2208P \u0000S(p) q ,\u03bb,\u03c32\u0001 E \r \r \u02dc U \u02dc U \u22a4\u2212UU \u22a4\r \r q \u2a7emax \uf8f1 \uf8f2 \uf8f3 \u03c4\u03b50r1/q 2 1 \u2212 c0 n 2 \u03bb2 \u03c32(\u03c32+\u03bb)\u03b52 0r + log 2 log N ! 
, \u03c4\u03b50r1/q 4 \uf8eb \uf8ed1 \u2227 N \u22121 exp \u0010 4\u03b5c0 n 2 \u221ar\u03b50 q \u03bb2 \u03c32(\u03c32+\u03bb) \u0011 \uf8f6 \uf8f8 \uf8fc \uf8fd \uf8fe, where, for simplicity, we can choose \u03b4 = 0. Recall that N \u22652rp/2 if p \u22652r. We can take \u03b50 \u224d r \u03c32(\u03bb + \u03c32) \u03bb2 rp n + r \u03c32(\u03bb + \u03c32) \u03bb2 p\u221ar n\u03b5 , and get sup P\u2208P \u0000S(p) q ,\u03bb,\u03c32\u0001 E \r \r \r \u02dc U \u02dc U \u22a4\u2212UU \u22a4\r \r \r q \u2273 r \u03c32(\u03bb + \u03c32) \u03bb2 \u00b7 r1/q rp n + r \u03c32(\u03bb + \u03c32) \u03bb2 \u00b7 r 1 2 + 1 q p n\u03b5. Since a trivial upper of \u2225e U e U \u22a4\u2212UU \u22a4\u2225q \u2264(2r)1/q and P \u0000S(p) q , \u03bb, \u03c32\u0001 \u2282P(\u03bb, \u03c32), we conclude that, for any q \u2208[1, \u221e], inf \u02dc U sup P\u2208P(\u03bb,\u03c32) E \r \r \r \u02dc U \u02dc U \u22a4\u2212UU \u22a4\r \r \r q \u2273 \u0012\u03c32 \u03bb + r \u03c32 \u03bb \u0013 \u0012 r1/q rp n + r 1 2 + 1 q p n\u03b5 \u0013 ^ r1/q, where the infimum is taken over all possible (\u03b5, \u03b4)-DP algorithms. Now it suffices to choose q = 1, 2, \u221eto obtain the bounds in nuclear norm, Frobenius norm, and spectral norm, respectively. 6.7 Proof of Theorem 4 Note that the two terms in the minimax lower bounds are contributed by estimating the eigenvalues and eigenvectors, separately. We begin with the term related to estimating eigenvectors. Consider a subset P1(\u03bb, \u03c32) \u2282P(\u03bb, \u03c32) defined by P1(\u03bb, \u03c32) := \u001a N(0, \u03a3) : \u03a3 = \u03bbUU \u22a4+ \u03c32Ip and U \u2208Op,r \u001b , 29 \fwhere we assume \u03bb and \u03c32 are both known. If \u03bb is already known, it suffices to estimate UU \u22a4 differentially privately by an estimator e U e U \u22a4so that we can construct the covariance matrix estimator e \u03a3 = \u03bbe U e U \u22a4+ \u03c32Ip. 
Therefore, inf e \u03a3 sup P\u2208P1(\u03bb,\u03c32) E \r \r\u02dc \u03a3\u2212\u03a3 \r \r \u2265inf e U sup P\u2208P1(\u03bb,\u03c32) \u03bb \u00b7 E \r \re U e U \u22a4\u2212UU \u22a4\r \r \u2273 p \u03c32(\u03bb + \u03c32) \u0010rp n + \u221arp n\u03b5 \u0011 ^ \u03bb, (20) and inf e \u03a3 sup P\u2208P1(\u03bb,\u03c32) E \r \r\u02dc \u03a3\u2212\u03a3 \r \r F \u2265inf e U sup P\u2208P1(\u03bb,\u03c32) \u03bb \u00b7 E \r \re U e U \u22a4\u2212UU \u22a4\r \r F \u2273 p \u03c32(\u03bb + \u03c32) \u0010rpr n + rp n\u03b5 \u0011 ^ \u03bb\u221ar, (21) where the last inequalities in (20) and (21) are both due to Theorem 3. These establish the second term in the minimax lower bounds of Theorem 4. More generally, we can establish the minimax lower bound in Schatten-q norms: inf e \u03a3 sup P\u2208P1(\u03bb,\u03c32) E \r \r\u02dc \u03a3\u2212\u03a3 \r \r q \u2265inf e U sup P\u2208P1(\u03bb,\u03c32) \u03bb \u00b7 E \r \re U e U \u22a4\u2212UU \u22a4\r \r q \u2273 p \u03c32(\u03bb + \u03c32) \u0010 r1/q rp n + pr 1 2 + 1 q n\u03b5 \u0011 ^ \u03bbr1/q, (22) for any q \u2208[1, \u221e]. We now establish the first term in the minimax lower bounds \u03bb \u0000p r/n + r/(n\u03b5) \u0001 , which is contributed by estimating the singular values. It is unrelated to the nuisance variance \u03c32 and the eigenvectors U. Without loss of generality, we can assume \u03c32 = 0 and UV in the format UV = \uf8eb \uf8ed V 0(p\u2212r)\u00d7r \uf8f6 \uf8f8, where V = [V0, V0\u22a5] \u2208Or,r with some V0 \u2208Or,r/4. It is easy to check that U \u22a4 V UV = V \u22a4V = Ir. Define \u039b0 = diag(2\u03bb, \u00b7 \u00b7 \u00b7 , 2\u03bb | {z } r/4 , \u03bb, \u00b7 \u00b7 \u00b7 , \u03bb), which is an r \u00d7 r diagonal matrix. 
For any V = [V0, V0\u22a5] \u2208Or,r with V0 \u2208Or,r/4, we consider the following covariance matrix \u03a3V0 := UV \u039b0U \u22a4 V = \uf8eb \uf8ed\u03bbIr + \u03bbV0V \u22a4 0 0r\u00d7(p\u2212r) 0(p\u2212r)\u00d7r 0(p\u2212r)\u00d7(p\u2212r), \uf8f6 \uf8f8 30 \fTo this end, we define a subset P2(\u03bb) \u2282P(\u03bb, \u03c32) as P2(\u03bb) := \uf8f1 \uf8f2 \uf8f3N(0, \u03a3V0) : \u03a3V0 = \uf8eb \uf8ed\u03bbIr + \u03bbV0V \u22a4 0 0r\u00d7(p\u2212r) 0(p\u2212r)\u00d7r 0(p\u2212r)\u00d7(p\u2212r) \uf8f6 \uf8f8, V0 \u2208Or,r/4 \uf8fc \uf8fd \uf8fe. We are interested in the minimax lower bound: inf e \u03a3 sup P\u2208P2(\u03bb) E\u2225e \u03a3 \u2212\u03a3\u2225 and inf e \u03a3 sup P\u2208P2(\u03bb) E\u2225e \u03a3 \u2212\u03a3\u2225F, where the infimum is taken over all the possible (\u03b5, \u03b4)-DP algorithms. Observe that if a random vector X = (X1, \u00b7 \u00b7 \u00b7 , Xp)\u22a4\u223cN(0, \u03a3V0), it means that only the first r entries of X are random variables since other variables are simply zeros with probability one. This suggests that it suffices to consider the reduced problem of estimating an r \u00d7 r spiked covariance matrix. Towards that end, define a family of distributions of r-dimensional random vector P(0) 2 (\u03bb) := n N(0, \u03a30V0) : \u03a30V0 = \u03bbIr + \u03bbV0V \u22a4 0 , V0 \u2208Or,r/4 o . Given i.i.d. observations X(0) 1 , \u00b7 \u00b7 \u00b7 , X(0) n \u223cP \u2208P(r) 2 (\u03bb), we aim to estimate the covariance matrix \u03a30 with an (\u03b5, \u03b4)-differentially private algorithm. Clearly, by definition, inf e \u03a3 sup P\u2208P2(\u03bb) E\u2225e \u03a3 \u2212\u03a3\u2225q = inf e \u03a30 sup P\u2208P(0) 2 (\u03bb) E\u2225e \u03a30 \u2212\u03a30\u2225q, (23) for any Schatten-q norms. It therefore suffices to study the RHS of (23), which is the differentially private minimax lower bound for estimating an r \u00d7 r spiked covariance matrix with rank r/4. 
Without loss of generality, we can still assume $\lambda$ is known, so we can immediately invoke the bounds (20), (21), (22) with $\sigma^2 = \lambda$ and $r \leftarrow r/4$, $p \leftarrow r$ there, and conclude that
$$\inf_{\widetilde \Sigma_0} \sup_{P \in \mathcal{P}_2^{(0)}(\lambda)} \mathbb{E} \big\| \widetilde \Sigma - \Sigma \big\| \gtrsim \lambda \Big( \sqrt{\frac{r}{n}} + \frac{r^{3/2}}{n\varepsilon} \Big) \wedge \lambda, \qquad (24)$$
and
$$\inf_{\widetilde \Sigma_0} \sup_{P \in \mathcal{P}_2^{(0)}(\lambda)} \mathbb{E} \big\| \widetilde \Sigma - \Sigma \big\|_F \gtrsim \lambda \Big( \frac{r}{\sqrt{n}} + \frac{r^2}{n\varepsilon} \Big) \wedge \lambda \sqrt{r}, \qquad (25)$$
or, more generally,
$$\inf_{\widetilde \Sigma_0} \sup_{P \in \mathcal{P}_2^{(0)}(\lambda)} \mathbb{E} \big\| \widetilde \Sigma - \Sigma \big\|_q \gtrsim \lambda \Big( \frac{r^{\frac12 + \frac1q}}{\sqrt{n}} + \frac{r^{\frac32 + \frac1q}}{n\varepsilon} \Big) \wedge \lambda r^{1/q}, \qquad (26)$$
for any $q \in [1, +\infty]$.

Finally, putting together (20)-(22) and (24)-(26), we conclude that
$$\inf_{\widetilde \Sigma} \sup_{P \in \mathcal{P}(\lambda, \sigma^2)} \mathbb{E} \| \widetilde \Sigma - \Sigma \|_q \gtrsim \Bigg( \lambda \Big( \frac{r^{\frac12 + \frac1q}}{\sqrt{n}} + \frac{r^{\frac32 + \frac1q}}{n\varepsilon} \Big) + \sqrt{\sigma^2 (\lambda + \sigma^2)} \Big( r^{1/q} \sqrt{\frac{p}{n}} + \frac{p\, r^{\frac12 + \frac1q}}{n\varepsilon} \Big) \Bigg) \wedge \lambda r^{1/q}.$$
Now, by setting $q = 1, 2, \infty$, we complete the proof.

6.8 Proof of Corollary 2

It suffices to establish the concentration bounds as in Lemmas 8 and 9. Note that while Lemma 7 is stated for Gaussian distributions, the claimed bounds still hold for sub-Gaussian distributions; see Koltchinskii and Lounici (2017) for more details. As a result, it is easy to check that the upper bounds of $\max_{i \in [n]} \|X_i\|$, $\max_{i \in [n]} \|U^\top X_i\|$, and $\max_{i \in [n]} \|U_\perp^\top X_i\|^2$ stated in Lemma 8 still hold for sub-Gaussian distributions. Similarly, the upper bound of $\|\Delta\|$ in Lemma 9 still holds. However, we must pay particular attention to the bounds in event $\mathcal{E}_1$.

Lemma 12.
Suppose that X1, X\u2032 1, \u00b7 \u00b7 \u00b7 , Xn, X\u2032 n follow sub-Gaussian distribution with the proxycovariance matrix \u03a3 \u2208\u0398(\u03bb, \u03c32), n \u2265C1 \u0000r log(p + n) log2 r + log2 n \u0001 , 2r + C1 log n \u2264p, and \u03bb/\u03c32 \u2265C1p/n for some absolute constant C1 > 0. There exist absolute constants c2, C3 > 0 such that the event E\u2032 1 := ( \r \rU \u22a4\u2206U\u22a5 \r \r + max i\u2208[n] \u2225U \u22a4\u2206(i)U\u22a5\u2225\u2264C3 r \u03c32(\u03bb + \u03c32)p log(p + n) n ) \\ ( max i\u2208[n] \r \rU \u22a4(XiX\u22a4 i /n)U\u22a5 \r \r + \r \rU \u22a4(X\u2032 iX\u2032\u22a4 i /n)U\u22a5 \r \r \u2264C3 p \u03c32(\u03bb + \u03c32)p(r + log n) n ) \\ ( max i\u2208[n] \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5Xi \r \r \r \u2264C3\u03c3 \u00b7 s \u03c32(\u03bb + \u03c32)p(r + log n) log2(p + n) n ) , holds with probability P(E\u2032 1) \u22651 \u22122n\u22129. Meanwhile, E\u2225U \u22a4\u2206U\u22a5\u2225\u2264C3 r \u03c32(\u03bb + \u03c32)p log p n . We then study the sensitivity of eigenvectors and eigenvalues by bounding \u2225e U e U \u22a4\u2212e U (i) e U (i)\u22a4\u2225F and Pp k=1 \u0000\u03bbk(b \u03a3) \u2212\u03bbk(b \u03a3(i)) \u00012, whose proofs are similar to those of Lemmas 3 and 4. We can show that if \u03bb/\u03c32 \u2265C3 \u0000p p/n + p/n \u0001 log(p + n), then in the event E\u2217:= E0 \u2229E\u2032 1 \u2229E\u2206, max i\u2208[n] \r \re U e U \u22a4\u2212e U (i) e U (i)\u22a4\r \r F \u2272 r \u03c32(\u03bb + \u03c32) \u03bb2 \u00b7 p p(r + log n) n . 32 \fBasically, the sensitivity upper bound is the same as in the Gaussian case since it is determined by the first order term maxi\u2208[n] \r \rU \u22a4(XiX\u22a4 i /n)U\u22a5 \r \r = maxi\u2208[n] \u2225U \u22a4Xi\u2225\u2225U \u22a4 \u22a5Xi\u2225/n. 
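The rank-one identity behind the last display, $\|U^\top (X_i X_i^\top / n) U_\perp\| = \|U^\top X_i\| \|U_\perp^\top X_i\| / n$, is just $\sigma_{\max}(ab^\top) = \|a\|\|b\|$. A quick numpy check (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(8)
p, r, n = 10, 3, 50

# orthonormal basis split into U (an r-frame) and its complement U_perp
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
U, U_perp = Q[:, :r], Q[:, r:]

x = rng.standard_normal(p)
M = U.T @ np.outer(x, x) @ U_perp / n        # U^T (x x^T / n) U_perp, an r x (p-r) block

spec = np.linalg.norm(M, 2)                  # largest singular value of M
target = np.linalg.norm(U.T @ x) * np.linalg.norm(U_perp.T @ x) / n
assert np.isclose(spec, target)              # sigma_max(a b^T) = ||a|| ||b||
```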
Similarly, p X k=1 \u0000\u03bbk(b \u03a3) \u2212\u03bbk(b \u03a3(i)) \u00012 \u22642\u2225b \u03a3 \u2212b \u03a3(i)\u22252 \u2264C4 \u0012\u03bb(r + log n) + p\u03c32 n \u00132 , where we used the upper bounds of \u2225Xi\u22252 and \u2225X\u2032 i\u22252 stated in Lemma 8. The rest of the proof is skipped. A Proofs of Additional Results and Lemmas A.1 Proof of Lemma 2 A non-private estimator of the spectral projector UU \u22a4is the empirical spectral projector b U b U \u22a4. To privatize the spectral projector b U b U \u22a4, we introduce randomness according to Gaussian mechanism by Lemma 1. Let Z be a p \u00d7 p symmetric matrix with i.i.d. entries (Z)ij \u223cN \u0012 0, 8\u22062 1 \u03b52 log 2.5 \u03b4 \u0013 , \u22001 \u2264i \u2264j \u2264p. We define a randomized algorithm b P in the way that b P := b U b U + Z. In order to ensure that b P is (\u03b5/2, \u03b4/2)-DP, by Lemma 1, it suffices to choose \u22061 as follows \u22061 = C1 \u0012\u03c32 \u03bb + r \u03c32 \u03bb \u0013p p(r + log n) n , for a large enough but absolute constant C1 > 0. By Lemma 1 and Lemma 3, b P is (\u03b5/2, \u03b4/2)-DP with probability at least 1 \u2212e\u2212c1p \u22123n\u22129 \u221210\u221220\u02dc r. Finally, by the post-processing property of differentially private algorithm, we conclude that e U e U \u22a4is an (\u03b5/2, \u03b4/2)-DP estimator with the same probability. Similarly, according to Lemma 1 and Lemma 4, e \u039b is (\u03b5/2, \u03b4/2)-DP with probability at least 1 \u2212n\u221210 if we choose a large \u22062 as \u22062 = C2 \u03bb(r + log n) + \u03c32p n for some large absolute constant C2 > 0. This is due to the sensitivity e U \u22a4(b \u03a3\u2212\u03c32Ip)e U is bounded by (note that e U is already differentially private) \r \re U \u22a4(b \u03a3 \u2212b \u03a3(i))e U \r \r F \u2264\u2225b \u03a3 \u2212b \u03a3(i)\u2225F \u2264\u22062, 33 \fwhere the last inequality holds in event E0 by Lemma 8. 
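The Gaussian mechanism used here adds symmetric Gaussian noise, calibrated to a sensitivity bound, to the empirical projector, and then post-processes by extracting the top-$r$ eigenvectors. A minimal numpy sketch; the sensitivity value `Delta1` below is a placeholder, not the paper's calibrated constant:

```python
import numpy as np

rng = np.random.default_rng(7)

def symmetric_gaussian_noise(p, sens, eps, delta, rng):
    # i.i.d. N(0, 8 * sens^2 * eps^-2 * log(2.5/delta)) entries, symmetrized
    sd = np.sqrt(8.0 * sens**2 / eps**2 * np.log(2.5 / delta))
    Z = np.triu(rng.normal(0.0, sd, size=(p, p)))
    return Z + np.triu(Z, 1).T

p, r = 10, 3
U_hat, _ = np.linalg.qr(rng.standard_normal((p, r)))
P_hat = U_hat @ U_hat.T                     # empirical spectral projector

Delta1, eps, delta = 0.01, 1.0, 1e-5        # Delta1 is a placeholder sensitivity bound
P_noisy = P_hat + symmetric_gaussian_noise(p, Delta1, eps, delta, rng)

# post-processing: extract the top-r eigenvectors of the noisy projector
w, V = np.linalg.eigh(P_noisy)              # eigenvalues in ascending order
U_tilde = V[:, -r:]
assert np.allclose(U_tilde.T @ U_tilde, np.eye(r))
```

By the post-processing property, the extracted frame inherits the privacy guarantee of the noisy projector.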
Then, the joint mechanism $\widetilde J = (\widetilde U, \widetilde \Lambda)$ is $(\varepsilon, \delta)$-DP with probability at least $1 - e^{-c_1 p} - 4n^{-10} - 10^{-20\widetilde r}$, by the post-processing property of differentially private algorithms. Therefore, we conclude that $\widetilde \Sigma = \widetilde U \widetilde \Lambda \widetilde U^\top + \sigma^2 I_p$ is $(\varepsilon, \delta)$-DP with the same probability.

A.2 Proof of Lemma 5

The proof is similar in spirit to the proof of the representation formula in Xia (2021). Let $\{\widetilde \lambda_i, u_i\}_{i \in [p]}$ be the singular values and singular vectors of $\Sigma$, where $\widetilde \lambda_i = \lambda_i + \sigma^2$ for $i \in [r]$ and $\widetilde \lambda_i = \sigma^2$ for $i > r$. Let $\{\widehat \lambda_i, \widehat u_i\}_{i=1}^p$ denote the singular values and singular vectors of $\widehat \Sigma$. By Weyl's lemma, for all $i \in [p]$,
$$|\widehat \lambda_i - \widetilde \lambda_i| \le \|\widehat \Sigma - \Sigma\| = \|\Delta\|,$$
and thus $\widehat \lambda_i$ must lie within $I(\widetilde \lambda_i, \|\Delta\|) := \big[\widetilde \lambda_i - \|\Delta\|,\ \widetilde \lambda_i + \|\Delta\|\big]$, a closed interval centered at $\widetilde \lambda_i$ with half-width $\|\Delta\|$. Under the condition $\widetilde \lambda_r - \widetilde \lambda_{r+1} > 2\|\Delta\|$, i.e., $\lambda_r/2 \ge \|\Delta\|$, we have $I(\widetilde \lambda_r, \|\Delta\|) \cap I(\widetilde \lambda_{r+1}, \|\Delta\|) = \emptyset$, and therefore there exists a contour $\Gamma$ (see Figure 1) in the complex plane such that $\{\widetilde \lambda_i\}_{i \in [r]} \cup \{\widehat \lambda_i\}_{i \in [r]} \subset \Gamma_D$ but $\{\widetilde \lambda_i\}_{i \in [p] \setminus [r]} \cup \{\widehat \lambda_i\}_{i \in [p] \setminus [r]} \subset \Gamma_D^\complement$, where $\Gamma_D$ is the open region enclosed by $\Gamma$, i.e., $\Gamma = \partial \Gamma_D$.

Figure 1: The contour plot of $\Gamma$. All grey balls share the same radius $\|\Delta\|$.

By Cauchy's integral formula,
34 \f1 2\u03c0i I \u0393 (\u03b7I \u2212\u02c6 \u03a3)\u22121d\u03b7 = r X i=1 1 2\u03c0i I \u0393 d\u03b7 \u03b7 \u2212\u02c6 \u03bbi (b uib u\u22a4 i ) + p X i=r+1 1 2\u03c0i I \u0393 d\u03b7 \u03b7 \u2212\u02c6 \u03bbi (b uib u\u22a4 i ) = r X i=1 b uib u\u22a4 i = b U b U \u22a4. As a result, we have b U b U \u22a4= 1 2\u03c0i I \u0393 (\u03b7I \u2212\u02c6 \u03a3)\u22121d\u03b7, (27) and similarly, UU \u22a4= 1 2\u03c0i I \u0393 (\u03b7I \u2212\u03a3)\u22121d\u03b7. (28) We denote R\u03a3(\u03b7) := (\u03b7I \u2212\u03a3)\u22121 and thus (\u03b7I \u2212\u02c6 \u03a3)\u22121 = (\u03b7I \u2212\u03a3 \u2212\u2206)\u22121 = \u0002 (\u03b7I \u2212\u03a3) \u0000I \u2212R\u03a3(\u03b7)\u2206 \u0001\u0003\u22121 = \u0000I \u2212R\u03a3(\u03b7)\u2206 \u0001\u22121R\u03a3(\u03b7). Since \r \rR\u03a3(\u03b7)\u2206 \r \r \u2264\u2225R\u03a3(\u03b7)\u2225\u2225\u2206\u2225\u22642\u2225\u2206\u2225 \u02dc \u03bbr < 1. the Neumann series of \u0000I \u2212R\u03a3(\u03b7)\u2206 \u0001\u22121 is \u0000I \u2212R\u03a3(\u03b7)\u2206 \u0001\u22121 = I + X k\u22651 [R\u03a3(\u03b7)\u2206]k. (29) Plugging (29) and R\u03a3(\u03b7) = (\u03b7I \u2212\u03a3)\u22121 into (27), we have b U b U \u22a4= 1 2\u03c0i I \u0393 (\u03b7I \u2212\u02c6 \u03a3)\u22121d\u03b7 = 1 2\u03c0i I \u0393 R\u03a3(\u03b7)d\u03b7 + X k\u22651 1 2\u03c0i I \u0393 \u0002 R\u03a3(\u03b7)\u2206 \u0003kR\u03a3(\u03b7)d\u03b7 =UU \u22a4+ X k\u22651 1 2\u03c0i I \u0393 \u0002 R\u03a3(\u03b7)\u2206 \u0003kR\u03a3(\u03b7)d\u03b7. Then we can write b U b U \u22a4\u2212UU \u22a4= X k\u22651 1 2\u03c0i I \u0393 \u0002 R\u03a3(\u03b7)\u2206 \u0003kR\u03a3(\u03b7)d\u03b7. We denote the k-th order perturbation as S\u03a3,k(\u2206) := 1 2\u03c0i I \u0393 \u0002 R\u03a3(\u03b7)\u2206 \u0003kR\u03a3(\u03b7)d\u03b7 (30) 35 \ffor all integer k \u22651 and hence obtain b U b U \u22a4\u2212UU \u22a4= X k\u22651 S\u03a3,k(\u2206). 
(31) We derive the explicit formulas for S\u03a3,1(\u2206) and S\u03a3,2(\u2206) as an appetizer served before showing the explicit formulas for S\u03a3,k(\u2206) with general integer k \u22651. Note that R\u03a3(\u03b7) = p X j=1 1 \u03b7 \u2212\u02dc \u03bbj uju\u22a4 j , where \u02dc \u03bbi = \u03bbi + \u03c32 for i \u2208[r] and \u02dc \u03bbi = \u03c32, for i \u2208[p] \\ [r]. Let Pj = uju\u22a4 j be the spectral projector onto uj, for all j \u2208[p]. Derivation of S\u03a3,1(\u2206). By the definition of S\u03a3,1(\u2206), S\u03a3,1(\u2206) = 1 2\u03c0i I \u0393 R\u03a3(\u03b7)\u2206R\u03a3(\u03b7)d\u03b7 = p X j1=1 p X j2=1 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2) Pj1\u2206Pj2. (32) Case 1: When both j1 and j2 are greater than r, the contour integral in (32) is zero according to Cauchy\u2019s integral formula. Case 2: When only one of j1 and j2 is greater than r. W.L.O.G, we assume j2 > r, then r X j1=1 p X j2>r 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2) Pj1\u2206Pj2 = r X j1=1 X j2>r (\u02dc \u03bbj1 \u2212\u02dc \u03bbj2)\u22121Pj1\u2206Pj2 = r X j1=1 X j2>r (\u03bbj1)\u22121Pj1\u2206Pj2 = Q\u22121\u2206Q\u22a5, where and Q\u22121 = U\u039b\u22121U \u22a4and Q\u22a5= U\u22a5U \u22a4 \u22a5. Case 3: When none of j1 or j2 is greater than r, the contour integral in (32) is zero. In summary, S\u03a3,1(\u2206) = Q\u22121\u2206Q\u22a5+ Q\u22a5\u2206Q\u22121. Derivation of S\u03a3,2(\u2206). By the definition of S\u03a3,2(\u2206), S\u03a3,2(\u2206) = 1 2\u03c0i I \u0393 R\u03a3(\u03b7)\u2206R\u03a3(\u03b7)\u2206R\u03a3(\u03b7)d\u03b7 = p X j1=1 p X j2=1 p X j3=1 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2)(\u03b7 \u2212\u02dc \u03bbj3) Pj1\u2206Pj2\u2206Pj3. (33) 36 \fCase 1: When all j1, j2, j3 are greater than r, the contour integral in (33) is zero by Cauchy\u2019s integral formula. 
Case 2: When two of j1, j2, j3 are greater than r. W.L.O.G., we assume j1 \u2264r and j2, j3 > r, then r X j1=1 p X j2,j3>r 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2)(\u03b7 \u2212\u02dc \u03bbj3) Pj1\u2206Pj2\u2206Pj3 = r X j1=1 p X j2,j3>r 1 (\u02dc \u03bbj1 \u2212\u02dc \u03bbj2)(\u02dc \u03bbj1 \u2212\u02dc \u03bbj3) Pj1\u2206Pj2\u2206Pj3 = Q\u22122\u2206Q\u22a5\u2206Q\u22a5 = r X j1=1 p X j2,j3>r 1 \u03bb2 j1 Pj1\u2206Pj2\u2206Pj3 = Q\u22122\u2206Q\u22a5\u2206Q\u22a5. Case 3: one of j1, j2, j3 is greater than r. W.L.O.G., let j1, j2 \u2264r and j3 > r, we get r X j1,j2=1 p X j3>r 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2)(\u03b7 \u2212\u02dc \u03bbj3) Pj1\u2206Pj2\u2206Pj3 = r X j1=j2=1 p X j3>r 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)2(\u03b7 \u2212\u02dc \u03bbj3) Pj1\u2206Pj1\u2206Pj3 + r X j1\u0338=j2\u22651 p X j3>r 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212\u02dc \u03bbj1)(\u03b7 \u2212\u02dc \u03bbj2)(\u03b7 \u2212\u02dc \u03bbj3) Pj1\u2206Pj2\u2206Pj3 = \u2212 r X j1=1 (\u02dc \u03bbj1 \u2212\u02dc \u03bbj3)\u22122Pj1\u2206Pj1\u2206Q\u22a5\u2212 r X j1\u0338=j2\u22651 (\u02dc \u03bbj1 \u2212\u02dc \u03bbj3)\u22121(\u02dc \u03bbj2 \u2212\u02dc \u03bbj3)\u22121Pj1\u2206Pj2\u2206Q\u22a5 = \u2212 r X j1=1 \u03bb\u22122 j1 Pj1\u2206Pj1\u2206Q\u22a5\u2212 r X j1\u0338=j2\u22651 \u03bb\u22121 j1 \u03bb\u22121 j2 Pj1\u2206Pj2\u2206Q\u22a5 = \u2212Q\u22121\u2206Q\u22121\u2206Q\u22a5. Case 4: When none of j1, j2, j3 is greater than r, the contour integral in (33) is zero. In summary, S\u03a3,2(\u2206) = \u0000Q\u22122\u2206Q\u22a5\u2206Q\u22a5+ Q\u22a5\u2206Q\u22122\u2206Q\u22a5+ Q\u22a5\u2206Q\u22a5\u2206Q\u22122\u0001 \u2212 \u0000Q\u22a5\u2206Q\u22121\u2206Q\u22121 + Q\u22121\u2206Q\u22a5\u2206Q\u22121 + Q\u22121\u2206Q\u22121\u2206Q\u22a5\u0001 . Derivation of S\u03a3,k(\u2206) for general k. 
By the definition of S\u03a3,k(\u2206), S\u03a3,k(\u2206) = p X j1,\u00b7\u00b7\u00b7 ,jk+1\u22651 1 2\u03c0i I \u0393 \u0010 k+1 Y i=1 1 \u03b7 \u2212\u02dc \u03bbji \u0011 d\u03b7Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 Pjk\u2206Pjk+1. (34) 37 \fIn order to deal with each component in the summations (34), we first consider some special cases for ease of the notation. W.L.O.G., we consider the cases where some \u00af k indices from {j1, \u00b7 \u00b7 \u00b7 , jk+1} are not larger than r. More specifically, we restrict our discussion to the case where j1, \u00b7 \u00b7 \u00b7 , j\u00af k \u2264r and j\u00af k+1, \u00b7 \u00b7 \u00b7 , jk+1 > r. By Cauchy integral formula, the integral in (34) is zero once \u00af k = 0 or \u00af k = k + 1 and thus we focus on the non-trivial case \u00af k \u2208[k \u22121]. Note that r X j1,\u00b7\u00b7\u00b7 ,j\u00af k\u22651 p X j\u00af k+1,\u00b7\u00b7\u00b7 ,jk+1>r 1 2\u03c0i I \u0393 \u0010 p Y i=1 1 \u03b7 \u2212\u02dc \u03bbji \u0011 d\u03b7Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 Pjk\u2206Pjk+1 = r X j1,\u00b7\u00b7\u00b7 ,j\u00af k\u22651 p X j\u00af k+1,\u00b7\u00b7\u00b7 ,jk+1>r 1 2\u03c0i I \u0393 \u0010 \u00af k Y i=1 1 \u03b7 \u2212(\u03bbji + \u03c32) \u0011 (\u03b7 \u2212\u03c32) \u00af k\u2212k\u22121d\u03b7Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 Pjk\u2206Pjk+1 = r X j1,\u00b7\u00b7\u00b7 ,j\u00af k\u22651 1 2\u03c0i I \u0393 \u0010 \u00af k Y i=1 1 \u03b7 \u2212(\u03bbji + \u03c32) \u0011 (\u03b7 \u2212\u03c32) \u00af k\u2212k\u22121d\u03b7Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 Pj\u00af k\u2206Q\u22a5\u2206\u00b7 \u00b7 \u00b7 \u2206Q\u22a5. Recall that our goal is to prove S\u03a3,k(\u2206) = X s:s1+\u00b7\u00b7\u00b7+sk+1=k (\u22121)1+\u03c4(s) \u00b7 Q\u2212s1\u2206Q\u2212s2\u2206\u00b7 \u00b7 \u00b7 \u2206Q\u2212sk+1. 
Accordingly, in the above summations, we consider the components, where s1, \u00b7 \u00b7 \u00b7 , s\u00af k \u22651 and s\u00af k+1 = \u00b7 \u00b7 \u00b7 = sk+1 = 0, namely, X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 (\u22121) \u00af k+1Q\u2212s1\u2206\u00b7 \u00b7 \u00b7 \u2206Q\u2212s\u00af k\u2206Q\u22a5\u00b7 \u00b7 \u00b7 \u2206Q\u22a5. It turns out that we need to prove r X j1,\u00b7\u00b7\u00b7 ,j\u00af k\u22651 1 2\u03c0i I \u0393 \u0010 \u00af k Y i=1 1 \u03b7 \u2212(\u03bbji + \u03c32) \u0011 (\u03b7 \u2212\u03c32) \u00af k\u2212k\u22121d\u03b7Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 Pj\u00af k = r X j1,\u00b7\u00b7\u00b7 ,j\u00af k\u22651 X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 (\u22121) \u00af k+1 1 (\u03bbj1)s1 \u00b7 \u00b7 \u00b7 (\u03bbj\u00af k)s\u00af k Pj1\u2206Pj2\u2206\u00b7 \u00b7 \u00b7 \u2206Pj\u00af k. It suffices to prove that for all j = (j1, . . . , j\u00af k) \u2208{1, \u00b7 \u00b7 \u00b7 , r} \u00af k, 1 2\u03c0i I \u0393 \u0010 \u00af k Y i=1 1 \u03b7 \u2212(\u03bbji + \u03c32) \u0011 (\u03b7 \u2212\u03c32) \u00af k\u2212k\u22121d\u03b7 = X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 (\u22121) \u00af k+1 1 (\u03bbj1)s1 \u00b7 \u00b7 \u00b7 (\u03bbj\u00af k)s\u00af k . (35) 38 \fTo prove (35), we rewrite its right hand side. Given any j = (j1, \u00b7 \u00b7 \u00b7 , j\u00af k) \u2208{1, \u00b7 \u00b7 \u00b7 , r} \u00af k, we define Vi(j) := \b 1 \u2264t \u2264\u00af k : jt = i \t \u2200i \u2208[r], as a set that contains all location l \u2208[k + 1] such that \u03bbjl = \u03bbi. For simplicity, we also denote vi(j) = |Vi(j)|. Then, the right hand side of (35) is written as X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 (\u22121) \u00af k+1 1 (\u03bbj1)s1 \u00b7 \u00b7 \u00b7 (\u03bbj\u00af k)s\u00af k = (\u22121) \u00af k+1 X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 \u03bb \u2212P p\u2208V1(j) sp 1 \u00b7 \u00b7 \u00b7 \u03bb \u2212P p\u2208Vr(j) sp r . 
Now, we denote ti(j) = P p\u2208Vi(j) sp for all i \u2208[r] and rewrite the above equation as X s1+\u00b7\u00b7\u00b7+s\u00af k=k sj\u22651 (\u22121) \u00af k+1 1 \u03bbs1 j1 \u00b7 \u00b7 \u00b7 \u03bb s\u00af k j\u00af k = (\u22121) \u00af k+1 X t1(j)+\u00b7\u00b7\u00b7+tr(j)=k ti(j)\u2265vi(j) ti(j)=0 if vi(j)=0 Y i:vi(j)\u22651 \u0012ti(j) \u22121 vi(j) \u22121 \u0013 \u03bb\u2212ti(j) i = (\u22121) \u00af k+1 X t1(j)+\u00b7\u00b7\u00b7+tr(j) =k\u2212\u00af k ti(j)=0 if vi(j)=0 Y i:vi(j)\u22651 \u0012ti(j) + vi(j) \u22121 vi(j) \u22121 \u0013 \u03bb\u2212ti(j)\u2212vi(j) i where the last equality is due to the fact v1(j) + \u00b7 \u00b7 \u00b7 + vr(j) = \u00af k. Similarly, the left hand side of (35) can be written as 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212(\u03bbj1 + \u03c32)) \u00b7 \u00b7 \u00b7 (\u03b7 \u2212(\u03bbj\u00af k + \u03c32))(\u03b7 \u2212\u03c32)k+1\u2212\u00af k = 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212(\u03bb1 + \u03c32))v1(j) \u00b7 \u00b7 \u00b7 (\u03b7 \u2212(\u03bbr + \u03c32))vr(j)(\u03b7 \u2212\u03c32)k+1\u2212\u00af k . Therefore, in order to prove (35), it suffices to prove that for any j = (j1, \u00b7 \u00b7 \u00b7 , j\u00af k) the following equality holds 1 2\u03c0i I \u0393 d\u03b7 (\u03b7 \u2212(\u03bb1 + \u03c32))v1 \u00b7 \u00b7 \u00b7 (\u03b7 \u2212(\u03bbr + \u03c32))vr(\u03b7 \u2212\u03c32)k+1\u2212\u00af k = (\u22121) \u00af k+1 X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 r Y i:vi\u22651 \u0012ti + vi \u22121 vi \u22121 \u0013 \u03bb\u2212ti\u2212vi i , (36) 39 \fwhere we omitted the index j in definitions of vi(j) and ti(j) without causing any confusions. The non-negative numbers v1 + \u00b7 \u00b7 \u00b7 + vr = \u00af k. Let \u03c6(\u03b7) = 1 (\u03b7 \u2212(\u03bb1 + \u03c32))v1 \u00b7 \u00b7 \u00b7 (\u03b7 \u2212(\u03bbr + \u03c32))vr(\u03b7 \u2212\u03c32)k+1\u2212\u00af k , be a function of \u03b7. 
According to the residue theorem, 1 2\u03c0i I \u0393 \u03c6(\u03b7)d\u03b7 = \u2212Res(\u03c6, \u03b7 = \u221e) \u2212Res(\u03c6, \u03b7 = \u03c32). Note that Res(\u03c6, \u03b7 = \u221e) = 0 and it suffices to calculate Res(\u03c6, \u03b7 = \u03c32). Let \u03b3\u03c32 be a contour around \u03b7 = \u03c32 such that none of {\u03bbi + \u03c32}i\u2208[r] lies in \u2202\u03b3\u03c32. Then, Res(\u03c6, \u03b7 = \u03c32) = 1 2\u03c0i I \u03b3\u03c32 \u03c6(\u03b7)d\u03b7. By the Cauchy integral formula for derivatives, we have Res(\u03c6, \u03b7 = \u03c32) = 1 (k \u2212\u00af k)! h r Y i:vi\u22651 (\u03b7 \u2212(\u03bbi + \u03c32))\u2212vii(k\u2212\u00af k)\f \f \f \u03b7=\u03c32 where [ \u00b7 ](k\u2212\u00af k) denotes the (k \u2212\u00af k)-th order derivative in \u03b7. Further, by the general Leibniz rule, Res(\u03c6, \u03b7 = \u03c32) = 1 (k \u2212\u00af k)! X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 (k \u2212\u00af k)! t1!t2! \u00b7 \u00b7 \u00b7 tr! r Y i:vi\u22651 h (\u03b7 \u2212(\u03bbi + \u03c32)\u2212vii(ti)\f \f \f \u03b7=\u03c32 = (\u22121)k\u2212\u00af k X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 r Y i:vi\u22651 vi(vi + 1) \u00b7 \u00b7 \u00b7 (vi + ti \u22121) ti! (\u2212\u03bbi)\u2212vi\u2212ti = (\u22121)k\u2212\u00af k X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 r Y i:vi\u22651 \u0012ti + vi \u22121 vi \u22121 \u0013 (\u2212\u03bbi)\u2212vi\u2212ti = (\u22121)2k\u2212\u00af k X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 r Y i:vi\u22651 \u0012ti + vi \u22121 vi \u22121 \u0013 (\u03bbi)\u2212vi\u2212ti. Therefore, 1 2\u03c0i I \u0393 \u03c6(\u03b7)d\u03b7 = (\u22121) \u00af k+1 X t1+\u00b7\u00b7\u00b7+tr=k\u2212\u00af k ti=0 if vi=0 r Y i:vi\u22651 \u0012ti + vi \u22121 vi \u22121 \u0013 \u03bb\u2212vi\u2212ti i which shows that (36) holds and completes the proof.
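The identity (36) can likewise be verified numerically in a small instance. The sketch below uses illustrative values r = 2, v1 = v2 = 1 (so k̄ = 2), k = 3, λ = (2, 3), σ² = 1, none of which come from the paper; it computes the residue at η = σ² exactly and compares it with the right-hand side:

```python
from fractions import Fraction

# Illustrative instance of identity (36) (values are not from the paper):
# r = 2, v_1 = v_2 = 1 (so kbar = 2), k = 3, lambda = (2, 3), sigma^2 = 1.
lam1, lam2, s2 = Fraction(2), Fraction(3), Fraction(1)
a, b = lam1 + s2, lam2 + s2   # simple poles of phi away from sigma^2

# phi(eta) = 1 / ((eta - a)(eta - b)(eta - sigma^2)^2) has a double pole
# at eta = sigma^2, so Res(phi, sigma^2) = g'(sigma^2) with
# g(eta) = 1 / ((eta - a)(eta - b)).
def g_prime(eta):
    return -(2 * eta - a - b) / ((eta - a) * (eta - b)) ** 2

# The contour integral over Gamma equals -Res(phi, infinity) - Res(phi, sigma^2),
# and Res(phi, infinity) = 0, as in the text.
lhs = -g_prime(s2)

# Right-hand side of (36): (-1)^(kbar+1) * sum over t_1 + t_2 = k - kbar = 1.
rhs = -(lam1 ** -2 * lam2 ** -1 + lam1 ** -1 * lam2 ** -2)

print(lhs == rhs)  # the two sides agree exactly
```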
A.3 Proof of Lemma 6 Recall that the Kullback-Leibler divergence and total variation distance between two probability measures defined on the same measurable space (\u2126, F) are defined by KL(\u00b5\u2225\u03bd) := Z \u2126 log \u0012\u00b5(dx) \u03bd(dx) \u0013 \u00b5(dx) and TV(\u00b5, \u03bd) := sup E\u2208F |\u00b5(E) \u2212\u03bd(E)|. By definition, for any (\u03b5, 0)-DP algorithm A, we have sup P\u2208P EA \u03c1(A, \u03b8(P)) \u2265sup P\u2208Q EA \u03c1(A, \u03b8(P)) \u22651 N N X i=1 \u03c10 2 EAI \u0010 \u03c1 (A, \u03b8(Pi)) > \u03c10 2 \u0011 . Following the proof of the generalized Fano lemma (Devroye, 1987; Yu, 1997), we further reduce the estimation problem to hypothesis testing. Let h : \u0398 \u2192\u0398Q be defined by h(\u03b8) = argmin \u03b8i\u2208\u0398Q \u03c1 (\u03b8i, \u03b8). For all i, j \u2208[N], define pji := EA,X\u223cPjI(h(A) = \u03b8(Pi)), measuring the probability that the algorithm A outputs an estimator closer to \u03b8(Pi) using data actually sampled from distribution Pj. The total type-I error of testing H0 : X \u223cPj versus H1 : X \u223cP \u2208P \\ Pj is \u03b2j := EA,X\u223cPjI(h(A) \u0338= \u03b8(Pj)) = X i\u0338=j pji. Observe that 1 N N X i=1 \u03c10 2 EAI \u0010 \u03c1 (A, \u03b8(Pi)) > \u03c10 2 \u0011 \u2a7e\u03c10 2N N X i=1 EA,X\u223cPiI(h (A) \u0338= \u03b8(Pi)) = \u03c10 2N N X i=1 \u03b2i. Therefore, sup P\u2208P EA \u03c1(A, \u03b8(P)) \u2265\u03c10 2N N X i=1 \u03b2i. It suffices to find a lower bound for PN i=1 \u03b2i. Towards that end, the following lemma is needed. Its proof is relegated to Section B.4. Lemma 13. Let \u00b5 = \u00b51 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u00b5n and \u03bd = \u03bd1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03bdn be two probability measures defined on the same measurable space (\u2126, F). The total variation distance is denoted by tk = TV(\u00b5k, \u03bdk) for all k \u2208[n]. Let A : \u21267\u2192\u0398 be an (\u03b5, \u03b4)-DP random algorithm.
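For intuition about the two quantities just defined, the following minimal sketch computes KL and TV for a pair of discrete distributions (the distributions are arbitrary illustrations) and checks Pinsker's inequality, which links the two divergences:

```python
import numpy as np

# Illustrative discrete distributions p and q on a 3-point alphabet
# (the values are arbitrary, chosen only for demonstration).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

kl = float(np.sum(p * np.log(p / q)))      # KL(p || q)
tv = 0.5 * float(np.sum(np.abs(p - q)))    # sup_E |p(E) - q(E)| = half L1 distance

# Pinsker's inequality relates them: TV <= sqrt(KL / 2).
print(tv <= np.sqrt(kl / 2))
```

For discrete measures the supremum over events in the TV definition is attained by the set where p exceeds q, which gives the half-L1 formula used above.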
Then, for any subset E \u2282\u0398, EA,X\u223c\u00b5I \u0000A(X) \u2208E \u0001 \u2264exp \u0012 4\u03b5 n X k=1 tk \u0013 \u0012 EA,X\u223c\u03bdI \u0000A(X) \u2208E \u0001 + 2\u03b4 e\u03b5 \u22121 \u0013 , 41 \fNow consider any pair Pi, Pj \u2208Q where Pi = \u00b5(1) i \u00d7 \u00b7 \u00b7 \u00b7 , \u00d7\u00b5(n) i and Pj = \u00b5(1) j \u00d7 \u00b7 \u00b7 \u00b7 , \u00d7\u00b5(n) j . By Lemma 13 and the fact pii + P j\u0338=i pij = 1 , we have pji \u2a7eexp \uf8eb \uf8ed\u22124\u03b5 X k\u2208[n] TV \u0010 \u00b5(k) i , \u00b5(k) j \u0011 \uf8f6 \uf8f8pii \u2212 2\u03b4 e\u03b5 \u22121 = exp \uf8eb \uf8ed\u22124\u03b5 X k\u2208[n] TV \u0010 \u00b5(k) i , \u00b5(k) j \u0011 \uf8f6 \uf8f8 1 \u2212 X j\u0338=i pij ! \u2212 2\u03b4 e\u03b5 \u22121 = exp \uf8eb \uf8ed\u22124\u03b5 X k\u2208[n] TV \u0010 \u00b5(k) i , \u00b5(k) j \u0011 \uf8f6 \uf8f8(1 \u2212\u03b2i) \u2212 2\u03b4 e\u03b5 \u22121 \u2265exp (\u22124\u03b5t0) (1 \u2212\u03b2i) \u2212 2\u03b4 e\u03b5 \u22121. Taking summation over i \u2208[N] \\ {j}, we have \u03b2j = X i\u0338=j pji \u2a7eexp (\u22124\u03b5t0) N \u22121 \u2212 X i\u0338=j \u03b2i ! \u22122(N \u22121)\u03b4 e\u03b5 \u22121 Further summing over j \u2208[N], we have N X j=1 \u03b2j \u2a7e N(N \u22121) (N \u22121) + exp (4\u03b5t0) \u00b7 \u0012 1 \u22122\u03b4e4\u03b5t0 e\u03b5 \u22121 \u0013 . Therefore, sup P\u2208P EA \u03c1(A, \u03b8(P)) \u2265\u03c10 2 (N \u22121) (N \u22121) + exp (4\u03b5t0) \u00b7 \u0012 1 \u22122\u03b4e4\u03b5t0 e\u03b5 \u22121 \u0013 \u2265\u03c10 4 \u0012 1 \u2227 N \u22121 exp (4\u03b5t0) \u0013 \u0012 1 \u22122\u03b4e4\u03b5t0 e\u03b5 \u22121 \u0013 . 
Combining the above lower bound with classical generalized Fano\u2019s Lemma (e.g., Tsybakov (2008)), we have inf A\u2208A\u03b5,\u03b4(P) sup P\u2208P EA \u03c1(A, \u03b8(P)) \u2a7emax \u001a\u03c10 2 \u0012 1 \u2212l0 + log 2 log N \u0013 , \u03c10 4 \u0012 1 \u2227 N \u22121 exp (4\u03b5t0) \u0013 \u0012 1 \u22122\u03b4e4\u03b5t0 e\u03b5 \u22121 \u0013\u001b . A.4 Proof of Lemma 11 We begin with discussing the case s = (k, 0, \u00b7 \u00b7 \u00b7 , 0) and l = 1. By definition, we have \u2225T\u03a3,k,s,1 \u0000\u2206\u2212\u2206(i)\u0001 \u2225F = \u2225Q\u2212k \u0000\u2206\u2212\u2206(i)\u0001 Q\u22a5\u2206Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F \u22641 \u03bbr \u2225U \u0000\u2206\u2212\u2206(i)\u0001 U \u22a4 \u22a5\u2225F \u0012\u2225\u2206\u2225 \u03bbr \u0013k\u22121 . 42 \fBy Lemma 9, in the event E\u2217, we get \u2225T\u03a3,k,s,1 \u0000\u2206\u2212\u2206(i)\u0001 \u2225F \u2272 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n \u0012Dmax \u03bbr \u0013k\u22123 . For the cases s = (k, 0, \u00b7 \u00b7 \u00b7 , 0) and 2 \u2264l \u2264k, \u2225T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 \u2225F = \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u0000\u2206\u2212\u2206(i)\u0001 Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F \u2264 \r \r \rQ\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nXiX\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\r \r \r F + \r \r \rQ\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nX\u2032 iX\u2032\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\r \r \r F. We need to control each term in the RHS of above inequality. The proof of the following lemma can be found in the Appendix B. Lemma 14. For all k \u22652 and 2 \u2264l \u2264k, the following bounds hold in the event E\u2217. 
max i\u2208[n] \r \r \rQ\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nXiX\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\r \r \r F \u2272 \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb rp n \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n , and moreover max i\u2208[n] \r \r \rQ\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7Q\u22a51 nX\u2032 iX\u2032\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\r \r \r F \u2272l \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n . Note that the Q\u22a5term appears l\u22121 times before XiX\u22a4 i or X\u2032 iX\u2032\u22a4 i in the above product sequences of matrices. According to Lemma 14, for 2 \u2264l \u2264k, \u2225T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 \u2225F \u2272(l + 1) \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n . Taking the summation over l \u2208[k], we have X l\u2208[k] \u2225T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 \u2225F \u2272(3 + k)k 2 \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n , which holds for s = {k, 0, \u00b7 \u00b7 \u00b7 , 0} and all i \u2208[n]. We now argue that the above bound holds for all s := {s1, \u00b7 \u00b7 \u00b7 , sk+1} such that P j sj = k and the sj\u2019s are non-negative integers. For any s \u2208Ik\\{k, 0, \u00b7 \u00b7 \u00b7 , 0}, there exist at least two subsequences within T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 starting from UU \u22a4 and ending with all other projectors being U\u22a5U \u22a4 \u22a5 (note that the other cases involving U \u22a4(\u2206\u2212\u2206(i))U are smaller terms).
Without loss of generality, we discuss one subsequence where the first projector is Q\u2212\u00af k with \u00af k > 0 and the remaining projectors are all Q\u22a5. Suppose that the target subsequence contains t + 1 projectors (t \u2264k) with indices (sm+1, \u00b7 \u00b7 \u00b7 , sm+t, sm+t+1) for some integer m \u22650 satisfying Pm+t+1 j=m+1 sj = \u00af k and Pm j=1 sj + Pj=k+1 j=m+t+2 sj = k \u2212\u00af k. When m + 1 \u2264l \u2264m + t + 1, the target subsequence is upper bounded via the argument for the case s = (t, 0, \u00b7 \u00b7 \u00b7 , 0). The remaining part in T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 contains k \u2212t projectors and is upper bounded by Dk\u2212t max/\u03bbk\u2212\u00af k. Therefore, \u2225T\u03a3,k,s,l \u0000\u2206\u2212\u2206(i)\u0001 \u2225F \u2272Dk\u2212t max \u03bbk\u2212\u00af k \u00b7 \u03bbt\u2212\u00af k \u00b7 (l + 1) \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n \u2272(l + 1) \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n . The same argument can be applied to the case l \u2208[k + 1] \\ [m + 1 : m + t + 1] and is skipped here. Finally, we conclude that in event E\u2217, X l\u2208[k] \u2225T\u03a3,k,s,l(\u2206\u2212\u2206(i))\u2225F \u2272(3 + k)k 2 \u0012Dmax \u03bbr \u0013k\u22122 \u00b7 \u03c32 \u03bb \u0010rp n + p n \u0011 \u00b7 r \u03c32(\u03bb + \u03c32) \u03bb2 p p(r + log n) n , for k \u22652 and any s \u2208Ik. This concludes the proof of Lemma 11. B Proofs of Technical Lemmas B.1 Proof of Lemma 8 By the orthogonal invariance of the normal distribution, we can simply assume \u03a3 = diag(\u03bb1, \u00b7 \u00b7 \u00b7 , \u03bbp) and as a result, \u2225X\u22252 d. = \u03bb1Z2 1 + \u00b7 \u00b7 \u00b7 + \u03bbpZ2 p, where Z1, \u00b7 \u00b7 \u00b7 , Zp are i.i.d. standard normal random variables.
Note that \u03bbiZ2 i is sub-exponential SE(C1\u03bb2 i , C2\u03bbi) for some absolute constants C1, C2 > 0 meaning that P \u0000|\u03bbiZ2 i \u2212\u03bbi| \u2265t \u0001 \u2264exp \u0012 \u2212c1 min \u001a t2 \u03bb2 i , t \u03bbi \u001b\u0013 , 44 \ffor any t > 0. By the composition property of sub-exponential random variables, we have \u03bb1Z2 1 + \u00b7 \u00b7 \u00b7 + \u03bbpZ2 p \u2208SE \u0010 C1 p X i=1 \u03bb2 i , C2\u03bb1 \u0011 . Therefore, we conclude that P \u0012\f \f \f\u2225X\u22252 \u2212tr(\u03a3) \f \f \f \u2264C1 \u0010 u p X i=1 \u03bb2 i \u00111/2 + C2\u03bb1u \u0013 \u22651 \u2212e\u2212c1u, for any u > 0. The proof of the upper bound \u2225Xi\u22252, \u2225U \u22a4Xi\u22252 and \u2225U \u22a4 \u22a5Xi\u22252 is straightforward and skipped here. B.2 Proof of Lemma 9 Recall that \u2206= b \u03a3 \u2212\u03a3 and \u2206(i) = b \u03a3(i) \u2212\u03a3. By Lemma 7, we have E\u2225\u2206\u2225+ max i\u2208[n] \u2225\u2206(i)\u2225\u2264C1 r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n + r\u03bb + p\u03c32 n ! \u2264C2 r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n , where the last inequality is due to C3r \u2264n and \u03bb/\u03c32 \u2265C3p/n for some large constant C3 > 0. Since tr(\u03a3) \u2264n\u2225\u03a3\u2225, by Lemma 7, we get P \f \f \f\u2225\u2206\u2225\u2212E\u2225\u2206\u2225 \f \f \f \u2264C4 s (\u03bb + \u03c32) \u0000r\u03bb + p\u03c32 + t(\u03bb + \u03c32) \u0001 n ! \u22651 \u2212e\u2212t10\u221220\u02dc r, where \u02dc r denotes the effective rank tr(\u03a3)/\u2225\u03a3\u2225. By choosing t = 0, we conclude that P \u0012 \u2225\u2206\u2225\u2264C4 r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n \u0013 \u22651 \u221210\u221220\u02dc r. To bound \u2225\u2206(i)\u2225, notice that \u2206(i) = \u2206\u2212XiX\u22a4 i /n + X\u2032 iX\u2032\u22a4 i /n so that \u2225\u2206(i)\u2225\u2264\u2225\u2206\u2225+ \u2225Xi\u22252/n + \u2225X\u2032 i\u22252/n. 
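The concentration of ∥X∥² around tr(Σ) described above can be illustrated by a quick Monte Carlo experiment. The eigenvalues, sample size, and deviation threshold below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

# Monte Carlo illustration (not part of the proof): for X ~ N(0, Sigma),
# ||X||^2 = sum_i lambda_i Z_i^2 concentrates around tr(Sigma) at the
# sub-exponential scale described above.
rng = np.random.default_rng(0)
lam = np.array([4.0, 2.0, 1.0, 1.0, 0.5])   # illustrative eigenvalues of Sigma
n_rep = 20000
Z = rng.standard_normal((n_rep, lam.size))
norms_sq = (Z ** 2) @ lam                   # samples of ||X||^2

dev = np.abs(norms_sq - lam.sum())
scale = np.sqrt(np.sum(lam ** 2))           # (sum_i lambda_i^2)^{1/2}
# Deviations beyond a few multiples of `scale` should already be rare.
print(np.mean(dev > 5 * scale))             # small empirical tail frequency
```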
If p \u2265C5 log n and n \u2265C5 log2 n, under the event E0 defined in Lemma 8, for some constant C6 > 0 max i\u2208[n] \u2225\u2206(i)\u2225\u2264C6 r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n . Therefore, there exists an event E\u2206with probability P(E\u2206) \u22651 \u221210\u221220\u02dc r \u2212n\u221210 under which the following bound holds max i\u2208[n] \u2225\u2206\u2225+ \u2225\u2206(i)\u2225\u2264C6 r (\u03bb + \u03c32)(r\u03bb + p\u03c32) n . 45 \fNow we study the bounds in event E1 and begin with U \u22a4\u2206U\u22a5. By definition, we write U \u22a4\u2206U\u22a5= 1 n n X i=1 U \u22a4XiX\u22a4 i U\u22a5. Since Xi \u223cN(0, \u03a3) and by the spiked structure of \u03a3, we have U \u22a4Xi \u223cN \u00000, \u039br \u0001 and U \u22a4 \u22a5Xi \u223cN \u00000, \u03c32Ip\u2212r \u0001 , where \u039br := diag \u0000\u03bb1 + \u03c32, \u00b7 \u00b7 \u00b7 , \u03bbr + \u03c32\u0001 . Let Zi \u223cN(0, Ir), Yi \u223cN(0, Ip\u2212r), i \u2208[n] be independent Gaussian random vectors. Then U \u22a4Xi d. = \u039b1/2 r Zi and U \u22a4 \u22a5Xi d. = \u03c3Yi, i \u2208[n]. As a result, U \u22a4\u2206U\u22a5 d. = \u03c3\u039b1/2 r 1 n n X i=1 ZiY \u22a4 i = \u21d2 \r \rU \u22a4\u2206U\u22a5 \r \r \u2264 p \u03c32(\u03bb + \u03c32) n \r \r \r n X i=1 ZiY \u22a4 i \r \r \r. Denote A := [Z1, \u00b7 \u00b7 \u00b7 , Zn] \u2208Rr\u00d7n and B := [Y1, \u00b7 \u00b7 \u00b7 , Yn]\u22a4\u2208Rn\u00d7(p\u2212r). Since B \u2208Rn\u00d7(p\u2212r) has i.i.d. N(0, 1) entries, we are able to write B = [\u02dc Y1, \u00b7 \u00b7 \u00b7 , \u02dc Yp\u2212r] with the column \u02dc Yi i.i.d. to N(0, Ip\u2212r) for i \u2208[p \u2212r]. We can therefore write Pn i=1 ZiY \u22a4 i = AB. Conditioned on A, the matrix AB = \u0000A\u02dc Y1, \u00b7 \u00b7 \u00b7 , A\u02dc Yp\u2212r \u0001 , has i.i.d. columns obeying the distribution N(0, AA\u22a4). By Lemma 7, P \u0012 \u2225AA\u22a4/n \u2212Ir\u2225\u2264C3 r rt n \u0013 \u22651 \u2212e\u2212t, for any t > 0. 
Therefore, if n \u2265C4r log n, there exists an event F1 with P(F1) \u22651 \u2212n\u221210 under which the following bound holds 9n 10 \u2264\u03bbmin(AA\u22a4) \u2264\u03bbmax(AA\u22a4) \u226411n 10 . Under the event F1, we apply Lemma 7 again and conclude with P \u0012\r \r \rABB\u22a4A\u22a4 p \u2212r \u2212AA\u22a4\r \r \r \u2264 n 100 \u0013 \u22651 \u2212e\u2212c1p, where c1 > 0 is a small constant and used the condition p \u22652r. Finally, we conclude that P \u0010 \u2225AB\u2225\u2264C5 \u221anp \u0011 \u22651 \u2212e\u2212c1p \u2212n\u221210. 46 \fTherefore, \r \rU \u22a4\u2206U\u22a5 \r \r \u2264C5 r p\u03c32(\u03bb + \u03c32) n . (37) To bound \u2225U \u22a4\u2206(i)U\u22a5\u2225, observe that \r \rU \u22a4\u2206(i)U\u22a5 \r \r \u2264 \r \rU \u22a4\u2206U\u22a5 \r \r + \u2225U \u22a4Xi\u2225\u2225U \u22a4 \u22a5Xi\u2225/n + \u2225U \u22a4X\u2032 i\u2225\u2225U \u22a4 \u22a5Xi\u2225/n. Combining (37) and the event E0 defined in Lemma 8, we get P \u0012 \u2225U \u22a4\u2206U\u22a5\u2225+ max i\u2208[n] \u2225U \u22a4\u2206(i)U\u22a5\u2225\u2264C5 r p\u03c32(\u03bb + \u03c32) n \u0013 \u22651 \u2212e\u2212c1p \u2212n\u22129, (38) where we used the fact n \u2265C1(r + log n). Similarly, U \u22a4XiX\u22a4 i U\u22a5 d. = \u03c3\u039b1/2 r ZiY \u22a4 i so that \u2225U \u22a4XiX\u22a4 i U\u22a5\u2225\u2264C p \u03c32(\u03bb + \u03c32)\u2225Zi\u2225\u2225Yi\u2225. Applying Lemma 8 again, we get P \u0010 max i\u2208[n] \u2225U \u22a4XiX\u22a4 i U\u22a5\u2225+ \u2225U \u22a4X\u2032 iX\u2032\u22a4 i U\u22a5\u2225\u2264C5 p \u03c32(\u03bb + \u03c32)p(r + log n) \u0011 \u22651 \u2212n\u221210. (39) We now consider the last terms in the event E1. Note that Xi is independent of P j\u0338=i XiX\u22a4 j . Conditioned on the latter one, we have U \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5Xi \u223cN \u0012 0, \u03c32U \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 \u00101 n X j\u0338=i XjX\u22a4 j \u0011 U \u0013 . 
By the bound (38), we get \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5 \r \r \r \u2264C3 r \u03c32(\u03bb + \u03c32)p n , which holds in event E\u2206. Now applying Lemma 8 where u = C(r + log n), we conclude that in event defined in (38), P \u0012\r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5Xi \r \r \r \u2264\u03c3 \u00b7 r \u03c32(\u03bb + \u03c32)p(r + log n) n \u0013 \u22651 \u221210\u221220rn\u221210. (40) By (38), (39), and (40), we conclude that P(E1) \u22651 \u2212e\u2212c1p \u22122n\u22129. B.3 Proof of Lemma 10 According to Lemma 5, b U b U \u22a4\u2212UU \u22a4= X k\u22651 S\u03a3,k(\u2206). 47 \fFor k \u22652, there exists a term either of the form Q\u22121\u2206Q\u22a5or Q\u22a5\u2206Q\u22121 in each summand of S\u03a3,k(\u2206) and thus \u2225S\u03a3,k(\u2206)\u2225\u2264 \u00122k k \u0013 \u0012\u2225\u2206\u2225 \u03bbr \u0013k\u22121 \r \rQ\u22121\u2206Q\u22a5\r \r \u2264 \u00122k k \u0013 \u0012\u2225\u2206\u2225 \u03bbr \u0013k\u22121 \r \rU \u22a4\u2206U\u22a5 \r \r \u03bbr , where we use the fact \r \rQ\u22121\u2206Q\u22a5\r \r = \r \rU\u039b\u22121U \u22a4\u2206U\u22a5U \u22a4 \u22a5 \r \r \u2264(\u03bbr)\u22121 \r \rU \u22a4\u2206U\u22a5 \r \r . For all integers k \u22651, \u00002(k+1) k+1 \u0001\u00002k k \u0001\u22121 = 2(2k + 1)(k + 1)\u22121 \u22644 and we have \u221e X k=2 \u2225S\u03a3,k(\u2206)\u2225\u2264 \u00124 2 \u0013\u2225\u2206\u2225 \u03bbr \u00b7 \r \rU \u22a4\u2206U\u22a5 \r \r \u03bbr \u221e X k=0 4k \u0012\u2225\u2206\u2225 \u03bbr \u0013k \u22646\u2225\u2206\u2225 \u03bbr \u00b7 \r \rU \u22a4\u2206U\u22a5 \r \r \u03bbr \u221e X k=0 \u0012 4 4 + \u03b4 \u0013k = 6(4 + \u03b4) \u03b4 \u00b7 \u2225\u2206\u2225 \u03bbr \u00b7 \r \rU \u22a4\u2206U\u22a5 \r \r \u03bbr . 
Combining the above result with \u2225S\u03a3,1(\u2206)\u2225= \r \r\u039b\u22121U \u22a4\u2206U\u22a5 \r \r, we have \r \r \rb U b U \u22a4\u2212UU \u22a4\r \r \r \u2264\u2225S\u03a3,1(\u2206)\u2225+ X k\u22652 \u2225S\u03a3,k(\u2206)\u2225\u2264 \r \r\u039b\u22121U \u22a4\u2206U\u22a5 \r \r + 6(4 + \u03b4)\u2225\u2206\u2225 \r \rU \u22a4\u2206U\u22a5 \r \r \u03b4\u03bb2 r . B.4 Proof of Lemma 13 Define, for each k \u2208[n], \u03b7k := \u00b5k \u2227\u03bdk 1 \u2212tk , \u02dc \u00b5k := \u00b5k \u2212\u00b5k \u2227\u03bdk tk , \u02dc \u03bdk := \u03bdk \u2212\u00b5k \u2227\u03bdk tk , which are three probability measures on (\u2126, F). These measures provide the following decomposition of \u00b5k and \u03bdk, \u00b5k = (1 \u2212tk)\u03b7k + tk\u02dc \u00b5k, \u03bdk = (1 \u2212tk)\u03b7k + tk\u02dc \u03bdk. Since \u00b5k and \u03bdk share \u03b7k as the common part, we can make a coupling for \u00b5 = \u00b51 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u00b5n and \u03b7 := \u03b71 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03b7n. Create independent random variables as \u02dc Xk \u223c\u02dc \u00b5k, k \u2208[n] and Zk \u223c\u03b7k, k \u2208[n]. Denote \u02dc X := ( \u02dc X1, \u00b7 \u00b7 \u00b7 , \u02dc Xn) \u223c\u02dc \u00b51 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u02dc \u00b5n, 48 \fZ := (Z1, \u00b7 \u00b7 \u00b7 , Zn) \u223c\u03b71 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03b7n. Let {Wk : Wk \u223cBern(tk)}k\u2208[n] be a collection of independent Bernoulli random variables that are also independent of \u02dc X and Z. Then, Xk := Wk \u00b7 \u02dc Xk + (1 \u2212Wk) \u00b7 Zk, is a random variable with the law \u00b5k for all k \u2208[n]. As a result, the law of random vector X := (X1, \u00b7 \u00b7 \u00b7 , Xn)\u22a4is \u00b5 = \u00b51 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u00b5n. Therefore, (X, Z) is a coupling for \u00b5 and \u03b7. 
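The common-part decomposition used in this coupling can be checked exactly for discrete distributions. The sketch below, with arbitrary illustrative µ and ν, verifies that µ = (1 − t)η + t µ̃ and ν = (1 − t)η + t ν̃ with t = TV(µ, ν):

```python
import numpy as np

# Exact check (illustrative discrete distributions, not from the paper) of the
# common-part decomposition used above: with t = TV(mu, nu),
#   mu = (1 - t) * eta + t * mu_tilde,   nu = (1 - t) * eta + t * nu_tilde,
# where eta is the normalized overlap (mu ^ nu) / (1 - t).
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.5, 0.3])

t = 0.5 * np.sum(np.abs(mu - nu))   # total variation distance
common = np.minimum(mu, nu)         # pointwise minimum mu ^ nu, total mass 1 - t
eta = common / (1 - t)
mu_tilde = (mu - common) / t
nu_tilde = (nu - common) / t

print(np.allclose((1 - t) * eta + t * mu_tilde, mu))
print(np.allclose((1 - t) * eta + t * nu_tilde, nu))
```

Sampling Z ~ η, X̃ ~ µ̃, and a Bernoulli(t) switch W, as in the proof, then reproduces the marginal µ exactly.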
Note that X = W \u2299\u02dc X + (1\u22a4 n \u2212W) \u2299Z, where 1n is an n-dimensional all one vector and \u2299denotes the element-wise product. Since f(t) = e2\u03b5t \u2212te\u03b5 + t \u22121 \u22650 for any t \u2208[0, 1] and \u03b5 \u2208[0, \u221e), we get EW exp \u0012 \u03b5 X k\u2208[n] Wk \u0013 = Y k\u2208[n] [(1 \u2212tk) + tke\u03b5] \u2264exp \u0012 2\u03b5 X k\u2208[n] tk \u0013 . (41) Finally, we have EA,X\u223c\u00b5I \u0000A(X) \u2208E \u0001 = EX\u223c\u00b5EAI \u0000A(X) \u2208E \u0001 = E(X,Z,W)EAI \u0000A(X) \u2208E \u0001 = EWE \u02dc XEZEAI \u0010 A \u0000W \u2299\u02dc X + (1n \u2212W) \u2299Z \u0001 \u2208E \u0011 = E \u02dc XEZ X W=w P(W = w)EAI \u0010 A \u0000w \u2299\u02dc X + (1n \u2212w) \u2299Z \u0001 \u2208E \u0011 \u2264E \u02dc XEZ X W=w P(W = w) \uf8eb \uf8edexp \uf8eb \uf8ed\u03b5 X k\u2208[n] wk \uf8f6 \uf8f8EAI \u0000A(Z) \u2208E \u0001 + \u03b4 e\u03b5 \u22121 \u00b7 exp \uf8eb \uf8ed\u03b5 X k\u2208[n] wk \uf8f6 \uf8f8 \uf8f6 \uf8f8 = EW exp \uf8eb \uf8ed\u03b5 X k\u2208[n] Wk \uf8f6 \uf8f8\u00b7 \u0012 EZEAI \u0000A(Z) \u2208E \u0001 + \u03b4 e\u03b5 \u22121 \u0013 \u2264exp \u0012 2\u03b5 X k\u2208[n] tk \u0013 \u0012 EA,Z\u223c\u03b7I \u0000AZ \u2208E \u0001 + \u03b4 e\u03b5 \u22121 \u0013 , where the first inequality is due to the (\u03b5, \u03b4)-differential privacy composition property of A (notice that Z and \u03c9 \u2299\u02dc X + (1n \u2212\u03c9) \u2299Z differs by 1\u22a4 n w entries) and the last inequality is by (41). 49 \fIn the same fashion, one can also establish a relation: EA,X\u223c\u03b7I \u0000A(X) \u2208E \u0001 \u2264exp \u0012 2\u03b5 X k\u2208[n] tk \u0013 \u0012 EA,X\u223c\u03bdI \u0000A(X) \u2208E \u0001 + \u03b4 e\u03b5 \u22121 \u0013 , which completes the proof. B.5 Proof of Lemma 12 By definition, we write U \u22a4\u2206U\u22a5= 1 n n X i=1 U \u22a4XiX\u22a4 i U\u22a5. Denote ai = U \u22a4Xi \u2208Rr and bi = U \u22a4 \u22a5Xi \u2208Rp\u2212r. 
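The moment bound (41) can be checked numerically. The sketch below uses arbitrary illustrative values of ε and tk; the pointwise bound (1 − t) + t e^ε ≤ e^{2εt} is verified here only for ε ≤ 1, where it holds because e^ε − 1 ≤ 2ε in that range:

```python
import math

# Numerical check (arbitrary illustrative values) of the moment bound (41):
# E exp(eps * sum_k W_k) = prod_k [(1 - t_k) + t_k e^eps] <= exp(2 eps sum_k t_k).
eps = 0.5
ts = [0.1, 0.3, 0.05, 0.25]

mgf = math.prod((1 - t) + t * math.exp(eps) for t in ts)
bound = math.exp(2 * eps * sum(ts))
print(mgf <= bound)

# The pointwise inequality (1 - t) + t e^eps <= e^{2 eps t} on a grid of
# moderate (eps, t) values, where e^eps - 1 <= 2 eps.
ok = all((1 - t) + t * math.exp(e) <= math.exp(2 * e * t)
         for e in (0.1, 0.5, 1.0)
         for t in (0.0, 0.25, 0.5, 0.75, 1.0))
print(ok)
```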
Therefore, ai and bi are sub-Gaussian random vectors with proxy-covariance matrices U \u22a4\u03a3U and U \u22a4 \u22a5\u03a3U\u22a5, respectively. However, ai and bi can be dependent for sub-Gaussian distributions. Nevertheless, we can write U \u22a4\u2206U\u22a5= 1 n n X i=1 aib\u22a4 i , which is a sum of independent rank-one random matrices. To this end, we apply the matrix Bernstein concentration inequality developed by Koltchinskii (2011). Observe that \r \rEaib\u22a4 i biai \r \r \u2264E \r \raib\u22a4 i biai \r \r = max \u2225v\u2225\u22641 E\u2225bi\u22252\u27e8ai, v\u27e92 \u2264E1/2\u2225bi\u22254 \u00b7 sup \u2225v\u2225\u22641 \u27e8ai, v\u27e94 \u2272p\u03c32 \u00b7 (\u03bb + \u03c32), where we used the facts ai, bi are sub-Gaussian random vectors. Similarly, we can get \r \rEbia\u22a4 i aib\u22a4 i \r \r \u2264E\u2225bia\u22a4 i aib\u22a4 i \u2225\u2272r(\u03bb + \u03c32) \u00b7 \u03c32. Moreover, \r \r \r \r \raib\u22a4 i \r \r \r \r \r \u03c81 \u2272\u2225ai\u2225\u03c82\u2225bi\u2225\u03c82 \u2272 p (\u03bb + \u03c32)\u03c32rp. By (Koltchinskii, 2011, Proposition 2), with probability at least 1 \u2212(p + n)\u221210, \r \rU \u22a4\u2206U\u22a5 \r \r \u2264C3 r \u03c32(\u03bb + \u03c32)p log(p + n) n + p \u03c32(\u03bb + \u03c32)rp log(p + n) log r n \u2264C3 r \u03c32(\u03bb + \u03c32)p log(p + n) n , (42) where the last inequality holds since n \u2265r log p log2 r. We can also get E\u2225U \u22a4\u2206U\u22a5\u2225\u2264C3 r \u03c32(\u03bb + \u03c32)p log p n . 50 \fMoreover, since \r \r \rU \u22a4(XiX\u22a4 i /n)U\u22a5 \r \r \r \u2264\u2225ai\u2225\u2225bi\u2225/n, we get, with probability at least 1 \u2212(p + n)\u22129, that max i\u2208[n] \r \r \rU \u22a4(XiX\u22a4 i /n)U\u22a5 \r \r \r + \r \r \rU \u22a4(X\u2032 iX \u2032\u22a4 i /n)U\u22a5 \r \r \r \u2264C3 p \u03c32(\u03bb + \u03c32)p(r + log n) n , (43) where we used the condition log n = O(p). 
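The 1/√n decay of the cross term ∥n⁻¹ Σᵢ aᵢbᵢ⊤∥ controlled above can be seen in simulation. The dimensions and seed below are arbitrary illustrations, and Gaussian aᵢ, bᵢ are used for simplicity, whereas the text only assumes sub-Gaussianity:

```python
import numpy as np

# Monte Carlo sketch (illustrative dimensions, not from the paper) of the
# cross term n^{-1} sum_i a_i b_i^T analyzed above: a sum of independent
# rank-one matrices whose spectral norm shrinks at roughly the sqrt(p/n) rate.
rng = np.random.default_rng(1)
r, p_minus_r = 3, 50

def cross_norm(n):
    A = rng.standard_normal((n, r))           # rows play the role of a_i
    B = rng.standard_normal((n, p_minus_r))   # rows play the role of b_i
    return np.linalg.norm(A.T @ B / n, ord=2) # spectral norm of the cross term

small, large = cross_norm(200), cross_norm(20000)
print(large < small)   # the norm decays as n grows
```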
In the event where (42) and (43) hold, we have max i\u2208[n] \u2225U \u22a4\u2206(i)U\u22a5\u2225\u2264\u2225U \u22a4\u2206U\u22a5\u2225+ max i\u2208[n] \r \r \rU \u22a4(XiX\u22a4 i /n)U\u22a5 \r \r \r + \r \r \rU \u22a4(X\u2032 iX \u2032\u22a4 i /n)U\u22a5 \r \r \r \u2272 r \u03c32(\u03bb + \u03c32)p log(p + n) n . Since Xi is independent of P j\u0338=i XjX\u22a4 j , max i\u2208[n] \r \r \r \rU \u22a4 \u00121 n X j\u0338=i XjX\u22a4 j \u0013 U\u22a5U \u22a4 \u22a5Xi \r \r \r \r \u2264max i\u2208[n] \r \r \r \rU \u22a4 \u00121 n X j\u0338=i XjX\u22a4 j \u0013 U\u22a5 \r \r \r \r\u03c3 \u00b7 p (r + log n) log n \u2272\u03c3 \u00b7 r \u03c32(\u03bb + \u03c32)p(r + log n) log(p + n) log n n , where the first inequality holds with probability at least 1 \u2212n\u221210. B.6 Proof of Lemma 14 Simple case: k = 2. It suffices to consider l = k = 2 and to bound the following terms \r \r \rU \u22a4\u2206U\u22a5U \u22a4 \u22a5 \u00101 nX\u2032 iX\u2032\u22a4 i \u0011 U\u22a5 \r \r \r F and \r \r \rU \u22a4\u2206U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F. We only prove the upper bound for \u2225U \u22a4\u2206U\u22a5U \u22a4 \u22a5 \u0000XiX\u22a4 i /n \u0001 U\u22a5\u2225F since the other term can be bounded in exactly the same way. We decompose the term into two parts for decoupling purposes: \r \r \rU \u22a4\u2206U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F \u2264 \r \r \rU \u22a4\u00101 nXiX\u22a4 i \u0011 U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F + \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F.
By Lemma 9 and in event E\u2217, we have \r \r \rU \u22a4\u00101 nXiX\u22a4 i \u0011 U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F \u2264 \r \r \rU \u22a4\u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r 2 F \u2272 \u0012p \u03c32(\u03bb + \u03c32)p(r + log n) n \u00132 (44) 51 \fMeanwhile, the other term can be written as \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F = \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 1 \u221anXi \r \r \r \r \r \r 1 \u221anU \u22a4 \u22a5Xi \r \r \r \u2272 \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 1 \u221anXi \r \r \r \u00b7 \u03c3 rp n where the last inequality is due to Lemma 8. Finally, applying Lemma 9, we get in the event E\u2217that \r \r \rU \u22a4\u00101 n X j\u0338=i XjX\u22a4 j \u0011 U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 U\u22a5 \r \r \r F \u2272\u03c32 rp n \u00b7 p \u03c32(\u03bb + \u03c32)(r + log n) n . (45) The bound in (45) clearly dominates that in (44) under the conditions in Lemma 3. Finally, we conclude that in event E\u2217, \r \r \rQ\u22122\u2206U\u22a5U \u22a4 \u22a5 \u00101 nXiX\u22a4 i \u0011 Q\u22a5\r \r \r F \u2272\u03c32 \u03bb rp n \u00b7 p \u03c32(\u03bb + \u03c32)(r + log n) n\u03bb . The cases k \u22653. For any 2 \u2264l \u2264k, we aim to bound \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nXiX\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F and \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nX\u2032 iX\u2032\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F, where there exist l \u22121 Q\u22a5factors before XiX\u22a4 i or X\u2032 1X\u2032\u22a4 i . 
We begin with the first term and note the following simple fact \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nXiX\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F \u22641 n\u2225U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5XiX\u22a4 i U\u22a5\u2225F Dk\u2212l max \u03bbk r \u2264\u2225U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5Xi\u2225\u00b7 \u03c3\u221ap n Dk\u2212l max \u03bbk r , where the last inequality is due to Lemma 8 and recall Dmax is the upper bound for \u2225\u2206\u2225and \u2225\u2206(i)\u2225in event E\u2217. Since \u2206(i) is independent of Xi, conditioned on \u2206(i), we have U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5Xi \u223cN(0, \u03c32U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5\u2206(i)U \u22a4). Observe that \r \rU \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5 \r \r \u2264\u2225U \u22a4\u2206(i)U\u22a5\u2225\u00b7 \u2225\u2206(i)\u2225l\u22122 \u2264C2 r (\u03bb + \u03c32)p\u03c32 n \u00b7 Dl\u22122 max, 52 \fwhere the last inequality is due to Lemma 9. Similarly as the case k = 2, we apply Lemma 8 with u = C(r + log n) and conclude, in the event E\u2217, that max i\u2208[n] \u2225U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5Xi\u2225\u2264\u03c32 rp n \u00b7 Dl\u22122 max p (\u03bb + \u03c32)(r + log n). Therefore, we conclude, in the event E\u2217, that \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nXiX\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F \u2272 \u0010Dmax \u03bb \u0011k\u22122 \u00b7 \u03c32 \u03bb rp n \u00b7 p \u03c32(\u03bb + \u03c32)(r + log n) n\u03bb . 
Next, we prove the upper bound of \r \r \rQ\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u00101 nX\u2032 iX\u2032\u22a4 i \u0011 Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\r \r \r F. It is slightly more complicated since now \u2206(i) and X\u2032 iX\u2032\u22a4 i are dependent. However, only one term in the summands of \u2206(i) involves XiX\u2032\u22a4 i . Observe first that \u2225Q\u2212k\u2206(i)Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a51 nX\u2032 iX\u2032\u22a4 i Q\u22a5\u00b7 \u00b7 \u00b7 Q\u22a5\u2206Q\u22a5\u2225F \u22641 n\u2225U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5\u2225F \u2225\u2206\u2225k\u2212l \u03bbk . For decoupling purpose, we write U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5as the summation of l terms. U \u22a4\u2206(i)U\u22a5U \u22a4 \u22a5\u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 = U \u22a4 \u00121 nX\u2032 iX\u2032\u22a4 i \u0013 U\u22a5U \u22a4 \u22a5\u2206(i) \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5\u2206(i)U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 + U \u22a4\u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5 \u00121 nX\u2032 iX\u2032\u22a4 i \u0013 \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5\u2206(i)U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 + \u00b7 \u00b7 \u00b7 + U \u22a4\u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5 \u0000\u2206\u2212i\u0001 \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5 \u00121 nX\u2032 iX\u2032\u22a4 i \u0013 U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 + U \u22a4\u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5 \u0000\u2206\u2212i\u0001 \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5 \u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 =: g1 + g2 + \u00b7 \u00b7 \u00b7 + gl, where \u2206\u2212i := n\u22121 P j\u0338=i XjX\u22a4 j . 
Clearly, the term gl can be bounded in the same fashion as before and we conclude that in event E\u2217, \u2225gl\u2225F \u2272\u03c32\u221apDl\u22122 max \u00b7 r \u03c32(\u03bb + \u03c32)p(r + log n) n . On the other hand, for m = 2, \u00b7 \u00b7 \u00b7 , l \u22121, in event E\u2217, \u2225gm\u2225F \u22641 n \r \r \r U \u22a4\u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5 \u0000\u2206\u2212i\u0001 \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5 | {z } product of \u2206\u2212i for m\u22121 times X\u2032 i \r \r \r \r \r \r X\u2032\u22a4 i Q\u22a5\u2206(i) \u00b7 \u00b7 \u00b7 Q\u22a5X\u2032 iX\u2032\u22a4 i U\u22a5 | {z } product of \u2206(i) for l\u2212m\u22121 times \r \r \r \u2272 \r \r \r U \u22a4\u0000\u2206\u2212i\u0001 U\u22a5U \u22a4 \u22a5 \u0000\u2206\u2212i\u0001 \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5 | {z } product of \u2206\u2212i for m\u22121 times X\u2032 i \r \r \r \u00b7 Dl\u2212m\u22121 max \u03c33p3/2 n \u2272Dl\u22123 max\u03c34p3/2 n \u00b7 r \u03c32(\u03bb + \u03c32)p(r + log n) n \u2272\u03c32\u221apDl\u22122 max \u00b7 r \u03c32(\u03bb + \u03c32)p(r + log n) n , where the second inequality is due to Lemma 8, the third inequality is derived similarly to the case k = 2, and the last inequality is due to \u03bb/\u03c32 \u2265C1(p/n + p p/n). For the term g1, we write \u2225g1\u2225F \u22641 n \r \rU \u22a4X\u2032 iX\u2032\u22a4 i U\u22a5U \u22a4 \u22a5\u2206(i) \u00b7 \u00b7 \u00b7 U\u22a5U \u22a4 \u22a5\u2206(i)U\u22a5U \u22a4 \u22a5X\u2032 i \r \r\r \rU \u22a4 \u22a5X\u2032 i \r \r \u2272\u03c32 p n \u00b7 \r \rU \u22a4X\u2032 iX\u2032\u22a4 i U\u22a5 \r \r \u00b7 Dl\u22122 max \u2272\u03c32 p n \u00b7 Dl\u22122 max p \u03c32(\u03bb + \u03c32)p(r + log n), which holds in event E\u2217.
Combining the bounds on $g_1, \\cdots, g_l$, we conclude that in event $E^{*}$,
$$\\max_{i\\in[n]}\\big\\|U^{\\top}\\Delta^{(i)}U_{\\perp}U_{\\perp}^{\\top}\\cdots U_{\\perp}U_{\\perp}^{\\top}X_i'X_i'^{\\top}U_{\\perp}\\big\\| \\lesssim l\\,D_{\\max}^{l-2}\\sigma^2\\Big(\\sqrt{\\frac{p}{n}}+\\frac{p}{n}\\Big)\\sqrt{\\sigma^2(\\lambda+\\sigma^2)p(r+\\log n)}.$$
Finally, it implies that
$$\\Big\\|Q^{-k}\\Delta^{(i)}Q_{\\perp}\\cdots Q_{\\perp}\\Big(\\frac{1}{n}X_i'X_i'^{\\top}\\Big)Q_{\\perp}\\cdots Q_{\\perp}\\Delta Q_{\\perp}\\Big\\|_F \\lesssim l\\Big(\\frac{D_{\\max}}{\\lambda}\\Big)^{l-2}\\frac{\\sigma^2}{\\lambda}\\Big(\\frac{p}{n}+\\sqrt{\\frac{p}{n}}\\Big)\\cdot\\frac{\\sqrt{\\sigma^2(\\lambda+\\sigma^2)p(r+\\log n)}}{\\lambda n},$$
which holds for all $i\\in[n]$ in event $E^{*}$ and completes the proof." + }, + { + "url": "http://arxiv.org/abs/2303.07152v1", + "title": "Score Attack: A Lower Bound Technique for Optimal Differentially Private Learning", + "abstract": "Achieving optimal statistical performance while ensuring the privacy of\npersonal data is a challenging yet crucial objective in modern data analysis.\nHowever, characterizing the optimality, particularly the minimax lower bound,\nunder privacy constraints is technically difficult.\n To address this issue, we propose a novel approach called the score attack,\nwhich provides a lower bound on the differential-privacy-constrained minimax\nrisk of parameter estimation. The score attack method is based on the tracing\nattack concept in differential privacy and can be applied to any statistical\nmodel with a well-defined score statistic. It can optimally lower bound the\nminimax risk of estimating unknown model parameters, up to a logarithmic\nfactor, while ensuring differential privacy for a range of statistical\nproblems. 
We demonstrate the effectiveness and optimality of this general\nmethod in various examples, such as the generalized linear model in both\nclassical and high-dimensional sparse settings, the Bradley-Terry-Luce model\nfor pairwise comparisons, and nonparametric regression over the Sobolev class.", + "authors": "T. Tony Cai, Yichen Wang, Linjun Zhang", + "published": "2023-03-13", + "updated": "2023-03-13", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.CR", + "cs.LG", + "stat.ME", + "stat.ML", + "stat.TH", + "62F30, 62J12, 62G05" + ], + "main_content": "Introduction With the vast amount of data being generated by individuals, businesses, and governments, statistical and machine learning algorithms are widely employed to facilitate informed decision-making in domains such as healthcare, finance, public policy, transportation, education, and scientific discoveries. The extensive use of algorithms underscores the importance of safeguarding data privacy. As a result, the differential privacy framework [19, 20] for privacy-preserving data processing has garnered substantial attention. Notably, the US Census Bureau utilized differentially private methods for the first time in the 2020 US Census [28] to publish demographic data. (Footnotes: \u2217Department of Statistics and Data Science, the Wharton School, University of Pennsylvania, tcai@wharton.upenn.edu. The research of Tony Cai was supported in part by NSF Grant DMS-2015259 and NIH grant R01-GM129781. \u2020Independent researcher, wangyichen2012@gmail.com. The research of Yichen Wang was conducted prior to and outside of his current employment at Amazon.com Services LLC. \u2021Rutgers University, linjun.zhang@rutgers.edu. The research of Linjun Zhang was supported in part by NSF Grant DMS-2015378.)
In essence, a di\ufb00erentially private algorithm protects data privacy by ensuring that an observer of the algorithm\u2019s output cannot ascertain the presence or absence of any individual record in the input dataset. The design and analysis of di\ufb00erentially private algorithms is a rapidly evolving research \ufb01eld, with many di\ufb00erentially private solutions available in the literature for essential statistical and machine learning problems. These include mean estimation [10, 32, 33, 15], top-k selection [8, 56], linear regression [61, 15], multiple testing [24], causal inference [38, 37], and deep learning [1, 46]. Achieving optimal statistical performance while preserving privacy is a challenging yet crucial objective in modern data analysis. While desirable for many reasons, di\ufb00erential privacy imposes a constraint on algorithms and may compromise their accuracy in statistical inference. In the decision-theoretical framework, the accuracy of parameter estimation is often measured by the minimax risk, which is de\ufb01ned as the best possible worst-case performance among all procedures. When the class of procedures considered is limited to di\ufb00erentially private ones, we arrive at the privacy-constrained minimax risk, which represents the optimal statistical performance among all di\ufb00erentially private methods in the worst-case scenario. The di\ufb00erence between the unconstrained minimax risk and the privacy-constrained minimax risk quanti\ufb01es the cost of di\ufb00erential privacy, or the amount of accuracy that is inevitably lost due to di\ufb00erential privacy, regardless of how well the di\ufb00erentially private algorithm is designed. Characterizing the minimax risk under privacy constraints is technically di\ufb03cult, and there have been active e\ufb00orts to quantify the cost of di\ufb00erential privacy, in such problems as mean estimation [10, 32, 33, 15], top-k selection [8, 56], linear regression [15], and so on. 
A key step in establishing minimax theory, whether constrained or unconstrained, is the derivation of minimax lower bounds. In the classical unconstrained setting, several effective lower bound techniques have been developed in the literature, including Le Cam's two-point argument, Assouad's Lemma, and Fano's Lemma. (See [36, 59] for more detailed discussions of minimax lower bound arguments.) However, these methods are not directly applicable to the privacy-constrained setting, and new technical tools are needed. Another line of work [10, 34, 2, 3] lower bounds the privacy-constrained minimax risk using differentially private analogs of the traditional Le Cam's, Fano's, and Assouad's inequalities. Similar to the original versions, these differentially private analogs can, in principle, be applied to general estimation and testing problems and lead to tight lower bound results in discrete distribution estimation [2, 3], but their effectiveness in other statistical problems has yet to be fully explored. In this paper, we introduce a general technique named the \u201cscore attack\u201d to establish lower bounds on the privacy-constrained minimax risk. The method is applicable to any statistical model with a well-defined score statistic, which is simply the gradient of the log-likelihood function with respect to the model parameters. After presenting the technique in general terms in Section 2, we use it to derive precise privacy-constrained minimax lower bounds across four statistical models: the low-dimensional generalized linear models (GLMs), the Bradley-Terry-Luce model for pairwise comparisons, the high-dimensional sparse GLMs, and nonparametric regression. 1.1 Main Results and Our Contribution The score attack technique. The score attack technique generalizes the \u201ctracing adversary\u201d argument, which was first developed by [14, 23].
It has been further applied to various statistical problems, including sharp lower bounds for classical Gaussian mean estimation and linear regression [32, 15], as well as lower bounds for high-dimensional sparse mean estimation and linear regression [56, 15]. In these previous works, the design of tracing attacks is largely ad hoc and specific to statistical models such as the Gaussian or Beta-Binomial; a general principle for designing attacks has not been observed. Although some promising proposals have been made in this direction [50, 43], it is unclear whether the suggested attacks in these works actually imply any lower bound results. The proposed score attack technique is a general method for lower bounding the privacy-constrained minimax risk in statistical models that have a well-defined score statistic, which is the gradient of the log-likelihood function with respect to the model parameters. As explained in Section 2, the score attack method reduces lower bounding the privacy-constrained minimax risk to computing the score statistic and choosing an appropriate prior distribution over the parameter space. This approach is reminiscent of the classical method of lower bounding the minimax risk by the Bayes risk. Optimal differentially private algorithms. In this paper, we establish the minimax optimal rate of convergence, up to a logarithmic factor, under the differential privacy constraint for four statistical estimation problems, namely parameter estimation in low-dimensional generalized linear models (GLMs), the Bradley-Terry-Luce (BTL) model, the high-dimensional sparse GLMs, and nonparametric regression over the Sobolev class. We design optimal algorithms that ensure differential privacy by leveraging established techniques in differential privacy, such as the Laplace and Gaussian mechanisms [20], the K-norm mechanism [27], and differentially private optimization methods [12, 11, 18, 35].
In each of the four problems, we use the score attack technique to establish minimax lower bounds, demonstrating the sharpness of these bounds and the versatility of the score attack method. The main results are summarized as follows. \u2022 Low-dimensional GLMs: Theorem 3.1 presents a minimax lower bound for estimating the parameters and Theorem 3.2 shows that this lower bound is achieved, up to a logarithmic factor, by a noisy gradient descent algorithm. \u2022 BTL model for pairwise comparisons: Similarly, Theorem 4.1 establishes a minimax lower bound for parameter estimation and Theorem 4.2 shows that this lower bound can be attained up to a logarithmic factor by an objective perturbation algorithm. \u2022 High-dimensional sparse GLMs: Theorem 5.1 proves a minimax lower bound which scales only logarithmically with the total dimension and linearly with the sparsity, and Theorem 5.2 shows that this minimax lower bound can be achieved up to a logarithmic factor by an iterative hard-thresholding algorithm. \u2022 Nonparametric regression over the Sobolev class: unlike the previous problems, where the number of parameters is \ufb01nite, this problem deals with estimating an entire function with a di\ufb00erential privacy guarantee. Here, we establish a matching lower bound in Theorem 6.1 and an upper bound in Theorem 6.2 for the minimax mean integrated squared risk. 1.2 Related Work There is a relatively large body of literature on di\ufb00erentially private GLMs, particularly the logistic regression model [17, 18, 64, 51, 52, 6, 7]; in particular, [64] studied sparse logistic regression with di\ufb00erential privacy via the perspective of graphical models. Our paper, while inspired by previous work, is distinct from previous research in its emphasis on parameter estimation accuracy rather than the excess risk of the solution. 
For problems of ranking based on pairwise comparisons, several papers [49, 29, 51, 40, 63] investigated differentially private rank aggregation. However, to the best of our knowledge, no prior research has explored optimal differentially private parameter estimation in the BTL model. For nonparametric function estimation under differential privacy, [62] and [39] studied the convergence rate of noisy histogram estimators, but did not investigate optimality or explore the lower bound. On the other hand, [26] introduced general mechanisms for releasing differentially private functional data, while [10] proposed a minimax optimal differentially private histogram estimator for Lipschitz functions. 1.3 Organization of the Paper The rest of the paper is organized as follows. We finish this section with notational conventions used in this paper. Section 2 describes differential privacy and the privacy-constrained minimax risk in technical terms, and formulates the score attack method for general parametric distribution families. The general formulation is then specialized to four examples: the low-dimensional GLMs in Section 3, the Bradley-Terry-Luce model in Section 4, the high-dimensional sparse GLMs in Section 5, and finally nonparametric regression in Section 6. We discuss possible extensions in Section 7, present the proof of one main result in Section 8, and defer the rest of the proofs to the supplement [16] due to the space limit. 1.4 Notation For real-valued sequences $\\{a_n\\}, \\{b_n\\}$, we write $a_n \\lesssim b_n$ if $a_n \\le cb_n$ for some universal constant $c \\in (0, \\infty)$, and $a_n \\gtrsim b_n$ if $a_n \\ge c'b_n$ for some universal constant $c' \\in (0, \\infty)$. We say $a_n \\asymp b_n$ if $a_n \\lesssim b_n$ and $a_n \\gtrsim b_n$. $c, C, c_0, c_1, c_2, \\cdots$, and so on refer to universal constants in the paper, with their specific values possibly varying from place to place.
For a vector $v \\in \\mathbb{R}^d$ and a subset $S \\subseteq [d]$, $v_S$ denotes the restriction of the vector $v$ to the index set $S$. Define $\\mathrm{supp}(v) := \\{j \\in [d] : v_j \\neq 0\\}$. $\\|v\\|_p$ denotes the vector $\\ell_p$ norm for $1 \\le p \\le \\infty$, with the additional convention that $\\|v\\|_0$ denotes the number of non-zero coordinates of $v$. For a square matrix $A$, $\\lambda_j(A)$ refers to its $j$th smallest eigenvalue, and $\\lambda_{\\max}(A)$, $\\lambda_{\\min}(A)$ refer to its largest and smallest eigenvalues respectively. For a function $f : \\mathbb{R} \\to \\mathbb{R}$, $\\|f\\|_\\infty$ denotes the essential supremum of $|f|$. For $t \\in \\mathbb{R}$ and $R > 0$, let $\\Pi_R(t)$ denote the projection of $t$ onto the closed interval $[-R, R]$. 2 The Score Attack This section presents the general framework for the score attack, so that the high-level concept is not obscured when we examine specific models later in the paper. We commence by defining the privacy-constrained minimax risk in Section 2.1 and then introduce the score attack in Section 2.2. 2.1 Differential Privacy and the Minimax Risk The notion of differential privacy formalizes an intuitive idea: an algorithm $M$ compromises the privacy of an input data set $X$ if an observer of the output $M(X)$ can infer, better than by randomly guessing, whether an individual datum $x$ belongs to the input $X$ or not. A differentially private algorithm $M$ therefore guarantees that, for every pair of data sets $X$ and $X'$ that differ by a single datum (\u201cadjacent data sets\u201d), the probability distributions of $M(X)$ and of $M(X')$ are close to each other. Definition 1 (Differential Privacy [20]).
A randomized algorithm $M : \\mathcal{X}^n \\to \\mathcal{R}$ is $(\\varepsilon, \\delta)$-differentially private if for every pair of adjacent data sets $X, X' \\in \\mathcal{X}^n$ that differ by one individual datum and every measurable $S \\subseteq \\mathcal{R}$,
$$\\mathbb{P}(M(X) \\in S) \\le e^{\\varepsilon} \\cdot \\mathbb{P}(M(X') \\in S) + \\delta,$$
where the probability measure $\\mathbb{P}$ is induced by the randomness of $M$ only. If an algorithm is $(\\varepsilon, \\delta)$-differentially private for small values of $\\varepsilon, \\delta \\ge 0$, the distributions of $M(X)$ and $M(X')$ are almost indistinguishable. The popularity of differential privacy in applications lies partially in the ease of constructing differentially private algorithms. For example, adding random noise often suffices to achieve differential privacy for many non-private algorithms. Example 2.1 (The Laplace and Gaussian Mechanisms [20, 21]). Let $M : \\mathcal{X}^n \\to \\mathbb{R}^d$ be an algorithm that is not necessarily differentially private. \u2022 Suppose $\\sup_{X, X'\\text{ adjacent}} \\|M(X) - M(X')\\|_1 < B < \\infty$. For $w \\in \\mathbb{R}^d$ with its coordinates $w_1, w_2, \\cdots, w_d$ i.i.d. $\\sim \\mathrm{Laplace}(B/\\varepsilon)$, $M(X) + w$ is $(\\varepsilon, 0)$-differentially private. \u2022 If instead we have $\\sup_{X, X'\\text{ adjacent}} \\|M(X) - M(X')\\|_2 < B < \\infty$, for $w \\sim N_d(0, \\sigma^2 I)$ with $\\sigma^2 = 2B^2\\log(2/\\delta)/\\varepsilon^2$, $M(X) + w$ is $(\\varepsilon, \\delta)$-differentially private. That is, if a non-private algorithm's output is not too sensitive to changing any single datum in the input data set, perturbing the algorithm's output with Laplace or Gaussian noise produces a differentially private algorithm. Differential privacy is a desirable property, but it is also a constraint that may come at the expense of statistical accuracy. It is important to understand the effect, or cost, of the differential privacy constraint on statistical accuracy, which is naturally measured by the privacy-constrained minimax risk.
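The two noise-addition mechanisms of Example 2.1 can be sketched in a few lines of NumPy; this is our own minimal illustration (function names and the toy mean-release example are ours, not the paper's), using the Gaussian variance $2B^2\\log(2/\\delta)/\\varepsilon^2$ stated above.

```python
import numpy as np

def laplace_mechanism(value, l1_sensitivity, epsilon, rng=None):
    """(epsilon, 0)-DP release: add Laplace(B/epsilon) noise coordinate-wise,
    where B bounds the l1 change of `value` across adjacent data sets."""
    rng = np.random.default_rng() if rng is None else rng
    value = np.asarray(value, dtype=float)
    return value + rng.laplace(0.0, l1_sensitivity / epsilon, size=value.shape)

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """(epsilon, delta)-DP release with variance 2 B^2 log(2/delta) / eps^2,
    where B bounds the l2 change of `value` across adjacent data sets."""
    rng = np.random.default_rng() if rng is None else rng
    value = np.asarray(value, dtype=float)
    sigma = np.sqrt(2.0 * l2_sensitivity**2 * np.log(2.0 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=value.shape)

# Toy usage: privately release the mean of n values in [0, 1];
# replacing one datum moves the mean by at most 1/n.
x = np.random.default_rng(0).uniform(size=1000)
private_mean = laplace_mechanism(x.mean(), l1_sensitivity=1.0 / len(x), epsilon=1.0)
```

The design point is the one made in the text: the only model-specific input is a bound on how much the non-private statistic can change when a single datum changes.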
The formal definition of the minimax risk consists of the following elements. \u2022 $\\{f_\\theta : \\theta \\in \\Theta\\}$ is a family of statistical models supported over $\\mathcal{X}$. \u2022 $X = \\{x_1, x_2, \\cdots, x_n\\}$ is an i.i.d. sample drawn from $f_{\\theta^*}$ for some unknown $\\theta^* \\in \\Theta$, and $M : \\mathcal{X}^n \\to \\Theta$ is an estimator of $\\theta^*$. \u2022 $\\ell : \\Theta \\times \\Theta \\to \\mathbb{R}_+$ is a metric on $\\Theta$ and $\\rho : \\mathbb{R}_+ \\to \\mathbb{R}_+$ is an increasing function. Then, the (statistical) risk of $M$ is given by $\\mathbb{E}\\rho(\\ell(M(X), \\theta^*))$, where the expectation is taken over the data distribution $f_{\\theta^*}$ and the randomness of the estimator $M$. Because the risk $\\mathbb{E}\\rho(\\ell(M(X), \\theta^*))$ depends on the unknown $\\theta^*$ and can be minimized by choosing $M(X) \\equiv \\theta^*$, a more sensible measure of performance is the maximum risk over the entire class of distributions $\\{f_\\theta : \\theta \\in \\Theta\\}$, namely $\\sup_{\\theta\\in\\Theta} \\mathbb{E}\\rho(\\ell(M(X), \\theta))$. The minimax risk of estimating $\\theta \\in \\Theta$ is then given by
$$\\inf_M \\sup_{\\theta\\in\\Theta} \\mathbb{E}\\rho(\\ell(M(X), \\theta)). \\quad (2.1)$$
By definition, this quantity characterizes the best possible worst-case performance that an estimator can hope to achieve over the class of models $\\{f_\\theta : \\theta \\in \\Theta\\}$. In this paper, we study a privacy-constrained minimax risk: letting $\\mathcal{M}_{\\varepsilon,\\delta}$ be the collection of all $(\\varepsilon, \\delta)$-differentially private algorithms mapping from $\\mathcal{X}^n$ to $\\Theta$, we consider
$$\\inf_{M\\in\\mathcal{M}_{\\varepsilon,\\delta}} \\sup_{\\theta\\in\\Theta} \\mathbb{E}\\rho(\\ell(M(X), \\theta)). \\quad (2.2)$$
As $\\mathcal{M}_{\\varepsilon,\\delta}$ is a proper subset of all possible estimators, the privacy-constrained minimax risk defined above is at least as large as the unconstrained minimax risk, with the difference between the two minimax risks (2.1) and (2.2) being the \u201ccost of privacy\u201d.
Either the unconstrained minimax risk (2.1) or the constrained one (2.2) is often characterized from two opposing directions. While analyzing the risk of any concrete algorithm for every $\\theta \\in \\Theta$ leads to an upper bound on the minimax risk, lower bounding the minimax risk requires reasoning abstractly about all estimators and understanding their fundamental limits at estimating the parameter $\\theta$. The score attack provides a general and effective method for lower bounding the privacy-constrained minimax risk. 2.2 The Score Attack The score attack is a type of tracing attack [14, 23, 22]. A tracing attack is an algorithm which takes a single \u201ccandidate\u201d datum as input and attempts to infer whether this candidate belongs to a given data set or not, by comparing the candidate with some summary statistics computed from the data set. Statisticians may envision a tracing attack as a hypothesis test which rejects the null hypothesis that the candidate is out of the data set when some test statistic takes a large value. This hypothesis testing formulation motivates some desirable properties for a tracing attack. \u2022 Soundness (type I error control): if the candidate does not belong to the data set, the tracing attack is likely to take small values. \u2022 Completeness (type II error control): if the candidate does belong, the tracing attack is likely to take large values. For example, [23, 32, 15] show that, if the random sample $X$ and the candidate $z$ are drawn from a Gaussian distribution with mean $\\mu$, a tracing attack of the form $\\langle M(X) - \\mu, z - \\mu\\rangle$ is sound and complete provided that $M(X)$ is an accurate estimator of $\\mu$.
It is this accuracy requirement that connects tracing attacks with risk lower bounds for differentially private algorithms: if an estimator $M(X)$ is differentially private, it cannot possibly be too close to the estimand, or the existence of tracing attacks leads to a contradiction with the guarantees of differential privacy. Designing sound and complete tracing attacks, therefore, is crucial to the sharpness of privacy-constrained minimax lower bounds. Besides the Gaussian mean tracing attack mentioned above, there are some successful tracing attacks proposed for specific problems, such as top-$k$ selection [56] or linear regression [15], but a general recipe for the design and analysis of tracing attacks has not been available. The score attack is a form of tracing attack applicable to general parametric families of distributions. Given a parametric family of distributions $\\{f_\\theta(x) : \\theta \\in \\Theta\\}$ with $\\Theta \\subseteq \\mathbb{R}^d$, the score statistic, or simply the score, is given by $S_\\theta(x) := \\nabla_\\theta \\log f_\\theta(x)$. If $x \\sim f_\\theta$, we have $\\mathbb{E}S_\\theta(x) = 0$ and $\\mathrm{Var}\\,S_\\theta(x) = I(\\theta)$, where $I(\\theta)$ is the Fisher information matrix of $f_\\theta$. Based on the score statistic, the score attack is defined as
$$A_\\theta(z, M(X)) := \\langle M(X) - \\theta, S_\\theta(z)\\rangle. \\quad (2.3)$$
The score attack conjectures that $z$ belongs to $X$ for large values of $A_\\theta(z, M(X))$. In particular, if $f_\\theta(x)$ is the density of $N(\\theta, I)$, the score attack coincides with the tracing attacks for Gaussian means studied in [23, 32, 15]. As argued earlier, a tracing attack should ideally be \u201csound\u201d (low type I error probability) and \u201ccomplete\u201d (low type II error probability). This is indeed the case for our score attack (2.3). Theorem 2.1. Let $X = \\{x_1, x_2, \\cdots, x_n\\}$ be an i.i.d. sample drawn from $f_\\theta$.
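The soundness/completeness contrast is easy to see numerically in the Gaussian special case mentioned above, where the score of $N(\theta, I_d)$ at $z$ is $S_\theta(z) = z - \theta$ and (2.3) recovers the Gaussian tracing attack. The following simulation is ours, not the paper's code; the estimator choice (sample mean) and all names are assumptions for illustration.

```python
import numpy as np

def score_attack(estimate, theta, score):
    # A_theta(z, M(X)) = <M(X) - theta, S_theta(z)>, equation (2.3)
    return float(np.dot(estimate - theta, score))

rng = np.random.default_rng(0)
d, n = 200, 50
theta = np.zeros(d)
X = rng.normal(theta, 1.0, size=(n, d))
estimate = X.mean(axis=0)          # a (non-private) estimator M(X)

# In-sample candidates vs. independent fresh draws from the same model.
in_scores = [score_attack(estimate, theta, x - theta) for x in X]
out_scores = [score_attack(estimate, theta, rng.normal(theta, 1.0) - theta)
              for _ in range(n)]
```

With these sizes the in-sample attack values concentrate around $d/n$ while the out-of-sample values concentrate around 0, which is exactly the gap that a differentially private $M$ cannot exhibit too strongly.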
For each $i \\in [n]$, let $X_i'$ denote an adjacent data set of $X$ obtained by replacing $x_i$ with an independent copy $x_i' \\sim f_\\theta$. 1. Soundness: for each $i \\in [n]$,
$$\\mathbb{E}A_\\theta(x_i, M(X_i')) = 0; \\qquad \\mathbb{E}|A_\\theta(x_i, M(X_i'))| \\le \\sqrt{\\mathbb{E}\\|M(X)-\\theta\\|_2^2}\\,\\sqrt{\\lambda_{\\max}(I(\\theta))}. \\quad (2.4)$$
2. Completeness: if for every $j \\in [d]$, $\\log f_\\theta(X)$ is continuously differentiable with respect to $\\theta_j$ and $|\\frac{\\partial}{\\partial\\theta_j}\\log f_\\theta(X)| < g_j(X)$ such that $\\mathbb{E}|g_j(X)M(X)_j| < \\infty$, we have
$$\\sum_{i\\in[n]}\\mathbb{E}A_\\theta(x_i, M(X)) = \\sum_{j\\in[d]}\\frac{\\partial}{\\partial\\theta_j}\\mathbb{E}M(X)_j. \\quad (2.5)$$
Theorem 2.1 is proved in Section 8.1. The special form of \u201ccompleteness\u201d for Gaussian and Beta-Binomial families has been discovered as the \u201cfingerprinting lemma\u201d in the literature [57, 14, 56, 32]. It may not be clear yet how the soundness and completeness properties would imply lower bounds for $\\mathbb{E}\\|M(X)-\\theta\\|_2^2$. For the specific attacks designed for Gaussian mean estimation [32] and top-$k$ selection [56], it has been observed that, if $M$ is an $(\\varepsilon, \\delta)$-differentially private algorithm, one can prove inequalities of the form $\\mathbb{E}A_\\theta(x_i, M(X)) \\le \\mathbb{E}A_\\theta(x_i, M(X_i')) + O(\\varepsilon)\\mathbb{E}|A_\\theta(x_i, M(X_i'))|$. Suppose such relations hold for the score attack as well; the soundness property (2.4) would then imply
$$\\sum_{i\\in[n]}\\mathbb{E}A_\\theta(x_i, M(X)) \\le \\sqrt{\\mathbb{E}\\|M(X)-\\theta\\|_2^2}\\cdot n\\sqrt{\\lambda_{\\max}(I(\\theta))}\\,O(\\varepsilon).$$
On the other hand, if we can also bound $\\sum_{i\\in[n]}\\mathbb{E}A_\\theta(x_i, M(X))$ from below by some positive quantity, a lower bound for $\\mathbb{E}\\|M(X)-\\theta\\|_2^2$ is immediately implied. Completeness may help us in this regard: when $\\mathbb{E}M(X)_j$ is close to $\\theta_j$, it is reasonable to expect that $\\frac{\\partial}{\\partial\\theta_j}\\mathbb{E}M(X)_j$ is bounded away from zero.
Indeed several versions of this argument, often termed \u201cstrong distribution\u201d, exist in the literature [23, 55] and have led to lower bounds for Gaussian mean estimation and top-$k$ selection. In Section 2.2.2, we suggest a systematic approach to lower bounding $\\frac{\\partial}{\\partial\\theta_j}\\mathbb{E}M(X)_j$ via Stein's Lemma [53, 54]. The results in Sections 2.2.1 and 2.2.2, combined with Theorem 2.1, will enable us to later prove concrete minimax lower bounds for a variety of statistical problems. 2.2.1 Score Attack and Differential Privacy In Theorem 2.1, we have found that, when the data set $X_i'$ does not include $x_i$, the score attack is unlikely to take large values:
$$\\mathbb{E}A_\\theta(x_i, M(X_i')) = 0; \\qquad \\mathbb{E}|A_\\theta(x_i, M(X_i'))| \\le \\sqrt{\\mathbb{E}\\|M(X)-\\theta\\|_2^2}\\,\\sqrt{\\lambda_{\\max}(I(\\theta))}.$$
If $M$ is differentially private, the distribution of $M(X_i')$ is close to that of $M(X)$; as a result, the inequalities above can be related to the case where the data set $X$ does include the candidate $x_i$. Proposition 2.1. If $M$ is an $(\\varepsilon, \\delta)$-differentially private algorithm with $0 < \\varepsilon < 1$ and $\\delta \\ge 0$, then for every $T > 0$,
$$\\mathbb{E}A_\\theta(x_i, M(X)) \\le 2\\varepsilon\\sqrt{\\mathbb{E}\\|M(X)-\\theta\\|_2^2}\\,\\sqrt{\\lambda_{\\max}(I(\\theta))} + 2\\delta T + \\int_T^{\\infty}\\mathbb{P}(|A_\\theta(x_i, M(X))| > t)\\,dt. \\quad (2.6)$$
Proposition 2.1 is proved in Section 8.1.1. The quantity on the right side of (2.6) is determined by the statistical model $f_\\theta(x)$ and the choice of $T$. 2.2.2 Score Attack and Stein's Lemma Let us denote $\\mathbb{E}_{X|\\theta}M(X)$ by $g(\\theta)$; then $g$ is a map from $\\Theta$ to $\\Theta$, and we are interested in bounding $\\frac{\\partial}{\\partial\\theta_j}g_j(\\theta)$ from below. Stein's Lemma [53, 54] is helpful. Lemma 2.1 (Stein's Lemma). Let $Z$ be distributed according to some density $p(z)$ which is supported on $[a, b]$ for some $-\\infty \\le a < b \\le \\infty$ and continuously differentiable over $(a, b)$.
Suppose a function $h : [a, b] \\to \\mathbb{R}$ is differentiable and satisfies $\\mathbb{E}|h'(Z)| < \\infty$ and $\\mathbb{E}|h(Z)p'(Z)/p(Z)| < \\infty$; then
$$\\mathbb{E}h'(Z) = \\mathbb{E}\\Big[\\frac{-h(Z)p'(Z)}{p(Z)}\\Big] + h(b^-)p(b^-) - h(a^+)p(a^+), \\quad (2.7)$$
where $h(b^-), p(b^-)$ are the left limits of $h$ and $p$ at $b$, and $h(a^+), p(a^+)$ are the right limits of $h$ and $p$ at $a$. In particular, if $p(z) = (2\\pi)^{-1/2}e^{-z^2/2}$, we have $\\mathbb{E}h'(Z) = \\mathbb{E}Zh(Z)$. Stein's Lemma implies that, by imposing appropriate prior distributions on $\\theta$, one can obtain a lower bound for $\\frac{\\partial}{\\partial\\theta_j}g_j(\\theta)$ on average over the prior distribution of $\\theta$, as follows. Proposition 2.2. Let $\\theta$ be distributed according to a density $\\pi$ with marginal densities $\\{\\pi_j\\}_{j\\in[d]}$. If for every $j \\in [d]$, $\\pi_j, g_j$ satisfy the regularity conditions in Lemma 2.1 and additionally each $\\pi_j$ converges to 0 at the endpoints of its support, we have
$$\\mathbb{E}_\\pi\\Big(\\sum_{j\\in[d]}\\frac{\\partial}{\\partial\\theta_j}g_j(\\theta)\\Big) \\ge \\mathbb{E}_\\pi\\Big(\\sum_{j\\in[d]}\\frac{-\\theta_j\\pi_j'(\\theta_j)}{\\pi_j(\\theta_j)}\\Big) - \\sqrt{\\mathbb{E}_\\pi\\|g(\\theta)-\\theta\\|_2^2\\,\\mathbb{E}_\\pi\\Big[\\sum_{j\\in[d]}\\Big(\\frac{\\pi_j'(\\theta_j)}{\\pi_j(\\theta_j)}\\Big)^2\\Big]}. \\quad (2.8)$$
Proposition 2.2 is proved in Section 8.1.2. Despite the cumbersome expression, the right side is in fact convenient: often we may assume that $\\mathbb{E}_\\pi\\|g(\\theta)-\\theta\\|_2^2 \\le \\mathbb{E}_\\pi\\mathbb{E}_{X|\\theta}\\|M(X)-\\theta\\|_2^2 < C$ for some constant $C$ when the sample size $n$ is sufficiently large; the right side is then completely determined by the choice of $\\pi$, for example: Example 2.2. Let $\\pi$ be the density of $N(0, I)$; then (2.8) reduces to
$$\\mathbb{E}_\\pi\\Big(\\sum_{j\\in[d]}\\frac{\\partial}{\\partial\\theta_j}g_j(\\theta)\\Big) \\ge \\sum_{j\\in[d]}\\mathbb{E}_{\\pi_j}\\theta_j^2 - \\sqrt{C}\\sqrt{\\sum_{j\\in[d]}\\mathbb{E}_{\\pi_j}\\theta_j^2} = d - \\sqrt{Cd} \\gtrsim d.$$
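The Gaussian special case of Stein's Lemma quoted above, $\mathbb{E}h'(Z) = \mathbb{E}Zh(Z)$ for $Z \sim N(0,1)$, is easy to verify by Monte Carlo; the check below is ours (the test function $h = \tanh$ is an arbitrary smooth, bounded choice).

```python
import numpy as np

# Monte Carlo check of the Gaussian case of Stein's Lemma:
# for Z ~ N(0, 1) and differentiable h, E[h'(Z)] = E[Z h(Z)].
rng = np.random.default_rng(0)
z = rng.normal(size=2_000_000)

h = np.tanh                               # smooth bounded test function
h_prime = lambda t: 1.0 / np.cosh(t) ** 2  # h'(t) = sech^2(t)

lhs = h_prime(z).mean()    # estimate of E[h'(Z)]
rhs = (z * h(z)).mean()    # estimate of E[Z h(Z)]
```

The two estimates agree to within Monte Carlo error, which is the mechanism Proposition 2.2 exploits coordinate-by-coordinate under a prior $\pi$.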
In view of the completeness property (2.5), Proposition 2.2 suggests an average lower bound for $\\sum_{i\\in[n]}\\mathbb{E}A_\\theta(x_i, M(X))$ over some prior distribution $\\pi(\\theta)$, with the specific form of this average lower bound entirely determined by the choice of $\\pi$. This connection between the lower bound and choosing a prior over the parameter space may be reminiscent of the familiar fact that the Bayes risk always lower bounds the minimax risk, which is the exact reasoning we rely on to finish our minimax lower bound argument. In addition to the standard regularity conditions of Stein's Lemma, Proposition 2.2 assumes that the marginal priors all converge to zero at the boundary of their supports, in order to simplify the right side of (2.8) and highlight the main idea. For those prior distributions not satisfying the vanishing assumption, Proposition 2.2 can be readily extended by adding the last two terms on the right side of Stein's Lemma, equation (2.7), to the right side of equation (2.8). This extension is carried out in Section 5.1 for truncated normal priors and in Section 6.1 for uniform priors. 2.2.3 From Score Attack to Lower Bounds Theorem 2.1 combined with Propositions 2.1 and 2.2 reveals the connection between the score attack and privacy-constrained minimax lower bounds. Let $\\pi$ be a prior distribution supported over the parameter space $\\Theta$ with marginal densities $\\{\\pi_j\\}_{j\\in[d]}$, and assume without loss of generality that $\\mathbb{E}_{X|\\theta}\\|M(X)-\\theta\\|_2^2 < C$ for every $\\theta \\in \\Theta$.
The completeness part of Theorem 2.1 and Proposition 2.2 imply that
$$\\sum_{i\\in[n]}\\mathbb{E}_\\pi\\mathbb{E}_{X|\\theta}A_\\theta(x_i, M(X)) \\ge \\mathbb{E}_\\pi\\Big(\\sum_{j\\in[d]}\\frac{-\\theta_j\\pi_j'(\\theta_j)}{\\pi_j(\\theta_j)}\\Big) - \\sqrt{C}\\sqrt{\\mathbb{E}_\\pi\\Big[\\sum_{j\\in[d]}\\Big(\\frac{\\pi_j'(\\theta_j)}{\\pi_j(\\theta_j)}\\Big)^2\\Big]}.$$
Since Proposition 2.1 holds for every $\\theta$, it follows that
$$\\sum_{i\\in[n]}\\mathbb{E}_\\pi\\mathbb{E}_{X|\\theta}A_\\theta(x_i, M(X)) \\le 2n\\varepsilon\\sqrt{\\mathbb{E}_\\pi\\mathbb{E}_{X|\\theta}\\|M(X)-\\theta\\|_2^2}\\,\\sqrt{\\lambda_{\\max}(I(\\theta))} + 2n\\delta T + \\sum_{i\\in[n]}\\int_T^{\\infty}\\mathbb{P}(|A_\\theta(x_i, M(X))| > t)\\,dt.$$
These two inequalities are true for every $(\\varepsilon, \\delta)$-differentially private $M$, and they therefore suggest a lower bound for $\\inf_{M\\in\\mathcal{M}_{\\varepsilon,\\delta}}\\mathbb{E}_\\pi\\mathbb{E}_{X|\\theta}\\|M(X)-\\theta\\|_2^2$, which in turn lower bounds $\\inf_{M\\in\\mathcal{M}_{\\varepsilon,\\delta}}\\sup_{\\theta\\in\\Theta}\\mathbb{E}_{X|\\theta}\\|M(X)-\\theta\\|_2^2$, since the maximum risk is greater than the average risk over any prior distribution. 2.3 The Utility of Score Attack The analysis in Section 2.2 amounts to a reduction from lower bounding the privacy-constrained minimax risk (2.2) to analyzing the expectation of the score attack, $\\sum_{i\\in[n]}\\mathbb{E}_{X|\\theta}A_\\theta(x_i, M(X))$. Specifically, the analysis of the score attack consists of upper bounding the expectation via differential privacy, and lower bounding the expectation \u201con average\u201d by choosing a prior over the parameter space $\\Theta$. The proposed score attack method is only as valuable as the concrete minimax lower bound results it implies. In the coming sections, we specialize the general method to a variety of problems. \u2022 Parameter estimation in classical models: the generalized linear model (Section 3), and the Bradley-Terry-Luce model (Section 4). \u2022 High-dimensional sparse parameter estimation (Section 5). \u2022 Non-parametric function estimation (Section 6).
In each example, we shall analyze the score attack following the recipe outlined in Section 2.2 and prove the implied minimax risk lower bound; the sharpness of the lower bound is then demonstrated by a concrete differentially private algorithm with a matching risk upper bound. These examples will collectively make a strong case for the utility of the score attack as a general lower bound technique. While some of them require no more than a straightforward application of the aforementioned method, a few examples involve non-trivial modifications of the general score attack approach, which will be highlighted as appropriate. 3 The Generalized Linear Model As a first example, we consider the privacy-constrained minimax risk of estimating the parameters $\\beta \\in \\mathbb{R}^d$ in the generalized linear model
$$f_\\beta(y\\,|\\,x) = h(y, \\sigma)\\exp\\Big(\\frac{yx^{\\top}\\beta - \\psi(x^{\\top}\\beta)}{c(\\sigma)}\\Big); \\qquad x \\sim f_x, \\quad (3.1)$$
using an i.i.d. sample $Z = \\{z_i\\}_{i\\in[n]} = \\{(y_i, x_i)\\}_{i\\in[n]}$ drawn from the model (3.1). The functional form of the model, including the partition function $\\psi$ and the normalizing factor $h$, is assumed to be fixed and known; the sole parameter of interest is the vector $\\beta$. In Section 3.1, we prove a minimax risk lower bound by applying the score attack method to the generalized linear model. The lower bound is then shown to be sharp up to a logarithmic factor, via a noisy gradient descent algorithm for estimating $\\beta$, in Section 3.2. 3.1 The Privacy-Constrained Minimax Lower Bound For the generalized linear model (3.1) and a candidate datum $(\\tilde y, \\tilde x)$, the score attack (2.3) takes the form
$$A_\\beta((\\tilde y, \\tilde x), M(y, X)) = \\frac{1}{c(\\sigma)}\\big\\langle M(y, X) - \\beta, [\\tilde y - \\psi'(\\tilde x^{\\top}\\beta)]\\tilde x\\big\\rangle. \\quad (3.2)$$
As outlined in Section 2.2, we establish a privacy-constrained minimax lower bound for estimating $\\beta$ by analyzing the sum of expectations $\\sum_{i\\in[n]}\\mathbb{E}A_\\beta((y_i, x_i), M(y, X))$.
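The GLM score attack (3.2) is immediate to compute once $\psi'$ is specified. Below is our own minimal sketch for the logistic instantiation, assuming $\psi(t) = \log(1 + e^t)$ so that $\psi'$ is the sigmoid and $c(\sigma) = 1$; the function names and the toy data are ours, not the paper's.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def glm_score_attack(estimate, beta, y, x):
    # A_beta((y, x), M) = <M - beta, (y - psi'(x^T beta)) x>, equation (3.2)
    # with psi' = sigmoid and c(sigma) = 1 for logistic regression.
    return float(np.dot(estimate - beta, (y - sigmoid(x @ beta)) * x))

rng = np.random.default_rng(0)
d = 5
beta = np.array([0.5, -0.5, 0.25, 0.0, 1.0])
x = rng.normal(size=d) / np.sqrt(d)            # bounded design, as assumed
y = float(rng.random() < sigmoid(x @ beta))    # Bernoulli response
attack = glm_score_attack(beta + 0.1 * rng.normal(size=d), beta, y, x)
```

Note that $y - \psi'(x^\top\beta)$ is exactly the score of the GLM with respect to $x^\top\beta$, so the attack is the inner product of the estimation error with the per-datum score, as in the general recipe (2.3).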
When the reference to the data $(y, X)$ and estimator $M$ is clear, we abbreviate $A_\beta((y_i, x_i), M(y, X))$ as $A_i$. We begin with upper bounding $\sum_{i\in[n]} \mathbb{E} A_i$, which amounts to specializing the soundness part of Theorem 2.1 and Proposition 2.1 to the GLM score attack (3.2).

Proposition 3.1. Consider i.i.d. observations $(y_1, x_1), \cdots, (y_n, x_n)$ drawn from (3.1). Suppose $\mathbb{E}(xx^\top)$ is diagonal and $\lambda_{\max}(\mathbb{E}(xx^\top)) < C < \infty$, $\|x\|_2 \lesssim \sqrt{d}$ almost surely, and $\|\psi''\|_\infty < c_2 < \infty$. If the estimator $M$ is $(\varepsilon,\delta)$-differentially private with $0 < \varepsilon < 1$ and satisfies $\|M(y, X) - \beta\|_2^2 \lesssim d$, then
$$\sum_{i\in[n]} \mathbb{E}_{y,X|\beta} A_i \le 2n\varepsilon\sqrt{\mathbb{E}\|M(y, X) - \beta\|_2^2}\sqrt{Cc_2/c(\sigma)} + 4\sqrt{2}\,\delta d\sqrt{c_2\log(1/\delta)}/c(\sigma). \tag{3.3}$$

Based on the general results, Theorem 2.1 and Proposition 2.1, proving Proposition 3.1 essentially entails computing the Fisher information matrix and choosing an appropriate $T$ in equation (2.6). We defer the details to Section A.1 and move on to deriving an average lower bound for $\sum_{i\in[n]} \mathbb{E} A_i$.

Proposition 3.2. Let the coordinates of $\beta \in \mathbb{R}^d$ be drawn i.i.d. from the Beta(3, 3) distribution. For every $M$ satisfying $\mathbb{E}_{y,X|\beta}\|M(y, X) - \beta\|_2^2 \lesssim 1$ at every $\beta$, we have
$$\sum_{i\in[n]} \mathbb{E}_\pi \mathbb{E}_{y,X|\beta} A_i \gtrsim d, \tag{3.4}$$
where $\pi$ refers to the i.i.d. Beta prior for $\beta$.

The proof of Proposition 3.2, which involves plugging the appropriate $\pi$ into the general Proposition 2.2, is in Section A.2. We are now ready to establish the minimax risk lower bound for estimating $\beta$, by combining the bounds on $\sum_{i\in[n]} \mathbb{E} A_i$ in both directions. The result is presented in the next theorem.

Theorem 3.1. Consider i.i.d. observations $(y_1, x_1), \cdots, (y_n, x_n)$ drawn from (3.1).
Suppose $\mathbb{E}(xx^\top)$ is diagonal and $\lambda_{\max}(\mathbb{E}(xx^\top)) < C < \infty$, $\|x\|_2 \lesssim \sqrt{d}$ almost surely, and $\|\psi''\|_\infty < c_2 < \infty$. If $d \lesssim n\varepsilon$, $0 < \varepsilon < 1$ and $\delta \lesssim n^{-(1+\gamma)}$ for some $\gamma > 0$, then
$$\inf_{M\in\mathcal{M}_{\varepsilon,\delta}} \sup_{\beta\in\mathbb{R}^d} \mathbb{E}\|M(y, X) - \beta\|_2^2 \gtrsim c(\sigma)\left(\frac{d}{n} + \frac{d^2}{n^2\varepsilon^2}\right). \tag{3.5}$$

The first term in (3.5) is the non-private minimax risk lower bound, and the second term is the "cost of differential privacy". We show in the next section that the lower bound is attainable, up to a logarithmic term, by a noisy gradient descent algorithm.

3.2 Optimality of the Private GLM Lower Bound

We consider minimizing the negative GLM log-likelihood
$$L_n(\beta; Z) = \frac{1}{n}\sum_{i=1}^n \left(\psi(x_i^\top\beta) - y_i x_i^\top\beta\right)$$
by a noisy gradient descent algorithm, first proposed by [12] in its generic form for arbitrary convex functions. The following algorithm specializes the generic algorithm to GLMs.

Algorithm 1: Differentially Private Generalized Linear Regression
Input: $L_n(\beta, Z)$, data set $Z$, step size $\eta^0$, privacy parameters $\varepsilon, \delta$, noise scale $B$, number of iterations $T$, truncation parameter $R$, initial value $\beta^0 \in \mathbb{R}^d$.
1. for $t$ in $0$ to $T-1$ do
2. Generate $w_t \in \mathbb{R}^d$ with $w_{t1}, w_{t2}, \cdots, w_{td} \overset{\text{i.i.d.}}{\sim} N\left(0,\ (\eta^0)^2\frac{2B^2 d\log(2T/\delta)}{n^2(\varepsilon/T)^2}\right)$;
3. Compute $\beta^{t+1} = \beta^t - (\eta^0/n)\sum_{i=1}^n(\psi'(x_i^\top\beta^t) - \Pi_R(y_i))x_i + w_t$;
4. end
Output: $\beta^T$.

For analyzing the privacy guarantee and rate of convergence of Algorithm 1, we collect here some useful assumptions.

(D1) Bounded design: there is a constant $\sigma_x < \infty$ such that $\|x\|_2 < \sigma_x\sqrt{d}$ almost surely.
(D2) Bounded moments of design: $\mathbb{E}x = 0$, and the covariance matrix $\Sigma_x = \mathbb{E}xx^\top$ satisfies $0 < 1/C < \lambda_{\min}(\Sigma_x) \le \lambda_{\max}(\Sigma_x) < C$ for some constant $0 < C < \infty$.

(G1) The function $\psi$ in the GLM (3.1) satisfies $\|\psi'\|_\infty < c_1$ for some constant $c_1 < \infty$.

(G2) The function $\psi$ satisfies $\|\psi''\|_\infty < c_2$ for some constant $c_2 < \infty$.

These assumptions are comparable to those required for the theoretical analysis of GLMs in the non-private setting; for examples, see [45, 41, 60] and the references therein.

Because the algorithm is a composition of $T$ individual steps, if each step is $(\varepsilon/T, \delta/T)$-differentially private, the overall algorithm is $(\varepsilon, \delta)$-differentially private by the composition property of differential privacy. This is indeed the case under appropriate assumptions.

Proposition 3.3. If assumptions (D1) and (G1) hold, then choosing $B = 4(R + c_1)\sigma_x$ guarantees that Algorithm 1 is $(\varepsilon, \delta)$-differentially private.

Proposition 3.3 is proved in Section A.4. Although the privacy guarantee holds for any number of iterations $T$, choosing $T$ properly is crucial for the accuracy of Algorithm 1, as a larger value of $T$ introduces a greater amount of noise into Algorithm 1 to achieve privacy. Existing results on noisy gradient descent typically require $O(n)$ [11] or $O(n^2)$ [12] iterations for minimizing generic convex functions. For the GLM problem, it turns out that $O(\log n)$ iterations suffice, thanks to the restricted strong convexity and restricted smoothness of generalized linear models (see, for example, [41], Proposition 1). These weaker versions of strong convexity and smoothness are sufficient for Algorithm 1 to attain linear convergence, which is the same rate as for minimizing strongly convex and smooth functions.
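Algorithm 1 is short enough to sketch in full. The pure-Python implementation below (for the logistic GLM, where $\psi'$ is the sigmoid; function names and default values are ours, and the default noise scale $B$ assumes $c_1 = \sigma_x = 1$ in Proposition 3.3) injects per-step Gaussian noise with the variance $(\eta^0)^2\, 2B^2 d\log(2T/\delta)/(n^2(\varepsilon/T)^2)$ stated in the algorithm box:

```python
import math
import random

def noisy_gd_glm(data, d, eps, delta, eta=0.1, T=50, R=1.0, B=None, seed=0):
    """Sketch of Algorithm 1 (noisy gradient descent) for a logistic GLM.

    Per-step Gaussian noise has standard deviation
    eta * B * sqrt(2 d log(2T/delta)) / (n * eps / T)."""
    rng = random.Random(seed)
    n = len(data)
    if B is None:
        # Proposition 3.3 takes B = 4(R + c1) * sigma_x; we assume c1 = sigma_x = 1.
        B = 4.0 * (R + 1.0)
    sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))
    clip = lambda y: max(-R, min(R, y))          # truncation Pi_R
    beta = [0.0] * d
    std = eta * B * math.sqrt(2.0 * d * math.log(2.0 * T / delta)) / (n * eps / T)
    for _ in range(T):
        grad = [0.0] * d
        for y, x in data:
            r = sigmoid(sum(b * xj for b, xj in zip(beta, x))) - clip(y)
            for j in range(d):
                grad[j] += r * x[j]
        beta = [b - (eta / n) * g + rng.gauss(0.0, std)
                for b, g in zip(beta, grad)]
    return beta
```

Because the injected noise grows with $T$, one stops after $T = O(\log n)$ iterations, which is all that linear convergence requires.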
Therefore, $O(\log n)$ iterations allow the algorithm to converge to within $O(n^{-1})$ of $\hat\beta$, the true minimizer of $L_n$, in squared $\ell_2$ norm; as the squared $\ell_2$ risk of $\hat\beta$, $\mathbb{E}\|\hat\beta - \beta^*\|_2^2$, is of order $d/n$, there is little reason from a statistical perspective to run the algorithm for more than $O(\log n)$ iterations.

Theorem 3.2. Let $\{(y_i, x_i)\}_{i\in[n]}$ be an i.i.d. sample from the GLM (3.1), and let the true regression coefficient vector be denoted by $\beta^* \in \mathbb{R}^d$. Suppose assumptions (D1), (D2), (G1) and (G2) are true. There exist data-agnostic choices of tuning parameters $\eta^0 = O(1)$, $R = O(\sqrt{\log n})$, $B = O(\sqrt{\log n})$, $T = O(\log n)$, and initial value $\beta^0 \in \mathbb{R}^d$ such that, if $n \gtrsim c(\sigma)\left(d\sqrt{\log(1/\delta)}\log^2 n/\varepsilon\right)$ with a sufficiently large implied constant, the output of Algorithm 1 satisfies
$$\|\beta^T - \beta^*\|_2^2 \lesssim c(\sigma)\left(\frac{d}{n} + \frac{d^2\log(1/\delta)\log^4 n}{n^2\varepsilon^2}\right) \tag{3.6}$$
with probability at least $1 - c_3\exp(-c_4\log n)$ for some absolute constants $c_3, c_4 > 0$.

Theorem 3.2 is proved in Section A.5. The requisite scaling of $n$ versus $d$, $\varepsilon$ and $\delta$ is reasonable, as our lower bound result, Theorem 3.1, implies that no estimator can achieve low $\ell_2$-error unless the assumed scaling holds. More importantly, comparing the rate of convergence (3.6) with the lower bound in Theorem 3.1 reveals that the latter is tight up to at most a logarithmic factor in $n$, under the usual setting of $\delta \asymp n^{-\alpha}$ with $\alpha > 1$.

4 The Bradley-Terry-Luce Model

Rank aggregation based on pairwise comparisons is a common problem in a range of applications, including recommendation systems [9], sports tournaments [42], and education [30]. The Bradley-Terry-Luce (BTL) model is widely recognized as the most popular model for analyzing pairwise comparisons.
In this section, we investigate parameter estimation with differential privacy in the BTL model, where each of the $n$ items is associated with an unobserved parameter that represents its "strength" or "quality". The probability of one item winning a comparison over another is determined by their latent parameters. The statistical problem is to estimate these parameters using the observed random comparison outcomes while preserving data privacy through differential privacy techniques. Accurate parameter estimation allows for the ranking of the items.

Suppose there are $n$ items indexed by $[n] = \{1, 2, \cdots, n\}$. We observe comparisons between pairs of items as follows.

- A pair of items indexed by $1 \le i < j \le n$ is compared with probability $0 < p < 1$, independently of any other pair. The $n$ items form a "comparison graph" in which an edge $(i, j)$ is present if and only if items $i$ and $j$ are compared. Let $G$ denote the edge set of this comparison graph.
- Each item $i$ is associated with a latent parameter $\theta_i \in [-1, 1]$. Given $G$, the outcome of a comparison between items $i$ and $j$ is encoded by a Bernoulli random variable $Y_{ij}$ which takes the value 1 if $i$ wins. The distribution of $Y_{ij}$ is independent of any other pair and determined by the latent parameters:
$$\mathbb{P}(Y_{ij} = 1) = \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}}.$$

The goal is to estimate the latent parameters $\theta = \{\theta_i\}_{i\in[n]}$ based on the observed comparison outcomes $\{Y_{ij}\}_{(i,j)\in G}$ with a differentially private algorithm. Here, we would like to protect the privacy of the comparison results of each individual given the algorithmic output.
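The sampling scheme above is straightforward to simulate; the sketch below (the function name is ours) draws the Erdős–Rényi comparison graph and the Bernoulli outcomes $Y_{ij}$:

```python
import math
import random

def simulate_btl(theta, p, seed=0):
    """Sample a BTL data set: each pair (i, j), i < j, is compared with
    probability p, and i beats j with probability
    exp(theta_i) / (exp(theta_i) + exp(theta_j))."""
    rng = random.Random(seed)
    n = len(theta)
    outcomes = {}                                # (i, j) -> Y_ij for edges in G
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                 # edge (i, j) enters the graph G
                p_win = 1.0 / (1.0 + math.exp(theta[j] - theta[i]))
                outcomes[(i, j)] = 1 if rng.random() < p_win else 0
    return outcomes
```

Items with larger latent parameters win more of their comparisons, which is exactly the signal the estimators below must recover from $\{Y_{ij}\}_{(i,j)\in G}$.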
More specifically, two data sets are considered adjacent if and only if there exists one individual whose comparison outcomes in one data set differ from the same individual's outcomes in the other data set; the underlying comparison graph is assumed to be identical between adjacent data sets. Let the parameter space be denoted by $\Theta = \{\theta \in \mathbb{R}^n : \|\theta\|_\infty \le 1\}$. The quantity of interest is the privacy-constrained minimax risk $\inf_{M\in\mathcal{M}_{\varepsilon,\delta}} \sup_{\theta\in\Theta} \mathbb{E}\|M(Y) - \theta\|_2^2$. A privacy-constrained minimax lower bound for this problem is established via the score attack technique in Section 4.1. We then propose a differentially private estimator via maximizing a randomly perturbed and $\ell_2$-penalized version of the likelihood function in Section 4.2. The minimax lower bound is shown to be optimal by analyzing the performance of this differentially private estimator.

4.1 The Privacy-constrained Minimax Lower Bound

To lower bound the privacy-constrained minimax risk, we consider the score attack that traces whether the comparison results of item $i$ are in the training data set for the pairwise comparison model. Let $\{e_k\}_{k\in[n]}$ denote the standard basis of $\mathbb{R}^n$; for each item $i$ with $1 \le i \le n$ and any estimator $M(Y)$ of $\theta \in \Theta$, we have the score attack
$$A(M(Y), i) = \sum_{j=1}^n \mathbf{1}((i, j) \in G)\left\langle M(Y) - \theta,\ \left(Y_{ij} - \frac{1}{1 + \exp(-(e_i - e_j)^\top\theta)}\right)(e_i - e_j)\right\rangle.$$
When the reference to $M$ and $Y$ is unambiguous, it is convenient to write $A_i := A(M(Y), i)$. The strategy for establishing a lower bound, as usual, is to analyze $\sum_{i=1}^n \mathbb{E} A_i$, the expected value of the score attacks summed over an entire data set. When $M$ is a differentially private estimator, the soundness of the score attack, Theorem 2.1, and Proposition 2.1 yield an upper bound on $\sum_{i=1}^n \mathbb{E} A_i$.
Unlike the GLM example in Section 3, the upper bound is not obtained by directly plugging the Fisher information matrix into the right side, but requires some analysis tailored to the random comparison graph and the BTL model. The detailed proof is deferred to Section B.1.

Proposition 4.1. If $M$ is an $(\varepsilon,\delta)$-differentially private algorithm with $0 < \varepsilon < 1$ and $p > 1/2n$, then for sufficiently large $n$ and every $\theta \in \Theta$, it holds that
$$\sum_{i=1}^n \mathbb{E}_{Y|\theta} A_i \le 16np\varepsilon\cdot\sqrt{\mathbb{E}_{Y|\theta}\|M(Y) - \theta\|_2^2} + 16n^2\delta. \tag{4.1}$$

After upper bounding $\sum_{i=1}^n \mathbb{E}_{Y|\theta} A_i$ at every $\theta \in \Theta$, we show that $\sum_{i=1}^n \mathbb{E}_{Y|\theta} A_i$ is bounded away from zero in an "average" sense: there exists a prior distribution $\pi$ over $\Theta$ such that $\sum_{i=1}^n \mathbb{E}_\theta\mathbb{E}_{Y|\theta} A_i$ is lower bounded. Specifically, let the density of each coordinate of $\theta$ be $\pi(t) = \mathbf{1}(|t| < 1)(15/16)(1 - t^2)^2$; we then have the following result.

Proposition 4.2. Suppose $M$ is an estimator of $\theta$ such that $\sup_{\theta\in\Theta} \mathbb{E}\|M(Y) - \theta\|_2^2 \le c_0 n$ for a sufficiently small constant $c_0$. If each coordinate of $\theta$ has density $\pi(t) = \mathbf{1}(|t| < 1)(15/16)(1 - t^2)^2$, then there is some constant $C > 0$ such that
$$\sum_{i=1}^n \mathbb{E}_\theta\mathbb{E}_{Y|\theta} A_i > Cn. \tag{4.2}$$

We are now ready to state the privacy-constrained minimax lower bound for estimating $\theta$, by combining the bounds on $\sum_{i=1}^n \mathbb{E} A_i$ in Propositions 4.1 and 4.2.

Theorem 4.1. If $\sqrt{np}\,\varepsilon > 1$, $0 < \varepsilon < 1$ and $\delta < cn^{-1}$ for a sufficiently small constant $c > 0$, it holds that
$$\inf_{M\in\mathcal{M}_{\varepsilon,\delta}} \sup_{\theta\in\Theta} \mathbb{E}_{Y|\theta}\|M(Y) - \theta\|_2^2 \gtrsim \frac{1}{p} + \frac{1}{p^2\varepsilon^2}. \tag{4.3}$$

The proof is in Section B.3.
The privacy-constrained minimax risk lower bound, similar to its GLM counterpart, consists of the "statistical" term, which holds regardless of privacy [44, 48], and a term attributable to the differential privacy constraint. The next step is to show that the lower bound (4.3) is optimal, by constructing a differentially private algorithm with a matching rate of convergence.

4.2 Optimality of the Private BTL Minimax Lower Bound

To construct an $(\varepsilon,\delta)$-differentially private estimator of $\theta$, our approach is to maximize a randomly perturbed and $\ell_2$-penalized version of the likelihood function. The negative log-likelihood function is given by
$$L(\theta; y) = \sum_{(i,j)\in G} -y_{ij}(e_i - e_j)^\top\theta + \log\left(1 + \exp((e_i - e_j)^\top\theta)\right).$$
As the model is invariant to translations of $\theta$, we further assume that the true parameter $\theta$ is centered: $\mathbf{1}^\top\theta = 0$. Define the feasible set $\Theta = \{\theta \in \mathbb{R}^n : \|\theta\|_\infty \le 1, \mathbf{1}^\top\theta = 0\}$ and consider the estimator
$$\hat\theta = \arg\min_{\theta\in\Theta} L(\theta; y) + \frac{\gamma}{2}\|\theta\|_2^2 + w^\top\theta, \quad w = (w_1, w_2, \cdots, w_n) \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2). \tag{4.4}$$
The choices of $\gamma$ and $\sigma$ that ensure differential privacy and estimation accuracy of $\hat\theta$ are specified next.

Proposition 4.3. If $\sigma \ge \frac{\sqrt{n}\sqrt{8\log(2/\delta) + 4\varepsilon}}{\varepsilon}$ and $\gamma \ge 1/\varepsilon$, then $\hat\theta$ is $(\varepsilon,\delta)$-differentially private.

Intuitively, the noise term added to the objective function in (4.4) is equivalent to perturbing the stationarity condition of the original problem, and the $\ell_2$-regularization coefficient $\gamma$ ensures that the objective function is strongly convex, so that perturbing the gradient translates into sufficient perturbation of the solution.
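A minimal sketch of the estimator (4.4) follows, using plain gradient descent on the perturbed, penalized objective. The defaults are illustrative and ours: we take $\sigma$ and $\gamma$ at the lower limits allowed by Proposition 4.3 (flooring $\gamma$ at 1 so the demo objective stays strongly convex), omit the box constraint $\|\theta\|_\infty \le 1$, and center the output so that $\mathbf{1}^\top\theta = 0$:

```python
import math
import random

def private_btl_estimate(outcomes, n, eps, delta, steps=800, lr=0.05, seed=0):
    """Sketch of the objective-perturbation estimator (4.4): gradient descent on
    L(theta; y) + (gamma/2) ||theta||^2 + w^T theta, with w ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    gamma = max(1.0 / eps, 1.0)                  # Prop. 4.3 needs gamma >= 1/eps
    sigma = math.sqrt(n) * math.sqrt(8.0 * math.log(2.0 / delta) + 4.0 * eps) / eps
    w = [rng.gauss(0.0, sigma) for _ in range(n)]
    theta = [0.0] * n
    for _ in range(steps):
        grad = [gamma * t + wi for t, wi in zip(theta, w)]   # penalty + perturbation
        for (i, j), y in outcomes.items():
            p_ij = 1.0 / (1.0 + math.exp(theta[j] - theta[i]))
            grad[i] += p_ij - y                              # dL / d theta_i
            grad[j] += y - p_ij                              # dL / d theta_j
        theta = [t - lr * g for t, g in zip(theta, grad)]
    mean = sum(theta) / n
    return [t - mean for t in theta]                         # center: 1^T theta = 0
```

The random linear term $w^\top\theta$ shifts the stationarity condition by $w$, which is the "objective perturbation" idea discussed next; stronger noise (larger $\sigma$) buys privacy at the cost of accuracy, exactly the trade-off quantified by Proposition 4.4.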
This perturbation method is an instance of the general "objective perturbation" method in differentially private optimization. While larger values of the hyper-parameters $\sigma, \gamma$ lead to stronger privacy guarantees, they also lead to slower convergence of the estimator. The next proposition quantifies this effect in terms of $\sigma$.

Proposition 4.4. If $\gamma = c_0\sqrt{np}$ for some absolute constant $c_0$ and $p \ge c_1\log n/n$ for some sufficiently large constant $c_1$, then
$$\mathbb{E}\|\hat\theta - \theta\|_2^2 \lesssim \frac{1}{p} + \frac{\sigma^2}{np^2}.$$

Proposition 4.4 is proved in Section B.5. Comparing the privacy guarantee, Proposition 4.3, with the rate of convergence, Proposition 4.4, tells us the best choice of $\gamma$ and $\sigma$, which leads to the optimal risk upper bound for the estimator $\hat\theta$.

Theorem 4.2. If $\varepsilon \lesssim \log(1/\delta)$, $\varepsilon > c_0(np)^{-1/2}$ for some absolute constant $c_0 > 0$, $p \ge c_1\log n/n$ for some absolute constant $c_1 > 0$ and $\lambda = \varepsilon/16$, then the estimator $\hat\theta$ defined in (4.4) is $(\varepsilon,\delta)$-differentially private and satisfies
$$\mathbb{E}\|\hat\theta - \theta\|_2^2 \lesssim \frac{1}{p} + \frac{\log(1/\delta)}{p^2\varepsilon^2}. \tag{4.5}$$

The condition $\varepsilon > c_0(np)^{-1/2}$ ensures that the choice of $\gamma \asymp \sqrt{np}$ in Proposition 4.4 satisfies the requirement $\gamma > 1/\varepsilon$ in Proposition 4.3. The other regularity conditions are inherited from the two propositions. The bound (4.5) is obtained by plugging $\sigma = 16\sqrt{n\log(1/\delta)}/\varepsilon$ into Proposition 4.4. Theorem 4.2 implies that the privacy-constrained minimax lower bound in Theorem 4.1 is rate-optimal up to logarithmic factors.

5 High-dimensional Sparse GLMs

High-dimensional generalized linear models (GLMs) are widely used in contemporary data-driven scientific research and have a vast range of applications in various fields, such as genetics, metabolomics, finance, and econometrics.
In this section, we consider privacy-preserving parameter estimation under the generalized linear model
$$f_\beta(y|x) = h(y,\sigma)\exp\left(\frac{yx^\top\beta - \psi(x^\top\beta)}{c(\sigma)}\right); \quad x \sim f_x \tag{5.1}$$
in a high-dimensional setting where $d$, the dimension of $\beta$, dominates the sample size $n$, but the vector of regression coefficients $\beta$ is assumed to be $s^*$-sparse: $\|\beta\|_0 \le s^*$. Under the sparsity assumption, the privacy-constrained minimax risk will scale linearly with the sparsity, or the "intrinsic dimension" of $\beta$, and only logarithmically with the "ambient dimension" $d$. This setting differs substantially from the non-sparse GLM considered in Section 3 and calls for new methods: we study a sparse score attack in Section 5.1 to establish the minimax risk lower bound, and propose an iterative hard thresholding algorithm in Section 5.2 with a matching risk upper bound.

5.1 The Sparse Score Attack for the Minimax Lower Bound

For the high-dimensional sparse GLM, we consider a modification of the classical GLM score attack (3.2), the sparse GLM score attack:
$$A_{\beta,s^*}((\tilde y, \tilde x), M(y, X)) = \frac{1}{c(\sigma)}\left\langle (M(y, X) - \beta)_{\mathrm{supp}(\beta)},\ [\tilde y - \psi'(\tilde x^\top\beta)]\tilde x \right\rangle. \tag{5.2}$$
It is called a sparse score attack because we restrict the inner product to the non-zero coordinates of $\beta$, which form a small fraction of all $d$ coordinates. For each $i \in [n]$, we denote $A_{\beta,s^*}((y_i, x_i), M(y, X))$ by $A_i$ and aim to bound the sum of expectations $\sum_{i\in[n]} \mathbb{E} A_i$. As usual, upper bounding $\sum_{i\in[n]} \mathbb{E} A_i$ relies on the soundness of the score attack, Theorem 2.1, and the differential privacy of the estimator $M$.

Proposition 5.1. Consider i.i.d. observations $(y_1, x_1), \cdots, (y_n, x_n)$ drawn from (5.1) with $\|\beta\|_0 \le s^*$.
Suppose $\mathbb{E}(xx^\top)$ is diagonal and $\lambda_{\max}(\mathbb{E}(xx^\top)) < C < \infty$, $\|x\|_\infty < c < \infty$ almost surely, and $\|\psi''\|_\infty < c_2 < \infty$. If the estimator $M$ is $(\varepsilon,\delta)$-differentially private with $0 < \varepsilon < 1$ and satisfies $\|M(y, X) - \beta\|_2^2 \lesssim s^*$, then
$$\sum_{i\in[n]} \mathbb{E} A_i \le 2n\varepsilon\sqrt{\mathbb{E}\|M(y, X) - \beta\|_2^2}\sqrt{Cc_2/c(\sigma)} + 4\sqrt{2}\,\delta s^*\sqrt{c_2\log(1/\delta)}/c(\sigma). \tag{5.3}$$

The proposition is proved in Section C.1. For lower bounding $\sum_{i\in[n]} \mathbb{E}_{y,X|\beta} A_i$ on average over some prior distribution of $\beta$, a major difference from its counterpart in the non-sparse GLM case is that we have to choose a prior distribution over the set of $s^*$-sparse vectors, $\{\beta \in \mathbb{R}^d : \|\beta\|_0 \le s^*\}$. Specifically, we consider $\beta$ generated as follows: let $\tilde\beta_1, \tilde\beta_2, \cdots, \tilde\beta_d$ be an i.i.d. sample from the truncated normal $N(0, \gamma^2)$ distribution with truncation at $-1$ and $1$, let $I_{s^*}$ be the index set of the $s^*$ coordinates of $\tilde\beta$ with greatest absolute values, so that $|I_{s^*}| = s^*$ by definition, and define $\beta_j = \tilde\beta_j\mathbf{1}(j \in I_{s^*})$. Then, by the Stein's Lemma argument in Section 2.2.2, we obtain a lower bound for $\sum_{i\in[n]} \mathbb{E}_\pi\mathbb{E}_{y,X|\beta} A_i$, where $\pi$ refers to the sparse truncated normal prior described above.

Proposition 5.2. Suppose $s^* = o(d^{1-\gamma})$ for some $\gamma > 0$. For every $M$ satisfying $\mathbb{E}_{y,X|\beta}\|M(y, X) - \beta\|_2^2 \lesssim 1$ at every $\beta$, we have
$$\sum_{i\in[n]} \mathbb{E}_\pi\mathbb{E}_{y,X|\beta} A_i \gtrsim s^*\log(d/s^*), \tag{5.4}$$
where $\pi$ refers to the sparse truncated normal prior for $\beta$.

Proposition 5.2 is proved in Section C.2. It is noteworthy that, as a result of the sparse prior, the right side $s^*\log(d/s^*)$ differs from its non-sparse counterpart in Proposition 3.2.
We are now ready to combine the two propositions to obtain a minimax risk lower bound for sparse GLMs.

Theorem 5.1. Consider i.i.d. observations $(y_1, x_1), \cdots, (y_n, x_n)$ drawn from (5.1) with $\|\beta\|_0 \le s^*$, and $s^* = o(d^{1-\gamma})$ for some $\gamma > 0$. Suppose $\mathbb{E}(xx^\top)$ is diagonal and $\lambda_{\max}(\mathbb{E}(xx^\top)) < C < \infty$, $\|x\|_\infty < c < \infty$, and $\|\psi''\|_\infty < c_2 < \infty$. If $s^*\log(d/s^*) \lesssim n\varepsilon$, $0 < \varepsilon < 1$ and $\delta \lesssim n^{-(1+\gamma)}$ for some $\gamma > 0$, then
$$\inf_{M\in\mathcal{M}_{\varepsilon,\delta}} \sup_{\beta\in\mathbb{R}^d,\,\|\beta\|_0\le s^*} \mathbb{E}\|M(y, X) - \beta\|_2^2 \gtrsim c(\sigma)\left(\frac{s^*\log(d/s^*)}{n} + \frac{(s^*\log(d/s^*))^2}{n^2\varepsilon^2}\right). \tag{5.5}$$

Theorem 5.1 is proved in Section C.3. To show that the lower bound is tight, we propose in the next section an algorithm for estimating the sparse $\beta$ with differential privacy. From the desired rate of convergence (5.5), it is already apparent that the noisy gradient descent algorithm considered in Section 3 is unlikely to succeed, for its requisite noise scales with the full dimension $d$. Our iterative hard thresholding algorithm manages to add noise that scales with the sparsity, and shows that the lower bound (5.5) is achievable up to a logarithmic factor in $n$.

5.2 Optimality of the Private Sparse GLM Lower Bound

In this section, we construct a differentially private algorithm for estimating GLM parameters when the dimension $d$ dominates the sample size $n$. Even without privacy requirements, directly minimizing the negative log-likelihood function $L_n(\beta)$ no longer achieves any meaningful statistical accuracy, because the objective function $L_n$ can have infinitely many minimizers due to a rank-deficient Hessian matrix $\nabla^2 L_n(\beta) = \frac{1}{n}\sum_{i=1}^n \psi''(x_i^\top\beta)x_i x_i^\top$.
The problem is nevertheless solvable when the true parameter vector $\beta^*$ is $s^*$-sparse with $s^* = o(d)$, that is, when at most $s^*$ out of the $d$ coordinates of $\beta^*$ are non-zero. For estimating a sparse $\beta^*$, the primary challenge lies in (approximately) solving the non-convex optimization problem $\hat\beta = \arg\min_{\beta:\|\beta\|_0\le s^*} L_n(\beta; Z)$. Some popular non-private approaches include convex relaxation via $\ell_1$ regularization of $L_n$ [45, 4], or projected gradient descent onto the non-convex feasible set $\{\beta : \|\beta\|_0 \le s^*\}$, also known as iterative hard thresholding [13, 31]:

Algorithm 2: Iterative Hard Thresholding (IHT)
Input: Objective function $f(\theta)$, sparsity $s$, step size $\eta$, number of iterations $T$.
1. Initialize $\theta^0$ with $\|\theta^0\|_0 \le s$, set $t = 0$;
2. for $t$ in $0$ to $T-1$ do
3. $\theta^{t+1} = P_s(\theta^t - \eta\nabla f(\theta^t))$, where $P_s(v) = \arg\min_{z:\|z\|_0=s}\|v - z\|_2^2$;
4. end
Output: $\theta^T$.

In each iteration, the algorithm updates the solution via gradient descent, keeps its $s$ largest coordinates in magnitude, and sets the other coordinates to 0. For privately fitting high-dimensional sparse GLMs, we shall construct a noisy version of Algorithm 2, and show in Section 5.2.2 that it enjoys a linear rate of convergence similar to the noisy gradient descent, Algorithm 1. As a first step, we consider in Section 5.2.1 a noisy, differentially private version of the projection operator $P_s$, as well as a noisy iterative hard thresholding algorithm applicable to any objective function that satisfies restricted strong convexity and restricted smoothness.

5.2.1 The Noisy Iterative Hard Thresholding Algorithm

At the core of our algorithm is a noisy, differentially private subroutine that identifies the top-$s$ largest coordinates of a given vector with good accuracy.
The following "Peeling" algorithm [24] serves this purpose, with fresh Laplace noise added to the underlying vector and one coordinate "peeled" from the vector in each iteration.

Algorithm 3: Noisy Hard Thresholding (NoisyHT)
Input: Vector-valued function $v = v(Z) \in \mathbb{R}^d$, data $Z$, sparsity $s$, privacy parameters $\varepsilon, \delta$, noise scale $\lambda$.
1. Initialize $S = \emptyset$;
2. for $i$ in $1$ to $s$ do
3. Generate $w_i \in \mathbb{R}^d$ with $w_{i1}, w_{i2}, \cdots, w_{id} \overset{\text{i.i.d.}}{\sim} \text{Laplace}\left(\lambda\cdot\frac{2\sqrt{3s\log(1/\delta)}}{\varepsilon}\right)$;
4. Append $j^* = \arg\max_{j\in[d]\setminus S}\, |v_j| + w_{ij}$ to $S$;
5. end
6. Set $\tilde P_s(v) = v_S$;
7. Generate $\tilde w$ with $\tilde w_1, \cdots, \tilde w_d \overset{\text{i.i.d.}}{\sim} \text{Laplace}\left(\lambda\cdot\frac{2\sqrt{3s\log(1/\delta)}}{\varepsilon}\right)$;
Output: $\tilde P_s(v) + \tilde w_S$.

The algorithm is guaranteed to be $(\varepsilon,\delta)$-differentially private when the vector-valued function $v(Z)$ is not sensitive to replacing any single datum.

Lemma 5.1 ([24]). If for every pair of adjacent data sets $Z, Z'$ we have $\|v(Z) - v(Z')\|_\infty < \lambda$, then NoisyHT is an $(\varepsilon,\delta)$-differentially private algorithm.

The accuracy of Algorithm 3 is quantified by the next lemma.

Lemma 5.2. Let $\tilde P_s$ be defined as in Algorithm 3. For any index set $I$, any $v \in \mathbb{R}^I$ and $\hat v$ such that $\|\hat v\|_0 \le \hat s \le s$, we have that for every $c > 0$,
$$\|\tilde P_s(v) - v\|_2^2 \le (1 + 1/c)\frac{|I| - s}{|I| - \hat s}\|\hat v - v\|_2^2 + 4(1 + c)\sum_{i\in[s]}\|w_i\|_\infty^2.$$

Lemma 5.2 is proved in Section C.4. In comparison, the exact, non-private projection operator $P_s$ satisfies ([31], Lemma 1) $\|P_s(v) - v\|_2^2 \le \frac{|I| - s}{|I| - \hat s}\|\hat v - v\|_2^2$. Algorithm 3, therefore, is as accurate as its non-private counterpart up to a constant multiplicative factor and some additive noise.
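The peeling mechanism is compact enough to sketch directly. In the sketch below (the helper `laplace` and the parameter defaults are ours), the Laplace scale matches the $\lambda\cdot 2\sqrt{3s\log(1/\delta)}/\varepsilon$ used in Algorithm 3:

```python
import math
import random

def laplace(rng, scale):
    """Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_hard_threshold(v, s, eps, delta, lam, seed=0):
    """Sketch of Algorithm 3 (NoisyHT): privately select the top-s coordinates
    of v in magnitude, one per round, then release them with fresh noise."""
    rng = random.Random(seed)
    scale = lam * 2.0 * math.sqrt(3.0 * s * math.log(1.0 / delta)) / eps
    d, S = len(v), []
    for _ in range(s):
        j_star = max((j for j in range(d) if j not in S),
                     key=lambda j: abs(v[j]) + laplace(rng, scale))
        S.append(j_star)                       # "peel" one coordinate per round
    out = [0.0] * d
    for j in S:
        out[j] = v[j] + laplace(rng, scale)    # output noise on the selected support
    return out, sorted(S)
```

With a tiny sensitivity bound `lam`, the selection coincides with the exact top-$s$ support; as `lam` grows, both the support selection and the released values degrade gracefully, in line with Lemma 5.2.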
Taking the private top-$s$ projection as a subroutine, we have the following noisy iterative hard thresholding algorithm.

Algorithm 4: Noisy Iterative Hard Thresholding (NoisyIHT)
Input: Objective function $L_n(\theta, Z) = n^{-1}\sum_{i=1}^n l(\theta, z_i)$, data set $Z$, sparsity level $s$, step size $\eta^0$, privacy parameters $\varepsilon, \delta$, noise scale $B$, number of iterations $T$.
1. Initialize $\theta^0$ with $\|\theta^0\|_0 \le s$, set $t = 0$;
2. for $t$ in $0$ to $T-1$ do
3. $\theta^{t+1} = \text{NoisyHT}\left(\theta^t - \eta^0\nabla L_n(\theta^t; Z),\ Z,\ s,\ \varepsilon/T,\ \delta/T,\ (\eta^0/n)B\right)$;
4. end
Output: $\theta^T$.

Compared to the non-private Algorithm 2, we have simply replaced the exact projection $P_s$ with the noisy projection given by Algorithm 3. The privacy guarantee of Algorithm 4 is then inherited from that of Algorithm 3.

Lemma 5.3. If for every pair of adjacent data $z, z'$ and every $\theta \in \Theta$ we have $\|\nabla l(\theta; z) - \nabla l(\theta; z')\|_\infty < B$, then NoisyIHT is an $(\varepsilon,\delta)$-differentially private algorithm.

The lemma is proved in Section C.5. Similar to the noisy gradient descent (Algorithm 1), the privacy guarantee of Algorithm 4 is valid for any choice of $T$; however, a fast rate of convergence allows us to select a small $T$ and thereby introduce less noise into the algorithm. To our delight, restricted strong convexity and restricted smoothness again lead to a linear rate of convergence even in the high-dimensional sparse setting.

Proposition 5.3. Let $\hat\theta = \arg\min_{\|\theta\|_0\le s^*} L_n(\theta; Z)$. For iteration number $t \ge 0$, suppose
$$\langle\nabla L_n(\theta^t) - \nabla L_n(\hat\theta),\ \theta^t - \hat\theta\rangle \ge \alpha\|\theta^t - \hat\theta\|_2^2 \tag{5.6}$$
$$\langle\nabla L_n(\theta^{t+1}) - \nabla L_n(\hat\theta),\ \theta^{t+1} - \hat\theta\rangle \le \gamma\|\theta^{t+1} - \hat\theta\|_2^2 \tag{5.7}$$
for constants $0 < \alpha < \gamma$.
Let $w_1, w_2, \cdots, w_s$ be the noise vectors added to $\theta^t - \eta^0\nabla L_n(\theta^t; Z)$ when the support of $\theta^{t+1}$ is iteratively selected, let $S^{t+1}$ be the support of $\theta^{t+1}$, and let $\tilde w$ be the noise vector added to the selected $s$-sparse vector. Then, for $\eta^0 = 2/(3\gamma)$, there exists an absolute constant $c_0$ such that choosing $s \ge c_0(\gamma/\alpha)^2 s^*$ guarantees
$$L_n(\theta^{t+1}) - L_n(\hat\theta) \le \left(1 - \rho\cdot\frac{\alpha}{\gamma} - \frac{2s^*}{s + s^*}\right)\left(L_n(\theta^t) - L_n(\hat\theta)\right) + C_\gamma\left(\sum_{i\in[s]}\|w_i\|_\infty^2 + \|\tilde w_{S^{t+1}}\|_2^2\right),$$
where $0 < \rho < 1$ is an absolute constant, and $C_\gamma > 0$ is a constant depending on $\gamma$.

Proposition 5.3 is proved in Section C.6. While conditions (5.6) and (5.7) resemble the ordinary strong convexity and smoothness conditions in appearance, they are in fact much weaker because $\hat\theta$ and $\theta^t$ are both $s$-sparse. In the next section, we apply the noisy iterative hard thresholding algorithm to the GLM likelihood function and obtain its rate of convergence to the truth $\beta^*$.

5.2.2 Noisy Iterative Hard Thresholding for the Sparse GLM

Assuming that the true GLM parameter vector $\beta^*$ satisfies $\|\beta^*\|_0 \le s^*$, we now specialize the results of Section 5.2.1 to the GLM negative log-likelihood function
$$L_n(\beta; Z) = \frac{1}{n}\sum_{i=1}^n\left(\psi(x_i^\top\beta) - y_i x_i^\top\beta\right).$$

Algorithm 5: Differentially Private Sparse Generalized Linear Regression
Input: $L_n(\beta, Z)$, data set $Z$, sparsity level $s$, step size $\eta^0$, privacy parameters $\varepsilon, \delta$, noise scale $B$, number of iterations $T$, truncation parameter $R$.
1. Initialize $\beta^0$ with $\|\beta^0\|_0 \le s$, set $t = 0$;
2. for $t$ in $0$ to $T-1$ do
3. Compute $\beta^{t+0.5} = \beta^t - (\eta^0/n)\sum_{i=1}^n(\psi'(x_i^\top\beta^t) - \Pi_R(y_i))x_i$;
4. $\beta^{t+1} = \text{NoisyHT}\left(\beta^{t+0.5},\ Z,\ s,\ \varepsilon/T,\ \delta/T,\ \eta^0 B/n\right)$;
5. end
Output: $\beta^T$.

Some assumptions about the data set $\{(y_i, x_i)\}_{i\in[n]}$ and its distribution will be helpful for analyzing the accuracy and privacy guarantees of Algorithm 5. The necessary assumptions for the high-dimensional sparse case are identical to those for the low-dimensional case, except with (D1) replaced by (D1'), as follows.

(D1') Bounded design: there is a constant $\sigma_x < \infty$ such that $\|x\|_\infty < \sigma_x$ almost surely.

Because Algorithm 5 is a special case of the general Algorithm 4, the privacy guarantee of Algorithm 5 reduces to specializing Lemma 5.3 to GLMs, as follows.

Lemma 5.4. If assumptions (D1') and (G1) are true, then choosing $B = 4(R + c_1)\sigma_x$ guarantees that Algorithm 5 is $(\varepsilon,\delta)$-differentially private.

The lemma is proved in Section C.7. For the rate of convergence of Algorithm 5, the restricted strong convexity and restricted smoothness of the GLM likelihood (see, for example, [41], Proposition 1), combined with the sparsity of $\hat\beta$, $\beta^*$ and $\beta^t$ for every $t$, are sufficient for conditions (5.6) and (5.7) in Proposition 5.3 to hold. Applying Proposition 5.3 in a proof by induction leads to an upper bound for $\|\beta^T - \beta^*\|_2^2$. Below we state the main result; the detailed proof is in Section C.8.

Theorem 5.2. Let $\{(y_i, x_i)\}_{i\in[n]}$ be an i.i.d. sample from the model (5.1) where the true parameter vector $\beta^*$ satisfies $\|\beta^*\|_0 \le s^*$. Suppose assumptions (D1'), (D2), (G1) and (G2) are true.
There exist data-agnostic choices of tuning parameters $s \asymp s^*$, $\eta_0 = O(1)$, $R = O(\sqrt{\log n})$, $B = O(\sqrt{\log n})$, $T = O(\log n)$, and initial value $\beta^0 \in \mathbb{R}^d$ such that, if
$$n \gtrsim c(\sigma)\Big(s^* \log d \sqrt{\log(1/\delta)} \log^{3/2} n / \varepsilon\Big),$$
the output of Algorithm 5 satisfies
$$\|\beta^T - \beta^*\|_2^2 \lesssim c(\sigma)\left(\frac{s^* \log d}{n} + \frac{(s^* \log d)^2 \log(1/\delta) \log^3 n}{n^2\varepsilon^2}\right) \quad (5.8)$$
with probability at least $1 - c_3\exp(-c_4 \log(d/s^*\log n)) - c_3\exp(-c_4 \log n)$ for some absolute constants $c_3, c_4 > 0$.

The assumed scaling of $n$ versus $d$, $s^*$, $\varepsilon$ and $\delta$ in Theorem 5.2 is reasonable, as the minimax lower bound, Theorem 5.1, shows that no estimator can achieve low $\ell_2$-error unless this scaling holds. The rate of convergence of Algorithm 5 implies that the minimax lower bound (5.5) established via the score attack is optimal except possibly for factors of $\log n$, when $\delta$ is set at the usual level $\delta \asymp n^{-\alpha}$ for some $\alpha > 1$.

6 Nonparametric Function Estimation

Although the score statistic is a fundamentally parametric concept, the score attack method can still lead to optimal minimax lower bounds in nonparametric problems, as this example demonstrates. Consider $n$ pairs of random variables $\{(Y_i, X_i)\}_{i \in [n]}$ drawn i.i.d. from the model
$$Y_i = f(X_i) + \xi_i, \quad X_i \sim U[0, 1],$$
where the noise term $\xi_i$ is independent of $X_i$ and follows the $N(0, \sigma^2)$ distribution. We would like to estimate the unknown mean function $f : [0, 1] \to \mathbb{R}$ with $(\varepsilon, \delta)$-differential privacy. For an estimator $\hat f$ of the true $f$, a reasonable metric for its performance is the mean integrated squared error (MISE),
$$R(\hat f, f) = \mathbb{E}\left[\int_0^1 (\hat f(x) - f(x))^2 dx\right],$$
where the expectation is taken over the joint distribution of $\{(Y_i, X_i)\}_{i \in [n]}$.
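The MISE criterion just defined can be approximated by Monte Carlo simulation. Below is a minimal sketch; the test function, the non-private truncated series estimator, and all tuning values are illustrative choices, not from the paper:

```python
import numpy as np

def f_true(x):
    # illustrative periodic test function (not from the paper)
    return np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)

def phi(j, t):
    # Fourier basis on [0,1]: phi_1 = 1, phi_{2k} = sqrt(2)cos(2*pi*k*t), phi_{2k+1} = sqrt(2)sin(2*pi*k*t)
    if j == 1:
        return np.ones_like(t)
    k = j // 2
    trig = np.cos if j % 2 == 0 else np.sin
    return np.sqrt(2) * trig(2 * np.pi * k * t)

def series_estimator(x_obs, y_obs, K):
    # non-private truncated series estimator: theta_hat_j = mean(Y_i * phi_j(X_i))
    theta = [np.mean(y_obs * phi(j, x_obs)) for j in range(1, K + 1)]
    return lambda t: sum(th * phi(j, t) for j, th in zip(range(1, K + 1), theta))

def mise(n=500, K=9, sigma=0.3, reps=50, seed=0):
    # Monte Carlo approximation of R(f_hat, f) = E ∫_0^1 (f_hat(x) - f(x))^2 dx
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, 1, 2000, endpoint=False)
    vals = []
    for _ in range(reps):
        x = rng.uniform(0, 1, n)
        y = f_true(x) + sigma * rng.normal(size=n)
        f_hat = series_estimator(x, y, K)
        vals.append(np.mean((f_hat(grid) - f_true(grid)) ** 2))
    return float(np.mean(vals))
```

Replacing `series_estimator` with a private estimator makes it easy to trace how the MISE degrades as ε shrinks.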
As the true $f$ is unknown, we cannot hope to know $R(\hat f, f)$ in general, and we assume instead that $f$ belongs to some pre-specified class of functions $\mathcal{F}$. We may then circumvent the dependence on the unknown $f$ by considering the maximum MISE of $\hat f$ over the entire class $\mathcal{F}$,
$$R(\hat f, \mathcal{F}) := \sup_{f \in \mathcal{F}} R(\hat f, f) = \sup_{f \in \mathcal{F}} \mathbb{E}\left[\int_0^1 (\hat f(x) - f(x))^2 dx\right].$$
That is, $R(\hat f, \mathcal{F})$ measures the worst-case performance of $\hat f$ over the function class $\mathcal{F}$. In this example, we take $\mathcal{F}$ to be the periodic Sobolev class $\tilde W(\alpha, C)$ over $[0, 1]$: for $\alpha \in \mathbb{N}$ and $C > 0$,
$$\tilde W(\alpha, C) = \left\{ f : [0, 1] \to \mathbb{R} \,\Big|\, \int_0^1 (f^{(\alpha)}(x))^2 dx \le C^2,\ f^{(j)}(0) = f^{(j)}(1) \text{ for } j \in [\alpha - 1] \right\}.$$
As usual, let the collection of all $(\varepsilon, \delta)$-differentially private estimators be denoted by $\mathcal{M}_{\varepsilon,\delta}$. The privacy-constrained minimax risk of estimating $f$ is therefore
$$\inf_{\hat f \in \mathcal{M}_{\varepsilon,\delta}} \sup_{f \in \tilde W(\alpha, C)} \mathbb{E}\left[\int_0^1 (\hat f(x) - f(x))^2 dx\right].$$
We shall characterize the privacy-constrained minimax risk by first deriving a lower bound via the score attack method in Section 6.1, and then exhibiting an estimator with a matching risk upper bound in Section 6.2.

6.1 The Nonparametric Minimax Lower Bound

Lower bounding the nonparametric privacy-constrained minimax risk is made easier by a sequence of reductions to parametric lower bound problems. The first step is to consider the orthogonal series expansion of $f \in \tilde W(\alpha, C)$ with respect to the Fourier basis
$$\varphi_1(t) = 1; \quad \varphi_{2k}(t) = \sqrt{2}\cos(2\pi k t), \quad \varphi_{2k+1}(t) = \sqrt{2}\sin(2\pi k t), \quad k = 1, 2, 3, \cdots.$$
We have $f = \sum_{j=1}^\infty \theta_j \varphi_j$, where the Fourier coefficients are given by $\theta_j = \int_0^1 f(x)\varphi_j(x)dx$, $j = 1, 2, 3, \cdots$.
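As a quick numerical sanity check of this expansion, the coefficients $\theta_j = \int_0^1 f(x)\varphi_j(x)dx$ can be computed by quadrature, and a low-frequency periodic $f$ is recovered exactly by a short partial sum (the test function below is an illustrative choice):

```python
import numpy as np

def phi(j, t):
    # phi_1 = 1, phi_{2k} = sqrt(2)cos(2*pi*k*t), phi_{2k+1} = sqrt(2)sin(2*pi*k*t)
    if j == 1:
        return np.ones_like(t)
    k = j // 2
    trig = np.cos if j % 2 == 0 else np.sin
    return np.sqrt(2) * trig(2 * np.pi * k * t)

t = np.linspace(0, 1, 20000, endpoint=False)
f = np.sin(2 * np.pi * t) + 0.25 * np.cos(6 * np.pi * t)  # illustrative periodic f

# theta_j = ∫_0^1 f(x) phi_j(x) dx, approximated by a Riemann sum on the uniform grid
theta = {j: np.mean(f * phi(j, t)) for j in range(1, 10)}

# a 9-term partial sum reconstructs f exactly, since f is a low-frequency trig polynomial
f_rec = sum(theta[j] * phi(j, t) for j in range(1, 10))
max_err = np.max(np.abs(f_rec - f))
```

Here `theta[3]` is the coefficient of $\sqrt{2}\sin(2\pi t)$, so it equals $1/\sqrt{2}$, and `max_err` is at machine precision.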
The Fourier coe\ufb03cients allow a convenient representation of the periodic Sobolev class \u02dc W(\u03b1, C): a function f belongs to \u02dc W(\u03b1, C) if and only if its Fourier coe\ufb03cients belong to the \u201cSobolev ellipsoid\u201d, \u0398(\u03b1, C) = ( \u03b8 \u2208RZ+ : \u221e X j=1 \u03c4 2 j \u03b82 j < C2/\u03c02\u03b1 ) , (6.1) where \u03c4j = j\u03b1 for even j and \u03c4j = (j \u22121)\u03b1 for odd j. We can therefore de\ufb01ne \u02dc W(\u03b1, C) equivalently as \u02dc W(\u03b1, C) = ( f = \u221e X j=1 \u03b8j\u03d5j : \u03b8 \u2208\u0398(\u03b1, C) ) . This alternative de\ufb01nition of \u02dc W(\u03b1, C) motivates a reduction from the original lower bound problem over an in\ufb01nite-dimensional space, \u02dc W(\u03b1, C), to a \ufb01nite-dimensional lower bound problem. Speci\ufb01cally, for k \u2208N, consider the k-dimensional subspace \u02dc Wk(\u03b1, C) = ( f = \u221e X j=1 \u03b8j\u03d5j : \u03b8 \u2208\u0398(\u03b1, C), \u03b8j = 0 for every j > k ) . 28 \fIt follows that \u02dc Wk(\u03b1, C) \u2286\u02dc W(\u03b1, C) for every k; in other words, for every k we have inf \u02c6 f\u2208M\u03b5,\u03b4 sup f\u2208\u02dc W (\u03b1,C) E \u0014Z 1 0 ( \u02c6 f(x) \u2212f(x))2dx \u0015 \u2265 inf \u02c6 f\u2208M\u03b5,\u03b4 sup f\u2208\u02dc Wk(\u03b1,C) E \u0014Z 1 0 ( \u02c6 f(x) \u2212f(x))2dx \u0015 . (6.2) The next step is to \ufb01nd a minimax lower bound over each k-dimensional subspace, and optimize over k to solve the original problem. 6.1.1 Finite-dimensional Minimax Lower Bounds via Score Attack Once we focus on the k-dimensional subspace, the problem can be further simpli\ufb01ed. For an estimator \u02c6 f and some f \u2208\u02dc Wk(\u03b1, C), let {\u02c6 \u03b8j}j\u2208N and {\u03b8j}j\u2208N be their respective Fourier coe\ufb03cients. 
By the orthonormality of the Fourier basis, we have E \u0014Z 1 0 ( \u02c6 f(x) \u2212f(x))2dx \u0015 \u2265E k X j=1 (\u02c6 \u03b8j \u2212\u03b8j)2, (6.3) reducing the original problem into lower bounding the minimax mean squared risk of estimating a \ufb01nite-dimensional parameter. Let \u0398k(\u03b1, C) denote a \ufb01nite-dimensional restriction of the Sobolev ellipsoid, \u0398k(\u03b1, C) = ( \u03b8 \u2208Rk : k X j=1 \u03c4 2 j \u03b82 j < C2/\u03c02\u03b1 ) , and suppose M(X, Y ) is a di\ufb00erentially private estimator of \u03b8 = (\u03b81, \u03b82, \u00b7 \u00b7 \u00b7 , \u03b8k) \u2208\u0398k(\u03b1, C). For i \u2208[n], consider the score attack given by A(M(X, Y ), (Xi, Yi)) = * M(X, Y ) \u2212\u03b8, \u03c3\u22122 Yi \u2212 k X j=1 \u03b8j\u03d5j(Xi) ! \u03d5(Xi) + , where \u03d5 denotes the vector valued function \u03d5 : R \u2192Rk, \u03d5(x) = (\u03d51(x), \u03d52(x), \u00b7 \u00b7 \u00b7 , \u03d5k(x)). When the reference to M and (X, Y ) is clear, we notate Ai := A(M(X, Y ), (Xi, Yi)). To establish a lower bound of sup\u03b8\u2208\u0398k(\u03b1,C) E\u2225M(X, Y ) \u2212\u03b8\u22252 2, we shall analyze P i\u2208[n] EAi, the expected value of score attacks summed over an entire data set. Proposition 6.1. If M is an (\u03b5, \u03b4)-di\ufb00erentially private algorithm with 0 < \u03b5 < 1, then 29 \ffor su\ufb03ciently large n and every \u03b8 \u2208\u0398k(\u03b1, C), it holds that X i\u2208[n] EX,Y |\u03b8Ai \u2264\u03c3\u22121 \u0012 2n\u03b5 q EX,Y |\u03b8\u2225M(X, Y ) \u2212\u03b8\u22252 2 + 8Cn p k log(1/\u03b4)\u03b4 \u0013 . (6.4) The proof of Proposition 6.1 is deferred to Section D.1. After upper bounding P i\u2208[n] EX,Y |\u03b8Ai at every \u03b8 \u2208\u0398k(\u03b1, C), we show that P i\u2208[n] EX,Y |\u03b8Ai is bounded away from zero in an \u201caverage\u201d sense: there is a prior distribution \u03c0 over \u03b8 \u2208\u0398k(\u03b1, C) such that P i\u2208[n] E\u03b8EX,Y |\u03b8Ai is lower bounded. 
Speci\ufb01cally, let each \u03b8j follow the uniform distribution between \u2212B and B, where B2 = C2 2\u03c02\u03b1 \u0010R k+1 1 t2\u03b1dt \u0011\u22121 \u224dk\u2212(2\u03b1+1), so that k X j=1 \u03c4 2 j \u03b82 j \u2264B2 k X j=1 j2\u03b1 \u2264C2 2\u03c02\u03b1 ensures the chosen prior distribution is supported within \u0398k(\u03b1, C). Proposition 6.2. Let B2 = C2 2\u03c02\u03b1 \u0010R k+1 1 t2\u03b1dt \u0011\u22121 . Suppose M is an estimator of \u03b8 satisfying sup \u03b8\u2208\u0398k(\u03b1,C) E\u2225M(X, Y ) \u2212\u03b8\u22252 2 \u2264kB2 20 . If each coordinate of \u03b8 follows the uniform distribution between \u2212B and B, then there is some constant c > 0 such that X i\u2208[n] E\u03b8EX,Y |\u03b8Ai > ck. (6.5) The proposition is proved in Section D.2. Like in every parametric example we have considered so far, the bounds of the score attack\u2019s expectations, Propositions 6.1 and 6.2, imply a \ufb01nite-dimensional minimax lower bound. Proposition 6.3. If 0 < \u03b5 < 1 and 0 < \u03b4 < cn\u22122 for a su\ufb03ciently small constant c > 0, it holds that inf M\u2208M\u03b5,\u03b4 sup \u03b8\u2208\u0398k(\u03b1,C) E\u2225M(X, Y ) \u2212\u03b8\u22252 2 \u2273min \u0012 k\u22122\u03b1, k2 n2\u03b52 \u0013 . (6.6) The \ufb01nite-dimensional lower bound is proved in Section D.3. We are now ready to recover the nonparametric lower bound by optimizing over k. 30 \f6.1.2 Optimizing the Finite-dimensional Lower Bounds By the reductions (6.2) and (6.3), it su\ufb03ces to optimize the \ufb01nite-dimensional lower bound (6.6) with respect to k to obtain the desired lower bound over \u02dc W(\u03b1, C), by setting k \u224d (n\u03b5) 1 \u03b1+1. Theorem 6.1. If 0 < \u03b5 < 1, 0 < \u03b4 < cn\u22122 for a su\ufb03ciently small constant c > 0 and n\u03b5 \u22731, it holds that inf \u02c6 f\u2208M\u03b5,\u03b4 sup f\u2208\u02dc W (\u03b1,C) E \u0014Z 1 0 ( \u02c6 f(x) \u2212f(x))2dx \u0015 \u2273n\u2212 2\u03b1 2\u03b1+1 + (n\u03b5)\u22122\u03b1 \u03b1+1. 
(6.7)

The first term can be recognized as the optimal MISE for function estimation in the periodic Sobolev class of order $\alpha$, and the second term is the cost of differential privacy. The next section shows the optimality of this nonparametric privacy-constrained lower bound by exhibiting an estimator with matching MISE, up to a logarithmic factor in $n$.

6.2 Optimality of the Nonparametric Lower Bound

Absent the differential privacy constraint, the $j$th Fourier coefficient of the mean function $f$ can be estimated by its empirical version, $\hat\theta_j = n^{-1}\sum_{i=1}^n Y_i\varphi_j(X_i)$, and the function $f$ is then estimated by $\hat f(x) = \sum_{j=1}^K \hat\theta_j\varphi_j(x)$ for some appropriately chosen $K$. We likewise construct a private estimator of $f$ by estimating the Fourier coefficients with differential privacy; the resulting estimator of $f$ is then differentially private as well, by post-processing. The sample mean $\hat\theta_j = n^{-1}\sum_{i=1}^n Y_i\varphi_j(X_i)$ lends itself naturally to noise addition mechanisms, except that the Gaussian-distributed $Y_i$ are unbounded. Truncating the $Y_i$'s before computing the empirical coefficients bounds their sensitivity over adjacent data sets and informs our choice of random noise distribution. We fix the number of terms in the estimator at $K$, and let $\varphi$ denote the vector-valued function $\varphi : \mathbb{R} \to \mathbb{R}^K$, $\varphi(x) = (\varphi_1(x), \varphi_2(x), \cdots, \varphi_K(x))$. With the aforementioned truncation, the empirical Fourier coefficients are given by
$$\frac{1}{n}\sum_{i=1}^n Y_i \mathbb{1}(|Y_i| \le T) \cdot \varphi(X_i).$$
Over two adjacent data sets $D, D'$ with symmetric difference $\{(Y_i, X_i), (Y_i', X_i')\}$, the empirical coefficients differ by
$$\Delta_{D,D'} = \frac{1}{n}\Big(Y_i \mathbb{1}(|Y_i| \le T) \cdot \varphi(X_i) - Y_i' \mathbb{1}(|Y_i'| \le T) \cdot \varphi(X_i')\Big) \in \mathbb{R}^K.$$
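The sensitivity of the truncated coefficient vector can be checked empirically. Since $|\varphi_j| \le \sqrt{2}$, a crude bound is $\|\Delta_{D,D'}\|_2 \le 2T\sqrt{2K-1}/n$ (this bound and all numerical values below are an illustrative side calculation, not the paper's sharper $S$-norm analysis):

```python
import numpy as np

def phi_vec(x, K):
    # rows: (phi_1(x), ..., phi_K(x)) evaluated at each point of x
    cols = [np.ones_like(x)]
    for j in range(2, K + 1):
        k = j // 2
        trig = np.cos if j % 2 == 0 else np.sin
        cols.append(np.sqrt(2) * trig(2 * np.pi * k * x))
    return np.stack(cols, axis=-1)  # shape (n, K)

def trunc_coeffs(x, y, K, T):
    # (1/n) sum_i Y_i 1(|Y_i| <= T) phi(X_i)
    w = y * (np.abs(y) <= T)
    return (w[:, None] * phi_vec(x, K)).mean(axis=0)

rng = np.random.default_rng(1)
n, K, T = 200, 8, 3.0
x, y = rng.uniform(0, 1, n), rng.normal(0, 1, n)
base = trunc_coeffs(x, y, K, T)

# worst observed change over many adjacent data sets (one record replaced)
bound = 2 * T * np.sqrt(2 * K - 1) / n
worst = 0.0
for _ in range(200):
    x2, y2 = x.copy(), y.copy()
    x2[0], y2[0] = rng.uniform(), rng.normal(0, 5)  # arbitrary replacement record
    worst = max(worst, np.linalg.norm(trunc_coeffs(x2, y2, K, T) - base))
```

Every observed `worst` stays below `bound`; the point of the $S$-norm argument that follows is that this crude $\ell_2$ bound, which grows with $\sqrt{K}$, can be replaced by the dimension-free bound $2T/n$.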
Although the truncation of $Y$ and the boundedness of $\varphi$ imply straightforward $\ell_p$-norm bounds on $\Delta_{D,D'}$ that scale with the dimension $K$, [25] observes that noise addition according to the K-norm mechanism [27] (the "K" in "K-norm" is unrelated to the dimension $K$ of the estimator) can achieve much improved accuracy compared to the usual Laplace or Gaussian mechanisms based on $\ell_1$ or $\ell_2$ sensitivities. Specifically, observe that $\Delta_{D,D'}$ belongs to a scaled version of the set
$$S = \mathrm{conv}\{\pm\varphi(x), x \in [0, 1]\} \subseteq \mathbb{R}^K,$$
where $\mathrm{conv}\{\cdot\}$ refers to the convex hull. The set $S$, known as the universal Carathéodory orbitope [25, 47], is convex, compact, centro-symmetric, and has a non-empty interior, and therefore induces a norm on $\mathbb{R}^K$:
$$\|v\|_S = \inf\{r > 0 : v \in r \cdot S\}.$$
It then follows that $\|\Delta_{D,D'}\|_S \le 2T/n$ for any adjacent $D, D'$, and the K-norm mechanism [27] implies that $(\varepsilon, 0)$-differential privacy is achieved by
$$\tilde\theta_{K,T} = \frac{1}{n}\sum_{i=1}^n Y_i \mathbb{1}(|Y_i| \le T) \cdot \varphi(X_i) + w,$$
where $w$ is drawn from the density $g_w(t) \propto \exp\big(-\frac{n\varepsilon}{2T}\|t\|_S\big)$. While sampling from this unconventional distribution is highly non-trivial, Section 4.4.4 of [25] proposes an efficient sampling algorithm, and we focus on the statistical accuracy of $\tilde\theta_{K,T}$ and the associated function estimator
$$\tilde f_{K,T}(x) = \sum_{j=1}^K \big(\tilde\theta_{K,T}\big)_j \varphi_j(x). \quad (6.8)$$

Theorem 6.2. If $T = 4\sigma\sqrt{\log n}$ and $\sigma^2 \le c_0$ for some absolute constant $c_0$, and $K = c_1 \min\big(n^{\frac{1}{2\alpha+1}}, (n\varepsilon)^{\frac{1}{\alpha+1}}\big)$ for some absolute constant $c_1 > 0$, then
$$\sup_{f \in \tilde W(\alpha, C)} \mathbb{E}\left[\int_0^1 (\tilde f_{K,T}(x) - f(x))^2 dx\right] \lesssim n^{-\frac{2\alpha}{2\alpha+1}} + (n\varepsilon)^{-\frac{2\alpha}{\alpha+1}} \cdot \log n. \quad (6.9)$$
Theorem 6.2 is proved in Section D.4.
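A runnable sketch of the resulting private series estimator is below. Sampling from the K-norm density is non-trivial, so this sketch substitutes the standard Gaussian mechanism calibrated to the crude $\ell_2$-sensitivity $2T\sqrt{2K-1}/n$; it therefore gives $(\varepsilon, \delta)$-DP rather than $(\varepsilon, 0)$-DP and is less accurate in $K$ than the estimator analyzed above. All tuning values are illustrative:

```python
import numpy as np

def phi_vec(x, K):
    # rows: (phi_1(x), ..., phi_K(x))
    cols = [np.ones_like(x)]
    for j in range(2, K + 1):
        k = j // 2
        trig = np.cos if j % 2 == 0 else np.sin
        cols.append(np.sqrt(2) * trig(2 * np.pi * k * x))
    return np.stack(cols, axis=-1)

def private_series_estimator(x, y, K, T, eps, delta, rng):
    n = len(y)
    w = y * (np.abs(y) <= T)                        # truncate Y at level T
    theta_hat = (w[:, None] * phi_vec(x, K)).mean(axis=0)
    l2_sens = 2 * T * np.sqrt(2 * K - 1) / n        # crude l2-sensitivity
    noise_sd = l2_sens * np.sqrt(2 * np.log(1.25 / delta)) / eps  # Gaussian mechanism
    theta_tilde = theta_hat + rng.normal(0, noise_sd, K)
    return lambda t: phi_vec(t, K) @ theta_tilde

rng = np.random.default_rng(2)
n, sigma = 2000, 0.3
x = rng.uniform(0, 1, n)
f = lambda t: np.sin(2 * np.pi * t)                 # illustrative target function
y = f(x) + sigma * rng.normal(size=n)
f_tilde = private_series_estimator(x, y, K=7, T=4.0, eps=1.0, delta=1e-5, rng=rng)

grid = np.linspace(0, 1, 1000, endpoint=False)
err = np.mean((f_tilde(grid) - f(grid)) ** 2)       # approximate integrated squared error
```

With the K-norm mechanism in place of the Gaussian noise, the per-coordinate noise level would lose its $\sqrt{K}$ factor, which is exactly what Theorem 6.2 exploits.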
The risk upper bound (6.9) matches the privacy-constrained minimax lower bound (6.7), up to a logarithmic factor in n.

7 Discussion

In the present paper, we introduced a new technique, the score attack, for lower bounding the privacy-constrained minimax risk in differentially private learning. We demonstrated the effectiveness of this novel technique through a range of examples covering classical statistical problems, a ranking problem, a high-dimensional sparse problem, and a nonparametric problem. In each example, we were able to obtain an optimal minimax lower bound (up to at most a logarithmic factor) by defining an appropriate form of the score attack and carrying out the analysis introduced in Section 2.2. These results suggest that the score attack technique could be useful for characterizing the necessary cost of differential privacy in other statistical problems.

The logarithmic gaps between upper and lower bounds in this paper may warrant further investigation. Some of them appear to be artifacts of truncating unbounded data or composing iterative steps, and can potentially be eliminated by constructing more efficient algorithms. Some other gaps, related to the privacy parameter δ, may suggest interesting questions about the inherent difficulty of parameter estimation with differential privacy: for example, whether, or when, "approximate" (ε, δ)-differential privacy is less costly for statistical inference than "pure" (ε, 0)-differential privacy. At present, the score attack method has only been applied to the ℓ2-loss, but it would be useful to extend it to other loss functions for statistical problems, such as model selection, where the ℓ2-distance may not be the most appropriate metric.
Additionally, it would be interesting to explore whether the score attack method can be generalized to interval estimation and testing problems, as many lower bound methods in non-private statistical theory are uni\ufb01ed across point estimation, con\ufb01dence intervals, and hypothesis testing. 8 Proofs We prove Theorem 2.1 in this section. For reasons of space, the proofs of other results and technical lemmas are given in the supplement [16]. 8.1 Proof of Theorem 2.1 Proof. For soundness, we note that xi and M(X\u2032 i) are independent, and therefore EA\u03b8(xi, M(X\u2032 i)) = E\u27e8M(X\u2032 i) \u2212\u03b8, S\u03b8(xi)\u27e9= \u27e8EM(X\u2032 i) \u2212\u03b8, ES\u03b8(xi)\u27e9= 0. 33 \fThe last equality is true by the property of the score that ES\u03b8(z) = 0 for any z \u223cf\u03b8. As to the \ufb01rst absolute moment, we apply Jensen\u2019s inequality, E|A\u03b8(xi, M(X\u2032 i))| \u2264 p E\u27e8M(X\u2032 i) \u2212\u03b8, S\u03b8(xi)\u27e92 \u2264 p E(M(X\u2032 i) \u2212\u03b8)\u22a4(VarS\u03b8(xi))(M(X\u2032 i) \u2212\u03b8) \u2264 q E\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)). For completeness, we \ufb01rst simplify X i\u2208[n] EA\u03b8(xi, M(X)) = E D M(X) \u2212\u03b8, X i\u2208[n] S\u03b8(xi) E = E D M(X), X i\u2208[n] S\u03b8(xi) E . By the de\ufb01nition of score and that x1, \u00b7 \u00b7 \u00b7 , xn are i.i.d., P i\u2208[n] S\u03b8(xi) = S\u03b8(x1, \u00b7 \u00b7 \u00b7 , xn) = S\u03b8(X). It follows that E D M(X), X i\u2208[n] S\u03b8(xi) E = E D M(X), S\u03b8(X) E = X j\u2208[d] E \u0014 M(X)j \u2202 \u2202\u03b8j log f\u03b8(X) \u0015 . For each term in the right-side summation, one may exchange di\ufb00erentiation and integration thanks to the regularity conditions on f\u03b8, and therefore E \u0014 M(X)j \u2202 \u2202\u03b8j log f\u03b8(X) \u0015 = E \u0014 M(X)j(f\u03b8(X))\u22121 \u2202 \u2202\u03b8j f\u03b8(X) \u0015 = \u2202 \u2202\u03b8j E \u0002 M(X)j(f\u03b8(X))\u22121f\u03b8(X) \u0003 = \u2202 \u2202\u03b8j EM(X)j. 
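The soundness step in the proof above, namely that $\mathbb{E}A_\theta(x_i, M(X_i')) = 0$ because $x_i$ and $M(X_i')$ are independent and $\mathbb{E}S_\theta(x_i) = 0$, can be illustrated numerically in a toy Gaussian location model. Everything below, including the non-private "mechanism", is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, reps = 3, 50, 20000
sigma = 1.0
theta = np.array([0.5, -1.0, 2.0])

vals = []
for _ in range(reps):
    X = theta + sigma * rng.normal(size=(n, d))   # data set X'_i (does not contain x_i)
    x_i = theta + sigma * rng.normal(size=d)      # independent draw playing the role of x_i
    M = X.mean(axis=0)                            # toy mechanism: the sample mean
    score = (x_i - theta) / sigma**2              # score of N(theta, sigma^2 I) at x_i
    vals.append((M - theta) @ score)              # A_theta(x_i, M(X'_i))
mean_attack = abs(float(np.mean(vals)))           # ≈ 0 by independence and E S_theta = 0
```

Recomputing the same average with $M$ applied to a data set that contains $x_i$ yields a strictly positive mean, which is the completeness side of Theorem 2.1.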
8.1.1 Proof of Lemma 2.1

Proof. Let $A_i := A_\theta(x_i, M(X))$, $A_i' := A_\theta(x_i, M(X_i'))$, and let $Z^+ = \max(Z, 0)$ and $Z^- = -\min(Z, 0)$ denote the positive and negative parts of a random variable $Z$, respectively. We have
$$\mathbb{E}A_i = \mathbb{E}A_i^+ - \mathbb{E}A_i^- = \int_0^\infty P(A_i^+ > t)dt - \int_0^\infty P(A_i^- > t)dt.$$
For the positive part, if $0 < T < \infty$ and $0 < \varepsilon < 1$, we have
$$\int_0^\infty P(A_i^+ > t)dt = \int_0^T P(A_i^+ > t)dt + \int_T^\infty P(A_i^+ > t)dt$$
$$\le \int_0^T \big(e^\varepsilon P(A_i'^+ > t) + \delta\big)dt + \int_T^\infty P(A_i^+ > t)dt$$
$$\le \int_0^\infty P(A_i'^+ > t)dt + 2\varepsilon\int_0^\infty P(A_i'^+ > t)dt + \delta T + \int_T^\infty P(|A_i| > t)dt.$$
Similarly, for the negative part,
$$\int_0^\infty P(A_i^- > t)dt = \int_0^T P(A_i^- > t)dt + \int_T^\infty P(A_i^- > t)dt$$
$$\ge \int_0^T \big(e^{-\varepsilon} P(A_i'^- > t) - \delta\big)dt + \int_T^\infty P(A_i^- > t)dt$$
$$\ge \int_0^T P(A_i'^- > t)dt - 2\varepsilon\int_0^T P(A_i'^- > t)dt - \delta T + \int_T^\infty P(A_i^- > t)dt$$
$$\ge \int_0^\infty P(A_i'^- > t)dt - 2\varepsilon\int_0^\infty P(A_i'^- > t)dt - \delta T.$$
It then follows that
$$\mathbb{E}A_i \le \int_0^\infty P(A_i'^+ > t)dt - \int_0^\infty P(A_i'^- > t)dt + 2\varepsilon\int_0^\infty P(|A_i'| > t)dt + 2\delta T + \int_T^\infty P(|A_i| > t)dt$$
$$= \mathbb{E}A_i' + 2\varepsilon\mathbb{E}|A_i'| + 2\delta T + \int_T^\infty P(|A_i| > t)dt.$$
The proof is now complete by soundness (2.4).

8.1.2 Proof of Lemma 2.2

Proof.
For each j \u2208[d], by Lemma 2.1, we have E\u03c0j \u0012 \u2202 \u2202\u03b8j gj(\u03b8) \u0013 = E\u03c0j \u0012 \u2202 \u2202\u03b8j E[gj(\u03b8)|\u03b8j] \u0013 = E\u03c0j \u0014\u2212E[gj(\u03b8)|\u03b8j]\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u0015 Because |gj(\u03b8) \u2212\u03b8j| \u2264\u2225g(\u03b8) \u2212\u03b8\u22252 \u2264EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 for every \u03b8 \u2208\u0398, we have E\u03c0j \u0014\u2212E[g(\u03b8)|\u03b8j]\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u0015 \u2265E\u03c0j \u0014\u2212\u03b8j\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u0015 \u2212E\u03c0j \u0014 EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 \u00b7 \f \f \f \f \u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \f \f \f \f \u0015 \u2265E\u03c0j \u0014\u2212\u03b8j\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u0015 \u2212 q E\u03c0jEX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 v u u tE\u03c0j \"\u0012\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u00132# . 35 \fSo we have obtained E\u03c0j \u0012 \u2202 \u2202\u03b8j gj(\u03b8) \u0013 \u2265E\u03c0j \u0014\u2212\u03b8j\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u0015 \u2212 q E\u03c0jEX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 v u u tE\u03c0j \"\u0012\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u00132# . Now we take expectation over \u03c0(\u03b8)/\u03c0j(\u03b8j) and sum over j \u2208[d] to complete the proof." + }, + { + "url": "http://arxiv.org/abs/2301.10392v1", + "title": "Statistical Inference and Large-scale Multiple Testing for High-dimensional Regression Models", + "abstract": "This paper presents a selective survey of recent developments in statistical\ninference and multiple testing for high-dimensional regression models,\nincluding linear and logistic regression. We examine the construction of\nconfidence intervals and hypothesis tests for various low-dimensional\nobjectives such as regression coefficients and linear and quadratic\nfunctionals. 
The key technique is to generate debiased and desparsified\nestimators for the targeted low-dimensional objectives and estimate their\nuncertainty. In addition to covering the motivations for and intuitions behind\nthese statistical methods, we also discuss their optimality and adaptivity in\nthe context of high-dimensional inference. In addition, we review the recent\ndevelopment of statistical inference based on multiple regression models and\nthe advancement of large-scale multiple testing for high-dimensional\nregression. The R package SIHR has implemented some of the high-dimensional\ninference methods discussed in this paper.", + "authors": "T. Tony Cai, Zijian Guo, Yin Xia", + "published": "2023-01-25", + "updated": "2023-01-25", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction High-dimensional data analysis has become a vital part of scienti\ufb01c research in many \ufb01elds. While the abundance of high-dimensional data o\ufb00ers numerous opportunities for statistical analysis, it also presents signi\ufb01cant technical 1 arXiv:2301.10392v1 [stat.ME] 25 Jan 2023 \fSpringer Nature 2021 L AT EX template 2 High-dimensional Regression challenges, as the number of variables can be much larger than the number of observations. In high-dimensional settings, most of the classical inferential procedures, such as the maximum likelihood, are no longer valid. In recent years, there has been signi\ufb01cant progress in developing new theories and methods for parameter estimation, hypothesis testing, con\ufb01dence interval construction, and large-scale simultaneous inference in the context of high-dimensional data analysis. In this paper, we provide a selective survey of recent advances in statistical inference and multiple testing for high-dimensional regression models, commonly used in modern data analysis across various \ufb01elds such as genetics, metabolomics, \ufb01nance, health, and economics. 
Much progress has been made in estimation and prediction under the high-dimensional linear and generalized linear models (GLMs); see, for example, [1\u201320]. Theoretical properties have been established in di\ufb00erent settings, including the minimax estimation rate and the rate of convergence for the estimation and prediction errors of several penalized procedures. Uncertainty quanti\ufb01cation is at the heart of many critical scienti\ufb01c applications and is more challenging than point estimation. For high-dimensional regression models, although the Lasso and other penalized estimators have been shown to achieve the optimal rates of convergence for estimation, these estimators su\ufb00er from non-negligible bias that makes them unsuitable to be directly used for statistical inference, as noted in several studies [21\u201323]. To overcome this issue, debiased inference methods have been developed in [21\u201323] that correct the bias of penalized estimators and allow for statistical inference based on the debiased estimators. The development of debiased inference methods has led to an increase in research on statistical inference for a wide range of low-dimensional objectives in di\ufb00erent high-dimensional models; see, for example, [24\u201347]. In particular, [29] studied the minimaxity and adaptivity of con\ufb01dence intervals for general linear functionals of a high-dimensional regression vector and found signi\ufb01cant di\ufb00erences between the cases of sparse and dense loading vectors. Another important inference tool, known as Neyman\u2019s orthogonalization or double machine learning, has been proposed in econometrics to enable inference for low-dimensional objectives with high-dimensional nuisance parameters; see, for example, [48\u201352]. 
In the single regression model (one-sample) setting, we observe data {Yk, Xk,\u00b7}1\u2264k\u2264n, where Yk \u2208R and Xk,\u00b7 \u2208Rp+1 denote the outcome and the high-dimensional covariates respectively, generated independently from the high-dimensional GLM, Yk = h(X\u22ba k,\u00b7\u03b2) + \u03f5k, for 1 \u2264k \u2264n (1) with E(\u03f5k|Xk,\u00b7) = 0 and the high-dimensional regression vector \u03b2 \u2208Rp+1. Throughout the paper, we use \u03b21 to denote the intercept and set Xk,1 = 1. We assume that the high-dimensional covariates Xk,\u22121 \u2208Rp are centered and subgaussian and the matrix \u03a3 \u2261EXk,\u00b7X\u22ba k,\u00b7 \u2208R(p+1)\u00d7(p+1) is well-conditioned. We focus on the linear and logistic regression models with the link function h(z) = z and h(z) = exp(z)/[1 + exp(z)] respectively. The regression vector \fSpringer Nature 2021 L AT EX template High-dimensional Regression 3 \u03b2 is assumed to be sparse, and its sparsity level is denoted by \u2225\u03b2\u22250. The high-dimensional covariates Xk,\u00b7 might come from a large number of measured covariates or the basis transformations of the baseline covariates. For the linear model, we further assume the error \u03f5k is sub-gaussian with homoscedastic regression error \u03c32 \u03f5 = E(\u03f52 k|Xk,\u00b7). In addition to the one-sample setting, we examine the statistical inference methods for the two-sample high-dimensional regression models. For d = 1, 2, we assume that the data {Y (d) k , X (d) k,\u00b7}1\u2264k\u2264nd are i.i.d. generated, following Y (d) k = h([X (d) k,\u00b7]\u22ba\u03b2 (d)) + \u03f5 (d) k , for 1 \u2264k \u2264nd, (2) where E(\u03f5 (d) k |X (d) k,\u00b7) = 0 and h(\u00b7) is the pre-speci\ufb01ed link function. 
Based on the models above, this paper \ufb01rst addresses the challenges of making statistical inferences for low-dimensional objectives (e.g., linear and quadratic functionals) in high-dimensional regression, both in oneand twosample settings. Speci\ufb01cally, the following quantities are of particular interest. 1. Linear functional x\u22ba new\u03b2 with xnew \u2208Rp+1 in one-sample setting. The linear functional x\u22ba new\u03b2 includes as special cases the single regression coe\ufb03cient \u03b2j [21\u201323, 29] when xnew is the jth canonical unit vector and the conditional mean of the outcome under (1) when xnew is a future observation\u2019s covariates. When xnew denotes the average of the covariates observations for a group, x\u22ba new\u03b2 is closely related to average treatment e\ufb00ect [30]. In logistic regression, the linear functional x\u22ba new\u03b2 is closely related to the case probability [24]. 2. Quadratic functionals \u03b2\u22ba GA\u03b2G and \u03b2\u22ba G\u03a3G,G\u03b2G with G \u2282{1, 2, \u00b7 \u00b7 \u00b7 , p+1} and A \u2208R|G|\u00d7|G| in one-sample setting. For a subset G, these quadratic functionals measure the total e\ufb00ect of variables in G. Statistical inference for quadratic functionals can be motivated from the group signi\ufb01cance test, and the (local) genetic heritability estimation [25, 26]. The inference method can be generalized to handle heterogeneous e\ufb00ect tests, hierarchical testing, prediction loss evaluation, and con\ufb01dence ball construction [26, 27]. 3. Di\ufb00erence between linear functionals h(x\u22ba new\u03b2(2)) \u2212h(x\u22ba new\u03b2(1)) with xnew \u2208Rp+1 in two-sample setting. This di\ufb00erence measures the discrepancy between the conditional means, which is closely related to individual treatment selection for the new observation xnew \u2208Rp+1 [53]. 4. 
Inner products of regression vectors [\u03b2(1)]\u22ba\u03b2(2) and [\u03b2(1)]\u22baA\u03b2(2) with the weighting matrix A \u2208R(p+1)\u00d7(p+1) in two-sample setting. The inner product of regression vectors or its weighted version measures the similarity between the two regression vectors. In genetic studies, such inner products can be used as the genetic relatedness measure when the covariates are genetic variants, and outcome variables are di\ufb00erent phenotypes [25, 28]. We examine statistical inference procedures for linear and quadratic functionals from both methodological and theoretical perspectives. We also discuss the optimality results for the corresponding estimation and inference problems. A user-friendly R package SIHR [54] has been developed to implement the statistical inference methods for the low-dimensional objectives mentioned above. \fSpringer Nature 2021 L AT EX template 4 High-dimensional Regression This package provides a convenient way to apply the discussed statistical inference methods. Beyond the aforementioned inference for a single coordinate of the regression vector or other one-dimensional functionals, we also discuss the simultaneous inference of high-dimensional regression models. This includes using global methods with maximum-type statistics to test the entire regression coe\ufb03cient vector [55\u201357], as well as component-wise simultaneous inference methods that control the false discovery rate (FDR). Speci\ufb01cally, we examine the one-sample testing of high-dimensional linear regression coe\ufb03cients [58], the comparison of two high-dimensional linear regression models [59], and the joint testing of regression coe\ufb03cients across multiple responses [60]. We also extend our discussion to logistic regression models. We discuss these large-scale multiple testing problems focusing on controlling the asymptotic FDR. While error rate control is important for simultaneous inference, statistical power is also crucial. 
However, many existing testing methods for high-dimensional linear models do not consider auxiliary information, such as model sparsity and heteroscedasticity, that could improve statistical power. While there has been a signi\ufb01cant amount of research on methods to enhance power in multiple testing in general [61\u201369, among many others], recent e\ufb00orts have also focused on simultaneous inference methods that incorporate auxiliary information to assist power improvement of high-dimensional regression analysis. For example, [60] achieved power gains by leveraging similarities across multivariate responses; [70] explored the sparsity information hidden in the data structures and improved the power through p-value weighting mechanisms; [71] obtained power enhancement through integrating heterogeneous linear models. In the current paper, we primarily focus on power enhancement in a two-sample inference setting where the high-dimensional objects of interest are individually sparse. We will discuss methods for controlling FDR with and without power enhancement and the related theoretical aspects. The rest of the paper is organized as follows. We \ufb01nish this section by introducing the notation. Section 2 discusses the debiased inference idea for the regression coe\ufb03cients, and Section 3 presents the debiased methods for linear and quadratic functionals in oneand two-sample settings. Section 4 focuses on simultaneous inference for high-dimensional regression models. We conclude the paper by discussing other related works in Section 5. Notation. For an event E, denote by I{E} its indicator function. For an index set J \u2282{1, 2, \u00b7 \u00b7 \u00b7 , p} and a vector x \u2208Rp, xJ is the sub-vector of x with indices in J and x\u2212J is the sub-vector with indices in Jc. For a set S, |S| denotes the cardinality of S. 
For a vector x \u2208Rp, the \u2113q norm of x is de\ufb01ned as \u2225x\u2225q = \u0000Pp l=1 |xl|q\u0001 1 q for q \u22650 with \u2225x\u22250 = |{1 \u2264l \u2264p : xl \u0338= 0}| and \u2225x\u2225\u221e= max1\u2264l\u2264p |xl|. Denote by ej the jth canonical unit vector and Ip \u2208Rn\u00d7p the identity matrix. For a symmetric matrix A, denote by \u03bbmax(A) and \u03bbmin(A) its maximum and minimum eigenvalues, respectively. For a matrix A \u2208Rn\u00d7p, A\u00b7,j and Ai,\u00b7 respectively denote the jth column and ith row of A, Ai,j denotes the (i, j)th entry of A, Ai,\u2212j denotes the ith row of A with its jth \fSpringer Nature 2021 L AT EX template High-dimensional Regression 5 entry removed, A\u2212i,j denotes the jth column of A with its ith entry removed, Ai,\u2212{j1,j2} denotes the ith row of A with its jth 1 and jth 2 entries both removed and A\u2212i,\u2212j \u2208R(n\u22121)\u00d7(p\u22121) denotes the submatrix of A with its ith row and jth column removed. We use c and C to denote generic positive constants that may vary from place to place. For two positive sequences an and bn, an \u2272bn means there exists a constant C > 0 such that an \u2264Cbn for all n; an \u224dbn if an \u2272bn and bn \u2272an, and an \u226abn if limn\u2192\u221ean/bn = 0. Let oP{an} and OP{an} respectively represent the sequences that grow in a smaller and equal/smaller rate of the sequence an with probability approaching 1 as n \u2192\u221e. 2 Inference for Regression Coe\ufb03cients In this section, we begin with a discussion in Section 2.1 on several commonly used penalized estimators for the high-dimensional GLMs. We review in Section 2.2 the debiased methods for linear models introduced in [21\u201323] and discuss its extensions to high-dimensional logistic regression in Section 2.3. We also present the optimality of the con\ufb01dence interval construction in both linear and logistic high-dimensional regression models. 
2.1 Estimation in high-dimensional regression

For the high-dimensional linear model (1), a commonly used estimator of the regression vector $\beta$ is the Lasso estimator [1], defined as
$$\hat\beta = \arg\min_{\beta\in\mathbb{R}^{p+1}} \frac{\|Y - X\beta\|_2^2}{2n} + \lambda_0 \sum_{j=2}^{p+1} \frac{\|X_{\cdot,j}\|_2}{\sqrt{n}}\,|\beta_j|, \quad (3)$$
with the tuning parameter $\lambda_0 = A\sigma_\epsilon\sqrt{\log p/n}$ for some positive constant $A > 2$. In the penalized regression (3), we do not penalize the intercept $\beta_1$. The tuning parameter $\lambda_0$ is typically chosen by cross-validation, as implemented in the R package glmnet [18]. With the Lasso estimator $\hat\beta$, the variance $\sigma_\epsilon^2$ can be estimated by $\hat\sigma_\epsilon^2 = \frac{1}{n}\|Y - X\hat\beta\|_2^2$.

The tuning parameter $\lambda_0$ in (3) depends on the noise level $\sigma_\epsilon$. Alternative estimators have been proposed such that the tuning parameter does not depend on the unknown noise level [12, 13, e.g.]. In particular, [13] proposed the scaled Lasso estimator
$$\{\hat\beta, \hat\sigma_\epsilon\} = \arg\min_{\beta\in\mathbb{R}^{p+1},\,\sigma_\epsilon\in\mathbb{R}^+} \frac{\|Y - X\beta\|_2^2}{2n\sigma_\epsilon} + \frac{\sigma_\epsilon}{2} + \lambda_0 \sum_{j=2}^{p+1} \frac{\|X_{\cdot,j}\|_2}{\sqrt{n}}\,|\beta_j| \quad (4)$$
with the tuning parameter $\lambda_0 = A\sqrt{\log p/n}$ for some positive constant $A > 2$. Beyond the estimators mentioned above, a wide collection of estimators of high-dimensional regression vectors has been proposed [e.g., 3, 4, 6, 7, 9-12]. For the high-dimensional logistic model (1), penalized methods have also been well developed to estimate $\beta \in \mathbb{R}^{p+1}$ [e.g., 6, 7, 9, 14-16].
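The Lasso objective (3) can be sketched with a minimal cyclic coordinate-descent solver; this is a numpy-only illustration (in practice one would use glmnet with cross-validation, as noted above), and the function name `lasso_cd`, the synthetic data, and the choice of tuning constant are ours.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize ||y - X b||_2^2 / (2n) + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-column Gram diagonal
    r = y.copy()                           # residual for b = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j's contribution
            rho = X[:, j] @ r / n          # partial correlation with residual
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(0)
n, p, s = 200, 50, 3
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 2.0
y = X @ beta + 0.5 * rng.standard_normal(n)

lam = 0.5 * np.sqrt(np.log(p) / n)         # lambda_0 ~ A * sigma_eps * sqrt(log p / n)
beta_hat = lasso_cd(X, y, lam)
sigma_hat = np.sqrt(np.mean((y - X @ beta_hat) ** 2))   # noise-level estimate
```

The fitted coefficients recover the three active signals while thresholding the null coordinates to (near) zero, and `sigma_hat` tracks the noise level $\sigma_\epsilon = 0.5$.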
In this paper, we use the penalized log-likelihood estimator $\hat\beta$ of [6], defined as
$$\hat\beta = \arg\min_{\beta} \frac{1}{n}\sum_{k=1}^n \left\{\log[1 + \exp(X_{k,\cdot}^\intercal\beta)] - Y_k (X_{k,\cdot}^\intercal\beta)\right\} + \lambda_0 \sum_{j=2}^{p+1} \frac{\|X_{\cdot,j}\|_2}{\sqrt{n}}\,|\beta_j|, \quad (5)$$
with a positive tuning parameter $\lambda_0 \asymp \sqrt{\log p/n}$.

2.2 Debiased or desparsified estimators in linear models

The penalized estimators introduced in Section 2.1 have been shown to achieve the optimal convergence rate and to satisfy desirable variable selection properties [5, 72-74]. However, [21-23] highlighted that the Lasso and other penalized estimators are not ready for statistical inference due to their non-negligible estimation bias. They further proposed correcting the penalized estimators' bias and then making inference based on the bias-corrected estimators. In the following, we present the main idea of the bias correction method.

To illustrate the main idea, we fix the parameter index $2 \le j \le p+1$ and focus on confidence interval construction for $\beta_j$ in the model (1). With $\hat\beta$ denoting the Lasso estimator in (3), the main idea of the method proposed in [22, 23] is to estimate the error of the plug-in estimator, $\hat\beta_j - \beta_j$. The approximation of the error $\hat\beta_j - \beta_j$ can be motivated by the following decomposition: for any vector $u \in \mathbb{R}^{p+1}$,
$$u^\intercal \frac{1}{n}\sum_{k=1}^n X_{k,\cdot}(Y_k - X_{k,\cdot}^\intercal\hat\beta) - (\beta_j - \hat\beta_j) = u^\intercal \frac{1}{n}X^\intercal\epsilon + \left(\hat\Sigma u - e_j\right)^\intercal(\beta - \hat\beta), \quad (6)$$
with $\hat\Sigma = \frac{1}{n}\sum_{k=1}^n X_{k,\cdot}X_{k,\cdot}^\intercal$. We explain how to construct the vector $u$ by balancing the two terms on the right-hand side of the decomposition in (6).
The \ufb01rst term on the right-hand side of (6) has the conditional variance \u03c32 \u03f5 /n\u00b7u\u22bab \u03a3u while the second term can be further upper-bounded by \f \f \f \u0010 b \u03a3u \u2212ej \u0011\u22ba (\u03b2 \u2212b \u03b2) \f \f \f \u2264\u2225b \u03a3u \u2212ej\u2225\u221e\u2225\u03b2 \u2212b \u03b2\u22251. (7) An algorithmic method of constructing u is to constrain the bias and minimize the variance. Particularly, [22, 23] proposed the following construction of the projection direction, b u = arg min u\u2208Rp+1 u\u22bab \u03a3u subject to \u2225b \u03a3u \u2212ej\u2225\u221e\u2264\u03bb (8) with \u03bb \u224d p log p/n denoting a positive tuning parameter. The construction in (8) is designed to minimize the conditional variance u\u22bab \u03a3u of the \u201casymp normal\u201d term and constrain \u2225b \u03a3u\u2212ej\u2225\u221e, which further controls the \u201cremaining bias\u201d term as in (7). With the projection direction b u in (8), [22, 23] introduced the debiased estimator, b \u03b2Deb j = b \u03b2j + b u\u22ba1 n n X k=1 Xk,\u00b7(Yk \u2212X\u22ba k,\u00b7 b \u03b2), for 2 \u2264j \u2264p + 1. (9) \fSpringer Nature 2021 L AT EX template High-dimensional Regression 7 Following from (6), we obtain the following error decomposition of b \u03b2Deb j , b \u03b2Deb j \u2212\u03b2j = b u\u22ba1 n n X k=1 Xk,\u00b7\u03f5k + (b \u03a3b u \u2212ej)\u22ba(\u03b2 \u2212b \u03b2). (10) The \ufb01rst term on the right-hand side of (10) is asymptotically normal with the asymptotic variance (\u03c32 \u03f5 /n) \u00b7 b u\u22bab \u03a3b u. Implied by (7), we show that the projection direction b u in (8) constrains the second term on the right-hand side of (10) by an upper bound \u03bb\u2225\u03b2 \u2212b \u03b2\u22251, with the tuning parameter \u03bb speci\ufb01ed in (8). 
The bound $\lambda\|\beta - \hat\beta\|_1$ can be shown to be of rate $\|\beta\|_0 \log p/n$, and the rate of convergence of the estimation error $\hat\beta_j^{\rm Deb} - \beta_j$ is $\frac{1}{\sqrt{n}} + \frac{\|\beta\|_0\log p}{n}$, which is the minimax optimal rate of estimating the regression coefficient $\beta_j$ [31, 33]. More importantly, [22, 23] showed that the debiased estimator $\hat\beta_j^{\rm Deb}$ in (9) is approximately unbiased and asymptotically normal when $\|\beta\|_0 \ll \sqrt{n}/\log p$. Based on the asymptotic normality, [22, 23] constructed the following confidence interval,
$${\rm CI} = \left(\hat\beta_j^{\rm Deb} - z_{\alpha/2}\sqrt{\hat V},\ \hat\beta_j^{\rm Deb} + z_{\alpha/2}\sqrt{\hat V}\right) \quad \text{with } \hat V = \frac{\hat\sigma_\epsilon^2}{n}\hat u^\intercal\hat\Sigma\hat u, \quad (11)$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution.

Remark 1. In the low-dimensional setting, we may set $\lambda = 0$ in (8) and obtain $\hat u = \hat\Sigma^{-1}e_j$. This choice of $\hat u$ reduces the debiased estimator in (9) to the OLS estimator. The debiased estimator is also referred to as the "desparsified" estimator [21, 23], since $\hat\beta_j^{\rm Deb}$ is generally nonzero even if the true $\beta_j$ is zero. Hence, even if $\beta$ is a sparse vector, the vector $\hat\beta^{\rm Deb} = (\hat\beta_1^{\rm Deb}, \hat\beta_2^{\rm Deb}, \cdots, \hat\beta_{p+1}^{\rm Deb})^\intercal$ is dense. Consequently, $\hat\beta^{\rm Deb}$ does not estimate $\beta$ well in general, even though $\hat\beta_j^{\rm Deb}$ is an optimal estimator of $\beta_j$ for every $2 \le j \le p+1$.

2.2.1 Optimality of statistical inference

In the high-dimensional linear model, the paper [31] established the minimax expected length of confidence intervals over the parameter space
$$\Theta(s) = \left\{\theta = (\beta, \Sigma, \sigma_\epsilon) : \|\beta\|_0 \le s,\ c_0 \le \lambda_{\min}(\Sigma) \le \lambda_{\max}(\Sigma) \le C_0,\ 0 < \sigma_\epsilon \le C_1\right\}, \quad (12)$$
where $C_0 \ge c_0 > 0$ and $C_1 > 0$ are positive constants.
The space $\Theta(s)$ contains all regression vectors with at most $s$ nonzero elements. As established in [31], the minimax expected length of confidence intervals over $\Theta(s)$ in the regime $s \lesssim n/\log p$ is $\frac{1}{\sqrt{n}} + \frac{s\log p}{n}$. When $s \lesssim \frac{\sqrt{n}}{\log p}$, the optimal length $1/\sqrt{n}$ can be achieved by the confidence interval in (11). Over the regime $\frac{\sqrt{n}}{\log p} \ll s \lesssim \frac{n}{\log p}$, [31] proposed a confidence interval attaining the optimal rate $s\log p/n$, where the construction requires prior knowledge of the sparsity level $s$. We illustrate the minimax expected length in Figure 1.

Fig. 1 An illustration of the minimax optimality and adaptivity of confidence intervals with respect to the sparsity $s$ of $\beta$ in the unknown design setting. The top of the figure reports the minimax expected lengths of the confidence intervals; the bottom presents the possibility of being adaptive to the sparsity $s$.

More importantly, [29] studied the possibility of constructing a rate-optimal adaptive confidence interval. Here, an adaptive confidence interval means that, even without knowledge of the sparsity level $s$, the length of the constructed confidence interval automatically adapts to $s$. Since the sparsity of the regression vector $\beta$ is generally unknown, it is desirable to construct an adaptive confidence interval. The work [29] established the following adaptivity result: if the design covariance matrix $\Sigma$ is unknown, it is possible to construct an adaptive confidence interval only in the ultra-sparse regime $s \lesssim \sqrt{n}/\log p$. That is, if $\sqrt{n}/\log p \ll s \lesssim n/\log p$, it is impossible to construct a rate-optimal confidence interval that is adaptive to the sparsity level.
This phase transition for the possibility of constructing adaptive confidence intervals is presented in Figure 1.

The information about $\Sigma$ is critical for constructing optimal confidence intervals. If $\Sigma$ is known, the minimax expected length is $1/\sqrt{n}$ over the entire sparse regime $s \lesssim n/\log p$; see [29, 75] for details. This contrasts sharply with the optimality results in Figure 1.

2.2.2 Another viewpoint: debiasing with decorrelation

In this subsection, we detour to introduce a slightly different view of the debiased Lasso estimator proposed in [21, 23], which takes the form
$$\tilde\beta_j^{\rm Deb} = \frac{Z_{\cdot,j}^\intercal(Y - X_{\cdot,-j}\hat\beta_{-j})}{Z_{\cdot,j}^\intercal X_{\cdot,j}}, \quad (13)$$
where $Z_{\cdot,j} \in \mathbb{R}^n$ is a decorrelation vector to be specified. [21, 23] proposed to construct the vector $Z_{\cdot,j} \in \mathbb{R}^n$ as the residual $Z_{\cdot,j} = X_{\cdot,j} - X_{\cdot,-j}\hat\gamma$, with the Lasso estimator $\hat\gamma = \arg\min_{\gamma} \frac{1}{2n}\|X_{\cdot,j} - X_{\cdot,-j}\gamma\|_2^2 + \lambda_\gamma \sum_{l\ne j} \frac{\|X_{\cdot,l}\|_2}{\sqrt{n}}\,|\gamma_l|$, where $\lambda_\gamma > 0$ is a positive tuning parameter. The KKT condition ensures that the residual $Z_{\cdot,j} = X_{\cdot,j} - X_{\cdot,-j}\hat\gamma$ is nearly orthogonal to all columns of $X_{\cdot,-j}$, via
$$\frac{1}{n}\|Z_{\cdot,j}^\intercal X_{\cdot,-j}\|_\infty \le \lambda_\gamma \cdot \max_{l\ne j} \frac{\|X_{\cdot,l}\|_2}{\sqrt{n}}.$$
To see the effectiveness of the estimator in (13), we examine its estimation error
$$\tilde\beta_j^{\rm Deb} - \beta_j = \frac{Z_{\cdot,j}^\intercal\epsilon}{Z_{\cdot,j}^\intercal X_{\cdot,j}} + \frac{Z_{\cdot,j}^\intercal X_{\cdot,-j}(\beta_{-j} - \hat\beta_{-j})}{Z_{\cdot,j}^\intercal X_{\cdot,j}}. \quad (14)$$
In the above expression, the first term on the right-hand side can be shown to be asymptotically normal under regularity conditions, while the second term is constrained as
$$\left|\frac{Z_{\cdot,j}^\intercal X_{\cdot,-j}(\beta_{-j} - \hat\beta_{-j})}{Z_{\cdot,j}^\intercal X_{\cdot,j}}\right| \le \frac{\frac{1}{n}\|Z_{\cdot,j}^\intercal X_{\cdot,-j}\|_\infty \cdot \|\beta_{-j} - \hat\beta_{-j}\|_1}{\frac{1}{n}|Z_{\cdot,j}^\intercal X_{\cdot,j}|} \le C\lambda_\gamma\|\hat\beta - \beta\|_1,$$
where the last inequality holds with high probability for some positive constant $C > 0$. With the above argument, [21, 23] showed that the first term in the decomposition (14) is the dominating term in the ultra-sparse regime $\|\beta\|_0 \ll \sqrt{n}/\log p$. This leads to the asymptotic normality of the estimator $\tilde\beta_j^{\rm Deb}$, and [21, 23] further constructed confidence intervals based on this asymptotic normality. In the current paper, we focus on methods generalizing the debiased estimator in (9), instead of the decorrelation form in (13). However, the decorrelation idea has been extended to handle other statistical inference problems, including the high-dimensional generalized linear model [21, 32], the Gaussian graphical model [32], the high-dimensional confounding model [76, 77], and the high-dimensional additive model [78].

2.3 Debiasing in binary outcome models

We generalize the debiased estimator in (9) to high-dimensional GLMs with a binary outcome. Similarly to the Lasso estimator, the penalized logistic regression estimator $\hat\beta$ in (5) suffers from bias due to the $\ell_1$ penalty. The bias-corrected estimator takes the following generic form,
$$\hat\beta_j^{\rm Deb} = \hat\beta_j + \hat u^\intercal \frac{1}{n}\sum_{k=1}^n W_k X_{k,\cdot}\left(Y_k - h(X_{k,\cdot}^\intercal\hat\beta)\right), \quad (15)$$
where $W_k \in \mathbb{R}$, for $1 \le k \le n$, are weights to be specified and $\hat u \in \mathbb{R}^{p+1}$ is a projection direction to be specified.
We apply a Taylor expansion of the function $h$ and obtain
$$Y_k - h(X_{k,\cdot}^\intercal\hat\beta) = h(X_{k,\cdot}^\intercal\beta) - h(X_{k,\cdot}^\intercal\hat\beta) + \epsilon_k = h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}^\intercal(\beta - \hat\beta) + R_k + \epsilon_k, \quad (16)$$
with the approximation error $R_k = \int_0^1 (1-t)\,h''\!\left(X_{k,\cdot}^\intercal\hat\beta + tX_{k,\cdot}^\intercal(\beta - \hat\beta)\right)dt \cdot \left(X_{k,\cdot}^\intercal(\hat\beta - \beta)\right)^2$. Plugging the Taylor expansion (16) into the weighted bias-corrected estimator in (15) leads to the error decomposition of $\hat\beta_j^{\rm Deb} - \beta_j$ as
$$\underbrace{\frac{1}{n}\sum_{k=1}^n \hat u^\intercal X_{k,\cdot}W_k\epsilon_k}_{\text{asymp normal}} + \underbrace{\frac{1}{n}\sum_{k=1}^n \left(W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\hat u - e_j\right)^\intercal(\beta - \hat\beta)}_{\text{remaining bias}} + \underbrace{\frac{1}{n}\sum_{k=1}^n \hat u^\intercal X_{k,\cdot}W_k R_k}_{\text{nonlinearity bias}}. \quad (17)$$
In the following, we describe two ways of specifying the weights $W_k$.

1. Linearization weighting. [24] proposed the weight $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$. The "remaining bias" term in (17) then reduces to $\frac{1}{n}\sum_{k=1}^n (X_{k,\cdot}X_{k,\cdot}^\intercal\hat u - e_j)^\intercal(\beta - \hat\beta)$, which is the same as the corresponding term in linear regression. This enables us to directly adopt the projection direction $\hat u$ constructed in (8). This connection reveals the advantage of the weight $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$: the bias correction developed for the linear regression model extends directly to logistic regression.

2. Link-specific weighting. For a general link function $h(\cdot)$, [31] constructed the weight $W_k = h'(X_{k,\cdot}^\intercal\hat\beta)/[h(X_{k,\cdot}^\intercal\hat\beta)(1 - h(X_{k,\cdot}^\intercal\hat\beta))]$.
If $h$ is the logistic link, we have $h'(\cdot) = h(\cdot)(1 - h(\cdot))$ and obtain the constant weight $W_k = 1$. Such link-specific weighting can be generalized to other binary outcome models (e.g., the probit model) with various link functions $h(\cdot)$; see [31] for details.

After specifying the weights, the projection direction can be constructed as
$$\hat u = \arg\min_{u\in\mathbb{R}^{p+1}} u^\intercal\left(\frac{1}{n}\sum_{k=1}^n W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u \quad \text{subject to}$$
$$\left\|\left(\frac{1}{n}\sum_{k=1}^n W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u - e_j\right\|_\infty \le \lambda, \quad \|Xu\|_\infty \le \tau, \quad (18)$$
with positive tuning parameters $\lambda \asymp \sqrt{\log p/n}$ and $\tau \asymp \sqrt{\log n}$. The construction in (18) can be motivated in a similar way as (8): the first constraint in (18) controls the "remaining bias" term in (17), and the constraint $\|Xu\|_\infty \le \tau$ controls the "nonlinearity bias" term in (17). For the linearization weighting, the conditional variance of $\frac{1}{n}\sum_{k=1}^n u^\intercal X_{k,\cdot}W_k\epsilon_k$ is proportional to $u^\intercal\left(\frac{1}{n}\sum_{k=1}^n W_k^2 h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u$, which is of the same order as the objective function in (18) for bounded $W_k$. So, instead of minimizing the exact variance, we minimize a scaled conditional variance in (18), which has the advantage of leading to almost the same optimization problem as (8) for the linear model.

Theoretical properties of the debiased estimator (15) have been established for the logistic outcome model. With the weights $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$ for the linearization weighting, [24] established the asymptotic normality of $\hat\beta_j^{\rm Deb}$ in (15).
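The linearization weighting can be sketched for a low-dimensional logistic model. As simplifications of ours, a few plain Newton steps stand in for the penalized fit (5), and a regularized inverse of the weighted Gram matrix stands in for the constrained program (18); with $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$ the weighted Gram matrix collapses to the ordinary Gram matrix, which is exactly the point of the linearization weighting.

```python
import numpy as np

def h(t):                       # logistic link, clipped for numerical safety
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

rng = np.random.default_rng(2)
n, p = 500, 5
X = rng.standard_normal((n, p))
beta = np.array([0.5, -0.5, 0.0, 0.0, 0.0])
y = rng.binomial(1, h(X @ beta))

# Crude pilot fit: a few Newton steps (our stand-in for the penalized estimator (5))
b = np.zeros(p)
for _ in range(20):
    mu = h(X @ b)
    w_irls = mu * (1 - mu)                    # = h'(x' b) for the logistic link
    grad = X.T @ (y - mu) / n
    hess = (X * w_irls[:, None]).T @ X / n + 1e-6 * np.eye(p)
    b += np.linalg.solve(hess, grad)

# Linearization weights W_k = 1 / h'(x_k' b) and one debiasing step (15) for coordinate j
j = 0
mu = h(X @ b)
W = 1.0 / (mu * (1 - mu))
Sigma_w = (X * (W * mu * (1 - mu))[:, None]).T @ X / n   # = X'X/n under this weighting
u = np.linalg.solve(Sigma_w + 1e-6 * np.eye(p), np.eye(p)[j])
b_deb = b[j] + u @ (X.T @ (W * (y - mu))) / n
```

Both the pilot fit and the debiased value land near the true coefficient $\beta_1 = 0.5$; in the truly high-dimensional regime the penalized fit and the constrained direction of (18) are needed instead of these stand-ins.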
[31] established a similar theoretical property for $\hat\beta_j^{\rm Deb}$ in (15) with the weights $W_k = 1$ for the link-specific weighting. The asymptotic normality results in both works require the ultra-sparsity condition $\|\beta\|_0 \ll \sqrt{n}/[\log p\log n]$. The use of the weights $W_k = 1$ in [31] leads to a smaller standard error than the weights $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$ proposed in [24]. The theoretical justification in [31] requires sample splitting, so that the initial estimator $\hat\beta$ is constructed from an independent sample. Such sample splitting is not required in the analysis of [24], which is part of the benefit of linearization weighting.

Based on the asymptotic normality, we construct the confidence interval
$${\rm CI} = \left(\hat\beta_j^{\rm Deb} - z_{\alpha/2}\sqrt{\hat V},\ \hat\beta_j^{\rm Deb} + z_{\alpha/2}\sqrt{\hat V}\right) \quad \text{with } \hat V = \frac{1}{n}\hat u^\intercal\hat\Sigma_G\hat u, \quad (19)$$
where $\hat\Sigma_G = \frac{1}{n}\sum_{k=1}^n W_k^2 h(X_{k,\cdot}^\intercal\hat\beta)(1 - h(X_{k,\cdot}^\intercal\hat\beta))X_{k,\cdot}X_{k,\cdot}^\intercal$. The optimality of confidence interval construction in high-dimensional logistic regression was studied in [31]. The minimax expected length and the regime in which adaptive confidence intervals can be constructed are similar to those in Figure 1, up to a polynomial order of $\log n$; see the results in [31].

3 Linear and Quadratic Functionals Inference

We consider in this section statistical inference for linear and quadratic transformations of the regression vectors under high-dimensional linear and logistic regression. We investigate both one- and two-sample regression models. In Section 3.7, we discuss the R package SIHR [54] that implements these methods.
3.1 Linear functionals for linear regression

For a given vector $x_{\rm new} \in \mathbb{R}^{p+1}$, we present the construction of a point estimator and confidence interval for $x_{\rm new}^\intercal\beta$ under the high-dimensional linear model (1). As with inference for $\beta_j$, the plug-in estimator $x_{\rm new}^\intercal\hat\beta$, obtained by directly plugging in the Lasso estimator $\hat\beta$ in (3), suffers from estimation bias. The work [53] proposed the following bias-corrected estimator,
$$\widehat{x_{\rm new}^\intercal\beta} = x_{\rm new}^\intercal\hat\beta + \hat u^\intercal \frac{1}{n}\sum_{k=1}^n X_{k,\cdot}(Y_k - X_{k,\cdot}^\intercal\hat\beta), \quad (20)$$
with the projection direction $\hat u$ defined as
$$\hat u = \arg\min_{u\in\mathbb{R}^{p+1}} u^\intercal\hat\Sigma u \quad \text{subject to} \quad \left\|\hat\Sigma u - x_{\rm new}\right\|_\infty \le \|x_{\rm new}\|_2\lambda, \quad (21)$$
$$\left|x_{\rm new}^\intercal\hat\Sigma u - \|x_{\rm new}\|_2^2\right| \le \|x_{\rm new}\|_2^2\lambda, \quad (22)$$
where $\hat\Sigma = \frac{1}{n}\sum_{k=1}^n X_{k,\cdot}X_{k,\cdot}^\intercal$ and $\lambda \asymp \sqrt{\log p/n}$ is a positive tuning parameter. The debiased estimator in (20) satisfies the following error decomposition,
$$\widehat{x_{\rm new}^\intercal\beta} - x_{\rm new}^\intercal\beta = \underbrace{\hat u^\intercal\frac{1}{n}X^\intercal\epsilon}_{\text{asymp normal}} + \underbrace{\left(\hat\Sigma\hat u - x_{\rm new}\right)^\intercal(\beta - \hat\beta)}_{\text{remaining bias}}. \quad (23)$$
The construction in (21), without the additional constraint (22), can be viewed as a direct generalization of (8), replacing $e_j$ by the general loading $x_{\rm new}$. Specifically, (21) minimizes the conditional variance of the "asymp normal" term in (23) and controls the "remaining bias" term via the inequality $|(\hat\Sigma u - x_{\rm new})^\intercal(\beta - \hat\beta)| \le \|\hat\Sigma u - x_{\rm new}\|_\infty\|\beta - \hat\beta\|_1$. Such a construction of the projection direction for linear functionals has been proposed in [29, 30]. However, such a direct generalization is not universally effective for all loadings.
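The functional estimator (20) and the interval form of (24) can be sketched as follows; as simplifications of ours, a ridge fit replaces the Lasso pilot estimator and a regularized inverse applied to $x_{\rm new}$ replaces the constrained program (21)-(22), with illustrative regularization constants.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 40
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:2] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)

# Ridge pilot fit, standing in for the Lasso initial estimator (3)
beta_hat = np.linalg.solve(X.T @ X / n + 0.1 * np.eye(p), X.T @ y / n)

x_new = np.ones(p) / np.sqrt(p)            # a dense loading vector
Sigma_hat = X.T @ X / n

# Stand-in for the constrained program (21)-(22): regularized inverse applied to x_new
u = np.linalg.solve(Sigma_hat + 0.05 * np.eye(p), x_new)

est = x_new @ beta_hat + u @ (X.T @ (y - X @ beta_hat)) / n   # estimator (20)
sigma_hat = np.sqrt(np.mean((y - X @ beta_hat) ** 2))
se = sigma_hat * np.sqrt(u @ Sigma_hat @ u / n)               # variance form of (24)
ci = (est - 1.96 * se, est + 1.96 * se)
```

Here the true functional is $x_{\rm new}^\intercal\beta = 2/\sqrt{40} \approx 0.316$, and the corrected estimate lands close to it despite the shrinkage bias of the pilot fit.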
As established in Proposition 2 of [53], the projection direction constructed without the additional constraint (22) does not correct the bias for a wide class of dense loading vectors. We emphasize that the new constraint (22) is crucial to ensuring the asymptotic normality of $\widehat{x_{\rm new}^\intercal\beta} - x_{\rm new}^\intercal\beta$ for any loading vector $x_{\rm new}$: it is imposed so that the variance of the "asymp normal" term in (23) always dominates the "remaining bias" term in (23). With this additional constraint, the projection direction $\hat u$ constructed in (21) and (22) enables effective bias correction for any loading vector $x_{\rm new}$, whether sparse or dense. The work [53] established the asymptotic normality of the estimator $\widehat{x_{\rm new}^\intercal\beta}$ in (20) for any loading vector $x_{\rm new} \in \mathbb{R}^{p+1}$. Based on the asymptotic normality, we construct a confidence interval for $x_{\rm new}^\intercal\beta$ as
$${\rm CI} = \left(\widehat{x_{\rm new}^\intercal\beta} - z_{\alpha/2}\sqrt{\hat V},\ \widehat{x_{\rm new}^\intercal\beta} + z_{\alpha/2}\sqrt{\hat V}\right) \quad \text{with } \hat V = \frac{\hat\sigma_\epsilon^2}{n}\hat u^\intercal\hat\Sigma\hat u. \quad (24)$$

3.2 Linear functionals for logistic regression

We now consider the high-dimensional logistic model and present the inference procedures for $x_{\rm new}^\intercal\beta$ and $h(x_{\rm new}^\intercal\beta)$ proposed in [24]. In particular, [24] proposed the following debiased estimator,
$$\widehat{x_{\rm new}^\intercal\beta} = x_{\rm new}^\intercal\hat\beta + \hat u^\intercal\frac{1}{n}\sum_{k=1}^n W_k X_{k,\cdot}\left(Y_k - h(X_{k,\cdot}^\intercal\hat\beta)\right), \quad (25)$$
where $\hat\beta$ is defined in (5), $W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$ for $1 \le k \le n$, and the projection direction $\hat u \in \mathbb{R}^{p+1}$ is defined as
$$\hat u = \arg\min_{u\in\mathbb{R}^{p+1}} u^\intercal\left(\frac{1}{n}\sum_{k=1}^n W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u \quad \text{subject to}$$
$$\left\|\left(\frac{1}{n}\sum_{k=1}^n W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u - x_{\rm new}\right\|_\infty \le \|x_{\rm new}\|_2\lambda, \quad \|Xu\|_\infty \le \tau,$$
$$\left|x_{\rm new}^\intercal\left(\frac{1}{n}\sum_{k=1}^n W_k h'(X_{k,\cdot}^\intercal\hat\beta)X_{k,\cdot}X_{k,\cdot}^\intercal\right)u - \|x_{\rm new}\|_2^2\right| \le \|x_{\rm new}\|_2^2\lambda. \quad (26)$$
The bias-corrected estimator in (25) can be viewed as a generalization of (21) and (22) that incorporates the weighted bias correction in (15). It has been established in [24] that $\widehat{x_{\rm new}^\intercal\beta}$ in (25) is asymptotically unbiased and normal. Based on the asymptotic normality, we construct the confidence interval
$${\rm CI} = \left(\widehat{x_{\rm new}^\intercal\beta} - z_{\alpha/2}\sqrt{\hat V},\ \widehat{x_{\rm new}^\intercal\beta} + z_{\alpha/2}\sqrt{\hat V}\right) \quad \text{with } \hat V = \frac{1}{n}\hat u^\intercal\hat\Sigma_G\hat u, \quad (27)$$
where $\hat\Sigma_G = \frac{1}{n}\sum_{k=1}^n W_k^2 h(X_{k,\cdot}^\intercal\hat\beta)(1 - h(X_{k,\cdot}^\intercal\hat\beta))X_{k,\cdot}X_{k,\cdot}^\intercal$. We estimate the case probability $P(Y_k = 1 \mid X_{k,\cdot} = x_{\rm new})$ by $h(\widehat{x_{\rm new}^\intercal\beta})$ and construct the confidence interval for $h(x_{\rm new}^\intercal\beta)$ as
$${\rm CI} = \left[h\left(\widehat{x_{\rm new}^\intercal\beta} - z_{\alpha/2}\sqrt{\hat V}\right),\ h\left(\widehat{x_{\rm new}^\intercal\beta} + z_{\alpha/2}\sqrt{\hat V}\right)\right]. \quad (28)$$

3.3 Conditional average treatment effects

The inference methods proposed in Sections 3.1 and 3.2 can be generalized to make inferences for conditional average treatment effects, which can be expressed as the difference between two linear functionals. For $1 \le k \le n$, let $A_k \in \{1, 2\}$ denote the treatment assignment for the $k$th observation, where $A_k = 1$ and $A_k = 2$ represent the subject receiving the control or the treatment, respectively. In the context of comparing treatment effectiveness, $A_k = 1$ and $A_k = 2$ may instead stand for the subject receiving the first or the second treatment, respectively.
As a special case of (2), we consider the following conditional outcome models, $E(Y_k \mid X_{k,\cdot}, A_k = 1) = X_{k,\cdot}^\intercal\beta^{(1)}$ and $E(Y_k \mid X_{k,\cdot}, A_k = 2) = X_{k,\cdot}^\intercal\beta^{(2)}$. For an individual with $X_{k,\cdot} = x_{\rm new}$, we define
$$\Delta(x_{\rm new}) = E(Y_k \mid X_{k,\cdot} = x_{\rm new}, A_k = 2) - E(Y_k \mid X_{k,\cdot} = x_{\rm new}, A_k = 1) = x_{\rm new}^\intercal(\beta^{(2)} - \beta^{(1)}),$$
which measures the change in the conditional mean from being untreated to treated for individuals with covariates $x_{\rm new}$. By generalizing (20), we construct the debiased estimators $\widehat{x_{\rm new}^\intercal\beta^{(1)}}$ and $\widehat{x_{\rm new}^\intercal\beta^{(2)}}$, together with their corresponding variance estimators $\hat V_{\beta^{(1)}}$ and $\hat V_{\beta^{(2)}}$. The paper [53] proposed to estimate $\Delta(x_{\rm new})$ by $\hat\Delta(x_{\rm new}) = \widehat{x_{\rm new}^\intercal\beta^{(2)}} - \widehat{x_{\rm new}^\intercal\beta^{(1)}}$ and construct the confidence interval for $\Delta(x_{\rm new})$ as
$${\rm CI} = \left(\hat\Delta(x_{\rm new}) - z_{\alpha/2}\sqrt{\hat V_{\beta^{(1)}} + \hat V_{\beta^{(2)}}},\ \hat\Delta(x_{\rm new}) + z_{\alpha/2}\sqrt{\hat V_{\beta^{(1)}} + \hat V_{\beta^{(2)}}}\right). \quad (29)$$
Regarding the hypothesis testing problem $H_0 : x_{\rm new}^\intercal(\beta^{(2)} - \beta^{(1)}) \le 0$ versus $H_1 : x_{\rm new}^\intercal(\beta^{(2)} - \beta^{(1)}) > 0$, the paper [53] proposed the following test procedure,
$$\phi_\alpha(x_{\rm new}) = I\left\{\hat\Delta(x_{\rm new}) - z_\alpha\sqrt{\hat V_{\beta^{(1)}} + \hat V_{\beta^{(2)}}} \ge 0\right\}. \quad (30)$$
As a direct generalization, we may consider the logistic version, $E(Y_k \mid X_{k,\cdot}, A_k = 1) = h(X_{k,\cdot}^\intercal\beta^{(1)})$ and $E(Y_k \mid X_{k,\cdot}, A_k = 2) = h(X_{k,\cdot}^\intercal\beta^{(2)})$. For an individual with $X_{k,\cdot} = x_{\rm new}$, the inference target becomes $\Delta(x_{\rm new}) = h(x_{\rm new}^\intercal\beta^{(2)}) - h(x_{\rm new}^\intercal\beta^{(1)})$. The methods in Section 3.2 can be applied to make inferences for this $\Delta(x_{\rm new})$.
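The interval (29) and the one-sided test (30) involve only elementary arithmetic once the two debiased estimates and their variance estimates are in hand; the numbers below are hypothetical placeholders for $\widehat{x_{\rm new}^\intercal\beta^{(d)}}$ and $\hat V_{\beta^{(d)}}$, not outputs of any fitted model.

```python
import math

# Hypothetical debiased functional estimates and variance estimates for the two arms
est1, V1 = 0.80, 0.01     # control arm
est2, V2 = 1.25, 0.02     # treated arm

delta_hat = est2 - est1
se = math.sqrt(V1 + V2)   # standard error of the difference, as in (29)

z_975 = 1.959963984540054   # z_{alpha/2} for alpha = 0.05
z_95 = 1.6448536269514722   # z_{alpha} for the one-sided test (30)

ci = (delta_hat - z_975 * se, delta_hat + z_975 * se)   # interval (29)
reject = delta_hat - z_95 * se >= 0                     # test (30): reject H0 if True
```

With these inputs $\hat\Delta = 0.45$ and ${\rm se} \approx 0.173$, so the two-sided interval excludes zero and the one-sided test rejects $H_0$.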
3.4 Quadratic functionals

We now focus on inference for the quadratic functionals $Q_A = \beta_G^\intercal A\beta_G$ and $Q_\Sigma = \beta_G^\intercal\Sigma_{G,G}\beta_G$, where $G \subset \{1, \cdots, p+1\}$ and $A \in \mathbb{R}^{|G|\times|G|}$ denotes a prespecified matrix. Without loss of generality, we set $G = \{2, \cdots, |G|+1\}$. In the following, we mainly discuss the main idea under high-dimensional linear regression, which can be generalized to high-dimensional logistic regression. Let $\hat\beta$ denote the Lasso estimator in (3). We start with the error decomposition of the plug-in estimator $\hat\beta_G^\intercal A\hat\beta_G$,
$$\hat\beta_G^\intercal A\hat\beta_G - \beta_G^\intercal A\beta_G = 2\hat\beta_G^\intercal A(\hat\beta_G - \beta_G) - (\hat\beta_G - \beta_G)^\intercal A(\hat\beta_G - \beta_G). \quad (31)$$
For high-dimensional linear models, [25, 26] proposed to construct the bias-corrected estimator by estimating the error component $2\hat\beta_G^\intercal A(\hat\beta_G - \beta_G)$ on the right-hand side of (31). Since $\hat\beta_G^\intercal A(\hat\beta_G - \beta_G)$ can be expressed as $x_{\rm new}^\intercal(\hat\beta - \beta)$ with $x_{\rm new} = (0\ \ \hat\beta_G^\intercal A\ \ 0^\intercal)^\intercal$, the techniques for estimating the error component of a linear functional can be directly applied to approximate $\hat\beta_G^\intercal A(\hat\beta_G - \beta_G)$. In particular, [25, 26] proposed the following estimator of $Q_A$,
$$\hat Q_A = \hat\beta_G^\intercal A\hat\beta_G + 2\hat u_A^\intercal X^\intercal(Y - X\hat\beta)/n, \quad (32)$$
where $\hat u_A$ denotes the solution of (21) and (22) with $x_{\rm new} = (0\ \ \hat\beta_G^\intercal A\ \ 0^\intercal)^\intercal$. No bias correction is required for the last term on the right-hand side of (31), since it is a higher-order error term under regularity conditions. We now turn to the estimation of $Q_\Sigma = \beta_G^\intercal\Sigma_{G,G}\beta_G$, where the matrix $\Sigma_{G,G}$ has to be estimated from the data.
With $\hat\Sigma = \frac{1}{n}\sum_{k=1}^n X_{k,\cdot}X_{k,\cdot}^\intercal$, we decompose the estimation error of the plug-in estimator $\hat\beta_G^\intercal\hat\Sigma_{G,G}\hat\beta_G$ as
$$\hat\beta_G^\intercal\hat\Sigma_{G,G}\hat\beta_G - \beta_G^\intercal\Sigma_{G,G}\beta_G = 2\hat\beta_G^\intercal\hat\Sigma_{G,G}(\hat\beta_G - \beta_G) + \beta_G^\intercal(\hat\Sigma_{G,G} - \Sigma_{G,G})\beta_G - (\hat\beta_G - \beta_G)^\intercal\hat\Sigma_{G,G}(\hat\beta_G - \beta_G). \quad (33)$$
By a similar approach as (32), [26] proposed the following estimator of $Q_\Sigma$,
$$\hat Q_\Sigma = \hat\beta_G^\intercal\hat\Sigma_{G,G}\hat\beta_G + 2\hat u_\Sigma^\intercal X^\intercal(Y - X\hat\beta)/n, \quad (34)$$
where $\hat u_\Sigma$ denotes the solution of (21) and (22) with $x_{\rm new} = (0\ \ \hat\beta_G^\intercal\hat\Sigma_{G,G}\ \ 0^\intercal)^\intercal$. As a special case, [27] considered $G = \{1, \cdots, p+1\}$ and constructed $\hat u_\Sigma = \hat\beta$. The works [25, 26] established the asymptotic properties of sample-splitting versions of $\hat Q_A$ and $\hat Q_\Sigma$, where the initial estimator $\hat\beta$ is constructed from a subsample independent of the samples $\{X_{k,\cdot}, Y_k\}_{1\le k\le n}$ used in (32) and (34). Based on the asymptotic properties, [26] estimated the variance of $\hat Q_A$ by $\hat V_A(\tau) = 4\hat\sigma_\epsilon^2\hat u_A^\intercal\hat\Sigma\hat u_A/n + \tau/n$ for some positive constant $\tau > 0$, where the term $\tau/n$ is introduced as an upper bound for the term $(\hat\beta_G - \beta_G)^\intercal A(\hat\beta_G - \beta_G)$ in (31).
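The quadratic-functional correction (32) can be sketched as follows; as simplifications of ours, a ridge fit stands in for the Lasso pilot estimator, a regularized inverse stands in for the constrained program (21)-(22), and the choices of $G$ and $A$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 30
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)

# Ridge pilot fit, standing in for the Lasso initial estimator
beta_hat = np.linalg.solve(X.T @ X / n + 0.1 * np.eye(p), X.T @ y / n)

G = np.arange(5)                 # functional over the first five coordinates
A = np.eye(len(G))               # with A = I, Q_A = ||beta_G||_2^2
x_new = np.zeros(p); x_new[G] = A @ beta_hat[G]   # loading (0, beta_G' A, 0)'

Sigma_hat = X.T @ X / n
u_A = np.linalg.solve(Sigma_hat + 0.05 * np.eye(p), x_new)  # stand-in for (21)-(22)

Q_plugin = beta_hat[G] @ A @ beta_hat[G]
Q_A = Q_plugin + 2 * u_A @ (X.T @ (y - X @ beta_hat)) / n   # estimator (32)
```

The plug-in value is pulled down by the shrinkage of the pilot fit, and the residual-based correction term moves the estimate back toward the true value $Q_A = 3$.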
[26] estimated the variance of $\hat Q_\Sigma$ by
$$\hat V_\Sigma(\tau) = \frac{4\hat\sigma_\epsilon^2}{n}\hat u_\Sigma^\intercal\hat\Sigma\hat u_\Sigma + \frac{1}{n^2}\sum_{k=1}^n\left(\hat\beta_G^\intercal X_{k,G}X_{k,G}^\intercal\hat\beta_G - \hat\beta_G^\intercal\hat\Sigma_{G,G}\hat\beta_G\right)^2 + \frac{\tau}{n},$$
and further constructed the following confidence intervals for $Q_A$ and $Q_\Sigma$,
$${\rm CI}_A(\tau) = \left(\hat Q_A - z_{\alpha/2}\sqrt{\hat V_A(\tau)},\ \hat Q_A + z_{\alpha/2}\sqrt{\hat V_A(\tau)}\right), \quad {\rm CI}_\Sigma(\tau) = \left(\hat Q_\Sigma - z_{\alpha/2}\sqrt{\hat V_\Sigma(\tau)},\ \hat Q_\Sigma + z_{\alpha/2}\sqrt{\hat V_\Sigma(\tau)}\right). \quad (35)$$
For a positive definite matrix $A$, the significance test $H_0 : \beta_G = 0$ can be recast as $H_{0,A} : \beta_G^\intercal A\beta_G = 0$ or $H_{0,\Sigma} : \beta_G^\intercal\Sigma_{G,G}\beta_G = 0$. The following $\alpha$-level significance tests of $H_{0,A}$ and $H_{0,\Sigma}$ were respectively proposed in [26],
$$\phi_A(\tau) = I\left\{\hat Q_A \ge z_\alpha\sqrt{\hat V_A(\tau)}\right\}, \quad \phi_\Sigma(\tau) = I\left\{\hat Q_\Sigma \ge z_\alpha\sqrt{\hat V_\Sigma(\tau)}\right\}. \quad (36)$$
The inference for $Q_A$ and $Q_\Sigma$ can be generalized to high-dimensional logistic regression together with the weighted bias correction detailed in (26). The paper [28] studied how to construct debiased estimators of $Q_\Sigma$ under high-dimensional logistic regression.

3.5 Semi-supervised inference

We summarize the optimal estimation of $\|\beta\|_2^2$ and $\beta^\intercal\Sigma\beta$ in a general semi-supervised setting, with labelled data $(X_{1,\cdot}, Y_1), \cdots, (X_{n,\cdot}, Y_n)$ and unlabelled data $X_{n+1,\cdot}, \cdots, X_{n+N,\cdot}$, where the covariates $X_{1,\cdot}, \cdots, X_{n+N,\cdot}$ are assumed to be identically distributed.
The work [27] proposed the following semi-supervised estimator of $\beta^\intercal\Sigma\beta$,
$$\hat Q(\hat\beta, \hat\Sigma_S) = \hat\beta^\intercal\hat\Sigma_S\hat\beta + 2\hat\beta^\intercal\frac{1}{n}\sum_{k=1}^n X_{k,\cdot}(Y_k - X_{k,\cdot}^\intercal\hat\beta), \quad \hat\Sigma_S = \frac{1}{n+N}\sum_{k=1}^{n+N} X_{k,\cdot}X_{k,\cdot}^\intercal. \quad (37)$$
The matrix estimator $\hat\Sigma_S$ in (37) utilizes both the labelled and unlabelled data. The work [27] constructed the following confidence interval for $\beta^\intercal\Sigma\beta$ in the semi-supervised setting,
$${\rm CI}(Z) = \left(\left(\hat Q(\hat\beta, \hat\Sigma_S) - z_{\alpha/2}\sqrt{\hat V}\right)_+,\ \hat Q(\hat\beta, \hat\Sigma_S) + z_{\alpha/2}\sqrt{\hat V}\right),$$
where $\hat V = \frac{1}{n}\left(4\hat\sigma_\epsilon^2\hat\beta^\intercal\hat\Sigma_S\hat\beta + \frac{\hat\rho}{n+N}\sum_{k=1}^{n+N}\left(\hat\beta^\intercal X_{k,\cdot}X_{k,\cdot}^\intercal\hat\beta - \hat\beta^\intercal\hat\Sigma_S\hat\beta\right)^2 + \tau\right)$, with $\hat\rho = n/(N+n)$ and $\tau > 0$ a user-specified tuning parameter adjusting for the higher-order estimation error. This confidence interval construction demonstrates the usefulness of integrating the unlabelled data, which significantly reduces the ratio $\hat\rho$ and the interval length.

Define $\Theta(s, M) = \left\{(\beta, \Sigma, \sigma_\epsilon) \in \Theta(s) : \frac{M}{2} \le \|\beta\|_2 \le M\right\}$ as the subspace of $\Theta(s)$ in (12) with the constraint on $\|\beta\|_2$. In the semi-supervised setting, it was established in [27] that the minimax rate for estimating $\beta^\intercal\Sigma\beta$ over $\Theta(s, M)$ is
$$\frac{M^2}{\sqrt{N+n}} + \min\left\{\frac{M}{\sqrt{n}} + \frac{s\log p}{n},\ M^2\right\}. \quad (38)$$
The estimator $\hat Q(\hat\beta, \hat\Sigma_S)$ in (37) is shown to achieve the optimal rate in (38) for $M \ge C\sqrt{s\log p/n}$. When $M \le C\sqrt{s\log p/n}$, estimating $Q$ by zero achieves the optimal rate in (38). The optimal convergence rate in (38) characterizes its dependence on the amount of unlabelled data: with a larger size $N$ of the unlabelled data, the term $\frac{M^2}{\sqrt{N+n}}$ decreases.
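The pooled covariance estimator $\hat\Sigma_S$ and the corrected quadratic form in (37) can be sketched directly; as a simplification of ours, a ridge fit stands in for the Lasso pilot estimator, and the synthetic sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, N, p = 200, 800, 20
X_lab = rng.standard_normal((n, p))      # labelled covariates
X_unlab = rng.standard_normal((N, p))    # unlabelled covariates
beta = np.zeros(p); beta[:2] = 1.0
y = X_lab @ beta + 0.5 * rng.standard_normal(n)

# Ridge pilot fit on the labelled sample, standing in for the Lasso initial estimator
beta_hat = np.linalg.solve(X_lab.T @ X_lab / n + 0.1 * np.eye(p), X_lab.T @ y / n)

# Pooled covariance estimate uses all n + N covariate rows, as in (37)
X_all = np.vstack([X_lab, X_unlab])
Sigma_S = X_all.T @ X_all / (n + N)

# Semi-supervised estimator (37): plug-in quadratic form plus a residual correction
Q_hat = (beta_hat @ Sigma_S @ beta_hat
         + 2 * beta_hat @ (X_lab.T @ (y - X_lab @ beta_hat)) / n)
rho_hat = n / (n + N)                    # shrinks toward 0 as N grows
```

Here the true value is $\beta^\intercal\Sigma\beta = 2$, and the four-fold larger unlabelled sample drives $\hat\rho$ down to $0.2$, illustrating the variance reduction discussed above.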
In the semi-supervised setting, the unlabelled data can also be used to construct a more accurate estimator of $\|\beta\|_2^2$; see Section 4 of [27] for details. In the following, we consider the two extremes under semi-supervised learning: the supervised learning setting with $N = 0$, and the other extreme setting of knowing $\Sigma = I$, which can be viewed as a special case of the semi-supervised setting with $N \to \infty$. In Table 1, we summarize the optimal convergence rate of estimating $\|\beta\|_2^2$ and $\beta^\intercal\Sigma\beta$ over both $\Theta(s, M)$ and $\Theta_0(s, M)$, where
$$\Theta_0(s, M) = \{(\beta, \Sigma, \sigma_\epsilon) \in \Theta(s, M) : \Sigma = I\}.$$
By leveraging the information of $\Sigma$, Table 1 shows that the optimal rates of estimating $\|\beta\|_2^2$ and $\beta^\intercal\Sigma\beta$ are reduced by $M\frac{s\log p}{n}$ and $\frac{M^2}{\sqrt{n}}$, respectively. The improvement can be substantial when the parameter $M$, characterizing the $\ell_2$ norm $\|\beta\|_2$, is relatively large. The optimal rate of estimating $\|\beta\|_2^2$ was established in [79] under the sequence model $Y_j = \beta_j + \frac{1}{\sqrt{n}}\epsilon_j$ for $1 \le j \le p$. When we know $\Sigma = I$ and focus on the regime $s \le c\min\{n/\log p,\ p^{\nu}\}$ for $0 \le \nu < 1/2$, the optimal rate of estimating $\|\beta\|_2^2$ in the high-dimensional linear model matches that in [79]. However, when $\Sigma$ is unknown, estimating $\|\beta\|_2^2$ is much harder in the high-dimensional linear model than in the sequence model.
Target | Optimal rate over $\Theta(s, M)$ | Optimal rate over $\Theta_0(s, M)$
$\|\beta\|_2^2$ | $\min\big\{\frac{M}{\sqrt{n}} + \frac{s\log p}{n} + M\frac{s\log p}{n},\ M^2\big\}$ | $\min\big\{\frac{M}{\sqrt{n}} + \frac{s\log p}{n},\ M^2\big\}$
$\beta^\intercal\Sigma\beta$ | $\min\big\{\frac{M}{\sqrt{n}} + \frac{s\log p}{n} + \frac{M^2}{\sqrt{n}},\ M^2\big\}$ | $\min\big\{\frac{M}{\sqrt{n}} + \frac{s\log p}{n},\ M^2\big\}$

Table 1: The minimax optimal rate of estimating $\|\beta\|_2^2$ over $\Theta(s, M)$ was established in [25]; the remaining optimal rates are implied by (38), which was established in [27]. The focus is on the regime $s \le c\min\{n/\log p,\ p^\nu\}$ for $0 \le \nu < 1/2$.

3.6 Inner products of regression vectors

We next consider the two-sample regression models in (2). When the high-dimensional covariates are genetic variants and the outcome variables measure different phenotypes, the inner product $[\beta^{(1)}]^\intercal\beta^{(2)}$ can be interpreted as the genetic relatedness [25], measuring the similarity between the two association vectors $\beta^{(1)}$ and $\beta^{(2)}$. We present the debiased estimator of $[\beta^{(1)}]^\intercal\beta^{(2)}$ proposed in [25]. For $d = 1, 2$, let $\hat\beta^{(d)}$ be the Lasso estimator of $\beta^{(d)}$. Similar to the decomposition in (31), we decompose the error of the plug-in estimator $[\hat\beta^{(1)}]^\intercal\hat\beta^{(2)}$,
$$[\hat\beta^{(1)}]^\intercal\hat\beta^{(2)} - [\beta^{(1)}]^\intercal\beta^{(2)} = [\hat\beta^{(1)}]^\intercal(\hat\beta^{(2)} - \beta^{(2)}) + [\hat\beta^{(2)}]^\intercal(\hat\beta^{(1)} - \beta^{(1)}) - (\hat\beta^{(2)} - \beta^{(2)})^\intercal(\hat\beta^{(1)} - \beta^{(1)}). \quad (39)$$
The key is to estimate $[\hat\beta^{(1)}]^\intercal(\hat\beta^{(2)} - \beta^{(2)})$ and $[\hat\beta^{(2)}]^\intercal(\hat\beta^{(1)} - \beta^{(1)})$ separately in the above decomposition, which can be viewed as projections of $\hat\beta^{(2)} - \beta^{(2)}$ and $\hat\beta^{(1)} - \beta^{(1)}$, respectively.
The work [25] proposed the following bias-corrected estimator of $[\beta^{(1)}]^\intercal\beta^{(2)}$,
$$\widehat{[\beta^{(1)}]^\intercal\beta^{(2)}} = [\hat\beta^{(1)}]^\intercal\hat\beta^{(2)} + \hat u_1^\intercal \frac{1}{n_2}\sum_{k=1}^{n_2} X^{(2)}_{k,\cdot}\big(Y^{(2)}_k - [X^{(2)}_{k,\cdot}]^\intercal\hat\beta^{(2)}\big) + \hat u_2^\intercal \frac{1}{n_1}\sum_{k=1}^{n_1} X^{(1)}_{k,\cdot}\big(Y^{(1)}_k - [X^{(1)}_{k,\cdot}]^\intercal\hat\beta^{(1)}\big), \quad (40)$$
where the projection direction vectors are constructed as
$$\hat u_1 = \arg\min_{u\in\mathbb{R}^{p+1}} u^\intercal \hat\Sigma^{(2)} u \quad \text{subject to } \big\|\hat\Sigma^{(2)} u - \hat\beta^{(1)}\big\|_\infty \le \|\hat\beta^{(1)}\|_2\,\lambda_2,$$
$$\hat u_2 = \arg\min_{u\in\mathbb{R}^{p+1}} u^\intercal \hat\Sigma^{(1)} u \quad \text{subject to } \big\|\hat\Sigma^{(1)} u - \hat\beta^{(2)}\big\|_\infty \le \|\hat\beta^{(2)}\|_2\,\lambda_1,$$
with $\hat\Sigma^{(d)} = \frac{1}{n_d}\sum_{k=1}^{n_d} X^{(d)}_{k,\cdot}[X^{(d)}_{k,\cdot}]^\intercal$ and $\lambda_d \asymp \sqrt{\log p/n_d}$ for $d = 1, 2$. For the estimator (40), the bias-correction terms $\hat u_1^\intercal \frac{1}{n_2}\sum_{k=1}^{n_2} X^{(2)}_{k,\cdot}(Y^{(2)}_k - [X^{(2)}_{k,\cdot}]^\intercal\hat\beta^{(2)})$ and $\hat u_2^\intercal \frac{1}{n_1}\sum_{k=1}^{n_1} X^{(1)}_{k,\cdot}(Y^{(1)}_k - [X^{(1)}_{k,\cdot}]^\intercal\hat\beta^{(1)})$ are constructed to estimate $[\hat\beta^{(1)}]^\intercal(\beta^{(2)} - \hat\beta^{(2)})$ and $[\hat\beta^{(2)}]^\intercal(\beta^{(1)} - \hat\beta^{(1)})$, respectively. The construction of the projection directions $\hat u_1$ and $\hat u_2$ can be viewed as an extension of that in (21) with $x_{\rm new} = \hat\beta^{(1)}$ and $x_{\rm new} = \hat\beta^{(2)}$, respectively. An additional constraint as in (22) is not needed here since both $x_{\rm new} = \hat\beta^{(1)}$ and $x_{\rm new} = \hat\beta^{(2)}$ are sufficiently sparse. The paper [25] has established the convergence rate of the debiased estimator proposed in (40). The analysis can be extended to establish the asymptotic normal distribution of $\widehat{[\beta^{(1)}]^\intercal\beta^{(2)}}$ under suitable conditions.
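Once the projection directions are available, assembling (40) is mechanical. The sketch below assumes the constrained quadratic programs for $\hat u_1, \hat u_2$ have been solved upstream (e.g. by a QP solver) and only illustrates how the plug-in inner product is combined with the two residual-based corrections; function and argument names are hypothetical.

```python
import numpy as np

def debiased_inner_product(beta1, beta2, u1, u2, X1, y1, X2, y2):
    """Bias-corrected estimator of [beta^(1)]' beta^(2) as in (40):
    plug-in inner product plus two projection-based corrections built
    from the residuals of each sample."""
    n1, n2 = X1.shape[0], X2.shape[0]
    corr1 = u1 @ (X2.T @ (y2 - X2 @ beta2)) / n2  # estimates [beta1_hat]'(beta2 - beta2_hat)
    corr2 = u2 @ (X1.T @ (y1 - X1 @ beta1)) / n1  # estimates [beta2_hat]'(beta1 - beta1_hat)
    return beta1 @ beta2 + corr1 + corr2
```

With exact fits (zero residuals), both corrections vanish and the plug-in inner product is returned unchanged.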
The quantity $[\beta^{(1)}]^\intercal\Sigma\beta^{(2)}$ is another genetic relatedness measure if $\Sigma^{(1)} = \Sigma^{(2)} = \Sigma$ with $\Sigma^{(d)} = \mathbb{E}\,X^{(d)}_{k,\cdot}[X^{(d)}_{k,\cdot}]^\intercal$ for $d = 1, 2$. We can propose the debiased estimator $\widehat{[\beta^{(1)}]^\intercal\Sigma\beta^{(2)}}$, defined as
$$[\hat\beta^{(1)}]^\intercal\hat\Sigma\hat\beta^{(2)} + [\hat\beta^{(1)}]^\intercal\frac{1}{n_2}\sum_{k=1}^{n_2} X^{(2)}_{k,\cdot}\big(Y^{(2)}_k - [X^{(2)}_{k,\cdot}]^\intercal\hat\beta^{(2)}\big) + [\hat\beta^{(2)}]^\intercal\frac{1}{n_1}\sum_{k=1}^{n_1} X^{(1)}_{k,\cdot}\big(Y^{(1)}_k - [X^{(1)}_{k,\cdot}]^\intercal\hat\beta^{(1)}\big).$$
The above results have been extended to logistic regression models, where the quantity $[\beta^{(1)}]^\intercal\Sigma\beta^{(2)}$ still carries an interpretation of genetic relatedness. The paper [28] has carefully investigated the confidence interval construction for $[\beta^{(1)}]^\intercal\Sigma\beta^{(2)}$ under several high-dimensional logistic regression models. Moreover, we might need to estimate and make inferences for $[\beta^{(1)}]^\intercal A\beta^{(2)}$ with $A$ denoting a general matrix. The idea in (40) can be generalized to make inference for $[\beta^{(1)}]^\intercal A\beta^{(2)}$. The work [80] applied such a generalized debiased estimator of $[\beta^{(1)}]^\intercal A\beta^{(2)}$ in an intermediate step to determine the optimal aggregation weight of multiple regression models.

3.7 R package SIHR

The methods reviewed in Sections 2 and 3 have been implemented in the R package SIHR [54], which is available from CRAN. The SIHR package contains the main functions LF, QF, and CATE. The LF function implements the confidence interval for $x_{\rm new}^\intercal\beta$ in (24) under high-dimensional linear regression by specifying model="linear", and the confidence intervals for $h(x_{\rm new}^\intercal\beta)$ in (28) under high-dimensional logistic regression by specifying model="logistic" or model="logisticalter", corresponding to the linearization weighting and link-specific weighting introduced in Section 2.3, respectively.
The QF function implements the confidence interval construction in (35) and hypothesis testing in (36) with different choices of the weighting matrix $A$. The CATE function implements the confidence interval in (29) and hypothesis testing in (30). Both QF and CATE functions can be applied to the logistic regression setting by specifying model="logistic" or model="logisticalter". The detailed usage of the SIHR package can be found in the paper [54].

4 Multiple Testing

In the previous sections, we have examined statistical inference for individual regression coefficients and related one-dimensional functionals. However, in many applications, such as genomics, it is necessary to perform simultaneous inference for multiple regression coefficients while controlling the false discovery rate (FDR) and false discovery proportion (FDP). In this section, we will explore several large-scale multiple testing procedures for high-dimensional regression models. We start with the linear models in Section 4.1 and extend the discussion to logistic models in Section 4.2. The testing procedures are unified in Section 4.3, and power enhancement methods are discussed next.

4.1 Simultaneous inference for linear regression

One-sample simultaneous inference for high-dimensional linear regression coefficients is closely related to the problem of variable selection. Common approaches for variable selection include regularization methods, such as the Lasso [1], SCAD [10], the adaptive Lasso [4] and MCP [11], which simultaneously estimate parameters and select features, and stepwise feature selection techniques like LARS [19] and FoBa [81], which prioritize variable selection. See the discussions in [58] and references therein. However, both of these approaches aim to find the model that is closest to the truth, which may not be achievable in practice.
Alternatively, [58] approached the problem from a multiple testing perspective and focused on controlling false discoveries rather than achieving perfect selection results. Specifically, for the high-dimensional regression model (1) with link function $h(z) = z$, i.e., $Y = X\beta + \epsilon$, where $\beta = (\beta_1, \dots, \beta_{p+1})^\intercal \in \mathbb{R}^{p+1}$, $X = (X_{1,\cdot}^\intercal, \dots, X_{n,\cdot}^\intercal)^\intercal$, $Y = (Y_1, \dots, Y_n)^\intercal$, and $\epsilon = (\epsilon_1, \dots, \epsilon_n)^\intercal$, with $\{\epsilon_k\}$ being independent and identically distributed (i.i.d.) random variables with mean zero and variance $\sigma_\epsilon^2$ and independent of $X_{k,\cdot}$, $k = 1, \dots, n$, [58] considered the following multiple testing problem,
$$H_{0,i}: \beta_i = 0 \quad \text{versus} \quad H_{1,i}: \beta_i \ne 0, \quad i = 2, \dots, p+1, \quad (41)$$
with the control of FDR and FDP.

In some fields, one-sample inference may not be sufficient, particularly for detecting interactions. For example, as demonstrated in [82], many complex diseases are the result of interactions between genes and the environment. Therefore, it is important to thoroughly examine the effects of the environment and its interactions with genetic predispositions on disease phenotypes. When the environmental factor is a binary variable, such as smoking status or gender, interaction detection can be approached under a two-sample high-dimensional regression framework. Specifically, interactions can be identified by comparing two high-dimensional regression models as introduced in (2) with the identity link function, i.e., $Y^{(d)} = X^{(d)}\beta^{(d)} + \epsilon^{(d)}$ for $d = 1, 2$, and recovering the nonzero components of $\beta^{(1)}_{-1} - \beta^{(2)}_{-1}$, where $\beta^{(d)} = (\beta^{(d)}_1, \dots, \beta^{(d)}_{p+1})^\intercal \in \mathbb{R}^{p+1}$, $X^{(d)} = (X^{(d)\intercal}_{1,\cdot}, \dots, X^{(d)\intercal}_{n_d,\cdot})^\intercal$, $Y^{(d)} = (Y^{(d)}_1, \dots, Y^{(d)}_{n_d})^\intercal$, and $\epsilon^{(d)} = (\epsilon^{(d)}_1, \dots, \epsilon^{(d)}_{n_d})^\intercal$, with $\{\epsilon^{(d)}_k\}$ being i.i.d. random variables with mean zero and variance $\sigma^2_{\epsilon^{(d)}}$ and independent of $X^{(d)}_{k,\cdot}$, $k = 1, \dots, n_d$. Assume that $n_1 \asymp n_2$ and let $n = \max\{n_1, n_2\}$. Then [59] investigated simultaneous testing of the hypotheses
$$H_{0,i}: \beta^{(1)}_i = \beta^{(2)}_i \quad \text{versus} \quad H_{1,i}: \beta^{(1)}_i \ne \beta^{(2)}_i, \quad i = 2, \dots, p+1, \quad (42)$$
with FDR and FDP control.

In genetic association studies, it is common to measure multiple correlated phenotypes on the same individuals. To detect associations between high-dimensional genetic variants and these phenotypes, one can individually assess the relationship between each response and each covariate, as in (41), and then adjust for multiplicity in the comparisons. However, as noted by [83] and [84], jointly analyzing these phenotypic measurements may increase the power to detect causal genetic variants. Therefore, motivated by the potential to enhance power by leveraging the similarity across multivariate responses, [60] used high-dimensional multivariate regression models to address applications in which $D$ correlated responses are measured on $n$ independent individuals:
$$Y_{n\times D} = X_{n\times(p+1)} B_{(p+1)\times D} + \Upsilon_{n\times D}, \quad (43)$$
where $Y = (Y_{\cdot,1}, \dots, Y_{\cdot,D}) \in \mathbb{R}^{n\times D}$, with $Y_{\cdot,d} = (Y_{1,d}, \dots, Y_{n,d})^\intercal$, denotes $D$ responses with $D$ fixed, and $X = (X_{1,\cdot}^\intercal, \dots, X_{n,\cdot}^\intercal)^\intercal \in \mathbb{R}^{n\times(p+1)}$ is the covariate matrix. In (43), $B = (B_{\cdot,1}, \dots, B_{\cdot,D}) \in \mathbb{R}^{(p+1)\times D}$, with $B_{\cdot,d} = (B_{1,d}, \dots, B_{p+1,d})^\intercal \in \mathbb{R}^{p+1}$, represents the regression coefficient matrix, where $B_{i,\cdot}$ represents the regression coefficients of the $i$th covariate; $\Upsilon = (\epsilon_{\cdot,1}, \dots, \epsilon_{\cdot,D}) \in \mathbb{R}^{n\times D}$, where $\epsilon_{\cdot,d} = (\epsilon_{1,d}, \dots, \epsilon_{n,d})^\intercal$, and $\{\epsilon_{k,d}\}$ are i.i.d. random variables with mean zero and variance $\sigma_\epsilon^2$ and independent of $X$. To examine whether the $i$th covariate is associated with any of the $D$ responses, [60] simultaneously tested
$$H_{0,i}: B_{i,\cdot} = 0 \quad \text{versus} \quad H_{1,i}: B_{i,\cdot} \ne 0, \quad i = 2, \dots, p+1, \quad (44)$$
while controlling FDR and FDP. Because the effect of the $i$th variable on each of the $D$ responses may share strong similarities, namely, if $B_{i,d} \ne 0$, then the rest of the entries in this row are more likely to be nonzero, a row-wise testing method using the group-wise information is more favorable than testing the significance of the matrix $B$ column by column as in testing problem (41).

4.1.1 Bias corrections via inverse regression

In the multiple testing problems discussed in Section 4.1, our goal is to simultaneously infer the regression coefficients while controlling error rates. Therefore, it is crucial to begin with an asymptotically unbiased estimator for each regression component. The debiasing techniques outlined in Section 2.2 can be used to attain nearly unbiased estimates; however, as noted in [58], the constraints or tuning parameters utilized in debiasing can significantly affect test accuracy. Additionally, the asymptotic distribution of these debiased estimators is conditional, making it challenging to characterize the dependence structure among the test statistics, which is vital for error rate control in simultaneous inference. As an alternative, we will explore an inverse regression approach in this section, which establishes the unconditional asymptotic distribution of bias-corrected statistics and allows for explicit characterization of the correlation structure. Alternatively, the debiasing method (13) in [21, 23] can be used as long as the decorrelation vectors $Z_{\cdot,j}$'s introduced in Section 2.2.2 are close enough to their population counterparts. This approach will be illustrated in Section 4.2.
Recall that $X^{(d)}_{k,1} = 1$. To achieve bias correction, taking testing problem (42) as an example, we consider the inverse regression models obtained by regressing $X^{(d)}_{k,i}$ on $(Y^{(d)}_k, \tilde X^{(d)}_{k,-i})$, where $\tilde X^{(d)}_{k,-i} = X^{(d)}_{k,-\{1,i\}}$, for $i = 2, \dots, p+1$:
$$X^{(1)}_{k,i} = \alpha^{(1)}_i + (Y^{(1)}_k, \tilde X^{(1)}_{k,-i})\gamma^{(1)}_i + \eta^{(1)}_{k,i}, \quad (k = 1, \dots, n_1),$$
$$X^{(2)}_{k,i} = \alpha^{(2)}_i + (Y^{(2)}_k, \tilde X^{(2)}_{k,-i})\gamma^{(2)}_i + \eta^{(2)}_{k,i}, \quad (k = 1, \dots, n_2),$$
where for $d = 1, 2$, $\eta^{(d)}_{k,i}$ has mean zero and variance $\sigma^2_{i,d}$ and is uncorrelated with $(Y^{(d)}_k, \tilde X^{(d)}_{k,-i})$, and the first component of $\gamma^{(d)}_i = (\gamma^{(d)}_{i,1}, \dots, \gamma^{(d)}_{i,p})^\intercal$ satisfies
$$\gamma^{(d)}_{i,1} = \sigma^2_{i,d}\,\beta^{(d)}_i / \sigma^2_{\epsilon^{(d)}}, \quad i = 2, \dots, p+1, \quad (45)$$
where $\sigma^2_{i,d} = \big[\{\beta^{(d)}_i\}^2/\sigma^2_{\epsilon^{(d)}} + \omega^{(d)}_{i-1,i-1}\big]^{-1}$ with $\{\mathrm{Cov}(X^{(d)}_{k,-1})\}^{-1} = \Omega_d = (\omega^{(d)}_{i,j})$. Note that $r^{(d)}_i = \mathrm{Cov}(\epsilon^{(d)}_k, \eta^{(d)}_{k,i})$ can be expressed as $-\gamma^{(d)}_{i,1}\mathrm{Cov}(\epsilon^{(d)}_k, Y^{(d)}_k) = -\gamma^{(d)}_{i,1}\sigma^2_{\epsilon^{(d)}} = -\sigma^2_{i,d}\beta^{(d)}_i$; hence we can approach the debiasing of $\beta^{(d)}_i$ through the debiasing of $r^{(d)}_i$, and equivalently formulate the testing problem (42) as $H_{0,i}: r^{(1)}_i/\sigma^2_{i,1} = r^{(2)}_i/\sigma^2_{i,2}$, $i = 2, \dots, p+1$.

The most straightforward way to estimate $r^{(d)}_i$ is to use the sample covariance between the error terms, $n_d^{-1}\sum_{k=1}^{n_d}\epsilon^{(d)}_k\eta^{(d)}_{k,i}$. However, the error terms are unknown, so we first estimate them by
$$\hat\epsilon^{(d)}_k = Y^{(d)}_k - X^{(d)}_{k,\cdot}\hat\beta^{(d)}, \quad \hat\eta^{(d)}_{k,i} = X^{(d)}_{k,i} - (Y^{(d)}_k - \bar Y^{(d)}, \tilde X^{(d)}_{k,-i})\hat\gamma^{(d)}_i,$$
where $\hat\beta^{(d)} = (\hat\beta^{(d)}_1, \dots, \hat\beta^{(d)}_{p+1})$ and $\hat\gamma^{(d)}_i = (\hat\gamma^{(d)}_{i,1}, \dots, \hat\gamma^{(d)}_{i,p})$ are respectively the estimators of $\beta^{(d)}$ and $\gamma^{(d)}_i$ that satisfy
$$\max\Big\{|\hat\beta^{(d)}_{-1} - \beta^{(d)}_{-1}|_1,\ \max_{i=1,\dots,p}|\hat\gamma^{(d)}_i - \gamma^{(d)}_i|_1\Big\} = O_P(a_{n1}), \quad \max\Big\{|\hat\beta^{(d)}_{-1} - \beta^{(d)}_{-1}|_2,\ \max_{i=1,\dots,p}|\hat\gamma^{(d)}_i - \gamma^{(d)}_i|_2\Big\} = O_P(a_{n2}), \quad (46)$$
for some $a_{n1}$ and $a_{n2}$ such that
$$\max\{a_{n1}a_{n2},\ a_{n2}^2\} = o\{(n\log p)^{-1/2}\}, \quad \text{and} \quad a_{n1} = o(1/\log p). \quad (47)$$
As noted by [58, 59], estimators $\hat\beta^{(d)}$ and $\hat\gamma^{(d)}_i$ that satisfy (46) and (47) can be obtained easily via standard methods such as the Lasso and the Dantzig selector. Following that, a natural estimator of $r^{(d)}_i$ can be constructed by $\tilde r^{(d)}_i = n_d^{-1}\sum_{k=1}^{n_d}\hat\epsilon^{(d)}_k\hat\eta^{(d)}_{k,i}$. However, the bias of $\tilde r^{(d)}_i$ exceeds the desired rate $(n_d\log p)^{-1/2}$ for the subsequent analysis. Hence, the difference between $\tilde r^{(d)}_i$ and $n_d^{-1}\sum_{k=1}^{n_d}\epsilon^{(d)}_k\eta^{(d)}_{k,i}$ is calculated, and it is equal to $\hat\sigma^2_{\epsilon^{(d)}}\hat\gamma^{(d)}_{i,1} + \hat\sigma^2_{i,d}\hat\beta^{(d)}_i$ up to order $(n_d\log p)^{-1/2}$ under regularity conditions, where $\hat\sigma^2_{\epsilon^{(d)}} = n_d^{-1}\sum_{k=1}^{n_d}\{\hat\epsilon^{(d)}_k\}^2$ and $\hat\sigma^2_{i,d} = n_d^{-1}\sum_{k=1}^{n_d}\{\hat\eta^{(d)}_{k,i}\}^2$ are the sample variances. Hence, a bias-corrected estimator for $r^{(d)}_i$ is defined as
$$\hat r^{(d)}_i = \tilde r^{(d)}_i + \hat\sigma^2_{\epsilon^{(d)}}\hat\gamma^{(d)}_{i,1} + \hat\sigma^2_{i,d}\hat\beta^{(d)}_i. \quad (48)$$
For the other two testing problems, the bias corrections can be performed in almost exactly the same way via the inverse regression technique above, which translates the debiasing of regression coefficients into the debiasing of residual covariances. Note that, through such an inverse regression approach, one can appropriately deal with the dependency of the component-wise debiased statistics, which is important for the subsequent adjustment for multiplicity and the goal of FDR control.
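For a single coordinate and a single sample, the bias correction (48) is a one-liner once the estimated error terms are in hand. The following numpy sketch assumes hypothetical inputs `eps_hat` and `eta_hat` (the estimated regression and inverse-regression errors for coordinate $i$) together with the fitted scalars $\hat\gamma_{i,1}$ and $\hat\beta_i$; it illustrates the formula only, not the full pipeline of [58, 59].

```python
import numpy as np

def bias_corrected_residual_cov(eps_hat, eta_hat, gamma1_hat, beta_hat_i):
    """Bias-corrected estimator of r_i = Cov(eps, eta_i) as in (48):
    naive plug-in covariance r_tilde plus the correction
    sigma_eps^2 * gamma_{i,1} + sigma_i^2 * beta_i (sample variances)."""
    r_tilde = np.mean(eps_hat * eta_hat)   # naive estimator from estimated errors
    sigma_eps2 = np.mean(eps_hat ** 2)     # sample variance of eps_hat
    sigma_i2 = np.mean(eta_hat ** 2)       # sample variance of eta_hat
    return r_tilde + sigma_eps2 * gamma1_hat + sigma_i2 * beta_hat_i
```

Setting `gamma1_hat` and `beta_hat_i` to zero recovers the naive estimator $\tilde r_i$, which makes the size of the correction easy to inspect in practice.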
4.1.2 Construction of test statistics

We next construct test statistics for each of the three problems discussed in Section 4.1, using the bias-corrected statistics as a starting point. For problem (41), because testing whether $\beta_i = 0$ is equivalent to testing whether the residual covariance is equal to zero, the test statistics can be constructed directly based on the bias-corrected $\hat r_i$ (obtained exactly as $\hat r^{(d)}_i$ in Section 4.1.1, with the superscript $(d)$ dropped since there is only one sample). Then the test statistics that standardize the $\hat r_i$'s are obtained by
$$W_i = \frac{\hat r_i}{(\hat\sigma^2_\epsilon\,\hat\sigma^2_i / n)^{1/2}}, \quad i = 2, \dots, p+1, \quad (49)$$
where $\hat\sigma^2_\epsilon$ and $\hat\sigma^2_i$ are again the sample variances, obtained by respectively dropping the superscript $(d)$ and subscript $d$ in the one-sample case. As shown in [58], the statistics $W_i$'s are asymptotically normal under the null.

The above construction cannot be directly applied to problem (42), because $\beta^{(d)}_i$ is not necessarily equal to $0$ under the two-sample null, and $r^{(1)}_i/\sigma^2_{i,1} = r^{(2)}_i/\sigma^2_{i,2}$ is not equivalent to $r^{(1)}_i = r^{(2)}_i$. Thus, it is necessary to construct testing procedures based directly on estimators of $r^{(1)}_i/\sigma^2_{i,1} - r^{(2)}_i/\sigma^2_{i,2}$. By the bias correction in Section 4.1.1, [59] proposed an estimator of $r^{(d)}_i/\sigma^2_{i,d}$:
$$T^{(d)}_i = \hat r^{(d)}_i/\hat\sigma^2_{i,d}, \quad i = 2, \dots, p+1;\ d = 1, 2,$$
and tested (42) via the estimators $\{T^{(1)}_i - T^{(2)}_i : i = 2, \dots, p+1\}$. Due to the heteroscedasticity, [59] considered a standardized version of $T^{(1)}_i - T^{(2)}_i$. Specifically, let $\tilde U^{(d)}_i = \beta^{(d)}_i + U^{(d)}_i/\sigma^2_{i,d}$, with
$$U^{(d)}_i = n_d^{-1}\sum_{k=1}^{n_d}\big\{\epsilon^{(d)}_k\eta^{(d)}_{k,i} - \mathbb{E}(\epsilon^{(d)}_k\eta^{(d)}_{k,i})\big\}.$$
It was shown in [59] that $T^{(d)}_i$ is close to $\tilde U^{(d)}_i$ asymptotically under regularity conditions.
Because
$$\theta^{(d)}_i = \mathrm{Var}(\tilde U^{(d)}_i) = \mathrm{Var}\big(\epsilon^{(d)}_k\eta^{(d)}_{k,i}/\sigma^2_{i,d}\big)/n_d = \big(\sigma^2_{\epsilon^{(d)}}/\sigma^2_{i,d} + \{\beta^{(d)}_i\}^2\big)/n_d,$$
it can be estimated by $\hat\theta^{(d)}_i = \big(\hat\sigma^2_{\epsilon^{(d)}}/\hat\sigma^2_{i,d} + \{\hat\beta^{(d)}_i\}^2\big)/n_d$, and the standardized statistics are defined by
$$W_i = \frac{T^{(1)}_i - T^{(2)}_i}{(\hat\theta^{(1)}_i + \hat\theta^{(2)}_i)^{1/2}}, \quad i = 2, \dots, p+1, \quad (50)$$
which are asymptotically normal under the null, as studied in [59].

For the multivariate testing problem (44), by taking advantage of the similar effect of the $i$th variable on each of the responses, a group lasso penalty [85] can be imposed to obtain an estimator of $B$, such that
$$\max_{1\le d\le D}|\hat B_{-1,d} - B_{-1,d}|_1 = O_P(a_{n1}), \quad \max_{1\le d\le D}|\hat B_{-1,d} - B_{-1,d}|_2 = O_P(a_{n2}),$$
for some $a_{n1}$ and $a_{n2}$ satisfying (47). By [86], the above rates can be fulfilled if the row sparsity of $B$ satisfies $s(p) = o(n^{1/3}/\log p)$. Then, following the same bias correction strategy as described in Section 4.1.1, a debiased estimator for $r^{(d)}_i$ can be obtained via $\hat r^{(d)}_i = \tilde r^{(d)}_i + \hat\sigma^2_\epsilon\hat\gamma^{(d)}_{i,1} + \hat\sigma^2_{i,d}\hat B_{i,d}$. Then, similarly to the two problems above, the standardized statistic can be constructed by $\tilde T^{(d)}_i = T^{(d)}_i/\{\hat\theta^{(d)}_i\}^{1/2}$, $i = 2, \dots, p+1$; $d = 1, \dots, D$, where $T^{(d)}_i = \hat r^{(d)}_i/\hat\sigma^2_{i,d}$ and $\hat\theta^{(d)}_i = (\hat\sigma^2_\epsilon/\hat\sigma^2_{i,d} + \hat B^2_{i,d})/n$. Finally, a sum-of-squares-type test statistic for testing the $i$th row of $B$ is proposed:
$$W_i = \sum_{d=1}^{D}\{\tilde T^{(d)}_i\}^2, \quad i = 2, \dots, p+1, \quad (51)$$
which is asymptotically $\chi^2_D$ distributed under the null, as studied in [60].

4.2 Simultaneous inference for logistic regression

The principles of simultaneous inference for high-dimensional linear regression can also be applied to high-dimensional logistic regression models.
In particular, [57] considered the regression model (1), i.e., $Y_k = h(X_{k,\cdot}^\intercal\beta) + \epsilon_k$, $k = 1, \dots, n$, with the link function $h(z) = \exp(z)/[1 + \exp(z)]$, and studied the simultaneous testing problem (41) as described in Section 4.1, namely testing $H_{0,i}: \beta_i = 0$ versus $H_{1,i}: \beta_i \ne 0$, $i = 2, \dots, p+1$, with FDR control. Based on the regularized estimator $\hat\beta$ given in (5), [57] corrected the bias of $\hat\beta$ via the Taylor expansion of $h(u_k)$ at $\hat u_k$ for $u_k = X_{k,\cdot}^\intercal\beta$ and $\hat u_k = X_{k,\cdot}^\intercal\hat\beta$, and obtained that
$$Y_k - h(\hat u_k) + h'(\hat u_k)X_{k,\cdot}^\intercal\hat\beta = h'(\hat u_k)X_{k,\cdot}^\intercal\beta + (R_k + \epsilon_k), \quad (52)$$
where $R_k$ is the remainder term as specified in (16). Next, $Y_k - h(\hat u_k) + h'(\hat u_k)X_{k,\cdot}^\intercal\hat\beta$ can be treated as the new response, $h'(\hat u_k)X_{k,\cdot}$ as the new covariates, and $R_k + \epsilon_k$ as the new noise. Under such a formulation, testing $H_{0,i}: \beta_i = 0$ can be translated into simultaneous inference for the regression coefficients of an approximate linear model. [57] applied the decorrelation method (13) and constructed the following debiased estimator $\tilde\beta^{\rm Deb}_i$,
$$\tilde\beta^{\rm Deb}_i = \hat\beta_i + \frac{\sum_{k=1}^{n} Z_{k,i}\big(Y_k - h(X_{k,\cdot}^\intercal\hat\beta)\big)}{\sum_{k=1}^{n} Z_{k,i}\,h'(X_{k,\cdot}^\intercal\hat\beta)\,X_{k,i}}, \quad i = 2, \dots, p+1, \quad (53)$$
where $Z_{\cdot,i}$ is determined by the scaled residual obtained by regressing $X_{\cdot,i}$ on $X_{\cdot,-i}$ through the linearization weighting $\hat W_k = 1/h'(X_{k,\cdot}^\intercal\hat\beta)$ introduced in Section 2.3; the same strategy was also employed in [87, 88]. As summarized in Section 4.1.1, for the subsequent FDR control analysis, the decorrelation vectors $Z_{\cdot,i}$'s were shown to be close to the true regression errors. Alternatively, the inverse regression technique in Section 4.1.1 can be similarly applied in the approximate linear model (52) for bias correction.
Based on (53), [57] proposed the following standardized test statistic:
$$W_i = \frac{\tilde\beta^{\rm Deb}_i}{\big\{\sum_{k=1}^{n} h'(\hat u_k)Z^2_{k,i}\big\}^{1/2}\big/\big\{\sum_{k=1}^{n} h'(\hat u_k)Z_{k,i}X_{k,i}\big\}}, \quad i = 2, \dots, p+1, \quad (54)$$
which is asymptotically normal under the null. Additionally, [57] extended the idea to the two-sample multiple testing problem (42). Similarly, multiple testing of (44) can also be approached in the logistic setting.

4.3 Multiple testing procedure

Using the test statistics $\{W_i : i = 2, \dots, p+1\}$ in (49), (50), (51) and (54) as a foundation, we next examine a unified multiple testing procedure that guarantees error rate control. Let $\mathcal{H} = \{2, \dots, p+1\}$, let $\mathcal{H}_0$ be the set of true null indices, and let $\mathcal{H}_1 = \mathcal{H}\setminus\mathcal{H}_0$ be the set of true alternatives. We are interested in cases where most of the tests are nulls, that is, $|\mathcal{H}_1|$ is relatively small compared to $|\mathcal{H}|$. Let $\Psi(\cdot)$ be the asymptotic cumulative distribution function (cdf) of $W_i$ under the null and let $\Phi(\cdot)$ be the standard normal cdf; then we develop a normal quantile transformation of $W_i$ by $N_i = \Phi^{-1}\{1 - (1 - \Psi(W_i))/2\}$, which approximately has the same distribution as the absolute value of a standard normal random variable under the null $H_{0,i}$. Let $t$ be the threshold level such that $H_{0,i}$ is rejected if $N_i \ge t$. For any given $t$, denote by $R_0(t) = \sum_{i\in\mathcal{H}_0} I\{N_i \ge t\}$ and $R(t) = \sum_{i\in\mathcal{H}} I\{N_i \ge t\}$ the total number of false positives and the total number of rejections, respectively. Then the FDP and FDR are defined as
$$\mathrm{FDP}(t) = \frac{R_0(t)}{\max\{R(t), 1\}}, \quad \mathrm{FDR}(t) = \mathbb{E}\{\mathrm{FDP}(t)\}.$$
An ideal choice of $t$ rejects as many true positives as possible while controlling the FDP at the pre-specified level $\alpha$, i.e.,
$$t_0 = \inf\big\{0 \le t \le (2\log p)^{1/2} : \mathrm{FDP}(t) \le \alpha\big\}.$$
Since $R_0(t)$ can be estimated by $2\{1 - \Phi(t)\}|\mathcal{H}_0|$ and $|\mathcal{H}_0|$ is upper bounded by $p$, we conservatively estimate it by $2p\{1 - \Phi(t)\}$.
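The transform-and-threshold recipe just described is short to state in code: transform the statistics, then scan candidate thresholds and pick the smallest one whose estimated FDP, with $R_0(t)$ replaced by the conservative estimate $2p\{1-\Phi(t)\}$, is at most $\alpha$. The sketch below is illustrative (it searches over the range $[0, (2\log p)^{1/2}]$ and falls back to $(2\log p)^{1/2}$ when no threshold qualifies) rather than an exact transcription of the procedures in [57-60].

```python
import numpy as np
from scipy.stats import norm

def fdp_threshold_rejections(W, Psi, alpha):
    """Normal-quantile transform N_i = Phi^{-1}{1 - (1 - Psi(W_i))/2}, then the
    smallest threshold t whose estimated FDP 2p{1 - Phi(t)} / max{R(t), 1}
    is at most alpha."""
    p = len(W)
    N = norm.ppf(1 - (1 - Psi(np.asarray(W))) / 2)   # transformed statistics
    t_max = np.sqrt(2 * np.log(p))
    t_hat = t_max                                    # conservative fallback threshold
    for t in np.sort(N[(N >= 0) & (N <= t_max)]):    # candidates at observed N_i values
        R = max((N >= t).sum(), 1)                   # number of rejections at t
        if 2 * p * (1 - norm.cdf(t)) / R <= alpha:   # estimated FDP <= alpha
            t_hat = t
            break
    return N >= t_hat                                # rejection indicators
```

Restricting candidate thresholds to the observed values of $N_i$ loses nothing, since the rejection set only changes at those points.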
Therefore, the following multiple testing algorithm is proposed in [e.g., 57-60].

Algorithm 1 The multiple testing procedure.
Step 1. Obtain the transformed statistics $N_i = \Phi^{-1}\{1 - (1 - \Psi(W_i))/2\}$ from the test statistics $W_i$, $i = 2, \dots, p+1$.
Step 2. For a given $0 \le \alpha \le 1$, calculate
$$\hat t = \inf\Big[0 \le t \le (2\log p - 2\log\log p)^{1/2} : \frac{2p\{1 - \Phi(t)\}}{\max\{R(t), 1\}} \le \alpha\Big]. \quad (55)$$
If $\hat t$ in (55) does not exist, then set $\hat t = (2\log p)^{1/2}$.
Step 3. For $i \in \mathcal{H}$, reject $H_{0,i}$ if $N_i \ge \hat t$.

As noted in [89], the constraint $0 \le t \le (2\log p - 2\log\log p)^{1/2}$ in (55) is critical. When $t$ exceeds the upper bound, $2p\{1 - \Phi(t)\} \to 0$ is not even a consistent estimate of $R_0(t)$. However, the Benjamini-Hochberg (B-H) procedure [90] used $2p\{1 - \Phi(t)\}$ as an estimate of $R_0(t)$ for all $t \ge 0$ and hence may not be able to control the FDP. On the other hand, if $\hat t$ is not attained in the range, it is important to threshold $N_i$ at $(2\log p)^{1/2}$, because thresholding $N_i$ at $(2\log p - 2\log\log p)^{1/2}$ may cause too many false rejections. As a result, by applying the above multiple testing algorithm to each of the problems in Sections 4.1 and 4.2, under some regularity conditions as specified in [57-60], we attain both FDP and FDR control at the pre-specified level $\alpha$ asymptotically, i.e., $\lim_{(n,p)\to\infty} P\{\mathrm{FDP}_{\rm Alg1} \le \alpha + \epsilon\} = 1$ for any $\epsilon > 0$ and $\lim_{(n,p)\to\infty} \mathrm{FDR}_{\rm Alg1} \le \alpha$.

4.4 Power enhancement

In addition to controlling error rates, we consider strategies to improve the power of multiple testing procedures, focusing on enhancing the power for two-sample inference when the high-dimensional objects of interest are individually sparse. This is explored in [70], with an extension to a more general framework in [91].
It is worth noting that [70] improved the performance of Algorithm 1 by utilizing unknown sparsity in two-sample multiple testing. Additionally, power enhancement for the simultaneous inference of GLMs can be achieved through data integration [e.g., 71, 92], as well as other power boosting methods designed for general multiple testing problems as introduced in Section 1.

Recall that, in the two-sample problem studied in Section 4.1, we aim to make inference for $\delta_i = I\{\beta^{(1)}_i \ne \beta^{(2)}_i\}$, $i = 2, \dots, p+1$. Following Algorithm 1, we first summarize the data by a single vector of test statistics $\{N_2, \dots, N_{p+1}\}$ and then choose a significance threshold to control the multiplicity. However, such an approach ignores the important feature that both objects $\beta^{(1)}$ and $\beta^{(2)}$ are individually sparse. Let $I_d = \{i = 2, \dots, p+1 : \beta^{(d)}_i \ne 0\}$ denote the support of $\beta^{(d)}$, $d = 1, 2$, and $I = I_1 \cup I_2$ the union support. Because the small cardinality of $I$ implies that both $\beta^{(1)}$ and $\beta^{(2)}$ are sparse, the information on $I$ can be potentially utilized to narrow down the alternatives via the logical relationship that $i \notin I$ implies $\delta_i = 0$. The goal of [70] is to incorporate the sparsity information to improve the testing efficiency, and this is accomplished via the construction of an additional covariate sequence $\{S_i : i = 2, \dots, p+1\}$ that captures the information on $I$. Note that $S_i$ and $N_i$ have different roles: $N_i$ is the primary statistic that evaluates the significance of the test, while $S_i$ is the auxiliary one that captures the sparsity information to assist the inference. It is also important that $S_i$ be asymptotically independent of $N_i$, so that the null distribution of $N_i$ is not distorted by the incorporation of $S_i$.
For the two-sample problem in Section 4.1, such auxiliary statistics can be constructed as
$$S_i = \frac{\hat{r}^{(1)}_i/\hat{\sigma}^2_{i,1} + (\hat{\theta}^{(1)}_i/\hat{\theta}^{(2)}_i)(\hat{r}^{(2)}_i/\hat{\sigma}^2_{i,2})}{\{\hat{\theta}^{(1)}_i(1 + \hat{\theta}^{(1)}_i/\hat{\theta}^{(2)}_i)\}^{1/2}}, \quad i = 2, \ldots, p+1. \quad (56)$$
Then, based on the pairs of statistics $\{(N_i, S_i) : i = 2, \ldots, p+1\}$, the proposal in [70] operates in three steps: grouping, adjusting and pooling (GAP). The first step divides all tests into $K$ groups based on $S_i$, which leads to heterogeneous groups with varied sparsity levels. The second step adjusts the p-values to incorporate the sparsity information. The final step combines the adjusted p-values and chooses a threshold to control the global FDR. Based on the p-values obtained from the test statistics $N_i$, i.e., $p_i = 2\{1 - \Phi(N_i)\}$, the procedure is summarized in Algorithm 2; we refer to [70] for its detailed implementation, such as the choices of the number of groups and the grid sets.
Springer Nature 2021 LaTeX template, High-dimensional Regression.
Algorithm 2: Multiple testing via grouping, adjusting and pooling (GAP).
Step 1 (Grouping). Divide the hypotheses into $K$ groups: $G_l = \{i = 2, \ldots, p+1 : \lambda_{l-1} < S_i \le \lambda_l\}$, for $1 \le l \le K$. The optimal choice of grouping will be determined in Step 2.
Step 2 (Adjusting). Define $m_l = |G_l|$. Calculate adjusted p-values $p^w_i = \min\{p_i/w^o_l, 1\}$ if $i \in G_l$, $1 \le l \le K$, where $p_i = 2\{1 - \Phi(N_i)\}$ and $w^o_l$ will be calculated as follows.
- Initial adjusting. For a given grouping $\{G_l : 1 \le l \le K\}$, let $\hat{\pi}_l$ be the estimated proportion of non-nulls in $G_l$. Compute the group-wise weights
$$w_l = \Big(\sum_{l=1}^{K} \frac{m_l \hat{\pi}_l}{1 - \hat{\pi}_l}\Big)^{-1} \frac{p\,\hat{\pi}_l}{1 - \hat{\pi}_l}, \quad 1 \le l \le K. \quad (57)$$
Define $p^w_i = \min\{p_i/w_l, 1\}$ for $i \in G_l$.
- Further refining. For each $\Lambda = \{\lambda_l : 1 \le l \le K-1\}$ (allowed to be empty), let
$$k = \max\{j : p^w_{(j)} \le j\alpha/p\}, \quad (58)$$
and reject the hypotheses corresponding to $\{p^w_{(1)}, \ldots, p^w_{(k)}\}$, where $p^w_{(1)} \le \cdots \le p^w_{(p)}$ are the re-ordered adjusted p-values. The weights $w^o_l$ are computed using (57) based on the optimal grouping, namely the one that yields the most rejections.
Step 3 (Pooling). Combine the $p^w_i$'s computed from Step 2 based on the optimal grouping, apply (58) again, and output the rejections.
In addition, [70] provided some insights on why the GAP algorithm works. First, it adaptively chooses the group-wise FDR levels via adjusted p-values and effectively incorporates group-wise information. Intuitively, Algorithm 2 increases the overall power by assigning higher FDR levels to groups where signals are more common. It does not assume known groups and searches for the optimal grouping to maximize the power. Moreover, the construction of the weights in Algorithm 2 ensures that, after all groups are combined, the weights are always "proper" in the sense of [63]. As a result, even inaccurate estimates of the non-null proportions would not affect the validity of the overall FDR control. Then, as with Algorithm 1, [70] provided asymptotic error rate control results for Algorithm 2, namely $\lim_{(n,p)\to\infty} P\{\mathrm{FDP}_{\mathrm{Alg2}} \le \alpha + \epsilon\} = 1$ for any $\epsilon > 0$ and $\lim_{(n,p)\to\infty} \mathrm{FDR}_{\mathrm{Alg2}} \le \alpha$. Moreover, due to the informative weights (57), which effectively incorporate the sparsity information through $\{S_i : i = 2, \ldots, p+1\}$, Algorithm 2 dominates Algorithm 1 in power asymptotically.
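To make the three steps concrete, the following is a minimal Python sketch of a GAP-style weighted multiple testing procedure. It is illustrative rather than a faithful reimplementation of [70]: the function name, the fixed quantile-based grouping, and the Storey-type estimator of the non-null proportions are our own simplifying assumptions, and the search over optimal groupings in Step 2 is omitted.

```python
import numpy as np
from scipy.stats import norm

def gap_weighted_bh(N, S, alpha=0.05, K=2, lam=0.5):
    """Sketch of the grouping-adjusting-pooling (GAP) idea: group
    tests by the auxiliary statistic S, weight the p-values of the
    primary statistic N by the estimated group-wise sparsity as in
    (57), and apply the BH-type threshold (58) to the pooled
    weighted p-values."""
    p = len(N)
    pvals = 2 * (1 - norm.cdf(np.abs(N)))       # p_i = 2{1 - Phi(|N_i|)}
    # Step 1 (grouping): split into K groups by quantiles of S.
    edges = np.quantile(S, np.linspace(0, 1, K + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    group = np.digitize(S, edges[1:-1])
    # Step 2 (adjusting): Storey-type estimate of the non-null
    # proportion per group, then weights proportional to
    # pi_l / (1 - pi_l), normalized so that sum_l m_l w_l = p.
    pi_hat, m = np.empty(K), np.empty(K)
    for l in range(K):
        pl = pvals[group == l]
        m[l] = len(pl)
        frac_large = np.mean(pl > lam) if len(pl) else 1.0
        pi_hat[l] = np.clip(1 - frac_large / (1 - lam), 1e-3, 1 - 1e-3)
    norm_const = np.sum(m * pi_hat / (1 - pi_hat))
    w = (p * pi_hat / (1 - pi_hat)) / norm_const
    p_w = np.minimum(pvals / w[group], 1.0)
    # Step 3 (pooling): k = max{j : p^w_(j) <= j*alpha/p}, reject top k.
    order = np.argsort(p_w)
    below = p_w[order] <= alpha * np.arange(1, p + 1) / p
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(p, dtype=bool)
    reject[order[:k]] = True
    return reject
```

When the signals are concentrated in one group (large $S_i$), the group containing most signals receives a larger weight, so its p-values are effectively tested at a more generous level, which is the source of the power gain over the unweighted procedure.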
Specifically, [70] provided the rigorous theoretical comparison that $\Psi_{\mathrm{Alg2}} \ge \Psi_{\mathrm{Alg1}} + o(1)$ as $p \to \infty$, where $\Psi_{\mathrm{Alg1}}$ and $\Psi_{\mathrm{Alg2}}$ represent the expectations of the proportions of correct rejections among all alternative hypotheses for Algorithms 1 and 2, respectively. 5 Discussion In this expository paper, we provided a review of methods and theoretical results on statistical inference and multiple testing for high-dimensional regression models, including linear and logistic regression. Due to limited space, we were unable to discuss a number of related inference problems. In this section, we briefly mention a few of them. Accuracy assessment. Accuracy assessment is a crucial part of high-dimensional uncertainty quantification. Its goal is to evaluate the estimation accuracy of an estimator $\hat{\beta}$. For linear regression, [93] considered a collection of estimators $\hat{\beta}$ and established the minimaxity and adaptivity of the point estimation and confidence interval for $\|\hat{\beta}-\beta\|_q^q$ with $1 \le q \le 2$. Suppose that $\hat{\beta}$ is independent of the data $\{X_{k,\cdot}, Y_k\}_{1\le k\le n}$ (which can be achieved by sample splitting, for example). Define the residual $\mathrm{Re}_k = Y_k - X_{k,\cdot}^{\top}\hat{\beta} = X_{k,\cdot}^{\top}(\beta-\hat{\beta})+\epsilon_k$. The quadratic functional inference approach introduced in [27] can be applied to the data $\{\mathrm{Re}_k, X_{k,\cdot}\}_{1\le k\le n}$ to make inference for both $(\hat{\beta}-\beta)^{\top}\Sigma(\hat{\beta}-\beta)$ and $\|\hat{\beta}-\beta\|_2^2$; see more details in Section 5.2 of [27]. [94] used an estimator of $\|\hat{\beta}-\beta\|_2^2$ to construct a confidence set for $\beta$, with $\hat{\beta}$ denoting an accurate high-dimensional estimator.
In the context of approximate message passing, [95, 96] considered a different framework with $n/(p+1) \to \delta \in (0,1)$ and independent Gaussian design, and established the asymptotic limit of $\|\hat{\beta}-\beta\|_2^2/(p+1)$ for the Lasso estimator $\hat{\beta}$. Semi-supervised Inference. Semi-supervised inference is well motivated by a wide range of modern applications, such as Electronic Health Record data analysis. In addition to the labeled data, additional covariate observations exist in the semi-supervised setting. It is critical to leverage the information in the unlabelled data to improve the inference efficiency. As reviewed in Section 3.5, the additional unlabelled data improve the accuracy of various high-dimensional inference procedures by facilitating the estimation of $\Sigma$ or $\Sigma^{-1}$ [27, 75]. Moreover, in a different context where the linear outcome model might be misspecified, one active research direction in semi-supervised learning is to construct a more complicated imputation model (e.g., by applying classical non-parametric or machine learning methods) and conduct a follow-up bias correction after outcome imputation [e.g., 97–101]. Applications of quadratic form inference. The statistical inference methods for quadratic functionals in Section 3.6 and inner products in Section 3.4 are useful in a wide range of statistical applications. Firstly, the group significance test of $\|\beta_G\|_2^2 = 0$ or $\beta_G^{\top}\Sigma_{G,G}\beta_G = 0$ forms an important basis for designing computationally efficient hierarchical testing approaches [26, 102]. Secondly, consider the high-dimensional interaction model $Y_k = A_k\eta + X_{k,\cdot}^{\top}\tau + A_k \cdot X_{k,\cdot}^{\top}\gamma$, with $A_k$ denoting the variable of interest (e.g., the treatment) and $X_{k,\cdot}$ denoting a large number of other variables.
In this model, testing the existence of the interaction term can be reduced to inference for $\|\gamma\|_2^2$. Lastly, in the two-sample model (2), the methods proposed in Sections 3.6 and 3.4 have been applied in [103] to calculate the difference $\|\beta^{(1)} - \beta^{(2)}\|_2^2$, which has the expression $\|\beta^{(1)}\|_2^2 + \|\beta^{(2)}\|_2^2 - 2[\beta^{(1)}]^{\top}\beta^{(2)}$. Multiple heterogeneous regression models. It is essential to perform efficient integrative inference in various applications that combine multiple regression models. Examples include transfer learning, distributed learning, federated learning, and distributionally robust learning. Transfer learning provides a powerful tool for incorporating data from related studies to improve estimation and inference accuracy in a target study of direct interest. [104] studied transfer learning for high-dimensional linear regression. Minimax optimal convergence rates were established, and data-driven adaptive algorithms were proposed. [105, 106] explored transfer learning in high-dimensional GLMs, and [107, 108] considered distributed learning for high-dimensional regression. Additionally, [71, 92] proposed integrative estimation and multiple testing procedures for cross-site high-dimensional regression models that simultaneously accommodate between-study heterogeneity and protect individual-level data. In a separate direction, [109] proposed the maximin effect as a robust prediction model for the target distribution being generated as a mixture of multiple source populations. [80] further established that the maximin effect is a group distributionally robust model and studied the statistical inference for the maximin effect in high dimensions. Many questions in multiple heterogeneous regression models remain open and warrant future research. Other simultaneous inference problems.
The principles of the multiple testing procedures reviewed in Section 4 are also widely applicable to a range of simultaneous inference problems, including graph learning [e.g., 88, 89, 110, 111], differential network recovery [e.g., 112–114], as well as various structured regression analyses [e.g., 77, 115, 116]. For example, through regression-based bias-correction techniques similar to those reviewed in Section 4.1.1, [110] studied the estimation of Gaussian Graphical Models with FDR control; [89] focused on the recovery of sub-networks in Gaussian graphs; [88] identified significant communities for compositional data. Additionally, [112, 113] proposed multiple testing procedures for differential network detection of vector-valued and matrix-valued observations, respectively. Multiple testing of structured regression models, such as mixed-effects models and confounded linear models, has also been extensively studied in the literature [77, 116]. Alternative multiple testing methods for regression models. Besides the bias-correction based multiple testing approaches reviewed in this article, there are a few alternative classes of methods that aim at finite-sample FDR control for linear models. Examples include knockoff-based methods, mirror statistics approaches, and e-value proposals. In particular, [117] constructed a set of knockoff variables and selected those predictors that have considerably higher importance scores than their knockoff counterparts; [118] extended this work to a model-X knockoff framework that allows an unknown conditional distribution of the response. There are several generalizations along this line of research; see [119, 120] and many references therein.
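As a concrete illustration of the knockoff idea, the following sketch implements the selection rule once importance scores for the original variables and their knockoffs are available; the construction of the knockoff variables themselves (the harder part of [117, 118]) is assumed given, and the function name and the knockoff+ form of the threshold are our choices for illustration.

```python
import numpy as np

def knockoff_select(Z, Z_tilde, q=0.1):
    """Sketch of the knockoff selection rule: compare importance
    scores of the original variables (Z) and their knockoffs
    (Z_tilde) via W_j = Z_j - Z_tilde_j, then pick the smallest
    data-dependent threshold t (knockoff+ form) such that the
    estimated FDP, (1 + #{W_j <= -t}) / #{W_j >= t}, is at most q."""
    W = Z - Z_tilde
    ts = np.sort(np.abs(W[W != 0]))            # candidate thresholds
    for t in ts:
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.nonzero(W >= t)[0]       # indices of selected variables
    return np.array([], dtype=int)             # no threshold works: select none
```

Under exchangeability of the knockoffs, the signs of the null $W_j$ are symmetric, so $\#\{W_j \le -t\}$ over-counts the null variables among $\#\{W_j \ge t\}$, which is why this estimated FDP is conservative.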
Inspired by the knocko\ufb00 idea, [121] proposed a symmetrized data aggregation approach to build mirror statistics that incorporate data dependence structure; a general framework on mirror statistics of GLMs was studied in [122] and the references therein. The e-value based proposal [123] is another useful tool for multiple testing in general. [124] proposed an e-BH procedure that achieved FDR control under arbitrary dependence among the e-values, and the equivalence between the knocko\ufb00s and the e-BH was studied in [125]." + }, + { + "url": "http://arxiv.org/abs/2105.07536v4", + "title": "Theoretical Foundations of t-SNE for Visualizing High-Dimensional Clustered Data", + "abstract": "This paper investigates the theoretical foundations of the t-distributed\nstochastic neighbor embedding (t-SNE) algorithm, a popular nonlinear dimension\nreduction and data visualization method. A novel theoretical framework for the\nanalysis of t-SNE based on the gradient descent approach is presented. For the\nearly exaggeration stage of t-SNE, we show its asymptotic equivalence to power\niterations based on the underlying graph Laplacian, characterize its limiting\nbehavior, and uncover its deep connection to Laplacian spectral clustering, and\nfundamental principles including early stopping as implicit regularization. The\nresults explain the intrinsic mechanism and the empirical benefits of such a\ncomputational strategy. For the embedding stage of t-SNE, we characterize the\nkinematics of the low-dimensional map throughout the iterations, and identify\nan amplification phase, featuring the intercluster repulsion and the expansive\nbehavior of the low-dimensional map, and a stabilization phase. 
The general\ntheory explains the fast convergence rate and the exceptional empirical\nperformance of t-SNE for visualizing clustered data, brings forth\ninterpretations of the t-SNE visualizations, and provides theoretical guidance\nfor applying t-SNE and selecting its tuning parameters in various applications.", + "authors": "T. Tony Cai, Rong Ma", + "published": "2021-05-16", + "updated": "2022-11-01", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction Data visualization is critically important for understanding and interpreting the structure of large datasets, and has been recognized as one of the fundamental topics in data science (Donoho, 2017). A collection of machine learning algorithms for data visualization and dimension reduction has been developed. Among them, the t-distributed stochastic neighbor embedding (t-SNE) algorithm, proposed by van der Maaten and Hinton (2008), is arguably one of the most popular methods and a state-of-the-art technique for a wide range of applications (Wang et al., 2021). Specifically, t-SNE is an iterative algorithm for visualizing high-dimensional data by mapping the data points to a two- or finite-dimensional space. It creates a single map that reveals the intrinsic structures in a high-dimensional dataset, including trends, patterns, and outliers, through a nonlinear dimension reduction technique. (©2022 Tony Cai and Rong Ma. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v23/21-B0524.html. arXiv:2105.07536v4 [stat.ML] 1 Nov 2022.) In the past decade, the original t-SNE algorithm, along with its many variants (for example, Yang et al. (2009); Carreira-Perpiñán (2010); Xie et al. (2011); van der Maaten (2014); Gisbrecht et al. (2015); Pezzotti et al. (2016); Im et al. (2018); Linderman et al. (2019); Chatzimparmpas et al. 
(2020)), has made a profound impact on the practice of scientific research, including genetics (Platzer, 2013), molecular biology (Olivon et al., 2018), single-cell transcriptomics (Kobak and Berens, 2019), computer vision (Cheng et al., 2015) and astrophysics (Traven et al., 2017). In particular, the extraordinary performance of t-SNE for visualizing high-dimensional data with intrinsic clusters has been widely acknowledged (van der Maaten, 2014; Kobak and Berens, 2019). Compared to the extensive literature on the computational and numerical aspects of t-SNE, there is a paucity of fundamental results about its theoretical foundations (see Section 1.3 for a brief overview). The lack of theoretical understanding and justification profoundly limits the users' interpretation of the results as well as the potential for further improvement of the method. This paper aims to investigate the theoretical foundations of t-SNE. Specifically, we present a novel framework for the analysis of t-SNE, provide theoretical justifications for its competence in dimension reduction and visualizing clustered data, and uncover the fundamental principles underlying its exceptional empirical performance. 1.1 Basic t-SNE Algorithm Let $\{X_i\}_{1\le i\le n}$ be a set of $p$-dimensional data points. t-SNE starts by computing a joint probability distribution over all pairs of data points $\{(X_i, X_j)\}_{1\le i\ne j\le n}$, represented by a symmetric matrix $P = (p_{ij})_{1\le i,j\le n} \in \mathbb{R}^{n\times n}$, where $p_{ii} = 0$ for all $1 \le i \le n$, and for $i \ne j$, $$p_{ij} = \frac{p_{i|j} + p_{j|i}}{2n} \quad \text{with} \quad p_{j|i} = \frac{\exp(-\|X_i - X_j\|_2^2/2\tau_i^2)}{\sum_{\ell\in\{1,2,\ldots,n\}\setminus\{i\}} \exp(-\|X_i - X_\ell\|_2^2/2\tau_i^2)}. \quad (1)$$ Here the $\tau_i$ are tuning parameters, which are usually determined based on a certain perplexity measure and a binary search strategy (Hinton and Roweis, 2002; van der Maaten and Hinton, 2008). 
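A minimal sketch of how the input similarities (1) and the bandwidths $\tau_i$ are typically computed: for each point, binary-search the precision $\beta_i = 1/(2\tau_i^2)$ until the conditional distribution $p_{\cdot|i}$ attains the target perplexity $2^{H(p_{\cdot|i})}$, then symmetrize. The function name and the specific search tolerances are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def conditional_affinities(X, perplexity=30.0, tol=1e-5, max_iter=50):
    """Compute the symmetric t-SNE input similarities p_ij of (1):
    binary-search beta_i = 1/(2*tau_i^2) per point so that the
    entropy of p_{.|i} matches log2(perplexity), then symmetrize
    as p_ij = (p_{j|i} + p_{i|j}) / (2n)."""
    n = X.shape[0]
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared distances
    P_cond = np.zeros((n, n))
    target_entropy = np.log2(perplexity)
    for i in range(n):
        lo, hi, beta = 0.0, np.inf, 1.0
        d = np.delete(D[i], i)                 # distances to the other points
        for _ in range(max_iter):
            w = np.exp(-beta * d)
            p = w / w.sum()
            entropy = -np.sum(p * np.log2(p + 1e-300))
            if abs(entropy - target_entropy) < tol:
                break
            if entropy > target_entropy:       # distribution too flat: raise beta
                lo, beta = beta, beta * 2 if hi == np.inf else (beta + hi) / 2
            else:                              # too peaked: lower beta
                hi, beta = beta, (lo + beta) / 2
        P_cond[i, np.arange(n) != i] = p
    return (P_cond + P_cond.T) / (2 * n)       # symmetric joint p_ij, sums to 1
```

The resulting matrix is symmetric with zero diagonal and total mass one, matching the properties of $P$ used throughout the paper.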
Similarly, for a two-dimensional map $\{y_i\}_{1\le i\le n} \subset \mathbb{R}^2$ (throughout, we focus on the two-dimensional embedding for ease of presentation; however, all the theoretical results obtained in this work hold for any finite constant embedding dimension), we define the joint probability distribution over all pairs $\{(y_i, y_j)\}_{1\le i\ne j\le n}$ through a symmetric matrix $Q = (q_{ij})_{1\le i,j\le n}$, where $q_{ii} = 0$ for all $1 \le i \le n$ and for $i \ne j$, $$q_{ij} = \frac{(1 + \|y_i - y_j\|_2^2)^{-1}}{\sum_{\ell,s\in\{1,2,\ldots,n\},\,\ell\ne s}(1 + \|y_\ell - y_s\|_2^2)^{-1}}. \quad (2)$$ Intuitively, $P$ and $Q$ are similarity matrices summarizing the pairwise distances of the high-dimensional data points $\{X_i\}_{1\le i\le n}$ and the two-dimensional map $\{y_i\}_{1\le i\le n}$, respectively. Then t-SNE aims to find $\{y_i\}_{1\le i\le n}$ that minimizes the KL-divergence between $P$ and $Q$, that is, $$(y_1, \ldots, y_n) = \arg\min_{y_1,\ldots,y_n} D_{KL}(P, Q) = \arg\min_{y_1,\ldots,y_n} \sum_{i,j\in\{1,2,\ldots,n\},\, i\ne j} p_{ij} \log\frac{p_{ij}}{q_{ij}}. \quad (3)$$ Many algorithms have been proposed to solve this optimization problem. The most widely used algorithm was proposed in van der Maaten and Hinton (2008), which draws on a variant of the gradient descent algorithm, with the updating equation $$y_i^{(k+1)} = y_i^{(k)} + hD_i^{(k)} + m^{(k+1)}(y_i^{(k)} - y_i^{(k-1)}), \quad i = 1, \ldots, n, \quad (4)$$ where $h \in \mathbb{R}_+$ is a prespecified step size parameter, $D_i^{(k)} = 4\sum_{1\le j\le n, j\ne i}(y_j^{(k)} - y_i^{(k)})S_{ij}^{(k)} \in \mathbb{R}^2$ is the gradient term corresponding to $y_i$, with $S_{ij}^{(k)} = (p_{ij} - q_{ij}^{(k)})/(1 + \|y_i^{(k)} - y_j^{(k)}\|_2^2) \in \mathbb{R}$, and $m^{(k)} \in \mathbb{R}_+$ is a prespecified momentum parameter. The algorithm starts with an initialization $y_i^{(0)} = y_i^{(-1)}$ for $i \in \{1, 2, \ldots, n\}$, drawn independently from a uniform distribution on $[-0.01, 0.01]^2$, or from $N(0, \delta^2 I)$ for some small $\delta > 0$. 
As indicated by van der Maaten and Hinton (2008), the inclusion of the momentum term $m^{(k+1)}(y_i^{(k)} - y_i^{(k-1)})$ in (4) is mainly to speed up the convergence and to reduce the risk of getting stuck in a local minimum. In this paper, for simplicity and generality, we focus on the basic version of the t-SNE algorithm based on simple gradient descent, with the updating equation $$y_i^{(k+1)} = y_i^{(k)} + hD_i^{(k)}, \quad i = 1, \ldots, n. \quad (5)$$ In van der Maaten and Hinton (2008) and van der Maaten (2014), the recommended total number of iterations is 1000, while the step size $h$ is initially set as 400 or 800, and is updated at each iteration by the adaptive learning rate scheme of Jacobs (1988). The standard gradient descent algorithm as in (5) suffers from a slow convergence rate and even non-convergence in some applications. As an amelioration, van der Maaten and Hinton (2008) proposed an early exaggeration technique, applied to the initial stages of the optimization, that helps create patterns in the visualization and speed up the convergence. Such a computational strategy has become standard in practical use. In fact, most of the current software implementations of t-SNE are based on an early exaggeration stage followed by an embedding stage that iterates a certain gradient descent algorithm. In our setting, these two stages can be summarized as follows. Early exaggeration stage. For the first $K_0 > 0$ iterations, the $p_{ij}$'s in the gradient term $D_i^{(k)}$ are multiplied by some exaggeration parameter $\alpha > 0$, so the updating equation for this early exaggeration stage becomes $$y_i^{(k+1)} = y_i^{(k)} + h\sum_{1\le j\le n,\, j\ne i}(y_j^{(k)} - y_i^{(k)})S_{ij}^{(k)}(\alpha), \quad i = 1, \ldots, n, \quad (6)$$ where $S_{ij}^{(k)}(\alpha) = (\alpha p_{ij} - q_{ij}^{(k)})/(1 + \|y_i^{(k)} - y_j^{(k)}\|_2^2) \in \mathbb{R}$, and the factor 4 in $D_i^{(k)}$ is absorbed into the step size parameter $h$. We refer to this first stage of the t-SNE algorithm as the early exaggeration stage. 
In van der Maaten and Hinton (2008), the authors choose $\alpha = 4$ and $K_0 = 50$ for the early exaggeration stage, whereas later in van der Maaten (2014), it is recommended that $\alpha = 12$ and $K_0 = 250$. In particular, it is empirically observed that the early exaggeration technique enables t-SNE to find a better global structure in the early stages of the optimization by creating very tight clusters of points that easily move around in the embedding space (van der Maaten, 2014); this observation is later supported by some pioneering theoretical investigations (see Section 1.3). Nevertheless, there are interesting questions to be answered concerning (i) the underlying principles and mechanism behind such a computational strategy, (ii) the limiting behavior of the low-dimensional map, (iii) how sensitive the performance of t-SNE is with respect to the choice of the tuning parameters $(\alpha, h, K_0)$, and (iv) how to efficiently determine these parameters to achieve the best empirical performance. Figure 1: An illustration of the t-SNE iterations that visualize samples from the MNIST dataset (Section 5). Each sample corresponds to an image of a handwritten digit "2," "4," "6," or "8." The visualizations are obtained using the Rtsne function in the R package Rtsne, by selecting the exact t-SNE mode (theta=0, pca=F), dropping the momentum terms (momentum=0, final momentum=0), and setting perplexity=30 (default), $\alpha = 12$ (default), $h = 200$ (default) in (6), and $K_0 = 40$. The first three plots (top row) correspond to the early exaggeration stage, while the last three plots (bottom row) correspond to the embedding stage. Embedding stage. After the early exaggeration stage, the exaggeration parameter $\alpha$ is dropped and the original iterative algorithm (5) is carried out until a prespecified number of steps is attained. We refer to this second stage as the embedding stage. 
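The two stages above can be sketched as a single loop that runs the exaggerated update (6) for the first $K_0$ iterations and then the plain update (5). This is a deliberately simplified illustration (no momentum term, no adaptive learning rate, a small constant step size), with the function name and defaults chosen for demonstration rather than matching any reference implementation:

```python
import numpy as np

def tsne_embed(P, n_iter=300, h=0.5, alpha=12.0, K0=100, dim=2, seed=0):
    """Basic two-stage t-SNE iterations on a given symmetric input
    similarity matrix P (zero diagonal, entries summing to 1):
    update (6) with exaggeration alpha for the first K0 steps, then
    the plain gradient step (5).  The factor 4 of the gradient is
    absorbed into the step size h, as in the text."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    Y = rng.uniform(-0.01, 0.01, size=(n, dim))   # local random initialization
    for k in range(n_iter):
        a = alpha if k < K0 else 1.0              # drop exaggeration after K0
        D2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        W = 1.0 / (1.0 + D2)                      # Student-t kernel weights
        np.fill_diagonal(W, 0.0)
        Q = W / W.sum()                           # low-dimensional q_ij as in (2)
        S = (a * P - Q) * W                       # S_ij(a) = (a*p_ij - q_ij)/(1+||y_i-y_j||^2)
        # Coordinatewise update y_i <- y_i + h * sum_j (y_j - y_i) * S_ij,
        # written in matrix form using the symmetry of S.
        Y = Y + h * (S @ Y - S.sum(axis=1, keepdims=True) * Y)
    return Y
```

During the first $K_0$ steps the attractive term $\alpha p_{ij}$ dominates, producing tight clusters near the origin; after the exaggeration is dropped, the repulsive $q_{ij}$ term spreads the clusters apart, mirroring the two phases visible in Figure 1.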
The final output is a two-dimensional map $\{y_i^{(K_1)}\}_{1\le i\le n}$, commonly treated as a low-dimensional embedding of the original data $\{X_i\}_{1\le i\le n}$ and expected to preserve its intrinsic geometric structures. In addition to data visualization, t-SNE is sometimes also used as an intermediate step for clustering, signal detection, and many other purposes. In particular, it has been observed that, when applied to high-dimensional clustered data, t-SNE tends to produce a visualization with more separated clusters, which are often in good agreement with the clusters found by a dedicated clustering algorithm (Kobak and Berens, 2019). See Figure 1 for an example of data visualization using such a basic t-SNE algorithm. 1.2 Main Results and Our Contribution A formal theoretical framework is introduced for the analysis of t-SNE that relies on a joint statistical and computational analysis. The key contributions of the present work can be summarized as follows: • We rigorously establish the asymptotic equivalence between the early exaggeration stage and power iterations. Our theory unveils novel properties such as the implicit regularization effect and the necessity of early stopping in the early exaggeration stage for weakly clustered data. • We characterize the behavior of t-SNE iterations at the embedding stage by identifying an amplification phase, along with its intercluster repulsion and expansion phenomena, and a stabilization phase of this stage. • We give theoretical guidance for initialization and for selecting the tuning parameters at both stages in a flexible and data-adaptive manner. • We provide practical advice on applying t-SNE and interpreting the t-SNE visualizations of high-dimensional clustered data. The main results can be explained in more detail from three perspectives. Early exaggeration stage. 
Through a discrete-time analysis (Sections 2.1 and 2.2), we establish the asymptotic equivalence between the early exaggeration stage and power iterations based on the underlying graph Laplacian associated with the high-dimensional data, providing a spectral-graphical interpretation of the algorithm. We show the implicit spectral clustering mechanism underlying this stage, which explains the adaptivity and flexibility of t-SNE for visualizing clustered data without specifying the number of clusters. Specifically, for the cases where $\{X_i\}_{1\le i\le n}$ are approximately clustered into $R$ groups, we make the key observation that the coordinates of $\{y_i^{(k)}\}_{1\le i\le n}$ converge to the $R$-dimensional Laplacian null space, leading to a limiting embedding where the elements of $\{y_i^{(k)}\}_{1\le i\le n}$ are well-clustered according to their true cluster membership. On the other hand, through a continuous-time analysis (Section 2.3), we study the underlying gradient flow and uncover an implicit regularization effect depending on the number of iterations. In particular, our analysis implies that when dealing with noisy and approximately clustered data, one should stop early in the early exaggeration stage to avoid "overshooting." These results justify the empirical observations about the benefits of the early exaggeration technique in creating cluster structures and speeding up the algorithm. For more details on the comparison with existing results, see Section 1.3 and the discussion after Corollary 7 in Section 2. Embedding stage. We provide a mechanical interpretation of the algorithm by characterizing the kinematics of the low-dimensional map at each iteration. Specifically, in Section 3 we identify an amplification phase within the embedding stage, featuring the local intercluster repulsion (Theorem 13) and the global expansive behavior (Theorem 15) of $\{y_i^{(k)}\}_{1\le i\le n}$. 
In the former case, it is shown that the movement of each $y_i^{(k)}$ to $y_i^{(k+1)}$ is jointly determined by the repulsive forces pointing toward $y_i^{(k)}$ from each of the other clusters (Figure 2), which amounts to increasing the spaces between the existing clusters; in the latter case, it is shown that the diameter of $\{y_i^{(k)}\}_{1\le i\le n}$ may strictly increase after each iteration. We observe that, following the amplification phase, there is a stabilization phase where $\{y_i^{(k)}\}_{1\le i\le n}$ is locally adjusted to arrive at a finer embedding of $\{X_i\}_{1\le i\le n}$. These results together explain the fast convergence rate and the exceptional empirical performance of t-SNE for visualizing clustered data. The articulation of these phenomena also leads to useful practical guidance. See below and Remark 16 in Section 3 for more details. Figure 2: Illustration of the intercluster repulsion where the original data $\{X_i\}_{1\le i\le n}$ have three clusters. The position of $y_i^{(k+1)}$ is jointly determined by $y_i^{(k)}$ and two repulsive forces $f_{i1}^{(k)}$ and $f_{i2}^{(k)}$ pushing $y_i^{(k)}$ away from the other two clusters. Practical implications. The general theory brings forth interpretations of the t-SNE output, and provides theoretical guidance for selecting tuning parameters and for initialization. In Section 4 we illustrate the general theory on two examples of high-dimensional clustered data, one generated from a Gaussian mixture model, and another from a noisy nested sphere model. We also analyze in Section 5 a real-world dataset to further demonstrate the practical implications of our theory. In particular, our analysis allows for a wider spectrum of tuning parameters (Figure 10 and Equation (39)) and initialization procedures than those considered in previous theoretical works (Arora et al., 2018; Linderman and Steinerberger, 2019). 
Moreover, our theoretical results support the state-of-the-art practice (Kobak and Berens, 2019; Kobak and Linderman, 2021), but also lead to novel insights (e.g., the first item below) that, to our knowledge, were previously unknown. In the following, we summarize our general advice on applying t-SNE to potentially clustered data: • For weakly clustered data, one may adopt the early exaggeration technique, but needs to stop early (for example, set $K_0 = \lfloor(\log n)^2\rfloor$) to avoid overshooting; failure to stop early may lead to false clustering; see Figure 5 below for an illustration. • t-SNE visualization based on random initialization and early exaggeration is reliable in terms of cluster membership but not the relative position of clusters. For example, neighboring clusters in the visualization may not be interpreted as neighboring clusters in the original data; see Figure 6 below for an illustration. • Occasionally, false clustering may appear as an artifact of random initialization and intercluster repulsion. Therefore, it is helpful to run t-SNE multiple times to fully assess the effect of random initialization; see Figure 6 below for an illustration. • For strongly clustered data, one can speed up the algorithm by replacing the early exaggeration stage with a simple spectral initialization where $(y_1^{(0)}, y_2^{(0)})$ are the eigenvectors associated with the smallest two eigenvalues of $L(P)$. 1.3 Related Work The impressive empirical performance of t-SNE has recently attracted much theoretical interest. Lee and Verleysen (2011) investigated the benefits of the so-called shift-invariant similarities used in stochastic neighbor embedding and its variants. Later on, they further identified two key properties of these visualization methods (Lee and Verleysen, 2014). 
In Shaham and Steinerberger (2017), a large family of methods, including t-SNE as a special case, was studied and shown to successfully map well-separated disjoint clusters from high dimensions to the real line so as to approximately preserve the clustering. In Arora et al. (2018), a theoretical framework was developed to formalize the notion of visualizing clustered data, which is used to analyze the early exaggeration stage of t-SNE and to justify its high visualization quality. Linderman and Steinerberger (2019) showed that, in the early exaggeration stage of t-SNE, with properly chosen parameters $\alpha$ and $h$, a subset of the two-dimensional map belonging to the same cluster will shrink in diameter, suggesting a well-clustered visualization over the iterations. We note that connections between the early exaggeration stage and power iterations have been pointed out in Arora et al. (2018) and Linderman and Steinerberger (2019), but the discussions therein are mostly informal and heuristic. In contrast, we provide a rigorous theoretical justification for such a connection, identify its conditions, and explicate its consequences. By extending the idea of t-SNE, Im et al. (2018) considered a class of methods with various loss functions based on the f-divergence, and theoretically assessed the performances of these methods based on a neighborhood-level precision-recall analysis. More recently, Zhang and Steinerberger (2021) proposed to view t-SNE as a force-based method which generates embeddings by balancing attractive and repulsive forces between data points. In particular, the limiting behavior of t-SNE was analyzed under a mean-field model where a single homogeneous cluster is present. On the empirical side, the recent works of Kobak and Berens (2019) and Kobak and Linderman (2021) summarize the state-of-the-art practice of applying t-SNE to biological data. (An illustration of the spectral initialization is provided in Figure 4.1 of Linderman and Steinerberger (2019).) 
A comprehensive survey of existing data visualization methods and their properties can be found in Nonato and Aupetit (2018). Despite these pioneering endeavors, the theoretical understanding of t-SNE is still limited. Many intriguing phenomena and important features that arise commonly in practice have not been well understood or properly explained. Moreover, it remains unclear how to properly interpret the t-SNE visualization and its potential artifacts. These important questions are carefully addressed in the current work for the case of clustered data. Compared to the existing works, the theoretical framework developed in our work leads to the identification and explication of novel properties, phenomena, and important practical implications of t-SNE, as summarized at the beginning of Section 1.2. 1.4 Notation and Organization For a vector $a = (a_1, \ldots, a_n)^{\top} \in \mathbb{R}^n$, we denote by $\mathrm{diag}(a_1, \ldots, a_n) \in \mathbb{R}^{n\times n}$ the diagonal matrix whose $i$-th diagonal entry is $a_i$, and define the $\ell_p$ norm $\|a\|_p = (\sum_{i=1}^n a_i^p)^{1/p}$. For a matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, we define its Frobenius norm as $\|A\|_F = \sqrt{\sum_{i=1}^n\sum_{j=1}^n a_{ij}^2}$, its $\ell_\infty$-norm as $\|A\|_\infty = \max_{1\le i,j\le n} |a_{ij}|$, and its spectral norm as $\|A\| = \sup_{\|x\|_2\le 1} \|Ax\|_2$; we also denote by $A_{.i} \in \mathbb{R}^n$ its $i$-th column and by $A_{i.} \in \mathbb{R}^n$ its $i$-th row. Let $O(n, k) = \{V \in \mathbb{R}^{n\times k} : V^{\top}V = I_k\}$ be the set of all $n \times k$ orthonormal matrices and $O_n = O(n, n)$ the set of $n$-dimensional orthonormal matrices. For a rank-$r$ matrix $A \in \mathbb{R}^{n\times n}$ with $1 \le r \le n$, its eigendecomposition is denoted as $A = U\Gamma U^{\top}$, where $U \in O(n, r)$ with its columns being the eigenvectors, and $\Gamma = \mathrm{diag}(\lambda_1(A), \lambda_2(A), \ldots, \lambda_r(A))$ with $\lambda_{\min}(A) = \lambda_1(A) \le \cdots \le \lambda_n(A) = \lambda_{\max}(A)$ being the ordered eigenvalues of $A$. 
For a smooth function $f(x)$, we denote $\dot f(x) = \mathrm{d}f(x)/\mathrm{d}x$ and $\ddot f(x) = \mathrm{d}^2 f(x)/\mathrm{d}x^2$. For any integer $n > 0$, we denote the set $[n] = \{1, 2, ..., n\}$. For a finite set $S$, we denote its cardinality as $|S|$. For a subset $S \subseteq \mathbb{R}^n$, we define its diameter $\mathrm{diam}(S) = \sup_{x,y\in S} \|x - y\|_2$. For sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n = o(b_n)$ or $a_n \ll b_n$ if $\lim_n a_n/b_n = 0$, and write $a_n = O(b_n)$, $a_n \lesssim b_n$ or $b_n \gtrsim a_n$ if there exists a constant $C$ such that $a_n \le C b_n$ for all $n$. We write $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $a_n \gtrsim b_n$. Throughout, $C, C_1, C_2, ...$ are universal constants that can vary from line to line.

The rest of the paper is organized as follows. Section 2 presents the theoretical analysis for the early exaggeration stage of t-SNE. Section 3 analyzes the embedding stage. The general theory is then applied in Section 4 to two specific settings of model-based clustered data, one under a Gaussian mixture model and another under a noisy nested sphere model. Analysis of a real-world dataset is presented in Section 5. Section 6 discusses potential applications, extensions and other related problems. Proofs of our main results and supplementary figures are collected in Appendix A to F.

2. Analysis of the Early Exaggeration Stage

2.1 Asymptotic Graphical Interpretation and Localization

We start with a key observation that connects the updating equation (6) to some graph-related concepts. To this end, we introduce the following definition.

Theoretical Foundation of t-SNE

Definition 1 (Degree & Laplacian Operators) For a symmetric matrix $A = (a_{ij})_{1\le i,j\le n} \in \mathbb{R}^{n\times n}$, define the degree operator $D : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ by $D(A) = \mathrm{diag}(\sum_{i=1}^n a_{i1}, ..., \sum_{i=1}^n a_{in})$, and the Laplacian operator $L : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ by $L(A) = D(A) - A$. We define $S^{(k)}_\alpha = (S^{(k)}_{ij}(\alpha))_{1\le i,j\le n} \in \mathbb{R}^{n\times n}$ with $S^{(k)}_{ii}(\alpha) \equiv 0$ for all $i \in [n]$.
Then we can rewrite the updating equation (6) in matrix form as
$$y^{(k+1)}_\ell = [I_n - hL(S^{(k)}_\alpha)]\, y^{(k)}_\ell, \qquad \ell = 1, 2, \quad (7)$$
where $I_n \in \mathbb{R}^{n\times n}$ is the identity matrix, and $y^{(k)}_\ell \in \mathbb{R}^n$ consists of the $\ell$-th coordinates of $\{y^{(k)}_i\}_{1\le i\le n}$. As a consequence, for each iteration $k$, if we treat the symmetric matrix $S^{(k)}_\alpha$ as the adjacency matrix of a weighted graph $G^{(k)}$ with $n$ nodes that summarizes the pairwise relationships between the $n$ data points $\{X_i\}_{1\le i\le n}$, Equation (7) has an interpretation that links to the Laplacian matrix of such a weighted graph.

To better understand the meaning and the properties of the underlying graph $G^{(k)}$ that evolves over iterations, we take a closer look at its adjacency matrix $S^{(k)}_\alpha$. In particular, one should keep in mind that in common applications of t-SNE, the early exaggeration stage has the following empirical features: (i) moderate or relatively large values of the exaggeration parameter $\alpha$ (default 12 in the R package Rtsne), (ii) local initializations $\{y^{(0)}_i\}_{1\le i\le n}$ around the origin (see Section 1.1), and (iii) relatively small diameters $\mathrm{diam}(\{y^{(k)}_i\}_{1\le i\le n})$ over the iterations (Figure 1). Our next result shows that these empirical features of t-SNE have deep connections to the asymptotic behavior of the evolving underlying graphs and their adjacency matrices $\{S^{(k)}_\alpha\}_{k\ge 1}$ in the large sample limit (as $n \to \infty$).

Theorem 2 (Asymptotic Graphical Interpretation) Recall that $P = (p_{ij})_{1\le i,j\le n}$ is defined in (1) and denote $\eta^{(k)} = [\mathrm{diam}(\{y^{(k)}_i\}_{1\le i\le n})]^2$. Then for any $i, j \in [n]$ with $i \neq j$, and each $k \ge 1$ such that $\eta^{(k)} < 1$, we have
$$\left| S^{(k)}_{ij}(\alpha) - \alpha p_{ij} + \frac{1}{n(n-1)} \right| \le \alpha p_{ij}\,\eta^{(k)} + \frac{2\eta^{(k)}}{n(n-1)(1-\eta^{(k)})}.$$
(8)

Consequently, if we denote $\mathbf{1}_n = (1, ..., 1)^\top \in \mathbb{R}^n$ and $H_n = \frac{1}{n(n-1)}(\mathbf{1}_n\mathbf{1}_n^\top - I_n)$, then for each $k \ge 1$, as long as $(\eta^{(k)}, \alpha)$ satisfies
$$\eta^{(k)} \ll \frac{\|P\|}{n\|P\|_\infty}, \qquad \alpha \gg \frac{1}{n\|P\|}, \qquad \text{as } n \to \infty, \quad (9)$$
we have
$$\lim_{n\to\infty} \frac{\|S^{(k)}_\alpha - (\alpha P - H_n)\|}{\|\alpha P - H_n\|} = 0. \quad (10)$$

The above theorem implies that, for large $n$, as long as the diameter of $\{y^{(k)}_i\}_{1\le i\le n}$ remains sufficiently small and the exaggeration parameter $\alpha$ is sufficiently large, the adjacency matrix $S^{(k)}_\alpha$ behaves almost like a fixed matrix $\alpha P - H_n$ across the iterations. In other words, we may treat the updating equation (7) as an approximately linear equation
$$y^{(k+1)}_\ell \approx [I_n - hL(\alpha P - H_n)]\, y^{(k)}_\ell, \qquad \ell = 1, 2, \quad (11)$$
where the linear operator $I_n - hL(\alpha P - H_n)$ only relies on the Laplacian of a fixed weighted graph whose adjacency matrix is given by the scaled and shifted similarity matrix $\alpha P - H_n$. This essentially opens the door to our key result on the asymptotic equivalence between the early exaggeration stage and power iterations.

Before we formally present such a result, we need to first point out an important phenomenon concerning the global behavior of the low-dimensional map at the early exaggeration stage. Specifically, we make the following assumptions on the initialization and the tuning parameters $(\alpha, h, k)$:

(I1) $\{y^{(0)}_i\}_{1\le i\le n}$ satisfies $\min_{\ell\in[2]} \|y^{(0)}_\ell\|_2 > 0$, and $\max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty = O(1)$ as $n \to \infty$; and

(T1) the parameters $(\alpha, h, k)$ satisfy $k(nh\alpha\|P\|_\infty + h/n) = O(1)$ as $n \to \infty$.
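The degree and Laplacian operators of Definition 1, and one step of the linearized update (11), can be sketched numerically. The sketch below is minimal and illustrative: the matrix `S` is a random symmetric toy stand-in for $S^{(k)}_\alpha$, not the similarity matrix defined in (1).

```python
import numpy as np

def laplacian(A):
    """L(A) = D(A) - A, with D(A) the diagonal matrix of column sums (Definition 1)."""
    return np.diag(A.sum(axis=0)) - A

# Toy symmetric similarity matrix with zero diagonal (stand-in for S^(k)_alpha).
rng = np.random.default_rng(0)
n = 6
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0.0)

h = 0.1
y = rng.standard_normal(n)                     # one coordinate of the embedding
y_next = (np.eye(n) - h * laplacian(S)) @ y    # one step of the update (7)

# The Laplacian annihilates constant vectors, so a constant embedding is a fixed point.
assert np.allclose(laplacian(S) @ np.ones(n), 0.0)
```

Note that the fixed point on constant vectors is exactly the trivial eigenvector $n^{-1/2}\mathbf{1}_n$ discussed later in Section 2.2.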
Intuitively, Condition (I1) says that the initialization $\{y^{(0)}_i\}_{1\le i\le n}$ should not be simply all zeros or unbounded, whereas Condition (T1), as a consequence of (8), essentially requires the cumulative deviations of $hL(S^{(k)}_\alpha)$ from $hL(\alpha P - H_n)$ to be bounded. Our next result shows that, under these assumptions, the diameter of $\{y^{(k)}_i\}_{1\le i\le n}$ does not increase throughout the iterations, so the embedding remains localized within the initial range.

Proposition 3 (Localization) Suppose (I1) and (T1) hold. We have
$$\mathrm{diam}(\{y^{(k+1)}_i\}_{1\le i\le n}) \le C \max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty, \quad (12)$$
for some universal constant $C > 0$.

The above proposition confirms the globally localized and non-expansive behavior of $\{y^{(k)}_i\}_{1\le i\le n}$ over the early exaggeration stage observed in practice (Figure 1). Concerning Theorem 2, the step-specific condition $\eta^{(k)} \ll \|P\|/(n\|P\|_\infty)$ therein can be generalized to all finite $k$'s as long as the initialization $\{y^{(0)}_i\}_{1\le i\le n}$ is concentrated around 0, that is, $\max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty^2 \ll \|P\|/(n\|P\|_\infty)$. Furthermore, when $(\alpha, h)$ are chosen such that the step-wise deviation diminishes (i.e., $r_n = nh\alpha\|P\|_\infty + h/n \to 0$), Proposition 3 indicates that (10) may remain true for even larger numbers of iterations as long as $k = O(r_n^{-1})$.

2.2 Asymptotic Power Iterations, Implicit Spectral Clustering and Early Stopping

With the above graphical interpretation of the updating equation (7) in mind, we now present our key result concerning the asymptotic equivalence between the early exaggeration stage and a power method based on the Laplacian matrix $L(\alpha P - H_n)$.
In particular, we make the following assumptions on the initialization and the tuning parameters:

(I2) $\{y^{(0)}_i\}_{1\le i\le n}$ satisfies $\max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty^2 = o(\|P\|/(n\|P\|_\infty))$ as $n \to \infty$; and

(T1.D) the parameters $(\alpha, h, k)$ satisfy $\alpha \gg (n\|P\|)^{-1}$ and $k(nh\alpha\|P\|_\infty + h/n) = o(1)$ as $n \to \infty$.

Condition (I2) follows from the discussion subsequent to Proposition 3, which, along with Condition (T1.D) (analogous to but stronger than (T1)), ensures that the conditions for Theorem 2 and Proposition 3 hold simultaneously.

Theorem 4 (Asymptotic power iterations) Under Conditions (I1), (I2) and (T1.D), both (10) and (12) hold, and so does the asymptotic equivalence
$$\lim_{n\to\infty} \frac{\|y^{(k)}_\ell - [I_n - hL(\alpha P - H_n)]^k y^{(0)}_\ell\|_2}{\|y^{(0)}_\ell\|_2} = 0. \quad (13)$$

The above theorem suggests that the early exaggeration stage may be treated as a power method in the sense that
$$y^{(k)}_\ell \approx [I_n - hL(\alpha P - H_n)]^k y^{(0)}_\ell. \quad (14)$$
The normalization by $\|y^{(0)}_\ell\|_2$ in (13) ensures that the result is scale-invariant with respect to the initialization.

It is well known that, for a fixed matrix $G \in \mathbb{R}^{n\times n}$ with 1 as its unique largest eigenvalue in magnitude, the power iteration $y^{(k)} = G^k y^{(0)}$ converges to the associated eigenvector as $k \to \infty$. As a result, when treated as an approximate power method, the early exaggeration stage of t-SNE essentially aims to find the direction of the leading eigenvector(s) of the matrix $I_n - hL(\alpha P - H_n)$, which, as will be shown shortly, is actually equivalent to finding the eigenvector(s) associated with the smallest eigenvalue of the graph Laplacian $L(\alpha P - H_n)$, or the null space of $L(P)$.
Led by these observations, our next results concern the limiting behavior of the low-dimensional map $\{y^{(k)}_i\}_{1\le i\le n}$ as the number of iterations $k \to \infty$. Note that any Laplacian matrix has an eigenvalue 0 associated with the trivial eigenvector $n^{-1/2}\mathbf{1}$. Given the affinity (14) between t-SNE and the power method, we start by showing that the linear operator $[I_n - hL(\alpha P - H_n)]^k$ converges eventually to a projection operator associated with the null space of the Laplacian $L(P)$. In particular, we let $R \ge 1$ be the dimension of the null space of the Laplacian $L(P) \in \mathbb{R}^{n\times n}$, and assume

(T2) the parameters $(\alpha, h)$ satisfy $\kappa < h\lambda_{R+1}(L(\alpha P)) \le h\lambda_n(L(\alpha P)) < 1$ for some constant $\kappa \in (0, 1)$.

This assumption corresponds to the so-called "eigengap" condition in the random matrix literature, which gives the signal strength requirements for the recovery of the eigenvalues/eigenvectors and, in the meantime, the conditions on the tuning parameters.

Theorem 5 (Convergence of power iterations) Let $U \in O(n, R)$ be such that its columns form an orthogonal basis for the null space of $L(P)$. Suppose $kh = o(n)$ and (T2) hold. Then, we have
$$\lim_{k\to\infty} \frac{\|[I_n - hL(\alpha P - H_n)]^k y - UU^\top y\|_2}{\|y\|_2} = 0. \quad (15)$$

Combining Theorems 4 and 5, we know that for sufficiently large $n$ and $k$, the t-SNE iterates $y^{(k)}_\ell$ may converge to the projection of the initial vectors $y^{(0)}_\ell$ onto the null space of the Laplacian $L(P)$, that is,
$$y^{(k)}_\ell \approx UU^\top y^{(0)}_\ell, \qquad \ell \in [2]. \quad (16)$$

Now, to better understand the above theorem and its implications on the limiting behavior of t-SNE applied to clustered data, we study the null space of a special class of Laplacian matrices, corresponding to the family of weighted graphs consisting of $R \ge 2$ connected components.
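The convergence (15) can be illustrated with a small simulation. The sketch below builds a toy block-structured similarity matrix with two connected components (a hypothetical stand-in for $\alpha P$, with the lower-order $H_n$ term dropped for simplicity), runs the power iteration, and checks that the iterate matches the projection onto the Laplacian null space.

```python
import numpy as np

n1, n2 = 3, 4
n = n1 + n2
# Toy block-diagonal similarity: two clusters, uniform within-cluster weight.
P = np.zeros((n, n))
P[:n1, :n1] = 1.0 / 18
P[n1:, n1:] = 1.0 / 18
np.fill_diagonal(P, 0.0)

def laplacian(A):
    return np.diag(A.sum(axis=0)) - A

alpha, h = 12.0, 0.5                            # chosen so h*lambda_max(L(alpha P)) < 2
G = np.eye(n) - h * alpha * laplacian(P)        # dropping the O(1/n) H_n correction

# Null space of L(P): normalized cluster indicator vectors (Proposition 6).
U = np.zeros((n, 2))
U[:n1, 0] = 1 / np.sqrt(n1)
U[n1:, 1] = 1 / np.sqrt(n2)

rng = np.random.default_rng(1)
y = rng.standard_normal(n)
yk = np.linalg.matrix_power(G, 300) @ y         # power iteration, cf. (14)

# The iterate converges to the null-space projection U U^T y of (16):
assert np.allclose(yk, U @ U.T @ y, atol=1e-8)
```

Consistent with Theorem 7, the limit is constant within each cluster (each coordinate equals its cluster mean of $y^{(0)}$), so the embedding coordinates collapse cluster-wise.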
In fact, when the original data $\{X_i\}_{1\le i\le n}$ are well-clustered and the $\tau_i$'s are appropriately chosen, the family of disconnected weighted graphs arises naturally, since their adjacency matrices are good approximations of $P$ based on these data (Balakrishnan et al., 2011). We illustrate this point further in Section 4. In the following, we say a symmetric adjacency matrix $P$ is "well-conditioned" if its associated weighted graph has $R \ge 2$ connected components. Our next result characterizes the Laplacian null space corresponding to these disconnected weighted graphs.

Proposition 6 (Laplacian null space) Suppose $A \in \mathbb{R}^{n\times n}$ is symmetric and well-conditioned. Then the smallest eigenvalue of the Laplacian $L(A)$ is 0 with multiplicity $R$, and the associated eigen-subspace is spanned by $\{\theta_1, ..., \theta_R\}$, where for each $r \in \{1, ..., R\}$,
$$[\theta_r]_j = \begin{cases} 1/\sqrt{n_r} & \text{if the } j\text{-th node belongs to the } r\text{-th component}, \\ 0 & \text{otherwise}, \end{cases}$$
and $n_r$ is the number of nodes in the $r$-th connected component. In particular, up to a possible permutation of coordinates, any vector $u$ in the null space of $L(A)$ can be expressed as
$$u = \frac{a_1}{\sqrt{n_1}}\begin{bmatrix} \mathbf{1}_{n_1} \\ 0 \\ \vdots \\ 0 \end{bmatrix} + \frac{a_2}{\sqrt{n_2}}\begin{bmatrix} 0 \\ \mathbf{1}_{n_2} \\ \vdots \\ 0 \end{bmatrix} + \cdots + \frac{a_R}{\sqrt{n_R}}\begin{bmatrix} 0 \\ 0 \\ \vdots \\ \mathbf{1}_{n_R} \end{bmatrix}, \quad (17)$$
for some $a_1, ..., a_R \in \mathbb{R}$.

From the above proposition, for a well-conditioned matrix, the components of any $u$ in the Laplacian null space take at most $R$ distinct values, and whenever $|\{a_1, ..., a_R\}| = R$, two coordinates share the same value if and only if the corresponding nodes fall in the same connected component, i.e., the same cluster.
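Proposition 6 is easy to verify numerically. The sketch below builds a toy adjacency matrix with $R = 3$ connected components (unit weights, chosen only for illustration), checks that the Laplacian eigenvalue 0 has multiplicity 3, and that null-space projections are constant on each component as in (17).

```python
import numpy as np

# Block-diagonal adjacency: three connected components of sizes 2, 3, 4.
sizes = [2, 3, 4]
n = sum(sizes)
A = np.zeros((n, n))
i = 0
for m in sizes:
    A[i:i+m, i:i+m] = np.ones((m, m)) - np.eye(m)
    i += m

L = np.diag(A.sum(axis=0)) - A
eigvals, eigvecs = np.linalg.eigh(L)            # ascending eigenvalues

# Eigenvalue 0 has multiplicity R = 3 ...
assert np.sum(np.isclose(eigvals, 0.0)) == 3

# ... and any vector projected onto the null space is constant per component,
# with value equal to the component-wise mean (cf. the expansion (17)).
null = eigvecs[:, np.isclose(eigvals, 0.0)]
u = null @ null.T @ np.arange(float(n))
assert np.allclose(u[:2], u[0]) and np.allclose(u[2:5], u[2]) and np.allclose(u[5:], u[5])
```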
Combining (16) and (17), one can see that, for strongly clustered data, the output from the early exaggeration stage essentially converges to the eigenvectors associated with the Laplacian null space. This leads to our fourth practical advice at the end of Section 1.2.

We now generalize the analysis to the setting where the data $\{X_i\}_{1\le i\le n}$ is only weakly clustered, in the sense that there exists a well-conditioned symmetric matrix $P^*$ close to $P$ under properly chosen $\{\tau_i\}$, and the underlying graph associated with $P$ may not necessarily be disconnected. More specifically, we assume

(T2.D) there exists a symmetric and well-conditioned matrix $P^* \in \mathbb{R}^{n\times n}$ satisfying (T2) that is sufficiently close to $P$ in the sense that $kh\alpha\|L(P^* - P)\| = o(1)$.

For a given $P$ satisfying (T2.D), let $n_r$ with $r \in [R]$ be the size of the $r$-th connected component in the graph associated with $P^*$. Our next theorem obtains the implicit spectral clustering and early stopping properties of the early exaggeration stage.

Theorem 7 (Implicit clustering and early stopping) Suppose the similarity matrix $P$ and the tuning parameters $(\alpha, h, k)$ satisfy (T1.D) and (T2.D), and the initialization satisfies (I1) and (I2). Then there exists some permutation matrix $O \in \mathbb{R}^{n\times n}$ such that, for $\ell \in [2]$,
$$\lim_{(k,n)\to\infty} \frac{\|y^{(k)}_\ell - Oz_\ell\|_2}{\|y^{(0)}_\ell\|_2} = 0, \quad (18)$$
where
$$z_\ell = (\underbrace{z_{\ell 1}, ..., z_{\ell 1}}_{n_1}, \underbrace{z_{\ell 2}, ..., z_{\ell 2}}_{n_2}, ..., \underbrace{z_{\ell R}, ..., z_{\ell R}}_{n_R})^\top \in \mathbb{R}^n, \quad (19)$$
and $z_{\ell r} = \theta_r^\top y^{(0)}_\ell/\sqrt{n_r}$ for $r \in [R]$.

Theorem 7 describes the limiting behavior of the low-dimensional map $\{y^{(k)}_i\}_{1\le i\le n}$ as $(n, k) \to \infty$, when the original data is approximately clustered.
Specifically, the elements of $\{y^{(k)}_i\}_{1\le i\le n}$ associated with a connected component of the underlying graph converge cluster-wise towards a few points in $\mathbb{R}^2$. In particular, Theorem 7 suggests that, at the end of the early exaggeration stage, although the samples belonging to the same underlying cluster tend to be clustered together in the t-SNE embeddings, the cluster centers of the t-SNE embeddings depend only on the initialization, rather than on the actual positions of the underlying clusters. Therefore, if initialized randomly and noninformatively, the t-SNE embeddings at the end of the early exaggeration stage tend to preserve only the local structures (i.e., the closeness of the samples from the same cluster) but not the global structures (i.e., the relative positions of different clusters) of the original data (Kobak and Berens, 2019; Kobak and Linderman, 2021). This observation, as illustrated in Figure 6, leads to our second practical advice at the end of Section 1.2.

Our theory refines and improves the existing works such as Linderman and Steinerberger (2019) and Arora et al. (2018) in various aspects. Firstly, our theoretical framework formalizes and explains the asymptotic equivalence between the early exaggeration stage and the power iterations. The theory provides a precise description of the limiting behavior of the low-dimensional map and the theoretical conditions. Secondly, unlike the previous works where only one particular initialization and a relatively limited range of tuning parameters were considered, our analysis yields general conditions and allows for more flexible choices of the initialization procedures and tuning parameters. Finally, our analysis unveils the need for stopping early in the early exaggeration stage for weakly clustered data, which is a novel feature.
Specifically, both Conditions (T1.D) and (T2.D) allow $k \to \infty$ but in a controlled manner: whenever $\|L(P^* - P)\| \neq 0$, there is a data-dependent upper bound on the iteration number,
$$k \ll \frac{1}{h\alpha\|L(P^* - P)\|},$$
which becomes more stringent for weakly clustered data (i.e., $\|L(P^* - P)\|$ not too small). Such a phenomenon is also observed empirically (Figure 5), where failing to stop early would lead to false clustering.

2.3 Gradient Flow and Implicit Regularization

For $\ell \in \{1, 2\}$, let $\{\tilde y^{(k)}_\ell\}_{k\ge 0}$ be the sequence defined by the power iterations $\tilde y^{(k)}_\ell = [I_n - hL(\alpha P - H_n)]^k y^{(0)}_\ell$. Theorem 4 shows that $\{\tilde y^{(k)}_\ell\}_{k\ge 0}$ well approximates the t-SNE iterations $\{y^{(k)}_\ell\}_{k\ge 0}$ in the large sample limit. The sequence $\{\tilde y^{(k)}_\ell\}_{k\ge 0}$ admits the updating equation
$$\tilde y^{(k+1)}_\ell = \tilde y^{(k)}_\ell - hL(\alpha P - H_n)\tilde y^{(k)}_\ell, \qquad k \ge 0, \quad (20)$$
with the initial value $\tilde y^{(0)}_\ell = y^{(0)}_\ell$. Treating Equation (20) as an auxiliary gradient descent algorithm for the original algorithm (7), a continuous-time analysis can be developed accordingly, which yields interesting insights about the t-SNE iterations $\{y^{(k)}_\ell\}_{k\ge 0}$.

We begin by modeling $\{\tilde y^{(k)}_\ell\}_{k\ge 0}$ by a smooth curve $Y_\ell(t)$ with the Ansatz $\tilde y^{(k)}_\ell \approx Y_\ell(kh)$. Define a step function $y_{\ell,h}(t) = \tilde y^{(k)}_\ell$ for $kh \le t < (k+1)h$; as $h \to 0$, $y_{\ell,h}(t)$ approaches $Y_\ell(t)$ satisfying
$$\dot Y_\ell(t) = -L(\alpha P - H_n)Y_\ell(t), \quad (21)$$
with the initial value $Y_\ell(0) = \tilde y^{(0)}_\ell = y^{(0)}_\ell$.
The above first-order differential equation (21) is usually referred to as the gradient flow associated with the power iteration sequence $\{\tilde y^{(k)}_\ell\}_{k\ge 0}$, whose limiting behavior can be studied through the step function $y_{\ell,h}(t)$. The following proposition provides a non-asymptotic uniform upper bound on the deviation of $y_{\ell,h}(t)$ from $Y_\ell(t)$ over $t \in [0, T]$, and on that of $\tilde y^{(k)}_\ell$ from $Y_\ell(kh)$ over $k \le T/h$.

Proposition 8 (Gradient flow) For $\ell = 1, 2$, and any given $T > 0$, we have
$$\sup_{t\in[0,T]} \frac{\|y_{\ell,h}(t) - Y_\ell(t)\|_2}{\|Y_\ell(t)\|_2} \le Th\|L(\alpha P - H_n)\|^2, \quad (22)$$
where $y_{\ell,h}(t)$ is the continuous-time step process of $\{\tilde y^{(k)}_\ell\}$ generated by (20), and $Y_\ell(t)$ is the solution to the ordinary differential equation (21). As a consequence, for $t = hk$, if $kh^2\|L(\alpha P - H_n)\|^2 \to 0$ as $n \to \infty$, then for $\ell \in \{1, 2\}$,
$$\lim_{(n,k)\to\infty} \frac{\|\tilde y^{(k)}_\ell - Y_\ell(hk)\|_2}{\|y^{(0)}_\ell\|_2} = 0. \quad (23)$$

Combining Theorem 4 and Proposition 8, we obtain the approximation $y^{(k)}_\ell \approx Y_\ell(kh)$ over a range of $k \ge 0$, for properly chosen parameters $(\alpha, h, k)$ and initialization. Consequently, the properties of the solution path $Y_\ell(t)$ may provide important insights on the behavior of the t-SNE iterations at the early exaggeration stage. We start by stating the following proposition concerning the explicit expression of $Y_\ell(t)$.

Proposition 9 (Solution path) For $\ell \in \{1, 2\}$, the first-order linear differential equation (21) with initial value $Y_\ell(0) = y^{(0)}_\ell$ has the unique solution $Y_\ell(t) = \exp(-tL(\alpha P - H_n))\,y^{(0)}_\ell$, where $\exp(\cdot)$ is the matrix exponential defined as $\exp(A) = \sum_{k=0}^\infty \frac{1}{k!}A^k$. In particular, suppose $L(P)$ has the eigendecomposition $L(P) = \sum_{i=1}^n \lambda_i u_i u_i^\top$, where $0 = \lambda_1 \le \cdots$
$\le \lambda_n$ and $u_1 = n^{-1/2}\mathbf{1}_n$. Then we also have
$$Y_\ell(t) = (u_1^\top y^{(0)}_\ell)u_1 + \sum_{i=2}^n e^{-t(\alpha\lambda_i - \frac{1}{n-1})}(u_i^\top y^{(0)}_\ell)u_i. \quad (24)$$

Several important observations about the solution path $Y_\ell(t)$ can be made. Firstly, by Proposition 9, for $\{u_1, ..., u_m\}$ where $m \in [n]$ is the largest integer such that $\alpha\lambda_m \le \frac{1}{n-1}$, we have
$$\lim_{t\to\infty} Y_\ell(t) \in \mathrm{span}(\{u_1, ..., u_m\}). \quad (25)$$
This can be treated as a continuous version of the limiting behavior of the power iterations obtained in Theorem 5: under the conditions of Theorem 5, we have $\alpha\lambda_m \le \frac{1}{n-1}$ for all $m \in [R]$ but $\alpha\lambda_{R+1} > \frac{1}{n-1}$, so that (25) implies that $Y_\ell(t)$ converges to the null space of $L(P)$.

Secondly, as long as $t = O(n)$, by the orthogonality of $\{u_i\}$, we have
$$\|Y_\ell(t)\|_2^2 \lesssim \sum_{i=1}^n e^{-2t\alpha\lambda_i}(u_i^\top y^{(0)}_\ell)^2. \quad (26)$$
The right-hand side is monotonically nonincreasing in $t$. Hence, the average distance of the rows in $(Y_1(hk), Y_2(hk))$ to the origin remains non-expansive over the iterations and is bounded, up to a constant, by that of $(Y_1(0), Y_2(0))$. This result echoes Proposition 3 based on the discrete-time analysis.

The third and more insightful observation from (24) is its implications on the finite-time behavior of the original t-SNE sequence $\{y^{(k)}_\ell\}_{k\ge 1}$, which complements our discrete-time analysis. Specifically, for finite $t > 0$, the coefficient of the $i$-th basis vector $u_i$ in $Y_\ell(t)$ is proportional to $e^{-t\alpha\lambda_i}$, which is nonincreasing in $\lambda_i$. Consequently, (24) implies that, in the early steps of the iterations, the t-SNE algorithm imposes an implicit regularization effect on the low-dimensional map $\{y^{(k)}_i\}_{1\le i\le n}$, in the sense that
$$y^{(k)}_\ell \approx n^{-1}(\mathbf{1}_n^\top y^{(0)}_\ell)\mathbf{1}_n + \sum_{i=2}^n e^{-kh(\alpha\lambda_i - \frac{1}{n-1})}(u_i^\top y^{(0)}_\ell)u_i.$$
(27)

Compared with the limit (25) or (16), during the early steps of the iterations, $y^{(k)}_\ell$ is regularized as a conical sum of all the eigenvector basis vectors $\{u_i\}_{1\le i\le n}$, with larger weights on the eigenvectors $u_i$ corresponding to the smaller eigenvalues of $L(P)$, and smaller weights on those corresponding to the larger eigenvalues of $L(P)$. As the iterations proceed, the contributions from the less informative eigenvectors, with larger eigenvalues $\lambda_i$ such that $\alpha\lambda_i > \frac{1}{n-1}$, decrease exponentially in $k$, whereas the contributions from the more informative eigenvectors, with smaller eigenvalues $\lambda_i$ such that $\alpha\lambda_i < \frac{1}{n-1}$, increase with $k$.

Importantly, the inclusion of all the eigenvectors helps to better summarize the cluster information in the original data and to avoid convergence to the trivial eigenvector $n^{-1/2}\mathbf{1}_n$. Indeed, the convergence (25) by itself may not lead to a cluster structure in the limit: in many applications with weakly clustered data, the graph corresponding to $P$ may be simply connected under finite samples, so that the null space $\mathrm{span}(\{u_1, ..., u_m\})$ is effectively the one-dimensional space spanned by $n^{-1/2}\mathbf{1}$ alone. However, as our next theorem shows, the benefit of the implicit regularization, brought about by stopping early at the exaggeration stage, can be seen in the creation of desirable clusters in $\{y^{(k)}_i\}_{1\le i\le n}$ for weakly clustered data with approximately block-structured $P$. In particular, we make the following assumptions, analogous to (T1.D) and (T2.D) in the discrete-time analysis:

(T1.C) the parameters $(\alpha, h, t)$ satisfy $\alpha \gg [n\lambda_{R+1}(L(P))]^{-1}$ and $t = o(n)$ as $n \to \infty$;

(T2.C) there exists a symmetric and well-conditioned matrix $P^* \in \mathbb{R}^{n\times n}$ such that $\lambda_{R+1}(L(P^*)) \gg \max\{(t\alpha)^{-1}, \|L(P^* - P)\|\}$, and $t\alpha\|L(P^* - P)\| = o(1)$ as $n \to \infty$.
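The closeness between the discrete update (20) and the gradient-flow solution of Proposition 9 can be checked numerically. The sketch below is illustrative only: `P` is a small random symmetric toy matrix, and the matrix exponential is computed through the eigendecomposition of the symmetric matrix $M = L(\alpha P - H_n)$, so the bound asserted at the end is a loose numerical stand-in for (22), not the sharp constant.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=0)) - A

rng = np.random.default_rng(2)
n = 8
P = rng.random((n, n)) * 1e-2                  # toy symmetric similarity matrix
P = (P + P.T) / 2
np.fill_diagonal(P, 0.0)

alpha = 12.0
Hn = (np.ones((n, n)) - np.eye(n)) / (n * (n - 1))
M = laplacian(alpha * P - Hn)

# Matrix exponential of the symmetric M via its eigendecomposition.
w, V = np.linalg.eigh(M)
def flow(t, y0):
    """Y(t) = exp(-t M) y0, the gradient-flow solution of Proposition 9."""
    return V @ (np.exp(-t * w) * (V.T @ y0))

y0 = rng.standard_normal(n)
h, k = 1e-3, 200
yk = y0.copy()
for _ in range(k):
    yk = yk - h * (M @ yk)                     # discrete update (20)

# Deviation from the flow at t = kh is small, in the spirit of (22)-(23).
assert np.linalg.norm(yk - flow(h * k, y0)) / np.linalg.norm(y0) < 1e-3
```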
Similar to the previous conditions, Conditions (T1.C) and (T2.C) concern the approximate block structure of $P$, and ensure sufficient exaggeration and early stopping of the iterations.

Theorem 10 (Implicit regularization, clustering and early stopping) Under Conditions (I1), (T1.C) and (T2.C), let $U_0 \in O(n, R)$ be such that its columns span the null space of $L(P^*)$. Then, we have
$$\lim_{n\to\infty} \frac{\|Y_\ell(t) - U_0U_0^\top Y_\ell(t)\|_2}{\|Y_\ell(0)\|_2} = 0, \qquad \ell \in \{1, 2\}, \quad (28)$$
and, for $z_\ell$ defined in Theorem 7, there exists a permutation matrix $O \in \mathbb{R}^{n\times n}$ such that
$$\lim_{n\to\infty} \frac{\|Y_\ell(t) - Oz_\ell\|_2}{\|Y_\ell(0)\|_2} = 0, \qquad \ell \in \{1, 2\}. \quad (29)$$

An immediate consequence of the above theorem is the following corollary, which arrives at the same conclusion as Theorem 7 through a different route.

Corollary 11 Suppose the conditions of Theorems 4 and 10 hold with $t = hk$, and $k\alpha^2h^2\|L(P - H_n)\|^2 \to 0$. Then the conclusion of Theorem 7 holds.

The above theorems provide a deeper theoretical explanation of the need for stopping early at the exaggeration stage. On the one hand, the number of iterations should be sufficiently large so that $\{y^{(k)}_i\}_{1\le i\le n}$ moves away from the initialization and is sufficiently close to a subspace where the underlying cluster information is properly stored. On the other hand, the iterations should also be stopped early for weakly clustered data to avoid "overshooting," that is, convergence to the null space of the superficial Laplacian $L(P)$, which may only include the non-informative trivial eigenvector $n^{-1/2}\mathbf{1}_n$ (Figure 5, right).

3. Analysis of the Embedding Stage

We have shown in Section 2 that the iterations in the early exaggeration stage essentially create clusters in the low-dimensional map $\{y^{(k)}_i\}_{1\le i\le n}$ that agree with those underlying $\{X_i\}_{1\le i\le n}$.
However, as indicated by Proposition 3, so far the low-dimensional map is concentrated and localized around zero, which may not be ideal for visualization purposes. In addition, by Theorem 7, much of the information about $\{X_i\}_{1\le i\le n}$ other than its cluster memberships is not reflected in the low-dimensional map. In this section, we show that, after the transition to the embedding stage, the t-SNE iterations (5) essentially start by amplifying and refining the existing cluster structures in the low-dimensional map and then aim at a proper embedding of the original data.

We show that, starting from the embedding stage, the diameter of the low-dimensional map $\{y^{(k)}_i\}_{1\le i\le n}$ grows fast and the points move in clusters as inherited from the early exaggeration stage. Importantly, over the iterations, the elements of $\{y^{(k)}_i\}_{1\le i\le n}$ belonging to different clusters would in general move away from each other, resulting in an enlarged visualization with more separated clusters. We refer to these iteration steps, which exhibit such a drastically expansive and intercluster-repulsive behavior of $\{y^{(k)}_i\}_{1\le i\le n}$, as the amplification phase of the embedding stage. We also show that, after a certain point, the conditions for the fast expansion phenomenon no longer hold, which likely causes a change of behavior into a new phase, which we refer to as the stabilization phase. This is in line with the empirical observation (Figure 3) that, after a few fast expansive iterations in the amplification phase, the speed of expansion/amplification gradually reduces towards zero, and in the stabilization phase the diameter only increases very slowly with the iterations.

Figure 3: An illustration of the two phases of the embedding stage based on the 1600 MNIST samples described in Section 5.
The iterations are counted from the beginning of the embedding stage, and the amplification rate is the ratio between the diameters of two consecutive embeddings.

Recall that the updating equation at the embedding stage is
$$y^{(k+1)}_i = y^{(k)}_i + h' \sum_{j\neq i} S^{(k)}_{ij}(y^{(k)}_j - y^{(k)}_i), \quad (30)$$
where $h'$ is the step size, which may not be identical to the one in the early exaggeration stage. To understand the behavior of t-SNE at this stage, we start with the following proposition characterizing the matrix $S^{(k)} = (S^{(k)}_{ij})_{1\le i,j\le n}$ over the amplification phase.

Proposition 12 For any integer $k$, if $\mathrm{diam}(\{y^{(k)}_i\}_{1\le i\le n}) = o(1)$ as $n \to \infty$, then, for any $i, j \in [n]$ such that $i \neq j$,

1. if $\lim_{n\to\infty} n^2 p_{ij} = 0$, it holds that $S^{(k)}_{ij} = -\frac{1+O(\eta^{(k)})}{n(n-1)}$ as $n \to \infty$; and

2. if $\lim_{n\to\infty} n^2 p_{ij} \ge c$ for some constant $c > 0$, it holds that $|S^{(k)}_{ij}| \asymp p_{ij}$ as $n \to \infty$.

Roughly speaking, Proposition 12 says that over the amplification phase, the matrix $S^{(k)} = (S^{(k)}_{ij})_{1\le i,j\le n}$ essentially has two types of entries, determined by the magnitudes of the corresponding entries of $P$. Specifically, $S^{(k)}_{ij}$ is negative with magnitude $n^{-2}$ if $p_{ij}$ is much smaller than $n^{-2}$, and otherwise $S^{(k)}_{ij}$ has the same magnitude as $p_{ij}$. This observation leads to the next theorem, which provides important insights on the updating equation (30) by partitioning the contributions of $\{y^{(k)}_i\}_{1\le i\le n}$ to an updated $y^{(k+1)}_i$ into a few major components, each corresponding to a distinct cluster in the original data. To this end, we consider again a similarity matrix $P$ that is only approximately well-conditioned, as characterized by the following assumption.
(T2.E) There exists a symmetric and well-conditioned matrix $P^* \in \mathbb{R}^{n\times n}$ satisfying (T2.D), $\lim_{n\to\infty} n^2\|P - P^*\|_\infty = 0$, and $\lim_{n\to\infty} \frac{n_r}{n} = \gamma_r \in (0, 1)$ for each $r \in [R]$.

The existence of a well-conditioned $P^*$ induces an equivalence relation on $[n]$ characterizing the underlying cluster membership. Specifically, for any $i, j \in [n]$, we denote $i \sim j$ if and only if the $i$-th node and the $j$-th node belong to the same graph component. Therefore, we have the partition $[n] = \cup_{r\in[R]} H_r$ for mutually disjoint sets $\{H_r\}_{1\le r\le R}$, with $H_r$ corresponding to the $r$-th equivalence class.

Next, we make assumptions on the initialization and the parameters $(\alpha, h, K_0)$ in the early exaggeration stage, where $K_0 = K_0(n) \to \infty$ is the total number of iterations in that stage. Specifically, we assume

(I3) the initialization is chosen such that $\|y^{(0)}_1\|_2 \asymp \|y^{(0)}_2\|_2$, $\max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty = o(n^{-1/2})$ as $n \to \infty$, and there exists some constant $C > 1$ such that $z_\ell$ defined in Theorem 7 satisfies $C^{-1} \le n|z_{\ell i} - z_{\ell j}|/\|y^{(0)}_\ell\|_2 \le C$ for any $i, j \in [R]$ such that $i \neq j$, and $\ell \in \{1, 2\}$; and

(T1.E) the parameters $(\alpha, h, K_0)$ in (6) satisfy (T1.D), and, for $R_n = (1-\kappa)^{K_0} + hK_0[(\alpha n\|P\|_\infty + 1/n)\cdot \max_{\ell\in[2]} \|y^{(0)}_\ell\|_\infty^2 + \alpha\|L(P^* - P)\|]$, we have $nR_n(1 + n^2\|P^*\|_\infty) = o(1)$ as $n \to \infty$.

Condition (I3) is mild, as it can be satisfied with high probability by a straightforward random initialization procedure, to be presented shortly. Condition (T1.E) is analogous to but slightly stronger than (T1.D), by requiring a smaller cumulative approximation error $R_n$ between $L(S^{(k)}_\alpha)$ and $L(\alpha P^* - H_n)$, that is, more distinct clusters in $\{X_i\}_{1\le i\le n}$.
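The two-type structure of $S^{(k)}$ described in Proposition 12 already suffices to produce intercluster repulsion under the update (30). The sketch below is a toy simulation, not the actual t-SNE similarities: the within-cluster weight `5e-3` and the two tight initial clusters are hypothetical choices, with cross-cluster entries fixed at $-1/(n(n-1))$ as in case 1 of the proposition.

```python
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 5
n = n1 + n2
labels = np.array([0] * n1 + [1] * n2)

# Two-type structure: positive within-cluster entries (order p_ij),
# negative cross-cluster entries of order n^{-2} (Proposition 12).
S = np.full((n, n), -1.0 / (n * (n - 1)))
for r in (0, 1):
    idx = np.where(labels == r)[0]
    S[np.ix_(idx, idx)] = 5e-3
np.fill_diagonal(S, 0.0)

# Tight initial clusters around two nearby centers (end of early exaggeration).
y = rng.normal(scale=1e-3, size=(n, 2))
y[labels == 1] += np.array([5e-3, 0.0])

def diam(y):
    return max(np.linalg.norm(a - b) for a in y for b in y)

h_prime = 1.0
d0 = diam(y)
for _ in range(30):
    # Update (30): y_i += h' * sum_j S_ij (y_j - y_i).
    y = y + h_prime * (S @ y - S.sum(axis=1, keepdims=True) * y)

assert diam(y) > d0     # intercluster repulsion enlarges the embedding
```

Consistent with Theorem 13, the negative cross-cluster entries act as repulsive forces between cluster centers, so the diameter grows while points stay close to their own cluster.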
Finally, for the parameters $(h', K_1)$ in the embedding stage, where $K_1 = K_1(n)$ is the number of iterations within the amplification phase, we make the following assumption that controls the cumulative approximation error in $S^{(k)}$ as suggested by Proposition 12.

(T3.E) $\mathrm{diam}(\{y_i^{(K_0+K_1)}\}_{1\le i\le n}) = o(1)$, and the parameter $h'$ in (30) satisfies $K_1h'(n\|P^*\|_\infty + 1/n) = O(1)$ as $n\to\infty$.

Theorem 13 (Intercluster repulsion) Under Conditions (T1.E), (T2.E), (T3.E) and (I3), for each $K_0 \le k \le K_0 + K_1$ and any $i \in [n]$, we have
$$y_i^{(k+1)} = y_i^{(k)} + \sum_{r\in[R]\setminus r_0} f_{ir}^{(k)} + \epsilon_i^{(k)}, \qquad (31)$$
where $r_0 \in [R]$ is such that $i \in H_{r_0}$, $\lim_{n\to\infty}\|\epsilon_i^{(k)}\|_2/\|f_{ir}^{(k)}\|_2 = 0$ for all $r \in [R]\setminus r_0$, and
$$f_{ir}^{(k)} = \frac{h'|H_r|}{n(n-1)}\Big(y_i^{(k)} - \frac{1}{|H_r|}\sum_{j\in H_r} y_j^{(k)}\Big) \in \mathbb{R}^2.$$
In addition, we have
$$\sup_{K_0\le k\le K_0+K_1}\ \max_{(i,j):\, i\sim j}\ \|y_i^{(k)} - y_j^{(k)}\|_2 \ll n^{-1}(\|y_1^{(0)}\|_2 + \|y_2^{(0)}\|_2), \qquad (32)$$
and
$$\inf_{K_0\le k\le K_0+K_1}\ \min_{(i,j):\, i\nsim j}\ \|y_i^{(k)} - y_j^{(k)}\|_2 \gtrsim n^{-1}(\|y_1^{(0)}\|_2 + \|y_2^{(0)}\|_2). \qquad (33)$$

A few remarks about the above theorem are in order. Firstly, Conditions (T1.E), (T2.E) and (I3) concerning the initialization, parameter selection and the number of iterations in the early exaggeration stage are not only compatible but also sufficient for the previous results, including Theorems 4 and 7. This suggests that the intercluster repulsive phenomenon at the embedding stage actually relies on the properties of the outputs from the early exaggeration stage, again yielding the necessity of the early exaggeration, or equivalent techniques. Secondly, as indicated by the next theorem, Condition (I3) on the initialization can be satisfied by the following simple local random initialization procedure.
Theorem 14 (Random initialization) For any sequence $\sigma_n \to 0$ as $n\to\infty$, let $y_\ell^{(0)} = \sigma_n g_\ell/\|g_\ell\|_2$, where $g_\ell \in \mathbb{R}^n$ for $\ell \in [2]$ is independently generated from a standard multivariate normal distribution. Then $\{y_i^{(0)}\}_{1\le i\le n}$ satisfies Condition (I3) with probability at least $1 - \delta$ for some sufficiently small constant $\delta > 0$.

Thirdly, the above theorem provides a precise characterization of the kinematics of each $y_i^{(k)}$ during the iterations, and its reliance on the data points $\{y_i^{(k-1)}\}$ from the previous step, as well as on the cluster structure inherited from the early exaggeration stage. Specifically, $f_{ir}^{(k)}$ summarizes the contributions from the points $\{y_i^{(k)}\}_{i\in H_r}$ in the $r$-th cluster to the new point $y_i^{(k+1)}$. The theorem implies that, in the amplification phase, the behavior of $\{y_i^{(k)}\}_{1\le i\le n}$ is mainly driven by the relative positions of the $R$ clusters produced in the early exaggeration stage: for each point, the vector sum of the repulsive forces coming from all the other clusters at their current positions determines the direction and distance of its movement in this iteration (Figure 2). As a consequence, after each iteration the diameter of $\{y_i^{(k)}\}_{1\le i\le n}$ increases, until the end of the amplification phase, that is, until Condition (T3.E), or more specifically $\mathrm{diam}(\{y_i^{(k)}\}_{1\le i\le n}) = o(1)$, no longer holds. This process improves the visualization quality by making the clusters more distinct and separated (Figure 1 with $k = 40$ and $80$). Our next result confirms the intuition that the diameter of $\{y_i^{(k)}\}_{1\le i\le n}$ is bound to increase after each iteration in the amplification phase of the embedding stage.

Theorem 15 (Expansion) Suppose the conditions of Theorem 13 hold.
If in addition $\|P^*\|_\infty \lesssim n^{-2}$, then for any $k \in \{K_0, K_0+1, \ldots, K_1\}$, we have
$$\mathrm{diam}(\{y_i^{(k+1)}\}_{1\le i\le n}) > \mathrm{diam}(\{y_i^{(k)}\}_{1\le i\le n}), \qquad (34)$$
where $\mathrm{diam}(\{y_i^{(k+1)}\}_{1\le i\le n}) - \mathrm{diam}(\{y_i^{(k)}\}_{1\le i\le n}) \gtrsim \frac{h'}{n^2}\min_{\ell=1,2}\|y_\ell^{(0)}\|_2$.

Once the diameter of $\{y_i^{(k)}\}_{1\le i\le n}$ exceeds a certain threshold, that is, when $\mathrm{diam}(\{y_i^{(k)}\}_{1\le i\le n})$ is at least of constant order, we arrive at the final stabilization phase. In this phase, the condition of Proposition 12 is violated, and, unlike what is claimed in part one of Proposition 12, the entries of the matrix $S^{(k)}$ corresponding to the smaller entries in $P$, that is, the $p_{ij}$'s with $p_{ij} \ll n^{-2}$, no longer remain at the almost constant value $\frac{1}{n(n-1)}$. In particular, the sign of $S_{ij}^{(k)}$ generally depends on the relative magnitudes of $p_{ij}$ and $q_{ij}$. We rewrite (30) as
$$y_i^{(k+1)} = y_i^{(k)} + h'\sum_{j\neq i}\frac{p_{ij} - q_{ij}^{(k)}}{1 + d_{ij}^{(k)}}\big(y_j^{(k)} - y_i^{(k)}\big). \qquad (35)$$
In the stabilization phase, the new position $y_i^{(k+1)}$ is determined by the starting point $y_i^{(k)}$ and the averaged contributions from each of the other data points $\{y_j^{(k)}\}_{j\neq i}$. The contribution from $y_j^{(k)}$ to $y_i^{(k+1)}$ is either in or against the direction of $y_j^{(k)} - y_i^{(k)}$, depending on $\mathrm{sign}(p_{ij} - q_{ij})$. If $\mathrm{sign}(p_{ij} - q_{ij}) = -1$, that is, the similarity between $y_i^{(k)}$ and $y_j^{(k)}$ as measured by $q_{ij}$ is greater than the similarity between $X_i$ and $X_j$ as measured by $p_{ij}$, the contribution from $y_j^{(k)}$ to $y_i^{(k+1)}$ is in the direction of $y_i^{(k)} - y_j^{(k)}$, resulting in a repulsive force that enlarges the distance between $y_i^{(k+1)}$ and $y_j^{(k+1)}$ after the iteration.
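As a concrete illustration, the stabilization-phase update (35) can be sketched numerically as follows. This is a minimal sketch, assuming the standard t-SNE definitions $d_{ij} = \|y_i - y_j\|_2^2$ and $q_{ij} = (1 + d_{ij})^{-1}/\sum_{k\neq l}(1 + d_{kl})^{-1}$; the function and variable names are ours.

```python
import numpy as np

def stabilization_step(Y, P, h_prime):
    """One stabilization-phase iteration of update (35):
    y_i <- y_i + h' * sum_{j != i} (p_ij - q_ij)/(1 + d_ij) * (y_j - y_i)."""
    diff = Y[:, None, :] - Y[None, :, :]      # diff[i, j] = y_i - y_j
    d = np.sum(diff ** 2, axis=-1)            # squared distances d_ij
    w = 1.0 / (1.0 + d)
    np.fill_diagonal(w, 0.0)
    Q = w / w.sum()                           # low-dimensional similarities q_ij
    coef = (P - Q) * w                        # (p_ij - q_ij) / (1 + d_ij)
    np.fill_diagonal(coef, 0.0)
    # sum_j coef_ij (y_j - y_i) = -sum_j coef_ij * diff[i, j]
    return Y - h_prime * np.einsum('ij,ijk->ik', coef, diff)

# Two points at unit distance, with hypothetical input similarities:
# p_ij > q_ij pulls them together, p_ij < q_ij pushes them apart.
Y0 = np.array([[0.0, 0.0], [1.0, 0.0]])
P_close = np.array([[0.0, 0.9], [0.9, 0.0]])   # high-dim pair very similar
P_far = np.array([[0.0, 0.1], [0.1, 0.0]])     # high-dim pair dissimilar
Y_attract = stabilization_step(Y0, P_close, h_prime=0.5)
Y_repel = stabilization_step(Y0, P_far, h_prime=0.5)
```

In the first case the sign of $p_{ij} - q_{ij}$ is positive and the pair moves closer; in the second it is negative and the pair moves apart, matching the force interpretation in the text.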
Similarly, if $\mathrm{sign}(p_{ij} - q_{ij}) = 1$, the similarity between $y_i^{(k)}$ and $y_j^{(k)}$ is smaller than that between their counterparts in $\{X_i\}_{1\le i\le n}$, so the contribution from $y_j^{(k)}$ to $y_i^{(k+1)}$ is in the opposite direction, $y_j^{(k)} - y_i^{(k)}$, resulting in an attractive force that reduces the distance between $y_i^{(k+1)}$ and $y_j^{(k+1)}$ after the iteration. The iterations over the stabilization phase aim to locally adjust the relative positions in the low-dimensional map to make the final visualization more reliable and faithful.

Remark 16 In practice, the expansion and repulsion effects help make the clusters identified in the early exaggeration step more salient in the final visualization, and possibly more informative in terms of the local structures within the clusters. This is especially helpful if two clusters are positioned too close to each other at the end of the early exaggeration stage, as an artifact of the random initialization (e.g., the middle column of Figure 5). Moreover, the intercluster repulsion phenomenon explains the occasional appearance of false clusters in the t-SNE visualization (Kobak and Linderman, 2021). Specifically, our theory indicates that false clusters may appear due to an incidental combination of overlapped clusters from the early exaggeration stage with random initialization, and the intercluster repulsion from the embedding stage (Figures 5 and 6). This leads to our third piece of general practical advice at the end of Section 1.2.

4. Application I: Visualizing Model-Based Clustered Data

In the previous sections, we established the theoretical properties of the basic t-SNE algorithm under general conditions on the parameters $(\alpha, h, h', K_1, K_2)$, the initialization, and the similarity matrix $P$ constructed from the original data.
In this section, we apply our general theory to two concrete examples of clustered data, one generated from a Gaussian mixture model and another from a noisy nested sphere model.

4.1 Gaussian Mixture Model

Consider the Gaussian mixture model
$$X_i \mid z_i = r \sim N(\mu_r, \Sigma), \qquad z_i \overset{i.i.d.}{\sim} \mathrm{Multinomial}(\pi_1, \ldots, \pi_R), \qquad \text{for } i \in [n], \qquad (36)$$
where $\mu_r \in \mathbb{R}^p$ and $\sum_{r=1}^R \pi_r = 1$. We make the following assumptions.

(C1) The mixing proportions $\{\pi_r\}_{1\le r\le R}$ satisfy $\min_r \pi_r \ge c > 0$.

(C2) There exists some large constant $C' > 0$ such that $\rho^2 = \min_{1\le j\neq k\le R}\|\mu_j - \mu_k\|_2^2 \ge C'\max\{p, \log n\}$.

(C3) There exists some constant $C > 0$ such that the population covariance matrix $\Sigma \in \mathbb{R}^{p\times p}$ satisfies $C^{-1} \le \lambda_1(\Sigma) \le \lambda_p(\Sigma) \le C$ and $\mathrm{tr}(\Sigma)/p \le C$.

Under the above Gaussian mixture model, we obtain the following corollary, which provides the conditions for the theoretical results presented in the previous sections.

Corollary 17 Suppose Conditions (C1), (C2) and (C3) hold, and $\tau_i^2 \asymp \max\{p, \log n\}$. If $\alpha \gg 1$, $K_0h = o(n)$, $h\alpha \asymp n$, $1 \ll K_0 \ll \exp\{\frac{\rho^2}{\max\{p, \log n\}}\}$ and $K_0h\alpha\sigma_n^2\log n = o(n^2)$, then Conditions (T1.D) and (T2.D) hold. If in addition $\log n \ll K_0 \ll n^{-1}\exp\{\frac{\rho^2}{\max\{p, \log n\}}\}$, $K_0h\alpha\sigma_n^2\log n = o(n)$, and $K_1h' = O(n)$, then Conditions (T1.E), (T2.E) and (T3.E) hold.

As a consequence, suitable choices of the tuning parameters $(\alpha, h, h', K_0, K_1)$ under the Gaussian mixture model can be determined efficiently. For example, if $\rho^2 \gtrsim \log n \cdot \max\{p, \log n\}$, one could choose $K_0 = \lfloor(\log n)^2\rfloor$, $\sigma_n = (\log n)^{-2}$, $h = h' = n^\delta$ and $\alpha = n^{1-\delta}$ for any constant $\delta \in (0, 1)$.
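A minimal sketch of drawing data from the mixture model (36) and computing the theory-guided tuning parameters just described. The particular placement of the means $\mu_r$ (scaled coordinate axes, giving pairwise squared separation $2\rho^2$) is an illustrative choice compatible with the separation condition (C2), not a prescription from the paper, and the function names are ours.

```python
import numpy as np

def sample_gmm(n, p, R, rho2, pi, rng):
    """Draw n samples from the Gaussian mixture (36) with Sigma = I_p.
    mu_r = sqrt(rho2) * e_r, so distinct means are sqrt(2 * rho2) apart."""
    mus = np.sqrt(rho2) * np.eye(p)[:R]
    z = rng.choice(R, size=n, p=pi)
    X = mus[z] + rng.standard_normal((n, p))
    return X, z

def theory_guided_params(n, delta):
    """Tuning parameters from the example following Corollary 17:
    K0 = floor((log n)^2), sigma_n = (log n)^-2, h = h' = n^delta,
    alpha = n^(1 - delta)."""
    return {
        'K0': int(np.floor(np.log(n) ** 2)),
        'sigma_n': np.log(n) ** -2,
        'h': n ** delta,
        'alpha': n ** (1 - delta),
    }

# The numerical study in the text: n = 1500, p = 100, R = 6, rho2 = p.
rng = np.random.default_rng(0)
X, z = sample_gmm(n=1500, p=100, R=6, rho2=100.0,
                  pi=[0.1, 0.1, 0.1, 0.15, 0.25, 0.3], rng=rng)
params = theory_guided_params(n=1500, delta=1 / 3)
```

For $n = 1500$ this gives $K_0 = \lfloor(\log 1500)^2\rfloor = 53$, the value used in the experiments below.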
By Corollary 17, Conditions (T1.D) and (T2.D) hold, so the conclusions of Theorem 7 follow for $k = K_0$; meanwhile, Conditions (T1.E), (T2.E) and (T3.E) also hold, so the conclusions of Theorem 13 hold for each $K_1$ with $K_0 \le K_1 \le n^{1-\delta}$. Note that the above results apply both to low-dimensional settings where $p = o(n)$ and to high-dimensional settings where $p \gtrsim n$. To demonstrate the effectiveness of the theoretical guidance, we generate $n = 1500$ samples of dimension $p = 100$ from a Gaussian mixture model with $R = 6$, $\rho^2 = p$, $\Sigma = I_p$, and cluster proportion vector $(0.1, 0.1, 0.1, 0.15, 0.25, 0.3)$. We use the above tuning parameters with $\delta \in \{1/2, 1/3\}$ and perplexity $= 30$ (the default). The t-SNE embeddings at the end of the early exaggeration stage, $k = K_0 = \lfloor(\log n)^2\rfloor = 53$, and at $k = 1000$ are included in Figure 4 below and Figure 7 in Appendix F, confirming the theoretical predictions. Moreover, Figure 8 in Appendix F shows that when the above separation condition (C2) is slightly violated (e.g., $\rho^2 = p^{4/5}$), t-SNE is still able to visualize the clusters, which demonstrates the robustness of t-SNE with respect to the separation condition.

Figure 4: t-SNE visualizations of the model-generated samples as described in Section 4, using the theory-guided tuning parameters with $\delta = 1/3$ (see Figure 7 for similar results with $\delta = 1/2$). The left column shows outputs from the early exaggeration stage, whereas the right column shows the corresponding final embeddings.

Remark 18 Arora et al. (2018) analyzed the early exaggeration stage of t-SNE based on a slightly different theoretical framework, under the Gaussian mixture model with a mean separation condition $\rho \gtrsim p^{1/4}$, and under the mixture model of log-concave distributions with a separation condition $\rho \gtrsim p^{5/12}$.
Compared to these results, our separation condition $\rho \gtrsim p^{1/2}$ in (C2) is stronger, and it is unclear to us whether such a restriction is intrinsic to our theoretical framework or an artifact of our proof strategy. Nevertheless, nailing down the sharp information threshold for t-SNE visualization is an important and fundamental problem; we plan to give it a more systematic treatment in subsequent work.

4.2 Noisy Nested Sphere Model

Consider the model of nested spheres with radial noise (Amini and Razaee, 2021), where for $i \in [n]$ we have
$$X_i = \mu_i + \frac{\mu_i}{\|\mu_i\|_2}\,\xi_i, \qquad \xi_i \overset{i.i.d.}{\sim} N(0, \sigma^2), \qquad (37)$$
and
$$\mu_i \mid z_i = r \sim P_r, \qquad z_i \overset{i.i.d.}{\sim} \mathrm{Multinomial}(\pi_1, \ldots, \pi_R), \qquad (38)$$
with $\sum_{r=1}^R \pi_r = 1$ and $\{P_r\}$ being uniform distributions on nested spheres in $\mathbb{R}^p$ of various radii $\rho_{\min} = \rho_1 < \rho_2 < \cdots < \rho_R = \rho_{\max}$. We make the following assumptions concerning the separation distances between the underlying nested spheres.

(C4) There exists some $\gamma$ such that $\max\{n^{-1}, \sigma^2\rho_{\min}^{-2}\}\log n \ll \gamma \ll 1$ and $\max_{r\in[R-1]} \frac{\rho_r}{\rho_{r+1}} \ll 1 - C\sqrt{\gamma}\log\gamma$ for some sufficiently large constant $C > 0$.

(C5) There exists some small constant $c > 0$ such that $c\min_r|\rho_{r+1} - \rho_r| \ge \sigma\sqrt{\log n}$.

In Condition (C4) the separation distance is characterized by the ratio $\rho_r/\rho_{r+1}$, whereas in Condition (C5) the distance is characterized by the difference $\rho_{r+1} - \rho_r$. The following corollary provides a sufficient condition for the results presented in the previous sections.

Corollary 19 Suppose Assumptions (C1), (C4) and (C5) hold, and $\tau_i^2 \asymp \gamma\rho_{z_i}^2$.
If $K_0h = o(n)$, $\alpha h = O(\gamma n)$, $h\alpha\lambda_{R+1}(L(P^*)) \ge \kappa$ for some constant $\kappa \in (0, 1)$, $K_0 \gg 1$, $K_0h(\alpha/\gamma + 1)\sigma_n^2\log n = o(n^2)$, and $\log\frac{K_0h\alpha}{n} \ll \gamma^{-1}\big(1 - \max_{r\in[R-1]}\frac{\rho_r}{\rho_{r+1}}\big)^2 + \log\gamma$, then Conditions (T1.D) and (T2.D) hold. If in addition $K_0 \gg \log n$, $K_0h(\alpha/\gamma + 1)\sigma_n^2\log n = o(n)$, $\log K_0h\alpha \ll \gamma^{-1}\big(1 - \max_{r\in[R-1]}\frac{\rho_r}{\rho_{r+1}}\big)^2 + \log\gamma$, and $K_1h' = O(\gamma n)$, then Conditions (T1.E), (T2.E) and (T3.E) hold.

Again, suitable choices of the tuning parameters $(\alpha, h, h', K_0, K_1)$ under the noisy nested sphere model can be determined efficiently. For example, consider the case where $\rho_{r+1} - \rho_r = \Delta$ for all $r \in [R-1]$. Specifically, suppose there exists some small constant $c > 0$ such that $\Delta \ge c\rho_R$, and that $\gamma = c(\log n)^{-1}$ satisfies (C4) and $\lambda_{R+1}(L(P^*)) \gtrsim \frac{1}{\gamma n}$ in probability. Then, by Corollary 19, the desired visualization properties such as those in Theorems 7 and 13 hold with high probability, as long as we choose $K_0 = \lfloor(\log n)^2\rfloor$, $K_1 \le n^{1-\delta}/\log n$, $\sigma_n = (\log n)^{-2}$, $h = h' = n^\delta$ and $\alpha = \gamma n^{1-\delta}$ for any constant $\delta \in (0, 1)$.

Figures 4 and 7 in the Appendix show the t-SNE embeddings of $n = 1500$ samples of dimension $p = 50$, at the end of the early exaggeration stage, $k = K_0 = \lfloor(\log n)^2\rfloor = 53$, and at $k = 1000$, generated from Model (37) with $R = 3$, $\sigma = 1$, $(\rho_1, \rho_2, \rho_3) = (10, 25, 50)$ and cluster proportions $(0.17, 0.33, 0.5)$. For the tuning parameters, the above analytical values with $\gamma = 0.5$ and $\delta \in \{1/3, 1/2\}$ are used. As a result, the clusters corresponding to the three nested spheres are visible in all t-SNE embeddings, confirming our theoretical predictions.

5.
Application II: Visualizing Real-World Clustered Data

Finally, we demonstrate our theory by applying t-SNE to the MNIST³ dataset, which contains images of hand-written digits. Specifically, we focus on $n = 4N = 1600$ images of the hand-written digits "2," "4," "6" and "8," with each digit having $N = 400$ images. Each image contains $28 \times 28$ pixels and was treated as a 784-dimensional vector. Based on our theoretical analysis, we set the tuning parameters
$$\alpha = n^{1-\delta}, \qquad h = h' = n^\delta, \qquad K_0 = \lfloor(\log n)^2\rfloor \qquad (39)$$
with $\delta = 2/3$. Again, we use the default perplexity ($= 30$), leading to an approximate block matrix $P$, with block structure corresponding to the cluster membership (Figure 9).

3. http://yann.lecun.com/exdb/mnist/

Figure 5: Illustration of t-SNE embeddings of 1600 MNIST samples at the end of the embedding stage (bottom row), and their corresponding outputs from the early exaggeration stage (top row). Different columns have identical initializations and tuning parameters, but distinct numbers of iterations for the early exaggeration stage. The colors of the dots indicate the underlying four clusters.

To demonstrate the necessity of stopping early in the early exaggeration stage, in Figure 5 we show the t-SNE embeddings at the end of the embedding stage (bottom row), and their corresponding outputs from the early exaggeration stage (top row). Different columns have identical initializations and tuning parameters, but distinct numbers of iterations for the early exaggeration stage, namely $K_0 = \lfloor(\log n)^2\rfloor = 54$ (left), $K_0 = \lfloor n^{2/3}\rfloor = 137$ (middle), and $K_0 = \lfloor n^{3/4}\rfloor = 253$ (right). Comparing the top three plots in Figure 5, we can clearly see that when $K_0$ far exceeds our theory-guided value $\lfloor(\log n)^2\rfloor$, the cluster patterns are no longer visible, which, in the case of $K_0 = 253$, led to false clustering in the final visualization.
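The tuning-parameter rule (39) is straightforward to compute directly; the small helper below is our own sketch of that computation.

```python
import math

def tuning_parameters(n, delta):
    """Theory-guided tuning parameters in (39):
    alpha = n^(1 - delta), h = h' = n^delta, K0 = floor((log n)^2)."""
    return {
        'alpha': n ** (1 - delta),
        'h': n ** delta,
        'K0': math.floor(math.log(n) ** 2),
    }

# The MNIST experiment above: n = 1600, delta = 2/3.
params39 = tuning_parameters(n=1600, delta=2 / 3)
# This gives K0 = floor((log 1600)^2) = 54, matching the value in the text,
# and by construction alpha * h = n regardless of delta.
```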
Moreover, the middle column of Figure 5 also demonstrates the importance of the embedding stage, especially its underlying intercluster repulsion and expansion effects, in making the cluster patterns more salient in the final visualization. Next we assess the effects and artifacts of the random initialization. In Figure 6, we fix all the tuning parameters as in (39) and generate t-SNE visualizations from three different random initializations. Comparing the first two plots, we observe that the relative positions of the clusters vary with the initialization. For example, the purple cluster and the red cluster are neighbors in the left panel but not in the middle panel.

Figure 6: t-SNE visualizations of 1600 MNIST samples based on three different random initializations and identical tuning parameters as in (39).

This echoes our theoretical prediction (see the discussion after Theorem 7) and justifies our second practical advice in Section 1.2. On the other hand, in the right panel of Figure 6, we find that even with a proper choice of the tuning parameters, false clustering may still appear as an artifact of the random initialization (cf. Remark 16 and the third practical advice in Section 1.2). Finally, we point out that our theory-guided values for the tuning parameters are flexible, robust and adaptive to the sample size. For example, in Figure 10, we present three more visualizations of $n = 2400$ ($N = 600$ for each digit) MNIST samples, using the tuning parameters in (39) with $\delta \in \{1/3, 1/2, 2/3\}$ and an identical random initialization. The cluster patterns are visible and similar in all cases, showing the effectiveness of our tuning parameters and the insensitivity to the choice of $\delta$.

6. Discussion

The present paper provides theoretical foundations of t-SNE for visualizing clustered data and obtains insights about its theoretical properties and interpretations.
We believe that some of the conditions may be relaxed by adopting more advanced technical tools. For example, the current analysis of the early exaggeration stage relies on the celebrated Davis-Kahan matrix perturbation inequality (cf. Section B.3), which may be further improved by leveraging advanced results from random matrix theory, such as Benaych-Georges and Nadakuditi (2012) and Bao et al. (2021). There are still many interesting questions that remain to be explored. For instance, what is the limiting behavior of the low-dimensional map $\{y_i^{(k)}\}_{1\le i\le n}$ towards the end of the embedding stage, after the transition to the stabilization phase? How should the local structure within a cluster be interpreted (DePavia and Steinerberger, 2020; Robinson and Pierce-Hoffman, 2020)? How many iterations are needed for the embedding stage? How can the bandwidths $\{\tau_i\}$ be determined in a data-driven and adaptive manner (Ding and Ma, 2022)? The present work is a first step towards answering these important questions. Moreover, our theoretical framework is generic and can be generalized to study other algorithms that are closely related to or share similar features with t-SNE. For example, in addition to the variants of t-SNE mentioned in Section 1, many dimension reduction and data visualization methods, such as multidimensional scaling (Kruskal, 1978), kernel principal component analysis (Schölkopf et al., 1997), and the Laplacian eigenmap (Belkin and Niyogi, 2003), start with a similarity matrix summarizing the pairwise distances within a dataset, and then proceed by either explicitly or implicitly exploiting the spectral properties of the similarity matrix.
In this connection, the general ideas behind our theoretical analysis, such as identifying the underlying structured graph and the properties of its adjacency or Laplacian matrix (Sections 2.1 and 2.2), studying the gradient flow associated with the discrete algorithm (Section 2.3), and the mechanical/kinematic view of the updating equation (Section 3), can be adopted to uncover the underlying mechanisms and properties of these methods. It is also interesting to explore the fundamental limits of data visualization and dimension reduction. For example, what are the necessary conditions on the data $\{X_i\}_{1\le i\le n}$ to guarantee the existence of a low-dimensional map $\{y_i\}_{1\le i\le n}$ that is a metric embedding of it? Does t-SNE have to sacrifice some global structure in order to embed the data well locally (Chari et al., 2021)? These problems are left for future investigation.

Acknowledgement

The authors are grateful to the editors and four anonymous referees for their comments and suggestions, which have significantly improved the results and presentation of the paper. The research of Tony Cai was supported in part by NSF grant DMS-2015259 and NIH grant R01-GM129781. The research of Rong Ma was supported by Professor David Donoho at Stanford University. This work was partially completed while Rong Ma was a PhD candidate in Biostatistics at the University of Pennsylvania. Rong Ma would like to thank Mingyao Li for introducing the subject, and Michaël Aupetit, David Donoho, Jeyong Lee, Stefan Steinerberger, Yiqiao Zhong and James Zou for helpful discussions." }, { "url": "http://arxiv.org/abs/2011.03900v2", "title": "The Cost of Privacy in Generalized Linear Models: Algorithms and Minimax Lower Bounds", "abstract": "We propose differentially private algorithms for parameter estimation in both\nlow-dimensional and high-dimensional sparse generalized linear models (GLMs) by\nconstructing private versions of projected gradient descent. 
We show that the\nproposed algorithms are nearly rate-optimal by characterizing their statistical\nperformance and establishing privacy-constrained minimax lower bounds for GLMs.\nThe lower bounds are obtained via a novel technique, which is based on Stein's\nLemma and generalizes the tracing attack technique for privacy-constrained\nlower bounds. This lower bound argument can be of independent interest as it is\napplicable to general parametric models. Simulated and real data experiments\nare conducted to demonstrate the numerical performance of our algorithms.", + "authors": "T. Tony Cai, Yichen Wang, Linjun Zhang", + "published": "2020-11-08", + "updated": "2020-12-06", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.CR", + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "main_content": "Introduction Statistical and machine learning algorithms are gaining prominence in our daily lives, and so are demands for data privacy guarantees by these algorithms. The need for data privacy protection has in turn inspired the development of formal criteria and frameworks for data privacy, with di\ufb00erential privacy [20, 21] (and its variants [33, 23, 41, 16]) being the most widely studied in theory [24, 22, 26, 1], and adopted in practice [14, 2, 15, 28]. Much of its popularity can be attributed to the ease of building privacy-preserving algorithms that the 1 arXiv:2011.03900v2 [stat.ML] 6 Dec 2020 \fdi\ufb00erential privacy framework a\ufb00ords [21, 40, 24, 22], but privacy can also come at a cost: it has been observed that requiring algorithms to be di\ufb00erentially private may sacri\ufb01ce their statistical accuracy [6, 29, 37]. The quest for privacy-preserving yet statistically accurate algorithms has since become a vibrant \ufb01eld of research. 
On the methodological front, a variety of popular computational and statistical methods have seen differentially private counterparts, for example in causal inference [36, 35], deep learning [1, 46], and multiple testing [27]. On the theoretical side, however, the study of the statistical optimality of differentially private algorithms focuses more heavily on simpler and more stylized problems, such as mean estimation [6, 31, 32], top-k selection [5, 51], and linear regression [11]. In this paper, we study a broadly applicable model, the generalized linear model (GLM) [44, 39], by proposing differentially private algorithms for parameter estimation with theoretical guarantees. We characterize their statistical performance, and prove their near-optimality by establishing minimax lower bounds. In this work, we consider both the classical low-dimensional setting and the contemporary high-dimensional setting.

1.1 Our Contribution and Related Literature

Our main contribution is two-fold: constructing differentially private algorithms for GLM parameter estimation (Section 3), and establishing the near-optimality of these algorithms via privacy-constrained minimax lower bounds for GLM parameter estimation (Section 4).

Private algorithms for GLMs. We construct algorithms, based on noisy gradient descent [8, 7] and noisy iterative hard thresholding [11, 30, 9], for privately estimating the vector of GLM parameters. There has been an extensive literature on private logistic regression [12, 13, 56] and, more broadly, private empirical risk minimization [8, 34, 7]. Our work is inspired by but distinct from previous works in its focus on parameter estimation accuracy, as opposed to the excess risk of the solution; for statisticians, parameter estimation accuracy is also arguably a more informative measure of performance than excess risk.
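To make the noisy-gradient-descent idea concrete, here is a minimal sketch for logistic regression, a GLM special case. This is not the paper's Algorithm 1: the clipping radius, noise scale, and step size are illustrative assumptions with no formal privacy calibration, and the function name is ours. Per-sample gradient clipping bounds the sensitivity of each update, after which Gaussian noise is added.

```python
import numpy as np

def noisy_gradient_descent(X, y, T, eta, noise_scale, clip_radius, rng):
    """Illustrative noisy gradient descent for logistic regression.
    Each per-sample gradient is clipped to l2-norm clip_radius, the clipped
    gradients are averaged, and Gaussian noise is added before the step."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(T):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grads = (p - y)[:, None] * X                       # per-sample gradients
        scale = np.maximum(np.linalg.norm(grads, axis=1) / clip_radius, 1.0)
        grads = grads / scale[:, None]                     # clip to clip_radius
        g = grads.mean(axis=0) + noise_scale * rng.standard_normal(d) / n
        beta = beta - eta * g
    return beta

# Synthetic logistic-regression data with a known parameter vector.
rng = np.random.default_rng(1)
n, d = 2000, 5
beta_star = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
X = rng.standard_normal((n, d))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_star)))
beta_hat = noisy_gradient_descent(X, y, T=200, eta=1.0,
                                  noise_scale=0.5, clip_radius=1.0, rng=rng)
```

Even with clipping and noise, the estimate recovers the qualitative structure of the true parameter (signs of the large coordinates), illustrating why such private iterates can remain statistically accurate.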
Since the log-likelihood function of a GLM in general lacks strong convexity [52], bounding the distance between an estimator and the true parameter requires a more refined analysis of the algorithms. Theorem 1 shows that $(\varepsilon, \delta)$-differentially private estimation of the GLM parameters can be achieved by the noisy gradient descent algorithm (Algorithm 1, based on [8]) with an extra privacy cost of $\tilde{O}\big(\frac{d^2\log(1/\delta)}{n^2\varepsilon^2}\big)$ in terms of the squared $\ell_2$ risk, where $n$ and $d$ respectively denote the sample size and the dimension of the parameter vector.

The difficulty posed by a "flat" log-likelihood landscape [43, 3] is even more salient in the high-dimensional setting: when the number of parameters exceeds the sample size, strong convexity is categorically impossible for any objective function. We instead leverage the sparsity of the parameter vector to design a noisy iterative hard thresholding algorithm (Algorithm 4), which attains convergence in $O(\log n)$ iterations and incurs an extra privacy cost of $\tilde{O}\big(\frac{(s^*\log d)^2\log(1/\delta)}{n^2\varepsilon^2}\big)$ in terms of the squared $\ell_2$ risk (Theorem 3), where $s^*$ denotes the sparsity of the parameter vector. In particular, the linear dependence on the sparsity and logarithmic dependence on the ambient dimension suggest that differentially private estimation remains feasible in high dimensions, which contrasts with the impossibility of high-dimensional estimation [18, 19] under the more restrictive local differential privacy framework [33]. The technical analysis of our algorithm can also be extended to private estimation in other sparse M-estimation problems that enjoy restricted strong convexity and restricted smoothness properties.

Minimax lower bounds.
We develop a novel lower bound technique based on Stein's Lemma and show that the statistical accuracy of our algorithms is optimal up to logarithmic factors in the sample size, by establishing privacy-constrained minimax lower bounds for GLM parameter estimation (Theorems 5 and 6). Our strategy for establishing these lower bounds entails a broad generalization of the "tracing attack" techniques, first developed by [10, 26] and further applied to various statistical problems, including sharp lower bounds for classical Gaussian mean estimation and linear regression [31, 11], as well as lower bounds for sparse mean estimation and linear regression in the high-dimensional setting [51, 11]. In these previous works, the design of tracing attacks appears largely ad hoc and catered to specific distribution families such as Gaussian or Beta-Binomial; a general principle for designing attacks has not been observed. Although some promising proposals have been made in this direction [47, 42], it remains unclear whether the suggested attacks in these works actually imply any lower bound results. In the present paper, we address this problem by proposing a "score attack," based on and named after the score statistic, that is, the gradient of the log-likelihood function with respect to the parameter vector. Not only does the score attack imply lower bounds for the GLM problems in the present paper, it also opens paths for lower bound analysis in a much greater range of statistical problems with differential privacy constraints, as the form of our score attack and its theoretical properties (Theorem 4) are applicable to general parametric families of distributions.

1.2 Structure of the Paper

The paper is organized as follows. Section 2 formulates the problem and provides necessary background information. Section 3 describes the algorithms and analyzes in detail their privacy guarantees as well as statistical accuracy.
Section 4 introduces the score attack framework for minimax lower bounds, and applies the framework to establish minimax lower bounds for the GLM problems. Section 5 provides simulated and real data examples that illustrate the numerical performance of our algorithms. Section 6 summarizes our work and discusses its implication for future research. Main technical results are proved in Section 7, and other auxiliary results in the Appendix. Notation. For real-valued sequences {an}, {bn}, we write an \u2272bn if an \u2264cbn for some universal constant c \u2208(0, \u221e), and an \u2273bn if an \u2265c\u2032bn for some universal constant c\u2032 \u2208(0, \u221e). We say an \u224dbn if an \u2272bn and an \u2273bn. c, C, c0, c1, c2, \u00b7 \u00b7 \u00b7 , and so on refer to universal constants in the paper, with their speci\ufb01c values possibly varying from place to place. For a vector v \u2208Rd and a subset S \u2286[d], we use vS to denote the restriction of vector v to the index set S. We write supp(v) := {j \u2208[d] : vj \u0338= 0}. \u2225v\u2225p denotes the vector \u2113p norm for 1 \u2264p \u2264\u221e, with an additional convention that \u2225v\u22250 denotes the number of non-zero coordinates of v. For a function f : R \u2192R, \u2225f\u2225\u221edenotes the the essential supremum of |f|. For t \u2208R and R > 0, let \u03a0R(t) denote the projection of t onto the closed interval [\u2212R, R]. For a random variable X, we use ess sup(X) = inf{c : P(X < c) = 1} to denote the essential supremum of X. 2 Problem Formulation In this section, we present a detailed description of the scope of statistical models (generalized linear models) and algorithms (di\ufb00erentially private algorithms) to be studied in this paper, and formally de\ufb01ne the \u201ccost of privacy\u201d in terms of minimax risks. 2.1 Generalized Linear Models We study parameter estimation in generalized linear models. 
In a generalized linear model, the response variable $y \in \mathbb{R}$, conditional on the design vector $x \in \mathbb{R}^d$, follows a distribution of the natural exponential family form,
$$f_{\beta^*}(y|x) = h(y, \sigma)\exp\left(\frac{(x^\top\beta^*)g(y) - \psi(x^\top\beta^*)}{c(\sigma)}\right), \qquad (2.1)$$
where $c(\sigma)$ is a nuisance scale parameter and $\psi(\cdot)$ is the cumulant generating function of $y$ given $x$. The generalized linear model is, first of all, a generalization of the linear model: setting $g(y) = y$, $\psi(u) = u^2/2$ and $c(\sigma) = \sigma^2$ in (2.1) recovers the (Gaussian) linear model. Model (2.1) also subsumes other special cases such as logistic and multinomial regression. Throughout the paper, our goal is to estimate $\beta^* \in \mathbb{R}^d$ using an i.i.d. sample $\{(y_i, x_i)\}_{i\in[n]}$ drawn from the model (2.1). We shall consider both the classical setting, where the dimension $d$ is dominated by the sample size $n$, and the high-dimensional sparse setting, where $d$ potentially dominates $n$ but only a small proportion of the coordinates of $\beta^*$ are non-zero. In either case, the issue of data privacy is relevant, as any nontrivial estimator of $\beta^*$ must take the data $\{(y_i, x_i)\}_{i\in[n]}$ as input. Before considering concrete estimators and their performance, let us first define the desired criteria of privacy protection.

2.2 Differential Privacy

Intuitively speaking, an algorithm $M$ applied to a data set $X$ compromises data privacy if an adversary is able to correctly infer from the algorithm's output $M(X)$ whether an individual datum $x$ belongs to $X$ or not. The notion of differential privacy formalizes this idea by requiring that, for every pair of data sets $X$ and $X'$ that differ by a single datum, hereafter called "adjacent data sets," the algorithm $M$ is randomized so that the distributions of $M(X)$ and $M(X')$ are close to each other.

Definition 1 (Differential Privacy, [21]).
A randomized algorithm M : X^n → R is (ε, δ)-differentially private if for every pair of adjacent data sets X, X′ ∈ X^n that differ by one individual datum and every measurable S ⊆ R,

P(M(X) ∈ S) ≤ e^ε · P(M(X′) ∈ S) + δ,

where the probability measure P is induced by the randomness of M only.

The definition guarantees that, for small values of ε, δ ≥ 0, the distributions of M(X) and M(X′) are almost indistinguishable. Beyond its strong privacy guarantees, the notion of differential privacy is desirable also for the ease and flexibility of constructing differentially private algorithms. We summarize here some useful facts for our construction of algorithms in this paper.

First, a large class of non-private algorithms can be made differentially private via random perturbation.

Fact 1 (The Laplace and Gaussian mechanisms, [21, 22]). Let M : X^n → R^d be an algorithm that is not necessarily differentially private.

• Suppose sup_{X,X′ adjacent} ‖M(X) − M(X′)‖₁ < B < ∞. For w ∈ R^d with coordinates w₁, w₂, ..., w_d i.i.d. ∼ Laplace(B/ε), M(X) + w is (ε, 0)-differentially private.

• If instead we have sup_{X,X′ adjacent} ‖M(X) − M(X′)‖₂ < B < ∞, then for w ∼ N_d(0, σ²I) with σ² = 2B² log(2/δ)/ε², M(X) + w is (ε, δ)-differentially private.

That is, if a non-private algorithm's output is not too sensitive to changing any single datum in the input data set, perturbing the algorithm's output with Laplace or Gaussian noise produces a differentially private algorithm.

Second, differential privacy is preserved under composition, albeit with weaker privacy parameters. The composition theorems, stated below, explicitly quantify how the privacy parameters degrade as private algorithms are composed.
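A minimal sketch of the Laplace mechanism from Fact 1 (plain Python; the mean-query example and all names are our own): noise with scale B/ε is added to each coordinate of a statistic whose ℓ₁-sensitivity is bounded by B.

```python
import math
import random

def sample_laplace(scale, rng):
    # Inverse-CDF sampling of a centered Laplace(scale) variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(vector, l1_sensitivity, epsilon, rng=None):
    # Fact 1: if sup over adjacent data sets of ||M(X) - M(X')||_1 < B, then
    # adding i.i.d. Laplace(B / epsilon) noise yields (epsilon, 0)-DP.
    rng = rng or random.Random(0)
    scale = l1_sensitivity / epsilon
    return [v + sample_laplace(scale, rng) for v in vector]

# Example: the mean of n records lying in [0, 1] has l1-sensitivity 1/n.
data = [0.2, 0.4, 0.9, 0.5]
private_mean = laplace_mechanism([sum(data) / len(data)],
                                 l1_sensitivity=1.0 / len(data), epsilon=1.0)
```

The Gaussian mechanism differs only in sampling w from N(0, σ²I) with σ calibrated to the ℓ₂-sensitivity, at the price of a nonzero δ.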
Fact 2 (Composition theorems). Consider an (ε, δ)-differentially private algorithm M₀ : X^n → R₀ and (ε, δ)-differentially private algorithms M_i : X^n × R_{i−1} → R_i for i = 1, 2, ..., k − 1. Consider the composite algorithm M = M_{k−1} ∘ M_{k−2} ∘ ··· ∘ M₀.

• Composition theorem [21]. M is (kε, kδ)-differentially private.

• Advanced composition [24]. For every δ′ > 0, M is (√(2k log(1/δ′)) ε + kε(e^ε − 1), kδ + δ′)-differentially private.

It is worth noting that the notion of "composition" considered here, termed "k-fold adaptive composition" in the literature, is more general than the composition of functions in the usual sense: each part of the composite algorithm may access the same data set, or a different data set, after receiving the output from its previous part.

In fact, if a differentially private algorithm is simply post-processed independently of the data, there is no deterioration of privacy whatsoever.

Fact 3 (Post-processing [21, 55]). Consider an (ε, δ)-differentially private algorithm M : X^n → R. If g is a measurable function, then g(M) is also (ε, δ)-differentially private.

The composition and post-processing properties will be particularly useful for analyzing the privacy guarantees of the iterative algorithms considered in Section 3.

2.3 The Cost of Privacy in Generalized Linear Models

Once an algorithm is known to be differentially private, it is natural to ask whether the privacy guarantees come at the expense of accuracy: as seen in Fact 1, random perturbations are often introduced to achieve differential privacy. In this paper, we assess the accuracy of algorithms through the lens of minimax risk, defined as follows.
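The two composition bounds in Fact 2 can be compared numerically; this small sketch (our own illustration) shows that, for many low-ε steps, advanced composition yields a much smaller total ε than the basic bound.

```python
import math

def basic_composition(eps, delta, k):
    # Fact 2, basic bound: k-fold composition of (eps, delta)-DP steps
    # is (k * eps, k * delta)-DP.
    return k * eps, k * delta

def advanced_composition(eps, delta, k, delta_prime):
    # Fact 2, advanced bound: for any delta' > 0, the composition is
    # (sqrt(2k log(1/delta')) eps + k eps (e^eps - 1), k delta + delta')-DP.
    eps_total = (math.sqrt(2.0 * k * math.log(1.0 / delta_prime)) * eps
                 + k * eps * (math.exp(eps) - 1.0))
    return eps_total, k * delta + delta_prime

eps_basic, _ = basic_composition(0.01, 1e-8, k=1000)
eps_adv, _ = advanced_composition(0.01, 1e-8, k=1000, delta_prime=1e-6)
assert eps_adv < eps_basic  # roughly 1.8 vs 10 for these parameters
```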
Let {f_θ : θ ∈ Θ} be a family of statistical models supported on X and indexed by the parameter θ. Let X = {x₁, x₂, ..., x_n} be an i.i.d. sample drawn from f_{θ*} for some unknown θ* ∈ Θ, and let M : X^n → Θ be an estimator. Let ℓ : Θ × Θ → R₊ be a metric on Θ and ρ : R₊ → R₊ an increasing function. The (statistical) risk of M is given by Eρ(ℓ(M(X), θ*)), where the expectation is taken over the data distribution f_{θ*} and the randomness of the estimator M.

Because the risk Eρ(ℓ(M(X), θ*)) depends on the unknown θ* and can be trivially minimized by setting M(X) ≡ θ*, a more sensible measure of performance is the maximum risk over the entire class of distributions {f_θ : θ ∈ Θ}, sup_{θ∈Θ} Eρ(ℓ(M(X), θ)). The minimax risk of estimating θ ∈ Θ is then given by

inf_M sup_{θ∈Θ} Eρ(ℓ(M(X), θ)).   (2.2)

By definition, this quantity characterizes the best possible worst-case performance that an estimator can hope to achieve over the class of models {f_θ : θ ∈ Θ}. In this paper, we study a privacy-constrained version of the minimax risk: letting M_{ε,δ} be the collection of all (ε, δ)-differentially private algorithms mapping X^n to Θ, we consider

inf_{M∈M_{ε,δ}} sup_{θ∈Θ} Eρ(ℓ(M(X), θ)).   (2.3)

As M_{ε,δ} is a proper subset of all possible estimators, the privacy-constrained minimax risk defined above is at least as large as the unconstrained minimax risk, and the difference between the two minimax risks (2.2) and (2.3) is the "cost of privacy".
In our case, the statistical models of interest are the generalized linear models (2.1) indexed by the parameter vector β*, and we would like to precisely characterize the cost of privacy in GLM parameter estimation problems. This goal will be achieved in two steps: Section 3 provides upper bounds on the privacy-constrained minimax risk (2.3) via the analysis of differentially private algorithms; Section 4 establishes the corresponding lower bounds on the privacy-constrained minimax risk.

3 Differentially Private Algorithms for GLMs

In this section, we develop differentially private algorithms for estimating the parameter β* ∈ R^d in the generalized linear model

f_{β*}(y|x) = h(y, σ) exp( (x^⊤β* g(y) − ψ(x^⊤β*)) / c(σ) );  x ∼ f_x.   (3.1)

With an i.i.d. sample Z = {z_i}_{i∈[n]} = {(y_i, x_i)}_{i∈[n]} drawn from the model (3.1), the general approach is to minimize the following (scaled) negative log-likelihood function in a differentially private fashion:

L_n(β; Z) = (1/n) Σ_{i=1}^n ( ψ(x_i^⊤β) − g(y_i) x_i^⊤β ).   (3.2)

We may write L_n(β) as a shorthand when the relevant data set is unambiguous. Since L_n is convex in β, the problem is an instance of differentially private convex optimization, for which there are many well-studied methods. Roughly speaking, these methods fall into two categories depending on the form of random perturbation involved: "one-shot" methods [12, 13, 34], in which random noise is added only once to the objective function or before reporting the final solution, and iterative, gradient-descent-type methods [8, 7], in which random noise is added at each iteration of the algorithm.
As discussed in Section 1.1, existing convergence results for these methods focus on the excess risk of a differentially private minimizer β^{priv} of (3.2) compared to the non-private solution β̂, that is, E L_n(β^{priv}) − E L_n(β̂). The lack of strong convexity in the generalized linear model (3.1) precludes the possibility of obtaining parameter estimation bounds from excess risk bounds. Consider, for example, the logistic regression model, obtained from (3.1) by setting g(y) = y and ψ(u) = log(1 + e^u). The Hessian of L_n,

∇²L_n(β) = (1/n) Σ_{i=1}^n ψ″(x_i^⊤β) x_i x_i^⊤ = (1/n) Σ_{i=1}^n ( e^{x_i^⊤β} / (1 + e^{x_i^⊤β})² ) x_i x_i^⊤,

has its smallest eigenvalue approaching 0 as ‖β‖₂ → ∞, even in the favorable setting where n is much greater than the dimension of β. When n is dominated by the dimension, ∇²L_n(β) is simply rank-deficient and therefore cannot be positive definite. The absence of strong convexity in GLMs also implies that the "one-shot" algorithms are not guaranteed to be differentially private (see [13, 34] and the references therein) unless a quadratic penalty term is added to L_n.

Our approach, then, is to consider gradient-descent-type algorithms. Although strong convexity fails to hold for L_n globally, it turns out that L_n satisfies a "restricted" and "local" form of strong convexity [43], to be made precise in Section 3.1, which is sufficient for the noisy gradient descent algorithm to enjoy fast convergence and optimal statistical accuracy. In Section 3.1, we analyze in detail the privacy guarantees and convergence rates of the noisy gradient descent algorithm, which works well in the classical setting of d = o(n). The high-dimensional setting, d ≳ n, is considered in Section 3.2.
In this case, consistent estimation of β* is not possible even without privacy constraints, unless additional assumptions such as sparsity of β* are imposed. When β* is indeed sparse, we introduce a noisy iterative hard thresholding algorithm that allows the random perturbations to scale with the sparsity (the "intrinsic dimension") of β* rather than the ambient dimension d, thereby achieving the optimal statistical accuracy under privacy constraints.

3.1 The Classical Low-dimensional Setting

We first consider the classical low-dimensional setting where d = o(n). For minimizing the negative GLM log-likelihood L_n(β; Z) = (1/n) Σ_{i=1}^n ( ψ(x_i^⊤β) − g(y_i) x_i^⊤β ) in a differentially private fashion, we consider the noisy gradient descent algorithm, first proposed by [8] in its generic form for arbitrary convex functions. The following algorithm is a specialization of the generic algorithm to GLMs.

Algorithm 1: Differentially Private Generalized Linear Regression
Input: L_n(β, Z), data set Z, step size η⁰, privacy parameters ε, δ, noise scale B, number of iterations T, truncation parameter R, initial value β⁰ ∈ R^d.
for t = 0 to T − 1 do
    Generate w_t ∈ R^d with w_{t1}, w_{t2}, ..., w_{td} i.i.d. ∼ N(0, (η⁰)² · 2B²d log(2T/δ) / (n²(ε/T)²));
    Compute β^{t+1} = β^t − (η⁰/n) Σ_{i=1}^n (ψ′(x_i^⊤β^t) − Π_R(y_i)) x_i + w_t;
end
Output: β^{(T)}.

Before delving into the analysis of its privacy guarantees and convergence rates, we collect some necessary assumptions here for the clarity of the ensuing technical results.

(D1) Bounded design: there is a constant σ_x < ∞ such that ‖x‖₂ < σ_x √d almost surely.
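The loop of Algorithm 1 can be sketched in plain Python as follows (a minimal illustration; the function name and the toy logistic data are our own, and the per-step budget (ε/T, δ/T) follows the simple composition bound of Fact 2):

```python
import math
import random

def noisy_gd_glm(X, y, psi_prime, eps, delta, T, eta, B, R, beta0, rng=None):
    # Each of the T steps receives budget (eps/T, delta/T); the Gaussian noise
    # standard deviation matches the per-iteration variance in Algorithm 1.
    rng = rng or random.Random(0)
    n, d = len(X), len(X[0])
    sigma = eta * B * math.sqrt(2.0 * d * math.log(2.0 * T / delta)) / (n * eps / T)
    beta = list(beta0)
    for _ in range(T):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            u = sum(b * xj for b, xj in zip(beta, xi))
            resid = psi_prime(u) - max(-R, min(R, yi))  # response truncated to [-R, R]
            for j in range(d):
                grad[j] += resid * xi[j]
        beta = [b - (eta / n) * g + rng.gauss(0.0, sigma)
                for b, g in zip(beta, grad)]
    return beta

# Toy logistic example: psi'(u) = sigmoid(u).
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
y = [1, 0, 1, 1]
beta_hat = noisy_gd_glm(X, y, psi_prime=lambda u: 1.0 / (1.0 + math.exp(-u)),
                        eps=1.0, delta=1e-6, T=5, eta=0.5, B=1.0, R=1.0,
                        beta0=[0.0, 0.0])
assert len(beta_hat) == 2 and all(math.isfinite(b) for b in beta_hat)
```

At this toy sample size the injected noise dominates; the theory below shows how n must scale with d, ε and δ for the output to be accurate.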
(D2) Bounded moments of design: Ex = 0, and the covariance matrix Σ_x = Exx^⊤ satisfies 0 < 1/C < λ_min(Σ_x) ≤ λ_max(Σ_x) < C for some constant 0 < C < ∞.

(G1) The function ψ in the GLM (3.1) satisfies ‖ψ′‖_∞ < c₁ for some constant c₁ < ∞.

(G2) The function ψ satisfies ‖ψ″‖_∞ < c₂ for some constant c₂ < ∞.

These assumptions are comparable to those required for the theoretical analysis of GLMs in the non-private setting; for examples, see [43, 38, 54] and the references therein.

Let us first consider the privacy guarantees of Algorithm 1. Because the algorithm is a composition of T individual steps, if each step is (ε/T, δ/T)-differentially private, the overall algorithm is (ε, δ)-differentially private in view of Fact 2. This is indeed the case under appropriate assumptions.

Lemma 1. If assumptions (D1) and (G1) hold, then choosing B = 4(R + c₁)σ_x guarantees that Algorithm 1 is (ε, δ)-differentially private.

Lemma 1 is proved in Section A.1. Although the privacy guarantee holds for any number of iterations T, choosing T properly has significant implications for the accuracy of Algorithm 1, as a larger value of T introduces greater noise into the algorithm in order to achieve privacy. Existing results on noisy gradient descent typically call for O(n) [7] or O(n²) [8] iterations for minimizing generic convex functions. For the GLM problem, we shall show that O(log n) iterations suffice, thanks to the restricted strong convexity and restricted smoothness of generalized linear models.

Fact 4 ([38], Proposition 1, paraphrased).
If assumptions (D1) and (D2) hold, there is a constant α > 0, depending on σ_x, C, ψ, such that with probability at least 1 − c₃ exp(−c₄n),

⟨∇L_n(β₁) − ∇L_n(β₂), β₁ − β₂⟩ ≥
    α‖β₁ − β₂‖₂² − (c²σ_x²/(2α)) (log d / n) ‖β₁ − β₂‖₁²,   if ‖β₁ − β₂‖₂ ≤ 3,
    3α‖β₁ − β₂‖₂ − √2 cσ_x √(log d / n) ‖β₁ − β₂‖₁,   if ‖β₁ − β₂‖₂ > 3.   (3.3)

If we further assume (G2), there is a constant γ ≥ α > 0, depending on σ_x, M, c₂, such that with probability at least 1 − c₃ exp(−c₄n),

⟨∇L_n(β₁) − ∇L_n(β₂), β₁ − β₂⟩ ≤ γ‖β₁ − β₂‖₂² + (4γ/3) (log d / n) ‖β₁ − β₂‖₁².   (3.4)

These weaker versions of strong convexity and smoothness, as it turns out, are sufficient for Algorithm 1 to attain linear convergence, which is the same rate as for minimizing strongly convex and smooth functions and cannot be further improved in general [45]. Therefore, O(log n) iterations allow the algorithm to converge to within O(n⁻¹) of β̂, the true minimizer of L_n, in squared ℓ₂ norm; since E‖β̂ − β*‖₂² is of order d/n, from a statistical perspective there is little reason to run the algorithm for more than O(log n) iterations.

Theorem 1. Let {(y_i, x_i)}_{i∈[n]} be an i.i.d. sample from the GLM (3.1). Suppose assumptions (D1), (D2), (G1) and (G2) hold. Let the parameters of Algorithm 1 be chosen as follows.

• Step size. Set η⁰ = 3/(4γ), where γ is the smoothness constant defined in Fact 4.

• Truncation level. Set R = min( ess sup |y₁|, c₁ + √(2c₂c(σ) log n) ) ≲ √(c(σ) log n).

• Noise scale B.
Set B = 4(R + c₁)σ_x.

• Number of iterations T. Let T = (2γ/α) log(9n), where α, γ are the strong convexity and smoothness constants defined in Fact 4.

• Initialization β⁰. Choose β⁰ so that ‖β⁰ − β̂‖₂ ≤ 3, where β̂ = arg min L_n(β; Z).

If n ≥ K · ( Rd √(log(1/δ)) log n log log n / ε ) for a sufficiently large constant K, the output of Algorithm 1 satisfies

‖β^{(T)} − β*‖₂² ≲ c(σ) ( d/n + d² log(1/δ) log³ n / (n²ε²) ),   (3.5)

with probability at least 1 − c₃ exp(−c₄n) − c₃ exp(−c₄d) − c₃ exp(−c₄ log n).

Theorem 1 is proved in Section A.2. Some further comments may help clarify the theorem. Regarding the tuning parameters: the step size, number of iterations and initialization are chosen to ensure convergence; in particular, the initialization condition, as in [38], is standard in the literature and can be extended to ‖β⁰ − β̂‖₂ ≤ 3 max(1, ‖β*‖₂). The truncation level R is chosen to ensure privacy while keeping as much of the data intact as possible; when the distribution of y has bounded support, for example in the logistic model, R can be chosen to be an O(1) constant, thereby saving an extra factor of O(log n) in the second term of (3.5). The choice of B, which depends on R, then ensures the privacy of Algorithm 1, as seen in Lemma 1. Finally, the scaling of n versus d, ε and δ in Theorem 1 is nearly optimal: our lower bound result, Theorem 5, will imply that no estimator can achieve low ℓ₂ error unless the assumed scaling holds, and that the statistical accuracy of Algorithm 1 cannot be further improved except possibly by factors of log n.
3.2 The High-dimensional Sparse Setting

In this section, we construct differentially private algorithms for estimating GLM parameters when the dimension d dominates the sample size n. In this setting, even without privacy requirements, directly minimizing the negative log-likelihood function L_n(β) no longer achieves any meaningful statistical accuracy, because the objective function L_n can have infinitely many minimizers due to the rank-deficient Hessian matrix ∇²L_n(β) = (1/n) Σ_{i=1}^n ψ″(x_i^⊤β) x_i x_i^⊤. The problem is nevertheless solvable when the true parameter vector β* is s*-sparse with s* = o(d), that is, when at most s* out of the d coordinates of β* are non-zero.

For estimating a sparse β*, the primary challenge lies in (approximately) solving the non-convex optimization problem β̂ = arg min_{β:‖β‖₀≤s*} L_n(β; Z). Popular non-private approaches include convex relaxation via ℓ₁ regularization of L_n [43, 3], and projected gradient descent onto the non-convex feasible set {β : ‖β‖₀ ≤ s*}, also known as iterative hard thresholding [9, 30]:

Algorithm 2: Iterative Hard Thresholding (IHT)
Input: objective function f(θ), sparsity s, step size η, number of iterations T.
Initialize θ⁰ with ‖θ⁰‖₀ ≤ s;
for t = 0 to T − 1 do
    θ^{t+1} = P_s(θ^t − η∇f(θ^t)), where P_s(v) = arg min_{z:‖z‖₀=s} ‖v − z‖₂²;
end
Output: θ^{(T)}.

In each iteration, the algorithm updates the solution via a gradient step, keeps its largest s coordinates in magnitude, and sets the other coordinates to 0.
For privately fitting high-dimensional sparse GLMs, we shall construct a noisy version of Algorithm 2 and show in Section 3.2.2 that it again enjoys a linear rate of convergence, as a consequence of Fact 4 and the sparsity of β*. As a first step towards this goal, we consider in Section 3.2.1 a noisy, differentially private version of the projection operator P_s, as well as a noisy iterative hard thresholding algorithm applicable to any objective function that satisfies restricted strong convexity and restricted smoothness.

3.2.1 The Noisy Iterative Hard Thresholding Algorithm

At the core of our algorithm is a noisy, differentially private routine that identifies the top-s largest coordinates of a given vector with good accuracy. The following "Peeling" algorithm [27] serves this purpose, with fresh Laplace noise added to the underlying vector and one coordinate "peeled" from the vector in each iteration. The algorithm is guaranteed to be (ε, δ)-differentially private when the vector-valued function v(Z) is not sensitive to replacing any single datum.

Algorithm 3: Noisy Hard Thresholding (NoisyHT)
Input: vector-valued function v = v(Z) ∈ R^d, data Z, sparsity s, privacy parameters ε, δ, noise scale λ.
Initialize S = ∅;
for i = 1 to s do
    Generate w_i ∈ R^d with w_{i1}, w_{i2}, ..., w_{id} i.i.d. ∼ Laplace(λ · 2√(3s log(1/δ))/ε);
    Append j* = arg max_{j∈[d]∖S} |v_j| + w_{ij} to S;
end
Set P̃_s(v) = v_S;
Generate w̃ with w̃₁, ..., w̃_d i.i.d. ∼ Laplace(λ · 2√(3s log(1/δ))/ε);
Output: P̃_s(v) + w̃_S.

Lemma 2 ([27, 11]). If for every pair of adjacent data sets Z, Z′ we have ‖v(Z) − v(Z′)‖_∞ < λ, then NoisyHT is an (ε, δ)-differentially private algorithm.
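The peeling loop of Algorithm 3 can be sketched as follows (plain Python, our own illustration; the Laplace sampler and the toy vector are assumptions of the example):

```python
import math
import random

def sample_laplace(scale, rng):
    # Inverse-CDF sampling of a centered Laplace(scale) variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_hard_threshold(v, s, eps, delta, lam, rng=None):
    # Algorithm 3: in each of s rounds, add fresh Laplace noise to |v_j| and
    # "peel" the noisy argmax; finally perturb the selected entries themselves.
    rng = rng or random.Random(0)
    scale = lam * 2.0 * math.sqrt(3.0 * s * math.log(1.0 / delta)) / eps
    selected = []
    for _ in range(s):
        best_j, best_score = None, -math.inf
        for j in range(len(v)):
            if j in selected:
                continue
            score = abs(v[j]) + sample_laplace(scale, rng)
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    out = [0.0] * len(v)
    for j in selected:
        out[j] = v[j] + sample_laplace(scale, rng)
    return out

v = [5.0, 0.1, -4.0, 0.2, 0.05]
sparse = noisy_hard_threshold(v, s=2, eps=2.0, delta=1e-6, lam=0.01)
assert sum(1 for x in sparse if x != 0.0) <= 2  # output is s-sparse
```

With small noise scale λ the selected support coincides with the true top-s coordinates with high probability, which is what Lemma 3 quantifies.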
The accuracy of Algorithm 3 is quantified by the next lemma.

Lemma 3. Let P̃_s be defined as in Algorithm 3. For any index set I, any v ∈ R^I and any v̂ with ‖v̂‖₀ ≤ ŝ ≤ s, we have, for every c > 0,

‖P̃_s(v) − v‖₂² ≤ (1 + 1/c) · ((|I| − s)/(|I| − ŝ)) · ‖v̂ − v‖₂² + 4(1 + c) Σ_{i∈[s]} ‖w_i‖_∞².

Lemma 3 is proved in Section A.3. In comparison, the exact, non-private projection operator P_s satisfies ([30], Lemma 1)

‖P_s(v) − v‖₂² ≤ ((|I| − s)/(|I| − ŝ)) · ‖v̂ − v‖₂².

Algorithm 3, therefore, is as accurate as its non-private counterpart up to a constant multiplicative factor and some additive noise.

With the private top-s projection in hand, we obtain the following noisy iterative hard thresholding algorithm. Compared to the non-private Algorithm 2, we simply replace the exact projection P_s with the noisy projection given by Algorithm 3.

Algorithm 4: Noisy Iterative Hard Thresholding (NoisyIHT)
Input: objective function L_n(θ, Z) = n⁻¹ Σ_{i=1}^n l(θ, z_i), data set Z, sparsity level s, step size η⁰, privacy parameters ε, δ, noise scale B, number of iterations T.
Initialize θ⁰ with ‖θ⁰‖₀ ≤ s;
for t = 0 to T − 1 do
    θ^{t+1} = NoisyHT(θ^t − η⁰∇L_n(θ^t; Z), Z, s, ε/T, δ/T, (η⁰/n)B);
end
Output: θ^{(T)}.

The privacy guarantee of Algorithm 4 is then inherited from that of Algorithm 3.

Lemma 4. If for every pair of adjacent data z, z′ and every θ ∈ Θ we have ‖∇l(θ; z) − ∇l(θ; z′)‖_∞ < B, then NoisyIHT is an (ε, δ)-differentially private algorithm.

The lemma is proved in Section A.4.
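The structure of Algorithm 4, a gradient step followed by a private top-s selection, can be sketched generically (plain Python, our own illustration; the private selection is passed in as a callback, and the toy example substitutes an exact top-1 projection for NoisyHT):

```python
def noisy_iht(grad_fn, private_top_s, theta0, eta, T):
    # Algorithm 4 skeleton: theta^{t+1} = PrivateTopS(theta^t - eta * grad).
    # Privacy of the full loop follows by composing the T selection steps.
    theta = list(theta0)
    for _ in range(T):
        g = grad_fn(theta)
        theta = private_top_s([t - eta * gi for t, gi in zip(theta, g)])
    return theta

# Toy check on least squares with a 1-sparse target, using an exact
# (non-private) top-1 projection in place of NoisyHT.
target = [0.0, 3.0, 0.0]
grad = lambda th: [2.0 * (t - w) for t, w in zip(th, target)]

def top1(v):
    j = max(range(len(v)), key=lambda i: abs(v[i]))
    return [v[i] if i == j else 0.0 for i in range(len(v))]

theta = noisy_iht(grad, top1, [0.0, 0.0, 0.0], eta=0.4, T=25)
assert abs(theta[1] - 3.0) < 1e-3  # iterates contract linearly to the target
```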
Similar to the noisy gradient descent (Algorithm 1), the privacy guarantee of Algorithm 4 is valid for any choice of T; however, a fast rate of convergence allows us to select a small T and thereby introduce less noise into the algorithm. To our delight, restricted strong convexity and restricted smoothness again lead to a linear rate of convergence even in the high-dimensional sparse setting.

Theorem 2. Let θ̂ = arg min_{‖θ‖₀≤s*} L_n(θ; Z). For iteration number t ≥ 0, suppose

⟨∇L_n(θ^t) − ∇L_n(θ̂), θ^t − θ̂⟩ ≥ α‖θ^t − θ̂‖₂²,   (3.6)
⟨∇L_n(θ^{t+1}) − ∇L_n(θ̂), θ^{t+1} − θ̂⟩ ≤ γ‖θ^{t+1} − θ̂‖₂²,   (3.7)

for constants 0 < α < γ. Let w₁, w₂, ..., w_s be the noise vectors added to θ^t − η⁰∇L_n(θ^t; Z) when the support of θ^{t+1} is iteratively selected, let S^{t+1} be the support of θ^{t+1}, and let w̃ be the noise vector added to the selected s-sparse vector. Then, for η⁰ = 2/(3γ), there exists an absolute constant c₀ such that choosing s ≥ c₀(γ/α)²s* guarantees

L_n(θ^{t+1}) − L_n(θ̂) ≤ ( 1 − ρ·(α/γ) − 2s*/(s + s*) ) ( L_n(θ^t) − L_n(θ̂) ) + C_γ ( Σ_{i∈[s]} ‖w_i‖_∞² + ‖w̃_{S^{t+1}}‖₂² ),

where 0 < ρ < 1 is an absolute constant and C_γ > 0 is a constant depending on γ.

Theorem 2 is proved in Section A.5. While conditions (3.6) and (3.7) resemble the ordinary strong convexity and smoothness conditions in appearance, they are in fact much weaker, because θ̂ and θ^t are both s-sparse.
It is not yet clear, however, whether these weaker conditions are satisfied by the GLM log-likelihood function, nor whether the linear convergence in terms of L_n implies any positive result for the parameter estimation accuracy ‖θ^{(T)} − θ̂‖₂². In the next section, we resolve these issues for high-dimensional sparse GLMs and obtain a parameter estimation accuracy result.

3.2.2 Noisy Iterative Hard Thresholding for High-Dimensional Sparse GLMs

Assuming that the true GLM parameter vector β* satisfies ‖β*‖₀ ≤ s*, we now specialize the results of Section 3.2.1 to the GLM negative log-likelihood function L_n(β; Z) = (1/n) Σ_{i=1}^n ( ψ(x_i^⊤β) − g(y_i) x_i^⊤β ).

Algorithm 5: Differentially Private Sparse Generalized Linear Regression
Input: L_n(β, Z), data set Z, sparsity level s, step size η⁰, privacy parameters ε, δ, noise scale B, number of iterations T, truncation parameter R.
Initialize β⁰ with ‖β⁰‖₀ ≤ s;
for t = 0 to T − 1 do
    Compute β^{t+0.5} = β^t − (η⁰/n) Σ_{i=1}^n (ψ′(x_i^⊤β^t) − Π_R(y_i)) x_i;
    β^{t+1} = NoisyHT(β^{t+0.5}, Z, s, ε/T, δ/T, η⁰B/n);
end
Output: β^{(T)}.

Some assumptions about the data set {(y_i, x_i)}_{i∈[n]} and its distribution will be helpful for analyzing the accuracy and privacy guarantees of Algorithm 5. The necessary assumptions for the high-dimensional sparse case are identical to those for the low-dimensional case, except with (D1) replaced by (D1′), as follows.

(D1′) Bounded design: there is a constant σ_x < ∞ such that ‖x‖_∞ < σ_x almost surely.

Because Algorithm 5 is a special case of the general Algorithm 4, the privacy guarantee of Algorithm 5 reduces to specializing Lemma 4 to GLMs, as follows.

Lemma 5.
If assumptions (D1′) and (G1) hold, then choosing B = 4(R + c₁)σ_x guarantees that Algorithm 5 is (ε, δ)-differentially private.

The lemma is proved in Section A.6. For the parameter estimation accuracy of Algorithm 5, Fact 4 combined with the sparsity of β̂, β* and of β^t for every t suffices for conditions (3.6) and (3.7) in Theorem 2 to hold. Invoking Theorem 2 in a proof by induction then leads to an upper bound for ‖β^{(T)} − β*‖₂². Below we state the main result; the detailed proof is in Section 7.1.

Theorem 3. Let {(y_i, x_i)}_{i∈[n]} be an i.i.d. sample from the GLM (3.1) with a true parameter vector satisfying ‖β*‖₀ ≤ s*. Suppose assumptions (D1′), (D2), (G1) and (G2) hold. Let the parameters of Algorithm 5 be chosen as follows.

• Sparsity level and step size. Set s = 4c₀(γ/α)²s* and η⁰ = 1/(2γ), where the constant c₀ is defined in Theorem 2 and the constants α, γ are defined in Fact 4.

• Truncation level. Set R = min( ess sup |y₁|, c₁ + √(2c₂c(σ) log n) ) ≲ √(c(σ) log n).

• Noise scale B. Set B = 4(R + c₁)σ_x.

• Number of iterations T. Let T = (2γ/ρα) log(6γn), where ρ is the absolute constant defined in Theorem 2.

• Initialization β⁰. Choose β⁰ so that ‖β⁰‖₀ ≤ s and ‖β⁰ − β̂‖₂ ≤ 3, where β̂ = arg min_{‖β‖₀≤s*} L_n(β; Z).
If n ≥ K · ( Rs* log d √(log(1/δ)) log n / ε ) for a sufficiently large constant K, it holds with probability at least 1 − c₃ exp(−c₄ log(d/(s* log n))) − c₃ exp(−c₄n) − c₃ exp(−c₄ log n) that the output β^{(T)} of Algorithm 5 satisfies

‖β^{(T)} − β*‖₂² ≲ c(σ) ( s* log d / n + (s* log d)² log(1/δ) log³ n / (n²ε²) ).   (3.8)

Theorem 3 is proved in Section 7.1. As for the low-dimensional GLM algorithm, the step size, number of iterations and initialization are chosen to ensure convergence; the initialization condition, as in [38], is standard in the literature and can be extended to ‖β⁰ − β̂‖₂ ≤ 3 max(1, ‖β*‖₂). The truncation level R is chosen to ensure privacy while keeping as much of the data intact as possible; when the distribution of y has bounded support, for example in the logistic model, R can be chosen to be an O(1) constant, thereby saving an extra factor of O(log n) in the second term of (3.8). The scaling of n versus d, s*, ε and δ in Theorem 3 is nearly optimal, as the corresponding lower bound, Theorem 6, will show that no estimator can achieve low ℓ₂ error unless the assumed scaling holds, and that the statistical accuracy of Algorithm 5 cannot be further improved except possibly by factors of log n.

4 Privacy-constrained Minimax Lower Bounds

Section 3 proposed differentially private algorithms for estimating GLM parameters and obtained convergence rates for these algorithms.
We shall show in this section that these convergence rates cannot be improved by any other (ε, δ)-differentially private estimator beyond possibly factors of log n, via privacy-constrained lower bounds of the form

inf_{M∈M_{ε,δ}} sup_{β∈Θ} E‖M(y, X) − β‖₂² ≳ r(n, d, Θ, σ, ε, δ),   (4.1)

where M_{ε,δ} is the collection of all (ε, δ)-differentially private estimators, Θ ⊆ R^d is a parameter space to which the true value of β is assumed to belong, and the expectation is taken over y, X and the randomness of M.

We shall provide precise forms of the lower bound r(n, d, Θ, σ, ε, δ) for both low-dimensional and high-dimensional sparse GLMs, via a broad generalization of the "tracing attack" argument [10, 26, 25, 50] for privacy-constrained minimax lower bounds. A tracing attack is an algorithm that takes a single candidate datum as input and attempts to infer whether this candidate belongs to a given data set, by comparing the candidate with some summary statistics computed from the data set. Statisticians can think of a tracing attack as a hypothesis test which rejects the null hypothesis that the candidate is outside the data set for large values of some test statistic. The hypothesis testing formulation naturally motivates some desiderata for a tracing attack:

• Soundness (type I error control): if the candidate does not belong to the data set, the tracing attack is likely to take small values.

• Completeness (type II error control): if the candidate does belong, the tracing attack is likely to take large values.

For example, [26, 31, 11] showed that, if the random sample X and the candidate z are drawn from a Gaussian distribution with mean µ, tracing attacks of the form ⟨M(X) − µ, z − µ⟩ are sound and complete provided that M(X) is an accurate estimator of µ.
This accuracy requirement in turn connects tracing attacks with risk lower bounds for differentially private algorithms: if an estimator M(X) is differentially private, it cannot be too close to the estimand, for otherwise the existence of a sound and complete tracing attack would contradict the guarantees of differential privacy.

Designing sound and complete tracing attacks, therefore, is crucial to the sharpness of privacy-constrained minimax lower bounds. Besides the Gaussian mean tracing attack mentioned above, there are some successful tracing attacks proposed for specific problems, such as top-k selection [51] and linear regression [11], but a general recipe for the design and analysis of tracing attacks has not been available. In Section 4.1, we construct a tracing attack applicable to general parametric families of distributions and describe its utility for privacy-constrained minimax lower bounds. This general approach is then specialized to low-dimensional and high-dimensional sparse GLMs, in Sections 4.2 and 4.3 respectively, to establish lower bound results that match the upper bound results in Section 3 up to factors of log n.

4.1 The Score Attack

Given a parametric family of distributions {f_θ(x) : θ ∈ Θ} with Θ ⊆ R^d, the score statistic, or simply the score, is given by S_θ(x) := ∇_θ log f_θ(x). If x ∼ f_θ, we have ES_θ(x) = 0 and Var S_θ(x) = I(θ), where I(θ) is the Fisher information matrix of f_θ. Using the score statistic, we define the score attack as

A_θ(z, M(X)) := ⟨M(X) − θ, S_θ(z)⟩.   (4.2)

The score attack conjectures that z belongs to X for large values of A_θ(z, M(X)). In particular, if f_θ(x) is the density of N(θ, I), the score attack coincides with the tracing attack for Gaussian means studied in [26, 31, 11].
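As a small sketch (plain Python; the Gaussian mean example and all names are our own), the score attack (4.2) for N(θ, I) reduces to the inner product ⟨M(X) − θ, z − θ⟩, and averaging it over the in-sample candidates recovers a nonnegative quantity:

```python
import random

def score_attack(m_of_x, theta, score_z):
    # (4.2): A_theta(z, M(X)) = <M(X) - theta, S_theta(z)>.
    return sum((m - t) * s for m, t, s in zip(m_of_x, theta, score_z))

# Gaussian mean example: for N(theta, I), S_theta(z) = z - theta, so the score
# attack is exactly the tracing attack <M(X) - theta, z - theta>.
rng = random.Random(0)
theta = [1.0, -2.0]
X = [[t + rng.gauss(0.0, 1.0) for t in theta] for _ in range(500)]
m_of_x = [sum(col) / len(X) for col in zip(*X)]  # (non-private) sample mean

gaussian_score = lambda z: [zi - ti for zi, ti in zip(z, theta)]
in_sample = sum(score_attack(m_of_x, theta, gaussian_score(x)) for x in X) / len(X)
# Averaged over the sample, the attack equals ||M(X) - theta||_2^2 >= 0,
# matching the "completeness" intuition; a fresh candidate has mean zero score.
assert in_sample > 0.0
```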
As argued earlier, an e\ufb00ective tracing attack should ideally be \u201csound\u201d (low type I error) and \u201ccomplete\u201d (low Type II error). This is indeed the case for our score attack. Theorem 4. Let X = {x1, x2, \u00b7 \u00b7 \u00b7 , xn} be an i.i.d. sample drawn from f\u03b8. For each i \u2208[n], let X\u2032 i denote the data set obtained from X by replacing xi with an independent copy x\u2032 i \u223cf\u03b8. 1. Soundness: for each i \u2208[n], EA\u03b8(xi, M(X\u2032 i)) = 0; E|A\u03b8(xi, M(X\u2032 i))| \u2264 q E\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)). (4.3) 2. Completeness: if for every j \u2208[d], log f\u03b8(X) is continuously di\ufb00erentiable with respect to \u03b8j and | \u2202 \u2202\u03b8j log f\u03b8(X)| < gj(X) such that E|gj(X)M(X)j| < \u221e, we have X i\u2208[n] EA\u03b8(xi, M(X)) = X j\u2208[d] \u2202 \u2202\u03b8j EM(X)j. (4.4) Theorem 4 is proved in Section 7.2. The special form of \u201ccompleteness\u201d for Gaussian and Beta-Binomial families have been discovered as \u201c\ufb01ngerprinting lemma\u201d in the literature [53, 10, 51, 31]. It may not be clear yet how the soundness and completeness properties would imply lower bounds for E\u2225M(X) \u2212\u03b8\u22252 2. For the speci\ufb01c attacks designed for Gaussian mean estimation [31] and top-k selection [51], it has been observed that, if M is an (\u03b5, \u03b4)di\ufb00erentially private algorithm, one can prove inequalities of the form EA\u03b8(xi, M(X)) \u2264 EA\u03b8(xi, M(X\u2032 i))+O(\u03b5)E|A\u03b8(xi, M(X\u2032 i))|. Suppose such relations hold for the score attack 18 \fas well, the soundness property (4.3) would then imply X i\u2208[n] EA\u03b8(xi, M(X)) \u2264 q E\u2225M(X) \u2212\u03b8\u22252 2 \u00b7 n p \u03bbmax(I(\u03b8))O(\u03b5). We give precise statement of such an inequality in Section 4.1.1. 
On the other hand, if we can also bound P i\u2208[n] EA\u03b8(xi, M(X)) from below by some positive quantity, a lower bound for E\u2225M(X) \u2212\u03b8\u22252 2 is immediately implied. Completeness may help us in this regard: when EM(X)j is close to \u03b8j, it is reasonable to expect that \u2202 \u2202\u03b8j EM(X)j is bounded away from zero. Indeed several versions of this argument, often termed \u201cstrong distribution\u201d, exist in the literature [26, 50] and have led to lower bounds for Gaussian mean estimation and top-k selection. In Section 4.1.2, we consider a systematic approach to lower bounding \u2202 \u2202\u03b8j EM(X)j via Stein\u2019s Lemma [48, 49]. The technical results in Sections 4.1.1 and 4.1.2 combined with Theorem 4 would enable us to later prove concrete minimax lower bounds for GLMs. 4.1.1 Score Attacks and Di\ufb00erential Privacy In Theorem 4, we have found that, when the data set X\u2032 i does not include xi, the score attack is unlikely to take large values: EA\u03b8(xi, M(X\u2032 i)) = 0; E|A\u03b8(xi, M(X\u2032 i))| \u2264 q E\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)). If M is di\ufb00erentially private, the distribution of M(X\u2032 i) is close to that of M(X); as a result, the inequalities above can be related to the case where the data set X does include the candidate xi. Lemma 6. If M is an (\u03b5, \u03b4)-di\ufb00erentially private algorithm with 0 < \u03b5 < 1 and \u03b4 \u22650, then for every T > 0, EA\u03b8(xi, M(X)) \u22642\u03b5 q E\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)) + 2\u03b4T + Z \u221e T P (|A\u03b8(xi, M(X))| > t) . (4.5) Lemma 6 is proved in Section 7.2.1. The quantity on the right side of (4.5) is determined by the statistical model f\u03b8(x) and the choice of T. In Sections 4.2 and 4.3, we work out its speci\ufb01c forms for low-dimensional and high-dimensional sparse GLMs. 
19 \f4.1.2 Score Attacks and Stein\u2019s Lemma Let us denote EX|\u03b8M(X) by g(\u03b8), then g is a map from \u0398 to \u0398, and we are interested in bounding \u2202 \u2202\u03b8j gj(\u03b8) from below. Stein\u2019s Lemma [48, 49], as stated below, suggests some promising directions. Lemma 7 (Stein\u2019s Lemma). Let Z be distributed according to some density p(z) that is continuously di\ufb00erentiable with respect to z and let h : R \u2192R be a di\ufb00erentiable function such that E|h\u2032(Z)| < \u221e. We have Eh\u2032(Z) = E \u0014\u2212h(Z)p\u2032(Z) p(Z) \u0015 . In particular, if p(z) = (2\u03c0)\u22121/2e\u2212z2/2, we have Eh\u2032(Z) = EZh(Z). Stein\u2019s Lemma implies that, by imposing appropriate prior distributions on \u03b8, one can obtain a lower bound for \u2202 \u2202\u03b8j gj(\u03b8) on average over the prior distribution of \u03b8, as follows. Lemma 8. Let \u03b8 be distributed according to a density \u03c0 with marginal densities {\u03c0j}j\u2208[d]. If for every j \u2208[d], \u03c0j, gj satisfy the regularity conditions in Lemma 7, we have E\u03c0 \uf8eb \uf8edX j\u2208[d] \u2202 \u2202\u03b8j gj(\u03b8) \uf8f6 \uf8f8\u2265E\u03c0 \uf8eb \uf8edX j\u2208[d] \u2212\u03b8j\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \uf8f6 \uf8f8\u2212 v u u u tE\u03c0\u2225g(\u03b8) \u2212\u03b8\u22252 2 \u00b7 E\u03c0 \uf8ee \uf8f0X j\u2208[d] \u0012\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u00132\uf8f9 \uf8fb (4.6) Lemma 8 is proved in Section 7.2.2. Often we may assume without the loss of generality that E\u03c0\u2225g(\u03b8) \u2212\u03b8\u22252 2 \u2264E\u03c0EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 < C for some constant C when the sample size n is su\ufb03ciently large, the right side is completely determined by the choice of \u03c0, as the following example illustrates: Example 1. 
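The Gaussian case of Stein's Lemma, Eh′(Z) = EZh(Z) for Z ~ N(0, 1), is easy to verify by Monte Carlo. The following sketch (a hypothetical helper, not from the paper) estimates both sides:

```python
import numpy as np

def stein_check(h, h_prime, n_samples=200_000, seed=0):
    """Monte Carlo check of Stein's Lemma for Z ~ N(0, 1):
    returns estimates of E[h'(Z)] and E[Z h(Z)], which should agree."""
    z = np.random.default_rng(seed).standard_normal(n_samples)
    return h_prime(z).mean(), (z * h(z)).mean()
```

For h(z) = z³, both sides should be close to EZ⁴ = 3.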
Let \u03c0 be the density of N(0, I), then (4.6) reduces to E\u03c0 \uf8eb \uf8edX j\u2208[d] \u2202 \u2202\u03b8j gj(\u03b8) \uf8f6 \uf8f8\u2265 X j\u2208[d] E\u03c0j\u03b82 j \u2212 \u221a C sX j\u2208[d] E\u03c0j\u03b82 j = d \u2212 \u221a Cd \u2273d. In view of the completeness property (4.4), Lemma 8 suggests an average lower bound for P i\u2208[n] EA\u03b8(xi, M(X)) over some prior distribution \u03c0(\u03b8), with the speci\ufb01c form of this average lower bound entirely determined by the choice of \u03c0. 20 \f4.1.3 From Score Attacks to Lower Bounds Let us combine Theorem 4 with Lemmas 6 and 8 to understand how the score attack leads to privacy-constrained minimax lower bounds. Let \u03c0 be a prior distribution supported over the parameter space \u0398 with marginal densities {\u03c0j}j\u2208[d], and assume without the loss of generality that EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 < C for every \u03b8 \u2208\u0398. The completeness part of Theorem 4 and Lemma 8 imply that X i\u2208[n] E\u03c0EX|\u03b8A\u03b8(xi, M(X)) \u2265E\u03c0 \uf8eb \uf8edX j\u2208[d] \u2212\u03b8j\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \uf8f6 \uf8f8\u2212 \u221a C v u u u tE\u03c0 \uf8ee \uf8f0X j\u2208[d] \u0012\u03c0\u2032 j(\u03b8j) \u03c0j(\u03b8j) \u00132\uf8f9 \uf8fb Since Lemma 6 holds for every \u03b8, it follows from the Lemma that X i\u2208[n] E\u03c0EX|\u03b8A\u03b8(xi, M(X)) \u22642n\u03b5 q E\u03c0EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)) + 2n\u03b4T + X i\u2208[n] Z \u221e T P (|A\u03b8(xi, M(X))| > t) . These two inequalities are true for every (\u03b5, \u03b4)-di\ufb00erentially private M, and they therefore suggest a lower bound for infM\u2208M\u03b5,\u03b4 E\u03c0EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2, which in turn lower bounds infM\u2208M\u03b5,\u03b4 sup\u03b8\u2208\u0398 EX|\u03b8\u2225M(X) \u2212\u03b8\u22252 2 since the maximum risk is greater than the average risk regardless of the prior distribution. 
Following this strategy, we shall obtain the privacy-constrained minimax lower bounds for GLM problems, by choosing an appropriate prior distribution \u03c0 and working out the speci\ufb01c forms of the two inequalities (4.5) and (4.6) in the context of GLMs. 4.2 The Classical Low-dimensional Setting We \ufb01rst consider the low-dimensional d = o(n) setting. For the generalized linear model f\u03b2(y|x) = h(y, \u03c3) exp \u0012x\u22a4\u03b2y \u2212\u03c8(x\u22a4\u03b2) c(\u03c3) \u0013 ; x \u223cfx, and a candidate datum (\u02dc y, \u02dc x), the score attack, as de\ufb01ned by (4.2), takes the form A\u03b2((\u02dc y, \u02dc x), M(y, X)) = 1 c(\u03c3) M(y, X) \u2212\u03b2, [\u02dc y \u2212\u03c8\u2032(\u02dc x\u22a4\u03b2)]\u02dc x \u000b . (4.7) 21 \fFor the prior distribution of \u03b2, we choose \u03c0(\u03b2) to be the density of N(0, I). The strategy outlined in Section 4.1 implies the following lower bound result. Theorem 5. Consider i.i.d. observations (y1, x1), \u00b7 \u00b7 \u00b7 , (yn, xn) \u2208R \u00d7 Rd, where x \u223cfx such that E(xx\u22a4) is diagonal with 0 < \u03bbmax(E(xx\u22a4)) < C < \u221e, \u2225x\u22252 \u2272 \u221a d almost surely, and y given x follows the conditional distribution f\u03b2(y|x) = h(y, \u03c3) exp \u0012x\u22a4\u03b2y \u2212\u03c8(x\u22a4\u03b2) c(\u03c3) \u0013 . If 0 < \u2225\u03c8 \u2032\u2032\u2225\u221e< c2 < \u221e, 0 < \u03b5 < 1, 0 < \u03b4 < n\u2212(1+\u03b3) for some \u03b3 > 0, then for su\ufb03ciently large n and every (\u03b5, \u03b4)-di\ufb00erentially private M such that \u2225M(y, X) \u2212\u03b2\u22252 2 \u2272d and E\u2225M(y, X) \u2212\u03b2\u22252 2 = o(1), sup \u03b2\u2208Rd E\u2225M(y, X) \u2212\u03b2\u22252 2 \u2273c(\u03c3) d2 n2\u03b52. (4.8) Theorem 5 is proved in Section B.1. The (\u03b5, \u03b4)-di\ufb00erentially private estimators M\u03b5,\u03b4 are also subject to the non-private minimax risks lower bound for GLMs, infM sup\u03b2 E\u2225M(y, X)\u2212 \u03b2\u22252 2 \u2273c(\u03c3)d/n. 
It then follows from (4.8) that

inf_{M ∈ M_{ε,δ}} sup_β E‖M(y, X) − β‖₂² ≳ c(σ) (d/n + d²/(n²ε²)).

The lower bound matches the statistical accuracy of noisy gradient descent, Theorem 1, up to factors of log n under the usual setting of δ = n^{−α} for some constant α > 1. Besides showing the optimality of noisy gradient descent, this comparison also suggests that the cost of privacy, as measured by the squared ℓ₂-norm, in GLM parameter estimation is of the order d²/n²ε².

4.3 The High-Dimensional Sparse Setting

We now consider the setting where d, the dimension of Θ, dominates the sample size n, but each β ∈ Θ is assumed to be s∗-sparse, that is, ‖β‖₀ ≤ s∗. As seen in the following theorem, the sparsity assumption leads to a lower bound that depends primarily on the sparsity, or the "intrinsic dimension" of β, and only logarithmically on the ambient dimension d. For high-dimensional sparse GLMs, we consider a modification of the classical GLM score attack (4.7), the sparse GLM score attack:

A_{β,s∗}((ỹ, x̃), M(y, X)) = (1/c(σ)) ⟨(M(y, X) − β)_{supp(β)}, [ỹ − ψ′(x̃⊤β)]x̃⟩. (4.9)

For the prior π, we have to choose some distribution supported over the set {β : β ∈ R^d, ‖β‖₀ ≤ s∗}. Specifically, we consider β generated as follows: let β̃₁, β̃₂, · · · , β̃_d be drawn i.i.d. from N(0, 1), let I_{s∗} be the index set of the s∗ entries of β̃ with the greatest absolute values, so that |I_{s∗}| = s∗ by definition, and define β_j = β̃_j 1(j ∈ I_{s∗}). The score attack strategy then leads to the following lower bound result. Theorem 6. Consider n i.i.d.
observations (y1, x1), \u00b7 \u00b7 \u00b7 , (yn, xn), where x \u223cfx such that E(xx\u22a4) is diagonal with 0 < \u03bbmax(E(xx\u22a4)) < C < \u221e, \u2225x\u2225\u221e< c < \u221ealmost surely, and y given x follows the conditional distribution f\u03b2(y|x) = h(y, \u03c3) exp \u0012x\u22a4\u03b2y \u2212\u03c8(x\u22a4\u03b2) c(\u03c3) \u0013 . If 0 < \u2225\u03c8 \u2032\u2032\u2225\u221e= c2 < \u221e, 0 < \u03b5 < 1, 0 < \u03b4 < n\u2212(1+\u03b3) for some \u03b3 > 0, s = o (d1\u2212\u03b3) for some \u03b3 > 0, then for su\ufb03ciently large n and every (\u03b5, \u03b4)-di\ufb00erentially private M such that \u2225M(y, X) \u2212\u03b2\u22252 2 \u2272s\u2217and E\u2225M(y, X) \u2212\u03b2\u22252 2 = o(1), sup \u03b2\u2208Rd,\u2225\u03b2\u22250\u2264s\u2217E\u2225M(y, X) \u2212\u03b2\u22252 2 \u2273c(\u03c3)(s\u2217log d)2 n2\u03b52 . (4.10) Theorem 6 is proved in Section B.2. In conjunction with the non-private minimax lower bound infM sup\u03b2\u2208Rd,\u2225\u03b2\u22250\u2264s E\u2225M(y, X) \u2212\u03b2\u22252 2 \u2273c(\u03c3)s\u2217log d/n, (4.10) implies inf M\u2208M\u03b5,\u03b4 sup \u03b2\u2208Rd,\u2225\u03b2\u22250\u2264s E\u2225M(y, X) \u2212\u03b2\u22252 2 \u2273c(\u03c3) \u0012s\u2217log d n + (s\u2217log d)2 n2\u03b52 \u0013 . By comparing the privacy-constrained minimax lower bound with Theorem 3, we can see that the noisy iterative hard thresholding algorithm for sparse GLMs is optimal up to factors of log n under the usual setting of \u03b4 = n\u2212\u03b1, and that the cost of privacy, as measured by squared \u21132 norm, in sparse GLM parameter estimation is of the order (s\u2217log d)2/n2\u03b52. 5 Numerical Results In this section, we investigate the numerical performance of the proposed privacy-preserving algorithms by conducting experiments with both simulated and real data sets. The numerical results also illustrate our theoretical \ufb01ndings on di\ufb00erentially private GLM parameter estimation. 
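Before turning to the experiments, the s∗-sparse prior construction used in Section 4.3 (keep the s∗ largest in magnitude of d i.i.d. N(0, 1) draws) can be sketched as follows; the function name is hypothetical:

```python
import numpy as np

def sparse_gaussian_prior(d, s, seed=0):
    """Draw beta from the s*-sparse prior of Section 4.3: generate d i.i.d.
    N(0, 1) variables and keep only the s with the largest absolute values."""
    rng = np.random.default_rng(seed)
    beta_tilde = rng.standard_normal(d)
    support = np.argsort(-np.abs(beta_tilde))[:s]  # indices of top-s entries
    beta = np.zeros(d)
    beta[support] = beta_tilde[support]
    return beta
```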
23 \f5.1 Simulated Data For the low-dimensional GLM, our simulated data set is constructed as follows. For our desired choice of d and n, we sample \u03b2 uniformly at random from the unit sphere in Rd, draw coordinates of the design vector xi independently from the uniform distribution over (\u22121, 1) for each i \u2208[n], and sample yi from the logistic regression model, that is yi following the Bernoulli distribution with success probability 1 1+exp(\u2212x\u22a4 i \u03b2). Using the simulated data, we study the numerical performance of Algorithm 1 via three sets of experiments. In each experiment, the algorithm is initialized with \u03b2 = 0 \u2208Rd, with step size \u03b70 = 1 for each iteration. (a). Fix n = 40000, \u03b5 = 0.5 and \u03b4 = (2n)\u22121, and compare the iterates of Algorithm 1 with the true \u03b2 for d = 10, 20, or 40. As displayed in Figure 1(a), the log error log(\u2225\u03b2t \u2212\u03b2\u22252 2) is linear in t when d = 10 but deteriorates as d increases, con\ufb01rming the theoretical result in Theorem 1. (b). Fix d = 20, \u03b5 = 0.5 and \u03b4 = (2n)\u22121, and compare the iterates of Algorithm 1 with the true \u03b2 for n = 20000, 40000, or 80000. As predicted by Theorem 1, log(\u2225\u03b2t \u2212\u03b2\u22252 2) is linear in t when n = 80000 but deteriorates as n decreases. (c). Fix d = 20, n = 40000 and \u03b4 = (2n)\u22121, and compare the iterates of Algorithm 1 with the true \u03b2 for \u03b5 = 0.2, 0.5, 0.8, or \u221e(non-private). The decrease in log(\u2225\u03b2t \u2212\u03b2\u22252 2) as \u03b5 increases is consistent with Theorem 1. 
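The simulated-data construction just described can be sketched in Python as follows (a hypothetical sketch, not the authors' code):

```python
import numpy as np

def simulate_logistic(n, d, seed=0):
    """Simulated data as in Section 5.1: beta uniform on the unit sphere,
    coordinates of x_i i.i.d. Uniform(-1, 1), and y_i ~ Bernoulli(p_i)
    with p_i = 1 / (1 + exp(-x_i' beta))."""
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal(d)
    beta /= np.linalg.norm(beta)          # uniform direction on the sphere
    X = rng.uniform(-1.0, 1.0, size=(n, d))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    y = rng.binomial(1, p)
    return X, y, beta
```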
[Figure 1: Log-distance between the iterates of Algorithm 1 and the true parameter β under various settings of n, d, ε and δ; in each panel, log(‖βt − β‖₂²) is plotted against the iteration count.]

For the high-dimensional sparse GLM, the simulated data set is constructed in the same way as in the low-dimensional case, except that the s-sparse true parameter β is obtained by concatenating a random draw from the unit sphere in R^s with 0 ∈ R^{d−s}. We conduct three sets of experiments to study the numerical performance of Algorithm 5. In each experiment, the algorithm is initialized with β = 0 ∈ R^d, with step size η0 = 1 for each iteration and the sparsity level set at twice the true sparsity. (a). Fix d = 10000, n = 40000, ε = 0.5 and δ = (2n)⁻¹, and compare the iterates of Algorithm 5 with the true β for s = 10, 20, or 40. As suggested by Theorem 3, the log error log(‖βt − β‖₂²) is linear in t when s = 10 but deteriorates as s increases. (b). Fix d = 10000, s = 10, ε = 0.5 and δ = (2n)⁻¹, and compare the iterates of Algorithm 5 with the true β for n = 20000, 40000, or 80000. log(‖βt − β‖₂²) is linear in t when n = 80000 or n = 40000, but deteriorates as n decreases. (c).
Fix d = 10000, n = 40000, s = 10 and δ = (2n)⁻¹, and compare the iterates of Algorithm 5 with the true β for ε = 0.2, 0.5, 0.8, or ∞ (non-private). The decrease in log(‖βt − β‖₂²) as ε increases confirms Theorem 3.

[Figure 2: Log-distance between the iterates of Algorithm 5 and the true parameter β under various settings of n, d, s, ε and δ; in each panel, log(‖βt − β‖₂²) is plotted against the iteration count.]

5.2 Real Data

For the real data experiment, we consider the Swarm Behavior Data Set, collected by the Human Perception of Swarming project at the University of New South Wales (https://unsw-swarm-survey.netlify.app/) and made publicly available at the UCI Machine Learning Repository [17]. In this data set, each of the n = 24016 instances contains d = 2400 attributes describing the behavior (velocity, direction, location, etc.) of 200 individuals in the system, and each instance is assigned a binary class label, "flocking" or "not flocking". A system of individual birds, insects, or people is said to be "flocking" if its members are perceived to move as a group with the same velocity without colliding with one another.
In our experiment, we attempt to classify these instances into "flocking" or "not flocking" by our Algorithm 5 for high-dimensional sparse GLMs. We randomly split the data set into two halves, train a sparse logistic regression model using one half, and predict the labels of the other half with this model. For fitting the sparse logistic model on the training set, we run Algorithm 5 for 50 iterations with step size η0 = 0.5 and initial value β0 = 0 ∈ R^{2401} (including the intercept). For various settings of s, ε and δ, the average misclassification rate (and its standard error) over repetitions of the experiment is displayed in the tables below. The results suggest that the classification accuracy indeed worsens as the privacy requirement becomes more stringent, but the loss of accuracy is mild compared to the non-private ε = ∞ case.

(a) δ = 1/2n
          s = 25      s = 50      s = 100
ε = 0.2   0.33(.05)   0.21(.05)   0.13(.05)
ε = 0.5   0.28(.05)   0.20(.05)   0.10(.02)
ε = ∞     0.30(.05)   0.21(.05)   0.09(.03)

(b) δ = 1/n²
          s = 25      s = 50      s = 100
ε = 0.2   0.32(.05)   0.22(.05)   0.13(.06)
ε = 0.5   0.33(.04)   0.19(.05)   0.08(.02)
ε = ∞     0.30(.05)   0.21(.05)   0.09(.02)

Figure 3: Mean and standard error of misclassification rates of Algorithm 5 on the randomly drawn test subset of the Swarm Behavior Data Set.

6 Discussion

In this paper, we studied the cost of differential privacy in estimating the parameters of GLMs. We designed differentially private algorithms, based on projected gradient descent, that achieve fast, linear convergence to the optimal non-private solution, and analyzed their statistical accuracy with respect to the true parameters. The theoretical properties of our algorithms are demonstrated in numerical experiments with real and simulated data sets.
The accuracy of these algorithms is shown to be optimal up to logarithmic factors, via lower bounds on the privacy-constrained minimax risk. These lower bounds are established by the score attack framework, which generalizes prior work on tracing attacks for privacy-constrained minimax lower bounds. The upper and lower bounds together lead to a clear characterization of the cost of privacy in estimating GLM parameters. This paper suggests several promising directions for further research. On the algorithmic side, since our convergence analysis of differentially private algorithms can be applied to other M-estimation problems satisfying restricted strong convexity and restricted smoothness, it is of interest to study their performance in problems such as low-rank matrix recovery and regression. Our results on high-dimensional sparse GLMs also raise questions about the interplay between privacy and other structural assumptions, for example, group-structured sparsity, approximate sparsity, or low-rankness as mentioned above. On the statistical optimality side, our score attack framework may lead to lower bounds for a much larger variety of statistical models than generalized linear models. It is also of significant value to prove sharper lower bounds that can potentially close the remaining logarithmic gap between upper and lower bounds, and to develop sharp lower bounds for differentially private confidence intervals and hypothesis testing.

7 Proofs

In this section, we prove the main technical results of this paper, Theorems 3 and 4.

7.1 Proof of Theorem 3

Proof of Theorem 3. We shall first define several favorable events under which the desired convergence does occur, and then show that the probability that any of these favorable events fails to happen is negligible. These events are E1 = {(3.3) and (3.4) hold}, E2 = {Π_R(y_i) = y_i, ∀ i ∈ [n]}, E3 = {‖βt − β̂‖₂ ≤ 3, 0 ≤ t ≤ T}.
We \ufb01rst analyze the behavior of Algorithm 5 under these events. The assumed scaling of n \u2265K \u00b7 \u0010 Rs\u2217log d p log(1/\u03b4) log n/\u03b5 \u0011 implies that n \u2265K\u2032s\u2217log d/n for a su\ufb03ciently large K\u2032. Since \u2225\u03b2t\u22250 \u2264s \u224ds\u2217for every t and \u2225\u02c6 \u03b2\u22250 \u2264s\u2217by de\ufb01nition, the RSM condition (3.4) implies that for every t, \u27e8\u2207Ln(\u03b2t) \u2212\u2207Ln( \u02c6 \u03b2), \u03b2t \u2212\u02c6 \u03b2\u27e9\u22644\u03b3 3 \u2225\u03b2t \u2212\u02c6 \u03b2\u22252 2. (7.1) 27 \fSimilarly, under event E3, the RSC condition (3.3) implies that \u27e8\u2207Ln(\u03b2t) \u2212\u2207Ln( \u02c6 \u03b2), \u03b2t \u2212\u02c6 \u03b2\u27e9\u22652\u03b1 3 \u2225\u03b2t \u2212\u02c6 \u03b2\u22252 2. (7.2) These two inequalities and our choice of parameters s, \u03b7 now allow Theorem 2 to apply. Let wt 1, wt 2, \u00b7 \u00b7 \u00b7 , wt s be the noise vectors added to \u03b2t \u2212\u03b70\u2207Ln(\u03b2t; Z) when the support of \u03b2t+1 is iteratively selected, St+1 be the support of \u03b2t+1, and \u02dc wt be the noise vector added to the selected s-sparse vector. De\ufb01ne Wt = C\u03b3 \u0010P i\u2208[s] \u2225wt i\u22252 \u221e+ \u2225\u02dc wt St+1\u22252 2 \u0011 , then Theorem 2 leads to Ln(\u03b2(T)) \u2212Ln( \u02c6 \u03b2) \u2264 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T \u0010 Ln(\u03b20) \u2212Ln( \u02c6 \u03b2) \u0011 + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk \u2264 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T 2\u03b3 3 \u2225\u03b20 \u2212\u02c6 \u03b2\u22252 2 + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk \u2264 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T 6\u03b3 + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk. 
(7.3) The second inequality is a consequence of (7.1), and the third inequality follows from the assumption that \u2225\u03b20 \u2212\u02c6 \u03b2\u22252 \u22643. On the other hand, we can lower bound Ln(\u03b2(T)) \u2212Ln( \u02c6 \u03b2) as follows: by (7.2), Ln(\u03b2(T)) \u2212Ln( \u02c6 \u03b2) \u2265Ln(\u03b2(T)) \u2212Ln(\u03b2\u2217) \u2265\u03b1 3 \u2225\u03b2(T) \u2212\u03b2\u2217\u22252 2 \u2212\u27e8\u2207Ln(\u03b2\u2217), \u03b2\u2217\u2212\u03b2(T)\u27e9. (7.4) Combining (7.3) and (7.4) yields \u03b1 3 \u2225\u03b2(T) \u2212\u03b2\u2217\u22252 2 \u2264\u27e8\u2207Ln(\u03b2\u2217), \u03b2\u2217\u2212\u03b2(T)\u27e9+ \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T 6\u03b3 + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk \u2264\u2225\u2207Ln(\u03b2\u2217)\u2225\u221e \u221a s + s\u2217\u2225\u03b2\u2217\u2212\u03b2(T)\u22252 + \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T 6\u03b3 + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk = \u2225\u2207Ln(\u03b2\u2217)\u2225\u221e \u221a s + s\u2217\u2225\u03b2\u2217\u2212\u03b2(T)\u22252 + 1 n + T\u22121 X k=0 \u0012 1 \u2212\u03c1 \u03b1 2\u03b3 \u0013T\u2212k\u22121 Wk. (7.5) The last step follows from our choice of T = (2\u03b3/\u03c1\u03b1) log(6\u03b3n). Now let us de\ufb01ne two events 28 \fthat allow for high-probability bounds of the right side. E4 = \uf8f1 \uf8f2 \uf8f3max t Wt \u2264K Rs\u2217log d p log(1/\u03b4) log n n\u03b5 !2\uf8fc \uf8fd \uf8fe, E5 = ( \u2225\u2207Ln(\u03b2\u2217)\u2225\u221e\u22644\u03c3x \u221ac2 r log d n ) . Under E4, E5, we can conclude from (7.5) that \u2225\u03b2(T) \u2212\u03b2\u2217\u22252 \u2272 p c(\u03c3) r s\u2217log d n + s\u2217log d p log(1/\u03b4) log3/2 n n\u03b5 . ! We have shown so far that the desired rate of convergence (3.8) holds when Ei occurs for 1 \u2264i \u22645; we now turn to controlling the probability that any of the \ufb01ve events fails to happen, P5 i=1 P(Ec i ). 
\u2022 By Proposition 4, P(Ec 1) \u2264c3 exp(\u2212c4n) under the assumptions of Theorem 3. \u2022 We have P(Ec 2) \u2264c3 exp(\u2212c4 log n) by the choice of R, and assumptions (G1), (G2) which imply the following bound of moment generating function of yi: we have log E exp \u0012 \u03bb \u00b7 yi \u2212\u03c8\u2032(x\u22a4 i \u03b2) c(\u03c3) \f \f \fxi \u0013 = 1 c(\u03c3) \u0000\u03c8(x\u22a4 i \u03b2 + \u03bb) \u2212\u03c8(x\u22a4 i \u03b2) \u2212\u03bb\u03c8\u2032(x\u22a4 i \u03b2) \u0001 \u2264 1 c(\u03c3) \u00b7 \u03bb2\u03c8 \u2032\u2032(x\u22a4 i \u03b2 + \u02dc \u03bb) 2 for some \u02dc \u03bb \u2208(0, \u03bb). It follows that E exp \u0010 \u03bb \u00b7 yi\u2212\u03c8\u2032(x\u22a4 i \u03b2) c(\u03c3) \f \f \fxi \u0011 \u2264exp \u0010 c2\u03bb2 2c(\u03c3) \u0011 because \u2225\u03c8 \u2032\u2032\u2225\u221e< c2. \u2022 For E3, we have P(Ec 3) \u2264T \u00b7 c3 exp(\u2212c4 log(d/s\u2217)) = c3 exp(\u2212c4 log(d/s\u2217log n)) by the initial condition \u2225\u03b20 \u2212\u02c6 \u03b2\u22253 2 and proof by induction via the following lemma, to be proved in Section A.7.1. Lemma 9. Under the assumptions of Theorem 3 Let \u03b2k, \u03b2k+1 be the kth and (k+1)th iterates of Algorithm 5. If \u2225\u03b2k \u2212\u02c6 \u03b2\u22252 \u22643, we have \u2225\u03b2k+1 \u2212\u02c6 \u03b2\u22252 \u22643 with probability at least 1 \u2212c3 exp(\u2212c4 log(d/s\u2217)). \u2022 For E4, we invoke an auxiliary lemma to be proved in Section A.7.2. Lemma 10. Consider w \u2208Rk with w1, w2, \u00b7 \u00b7 \u00b7 , wk i.i.d. \u223cLaplace(\u03bb). For every C > 1, P \u0000\u2225w\u22252 2 > kC2\u03bb2\u0001 \u2264ke\u2212C P \u0000\u2225w\u22252 \u221e> C2\u03bb2 log2 k \u0001 \u2264e\u2212(C\u22121) log k. 29 \fFor each iterate t, the individual coordinates of \u02dc wt, wt i are sampled i.i.d. from the Laplace distribution with scale (2\u03b3)\u22121 \u00b7 2B\u221a 3s log(T/\u03b4) n\u03b5/T , where the noise scale B \u2272R and T \u224dlog n by our choice. 
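The tail bound of Lemma 10 can also be checked empirically. The sketch below (a hypothetical helper, not from the paper) estimates P(‖w‖₂² > kC²λ²) by simulation, for comparison with the bound k·e^{−C}:

```python
import numpy as np

def laplace_sq_norm_tail(k, lam, C, n_trials=20_000, seed=0):
    """Empirical estimate of P(||w||_2^2 > k C^2 lam^2) for w with k i.i.d.
    Laplace(lam) coordinates; Lemma 10 bounds this probability by k e^{-C}."""
    rng = np.random.default_rng(seed)
    w = rng.laplace(scale=lam, size=(n_trials, k))
    sq_norms = (w ** 2).sum(axis=1)
    return (sq_norms > k * C ** 2 * lam ** 2).mean()
```

For instance, with k = 10, λ = 1 and C = 5, the empirical tail probability should fall well below the bound 10·e^{−5} ≈ 0.067.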
If n \u2265K \u00b7 \u0010 Rs\u2217log d p log(1/\u03b4) log n/\u03b5 \u0011 for a suf\ufb01ciently large constant K, Lemma 10 implies that, with probability at least 1 \u2212 c3 exp(\u2212c4 log(d/(s\u2217log n)), maxt Wt is bounded by K \u0012 Rs\u2217log d\u221a log(1/\u03b4) log n n\u03b5 \u00132 for some appropriate constant K. \u2022 Under assumptions of Theorem 3, it is a standard probabilistic result (see, for example, [54] pp. 288) that P(Ec 5) \u22642e\u22122 log d. We have P5 i=1 P(Ec i ) \u2264c3 exp(\u2212c4 log(d/s\u2217log n)) + c3 exp(\u2212c4n) + c3 exp(\u2212c4 log n). The proof is complete. 7.2 Proof of Theorem 4 Proof. For soundness, we note that xi and M(X\u2032 i) are independent, and therefore EA\u03b8(xi, M(X\u2032 i)) = E\u27e8M(X\u2032 i) \u2212\u03b8, S\u03b8(xi)\u27e9= \u27e8EM(X\u2032 i) \u2212\u03b8, ES\u03b8(xi)\u27e9= 0. The last equality is true by the property of the score that ES\u03b8(z) = 0 for any z \u223cf\u03b8. As to the \ufb01rst absolute moment, we apply Jensen\u2019s inequality, E|A\u03b8(xi, M(X\u2032 i))| \u2264 p E\u27e8M(X\u2032 i) \u2212\u03b8, S\u03b8(xi)\u27e92 \u2264 p E(M(X\u2032 i) \u2212\u03b8)\u22a4(VarS\u03b8(xi))(M(X\u2032 i) \u2212\u03b8) \u2264 q E\u2225M(X) \u2212\u03b8\u22252 2 p \u03bbmax(I(\u03b8)). For completeness, we \ufb01rst simplify X i\u2208[n] EA\u03b8(xi, M(X)) = E D M(X) \u2212\u03b8, X i\u2208[n] S\u03b8(xi) E = E D M(X), X i\u2208[n] S\u03b8(xi) E . By the de\ufb01nition of score and that x1, \u00b7 \u00b7 \u00b7 , xn are i.i.d., P i\u2208[n] S\u03b8(xi) = S\u03b8(x1, \u00b7 \u00b7 \u00b7 , xn) = S\u03b8(X). It follows that E D M(X), X i\u2208[n] S\u03b8(xi) E = E D M(X), S\u03b8(X) E = X j\u2208[d] E \u0014 M(X)j \u2202 \u2202\u03b8j log f\u03b8(X) \u0015 . 
For each term in the right-side summation, one may exchange di\ufb00erentiation and inte30 \fgration thanks to the regularity conditions on f\u03b8, and therefore E \u0014 M(X)j \u2202 \u2202\u03b8j log f\u03b8(X) \u0015 = E \u0014 M(X)j(f\u03b8(X))\u22121 \u2202 \u2202\u03b8j f\u03b8(X) \u0015 = \u2202 \u2202\u03b8j E \u0002 M(X)j(f\u03b8(X))\u22121f\u03b8(X) \u0003 = \u2202 \u2202\u03b8j EM(X)j. 7.2.1 Proof of Lemma 6 Proof. Let Ai := A\u03b8(xi, M(X)), A\u2032 i := A\u03b8(xi, M(X\u2032 i)), and let Z+ = max(Z, 0) and Z\u2212= \u2212min(Z, 0) denote the positive and negative parts of a random variables Z respectively. We have EAi = EA+ i \u2212EA\u2212 i = Z \u221e 0 P(A+ i > t)dt \u2212 Z \u221e 0 P(A\u2212 i > t)dt. For the positive part, if 0 < T < \u221eand 0 < \u03b5 < 1, we have Z \u221e 0 P(A+ i > t)dt = Z T 0 P(A+ i > t)dt + Z \u221e T P(A+ i > t)dt \u2264 Z T 0 \u0000e\u03b5P(A+ i > t) + \u03b4 \u0001 dt + Z \u221e T P(A+ i > t)dt \u2264 Z \u221e 0 P(A\u2032 i + > t)dt + 2\u03b5 Z \u221e 0 P(A\u2032 i + > t)dt + \u03b4T + Z \u221e T P(|Ai| > t)dt. Similarly for the negative part, Z \u221e 0 P(A\u2212 i > t)dt = Z T 0 P(A\u2212 i > t)dt + Z \u221e T P(A\u2212 i > t)dt \u2265 Z T 0 \u0010 e\u2212\u03b5P(A\u2032 i \u2212> t) \u2212\u03b4 \u0011 dt + Z \u221e T P(A\u2212 i > t)dt \u2265 Z T 0 P(A\u2032 i \u2212> t)dt \u22122\u03b5 Z T 0 P(A\u2032 i \u2212> t) \u2212\u03b4T + Z \u221e T P(A\u2212 i > t)dt \u2265 Z \u221e 0 P(A\u2032 i \u2212> t)dt \u22122\u03b5 Z \u221e 0 P(A\u2032 i \u2212> t) \u2212\u03b4T. It then follows that EAi \u2264 Z \u221e 0 P(A\u2032 i + > t)dt \u2212 Z \u221e 0 P(A\u2032 i \u2212> t)dt + 2\u03b5 Z \u221e 0 P(|A\u2032 i| > t)dt + 2\u03b4T + Z \u221e T P(|Ai| > t)dt 31 \f= EA\u2032 i + 2\u03b5E|Ai| + 2\u03b4T + Z \u221e T P(|Ai| > t)dt. The proof is now complete by soundness (4.3). 7.2.2 Proof of Lemma 8 Proof. 
For each $j \\in [d]$, by Lemma 7, we have $E_{\\pi_j}\\left( \\frac{\\partial}{\\partial\\theta_j} g_j(\\theta) \\right) = E_{\\pi_j}\\left( \\frac{\\partial}{\\partial\\theta_j} E[g_j(\\theta)|\\theta_j] \\right) = E_{\\pi_j}\\left[ \\frac{-E[g_j(\\theta)|\\theta_j]\\,\\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right]$. Because $|g_j(\\theta) - \\theta_j| \\le \\|g(\\theta) - \\theta\\|_2 \\le E_{X|\\theta}\\|M(X) - \\theta\\|_2$ for every $\\theta \\in \\Theta$, we have $E_{\\pi_j}\\left[ \\frac{-E[g_j(\\theta)|\\theta_j]\\,\\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right] \\ge E_{\\pi_j}\\left[ \\frac{-\\theta_j \\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right] - E_{\\pi_j}\\left[ E_{X|\\theta}\\|M(X) - \\theta\\|_2 \\cdot \\left| \\frac{\\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right| \\right] \\ge E_{\\pi_j}\\left[ \\frac{-\\theta_j \\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right] - \\sqrt{E_{\\pi_j} E_{X|\\theta}\\|M(X) - \\theta\\|_2^2}\\, \\sqrt{E_{\\pi_j}\\left[ \\left( \\frac{\\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right)^2 \\right]}$. So we have obtained $E_{\\pi_j}\\left( \\frac{\\partial}{\\partial\\theta_j} g_j(\\theta) \\right) \\ge E_{\\pi_j}\\left[ \\frac{-\\theta_j \\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right] - \\sqrt{E_{\\pi_j} E_{X|\\theta}\\|M(X) - \\theta\\|_2^2}\\, \\sqrt{E_{\\pi_j}\\left[ \\left( \\frac{\\pi'_j(\\theta_j)}{\\pi_j(\\theta_j)} \\right)^2 \\right]}$. Now we take expectation over $\\pi(\\theta)/\\pi_j(\\theta_j)$ and sum over $j \\in [d]$ to complete the proof. Acknowledgments The research of T. Cai was supported in part by the National Science Foundation grants DMS-1712735 and DMS-2015259 and the National Institutes of Health grants R01-GM129781 and R01-GM123056. The research of L. Zhang was supported in part by the National Science Foundation grant DMS-2015378." + }, + { + "url": "http://arxiv.org/abs/2008.12434v2", + "title": "On the Non-Asymptotic Concentration of Heteroskedastic Wishart-type Matrix", + "abstract": "This paper focuses on the non-asymptotic concentration of the heteroskedastic\nWishart-type matrices. 
Suppose $Z$ is a $p_1$-by-$p_2$ random matrix and\n$Z_{ij} \\sim N(0,\\sigma_{ij}^2)$ independently, we prove the expected spectral\nnorm of Wishart matrix deviations (i.e., $\\mathbb{E} \\left\\|ZZ^\\top -\n\\mathbb{E} ZZ^\\top\\right\\|$) is upper bounded by \\begin{equation*}\n \\begin{split}\n (1+\\epsilon)\\left\\{2\\sigma_C\\sigma_R + \\sigma_C^2 +\nC\\sigma_R\\sigma_*\\sqrt{\\log(p_1 \\wedge p_2)} + C\\sigma_*^2\\log(p_1 \\wedge\np_2)\\right\\},\n \\end{split} \\end{equation*} where $\\sigma_C^2 := \\max_j\n\\sum_{i=1}^{p_1}\\sigma_{ij}^2$, $\\sigma_R^2 := \\max_i\n\\sum_{j=1}^{p_2}\\sigma_{ij}^2$ and $\\sigma_*^2 := \\max_{i,j}\\sigma_{ij}^2$. A\nminimax lower bound is developed that matches this upper bound. Then, we derive\nthe concentration inequalities, moments, and tail bounds for the\nheteroskedastic Wishart-type matrix under more general distributions, such as\nsub-Gaussian and heavy-tailed distributions. Next, we consider the cases where\n$Z$ has homoskedastic columns or rows (i.e., $\\sigma_{ij} \\approx \\sigma_i$ or\n$\\sigma_{ij} \\approx \\sigma_j$) and derive the rate-optimal Wishart-type\nconcentration bounds. Finally, we apply the developed tools to identify the\nsharp signal-to-noise ratio threshold for consistent clustering in the\nheteroskedastic clustering problem.", + "authors": "T. Tony Cai, Rungang Han, Anru R. Zhang", + "published": "2020-08-28", + "updated": "2022-02-16", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "math.PR", + "stat.TH" + ], + "main_content": "Introduction Random matrix theory is an important topic in its own right and has been proven to be a powerful tool in a wide range of applications in statistics, high-energy physics, and number theory. Wigner matrices, symmetric matrices with mean-zero independent and *The research of Tony Cai was supported in part by NSF grants DMS-1712735 and DMS-2015259 and NIH grants R01-GM129781 and R01-GM123056. The research of Rungang Han and Anru R. 
Zhang was supported in part by NSF CAREER-1944904, NSF DMS-1811868, and NIH R01-GM131399. †University of Pennsylvania, United States of America. E-mail: tcai@wharton.upenn.edu ‡Duke University, United States of America. E-mail: rungang.han@duke.edu §University of Wisconsin-Madison and Duke University, United States of America. Email: anru.zhang@duke.edu identically distributed (i.i.d.) entries (subject to the symmetry constraint), have been a particular focus. Asymptotic and non-asymptotic properties of the spectrum of Wigner matrices have been widely studied in the literature. See, for example, [2, 28, 31] and the references therein. Motivated by a range of applications, heteroskedastic Wigner-type matrices, random matrices with independent heteroskedastic entries, have attracted much recent attention. A central problem of interest is the characterization of the dependence of the spectral norm $\\|\\cdot\\|$ (i.e., the largest singular value of the matrix) of a heteroskedastic Wigner-type matrix on the variances of its entries. To answer this question, Ajanki, Erdős, and Krüger [1] established the asymptotic behavior of the resolvent, a local law down to the smallest spectral resolution scale, and bulk universality for the heteroskedastic Wigner-type matrix. Bandeira and van Handel [4] proved a non-asymptotic upper bound for the spectral norm. More specifically, let $Z = (Z_{ij})$ be a $p \\times p$ heteroskedastic Wigner-type matrix with $\\mathrm{Var}(Z_{ij}) := \\sigma_{ij}^2$. Bandeira and van Handel [4] showed that $E\\|Z\\| \\lesssim \\sigma + \\sigma_* \\sqrt{\\log p}$, where $\\sigma^2 = \\max_i \\sum_j \\sigma_{ij}^2$ and $\\sigma_*^2 = \\max_{ij} \\sigma_{ij}^2$ are the column-sum-wise and entry-wise maximum variances, respectively. This bound was improved by van Handel [30] to $E\\|Z\\| \\lesssim \\sigma + \\max_{i,j\\in[p]} \\sigma^*_{ij} \\sqrt{\\log i}$.
Here, the matrix $\\{\\sigma^*_{ij}\\}$ is obtained by permuting the rows and columns of the variance matrix $\\{\\sigma_{ij}\\}$ such that $\\max_j \\sigma^*_{1j} \\ge \\max_j \\sigma^*_{2j} \\ge \\cdots \\ge \\max_j \\sigma^*_{pj}$. Later, Latała and van Handel [20] further improved it to a tight bound: $E\\|Z\\| \\asymp \\sigma + \\max_{i,j\\in[p]} \\sigma^*_{ij} \\sqrt{\\log i}$. (1.1) In addition to the Wigner-type matrix, the Wishart-type matrix, $ZZ^\\top - E ZZ^\\top$, also plays a crucial role in many high-dimensional statistical problems, including the principal component analysis (PCA) and factor analysis [36], matrix denoising [25], and bipartite community detection [15]. Though there have been many results on the asymptotic and non-asymptotic properties of the homoskedastic Wishart-type matrix, where $Z$ has i.i.d. entries (see [8] for an introduction and the references therein), the properties of the heteroskedastic Wishart-type matrices are much less understood. Specifically, suppose $Z$ is a $p_1 \\times p_2$ random matrix with independent and zero-mean entries. In this paper, we are interested in the Wishart-type concentration: $E\\|ZZ^\\top - E ZZ^\\top\\|$. Define $\\sigma_C^2$, $\\sigma_R^2$, $\\sigma_*^2$ as the column-sum-wise, row-sum-wise, and entry-wise maximum variances: $\\sigma_C^2 = \\max_j \\sum_{i=1}^{p_1} \\sigma_{ij}^2$, $\\sigma_R^2 = \\max_i \\sum_{j=1}^{p_2} \\sigma_{ij}^2$, $\\sigma_*^2 = \\max_{ij} \\sigma_{ij}^2$. (1.2) By the symmetrization scheme and the asymmetric Wigner-type concentration inequality in [4], it is not difficult to show that $E\\|ZZ^\\top - E ZZ^\\top\\| \\le E\\|ZZ^\\top - Z'(Z')^\\top\\| \\le 2 E\\|ZZ^\\top\\| = 2 E\\|Z\\|^2 \\lesssim \\left( \\sigma_C + \\sigma_R + \\sigma_* \\sqrt{\\log(p_1 \\wedge p_2)} \\right)^2.$
(1.3) Since ZZ\u22a4\u2212EZZ\u22a4can be decomposed into a sum of independent random matrices, ZZ\u22a4\u2212EZZ\u22a4= p2 X j=1 \u0000Z\u00b7jZ\u22a4 \u00b7j \u2212EZ\u00b7jZ\u22a4 \u00b7j \u0001 , one can apply the concentration inequality for the sum of independent random matrices [29, Theorem 1] to show that E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2272\u03c3C\u03c3R p log p2 + \u03c32 C(log p2)2. (1.4) EJP 0 (2021), paper 0. Page 2/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration However, as we will show later, these bounds are not tight. In this paper, we establish non-asymptotic bounds for the Wishart-type concentration E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225. The main results include the following. We begin by focusing on the Gaussian case in Section 2.1 and prove that if all entries of Z are independently Gaussian, E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u22642.01\u03c3C\u03c3R + 1.01\u03c32 C + C1\u03c3R\u03c3\u2217 p log(p1 \u2227p2) + C2\u03c32 \u2217log(p1 \u2227p2), (1.5) where C1, C2 are some universal constants that does not depend on the variance components \u03c3C, \u03c3R, \u03c3\u2217or matrix dimensions p1, p2. Moreover, we can set the coef\ufb01cients in front of \u03c3C\u03c3R and \u03c32 C arbitrarily close to 2 and 1, respectively, at the sacri\ufb01ce of larger constants C1, C2 in (1.5) (see Theorem 2.1 for details). We further justify that the constants in 2\u03c3C\u03c3R + \u03c32 C are essential under the homoskedastic setting. The proof of (1.5) is based on a Wishart-type moment method provided in Section 2.2. In Section 2.3, we provide a lower bound to show that the upper bound (1.5) is minimax rate-optimal in a general class of heteroskedastic random matrices. We then consider the more general non-Gaussian setting including sub-Gaussian, sub-exponential, heavy tailed, and bounded distributions in Section 3.1. 
In particular, we establish the following concentration bound when the entries have independent subGaussian distributions: E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2272 \u0010 \u03c3C + \u03c3R + \u03c3\u2217 p log(p1 \u2227p2) \u00112 \u2212\u03c32 R. (1.6) Upper bounds for the moments and probability tails of \u2225ZZ\u22a4\u2212EZZ\u22a4\u2225are developed in Section 3.2. In Sections 3.3 and 3.4, we consider two variance structures arising in statistical applications and develop tight Wishart-type concentration bounds. If the random matrix Z has independent sub-Gaussian entries and homoskedastic rows, i.e., \u03c3ij = \u03c3i, we prove that E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u224d p1 X i=1 \u03c32 i + v u u tp2 p1 X i=1 \u03c32 i \u00b7 max i\u2208[p1] \u03c3i. If Z has independent sub-Gaussian entries and homoskedastic column variances, i.e., \u03c3ij \u224d\u03c3j, we prove that E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u224d v u u tp1 p2 X j=1 \u03c34 j + p1 max j\u2208[p2] \u03c32 j . To illustrate the usefulness of the newly established tools, we apply these tools in Section 4 to solve a statistical problem in heteroskedastic clustering. Speci\ufb01cally, we obtain a sharp signal-to-noise ratio threshold to guarantee consistent clustering. 2 Main Results We \ufb01rst introduce the notation to be used in the rest of the paper. Let a\u2227b and a\u2228b be the minimum and maximum of real numbers a and b, respectively. We use [d] to denote the set {1, . . . , d} for any positive integer d. For any vector v, let \u2225v\u2225q = (P i |vi|q)1/q be the vector \u2113q norm; speci\ufb01cally, \u2225v\u2225\u221e= supi |vi|. For any sequences {an}, {bn}, denote a \u2272b (or bn \u2273an) if there exists a uniform constant C > 0 such that a \u2264Cb. If a \u2272b and EJP 0 (2021), paper 0. Page 3/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration a \u2273b both hold, we say a \u224db. 
For any \u03b1 \u22651, the Orlicz \u03c8\u03b1 norm of any random variable X is de\ufb01ned as \u2225X\u2225\u03c8\u03b1 = inf {x \u22650 : E exp ((|X|/x)\u03b1) \u22642} . In the literature [31, 33], a random variable is often called sub-Gaussian, sub-exponential, or sub-Weibull with tail parameter (1/\u03b1), if \u2225X\u2225\u03c82 \u2264C, \u2225X\u2225\u03c81 \u2264C, and \u2225X\u2225\u03c8\u03b1 \u2264C, respectively. The matrix spectral norm is de\ufb01ned as \u2225X\u2225= supu,v u\u22a4Xv \u2225u\u22252\u2225v\u22252 . The capital letters C, C1, \u02dc C and lowercase letters c, c1, c0 represent the generic large and small constants, respectively, whose exact values may vary from place to place. 2.1 Concentration of heteroskedastic Wishart matrix We begin by considering the Gaussian case where the entries Zij \u223cN(0, \u03c32 ij) independently. The following theorem provides an upper bound for the concentration and is one of the main results of the paper. Theorem 2.1 (Wishart-type Concentration for Gaussian random matrix). Suppose Z is a p1-by-p2 random matrix and Zij \u223cN(0, \u03c32 ij) independently. Then for any \u01eb1, \u01eb2 > 0, E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2264(1 + \u01eb1) n 2\u03c3C\u03c3R + (1 + \u01eb2)\u03c32 C + C1(\u01eb1)\u03c3R\u03c3\u2217 p log(p1 \u2227p2) + C2(\u01eb1, \u01eb2)\u03c32 \u2217log(p1 \u2227p2) o , (2.1) where C1(\u01eb1) = 10(1+\u01eb1) p \u23081/ log(1 + \u01eb1)\u2309and C2(\u01eb1, \u01eb2) = (1+\u01eb1)\u23081/ log(1+\u01eb1)\u2309 \u0010 25 \u01eb2 + 24 \u0011 . Remark 2.2 (Lower bound for the homoskedastic case). If Z has independent and homoskedastic Gaussian entries, i.e., Zij iid \u223cN(0, 1), then \u03c3C = \u221ap1, \u03c3R = \u221ap2, and Theorem 2.1 implies E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2264(1 + \u01eb) \u0000\u03c32 C + 2\u03c3C\u03c3R \u0001 + C\u01eb\u03c3R p log(p1 \u2227p2) (2.2) for any \u01eb > 0 and constant C\u01eb only depending on \u01eb. 
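To make the leading term of (2.2) concrete, the following Monte Carlo sketch (an illustrative addition, not part of the paper; it assumes numpy is available) compares an empirical estimate of $E\|ZZ^\top - E ZZ^\top\|$ for an i.i.d. standard Gaussian $Z$ against the leading term $2\sigma_C\sigma_R + \sigma_C^2 = 2\sqrt{p_1 p_2} + p_1$:

```python
import numpy as np

# Monte Carlo estimate of E||ZZ^T - E ZZ^T|| for Z with i.i.d. N(0,1) entries,
# compared against the leading term 2*sigma_C*sigma_R + sigma_C^2 of (2.2).
# In this homoskedastic case sigma_C = sqrt(p1), sigma_R = sqrt(p2), and
# E ZZ^T = p2 * I_{p1}.
rng = np.random.default_rng(0)
p1, p2, n_trials = 50, 400, 20
devs = []
for _ in range(n_trials):
    Z = rng.standard_normal((p1, p2))
    # ord=2 gives the spectral norm (largest singular value)
    devs.append(np.linalg.norm(Z @ Z.T - p2 * np.eye(p1), 2))
est = float(np.mean(devs))
bound = 2 * np.sqrt(p1 * p2) + p1  # 2*sigma_C*sigma_R + sigma_C^2
ratio = est / bound
```

For moderate $p_1, p_2$ the ratio is already close to 1, consistent with Proposition 2.3's claim that these two terms are sharp in the homoskedastic case.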
On the other hand, we have Proposition 2.3. If $Z$ is a $p_1$-by-$p_2$ matrix with i.i.d. homoskedastic Gaussian entries, then $\\liminf_{p_1,p_2\\to\\infty} \\frac{E\\|ZZ^\\top - E ZZ^\\top\\|}{2\\sigma_C\\sigma_R + \\sigma_C^2} \\ge 1$. (2.3) Proposition 2.3 and (2.2) together indicate that the terms $\\sigma_C^2 + 2\\sigma_C\\sigma_R$ in the upper bound of Theorem 2.1 are sharp in the homoskedastic case. In Section 2.3, we establish a minimax lower bound to show that all four terms in the upper bound (1.6) are essential when $Z$ is a general heteroskedastic random matrix. 2.2 Proof of Theorem 2.1 The proof of Theorem 2.1 relies on a moment method and the following fact: for a $p$-by-$p$ symmetric matrix $A$ (in the context of Theorem 2.1, $A = ZZ^\\top - E ZZ^\\top$) and an even number $q \\asymp \\log(p)$, we have $\\|A\\| \\approx (\\mathrm{tr}(A^q))^{1/q}$. We introduce two lemmas for the proof of Theorem 2.1. First, Lemma 2.4 builds a comparison between the $q$-th moment of the heteroskedastic Wishart-type matrix $ZZ^\\top - E ZZ^\\top$ with a homoskedastic analogue $HH^\\top - E HH^\\top$. The complete proof of Lemma 2.4 is postponed to Section 5.1. Lemma 2.4 (Gaussian Comparison). Suppose $Z \\in \\mathbb{R}^{p_1\\times p_2}$ has independent Gaussian entries: $Z_{ij} \\sim N(0, \\sigma_{ij}^2)$. Let $m_1 = \\lceil \\sigma_C^2 \\rceil + q - 1$ and $m_2 = \\lceil \\sigma_R^2 \\rceil + q - 1$. Suppose $H \\in \\mathbb{R}^{m_1\\times m_2}$ has i.i.d. $N(0,1)$ entries. Then for any $q \\ge 2$, $E\\,\\mathrm{tr}\\{(ZZ^\\top - E ZZ^\\top)^q\\} \\le \\left( \\frac{p_1}{m_1} \\wedge \\frac{p_2}{m_2} \\right) E\\,\\mathrm{tr}\\{(HH^\\top - E HH^\\top)^q\\}$. (2.4) Remark 2.5 (Proof sketch of Lemma 2.4).
Previously, [4, Proposition 2.1] compared the moments of the Wigner-type matrices (i.e., Z is symmetric and thus p1 = p2 = p, \u03c3C = \u03c3R = \u03c3) by the expansion Etr(Z2p) = P u1,...,u2q E(Zu1u2Zu2u3 \u00b7 \u00b7 \u00b7 Zu2pu1) and counting the cycles in a reduced unipartite graph: Etr(Z2q) \u2264 p \u2308\u03c32\u2309+ q Etr(H2q). (2.5) Compared to the expansion of Wigner-type random matrix Etr(Z2q), the expansion of Wishart-type random matrix Etr \b (ZZ\u22a4\u2212EZZ\u22a4)q\t is much more complicated: Etr \b (ZZ\u22a4\u2212EZZ\u22a4)q\t = X uq+1=u1,...,uq\u2208[p1] v1,...,vq\u2208[p2] E q Y k=1 \u0000Zuk,vkZuk+1,vk \u2212\u03c32 uk,vk \u00b7 1{uk=uk+1} \u0001 = \u00b7 \u00b7 \u00b7 = X c\u2208([p1]\u00d7[p2])q q Y k=1 \u03c3uk,vk\u03c3uk+1,vk Y (i,j)\u2208[p1]\u00d7[p2] EG\u03b1ij(c) ij \u0000G2 ij \u22121 \u0001\u03b2ij(c) , (2.6) where ([p1] \u00d7 [p2])q is the set of all cycles of length 2q on a p1-by-p2 complete bipartite graph, Gij = Zij/\u03c3ij are i.i.d. standard normal distributed, and \u03b1ij(c), \u03b2ij(c) are some graphical characteristic quantities of cycle c to be de\ufb01ned later. By gathering the cycles with the same \u201cshape\" s, we can show: Etr \b (ZZ\u22a4\u2212EZZ\u22a4)q\t \u2264 X s Y \u03b1,\u03b2\u22650 \b EG\u03b1(G2 \u22121)\u03b2\tm\u03b1,\u03b2(s) \u00b7 n p1\u03c32(mL(s)\u22121) C \u03c32mR(s) R o \u2227 n p2\u03c32mL(s) C \u03c32(mR(s)\u22121) R o , (2.7) where m\u03b1,\u03b2(s), mL(s) and mR(s) are some graphical properties of the cycles with shape s to be de\ufb01ned later and G \u223cN(0, 1). Meanwhile, we can develop a lower bound for the moment of standard Wishart matrix: Etr \u0000(HH\u22a4\u2212EHH\u22a4)q\u0001 \u2265 X s Y \u03b1,\u03b2\u22650 E \b G\u03b1(G2 \u22121)\u03b2\tm\u03b1,\u03b2(s) \u00b7 n m1\u03c32mL(s)\u22122 C \u00b7 \u03c32mR(s) R o \u2228 n m2\u03c32mL(s) C \u00b7 \u03c32mR(s)\u22122 R o . (2.8) Lemma 2.4 follows by combining (2.7) and (2.8). 
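The moment method driving these comparisons rests on the elementary sandwich $\|A\| \le (\mathrm{tr}(A^{2q}))^{1/(2q)} \le p^{1/(2q)}\|A\|$ for a $p \times p$ symmetric $A$, so taking $q$ of order $\log p$ loses only a constant factor. A quick numerical sketch of this fact (an illustrative addition, not part of the paper; assumes numpy):

```python
import numpy as np

# For symmetric A, (tr(A^{2q}))^{1/(2q)} lies between ||A|| and p^{1/(2q)}*||A||:
# sum of |eigenvalue|^{2q} is at least the largest one and at most p times it.
rng = np.random.default_rng(1)
p = 100
G = rng.standard_normal((p, p))
A = (G + G.T) / 2                      # symmetric test matrix
q = int(np.ceil(np.log(p)))            # q of order log(p)
eigs = np.linalg.eigvalsh(A)
spec = np.max(np.abs(eigs))            # spectral norm ||A||
moment = np.sum(eigs ** (2 * q)) ** (1.0 / (2 * q))  # (tr A^{2q})^{1/(2q)}
assert spec <= moment <= p ** (1.0 / (2 * q)) * spec
```

With $q = \lceil \log p \rceil$ the multiplicative gap $p^{1/(2q)}$ is at most $e^{1/2}$, which is why trace moments serve as a tight proxy for the spectral norm throughout Section 2.2.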
Next, Lemma 2.6 gives an upper bound on the moment of the standard Wishart matrix. The complete proof is provided in Section 5.1. Lemma 2.6. Suppose H \u2208Rm1\u00d7m2 has i.i.d. standard Gaussian entries. Then for any integer q \u22652, \u0000E\u2225HH\u22a4\u2212EHH\u22a4\u2225q\u00011/q \u22642\u221am1m2 + m1 + 4(\u221am1 + \u221am2)\u221aq + 2q, \u0000Etr \b (HH\u22a4\u2212EHH\u22a4)q\t\u00011/q \u226421/q(m1\u2227m2)1/q\u00b7(2\u221am1m2 + m1 + 4(\u221am1 + \u221am2)\u221aq + 2q) . EJP 0 (2021), paper 0. Page 5/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration Remark 2.7 (Proof idea of Lemma 2.6). Let \u03c3i(H) be the i-th singular value of H. The proof of Lemma 2.6 utilizes the following fact: \u2225HH\u22a4\u2212EHH\u22a4\u2225= \u2225HH\u22a4\u2212m2Im1\u2225= max \b \u03c32 1(H) \u2212m2, m2 \u2212\u03c32 m1(H) \t and the concentration inequalities of the largest and smallest singular values of the Gaussian ensemble (e.g., [31]). See Section 5.1 for the complete proof. Now, we are in position to \ufb01nish the proof of Theorem 2.1. Proof of Theorem 2.1. Without loss of generality, we assume \u03c32 \u2217= maxij \u03c32 ij = 1. Let m1 = \u2308\u03c32 C\u2309+ 2q \u22121, m2 = \u2308\u03c32 R\u2309+ 2q \u22121 for some q to be speci\ufb01ed later and H be an m1-by-m2 random matrix with i.i.d. standard Gaussian entries. Lemmas 2.4 and 2.6 imply E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u2264 \u0010 Etr n\u0000ZZ\u22a4\u2212EZZ\u22a4\u00012qo\u00111/2q Lemma 2.4 \u2264 \u001a\u0012 p1 m1 \u2227p2 m2 \u0013 \u00b7 Etr \b HH\u22a4\u2212EHH\u22a4\t2q\u001b1/2q Lemma 2.6 \u2264 21/2q \u0012\u0012 p1 m1 \u2227p2 m2 \u0013 m1 \u2227m2 \u00131/2q \u0010 2\u221am1m2 + m1 + 4(\u221am1 + \u221am2) p 2q + 4q \u0011 \u226421/2q (p1 \u2227p2)1/2q \u00002\u03c3C\u03c3R + \u03c32 C + 10\u03c3C \u221aq + 10\u03c3R \u221aq + 24q \u0001 . 
(2.9) Let q = K\u2308log(p1 \u2227p2)\u2309for K = \u2308 1 log(1+\u03b51)\u2309, then we have E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225 \u226421/2K \u0010 eq/K\u00111/2q \u00002\u03c3C\u03c3R + \u03c32 C + 10\u03c3C \u221aq + 10\u03c3R \u221aq + 24q \u0001 \u2264(2e)1/2K \u0012 2\u03c3C\u03c3R + (1 + \u01eb2)\u03c32 C + 10 \u221a K\u03c3R p log(p1 \u2227p2) + \u001225 \u01eb2 + 24 \u0013 K log(p1 \u2227p2) \u0013 \u22642(1 + \u01eb1)\u03c3C\u03c3R + (1 + \u01eb1)(1 + \u01eb2)\u03c32 C + C1(\u01eb1)\u03c3R p log(p1 \u2227p2) + C2(\u01eb1, \u01eb2) log(p1 \u2227p2). (2.10) Here, C1(\u01eb1) = 10(1 + \u03b51) p \u23081/ log(1 + \u03b51)\u2309, C2(\u01eb1, \u01eb2) = (1 + \u03b51)\u23081/ log(1 + \u03b51)\u2309 \u001225 \u01eb2 + 24 \u0013 . 2.3 Lower bounds To show the tightness of the upper bound given earlier, we also develop the following minimax lower bound for the heteroskedastic Wishart-type concentration. Theorem 2.8 (Lower bound of heteroskedastic Wishart-type concentration). Suppose p1, p2 \u22654. Consider the following set of p1-by-p2 random matrices, Fp(\u03c3\u2217, \u03c3C, \u03c3R) = ( Z \u2208Rp1\u00d7p2 : Zij ind \u223cN(0, \u03c32 ij), p = p1 \u2227p2, maxi,j \u03c3ij \u2264\u03c3\u2217, maxi Pp2 j=1 \u03c32 ij \u2264\u03c32 R, maxj Pp1 i=1 \u03c32 ij \u2264\u03c32 C ) . For any (\u03c3\u2217, \u03c3R, \u03c3C) tuple satisfying min{\u03c3C, \u03c3R} \u2265\u03c3\u2217\u2265max{\u03c3C/\u221ap1, \u03c3R/\u221ap2}, there exists a random Gaussian matrix Z \u2208Fp(\u03c3\u2217, \u03c3R, \u03c3C) such that E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u2273\u03c32 C + \u03c3C\u03c3R + \u03c3R\u03c3\u2217 p log p + \u03c32 \u2217log p. (2.11) EJP 0 (2021), paper 0. Page 6/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration The proof of Theorem 2.8 is given in Section 5.1. Remark 2.9. Theorems 2.1 and 2.8 together establish the minimax optimal rate of E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225in the class of Fp(\u03c3\u2217, \u03c3C, \u03c3R). 
In other words, Theorem 2.8 shows that (2.1) yields the best upper bound for heteroskedastic Wishart-type concentration among all the bounds characterized by \u03c3C, \u03c3R, \u03c3\u2217. We shall point out that the upper bound of Theorem 2.1 may not be tight for some speci\ufb01c values of {\u03c32 ij}. For example, in Sections 3.3 and 3.4, we develop sharper bounds via a more re\ufb01ned analysis when the Wishart matrix has near-homoskedastic rows or columns. Generally speaking, it remains an open problem to develop a heteroskedastic Wisharttype concentration inequality that is tight for all speci\ufb01c values of {\u03c32 ij}. We leave this problem as future work. 3 Extensions We consider several extensions of Theorem 2.1 in this section. 3.1 Wishart-type concentration of non-Gaussian random matrices In this section, we generalize the developed concentration inequality for heteroskedastic Wishart matrices with more general entrywise distributions, such as sub-Gaussian, sub-exponential, heavy tailed, and bounded distributions. We \ufb01rst introduce the following lemma as a sub-Gaussian analog of Lemma 2.4. Lemma 3.1 (Sub-Gaussian comparison). Suppose Z \u2208Rp1\u00d7p2 has independent meanzero symmetric sub-Gaussian entries: EZij = 0, Var(Zij) = \u03c32 ij, \u2225Zij/\u03c3ij\u2225\u03c82 \u2264\u03ba. (3.1) M \u2208Rm1\u00d7m2 has i.i.d. standard Gaussian entries. When q \u22651, m1 = \u2308\u03c32 C\u2309+ q \u22121, m2 = \u2308\u03c32 R\u2309+ q \u22121, we have Etr \b (ZZ\u22a4\u2212EZZ\u22a4)q\t \u2264(C\u03ba)2q \u0012 p1 m1 \u2227p2 m2 \u0013 Etr \b (HH\u22a4\u2212EHH\u22a4)q\t . (3.2) The proof of Lemma 3.1 is deferred to Section 5.2. Remark 3.2 (Proof ideas of Lemma 3.1). Compared to the proof of Lemma 2.4, the proof of Lemma 3.1 requires more delicate scheme to bound EG\u03b1ij(c) ij (G2 ij \u22121)\u03b2ij(c) for nonstandard-Gaussian distributed Gij := Zij/\u03c3ij. 
To this end, we introduce Lemma 5.2 to bound EG\u03b1ij(c) ij (G2 ij \u22121)\u03b2ij(c) by a Gaussian analog: EG\u03b1ij(c) ij (G2 ij \u22121)\u03b2ij(c) \u2264(C\u03ba)2qEG\u03b1ij(c)(G2 \u22121)\u03b2ij(c), G \u223cN(0, 1). As a consequence of Lemma 3.1, we have the following Wishart-type Concentration of sub-Gaussian random matrix. Corollary 3.3 (Wishart-type concentration of sub-Gaussian random matrix). Suppose Z \u2208Rp1\u00d7p2 has independent mean-zero sub-Gaussian entries that satisfy (3.1). Then E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2272\u03ba2 \u0010 \u03c3C\u03c3R + \u03c32 C + \u03c3R\u03c3\u2217 p log(p1 \u2227p2) + \u03c32 \u2217log(p1 \u2227p2) \u0011 . (3.3) Proof of Corollary 3.3. When all Zij\u2019s are symmetrically distributed, Corollary 3.3 follows from the proof of Theorem 2.1 along with Lemmas 2.6 and 3.1. If Zij\u2019s are not all EJP 0 (2021), paper 0. Page 7/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration symmetric, let Z\u2032 be an independent copy of Z, then each entry of Z \u2212Z\u2032 has independent symmetric sub-Gaussian distribution. By Jensen\u2019s inequality, we have E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r = E \r \rZZ\u22a4+ E\u2032Z\u2032(Z\u2032)\u22a4\u2212Z(E\u2032Z\u2032)\u22a4\u2212(E\u2032Z\u2032)Z\u22a4\u22122EZZ\u22a4\r \r = E \r \r \rE n ZZ\u22a4+ Z\u2032(Z\u2032)\u22a4\u2212Z(Z\u2032)\u22a4\u2212(Z\u2032)Z\u22a4\u22122EZZ\u22a4\f \f \fZ o\r \r \r \u2264E h E n\r \rZZ\u22a4+ Z\u2032(Z\u2032)\u22a4\u2212Z(Z\u2032)\u22a4\u2212(Z\u2032)Z\u22a4\u22122EZZ\u22a4\r \r \f \f \fZ oi = E \u0002 E\u2032 \r \r(Z \u2212Z\u2032)(Z \u2212Z\u2032)\u22a4\u2212E(Z \u2212Z\u2032)(Z \u2212Z\u2032)\u22a4\r \r\u0003 \u2272\u03ba2 \u0010 \u03c3C\u03c3R + \u03c32 C + \u03c3R\u03c3\u2217 p log(p1 \u2227p2) + \u03c32 \u2217log(p1 \u2227p2) \u0011 . Next, we turn to the Wishart-type concentration for random matrix Z with heavytailed entries. Theorem 3.4 (Wishart-type concentration for heavy-tailed random matrix). 
Suppose \u03b1 \u22641, Z \u2208Rp1\u00d7p2 has independent entries, Var(Zij) \u2264\u03c32 ij, and \u2225Zij/\u03c3ij\u2225\u03c8\u03b1 \u2264\u03ba for all i, j. Given \u03c3C, \u03c3R, and \u03c3\u2217de\ufb01ned in (1.2), we have E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2272 \u0010 \u03c3C + \u03c3R + \u03c3\u2217(log(p1 \u2227p2))1/2(log(p1 \u2228p2))1/\u03b1\u22121/2\u00112 \u2212\u03c32 R. In a variety of applications, the observations and random perturbations are naturally bounded (e.g., adjacency matrix in network analysis [24] and single-nucleotide polymorphisms (SNPs) data in genomics [27]). Thus, we provide a Wishart-type concentration for entrywise uniformly bounded random matrices as follows. Theorem 3.5 (Wishart-type concentration of bounded random Matrix). Suppose Z \u2208 Rp1\u00d7p2, EZij = 0, Var(Zij) = \u03c32 ij, |Zij| \u2264B almost surely, then E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2264(1 + \u01eb1) n 2\u03c3C\u03c3R + (1 + \u01eb2)\u03c32 C + C1(\u01eb1)B\u03c3R p log(p1 \u2227p2) + C2(\u01eb1, \u01eb2)B2 log(p1 \u2227p2) o , where C1(\u01eb1) and C2(\u01eb1, \u01eb2) are de\ufb01ned as in Theorem 2.1. If we further have maxi,j \u03c3ij \u2264 \u03c3\u2217and B (log(p1 \u2227p2)/p1)1/2 \u226a\u03c3\u2217for some \u03c3\u2217, then E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2264(1 + \u01eb) (2\u221ap1p2 + p1) \u03c32 \u2217. (3.4) An immediate application of the previous theorem is the following Wishart-type concentration for independent Bernoulli random matrices. Corollary 3.6 (Wishart-type Concentration of Bernoulli Random Matrix). Suppose Z \u2208 Rp1\u00d7p2, Aij ind \u223cBernoulli(\u03b8ij), \u03b8ij \u2264\u03b8\u2217and \u03b8\u2217\u2265C log(p1 \u2227p2)/p1. Then, E \r \r(A \u2212\u0398)(A \u2212\u0398)\u22a4\u2212E(A \u2212\u0398)(A \u2212\u0398)\u22a4\r \r \u2272(\u221ap1p2 + p1) \u03b8\u2217. 
(3.5) To prove Theorems 3.4 and 3.5, we establish the corresponding comparison lemmas for random matrices with heavy tail/bounded distributions, which is more technically involved from Gaussian/sub-Gaussian distributions due to the essential difference. The proofs of Theorems 3.4 and 3.5 are provided in Section 5.2. Remark 3.7. It is helpful to summarize the heteroskedastic Wishart-type concentration inequalities with Gaussian, sub-Gaussian, heavy-tail, and bounded entries in a uni\ufb01ed form: E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2264C0 n (\u03c3C + \u03c3R + K)2 \u2212\u03c32 R o , EJP 0 (2021), paper 0. Page 8/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration where K = \u03c3\u2217(log(p1 \u2227p2))1/2 and C0 > 1 is a constant if the entries of Z are subGaussian; K = \u03c3\u2217(log(p1 \u2227p2))1/2(log(p1 \u2228p2))1/\u03b1\u22121/2 and C0 > 1 is a constant if Z has bounded \u03c8\u03b1 norm; K = C p log(p1 \u2227p2) and C0 = 1 + \u03b5 if the entries of Z are bounded; and K = C\u03c3\u2217(log(p1 \u2227p2))1/2 and C0 = (1 + \u03b5) if the entries of Z are Gaussian. 3.2 Moments and tail bounds We study the general b-th moment and the tail probability of heteroskedastic Wisharttype matrix in the following theorem. Theorem 3.8 (High-order moments and tail probability bounds). Suppose the conditions in Theorem 2.1 hold. For any b > 0, we have n E \r \rZZ\u22a4\u2212EZZ\u22a4\r \rbo1/b \u2272 \u0010 \u03c3C + \u03c3R + \u03c3\u2217 p b \u2228log(p1 \u2227p2) \u00112 \u2212\u03c32 C. (3.6) There exists uniform constant C > 0 such that for any x > 0, P \u001a\r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2265C \u0012\u0010 \u03c3C + \u03c3R + \u03c3\u2217 p log(p1 \u2227p2) + x \u00112 \u2212\u03c32 C \u0013\u001b \u2264exp(\u2212x2). 
(3.7) Since neither \u2225ZZ\u22a4\u2212EZZ\u22a4\u2225nor \u2225ZZ\u22a4\u2212EZZ\u22a4\u22251/2 are Lipschitz continuous in Z, the classic Talagrand\u2019s concentration inequality [10, Theorem 6.10] does not directly apply to give the tail probability bound of \u2225ZZ\u22a4\u2212EZZ\u22a4\u2225. We instead prove (3.7) via a more direct moment method. The complete proof is given in Section 5.3. 3.3 Wishart matrix with near-homoskedastic rows In this section, we consider a special class of heteroskedastic matrices. Let Z \u2208 Rp1\u00d7p2 be a random matrix with independent, sub-Gaussian, and zero-mean entries. Suppose all entries in the same row of Z share similar variance (i.e., there exists \u03c32 i such that \u03c3ij approximately equals \u03c32 i for all i, j). Then the p2 columns of Z, i.e., {Z\u00b7j}p2 j=1, have approximately equal covariance matrix, diag(\u03c32 1, . . . , \u03c32 p1). In this case, 1 nZZ\u22a4= 1 n Pn j=1 Z\u00b7jZ\u22a4 \u00b7j is the sample covariance matrix. It is of great interest to analyze \u2225ZZ\u22a4\u2212 EZZ\u22a4\u2225, i.e., the concentration of the sample covariance matrix in both probability and statistics [3, 12]. Note that Corollary 3.3 directly implies E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u2272 X i \u03c32 i + s p2 X i \u03c32 i \u00b7 max i \u03c3i + p p2 log(p1 \u2227p2) max i \u03c32 i . (3.8) With a more careful analysis, we can derive a better concentration inequality than (3.8) without the logarithmic terms. Theorem 3.9. Suppose Z is a p1-by-p2 random matrix with independent mean-zero subGaussian entries. If there exist \u03c31, . . . , \u03c3p \u22650 such that \u2225Zij/\u03c3i\u2225\u03c82 \u2264CK for constant CK > 0, then E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2272 X i \u03c32 i + s p2 X i \u03c32 i \u00b7 max i \u03c3i. (3.9) Remark 3.10. We also note that a similar result of Theorem 3.9 can be derived from Koltchinskii and Lounici [18]. 
Their result is based on generic chaining argument with the assumption that all columns of Z are i.i.d. Here, we assume independence and an upper bound on the Orlicz-\u03c82 norm of each entry, while allow the distributions to be non-identical. EJP 0 (2021), paper 0. Page 9/41 https://www.imstat.org/ejp \fHeteroskedastic Wishart-type Concentration The following theorem gives a lower bound on the concentration of Wishart matrix with homoskedastic rows. Theorem 3.11. If Z \u2208Rp1\u00d7p2, Zij ind \u223cN(0, \u03c32 i ), we have E \r \rZZ\u22a4\u2212EZZ\u22a4\r \r \u2273 X i \u03c32 i + s p2 X i \u03c32 i \u00b7 max i \u03c3i. The proof of Theorem 3.11 is deferred to Section 5.4. Theorems 3.9 and 3.11 render an exact rate of Wishart-type concentration for random matrices with homoskedastic rows: E\u2225ZZ\u22a4\u2212EZZ\u22a4\u2225\u224d X i \u03c32 i + s p2 X i \u03c32 i max i \u03c3i, if Var(Zij) ind \u223cN(0, \u03c32 i ). The rest of this section is dedicated to the proof of Theorems 3.9. We only prove for Gaussian Wishart-type random matrices since the sub-Gaussian case follows similarly. We \ufb01rst introduce a key tool to sequentially reduce the number of rows of Z. The tool, as summarized in the following lemma, may of independent interest. Lemma 3.12 (Variance contraction inequality of Gaussian random matrix). Suppose G \u2208 Rp1\u00d7p2 and \u02dc G \u2208R(p1\u22121)\u00d7p2 are two random matrices with independent Gaussian entries satisfying EGij = E \u02dc Gij = 0, Var(Gij) = \u03c32 ij, Var( \u02dc Gij) = \u001a \u03c32 ij, 1 \u2264i \u2264p1 \u22122; \u03c32 p1\u22121,j + \u03c32 p1,j, i = p1 \u22121. In other words, G and \u02dc G are identical distributed in their \ufb01rst (p1\u22122) rows; the variance of the last row of \u02dc G is the sum of last two rows\u2019 variances of G. 
Then for any positive integer $q$,
$$\mathbb{E}\,\mathrm{tr}\big((GG^\top-\mathbb{E}GG^\top)^q\big) \le \mathbb{E}\,\mathrm{tr}\big((\tilde G\tilde G^\top-\mathbb{E}\tilde G\tilde G^\top)^q\big).$$
The proof of Lemma 3.12 is provided in Section 5.4. Now we are ready to prove Theorem 3.9.

Proof of Theorem 3.9. Denote $\sigma_C^2=\sum_i\sigma_i^2$ and $\sigma_*=\max_i\sigma_i$. Assume $\sigma_*=1$ without loss of generality, and set $q=2\lceil\sigma_C^2\rceil$. We use induction on $p_1$ to show the following upper bound: for some uniform constant $C>0$ (which does not depend on $p_1,p_2,\sigma_C$), we have
$$\Big(\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}\Big)^{1/q} \le C\big(\sigma_C^2+\sqrt{p_2}\,\sigma_C\big). \tag{3.10}$$

• If $p_1\le 2q$, Lemma 2.4 yields
$$\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\} \le \Big(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\Big)\,\mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q\big\}.$$
Here, $H$ is an $m_1$-by-$m_2$ matrix with i.i.d. standard Gaussian entries and
$$m_1=\lceil\sigma_C^2\rceil+q-1=3\lceil\sigma_C^2\rceil-1,\qquad m_2=p_2+2\lceil\sigma_C^2\rceil-1. \tag{3.11}$$
Additionally, by Lemma 2.6,
$$\begin{aligned}
\Big(\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}\Big)^{1/q}
&\le \Big(\Big(\tfrac{p_1}{m_1}\wedge\tfrac{p_2}{m_2}\Big)\,\mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q\big\}\Big)^{1/q}\\
&\le \Big(\Big(\tfrac{p_1}{m_1}\wedge\tfrac{p_2}{m_2}\Big)\,m_1\,\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q\Big)^{1/q}\\
&\le p_1^{1/q}\big(2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q\big)\\
&\overset{(3.11)}{\le} (2q)^{1/q}\cdot C\big(\sqrt{p_2}\sigma_C+\sigma_C^2\big) \le C\big(\sqrt{p_2}\sigma_C+\sigma_C^2\big),
\end{aligned}$$
which implies (3.10).

• Suppose the statement (3.10) holds for $Z\in\mathbb{R}^{(p_1-1)\times p_2}$ for some $p_1>2q$; we further consider the case $Z\in\mathbb{R}^{p_1\times p_2}$. Note that, without loss of generality, $1=\sigma_*^2=\sigma_1^2\ge\sigma_2^2\ge\cdots\ge\sigma_{p_1}^2\ge 0$.
Given this ordering,
$$\sigma_{p_1-1}^2+\sigma_{p_1}^2 \le \frac{2}{p_1}\sum_{i=1}^{p_1}\sigma_i^2 = \frac{2}{p_1}\sigma_C^2 \le \frac{2\sigma_C^2}{2q} \le \frac{2\sigma_C^2}{4\lceil\sigma_C^2\rceil} \le 1 = \sigma_*^2. \tag{3.12}$$
By Lemma 3.12, we have
$$\mathbb{E}\,\mathrm{tr}\big((ZZ^\top-\mathbb{E}ZZ^\top)^q\big) \le \mathbb{E}\,\mathrm{tr}\big((\tilde Z\tilde Z^\top-\mathbb{E}\tilde Z\tilde Z^\top)^q\big),$$
where $\tilde Z$ is a $(p_1-1)$-by-$p_2$ random matrix with independent entries, $\mathbb{E}\tilde Z=0$, and
$$\mathrm{Var}(\tilde Z_{ij})=\begin{cases}\sigma_i^2, & 1\le i\le p_1-2;\\ \sigma_{p_1-1}^2+\sigma_{p_1}^2, & i=p_1-1.\end{cases}$$
By (3.12), we have $\max_{i,j}\mathrm{Var}(\tilde Z_{ij})\le\sigma_*^2$. Meanwhile, $\sum_{i=1}^{p_1-1}\mathrm{Var}(\tilde Z_{ij})=\sum_{i=1}^{p_1}\sigma_i^2=\sigma_C^2$. Thus, the induction hypothesis (3.10) implies
$$\Big(\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}\Big)^{1/q} \le \Big(\mathbb{E}\,\mathrm{tr}\big\{(\tilde Z\tilde Z^\top-\mathbb{E}\tilde Z\tilde Z^\top)^q\big\}\Big)^{1/q} \le C\big(\sqrt{p_2}\sigma_C+\sigma_C^2\big).$$
By induction, we have proved that (3.10) holds in general. Therefore,
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \le \Big(\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}\Big)^{1/q} \lesssim \sqrt{p_2}\sigma_C+\sigma_C^2.$$

3.4 Wishart matrix with near-homoskedastic columns

Let $Z\in\mathbb{R}^{p_1\times p_2}$ be a random matrix with independent entries. We consider another case of interest, in which all entries in the same column of $Z$ have similar variance (i.e., there exist $\sigma_j$ such that $\sigma_{ij}^2\approx\sigma_j^2$ for all $i\in[p_1]$, $j\in[p_2]$). This model has been used to characterize heteroskedastic independent samples in statistical applications [17]. Applying Theorem 2.1, one obtains
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \lesssim \sqrt{p_1\sum_j\sigma_j^2}\,\max_j\sigma_j+p_1\max_j\sigma_j^2. \tag{3.13}$$
As this direct upper bound may be sub-optimal, we prove the following upper and lower bounds via a more careful analysis.

Theorem 3.13.
Suppose $Z\in\mathbb{R}^{p_1\times p_2}$ has independent, mean-zero, sub-Gaussian entries. Assume there exist $\sigma_1,\ldots,\sigma_{p_2}\ge 0$ such that $\|Z_{ij}/\sigma_j\|_{\psi_2}\le C_K$ for a constant $C_K>0$. Then
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \lesssim \sqrt{p_1\sum_j\sigma_j^4}+p_1\max_j\sigma_j^2. \tag{3.14}$$

Theorem 3.14. If $Z\in\mathbb{R}^{p_1\times p_2}$ with $Z_{ij}\overset{ind}{\sim}N(0,\sigma_j^2)$, we have
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \gtrsim \sqrt{p_1\sum_j\sigma_j^4}+p_1\max_j\sigma_j^2.$$
The proof of Theorem 3.14 is deferred to Section 5.5. Now we consider the proof of Theorem 3.13. Since the Gaussian comparison lemma (Lemma 2.4) cannot give the desired term $\sum_{j=1}^{p_2}\sigma_j^4$, we turn to study the expansion of $\mathbb{E}\,\mathrm{tr}\{(\Delta(ZZ^\top))^q\}$, where $\Delta(ZZ^\top)$ equals $ZZ^\top$ with all diagonal entries set to zero. This expansion can be related to the cycles in a complete graph for which every edge is visited $\{0,4,8,12,\ldots\}$ times. Based on this new idea, we introduce the following lemma.

Lemma 3.15. Suppose $Z\in\mathbb{R}^{p_1\times p_2}$, $Z_{ij}\overset{ind}{\sim}N(0,\sigma_{ij}^2)$, and $\sigma_{ij}\le\sigma_j$. For a square matrix $A$, let $\Delta(A)$ be $A$ with all diagonal entries set to zero, and let $D(A)$ be $A$ with all off-diagonal entries set to zero. For any integer $q\ge 1$, suppose $H\in\mathbb{R}^{p_1\times m}$ has i.i.d. standard normal entries with $m=\lceil\sum_{j=1}^{p_2}\sigma_j^4\rceil+q-1$. Then
$$\mathbb{E}\,\mathrm{tr}\big\{(\Delta(ZZ^\top))^q\big\} \le \mathbb{E}\,\mathrm{tr}\big\{(\Delta(HH^\top))^q\big\}. \tag{3.15}$$
The proof of Lemma 3.15 is provided in Section 5.5. Next, we prove Theorem 3.13.

Proof of Theorem 3.13. Denote $\sigma_R^2=\sum_j\sigma_j^2$ and $\sigma_*=\max_j\sigma_j$. Without loss of generality, we assume $\sigma_*=1$. Note that
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \le \mathbb{E}\|D(ZZ^\top)-\mathbb{E}ZZ^\top\|+\mathbb{E}\|\Delta(ZZ^\top)\|.$$
It suffices to bound the two terms separately.
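The diagonal/off-diagonal split used in this proof is easy to check numerically. The sketch below (illustrative sizes) forms $D(ZZ^\top)$ and $\Delta(ZZ^\top)$ for a column-heteroskedastic Gaussian matrix and verifies the triangle inequality $\|ZZ^\top-\mathbb{E}ZZ^\top\|\le\|D(ZZ^\top)-\mathbb{E}ZZ^\top\|+\|\Delta(ZZ^\top)\|$:

```python
import numpy as np

# Numeric check of the splitting step in the proof of Theorem 3.13:
# ZZ^T - E ZZ^T = (D(ZZ^T) - E ZZ^T) + Delta(ZZ^T), so the spectral norms
# of the two pieces bound the left-hand side by the triangle inequality.
rng = np.random.default_rng(1)
p1, p2 = 40, 120
sigma = rng.uniform(0.5, 1.5, size=p2)        # column std. deviations
Z = rng.standard_normal((p1, p2)) * sigma[None, :]

S = Z @ Z.T
EZZt = np.sum(sigma**2) * np.eye(p1)          # E ZZ^T = (sum_j sigma_j^2) I
D = np.diag(np.diag(S))                       # D(ZZ^T): diagonal part
Delta = S - D                                 # Delta(ZZ^T): zero-diagonal part

lhs = np.linalg.norm(S - EZZt, 2)
rhs = np.linalg.norm(D - EZZt, 2) + np.linalg.norm(Delta, 2)
```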
Since $D(ZZ^\top)-\mathbb{E}ZZ^\top$ is a diagonal matrix with independent diagonal entries, we have
$$\|D(ZZ^\top)-\mathbb{E}ZZ^\top\| = \max_{i\in[p_1]}\Big|\sum_{j=1}^{p_2}Z_{ij}^2-\mathbb{E}\sum_{j=1}^{p_2}Z_{ij}^2\Big|.$$
By the Bernstein inequality and a union bound, we have
$$\mathbb{P}\Big(\max_{i\in[p_1]}\Big|\sum_{j=1}^{p_2}Z_{ij}^2-\mathbb{E}\sum_{j=1}^{p_2}Z_{ij}^2\Big|>t\Big) \le 2\exp\Big(\log p_1-c\Big(\frac{t^2}{\sum_{j=1}^{p_2}\sigma_j^4}\wedge\frac{t}{\sigma_*^2}\Big)\Big).$$
Integration over the tail further yields
$$\mathbb{E}\max_{i\in[p_1]}\Big|\sum_{j=1}^{p_2}Z_{ij}^2-\mathbb{E}\sum_{j=1}^{p_2}Z_{ij}^2\Big| \lesssim \sqrt{\log p_1\sum_{j=1}^{p_2}\sigma_j^4}+\sigma_*^2\log p_1. \tag{3.16}$$
Next, we use the moment method to bound $\mathbb{E}\|\Delta(ZZ^\top)\|$. For any even positive integer $q$, by Lemma 3.15,
$$\mathbb{E}\|\Delta(ZZ^\top)\| \le \Big(\mathbb{E}\,\mathrm{tr}\big\{(\Delta(ZZ^\top))^q\big\}\Big)^{1/q} \le \Big(\mathbb{E}\,\mathrm{tr}\big\{(\Delta(HH^\top))^q\big\}\Big)^{1/q}. \tag{3.17}$$
Here $H$ is a $p_1$-by-$m$ random matrix with i.i.d. $N(0,1)$ entries and $m=\lceil\sum_{j=1}^{p_2}\sigma_j^4\rceil+q-1$. Thus it suffices to bound $(\mathbb{E}\,\mathrm{tr}\{(\Delta(HH^\top))^q\})^{1/q}$. On the one hand, by Lemma 2.6, for all $q\ge 2$,
$$\big(\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q\big)^{1/q} \le 2\sqrt{p_1m}+m+4(\sqrt{p_1}+\sqrt m)\sqrt q+2q. \tag{3.18}$$
On the other hand, note that $\|D(HH^\top)-\mathbb{E}HH^\top\|=\max_{i\in[p_1]}|X_i|$, where the $X_i$ are independent centered $\chi^2_m$ random variables. By chi-square concentration and a union bound, we have
$$\mathbb{P}\Big(\max_{i\in[p_1]}|X_i|^q>t\Big) \le 2\exp\Big(\log p_1-c\Big(\frac{t^{2/q}}{m}\wedge t^{1/q}\Big)\Big).$$
Integration gives
$$\mathbb{E}\max_{i\in[p_1]}|X_i|^q \le C^q\Big(\log^q p_1+\big(\sqrt{m\log p_1}\big)^q\Big).$$
(3.19)

Then it follows that
$$\begin{aligned}
\Big(\mathbb{E}\,\mathrm{tr}\big\{(\Delta(HH^\top))^q\big\}\Big)^{1/q}
&\le \big(p_1\,\mathbb{E}\|\Delta(HH^\top)\|^q\big)^{1/q}\\
&\le p_1^{1/q}\Big[\big(\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q\big)^{1/q}+\big(\mathbb{E}\|D(HH^\top-\mathbb{E}HH^\top)\|^q\big)^{1/q}\Big]\\
&\overset{(3.18),(3.19)}{\lesssim} p_1^{1/q}\cdot\big(\sqrt{p_1m}+p_1+4(\sqrt{p_1}+\sqrt m)\sqrt q+2q\big). \tag{3.20}
\end{aligned}$$
Now we specify $q=2p_1$ and get
$$\mathbb{E}\|\Delta(ZZ^\top)\| \overset{(3.17)}{\le} \Big(\mathbb{E}\,\mathrm{tr}\big\{(\Delta(HH^\top))^q\big\}\Big)^{1/q} \lesssim \sqrt{p_1\sum_{j=1}^{p_2}\sigma_j^4}+p_1.$$
This together with (3.16) completes the proof of the theorem.

4 Applications

The concentration bounds established in the previous sections have a range of applications. In this section, we illustrate the usefulness of the heteroskedastic Wishart-type concentration through applications to low-rank matrix denoising and heteroskedastic clustering. Consider the following "signal + noise" model:
$$Y=X+Z,$$
where $X\in\mathbb{R}^{p_1\times p_2}$ is an (approximately) low-rank matrix of interest, $Z$ is random noise with independent entries, and $Y$ is the observation. This model has attracted significant attention in probability and statistics [5, 7, 14, 26], and has also been the prototypical setting in various applications, such as the bipartite stochastic block model [15], exponential family PCA [22], and top-$k$ ranking from pairwise comparisons [23]. In these applications, the leading singular values/vectors of $X$ often contain the information of interest. A straightforward way to estimate the leading singular values/vectors of $X$ (which are also the square roots of the eigenvalues, and the eigenvectors, of $XX^\top$) is by evaluating the spectrum of $Y$ (or equivalently of $YY^\top$). Let $\lambda_i(YY^\top),\lambda_i(XX^\top)$ and $v_i(YY^\top),v_i(XX^\top)$ denote the $i$th eigenvalues and eigenvectors of $YY^\top$ and $XX^\top$, respectively.
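This plug-in idea is easy to illustrate numerically. By Weyl's inequality (quoted next), each eigenvalue of $YY^\top$ lies within $\|YY^\top-XX^\top\|$ of the corresponding eigenvalue of $XX^\top$; the sketch below (illustrative sizes and variance profile) checks this for a rank-one signal plus heteroskedastic noise:

```python
import numpy as np

# Weyl's inequality for the "signal + noise" model Y = X + Z:
# |lambda_i(YY^T) - lambda_i(XX^T)| <= ||YY^T - XX^T|| for every i.
rng = np.random.default_rng(3)
p1, p2 = 30, 80
X = np.outer(rng.standard_normal(p1), rng.standard_normal(p2))  # rank-1 signal
sigma = rng.uniform(0.2, 1.0, size=p1)                           # row noise levels
Z = sigma[:, None] * rng.standard_normal((p1, p2))
Y = X + Z

lam_Y = np.sort(np.linalg.eigvalsh(Y @ Y.T))
lam_X = np.sort(np.linalg.eigvalsh(X @ X.T))
gap = float(np.max(np.abs(lam_Y - lam_X)))       # worst eigenvalue perturbation
pert = float(np.linalg.norm(Y @ Y.T - X @ X.T, 2))
```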
The classic perturbation theory (e.g., Weyl [34] and Davis–Kahan [13]) yields the following sharp bounds:
$$|\lambda_i(YY^\top)-\lambda_i(XX^\top)| \le \|YY^\top-XX^\top\|,$$
$$\|v_i(YY^\top)\pm v_i(XX^\top)\|_2 \lesssim \frac{\|YY^\top-XX^\top\|}{\min_{j=i,i+1}\{\lambda_{j-1}(XX^\top)-\lambda_j(XX^\top)\}}.$$
Then, a tight upper bound on the perturbation $YY^\top-XX^\top$ is critical for quantifying the accuracy of $\lambda_i(YY^\top),v_i(YY^\top)$ as estimates of $\lambda_i(XX^\top),v_i(XX^\top)$. By expansion, the perturbation $YY^\top-XX^\top$ can be written as
$$YY^\top-XX^\top = XZ^\top+ZX^\top+\mathbb{E}ZZ^\top+(ZZ^\top-\mathbb{E}ZZ^\top). \tag{4.1}$$
Here, $\mathbb{E}ZZ^\top$ is a deterministic diagonal matrix; $\|XZ^\top\|=\|ZX^\top\|$ is the spectral norm of a random matrix multiplied by a deterministic matrix, which has been considered in [32]; and the term $\|ZZ^\top-\mathbb{E}ZZ^\top\|$ can often be the dominating and most complicated part of (4.1), for which the heteroskedastic Wishart-type concentration inequalities established in the present paper provide a powerful analytical tool.

We further illustrate through a specific application to high-dimensional heteroskedastic clustering. Clustering is a ubiquitous task in statistics and machine learning [16]. Suppose we observe a two-component Gaussian mixture:
$$Y_j = l_j\mu+\varepsilon_j,\qquad \varepsilon_j=(\varepsilon_{1j},\ldots,\varepsilon_{pj})^\top,\quad \varepsilon_{ij}\overset{ind}{\sim}N(0,\sigma_i^2),\quad j=1,\ldots,n. \tag{4.2}$$
Here, $\mu$ is an unknown deterministic vector in $\mathbb{R}^p$ and the $l_j\in\{-1,1\}$ are unknown labels of the two classes. While most existing works focus on the homoskedastic setting, we consider a heteroskedastic setting where the noise variance $\sigma_i^2$ may vary across coordinates.
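A minimal simulation sketch of this heteroskedastic mixture, clustering by the signs of the top eigenvector of $YY^\top$ (the estimator (4.3) analyzed next) and scoring with the label-flip-invariant misclassification rate (4.5); all parameter values below are illustrative choices, not from the paper:

```python
import numpy as np

# Spectral clustering under the two-component heteroskedastic mixture (4.2):
# rows of Y are samples Y_j = l_j * mu + eps_j with coordinate-wise variances.
rng = np.random.default_rng(4)
n, p = 200, 100
mu = rng.standard_normal(p)
mu *= 6.0 / np.linalg.norm(mu)                  # ||mu||_2 = 6: strong-signal regime
l = rng.choice([-1.0, 1.0], size=n)             # hidden labels
sigma = rng.uniform(0.5, 1.5, size=p)           # heteroskedastic coordinate noise
Y = np.outer(l, mu) + rng.standard_normal((n, p)) * sigma[None, :]

evals, evecs = np.linalg.eigh(Y @ Y.T)
v_hat = evecs[:, -1]                            # first (top) eigenvector of YY^T
l_hat = np.sign(v_hat)
l_hat[l_hat == 0] = 1.0

# misclassification rate M(l, l_hat) of (4.5), minimized over the global sign flip
M = min(float(np.mean(l != l_hat)), float(np.mean(l != -l_hat)))
```

In this regime, $\|\mu\|_2\gg\sigma_*\vee(\tilde\sigma/n^{1/4})$ holds, so by Theorem 4.1 the misclassification rate should be small.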
Then the sample $\{Y_j\}_{j=1}^n$ can be written in matrix form as $Y=X+Z$, where $Y=[Y_1,Y_2,\cdots,Y_n]^\top$, $X=l\mu^\top$ with $l=(l_1,l_2,\cdots,l_n)^\top$, and $Z$ is the corresponding noise matrix with $(j,i)$ entry $\varepsilon_{ij}$. Our goal is to cluster $\{Y_j\}_{j=1}^n$ into two groups, or equivalently, to estimate the hidden labels $\{l_j\}_{j=1}^n$. Let $\hat v$ be the first eigenvector of $YY^\top$. As $\hat v$ is an estimate of $l$, it is straightforward to cluster as
$$\hat l_j = \mathrm{sgn}(\hat v_j),\qquad j=1,\ldots,n. \tag{4.3}$$
Applying Theorem 3.13 and the perturbation bound on $\|XZ^\top\|$ [36, Lemma 3] to (4.1), it can be shown that
$$\mathbb{E}\|YY^\top-\mathbb{E}ZZ^\top-XX^\top\| \lesssim n\|\mu\|_2\sigma_*+n\sigma_*^2+\sqrt{n\sum_i\sigma_i^4}.$$
Combining this with the Davis–Kahan theorem [13], we obtain the following result.

Theorem 4.1. Let $\sigma_*=\max_i\sigma_i$ and $\tilde\sigma=(\sum_i\sigma_i^4)^{1/4}$. The estimator in (4.3) satisfies
$$\mathbb{E}M(l,\hat l) \lesssim \frac{n\|\mu\|_2\sigma_*+n\sigma_*^2+\sqrt n\,\tilde\sigma^2}{n\|\mu\|_2^2}\wedge 1. \tag{4.4}$$
Here, $M(l,\hat l)$ is the misclassification rate defined as
$$M(l,\hat l) = \frac{1}{n}\min\Big\{\sum_{i=1}^n 1_{\{l_i\ne\hat l_i\}},\ \sum_{i=1}^n 1_{\{l_i\ne-\hat l_i\}}\Big\}. \tag{4.5}$$
The complete proof of Theorem 4.1 is deferred to Section 5.6. By (4.4), the clustering is consistent (i.e., $\mathbb{E}M(l,\hat l)=o(1)$) as long as
$$\|\mu\|_2 \gg \sigma_*\vee(\tilde\sigma/n^{1/4}). \tag{4.6}$$
The following lower bound shows that the signal-to-noise-ratio condition (4.6) is necessary to ensure consistent classification. The proof is provided in Section 5.6.

Theorem 4.2. Suppose $\sigma_*\le\tilde\sigma\le p^{1/4}\sigma_*$.
Consider the following class of distributions on $\mathbb{R}^{n\times p}$:
$$\mathcal{P}_{l,\lambda}(\sigma_*,\tilde\sigma) = \Big\{P_Y:\ Y=X+Z\in\mathbb{R}^{n\times p},\ X=l\mu^\top,\ Z_{ij}\overset{ind}{\sim}N(0,\sigma_j^2),\ \|\mu\|\ge\lambda,\ \max_j\sigma_j\le\sigma_*,\ \sum_{j=1}^p\sigma_j^4\le\tilde\sigma^4\Big\}.$$
There exists a universal constant $c>0$ such that if $\lambda<c\big(\sigma_*\vee(\tilde\sigma/n^{1/4})\big)$, we have
$$\inf_{\hat l}\sup_{\mathcal{P}_{l,\lambda}(\sigma_*,\tilde\sigma)}\mathbb{E}M(l,\hat l)\ge 1/4.$$

5 Additional Proofs

5.1 Proofs for main results

In this section, we collect the proofs of the upper and lower bound results in Section 2, including Lemma 2.4, Lemma 2.6, Proposition 2.3, and Theorem 2.8.

Proof of Lemma 2.4. This proof shares similarities with, but also differs in several respects from, the Wigner-type argument of [4, Proposition 2.1]. We assume $\sigma_*=1$ throughout the proof without loss of generality, and divide the proof into two steps, targeting the two sides of the inequality respectively.

Step 1. One can check that $\mathbb{E}ZZ^\top=\mathrm{diag}\big(\{\sum_{j=1}^{p_2}\sigma_{ij}^2\}_{i=1}^{p_1}\big)$. Consider the following expansion:
$$\begin{aligned}
\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}
&= \sum_{u_1,\ldots,u_q,u_{q+1}\in[p_1]}\mathbb{E}\prod_{k=1}^q\big(ZZ^\top-\mathbb{E}ZZ^\top\big)_{u_k,u_{k+1}}\\
&= \sum_{u_1,\ldots,u_q,u_{q+1}\in[p_1]}\mathbb{E}\prod_{k=1}^q\Big\{\sum_{v_k\in[p_2]}\big(Z_{u_k,v_k}Z_{u_{k+1},v_k}-1_{\{u_k=u_{k+1}\}}\mathbb{E}Z_{u_k,v_k}^2\big)\Big\}\\
&= \sum_{\substack{u_1,\ldots,u_q,u_{q+1}\in[p_1]\\ v_1,\ldots,v_q\in[p_2]}}\mathbb{E}\prod_{k=1}^q\big(Z_{u_k,v_k}Z_{u_{k+1},v_k}-\sigma_{u_k,v_k}^2\cdot 1_{\{u_k=u_{k+1}\}}\big). \tag{5.1}
\end{aligned}$$
Here, the indices are taken modulo $q$, i.e., $u_{q+1}=u_1$. Next, we consider the bipartite graph between $[p_1]$ and $[p_2]$ and cycles of length $2q$, i.e., $c:=(u_1\to v_1\to u_2\to v_2\to\cdots\to u_q\to v_q\to u_{q+1}=u_1)$. For any $(i,j)\in[p_1]\times[p_2]$, let
$$\alpha_{ij}(c)=\mathrm{Card}\{k:\ (u_k=i,v_k=j,u_{k+1}\ne i)\text{ or }(u_k\ne i,v_k=j,u_{k+1}=i)\};$$
$$\beta_{ij}(c)=\mathrm{Card}\{k:\ u_k=u_{k+1}=i,\ v_k=j\}.$$
(5.2)

Then, $\alpha_{ij}(c)$ is the number of times the edge $(i,j)$ is visited exactly once by a sub-path $u_k\to v_k\to u_{k+1}$, and $\beta_{ij}(c)$ is the number of times the edge $(i,j)$ is visited twice by a sub-path $u_k\to v_k\to u_{k+1}$ (back and forth). Since the $Z_{ij}/\sigma_{ij}$ are i.i.d. standard normal, we have
$$\begin{aligned}
\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}
&= \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}Z_{ij}^{\alpha_{ij}(c)}\big(Z_{ij}^2-\sigma_{ij}^2\big)^{\beta_{ij}(c)}\\
&= \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\sigma_{ij}^{\alpha_{ij}(c)+2\beta_{ij}(c)}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}G^{\alpha_{ij}(c)}\big(G^2-1\big)^{\beta_{ij}(c)}\\
&= \sum_{c\in([p_1]\times[p_2])^q}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}G^{\alpha_{ij}(c)}\big(G^2-1\big)^{\beta_{ij}(c)}. \tag{5.3}
\end{aligned}$$
Here $G$ denotes an $N(0,1)$ random variable. Next, let $m_{\alpha,\beta}(c)$ be the number of edges that appear $\alpha$ times in $(u_k\to v_k)$ or $(v_k\to u_{k+1})$ with $u_k\ne u_{k+1}$, and $\beta$ times in $(u_k\to v_k\to u_{k+1})$ with $u_k=u_{k+1}$. More rigorously,
$$m_{\alpha,\beta}(c) := \mathrm{Card}\Big\{(i,j)\in[p_1]\times[p_2]:\ \beta=|\{k:u_k=u_{k+1}=i,v_k=j\}|,\ \alpha=|\{k:\text{exactly one of }u_k,u_{k+1}\text{ equals }i,\ v_k=j\}|\Big\}. \tag{5.4}$$
For any cycle $c$, we define its shape $s(c)$ by relabeling the vertices in order of appearance. For example, the cycle $2\to 4'\to 3\to 2'\to 2\to 4'\to 5\to 1'\to 2$ has shape $1\to 1'\to 2\to 2'\to 1\to 1'\to 3\to 3'\to 1$. Here $i$ denotes a left vertex while $i'$ denotes a right vertex. It is easy to see that any two cycles $c$ and $c'$ with the same shape satisfy $m_{\alpha,\beta}(c)=m_{\alpha,\beta}(c')$. Thus we can well define $m_{\alpha,\beta}(s(c)):=m_{\alpha,\beta}(c)$.
Based on the previous discussion,
$$\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}G^{\alpha_{ij}(c)}\big(G^2-1\big)^{\beta_{ij}(c)} = \prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s(c))}. \tag{5.5}$$
A natural observation is that $\mathbb{E}G^\alpha(G^2-1)^\beta\ge 0$ for all non-negative $\alpha,\beta$, and that $\mathbb{E}G^\alpha(G^2-1)^\beta=0$ if and only if $\alpha$ is odd, or $\alpha=0,\beta=1$ (see Lemma 5.2 in Appendix A for details). We then define the even shape set $\mathcal{S}_{p_1,p_2}$ as
$$\mathcal{S}_{p_1,p_2} = \big\{s(c):\ m_{\alpha,\beta}(s(c))=0\ \text{for all }\alpha,\beta\text{ s.t. }\alpha\text{ is odd, or }\alpha=0,\beta=1\big\}. \tag{5.6}$$
Then the right-hand side of (5.5) is nonzero only for $s(c)\in\mathcal{S}_{p_1,p_2}$, and the expansion (5.3) can be further rewritten as
$$\begin{aligned}
\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\}
&= \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\sum_{c:s(c)=s_0}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\\
&= \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\cdot\sum_{c:s(c)=s_0}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}. \tag{5.7}
\end{aligned}$$
Now let $m_L(s_0)$ and $m_R(s_0)$ denote the numbers of distinct left and right nodes visited by cycles with shape $s_0$. We have the following lemma.

Lemma 5.1. Suppose $\sigma_*\le 1$. Then for any shape $s_0\in\mathcal{S}_{p_1,p_2}$,
$$\sum_{c:s(c)=s_0}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k} \le \Big(p_1\sigma_C^{2m_L(s_0)-2}\sigma_R^{2m_R(s_0)}\Big)\wedge\Big(p_2\sigma_C^{2m_L(s_0)}\sigma_R^{2m_R(s_0)-2}\Big).$$

Proof. The proof of Lemma 5.1 is an analogue of [4, Lemma 2.5]. We first show
$$\sum_{c:s(c)=s_0}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k} \le p_1\sigma_C^{2m_L(s_0)-2}\sigma_R^{2m_R(s_0)}. \tag{5.8}$$
Suppose $s_0=(s_1,s_1',\ldots,s_q,s_q')$. Let $l(k)=\min\{j:s_j=k\}$, i.e., the first time in any cycle of shape $s_0$ at which its $k$th distinct left vertex is visited; similarly, define $r(k)=\min\{j:s_j'=k\}$. Now let $c=(u_1,v_1,\cdots,u_q,v_q)$ be a cycle with shape $s_0$.
Then the following $m_L(s_0)-1$ distinct edges from a right vertex to a left vertex appear in order:
$$v_{l(2)-1}\to u_{l(2)},\quad v_{l(3)-1}\to u_{l(3)},\quad\cdots,\quad v_{l(m_L(s_0))-1}\to u_{l(m_L(s_0))}.$$
Similarly, we have $m_R(s_0)$ edges from a left vertex to a right vertex: $u_{r(1)}\to v_{r(1)},\ u_{r(2)}\to v_{r(2)},\ \cdots,\ u_{r(m_R(s_0))}\to v_{r(m_R(s_0))}$. In addition, these $m_L(s_0)+m_R(s_0)-1$ edges are distinct by the definition of $l(k)$ and $r(k)$. We claim that each of these $m_L(s_0)+m_R(s_0)-1$ edges appears at least twice: if one of these edges appeared only once, we would have $m_{1,0}(s(c))\ge 1$, which contradicts $s_0\in\mathcal{S}_{p_1,p_2}$. Now for a fixed starting vertex $u_1=u\in[p_1]$, we can bound
$$\begin{aligned}
\sum_{\substack{c:u_1=u\\ s(c)=s_0}}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}
&\le \sum_{\substack{c:u_1=u\\ s(c)=s_0}}\Big(\sigma^2_{u_{r(1)},v_{r(1)}}\cdots\sigma^2_{u_{r(m_R(s_0))},v_{r(m_R(s_0))}}\Big)\cdot\Big(\sigma^2_{u_{l(2)},v_{l(2)-1}}\cdots\sigma^2_{u_{l(m_L(s_0))},v_{l(m_L(s_0))-1}}\Big)\\
&= \sum_{\substack{a_2\ne\cdots\ne a_{m_L(s_0)}\in[p_1]\\ b_1\ne\cdots\ne b_{m_R(s_0)}\in[p_2]}}\Big(\sigma^2_{a_{s_{r(1)}},b_1}\cdots\sigma^2_{a_{s_{r(m_R(s_0))}},b_{m_R(s_0)}}\Big)\cdot\Big(\sigma^2_{a_2,b_{s'_{l(2)-1}}}\cdots\sigma^2_{a_{m_L(s_0)},b_{s'_{l(m_L(s_0))-1}}}\Big)\\
&\le \sigma_R^{2m_R(s_0)}\sigma_C^{2(m_L(s_0)-1)}.
\end{aligned}$$
Then (5.8) follows by summing over the initial vertex $u\in[p_1]$. Similarly, we can show
$$\sum_{c:s(c)=s_0}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k} \le p_2\sigma_C^{2m_L(s_0)}\sigma_R^{2m_R(s_0)-2},$$
and the proof is complete.

Combining (5.7) and Lemma 5.1, we obtain
$$\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\} \le \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\cdot\Big\{p_1\sigma_C^{2(m_L(s_0)-1)}\sigma_R^{2m_R(s_0)}\Big\}\wedge\Big\{p_2\sigma_C^{2m_L(s_0)}\sigma_R^{2(m_R(s_0)-1)}\Big\}.$$
(5.9)

Step 2. Next, we consider the expansion for $\mathbb{E}\,\mathrm{tr}((HH^\top)^q)$, where $H\in\mathbb{R}^{m_1\times m_2}$ has i.i.d. standard Gaussian entries. Expanding as in Step 1, we obtain
$$\begin{aligned}
\mathbb{E}\,\mathrm{tr}\big((HH^\top-m_2I_{m_1})^q\big)
&= \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\cdot|\{c:s(c)=s_0\}|\\
&= \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\cdot m_1(m_1-1)\cdots(m_1-m_L(s_0)+1)\,m_2(m_2-1)\cdots(m_2-m_R(s_0)+1).
\end{aligned}$$
Provided that $m_1=\lceil\sigma_C^2\rceil+q-1$, $m_2=\lceil\sigma_R^2\rceil+q-1$, and $m_L(s_0),m_R(s_0)\le q$, we have
$$m_1(m_1-1)\cdots(m_1-m_L(s_0)+1)\cdot m_2(m_2-1)\cdots(m_2-m_R(s_0)+1) \ge m_1\cdot(m_1-m_L(s_0)+1)^{m_L(s_0)-1}\cdot(m_2-m_R(s_0)+1)^{m_R(s_0)} \ge m_1\sigma_C^{2m_L(s_0)-2}\cdot\sigma_R^{2m_R(s_0)}.$$
Similarly,
$$m_1(m_1-1)\cdots(m_1-m_L(s_0)+1)\cdot m_2(m_2-1)\cdots(m_2-m_R(s_0)+1) \ge \sigma_C^{2m_L(s_0)}\cdot m_2\sigma_R^{2m_R(s_0)-2}.$$
These together imply
$$\mathbb{E}\,\mathrm{tr}\big((HH^\top-m_2I_{m_1})^q\big) \ge \sum_{s_0\in\mathcal{S}_{p_1,p_2}}\prod_{\alpha,\beta\ge 0}\big\{\mathbb{E}G^\alpha(G^2-1)^\beta\big\}^{m_{\alpha,\beta}(s_0)}\cdot\Big\{m_1\sigma_C^{2m_L(s_0)-2}\sigma_R^{2m_R(s_0)}\Big\}\vee\Big\{m_2\sigma_C^{2m_L(s_0)}\sigma_R^{2m_R(s_0)-2}\Big\}. \tag{5.10}$$
By comparing (5.9) and (5.10), we have finally proved that
$$\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\} \le \Big(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\Big)\,\mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q\big\}.$$

Proof of Lemma 2.6. Let $W=\max\big\{\sigma_{\max}(H)-\sqrt{m_2}-\sqrt{m_1},\ \sqrt{m_2}-\sqrt{m_1}-\sigma_{\min}(H),\ 0\big\}$. By the tail bound for i.i.d. Gaussian matrices (cf. [31, Corollary 5.35]), $\mathbb{P}(W\ge t)\le 2\exp(-t^2/2)$ for all $t\ge 0$.
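Integrating this tail bound gives the moment estimate used next; the closed form $2q\int_0^\infty t^{q-1}e^{-t^2/2}\,dt = 2^{q/2}q\,\Gamma(q/2)$ can be checked numerically (a sketch with illustrative values of $q$):

```python
import math
import numpy as np

# Check the tail-integration identity behind E W^q <= 2^{q/2} q Gamma(q/2):
# 2q * int_0^inf t^{q-1} exp(-t^2/2) dt equals the closed form exactly.
errs = []
for q in (2, 3, 5, 8):
    t = np.linspace(1e-9, 40.0, 400_000)
    integral = np.trapz(2 * q * t ** (q - 1) * np.exp(-t**2 / 2), t)
    closed = 2 ** (q / 2) * q * math.gamma(q / 2)
    errs.append(abs(integral - closed) / closed)
max_err = max(errs)
```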
Thus for any $q\ge 1$,
$$\mathbb{E}W^q = q\int_0^\infty t^{q-1}\,\mathbb{P}(W\ge t)\,dt \le 2q\int_0^\infty t^{q-1}\exp(-t^2/2)\,dt = 2^{q/2}q\,\Gamma(q/2).$$
Since
$$\|HH^\top-\mathbb{E}HH^\top\| = \|HH^\top-m_2I_{m_1}\| = \max\big\{\sigma_{\max}^2(H)-m_2,\ m_2-\sigma_{\min}^2(H)\big\} \le (W+\sqrt{m_1}+\sqrt{m_2})^2-m_2 = 2\sqrt{m_1m_2}+m_1+W^2+2(\sqrt{m_1}+\sqrt{m_2})W, \tag{5.11}$$
we have
$$\big(\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q\big)^{1/q} \le 2\sqrt{m_1m_2}+m_1+(\mathbb{E}W^{2q})^{1/q}+2(\sqrt{m_1}+\sqrt{m_2})(\mathbb{E}W^q)^{1/q} \le 2\sqrt{m_1m_2}+m_1+\big(2^{q+1}q\,\Gamma(q)\big)^{1/q}+2(\sqrt{m_1}+\sqrt{m_2})\big(2^{q/2}q\,\Gamma(q/2)\big)^{1/q}.$$
Next we claim
$$\big(2^{q+1}q\,\Gamma(q)\big)^{1/q}\le 2q,\qquad \big(2^{q/2}q\,\Gamma(q/2)\big)^{1/q}\le 2q^{1/2}. \tag{5.12}$$
One can verify (5.12) for $2\le q\le 10$ by direct calculation; for $q\ge 11$, (5.12) can be verified via the Gamma function upper bound in [6]. In summary, we have
$$\big(\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q\big)^{1/q} \le 2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q,$$
which finishes the proof of the first part of this lemma. For the second part, when $m_1\le m_2$: since $HH^\top-\mathbb{E}HH^\top$ is an $m_1$-by-$m_1$ matrix, $\mathrm{tr}((HH^\top-\mathbb{E}HH^\top)^q)$ is the sum of the $m_1$ eigenvalues of $(HH^\top-\mathbb{E}HH^\top)^q$, each of which is no more than $\|HH^\top-\mathbb{E}HH^\top\|^q$. Thus,
$$\mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q\big\} \le m_1\mathbb{E}\|HH^\top-\mathbb{E}HH^\top\|^q \le (m_1\wedge m_2)\cdot\big(2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q\big)^q.$$
When $m_1>m_2$, note that $\mathrm{rank}(HH^\top)\le m_2$ and $\mathbb{E}HH^\top=m_2I_{m_1}$. Then
$$(HH^\top-\mathbb{E}HH^\top)^q-(-1)^qm_2^qI_{m_1} = \sum_{k=1}^q(-m_2)^{q-k}\binom{q}{k}(HH^\top)^k,$$
which shares the eigenspace of $HH^\top$ and has rank no more than $m_2$.
Thus,
$$\begin{aligned}
\mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q\big\}
&= \mathbb{E}\,\mathrm{tr}\big\{(HH^\top-\mathbb{E}HH^\top)^q-(-1)^qm_2^qI_{m_1}\big\}+\mathrm{tr}\big((-1)^qm_2^qI_{m_1}\big)\\
&\le m_2\,\mathbb{E}\big\|(HH^\top-\mathbb{E}HH^\top)^q-(-1)^qm_2^qI_{m_1}\big\|+m_1m_2^q\\
&\le m_2\big\{\big(2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q\big)^q+m_2^q\big\}+m_1m_2^q\\
&\le 2m_2\big(2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q\big)^q\\
&= 2(m_1\wedge m_2)\big(2\sqrt{m_1m_2}+m_1+4(\sqrt{m_1}+\sqrt{m_2})\sqrt q+2q\big)^q,
\end{aligned}$$
where the last equality is due to $m_1>m_2$.

Proof of Proposition 2.3. Since $Z_{ij}\overset{iid}{\sim}N(0,1)$, we have $\mathbb{E}ZZ^\top=p_2I_{p_1}$ and
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| = \mathbb{E}\|ZZ^\top-p_2I_{p_1}\| \ge \mathbb{E}\big(\|ZZ^\top\|-p_2\big) = \mathbb{E}\|Z\|^2-p_2.$$
Since $\|Z\|/(\sqrt{p_1}+\sqrt{p_2})\to 1$ as $p_1,p_2$ tend to infinity [31, Theorem 5.31],
$$\liminf_{p_1,p_2\to\infty}\frac{\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\|}{2\sigma_C\sigma_R+\sigma_C^2} \ge \liminf_{p_1,p_2\to\infty}\frac{\mathbb{E}\|Z\|^2-p_2}{2\sqrt{p_1p_2}+p_1} \ge 1.$$

Proof of Theorem 2.8. To prove this theorem, it suffices to prove the following separate lower bounds:
$$\sup_{Z\in\mathcal{F}_{p_1,p_2}(\sigma_*,\sigma_C,\sigma_R)}\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \gtrsim \sigma_C^2; \tag{5.13}$$
$$\sup_{Z\in\mathcal{F}_{p_1,p_2}(\sigma_*,\sigma_C,\sigma_R)}\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \gtrsim \sigma_C\sigma_R; \tag{5.14}$$
$$\sup_{Z\in\mathcal{F}_{p_1,p_2}(\sigma_*,\sigma_C,\sigma_R)}\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \gtrsim \sigma_R\sigma_*\sqrt{\log p}+\sigma_*^2\log p. \tag{5.15}$$

1. We first set $\sigma_{i1}=\sigma_C/\sqrt{p_1}$ and $\sigma_{ij}=0$ for $j\ge 2$. If $Z_{ij}\sim N(0,\sigma_{ij}^2)$ independently, it is easy to check that $Z\in\mathcal{F}_{p_1,p_2}(\sigma_*,\sigma_C,\sigma_R)$. Then $Z$ is zero except for its first column.
Denote the first column of $Z$ by $z$; then $ZZ^\top-\mathbb{E}ZZ^\top=zz^\top-\frac{\sigma_C^2}{p_1}I_{p_1}$, and
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| = \mathbb{E}\Big\|zz^\top-\frac{\sigma_C^2}{p_1}I_{p_1}\Big\| \ge \mathbb{E}\|zz^\top\|-\sigma_C^2/p_1 = \mathbb{E}\|z\|_2^2-\sigma_C^2/p_1 \ge \sigma_C^2(1-1/p_1) \ge c\sigma_C^2,$$
which shows (5.13).

2. Let $k_1=\lfloor\sigma_C^2/\sigma_*^2\rfloor$ and $k_2=\lfloor\sigma_R^2/\sigma_*^2\rfloor$. Construct
$$\sigma_{ij}=\begin{cases}\sigma_*, & 1\le i\le k_1,\ 1\le j\le k_2;\\ 0, & \text{otherwise}.\end{cases}$$
Under this construction, $Z_{ij}\sim N(0,\sigma_*^2)$ for $1\le i\le k_1$, $1\le j\le k_2$, and $Z_{ij}=0$ otherwise. Thus,
$$\mathbb{E}\big(Z_{\cdot j}Z_{\cdot j}^\top-\mathbb{E}Z_{\cdot j}Z_{\cdot j}^\top\big)^2 = \mathbb{E}Z_{\cdot j}Z_{\cdot j}^\top Z_{\cdot j}Z_{\cdot j}^\top-\big(\mathbb{E}Z_{\cdot j}Z_{\cdot j}^\top\big)^2 = \mathbb{E}\|Z_{\cdot j}\|_2^2Z_{\cdot j}Z_{\cdot j}^\top-\sigma_*^4I_{k_1} = (k_1+1)\sigma_*^4I_{k_1}.$$
Here, the last equality is due to
$$\big(\mathbb{E}\|Z_{\cdot j}\|_2^2Z_{\cdot j}Z_{\cdot j}^\top\big)_{i,i'} = \mathbb{E}\|Z_{\cdot j}\|_2^2Z_{ij}Z_{i'j} = \begin{cases}(k_1-1+3)\sigma_*^4, & 1\le i=i'\le k_1;\\ 0, & 1\le i\ne i'\le k_1.\end{cases}$$
Thus,
$$\Big\|\sum_{j=1}^{k_2}\mathbb{E}\big\{Z_{\cdot j}Z_{\cdot j}^\top-\mathbb{E}Z_{\cdot j}Z_{\cdot j}^\top\big\}^2\Big\| = \big\|(k_1+1)k_2\sigma_*^4I_{k_1}\big\| = (k_1+1)k_2\sigma_*^4.$$
Note that $ZZ^\top-\mathbb{E}ZZ^\top$ can be decomposed as a sum of independent random matrices,
$$ZZ^\top-\mathbb{E}ZZ^\top = \sum_{j=1}^{k_2}\big\{Z_{\cdot j}Z_{\cdot j}^\top-\mathbb{E}Z_{\cdot j}Z_{\cdot j}^\top\big\}.$$
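The block construction in case 2 is straightforward to simulate. The sketch below (illustrative $k_1,k_2$ and padding dimensions) estimates $\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\|$ for this construction and compares it with the target rate $\sigma_C\sigma_R$:

```python
import numpy as np

# Simulate the lower-bound construction: a k1-by-k2 block of N(0, sigma_*^2)
# entries inside a larger zero matrix, so that sigma_C^2 = k1 sigma_*^2 and
# sigma_R^2 = k2 sigma_*^2. The spectral-norm deviation should be >~ sigma_C sigma_R.
rng = np.random.default_rng(5)
k1, k2 = 20, 20
p1, p2 = 60, 60
sigma_star = 1.0
sigma_C = sigma_star * np.sqrt(k1)
sigma_R = sigma_star * np.sqrt(k2)

norms = []
for _ in range(20):
    Z = np.zeros((p1, p2))
    Z[:k1, :k2] = sigma_star * rng.standard_normal((k1, k2))
    EZZt = np.zeros((p1, p1))
    EZZt[:k1, :k1] = k2 * sigma_star**2 * np.eye(k1)   # E ZZ^T on the block
    norms.append(np.linalg.norm(Z @ Z.T - EZZt, 2))
emp = float(np.mean(norms))
ratio = emp / (sigma_C * sigma_R)
```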
We apply the bound on the expected norm of a sum of independent random matrices [29] and obtain
$$\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\| \gtrsim \sqrt{(k_1+1)k_2\sigma_*^4} = \sqrt{\big(\lfloor\sigma_C^2/\sigma_*^2\rfloor+1\big)\cdot\lfloor\sigma_R^2/\sigma_*^2\rfloor\cdot\sigma_*^4} \ge \sqrt{(\sigma_C^2/\sigma_*^2)\cdot\sigma_R^2/(2\sigma_*^2)\cdot\sigma_*^4}\quad(\text{since }\sigma_R\ge\sigma_*) \gtrsim \sigma_R\sigma_C.$$
We have thus shown (5.14).

3. Set $k_1=\lfloor\sigma_C^2/\sigma_*^2\rfloor$, $k_2=\lfloor\sigma_R^2/\sigma_*^2\rfloor$, and $m=\lfloor(p_1/k_1)\wedge(p_2/k_2)\rfloor$. If $k_2\ge(\log p)^2$, then $\sigma_R\ge\sigma_*\log p$ and (5.15) is implied by (5.14). So we assume $k_2\le(\log p)^2$; thus
$$k_1m \ge k_1\Big(\frac{p_1}{2k_1}\wedge\frac{p_2}{2k_2}\Big) \ge \frac{p_1}{2}\wedge\frac{p_2}{2(\log p)^2} \ge \frac{1}{2}\cdot\frac{p}{(\log p)^2}$$
and $\log(k_1m)\ge c\log p$. Let
$$(\sigma_{ij}) = \mathrm{diag}(\underbrace{B,B,\ldots,B}_{m},0)\in\mathbb{R}^{p_1\times p_2},\qquad B=\sigma_*1_{k_1}1_{k_2}^\top.$$
Then we can write $Z$ in row-wise form as a block-diagonal matrix whose rows within the $m$ diagonal blocks are $\beta_1,\ldots,\beta_{k_1m}\in\mathbb{R}^{k_2}$ with $\beta_1,\ldots,\beta_{k_1m}\overset{iid}{\sim}N(0,\sigma_*^2I_{k_2})$. Examining the expression of $\|ZZ^\top-\mathbb{E}ZZ^\top\|$, we see that
$$\|ZZ^\top-\mathbb{E}ZZ^\top\| \ge \max_{1\le j\le k_1m}\big|\beta_j^\top\beta_j-k_2\sigma_*^2\big|.$$
Note that $\beta_j^\top\beta_j/\sigma_*^2\sim\chi^2_{k_2}$. By the lower bound on the right tail of the chi-square distribution (Corollary 3 in [37]), we have
$$\mathbb{P}\big(\beta_j^\top\beta_j-k_2\sigma_*^2\ge\sigma_*^2x\big) \ge c\exp\Big(-C\Big(x\wedge\frac{x^2}{k_2}\Big)\Big).$$
Since
$$\mathbb{P}\Big(\max_j\beta_j^\top\beta_j-k_2\sigma_*^2>\sigma_*^2x\Big) = 1-\prod_{j=1}^{k_1m}\Big(1-\mathbb{P}\big(\beta_j^\top\beta_j-k_2\sigma_*^2\ge\sigma_*^2x\big)\Big) \ge 1-\Big(1-c\exp\Big(-C\Big(x\wedge\frac{x^2}{k_2}\Big)\Big)\Big)^{k_1m},$$
taking $x=c_1\big(\sqrt{k_2\log(k_1m)}\vee\log(k_1m)\big)$ for some $c_1$ such that $-C(x\wedge x^2/k_2)\ge-\log(k_1m)$, we get
$$\Big(1-c\exp\Big(-C\Big(x\wedge\frac{x^2}{k_2}\Big)\Big)\Big)^{k_1m} \le \Big(1-\frac{c'}{k_1m}\Big)^{k_1m} \le e^{-c'}.$$
Thus,
$$\begin{aligned}
\mathbb{E}\|ZZ^\top-\mathbb{E}ZZ^\top\|
&\ge \mathbb{E}\max_{1\le j\le k_1m}\beta_j^\top\beta_j-k_2\sigma_*^2 \ge \sup_{x>0}\ x\sigma_*^2\cdot\mathbb{P}\Big(\max_j\beta_j^\top\beta_j-k_2\sigma_*^2>x\sigma_*^2\Big)\\
&\ge c_1(1-e^{-c'})\sigma_*^2\big(\sqrt{k_2\log(k_1m)}\vee\log(k_1m)\big) \gtrsim \sigma_*^2\big(\sqrt{k_2\log(k_1m)}+\log(k_1m)\big) \gtrsim \sigma_*\sigma_R\sqrt{\log p}+\sigma_*^2\log p.
\end{aligned}$$

5.2 Proofs for non-Gaussian distributions

In this section, we collect the proofs of concentration for the non-Gaussian Wishart-type matrices (Lemma 3.1, Theorem 3.4, and Theorem 3.5) from Section 3.1.

Proof of Lemma 3.1. Following the notation and proof idea of Lemma 2.4, we have the same expansion of $\mathbb{E}\,\mathrm{tr}\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\}$ as (5.3):
$$\mathbb{E}\,\mathrm{tr}\big\{(ZZ^\top-\mathbb{E}ZZ^\top)^q\big\} = \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}Z_{ij}^{\alpha_{ij}(c)}\big(Z_{ij}^2-\sigma_{ij}^2\big)^{\beta_{ij}(c)} = \sum_{c\in([p_1]\times[p_2])^q}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E}E_{ij}^{\alpha_{ij}(c)}\big(E_{ij}^2-1\big)^{\beta_{ij}(c)}, \tag{5.16}$$
where $E_{ij}:=Z_{ij}/\sigma_{ij}$. Differently from (5.3), the $E_{ij}$ in (5.16) may not have the $N(0,1)$ distribution.
To overcome this difficulty, we introduce the following lemma to bound $\mathbb{E} G_{ij}^{\alpha}(G_{ij}^2 - 1)^{\beta}$ via a Gaussian analogue.

Lemma 5.2 (Gaussian moments). Suppose $G \sim N(0,1)$ and $\alpha, \beta$ are non-negative integers. Then
$$(\alpha + 2\beta - 1)!! \ge \mathbb{E} G^{\alpha}(G^2-1)^{\beta} \ge (\alpha + 2\beta - 3)!!\cdot(\alpha + \beta - 1) \quad \text{if } \alpha \text{ is even}; \qquad \mathbb{E} G^{\alpha}(G^2-1)^{\beta} = 0 \quad \text{if } \alpha \text{ is odd}. \tag{5.17}$$
Here for odd $k$, $k!! = k(k-2)\cdots 1$; in particular, $(-1)!! = 1$ and $(-3)!! = -1$. More generally, if $Z$ has a symmetric distribution and satisfies
$$\operatorname{Var}(Z) = 1, \qquad \|Z\|_{\psi_2} = \sup_{q\ge 1} q^{-1/2}(\mathbb{E}|Z|^q)^{1/q} \le \kappa, \tag{5.18}$$
then for any integers $\alpha, \beta \ge 0$,
$$\left|\mathbb{E} Z^{\alpha}(Z^2-1)^{\beta}\right| \le (C\kappa)^{\alpha+2\beta}\,\mathbb{E} G^{\alpha}(G^2-1)^{\beta} \tag{5.19}$$
for some uniform constant $C > 0$.

Proof of Lemma 5.2. See Appendix.

Now, combining (5.16) and (5.19), we have
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le \sum_{c\in([p_1]\times[p_2])^q}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\prod_{(i,j)\in[p_1]\times[p_2]}(C\kappa)^{\alpha_{ij}(c)+2\beta_{ij}(c)}\,\mathbb{E} G^{\alpha_{ij}(c)}\left(G^2-1\right)^{\beta_{ij}(c)} = (C\kappa)^{2q}\sum_{c\in([p_1]\times[p_2])^q}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} G^{\alpha_{ij}(c)}\left(G^2-1\right)^{\beta_{ij}(c)}.$$
The rest of the proof proceeds as in the proof of Lemma 2.4.

Proof of Theorem 3.4. Let $b := 2/\alpha \ge 2$ and $E_{ij} := Z_{ij}/\sigma_{ij}$. By definition, $\sup_q q^{-b/2}(\mathbb{E}|E_{ij}|^q)^{1/q} \le \kappa$. Thus for any $\alpha, \beta \ge 0$,
$$\left|\mathbb{E} E_{ij}^{\alpha}(E_{ij}^2-1)^{\beta}\right| \le \left|\mathbb{E} E_{ij}^{\alpha}(E_{ij}^2-1)^{\beta}1_{\{|E_{ij}|\le 1\}} + \mathbb{E} E_{ij}^{\alpha}(E_{ij}^2-1)^{\beta}1_{\{|E_{ij}|>1\}}\right| \le 1 + \mathbb{E}|E_{ij}|^{\alpha+2\beta} \le (C\kappa)^{\alpha+2\beta}(\alpha+2\beta)^{b(\alpha+2\beta)/2}. \tag{5.20}$$
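The double-factorial bounds in Lemma 5.2 can be verified exactly for small $\alpha, \beta$: since $\mathbb{E} G^{2m} = (2m-1)!!$, one can expand $\mathbb{E} G^{\alpha}(G^2-1)^{\beta}$ binomially in integer arithmetic. A minimal sketch (no Monte Carlo; the helper names are ours):

```python
from math import comb

def dfact(k):
    # double factorial for odd k, with the conventions (-1)!! = 1 and (-3)!! = -1
    if k == -1: return 1
    if k == -3: return -1
    return k * dfact(k - 2)

def gauss_moment(m):
    # E G^m for G ~ N(0,1): (m-1)!! if m is even, 0 if m is odd
    return dfact(m - 1) if m % 2 == 0 else 0

def moment(alpha, beta):
    # E G^alpha (G^2 - 1)^beta, expanded binomially (exact integers)
    return sum(comb(beta, i) * (-1)**(beta - i) * gauss_moment(alpha + 2 * i)
               for i in range(beta + 1))

for alpha in range(7):
    for beta in range(5):
        val = moment(alpha, beta)
        if alpha % 2 == 1:
            assert val == 0                                  # odd case of (5.17)
        else:
            assert dfact(alpha + 2 * beta - 1) >= val        # upper bound
            assert val >= dfact(alpha + 2 * beta - 3) * (alpha + beta - 1)
```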
We introduce the following technical lemma.

Lemma 5.3. Let $G, \tilde G$ be independent $N(0,1)$ and let the $F_{ij}$ be i.i.d. copies of $G|\tilde G|^{b-1}$. Then
$$\mathbb{E} E_{ij}^{\alpha}(E_{ij}^2-1)^{\beta} \le (C_b\kappa)^{\alpha+2\beta}\,\mathbb{E} F_{ij}^{\alpha}(F_{ij}^2-1)^{\beta}, \tag{5.21}$$
where $C_b$ is a constant depending only on $b$.

Proof of Lemma 5.3. See Appendix.

Now let $G_{ij}, \tilde G_{ij}$ be i.i.d. $N(0,1)$ and define $F_{ij} = G_{ij}|\tilde G_{ij}|^{b-1}$. Let $\tilde Z$ be a random matrix with entries $\tilde Z_{ij} = \sigma_{ij}F_{ij}$. Then, by Lemma 5.3 and an argument similar to the proof of Lemma 3.1, we have
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le (C_b\kappa)^{2q}\,\mathbb{E}\operatorname{tr}\{(\tilde Z\tilde Z^\top - \mathbb{E}\tilde Z\tilde Z^\top)^q\}.$$
Thus,
$$\mathbb{E}\|ZZ^\top - \mathbb{E} ZZ^\top\| \le \left(\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^{2q}\}\right)^{1/2q} \le (C_b\kappa)^2\left(\mathbb{E}\operatorname{tr}\{(\tilde Z\tilde Z^\top - \mathbb{E}\tilde Z\tilde Z^\top)^{2q}\}\right)^{1/2q}. \tag{5.22}$$
Let $q = \lceil\log(p_1\wedge p_2)\rceil$; it now suffices to upper bound $\left(\mathbb{E}\|\tilde Z\tilde Z^\top - \mathbb{E}\tilde Z\tilde Z^\top\|^{2q}\right)^{1/2q}$. We define $\tilde\sigma_C^2 = \max_j\sum_{i=1}^{p_1}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2}$, $\tilde\sigma_R^2 = \max_i\sum_{j=1}^{p_2}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2}$, and $\tilde\sigma_* = \max_{i,j}\sigma_{ij}|\tilde G_{ij}|^{b-1}$, and apply Theorem 2.1 conditionally on $\tilde G$:
$$\mathbb{E}\left[\operatorname{tr}\left\{\left(\tilde Z\tilde Z^\top - \mathbb{E}\tilde Z\tilde Z^\top\right)^{2q}\right\}\,\Big|\,\tilde G\right] \le C^{2q}\left(\tilde\sigma_C^2 + \tilde\sigma_C\tilde\sigma_R + \tilde\sigma_R\tilde\sigma_*\sqrt{\log(p_1\wedge p_2)} + \tilde\sigma_*^2\log(p_1\wedge p_2)\right)^{2q}.$$
Then,
$$\left(\mathbb{E}\operatorname{tr}\left\{\left(\tilde Z\tilde Z^\top - \mathbb{E}\tilde Z\tilde Z^\top\right)^{2q}\right\}\right)^{1/2q} \le C\left(\left\|\tilde\sigma_C^2\right\|_{2q} + \left\|\tilde\sigma_C\tilde\sigma_R\right\|_{2q} + \left\|\tilde\sigma_R\tilde\sigma_*\right\|_{2q}\sqrt{\log(p_1\wedge p_2)} + \left\|\tilde\sigma_*^2\right\|_{2q}\log(p_1\wedge p_2)\right). \tag{5.23}$$
Here $\|X\|_{2q} := (\mathbb{E}|X|^{2q})^{1/2q}$ is the $\ell_{2q}$-norm of the random variable $X$.
Now we bound $\|\tilde\sigma_*^2\|_{2q}$, $\|\tilde\sigma_R^2\|_{2q}$ and $\|\tilde\sigma_C^2\|_{2q}$ separately.

• $\|\tilde\sigma_*^2\|_{2q}$. For any $a > 0$, since
$$\mathbb{P}\left(\max_{i,j}|\tilde G_{ij}| > t\right) \le 2\exp\left(-\frac{t^2}{2} + \log(p_1p_2)\right) \le 2\exp\left(-\frac{t^2}{4}\right), \qquad \forall t > 2\sqrt{\log(p_1p_2)},$$
integration yields
$$\mathbb{E}\max_{i,j}|\tilde G_{ij}|^a = \int_0^\infty \mathbb{P}\left(\max_{i,j}|\tilde G_{ij}| > t^{1/a}\right)dt \le \left(2\sqrt{\log(p_1p_2)}\right)^a + \int_0^\infty 2e^{-t^{2/a}/4}\,dt = (4\log(p_1p_2))^{a/2} + 4a\,\Gamma\left(\frac{a}{2}\right).$$
It then follows that
$$\left\|\tilde\sigma_*^2\right\|_{2q} \le \sigma_*^2\left(\mathbb{E}\max_{i,j}|\tilde G_{ij}|^{4(b-1)q}\right)^{1/2q} \le \sigma_*^2\left((4\log(p_1p_2))^{2(b-1)q} + 16(b-1)q\,\Gamma(2(b-1)q)\right)^{1/2q} \lesssim \sigma_*^2\left(\log^{b-1}(p_1p_2) + q^{b-1}\right) \lesssim \sigma_*^2\log^{b-1}(p_1\vee p_2). \tag{5.24}$$

• $\|\tilde\sigma_C^2\|_{2q}$ and $\|\tilde\sigma_R^2\|_{2q}$. By the moment bound for the supremum of an empirical process [9, Theorem 11],
$$\left\|\tilde\sigma_C^2\right\|_{2q} = \left\|\max_j\sum_{i=1}^{p_1}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2}\right\|_{2q} \lesssim \mathbb{E}\max_{1\le j\le p_2}\sum_{i=1}^{p_1}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2} + q\left\|\tilde\sigma_*^2\right\|_{2q} \le \mathbb{E}\max_j\sum_{i=1}^{p_1}\left(\sigma_{ij}^2|\tilde G_{ij}|^{2b-2} - \mathbb{E}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2}\right) + \sigma_C^2 + q\left\|\tilde\sigma_*^2\right\|_{2q}. \tag{5.25}$$
Denote $Y_j = \sum_{i=1}^{p_1}\left(\sigma_{ij}^2|\tilde G_{ij}|^{2b-2} - \mathbb{E}\sigma_{ij}^2|\tilde G_{ij}|^{2b-2}\right)$; it suffices to bound $\mathbb{E}\max_j Y_j$. To this end, we introduce the Generalized Bernstein–Orlicz norm defined in [19].
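The maximal-inequality step above, $\mathbb{E}\max_{i,j}|\tilde G_{ij}| \lesssim \sqrt{\log(p_1p_2)}$, is easy to sanity-check by Monte Carlo; the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, reps = 40, 50, 200   # illustrative sizes, not from the theorem
maxes = np.abs(rng.standard_normal((reps, p1 * p2))).max(axis=1)
emp_mean = maxes.mean()

# The union bound P(max |G_ij| > t) <= 2 p1 p2 exp(-t^2/2) gives
# E max |G_ij| <= sqrt(2 log(2 p1 p2)) + o(1); we check against the cruder
# 2 sqrt(log(p1 p2)) threshold used in the proof (plus slack for the tail).
assert emp_mean <= 2 * np.sqrt(np.log(p1 * p2)) + 1.0
```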
For a random variable $X$, let $\|X\|_{\Psi_{\alpha,L}} := \inf\{\eta > 0 : \mathbb{E}[\Psi_{\alpha,L}(|X|/\eta)] \le 1\}$ be the $\Psi_{\alpha,L}$-norm, where $\Psi_{\alpha,L}$ is defined via its inverse function
$$\Psi_{\alpha,L}^{-1}(t) := \sqrt{\log(1+t)} + L(\log(1+t))^{1/\alpha}, \qquad \forall t \ge 0.$$
Now fix $j \in [p_2]$ and let $\alpha = 1/(b-1)$ and $L = \frac{4^{b-1}\sigma_*^2}{\sqrt{2}\sqrt{\sum_{i=1}^{p_1}\sigma_{ij}^4}}$. By [19, Theorem 3.1],
$$\|Y_j\|_{\Psi_{\alpha,L}} \le C\sqrt{\sum_{i=1}^{p_1}\sigma_{ij}^4}; \qquad \mathbb{P}\left(|Y_j| \ge C\sqrt{\sum_{i=1}^{p_1}\sigma_{ij}^4}\left\{\sqrt{t} + Lt^{1/\alpha}\right\}\right) \le 2\exp(-t), \quad t \ge 0.$$
This yields
$$\mathbb{P}\left(|Y_j| \ge C\left\{\sigma_C\sigma_*\sqrt{t} + \sigma_*^2 t^{b-1}\right\}\right) \le 2\exp(-t), \quad t \ge 0,$$
which can be rewritten as
$$\mathbb{P}(|Y_j| \ge t) \le 2\exp\left(-c\left(\frac{t^2}{\sigma_C^2\sigma_*^2}\wedge\left(\frac{t}{\sigma_*^2}\right)^{1/(b-1)}\right)\right), \quad t \ge 0.$$
Applying a union bound, we get
$$\mathbb{P}\left(\max_j|Y_j| \ge t\right) \le 2\exp\left(\log p_2 - c\left(\frac{t^2}{\sigma_C^2\sigma_*^2}\wedge\left(\frac{t}{\sigma_*^2}\right)^{1/(b-1)}\right)\right), \quad t \ge 0.$$
It now follows that
$$\mathbb{E}\max_j Y_j \le \mathbb{E}\max_j|Y_j| = \int_0^\infty \mathbb{P}\left(\max_j|Y_j| > t\right)dt \le C\left(\sigma_C\sigma_*\sqrt{\log p_2} + \sigma_*^2\log^{b-1}(p_2)\right) + \int_{C(\sigma_C\sigma_*\sqrt{\log p_2} + \sigma_*^2\log^{b-1}(p_2))}^{\infty}\mathbb{P}\left(\max_j|Y_j| > t\right)dt \le C\left(\sigma_C\sigma_*\sqrt{\log p_2} + \sigma_*^2\log^{b-1}(p_2)\right) + \int_0^\infty\left(\exp\left(-c\frac{t^2}{\sigma_C^2\sigma_*^2}\right) + \exp\left(-c\frac{t^{1/(b-1)}}{\sigma_*^{2/(b-1)}}\right)\right)dt \lesssim \sigma_C\sigma_*\sqrt{\log p_2} + \sigma_*^2\log^{b-1}(p_2) \lesssim \sigma_C^2 + \sigma_*^2\log^{b-1}(p_2).$$
Combining with (5.25), we obtain
$$\left\|\tilde\sigma_C^2\right\|_{2q} \lesssim \sigma_C^2 + \sigma_*^2\log^{b-1}(p_1\vee p_2)\log(p_1\wedge p_2). \tag{5.26}$$
Similarly we can obtain
$$\left\|\tilde\sigma_R^2\right\|_{2q} \lesssim \sigma_R^2 + \sigma_*^2\log^{b-1}(p_1\vee p_2)\log(p_1\wedge p_2). \tag{5.27}$$
Combining (5.23), (5.24), (5.26), (5.27) and applying the Cauchy–Schwarz inequality, we obtain
$$\mathbb{E}\|ZZ^\top - \mathbb{E} ZZ^\top\| \lesssim \sigma_C^2 + \sigma_R\sigma_C + \sigma_R\sigma_*\log^{(b-1)/2}(p_1\vee p_2)\sqrt{\log(p_1\wedge p_2)} + \sigma_*^2\log^{b-1}(p_1\vee p_2)\log(p_1\wedge p_2).$$
This completes the proof.

Proof of Theorem 3.5. We first prove the following comparison lemma.

Lemma 5.4. Suppose $Z$ is a $p_1$-by-$p_2$ random matrix with independent entries satisfying $\mathbb{E} Z_{ij} = 0$, $\operatorname{Var}(Z_{ij}) = \sigma_{ij}^2$, and $|Z_{ij}| \le 1$. Let $H$ be an $m_1$-by-$m_2$ matrix with i.i.d. standard Gaussian entries. When $q \ge 1$, $m_1 = \lceil\sigma_C^2\rceil + q - 1$, and $m_2 = \lceil\sigma_R^2\rceil + q - 1$, we have
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le \left(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\right)\mathbb{E}\operatorname{tr}\{(HH^\top - \mathbb{E} HH^\top)^q\}.$$

Proof. Recall $Z \in \mathbb{R}^{p_1\times p_2}$, $|Z_{ij}| \le 1$ almost surely, $\mathbb{E} Z_{ij} = 0$, $\operatorname{Var}(Z_{ij}) = \sigma_{ij}^2$. As in the proof of Lemma 2.4, let $c = (u_1, v_1, \ldots, u_q, v_q) \in ([p_1]\times[p_2])^q$ be a cycle of length $2q$ on the bipartite graph $[p_1] \to [p_2]$, with $\alpha_{ij}(c)$ and $\beta_{ij}(c)$ defined as in (5.2). We similarly have the following expansion:
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} = \sum_{u_1,\ldots,u_q\in[p_1]}\mathbb{E}\prod_{j=1}^q(ZZ^\top - \mathbb{E} ZZ^\top)_{u_j,u_{j+1}} = \mathbb{E}\sum_{u_1,\ldots,u_q\in[p_1]}\prod_{j=1}^q\left\{\sum_{v_j\in[p_2]}Z_{u_j,v_j}(Z^\top)_{v_j,u_{j+1}} - 1_{\{u_j=u_{j+1}\}}\sum_{v_j\in[p_2]}\mathbb{E} Z_{u_j,v_j}^2\right\} = \sum_{\substack{u_1,\ldots,u_q\in[p_1]\\ v_1,\ldots,v_q\in[p_2]}}\mathbb{E}\prod_{j=1}^q\left(Z_{u_j,v_j}Z_{u_{j+1},v_j} - \sigma_{u_j,v_j}^2 1_{\{u_j=u_{j+1}\}}\right) = \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} Z_{ij}^{\alpha_{ij}(c)}\left(Z_{ij}^2 - \sigma_{ij}^2\right)^{\beta_{ij}(c)}.$$
Since $Z_{ij}$ is symmetrically distributed and $\mathbb{E} Z_{ij}^2 = \sigma_{ij}^2$, we have $\mathbb{E} Z_{ij}^{\alpha}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta} = 0$ if $\alpha$ is odd or $\{\alpha = 0, \beta = 1\}$. For any $(i,j) \in [p_1]\times[p_2]$, note that $0 \le Z_{ij}^2 \le 1$ and $|Z_{ij}^2 - \sigma_{ij}^2| \le 1$. If $\alpha \ge 2$ and $\alpha$ is even,
$$\mathbb{E}\left|Z_{ij}^{\alpha_{ij}(c)}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta_{ij}(c)}\right| = \mathbb{E}\,|Z_{ij}^2|\cdot\left|Z_{ij}^{\alpha_{ij}(c)-2}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta_{ij}(c)}\right| \le \mathbb{E} Z_{ij}^2 = \sigma_{ij}^2;$$
if $\alpha \ge 0$ and $\beta \ge 2$, one has
$$\mathbb{E}\left|Z_{ij}^{\alpha_{ij}(c)}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta_{ij}(c)}\right| \le \mathbb{E}(Z_{ij}^2 - \sigma_{ij}^2)^2\cdot\left|Z_{ij}^{\alpha_{ij}(c)}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta_{ij}(c)-2}\right| \le \mathbb{E} Z_{ij}^4 - \sigma_{ij}^4 \le \mathbb{E} Z_{ij}^2\cdot\|Z_{ij}\|_\infty^2 - \sigma_{ij}^4 = \sigma_{ij}^2 - \sigma_{ij}^4 \le \sigma_{ij}^2.$$
Therefore, for any $\alpha, \beta \ge 0$, with $G \sim N(0,1)$ we have
$$\mathbb{E} Z_{ij}^{\alpha}(Z_{ij}^2 - \sigma_{ij}^2)^{\beta}\ \begin{cases}\ \le \sigma_{ij}^2\cdot\mathbb{E} G^{\alpha}\left(G^2-1\right)^{\beta}, & \alpha \text{ even and } (\alpha,\beta)\ne(0,0);\\ \ = 1, & \alpha = 0,\ \beta = 0;\\ \ = 0, & \alpha \text{ odd}.\end{cases}$$
Thus,
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\sigma_{ij}^2\,1_{\{(\alpha_{ij}(c),\beta_{ij}(c))\ne(0,0)\}}\,\mathbb{E} G^{\alpha_{ij}(c)}\left(G^2-1\right)^{\beta_{ij}(c)}.$$
Let $s$ be the shape of any loop $c \in ([p_1]\times[p_2])^q$, let $m_L(s)$ and $m_R(s)$ be the numbers of distinct left and right nodes, respectively, visited by any $c$ with shape $s$, and let $m_{\alpha,\beta}(s) = m_{\alpha,\beta}(c)$ be defined as in (5.4).
Then,
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le \sum_{c\in([p_1]\times[p_2])^q}\prod_{(i,j)\in[p_1]\times[p_2]}\sigma_{ij}^2\,1_{\{(\alpha_{ij}(c),\beta_{ij}(c))\ne(0,0)\}}\,\mathbb{E} G^{\alpha_{ij}(c)}\left(G^2-1\right)^{\beta_{ij}(c)} = \sum_s\,\sum_{c:\ s(c)=s}\ \prod_{\substack{(i,j)\in[p_1]\times[p_2]\\ c \text{ passes } (i,j) \text{ a positive even number of times}}}\sigma_{ij}^2\cdot\prod_{\substack{\alpha,\beta\ge 0\\ \alpha \text{ even}}}\left\{\mathbb{E} G^{\alpha}(G^2-1)^{\beta}\right\}^{m_{\alpha,\beta}(s)} \le \sum_s\left(p_1\sigma_C^{2m_L(s)-2}\sigma_R^{2m_R(s)}\wedge p_2\sigma_C^{2m_L(s)}\sigma_R^{2m_R(s)-2}\right)\cdot\prod_{\substack{\alpha,\beta\ge 0\\ \alpha \text{ even}}}\left\{\mathbb{E} G^{\alpha}(G^2-1)^{\beta}\right\}^{m_{\alpha,\beta}(s)}.$$
On the other hand, we have
$$\mathbb{E}\operatorname{tr}\left((HH^\top - \mathbb{E} HH^\top)^q\right) = \mathbb{E}\operatorname{tr}\left((HH^\top - m_2 I_{m_1})^q\right) = \sum_s m_1\cdots(m_1 - m_L(s) + 1)\cdot m_2\cdots(m_2 - m_R(s) + 1)\cdot\prod_{\alpha,\beta\ge 0}\left\{\mathbb{E} G^{\alpha}(G^2-1)^{\beta}\right\}^{m_{\alpha,\beta}(s)}.$$
Provided that $m_1 = \lceil\sigma_C^2\vee 1\rceil + q - 1$ and $m_2 = \lceil\sigma_R^2\vee 1\rceil + q - 1$, we have
$$\sigma_C^{2m_L(s)-2} \le \frac{m_1(m_1-1)\cdots(m_1 - m_L(s) + 1)}{m_1}, \qquad \sigma_R^{2m_R(s)} \le m_2(m_2-1)\cdots(m_2 - m_R(s) + 1);$$
$$\sigma_C^{2m_L(s)} \le m_1(m_1-1)\cdots(m_1 - m_L(s) + 1), \qquad \sigma_R^{2m_R(s)-2} \le \frac{m_2(m_2-1)\cdots(m_2 - m_R(s) + 1)}{m_2}.$$
Thus
$$\mathbb{E}\operatorname{tr}\{(ZZ^\top - \mathbb{E} ZZ^\top)^q\} \le \left(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\right)\cdot\mathbb{E}\operatorname{tr}\{(HH^\top - \mathbb{E} HH^\top)^q\},$$
which finishes the proof of this lemma.

Assume $B = 1$ without loss of generality. With Lemma 5.4 and Lemma 2.6, the proof of Theorem 3.5 proceeds exactly as that of Theorem 2.1.

5.3 Proof for tail bounds

Proof of Theorem 3.8. Without loss of generality, we assume $\sigma_* = 1$. Let $q \ge 2$ be an even integer, and let $m_1 = \lceil\sigma_C^2\rceil + qb - 1$, $m_2 = \lceil\sigma_R^2\rceil + qb - 1$.
By Lemmas 2.4 and 2.6,
$$\mathbb{E}\operatorname{tr}\left\{\left(ZZ^\top - \mathbb{E} ZZ^\top\right)^{qb}\right\} \le \left(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\right)\mathbb{E}\operatorname{tr}\left((HH^\top - \mathbb{E} HH^\top)^{qb}\right) \le \left(\frac{p_1}{m_1}\wedge\frac{p_2}{m_2}\right)(m_1\wedge m_2)\left(2\sqrt{m_1m_2} + m_1 + 4(\sqrt{m_1}+\sqrt{m_2})\sqrt{qb} + 2qb\right)^{qb} \le (p_1\wedge p_2)\left(2\sqrt{m_1m_2} + m_1 + 4(\sqrt{m_1}+\sqrt{m_2})\sqrt{qb} + 2qb\right)^{qb}.$$
Thus,
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\|^b \le \left(\mathbb{E}\operatorname{tr}\left(ZZ^\top - \mathbb{E} ZZ^\top\right)^{qb}\right)^{1/q} \le (p_1\wedge p_2)^{1/q}\left(2\sqrt{m_1m_2} + m_1 + 4(\sqrt{m_1}+\sqrt{m_2})\sqrt{qb} + 2qb\right)^{b} = \left\{(p_1\wedge p_2)^{1/(qb)}\left(2\sqrt{m_1m_2} + m_1 + 4(\sqrt{m_1}+\sqrt{m_2})\sqrt{qb} + 2qb\right)\right\}^{b} \le \left\{C(p_1\wedge p_2)^{1/(qb)}\left(\sigma_R\sigma_C + \sigma_C^2 + (\sigma_R + \sigma_C)\sqrt{qb} + qb\right)\right\}^{b}.$$
We set $q = 2\lceil\log(p_1\wedge p_2)/b\rceil$ and consider the following two cases:

1. If $b \ge \log(p_1\wedge p_2)$, we have $q = 2$ and
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\|^b \le \left\{C(p_1\wedge p_2)^{1/(2b)}\left(\sigma_R\sigma_C + \sigma_C^2 + (\sigma_R + \sigma_C)\sqrt{b} + b\right)\right\}^{b} \le \left\{C\left((\sigma_C + \sigma_R + \sqrt{b})^2 - \sigma_R^2\right)\right\}^{b}.$$

2. If $b < \log(p_1\wedge p_2)$, we have $2\log(p_1\wedge p_2)/b \le q = 2\lceil\log(p_1\wedge p_2)/b\rceil \le 2(\log(p_1\wedge p_2)/b + 1) \le 4\log(p_1\wedge p_2)/b$. Then,
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\|^b \le \left\{C(p_1\wedge p_2)^{1/(2\log(p_1\wedge p_2))}\left(\sigma_R\sigma_C + \sigma_C^2 + (\sigma_R + \sigma_C)\sqrt{4\log(p_1\wedge p_2)} + 4\log(p_1\wedge p_2)\right)\right\}^{b} \le \left\{C\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)}\right)^2 - \sigma_C^2\right)\right\}^{b}.$$

In summary, there exists a uniform constant $C_0 > 0$ such that
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\|^b \le \left\{C_0\left(\left(\sigma_C + \sigma_R + \sqrt{b\vee\log(p_1\wedge p_2)}\right)^2 - \sigma_C^2\right)\right\}^{b}.$$
In fact, the statement holds for all $b > 0$, including non-integers.
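The step $\mathbb{E}\|\cdot\|^b \le (\mathbb{E}\operatorname{tr}(\cdot)^{qb})^{1/q}$ rests on the deterministic fact that, for a symmetric matrix $A$ and even power $2q$, $\|A\|^{2q} = \max_i\lambda_i^{2q} \le \sum_i\lambda_i^{2q} = \operatorname{tr}(A^{2q})$. A one-off numerical check with an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 8, 3                                 # illustrative dimension and moment order

A = rng.standard_normal((p, p))
A = (A + A.T) / 2                           # symmetrize
eigs = np.linalg.eigvalsh(A)
op_norm = np.max(np.abs(eigs))

# tr(A^{2q}) = sum_i lambda_i^{2q} dominates ||A||^{2q} = max_i lambda_i^{2q}
tr_pow = np.sum(eigs ** (2 * q))
assert op_norm ** (2 * q) <= tr_pow + 1e-6
```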
Next we consider the tail bound for $\|ZZ^\top - \mathbb{E} ZZ^\top\|$. Let $C_1$ be a to-be-specified constant. By Markov's inequality,
$$\mathbb{P}\left(\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\| \ge C_1\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + x\right)^2 - \sigma_C^2\right)\right) \le \frac{\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\|^b}{\left\{C_1\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + x\right)^2 - \sigma_C^2\right)\right\}^b} \le \left\{\frac{C_0\left(\left(\sigma_C + \sigma_R + \sqrt{b\vee\log(p_1\wedge p_2)}\right)^2 - \sigma_C^2\right)}{C_1\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + x\right)^2 - \sigma_C^2\right)}\right\}^b.$$
Setting $b = x^2$ and $C_1 = eC_0$, we have
$$\mathbb{P}\left(\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\| \ge C_1\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + x\right)^2 - \sigma_C^2\right)\right) \le \left\{\frac{C_0\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + \sqrt{b}\right)^2 - \sigma_C^2\right)}{C_1\left(\left(\sigma_C + \sigma_R + \sqrt{\log(p_1\wedge p_2)} + x\right)^2 - \sigma_C^2\right)}\right\}^b = \exp(-x^2).$$
This finishes the proof of the theorem.

5.4 Proofs for Section 3.3

Proof of Lemma 3.12. The proof of this lemma relies on a more careful counting scheme for each cycle. For convenience, we define
$$\tilde\sigma_{ij}^2 = \operatorname{Var}(\tilde G_{ij}) = \begin{cases}\sigma_{ij}^2, & 1 \le i \le p_1 - 2,\ 1 \le j \le p_2;\\ \sigma_{p_1-1,j}^2 + \sigma_{p_1,j}^2, & i = p_1 - 1,\ 1 \le j \le p_2,\end{cases}$$
$$(G_0)_{ij} = G_{ij}/\sigma_{ij}, \quad 1 \le i \le p_1,\ 1 \le j \le p_2; \qquad (\tilde G_0)_{ij} = \tilde G_{ij}/\tilde\sigma_{ij}, \quad 1 \le i \le p_1 - 1,\ 1 \le j \le p_2,$$
as the variances and standardizations of the entries of $G$ and $\tilde G$. Since the proof is lengthy, we divide it into steps for a better presentation.
Step 1. In this step, we consider the expansions of both $\mathbb{E}\operatorname{tr}(GG^\top - \mathbb{E} GG^\top)^q$ and $\mathbb{E}\operatorname{tr}(\tilde G\tilde G^\top - \mathbb{E}\tilde G\tilde G^\top)^q$:
$$\mathbb{E}\operatorname{tr}\{(GG^\top - \mathbb{E} GG^\top)^q\} = \mathbb{E}\sum_{u_1,\ldots,u_q\in[p_1]}\prod_{k=1}^q\left(GG^\top - \mathbb{E} GG^\top\right)_{u_k,u_{k+1}} = \sum_{\substack{u_1,\ldots,u_q\in[p_1]\\ v_1,\ldots,v_q\in[p_2]}}\mathbb{E}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right) = \sum_{\Omega\subseteq[q]}\sum_{u_{\Omega^c}\in[p_1-2]}\sum_{v_1,\ldots,v_q\in[p_2]}\left\{\sum_{u_\Omega\in\{p_1-1,p_1\}}\mathbb{E}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right)\right\}. \tag{5.28}$$
Here $u_{q+1} := u_1$. Similarly,
$$\mathbb{E}\operatorname{tr}\{(\tilde G\tilde G^\top - \mathbb{E}\tilde G\tilde G^\top)^q\} = \mathbb{E}\sum_{u_1,\ldots,u_q\in[p_1-1]}\prod_{k=1}^q\left(\tilde G\tilde G^\top - \mathbb{E}\tilde G\tilde G^\top\right)_{u_k,u_{k+1}} = \sum_{\Omega\subseteq[q]}\sum_{u_{\Omega^c}\in[p_1-2]}\sum_{v_1,\ldots,v_q\in[p_2]}\left\{\sum_{u_\Omega = p_1-1}\mathbb{E}\prod_{k=1}^q\tilde\sigma_{u_k,v_k}\tilde\sigma_{u_{k+1},v_k}\left((\tilde G_0)_{u_k,v_k}(\tilde G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right)\right\}.$$
Thus, in order to prove this lemma, we only need to show that for any fixed $v_1,\ldots,v_q\in[p_2]$, $\Omega\subseteq[q]$, and $u_{\Omega^c}\in[p_1-2]$, one has
$$\sum_{u_\Omega\in\{p_1-1,p_1\}}\mathbb{E}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right) \le \mathbb{E}\prod_{k=1}^q\tilde\sigma_{\tilde u_k,v_k}\tilde\sigma_{\tilde u_{k+1},v_k}\left((\tilde G_0)_{\tilde u_k,v_k}(\tilde G_0)_{\tilde u_{k+1},v_k} - 1_{\{\tilde u_k=\tilde u_{k+1}\}}\right). \tag{5.29}$$
Here,
$$\tilde u_k = u_k \in [p_1-2] \ \text{ if } k \in \Omega^c; \qquad \tilde u_k = p_1 - 1 \ \text{ if } k \in \Omega. \tag{5.30}$$

Step 2. To prove (5.29), we first recall that the definitions of $u_1, \ldots, u_q, v_1, \ldots, v_q$ are cyclic, i.e., $u_1 = u_{q+1}$; we also denote $v_0 = v_q$.
Thus,
$$\sum_{u_\Omega\in\{p_1-1,p_1\}}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k} = \sum_{u_\Omega\in\{p_1-1,p_1\}}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_k,v_{k-1}} = \prod_{k\in\Omega^c}\sigma_{u_k,v_k}\sigma_{u_k,v_{k-1}}\cdot\left(\prod_{k\in\Omega}\sigma_{p_1-1,v_k}\sigma_{p_1-1,v_{k-1}} + \prod_{k\in\Omega}\sigma_{p_1,v_k}\sigma_{p_1,v_{k-1}}\right) \le \prod_{k\in\Omega^c}\sigma_{u_k,v_k}\sigma_{u_k,v_{k-1}}\cdot\prod_{k\in\Omega}\left(\sigma_{p_1-1,v_k}\sigma_{p_1-1,v_{k-1}} + \sigma_{p_1,v_k}\sigma_{p_1,v_{k-1}}\right) \overset{\text{Cauchy--Schwarz}}{\le} \prod_{k\in\Omega^c}\tilde\sigma_{u_k,v_k}\tilde\sigma_{u_k,v_{k-1}}\cdot\prod_{k\in\Omega}\left((\sigma_{p_1-1,v_k}^2 + \sigma_{p_1,v_k}^2)\cdot(\sigma_{p_1-1,v_{k-1}}^2 + \sigma_{p_1,v_{k-1}}^2)\right)^{1/2} = \prod_{k\in\Omega^c}\tilde\sigma_{u_k,v_k}\tilde\sigma_{u_k,v_{k-1}}\cdot\prod_{k\in\Omega}\tilde\sigma_{p_1-1,v_k}\tilde\sigma_{p_1-1,v_{k-1}} = \prod_{k=1}^q\tilde\sigma_{\tilde u_k,v_k}\tilde\sigma_{\tilde u_k,v_{k-1}} = \prod_{k=1}^q\tilde\sigma_{\tilde u_k,v_k}\tilde\sigma_{\tilde u_{k+1},v_k}. \tag{5.31}$$

Step 3. Fix $\Omega = \{k : u_k \notin [p_1-2]\}$ and a cycle $c = (u_1 \to v_1 \to u_2 \to v_2 \to \cdots \to u_q \to v_q \to u_1)$ such that $u_\Omega \in \{p_1-1, p_1\}$ and $u_{\Omega^c} \in [p_1-2]$; recall that $\tilde u_k$ is defined in (5.30). We aim to show in this step that
$$\mathbb{E}\prod_{k=1}^q\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right) \le \mathbb{E}\prod_{k=1}^q\left((\tilde G_0)_{\tilde u_k,v_k}(\tilde G_0)_{\tilde u_{k+1},v_k} - 1_{\{\tilde u_k=\tilde u_{k+1}\}}\right). \tag{5.32}$$
We can rearrange the left-hand side and the right-hand side of (5.32) into
$$\mathbb{E}\prod_{i=1}^{p_1}\prod_{j=1}^{p_2}(G_0)_{ij}^{\alpha_{ij}}\left((G_0)_{ij}^2 - 1\right)^{\beta_{ij}} \qquad \text{and} \qquad \mathbb{E}\prod_{i=1}^{p_1-1}\prod_{j=1}^{p_2}(\tilde G_0)_{ij}^{\tilde\alpha_{ij}}\left((\tilde G_0)_{ij}^2 - 1\right)^{\tilde\beta_{ij}}.$$
Here, $\alpha_{ij}$, $\beta_{ij}$, $\tilde\alpha_{ij}$, and $\tilde\beta_{ij}$ are defined as
$$\alpha_{ij} = \left|\{k : (u_k = i, v_k = j, u_{k+1} \ne i) \text{ or } (u_k \ne i, v_k = j, u_{k+1} = i)\}\right|, \qquad \beta_{ij} = \left|\{k : u_k = u_{k+1} = i,\ v_k = j\}\right|,$$
$$\tilde\alpha_{ij} = \left|\{k : (\tilde u_k = i, v_k = j, \tilde u_{k+1} \ne i) \text{ or } (\tilde u_k \ne i, v_k = j, \tilde u_{k+1} = i)\}\right|, \qquad \tilde\beta_{ij} = \left|\{k : \tilde u_k = \tilde u_{k+1} = i,\ v_k = j\}\right|.$$
Then $\alpha_{ij}$ (or $\tilde\alpha_{ij}$) is the number of times that the edge $(i,j)$ is visited exactly once by a sub-path $u_k \to v_k \to u_{k+1}$ (or $\tilde u_k \to v_k \to \tilde u_{k+1}$); $\beta_{ij}$ (or $\tilde\beta_{ij}$) is the number of times that the edge $(i,j)$ is visited twice (back and forth) by a sub-path $u_k \to v_k \to u_{k+1}$ (or $\tilde u_k \to v_k \to \tilde u_{k+1}$). By comparing the orders of $(\tilde G_0)_{ij}$ and $(G_0)_{ij}$ in the two monomials of (5.32), the quantities $\tilde\alpha_{ij}, \tilde\beta_{ij}, \alpha_{ij}, \beta_{ij}$ are related as
$$\tilde\alpha_{ij} = \alpha_{ij}, \quad \tilde\beta_{ij} = \beta_{ij}, \qquad \text{if } 1 \le i \le p_1 - 2,\ 1 \le j \le p_2.$$
The relationship among $\tilde\alpha_{p_1-1,j}, \tilde\beta_{p_1-1,j}, \alpha_{p_1-1,j}, \alpha_{p_1,j}, \beta_{p_1-1,j}, \beta_{p_1,j}$ is more involved. To analyze it, for any fixed $1 \le j \le p_2$ we define
$$x_1^{(j)} = \left|\{k : (u_k \to v_k \to u_{k+1}) = ((p_1-1) \to j \to \{p_1-1,p_1\}^c) \text{ or } (\{p_1-1,p_1\}^c \to j \to (p_1-1))\}\right|,$$
$$x_2^{(j)} = \left|\{k : (u_k \to v_k \to u_{k+1}) = (p_1 \to j \to \{p_1-1,p_1\}^c) \text{ or } (\{p_1-1,p_1\}^c \to j \to p_1)\}\right|,$$
$$x_3^{(j)} = \left|\{k : (u_k \to v_k \to u_{k+1}) = ((p_1-1) \to j \to (p_1-1))\}\right|, \qquad x_4^{(j)} = \left|\{k : (u_k \to v_k \to u_{k+1}) = (p_1 \to j \to p_1)\}\right|,$$
$$x_5^{(j)} = \left|\{k : (u_k \to v_k \to u_{k+1}) = ((p_1-1) \to j \to p_1) \text{ or } (p_1 \to j \to (p_1-1))\}\right|.$$
Then by the definitions, we have
$$\alpha_{p_1-1,j} = x_1^{(j)} + x_5^{(j)}, \quad \alpha_{p_1,j} = x_2^{(j)} + x_5^{(j)}, \quad \tilde\alpha_{p_1-1,j} = x_1^{(j)} + x_2^{(j)}; \qquad \beta_{p_1-1,j} = x_3^{(j)}, \quad \beta_{p_1,j} = x_4^{(j)}, \quad \tilde\beta_{p_1-1,j} = x_3^{(j)} + x_4^{(j)} + x_5^{(j)}.$$
We introduce the following lemma before we proceed.

Lemma 5.5. Suppose $Z_1, Z_2$ are independent, symmetrically distributed random variables with $\operatorname{Var}(Z_1) = \operatorname{Var}(Z_2) = 1$ and $\|Z_1\|_{\psi_2}, \|Z_2\|_{\psi_2} \le \kappa$, and $G$ is standard Gaussian. For any non-negative integers $x_1, \ldots, x_5$, we have
$$\left|\mathbb{E} Z_1^{x_1+x_5}Z_2^{x_2+x_5}(Z_1^2-1)^{x_3}(Z_2^2-1)^{x_4}\right| \le (C\kappa)^{x_1+x_2+2(x_3+x_4+x_5)}\,\mathbb{E} G^{x_1+x_2}(G^2-1)^{x_3+x_4+x_5}. \tag{5.33}$$
In particular, when $Z_1, Z_2, G$ are all standard Gaussian,
$$\left|\mathbb{E} Z_1^{x_1+x_5}Z_2^{x_2+x_5}(Z_1^2-1)^{x_3}(Z_2^2-1)^{x_4}\right| \le \mathbb{E} G^{x_1+x_2}(G^2-1)^{x_3+x_4+x_5}. \tag{5.34}$$

Proof. See Appendix.

By Lemma 5.5,
$$\left|\mathbb{E}(G_0)_{p_1-1,j}^{\alpha_{p_1-1,j}}\left((G_0)_{p_1-1,j}^2-1\right)^{\beta_{p_1-1,j}}\right|\cdot\left|\mathbb{E}(G_0)_{p_1,j}^{\alpha_{p_1,j}}\left((G_0)_{p_1,j}^2-1\right)^{\beta_{p_1,j}}\right| = \left|\mathbb{E}(G_0)_{p_1-1,j}^{x_1^{(j)}+x_5^{(j)}}\left((G_0)_{p_1-1,j}^2-1\right)^{x_3^{(j)}}\right|\cdot\left|\mathbb{E}(G_0)_{p_1,j}^{x_2^{(j)}+x_5^{(j)}}\left((G_0)_{p_1,j}^2-1\right)^{x_4^{(j)}}\right| \le \mathbb{E}(\tilde G_0)_{p_1-1,j}^{x_1^{(j)}+x_2^{(j)}}\left((\tilde G_0)_{p_1-1,j}^2-1\right)^{x_3^{(j)}+x_4^{(j)}+x_5^{(j)}} = \mathbb{E}(\tilde G_0)_{p_1-1,j}^{\tilde\alpha_{p_1-1,j}}\left((\tilde G_0)_{p_1-1,j}^2-1\right)^{\tilde\beta_{p_1-1,j}}.$$
Thus,
$$\mathbb{E}\prod_{i=1}^{p_1}\prod_{j=1}^{p_2}(G_0)_{ij}^{\alpha_{ij}}\left((G_0)_{ij}^2-1\right)^{\beta_{ij}} \le \mathbb{E}\prod_{i=1}^{p_1-1}\prod_{j=1}^{p_2}(\tilde G_0)_{ij}^{\tilde\alpha_{ij}}\left((\tilde G_0)_{ij}^2-1\right)^{\tilde\beta_{ij}}. \tag{5.35}$$
This gives (5.32).
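The Gaussian case (5.34) of Lemma 5.5 can be checked exactly for small exponents: independence factorizes the left-hand side, and all moments reduce to double factorials via a binomial expansion of $\mathbb{E} G^{\alpha}(G^2-1)^{\beta}$. A sketch in exact integer arithmetic (helper names are ours):

```python
from math import comb
from itertools import product

def dfact(k):
    # double factorial for odd k, with (-1)!! = 1 and (-3)!! = -1
    if k == -1: return 1
    if k == -3: return -1
    return k * dfact(k - 2)

def gmom(m):
    # E G^m for G ~ N(0,1)
    return dfact(m - 1) if m % 2 == 0 else 0

def moment(alpha, beta):
    # E G^alpha (G^2 - 1)^beta, exact
    return sum(comb(beta, i) * (-1)**(beta - i) * gmom(alpha + 2 * i)
               for i in range(beta + 1))

for x1, x2, x3, x4, x5 in product(range(3), repeat=5):
    # Z1, Z2 independent N(0,1): the expectation on the LHS of (5.34) factorizes
    lhs = moment(x1 + x5, x3) * moment(x2 + x5, x4)
    rhs = moment(x1 + x2, x3 + x4 + x5)
    assert abs(lhs) <= rhs
```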
Step 4. Combining (5.31) and (5.32), we finally have
$$\sum_{u_\Omega\in\{p_1-1,p_1\}}\mathbb{E}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right) = \sum_{u_\Omega\in\{p_1-1,p_1\}}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\cdot\mathbb{E}\prod_{k=1}^q\left((G_0)_{u_k,v_k}(G_0)_{u_{k+1},v_k} - 1_{\{u_k=u_{k+1}\}}\right) \overset{(5.32)}{\le} \sum_{u_\Omega\in\{p_1-1,p_1\}}\prod_{k=1}^q\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\cdot\mathbb{E}\prod_{k=1}^q\left((\tilde G_0)_{\tilde u_k,v_k}(\tilde G_0)_{\tilde u_{k+1},v_k} - 1_{\{\tilde u_k=\tilde u_{k+1}\}}\right) \overset{(5.31)}{\le} \mathbb{E}\prod_{k=1}^q\tilde\sigma_{\tilde u_k,v_k}\tilde\sigma_{\tilde u_{k+1},v_k}\left((\tilde G_0)_{\tilde u_k,v_k}(\tilde G_0)_{\tilde u_{k+1},v_k} - 1_{\{\tilde u_k=\tilde u_{k+1}\}}\right),$$
which yields (5.29) and finishes the proof of this lemma. □

Proof of Theorem 3.11. Denote $\sigma_C^2 = \sum_i\sigma_i^2$, $\sigma_* = \max_i\sigma_i$, $Z = [Z_1, \ldots, Z_{p_2}]$, and $S_k = Z_kZ_k^\top - \mathbb{E} Z_kZ_k^\top$. Then
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\| = \mathbb{E}\left\|\sum_{k=1}^{p_2}S_k\right\|.$$
By the lower bound for the expected norm of a sum of independent random matrices [29, Theorem I and Section 1.3],
$$\mathbb{E}\|ZZ^\top - \mathbb{E} ZZ^\top\| \gtrsim \left\|\mathbb{E}\sum_{k=1}^{p_2}S_kS_k^\top\right\|^{1/2} + \mathbb{E}\max_k\|S_k\|. \tag{5.36}$$
If $Z_{ij} \sim N(0, \sigma_i^2)$ for all $i \in [p_1]$, $j \in [p_2]$, note that
$$\left(\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top\right)_{ij} = \mathbb{E} Z_{ik}\sum_{l=1}^{p_1}Z_{lk}^2 Z_{jk} = \begin{cases}3\sigma_i^4 + \sigma_i^2\left(\sum_{l\ne i}\sigma_l^2\right), & i = j;\\ 0, & i \ne j,\end{cases}$$
that is, $\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top = \operatorname{diag}\left(\{2\sigma_i^4 + \sigma_i^2\sigma_C^2\}_{i=1}^{p_1}\right)$, and $\left(\mathbb{E} Z_kZ_k^\top\right)^2 = \operatorname{diag}(\sigma_1^4, \ldots, \sigma_{p_1}^4)$.
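The fourth-moment identity $\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top = \operatorname{diag}(2\sigma_i^4 + \sigma_i^2\sigma_C^2)$ is easy to confirm by simulation; the sketch below uses illustrative $\sigma_i$ and a large Monte Carlo sample:

```python
import numpy as np

rng = np.random.default_rng(3)
sig = np.array([0.5, 1.0, 1.5])            # illustrative row standard deviations
p1, n = len(sig), 1_000_000

# n i.i.d. copies of Z_k ~ N(0, diag(sig^2)), stacked as columns
Zk = sig[:, None] * rng.standard_normal((p1, n))

# Monte Carlo estimate of E Z_k Z_k' Z_k Z_k' = E (|Z_k|^2 Z_k Z_k')
A = Zk * (Zk**2).sum(axis=0)               # each column scaled by |Z_k|^2
M = (A @ Zk.T) / n

sigC2 = (sig**2).sum()
target = np.diag(2 * sig**4 + sig**2 * sigC2)
assert np.max(np.abs(M - target)) < 0.5    # loose tolerance for MC error
```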
Thus,
$$\left\|\mathbb{E}\sum_{k=1}^{p_2}S_kS_k^\top\right\| = \left\|\sum_{k=1}^{p_2}\mathbb{E}(Z_kZ_k^\top - \mathbb{E} Z_kZ_k^\top)(Z_kZ_k^\top - \mathbb{E} Z_kZ_k^\top)\right\| = \left\|\sum_{k=1}^{p_2}\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top - (\mathbb{E} Z_kZ_k^\top)^2\right\| = p_2\left\|\operatorname{diag}\left(\{\sigma_i^2\sigma_C^2 + \sigma_i^4\}_{i=1}^{p_1}\right)\right\| = p_2\left(\sigma_*^4 + \sigma_*^2\sigma_C^2\right).$$
Meanwhile, let $i_* \in [p_1]$ be such that $\sigma_* = \sigma_{i_*}$; then
$$\mathbb{E}\|S_k\| = \mathbb{E}\left\|Z_kZ_k^\top - \mathbb{E} Z_kZ_k^\top\right\| \ge \mathbb{E}\left\|Z_kZ_k^\top\right\| - \left\|\mathbb{E} Z_kZ_k^\top\right\| = \sigma_C^2 - \sigma_*^2; \qquad \mathbb{E}\|S_k\| \ge \mathbb{E}\left|(S_k)_{i_*i_*}\right| = \mathbb{E}\left|Z_{i_*k}^2 - \mathbb{E} Z_{i_*k}^2\right| \ge c\sigma_*^2.$$
Combining the previous two inequalities, we have $\mathbb{E}\|S_k\| \ge c\sigma_C^2$. Consequently,
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\| \overset{(5.36)}{\gtrsim} \sigma_C^2 + \sqrt{p_2}\,\sigma_*\sigma_C. \qquad \square$$

5.5 Proofs for Section 3.4

Proof of Lemma 3.15. Since the diagonal of $\Delta(ZZ^\top)$ is zero, we have the following expansion:
$$\mathbb{E}\operatorname{tr}\{(\Delta(ZZ^\top))^q\} = \sum_{u_1,\ldots,u_q\in[p_1]}\mathbb{E}\prod_{k=1}^q\left(\Delta(ZZ^\top)\right)_{u_k,u_{k+1}} = \sum_{\substack{u_1,\ldots,u_q\in[p_1]\\ v_1,\ldots,v_q\in[p_2]}}\mathbb{E}\prod_{k=1}^q\left(1_{\{u_k\ne u_{k+1}\}}Z_{u_k,v_k}Z_{u_{k+1},v_k}\right). \tag{5.37}$$
Again, the indices of $u$ are modulo $q$, i.e., $u_1 = u_{q+1}$. For a cycle $c := (u_1 \to v_1 \to u_2 \to v_2 \to \cdots \to u_q \to v_q \to u_1)$, recall the definition of $\alpha_{ij}(c)$:
$$\alpha_{ij}(c) = \operatorname{Card}\{k : (u_k = i, v_k = j, u_{k+1} \ne i) \text{ or } (u_k \ne i, v_k = j, u_{k+1} = i)\}$$
for any $i \in [p_1]$ and $j \in [p_2]$, which counts how many times the edges $i \to j$ or $j \to i$ are visited. Now the expansion in (5.37) can be written further as
$$\mathbb{E}\operatorname{tr}\{(\Delta(ZZ^\top))^q\} = \sum_{c\in([p_1]\times[p_2])^q}\left(\prod_{k=1}^q 1_{\{u_k\ne u_{k+1}\}}\right)\cdot\left(\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} Z_{ij}^{\alpha_{ij}(c)}\right) = \sum_{c\in([p_1]\times[p_2])^q}\left(\prod_{k=1}^q 1_{\{u_k\ne u_{k+1}\}}\sigma_{u_k,v_k}\sigma_{u_{k+1},v_k}\right)\cdot\left(\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} G^{\alpha_{ij}(c)}\right) = \sum_{c\in([p_1]\times[p_2])^q}\left(\prod_{k=1}^q 1_{\{u_k\ne u_{k+1}\}}\sigma_{v_k}^2\right)\cdot\left(\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} G^{\alpha_{ij}(c)}\right). \tag{5.38}$$
We define $m_\alpha(c)$ to be the number of edges which appear $\alpha$ times in the cycle $c$:
$$m_\alpha(c) = \operatorname{Card}\{(i,j)\in[p_1]\times[p_2] : |\{k : u_k \text{ or } u_{k+1} = i,\ v_k = j\}| = \alpha\}.$$
Letting $s(c)$ be the shape of $c$, we have
$$\prod_{(i,j)\in[p_1]\times[p_2]}\mathbb{E} G^{\alpha_{ij}(c)} = \prod_{\alpha\ge 0}\left(\mathbb{E} G^{\alpha}\right)^{m_\alpha(s(c))},$$
where $G \sim N(0,1)$. Next we define the following shape family:
$$S_{p_1,p_2} := \{s(c) : m_\alpha(c) = 0 \text{ for all odd } \alpha;\ \text{and } u_k \ne u_{k+1} \text{ for all } k = 1, \ldots, q\}.$$
Based on the notation above, one can check that the expansion in (5.38) simplifies further to
$$\mathbb{E}\operatorname{tr}\{(\Delta(ZZ^\top))^q\} = \sum_{s_0\in S_{p_1,p_2}}\sum_{c: s(c)=s_0}\left(\prod_{k=1}^q\sigma_{v_k}^2\right)\prod_{\alpha\ge 0}\left(\mathbb{E} G^{\alpha}\right)^{m_\alpha(s_0)} = \sum_{s_0\in S_{p_1,p_2}}\prod_{\alpha\ge 0}\left(\mathbb{E} G^{\alpha}\right)^{m_\alpha(s_0)}\sum_{c: s(c)=s_0}\left(\prod_{k=1}^q\sigma_{v_k}^2\right). \tag{5.39}$$
For a fixed shape $s_0 \in S_{p_1,p_2}$, let $m_L(s_0)$ (resp. $m_R(s_0)$) be the number of distinct left (resp. right) vertices visited by cycles with shape $s_0$. We now bound $\sum_{c: s(c)=s_0}\left(\prod_{k=1}^q\sigma_{v_k}^2\right)$ via $m_L(s_0)$ and $m_R(s_0)$. To this end, we first record three facts about any cycle with shape $s_0$:
• Each visited edge must appear at least twice in the cycle;
• For each right vertex in the cycle, its predecessor and successor in the left vertex set must be different;
• The cycle is uniquely determined by specifying $m_L(s_0)$ left vertices and $m_R(s_0)$ right vertices; moreover, the summation term is free of the indices of the visited left vertices.
These three observations, together with the assumption $\sigma_* = 1$, yield the following bound:
$$\sum_{c: s(c)=s_0}\left(\prod_{k=1}^q\sigma_{v_k}^2\right) \le p_1(p_1-1)\cdots(p_1 - m_L(s_0) + 1)\left(\sum_{j=1}^{p_2}\sigma_j^4\right)^{m_R(s_0)}. \tag{5.40}$$
Next we compare $\mathbb{E}\operatorname{tr}\{(\Delta(ZZ^\top))^q\}$ and $\mathbb{E}\operatorname{tr}\{(\Delta(HH^\top))^q\}$, where $H$ is a $p_1$-by-$m$ random matrix with i.i.d. standard Gaussian entries. Similarly as above, we have
$$\mathbb{E}\operatorname{tr}\{(\Delta(HH^\top))^q\} = \sum_{s_0\in S_{p_1,p_2}}\prod_{\alpha\ge 0}\left(\mathbb{E} G^{\alpha}\right)^{m_\alpha(s_0)}|\{c : s(c) = s_0\}|.$$
Setting $m = \lceil\sum_{j=1}^{p_2}\sigma_j^4\rceil + q - 1$, we have
$$|\{c : s(c) = s_0\}| = p_1(p_1-1)\cdots(p_1 - m_L(s_0) + 1)\,m(m-1)\cdots(m - m_R(s_0) + 1) \ge p_1(p_1-1)\cdots(p_1 - m_L(s_0) + 1)(m - m_R(s_0) + 1)^{m_R(s_0)} \ge p_1(p_1-1)\cdots(p_1 - m_L(s_0) + 1)\left(\sum_{j=1}^{p_2}\sigma_j^4\right)^{m_R(s_0)}. \tag{5.41}$$
Combining (5.40) and (5.41), we finish the proof.

Proof of Theorem 3.14. Denote $\sigma_R^2 = \sum_j\sigma_j^2$ and $\sigma_* = \max_j\sigma_j$. We use the general lower bound for the expected norm of a sum of independent random matrices [29, Theorem I and Section 1.3], as in the proof of Theorem 3.11. Since $Z_{ij} \sim N(0, \sigma_j^2)$, for any $k \in [p_2]$,
$$\left(\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top\right)_{ij} = \mathbb{E} Z_{ik}\sum_{l=1}^{p_1}Z_{lk}^2 Z_{jk} = \begin{cases}3\sigma_k^4 + (p_1-1)\sigma_k^4, & i = j;\\ 0, & i \ne j,\end{cases}$$
that is, $\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top = (p_1+2)\sigma_k^4 I_{p_1}$, and $\left(\mathbb{E} Z_kZ_k^\top\right)^2 = \sigma_k^4 I_{p_1}$. Thus,
$$\left\|\mathbb{E}\sum_{k=1}^{p_2}S_kS_k^\top\right\| = \left\|\sum_{k=1}^{p_2}\mathbb{E} Z_kZ_k^\top Z_kZ_k^\top - (\mathbb{E} Z_kZ_k^\top)^2\right\| = \left\|\left((p_1+1)\sum_{k=1}^{p_2}\sigma_k^4\right)I_{p_1}\right\| \ge p_1\sum_{k=1}^{p_2}\sigma_k^4.$$
On the other hand,
$$\mathbb{E}\max_k\|S_k\| \ge \max_k\mathbb{E}\|S_k\| \ge \max_k\left\{\mathbb{E}\left\|Z_kZ_k^\top\right\| - \left\|\mathbb{E} Z_kZ_k^\top\right\|\right\} = (p_1-1)\sigma_*^2.$$
Combining the previous two inequalities with (5.36) from the proof of Theorem 3.11, we obtain
$$\mathbb{E}\left\|ZZ^\top - \mathbb{E} ZZ^\top\right\| \overset{(5.36)}{\gtrsim} \sqrt{p_1\sum_{k=1}^{p_2}\sigma_k^4} + p_1\sigma_*^2.$$

5.6 Proofs for heteroskedastic clustering

Proof of Theorem 4.1. We first introduce the following three lemmas.

Lemma 5.6. For any $x \in \{-1,+1\}^n$ and $z \in \mathbb{R}^n$ with $\|z\|_2 = 1$, we have
$$d(x, \operatorname{sgn}(z)) \le n\left\|\frac{x}{\sqrt{n}} - z\right\|_2^2.$$
Here $d$ denotes the Hamming distance: $d(x, z) = \sum_{i=1}^n 1_{\{x_i \ne z_i\}}$.

Proof. See [21]. □

Lemma 5.7. Assume that $Z \in \mathbb{R}^{p_1\times p_2}$ has independent sub-Gaussian entries with $\operatorname{Var}(Z_{ij}) = \sigma_{ij}^2$, $\sigma_C^2 = \max_j\sum_i\sigma_{ij}^2$, $\sigma_R^2 = \max_i\sum_j\sigma_{ij}^2$, $\sigma_*^2 = \max_{i,j}\sigma_{ij}^2$, and $\|Z_{ij}/\sigma_{ij}\|_{\psi_2} \le \kappa$. Let $V \in \mathbb{O}_{p_2,r}$ be a fixed orthogonal matrix. Then,
$$\mathbb{P}(\|ZV\| \ge 2(\sigma_C + x)) \le 2\exp\left(5r - \min\left\{\frac{x^4}{\kappa^4\sigma_*^2\sigma_C^2}, \frac{x^2}{\kappa^2\sigma_*^2}\right\}\right), \qquad \mathbb{E}\|ZV\| \lesssim \sigma_C + \kappa r^{1/4}(\sigma_*\sigma_C)^{1/2} + \kappa r^{1/2}\sigma_*.$$

Proof. See [36, Lemma 3].

Lemma 5.8 (Davis–Kahan). Let $A$ be an $n$-by-$n$ symmetric matrix with eigenvalues $|\lambda_1| \ge |\lambda_2| \ge \cdots$, satisfying $|\lambda_k| - |\lambda_{k+1}| \ge 2\delta$. Let $B$ be a symmetric matrix with $\|B\| < \delta$, and let $A_k$ and $(A+B)_k$ be the spaces spanned by the top $k$ eigenvectors of the respective matrices. Then
$$\left\|I_k - A_k^\top(A+B)_k\right\| \le \frac{\|B\|}{\delta}.$$

Proof. See [13].

Now we are ready for the proof. Recall that $Y = X + Z$, so we can write
$$Y^\top Y = X^\top X + X^\top Z + Z^\top X + Z^\top Z = X^\top X + X^\top Z + Z^\top X + \left(Z^\top Z - \mathbb{E} Z^\top Z\right) + \mathbb{E} Z^\top Z. \tag{5.42}$$
(5.42) Since E Z⊤Z = (Σ_{j=1}^p σ_j^2) I, the leading eigenvector of Y⊤Y (i.e., v̂) is the same as that of X⊤X + X⊤Z + Z⊤X + (Z⊤Z − E Z⊤Z). Since (1/√n) l is the leading eigenvector of X⊤X, it follows, by Lemma 5.6, that EM(l, l̂) ≤ E min_± ∥(1/√n) l ± v̂∥_2^2, which by Lemma 5.8 is at most E∥X⊤Z + Z⊤X + Z⊤Z − E Z⊤Z∥ / (n∥μ∥_2^2) ≤ (2 E∥Z⊤X∥ + E∥Z⊤Z − E Z⊤Z∥) / (n∥μ∥_2^2), which by Lemma 5.7 is ≲ (n∥μ∥_2 σ∗ + E∥Z⊤Z − E Z⊤Z∥) / (n∥μ∥_2^2), and by Theorem 3.13 is ≲ (n∥μ∥_2 σ∗ + √(n Σ_{i=1}^p σ_i^4) + n σ∗^2) / (n∥μ∥_2^2). Proof of Theorem 4.2. We only need to prove the lower bound in the following two situations: (i) when λ ≤ c1 σ∗, there exist {σ_i}_{i=1}^p such that max_i σ_i ≤ σ∗, Σ_i σ_i^4 ≤ σ̃^4, and the lower bound holds; (ii) when λ ≤ c2 σ̃/n^{1/4}, there exist {σ_i}_{i=1}^p such that max_i σ_i ≤ σ∗, Σ_i σ_i^4 ≤ σ̃^4, and the lower bound holds. We start with the first case. We specify σ_1 = σ∗ and take σ_2, ..., σ_p to be arbitrary values that satisfy the constraint of P_{λ,l}(σ∗, σ̃). Consider the metric space {−1, 1}^n with the metric M(l^(1), l^(2)) = (1/n) min{|{i : l^(1)_i ≠ l^(2)_i}|, |{i : l^(1)_i ≠ −l^(2)_i}|}. By [35, Lemma 4], when n ≥ 6, we can find some constant c0 such that there exists a subset {l^(1), ..., l^(N)} ⊂ {−1, 1}^n satisfying M(l^(i1), l^(i2)) ≥ 1/3 for all 1 ≤ i1 < i2 ≤ N, with N ≥ exp(c0 n).
Let Y^(i) = μ (l^(i))⊤ + Z ∈ R^{p×n}, where Z_ij ∼ N(0, σ_i^2) independently. Let μ = [λ, 0, ···, 0]⊤; then the KL divergence between Y^(i1) and Y^(i2) for i1 ≠ i2 is D_KL(Y^(i1) | Y^(i2)) = (1/2) Σ_{j=1}^p σ_j^{−2} μ_j^2 ∥l^(i1) − l^(i2)∥_2^2 ≤ 4n Σ_{j=1}^p σ_j^{−2} μ_j^2 = 4n σ_1^{−2} λ^2 = 4nλ^2/σ∗^2. (5.43) By the generalized Fano's lemma, we have inf_{l̂} sup_{P_{l,λ}(σ∗,σ̃)} EM(l, l̂) ≥ (1/3)(1 − (4nλ^2/σ∗^2 + log 2)/(c0 n)) ≥ 1/4. In the last inequality we use the assumption that λ ≤ c1 σ∗ for some sufficiently small constant c1. Now we consider the second situation. We specify σ_1^4 = σ_2^4 = ··· = σ_p^4 = σ̃^4/p. When the variance structure reduces to a homoskedastic structure, we have the following already-established lower bound. Lemma 5.9. Suppose σ_1^2 = ··· = σ_p^2 = 1. There exist c2, C such that if n ≥ C, then inf_{l̂} sup_{∥μ∥_2 ≤ c2 (p/n)^{1/4}, l ∈ {−1,1}^n} EM(l̂, l) ≥ 1/4. Proof. See [11, Theorem 6]. Based on Lemma 5.9 and the homoskedasticity of {σ_i}, if we set λ < (c2 σ̃/p^{1/4}) · (p/n)^{1/4} = c2 σ̃/n^{1/4} in our setting, we obtain inf_{l̂} sup_{P_{l,λ}(σ∗,σ̃)} EM(l, l̂) ≥ 1/4. This finishes the proof."
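The clustering analysis above reduces the label error to an eigenvector perturbation via Lemma 5.6, d(x, sgn(z)) ≤ n∥x/√n − z∥₂². A minimal numerical sanity check of that inequality, assuming numpy (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming(x, y):
    # Hamming distance between two label vectors
    return int(np.sum(x != y))

n = 50
for _ in range(100):
    x = rng.choice([-1.0, 1.0], size=n)   # true labels in {-1, +1}^n
    z = rng.normal(size=n)
    z /= np.linalg.norm(z)                # unit vector, as required by the lemma
    lhs = hamming(x, np.sign(z))
    rhs = n * np.linalg.norm(x / np.sqrt(n) - z) ** 2
    assert lhs <= rhs + 1e-9              # d(x, sgn(z)) <= n * ||x/sqrt(n) - z||^2
```

The inequality holds coordinate-wise: whenever sgn(z_i) ≠ x_i, the term (x_i/√n − z_i)² is at least 1/n, so each mismatch contributes at least 1 to the right-hand side.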
+ }, + { + "url": "http://arxiv.org/abs/2002.07624v3", + "title": "Optimal Structured Principal Subspace Estimation: Metric Entropy and Minimax Rates", + "abstract": "Driven by a wide range of applications, many principal subspace estimation\nproblems have been studied individually under different structural constraints.\nThis paper presents a unified framework for the statistical analysis of a\ngeneral structured principal subspace estimation problem which includes as\nspecial cases non-negative PCA/SVD, sparse PCA/SVD, subspace constrained\nPCA/SVD, and spectral clustering. General minimax lower and upper bounds are\nestablished to characterize the interplay between the information-geometric\ncomplexity of the structural set for the principal subspaces, the\nsignal-to-noise ratio (SNR), and the dimensionality. The results yield\ninteresting phase transition phenomena concerning the rates of convergence as a\nfunction of the SNRs and the fundamental limit for consistent estimation.\nApplying the general results to the specific settings yields the minimax rates\nof convergence for those problems, including the previous unknown optimal rates\nfor non-negative PCA/SVD, sparse SVD and subspace constrained PCA/SVD.", + "authors": "T. Tony Cai, Hongzhe Li, Rong Ma", + "published": "2020-02-18", + "updated": "2020-11-16", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction Spectral methods such as the principal component analysis (PCA) and singular value decomposition (SVD) are a ubiquitous technique in modern data analysis with a wide range of applications in many \ufb01elds including statistics, machine learning, applied mathematics, and engineering. As a fundamental tool for dimension reduction, the spectral methods aim to extract the low-dimensional structures embedded in the high-dimensional data. 
In many of these modern applications, the complexity of the datasets and the need of incorporating the existing knowledge from the subject areas require the data analysts to take into account the prior structural information on the statistical objects of interest in their analysis. ©2020. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v1/.html. arXiv:2002.07624v3 [math.ST] 16 Nov 2020. Cai, Li and Ma. In particular, many interesting problems in high-dimensional data analysis can be formulated as a structured principal subspace estimation problem where one has the prior knowledge that the underlying principal subspace satisfies certain structural conditions (see Section 1.2 for a list of related problems). The present paper aims to provide a unified treatment of the structured principal subspace estimation problems that have attracted much recent interest in both theory and practice. 1.1 Problem Setup. To fix ideas, we consider two generic models that have been extensively studied in the literature, namely, the matrix denoising model and the spiked Wishart model (see, for example, Johnstone (2001); Baik and Silverstein (2006); Paul (2007); Bai and Yao (2008); Cai et al. (2013); Donoho and Gavish (2014); Wang and Fan (2017); Choi et al. (2017); Donoho et al. (2018); Perry et al. (2018); Bao et al. (2018), among many others). Definition 1 (Matrix Denoising Model). Let Y ∈ R^{p1×p2} be the observed data matrix generated from the model Y = UΓV⊤ + Z, where Z ∈ R^{p1×p2} has i.i.d. entries from N(0, σ^2), Γ ∈ R^{r×r} is a diagonal matrix with ordered diagonal entries λ_1 ≥ λ_2 ≥ ··· ≥ λ_r > 0 for 1 ≤ r ≤ min{p1, p2}, U ∈ O(p1, r), and V ∈ O(p2, r), with O(p, r) = {W ∈ R^{p×r} : W⊤W = I_r} being the set of all p × r orthonormal matrices.
De\ufb01nition 2 (Spiked Wishart Model) Let Y \u2208Rn\u00d7p be the observed data matrix whose rows Yi \u2208Rp, i = 1, . . . , n, are independently generated from N(\u00b5, U\u0393U\u22a4+ \u03c32Ip) where U \u2208O(p, r) with 1 \u2264r \u2264p, and \u0393 \u2208Rr\u00d7r is diagonal with ordered diagonal entries \u03bb1 \u2265... \u2265\u03bbr > 0. Equivalently, Yi can be viewed as Yi = Xi +\u03f5i where Xi \u223cN(\u00b5, U\u0393U\u22a4), \u03f5i \u223cN(0, \u03c32Ip), and X1, . . . , Xn and \u03f51, . . . , \u03f5n are independent. In the past decades, these two models have attracted substantial practical and theoretical interest and have been studied in di\ufb00erent contexts in statistics, probability, and machine learning. This paper addresses the problem of optimal estimation of the principal (eigen/singular) subspaces spanned by the orthonormal columns of U (denoted as span(U)), based on the data matrix Y and the prior structural knowledge on U. Speci\ufb01cally, we aim to uncover the deep connections between the statistical limit of the estimation problem as measured by the minimax risk and the geometric complexity of the parameter spaces as characterized by functions of certain entropy measures. Since the principal subspaces can be uniquely identi\ufb01ed with their associated projection matrices, estimating span(U) is equivalent to estimating UU\u22a4. A commonly used metric for gauging the distance between two linear subspaces span(U1) and span(U2) is d(U1, U2) = \u2225U1U\u22a4 1 \u2212U2U\u22a4 2 \u2225F . In this paper, we use d(\u00b7, \u00b7) as the loss function and measure the performance of an estimator b U of U by the risk R( b U, U) = Ed( b U, U). 2 \fOptimal Structured Principal Subspace Estimation 1.2 Related Works The problem considered in this paper can be viewed as a generalization and uni\ufb01cation of many interesting problems in high-dimensional statistics and machine learning. 
We \ufb01rst present a few examples to demonstrate the richness of the structured principal subspace estimation problem and its connections to the existing literature. 1. Sparse PCA/SVD. The goal of sparse PCA/SVD is to recover span(U) under the assumption that columns of U are sparse. Sparse PCA has been extensively studied in the past two decades under the spiked Wishart model (see, for example, d\u2019Aspremont et al. (2005); Zou et al. (2006); Shen and Huang (2008); Witten et al. (2009); Yang et al. (2011); Vu and Lei (2012); Cai et al. (2013); Ma (2013); Birnbaum et al. (2013); Cai et al. (2015), among many others). In particular, the exact minimax rates of convergence under the loss d(\u00b7, \u00b7) was established by Cai et al. (2013) in the general rank-r setting. In contrast, theoretical analysis for the sparse SVD is relatively scarce, and the minimax rate of convergence remains unknown. 2. Non-negative PCA/SVD. Non-negative PCA/SVD aims to estimate span(U) under the assumption that entries of U are non-negative. This problem has been studied by Deshpande et al. (2014) and Montanari and Richard (2015) under the rank-one matrix denoising model (r=1), where the statistical limit and certain sharp asymptotics were carefully established. However, it is still unclear what are the minimax rates of convergence for estimating span(U) under either rank-one or general rank-r settings under either the spiked Wishart model or matrix denoising model. 3. Subspace Constrained PCA/SVD. The subspace constrained PCA/SVD assumes the columns of U are in some low-dimensional linear subspaces of Rp. In other words, U \u2208CA(p, k) = {U \u2208O(p, r) : AU.j = 0 for all 1 \u2264j \u2264r} for some rank (p \u2212k) matrix A \u2208Rp\u00d7(p\u2212k) where r < k < p. 
Estimating the principal subspaces under various linear subspace constraints has been considered in many applications such as network clustering (Wang and Davidson, 2010; Kawale and Boley, 2013; Kleindessner et al., 2019). However, the minimax rates of convergence for subspace constrained PCA/SVD remain unknown. 4. Spectral Clustering. Suppose we observe Y_i ∼ N(θ_i, σ^2 I_p) independently, where θ_i ∈ {θ, −θ} ⊂ R^p for i = 1, ..., n. Let Y ∈ R^{n×p} be such that Y_i is the i-th row of Y. We have Y = hθ⊤ + Z, where h ∈ {±1}^n and Z has i.i.d. entries from N(0, σ^2). Spectral clustering of {Y_i}_{1≤i≤n} aims to recover the class labels in h. Equivalently, spectral clustering can be treated as estimating the leading left singular vector u = h/∥h∥_2 in the matrix denoising model with u ∈ C^n_± = {u ∈ R^n : ∥u∥_2 = 1, u_i ∈ {±n^{−1/2}}}. See Azizyan et al. (2013); Jin and Wang (2016); Lu and Zhou (2016); Jin et al. (2017); Cai and Zhang (2018); Giraud and Verzelen (2018); Ndaoud (2018); Löffler et al. (2019) and references therein for recent theoretical results. In addition to the aforementioned problems, there are many other interesting problems that share the same generic form as the structured principal subspace estimation problem. For example, motivated by applications in the statistical analysis of metagenomics data, Ma et al. (2019, 2020) considered an approximately rank-one matrix denoising model where the leading singular vector satisfies the monotonicity constraint. In a special case of the matrix denoising model, namely, the Gaussian Wigner model Y = λuu⊤ + Z ∈ R^{n×n}, where Z has i.i.d.
entries (up to symmetry) drawn from a Gaussian distribution, the Gaussian Z/2 synchronization problem (Javanmard et al., 2016; Perry et al., 2018) aims to recover the leading singular vector u where u \u2208{u \u2208Rn : \u2225u\u22252 = 1, ui \u2208{\u00b1n\u22121/2}}. These important applications provide motivations for a uni\ufb01ed framework to study the fundamental di\ufb03culty and optimality of these estimation problems. On the other hand, investigations of metric entropy as a measure of statistical complexity has been one of the central topics in theoretical statistics, ranging from nonparametric function estimation (Yatracos, 1988; Haussler and Opper, 1997b; Yang and Barron, 1999; Yang, 1999; Wu and Yang, 2016), high-dimensional statistical inference (Raskutti et al., 2011; Verzelen, 2012; Vu and Lei, 2012; Cai et al., 2013; Ma, 2013) to statistical learning theory (Haussler and Opper, 1997a; Lugosi and Nobel, 1999; Bousquet et al., 2002; Bartlett and Mendelson, 2002; Koltchinskii, 2006; Lecu\u00b4 e and Mendelson, 2009; Cai et al., 2016; Rakhlin et al., 2017). Among them, interesting connections between the complexity of the parameter space and the fundamental di\ufb03culty of the statistical problem as quanti\ufb01ed by certain minimax risk have been carefully established. In this sense, the current work stands as a step along this direction in the context of principal subspace estimation under some general random matrix models. 1.3 Main Contribution The main contribution of this paper is three-fold. Firstly, a uni\ufb01ed framework is introduced for the study of structured principal subspace estimation problems under both the matrix denoising model and the spiked Wishart model. 
Novel generic minimax lower bounds and risk upper bounds are established to characterize explicitly the interplay between the information-geometric complexity of the structural set for the principal subspaces, the signal-to-noise ratio (SNR), and the dimensionality of the parameter spaces. The results yield interesting phase transition phenomena concerning the rates of convergence as functions of the SNRs and the fundamental limit for consistent estimation. The general lower and upper bounds reduce determination of the minimax optimal rates for many interesting problems to mere calculations of certain information-geometric quantities. Secondly, to obtain the general risk upper bounds, new technical tools are developed for the analysis of the proposed estimators in their general forms. In addition, the minimax lower bounds rely on careful constructions of multiple composite hypotheses about the structured parameter spaces, and non-trivial calculations of the Kullback-Leibler (KL) divergence between certain mixture probability measures, which can be of independent interest. Thirdly, by directly applying our general results to the speci\ufb01c problems discussed in Section 1.2, we establish the minimax optimal rates for those problems. Among them, the minimax rates for sparse SVD, non-negative PCA/SVD and subspace constrained PCA/SVD, are to our knowledge previously unknown. 4 \fOptimal Structured Principal Subspace Estimation 1.4 Organization and Notation The rest of the paper is organized as follows. After introducing the notation at the end of this section, we characterize in Section 2 a minimax lower bound under the matrix denoising model using local metric entropy measures. A general estimator is introduced in Section 3 and its risk upper bound is obtained via certain global metric entropy measures. In Section 4, the spiked Wishart model is discussed in detail and generic risk lower and upper bounds are obtained. 
The general results are applied in Section 5 to speci\ufb01c settings and minimax optimal rates are established by explicitly calculating the local and global metric-entropic quantities. In Section 6, we address the computational issues of the proposed estimators and discuss some extensions and make connections to some other interesting problems. For a vector a = (a1, ..., an)\u22a4\u2208Rn, we denote diag(a1, ..., an) \u2208Rn\u00d7n as the diagonal matrix whose i-th diagonal entry is ai, and de\ufb01ne the \u2113p norm \u2225a\u2225p = \u0000 Pn i=1 ap i \u00011/p. We write a \u2227b = min{a, b} and a \u2228b = max{a, b}. For a matrix A = (aij) \u2208Rp1\u00d7p2, we de\ufb01ne its Frobenius norm as \u2225A\u2225F = qPp1 i=1 Pp2 j=1 a2 ij and its spectral norm as \u2225A\u2225= sup\u2225x\u22252\u22641 \u2225Ax\u22252; we also denote A.i \u2208Rp1 as its i-th column and Ai. \u2208Rp2 as its i-th row. Let O(p, k) = {V \u2208Rp\u00d7k : V\u22a4V = Ik} be the set of all p \u00d7 k orthonormal matrices and Op = O(p, p), the set of p-dimensional orthonormal matrices. For a rank r matrix A \u2208Rp1\u00d7p2 with 1 \u2264r \u2264p1 \u2227p2, its SVD is denoted as A = U\u0393V\u22a4where U \u2208O(p1, r), V \u2208O(p2, r), and \u0393 = diag(\u03bb1(A), \u03bb2(A), ..., \u03bbr(A)) with \u03bbmax(A) = \u03bb1(A) \u2265\u03bb2(A) \u2265 ... \u2265\u03bbp1\u2227p2(A) = \u03bbmin(A) \u22650 being the ordered singular values of A. The columns of U and the columns of V are the left singular vectors and right singular vectors associated to the non-zero singular values of A, respectively. For a given set S, we denote its cardinality as |S|. For sequences {an} and {bn}, we write an = o(bn) or an \u226abn if limn an/bn = 0, and write an = O(bn), an \u2272bn or bn \u2273an if there exists a constant C such that an \u2264Cbn for all n. We write an \u224dbn if an \u2272bn and an \u2273bn. Lastly, c, C, C0, C1, ... are constants that may vary from place to place. 2. 
Minimax Lower Bounds via Local Packing We start with the matrix denoising model. Without loss of generality, we focus on estimating the structured left singular subspace span(U). Speci\ufb01cally, for a given subset C \u2282O(p1, r), we consider the parameter space Y(C, t, p1, p2, r) = \u001a (\u0393, U, V) : \u0393 = diag(\u03bb1, ..., \u03bbr), U \u2208C, V \u2208O(p2, r) Lt \u2265\u03bb1 \u2265... \u2265\u03bbr \u2265t/L > 0 \u001b , (1) for some \ufb01xed constant L > 1. For any U \u2208O(p1, r) and \u03f5 \u2208(0, 1), the \u03f5-ball centered at U is de\ufb01ned as B(U, \u03f5) = {U\u2032 \u2208O(p1, r) : d(U\u2032, U) \u2264\u03f5}, and for any given subset C \u2282O(p1, r), we de\ufb01ne diam(C) = sup U1,U2\u2208C d(U1, U2). We introduce the concepts of packing and covering of a given set before stating a general minimax lower bound. 5 \fCai, Li and Ma De\ufb01nition 3 (\u03f5-packing and \u03f5-covering) Let (V, d) be a metric space and M \u2282V . We say that G(M, d, \u03f5) \u2282M is an \u03f5-packing of M if for any mi, mj \u2208G(M, d, \u03f5) with mi \u0338= mj, it holds that d(mi, mj) > \u03f5. We say that H(M, d, \u03f5) \u2282M is an \u03f5-covering of M if for any m \u2208M, there exists an m\u2032 \u2208H(M, d, \u03f5) such that d(m, m\u2032) < \u03f5. We denote M(M, d, \u03f5) = max{|G(M, d, \u03f5)|} and N(M, d, \u03f5) = min{|H(M, d, \u03f5)|} as the \u03f5-packing number and the \u03f5-covering number of M, respectively. Following Yang and Barron (1999), we also de\ufb01ne the metric entropy of a given set. De\ufb01nition 4 (packing and covering \u03f5-entropy) Let M(M, d, \u03f5) and N(M, d, \u03f5) be the \u03f5-packing and \u03f5-covering number of M, respectively. We call log M(M, d, \u03f5) the packing \u03f5-entropy and log N(M, d, \u03f5) the covering \u03f5-entropy of M. 
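Definitions 3 and 4 can be made concrete on a finite point set: a greedily built maximal ε-packing is automatically an ε-covering, which gives one half of the standard sandwich M(M, d, 2ε) ≤ N(M, d, ε) ≤ M(M, d, ε). A sketch assuming numpy and the Euclidean metric on a sampled set; `greedy_packing` is an illustrative helper, not from the paper:

```python
import numpy as np

def greedy_packing(points, eps):
    """Greedily build a maximal eps-packing of `points`.

    A maximal eps-packing (pairwise distances > eps) is automatically an
    eps-covering: any point not selected was within eps of some center.
    """
    centers = []
    for p in points:
        if all(np.linalg.norm(p - c) > eps for c in centers):
            centers.append(p)
    return centers

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on the unit sphere

eps = 0.5
centers = greedy_packing(pts, eps)

# packing property: pairwise distances exceed eps
for i in range(len(centers)):
    for j in range(i + 1, len(centers)):
        assert np.linalg.norm(centers[i] - centers[j]) > eps
# covering property: every point is within eps of some center
for p in pts:
    assert min(np.linalg.norm(p - c) for c in centers) <= eps
```

Counting `centers` at shrinking ε gives an empirical view of the entropy numbers log M and log N that drive the bounds in this section.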
The following theorem gives a minimax lower bound for estimating span(U) over Y(C, t, p1, p2, r), as a function of the cardinality of a local packing set of C, the magnitude of the leading singular values (t), the noise level (\u03c32), the rank (r), and the dimension (p2) of the right singular vectors in V. Theorem 5 Under the matrix denoising model Y = U\u0393V\u22a4+Z where (\u0393, U, V) \u2208Y(C, t, p1, p2, r), suppose there exist some U0 \u2208C, \u03f50 > 0 and \u03b1 \u2208(0, 1) such that a local packing set G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50) satis\ufb01es \u03f50 = p c\u03c32(t2 + \u03c32p2) t2 p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u2227diam(C) (2) for some c \u2208(0, 1/640]. Then, as long as |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u22652, it holds that, for \u03b8 = (\u0393, U, V), inf b U sup \u03b8\u2208Y(C,t,p1,p2,r) R( b U, U) \u2273 \u0012\u03c3 p t2 + \u03c32p2 t2 p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)|\u2227diam(C) \u0013 , (3) where the in\ufb01mum is over all the estimators based on the observation Y. The above theorem, to the best of our knowledge, is the \ufb01rst minimax lower bound result for the matrix denoising model under the general parameter space (1). Its proof is separated into two parts. In the strong signal regime (t2 \u2273\u03c32p2), the minimax lower bound can be obtained by generalizing the ideas in Vu and Lei (2012, 2013) and Cai et al. (2013), where a general lower bound for testing multiple hypotheses (Lemma 30) is applied to obtain (3). In contrast, the analysis is much more complicated in the weak signal regime (t2 \u2272\u03c32p2) due to the asymmetry between U and V: the dependence on p2 need to be captured by extra e\ufb00orts in the lower bound construction (Cai and Zhang, 2018), which is di\ufb00erent from the aforementioned works on sparse PCA. 
To achieve this, our analysis relies on a generalized Fano's method for testing multiple composite hypotheses (Lemma 31) and a nontrivial calculation of the pairwise KL divergence between certain mixture probability measures (Lemma 32). A key observation from the above theorem is the role of the local packing set G(B(U0, ϵ0) ∩ C, d, αϵ0) and its entropy measure log |G(B(U0, ϵ0) ∩ C, d, αϵ0)| in characterizing the fundamental difficulty of the estimation problem. Similar phenomena connecting the local packing numbers to the minimax lower bounds have been observed in, for example, nonparametric function estimation (Yang and Barron, 1999), high-dimensional linear regression (Raskutti et al., 2011; Verzelen, 2012), and sparse principal component analysis (Vu and Lei, 2012; Cai et al., 2013). By Cai and Zhang (2018), a sharp minimax lower bound for estimating span(U) under the unstructured matrix denoising model (i.e., C = O(p1, r)) is inf_{Û} sup_{(Γ,U,V) ∈ Y(O(p1,r),t,p1,p2,r)} R(Û, U) ≳ (σ √((t^2 + σ^2 p2) r p1) / t^2) ∧ √r, (4) which, in light of the packing number estimates for the orthogonal group (Lemma 1 of Cai et al. (2013)), is a direct consequence of our lower bound (3) for any U0 ∈ O(p1, r). In addition, comparing the lower bounds (3) and (4), we observe that the information-geometric quantity log |G(B(U0, ϵ0) ∩ C, d, αϵ0)| essentially quantifies the intrinsic statistical dimension (which is rp1 in the case of C = O(p1, r)) of the set C. 3. Risk Upper Bound using Dudley's Entropy Integral. In this section, we consider a general singular subspace estimator and study its theoretical properties. Specifically, we obtain its risk upper bound which, analogous to the minimax lower bound, can be expressed as a function of certain entropic measures related to the structural constraint C.
Under the matrix denoising model, with the parameters (\u0393, U, V) \u2208Y(C, t, p1, p2, r) for some given set C \u2282O(p1, r), we consider the structured singular subspace estimator b U = arg max U\u2208C tr(U\u22a4YY\u22a4U). (5) Before stating our main theorem, we need to make more de\ufb01nitions about quantities that play important roles in our subsequent discussions. De\ufb01nition 6 For given C \u2282O(p1, r) and any U \u2208C, we de\ufb01ne the set T (C, U) = \u001a WW\u22a4\u2212UU\u22a4 \u2225WW\u22a4\u2212UU\u22a4\u2225F \u2208Rp1\u00d7p1 : W \u2208C \\ {U} \u001b , equipped with the Frobenius distance d2, where for any D1, D2 \u2208T (C, U), we de\ufb01ne d2(D1, D2) = \u2225D1 \u2212D2\u2225F . De\ufb01nition 7 (Dudley\u2019s entropy integral) For a metric space (T, d) and a subset A \u2282 T, Dudley\u2019s entropy integral of A is de\ufb01ned as D(A, d) = R \u221e 0 p log N(A, d, \u03f5)d\u03f5. Moreover, we de\ufb01ne D\u2032(A, d) = R \u221e 0 log N(A, d, \u03f5)d\u03f5. Theorem 8 Under the matrix denoising model, for any given subset C \u2282O(p1, r) and the parameter space Y(C, t, p1, p2, r), if t2/\u03c32 \u2273supU\u2208C[D\u20322(T (C, U), d2)/D2(T (C, U), d2)], it holds that sup (\u0393,U,V)\u2208Y(C,t,p1,p2,r) R( b U, U) \u2272 \u0012\u03c3\u2206(C) p t2 + \u03c32p2 t2 \u2227diam(C) \u0013 , (6) where \u2206(C) = supU\u2208C D(T (C, U), d2). 7 \fCai, Li and Ma The proof of the above theorem, as it concerns the generic estimator (5) under some arbitrary structural set C, is involved and very di\ufb00erent from the existing works such as Cai et al. (2013) Deshpande et al. (2014) Cai and Zhang (2018) and Zhang et al. (2018) where speci\ufb01c examples of C are considered. The argument relies on careful analysis the supremum of a Gaussian chaos of order 2 and the supremum of a Gaussian process. 
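For intuition about the estimator (5): when the constraint set is all of O(p1, r), maximizing tr(U⊤YY⊤U) over orthonormal U is solved exactly by the top-r left singular vectors of Y (Ky Fan's extremal principle). A small simulated sketch, assuming numpy; the planted-model parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2, r, sigma = 30, 40, 2, 0.05

# planted model Y = U Gamma V^T + Z with i.i.d. N(0, sigma^2) noise
U, _ = np.linalg.qr(rng.normal(size=(p1, r)))
V, _ = np.linalg.qr(rng.normal(size=(p2, r)))
Gamma = np.diag([5.0, 4.0])
Y = U @ Gamma @ V.T + sigma * rng.normal(size=(p1, p2))

# With C = O(p1, r), the maximizer of tr(U^T Y Y^T U) over orthonormal U
# is the matrix of top-r left singular vectors of Y
Uhat = np.linalg.svd(Y)[0][:, :r]

# subspace distance d(Uhat, U) = ||Uhat Uhat^T - U U^T||_F
d = np.linalg.norm(Uhat @ Uhat.T - U @ U.T)
assert d < 0.5
```

For a structured C, the same objective is maximized over the constrained set instead, which is the source of the computational issues discussed in Section 6.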
In the latter case, we applied Dudley\u2019s integral inequality (Theorem 22) and the invariance property of the covering numbers with respect to Lipschitz maps (Lemma 23), whereas in the former case, the Arcones-Gin\u00b4 e decoupling inequality (Theorem 24) as well as the generic chaining argument (Theorem 27) were used to obtain the desired upper bounds. Many technical tools concatenated for the proof of this theorem can be of independent interest. See more details in Section A.1. Interestingly, both the risk upper bound (6) and the minimax lower bound (3) indicate two phase transitions when treated as a function of the SNR t/\u03c3, with the \ufb01rst critical point t \u03c3 \u224d\u221ap2, (7) and the second critical point t \u03c3 \u224d \u0014 \u03b6 diam2(C) + s \u03b6p2 diam2(C) \u00151/2 , (8) where in the upper bound \u03b6 = \u22062(C) and in the lower bound \u03b6 = log |G(B(U0, \u03f50) \u2229 C, d, \u03b1\u03f50)|. Speci\ufb01cally, the phase transition at the \ufb01rst critical point highlights the role of the dimensionality of the right singular vectors (V) and the change of the rates of convergence from an inverse quadratic function (\u03c32\u221ap2\u03b6/t2) to an inverse linear function (\u03c3\u221a\u03b6/t) of t/\u03c3. The message from the second phase transition concerns the statistical limit of the estimation problem: consistent estimation is possible only when the SNR exceeds the critical point (8) asymptotically. See Figure 1 (left) for a graphical illustration. As for the implications of the condition t2/\u03c32 \u2273sup U\u2208C [D\u20322(T (C, U), d2)/D2(T (C, U), d2)] (9) required by Theorem 8, it can be seen in Section 5 that, for many speci\ufb01c problems, a su\ufb03cient condition for (9) is that t/\u03c3 is above the second critical point (8), which is mild and natural since the latter condition characterizes the region where b U is consistent and more generally where consistent estimation is possible. 
Comparing our risk upper bound (6) to the minimax lower bound (3), we can observe the similar role played by the information-geometric quantities that characterize the intrinsic statistical dimension of the sets C or T (C, U). Speci\ufb01cally, in (6), the quantity \u2206(C) is related to the global covering entropy, whereas in (3), the quantity p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| is associated to the local packing entropy. To obtain the minimax optimal rate of convergence, we need to compare the above two quantities and show \u22062(C) \u224dlog |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)|. (10) Proving the above equation in its general form is di\ufb03cult. Alternatively, we brie\ufb02y discuss the a\ufb03nity between these two geometric quantities yielded by information theory and leave more detailed discussions in the context of some speci\ufb01c examples in Section 5. 8 \fOptimal Structured Principal Subspace Estimation Figure 1: A graphical illustration of the phase transitions in risks as a function of the SNRs under the matrix denoising model (left) and the spiked Wishart model (right). By de\ufb01nition of the packing numbers, we have the relationship log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u2264log M(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50), (11) that links log |G(B(U0, \u03f50)\u2229C, d, \u03b1\u03f50)| to the local packing entropy. A well known fact about the equivalence between the packing and the covering number of a set M is that M(M, d, 2\u03f5) \u2264N(M, d, \u03f5) \u2264M(M, d, \u03f5). (12) Moreover, Yang and Barron (1999) obtained a very interesting result connecting the local and the global (covering) metric entropies. Speci\ufb01cally, let U be any element from M, then log M(M, d, \u03f5/2) \u2212log M(M, d, \u03f5) \u2264log M(B(U, \u03f5) \u2229M, d, \u03f5/2) \u2264log M(M, d, \u03f5). 
(13) In Section 5, by focusing on some speci\ufb01c examples of C that are widely considered in practice, we show that equation (10) holds, which along with our generic lower and upper bounds recovers some existing minimax rates, and more importantly, helps to establish some previously unknown rates. 4. Structured Eigen Subspace Estimation in the Spiked Wishart Model We turn the focus in this section to the spiked Wishart model where one has i.i.d. observations Yi \u223cN(\u00b5, \u03a3) with \u03a3 = U\u0393V\u22a4+ \u03c32I, which is usually referred as the spiked covariance. Similar to the matrix denoising model, a minimax lower bound based on some local packing set and a risk upper bound based on the Dudley\u2019s entropy integral can be obtained. 4.1 Minimax Lower Bound For any given subset C \u2282O(p, r), we consider the parameter space Z(C, t, p, r) = {(\u0393, U) : \u0393 = diag(\u03bb1, ..., \u03bbr), Lt \u2265\u03bb1 \u2265... \u2265\u03bbr \u2265t/L > 0, U \u2208C}, 9 \fCai, Li and Ma where L > 1 is some \ufb01xed constant. The following theorem provides minimax lower bound for estimating span(U) over Z(C, t, p, r) under the spiked Wishart model. Theorem 9 Under the spiked Wishart model where (\u0393, U) \u2208Z(C, t, p, r), suppose there exist some U0 \u2208C, \u03f50 > 0 and \u03b1 \u2208(0, 1) such that a local packing set G(B(U0, \u03f50)\u2229C, d, \u03b1\u03f50) satis\ufb01es \u03f50 = \u03c3 p c(\u03c32 + t) t\u221an p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u2227diam(C), (14) for some c \u2208(0, 1/32]. Then, as long as |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u22652, it holds that inf b U sup (\u0393,U)\u2208Z(C,t,p,r) R( b U, U) \u2273 \u0012\u03c3 \u221a \u03c32 + t t\u221an p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u2227diam(C) \u0013 , (15) where the in\ufb01mum is over all the estimators based on the observation Y. In Zhang et al. 
(2018), a sharp minimax lower bound for estimating span(U) under the unstructured spiked Wishart model was obtained as inf b U sup (\u0393,U)\u2208Z(O(p,r),t,p,r) R( b U, U) \u2273 \u0012\u03c3 p (\u03c32 + t)rp t\u221an \u2227\u221ar \u0013 . (16) Comparing the general lower bound (15) with (16), we observe that the local entropic quantity log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| again characterizes the intrinsic statistical dimension (which is rp in the case of C = O(p, r)) of the set C. See Section 5 for more examples. 4.2 Risk Upper Bound Under the spiked Wishart model, to estimate the eigen subspace span(U) under the structural constraint U \u2208C, we start with the sample covariance matrix \u02c6 \u03a3 = 1 n n X i=1 (Yi \u2212\u00af Y )(Yi \u2212\u00af Y )\u22a4, where \u00af Y = 1 n Pn i=1 Yi and Yi is the i-th row of the observed data matrix Y \u2208Rn\u00d7p. Since \u02c6 \u03a3 is invariant to any translation on Y, we assume \u00b5 = 0 without loss of generality. Similar to the matrix denoising model, for the spiked Wishart model, with a slight abuse of notation, we de\ufb01ne the eigen subspace estimator as b U = arg max U\u2208C tr(U\u22a4\u02c6 \u03a3U). (17) The following theorem provides the risk upper bound of b U. Theorem 10 Under the spiked Wishart model, for any given C \u2282O(p, r) and the parameter space Z(C, t, p, r), suppose n \u2273max{log t \u03c32 , r} and t/\u03c32 \u2273supU\u2208C[D\u20322(T (C, U), d2)/D2(T (C, U), d2)], then sup (\u0393,U)\u2208Z(C,t,p,r) R( b U, U) \u2272 \u0012\u03c3\u2206(C) \u221a t + \u03c32 t\u221an \u2227diam(C) \u0013 , where \u2206(C) is de\ufb01ned in Theorem 8. 
10 \fOptimal Structured Principal Subspace Estimation Similar to the matrix denoising model, the above risk upper bound closely matches the minimax lower bound (15), up to a di\ufb00erence in the information-geometric (metric-entropic) measures of C, and the sharpness of our results relies on the relative magnitude between the pair of quantities \u22062(C) and log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)|. In addition, phase transitions in the rates of the lower and upper bounds as functions of the SNR t/\u03c32 can be observed with the \ufb01rst critical point at t/\u03c32 \u224d1, (18) and the second critical point at t/\u03c32 \u224d\u03b6/(n \u00b7 diam2(C)) + \u221a(\u03b6/(n \u00b7 diam2(C))), (19) where in the lower bound \u03b6 = log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| and in the upper bound \u03b6 = \u22062(C). Again, the phase transition at the \ufb01rst critical point re\ufb02ects the change in the speed of convergence, whereas the phase transition at the second critical point characterizes the statistical limit of the estimation problem. See Figure 1 (right) for a graphical illustration. Finally, it will be seen in Section 5 that for many speci\ufb01c problems, the condition t/\u03c32 \u2273supU\u2208C[D\u20322(T (C, U), d2)/D2(T (C, U), d2)] required by Theorem 10 is mild and in fact necessary for consistent estimation. 5. Applications In the following, building upon the minimax lower bounds and the risk upper bounds established in the previous sections, we obtain minimax rates and fundamental limits for various structural principal subspace estimation problems of broad interest. Speci\ufb01cally, in light of our generic results, we show the asymptotic equivalence of various local and global entropic measures associated with some speci\ufb01c examples of C. The previous discussions under the general setting, such as the phase transition phenomena, also apply to each of the examples.
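The two phase transitions described above can be seen directly from the shape of the upper-bound rate. The following sketch (not from the paper) evaluates \u03c3\u2206(C)\u221a(t + \u03c32)/(t\u221an) \u2227 diam(C) as a function of the SNR; the values of \u2206(C) and diam(C) are hypothetical stand-ins.

```python
import numpy as np

sigma, n = 1.0, 100.0
delta = 5.0           # hypothetical stand-in for Delta(C)
diam = np.sqrt(2.0)   # hypothetical stand-in for diam(C)

def rate(t):
    """Upper-bound rate sigma*Delta(C)*sqrt(t+sigma^2)/(t*sqrt(n)), capped at diam(C)."""
    return min(sigma * delta * np.sqrt(t + sigma**2) / (t * np.sqrt(n)), diam)

ts = np.logspace(-3, 3, 200)
rs = np.array([rate(t) for t in ts])
# Below the second critical point the bound is flat at diam(C): no consistency.
# Beyond it the rate decays, roughly as t^{-1} for t << sigma^2 (below the first
# critical point t/sigma^2 ~ 1) and as t^{-1/2} for t >> sigma^2.
```

Plotting rs against ts on a log-log scale reproduces the qualitative picture of Figure 1 (right): a flat segment, then two decay regimes separated by the critical point (18).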
5.1 Sparse PCA/SVD We start with the sparse PCA/SVD where the columns of U are sparse vectors. Suppose CS(p, r, k) is the k-sparse subset of O(p, r) for some k \u2264p, i.e., CS(p, r, k) = {U \u2208O(p, r) : max1\u2264i\u2264r \u2225U.i\u22250 \u2264k}. The following proposition concerns some estimates about the local and global entropic quantities associated with the set CS(p, r, k). For simplicity, we denote CS(k) = CS(p, r, k) when there is no confusion. Proposition 11 Under the matrix denoising model where (\u0393, U, V) \u2208Y(CS(k), t, p1, p2, r) with k = o(p1) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50)\u2229 CS(p1, r, k), d, \u03b1\u03f50) satisfying (2) such that log |G(B(U0, \u03f50) \u2229CS(p1, r, k), d, \u03b1\u03f50)| \u224d\u22062(CS(p1, k, r)) \u224dk log(ep1/k) + k. Similarly, under the spiked Wishart model where (\u0393, V) \u2208Z(CS(k), t, p, r) with k = o(p) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50)\u2229CS(p, r, k), d, \u03b1\u03f50) satisfying (14) such that log |G(B(u0, \u03f50) \u2229CS(p, k, r), d, \u03b1\u03f50)| \u224d\u22062(CS(p, k, r)) \u224dk log(ep/k) + k. 11 \fCai, Li and Ma In light of our lower and upper bounds under both the matrix denoising model (Theorem 5 and 8) and the spiked Wishart model (Theorem 9 and 10), with Proposition 11, we are able to establish sharp minimax rates of convergence for sparse PCA/SVD. Theorem 12 Under the matrix denoising model with U \u2208CS(p1, r, k) where k = o(p1) and r = O(1), it holds that inf b U sup Y(CS(k),t,p1,p2,r) R( b U, U) \u224d \u0012\u03c3 p t2 + \u03c32p2 t2 \u0012r k log ep1 k + \u221a k \u0013 \u22271 \u0013 (20) where the estimator (5) is rate-optimal whenever consistent estimation is possible. 
Similarly, under the spiked Wishart model with U \u2208CS(p, r, k) where k = o(p) and r = O(1), if n \u2273max{log t \u03c32 , r}, then inf b U sup Z(CS(k),t,p,r) R( b U, U) \u224d \u0012\u03c3 \u221a t + \u03c32 t\u221an \u0012r k log ep k + \u221a k \u0013 \u22271 \u0013 , (21) where the estimator (17) is rate-optimal whenever consistent estimation is possible. The minimax rate (21) for the spiked Wishart model (sparse PCA) recovers the ones obtained by Vu and Lei (2012) and Cai et al. (2013) under either rank-one or \ufb01nite rank r settings. In contrast, the result (20) for the matrix denoising model (sparse SVD), to the best of our knowledge, has not been established. 5.2 Non-Negative PCA/SVD We now turn to the non-negative PCA/SVD under either the matrix denoising model (SVD) or the spiked Wishart model (PCA) where U \u2208CN(p, r) = {U = (uij) \u2208O(p, r) : uij \u2265 0 for all i, j}. The following proposition provides estimates about the local and global entropic quantities related to the set CN(p, r). Proposition 13 Under the matrix denoising model where (\u0393, U, V) \u2208Y(CN(p1, r), t, p1, p2, r) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50)\u2229CN(p1, r), d, \u03b1\u03f50) satisfying (2) such that \u22062(CN(p1, r)) \u224dlog |G(B(U0, \u03f50) \u2229CN(p1, r), d, \u03b1\u03f50)| \u224dp1. Similarly, under the spiked Wishart model where (\u0393, U) \u2208Z(CN(p, r), t, p, r) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50) \u2229CN(p, r), d, \u03b1\u03f50) satisfying (14) such that \u22062(CN(p, r)) \u224dlog |G(B(U0, \u03f50) \u2229CN(p, r), d, \u03b1\u03f50)| \u224dp. Proposition 13 enables us to establish sharp minimax rates of convergence for nonnegative PCA/SVD using the general lower and upper bounds from the previous sections. 
Theorem 14 Under the matrix denoising model with U \u2208CN(p1, r) where r = O(1), it holds that inf b U sup Y(CN(p1,r),t,p1,p2,r) R( b U, U) \u224d\u03c3 p (t2 + \u03c32p2)p1 t2 \u22271, (22) and the estimator (5) is rate-optimal whenever consistent estimation is possible. Similarly, for the spiked Wishart model with U \u2208CN(p, r) where r = O(1), if n \u2273max{log t \u03c32 , r}, then inf b U sup Z(CN(p,r),t,p,r) R( b U, U) \u224d\u03c3 p (t + \u03c32)p t\u221an \u22271, (23) where the estimator (17) is rate-optimal whenever consistent estimation is possible. The minimax rates for non-negative PCA/SVD, which were previously unknown, turn out to be the same as the rates for the ordinary unstructured SVD (Cai and Zhang, 2018) and PCA (Zhang et al., 2018). This is due to the fact claimed in Proposition 13 that, under the \ufb01nite rank scenarios, as a much smaller subset of O(p, r), CN(p, r) has asymptotically the same geometric complexity as O(p, r). Remark 15 Deshpande et al. (2014) considered the rank-one Gaussian Wigner model Y = \u03bbuu\u22a4+ Z \u2208Rp1\u00d7p1, which can be treated as a special case of the matrix denoising model. Speci\ufb01cally, it was shown that, for b u = arg maxu\u2208CN(p,1) u\u22a4Yu, it holds that sup (\u03bb,u)\u2208Z(CN(p,1),t,p,1) E[1 \u2212|b u\u22a4u|] \u2272\u03c3\u221ap t \u22271, which, by the fact that 1 \u2212|b u\u22a4u| \u2264d(b u, u), is implied by our result (see also Section 6.2). Similar problems were studied in Montanari and Richard (2015) under the setting where p1/p2 \u2192\u03b1 \u2208(0, \u221e). However, their focus is on unveiling the asymptotic behavior of b u\u22a4u as well as on the analysis of an approximate message passing algorithm, which is di\ufb00erent from ours.
5.3 Subspace Constrained PCA/SVD In some applications such as network clustering (Wang and Davidson, 2010; Kawale and Boley, 2013; Kleindessner et al., 2019), it is of interest to estimate principal subspaces with certain linear subspace constraints. For example, under the matrix denoising model, for some \ufb01xed A \u2208Rp1\u00d7(p1\u2212k) of rank (p1 \u2212k) where r < k < p1, a k-dimensional subspace constraint on the singular subspace span(U) could be U \u2208CA(p1, r, k) = {U \u2208O(p1, r) : AU.i = 0, \u22001 \u2264i \u2264r}. Again, subspace constrained PCA/SVD can be solved based on the general results obtained in the previous sections. Proposition 16 For given A \u2208Rp1\u00d7(p1\u2212k) of rank (p1 \u2212k), under the matrix denoising model where (\u0393, U, V) \u2208Y(CA(p1, r, k), t, p1, p2, r) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50) \u2229CA(p1, r, k), d, \u03b1\u03f50) satisfying (2) such that \u22062(CA(p1, r, k)) \u224dlog |G(B(u0, \u03f50) \u2229CA(p1, r, k), d, \u03b1\u03f50)| \u224dk. 13 \fCai, Li and Ma Similarly, for given B \u2208Rp\u00d7(p\u2212k) of rank (p \u2212k), under the spiked Wishart model with (\u0393, U) \u2208Z(CB(p, r, k), t, p, r) and r = O(1), there exist some (U0, \u03f50, \u03b1) and a local packing set G(B(U0, \u03f50) \u2229CB(p, r, k), d, \u03b1\u03f50) satisfying (14) such that \u22062(CB(p, r, k)) \u224dlog |G(B(U0, \u03f50) \u2229CB(p, r, k), d, \u03b1\u03f50)| \u224dk. Theorem 17 Under the matrix denoising model with U \u2208CA(p1, r, k) where r < k < p1, r = O(1) and A \u2208Rp1\u00d7(p1\u2212k) is of rank (p1 \u2212k), it holds that inf b U sup Y(CA(p1,r,k),t,p1,p2,r) R( b U, U) \u224d \u0012\u03c3 p (t2 + \u03c32p2)k t2 \u22271 \u0013 (24) and the estimator (5) is rate-optimal whenever consistent estimation is possible. 
Similarly, under the spiked Wishart model with U \u2208CB(p, r, k), where r < k < p, r = O(1) and B \u2208Rp\u00d7(p\u2212k) is of rank (p \u2212k), if n \u2273max{log t \u03c32 , r}, then inf b U sup Z(CB(p,r,k),t,p,r) R( b U, U) \u224d \u0012\u03c3 p (t + \u03c32)k t\u221an \u22271 \u0013 , (25) where the estimator (17) is rate-optimal whenever consistent estimation is possible. 5.4 Spectral Clustering As discussed in Section 1.2, spectral clustering can be treated as estimation of the structural eigenvector under the rank-one matrix denoising model Y = \u03bbuv\u22a4+ Z \u2208Rn\u00d7p where \u03bb = \u2225h\u22252 2\u2225\u03b8\u22252 2 is the global signal strength, u = h/\u2225h\u22252 \u2208Cn \u00b1 = {u \u2208Rn : \u2225u\u22252 = 1, ui \u2208 {\u00b1n\u22121/2}} indicates the group labels, and Z has i.i.d. entries from N(0, \u03c32). As a result, important insights about the clustering problem can be obtained by calculating the entropic quantities related to Cn \u00b1 and applying the general results from the previous sections. Proposition 18 Under the matrix denoising model where (\u03bb, u, v) \u2208Y(Cn \u00b1, t, n, p, 1), it holds that \u22062(Cn \u00b1) \u2272n. In addition, if t2 = C\u03c32(\u221apn + n) for some constant C > 0, then there exist some (u0, \u03f50, \u03b1) and a local packing set G(B(u0, \u03f50) \u2229Cn \u00b1, d, \u03b1\u03f50) satisfying (2) such that log |G(B(u0, \u03f50) \u2229Cn \u00b1, d, \u03b1\u03f50)| \u224dn. Theorem 19 Under the spectral clustering model de\ufb01ned in Section 1.2, or equivalently, the matrix denoising model Y = \u03bbuv\u22a4+ Z \u2208Rn\u00d7p where u \u2208Cn \u00b1, the estimator b u = arg maxu\u2208Cn \u00b1 u\u22a4YY\u22a4u satis\ufb01es sup (\u03bb,u,v)\u2208Y(Cn \u00b1,t,n,p,1) R(b u, u) \u2272 \u0012\u03c3 p (t2 + \u03c32p)n t2 \u22271 \u0013 . 
(26) In addition, if t2 \u2272\u03c32(n + \u221anp), then inf b u sup (\u03bb,u,v)\u2208Y(Cn \u00b1,t,n,p,1) R(b u, u) \u2273C (27) for some absolute constant C > 0. Intuitively, the fundamental di\ufb03culty of clustering lies in the interplay between the global signal strength \u03bb, which re\ufb02ects both the sample size (n) and the distance between the two clusters (\u2225\u03b8\u22252), the noise level (\u03c32), and the dimensionality (p). In particular, the lower bound from the above theorem shows that one needs \u03bb2 \u2273\u03c32(\u221apn + n) in order to have consistent clustering. Moreover, the risk upper bound implies that, whenever \u03bb2 \u2273 \u03c32(\u221apn + n), the estimator b u is consistent. Theorem 19 thus establishes the fundamental statistical limit for the minimal global signal strength for consistent clustering. Similar phenomena have been observed by Azizyan et al. (2013) and Cai and Zhang (2018). Nevertheless, it should be noted that, despite the fundamental limits for consistent recovery yielded by Theorem 19, the estimator b u is in itself sub-optimal and can be further improved through a variant of Lloyd\u2019s iterations. See Lu and Zhou (2016) and Ndaoud (2018) for more details. 6. Discussions In this paper, we studied a collection of structural principal subspace estimation problems in a uni\ufb01ed framework by exploring the deep connections between the di\ufb03culty of statistical estimation and the geometric complexity of the parameter spaces. Minimax optimal rates of convergence for a collection of structured PCA/SVD problems are established. In this section, we discuss the computational issues of the proposed estimators as well as the extensions and connections to other problems.
6.1 Computationally E\ufb03cient Algorithms and the Iterative Projection Method In general, the constrained optimization problems that de\ufb01ne the estimators in (5) and (17) are computationally intractable. However, in practice, many iterative algorithms have been developed to approximate such estimators. For example, under the matrix denoising model, given the data matrix Y, the set C, and an initial estimator U0 \u2208O(p1, r), an iterative algorithm for the constrained optimization problem arg maxU\u2208C tr(U\u22a4YY\u22a4U) can be realized through iterations over the following updates for t \u22651: 1. Multiplication: Gt = YY\u22a4Ut; 2. QR factorization: U\u2032 t+1Wt+1 = Gt where U\u2032 t+1 is p1 \u00d7 r orthonormal and Wt+1 is r \u00d7 r upper triangular; 3. Projection: Ut+1 = PC(U\u2032 t+1). Here the projection operator PC(\u00b7) is de\ufb01ned as PC(U) = arg minG\u2208C d(U, G). The above algorithm generalizes the ideas of the projected power method (see, for example, Boumal (2016); Chen and Cand\u00e8s (2018); Onaran and Villar (2017)) and the orthogonal iteration method (Golub and Van Loan, 2012; Ma, 2013). The computational e\ufb03ciency of this iterative algorithm relies on the complexity of the projection operator PC for a given C. In the rank-one case (r = 1), Ferreira et al. (2013) pointed out that, whenever the set C is an intersection of a convex cone and the unit sphere, the projection operator PC(\u00b7) admits an explicit formula and can be computed e\ufb03ciently. This class of spherical convex sets includes many of the above examples such as non-negative PCA/SVD and subspace constrained PCA/SVD. The case of spectral clustering, under the rank-one setting, is also straightforward as the projection has a simple expression PCn \u00b1(u) = sgn(u)/\u221an (see Ndaoud (2018) and L\u00f6ffler et al. (2019) for more in-depth discussions).
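The three updates above can be sketched in a few lines. The following rank-one example (not from the paper) runs the iterative projection method for non-negative SVD under the matrix denoising model; with r = 1 the QR step reduces to normalization, and the projection onto the non-negative cone intersected with the sphere is clipping followed by renormalization (the explicit-formula case of Ferreira et al. noted above). The model parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2, sigma = 100, 120, 1.0

# Rank-one denoising model Y = lambda * u v^T + Z with a non-negative u
u = np.abs(rng.standard_normal(p1))
u /= np.linalg.norm(u)
v = rng.standard_normal(p2)
v /= np.linalg.norm(v)
lam = 40.0
Y = lam * np.outer(u, v) + sigma * rng.standard_normal((p1, p2))

def project_nonneg_sphere(g):
    """P_C for C = {nonnegative cone} ∩ {unit sphere}: clip, then renormalize."""
    g = np.clip(g, 0.0, None)
    nrm = np.linalg.norm(g)
    return g / nrm if nrm > 0 else np.ones_like(g) / np.sqrt(g.size)

u_t = project_nonneg_sphere(rng.standard_normal(p1))  # random feasible start
for _ in range(50):
    g = Y @ (Y.T @ u_t)                   # 1. multiplication: G_t = Y Y^T U_t
    u_next = g / np.linalg.norm(g)        # 2. QR step (just normalization when r = 1)
    u_t = project_nonneg_sphere(u_next)   # 3. projection: U_{t+1} = P_C(U'_{t+1})

print(abs(u_t @ u))  # close to 1: span(u_t) recovers span(u)
```

Swapping in the sign projection sgn(u)/\u221an instead of the non-negative one turns the same loop into the projected power method for the spectral clustering example.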
As for sparse PCA/SVD, the computational side of the problem is much more complicated and has been extensively studied in the literature (Shen and Huang, 2008; d\u2019Aspremont et al., 2008; Witten et al., 2009; Journ\u00e9e et al., 2010; Ma, 2013; Vu et al., 2013; Yuan and Zhang, 2013; Deshpande and Montanari, 2014). In addition to the iterative projection method discussed above, there are several other computationally e\ufb03cient algorithms, such as convex (semide\ufb01nite in particular) relaxations (Singer, 2011; Deshpande et al., 2014; Bandeira et al., 2017) and the approximate message passing algorithms (Deshpande and Montanari, 2014; Deshpande et al., 2014; Montanari and Richard, 2015; Rangan and Fletcher, 2012), which have been considered to solve the structured eigenvector problems. However, the focus of these algorithms is still on rank-one matrices, and it remains to be understood how well they generalize to the general rank-r cases. We leave further investigations along these directions to future work. 6.2 Extensions and Future Work As mentioned in Section 1.2, an important special case of the matrix denoising model is the Gaussian Wigner model (Deshpande et al., 2014; Montanari and Richard, 2015; Perry et al., 2018), where the data matrix Y = U\u0393U\u22a4+ Z \u2208Rp\u00d7p is symmetric, and the noise matrix Z has i.i.d. entries (up to symmetry) drawn from N(0, \u03c32). Consider the parameter space Z(C, t, p, r) de\ufb01ned in Section 4.1. It can be shown that, under similar conditions to those of Theorem 5, inf b U sup (\u0393,U)\u2208Z(C,t,p,r) R( b U, U) \u2273 \u0012\u03c3 t p log |G(B(U0, \u03f50) \u2229C, d, \u03b1\u03f50)| \u2227diam(C) \u0013 . (28) Moreover, if we de\ufb01ne b U = arg maxU\u2208C tr(U\u22a4YU), then its risk upper bound can be obtained as sup (\u0393,U)\u2208Z(C,t,p,r) R( b U, U) \u2272 \u0012\u03c3\u2206(C) t \u2227diam(C) \u0013 .
(29) These general bounds combined with the entropic quantities calculated in Section 5 would yield many other interesting optimality results. For instance, recall that the Gaussian Z/2 synchronization problem can be treated as a rank-one Gaussian Wigner model Y = \u03bbuu\u22a4+ Z where u \u2208Cn \u00b1. In this case, we have, for t \u2272\u03c3\u221an, inf b u sup (\u03bb,u)\u2208Z(Cn \u00b1,t,n,1) R(b u, u) \u2273C, and, for b u = arg maxu\u2208Cn \u00b1 u\u22a4Yu, sup (\u03bb,u)\u2208Z(Cn \u00b1,t,n,1) R(b u, u) \u2272 \u0012\u03c3\u221an t \u22271 \u0013 . This implies that, for Gaussian Z/2 synchronization, consistent estimation/recovery requires \u03bb \u2273\u03c3\u221an, and the estimator b u is consistent whenever \u03bb \u2273\u03c3\u221an. These results make interesting connections to the existing work (Javanmard et al., 2016; Perry et al., 2018) concerning the so-called critical threshold or fundamental limit for the SNRs in Z/2 synchronization problems. In the present paper, under the matrix denoising model, we focused only on the cases where the prior structural knowledge on the targeted singular subspace span(U) is available. However, in some applications, structural knowledge on the other singular subspace span(V) can also be available. An interesting question is whether and how much the prior knowledge on span(V) will help in the estimation of span(U). Some preliminary thinking suggests that novel phenomena might exist in such settings. For example, in an extreme case, if V is completely known a priori, then after a simple transform YV = U\u0393 + ZV, estimation of span(U) can be reduced to a Gaussian mean estimation problem, whose minimax rate is clearly independent of the dimension of the columns in V and therefore quite di\ufb00erent from the rates obtained in this paper. This problem again arises in important concrete examples in statistics and machine learning.
The present work provides a theoretical foundation for studying these problems. Appendix A. Proof of the Main Theorems In this section, we prove Theorems 5, 8, 9 and 10. A.1 Risk Upper Bounds This section proves Theorems 8 and 10. Throughout, for any X, Y \u2208Rp1\u00d7p2, we denote \u27e8X, Y\u27e9= tr(X\u22a4Y). We recall Lemma 1 in Cai and Zhang (2018), which concerns the relationships between di\ufb00erent distance measures. Lemma 20 For H1, H2 \u2208O(p, r), \u2225H1H\u22a4 1 \u2212H2H\u22a4 2 \u2225F = q 2(r \u2212\u2225H\u22a4 1 H2\u22252 F ), and 1 \u221a 2\u2225H1H\u22a4 1 \u2212 H2H\u22a4 2 \u2225F \u2264infO\u2208O(r) \u2225H1 \u2212H2O\u2225F \u2264\u2225H1H\u22a4 1 \u2212H2H\u22a4 2 \u2225F . Proof of Theorem 8. We begin by stating a useful lemma, whose proof is delayed to Section C. Lemma 21 Let U \u2208O(p1, r), and \u0393 = diag(\u03bb1, ..., \u03bbr). Then for any W \u2208O(p1, r), we have \u03bb2 r 2 \u2225UU\u22a4\u2212WW\u22a4\u22252 F \u2264\u27e8U\u03932U\u22a4, UU\u22a4\u2212WW\u22a4\u27e9\u2264\u03bb2 1 2 \u2225UU\u22a4\u2212WW\u22a4\u22252 F . By Lemma 21 and the fact that tr( b U\u22a4YY\u22a4b U) \u2265tr(U\u22a4YY\u22a4U), or equivalently \u27e8YY\u22a4, UU\u22a4\u2212b U b U\u22a4\u27e9\u22640, we have \u2225b U b U\u22a4\u2212UU\u22a4\u22252 F \u22642 \u03bb2 r \u27e8U\u03932U\u22a4\u2212YY\u22a4, UU\u22a4\u2212b U b U\u22a4\u27e9. 17 \fCai, Li and Ma Since Y = U\u0393V\u22a4+ Z, we have YY\u22a4= U\u03932U\u22a4+ ZV\u0393U\u22a4+ U\u0393V\u22a4Z\u22a4+ ZZ\u22a4. Thus \u2225b U b U\u22a4\u2212UU\u22a4\u22252 F \u22642 \u03bb2 r \u0002 \u27e8U\u0393V\u22a4Z\u22a4, b U b U\u22a4\u2212UU\u22a4\u27e9+ \u27e8ZV\u0393U\u22a4, b U b U\u22a4\u2212UU\u22a4\u27e9 + \u27e8ZZ\u22a4, b U b U\u22a4\u2212UU\u22a4\u27e9 \u0003 \u22612 \u03bb2 r (H1 + H2 + H3). 
For H1, if we set GW = WW\u22a4\u2212UU\u22a4 \u2225WW\u22a4\u2212UU\u22a4\u2225F , W \u2208O(p1, r) \\ {U}, (30) we can write H1 = \u27e8U\u0393V\u22a4Z\u22a4, b U b U\u22a4\u2212UU\u22a4\u27e9= \u2225b U b U\u22a4\u2212UU\u22a4\u2225F \u00b7 \u27e8U\u0393V\u22a4Z\u22a4, G b U\u27e9 \u2264\u2225b U b U\u22a4\u2212UU\u22a4\u2225F \u00b7 sup W\u2208C tr(ZV\u0393U\u22a4GW). Similarly, we have H2 \u2264\u2225b U b U\u22a4\u2212UU\u22a4\u2225F \u00b7 supW\u2208C tr(U\u0393V\u22a4Z\u22a4GW), and H3 \u2264\u2225b U b U\u22a4\u2212 UU\u22a4\u2225F \u00b7 supW\u2208C tr(Z\u22a4GWZ). It then follows that \u2225b U b U\u22a4\u2212UU\u22a4\u2225F \u22642 \u03bb2 r \u0012 sup W\u2208C tr(ZV\u0393U\u22a4GW)+ sup W\u2208C tr(U\u0393V\u22a4Z\u22a4GW)+ sup W\u2208C tr(Z\u22a4GWZ) \u0013 . (31) The rest of the proof is separated into three parts. In the \ufb01rst two parts, we obtain upper bounds for the right-hand side of equation (31). In the third part, we derive the desired risk upper bound. Part I. For the term supW\u2208C tr(ZV\u0393U\u22a4GW), we have sup W\u2208C tr(ZV\u0393U\u22a4GW) = sup W\u2208C tr(U\u22a4GWZV\u0393) = sup W\u2208C r X i=1 \u03bbi(U\u22a4GWZV)ii \u2264\u03bb1 sup W\u2208C tr(VU\u22a4GWZ) \u2264\u03bb1 sup G\u2208T \u2032(C,U,V) \u27e8G, Z\u27e9, where we de\ufb01ned T \u2032(C, U, V) = {GWUV\u22a4\u2208Rp1\u00d7p2 : W \u2208C \\ {U}}. To control the expected suprema of the Gaussian process supG\u2208T \u2032(C,U,V)\u27e8G, Z\u27e9, we use the following Dudley\u2019s integral inequality (see, for example, Vershynin 2018, pp. 188). Theorem 22 (Dudley\u2019s Integral Inequality) Let {Xt}t\u2208T be a Gaussian process, that is, a jointly Gaussian family of centered random variables indexed by T, where T is equipped with the canonical distance d(s, t) = p E(Xs \u2212Xt)2. For some universal constant L, we have E supt\u2208T Xt \u2264L R \u221e 0 p log N(T, d, \u03f5)d\u03f5. 
For the Gaussian process supG\u2208T \u2032(C,U,V)\u27e8G, Z\u27e9, the canonical distance de\ufb01ned over the set T \u2032(C, U, V) can be obtained as follows. For any G1, G2 \u2208T (C, U, V), the canonical distance between G1 and G2, by de\ufb01nition, is p E\u27e8G1 \u2212G2, Z\u27e92 = \u2225G1\u2212G2\u2225F \u2261d2(G1, G2). Theorem 22 yields E sup G\u2208T \u2032(C,U,V) \u27e8G, Z\u27e9\u2264C\u03c3 Z \u221e 0 p log N(T \u2032(C, U, V), d2, \u03f5)d\u03f5, (32) 18 \fOptimal Structured Principal Subspace Estimation for some universal constant C > 0. Next, for any G1, G2 \u2208T \u2032(C, U, V), without loss of generality, if we assume G1 = GW1UV\u22a4and G2 = GW2UV\u22a4, where W1, W2 \u2208C \\ {U}, then it holds that d2(G1, G2) = \u2225G1 \u2212G2\u2225F \u2264\u2225GW1 \u2212GW2\u2225F \u2225U\u2225\u2225V\u2225 (33) \u2264\u2225GW1 \u2212GW2\u2225F = d2(GW1, GW2), where we used the fact that \u2225HG\u2225F \u2264\u2225H\u2225F \u2225G\u2225. The next lemma, obtained by Szarek (1998), concerns the invariance property of the covering numbers with respect to Lipschitz maps. Lemma 23 (Szarek (1998)) Let (M, d) and (M1, d1) be metric spaces, K \u2282M, \u03a6 : M \u2192 M1, and let L > 0. If \u03a6 satis\ufb01es d1(\u03a6(x), \u03a6(y)) \u2264Ld(x, y) for x, y, \u2208M, then, for every \u03f5 > 0, we have N(\u03a6(K), d1, L\u03f5) \u2264N(K, d, \u03f5). De\ufb01ne the set T (C, U) = {GW : W \u2208C \\ {U}}. Equation (33) and Lemma 23 imply log N(T \u2032(C, U, V), d2, \u03f5) \u2264log N(T (C, U), d2, \u03f5), (34) which means sup W\u2208C tr(ZV\u0393U\u22a4GW) \u2264C\u03bb1\u03c3 Z \u221e 0 p log N(T (C, U), d2, \u03f5)d\u03f5. (35) Applying the same argument to supW\u2208C tr(U\u0393V\u22a4Z\u22a4GW) leads to sup W\u2208C tr(U\u0393V\u22a4Z\u22a4GW) \u2264C\u03bb1\u03c3 Z \u221e 0 p log N(T (C, U), d2, \u03f5)d\u03f5. (36) Part II. 
To bound supW\u2208C tr(Z\u22a4GWZ), note that tr(Z\u22a4GWZ) = vec(Z)\u22a4DWvec(Z), where vec(Z) = (Z11, ..., Zp11, Z12, ..., Zp12, ..., Z1p2, ..., Zp1p2)\u22a4, and DW = diag(GW, ..., GW) \u2208Rp1p2\u00d7p1p2 is the block-diagonal matrix with p2 copies of GW, (37) It su\ufb03ces to control the expected supremum of the following Gaussian chaos of order 2, sup D\u2208P(C,U) vec(Z)\u22a4Dvec(Z), (38) where P(C, U) = {DW \u2208Rp1p2\u00d7p1p2 : W \u2208C \\{U}}. To analyze the above Gaussian chaos, a powerful tool from empirical process theory is the decoupling technique. In particular, we apply the following decoupling inequality obtained by Arcones and Gin\u00e9 (1993) (see also Theorem 2.5 of Krahmer et al. (2014)). Theorem 24 (Arcones-Gin\u00e9 Decoupling Inequality) Let {gi}1\u2264i\u2264n be a sequence of independent standard Gaussian variables and let {g\u2032 i}1\u2264i\u2264n be an independent copy of {gi}1\u2264i\u2264n. Let B be a collection of n\u00d7n symmetric matrices. Then for all p \u22651, there exists an absolute constant C such that E sup B\u2208B \f \f \f \f X 1\u2264j\u0338=k\u2264n Bjkgjgk + n X j=1 Bjj(g2 j \u22121) \f \f \f \f p \u2264CpE sup B\u2208B \f \f \f \f X 1\u2264j,k\u2264n Bjkgjg\u2032 k \f \f \f \f p . From Theorem 24 and the fact that, for any given W \u2208C \\ {U}, Evec(Z)\u22a4DWvec(Z) = 0, we have E sup D\u2208P(C,U) [vec(Z)\u22a4Dvec(Z)] \u2264CE sup D\u2208P(C,U) [vec(Z)\u22a4Dvec(Z\u2032)] (39) where Z\u2032 is an independent copy of Z. The upper bound of the right-hand side of (39) can be obtained by using a generic chaining argument developed by Talagrand (2014). To state the result, we make the following de\ufb01nitions that characterize the complexity of a set in a metric space. De\ufb01nition 25 (admissible sequence) Given a set T in the metric space (S, d), an admissible sequence is an increasing sequence {An} of partitions of T such that |A0| = 1 and |An| \u22642^(2^n) for n \u22651.
De\ufb01nition 26 (\u03b3\u03b1(T, d)) Given \u03b1 > 0 and a set T in the metric space (S, d), we de\ufb01ne \u03b3\u03b1(T, d) = inf supt\u2208T P n\u22650 2n/\u03b1diam(An(t)), where An(t) is the unique element of An which contains t and the in\ufb01mum is taken over all admissible sequences. The following theorem from (Talagrand, 2014, pp. 246) provides an important upper bound of the general decoupled Gaussian chaos of order 2. Theorem 27 (Talagrand (2014)) Let g, g\u2032 \u2208Rn be independent standard Gaussian vectors, and Q = {qij}1\u2264i,j\u2264n \u2208Rn\u00d7n. Given a set T \u2282Rn\u00d7n equipped with two distances d\u221e(Q1, Q2) = \u2225Q1 \u2212Q2\u2225and d2(Q1, Q2) = \u2225Q1 \u2212Q2\u2225F , E sup Q\u2208T g\u22a4Qg\u2032 \u2264L(\u03b31(T, d\u221e) + \u03b32(T, d2)), for some absolute constant L \u22650. A direct consequence of Theorem 27 is E sup D\u2208P(C,U) [vec(Z)\u22a4Dvec(Z\u2032)] \u2264C\u03c32(\u03b31(P(C, U), d\u221e) + \u03b32(P(C, U), d2)). (40) Our next lemma obtains estimates of the functionals \u03b31(P(C, U), d\u221e) and \u03b32(P(C, U), d2). Lemma 28 Let T (C, U) = {GW \u2208Rp1\u00d7p1 : W \u2208C \\ {U}} be equipped with distances d\u221e and d2 de\ufb01ned in Theorem 27. It holds that \u03b31(P(C, U), d\u221e) \u2264C1 Z \u221e 0 log N(T (C, U), d2, \u03f5)d\u03f5, (41) \u03b32(P(C, U), d2) \u2264C2 \u221ap2 Z \u221e 0 p log N(T (C, U), d2, \u03f5)d\u03f5. (42) Combining the above results, we have E sup W\u2208C tr(Z\u22a4GWZ) \u2272\u03c32\u221ap2 Z \u221e 0 p log N(T (C, U), d2, \u03f5)d\u03f5+\u03c32 Z \u221e 0 log N(T (C, U), d2, \u03f5)d\u03f5. (43) 20 \fOptimal Structured Principal Subspace Estimation Part III. 
By (31) (35) (36) and (43), we have, for any (\u0393, U, V) \u2208Y(C, t, p1, p2, r), whenever t \u2273\u03c3D\u2032(T (C, U), d2)/D(T (C, U), d2), E\u2225b U b U\u22a4\u2212UU\u22a4\u2225F \u2272\u03c3\u03bb1D(T (C, U), d2) \u03bb2 r + \u03c32\u221ap2D(T (C, U), d2) + \u03c32D\u2032(T (C, U), d2) \u03bb2 r \u2272\u03c3\u2206(C) p t2 + \u03c32p2 t2 . The \ufb01nal result then follows by noticing the trivial upper bound of diam(C). Proof of Theorem 10. We \ufb01rst state a useful lemma (Lemma 3 in Cai et al. (2013)). Lemma 29 Let \u03a3 = \u03c32Ip + U\u0393U\u22a4where U \u2208O(p, r) and \u0393 = diag(\u03bb1, ..., \u03bbr). Then for any W \u2208O(p, r), we have \u03bbr 2 \u2225UU\u22a4\u2212WW\u22a4\u22252 F \u2264\u27e8\u03a3, UU\u22a4\u2212WW\u22a4\u27e9\u2264\u03bb1 2 \u2225UU\u22a4\u2212 WW\u22a4\u22252 F . Note that Y = X\u03931/2U\u22a4+ Z \u2208Rn\u00d7p where \u03931/2 = diag(\u03bb1/2 1 , ..., \u03bb1/2 r ), X \u2208Rn\u00d7r has i.i.d. entries from \u223cN(0, 1), and Z has i.i.d. entries from N(0, \u03c32). We can write \u02c6 \u03a3 = 1 nY\u22a4Y \u2212\u00af Y \u00af Y \u22a4= 1 n(U\u03931/2X\u22a4X\u03931/2U\u22a4+ Z\u22a4X\u03931/2U\u22a4+ U\u03931/2X\u22a4Z + Z\u22a4Z) \u2212(U\u03931/2 \u00af X \u00af X\u22a4\u03931/2U\u22a4+ U\u03931/2 \u00af X \u00af Z\u22a4+ \u00af Z \u00af X\u22a4\u03931/2U\u22a4+ \u00af Z \u00af Z\u22a4), where \u00af X = 1 n Pn i=1 Xi \u2208Rr and \u00af Z = 1 n Pn i=1 Zi \u2208Rp. Now since tr( b U\u22a4\u02c6 \u03a3 b U) \u2265tr(U\u22a4\u02c6 \u03a3U), or equivalently \u27e8\u02c6 \u03a3, UU\u22a4\u2212b U b U\u22a4\u27e9\u22640, we have \u2225b U b U\u22a4\u2212UU\u22a4\u22252 F \u22642 \u03bbr \u27e8\u03a3 \u2212\u02c6 \u03a3, UU\u22a4\u2212b U b U\u22a4\u27e9. 
Hence,
$$\|\hat U\hat U^\top - UU^\top\|_F^2 \le \frac{2}{\lambda_r}\Big[\langle n^{-1}Z^\top X\Gamma^{1/2}U^\top,\ \hat U\hat U^\top - UU^\top\rangle + \langle n^{-1}U\Gamma^{1/2}X^\top Z,\ \hat U\hat U^\top - UU^\top\rangle$$
$$+\ \langle n^{-1}U\Gamma^{1/2}X^\top X\Gamma^{1/2}U^\top - U\Gamma U^\top,\ \hat U\hat U^\top - UU^\top\rangle + \langle n^{-1}Z^\top Z - I_p,\ \hat U\hat U^\top - UU^\top\rangle$$
$$-\ \langle U\Gamma^{1/2}\bar X\bar X^\top\Gamma^{1/2}U^\top,\ \hat U\hat U^\top - UU^\top\rangle - \langle U\Gamma^{1/2}\bar X\bar Z^\top,\ \hat U\hat U^\top - UU^\top\rangle$$
$$-\ \langle \bar Z\bar X^\top\Gamma^{1/2}U^\top,\ \hat U\hat U^\top - UU^\top\rangle - \langle \bar Z\bar Z^\top,\ \hat U\hat U^\top - UU^\top\rangle\Big] \equiv \frac{2}{\lambda_r}(H_1 + H_2 + H_3 + H_4 - H_5 - H_6 - H_7 - H_8).$$
To control $H_1$, using the same notations as in (30), we have
$$H_1 \le \frac1n\|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}X^\top Z G_W).$$

Cai, Li and Ma

Similarly, it holds that
$$H_2 \le \frac1n\|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}(Z^\top X\Gamma^{1/2}U^\top G_W),$$
$$H_3 \le \langle \Gamma^{1/2}(n^{-1}X^\top X - I_r)\Gamma^{1/2},\ U^\top\hat U\hat U^\top U - I_r\rangle \le \|\Gamma^{1/2}(n^{-1}X^\top X - I_r)\Gamma^{1/2}\|\cdot|\operatorname{tr}(U^\top\hat U\hat U^\top U - I_r)| \le \frac{\lambda_1}{2}\|n^{-1}X^\top X - I_r\|\,\|UU^\top - \hat U\hat U^\top\|_F^2,$$
$$H_4 \le \|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}((n^{-1}Z^\top Z - I_p)G_W),$$
$$H_5 \le \|\Gamma^{1/2}\bar X\bar X^\top\Gamma^{1/2}\|\cdot|\operatorname{tr}(U^\top\hat U\hat U^\top U - I_r)| \le \frac{\lambda_1}{2}\|\bar X\bar X^\top\|\,\|UU^\top - \hat U\hat U^\top\|_F^2,$$
$$H_6 \le \|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}\bar X\bar Z^\top G_W),\qquad H_7 \le \|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z^\top\bar X\Gamma^{1/2}U^\top G_W),$$
$$H_8 \le \|\hat U\hat U^\top - UU^\top\|_F\cdot\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z\bar Z^\top G_W).$$
Combining the above inequalities, we have
$$\|\hat U\hat U^\top - UU^\top\|_F \le \frac{2}{\lambda_r\big(1 - \frac{\lambda_1}{\lambda_r}\|n^{-1}X^\top X - I_r\| - \frac{\lambda_1}{\lambda_r}\|\bar X\bar X^\top\|\big)}\Big(n^{-1}\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}X^\top ZG_W) + n^{-1}\sup_{W\in\mathcal C}\operatorname{tr}(Z^\top X\Gamma^{1/2}U^\top G_W)$$
$$+ \sup_{W\in\mathcal C}\operatorname{tr}((n^{-1}Z^\top Z - I_p)G_W) + \sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}\bar X\bar Z^\top G_W) + \sup_{W\in\mathcal C}\operatorname{tr}(\bar Z^\top\bar X\Gamma^{1/2}U^\top G_W) + \sup_{W\in\mathcal C}\operatorname{tr}(\bar Z\bar Z^\top G_W)\Big) \tag{44}$$
The rest of the proof is separated into four parts, with the first three parts controlling the right-hand side of inequality (44), and the last part deriving the final risk upper bound.

Part I. Note that
$$\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}X^\top ZG_W) = \sup_{W\in\mathcal C}\operatorname{tr}(X^\top ZG_WU\Gamma^{1/2}) \le \lambda_1^{1/2}\sup_{W\in\mathcal C}\operatorname{tr}(ZG_WUX^\top/\|X\|)\,\|X\| \le \lambda_1^{1/2}\sup_{G\in T_0(\mathcal C,U,X)}\langle Z^\top, G\rangle\,\|X\|,$$
where $T_0(\mathcal C,U,X) = \big\{\frac{G_WUX^\top}{\|X\|} : W\in\mathcal C\setminus\{U\}\big\}$. By Theorem 22, we have
$$E\Big[\sup_{G\in T_0(\mathcal C,U,X)}\langle Z^\top, G\rangle\ \Big|\ X\Big] \le C\sigma\int_0^\infty\sqrt{\log N(T_0(\mathcal C,U,X), d_2, \epsilon)}\,d\epsilon.$$
For any $G_1, G_2 \in T_0(\mathcal C,U,X)$, without loss of generality, if we assume $G_1 = \|X\|^{-1}G_{W_1}UX^\top$ and $G_2 = \|X\|^{-1}G_{W_2}UX^\top$ where $W_1, W_2 \in \mathcal C\setminus\{U\}$, then
$$d_2(G_1, G_2) \le \|G_{W_1} - G_{W_2}\|_F\,\|U\| \le \|G_{W_1} - G_{W_2}\|_F = d_2(G_{W_1}, G_{W_2}). \tag{45}$$

Optimal Structured Principal Subspace Estimation

Again, recalling the set $T(\mathcal C, U)$ defined in the proof of Theorem 8, by Lemma 23 we have $\log N(T_0(\mathcal C,U,X), d_2, \epsilon) \le \log N(T(\mathcal C,U), d_2, \epsilon)$, which implies
$$E\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}X^\top ZG_W) \le C\lambda_1^{1/2}\,E\|X\|\,\sigma\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon. \tag{46}$$
Now by Theorem 5.32 of Vershynin (2010), we have $E\|X\| \le \sqrt n + \sqrt r$, so
$$E\,n^{-1}\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}X^\top ZG_W) \le C\lambda_1^{1/2}\sigma\big(1/\sqrt n + \sqrt r/n\big)\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon. \tag{47}$$
Similarly, we can derive
$$E\,n^{-1}\sup_{W\in\mathcal C}\operatorname{tr}(Z^\top X\Gamma^{1/2}U^\top G_W) \le C\lambda_1^{1/2}\sigma\big(1/\sqrt n + \sqrt r/n\big)\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon. \tag{48}$$
On the other hand, since $\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}\bar X\bar Z^\top G_W) = \sup_{W\in\mathcal C}\operatorname{tr}(\bar X\bar Z^\top G_WU\Gamma^{1/2}) \le \lambda_1^{1/2}\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z^\top G_WU\bar X/\|\bar X\|_2)\,\|\bar X\|_2 \le \lambda_1^{1/2}\|\bar X\|_2\sup_{g\in T_1}\langle\bar Z, g\rangle$, where $T_1(\mathcal C,U,X) = \big\{\frac{G_WU\bar X}{\|\bar X\|_2} : W\in\mathcal C\setminus\{U\}\big\}$ is equipped with the Euclidean $\ell_2$ distance, by Theorem 22 we have
$$E\Big[\sup_{g\in T_1(\mathcal C,U,X)}\langle\bar Z, g\rangle\ \Big|\ X\Big] \le \frac{C\sigma}{\sqrt n}\int_0^\infty\sqrt{\log N(T_1(\mathcal C,U,X), d_2, \epsilon)}\,d\epsilon.$$
Now for any $g_1, g_2\in T_1(\mathcal C,U,X)$, without loss of generality, if we assume $g_1 = \|\bar X\|_2^{-1}G_{W_1}U\bar X$ and $g_2 = \|\bar X\|_2^{-1}G_{W_2}U\bar X$, where $W_1,W_2\in\mathcal C\setminus\{U\}$, then $\|g_1 - g_2\|_2 \le \|\bar X\|_2^{-1}\|G_{W_1}U\bar X - G_{W_2}U\bar X\|_2 \le d_\infty(G_{W_1}, G_{W_2}) \le d_2(G_{W_1}, G_{W_2})$. Lemma 23 implies $\log N(T_1(\mathcal C,U,X), d_2, \epsilon) \le \log N(T(\mathcal C,U), d_2, \epsilon)$, which along with the fact that $E\|\bar X\|_2 \lesssim \sqrt{r/n}$ implies
$$E\sup_{W\in\mathcal C}\operatorname{tr}(U\Gamma^{1/2}\bar X\bar Z^\top G_W) \le \frac{C\sigma\sqrt r\,\lambda_1^{1/2}}{n}\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon. \tag{49}$$
Similarly, we have
$$\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z^\top\bar X\Gamma^{1/2}U^\top G_W) \le \frac{C\sigma\sqrt r\,\lambda_1^{1/2}}{n}\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon. \tag{50}$$
Part II.
Note that $\operatorname{tr}((n^{-1}Z^\top Z - \sigma^2 I_p)G_W) = \operatorname{tr}(n^{-1}Z^\top ZG_W) - \sigma^2\operatorname{tr}(G_W) = n^{-1}\mathrm{vec}(Z)^\top D_W\,\mathrm{vec}(Z)$, where $D_W$ is defined in (37). By the same chaining argument as in Part II of the proof of Theorem 8, we have
$$E\sup_{W\in\mathcal C}\operatorname{tr}((n^{-1}Z^\top Z - I_p)G_W) \lesssim \frac{\sigma^2}{\sqrt n}\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon + \frac{\sigma^2}{n}\int_0^\infty\log N(T(\mathcal C,U), d_2, \epsilon)\,d\epsilon \tag{51}$$
Similarly, since $\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z\bar Z^\top G_W) = \sup_{W\in\mathcal C}\bar Z^\top G_W\bar Z$, we also have
$$E\sup_{W\in\mathcal C}\operatorname{tr}(\bar Z\bar Z^\top G_W) \lesssim \frac{\sigma^2}{n}\int_0^\infty\sqrt{\log N(T(\mathcal C,U), d_2, \epsilon)}\,d\epsilon + \frac{\sigma^2}{n}\int_0^\infty\log N(T(\mathcal C,U), d_2, \epsilon)\,d\epsilon. \tag{52}$$
Part III. Define the event $E = \{\|n^{-1}X^\top X - I_r\| \le 1/(4L^2),\ \|\bar X\bar X^\top\| \le 1/(4L^2)\}$, where $L$ is the constant in $Z(\mathcal C, t, p, r)$. By Proposition D.1 in the Supplementary Material of Ma (2013),
$$P\big(\|n^{-1}X^\top X - I_r\| \le 2(\sqrt{r/n} + t) + (\sqrt{r/n} + t)^2\big) \ge 1 - 2e^{-nt^2/2},$$
which implies $P(\|n^{-1}X^\top X - I_r\| \le 1/(4L^2)) \ge 1 - 2e^{-cn}$. In addition, since $\|\bar X\bar X^\top\| \le \|\bar X\|_2^2 = \frac1n\sum_{i=1}^r g_i^2$, where $g_i \overset{\mathrm{i.i.d.}}{\sim} N(0,1)$, it follows from the concentration inequality for independent exponential variables (Vershynin, 2010, Proposition 5.16) that $P(\|\bar X\bar X^\top\| \le 1/(4L^2)) \ge 1 - 2e^{-cn}$. Thus, it follows that
$$P(E^c) \le P(\|n^{-1}X^\top X - I_r\| \ge 1/(4L^2)) + P(\|\bar X\bar X^\top\| \ge 1/(4L^2)) \le 4e^{-cn}.$$
Part IV. Note that $E\,d(U, \hat U) = E[d(U, \hat U)\,|\,E] + E[d(U, \hat U)\,|\,E^c]$.
It follows from (44) and the inequalities (47)–(52) from Parts I and II that
$$\sup_{(\Gamma,U)\in Z(\mathcal C,t,p,t)} E[d(U,\hat U)\,|\,E] \le \frac Ct\left[\sqrt t\,\sigma\Big(\frac1{\sqrt n} + \frac{\sqrt r}{n}\Big)D(T(\mathcal C,U), d_2) + \frac{\sigma^2 D(T(\mathcal C,U), d_2)}{\sqrt n} + \frac{\sigma^2 D'(T(\mathcal C,U), d_2)}{n}\right] \le C\sigma\Delta(\mathcal C)\sqrt{\frac{1 + r/n}{t}} + \frac{\sigma^2}{\sqrt n\,t},$$
where the last inequality holds whenever $t/\sigma^2 \gtrsim \sup_{U\in\mathcal C}[D'^2(T(\mathcal C,U), d_2)/D^2(T(\mathcal C,U), d_2)]$. On the other hand, by Part III, $E[d(U,\hat U)\,|\,E^c] \le \mathrm{diam}(\mathcal C)\cdot P(E^c) \le C\sqrt r\,e^{-cn}$. Consequently, as long as $n \gtrsim \max\{\log\frac{t}{\sigma^2}, r\}$ and $t/\sigma^2 \gtrsim \sup_{U\in\mathcal C}[D'^2(T(\mathcal C,U), d_2)/D^2(T(\mathcal C,U), d_2)]$, we have
$$\sup_{(\Gamma,U)\in Z(\mathcal C,t,p,t)} E\,d(U,\hat U) \le \frac{C\sigma\Delta(\mathcal C)}{\sqrt t} + \frac{\sigma^2}{\sqrt n\,t}.$$
The final result then follows by noticing the trivial upper bound of $\mathrm{diam}(\mathcal C)$.

A.2 Minimax Lower Bounds

Proof of Theorem 5. The proof is divided into two parts, the strong signal regime ($t^2 \ge \sigma^2 p_2/4$) and the weak signal regime ($t^2 < \sigma^2 p_2/4$).

Part I. Strong Signal Regime. The following general lower bound for testing multiple hypotheses (Tsybakov, 2009) is needed.

Lemma 30 (Tsybakov (2009)) Assume that $M \ge 2$ and suppose that $(\Theta, d)$ contains elements $\theta_0, \theta_1, \ldots, \theta_M$ such that: (i) $d(\theta_j, \theta_k) \ge 2s > 0$ for any $0 \le j < k \le M$; (ii) it holds that $\frac1M\sum_{j=1}^M D(P_j, P_0) \le \alpha\log M$ with $0 < \alpha < 1/8$ and $P_j = P_{\theta_j}$ for $j = 0, 1, \ldots, M$, where $D(P_j, P_0) = \int \log\frac{dP_j}{dP_0}\,dP_j$ is the KL divergence between $P_j$ and $P_0$. Then
$$\inf_{\hat\theta}\sup_{\theta\in\Theta} P_\theta\big(d(\hat\theta, \theta) \ge s\big) \ge \frac{\sqrt M}{1+\sqrt M}\Big(1 - 2\alpha - \sqrt{\frac{2\alpha}{\log M}}\Big).$$
Let $V_0 \in O(p_2, r)$ be fixed and $U_0 \in \mathcal C$. Denote the $\epsilon$-ball $B(U_0, \epsilon) = \{U \in O(p_1, r) : d(U, U_0) \le \epsilon\}$.
For some $\delta < \epsilon$, we consider the local $\delta$-packing set $G_\delta = G(B(U_0,\epsilon)\cap\mathcal C, d, \delta)$ such that for any pair $U, U'$ in $G_\delta$, it holds that $d(U, U') = \|UU^\top - U'U'^\top\|_F \ge \delta$. We denote the elements of $G_\delta$ as $U_i$ for $1 \le i \le |G_\delta|$. Lemma 20 shows that, for any $i$, we can find $O_i \in O_r$ such that $\|U_0 - U_iO_i\|_F \le d(U_0, U_i) \le \epsilon$. Set $U_i' = U_iO_i$ and denote $G_\delta' = \{U_i'\}$. For given $t > 0$, we consider the subset
$$X(t, \epsilon, \delta, U_0, V_0) = \{(\Gamma, U, V) : U \in G_\delta',\ V = V_0,\ \Gamma = tI_r\} \subset Y(\mathcal C, t, p_1, p_2, r).$$
In particular, the above construction admits $|X(t, \epsilon, \delta, U_0, V_0)| = |G_\delta|$. Moreover, for any $(\Gamma, U_i, V_0) \in X(t, \epsilon, \delta, U_0, V_0)$, let $P_i$ be the probability measure of $Y = U_i'\Gamma V_0^\top + Z$ where $Z$ has i.i.d. entries from $N(0, \sigma^2)$. We have, for $1 \le i \ne j \le |G_\delta|$,
$$D(P_i, P_j) = \frac{\|(U_i' - U_j')\Gamma V_0^\top\|_F^2}{2\sigma^2} \le \frac{t^2\|U_i' - U_j'\|_F^2}{2\sigma^2} \le \frac{2t^2\epsilon^2}{\sigma^2}.$$
Now set $\epsilon = \epsilon_0$ and $\delta = \alpha\epsilon$ for some $\alpha \in (0, 1)$. By assumption,
$$\Big(\frac{c\sigma^2}{t^2}\log|G_{\alpha\epsilon_0}| \wedge \mathrm{diam}^2(\mathcal C)\Big) \le \epsilon_0^2 \le \Big(\frac{\sigma^2}{32t^2}\log|G_{\alpha\epsilon_0}| \wedge \mathrm{diam}^2(\mathcal C)\Big) \tag{53}$$
for some $c \in (0, 1/32)$, and it holds that $D(P_i, P_j) \le \frac1{16}\log|G_{\alpha\epsilon_0}|$. Now by Lemma 30, it holds that, for $\theta = (\Gamma, U, V)$,
$$\inf_{\hat U}\sup_{\theta\in X(t,\epsilon,\delta,U_0,V_0)} P_\theta\big(d(\hat U, U) \ge \alpha\epsilon_0/2\big) \ge \frac{\sqrt{|G_{\alpha\epsilon_0}|}}{1+\sqrt{|G_{\alpha\epsilon_0}|}}\Big(\frac78 - \frac1{\sqrt{8\log|G_{\alpha\epsilon_0}|}}\Big).$$
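The KL computation for the Gaussian location model above reduces to $D(P_i,P_j) = \|M_i - M_j\|_F^2/(2\sigma^2)$ for mean matrices $M_i = U_i'\Gamma V_0^\top$. The following sketch checks this bound numerically for random orthonormal factors (the matrix sizes and the instances of $U_i', U_j', V_0$ are hypothetical, generated via QR factorizations); with $\Gamma = tI_r$ and orthonormal $V_0$ the bound in fact holds with equality.

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2, r, sigma, t = 8, 6, 2, 1.5, 3.0

# Random orthonormal factors (hypothetical stand-ins for U'_i, U'_j, V_0).
Ui = np.linalg.qr(rng.standard_normal((p1, r)))[0]
Uj = np.linalg.qr(rng.standard_normal((p1, r)))[0]
V0 = np.linalg.qr(rng.standard_normal((p2, r)))[0]
Gamma = t * np.eye(r)

# KL between N(M_i, sigma^2) and N(M_j, sigma^2) entrywise-Gaussian models:
# ||M_i - M_j||_F^2 / (2 sigma^2).
Mi, Mj = Ui @ Gamma @ V0.T, Uj @ Gamma @ V0.T
kl = np.linalg.norm(Mi - Mj, 'fro') ** 2 / (2 * sigma ** 2)

# The bound used in the proof: t^2 ||U'_i - U'_j||_F^2 / (2 sigma^2).
upper = t ** 2 * np.linalg.norm(Ui - Uj, 'fro') ** 2 / (2 * sigma ** 2)
```

Since $\|(U_i'-U_j')\Gamma V_0^\top\|_F \le \|U_i'-U_j'\|_F\,\|\Gamma V_0^\top\|$ and $\|\Gamma V_0^\top\| = t$, the bound is immediate; equality here is an artifact of taking $\Gamma = tI_r$ exactly.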
By Markov's inequality, we have
$$\inf_{\hat U}\sup_{\theta\in X(t,\epsilon,\delta,U_0,V_0)} E_\theta\,d(\hat U, U) \ge \frac{\alpha\epsilon_0\sqrt{|G_{\alpha\epsilon_0}|}}{2(1+\sqrt{|G_{\alpha\epsilon_0}|})}\Big(\frac78 - \frac1{\sqrt{8\log|G_{\alpha\epsilon_0}|}}\Big) \ge C\alpha\epsilon_0,$$
for some $C > 0$ as long as $|G_{\alpha\epsilon_0}| \ge 2$. Therefore, it holds that
$$\inf_{\hat U}\sup_{\theta\in Y(\mathcal C,t,p_1,p_2,r)} E_\theta\,d(\hat U, U) \ge \inf_{\hat U}\sup_{\theta\in X(t,\epsilon,\delta,U_0,V_0)} E_\theta\,d(\hat U, U) \gtrsim \big(\sigma t^{-1}\sqrt{\log|G_{\alpha\epsilon_0}|} \wedge \mathrm{diam}(\mathcal C)\big) \gtrsim \Big(\frac{\sigma\sqrt{t^2 + \sigma^2p_2}}{t^2}\sqrt{\log|G_{\alpha\epsilon_0}|} \wedge \mathrm{diam}(\mathcal C)\Big).$$
Part II. Weak Signal Regime. The proof relies on the following generalized Fano's method, obtained by Ma et al. (2019), about testing multiple composite hypotheses.

Lemma 31 (Generalized Fano's Method) Let $\mu_0, \mu_1, \ldots, \mu_M$ be $M+1$ priors on the parameter space $\Theta$ of the family $\{P_\theta\}$, and let $P_j$ be the probability measures on $(\mathcal X, \mathcal A)$ such that
$$P_j(S) = \int P_\theta(S)\,\mu_j(d\theta), \quad \forall S \in \mathcal A,\ j = 0, 1, \ldots, M.$$
Let $F : \Theta \to (\mathbb R^d, d)$. If (i) there exist some sets $B_0, B_1, \ldots, B_M \subset \mathbb R^d$ such that $d(B_i, B_j) \ge 2s$ for some $s > 0$ for all $0 \le i \ne j \le M$ and $\mu_j(\theta \in \Theta : F(\theta) \in B_j) = 1$; and (ii) it holds that $\frac1M\sum_{j=1}^M D(P_j, P_0) \le \alpha\log M$ with $0 < \alpha < 1/8$, then
$$\inf_{\hat F}\sup_{\theta\in\Theta} P_\theta\big(d(\hat F, F(\theta)) \ge s\big) \ge \frac{\sqrt M}{1+\sqrt M}\Big(1 - 2\alpha - \sqrt{\frac{2\alpha}{\log M}}\Big).$$
To use the above lemma, we need to construct a collection of priors over the set $Y(\mathcal C, t, p_1, p_2, r)$. Specifically, recall the previously constructed $\delta$-packing set $G_\delta = \{U_i : 1 \le i \le |G_\delta|\}$. Inspired by Cai and Zhang (2018), we consider the prior probability measure $\mu_i$ over $Y(\mathcal C, t, p_1, p_2, r)$, whose definition is given as follows.
Let $W$ be a random matrix on $\mathbb R^{p_2\times r}$, whose probability density is given by
$$p(W) = C\Big(\frac{p_2}{2\pi}\Big)^{rp_2/2}\exp(-p_2\|W\|_F^2/2)\cdot\mathbf 1\{1/2 \le \lambda_{\min}(W) \le \lambda_{\max}(W) \le 2\},$$
where $C$ is a normalizing constant; then, if we denote $\tilde U_i\tilde\Gamma_i\tilde V_i^\top$ as the SVD of $tU_iW^\top \in \mathbb R^{p_1\times p_2}$ where $U_i \in G_\delta$ and $W \sim p(W)$, then $\mu_i$ is defined as the joint distribution of $(\tilde\Gamma_i, \tilde U_i, \tilde V_i)$. By definition of $U_i$, one can easily verify that $\mu_i$ is a well-defined probability measure on $Y(\mathcal C, t, p_1, p_2, r)$. Note that, for any $\theta_i = (\tilde\Gamma_i, \tilde U_i, \tilde V_i) \in \mathrm{supp}(\mu_i)$ and $\theta_j = (\tilde\Gamma_j, \tilde U_j, \tilde V_j) \in \mathrm{supp}(\mu_j)$ with $1 \le i \ne j \le |G_\delta|$, it holds that $d(\tilde U_i, \tilde U_j) = d(U_i, U_j) \ge \delta$. Consequently, the joint distribution of $Y = U\Gamma V^\top + Z$ with $(\Gamma, U, V) \sim \mu_i$ and $Z_{ij} \sim N(0, \sigma^2)$ can be expressed as
$$P_i(Y) = C\int_{1/2\le\lambda_{\min}(W)\le\lambda_{\max}(W)\le 2}\frac{\sigma^{-p_1p_2}}{(2\pi)^{p_1p_2/2}}\exp\big(-\|Y - tU_iW^\top\|_F^2/(2\sigma^2)\big)\times\Big(\frac{p_2}{2\pi}\Big)^{rp_2/2}\exp(-p_2\|W\|_F^2/2)\,dW,$$
and it remains to control the pairwise KL divergence $D(P_i, P_j)$ for any $1 \le i \ne j \le |G_\delta|$. This is done by the next lemma, whose proof, which is involved, is delayed to Section C.

Lemma 32 Under the assumption of the theorem, for any $1 \le i \ne j \le |G_\delta|$, we have $D(P_i, P_j) \le \frac{C_1t^4d^2(U_i,U_j)}{\sigma^2(4t^2+\sigma^2p_2)} + C_2$, where $C_1, C_2 > 0$ are some uniform constants and $\{U_i\}$ are elements of $G_\delta$.

Again, set $\epsilon = \epsilon_0$ and $\delta = \alpha\epsilon$ for some $\alpha \in (0, 1)$. By assumption,
$$\Big(\frac{c\sigma^2(t^2+\sigma^2p_2)}{t^4}\log|G_\delta| \wedge \mathrm{diam}(\mathcal C)\Big) \le \epsilon_0^2 \le \Big(\frac{\sigma^2(t^2+\sigma^2p_2)}{640t^4}\log|G_\delta| \wedge \mathrm{diam}(\mathcal C)\Big),$$
for some $c \in (0, 1/640]$.
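The truncation event $\{1/2 \le \lambda_{\min}(W) \le \lambda_{\max}(W) \le 2\}$ in the prior above carries almost all of the Gaussian mass when $p_2 \gg r$, since the singular values of a wide matrix with i.i.d. $N(0, 1/p_2)$ entries concentrate near 1. A minimal Monte Carlo sketch of this fact, with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(6)
r, p2, trials = 5, 400, 200   # hypothetical dimensions with p2 >> r

hits = 0
for _ in range(trials):
    # Entries N(0, 1/p2), as in the prior density p(W) before truncation.
    W = rng.standard_normal((r, p2)) / np.sqrt(p2)
    s = np.linalg.svd(W, compute_uv=False)
    hits += (s.min() >= 0.5) and (s.max() <= 2.0)

rate = hits / trials   # empirical P(1/2 <= lambda_min <= lambda_max <= 2)
```

Since the singular values lie in $[1-\sqrt{r/p_2}-t,\ 1+\sqrt{r/p_2}+t]$ with probability $1 - 2e^{-p_2t^2/2}$, the empirical rate should be essentially 1, consistent with the normalizing constant $C$ being close to 1.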
It then follows that $D(P_i, P_j) \le C\log|G_{\alpha\epsilon_0}| + C_2$. Now let $X'(t, \epsilon, \delta, U_0) = \bigcup_{1\le i\le|G_{\alpha\epsilon_0}|}\mathrm{supp}(\mu_i)$. By Lemma 31 and Markov's inequality, we have, for $\theta = (\Gamma, U, V)$,
$$\inf_{\hat U}\sup_{\theta\in X'(t,\epsilon_0,\alpha\epsilon_0,U_0)} E_\theta\,d(\hat U, U) \ge \frac{\alpha\epsilon_0\sqrt{|G_{\alpha\epsilon_0}|}}{2(1+\sqrt{|G_{\alpha\epsilon_0}|})}\Big(\frac78 - \frac1{\sqrt{8\log|G_{\alpha\epsilon_0}|}}\Big) \ge C\alpha\epsilon_0,$$
for some $C > 0$ as long as $|G_{\alpha\epsilon_0}| \ge 2$. Hence,
$$\inf_{\hat U}\sup_{\theta\in Y(\mathcal C,t,p_1,p_2,r)} E_\theta\,d(\hat U, U) \gtrsim \inf_{\hat U}\sup_{\theta\in X'(t,\epsilon_0,\alpha\epsilon_0,U_0)} E_\theta\,d(\hat U, U) \gtrsim \Big(\frac{\sigma\sqrt{4t^2+\sigma^2p_2}}{t^2}\sqrt{\log|G_{\alpha\epsilon_0}|} \wedge \mathrm{diam}(\mathcal C)\Big).$$
Proof of Theorem 9. For some $U_0 \in \mathcal C$, similar to the proof of Theorem 5, we consider the $\delta$-packing set $G_\delta = G(B(U_0,\epsilon)\cap\mathcal C, d, \delta)$, where for any $U_i, U_j \in G_\delta$, $d(U_i, U_j) = \|U_iU_i^\top - U_jU_j^\top\|_F \ge \delta$. Then, for given $t > 0$, we consider the subset $Z'(t, \epsilon, \delta, U_0) = \{(\Gamma, U) \in Z(\mathcal C, t, p, r) : U \in G_\delta,\ \Gamma = tI_r\}$, so that $|Z'(t, \epsilon, \delta, U_0)| = |G_\delta|$. Let $P_i$ be the joint probability measure of $Y_k \overset{\mathrm{i.i.d.}}{\sim} N(0, \Sigma_i)$ with $k = 1, \ldots, n$ and $\Sigma_i = tU_iU_i^\top + \sigma^2I_p$. We have, for $1 \le i \ne j \le |G_\delta|$,
$$D(P_i, P_j) = \frac n2\Big(\operatorname{tr}(\Sigma_j^{-1}\Sigma_i) - p + \log\frac{\det\Sigma_j}{\det\Sigma_i}\Big) = \frac n2\operatorname{tr}\Big(-\frac{t}{t+\sigma^2}U_iU_i^\top + \frac{t}{\sigma^2}U_jU_j^\top - \frac{t^2}{\sigma^2(t+\sigma^2)}U_iU_i^\top U_jU_j^\top\Big)$$
$$= \frac{nt^2}{2\sigma^2(\sigma^2+t)}\big(r - \|U_i^\top U_j\|_F^2\big) \le \frac{nt^2d^2(U_i, U_j)}{\sigma^2(\sigma^2+t)} \le \frac{nt^2\epsilon^2}{\sigma^2(\sigma^2+t)},$$
where the second equality follows from the Woodbury matrix identity and the second-to-last inequality follows from Lemma 20.
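The spiked-covariance KL computation above admits a direct numerical check: the closed form $\frac{nt^2}{2\sigma^2(\sigma^2+t)}(r - \|U_i^\top U_j\|_F^2)$ should agree with the generic Gaussian KL formula evaluated on $\Sigma_i = tU_iU_i^\top + \sigma^2I_p$. A sketch with hypothetical dimensions and random orthonormal $U_i, U_j$:

```python
import numpy as np
from numpy.linalg import inv, slogdet, qr, norm

rng = np.random.default_rng(3)
p, r, t, sigma, n = 10, 3, 2.0, 1.0, 50   # hypothetical parameters

Ui = qr(rng.standard_normal((p, r)))[0]
Uj = qr(rng.standard_normal((p, r)))[0]
Si = t * Ui @ Ui.T + sigma ** 2 * np.eye(p)
Sj = t * Uj @ Uj.T + sigma ** 2 * np.eye(p)

# KL between n i.i.d. samples of N(0, Si) and N(0, Sj):
# (n/2) * (tr(Sj^{-1} Si) - p + log det Sj - log det Si).
kl = n / 2 * (np.trace(inv(Sj) @ Si) - p + slogdet(Sj)[1] - slogdet(Si)[1])

# Closed form from the Woodbury-identity computation in the proof.
closed = n * t ** 2 / (2 * sigma ** 2 * (sigma ** 2 + t)) * (r - norm(Ui.T @ Uj, 'fro') ** 2)
```

The log-determinant terms cancel here because $\Sigma_i$ and $\Sigma_j$ share the same spectrum ($t+\sigma^2$ with multiplicity $r$, $\sigma^2$ with multiplicity $p-r$).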
Now let $\epsilon = \epsilon_0$ and $\delta = \alpha\epsilon$ for some $\alpha \in (0, 1)$. By assumption,
$$\Big(\frac{c\sigma^2(\sigma^2+t)}{nt^2}\log|G_{\alpha\epsilon_0}| \wedge \mathrm{diam}^2(\mathcal C)\Big) \le \epsilon_0^2 \le \Big(\frac{\sigma^2(\sigma^2+t)}{32nt^2}\log|G_{\alpha\epsilon_0}| \wedge \mathrm{diam}^2(\mathcal C)\Big),$$
for some $c \in (0, 1/32)$. It holds that $D(P_i, P_j) \le \frac1{16}\log|G_{\alpha\epsilon_0}|$. Now by Lemma 30, it holds that, for $\theta = (\Gamma, U)$,
$$\inf_{\hat U}\sup_{\theta\in Z'(t,\epsilon_0,\alpha\epsilon_0,U_0)} P_\theta\big(d(\hat U, U) \ge \alpha\epsilon_0/2\big) \ge \frac{\sqrt{|G_{\alpha\epsilon_0}|}}{1+\sqrt{|G_{\alpha\epsilon_0}|}}\Big(\frac78 - \frac1{\sqrt{8\log|G_{\alpha\epsilon_0}|}}\Big).$$
By Markov's inequality, as long as $|G_{\alpha\epsilon_0}| \ge 2$, we have $\inf_{\hat U}\sup_{\theta\in Z'(t,\epsilon_0,\alpha\epsilon_0,U_0)} E_\theta\,d(\hat U, U) \ge C\alpha\epsilon_0$ for some $C > 0$. Therefore, since $Z'(t, \epsilon_0, \alpha\epsilon_0, U_0) \subset Z(\mathcal C, t, p, r)$,
$$\inf_{\hat U}\sup_{\theta\in Z(\mathcal C,t,p,r)} R(\hat U, U) \ge \inf_{\hat U}\sup_{\theta\in Z'(t,\epsilon_0,\alpha\epsilon_0,U_0)} E_\theta\,d(\hat U, U) \gtrsim \Big(\frac{\sigma\sqrt{\sigma^2+t}}{t\sqrt n}\sqrt{\log|G_{\alpha\epsilon_0}|} \wedge \mathrm{diam}(\mathcal C)\Big).$$
Appendix B. Calculation of Metric Entropies

In this section, we prove the results in Section 5 by calculating metric entropies of some specific sets. The calculation relies on the following useful lemmas.

Lemma 33 (Varshamov-Gilbert Bound) Let $\Omega = \{0,1\}^n$ and $1 \le d \le n/4$. Then there exists a subset $\{\omega^{(1)}, \ldots, \omega^{(M)}\}$ of $\Omega$ such that $\|\omega^{(j)}\|_0 = d$ for all $1 \le j \le M$, $\|\omega^{(j)} - \omega^{(k)}\|_0 \ge \frac d2$ for $0 \le j < k \le M$, and $\log M \ge cd\log\frac nd$ where $c \ge 0.233$.

The proof of the above version of the Varshamov-Gilbert bound can be found, for example, in Lemma 4.10 of Massart (2007). The next two lemmas concern estimates of the covering/packing numbers of the orthogonal group.
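Lemma 33 can be illustrated with a small greedy construction. The sketch below (with hypothetical small $n$ and $d$) collects weight-$d$ binary vectors whose pairwise Hamming distances are at least $d/2$, and checks that the resulting code is at least as large as the Varshamov-Gilbert lower bound $\exp(0.233\,d\log(n/d))$; brute-force greedy selection is enough at this scale.

```python
import itertools
import math

n, dwt = 16, 4   # hypothetical ambient dimension and Hamming weight, dwt <= n/4

# Greedy code: represent a weight-d vector by its support set; the Hamming
# distance between two such vectors is the size of the symmetric difference.
code = []
for supp in itertools.combinations(range(n), dwt):
    v = set(supp)
    if all(len(v ^ w) >= dwt // 2 for w in code):
        code.append(v)

# Varshamov-Gilbert guarantee: log M >= c * d * log(n/d) with c >= 0.233.
vg_lower = math.exp(0.233 * dwt * math.log(n / dwt))
```

At this scale the greedy code is much larger than the guaranteed size, since any two distinct weight-4 supports already differ in at least two positions.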
Lemma 34 (Candès and Plan 2011) Define $\mathcal P_0 = \{\bar U\bar\Gamma\bar V^\top : \bar U, \bar V \in O(p, 2r),\ \|(\bar\Gamma_{ii})_{1\le i\le 2r}\|_2 = 1\}$. Then for any $\epsilon \in (0, \sqrt 2)$, there exists an $\epsilon$-covering set $H(\mathcal P_0, d_2, \epsilon)$ such that $|H(\mathcal P_0, d_2, \epsilon)| \le (c/\epsilon)^{2(2p+1)r}$ for some constant $c > 0$.

Lemma 35 For any $V \in O(k, r)$, identifying the subspace $\mathrm{span}(V)$ with its projection matrix $VV^\top$, define the metric on the Grassmannian manifold $G(k, r)$ by $\rho(VV^\top, UU^\top) = \|VV^\top - UU^\top\|_F$. Then for any $\epsilon \in (0, \sqrt{2(r\wedge(k-r))})$,
$$\Big(\frac{c_0}{\epsilon}\Big)^{r(k-r)} \le N(G(k, r), \rho, \epsilon) \le \Big(\frac{c_1}{\epsilon}\Big)^{r(k-r)},$$
where $N(E, \epsilon)$ is the $\epsilon$-covering number of $E$ and $c_0, c_1$ are absolute constants. Moreover, for any $V \in O(k, r)$ and any $\alpha \in (0, 1)$, it holds that
$$\Big(\frac{c_0}{\alpha c_1}\Big)^{r(k-r)} \le M(B(V, \epsilon), \rho, \alpha\epsilon) \le \Big(\frac{2c_1}{\alpha c_0}\Big)^{r(k-r)}.$$
Proof We only prove the entropy upper bound
$$M(B(V, \epsilon), d, \alpha\epsilon) \le \Big(\frac{2c_1}{\alpha c_0}\Big)^{r(k-r)}, \tag{54}$$
as the other results have been proved in Lemma 1 of Cai et al. (2013). Specifically, let $G_\epsilon$ be the $\epsilon$-packing set of $O(k, r)$. It then holds that
$$M(O(k, r), d, \alpha\epsilon) \ge \sum_{V\in G_\epsilon} M(B(V, \epsilon), d, \alpha\epsilon) \ge |G_\epsilon|\,M(B(V^*, \epsilon), d, \alpha\epsilon) = M(O(k, r), d, \epsilon)\,M(B(V^*, \epsilon), d, \alpha\epsilon)$$
for some $V^* \in O(k, r)$. Hence,
$$M(B(V^*, \epsilon), d, \alpha\epsilon) \le \frac{M(O(k, r), d, \alpha\epsilon)}{M(O(k, r), d, \epsilon)}.$$
By the equivalence between the packing and the covering numbers, it holds that
$$M(B(V^*, \epsilon), d, \alpha\epsilon) \le \frac{N(O(k, r), d, \alpha\epsilon/2)}{N(O(k, r), d, \epsilon)} \le \Big(\frac{2c_1}{\alpha c_0}\Big)^{r(k-r)},$$
where the last inequality follows from the first statement of the lemma. Then (54) holds since the metric $d$ is unitarily invariant.

The following lemma is an estimate of the Dudley entropy integral for the orthogonal group $O(p, r)$.

Lemma 36 For any given $U \in O(p, r)$, there exists some constant $C > 0$ such that $\int_0^\infty\sqrt{\log N(T(O(p, r), U), d_2, \epsilon)}\,d\epsilon \le C\sqrt{pr}$. Therefore, we have $\Delta^2(O(p, r)) \le Cpr$.

Proof By definition, any $G \in T(O(p, r), U)$ has rank at most $2r$, and if its SVD is $G = \bar U\bar\Gamma\bar V^\top$, then $\bar\Gamma$ is a diagonal matrix with nonnegative diagonal entries and Frobenius norm equal to one. Thus, if we define $\mathcal P_0 = \{\bar U\bar\Gamma\bar V^\top : \bar U, \bar V \in O(p, 2r),\ \|(\bar\Gamma_{ii})_{1\le i\le 2r}\|_2 = 1\}$, then by Lemma 23, $N(T(O(p, r), U), d_2, \epsilon) \le N(\mathcal P_0, d_2, \epsilon)$. By Lemma 34, we can calculate that
$$\int_0^\infty\sqrt{\log N(T(O(p, r), U), d_2, \epsilon)}\,d\epsilon \le \int_0^\infty\sqrt{\log N(\mathcal P_0, d_2, \epsilon)}\,d\epsilon \le C\sqrt{pr}\int_0^{\sqrt 2}\sqrt{\log(c/\epsilon)}\,d\epsilon \le C\sqrt{pr}. \tag{55}$$
The second statement follows directly from the definition of $\Delta^2(O(p, r))$.

B.1 Sparse PCA/SVD: Proof of Proposition 11 and Theorem 12

Matrix denoising model with $C_S(p_1, r, k)$, or sparse SVD. By Lemma 33, we can construct a subset $\Theta_\epsilon(k) \subset C_S(p_1, r, k)$ as follows. Let $\Omega_M = \{\omega^{(1)}, \ldots, \omega^{(M)}\} \subset \{0,1\}^{p_1-r-1}$ be the set obtained from Lemma 33 where $n = p_1 - r - 1$, $d = k/e < (p_1 - r - 1)/4$ and $M$ is the smallest integer such that $\log M \ge cd\log(n/d)$, i.e., $M = \lceil\exp(ck\log\frac{e(p_1-r-1)}{k})\rceil$.
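The packing/covering equivalence invoked above rests on the elementary fact that a maximal $\epsilon$-packing is automatically an $\epsilon$-covering: any point not in the packing must lie within $\epsilon$ of some packing point, or it could be added. A minimal numerical sketch on a hypothetical finite metric space (random points in the unit square):

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.random((400, 2))   # hypothetical finite metric space in the unit square
eps = 0.15

# Greedy maximal eps-packing: keep a point iff it is >= eps from all kept points.
centers = []
for x in pts:
    if all(np.linalg.norm(x - c) >= eps for c in centers):
        centers.append(x)

# Covering radius of the packing over the whole point set.
cover_radius = max(min(np.linalg.norm(x - c) for c in centers) for x in pts)
```

Every rejected point is within $\epsilon$ of some center by construction, so the packing covers at radius $\epsilon$; this is exactly the mechanism behind $N(E, d, \epsilon) \le M(E, d, \epsilon)$.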
We define
$$\Theta_\epsilon = \left\{\begin{bmatrix}v & 0\\ 0 & I_{r-1}\end{bmatrix} : v = (\sqrt{1-\epsilon^2},\ \epsilon\omega/\sqrt d) \in S^{p_1-r-1},\ \omega \in \Omega_M\right\}, \quad \epsilon \in (0, 1).$$
Then $\Theta_\epsilon$ is an $\frac\epsilon2$-packing set of $B(U_0, \sqrt 2\epsilon)\cap C_S(p_1, r, k)$ with $U_0 = \begin{bmatrix}v_0 & 0\\ 0 & I_{r-1}\end{bmatrix}$ where $v_0 = (1, 0, \ldots, 0)^\top$, and $|\Theta_\epsilon| = M$. Now we set
$$\epsilon^2 = \frac{c_1(t^2+\sigma^2p_2)\sigma^2k\log(e(p_1-r-1)/k)}{t^4} \wedge 1,$$
for some sufficiently small $c_1 > 0$. It follows that
$$\Big(\frac{c_2\sigma^2(t^2+\sigma^2p_2)}{t^4}\log|\Theta_\epsilon| \wedge 1\Big) \le \epsilon^2 \le \Big(\frac{\sigma^2(t^2+\sigma^2p_2)}{640t^4}\log|\Theta_{\epsilon_0}| \wedge 1\Big)$$
for some $c_2 \in (0, 1/640)$. So the condition of Theorem 5 holds with $\epsilon_0 = \sqrt 2\epsilon$, $\alpha = 1/(2\sqrt 2)$ and $\log|\Theta_\epsilon| \asymp k\log(ep_1/k)$. Moreover, for any $U' \in O(k, r)$, suppose $M_\epsilon \subset O(k, r)$ is an $\alpha\epsilon$-packing set of $B(U', \epsilon)$ constructed as in Lemma 35; then the set
$$\Theta'_\epsilon = \left\{U = \begin{bmatrix}W\\ 0\end{bmatrix},\ W \in M_\epsilon\right\} \subset C_S(p_1, r, k), \tag{56}$$
is an $\alpha\epsilon$-packing set of $C_S(p_1, r, k)\cap B(U_0, \epsilon)$ where $U_0 = \begin{bmatrix}U'\\ 0\end{bmatrix}$, and $|\Theta'_\epsilon| \ge (c/\alpha)^{r(k-r)}$. Now we set
$$\epsilon^2 = \frac{c_1(t^2+\sigma^2p_2)\sigma^2r(k-r)}{t^4} \wedge r^2,$$
for some sufficiently small $c_1 > 0$. It follows that
$$\Big(\frac{c_2\sigma^2(t^2+\sigma^2p_2)}{t^4}\log|\Theta'_\epsilon| \wedge r\Big) \le \epsilon^2 \le \Big(\frac{\sigma^2(t^2+\sigma^2p_2)}{640t^4}\log|\Theta'_{\epsilon_0}| \wedge r\Big)$$
for some $c_2 \in (0, 1/640)$. Thus, the condition of Theorem 5 holds with $\log|\Theta'_\epsilon| \asymp r(k-r)$. To obtain an upper bound for $\Delta(C_S(p_1, r, k))$, we notice that any element $H \in C_S(p_1, r, k)$ satisfies $H = H^\top$ and
$$\max_{1\le i\le p_1}\|H_{i.}\|_0 \le k, \qquad \max_{1\le i\le p_1}\|H_{.i}\|_0 \le k.$$
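The covering argument for the sparse class below bounds the union over support patterns via $\binom{p_1}{k} \le (ep_1/k)^k$, i.e. $\log\binom{p_1}{k} \le k\log(ep_1/k)$. A quick numerical check of this inequality over a few hypothetical $(p, k)$ pairs:

```python
import math

def entropy_bound_holds(p: int, k: int) -> bool:
    """Check log C(p, k) <= k * log(e * p / k)."""
    return math.log(math.comb(p, k)) <= k * math.log(math.e * p / k)

# Hypothetical dimension/sparsity pairs.
results = [entropy_bound_holds(p, k) for p, k in [(50, 5), (200, 10), (1000, 3), (64, 16)]]
```

The inequality follows from $\binom pk \le \frac{p^k}{k!}$ and $k! \ge (k/e)^k$, so it holds for all $1 \le k \le p$.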
Then $T(C_S(p_1, r, k), U)$ can be covered by the union of its $\binom{p_1}{k}$ disjoint subsets, with each subset corresponding to a fixed sparsity configuration. Each of the above subsets can be identified with $T(O(k, r), U')$ for some $U' \in O(k, r)$, and by Lemma 34 and the proof of Lemma 36, $N(T(O(k, r), U'), d_2, \epsilon) \le (c/\epsilon)^{2r(2k+1)}$ for any $\epsilon \in (0, \sqrt 2)$. Then by taking a union of the covering sets, we have
$$N(T(C_S(p_1, r, k), U), d_2, \epsilon) \le \binom{p_1}{k}(c_1/\epsilon)^{2r(2k+1)} \le (ep_1/k)^k(c_1/\epsilon)^{2r(2k+1)}.$$
As a result,
$$\int_0^\infty\sqrt{\log N(T(C_S(p_1, r, k), U), d_2, \epsilon)}\,d\epsilon \le \sqrt{2k\log(ep_1/k)} + \sqrt{2r(2k+1)}\int_0^{\sqrt 2}\sqrt{\log\frac{c_1}{\epsilon}}\,d\epsilon \le C\big(\sqrt{k\log(ep_1/k)} + \sqrt{rk}\big).$$
In addition, we also have
$$\int_0^\infty\log N(T(C_S(p_1, r, k), U), d_2, \epsilon)\,d\epsilon \le C(k\log(ep_1/k) + rk).$$
The validity of Theorem 8 reduces to the condition $\frac{t^2}{\sigma} \gtrsim k\log(ep_1/k) + rk$. Note that when $r = O(1)$, this condition is satisfied whenever
$$\frac{\sigma\sqrt{t^2+\sigma^2p_2}}{t^2}\Big(\sqrt{k\log\frac{ep_1}{k}} + \sqrt k\Big) \lesssim 1.$$
In other words, in light of the minimax lower bound (from Theorem 5), whenever consistent estimation is possible, the condition $\frac{t^2}{\sigma} \gtrsim k\log(ep_1/k) + k$ is satisfied and the proposed estimator is minimax optimal. The final result follows by combining Theorems 5 and 8.

Spiked Wishart model with $C_S(p, r, k)$, or sparse PCA. We omit the proof of this case as it is similar to the proof of the sparse SVD.

B.2 Non-Negative PCA/SVD: Proof of Proposition 13 and Theorem 14

Matrix denoising model with $C_N(p_1, r)$, or non-negative SVD. On the one hand, with Lemma 33, we can construct a subset $\Theta_\epsilon \subset O(p_1, r)$ as follows.
Let $\Omega_M = \{\omega^{(1)}, \ldots, \omega^{(M)}\} \subset \{0,1\}^n$ be the set obtained from Lemma 33 where $n = p_1 - r - 1$, $d = (p_1 - r - 1)/4$ and $M$ is the smallest integer such that $\log M \ge cd\log(n/d)$, i.e., $M = \lceil\exp(\frac{c(p_1-r-1)\log 2}{2})\rceil$. Following the idea of Vu and Lei (2012) and Cai et al. (2013), we define
$$\Theta_\epsilon = \left\{\begin{bmatrix}v & 0\\ 0 & I_{r-1}\end{bmatrix} : v = (\sqrt{1-\epsilon^2},\ \epsilon\omega/\sqrt d) \in S^{p_1-r-1},\ \omega \in \Omega_M\right\}, \quad \epsilon \in (0, 1).$$
Then it holds that $\Theta_\epsilon \subset B(U_0, \sqrt 2\epsilon)$ for $U_0 = \begin{bmatrix}v_0 & 0\\ 0 & I_{r-1}\end{bmatrix}$ where $v_0 = (1, 0, \ldots, 0)^\top$, $|\Theta_\epsilon| = M$, and that for any $U \ne U' \in \Theta_\epsilon$,
$$d(U, U') \ge \sqrt 2\cdot\sqrt{1 - (1 - \epsilon^2/8)^2} \ge \frac\epsilon2.$$
In other words, $\Theta_\epsilon$ is an $\frac\epsilon2$-packing set of $B(U_0, \sqrt 2\epsilon)\cap C_{NN}(p_1, r)$. Now we set
$$\epsilon^2 = \frac{c_1(t^2+\sigma^2p_2)\sigma^2(p_1-r-1)}{t^4} \wedge 1,$$
for some sufficiently small $c_1 > 0$. It follows that
$$\Big(\frac{c_2\sigma^2(t^2+\sigma^2p_2)}{t^4}\log|\Theta_\epsilon| \wedge 1\Big) \le \epsilon^2 \le \Big(\frac{\sigma^2(t^2+\sigma^2p_2)}{640t^4}\log|\Theta_{\epsilon_0}| \wedge 1\Big)$$
for some $c_2 \in (0, 1/640)$. So the condition of Theorem 5 holds with $\epsilon_0 = \sqrt 2\epsilon$, $\alpha = 1/(2\sqrt 2)$ and $\log|\Theta_\epsilon| \asymp p_1$. On the other hand, we need to obtain an upper bound for $\Delta(C_N(p_1, r))$. To bound the Dudley entropy integral $\int_0^\infty\sqrt{\log N(T(C_N(p_1, r), U), d_2, \epsilon)}\,d\epsilon$, we simply use the fact that $C_N(p_1, r) \subset O(p_1, r)$ and $N(T(C_N(p_1, r), U), d_2, \epsilon) \le N(T(O(p_1, r), U), d_2, \epsilon)$. Then by Lemma 36, we have $\Delta^2(C_{NN}(p_1, r)) \lesssim p_1r$. Combining Theorems 5 and 8, we have $\Delta^2(C_{NN}(p_1, r)) \gtrsim \log|\Theta_\epsilon|$, which implies $\Delta^2(C_{NN}(p_1, r)) \asymp \log|\Theta_\epsilon| \asymp p_1$ if $r = O(1)$. Again, Theorem 8 requires $\frac{t^2}{\sigma} \gtrsim rp_1$.
Note that when $r = O(1)$, this condition is satisfied whenever
$$\frac{\sigma\sqrt{p_1(t^2+\sigma^2p_2)}}{t^2} \lesssim 1.$$
In other words, in light of the minimax lower bound (from Theorem 5), whenever consistent estimation is possible, the condition $\frac{t^2}{\sigma} \gtrsim p_1$ is satisfied and the proposed estimator is minimax optimal.

Spiked Wishart model with $C_N(p, r)$, or non-negative PCA. Similarly, let $\Omega_M = \{\omega^{(1)}, \ldots, \omega^{(M)}\} \subset \{0,1\}^{p-r-1}$ be the set obtained from Lemma 33 where $d = (p-r-1)/4$ and $M$ is the smallest integer such that $\log M \ge cd\log\frac{p-r-1}{d}$, i.e., $M = \lceil\exp(\frac{c(p-r-1)\log 2}{2})\rceil$. We define
$$\Theta_\epsilon = \left\{\begin{bmatrix}v & 0\\ 0 & I_{r-1}\end{bmatrix} : v = (\sqrt{1-\epsilon^2},\ \epsilon\omega/\sqrt d) \in S^{p-r-1},\ \omega \in \Omega_M\right\}, \quad \epsilon \in (0, 1).$$
Then it holds that $\Theta_\epsilon \subset B(U_0, \sqrt 2\epsilon)$ for $U_0 = \begin{bmatrix}v_0 & 0\\ 0 & I_{r-1}\end{bmatrix}$ where $v_0 = (1, 0, \ldots, 0)^\top$, $|\Theta_\epsilon| = M$, and that for any $U \ne U' \in \Theta_\epsilon$, $d(U, U') \ge \sqrt 2\cdot\sqrt{1 - (1 - \epsilon^2/8)^2} \ge \frac\epsilon2$. In other words, $\Theta_\epsilon$ is an $\frac\epsilon2$-packing set of $B(U_0, \sqrt 2\epsilon)\cap C_{NN}(p, r)$. Now we set
$$\epsilon^2 = \frac{c_1\sigma^2(\sigma^2+t)(p-r-1)}{nt^2} \wedge 1,$$
for some sufficiently small $c_1 > 0$. It follows that
$$\Big(\frac{c_2\sigma^2(\sigma^2+t)}{nt^2}\log|\Theta_\epsilon| \wedge 1\Big) \le \epsilon^2 \le \Big(\frac{\sigma^2(\sigma^2+t)}{nt^2}\cdot\frac{(p-r-1)\log 2}{10} \wedge 1\Big) \le \Big(\frac{\sigma^2(\sigma^2+t)}{32nt^2}\log|\Theta_\epsilon| \wedge 1\Big)$$
for some $c_2 \in (0, 1/32)$, so that the condition of Theorem 9 holds and $\log|\Theta_\epsilon| \asymp p$. The rest of the arguments, such as the calculation of the Dudley entropy integral, are the same as in the above proof of the non-negative SVD.
B.3 Subspace PCA/SVD: Proof of Proposition 16 and Theorem 17

To prove this proposition, in light of Lemmas 34, 35 and 36, it suffices to establish the isometry between $(C_A(p, r, k), d)$ and $(O(k, r), d)$. Let $Q \in O(p, k)$ have columns forming a basis of the null space of $A$. We consider the map $F : O(k, r) \to C_A(p, r, k)$ where $F(W) = QW$. To show that $F$ is a bijection, we notice that:
1. For any $G \in C_A(p, r, k)$, for each of its columns $G_{.i}$, there exists some $v_i \in S^{k-1}$ such that $G_{.i} = Qv_i$, and for $i \ne j$, $v_i^\top v_j = v_i^\top Q^\top Qv_j = G_{.i}^\top G_{.j} = 0$. Then let $W = [v_1, \ldots, v_r] \in O(k, r)$; apparently, we have $F(W) = G$. This proves that the map is onto.
2. For any $W_1 \ne W_2 \in O(k, r)$, it follows that $F(W_1) \ne F(W_2)$. This proves the injection.
To show that the map $F$ is isometric, we notice that:
1. For any $G_1 = F(W_1), G_2 = F(W_2) \in C_A(p, r, k)$,
$$d(F(W_1), F(W_2)) = \|QW_1W_1^\top Q^\top - QW_2W_2^\top Q^\top\|_F \le \|Q\|^2\|W_1W_1^\top - W_2W_2^\top\|_F \le d(W_1, W_2).$$
2. For any $W_1, W_2 \in O(k, r)$,
$$d(W_1, W_2) = \|Q^\top(QW_1W_1^\top Q^\top - QW_2W_2^\top Q^\top)Q\|_F \le d(F(W_1), F(W_2)).$$
Thus $d(F(W_1), F(W_2)) = d(W_1, W_2)$.

B.4 Spectral Clustering: Proof of Proposition 18 and Theorem 19

The upper bound $\Delta^2(C^n_\pm) \lesssim n$ follows from the same argument as in the proof of Proposition 15. For the second statement, by Lemma 33, we can construct a subset $\Theta(d) \subset S^{n-1}$ as follows. Let $\Omega_M = \{\omega^{(1)}, \ldots, \omega^{(M)}\} \subset \{0,1\}^n$ be the set obtained from Lemma 33 where $\|\omega^{(j)}\|_0 = d \le n/4$ for all $1 \le j \le M$ and $M$ is the smallest integer such that $\log M \ge cd$, i.e., $M = \lceil\exp(cd\log\frac nd)\rceil$. We define
$$\Theta(d) = \left\{\frac{2|\omega - 0.5\cdot\mathbf 1|}{\sqrt n} \in C^n_\pm : \omega \in \Omega_M\cup\{(0, \ldots, 0)\}\right\},$$
where $\mathbf 1 = (1, \ldots, 1)^\top \in \mathbb R^n$.
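The isometry established in Section B.3 above is easy to verify numerically: for a matrix $Q$ with orthonormal columns spanning $\mathrm{null}(A)$, the map $W \mapsto QW$ preserves the projection distance $d(\cdot,\cdot)$ exactly. A minimal sketch with hypothetical dimensions (the constraint matrix $A$ and the basis construction via SVD are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
p, k, r = 12, 7, 3                      # hypothetical dimensions, null(A) has dim k
A = rng.standard_normal((p - k, p))     # generic constraint matrix of rank p - k

# Q: orthonormal basis of null(A), read off the trailing right singular vectors.
_, _, Vt = np.linalg.svd(A)
Q = Vt[p - k:].T                         # p x k, columns span null(A)

W1 = np.linalg.qr(rng.standard_normal((k, r)))[0]
W2 = np.linalg.qr(rng.standard_normal((k, r)))[0]

# Projection distance d(U, V) = ||UU^T - VV^T||_F.
d = lambda U, V: np.linalg.norm(U @ U.T - V @ V.T, 'fro')
lhs, rhs = d(Q @ W1, Q @ W2), d(W1, W2)
```

Since $Q^\top Q = I_k$, the Frobenius norm of $Q(W_1W_1^\top - W_2W_2^\top)Q^\top$ equals that of $W_1W_1^\top - W_2W_2^\top$, so `lhs` and `rhs` agree to machine precision.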
Then since for $u_0 = (-1/\sqrt n, \ldots, -1/\sqrt n)^\top$ and any $u \in \Theta(d)$,
$$d(u_0, u) \le \|u_0 - u\|_2 \le 2\sqrt{\frac dn},$$
it holds that $\Theta(d) \subset B(u_0, 2\sqrt{d/n})$, and that for any $u \ne u' \in \Theta(d)$,
$$d(u, u') \ge \frac1{\sqrt 2}\|u - u'\|_2 \ge \sqrt{\frac dn},$$
so that $\Theta(d)$ is a $\sqrt{\frac dn}$-packing set of $B(u_0, 2\sqrt{d/n})\cap C^n_\pm$. Now since $t^2 = C\sigma^2(n + \sqrt{np})$, we can set $\epsilon_0 = \sqrt{\frac dn}$, where $d = c_1n$, for some sufficiently small $c_1 > 0$, and thus it follows that
$$\Big(\frac{c_2\sigma^2(t^2+\sigma^2p)}{t^4}\log|\Theta(d)| \wedge 1\Big) \le \epsilon_0^2 \le \Big(\frac{\sigma^2(t^2+\sigma^2p)}{128t^4}\log|\Theta(d)| \wedge 1\Big)$$
for some $c_2 \in (0, 1/128)$. So the condition of Theorem 5 holds with $\alpha = 1/2$ and $\log|\Theta(d)| \asymp n$.

Appendix C. Proof of Technical Lemmas

Proof of Lemma 21. The first inequality can be proved by
$$\langle U\Gamma^2U^\top, UU^\top - WW^\top\rangle = \operatorname{tr}(U\Gamma^2U^\top) - \operatorname{tr}(W^\top U\Gamma^2U^\top W) = \operatorname{tr}(\Gamma^2) - \operatorname{tr}(\Gamma^2U^\top WW^\top U)$$
$$= \sum_{i=1}^r\lambda_i^2\big(1 - (U^\top WW^\top U)_{ii}\big) \ge \lambda_r^2\big(r - \operatorname{tr}(U^\top WW^\top U)\big) = \frac{\lambda_r^2}{2}\|UU^\top - WW^\top\|_F^2.$$
The other inequality follows from the same rationale.

Proof of Lemma 28. Throughout the proof, for simplicity, we write $\mathcal P = \mathcal P(\mathcal C, U)$ and $\mathcal T = \mathcal T(\mathcal C, U)$. By Corollary 2.3.2 of Talagrand (2014), for any metric space $(T, d)$, if we define
$$e_n(T) = \inf\{\epsilon : N(T, d, \epsilon) \le N_n\}, \quad\text{where } N_0 = 1;\ N_n = 2^{2^n} \text{ for } n \ge 1, \tag{57}$$
then there exists some constant $K(\alpha)$ only depending on $\alpha$ such that
$$\gamma_\alpha(T, d) \le K(\alpha)\sum_{n\ge 0}2^{n/\alpha}e_n(T). \tag{58}$$
The following inequalities establish the correspondence between $e_n$ and the Dudley entropy integral:
$$\sum_{n\ge 0}2^{n/2}e_n(T) \le C\int_0^\infty\sqrt{\log N(T, d, \epsilon)}\,d\epsilon, \qquad \sum_{n\ge 0}2^ne_n(T) \le C\int_0^\infty\log N(T, d, \epsilon)\,d\epsilon, \tag{59}$$
whose derivation is delayed to the end of this proof. Combining (58) and (59), it follows that
$$\gamma_\alpha(T, d) \le K(\alpha)\int_0^\infty\log^{1/\alpha}N(T, d, \epsilon)\,d\epsilon. \tag{60}$$
By (60), it suffices to obtain estimates of the metric entropies $\log N(\mathcal P, d_\infty, \epsilon)$ and $\sqrt{\log N(\mathcal P, d_2, \epsilon)}$. By definition of $\mathcal T$, apparently $(\mathcal P, d_\infty)$ is isomorphic to $(\mathcal T, d_\infty)$; then by Lemma 23, it holds that $N(\mathcal P, d_\infty, \epsilon) = N(\mathcal T, d_\infty, \epsilon)$. Along with the fact that, for any $G_1, G_2 \in \mathcal T$, $d_\infty(G_1, G_2) \le d_2(G_1, G_2)$ and therefore $N(\mathcal T, d_\infty, \epsilon) \le N(\mathcal T, d_2, \epsilon)$, we prove the first statement of the lemma. On the other hand, consider the map $F : (\mathcal P, d_2) \to (\mathcal T, d_2)$ where for any $D \in \mathcal P$, $F(D) \in \mathbb R^{p_1\times p_1}$ is the submatrix of $D$ obtained by extracting its entries in the first $p_1$ columns and rows. Then, for any $D_1, D_2 \in \mathcal P$, it holds that
$$d_2(F(D_1), F(D_2)) = \|F(D_1) - F(D_2)\|_F = \frac1{\sqrt{p_2}}d_2(D_1, D_2).$$
Again, applying Lemma 6, we have $N(\mathcal P, d_2, \epsilon) = N(\mathcal T, d_2, \epsilon/\sqrt{p_2})$. The second statement of the lemma then follows simply from the change of variable
$$\gamma_2(\mathcal P, d_2) \le C_2\int_0^\infty\sqrt{\log N(\mathcal T, d_2, \epsilon/\sqrt{p_2})}\,d\epsilon = C_2\sqrt{p_2}\int_0^\infty\sqrt{\log N(\mathcal T, d_2, \epsilon)}\,d\epsilon.$$
Proof of (59). The proof of the first inequality can be found, for example, on page 22 of Talagrand (2014). Nevertheless, we provide a detailed proof for completeness. By definition of $e_n$, if $\epsilon < e_n(T)$, we have $N(T, d, \epsilon) > N_n$ and $N(T, d, \epsilon) \ge N_n + 1$. Then
$$\sqrt{\log(1+N_n)}\,\big(e_n(T) - e_{n+1}(T)\big) \le \int_{e_{n+1}(T)}^{e_n(T)}\sqrt{\log N(T, d, \epsilon)}\,d\epsilon.$$
Since $\log(1+N_n) \ge 2^n\log 2$ for $n \ge 0$, summation over $n \ge 0$ yields
$$\sqrt{\log 2}\sum_{n\ge 0}2^{n/2}\big(e_n(T) - e_{n+1}(T)\big) \le \int_0^{e_0(T)}\sqrt{\log N(T, d, \epsilon)}\,d\epsilon.$$
Then the final inequality (59) follows by noting that
$$\sum_{n\ge 0}2^{n/2}\big(e_n(T) - e_{n+1}(T)\big) = \sum_{n\ge 0}2^{n/2}e_n(T) - \sum_{n\ge 1}2^{(n-1)/2}e_n(T) \ge \Big(1 - \frac1{\sqrt 2}\Big)\sum_{n\ge 0}2^{n/2}e_n(T).$$
The second inequality can be obtained similarly by working with the inequality
$$\log(1+N_n)\big(e_n(T) - e_{n+1}(T)\big) \le \int_{e_{n+1}(T)}^{e_n(T)}\log N(T, d, \epsilon)\,d\epsilon.$$
Proof of Lemma 32. The proof of this lemma generalizes the ideas in Cai and Zhang (2018) and Ma et al. (2019). In general, direct calculation of $D(P_i, P_j)$ is difficult. We detour by introducing an approximate density of $P_i$ as
$$\tilde P_i(Y) = \frac{\sigma^{-p_1p_2}}{(2\pi)^{p_1p_2/2}}\int\exp\big(-\|Y - tU_iW^\top\|_F^2/(2\sigma^2)\big)\Big(\frac{p_2}{2\pi}\Big)^{rp_2/2}\exp(-p_2\|W\|_F^2/2)\,dW.$$
Now for $Y \sim \tilde P_i$, if $Y_k$ is the $k$-th column of $Y$, we have
$$Y_k \,|\, U_i \overset{\mathrm{i.i.d.}}{\sim} N\bigg(0,\ \sigma^2\Big(I_{p_1} - \frac{4t^2}{4t^2+\sigma^2p_2}U_iU_i^\top\Big)^{-1}\bigg) = N\Big(0,\ \sigma^2I_{p_1} + \frac{4t^2}{p_2}U_iU_i^\top\Big), \tag{61}$$
for $k = 1, \ldots, p_2$. It is well known that the KL divergence between two $p$-dimensional multivariate Gaussian distributions is
$$D\big(N(\mu_0, \Sigma_0)\,\|\,N(\mu_1, \Sigma_1)\big) = \frac12\Big(\operatorname{tr}(\Sigma_1^{-1}\Sigma_0) + (\mu_1 - \mu_0)^\top\Sigma_1^{-1}(\mu_1 - \mu_0) - p + \log\frac{\det\Sigma_1}{\det\Sigma_0}\Big).$$
As a result, we can calculate that for any $\tilde P_i$ and $\tilde P_j$,
$$D(\tilde P_i, \tilde P_j) = \frac{p_2}2\bigg\{\operatorname{tr}\bigg(\Big(I_{p_1} - \frac{4t^2}{4t^2+\sigma^2p_2}U_iU_i^\top\Big)\Big(I_{p_1} + \frac{4t^2}{\sigma^2p_2}U_jU_j^\top\Big)\bigg) - p_1\bigg\} \le \frac{Ct^4}{\sigma^2(4t^2+\sigma^2p_2)}\big(r - \|U_i^\top U_j\|_F^2\big) = \frac{Ct^4d^2(U_i, U_j)}{\sigma^2(4t^2+\sigma^2p_2)} \tag{62}$$
where the last step follows from Lemma 20.
Hence, the proof of this proposition is complete if we can show that there exists some constant $C > 0$ such that
$$D(P_i, P_j) \le D(\tilde{P}_i, \tilde{P}_j) + C. \quad (63)$$
The rest of the proof is devoted to establishing (63).
Proof of (63). Define the event $G = \{W \in \mathbb{R}^{r \times p_2} : 1/2 \le \lambda_{\min}(W) \le \lambda_{\max}(W) \le 2\}$. For any given $\mathbf{Y}$,
$$\begin{aligned} \frac{P_i}{\tilde{P}_i} &= \frac{1}{(2\pi)^{rp_2/2}\left(\frac{\sigma^2}{4t^2 + \sigma^2 p_2}\right)^{rp_2/2}} \exp\left(\frac{1}{2\sigma^2} \sum_{k=1}^{p_2} Y_k^\top \left(I_{p_1} - \frac{4t^2}{4t^2 + \sigma^2 p_2} U_i U_i^\top\right) Y_k\right) \times C_{U_i,t} \int_G \exp\left(-\|\mathbf{Y} - t U_i W^\top\|_F^2/(2\sigma^2) - p_2 \|W\|_F^2/2\right) dW \\ &= C_{U_i,t} \int_G \left(\frac{4t^2 + \sigma^2 p_2}{2\pi\sigma^2}\right)^{rp_2/2} \exp\left(-(4t^2 + \sigma^2 p_2)\left\|W - \frac{2t}{4t^2 + \sigma^2 p_2} U_i^\top \mathbf{Y}\right\|_F^2 / 2\right) dW \\ &= C_{U_i,t}\, P\left(W' \in G \;\Big|\; W' \sim N\left(\frac{2t}{4t^2 + \sigma^2 p_2} U_i^\top \mathbf{Y},\ \frac{\sigma^2}{4t^2 + \sigma^2 p_2} I\right)\right) \le C_{U_i,t}. \end{aligned} \quad (64)$$
Recall that $C_{U_i,t}^{-1} = P\left(W = (w_{jk}) \in G \mid w_{jk} \sim N(0, 1/p_2)\right)$. By concentration of measure inequalities for Gaussian random matrices (see, for example, Corollary 5.35 of Vershynin (2010)), we have, for sufficiently large $(p_2, r)$,
$$P(W \in G) \ge 1 - 2\exp(-c p_2), \quad (65)$$
for some constant $c > 0$. In other words, we have
$$C_{U_i,t}^{-1} \ge 1 - p_2^{-c} \quad (66)$$
and
$$\frac{P_i}{\tilde{P}_i} \le 1 + p_2^{-c} \quad (67)$$
uniformly for some constant $c > 0$.
Thus, for some constant $\delta > 0$, we have
$$\begin{aligned} D(P_i, P_j) &= \int P_i \left[\log\left(\frac{P_i}{\tilde{P}_i}\right) + \log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right) + \log\left(\frac{\tilde{P}_j}{P_j}\right)\right] d\mathbf{Y} \\ &\le \log(1+\delta) + D(\tilde{P}_i, \tilde{P}_j) + \int (P_i - \tilde{P}_i) \log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right) d\mathbf{Y} + \int P_i \log\left(\frac{\tilde{P}_j}{P_j}\right) d\mathbf{Y} \\ &\le \log(1+\delta) + D(\tilde{P}_i, \tilde{P}_j) + \int \tilde{P}_i \left|\frac{P_i}{\tilde{P}_i} - 1\right| \left|\log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right)\right| d\mathbf{Y} + (1+\delta) \int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_j}{P_j}\right)\right| d\mathbf{Y} \\ &\le \log(1+\delta) + D(\tilde{P}_i, \tilde{P}_j) + p_2^{-c} \int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right)\right| d\mathbf{Y} + (1+\delta) \int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_j}{P_j}\right)\right| d\mathbf{Y}. \end{aligned} \quad (68)$$
Now since
$$\begin{aligned} \int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right)\right| d\mathbf{Y} &= \frac{1}{2\sigma^2} \int \tilde{P}_i \left|\frac{4t^2}{4t^2 + \sigma^2 p_2} \sum_{k=1}^{p_2} Y_k^\top (U_i U_i^\top - U_j U_j^\top) Y_k\right| d\mathbf{Y} \\ &\le \frac{1}{2\sigma^2} E\left[\frac{4t^2}{4t^2 + \sigma^2 p_2} \sum_{k=1}^{p_2} Y_k^\top (U_i U_i^\top + U_j U_j^\top) Y_k\right] \\ &= \frac{4t^2 p_2}{2\sigma^2 (4t^2 + \sigma^2 p_2)} \mathrm{tr}\left((U_i U_i^\top + U_j U_j^\top)\left(\sigma^2 I_{p_1} + \frac{4t^2}{p_2} U_i U_i^\top\right)\right) \\ &\le \frac{4t^2 p_2}{4t^2 + \sigma^2 p_2} \mathrm{tr}\left(U_i^\top \left(I_{p_1} + \frac{4t^2}{\sigma^2 p_2} U_i U_i^\top\right) U_i\right) = \frac{4r t^2}{\sigma^2} \le r p_2, \end{aligned}$$
where in the second row the expectation is with respect to $Y_k \sim N\left(0, \sigma^2 I_{p_1} + \frac{4t^2}{p_2} U_i U_i^\top\right)$, we know that the third term in (68) can be bounded by
$$p_2^{-c} \int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_i}{\tilde{P}_j}\right)\right| d\mathbf{Y} \le r p_2 \cdot p_2^{-c} \le C$$
for some constants $C, c > 0$.
Finally, by (64), we have
$$\int \tilde{P}_i \left|\log\left(\frac{\tilde{P}_j}{P_j}\right)\right| d\mathbf{Y} \le \int \tilde{P}_i \left|\log\frac{1}{C_{U_j,t}}\right| d\mathbf{Y} + \int \tilde{P}_i \left|\log\frac{1}{P(W' \in G \mid E)}\right| d\mathbf{Y},$$
where we denote
$$E = \left\{W' \sim N\left(\frac{2t}{4t^2 + \sigma^2 p_2} U_j^\top \mathbf{Y},\ \frac{\sigma^2}{4t^2 + \sigma^2 p_2} I\right)\right\}.$$
Now on the one hand,
$$\int \tilde{P}_i \left|\log\frac{1}{C_{U_j,t}}\right| d\mathbf{Y} \le \left(\log(1+\delta) \vee \left|\log(1-\delta)^{-1}\right|\right).$$
On the other hand, for fixed $\mathbf{Y}$ and $U_j^\top \mathbf{Y} \in \mathbb{R}^{r \times p_2}$, we can find $Q \in O(p_2, p_2 - r)$ which is orthogonal to $U_j^\top \mathbf{Y}$, i.e., $U_j^\top \mathbf{Y} Q = 0$. Then the entries of $W' Q \in \mathbb{R}^{r \times (p_2 - r)}$ are i.i.d. normally distributed with mean $0$ and variance $\frac{\sigma^2}{4t^2 + \sigma^2 p_2}$. Then again, by a standard result on random matrices (e.g., Corollary 5.35 in Vershynin (2010)), we have
$$\lambda_{\min}(W') = \lambda_r(W') \ge \lambda_r(W' Q) \ge \frac{\sigma}{\sqrt{4t^2 + \sigma^2 p_2}}\left(\sqrt{p_2 - r} - \sqrt{r} - x\right)$$
with probability at least $1 - 2\exp(-x^2/2)$. Since $t^2 < \sigma^2 p_2/4$, for $p_2$ sufficiently large, we can find $c$ such that by setting $x = c\sqrt{p_2}$,
$$P(\lambda_{\min}(W') \ge 1/2) \ge 1 - e^{-c p_2}. \quad (69)$$
Analogous to the argument on $\lambda_{\min}(W')$, we also have
$$P(\lambda_{\max}(W') \le 2) \ge 1 - e^{-c p_2}. \quad (70)$$
Thus, by the union bound, we have $P(W' \in G) \ge 1 - 2e^{-c p_2}$, and consequently,
$$\int \tilde{P}_i \left|\log\frac{1}{P(W' \in G \mid E)}\right| d\mathbf{Y} \le \left|\log\frac{1}{1 - p_2^{-c}}\right| \le p_2^{-c}.$$
This bounds the last term of (68). Combining the above results, we have proven inequality (63) and therefore completed the proof."
+ }, + { + "url": "http://arxiv.org/abs/1909.09851v2", + "title": "Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference", + "abstract": "We study sparse group Lasso for high-dimensional double sparse linear\nregression, where the parameter of interest is simultaneously element-wise and\ngroup-wise sparse. This problem is an important instance of the simultaneously\nstructured model -- an actively studied topic in statistics and machine\nlearning. In the noiseless case, matching upper and lower bounds on sample\ncomplexity are established for the exact recovery of sparse vectors and for\nstable estimation of approximately sparse vectors, respectively. In the noisy\ncase, upper and matching minimax lower bounds for estimation error are\nobtained. We also consider the debiased sparse group Lasso and investigate its\nasymptotic property for the purpose of statistical inference. Finally,\nnumerical studies are provided to support the theoretical results.", + "authors": "T. Tony Cai, Anru R. Zhang, Yuchen Zhou", + "published": "2019-09-21", + "updated": "2022-05-07", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.LG", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction Consider the high-dimensional double sparse regression with simultaneously group-wise and element-wise sparsity structures y = X\u03b2\u2217+ \u03b5, or equivalently yi = X\u22a4 i \u03b2\u2217+ \u03b5i, i = 1, . . . , n. 
(1) Here, the covariates $X \in \mathbb{R}^{n \times p}$ and the parameter $\beta^*$ are divided into $d$ known groups, where the $j$th group contains $b_j$ variables,
$$X = [X_{(1)} \ \cdots \ X_{(d)}], \qquad \beta^* = \left((\beta^*_{(1)})^\top, \cdots, (\beta^*_{(d)})^\top\right)^\top, \qquad X_{(j)} \in \mathbb{R}^{n \times b_j}, \ \beta^*_{(j)} \in \mathbb{R}^{b_j}; \quad (2)$$
$\beta^*$ is an $(s, s_g)$-sparse vector in the sense that
$$\|\beta^*\|_{0,2} := \sum_{j=1}^d 1_{\{\beta^*_{(j)} \neq 0\}} \le s_g \quad \text{and} \quad \|\beta^*\|_0 := \sum_{i=1}^p 1_{\{\beta^*_i \neq 0\}} \le s. \quad (3)$$
The focus of this paper is on the estimation of and inference for $\beta^*$ based on $(y, X)$. This problem is of great importance in a variety of applications. For example, in genome-wide association studies (GWAS) [1], genes can be grouped into pathways, and it is believed that only a small portion of the pathways contain causal single nucleotide polymorphisms (SNPs), while the number of causal SNPs is much smaller than the number of non-causal SNPs within a causal pathway. The sparse group Lasso has been applied to identify causal genes or SNPs associated with a certain trait [1]. Other examples include cancer diagnosis and therapy [2, 3], classification [4], and climate prediction [5], among many others. The problem can also be viewed as a prototype of various problems in statistics and machine learning, such as sparse multiple response regression [6] and multi-task learning [7, 8, 9]. The sparse group Lasso [10, 11, 12] provides a classic and straightforward estimator for $\beta^*$:
$$\hat{\beta} = \arg\min_\beta \|y - X\beta\|_2^2 + \lambda\|\beta\|_1 + \lambda_g \|\beta\|_{1,2}. \quad (4)$$
Here, $\|\beta\|_1 = \sum_{i=1}^p |\beta_i|$ and $\|\beta\|_{1,2} = \sum_j \|\beta_{(j)}\|_2$ are the $\ell_1$ and $\ell_{1,2}$ convex regularizers accounting for the element-wise and group-wise sparsity structures, respectively; $\lambda, \lambda_g \ge 0$ are tuning parameters.
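For concreteness, estimator (4) can be computed by proximal gradient descent, using the standard fact that the proximal operator of $\lambda\|\cdot\|_1 + \lambda_g\|\cdot\|_{1,2}$ is element-wise soft-thresholding followed by group-wise soft-thresholding. The sketch below is illustrative only (the function names and the toy setup are ours, not the paper's implementation):

```python
import numpy as np

def prox_sparse_group(v, groups, t1, t2):
    """Prox of t1*||.||_1 + t2*||.||_{1,2}: elementwise soft-thresholding
    followed by group-wise soft-thresholding."""
    out = np.sign(v) * np.maximum(np.abs(v) - t1, 0.0)
    for g in groups:
        nrm = np.linalg.norm(out[g])
        out[g] = out[g] * max(1.0 - t2 / nrm, 0.0) if nrm > 0 else 0.0
    return out

def sparse_group_lasso(X, y, groups, lam, lam_g, n_iter=800):
    """Proximal gradient (ISTA) for ||y - Xb||_2^2 + lam*||b||_1 + lam_g*||b||_{1,2}."""
    L = 2.0 * np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ b - y)
        b = prox_sparse_group(b - grad / L, groups, lam / L, lam_g / L)
    return b
```

On a small noiseless design with a single active group and small tuning parameters, the iterate should recover $\beta^*$ up to the (tiny) penalty-induced shrinkage, with inactive groups driven to (near) zero.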
In the noiseless setting that \u03b5 = 0, one can apply the constrained \u21131 + \u21131,2 minimization instead to estimate \u03b2\u2217: \u02c6 \u03b2 = arg min \u03bb\u2225\u03b2\u22251 + \u03bbg\u2225\u03b2\u22251,2 subject to y = X\u03b2. (5) 2 \fIn fact, when \u03bb, \u03bbg tend to zero while \u03bb/\u03bbg is \ufb01xed as a constant, the sparse group Lasso (4) tends to the \u21131 + \u21131,2 minimization (5). When \u03b2\u2217is only element-wise sparse, the regular Lasso [13] \u02c6 \u03b2L = arg min \u03b2 \u2225y \u2212X\u03b2\u22252 2 + \u03bb\u2225\u03b2\u22251 (6) can be applied and its theoretical properties have been well studied. See, for example, [14, 15]. When \u03b2\u2217is only group-wise sparse, the group Lasso \u02c6 \u03b2GL = arg min \u03b2 \u2225y \u2212X\u03b2\u22252 2 + \u03bbg\u2225\u03b2\u22251,2 (7) and its variations have been widely investigated [16, 17, 18]. However, to estimate the simultaneously element-wise and group-wise sparse vector \u03b2\u2217, despite many empirical successes of sparse group Lasso in practice, the theoretical properties, including optimal rate of convergence and sample complexity, are still unclear so far to the best of our knowledge. 1.1 Simultaneously Structured Models More broadly speaking, the simultaneously structured models, i.e., the parameter of interest has multiple structures at the same time, have attracted enormous attention in many \ufb01elds including statistics, applied mathematics, and machine learning. In addition to the high-dimensional double sparse regression, other simultaneously structured models include sparse principal component analysis [19, 20], tensor singular value decomposition [21, 22], simultaneously sparse and low-rank matrix/tensor recovery [23, 24], sparse matrix/tensor SVD [25], and sparse phase retrieval [26, 27, 28]. 
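To make the penalties in these programs concrete, here is a small illustrative helper (ours, not from the paper) that evaluates the quantities $\|\beta\|_1$, $\|\beta\|_{1,2}$, $\|\beta\|_0$, and $\|\beta\|_{0,2}$ for a grouped vector:

```python
import numpy as np

def sparsity_profile(beta, groups):
    """Return (||beta||_1, ||beta||_{1,2}, ||beta||_0, ||beta||_{0,2})
    for a vector partitioned into the given index groups."""
    l1 = float(np.abs(beta).sum())                            # elementwise l1
    l12 = float(sum(np.linalg.norm(beta[g]) for g in groups)) # sum of group l2 norms
    s = int(np.count_nonzero(beta))                           # number of nonzero entries
    sg = sum(1 for g in groups if np.any(beta[g] != 0))       # number of nonzero groups
    return l1, l12, s, sg

# A (3, 2)-sparse vector in d = 4 groups of size 3:
beta = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 0.0, 0.0])
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
print(sparsity_profile(beta, groups))   # (12.0, 10.0, 3, 2)
```

The example shows why the combined regularizer favors doubly sparse vectors: both the elementwise and the group-wise norms are simultaneously small only when few entries in few groups are active.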
As shown in [29, 23], by minimizing multi-objective regularizers with norms associated with these structures (such as \u21131 norm for element-wise sparsity, nuclear norm for low-rankness, and total variation norm for piecewise constant structures), one usually cannot do better than applying an algorithm that only exploits one structure. They particularly illustrated that simultaneously sparse and low-rank structured matrix cannot be well estimated by penalizing \u21131 and nuclear norm regularizers. Instead, non-convex methods were proposed and shown to achieve better performance. However based on their results, it remains an open question whether the convex regularization, such as sparse group Lasso or \u21131 + \u21131,2 minimization, can achieve good performance 3 \fin estimation of parameter with two types of sparsity structures, such as the aforementioned high-dimensional double sparse regression. Speci\ufb01cally, as illustrated in Section 2.2, a direct application of [23] does not provide a sample complexity lower bound for exact recovery that matches our upper bound. 1.2 Optimality and Related Literature This paper \ufb01lls the void of statistical limits of sparse group Lasso and provides an a\ufb03rmative answer to the aforementioned question: by exploiting both element-wise and group-wise sparsity structures, the \u21131 + \u21131,2 regularization does provide better performance in high-dimensional double sparse regression. Particularly in the noiseless case, it is shown that (s, sg)-sparse vectors can be exactly recovered and approximately (s, sg)-sparse vectors can be stably estimated with high probability whenever the sample size satis\ufb01es n \u2273sg log(d/sg) + s log(esgb), where b = max1\u2264i\u2264d bi. On the other hand, we prove that exact recovery cannot be achieved by \u21131 + \u21131,2 regularization and stable estimation of approximately (s, sg)-sparse vectors is impossible in general unless n \u2273sg log(d/sg) +s log(esgb/s). 
We then consider the noisy case and develop the matching upper and lower bounds on the convergence rate for the estimation error. Simulation studies are carried out and the results support our theoretical \ufb01ndings. In addition, statistical inference for the individual coordinates of \u03b2\u2217is studied. A con\ufb01dence interval is constructed based on the debiased sparse group Lasso estimator and its asymptotic property. The results show that by exploring the simultaneously element-wise and group-wise sparsity structures, the debiased sparse group Lasso requires less sample size than the debiased Lasso and debiased group Lasso in the literature [30, 31, 32, 33]. The theoretical analysis of sparse group Lasso and \u21131+\u21131,2 minimization is highly non-trivial. First, the regularizer \u03bb\u2225\u00b7 \u22251 + \u03bbg\u2225\u00b7 \u22251,2 is not decomposable with respect to the support of \u03b2\u2217so that the classic techniques of decomposable regularizers [34] and null space property [35] may not be suitable here. Despite a substantial body of literature on high-dimensional element-wise sparse vector estimation based on restricted isometry property (RIP) [36, 37, 38, 39, 40] and restricted eigenvalue [14], these techniques cannot provide nearly optimal results for sparse group Lasso here as it is technically di\ufb03cult to partition general vectors into simultaneously element4 \fwise and group-wise ones that preserves some ordering structures. Departing from the previous literature, our theoretical analysis relies on a novel construction of approximate dual certi\ufb01cate. See Section 2.3 for further details. Although our results mostly focus on the performance of sparse group Lasso and \u21131 + \u21131,2 estimators, the techniques of approximate dual certi\ufb01cate on multi-norm structures here can also be of independent interest. The statistical properties of sparse group Lasso and related estimators have been studied previously. 
For example, [5] developed consistency results for estimators with a general treestructured norm regularizers, of which the sparse group Lasso is a special case. [41] analyzed the asymptotic behaviors of the adaptive sparse group Lasso estimator. [4, 42] studied the multitask learning and classi\ufb01cation problems based on a variant of sparse group Lasso estimator. [12] studied multivariate linear regression via sparse group Lasso. [43] provided a theoretical framework for developing error bounds of the group Lasso, sparse group Lasso, and group Lasso with tree structured overlapping groups. Speci\ufb01cally, their results imply that the group-wise sparse signal can be exactly recovered with high probability by solving (5) if the sample size satis\ufb01es n \u2273sg (b + log d). Di\ufb00erent from previous results, this paper focused on both the required sample size and convergence rate of estimation error of sparse group Lasso. To the best of our knowledge, this is the \ufb01rst paper that provides optimal theoretical guarantees for both the sample complexity and estimation error of sparse group Lasso. 1.3 Organization of the Paper The rest of the article is organized as follows. After a brief introduction to notation and preliminaries in Section 2.1, the main theoretical results on constrained \u21131 + \u21131,2 minimization in the noiseless setting is presented in Section 2.2 and the key proof ideas are explained in Section 2.3. Results for sparse group Lasso in the noisy setting are discussed in Section 3. In particular, the optimal rate of estimation error and statistical inference are studied in Sections 3.1 and 3.2, respectively. In Section 4.1, we introduce a practical scheme to select tuning parameters. In Section 4.2, we provide simulation results in both noiseless and noisy cases to justify our theoretical \ufb01ndings. The proofs of technical results are given in Section 6. All technical lemmas and their proofs can be found in Appendix A. 
2 ℓ1 + ℓ1,2 Minimization in Noiseless Case

2.1 Notation and Preliminaries

The following notation will be used throughout the paper. We denote $a \wedge b = \min\{a, b\}$ and $a \vee b = \max\{a, b\}$. Let $\mathrm{sgn}(\cdot)$ be the sign function, i.e., $\mathrm{sgn}(x) = 1$, $0$, or $-1$ if $x > 0$, $x = 0$, or $x < 0$, respectively. $H_\alpha(\cdot)$ is the soft-thresholding function such that $H_\alpha(x) = \mathrm{sgn}(x) \cdot \{(|x| - \alpha) \vee 0\}$ for any $x \in \mathbb{R}$. We say $a \lesssim b$ and $a \gtrsim b$ if $a \le C b$ and $b \le C a$, respectively, for some uniform constant $C > 0$; $a \asymp b$ means that $a \lesssim b$ and $a \gtrsim b$ both hold. The uppercase $C, C_1, C_0, \ldots$ and lowercase $c, c_1, c_0, \ldots$ denote large and small positive constants, respectively, whose actual values may vary from time to time. Throughout the paper, the parameter index set $\{1, \ldots, p\}$ is partitioned into $d$ groups. Denote by $(1), \ldots, (d) \subseteq \{1, \ldots, p\}$ the index sets belonging to each group. Additionally, for any group index subset $G \subseteq \{1, \ldots, d\}$, define $(G) = \cup_{j \in G} (j)$ and $(G^c) = \cup_{j \notin G} (j)$. For any vector $\gamma$ and index subset $T$, $\gamma_T \in \mathbb{R}^{|T|}$ represents the sub-vector of $\gamma$ with index set $T$. In particular, $\gamma_{(G)}$ represents the sub-vector of $\gamma$ on the union of groups $j \in G$. Define the $\ell_q$ norm of any vector $\gamma$ as $\|\gamma\|_q = (\sum_i |\gamma_i|^q)^{1/q}$. For any vector $\gamma \in \mathbb{R}^p$ with group structure, we also define the $\ell_{q_1, q_2}$ norm for any $0 \le q_1, q_2 \le \infty$ as
$$\|\gamma\|_{q_1, q_2} = \left(\sum_{j=1}^d \|\gamma_{(j)}\|_{q_2}^{q_1}\right)^{1/q_1} = \left\{\sum_{j=1}^d \left(\sum_{i \in (j)} |\gamma_i|^{q_2}\right)^{q_1/q_2}\right\}^{1/q_1}.$$
In particular, $\|\gamma\|_{0,2} = \sum_{j=1}^d 1_{\{\gamma_{(j)} \neq 0\}}$ is the number of non-zero groups of $\gamma$, $\|\gamma\|_{\infty,2} = \max_j \|\gamma_{(j)}\|_2$ is the maximum $\ell_2$ norm among all groups of $\gamma$, and $\|\gamma\|_{1,2} = \sum_{j=1}^d \|\gamma_{(j)}\|_2$ is the group-wise $\ell_1$ penalty. With a slight abuse of notation, we simply write $\|\gamma_T\|_{q_1, q_2} = \|u\|_{q_1, q_2}$, where $u \in \mathbb{R}^p$ agrees with $\gamma$ on $T$ and is zero on $T^c$. The focus of this paper is on simultaneously element-wise and group-wise sparse vectors, defined as follows.

Definition 1 (Simultaneous element-wise and group-wise sparsity) Assume $\beta^* \in \mathbb{R}^p$ is associated with the group partition $(1), \ldots, (d)$. For positive integers $s, s_g$ satisfying $s_g \le d$ and $s_g \le s \le \max_{\Omega \subseteq \{1, \ldots, d\}, |\Omega| = s_g} \sum_{i \in \Omega} b_i$, we say $\beta^*$ is $(s, s_g)$-sparse if
$$\|\beta^*\|_{0,2} = \sum_{j=1}^d 1_{\{\beta^*_{(j)} \neq 0\}} \le s_g, \qquad \|\beta^*\|_0 = \sum_i 1_{\{\beta^*_i \neq 0\}} \le s.$$

2.2 Noiseless Case and Sample Complexity

To analyze the performance of sparse group Lasso and $\ell_1 + \ell_{1,2}$ minimization, we first introduce the following assumption on the design matrix $X$.

Assumption 1 (Sub-Gaussian assumption) Suppose all rows of $X$ are i.i.d. centered sub-Gaussian. Specifically, $E X_{i\cdot} = 0$, $\mathrm{Var}(X_{i\cdot}^\top) = \Sigma$, and for any $\alpha \in \mathbb{R}^p$, we have $E \exp\left(\alpha^\top \Sigma^{-1/2} X_{i\cdot}^\top\right) \le \exp\left(\kappa^2 \|\alpha\|_2^2 / 2\right)$ for a constant $\kappa > 0$. We also assume there exist two constants $C_{\max} \ge c_{\min} > 0$ such that $c_{\min} \le \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) \le C_{\max}$, where $\sigma_{\max}(\Sigma)$ and $\sigma_{\min}(\Sigma)$ are the largest and smallest eigenvalues of $\Sigma$, respectively.

Clearly, a random matrix $X$ with i.i.d.
standard normal entries satis\ufb01es this assumption \u2013 this design is referred to as the Gaussian ensemble and has been considered as a benchmark setting in compressed sensing and high-dimensional regression literature [44, 45]. The following theorem shows that the \u21131 +\u21131,2 minimization achieves the exact recovery with high probability when \u03b2\u2217is simultaneously element-wise and group-wise sparse, X is weakly dependent, and Assumption 1 holds. The theorem also provides a more general upper bound on estimation error if \u03b2\u2217is approximately element-wise and group-wise sparse. Theorem 1 (\u21131 + \u21131,2 minimization in noiseless case) Suppose one observes y = X\u03b2\u2217, where X has the group structure (2) and satis\ufb01es Assumption 1, \u03b2\u2217is (s, sg)-sparse, and b = max1\u2264i\u2264d bi. Let T be the support of \u03b2\u2217. Suppose there exist uniform constants C, c > 0 such that n \u2265C (sg log(d/sg) + s log(esgb)) , (8) max i\u2208T c \r \r \r\u03a3i,T \u03a3\u22121 T,T \r \r \r 2 \u2264c/\u221as, (9) then the constrained \u21131 + \u21131,2 minimization (5) with \u03bbg = p s/sg\u03bb achieves the exact recovery with probability at least 1 \u2212C exp(\u2212cn/s). Moreover, if \u03b2\u2217\u2208Rp is a general vector and \u02c6 \u03b2 is the solution to the constrained \u21131 + \u21131,2 minimization (5) with \u03bbg = p s/sg\u03bb, then \u2225\u02c6 \u03b2 \u2212\u03b2\u2217\u22252 \u2272 min S: \u2225\u03b2\u2217 S\u22250\u2264s,\u2225\u03b2\u2217 S\u22250,2\u2264sg, maxi\u2208Sc \u2225\u03a3i,S\u03a3\u22121 S,S\u22252\u2264c/\u221as \u0012 1 \u221as\u2225\u03b2\u2217 Sc\u22251 + 1 \u221asg \u2225\u03b2\u2217 Sc\u22251,2 \u0013 . (10) 7 \fwith probability at least 1 \u2212C exp(\u2212cn/s). Remark 1 (Interpretation and comparison) In Theorem 1, the required sample size for achieving exact recovery contains two terms: sg log(d/sg) and s log(esgb). 
Intuitively speaking, $s_g \log(d/s_g)$ corresponds to the complexity of identifying the $s_g$ non-zero groups, and $s \log(e s_g b)$ corresponds to the complexity of estimating the $s$ non-zero elements of $\beta$ within $s_g$ known groups. When $\beta^*$ is only element-wise or only group-wise sparse, one can apply the classic $\ell_1$ or $\ell_{1,2}$ minimization, respectively, to recover $\beta^*$:
$$\hat{\beta}_{\ell_1} = \arg\min_\beta \|\beta\|_1 \quad \text{subject to } y = X\beta, \quad (11)$$
$$\hat{\beta}_{\ell_{1,2}} = \arg\min_\beta \|\beta\|_{1,2} \quad \text{subject to } y = X\beta. \quad (12)$$
The $\ell_1$ minimization and $\ell_{1,2}$ minimization here are special forms of the regular Lasso and the group Lasso, respectively (letting $\lambda, \lambda_g \to 0^+$ in (6) and (7)). In particular, if the group sizes satisfy $b_1 \asymp \cdots \asymp b_d \asymp b$, to ensure exact recovery in the noiseless setting with high probability, (11) requires $n \gtrsim s \log(ebd/s)$ [46] and the group Lasso requires $n \gtrsim s_g (b + \log(ed/s_g))$. The $\ell_1 + \ell_{1,2}$ minimization (5) has provable advantages over both the regular and group Lasso when $b \gg \log(d) \gg \log(e s_g b)$ and $s_g b / \log(e s_g b) \gg s \gg s_g$. In particular, when $s_g = s$, the double sparse regression reduces to the vanilla sparse linear regression, and the upper bound (10) matches the classic upper bound for $\ell_1$ minimization [44]. In addition, Condition (9) is an important technical condition used in our theoretical analysis. Next, we consider the sample complexity lower bound. Suppose $b_1 = b_2 = \cdots = b_d$ and $d \ge 2 s_g$. Recall that one observes $y = X\beta^*$ without noise and aims to estimate the $(s, s_g)$-sparse vector $\beta^*$ based on $y$ and $X$. As indicated by classic results in compressed sensing [47], with sufficient computing power, the $\ell_0$ minimization below achieves exact recovery of $\beta^*$:
$$\hat{\beta}_{\ell_0} = \arg\min \|\beta\|_0 \quad \text{subject to } X\beta = y \quad (13)$$
as long as $X$ is non-degenerate and $n \ge 2s$.
This bound is actually sharp: when n < 2s, for any set T \u2286{1, . . . , db} with cardinality 2s, one can \ufb01nd a vector \u03b3 such that supp(\u03b3) \u2286T and 8 \fX\u03b3 = 0. By choosing an appropriate T, we can split the support \u03b3 to obtain two (s, sg)-sparse vectors \u03b21, \u03b22 satisfying \u03b21 + \u03b22 = \u03b3. Then, X\u03b21 = X(\u2212\u03b22) but there is no way to distinguish \u03b21 and \u03b22 merely based on X and y = X\u03b21 = X(\u2212\u03b22). However, the \u21130 minimization (13) is computational infeasible in practice while a larger sample size is required for applying more practical methods. The following theorem shows that by performing the convex \u21131 regularization, \u21131,2 regularization, or any weighted combination of them, one requires at least \u2126(sg log(d/sg) + s log(esgb/s)) observations to ensure exact recovery of (s, sg)-sparse vectors. Theorem 2 (Sample complexity lower bound for exact recovery) Suppose b1 = \u00b7 \u00b7 \u00b7 = bd = b, d, b \u22653. Suppose X is an n-by-(db) matrix. If every (2s, 2sg)-sparse vector \u03b2 \u2208Rdb is a minimizer of the following programming for some (\u03bb, \u03bbg) \u2208{(\u03bb, \u03bbg) : \u03bb, \u03bbg \u22650, \u03bb + \u03bbg > 0}: min z \u03bb\u2225z\u22251 + \u03bbg\u2225z\u22251,2 subject to Xz = y = X\u03b2. In other words, if the \u21131 + \u21131,2 minimization exactly recover all (2s, 2sg)-sparse vector \u03b2, then we must have n \u2273sg log(d/sg) + s log(esgb/s). The following sample complexity lower bound shows that for arbitrary methods, to ensure stable estimation of all approximately sparse vectors, one requires at least \u2126(sg log(d/sg) + s log(esgb/s)) observations. Theorem 3 (Sample complexity lower bound for stable estimation) Suppose b1 = \u00b7 \u00b7 \u00b7 = bd = b, b, d \u22653. 
Assume there exists a matrix X \u2208Rn\u00d7(bd), a map \u2206: Rn \u2192Rbd (\u2206may depend on X), and a constant C > 0 satisfying \u2225\u03b2 \u2212\u2206(X\u03b2)\u22252 \u2264C \u0012\u2225\u03b2\u22251 \u221as + \u2225\u03b2\u22251,2 \u221asg \u0013 (14) for all \u03b2 \u2208Rp and some s, sg satisfying d \u2265sg, sgb \u2265s \u2265sg. There exists constants C0 and c0 that depend only on C such that whenever sg \u2265C0, we must have n \u2265c0(sg log(d/sg) + s log(esgb/s)). Remark 2 (Optimality and comparison with previous results) Theorems 2 and 3 show that the sample complexity upper bound in Theorem 1 is rate-optimal under a weak condition: 9 \flog(esgb) \u224dlog(esgb) \u2212log(s) or log(d) \u22652s log(s)/sg. Oymak, et al. [23] provided a general analysis for convex regularization of simultaneously structured parameter estimation. Speci\ufb01cally for the high-dimensional double sparse regression, a direct application of their Theorem 3.2 and Corollary 3.1 implies that if \u21131 + \u21131,2 minimization can exactly recover (s, sg)-sparse vector \u03b2\u2217with a constant probability, one must have n \u2273s. We can see that Theorem 2 provides a sharper lower bound on sample complexity. In addition, by setting sg = s, the lower bound in Theorems 2 and 3 reduces to n \u2273s log(p/s), which matches the optimal sample complexity lower bound for exact recovery of s-sparse vectors [46, Theorem10.11, Proposition 10.7]. By setting s = sgb, we obtain a sample complexity lower bound n \u2273sg(b + log(d/sg)) for (approximate) sg-group-wise sparse vector recovery and stable estimation. To the best of our knowledge, this is the \ufb01rst sample complexity lower bound for group Lasso. 2.3 Proof Sketches We brie\ufb02y discuss the proof sketches of the main technical results in this section. The detailed proofs are postponed to Section 6. The proof of Theorem 1 is based on a novel dual certi\ufb01cate scheme. 
The dual certi\ufb01cate [48] has been used in the theoretical analysis for various convex optimization methods in highdimensional problems, such as matrix completion [49, 50], compressed sensing [44], robust PCA [51], tensor completion [52], etc. The high-dimensional double sparse linear regression exhibits di\ufb00erent aspects from these previous works due to the simultaneous sparsity structure. In particular, we can show that if the uet de\ufb01ned below is in the row space of X, it can be used as an exact dual certi\ufb01cate for recovery of (s, sg)-sparse vector \u03b2\u2217: uet = vet + wet \u2208Rp, \uf8f1 \uf8f2 \uf8f3 (vet)(j) = p s/sg\u03b2\u2217 (j)/\u2225\u03b2\u2217 (j)\u22252, j \u2208G; \u2225(vet)(j)\u22252 < p s/sg, j \u2208Gc; \uf8f1 \uf8f2 \uf8f3 (wet)T = sgn(\u03b2\u2217 T ) \u2225(wet)T c\u2225\u221e< 1. (15) Here, T and G are the element-wise and group-wise supports of \u03b2\u2217: T = {i : \u03b2i \u0338= 0} \u2286{1, . . . , p}, G = {j : \u03b2(j) \u0338= 0} \u2286{1, . . . , d}. Roughly speaking, uet is the sub-gradient of objective function (5) evaluated at \u03b2 = \u03b2\u2217. If uet 10 \fis in the row space of X, the sub-gradient will be perpendicular to the feasible set of (5), which implies that \u03b2\u2217is the unique minimizer of \u21131 + \u21131,2 minimization (5). For more general vector \u03b2\u2217that does not necessarily have a sparse support T or G, we consider the following (s, sg)-sparse approximation: \u03b2ap = arg min S 1 \u221as\u2225\u03b2\u2217 Sc\u22251 + 1 \u221asg \u2225\u03b2\u2217 Sc\u22251,2 subject to \u2225\u03b2\u2217 S\u22250 \u2264s. \u2225\u03b2\u2217 S\u22250,2 \u2264sg, max i\u2208Sc \u2225\u03a3i,S\u03a3\u22121 S,S\u22252 \u2264c/\u221as. (16) Let T = {i : \u03b2ap i \u0338= 0} and G = {j : (\u03b2ap)(j) \u0338= 0} be the element-wise and group-wise supports of \u03b2ap. 
De\ufb01ne e u0 = e v0 + e w0 \u2208Rp, \uf8f1 \uf8f2 \uf8f3 (e v0)(j) = p s/sg\u03b2\u2217 T,(j)/\u2225\u03b2\u2217 T,(j)\u22252, j \u2208G; \u2225(e v0)(j)\u22252 < p s/sg, j \u2208Gc; \uf8f1 \uf8f2 \uf8f3 ( e w0)T = sgn(\u03b2\u2217 T ) \u2225( e w0)T c\u2225\u221e< 1. (17) Here \u03b2\u2217 T,(j) \u2208Rbj is the subvector \u03b2\u2217restricted on the j-th group with all entries in T c set to zero. Similarly to the exactly sparse case, if e u0 is in the row space of X and the true \u03b2\u2217is approximately (s, sg)-sparse, the minimizer of (5) will be close to \u03b2\u2217. However, it is often di\ufb03cult to \ufb01nd an exact dual certi\ufb01cate that lies in the row space of X and satis\ufb01es stringent conditions in (15) or (17). We instead propose to analyze via the approximate dual certi\ufb01cate de\ufb01ned as (18) in the following lemma. Lemma 1 (Approximate dual certi\ufb01cate for sparse group Lasso) Suppose T, G are elementwise and group-wise support de\ufb01ned in (16). e u0 is de\ufb01ned in (17). Assume X satis\ufb01es \u03c3min \u0000X\u22a4 T XT /n \u0001 \u2265 cmin/2. If there exists u \u2208Rp in the row span of X satisfying \u2225uT \u2212(e u0)T \u22252 \u00b7 max i\u2208T c \r \r \rX\u22a4 T Xi/n \r \r \r 2 \u2264cmin/8, \u2225H1/2(u(Gc))\u2225\u221e,2 \u2264\u221as0/2, \u2225u(G)\\T \u2225\u221e\u22641/2, (18) Then the conclusion of Theorem 1 (10) holds with probability at least 1 \u22122e\u2212cn. Here, H1/2(\u00b7) is the soft-thresholding operator de\ufb01ned at the beginning of Section 2. If we additionally assume \u03b2\u2217is (s, sg)-sparse, then \u03b2\u2217is the unique solution to the sparse group \u21131 + \u21131,2 minimization (5) with probability at least 1 \u22122e\u2212cn. 11 \fLemma 1 shows that the conclusion of Theorem 1 holds if there exists an approximate dual certi\ufb01cate u satisfying the condition (18). 
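As a concrete illustration of the construction in display (15) (and its analogue (17)), the following hypothetical helper (ours, for illustration only) assembles the certificate's values on the support of an exactly sparse $\beta^*$; extending it off the support within the stated norm bounds, while remaining in the row space of $X$, is exactly what Lemmas 1 and 2 address:

```python
import numpy as np

def certificate_on_support(beta, groups, s, sg):
    """Build u = v + w of display (15), restricted to the support of beta:
    v_(j) = sqrt(s/sg) * beta_(j)/||beta_(j)||_2 on active groups, w_T = sgn(beta_T).
    Off-support coordinates are left at zero here."""
    v = np.zeros_like(beta, dtype=float)
    for g in groups:
        nrm = np.linalg.norm(beta[g])
        if nrm > 0:
            v[g] = np.sqrt(s / sg) * beta[g] / nrm   # unit group direction, scaled
    w = np.sign(beta)                                # sgn(beta) on T, zero on T^c
    return v + w
```

For a $(2, 1)$-sparse toy vector with one active group, each active group of the $v$ part has $\ell_2$ norm exactly $\sqrt{s/s_g}$, matching the first condition in (15).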
The following lemma shows that, under the assumptions in Theorem 1, one can \ufb01nd such an approximate dual certi\ufb01cate with high probability. Lemma 2 Suppose X has group structure (2) and satis\ufb01es Assumption 1. Recall \u03c3min(X\u22a4 T XT /n) is the least eigenvalue of X\u22a4 T XT /n. Then \u03c3min \u0000X\u22a4 T XT /n \u0001 \u22651/2 and (18) holds with probability at least 1 \u2212Ce\u2212cn/s, where T is de\ufb01ned in (16). Another key technical tool to the proof of Theorem 1 is the following Lemma, which shows that X satis\ufb01es the restricted isometry property for all simultaneously element-wise and groupwise sparse vectors with high probability when there are enough samples. Lemma 3 If n \u2265C(sg log(d/sg) + s log(esgb)), cmin 2 \u2225\u03b3\u22252 2 \u22641 n\u2225X\u03b3\u22252 2 \u2264(Cmax + cmin 2 )\u2225\u03b3\u22252 2, \u2200\u03b3 \u2208{\u03b3 \u2208Rp : \u2225\u03b3\u22250 \u22642s, \u2225\u03b3\u22250,2 \u22642sg} (19) with probability at least 1 \u22122e\u2212cn. Next we brie\ufb02y discuss the proof of Theorem 2. Consider the quotient space Rdb/ker(X) = {[\u03b3] := x + ker(X), \u03b3 \u2208Rdb} and de\ufb01ne an associated norm as \u2225[\u03b3]\u2225= infv\u2208ker(X){\u03bb\u2225\u03b3 \u2212v\u22251 + \u03bbg\u2225\u03b3\u2212v\u22251,2}. We show that there exist N di\ufb00erent (s, sg)-sparse vectors \u03b2(1), . . . , \u03b2(N) such that log(N) \u224ds log(esgb/s)+sg log(d/sg) and \u2225[\u03b2(i)]\u2225= 1, \u2225[\u03b2(i)]\u2212[\u03b2(j)]\u2225\u22652/9 for all 1 \u2264i \u0338= j \u2264N. By a property of the packing number and the fact that dim(Rdb/ker(X)) \u2264n, we must have N \u226410n. Thus n \u2273log(N) \u224ds log(esgb/s) + sg log(d/sg). We prove Theorem 3 by contradiction. Assume that n < c0 (s log(esgb/s) + sg log(d/sg)) (20) for a su\ufb03ciently small constant c0. 
Let $\|\cdot\| = \|\cdot\|_1 + \sqrt{s/s_g}\,\|\cdot\|_{1,2}$ and let $B = \{x \in \mathbb{R}^{db} : \|x\| \le 1\}$ be the unit ball associated with $\|\cdot\|$. Define
$$
d_n(B, \mathbb{R}^p) = \inf_{\substack{L_n \text{ is a subspace of } \mathbb{R}^p \\ \text{with } \dim(\mathbb{R}^p/L_n) \le n}}\ \left\{ \sup_{\beta \in B \cap L_n} \|\beta\|_2 \right\}.
$$
We have $d_n(B, \mathbb{R}^p) \le \frac{C}{\sqrt{s}}$ by the assumption of this theorem. We can also show that there exists a uniform constant $c > 0$ such that
$$
d_n(B, \mathbb{R}^p) \ge c\min\left\{ \frac{1}{\sqrt{s_0}},\ \left[ \left( \frac{s_g}{s}\log\left( \frac{cs}{s_g}\cdot\frac{d\log(es_gb/s)}{n} \right) + \log(es_gb/s) \right)\Big/n \right]^{1/2} \right\}.
$$
The previous two inequalities and (20) together imply that
$$
n \ge c\left( s_g\log\left( \frac{cs}{s_g}\cdot\frac{d\log(es_gb/s)}{n} \right) + s\log(es_gb/s) \right) \ge c_0\big(s\log(es_gb/s) + s_g\log(d/s_g)\big) > n.
$$
This contradiction shows that $n \ge c_0(s\log(es_gb/s) + s_g\log(d/s_g))$.

3 Sparse Group Lasso in the Noisy Case

We now turn to the noisy case.

3.1 Optimal Rate of Estimation Error of Sparse Group Lasso

When observations are noisy, we have the following theoretical guarantee for the sparse group Lasso.

Theorem 4 (Upper bound of estimation error) Suppose $y = X\beta^* + \varepsilon$, $X$ satisfies Assumption 1, $n \ge C(s_g\log(d/s_g) + s\log(es_gb))$ for some uniform constant $C > 0$, $\varepsilon \overset{iid}{\sim} N(0, \sigma^2)$, and $b = \max_{1\le i\le d} b_i$. Then the sparse group Lasso estimator (4) with $\lambda = C\sigma\sqrt{(s\log(es_gb) + s_g\log(ed/s_g))n/s}$ and $\lambda_g = \sqrt{s/s_g}\,\lambda$ satisfies
$$
\|\hat\beta - \beta^*\|_2 \lesssim \min_{\substack{S:\ \|\beta^*_S\|_0 \le s,\ \|\beta^*_S\|_{0,2} \le s_g,\\ \max_{i\in S^c}\|\Sigma_{i,S}\Sigma_{S,S}^{-1}\|_2 \le c/\sqrt{s}}} \left\{ \sqrt{\frac{\sigma^2(s_g\log(d/s_g) + s\log(es_gb))}{n}} + \frac{\|\beta^*_{S^c}\|_1}{\sqrt{s}} + \frac{\|\beta^*_{S^c}\|_{1,2}}{\sqrt{s_g}} \right\}
$$
with probability at least $1 - C\exp\big(-C\frac{s\log(es_gb) + s_g\log(d/s_g)}{s}\big)$.
In particular, if $\beta^*$ is exactly $(s, s_g)$-sparse and $\max_{i\in T^c}\|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2 \le c/\sqrt{s}$ holds, then
$$
\|\hat\beta - \beta^*\|_2^2 \lesssim \frac{\sigma^2(s_g\log(d/s_g) + s\log(es_gb))}{n}
\tag{21}
$$
with probability at least $1 - C\exp\big(-C\frac{s\log(es_gb) + s_g\log(d/s_g)}{s}\big)$.

In addition, we focus on the following class of simultaneously element-wise and group-wise sparse vectors,
$$
F_{s,s_g} = \{\beta : \|\beta\|_0 \le s,\ \|\beta\|_{0,2} \le s_g\}.
$$
The following minimax lower bound on the estimation error holds.

Theorem 5 (Lower bound of estimation error) Suppose $X$ satisfies Assumption 1, $b_1 = \cdots = b_d = b$, and $d, b \ge 3$. Then we have
$$
\inf_{\hat\beta}\sup_{\beta \in F_{s,s_g}} \mathbb{E}\|\hat\beta - \beta\|_2^2 \gtrsim \frac{\sigma^2(s_g\log(ed/s_g) + s\log(es_gb/s))}{n}.
$$

Remark 3 Theorems 4 and 5 together show that the sparse group Lasso yields the minimax optimal rate of convergence as long as the following condition holds: $\log(es_gb) \asymp \log(es_gb) - \log(s)$, or $\log(d) \gtrsim s\log(s)/s_g$.

Remark 4 We briefly discuss the main proof ideas of Theorem 5 here. First, we randomly generate a series of subsets $\Omega^{(i)} \subseteq \{1, \ldots, p\}$ as feasible supports of $(s, s_g)$-sparse vectors. Then, we prove by a probabilistic argument that there exist $N$ subsets $\{\Omega^{(i)}\}_{i=1}^N$ with $\log(N) \gtrsim s_g\log(d/s_g) + s\log(es_gb/s)$ such that $|\Omega^{(i)} \cap \Omega^{(j)}| < 8s_g\lfloor s/s_g\rfloor/9$ for any $i < j$. Next, we construct a series of candidate $(s, s_g)$-sparse vectors $\beta^{(i)}$ such that $\beta^{(i)}_k = \tau 1_{\{k \in \Omega^{(i)}\}}$. Intuitively speaking, by such a construction $\{\beta^{(i)}\}_{i=1}^N$ are indistinguishable based only on the observations $(y, X)$. Theorem 5 then follows by choosing an appropriate $\tau$ and applying the generalized Fano's lemma.

3.2 Statistical Inference via Debiased Sparse Group Lasso

We further consider statistical inference for $\beta^*$ under the double sparse linear regression model.
First, let $\hat\beta$ be the sparse group Lasso estimator given by (4). Inspired by recent advances in inference for high-dimensional linear regression [30, 53, 31, 33], we propose the following debiased sparse group Lasso estimator,
$$
\hat\beta^u = \hat\beta + \frac{1}{n}\hat MX^\top\big(Y - X\hat\beta\big).
\tag{22}
$$
Here, $\hat\Sigma = \frac{1}{n}\sum_{k=1}^n X_kX_k^\top$ is the sample covariance matrix and $\hat M = [\hat m_1 \cdots \hat m_p]^\top$ is an approximation of the inverse covariance matrix $\Sigma^{-1}$, where $\hat m_i$ is the solution to the following convex optimization,
$$
\text{minimize } m^\top\hat\Sigma m \quad \text{subject to } \|H_\alpha(\hat\Sigma m - e_i)\|_{\infty,2} \le \gamma.
\tag{23}
$$
Here, $H_\alpha$ is the soft-thresholding operator with thresholding level $\alpha$ defined at the beginning of Section 2 and $e_i$ is the $i$-th vector in the canonical basis of $\mathbb{R}^p$. The following theorem establishes an asymptotic result for the debiased sparse group Lasso.

Theorem 6 (Asymptotic distribution of debiased sparse group Lasso) Suppose $\beta^* \in \mathbb{R}^p$ is $(s, s_g)$-sparse, $X \in \mathbb{R}^{n\times p}$ satisfies Assumption 1, and $\max_{i\in T^c}\|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2 \le c/\sqrt{s}$. Set $\lambda = C\sigma\sqrt{\frac{(s\log(es_gb) + s_g\log(d/s_g))n}{s}}$ and $\lambda_g = \sqrt{\frac{s}{s_g}}\lambda$ in (4), and $\alpha = \frac{\lambda}{n\sigma}$, $\gamma = \sqrt{\frac{s}{s_g}}\frac{\lambda}{n\sigma}$ in (23). Then with probability at least $1 - C\exp\big(-C\frac{s\log(es_gb) + s_g\log(d/s_g)}{s}\big)$, the debiased sparse group Lasso estimator $\hat\beta^u$ can be decomposed as
$$
\sqrt{n}\big(\hat\beta^u - \beta^*\big) = \Delta + w, \quad \text{where } \|\Delta\|_\infty \le \frac{C(s\log(es_gb) + s_g\log(ed/s_g))}{\sqrt{n}}\sigma, \quad w\,|\,X \sim N\big(0, \sigma^2\hat M\hat\Sigma\hat M^\top\big).
$$
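To make the construction concrete, the following is a minimal sketch of the elementwise soft-thresholding operator $H_\alpha$ (consistent with the property $|H_{1/2}(x) - x| \le 1/2$ used later in Section 6) and of the debiasing step (22) with the studentization of Theorem 6. Here `M_hat` is a user-supplied placeholder for the precision-matrix approximation: solving program (23) itself would require a separate convex-optimization routine not shown here.

```python
import numpy as np

def soft_threshold(x, alpha):
    """Elementwise soft-thresholding: H_alpha(x)_i = sign(x_i) * max(|x_i| - alpha, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

def debiased_estimate(X, y, beta_hat, M_hat):
    """Debiasing step (22): beta_u = beta_hat + M_hat X^T (y - X beta_hat) / n."""
    n = X.shape[0]
    return beta_hat + M_hat @ X.T @ (y - X @ beta_hat) / n

def confidence_interval(i, beta_u, M_hat, Sigma_hat, sigma_hat, n, z=1.96):
    """Asymptotic ~95% CI for beta*_i using the variance m_i^T Sigma_hat m_i / n."""
    se = sigma_hat * np.sqrt(M_hat[i] @ Sigma_hat @ M_hat[i] / n)
    return beta_u[i] - z * se, beta_u[i] + z * se
```

A quick sanity check of the design: when the residual $Y - X\hat\beta$ is zero, the debiasing step leaves $\hat\beta$ unchanged, whatever $\hat M$ is.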
(24)

In particular, if $\sqrt{n} \gg s\log(es_gb) + s_g\log(ed/s_g)$, then for any $1 \le i \le p$,
$$
\frac{\sqrt{n}\big(\hat\beta^u_i - \beta^*_i\big)}{\sqrt{\hat m_i^\top\hat\Sigma\hat m_i}} \to N\big(0, \sigma^2\big).
\tag{25}
$$

Remark 5 Equation (25) provides a method to construct confidence intervals for $\beta^*$. Specifically, if $\hat\sigma$ is a consistent estimator of $\sigma$, such as the scaled sparse group Lasso to be discussed in Section 5, then
$$
\left[ \hat\beta^u_i - \Phi^{-1}(1-\alpha/2)\,\hat\sigma\sqrt{\frac{\hat m_i^\top\hat\Sigma\hat m_i}{n}},\ \ \hat\beta^u_i + \Phi^{-1}(1-\alpha/2)\,\hat\sigma\sqrt{\frac{\hat m_i^\top\hat\Sigma\hat m_i}{n}} \right]
$$
would be an asymptotic $(1-\alpha)$-confidence interval for $\beta^*_i$. We can see that the debiased sparse group Lasso estimator has a provable advantage in sample complexity ($n \gg (s\log(es_gb) + s_g\log(ed/s_g))^2$) over the ones via the debiased Lasso ($n \gg (s\log p)^2$; see [30, 31, 33]) or the debiased group Lasso ($n \gg (s_gb + s_g\log p)^2$; see [32]) for constructing asymptotic confidence intervals of $\beta^*$.

4 Simulation Studies

In this section, we investigate the numerical performance of the sparse group Lasso and $\ell_1 + \ell_{1,2}$ minimization for double sparse regression. The results support our theoretical findings in Sections 2 and 3. We first discuss the practical choice of the tuning parameters used in the proposed algorithms.

4.1 Practical Selection of Tuning Parameters

By introducing $\tau$ as a surrogate for $(\lambda_g/\lambda)^2$, we can rewrite the $\ell_1 + \ell_{1,2}$ minimization and the sparse group Lasso as
$$
\hat\beta = \arg\min \|\beta\|_1 + \sqrt{\tau}\|\beta\|_{1,2} \quad \text{subject to } y = X\beta,
\tag{26}
$$
$$
\hat\beta = \arg\min_\beta \|y - X\beta\|_2^2 + \lambda\|\beta\|_1 + \lambda\sqrt{\tau}\|\beta\|_{1,2}.
$$
(27)

As suggested by Theorems 1 and 4, the theoretical choice of the tuning parameters $(\lambda, \tau)$ relies on $\sigma$, $s$, and $s_g$ in the sparse group Lasso and the $\ell_1 + \ell_{1,2}$ minimization for double sparse regression. These values, however, are usually unknown in practice. In addition, the theoretical values of the tuning parameters may not achieve the best finite-sample numerical performance. We thus introduce in this section a data-driven approach to tuning parameter selection using $K$-fold cross-validation.

We first discuss how to select $\tau$ in the $\ell_1 + \ell_{1,2}$ minimization (26). Recall $n$ is the sample size, $p$ is the total number of covariates, $d$ is the number of groups, $b_1, \ldots, b_d$ are the numbers of covariates in each group, and $b = \max_j b_j$. Since the theoretical value is $\tau = s/s_g$ and $s/s_g$ must satisfy $1 \le s/s_g \le b$, for a given integer $L \ge 1$ we introduce a grid
$$
S_0 = \{b^{(l-1)/(L-1)} : 1 \le l \le L\}
\tag{28}
$$
as a set of candidate values for $\tau$. Here, the grid size $L$ can be set to a typical value of 10, or a larger value if more computing power is available. We split the data $\{X_i, y_i\}_{i=1}^n$ into $K$ groups. For $1 \le k \le K$, let $J_k \subset \{1, \ldots, n\}$ be the index set of the $k$th group and $J_k^c = \{1, \ldots, n\}\setminus J_k$. For each $\tau \in S_0$, we solve
$$
\hat\beta^{(k)}(\tau) = \arg\min \|\beta\|_1 + \sqrt{\tau}\|\beta\|_{1,2} \quad \text{subject to } y_{J_k^c} = X_{[J_k^c,:]}\beta
$$
and calculate the prediction error
$$
\hat R(\tau) = \sum_{k=1}^K\sum_{j\in J_k}\big(y_j - X_{[j,:]}\hat\beta^{(k)}(\tau)\big)^2.
$$
Let $\tau^*$ be the minimizer of the prediction error: $\tau^* = \arg\min_{\tau\in S_0}\hat R(\tau)$. Then, the final estimator $\hat\beta$ is calculated using (26) with $\tau^*$.

Then we consider the sparse group Lasso (27), which includes two tuning parameters $(\tau, \lambda)$. We still define $S_0$ in (28) as the grid of candidate values of $\tau$.
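The grid (28) and the $K$-fold selection of $\tau$ can be sketched as follows; `solver` stands in for any routine returning the minimizer of (26) at a given $\tau$ (a hypothetical placeholder, e.g. a call into an SGL implementation):

```python
import numpy as np

def tau_grid(b, L=10):
    """Grid (28): L geometrically spaced candidates b^((l-1)/(L-1)),
    covering the feasible range 1 <= s/s_g <= b of the theoretical tau."""
    return np.array([b ** ((l - 1) / (L - 1)) for l in range(1, L + 1)])

def select_tau(X, y, b, solver, K=10, L=10, seed=0):
    """Pick tau in S_0 minimizing the K-fold prediction error
    R(tau) = sum_k sum_{j in J_k} (y_j - X[j,:] @ beta^{(k)}(tau))^2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = np.array_split(rng.permutation(n), K)  # index sets J_1, ..., J_K
    S0 = tau_grid(b, L)
    errs = []
    for tau in S0:
        err = 0.0
        for J_k in folds:
            train = np.setdiff1d(np.arange(n), J_k)   # J_k^c
            beta_k = solver(X[train], y[train], tau)  # fit on J_k^c
            err += np.sum((y[J_k] - X[J_k] @ beta_k) ** 2)
        errs.append(err)
    return S0[int(np.argmin(errs))]
```

The same loop structure carries over to the two-parameter $(\tau, \lambda)$ search described next.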
Following the idea in [11, Section 3.3], for each $\tau \in S_0$ we begin with a large value $\lambda_{\max}(\tau)$ so that $\hat\beta$, the outcome of the sparse group Lasso (27) with tuning parameters $(\tau, \lambda_{\max}(\tau))$, is zero (this can be achieved by the SGL package, https://cran.r-project.org/web/packages/SGL/index.html). Let $\lambda_{\min}(\tau)$ be a small fraction of $\lambda_{\max}(\tau)$ (e.g., $\lambda_{\min} = 0.1\lambda_{\max}$ as suggested in [11, Section 5]). Then we define
$$
\Lambda(\tau) = \big\{ \{\lambda_{\min}(\tau)\}^{(L-l)/(L-1)}\cdot\{\lambda_{\max}(\tau)\}^{(l-1)/(L-1)} : l = 1, \ldots, L \big\}.
$$
Next, we split the data $\{X_i, y_i\}_{i=1}^n$ into $K$ groups. For $1 \le k \le K$, let $J_k \subset \{1, \ldots, n\}$ be the index set of the $k$th group and $J_k^c = \{1, \ldots, n\}\setminus J_k$. For each $\tau \in S_0$, $\lambda \in \Lambda(\tau)$, and $k \in \{1, \ldots, K\}$, we solve
$$
\hat\beta^{(k)}(\tau, \lambda) = \arg\min_\beta \big\|y_{J_k^c} - X_{[J_k^c,:]}\beta\big\|_2^2 + \lambda\|\beta\|_1 + \lambda\sqrt{\tau}\|\beta\|_{1,2}
$$
and calculate the prediction error
$$
\hat R(\tau, \lambda) = \sum_{k=1}^K\sum_{j\in J_k}\big(y_j - X_{[j,:]}\hat\beta^{(k)}(\tau, \lambda)\big)^2.
$$
Let $(\tau^*, \lambda^*)$ be the minimizer of the prediction error: $(\tau^*, \lambda^*) = \arg\min_{\tau\in S_0,\lambda\in\Lambda(\tau)}\hat R(\tau, \lambda)$. The final estimator $\hat\beta$ is calculated using (27) with $(\tau^*, \lambda^*)$. In the simulation studies below, we examine the performance of this cross-validation scheme with $K = L = 10$ and $\lambda_{\min} = 0.1\lambda_{\max}$.

4.2 Numerical Results

We begin by considering the sample complexity for exact recovery in the noiseless case. Suppose all group sizes are equal ($b_1 = \cdots = b_d = b$) and the number of observations $n$ varies from 5 to 200. We consider four simulation designs with (1) $d = 60$, $b = 20$, $s_g = 1$; (2) $d = 100$, $b = 30$, $s_g = 2$; (3) $d = b = 20$, $s_g = 1$; and (4) $d = b = 40$, $s_g = 1$.
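The geometric grid $\Lambda(\tau)$ from Section 4.1 and a minimal solver for the penalized objective (27) can be sketched as below. The solver is a plain proximal-gradient iteration, assuming the standard composite prox for the $\ell_1$ plus group-$\ell_2$ penalty (elementwise soft-thresholding followed by group-wise shrinkage); it is a sketch only, and in practice one would use the SGL package.

```python
import numpy as np

def lambda_grid(lam_max, L=10, frac=0.1):
    """Geometric grid Lambda = { lam_min^((L-l)/(L-1)) * lam_max^((l-1)/(L-1)) },
    with lam_min = frac * lam_max."""
    lam_min = frac * lam_max
    return np.array([lam_min ** ((L - l) / (L - 1)) * lam_max ** ((l - 1) / (L - 1))
                     for l in range(1, L + 1)])

def sparse_group_lasso(X, y, groups, lam, lam_g, n_iter=500):
    """Proximal gradient for min ||y - Xb||_2^2 + lam ||b||_1 + lam_g sum_j ||b_(j)||_2."""
    step = 0.5 / np.linalg.norm(X, 2) ** 2   # 1/L with L = 2 * sigma_max(X)^2
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = beta - step * 2 * X.T @ (X @ beta - y)                 # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # l1 prox
        beta = np.zeros_like(z)
        for g in groups:                                           # group-wise shrinkage
            nrm = np.linalg.norm(z[g])
            if nrm > step * lam_g:
                beta[g] = (1 - step * lam_g / nrm) * z[g]
    return beta
```

Groups with small aggregate signal are zeroed out entirely, while surviving groups keep elementwise sparsity, matching the double sparse structure.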
For each setting, we randomly draw $X \in \mathbb{R}^{n\times db}$ with i.i.d. standard normal entries, construct the fixed vector $\beta^* \in \mathbb{R}^{db}$ satisfying
$$
\beta^*_{(j)} = \begin{cases} (1, 2, 3, 4, 5, 0, \ldots, 0) \in \mathbb{R}^b, & j = 1, \ldots, s_g; \\ 0, & j = s_g + 1, \ldots, d, \end{cases}
$$
and generate $y = X\beta^* = \sum_{j=1}^{s_g}X_{(j)}\beta^*_{(j)}$. We implement the $\ell_1 + \ell_{1,2}$ minimization (5) with $\lambda_g = \sqrt{s/s_g}\,\lambda$ (SGL), $\ell_1$ minimization (11) (Lasso), $\ell_{1,2}$ minimization (12) (Group Lasso), and $\ell_1 + \ell_{1,2}$ minimization (5) with the tuning parameter $\lambda_g/\lambda$ selected using the cross-validation scheme discussed in Section 4.1 (SGL CV). An exact recovery of $\beta^*$ is considered successful if $\|\hat\beta - \beta^*\|_2 \le 10^{-4}$. The successful recovery rate based on 100 replicates is shown in Figure 1. It can be seen that SGL and SGL CV have comparable performance, and both have significantly better performance than Lasso and Group Lasso. This is in line with our theoretical results.

Figure 1: Exact recovery rate in the noiseless case. Panels: (a) $d = 60$, $b = 20$, $s_g = 1$; (b) $d = 100$, $b = 30$, $s_g = 2$; (c) $d = 20$, $b = 20$, $s_g = 1$; (d) $d = 40$, $b = 40$, $s_g = 1$.

Then we consider the noisy case and focus on the average estimation errors of different methods. We generate
$$
y = X\beta^* + \varepsilon = \sum_{j=1}^{s_g}X_{(j)}\beta^*_{(j)} + \varepsilon,
$$
where $X$ and $\beta^*$ are drawn in the same way as in the previous setting and $\varepsilon \overset{iid}{\sim} N(0, 0.1^2)$. We consider four designs: (i) $d = 60$, $b = 20$, $s_g = 1$; (ii) $d = 100$, $b = 30$, $s_g = 2$; (iii) $d = b = 20$, $s_g = 1$; and (iv) $d = b = 40$, $s_g = 2$. For each case, the number of observations $n$ is chosen from an equally spaced sequence from 5 to 200 and the simulation is replicated 500 times.
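The simulation designs above (both the noiseless and the noisy cases) can be generated as follows; the seed and RNG choice are arbitrary, and `noise_sd = 0.1` reproduces the noisy setting.

```python
import numpy as np

def make_design(n, d, b, s_g, noise_sd=0.0, seed=0):
    """Draw X with iid N(0,1) entries; the first s_g groups of beta* equal
    (1,2,3,4,5,0,...,0) in R^b (requires b >= 5), the remaining groups are zero."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d * b))
    beta = np.zeros(d * b)
    for j in range(s_g):
        beta[j * b: j * b + 5] = [1, 2, 3, 4, 5]
    y = X @ beta + noise_sd * rng.standard_normal(n)
    return X, beta, y

def exact_recovery(beta_hat, beta_star, tol=1e-4):
    """Declare a successful exact recovery if ||beta_hat - beta*||_2 <= 1e-4."""
    return np.linalg.norm(beta_hat - beta_star) <= tol
```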
We compare the average estimation errors of (a) SGL_CV1: sparse group Lasso with the theoretical value $\lambda_g = \sqrt{s/s_g}\,\lambda$ and $\lambda$ selected via cross-validation; (b) SGL_package: sparse group Lasso via the SGL package in R (https://cran.r-project.org/web/packages/SGL/index.html) with the option of automatic tuning parameter selection; (c) Lasso: regular Lasso with the tuning parameter selected via cross-validation; (d) group Lasso: group Lasso with the tuning parameter selected via cross-validation; (e) SGL_CV2: sparse group Lasso with both $\lambda$ and $\lambda_g$ selected using the proposed cross-validation scheme. We can see that the proposed method SGL_CV2 achieves smaller estimation error than all other methods, including SGL_CV1, the focus of our theory. These experimental results demonstrate our theory and the applicability of the proposed cross-validation scheme.

5 Discussions

In this paper, we study high-dimensional double sparse regression and investigate the theoretical properties of the sparse group Lasso and $\ell_1 + \ell_{1,2}$ minimization. In particular, we develop matching upper and lower bounds on the sample complexity for $\ell_1 + \ell_{1,2}$ minimization in the noiseless case. We also prove that the sparse group Lasso achieves the minimax optimal rate of convergence in a range of settings in the noisy case. Our results give an affirmative answer to the open question on high-dimensional statistical inference for simultaneously structured models: by introducing both $\ell_1$ and $\ell_{1,2}$ penalties, one can achieve better performance in estimation and statistical inference for simultaneously element-wise and group-wise sparse vectors.

In addition to $\beta^*$, the estimation of and inference for the noise level $\sigma$ is another important task in high-dimensional double sparse regression.
Figure 2: Average estimation error in the noisy case. Panels: (a) $d = 60$, $b = 20$, $s_g = 1$; (b) $d = 100$, $b = 30$, $s_g = 2$; (c) $d = 20$, $b = 20$, $s_g = 1$; (d) $d = 40$, $b = 40$, $s_g = 2$.

Motivated by the recent development of the scaled Lasso [55], one may consider the following scaled sparse group Lasso estimator:
$$
\{\hat\beta^s, \hat\sigma\} = \arg\min_{\beta\in\mathbb{R}^p,\,\sigma>0}\left\{ \frac{\|y - X\beta\|_2^2}{\sigma} + n\sigma + \tilde\lambda\|\beta\|_1 + \tilde\lambda_g\|\beta\|_{1,2} \right\},
$$
where $\tilde\lambda$ and $\tilde\lambda_g$ are tuning parameters that do not rely on $\sigma$. The consistency of $\hat\sigma$ can be established based on ideas similar to those for the scaled Lasso in the literature [55, 31] and the approximate dual certificate in this work.

Moreover, our technical results can be useful in a variety of other problems with simultaneous sparsity structures. For example, [56, 57] considered the estimation of piece-wise constant sparse signals, i.e., both the signal vector and the difference between successive entries of the signal vector are sparse. [58, 59] discussed the estimation of structured parameters where both the number of non-zero elements and the number of distinct values of the parameter vector are small. [60] considered the estimation of matrices with simultaneous sparsity structures within each block and among different blocks. It is interesting to further study the statistical limits, including the sample complexity and the minimax optimal rate of convergence, for these problems. In particular, based on the specific sparsity structures of each problem, one can introduce corresponding multi-objective regularizers and convex regularization methods. The corresponding approximate dual certificates can then be proposed, constructed, and analyzed to provide strong theoretical guarantees.

6 Proofs

We collect the proofs of the technical results in this section.

6.1 Proof of Lemma 1

Let $T$ satisfy (16).
For convenience, denote $s_0 = s/s_g$ and decompose $u$ as $u = v + w$, where
$$
v_i = \begin{cases} u_i - \sqrt{s_0}\,\beta^*_i/\|\beta^*_{T,(j)}\|_2, & i \in T,\ i \in (j); \\ u_i, & i \in (G)\setminus T; \\ u_i - H_{1/2}(u_i), & i \in (G^c); \end{cases}
\qquad
w_{(j)} = \begin{cases} \sqrt{s_0}\,\beta^*_{T,(j)}/\|\beta^*_{T,(j)}\|_2, & j \in G; \\ H_{1/2}(u_{(j)}), & j \notin G. \end{cases}
\tag{29}
$$
Note that $|H_{1/2}(x) - x| \le 1/2$ for any $x \in \mathbb{R}$. Based on the property (18), $\|u_{(G)\setminus T}\|_\infty \le 1/2$, so
$$
\max_{i\in T^c}|v_i| \le 1/2, \qquad \|v_T - \operatorname{sgn}(\beta^*_T)\|_2 = \|u_T - (\tilde u_0)_T\|_2 \le \frac{c_{\min}}{8\max_{i\in T^c}\|X_T^\top X_i/n\|_2};
\tag{30}
$$
$$
w_{(j)} = \sqrt{s_0}\,\beta^*_{T,(j)}/\|\beta^*_{T,(j)}\|_2 \ \text{ if } j \in G; \qquad \|w_{(j)}\|_2 \le \sqrt{s_0}/2 \ \text{ if } j \notin G.
\tag{31}
$$
Suppose $\hat\beta$ is the minimizer of (5) and let $h = \hat\beta - \beta^*$. Based on the sub-differentials of $\|\beta\|_1$ and $\|\beta\|_{1,2}$, we have
$$
\begin{aligned}
P(\hat\beta) &= \|\hat\beta\|_1 + \sqrt{s_0}\|\hat\beta\|_{1,2} = \|\beta^* + h\|_1 + \sqrt{s_0}\|\beta^* + h\|_{1,2} \\
&\ge \|\beta^*_T\|_1 + \operatorname{sgn}(\beta^*_T)^\top h_T + \|h_{T^c}\|_1 + \sqrt{s_0}\left( \|\beta^*_T\|_{1,2} + \sum_{j\in G}\frac{\beta^{*\top}_{T,(j)}h_{(j)}}{\|\beta^*_{T,(j)}\|_2} + \sum_{j\notin G}\|h_{(j)}\|_2 \right) - \|\beta^*_{T^c}\|_1 - \sqrt{s_0}\|\beta^*_{T^c}\|_{1,2} \\
&\ge P(\beta^*) + \|h_{T^c}\|_1 + \sqrt{s_0}\|h_{(G^c)}\|_{1,2} + \operatorname{sgn}(\beta^*_T)^\top h_T + \sum_{j\in G}\frac{\sqrt{s_0}\,\beta^{*\top}_{T,(j)}h_{(j)}}{\|\beta^*_{T,(j)}\|_2} - 2\|\beta^*_{T^c}\|_1 - 2\sqrt{s_0}\|\beta^*_{T^c}\|_{1,2}.
\end{aligned}
$$
(32)

The last inequality comes from $\|\beta^*\|_1 = \|\beta^*_T\|_1 + \|\beta^*_{T^c}\|_1$ and $\|\beta^*\|_{1,2} \le \|\beta^*_T\|_{1,2} + \|\beta^*_{T^c}\|_{1,2}$. In particular, given $Xh = 0$ and that $u$ lies in the row span of $X$, we have $v^\top h + w^\top h = u^\top h = 0$. Therefore,
$$
\begin{aligned}
&\operatorname{sgn}(\beta^*_T)^\top h_T + \sum_{j\in G}\frac{\sqrt{s_0}\,\beta^{*\top}_{T,(j)}h_{(j)}}{\|\beta^*_{T,(j)}\|_2} = \operatorname{sgn}(\beta^*_T)^\top h_T - v^\top h + \sum_{j\in G}\frac{\sqrt{s_0}\,\beta^{*\top}_{T,(j)}h_{(j)}}{\|\beta^*_{T,(j)}\|_2} - w^\top h \\
&= -(v_T - \operatorname{sgn}(\beta^*_T))^\top h_T - v_{T^c}^\top h_{T^c} - \sum_{j\in G}\big(w_{(j)} - \sqrt{s_0}\,\beta^*_{T,(j)}/\|\beta^*_{T,(j)}\|_2\big)^\top h_{(j)} - (w_{(G^c)})^\top h_{(G^c)} \\
&\ge -\|v_T - \operatorname{sgn}(\beta^*_T)\|_2\|h_T\|_2 - \|v_{T^c}\|_\infty\|h_{T^c}\|_1 - \max_{j\in G}\big\|w_{(j)} - \sqrt{s_0}\,\beta^*_{T,(j)}/\|\beta^*_{T,(j)}\|_2\big\|_2\,\|h_{(G)}\|_{1,2} - \|w_{(G^c)}\|_{\infty,2}\|h_{(G^c)}\|_{1,2} \\
&\overset{(30)(31)}{\ge} -\|v_T - \operatorname{sgn}(\beta^*_T)\|_2\,\|h_T\|_2 - \|h_{T^c}\|_1/2 - \sqrt{s_0}\|h_{(G^c)}\|_{1,2}/2.
\end{aligned}
\tag{33}
$$
Next, note that $h = h_T + h_{T^c}$; since $Xh = 0$, we must have $X_Th_T = -X_{T^c}h_{T^c}$, so
$$
\|h_T\|_2 = \big\|(X_T^\top X_T/n)^{-1}X_T^\top X_Th_T/n\big\|_2 \le \sigma_{\min}^{-1}(X_T^\top X_T/n)\,\big\|X_T^\top X_{T^c}h_{T^c}/n\big\|_2 \le \frac{2}{c_{\min}}\cdot\max_{i\in T^c}\|X_T^\top X_i/n\|_2\cdot\|h_{T^c}\|_1.
\tag{34}
$$
Combining (30), (33), and (34), one obtains
$$
\operatorname{sgn}(\beta^*_T)^\top h_T + \sum_{j\in G}\frac{\sqrt{s_0}\,\beta^{*\top}_{T,(j)}h_{(j)}}{\|\beta^*_{T,(j)}\|_2} \ge -\frac{3}{4}\|h_{T^c}\|_1 - \frac{\sqrt{s_0}}{2}\|h_{(G^c)}\|_{1,2}.
$$
Plugging this inequality into (32), we finally have
$$
P(\hat\beta) \ge P(\beta^*) + \|h_{T^c}\|_1/4 + \sqrt{s_0}\|h_{(G^c)}\|_{1,2}/2 - 2\|\beta^*_{T^c}\|_1 - 2\sqrt{s_0}\|\beta^*_{T^c}\|_{1,2}.
$$
Since $\hat\beta$ is the minimizer of (5), we must have $P(\hat\beta) \le P(\beta^*)$, so
$$
\|h_{T^c}\|_1/4 + \sqrt{s_0}\|h_{(G^c)}\|_{1,2}/2 \le 2\|\beta^*_{T^c}\|_1 + 2\sqrt{s_0}\|\beta^*_{T^c}\|_{1,2}.
\tag{35}
$$
If $\beta^*$ is $(s, s_g)$-sparse, we immediately have $h_{T^c} = 0$. Then $0 = X_T^\top Xh = (X_T^\top X_T)h_T$. By $\sigma_{\min}(X_T^\top X_T/n) \ge c_{\min}/2$, we know $X_T^\top X_T/n$ is non-singular, hence $h_T = 0$.

Now, we consider the general case. Without loss of generality, suppose $G = \{1, \ldots, g\}$, where $g \le s_g$. Denote by $T_1$ the indices of the $s$ largest entries of $h_{(G)\setminus T}$, by $T_2$ the indices of the $s$ largest entries of $h_{(G)\setminus[T\cup T_1]}$, and so on. For $g + 1 \le i \le d$, denote by $S_{i,1}$ the indices of the $\lfloor s/s_g\rfloor$ largest entries of $h_{(i)}$, by $S_{i,2}$ the indices of the $\lfloor s/s_g\rfloor$ largest entries of $h_{(i)\setminus S_{i,1}}$, and so on. Let $\tilde S_1, \ldots, \tilde S_{\sum_{i=g+1}^d\lceil b_i/\lfloor s/s_g\rfloor\rceil}$ be an arrangement of the $S_{i,j}$ ($1 \le j \le \lceil b_i/\lfloor s/s_g\rfloor\rceil$, $g + 1 \le i \le d$) such that
$$
\|h_{\tilde S_1}\|_2^2 \ge \cdots \ge \big\|h_{\tilde S_{\sum_{i=g+1}^d\lceil b_i/\lfloor s/s_g\rfloor\rceil}}\big\|_2^2.
$$
Let $R_1 = \cup_{i=1}^{s_g}\tilde S_i$, $R_2 = \cup_{i=s_g+1}^{2s_g}\tilde S_i$, and so on. Then $(T_1, T_2, \ldots, R_1, R_2, \ldots)$ is a partition of $T^c$, and $|T_i|, |R_j| \le s$, $|g(T_i)|, |g(R_j)| \le s_g$, where $g(S) = \{i_1, \ldots, i_k\}$ if $S \subseteq \cup_{j=1}^k(i_j)$ and $S \cap (i_j)$ is non-empty for all $1 \le j \le k$. Let $\tilde T = T\cup T_1\cup R_1$. If (19) holds, then
$$
\frac{c_{\min}}{2}\|h_{\tilde T}\|_2^2 \le \frac{1}{n}\|X_{\tilde T}h_{\tilde T}\|_2^2 = \frac{1}{n}\langle X_{\tilde T}h_{\tilde T}, Xh\rangle - \frac{1}{n}\langle X_{\tilde T}h_{\tilde T}, X_{\tilde T^c}h_{\tilde T^c}\rangle.
\tag{36}
$$
Since $Xh = 0$, we have
$$
\langle X_{\tilde T}h_{\tilde T}, Xh\rangle = 0.
$$
(37)

Now, we consider $|\langle X_{\tilde T}h_{\tilde T}, X_{\tilde T^c}h_{\tilde T^c}\rangle|$. By the triangle inequality,
$$
|\langle X_{\tilde T}h_{\tilde T}, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le |\langle X_Th_T, X_{\tilde T^c}h_{\tilde T^c}\rangle| + |\langle X_{T_1}h_{T_1}, X_{\tilde T^c}h_{\tilde T^c}\rangle| + |\langle X_{R_1}h_{R_1}, X_{\tilde T^c}h_{\tilde T^c}\rangle|.
$$
The triangle inequality also shows that
$$
|\langle X_Th_T, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le \sum_{i\ge 2}|\langle X_Th_T, X_{T_i}h_{T_i}\rangle| + \sum_{j\ge 2}|\langle X_Th_T, X_{R_j}h_{R_j}\rangle|.
$$
Combining the parallelogram identity and (19), we have
$$
|\langle X_Th_T, X_{T_i}h_{T_i}\rangle| \le C_{\max}n\|h_T\|_2\|h_{T_i}\|_2, \qquad |\langle X_Th_T, X_{R_j}h_{R_j}\rangle| \le C_{\max}n\|h_T\|_2\|h_{R_j}\|_2.
$$
Thus,
$$
|\langle X_Th_T, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le C_{\max}n\|h_T\|_2\Big(\sum_{i\ge 2}\|h_{T_i}\|_2 + \sum_{j\ge 2}\|h_{R_j}\|_2\Big).
\tag{38}
$$
By (3.10) in [37], we have
$$
\sum_{i\ge 2}\|h_{T_i}\|_2 \le s^{-1/2}\|h_{(G)\setminus T}\|_1,
\tag{39}
$$
and
$$
\sum_{j\ge 2}\|h_{R_j}\|_2 = \sum_{j\ge 2}\Big(\sum_{i=(j-1)s_g+1}^{js_g}\|h_{\tilde S_i}\|_2^2\Big)^{1/2} \le \sum_{j\ge 2}\sqrt{s_g}\,\|h_{\tilde S_{(j-1)s_g}}\|_2 \le \sum_{j\ge 2}\sqrt{s_g}\sum_{i=(j-2)s_g+1}^{(j-1)s_g}\frac{\|h_{\tilde S_i}\|_2}{s_g} = s_g^{-1/2}\sum_k\|h_{\tilde S_k}\|_2 = s_g^{-1/2}\sum_{i=g+1}^d\sum_j\|h_{S_{i,j}}\|_2.
$$
For all $g + 1 \le i \le d$, applying (3.10) in [37] again,
$$
\sum_{j\ge 2}\|h_{S_{i,j}}\|_2 \le (\lfloor s/s_g\rfloor)^{-1/2}\|h_{(i)}\|_1 \le \sqrt{2}(s/s_g)^{-1/2}\|h_{(i)}\|_1.
$$
Moreover, by the definition of $S_{i,1}$,
$$
\sum_{i=g+1}^d\|h_{S_{i,1}}\|_2 \le \sum_{i=g+1}^d\|h_{(i)}\|_2 = \|h_{(G^c)}\|_{1,2}.
$$
Therefore,
$$
\sum_{j\ge 2}\|h_{R_j}\|_2 \le s_g^{-1/2}\Big(\sum_{i=g+1}^d\sqrt{2}(s/s_g)^{-1/2}\|h_{(i)}\|_1\Big) + s_g^{-1/2}\|h_{(G^c)}\|_{1,2} = \sqrt{2}s^{-1/2}\|h_{(G^c)}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}.
$$
(40)

Combining (38), (39), and (40), if (19) holds we have
$$
|\langle X_Th_T, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le C_{\max}n\|h_T\|_2\big(s^{-1/2}\|h_{(G)\setminus T}\|_1 + \sqrt{2}s^{-1/2}\|h_{(G^c)}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big) \le C_{\max}n\|h_T\|_2\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big).
$$
Similarly, if (19) holds, then $|\langle X_{T_1}h_{T_1}, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le C_{\max}n\|h_{T_1}\|_2(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2})$ and $|\langle X_{R_1}h_{R_1}, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le C_{\max}n\|h_{R_1}\|_2(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2})$. Thus, with probability at least $1 - 2e^{-cn}$,
$$
|\langle X_{\tilde T}h_{\tilde T}, X_{\tilde T^c}h_{\tilde T^c}\rangle| \le C_{\max}n(\|h_T\|_2 + \|h_{T_1}\|_2 + \|h_{R_1}\|_2)\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big) \le \sqrt{3}C_{\max}n\|h_{\tilde T}\|_2\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big).
\tag{41}
$$
The last inequality holds due to the Cauchy-Schwarz inequality. Combining (36), (37), (41), and Lemma 3, we know that with probability at least $1 - 2e^{-cn}$,
$$
\frac{c_{\min}}{2}\|h_{\tilde T}\|_2^2 \le \sqrt{3}C_{\max}\|h_{\tilde T}\|_2\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big),
$$
i.e., with probability at least $1 - 2e^{-cn}$,
$$
\|h_{\tilde T}\|_2 \le \frac{2\sqrt{3}C_{\max}}{c_{\min}}\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big).
$$
Finally, by (35), (39), (40), and the previous inequality, with probability at least $1 - 2e^{-cn}$,
$$
\begin{aligned}
\|h\|_2 &\le \|h_{\tilde T}\|_2 + \sum_{i\ge 2}\|h_{T_i}\|_2 + \sum_{j\ge 2}\|h_{R_j}\|_2 \\
&\le \frac{2\sqrt{3}C_{\max}}{c_{\min}}\big(\sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2}\big) + \sqrt{2}s^{-1/2}\|h_{T^c}\|_1 + s_g^{-1/2}\|h_{(G^c)}\|_{1,2} \\
&\le C\Big(\frac{1}{\sqrt{s}}\|\beta^*_{T^c}\|_1 + \frac{1}{\sqrt{s_g}}\|\beta^*_{T^c}\|_{1,2}\Big).
\end{aligned}
$$
In summary, we have finished the proof of this lemma. $\square$

6.2 Proof of Lemma 2

Let $T$ satisfy (16). Given $\|\beta^*_T\|_{0,2} \le s_g$, without loss of generality we assume that $\beta^*_{T,(s_g+1)} = \cdots = \beta^*_{T,(d)} = 0$. We also denote by $T_{(j)}$ the support of $\beta^*_{T,(j)}$. First, applying Lemma 6 Part 3 with
$$
v \in \mathbb{R}^p,\quad v_k = \begin{cases}1, & k = i,\\ 0, & k \ne i,\end{cases} \qquad U \in \mathbb{R}^{p\times|T|} = \mathbb{R}^{(\sum_{i=1}^d b_i)\times|T|},\quad U_{[T,:]} = I,\ U_{[T^c,:]} = 0,
$$
and noticing that $x\log(eu/x) \ge \log(eu)$ for all $1 \le x \le u$, we have
$$
\begin{aligned}
P\Big(\max_{i\in T^c}\|X_T^\top X_i/n\|_2 \ge 1/2\Big) &\le \sum_{i\in T^c}P\big(\|X_T^\top X_i/n\|_2 \ge 1/2\big) \\
&\le \sum_{i\in T^c}P\big(\|X_T^\top X_i/n - \mathbb{E}X_T^\top X_i/n\|_2 + \|\mathbb{E}X_T^\top X_i/n\|_2 \ge 1/2\big) \\
&\le \sum_{i\in T^c}P\big(\|X_T^\top X_i/n - \mathbb{E}X_T^\top X_i/n\|_2 \ge 1/2 - \|\Sigma_{T,T}\|\,\|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2\big) \\
&\le \sum_{i\in T^c}P\big(\|X_T^\top X_i/n - \mathbb{E}X_T^\top X_i/n\|_2 \ge 1/4\big) \\
&\le db\cdot C\exp(Cs - n) \le C\exp(\log(d) + \log(b) + Cs - n) \\
&\le C\exp(s_g\log(ed/s_g) + s\log(es_gb/s) + Cs - n) \le C\exp(-cn),
\end{aligned}
\tag{42}
$$
provided that $n \ge C(s\log(es_gb/s) + s_g\log(d/s_g))$ for some large constant $C > 0$.
Note that the fourth inequality comes from the facts that $\|\Sigma_{T,T}\| \le \|\Sigma\| \le C_{\max}$ and $\|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2 \le c/\sqrt{s} \le 1/(4C_{\max})$. By Lemma 7 Part 1, we also know
$$
\begin{aligned}
P\big(\sigma_{\min}(X_T^\top X_T/n) \le c_{\min}/2\big) &\le P\big(\|X_T^\top X_T/n - \Sigma_{T,T}\| \ge c_{\min}/2\big) \\
&\le P\big(\|X_T^\top X_T\Sigma_{T,T}^{-1}/n - I_{|T|}\|\,\|\Sigma_{T,T}\| \ge c_{\min}/2\big) \\
&\le P\big(\|X_T^\top X_T\Sigma_{T,T}^{-1}/n - I_{|T|}\| \ge c_{\min}/(2C_{\max})\big) \\
&\le C\exp(Cs - cn) \le C\exp(-cn).
\end{aligned}
$$
Next, we apply the well-known golfing scheme [50, 44] to find an approximate dual certificate $u$ that satisfies (18). Let $u_0 \in \mathbb{R}^p$,
$$
(u_0)_{(j)} = \begin{cases}\sqrt{s/s_g}\,\beta^*_{T,(j)}/\|\beta^*_{T,(j)}\|_2 + \operatorname{sgn}(\beta^*_{T,(j)}), & j \in G;\\ 0, & j \in G^c.\end{cases}
\tag{43}
$$
Immediately we have $(u_0)_T = (\tilde u_0)_T$. We divide the $n$ rows of $X$ into non-overlapping batches, say $X_{[I_1,:]}, X_{[I_2,:]}, \ldots$, with $|I_l| = n_l$. Here, $n_1, n_2, \ldots$ will be specified shortly. Consider the following sequences:
$$
\alpha_0 = u_0, \qquad \gamma_l = X_{[I_l,:]}^\top X_{[I_l,T]}\Sigma_{T,T}^{-1}/n_l\cdot(\alpha_{l-1})_T, \qquad \alpha_l = \alpha_{l-1} - \gamma_l, \qquad l = 1, 2, \ldots, l_{\max}.
\tag{44}
$$
Finally, the approximate dual certificate is defined as
$$
u = \sum_{l=1}^{l_{\max}}\gamma_l = \sum_{l=1}^{l_{\max}}X_{[I_l,:]}^\top X_{[I_l,T]}\Sigma_{T,T}^{-1}/n_l\cdot(\alpha_{l-1})_T.
\tag{45}
$$
From the inductive definition we can see that
$$
(\alpha_l)_T = \big(I - X_{[I_l,T]}^\top X_{[I_l,T]}\Sigma_{T,T}^{-1}/n_l\big)(\alpha_{l-1})_T, \qquad (\gamma_l)_{T^c} = X_{[I_l,T^c]}^\top X_{[I_l,T]}\Sigma_{T,T}^{-1}/n_l\cdot(\alpha_{l-1})_T, \qquad l = 1, 2, \ldots.
$$
Next, we apply the random matrix results (Lemmas 7 and 6) and obtain the following tail probabilities.
\u2022 if nl \u2265Cstl for large constant C > 0 and tl \u2265C, by Part 1 of Lemma 7, P \u0010 \u2225X\u22a4 [Il,T]X[Il,T]\u03a3\u22121 T,T /nl \u2212I|T|\u2225\u2265C p stl/nl \u0011 \u2264C exp Cs \u2212nl min ( stl nl , \u0012stl nl \u00131/2)! \u2264C exp (\u2212cstl) ; (46) 26 \f\u2022 Suppose ql\u22121 = (\u03b1l\u22121)T \u2208R|T| is independent of X[Il,:]. If nl \u2265C(s0 log(esgb/s)+log d) min{s0\u03b42 l ,\u221as0\u03b4l} for \u03b4l \u2265C maxi\u2208T c \u2225\u03a3i,T \u03a3\u22121 T,T \u22252 \u2265C(maxi\u2208T c \u2225\u03a3i,T \u03a3\u22121 T,T \u22252)\u2225\u03a3\u22121 T,T ql\u22121\u22252/\u2225ql\u22121\u22252, by Lemma 7 Part 2, P \u0012 max j\u2208Gc \r \r \rH\u2225ql\u22121\u22252\u03b4l \u0010 X\u22a4 [Il,(j)]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 ql\u22121 \u0011\r \r \r 2 \u2265\u221as0\u2225ql\u22121\u22252\u03b4l \u0013 \u2264 X j\u2208Gc P \u0010\r \r \rH\u2225ql\u22121\u22252\u03b4l \u0010 X\u22a4 [Il,(j)]X[Il,T](\u03a3\u22121 T,T ql\u22121)/nl \u0011\r \r \r 2 \u2265\u221as0\u2225ql\u22121\u22252\u03b4l \u0011 \u2264d \u00b7 \u0012 b \u2308s0\u2309 \u0013 exp Cs0 \u2212cnl min ( s0\u2225ql\u22121\u22252 2\u03b42 l \u03ba4\u2225\u03a3\u22121 T,T ql\u22121\u22252 2 , \u221as0\u2225ql\u22121\u22252\u03b4l \u03ba2\u2225\u03a3\u22121 T,T ql\u22121\u22252 )! + d \u00b7 \u0012 b \u230as0\u230b \u0013 exp Cs0 \u2212cnl min ( s0\u2225ql\u22121\u22252 2\u03b42 l \u03ba4\u2225\u03a3\u22121 T,T ql\u22121\u22252 2 , \u221as0\u2225ql\u22121\u22252\u03b4l \u03ba2\u2225\u03a3\u22121 T,T ql\u22121\u22252 )! \u22642d \u00b7 \u0012 eb \u230as0\u230b \u0013\u2308s0\u2309 exp \u0000Cs0 \u2212cnl min{s0\u03b42 l , \u221as0\u03b4l} \u0001 \u2264C exp \u0000log(d) + Cs0 log(2esgb/s) + Cs0 \u2212cnl min{s0\u03b42 l , \u221as0\u03b4l} \u0001 \u2264C exp(\u2212cnl min{s0\u03b42 l , \u221as0\u03b4l}); (47) The third inequality comes from \u2225\u03a3\u22121 T,T \u2225\u2264 1 cmin . \u2022 Suppose ql\u22121 = (\u03b1l\u22121)T \u2208R|T| is \ufb01xed. 
If $n_l\min\{\theta_l^2, \theta_l\} \ge C\log(es_gb)$ and $\theta_l \ge 2\max_{i\in T^c}\|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2$, then by Lemma 6 Part 2,
$$
\begin{aligned}
&P\big(\|X_{[I_l,(G)\setminus T]}^\top X_{[I_l,T]}\Sigma_{T,T}^{-1}/n_l\cdot q_{l-1}\|_\infty \ge \theta_l\|q_{l-1}\|_2\big) \\
&\le \sum_{i\in(G)\setminus T}P\big(|X_{[I_l,i]}^\top X_{[I_l,T]}/n_l\cdot(\Sigma_{T,T}^{-1}q_{l-1})| \ge \theta_l\|q_{l-1}\|_2\big) \\
&\le \sum_{i\in(G)\setminus T}P\big(|X_{[I_l,i]}^\top X_{[I_l,T]}/n_l\cdot(\Sigma_{T,T}^{-1}q_{l-1}) - \Sigma_{i,T}\Sigma_{T,T}^{-1}q_{l-1}| \ge \theta_l\|q_{l-1}\|_2 - |\Sigma_{i,T}\Sigma_{T,T}^{-1}q_{l-1}|\big) \\
&\le \sum_{i\in(G)\setminus T}P\big(|X_{[I_l,i]}^\top X_{[I_l,T]}/n_l\cdot(\Sigma_{T,T}^{-1}q_{l-1}) - \Sigma_{i,T}\Sigma_{T,T}^{-1}q_{l-1}| \ge \theta_l\|q_{l-1}\|_2 - \|\Sigma_{i,T}\Sigma_{T,T}^{-1}\|_2\|q_{l-1}\|_2\big) \\
&\le \sum_{i\in(G)\setminus T}P\Big(|X_{[I_l,i]}^\top X_{[I_l,T]}/n_l\cdot(\Sigma_{T,T}^{-1}q_{l-1}) - \Sigma_{i,T}\Sigma_{T,T}^{-1}q_{l-1}| \ge \frac{1}{2}\theta_l\|q_{l-1}\|_2\Big) \\
&\le \sum_{i\in(G)\setminus T}P\Big(|X_{[I_l,i]}^\top X_{[I_l,T]}/n_l\cdot(\Sigma_{T,T}^{-1}q_{l-1}) - \Sigma_{i,T}\Sigma_{T,T}^{-1}q_{l-1}| \ge \frac{c_{\min}}{2}\theta_l\|\Sigma_{T,T}^{-1}q_{l-1}\|_2\Big) \\
&\le s_gb\cdot C\exp\big(-cn_l\min\{\theta_l^2, \theta_l\}\big) = C\exp\big(\log(s_gb) - cn_l\min\{\theta_l^2, \theta_l\}\big) \le C\exp\big(-cn_l\min\{\theta_l^2, \theta_l\}\big).
\end{aligned}
$$
(48) 27 \fThen we specify {nl, tl, \u03b4l, \u03b8l}l\u22651 as follows, \u2022 n1 = n2 \u2265C(s log(esgb) + sg log(d/sg)), t1 = t2 = cn1/(s log(es)) \u2265C, \u03b41 = \u03b42 = 1/(16\u221as), \u03b81 = \u03b82 = 1/(16\u221as); \u2022 n3 = \u00b7 \u00b7 \u00b7 = nlmax \u224d n1 lmax\u22122 \u2265C(s log(esgb) + sg log(d/sg))/ log(es), t3 = \u00b7 \u00b7 \u00b7 = tlmax = cn3/s \u2265C, \u03b43 = \u00b7 \u00b7 \u00b7 = \u03b4lmax = log(es)/(16\u221as) \u2265max{(log(es)/s)1/2/16, log(es)\u221as0/(16s)}, \u03b83 = \u00b7 \u00b7 \u00b7 = \u03b8lmax = (log(es)/s)1/2/16, with lmax = \u2308C log(es)\u2309+ 2. We can see the following events happen \u2225X\u22a4 [Il,T]X[Il,T]\u03a3\u22121 T,T /nl \u2212I|T|\u2225\u2264C p stl/nl \u2264 p 1/ log(es), l = 1, 2; \u2225X\u22a4 [Il,T]X[Il,T]\u03a3\u22121 T,T /nl \u2212I|T|\u2225\u2264C p stl/nl \u22641/2, l = 3, . . . , lmax; (49) max j\u2208Gc \r \r \rH\u2225ql\u22121\u22252/(16\u221as) \u0010 X\u22a4 [Il,(j)]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 ql\u22121 \u0011\r \r \r 2 \u2264\u221as0\u2225ql\u22121\u22252/(16\u221as), l = 1, 2; max j\u2208Gc \r \r \rH\u2225ql\u22121\u22252\u00b7log(es)/(16\u221as) \u0010 X\u22a4 [Il,(j)]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 ql\u22121 \u0011\r \r \r 2 \u2264\u221as0\u2225ql\u22121\u22252 log(es)/(16\u221as), l = 3, . . . , lmax; (50) \r \r \rX\u22a4 [Il,(G)\\T]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 ql\u22121 \r \r \r \u221e\u2264\u2225ql\u22121\u22252/(16\u221as), l = 1, 2 \r \r \rX\u22a4 [Il,(G)\\T]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 ql\u22121 \r \r \r \u221e\u2264\u2225ql\u22121\u22252 \u00b7 (log(es)/s)1/2/16, l = 3, . . . , lmax. (51) with probability at least 1\u2212C log(es) exp(\u2212c n log(es))\u2212C log(es) exp \u0010 \u2212c n sg \u0011 \u2212C log(es) exp \u0000\u2212c n s \u0001 . 
By triangle inequality, u0 satis\ufb01es \u2225u0\u22252 \u2264 q s/sg \uf8eb \uf8edX j\u2208G \r \r \r \r \r \u03b2\u2217 T,(j) \u2225\u03b2\u2217 T,(j)\u22252 \r \r \r \r \r 2 2 \uf8f6 \uf8f8 1/2 + \u2225sgn(\u03b2\u2217 T )\u22252 \u22642\u221as. (52) When maxi\u2208T c \u2225X\u22a4 T Xi/n\u22252 \u22641 2 and (49)-(52) hold, we have \u2225q0\u22252 \u22642\u221as, \u2225q1\u22252 = \r \r \r(I|T| \u2212X\u22a4 I1,T XI1,T \u03a3\u22121 T,T /n1)q0 \r \r \r \u2264\u2225I|T| \u2212X\u22a4 I1,T XI1,T \u03a3\u22121 T,T /n1\u2225\u00b7 \u2225q0\u22252 \u22642 p s/ log(es); similarly, \u2225q2\u22252 \u2264\u2225q1\u22252/ p log(es) \u22642\u221as/(log(es)); \u2225ql\u22252 \u2264\u2225ql\u22121\u22252/2 \u2264\u00b7 \u00b7 \u00b7 \u2264\u2225q2\u2225/2l\u22122 \u226423\u2212l\u221as/(log(es)), l \u22653. (53) 28 \fFor large constant C > 0,\u2225qlmax\u22252 \u226423\u2212C log(es)\u221as/ log(es) \u2264cmin/8. Notice that uT = ( lmax X l=1 \u03b3l)T = ( lmax X l=1 (\u03b1l\u22121 \u2212\u03b1l))T = (\u03b10 \u2212\u03b1lmax)T = (e u0)T \u2212(qlmax)T , we know that \u2225uT \u2212(e u0)T \u22252 \u00b7 max i\u2208T c \u2225X\u22a4 T Xi/n\u22252 = \u2225qlmax\u22252 \u00b7 max i\u2208T c \u2225X\u22a4 T Xi/n\u22252 \u2264cmin 8 \u00b7 1 2 < cmin 8 . In addition, \u2225u(G)\\T \u2225\u221e\u2264 lmax X l=1 \r \r \rX\u22a4 [Il,(G)\\T]X[Il,T]\u03a3\u22121 T,T /nl \u00b7 (\u03b1l\u22121)T \r \r \r \u221e \u2264\u2225q0\u22252/(16\u221as) + \u2225q1\u22252/(16\u221as) + lmax X l=3 \u2225ql\u22121\u22252 \u00b7 (log(es)/s)1/2/16 \u22641/8 + 1/8 + \u221e X l=3 24\u2212l/16 \u22641/2. 
Since \u2225q0\u22252/(16\u221as) + \u2225q1\u22252/(16\u221as) + lmax X l=3 \u2225ql\u22121\u22252 \u00b7 log(es)/(16\u221as) \u22641 8 + 1 8 + lmax X l=3 24\u2212l\u221as/(log(es)) \u00b7 log(es)/(16\u221as) \u22641 2, \r \rH1/2(u(Gc)) \r \r \u221e,2 \u2264\u2225H\u2225q0\u22252/(16\u221as)+\u2225q1\u22252/(16\u221as)+Plmax l=3 \u2225ql\u22121\u22252\u00b7log(es)/(16\u221as)(u(Gc))\u2225\u221e,2 \u2264 2 X l=1 \r \r \rH\u2225ql\u22121\u22252/(16\u221as) \u0010 X\u22a4 [Il,(Gc)]X[Il,T c]ql\u22121 \u0011\r \r \r \u221e,2 + lmax X l=3 \r \r \rH\u2225ql\u22121\u22252\u00b7log(es)/(16\u221as) \u0010 X\u22a4 [Il,(Gc)]X[Il,T c]ql\u22121 \u0011\r \r \r \u221e,2 \u2264 2 X l=1 \u221as0\u2225ql\u22121\u22252/(16\u221as) + lmax X l=3 \u221as0\u2225ql\u22121\u22252 \u00b7 log(es)/(16\u221as) \u2264\u221as0/2. Thus, the construction of u satis\ufb01es all required condition in Lemma 1 with probability at least 1 \u2212C exp \u0000\u2212c n s \u0001 . This has \ufb01nished the proof of this lemma. \u25a1 29 \f6.3 Proof of Lemma 3 Let g(S) be the group support of set S, that is, g(S) = {i1, . . . , ik} if S \u2282\u222ak j=1(ij) and S \u2229(ij) are not empty for all 1 \u2264j \u2264k. 
Lemma 7 Part 1 and the union bound show that P \u0012 \u2203\u03b3 \u2208Rp, \u2225\u03b3\u22250 \u22642s, \u2225\u03b3\u22250,2 \u22642sg, 1 n\u2225X\u03b3\u22252 2 / \u2208 hcmin 2 \u2225\u03b3\u22252 2, (Cmin + cmin 2 )\u2225\u03b3\u22252 2 i\u0013 =P \u0012 \u2203x \u2208R2s\u2227p, S \u22861, \u00b7 \u00b7 \u00b7 , p, |S| = 2s \u2227p, |g(S)| \u22642sg, 1 n\u2225XSx\u22252 2 / \u2208 hcmin 2 \u2225\u03b3\u22252 2, (Cmin + cmin 2 )\u2225\u03b3\u22252 2 i\u0013 \u2264 X S\u2286{1,...,p},|S|=2s\u2227p,|g(S)|\u22642sg P \u0012 \u2200x \u2208R2s\u2227p, 1 n\u2225XSx\u22252 2 / \u2208 hcmin 2 \u2225\u03b3\u22252 2, (Cmin + cmin 2 )\u2225\u03b3\u22252 2 i\u0013 \u2264 X S\u2286{1,...,p},|S|=2s\u2227p,|g(S)|\u22642sg P \u0012 \u22251 nX\u22a4 S XS \u2212\u03a3S,S\u2225\u2265cmin 2 \u0013 \u2264 X S\u2286{1,...,p},|S|=2s\u2227p,|g(S)|\u22642sg P \u0012 \u22251 nX\u22a4 S XS\u03a3\u22121 S,S \u2212I|S|\u2225\u2265 cmin 2Cmax \u0013 \u2264 \u0014\u0012 d 2sg \u0013 \u22281 \u0015 \u00122sgb 2s \u0013 \u00b7 2 exp (Cs \u2212cn) \u2264 \u0012 ed 2sg \u00132sg \u0012e \u00b7 2sgb 2s \u00132s \u00b7 2 exp (Cs \u2212cn) \u22642 exp (2s log(esgb/s) + 2sg log(ed/sg) + Cs \u2212cn) \u22642e\u2212cn. \u25a1 6.4 Proof of Theorem 2 If d \u22653sg and b \u22653s/sg, by (80), we can \ufb01nd \u2126(1), . . . , \u2126(N) \u2282{1, . . . , db} such that |\u2126(i)| = sg\u230as/sg\u230b, |\u2126(i) (k)| = \u230as/sg\u230b1{\u2126(i) (k) is not empty} for all 1 \u2264i \u2264N, 1 \u2264k \u2264d, and \f \f \f\u2126(i) \u2229\u2126(j)\f \f \f \u22648sg\u230as/sg\u230b/9, 1 \u2264i \u0338= j \u2264N, (54) \f \f \f n k| \f \f \f\u2126(i) (k) \u2229\u2126(j) (k) \f \f \f \u22652\u230as/sg\u230b/3 o\f \f \f \u22642sg/3, 1 \u2264i \u0338= j \u2264N, (55) where N = \u0016\u0010 d 2 \u221a 2sg \u0011sg/3 \u0010 b 2 \u221a 2\u230as/sg\u230b \u0011s/9\u0017 . 
For any 1 \u2264j \u2264db, 1 \u2264i \u2264N, de\ufb01ne \u03b2(i) j = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 \u03bbsg\u230as/sg\u230b+\u03bbgsg\u221a \u230as/sg\u230b, j \u2208\u2126(i) 0 j / \u2208\u2126(i), 30 \fthen \u2225\u03b2(i)\u22250 \u2264s, \u2225\u03b2(i)\u22250,2 \u2264sg. We consider the quotient space Rdb/ker(X) = n [x] := x + ker(X), x \u2208Rdbo . Then the dimension of Rdb/ker(X) is rank(X) \u2264n. De\ufb01ne the norm \u2225[x]\u2225= infv\u2208ker(X){\u03bb\u2225x \u2212 v\u22251 + \u03bbg\u2225x \u2212v\u22251,2}. For any vector x \u2208Rdb satisfying \u2225x\u22250 \u22642s, \u2225x\u22250,2 \u22642sg, note that x \u2212v with v \u2208ker(X) satis\ufb01es X(x \u2212v) = Xx, by our assumption, we have \u2225[x]\u2225= \u03bb\u2225x\u22251 + \u03bbg\u2225x\u22251,2. Thus \u2225[\u03b2(1)]\u2225= \u00b7 \u00b7 \u00b7 = \u2225[\u03b2(N)]\u2225= 1. Moreover, by (54) and (55), \u2225\u03b2(i) \u2212\u03b2(j)\u22251 = 1 \u03bbsg\u230as/sg\u230b+ \u03bbgsg p \u230as/sg\u230b \u0010 |\u2126(i)| + |\u2126(j)| \u22122|\u2126(i) \u2229\u2126(j)| \u0011 \u2265 2sg\u230as/sg\u230b 9(\u03bbsg\u230as/sg\u230b+ \u03bbgsg p \u230as/sg\u230b) , and \u2225\u03b2(i) \u2212\u03b2(j)\u22251,2 = d X k=1 \u2225\u03b2(i) (k) \u2212\u03b2(j) (k)\u22252 \u2265 X k\u2208Si,j \u2225\u03b2(i) (k) \u2212\u03b2(j) (k)\u22252 \u2265 1 \u03bbsg\u230as/sg\u230b+ \u03bbgsg p \u230as/sg\u230b r 2\u230as/sg\u230b 3 \u00b7 |Si,j| \u2265 1 \u03bbsg\u230as/sg\u230b+ \u03bbgsg p \u230as/sg\u230b r 2\u230as/sg\u230b 3 \u00b7 sg 3 , where Si,j = n k|\u2126(i) (k), \u2126(j) (k) are not empty sets, \f \f \f\u2126(i) (k) \u2229\u2126(j) (k) \f \f \f < 2\u230as/sg\u230b/3 o . Since \u03b2(i) \u2212\u03b2(j) is (2s, 2sg)-sparse, \r \r \r[\u03b2(i)] \u2212[\u03b2(j)] \r \r \r = \r \r \r[\u03b2(i) \u2212\u03b2(j)] \r \r \r = \u03bb \r \r \r\u03b2(i) \u2212\u03b2(j)\r \r \r 1 + \u03bbg \r \r \r\u03b2(i) \u2212\u03b2(j)\r \r \r 1,2 \u22652/9. 
By [46, Proposition C.3], we have N \u226410rank(X) \u226410n. Therefore we have \uf8ef \uf8ef \uf8ef \uf8f0 d 2 \u221a 2sg !sg/3 b 2 \u221a 2\u230as/sg\u230b !s/9\uf8fa \uf8fa \uf8fa \uf8fb\u226410n, which means that n \u2265c(sg log(d/sg) + s log(esgb/s)). If d < 3sg or b < 3s/sg, let s\u2032 g = [sg/3]\u22281 \u2265sg/5, s\u2032 = [s/15]\u2228s\u2032 g, then d \u22653s\u2032 g and b \u22653s\u2032/s\u2032 g. Since all (2s, 2sg)-sparse vectors can be exactly recovered by the \u21131 + \u21131,2 minimization and 31 \fs\u2032 \u2264s, s\u2032 g \u2264sg, we know that the \u21131 + \u21131,2 minimization exactly recover all (2s\u2032, 2s\u2032 g)-sparse vectors. Therefore, we have n \u2265c(s\u2032 g log(d/s\u2032 g) + s\u2032 log(es\u2032 gb/s\u2032)) \u2265c \u0012sg 5 \u00b7 log \u0012 d sg \u0013 + s 15 \u00b7 log \u0012eb(sg/5) s/15 \u2228eb \u0013\u0013 \u2265c\u2032(sg log(d/sg) + s log(esgb/s)). (56) \u25a1 6.5 Proof of Theorem 3 We would like prove Theorem 3 by contradiction. Let c = min ( 1 8, c\u2032, r c\u2032 256 ) , c0 = min \u001a c 2e, c2 2C2 , 16c2 \u001b , C0 = max \u001aC2 c2 , 1 32c2 \u001b , where c\u2032 is a uniform constant such that n \u2265c\u2032(s log(esgb/s) + sg log(d/sg)) if the conditions in Theorem 2 are satis\ufb01ed. Assume for contradiction that n < c0(s log(esgb/s) + sg log(d/sg)). (57) Let s0 = s/sg, de\ufb01ne the norm \u2225\u00b7 \u2225= \u2225\u00b7 \u22251 + \u221as0\u2225\u00b7 \u22251,2. Let B = {x \u2208Rp|\u2225x\u2225\u22641}, dn(B, Rp) = inf Ln is a subspace of Rp with dim(Rp/Ln)\u2264n ( sup \u03b2\u2208B\u2229Ln \u2225\u03b2\u22252 ) . By [46, Theorem 10.4], we have dn(B, Rp) \u2264sup \u03b2\u2208B \u2225\u03b2 \u2212\u2206(X\u03b2)\u22252 \u2264C \u221as sup \u03b2\u2208B (\u2225\u03b2\u22251 + \u221as0\u2225\u03b2\u22251,2) = C \u221as. (58) If dn(B, Rp) \u2265c min \uf8f1 \uf8f2 \uf8f3 1 \u221as0 , \" sg s log c s sg d log(esgb/s) n ! + log(esgb/s) ! 
/n #1/2\uf8fc \uf8fd \uf8fe, (59) since C \u221as \u2264c\u221aC0 \u221as \u2264c\u221asg \u221as = c \u221as0 , (58) and (59) together imply that n \u2265c2 C2 sg log c s sg d log(esgb/s) n ! + s log(esgb/s) ! . (60) 32 \fBy (57), c s sg d log(esgb/s) n > c s sg d log(esgb/s) c0(s log(esgb/s) + sg log(d/sg)) \u22652e s sg d log(esgb/s) s log(esgb/s) + sg log(d/sg) \u2265min ( e s sg d log(esgb/s) s log(esgb/s) , e s sg d log(esgb/s) sg log(ed/sg) ) \u2265min ( ed sg , ed sg log( ed sg ) ) \u2265 \u0012ed sg \u00131/2 . (61) In the last inequality, we used x1/2 \u2265log(x)/2 for all x \u22651. Combine (60) and (61) together, we have n \u2265 c2 2C2 (s log(esgb/s) + sg log(d/sg)) \u2265c0 (s log(esgb/s) + sg log(d/sg)) > n, contradiction! Thus, we only need to prove (59) based on (57). We still use the proof of contradiction. If dn(B, Rp) < c min \uf8f1 \uf8f2 \uf8f3 1 \u221as0 , \" sg s log c s sg d log(esgb/s) n ! + log(esgb/s) ! /n #1/2\uf8fc \uf8fd \uf8fe:= \u00b5, then there exists a subspace Ln of Rp with dim(Rp/Ln) \u2264n such that for all v \u2208Ln\\{0}, \u2225v\u22252 < \u00b5 (\u2225v\u22251 + \u221as0\u2225v\u22251,2) . Let B \u2208Rn\u00d7p satisfying ker(B) = Ln. Let s\u2032 = \u230a 1 32\u00b52 \u230b, s\u2032 g = \u230as\u2032/s0\u230b, by (57) and (61), 1 8s\u22121/2 0 \u2265cs\u22121/2 0 \u2265\u00b5 \u2265c min (r C0 s , \u0012 sg 2s log(d/sg) + log(esgb/s) c0(sg log(d/sg) + s log(esgb/s)) \u00131/2) \u2265 1 4 \u221a 2s\u22121/2, which means that 1 \u2264s\u2032 \u2264s, 1 \u2264s\u2032 g \u2264sg. Moreover, we have 1 64\u00b52 < s\u2032 \u2264 1 32\u00b52 . 
For any (2s\u2032, 2s\u2032 g)-sparse \u03b2 with support set T and group support set G, and v \u2208ker(A), by Cauchy-Schwarz inequality, \u2225vT \u22251 + \u221as0\u2225v(G)\u22251,2 \u2264 \u221a 2s\u2032\u2225vT \u22252 + \u221as0 q 2s\u2032 g\u2225vT \u22252 \u22642 \u221a 2s\u2032\u2225vT \u22252 <2 \u221a 2 1 4 \u221a 2\u00b5\u00b5 (\u2225v\u22251 + \u221as0\u2225v\u22251,2) = 1 2 (\u2225v\u22251 + \u221as0\u2225v\u22251,2) , 33 \fi.e., \u2225vT \u22251 + \u221as0\u2225v(G)\u22251,2 < \u2225vT c\u22251 + \u221as0\u2225v(Gc)\u22251,2. Based on Cauchy-Schwarz inequality and the sub-di\ufb00erential of \u2225\u03b2\u22251 and \u2225\u03b2\u22251,2, we have \u2225\u03b2 + v\u22251 + \u221as0\u2225\u03b2 + v\u22251,2 \u2265\u2225\u03b2\u22251 + sgn(\u03b2)\u22a4vT + \u2225vT c\u22251 + \u221as0 \uf8eb \uf8ed\u2225\u03b2\u22251,2 + X j\u2208G \u03b2\u22a4 (j)v(j) \u2225\u03b2(j)\u22252 + X j\u2208Gc \u2225vj\u22252 \uf8f6 \uf8f8 \u2265\u2225\u03b2\u22251 \u2212\u2225vT \u22251 + \u2225vT c\u22251 + \u221as0 \u0000\u2225\u03b2\u22251,2 \u2212\u2225v(G)\u22252 + \u2225v(Gc)\u22252 \u0001 >\u2225\u03b2\u22251 + \u221as0\u2225\u03b2\u22251,2. By Theorem 2, n \u2265c\u2032(s\u2032 log(es\u2032 gb/s\u2032) + s\u2032 g log(d/s\u2032 g)) \u2265c\u2032s\u2032 \u001a log \u0012esgb 2s \u0013 + 1 2s0 log(s0d/s\u2032) \u001b \u2265c\u2032s\u2032 log \u0012esgb 2s \u0013 . Thus n \u2265c\u2032s\u2032 log \u0012esgb 2s \u0013 + sg s log c\u2032 s sg d log(esgb/s) n !! > c\u2032 64\u00b52 1 4 log(esgb/s) + sg s log c s sg d log(esgb/s) n !! \u2265n provided that c = min \u001a 1 8, c\u2032, q c\u2032 256 \u001b , contradiction! This means that (59) holds if (57) is true. Therefore, we have \ufb01nished the proof of Theorem 3. \u25a1 34 \f6.6 Proof of Theorem 4 Let \u03bb = C\u03c3 q s log(esgb)+sg log(d/sg) s n, \u03bbg = p s/sg\u03bb. 
By (86) in Lemma 5 and (101), one has P \u0012\r \r \rH 1 10 \u03bb(X\u22a4\u03b5) \r \r \r \u221e,2 \u22651 10\u03bbg \u0013 \u2264P \u0012 \u22031 \u2264j \u2264d, \r \r \rH 1 10 \u03bb(X\u22a4 (j)\u03b5) \r \r \r 2 \u22651 10\u03bbg, \u2225\u03b5\u22252 \u22655 \u221a n\u03c32 \u0013 + P \u0010 \u2225\u03b5\u22252 \u22655 \u221a n\u03c32 \u0011 \u2264P \u0012 \u22031 \u2264j \u2264d, \r \r \rH 1 10 \u03bb(X\u22a4 (j)\u03b5) \r \r \r 2 \u22651 10\u03bbg \f \f \f \f\u2225\u03b5\u22252 \u22655 \u221a n\u03c32 \u0013 + P \u0010 \u2225\u03b5\u22252 \u22655 \u221a n\u03c32 \u0011 \u2264d exp \u0012 \u2212C s log(esgb) + sg log(d/sg) sg \u0013 + e\u2212n = exp \u0012 log(sg) + log(d/sg) \u2212C s log(esgb) + sg log(d/sg) sg \u0013 + e\u2212n \u2264exp \u0012 \u2212C s log(esgb) + sg log(d/sg) sg \u0013 + e\u2212n. (62) By the de\ufb01nition of \u02c6 \u03b2 and KKT condition, we have X\u22a4(y \u2212X \u02c6 \u03b2) + \u03bbz1 + \u03bbgz2 = 0, where \uf8f1 \uf8f2 \uf8f3 (z1)i = sgn(\u02c6 \u03b2i), \u02c6 \u03b2i \u0338= 0; |(z1)i| \u22641, \u02c6 \u03b2i = 0; \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 (z2)(j) = \u02c6 \u03b2(j) \u2225\u02c6 \u03b2(j)\u22252 , \u02c6 \u03b2(j) \u0338= 0; \u2225(z2)(j)\u22252 \u22641, \u02c6 \u03b2(j) = 0. Therefore, \u2225H\u03bb(X\u22a4(X \u02c6 \u03b2 \u2212y))\u2225\u221e,2 \u2264\u03bbg. (62), Lemma 8 Part 1 and the previous inequality together imply that P \u0012\r \r \rH(1+ 1 10 )\u03bb(X\u22a4Xh) \r \r \r \u221e,2 \u2264(1 + 1 10)\u03bbg \u0013 \u22651 \u2212exp \u0012 \u2212C s log(esgb) + sg log(d/sg) sg \u0013 \u2212e\u2212n, (63) where h = \u02c6 \u03b2 \u2212\u03b2\u2217. By the de\ufb01nition of \u02c6 \u03b2, we have \u2225y \u2212X \u02c6 \u03b2\u22252 2 + \u03bb\u2225\u02c6 \u03b2\u22251 + \u03bbg\u2225\u02c6 \u03b2\u22251,2 \u2264\u2225y \u2212X\u03b2\u2217\u22252 2 + \u03bb\u2225\u03b2\u2217\u22251 + \u03bbg\u2225\u03b2\u2217\u22251,2. 
(32) and the previous inequality show that \u2225Xh\u22252 2 + \u03bb\u2225hT c\u22251 + \u03bbg\u2225h(Gc)\u22251,2 \u22642\u27e8Xh, \u03b5\u27e9\u2212\u03bb \u00b7 sgn(\u03b2\u2217 T )\u22a4hT \u2212\u03bbg X j\u2208G \u03b2\u2217\u22a4 T,(j)h(j) \u2225\u03b2\u2217 T,(j)\u22252 + 2\u03bb\u2225\u03b2\u2217 T c\u22251 + 2\u03bbg\u2225\u03b2\u2217 T c\u22251,2. (64) 35 \fFirst, we consider \u27e8Xh, \u03b5\u27e9. Denote P = XT (X\u22a4 T XT )\u22121X\u22a4 T , since Xh = XT hT + XT chT c and (In \u2212P)XT = 0, |\u27e8Xh, \u03b5\u27e9| \u2264|\u27e8PXh, \u03b5\u27e9| + |\u27e8(In \u2212P)Xh, \u03b5\u27e9| = \f \f \f\u27e8X\u22a4 T Xh, (X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u27e9 \f \f \f + |\u27e8(In \u2212P)XT chT c, \u03b5\u27e9| = \f \f \f\u27e8X\u22a4 T Xh, (X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u27e9 \f \f \f + |\u27e8XT chT c, (In \u2212P)\u03b5\u27e9| . (65) Therefore, to give an upper bound of |\u27e8Xh, \u03b5\u27e9|, we only need to bound \f \f\u27e8X\u22a4 T Xh, (X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u27e9 \f \f and |\u27e8XT chT c, (In \u2212P)\u03b5\u27e9|, respectively. By Part 1 of Lemma 7 and also notice that cmin \u2264 \u03c3min(\u03a3) \u2264\u03c3max(\u03a3) \u2264Cmax, P \r \r \r \r \r \u0012 1 nX\u22a4 T XT \u0013\u22121\r \r \r \r \r \u2265 2 cmin ! \u2264P \u0012 \u22251 nX\u22a4 T XT \u2212\u03a3T,T \u2225\u2265cmin 2 \u0013 \u2264P \u0012 \u22251 nX\u22a4 T XT \u03a3\u22121 T,T \u2212Is\u2225\u2265 cmin 2Cmax \u0013 \u22642 exp (Cs \u2212cn) \u22642 exp (\u2212cn) . 
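The decomposition (65) above hinges on $P = X_T(X_T^\top X_T)^{-1}X_T^\top$ being the orthogonal projector onto the column span of $X_T$, so that $(I_n - P)X_T = 0$ and $\langle Xh, \varepsilon\rangle$ splits exactly into the two terms that get bounded separately. A minimal pure-Python sketch of these two identities; as an implementation convenience, the projection is computed via Gram–Schmidt rather than an explicit matrix inverse, which is equivalent when $X_T$ has full column rank:

```python
import math
import random

random.seed(0)
n, k = 8, 3  # n observations, k columns in X_T

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Random X_T, stored as k column vectors in R^n (almost surely full column rank).
cols = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

# Orthonormal basis of span(X_T) via Gram-Schmidt.
basis = []
for c in cols:
    w = list(c)
    for q in basis:
        coef = dot(w, q)
        w = [wi - coef * qi for wi, qi in zip(w, q)]
    nrm = math.sqrt(dot(w, w))
    basis.append([wi / nrm for wi in w])

def project(v):
    """Orthogonal projection P v onto span(X_T)."""
    out = [0.0] * n
    for q in basis:
        coef = dot(v, q)
        out = [oi + coef * qi for oi, qi in zip(out, q)]
    return out

# (I - P) X_T = 0: every column of X_T is fixed by P.
for c in cols:
    assert max(abs(a - b) for a, b in zip(c, project(c))) < 1e-9

# <v, eps> = <P v, eps> + <(I - P) v, eps> for any v: the split used in (65).
v = [random.gauss(0, 1) for _ in range(n)]
eps = [random.gauss(0, 1) for _ in range(n)]
Pv = project(v)
resid = [a - b for a, b in zip(v, Pv)]
assert abs(dot(v, eps) - dot(Pv, eps) - dot(resid, eps)) < 1e-9
print("projection identities verified")
```

Because $(I_n - P)X_T = 0$, only the off-support part $X_{T^c}h_{T^c}$ survives in the second term of the split, which is exactly what (65) exploits.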
(66) (66), Lemma 9 and Cauchy-Schwarz inequality together imply that with probability at least 1 \u2212exp \u0010 \u2212C s log(esgb)+sg log(d/sg) s \u0011 \u22122 exp(\u2212cn), \u2225(X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u22251 \u2264 2 cmin \u221as n \u2225X\u22a4 T \u03b5\u22252 \u2264 2 cmin s n\u2225X\u22a4 T \u03b5\u2225\u221e\u2264C s n r ns log(esgb) + sg log(d/sg) s \u03c32 \u2264C s n\u03bb, \u2225(X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u22251,2 \u2264\u221asg\u2225(X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u22252 \u2264 2 cmin \u221asg n \u2225X\u22a4 T \u03b5\u22252 \u2264C \u221as \u00b7 sg n \u03bb. Combine Lemma 8 Part 2, (63) and the previous two inequalities together, with probability at least 1 \u22122 exp \u0010 \u2212C s log(esgb)+sg log(d/sg) s \u0011 \u22123e\u2212cn, \f \f \f\u27e8X\u22a4 T Xh, (X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u27e9 \f \f \f \u226411 10\u03bb\u2225(X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u22251 + 11 10\u03bbg\u2225(X\u22a4 T XT )\u22121X\u22a4 T \u03b5\u22251,2 \u2264C s n\u03bb2. (67) Similarly to the proof of (62), also notice that \u2225(In \u2212P)\u03b5\u22252 \u2264\u2225\u03b5\u22252 and X(Gc) is independent of In \u2212P, we have P \u0012\r \r \rH 1 10 \u03bb \u0010 X\u22a4 (Gc)(In \u2212P)\u03b5 \u0011\r \r \r \u221e,2 \u22651 10\u03bbg \u0013 \u2264P \u0012 \u2203j \u2208Gc, \r \r \rH 1 10 \u03bb \u0010 X\u22a4 (j)(In \u2212P)\u03b5 \u0011\r \r \r 2 \u22651 10\u03bbg \f \f \f \f\u2225(In \u2212P)\u03b5\u22252 \u22655 \u221a n\u03c32 \u0013 + P \u0010 \u2225\u03b5\u22252 \u22655 \u221a n\u03c32 \u0011 \u2264exp \u0012 \u2212C s log(esgb) + sg log(d/sg) sg \u0013 + e\u2212n. 36 \fBy Lemma 8 Part 2 and (62), with probability at least 1 \u2212exp \u0010 \u2212C s log(esgb)+sg log(d/sg) sg \u0011 \u2212e\u2212n, \f \f\u27e8X(Gc)h(Gc), (In \u2212P)\u03b5\u27e9 \f \f = \f \f \f\u27e8h(Gc), X\u22a4 (Gc)(In \u2212P)\u03b5\u27e9 \f \f \f \u22641 10\u03bb\u2225h(Gc)\u22251 + 1 10\u03bbg\u2225h(Gc)\u22251,2. 
Notice that XT c\\(Gc) and In \u2212P are independent and |T c\\(Gc)| \u2264|G| \u2264sgb, by Lemma 9, with probability at least 1 \u2212exp(\u2212C s log(esgb)+sg log(d/sg) s ) \u2212e\u2212n, \f \f\u27e8XT c\\(Gc)hT c\\(Gc), (In \u2212P)\u03b5\u27e9 \f \f \u2264\u2225hT c\\(Gc)\u22251\u2225X\u22a4 T c\\(Gc)(In \u2212P)\u03b5\u2225\u221e \u2264C r ns log(esgb) + sg log(d/sg) s \u03c32\u2225hT c\\(Gc)\u22251 \u22641 10\u03bb\u2225hT c\\(Gc)\u22251. Combine the previous two inequalities together, we have |\u27e8XT chT c, (In \u2212P)\u03b5\u27e9| \u2264 \f \f\u27e8X(Gc)h(Gc), (In \u2212P)\u03b5\u27e9 \f \f + \f \f\u27e8XT c\\(Gc)hT c\\(Gc), (In \u2212P)\u03b5\u27e9 \f \f \u22641 10\u03bb\u2225hT c\u22251 + 1 10\u03bbg\u2225h(Gc)\u22251,2 (68) with probability 1 \u2212C exp \u0010 \u2212C s log(esgb)+sg log(d/sg) s \u0011 \u2212Ce\u2212cn. Combine (65), (67) and (68) together, we know that with probability at least 1 \u2212C exp \u0010 \u2212C s log(esgb)+sg log(d/sg) s \u0011 \u2212Ce\u2212cn, |\u27e8Xh, \u03b5\u27e9| \u2264C s n\u03bb2 + 1 10\u03bb\u2225hT c\u22251 + 1 10\u03bbg\u2225h(Gc)\u22251,2. (69) Moreover, by the proof of Theorem 1, with probability at least 1 \u2212C exp(\u2212cn/s), there exists an approximate dual certi\ufb01cate u \u2208Rp in the row span of X satisfying (18), and \u2225vT \u2212 sgn(\u03b2\u2217 T )\u22252 \u22641 8, where v is de\ufb01ned in (29). Similarly to (33), we have sgn(\u03b2\u2217 T )\u22a4hT + X j\u2208G \u221as0\u03b2\u2217\u22a4 T,(j)h(j) \u2225\u03b2\u2217 T,(j)\u22252 \u2265\u2212\u2225vT \u2212sgn(\u03b2\u2217 T )\u22252 \u00b7 \u2225hT \u22252 \u2212\u2225hT c\u22251/2 \u2212\u221as0\u2225h(Gc)\u22251,2/2 + \u27e8h, u\u27e9 \u2265\u2212cmin 8 \u00b7 \u2225hT \u22252 \u2212\u2225hT c\u22251/2 \u2212\u221as0\u2225h(Gc)\u22251,2/2 + \u27e8h, u\u27e9. By Lemma 10, with probability at least 1\u2212Ce\u2212cn/s, u = X\u22a4w with \u2225w\u22252 \u2264C p s/n. 
Therefore, with probability at least 1 \u2212Ce\u2212cn/s, |\u27e8h, u\u27e9| = |\u27e8Xh, w\u27e9| \u2264\u2225Xh\u22252\u2225w\u22252 \u2264C p s/n\u2225Xh\u22252. The two previous inequalities together imply that sgn(\u03b2\u2217 T )\u22a4hT + X j\u2208G \u221as0\u03b2\u2217\u22a4 T,(j)h(j) \u2225\u03b2\u2217 T,(j)\u22252 \u2265\u2212cmin 8 \u00b7 \u2225hT \u22252 \u2212\u2225hT c\u22251/2 \u2212\u221as0\u2225h(Gc)\u22251,2/2 \u2212C p s/n\u2225Xh\u22252 (70) 37 \fwith probability at least 1 \u2212Ce\u2212cn/s. Combine (64), (69) and (70) together, with probability at least 1\u2212Ce\u2212C s log(esgb)+sg log(d/sg) s \u2212 Ce\u2212cn/s, \u2225Xh\u22252 2 + 3 10\u03bb\u2225hT c\u22251 + 3 10\u03bbg\u2225h(Gc)\u22251,2 \u2264C s n\u03bb2 + cmin 8 \u03bb\u2225hT \u22252 + C p s/n\u03bb\u2225Xh\u22252 + 2\u03bb\u2225\u03b2\u2217 T c\u22251 + 2\u03bbg\u2225\u03b2\u2217 T c\u22251,2. (71) By (42), (63) and (66), with probability at least 1 \u2212exp \u0010 \u2212C s log(esgb)+sg log(d/sg) sg \u0011 \u2212Ce\u2212cn, \u2225hT \u22252 \u2264\u2225(X\u22a4 T XT )\u22121\u2225\u2225X\u22a4 T XT hT \u22252 \u2264 2 cminn\u2225X\u22a4 T Xh \u2212X\u22a4 T XT chT c\u22252 \u2264 2 cminn \u0010 \u2225X\u22a4 T Xh\u22252 + \u2225X\u22a4 T XT chT c\u22252 \u0011 \u2264 2 cminn \u2225H 11 10 \u03bb(X\u22a4 T Xh)\u22252 + 11 10 \u221as\u03bb + n X i\u2208T c \u2225X\u22a4 T Xi/n\u22252|hi| ! \u2264 2 cminn \u0012\u221asg\u2225H 11 10 \u03bb(X\u22a4 T Xh)\u2225\u221e,2 + 11 10 \u221as\u03bb + n max i\u2208T c \u2225X\u22a4 T Xi/n\u22252\u2225hT c\u22251 \u0013 \u2264 2 cminn \u0012\u221asg 11 10\u03bbg + 11 10 \u221as\u03bb + n 2 \u2225hT c\u22251 \u0013 \u22645 cmin \u221as n \u03bb + 1 cmin \u2225hT c\u22251. (72) The fourth inequality comes from \u2225x\u22252 \u2264\u2225H\u03b1(x)\u22252 + \u221as\u03b1 for x \u2208Rs; the \ufb01fth inequality holds since \u2225X\u22a4 T Xh\u22250,2 \u2264sg. 
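The fourth inequality in the chain above uses $\|x\|_2 \le \|H_\alpha(x)\|_2 + \sqrt{s}\,\alpha$ for $x \in \mathbb{R}^s$: the coordinates zeroed out by hard thresholding are each at most $\alpha$ in magnitude, so their total $\ell_2$ mass is at most $\sqrt{s}\,\alpha$. A quick randomized check, assuming $H_\alpha$ denotes the entrywise hard-thresholding operator that zeroes coordinates with $|x_i| \le \alpha$:

```python
import math
import random

def hard_threshold(x, alpha):
    # Entrywise hard thresholding: keep coordinates with |x_i| > alpha, zero the rest.
    return [xi if abs(xi) > alpha else 0.0 for xi in x]

def l2(x):
    return math.sqrt(sum(xi * xi for xi in x))

random.seed(0)
s, alpha = 50, 0.3
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(s)]
    # Zeroed coordinates are each <= alpha, so their l2 mass is <= sqrt(s) * alpha.
    assert l2(x) <= l2(hard_threshold(x, alpha)) + math.sqrt(s) * alpha + 1e-12
print("||x||_2 <= ||H_alpha(x)||_2 + sqrt(s) * alpha held on all draws")
```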
(71) and (72) together imply that \u2225Xh\u22252 2 + 7 40\u03bb\u2225hT c\u22251 + 3 10\u03bbg\u2225h(Gc)\u22251,2 \u2264C s n\u03bb2 + C p s/n\u03bb\u2225Xh\u22252 + 2\u03bb\u2225\u03b2\u2217 T c\u22251 + 2\u03bbg\u2225\u03b2\u2217 T c\u22251,2 with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ) \u2212Ce\u2212cn/s. Also notice that C p s/n\u03bb\u2225Xh\u22252 \u2264\u2225Xh\u22252 2 + C s n\u03bb2, with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ) \u2212Ce\u2212cn/s, \u2225hT c\u22251 + \u221as0\u2225h(Gc)\u22251,2 \u2264C \u0010 s n\u03bb + \u2225\u03b2\u2217 T c\u22251 + \u221as0\u2225\u03b2\u2217 T c\u22251,2 \u0011 . (73) From the proof of Lemma 1, we know that (36) and (41) hold with probability at least 1 \u2212 2e\u2212cn. By Lemma 8 Part 2 and (63), with probability at least 1\u2212exp \u0010 \u2212C s log(esgb)+sg log(d/sg) sg \u0011 \u2212 38 \fe\u2212n, \f \f\u27e8X e T h e T , Xh\u27e9 \f \f = \f \f \f\u27e8h e T , X\u22a4 e T Xh\u27e9 \f \f \f \u226411 10 \u0000\u03bb\u2225h e T \u22251 + \u03bbg\u2225h e T \u22251,2 \u0001 \u226411 10 \u0010 \u03bb \u00b7 \u221a 3s\u2225h e T \u22252 + \u03bbg p 2sg\u2225h e T \u22252 \u0011 \u22644\u03bb\u221as\u2225h e T \u22252. (74) The second inequality is due to \u2225h e T \u22250 \u22643s, \u2225h e T \u22250,2 \u22642sg and Cauchy-Schwarz inequality. 
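The Cauchy–Schwarz step above converts sparsity counts into norm comparisons: $\|x\|_1 \le \sqrt{\|x\|_0}\,\|x\|_2$ and $\|x\|_{1,2} \le \sqrt{\|x\|_{0,2}}\,\|x\|_2$, where $\|x\|_{0,2}$ counts active groups. A small randomized check on group-sparse vectors, with group sizes chosen here purely for illustration:

```python
import math
import random

random.seed(0)
d, b = 12, 8  # d groups of b coordinates each (illustrative sizes)

def norms(x):
    l2n = math.sqrt(sum(v * v for v in x))
    l1 = sum(abs(v) for v in x)
    group_l2 = [math.sqrt(sum(v * v for v in x[g * b:(g + 1) * b])) for g in range(d)]
    l12 = sum(group_l2)                             # ||x||_{1,2}
    supp = sum(1 for v in x if v != 0)              # ||x||_0
    gsupp = sum(1 for gn in group_l2 if gn > 0)     # ||x||_{0,2}: number of active groups
    return l1, l12, l2n, supp, gsupp

for _ in range(500):
    x = [0.0] * (d * b)
    for g in random.sample(range(d), 3):            # 3 active groups
        for c in random.sample(range(b), 4):        # 4 active coordinates in each
            x[g * b + c] = random.gauss(0, 1)
    l1, l12, l2n, supp, gsupp = norms(x)
    assert l1 <= math.sqrt(supp) * l2n + 1e-9
    assert l12 <= math.sqrt(gsupp) * l2n + 1e-9
print("sparsity-weighted Cauchy-Schwarz bounds verified")
```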
Combine (36), (41), (73) and (74) together, with probability at least 1\u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s )\u2212 Ce\u2212cn/s, we have cmin 2 \u2225h e T \u22252 2 \u22641 n4\u03bb\u221as\u2225h e T \u22252 + \u221a 3Cmax\u2225h e T \u22252( \u221a 2s\u22121/2\u2225hT c\u22251 + s\u22121/2 g \u2225h(Gc)\u22251,2) \u22641 n4\u03bb\u221as\u2225h e T \u22252 + \u221a 3Cmax\u2225h e T \u22252 \u00b7 C \u221as \u0010 s n\u03bb + \u2225\u03b2\u2217 T c\u22251 + \u221as0\u2225\u03b2\u2217 T c\u22251,2 \u0011 \u2264C \u0012\u221as n \u03bb + 1 \u221as\u2225\u03b2\u2217 T c\u22251 + 1 \u221asg \u2225\u03b2\u2217 T c\u22251,2 \u0013 \u2225h e T \u22252. Therefore, with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ) \u2212Ce\u2212cn/s, \u2225h e T \u22252 \u2264C \u0012\u221as n \u03bb + 1 \u221as\u2225\u03b2\u2217 T c\u22251 + 1 \u221asg \u2225\u03b2\u2217 T c\u22251,2 \u0013 . (75) By (39), (40), (73) and the previous inequality, also notice that e\u2212cn/s \u2264e\u2212C s log(esgb)+sg log(d/sg) s , with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ), \u2225h\u22252 \u2264\u2225h e T \u22252 + X i\u22652 \u2225hTi\u22252 + X j\u22652 \u2225hRj\u22252 \u2264\u2225h e T \u22252 + \u221a 2s\u22121/2\u2225hT c\u22252 + s\u22121/2 g \u2225h(Gc)\u22251,2 \u2264C \u0012\u221as n \u03bb + 1 \u221as\u2225\u03b2\u2217 T c\u22251 + 1 \u221asg \u2225\u03b2\u2217 T c\u22251,2 \u0013 , (76) i.e., with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ), \u2225h\u22252 \u2264C r \u03c32(sg log(d/sg) + s log(esgb)) n + 1 \u221as\u2225\u03b2\u2217 T c\u22251 + 1 \u221asg \u2225\u03b2\u2217 T c\u22251,2 ! . Moreover, if \u03b2\u2217is (s, sg)-sparse, then \u2225\u03b2\u2217 T c\u22251 = \u2225\u03b2\u2217 T c\u22251,2 = 0. Therefore, with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ), \u2225h\u22252 2 \u2264C\u03c32(sg log(d/sg) + s log(esgb)) n . 
\u25a1 39 \f6.7 Proof of Theorem 5 First, we consider the case that d \u22653sg and b \u22653s/sg. Let \u03c9(1), . . . , \u03c9(N) be uniformly randomly vectors from A = {\u03c9 \u2208{0, 1}db| X j 1{\u03c9(j)\u0338=0} = sg, \u2225\u03c9(j)\u22250 = \u230as/sg\u230bif \u03c9(j) \u0338= 0}. Denote \u2126(i) = {j|\u03c9(i) j \u0338= 0}, \u2126(i) (k) = {j|j \u2208(k), \u03c9(i) j \u0338= 0} and \u03b2(i) = \u03c4\u03c9(i), for all 1 \u2264i \u2264N, 1 \u2264 k \u2264d, where \u03c4 is a parameter that will be speci\ufb01ed later. Obviously, \u2225\u03b2(i)\u22250 = sg\u230as/sg\u230b\u2264s, therefore \u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 \u22642sg\u230as/sg\u230b\u03c4 2 \u22642s\u03c4 2. Moreover, if |\u2126(i) \u2229\u2126(j)| \u22658sg\u230as/sg\u230b/9, then we must have \f \f \f n k|\u03c9(i) (k), \u03c9(j) (k) \u0338= 0, \f \f \f\u2126(i) (k) \u2229\u2126(j) (k) \f \f \f \u22652\u230as/sg\u230b/3 o\f \f \f \u22652sg/3, otherwise |\u2126(i) \u2229\u2126(j)| \u22642sg 3 \u230as/sg\u230b+ sg 3 2\u230as/sg\u230b/3 \u22648sg\u230as/sg\u230b/9, which is a contradiction. 
Therefore, P \u0010 \u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 \u22642sg\u230as/sg\u230b\u03c4 2/9 \u0011 =P \u0010 |\u2126(i) \u2229\u2126(j)| \u22658sg\u230as/sg\u230b/9 \u0011 \u2264P \u0010\f \f \f n k|\u03c9(i) (k), \u03c9(j) (k) \u0338= 0, \f \f \f\u2126(i) (k) \u2229\u2126(j) (k) \f \f \f \u22652\u230as/sg\u230b/3 o\f \f \f \u22652sg/3 \u0011 \u2264 Psg l=\u23082sg/3\u2309 \u0000sg l \u0001 hP\u230as/sg\u230b t=\u23082\u230as/sg\u230b/3\u2309 \u0000\u230as/sg\u230b t \u0001\u0000b\u2212\u230as/sg\u230b \u230as/sg\u230b\u2212t \u0001il \u0000b \u230as/sg\u230b \u0001sg\u2212l\u0000 d\u2212l sg\u2212l \u0001 \u0000 d sg \u0001\u0000b \u230as/sg\u230b \u0001sg = sg X l=\u23082sg/3\u2309 \u0012sg l \u0013\u0000 d\u2212l sg\u2212l \u0001 \u0000 d sg \u0001 \u00b7 \uf8ee \uf8f0 \u230as/sg\u230b X t=\u23082\u230as/sg\u230b/3\u2309 \u0012\u230as/sg\u230b t \u0013\u0000b\u2212\u230as/sg\u230b \u230as/sg\u230b\u2212t \u0001 \u0000b \u230as/sg\u230b \u0001 \uf8f9 \uf8fb l . (77) Note that \u0000 d\u2212l sg\u2212l \u0001 \u0000 d sg \u0001 = (d\u2212l)\u00b7\u00b7\u00b7(d\u2212sg+1) (sg\u2212l)! d(d\u22121)\u00b7\u00b7\u00b7(d\u2212sg+1) sg! = sg(sg \u22121) \u00b7 \u00b7 \u00b7 (sg \u2212l + 1) d(d \u22121) \u00b7 \u00b7 \u00b7 (d \u2212l + 1) \u2264 \u0010sg d \u0011l , The inequality holds since sg\u2212i d\u2212i \u2264sg d for all 1 \u2264i \u2264sg. Similarly, for 1 \u2264t \u2264\u230as/sg\u230b, \u0000b\u2212\u230as/sg\u230b \u230as/sg\u230b\u2212t \u0001 \u0000b \u230as/sg\u230b \u0001 \u2264 \u0000b\u2212t \u230as/sg\u230b\u2212t \u0001 \u0000b \u230as/sg\u230b \u0001 \u2264 \u0012\u230as/sg\u230b b \u0013t . 
40 \fCombine (77) and the previous two inequalities together, we have P \u0010 \u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 \u22642sg\u230as/sg\u230b\u03c4 2/9 \u0011 \u2264 sg X l=\u23082sg/3\u2309 \u0012sg l \u0013 \u0010sg d \u0011l \u00b7 \uf8ee \uf8f0 \u230as/sg\u230b X t=\u23082\u230as/sg\u230b/3\u2309 \u0012\u230as/sg\u230b t \u0013 \u0012\u230as/sg\u230b b \u0013t \uf8f9 \uf8fb l \u2264 sg X l=\u23082sg/3\u2309 \u0012sg l \u0013 \u0010sg d \u0011l \u00b7 \uf8ee \uf8f0 \u230as/sg\u230b X t=\u23082\u230as/sg\u230b/3\u2309 \u0012\u230as/sg\u230b t \u0013 \u0012\u230as/sg\u230b b \u00132\u230as/sg\u230b/3 \uf8f9 \uf8fb l \u2264 sg X l=\u23082sg/3\u2309 \u0012sg l \u0013 \u0010sg d \u0011l \u00b7 \" 2\u230as/sg\u230b \u0012\u230as/sg\u230b b \u00132\u230as/sg\u230b/3#l \u2264 sg X l=\u23082sg/3\u2309 \u0012sg l \u0013 \u0010sg d \u00112sg/3 \u00b7 \uf8ee \uf8f0 2 \u221a 2\u230as/sg\u230b b !2\u230as/sg\u230b/3\uf8f9 \uf8fb 2sg/3 \u2264 2 \u221a 2sg d !2sg/3 \u00b7 2 \u221a 2\u230as/sg\u230b b !2s/9 . (78) Set N = \u0016\u0010 d 2 \u221a 2sg \u0011sg/3 \u0010 b 2 \u221a 2\u230as/sg\u230b \u0011s/9\u0017 , then P \u0010 \u22001 \u2264i \u0338= j \u2264N, \u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 > 2sg\u230as/sg\u230b\u03c4 2/9 \u0011 \u22651 \u2212N(N \u22121) 2 2 \u221a 2sg d !2sg/3 \u00b7 2 \u221a 2\u230as/sg\u230b b !2s/9 >0. i.e., the probability that \u03b2(1), . . . , \u03b2(N); \u2126(1), \u00b7 \u00b7 \u00b7 , \u2126(N) satisfy s 9\u03c4 2 < 2sg\u230as/sg\u230b\u03c4 2/9 < min i\u0338=j \u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 \u22642s\u03c4 2, (79) |\u2126(i) \u2229\u2126(j)| < 8sg\u230as/sg\u230b/9, \u22001 \u2264i < j \u2264N (80) is positive. For convenience, we \ufb01x \u03b2(1), . . . , \u03b2(N) to be the vectors satisfying (79). Denote y(i) = X\u03b2(i) + \u03b5 for all 1 \u2264i \u2264N. 
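The packing construction above relies on pairs of independently drawn group-sparse supports having small overlap with high probability, as quantified by (78). A toy simulation, with small illustrative values of $d$, $b$, $s_g$, and $\lfloor s/s_g\rfloor$ that are not taken from the theorem, shows the large-overlap event $|\Omega^{(i)} \cap \Omega^{(j)}| \ge 8s_g\lfloor s/s_g\rfloor/9$ is indeed rare:

```python
import random

random.seed(0)
d, b = 20, 10          # d groups, b coordinates per group (illustrative, not from the theorem)
s_g, per_group = 4, 2  # s_g active groups, floor(s/s_g) active coordinates per group
s = s_g * per_group

def random_support():
    # One draw from the class A: s_g random groups, per_group random coordinates in each.
    groups = random.sample(range(d), s_g)
    return frozenset((g, c) for g in groups for c in random.sample(range(b), per_group))

threshold = 8 * s_g * per_group / 9
trials = 5000
hits = sum(
    1 for _ in range(trials)
    if len(random_support() & random_support()) >= threshold
)
print("empirical P(|overlap| >=", threshold, ") =", hits / trials)
```

With these sizes the threshold exceeds $s - 1$, so a "large overlap" would force the two supports to coincide, an event of probability roughly $\binom{d}{s_g}^{-1}\binom{b}{\lfloor s/s_g\rfloor}^{-s_g}$, consistent with the exponentially small bound (78).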
We consider the Kullback-Leibler divergence between di\ufb00erent distribution pairs: DKL \u0010 (y(i), X), (y(j), X) \u0011 = E(y(j),X) \" log p(y(i), X) p(y(j), X) !# , 41 \fwhere p(y(i), X) is the probability density of (y(i), X). Conditioning on X, we have E(y(j),X) \" log p(y(i), X) p(y(j), X) ! |X # = \u2225X(\u03b2(i) \u2212\u03b2(j))\u22252 2 2\u03c32 . Thus for 1 \u2264i \u0338= j \u2264N, DKL \u0010 (y(i), X), (y(j), X) \u0011 =EX \u2225X(\u03b2(i) \u2212\u03b2(j))\u22252 2 2\u03c32 = n(\u03b2(i) \u2212\u03b2(j))\u22a4\u03a3(\u03b2(i) \u2212\u03b2(j)) 2\u03c32 \u22643n\u2225\u03b2(i) \u2212\u03b2(j)\u22252 2 4\u03c32 \u22643ns\u03c4 2 2\u03c32 . (81) In the \ufb01rst inequality, we used \u03c3max(\u03a3) \u22643 2. By generalized Fano\u2019s Lemma, inf \u02c6 \u03b2 sup \u03b2\u2208Fs,sg E\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 \u2265 p s\u03c4 2/9 2 1 \u2212 3ns\u03c4 2 2\u03c32 + log 2 log N ! . Since log N \u224dsg log( d sg ) + s log \u0010 esgb s \u0011 , by setting \u03c4 = c r \u03c32 \u0010 sg log( d sg )+s log \u0010 esgb s \u0011\u0011 ns , we have inf \u02c6 \u03b2 sup \u03b2\u2208Fs,sg E\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 2 \u2265 inf \u02c6 \u03b2 sup \u03b2\u2208Fs,sg E\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 !2 \u2265c\u03c32 (sg log(d/sg) + s log(esgb/s)) n . If d < 3sg or b < 3s/sg, let s\u2032 g = [sg/3] \u22281 \u2265sg/5, s\u2032 = [s/15] \u2228s\u2032 g, then d \u22653s\u2032 g and b \u22653s\u2032/s\u2032 g. Similarly to (56), we have inf \u02c6 \u03b2 sup \u03b2\u2208Fs,sg E\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 2 \u2265inf \u02c6 \u03b2 sup \u03b2\u2208Fs\u2032,s\u2032 g E\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 2 \u2265c\u03c32 \u0000s\u2032 g log(d/s\u2032 g) + s\u2032 log(es\u2032 gb/s\u2032) \u0001 n \u2265c\u2032 \u03c32 (sg log(d/sg) + s log(esgb/s)) n . 
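The key quantity in the Fano argument above is the Gaussian KL divergence: conditional on the design, the divergence between the distributions of $y$ under two linear models with the same noise level reduces to $\|X(\beta^{(i)} - \beta^{(j)})\|_2^2/(2\sigma^2)$. A Monte Carlo sanity check of this closed form, using the standard orientation $D_{KL}(P_i\,\|\,P_j) = \mathbb{E}_{y \sim P_i}[\log(p_i(y)/p_j(y))]$; the two mean vectors are drawn directly, since only their difference enters the divergence:

```python
import random

random.seed(0)
n, sigma = 50, 1.0
# Only the difference of the mean vectors X beta^(i) - X beta^(j) enters the KL,
# so we draw the two means directly instead of forming X and the betas.
mu_i = [random.gauss(0, 0.2) for _ in range(n)]
mu_j = [random.gauss(0, 0.2) for _ in range(n)]

closed_form = sum((a - b) ** 2 for a, b in zip(mu_i, mu_j)) / (2 * sigma ** 2)

# Monte Carlo estimate of KL(P_i || P_j) = E_{y ~ P_i}[log p_i(y) - log p_j(y)].
trials, acc = 5000, 0.0
for _ in range(trials):
    y = [m + random.gauss(0, sigma) for m in mu_i]
    acc += sum(
        (-(yk - mi) ** 2 + (yk - mj) ** 2) for yk, mi, mj in zip(y, mu_i, mu_j)
    ) / (2 * sigma ** 2)
mc = acc / trials

assert abs(mc - closed_form) < 0.2 * closed_form + 0.05
print("closed form", round(closed_form, 3), "vs Monte Carlo", round(mc, 3))
```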
\u25a1 6.8 Proof of Theorem 6 The proof of Theorem 6 relies on the following key lemma, which shows that \u03a3\u22121 is in the feasible set of the optimization problem (23) with high probability by choosing appropriate \u03b1 and \u03b3. Lemma 4 By setting \u03b1 = C q s log(esgb)+sg log(d/sg) sn , \u03b3 = q s sg \u03b1 in (23), we have P \u0012 max 1\u2264i\u2264p \u2225H\u03b1(ei \u22121 nX\u22a4X\u03a3\u22121ei)\u2225\u221e,2 \u2264\u03b3 \u0013 \u22651 \u22124 exp \u0012 \u2212C s log(esgb) + sg log(d/sg) sg \u0013 . 42 \fNote that Y = X\u03b2\u2217+ \u03b5, we have \u221an(\u02c6 \u03b2u\u2212\u03b2\u2217) = \u221an \u0012 \u02c6 \u03b2 \u2212\u03b2\u2217+ 1 n \u02c6 MX\u22a4\u0010 Y \u2212X \u02c6 \u03b2 \u0011\u0013 = \u221an \u0012 I \u22121 n \u02c6 MX\u22a4X \u0013 (\u02c6 \u03b2\u2212\u03b2\u2217)+ 1 \u221an \u02c6 MX\u22a4\u03b5. Since \u03b5i i.i.d. \u223cN(0, \u03c32), we know that 1 \u221an \u02c6 MX\u22a4\u03b5|X \u223cN \u0010 0, \u02c6 M \u02c6 \u03a3 \u02c6 M\u22a4\u0011 . Denote h = \u02c6 \u03b2 \u2212\u03b2\u2217. Since \u03b2\u2217is (s, sg)-sparse, by (73), (76) and Cauchy-Schwarz inequality, with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ), \u2225h\u22251 \u2264\u2225hT \u22251 + \u2225hT c\u22251 \u2264\u221as\u2225hT \u22252 + \u2225hT c\u22251 \u2264\u221as\u2225h\u22252 + \u2225hT c\u22251 \u2264C s n\u03bb. \u2225h\u22251,2 \u2264\u2225h(G)\u22251,2 + \u2225h(Gc)\u22251,2 \u2264\u221asg\u2225h(G)\u22252 + \u2225h(Gc)\u22251,2 \u2264\u221asg\u2225h\u22252 + \u2225h(Gc)\u22251,2 \u2264C \u221as \u00b7 sg n \u03bb. In addition, Lemma 4 shows that \u03a3\u22121 is in the feasible set of (23) with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ). By the de\ufb01nition of \u02c6 M, max i \u2225H\u03b1(ei \u2212\u02c6 \u03a3 \u02c6 M\u22a4ei)\u2225\u221e,2 = max i \u2225H\u03b1(ei \u2212\u02c6 \u03a3 \u02c6 mi)\u2225\u221e,2 \u2264\u03b3. 
(82) Combining these facts, by Lemma 8 Part 2, we must have \r \r \r \r(I \u22121 n \u02c6 MXX\u22a4)(\u02c6 \u03b2 \u2212\u03b2\u2217) \r \r \r \r \u221e = max i \f \f \f\u27e8ei \u2212\u02c6 \u03a3 \u02c6 M\u22a4ei, h\u27e9 \f \f \f \u2264\u03b1\u2225h\u22251 + \u03b3\u2225h\u22251,2 \u2264C s n\u03b1\u03bb + C \u221as \u00b7 sg n \u03b3\u03bb =C(s log(esgb) + sg log(d/sg)) n \u03c3 with probability at least 1 \u2212C exp(\u2212C s log(esgb)+sg log(d/sg) s ). This has \ufb01nished the proof of (24). Next, we consider \u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi. By (82) and Lemma 8 Part 2, we have 1 \u2212\u27e8ei, \u02c6 \u03a3 \u02c6 mi\u27e9= \u27e8ei, ei \u2212\u02c6 \u03a3 \u02c6 mi\u27e9\u2264\u03b1\u2225ei\u22251 + \u03b3\u2225ei\u22251,2 = \u03b1 + \u03b3. Therefore, for any c \u22650, \u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi \u2265\u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi + c(1 \u2212\u03b1 \u2212\u03b3) \u2212c\u27e8ei, \u02c6 \u03a3 \u02c6 mi\u27e9\u2265min m n m\u22a4\u02c6 \u03a3m + c(1 \u2212\u03b1 \u2212\u03b3) \u2212c\u27e8ei, \u02c6 \u03a3m\u27e9 o . Since m = cei/2 achieves the minimum of the right hand side, we have \u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi \u2265c(1 \u2212\u03b1 \u2212\u03b3) \u2212c2 4 \u02c6 \u03a3i,i. 43 \fIf \u02c6 \u03a3ii > 0 for all 1 \u2264i \u2264p, by setting c = 2(1 \u2212\u03b1 \u2212\u03b3)/\u02c6 \u03a3i,i, we have \u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi \u2265(1 \u2212\u03b1 \u2212\u03b3)2 \u02c6 \u03a3i,i , \u22001 \u2264i \u2264p. (83) Moreover, by Lemma 6 Part 2 with u = v = ei, we have P \u0010\f \f \f\u02c6 \u03a3i,i \u2212\u03a3i,i \f \f \f \u2265cmin 2 \u0011 \u22642 exp (\u2212cn) . By the union bound, P \u0010 \u22031 \u2264i \u2264p, \f \f \f\u02c6 \u03a3i,i \u2212\u03a3i,i \f \f \f \u2265cmin 2 \u0011 \u2264 p X i=1 P \u0010\f \f \f\u02c6 \u03a3i,i \u2212\u03a3i,i \f \f \f \u2265cmin 2 \u0011 \u2264db \u00b7 2 exp (\u2212cn) \u22642 exp (\u2212cn) . 
Therefore, with probability at least 1 \u22122 exp (\u2212cn), cmin 2 \u2264\u02c6 \u03a3i,i \u2264Cmax + cmin 2 , \u22001 \u2264i \u2264p. (83) and the previous inequality together imply that with probability at least 1 \u22122 exp (\u2212cn), \u02c6 m\u22a4 i \u02c6 \u03a3 \u02c6 mi \u2265 1 2Cmax , \u22001 \u2264i \u2264p. (24) and the previous inequality together imply (25). \u25a1" + }, + { + "url": "http://arxiv.org/abs/1905.08757v1", + "title": "Asymptotic Analysis for Extreme Eigenvalues of Principal Minors of Random Matrices", + "abstract": "Consider a standard white Wishart matrix with parameters $n$ and $p$.\nMotivated by applications in high-dimensional statistics and signal processing,\nwe perform asymptotic analysis on the maxima and minima of the eigenvalues of\nall the $m \\times m$ principal minors, under the asymptotic regime that $n,p,m$\ngo to infinity. Asymptotic results concerning extreme eigenvalues of principal\nminors of real Wigner matrices are also obtained. In addition, we discuss an\napplication of the theoretical results to the construction of compressed\nsensing matrices, which provides insights to compressed sensing in signal\nprocessing and high dimensional linear regression in statistics.", + "authors": "T. Tony Cai, Tiefeng Jiang, Xiaoou Li", + "published": "2019-05-21", + "updated": "2019-05-21", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "math.PR", + "stat.TH" + ], + "main_content": "Introduction Random matrix theory is traditionally focused on the spectral analysis of eigenvalues and eigenvectors of a single random matrix. See, for example, Bai and Silverstein (2010); Bryc et al. (2006); Diaconis and Evans (2001); Dyson (1962a,b,c); Jiang (2004a,b); Johnstone (2001, 2008); Mehta (2004); Tracy and Widom (1994, 1996, 2000); Wigner (1955, 1958). 
It is important in its own right and has been proved to be a powerful tool in a wide range of fields including high-dimensional statistics, quantum physics, electrical engineering, and number theory. The laws of large numbers and the limiting distributions for the extreme eigenvalues of Wishart matrices are now well known; see, e.g., Bai (1999) and Johnstone (2001, 2008).

arXiv:1905.08757v1 [math.ST] 21 May 2019

Let X = Xn×p be a random matrix with i.i.d. N(0, 1) entries and let W = X⊤X. Let λ1(W) ≥ ··· ≥ λp(W) be the eigenvalues of W. The limiting distribution of the largest eigenvalue λ1(W) satisfies, for n, p → ∞ with n/p → γ,

P( (λ1(W) − μn)/σn ≤ x ) → F1(x),

where μn = (√(n−1) + √p)², σn = (√(n−1) + √p)(1/√(n−1) + 1/√p)^{1/3}, and F1(x) is the distribution function of the Tracy–Widom law of type I. Results for the smallest eigenvalue λp(W) can be found in, e.g., Edelman (1988) and Bai and Yin (1993). These results have also been extended to generalized Wishart matrices, i.e., matrices X whose entries are i.i.d. but not necessarily normally distributed; see, e.g., Bai and Silverstein (2010); Péché (2009); Tao and Vu (2010).

Motivated by applications in high-dimensional statistics and signal processing, we study in this paper the extreme eigenvalues of the principal minors of a Wishart matrix W. Write X = (xij)n×p = (x1, ···, xp). Let S = {i1, ···, ik} ⊂ {1, 2, ···, p} with the size of S being k, and let XS = (xi1, ···, xik). Then WS = XS⊤XS is a k × k principal minor of W. Denote by λ1(WS) ≥ ··· ≥ λk(WS) the eigenvalues of WS in descending order.
We are interested in the largest and the smallest eigenvalues of all the k × k principal minors of W, in the setting where n, p, and k are large but k is relatively smaller than min{n, p}. More specifically, we are interested in the properties of the maximum of the eigenvalues of all k × k minors,

λmax(k) = max over 1 ≤ i ≤ k and S ⊂ {1,...,p}, |S| = k, of λi(WS),   (1.1)

and the minimum of the eigenvalues of all k × k minors,

λmin(k) = min over 1 ≤ i ≤ k and S ⊂ {1,...,p}, |S| = k, of λi(WS),   (1.2)

where |S| denotes the cardinality of the set S. This is a problem of significant interest in its own right, and it has important applications in statistics and engineering. Before we establish the properties of the extreme eigenvalues λmax(k) and λmin(k) of the k × k principal minors of a Wishart matrix W, we first discuss an application in signal processing and statistics, namely the construction of compressed sensing matrices, as the motivation for our study. The properties of the extreme eigenvalues λmax(k) and λmin(k) can also be used in other applications, including testing for the covariance structure of a high-dimensional Gaussian distribution, which is an important problem in statistics.

1.1 Construction of Compressed Sensing Matrices

Compressed sensing, which aims to develop efficient data acquisition techniques that allow accurate reconstruction of highly undersampled sparse signals, has received much attention recently in several fields, including signal processing, applied mathematics, and statistics. The development of compressed sensing theory also provides crucial insights into inference for high-dimensional linear regression in statistics. It is now well understood that the constrained ℓ1 minimization method provides an effective way of recovering sparse signals. See, e.g., Candes and Tao (2005, 2007), Donoho (2006), and Donoho et al. (2006).
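For small instances, the quantities in (1.1)–(1.2) can be computed by brute-force enumeration of all size-k principal minors; a minimal sketch (the sizes n, p, k below are illustrative only):

```python
import itertools
import numpy as np

def extreme_minor_eigs(W, k):
    """Max and min eigenvalues over all k x k principal minors of W, cf. (1.1)-(1.2)."""
    p = W.shape[0]
    lam_max, lam_min = -np.inf, np.inf
    for S in itertools.combinations(range(p), k):
        idx = np.asarray(S)
        eigs = np.linalg.eigvalsh(W[np.ix_(idx, idx)])  # ascending order
        lam_max = max(lam_max, eigs[-1])
        lam_min = min(lam_min, eigs[0])
    return lam_max, lam_min

rng = np.random.default_rng(1)
n, p, k = 30, 8, 3
X = rng.normal(size=(n, p))
W = X.T @ X                      # white Wishart matrix
lam_max, lam_min = extreme_minor_eigs(W, k)
```

By Cauchy interlacing, λmax(k) is nondecreasing and λmin(k) is nonincreasing in k, which gives a quick consistency check on the enumeration.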
More specifically, in compressed sensing one observes (X, y) with y = Xβ + z, where y ∈ Rⁿ, X ∈ Rⁿˣᵖ with n much smaller than p, β ∈ Rᵖ is a sparse signal of interest, and z ∈ Rⁿ is a vector of measurement errors. One wishes to recover the unknown sparse signal β ∈ Rᵖ based on (X, y) using an efficient algorithm. Since the number of measurements n is much smaller than the dimension p, the signal β is under-determined without structural assumptions, even in the noiseless case. A usual assumption in compressed sensing is that β is sparse, and one of the most commonly used frameworks for sparse signal recovery is the Restricted Isometry Property (RIP); see Candes and Tao (2005). A vector v is said to be k-sparse if |supp(v)| ≤ k, where supp(v) = {i : vi ≠ 0} is the support of v. The RIP requires subsets of certain cardinality of the columns of X to be close to an orthonormal system. For an integer 1 ≤ k ≤ p, define the restricted isometry constant δk to be the smallest non-negative number such that for all k-sparse vectors β,

(1 − δk)‖β‖₂² ≤ ‖Xβ‖₂² ≤ (1 + δk)‖β‖₂².   (1.3)

There are a variety of sufficient conditions on the RIP for the exact/stable recovery of k-sparse signals. A sharp condition was established in Cai and Zhang (2014), and a conjecture was proved in Zhang and Li (2018). Let

b*(t) = t/(4 − t) for 0 < t < 4/3, and b*(t) = √((t − 1)/t) for t ≥ 4/3.   (1.4)

For any given t > 0, the condition δtk < b*(t) guarantees the exact recovery of all k-sparse signals in the noiseless case through the constrained ℓ1 minimization

β̂ = arg min{‖γ‖₁ : y = Xγ, γ ∈ Rᵖ}.

Moreover, for any ε > 0, δtk < b*(t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k.
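The piecewise threshold b*(t) in (1.4) is continuous at the breakpoint t = 4/3, where both branches equal 1/2, and is increasing in t; a small check:

```python
import math

def b_star(t):
    """RIP threshold from (1.4): t/(4-t) on (0, 4/3) and sqrt((t-1)/t) for t >= 4/3."""
    if t <= 0:
        raise ValueError("t must be positive")
    if t < 4 / 3:
        return t / (4 - t)
    return math.sqrt((t - 1) / t)

# Both branches at the breakpoint t = 4/3: each evaluates to 1/2
left = (4 / 3) / (4 - 4 / 3)                # first branch
right = math.sqrt((4 / 3 - 1) / (4 / 3))    # second branch
```

Continuity at the breakpoint is consistent with the sharpness of the condition δtk < b*(t) across the two regimes of t.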
In addition, the condition δtk < b*(t) is also shown to be sufficient for stable recovery of approximately sparse signals in the noisy case. One of the major goals of compressed sensing is the construction of the measurement matrix Xn×p, with the number of measurements n as small as possible relative to p, such that all k-sparse signals can be accurately recovered. Deterministic construction of large measurement matrices that satisfy the RIP is known to be difficult. Instead, random matrices are commonly used. Certain random matrices have been shown to satisfy the RIP conditions with high probability; see, e.g., Baraniuk et al. (2008). When the measurement matrix X is a Gaussian matrix with i.i.d. N(0, 1/n) entries, for any given t the condition δtk < b*(t) is equivalent to requiring that the extreme eigenvalues λmax(tk) and λmin(tk) of the tk × tk principal minors of the Wishart matrix W = X⊤X satisfy

1 − b*(t) < λmin(tk) ≤ λmax(tk) < 1 + b*(t).

Hence the condition (1.3) can be viewed as a condition on λmin(tk) and λmax(tk) as defined in (1.1) and (1.2), respectively.

1.2 Main results and organization of the paper

In this paper, we investigate the asymptotic behavior of the extreme eigenvalues λmax(m) and λmin(m) defined in (1.1) and (1.2). We also consider the extreme eigenvalues of a related Wigner matrix. We then discuss the application of the results to the construction of compressed sensing matrices. The rest of the paper is organized as follows. Section 2 describes the precise setting of the problem. The main results are stated in Section 3. The proofs of the main theorems are given in Section 4. The proofs of all the supporting lemmas are given in the Appendix. The proof strategy for the main results is given in Section 4.1.
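The constrained ℓ1 minimization of Section 1.1 can be posed as a linear program. The sketch below solves a noiseless instance with a Gaussian measurement matrix with i.i.d. N(0, 1/n) entries; the dimensions, sparsity level, and seed are illustrative choices, not values from the paper, and scipy's generic LP solver stands in for any convex-optimization routine:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, k = 50, 100, 4
X = rng.normal(size=(n, p)) / np.sqrt(n)   # i.i.d. N(0, 1/n) entries
beta = np.zeros(p)
beta[rng.choice(p, size=k, replace=False)] = rng.normal(size=k) + 3.0
y = X @ beta                                # noiseless measurements

# min sum(t) subject to -t <= gamma <= t and X gamma = y; variables z = [gamma, t]
c = np.concatenate([np.zeros(p), np.ones(p)])
A_ub = np.block([[np.eye(p), -np.eye(p)],    #  gamma - t <= 0
                 [-np.eye(p), -np.eye(p)]])  # -gamma - t <= 0
b_ub = np.zeros(2 * p)
A_eq = np.hstack([X, np.zeros((n, p))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * p + [(0, None)] * p)
beta_hat = res.x[:p]
```

With n = 50 Gaussian measurements of a 4-sparse signal in dimension 100, exact recovery is expected with overwhelming probability, illustrating the RIP-based guarantees discussed above.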
2 Problem settings

In this paper, we consider a white Wishart matrix W = (wij), 1 ≤ i, j ≤ p, given by W = X⊤X, where X = (xij), 1 ≤ i ≤ n, 1 ≤ j ≤ p, and the xij are independent N(0, 1)-distributed random variables. For S ⊂ {1, ..., p}, set the principal minor WS = (wij), i, j ∈ S. For an m × m symmetric matrix A, let λ1(A) and λm(A) denote the largest and the smallest eigenvalues of A, respectively. Let

Tm,n,p = max over S ⊂ {1,...,p} with |S| = m of λ1(WS),   (2.1)

where |S| denotes the cardinality of the set S. We also define

Vm,n,p = min over S ⊂ {1,...,p} with |S| = m of λm(WS).   (2.2)

Of interest is the asymptotic behavior of Tm,n,p and Vm,n,p when both n and p grow large. Notice that wij is the sum of n independent and identically distributed (i.i.d.) random variables. By the standard central limit theorem, for given i ≥ 1 and j ≥ 1, as n → ∞,

(wii − n)/√n ⇒ N(0, 2), and wij/√n ⇒ N(0, 1) for i ≠ j,

where "⇒" indicates convergence in distribution. Motivated by this limiting distribution, we also consider the Wigner matrix W̃ = (w̃ij), 1 ≤ i, j ≤ p, a symmetric matrix whose upper triangular entries are independent Gaussian variables with the distribution

w̃ij ∼ N(0, 2) if i = j, and w̃ij ∼ N(0, 1) if i < j.   (2.3)

For S ⊂ {1, ..., p}, set W̃S = (w̃ij), i, j ∈ S. We will work with the corresponding statistics

T̃m,p = max over S ⊂ {1,...,p} with |S| = m of λ1(W̃S)   (2.4)

and

Ṽm,p = min over S ⊂ {1,...,p} with |S| = m of λm(W̃S).   (2.5)

In this paper, we study asymptotic results regarding the four statistics Tm,n,p, Vm,n,p, Ṽm,p, and T̃m,p.

3 Main results

Throughout the paper, we will let n → ∞ and let p = pn → ∞ with a speed depending on n. The following technical assumptions will be used in our main results.

Assumption 1.
The integer m \u22652 is \ufb01xed and log p = o(n1/2); or m \u2192\u221ewith m = o \u0012 min \u001a(log p)1/3 log log p , n1/4 (log n)3/2(log p)1/2 \u001b\u0013 . (3.1) Notice the second part of Assumption 1 implies that log p = o(n1/2(log n\u22123)). It says the population dimension p can be very large and it can be as large as exp{o(n1/2/ log n3)}. This assumption is used in the analysis of Tm,n,p and Vm,n,p. The requirement m = o((log p)1/3/ log log p) is used in the last step in (4.58). The second part of the condition m = o(n1/4(log n)\u22123/2(log p)\u22121/2) is needed in a few places including (4.55). The key scales (log p)1/3 and n1/4 in condition (3.1) are tight, the terms of lower order log log p and (log n)3/2 can be improved to be relatively smaller. The next assumption is needed for studying the properties of \u02dc Vm,p and \u02dc Tm,p. Assumption 2. The integer m satis\ufb01es that m \u22652 is \ufb01xed, or m \u2192\u221ewith m = o \u0010(log p)1/3 log log p \u0011 . (3.2) This condition is the same as the \ufb01rst part of (3.1). We start with asymptotic results for Tm,n,p in (2.1) and Vm,n,p in (2.2). Theorem 1. Suppose Assumption 1 in (3.1) holds. Recall Tm,n,p de\ufb01ned as in (2.1). Then, Zn := Tm,n,p \u2212n \u221an \u22122 p m log p \u21920 in probability as n \u2192\u221e. Furthermore, lim n\u2192\u221eE \u0002 e\u03b1|Zn|1{|Zn|\u2265\u03b4} \u0003 = 0 (3.3) for all \u03b1 > 0 and \u03b4 > 0. Remark 1. Suppose Assumption 1 in (3.1) holds. Recall Vm,n,p de\ufb01ned as in (2.2). Similar to the proof of Theorem 1 it can be shown that Z\u2032 n := Vm,n,p \u2212n \u221an + 2 p m log p \u21920 in probability as n \u2192\u221e, and furthermore, lim n\u2192\u221eE h e\u03b1|Z\u2032 n|1{|Z\u2032 n|\u2265\u03b4} i = 0 (3.4) 6 \ffor all \u03b1 > 0 and \u03b4 > 0. For reasons of space, we omit the details here. We now turn to the asymptotic analysis for \u02dc Tm,p and \u02dc Vm,p. Theorem 2. Suppose Assumption 2 in (3.2) is satis\ufb01ed. 
Recall \u02dc Tm,p de\ufb01ned as in (2.4). Then, \u02dc Zp := \u02dc Tm,p \u22122 p m log p \u21920 in probability as n \u2192\u221e. Furthermore, lim p\u2192\u221eE h e\u03b1| \u02dc Zp|1{| \u02dc Zp|\u2265\u03b4} i = 0 (3.5) for all \u03b1 > 0 and \u03b4 > 0. Remark 2. Suppose Assumption 2 in (3.2) is satis\ufb01ed. Review \u02dc W = ( \u02dc wij)1\u2264i,j\u2264p above (2.3), we know \u02dc W and \u2212\u02dc W have the same distribution. Let \u02dc Vm,p be de\ufb01ned as in (2.5). It follows that \u2212\u02dc Tm,p and \u02dc Vm,p have the same distribution. Then, by Theorem 2, \u02dc Z\u2032 p := \u02dc Vm,p + 2 p m log p \u21920 in probability as n \u2192\u221e. Furthermore, lim p\u2192\u221eE h e\u03b1| \u02dc Z\u2032 p|1{| \u02dc Z\u2032 p|\u2265\u03b4} i = 0 (3.6) for all \u03b1 > 0 and \u03b4 > 0. To better explain the convergence results in the (3.3) \u2013 (3.6), we give the following comments. Remark 3. Equation (3.3) has the following implications, whose rigorous justi\ufb01cation is given in Section 4. 1. lim n\u2192\u221eE \u0002 e\u03b1|Zn|\u0003 = 1 for all \u03b1 > 0; 2. lim n\u2192\u221eE(|Zn|\u03b1) = 0 for all \u03b1 > 0; 3. lim n\u2192\u221eVar(Zn) = 0. We now elaborate on the above results. First, the moment generating function of |Zn| exists and is close to 1 when n is large. As a result, |Zn| has a sub-exponential tail probability for large n. Second, Zn converges to 0 in Lq for all q > 0. Third, the variance of Zn vanishes for large n, indicating that Var(Tm,n,p) = o(n) as n \u2192\u221e. Overall, we can see (3.3) is stronger 7 \fthan the typical convergence in probability. This provides information on the behavior of the tail probability. Similar interpretations can also be made for (3.4), (3.5) and (3.6), respectively. 3.1 Extensions In this section, we discuss extensions of Theorems 1 and 2. Similar extensions can also be made to Remarks 1 and 2. They are omitted for the clarity of presentation. 
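The statistics of Theorem 2 and Remark 2 can be simulated directly for small m and p by sampling the Wigner ensemble (2.3) and enumerating minors; the sizes below are illustrative and far from the asymptotic regime, so T̃m,p is only loosely near its centering 2√(m log p):

```python
import itertools
import math
import numpy as np

def sample_wigner(p, rng):
    """Symmetric matrix with N(0,2) diagonal and N(0,1) off-diagonal entries, as in (2.3)."""
    A = rng.normal(size=(p, p))
    return (A + A.T) / math.sqrt(2)   # off-diagonal variance 1, diagonal variance 2

def t_and_v(W, m):
    """Compute (T~_{m,p}, V~_{m,p}) of (2.4)-(2.5) by enumerating all m x m minors."""
    eigs = [np.linalg.eigvalsh(W[np.ix_(S, S)])
            for S in map(np.asarray, itertools.combinations(range(W.shape[0]), m))]
    return max(e[-1] for e in eigs), min(e[0] for e in eigs)

rng = np.random.default_rng(2)
m, p = 2, 40
W = sample_wigner(p, rng)
t_mp, v_mp = t_and_v(W, m)
center = 2 * math.sqrt(m * math.log(p))
```

Since W̃ and −W̃ have the same distribution, −Ṽm,p is distributed as T̃m,p, which is exactly the symmetry used in Remark 2.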
First, we point out that Theorems 1 and 2 still hold if we replace the size-m principal minors by the principal minors with the size no larger than m in the de\ufb01nition of \u02dc Tm,p and Tm,n,p, by the eigenvalue interlacing theorem [see, e.g., Horn and Johnson (2012)]. We then have the following corollary. Corollary 1. De\ufb01ne \u02c6 Tm,n,p = maxS\u2282{1,...,p},|S|\u2264m \u03bb1(WS) and \u02c6 Tm,p = maxS\u2282{1,...,p},|S|\u2264m \u03bb1( \u02dc WS). Then, Theorems 1 and 2 still hold if \u201cTm,n,p\u201d and \u201c \u02dc Tm,p\u201d are replaced by \u201c \u02c6 Tm,n,p\u201d and \u201c \u02c6 Tm,p\u201d, respectively. Next, we extend Theorem 2 to allow other values of variance for the Wigner matrix. Here, we assume that the matrix \u02dc W to have the following distribution, instead of that in (2.3). For some \u03b7 \u22650, \u02dc wij \u223c \uf8f1 \uf8f2 \uf8f3 N(0, \u03b7) if i = j; N(0, 1) if i < j. (3.7) In addition, assume that \u02dc W is symmetric and \u02dc wij are independent for i \u2264j. Note that if \u03b7 = 2, then the above distribution is the same as that de\ufb01ned in (2.3). For \u02dc W de\ufb01ned in (3.7), we consider the statistic \u02dc Tm,p. The following law of large numbers is obtained. Theorem 3. Suppose p \u2192\u221eand that Assumption 2 in (3.2) is satis\ufb01ed. In addition, assume \u02dc W has the distribution as in (3.7) with 0 \u2264\u03b7 \u22642. Then, \u02dc Tm,p p [4(m \u22121) + 2\u03b7] log p \u21921 in probability as n \u2192\u221e. Remark 4. A related open question is whether Theorem 1 can be extended to other distribution of xij for the Wishart distribution. We conjecture that with certain assumptions on the moments of xij and under the asymptotic regime that n is su\ufb03ciently large compared to log p and m, and Var(x2 11) Var(x11x12) \u22642, the asymptotic behavior of Tm,n,p\u2212n \u221an will be similar to that 8 \fof \u02dc Tm,p as is discussed in Theorem 3. 
We leave this question for future research, because it requires development of some technical tools that are beyond the scope of the current paper. Some special cases for this question have been answered in the literature for Wishart matrices with non-Gaussian entries. For example, if m = 2, and xij follows an asymmetric Rademacher distribution P(xij = 1) = p and P(xij = \u22121) = 1 \u2212p, then it is easy to check W{i,j} = \uf8eb \uf8ed n Pn k=1 xkixkj Pn k=1 xkixkj n \uf8f6 \uf8f8 and \u03bb1(W[i,j]) = n + | Pn k=1 xkixkj|. As a result, Tm,n,p = max1\u2264i 0 and every \u03b4 > 0. Proposition 2. Suppose Assumption 2 in (3.2) is satis\ufb01ed. Recall \u02dc Tm,p de\ufb01ned as in (2.4). Then, lim p\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 = 0 for every \u03b1 > 0 and every \u03b4 > 0. Another auxiliary lemma is need. Its proof is put in the Appendix. 11 \fLemma 1. Let Z \u22650 be a random variable with E[e\u03b1Z] < \u221efor all \u03b1 > 0. Then E \u0002 e\u03b1Z1{Z\u2265\u03b4} \u0003 = e\u03b1\u03b4P(Z \u2265\u03b4) + \u03b1 Z \u221e \u03b4 e\u03b1tP(Z > t)dt for every \u03b1 > 0 and every \u03b4 > 0. Proof of Theorem 2. By Propositions 1 and 2, we have lim p\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0010\f \f \u02dc Tm,p \u22122 p m log p \f \f \u2265\u03b4 \u0011 = 0 (4.1) for any \u03b1 > 0 and \u03b4 > 0. Consequently, for given \u03b1 > 0, there exists a sequence of positive numbers ap \u21920 such that e\u03b1tt2P \u0010\f \f \u02dc Tm,p \u22122 p m log p \f \f \u2265t \u0011 \u2264ap for all t \u2265\u03b4 as p is su\ufb03ciently large. Now we estimate E(e\u03b1| \u02dc T\u22122\u221am log p|1{| \u02dc T\u22122\u221am log p|\u2265\u03b4}). 
By applying Lemma 1 to Zp,m = | \u02dc T \u22122\u221am log p|, we see E h e\u03b1| \u02dc Tm,p\u22122\u221am log p|1| \u02dc Tm,p\u22122\u221am log p|\u2265\u03b4 i =E \u0002 e\u03b1Zp,m1{Zp,m\u2265\u03b4} \u0003 =e\u03b1\u03b4P(Zp,m \u2265\u03b4) + Z \u221e \u03b4 e\u03b1tP(Zp,m \u2265t)dt. According to (4.1), the above display can be bounded from above by \u03b4\u22122ap + ap Z \u221e \u03b4 t\u22122dt, which tends to 0 as p \u2192\u221e. The proof is then complete. Now we proceed to prove Propositions 1 and 2. 12 \fProof of Proposition 1. For any t > 0, we have from the de\ufb01nition of \u02dc Tm,p that P \u0010 \u02dc Tm,p \u22652 p m log p + t \u0011 =P \u0010 max S\u2282{1,...,p},|S|=m \u03bb1( \u02dc WS) \u22652 p m log p + t \u0011 =P \u0010 [ S\u2282{1,...,p},|S|=m \b \u03bb1( \u02dc WS) \u22652 p m log p + t \t\u0011 \u2264 X S\u2282{1,...,p},|S|=m P \u0010 \u03bb1( \u02dc WS) \u22652 p m log p + t \u0011 \u2264pmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u22652 p m log p + t \u0011 , (4.2) where in the last inequality we use the fact that WS are identically distributed for all di\ufb00erent S with |S| = m. The following result enables us to bound the last probability. Lemma 2. Let \u02dc W{1,...,m} be de\ufb01ned as above (2.4) with S = {1, ..., m}. Then there is a constant \u03ba > 0 such that P \u0010 \u03bb1(W{1,...,m}) \u2265x or \u03bbm(W{1,...,m}) \u2264\u2212x \u0011 \u2264e\u2212(x2/4)+\u03bam log x for all x > 4\u221am and all m \u22652. Taking x := 2\u221am log p + t in the above lemma, we know x > 4\u221am as n is large enough, and hence log h e\u03b1tpmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u22652 p m log p + t \u0011i \u2264\u03b1t + m log p \u22121 4 \u0010 2 p m log p + t \u00112 + \u03bam log \u0010 2 p m log p + t \u0011 =\u03b1t \u2212t p m log p \u22121 4t2 + \u03bam log \u0010 2 p m log p \u0011 + \u03bam log \u0012 1 + t 2\u221am log p \u0013 . 
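Lemma 1's tail-integral identity, applied above, can be sanity-checked on a concrete distribution: for Z ∼ Exp(λ) with λ > α (a hypothetical choice purely for checking), the left side equals λ/(λ−α)·e^{−(λ−α)δ} in closed form, while the right side e^{αδ}P(Z ≥ δ) + α∫_δ^∞ e^{αt}P(Z > t)dt can be integrated numerically:

```python
import math
import numpy as np

lam, alpha, delta = 3.0, 1.0, 0.5   # need lam > alpha so that E[exp(alpha * Z)] < infinity

# Left side in closed form: integral_delta^inf e^{alpha z} * lam e^{-lam z} dz
lhs = lam / (lam - alpha) * math.exp(-(lam - alpha) * delta)

# Right side of Lemma 1, with P(Z > t) = e^{-lam t}; integrand decays like e^{-(lam-alpha)t},
# so truncating the integral at t = 25 is numerically negligible
t = np.linspace(delta, 25.0, 400_001)
integrand = np.exp((alpha - lam) * t)
dt = t[1] - t[0]
integral = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid rule
rhs = math.exp((alpha - lam) * delta) + alpha * integral
```

Both sides evaluate to λ/(λ−α)·e^{−(λ−α)δ} ≈ 0.5518 for these parameters, confirming the identity on this example.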
Note that \u22121 4t2 \u22640, \u03bam log(2\u221am log p) = O(m log log p), and \u03bam log(1 + t 2\u221am log p) = O( \u221am \u221alog pt) < t as p is su\ufb03ciently large. Thus, the above inequality further implies log h e\u03b1tpmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u22652 p m log p + t \u0011i \u2264 \u2212 \u0010p m log p \u2212\u03b1 \u22121 \u0011 t + O (m log log p) \u2264 \u2212t 2 p m log p + O (m log log p) (4.3) 13 \funiformly for all t \u22650 as p su\ufb03ciently large, where \u03b1 > 0 is \ufb01xed. With the above inequality, we complete the proof. Proof of Proposition 2. Recall \u03bep = log log log p. The proof will be evidently \ufb01nished if the following two limits hold. For each \u03b1 > 0 and each \u03b4 > 0, lim p\u2192\u221e sup \u03b4\u2264t\u22642\u221am log p\u2212m\u03bep e\u03b1tt2P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 = 0 (4.4) and lim p\u2192\u221e sup t\u22652\u221am log p\u2212m\u03bep e\u03b1tt2P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 = 0. (4.5) We now verify the above two limits. The proof of (4.4). Recall \u03bb1(A) \u22651 k k X i=1 k X j=1 aij (4.6) for any k \u00d7 k square and symmetric matrix A = (aij)1\u2264i,j\u2264k, where \u03bb1(A) is the largest eigenvalue of A. For each S \u2282{1, ..., p} such that |S| = m and \u02dc WS = ( \u02dc Wij), set \u02dc AS = ( \u02dc Wij \u2265(1 \u2212\u03b5m,p,t) r 4 log p m for all i, j \u2208S and i \u2264j ) , (4.7) where \u03b5m,p,t := (4m log p)\u22121/2t. (4.8) If 0 < t \u22642\u221am log p \u2212m\u03bep then 0 < \u03b5m,p,t < 1. According to (4.6), if there exists S0 \u2282{1, ..., p} such that |S0| = m and \u02dc AS0 occurs, then \u02dc Tm,p \u2265\u03bb1( \u02dc WS0) \u2265m(1 \u2212\u03b5m,p,t) r 4 log p m = 2 p m log p \u2212t. De\ufb01ne \u02dc Qm,p = X S\u2282{1,...,p}: |S|=m 1 \u02dc AS, (4.9) 14 \fwhere 1 \u02dc AS is the indicator function of \u02dc AS. 
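The bound (4.6), λ1(A) ≥ (1/k)Σᵢ Σⱼ aᵢⱼ, used in the proof of (4.4) is simply the Rayleigh quotient of A at the all-ones vector; a quick numerical check on an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 7
B = rng.normal(size=(k, k))
A = (B + B.T) / 2                   # arbitrary symmetric test matrix

lam1 = np.linalg.eigvalsh(A)[-1]    # largest eigenvalue
avg = A.sum() / k                   # (1/k) * sum_{i,j} a_ij = Rayleigh quotient at the ones vector
```

This is why it suffices, in the argument above, to force all entries of some minor W̃S to be large in order to force λ1(W̃S) to be large.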
Then, P \u0010 \u02dc Tm,p < 2 p m log p \u2212t \u0011 \u2264P \u0010 \u02dc Qm,p = 0 \u0011 . (4.10) For any random variable Y with EY > 0 and E(Y 2) < \u221e, we have P(Y \u22640) \u2264 P(Y \u2212EY \u2264\u2212EY ) \u2264 P((Y \u2212EY )2 \u2265(EY )2) \u2264 V ar(Y ) (EY )2 . (4.11) Applying this inequality to \u02dc Qm,p, we obtain P \u0010 \u02dc Qm,p = 0 \u0011 = P \u0010 \u02dc Qm,p \u22640 \u0011 \u2264V ar( \u02dc Qm,p) (E \u02dc Qm,p)2 . (4.12) We proceed to \ufb01nd a lower bound on E \u02dc Qm,p and an upper bound on V ar( \u02dc Qm,p) in two steps. Step 1: the estimate of E \u02dc Qm,p. Note that 1 \u02dc AS are identically (not independently) distributed Bernoulli variables for di\ufb00erent S with success rate P( \u02dc A{1,...,m}). Thus, we have E \u02dc Qm,p = \u0012 p m \u0013 P( \u02dc AS0), (4.13) where we choose S0 = {1, ..., m} with a bit abuse of notation. For convenience, write \u03c4m,p,t = (1 \u2212\u03b5m,p,t) r 4 log p m = r 4 log p m \u2212t m. (4.14) Since the upper triangular entries of \u02dc W are independent Gaussian variables, we have from (4.7) that P( \u02dc AS0) = m Y k=1 P \u0010 \u02dc Wkk \u2265\u03c4m,p,t \u0011 Y 1\u2264i 1 \u2212\u03b5m,p,t \u2265\u03bep q m 4 log p since 0 < t \u22642\u221am log p \u2212m\u03bep. It follows that | log(1 \u2212\u03b5m,p,t)| = O \u0010 log q log p m\u03be2 p \u0011 = O(log log p). Also, log q 4 log p m = O(log log p). As a result, from (4.14) we have \u03c4 2 m,p,t = (1 \u2212\u03b5m,p,t)2 \u00b7 4 log p m and log (\u03c4m,p,t) = O(log log p). It follows that log P( \u02dc AS0) = \u22121 4m2 (\u03c4m,p,t)2 + O(m2 log log p) = \u2212(1 \u2212\u03b5m,p,t)2m log p + O(m2 log log p). (4.17) Combining this with (4.13), we see log(E \u02dc Qm,p) = log \u0012 p m \u0013 \u2212(1 \u2212\u03b5m,p,t)2m log p + O(m2 log log p). (4.18) To control \u0000 p m \u0001 , we need the next result, which will be proved in the Appendix. Lemma 3. 
For all m \u2265p \u22651, we have m log p \u2212m log m \u2264log \u0012 p m \u0013 \u2264m log p + m \u2212m log m. Using the above lemma, (4.18), and note that m log m = O(m2 log log p), we have log(E \u02dc Qm,p) = \u0002 1 \u2212(1 \u2212\u03b5m,p,t)2\u0003 m log p + O(m2 log log p). (4.19) 16 \fStep 2: the estimate of V ar( \u02dc Qm,p). Reviewing \u02dc Qm,p in (4.9), we have V ar( \u02dc Qm,p) = E \u02dc Q2 m,p \u2212(E \u02dc Qm,p)2 = X S1,S2\u2282{1,..,p},|S1|=|S2|=m P( \u02dc AS1 \u2229\u02dc AS2) \u2212(E \u02dc Qm,p)2. (4.20) Note that P( \u02dc AS1 \u2229\u02dc AS2) is determined by |S1 \u2229S2| and m. By (4.7), X S1,S2\u2282{1,..,p},|S1|=|S2|=m P \u0010 \u02dc AS1 \u2229\u02dc AS2 \u0011 = m X l=0 X |S1\u2229S2|=l,|S1|=|S2|=m P \u0010 \u02dc AS1 \u2229\u02dc AS2 \u0011 = m X l=0 \u0012p l \u0013\u0012 p \u2212l m \u2212l \u0013\u0012p \u2212m m \u2212l \u0013 P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 = m X l=0 p! l!(m \u2212l)!(m \u2212l)!(p \u22122m + l)!P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 Single out the terms where l = 0 and l = m, we further have X S1,S2\u2282{1,..,p},|S1|=|S2|=m P( \u02dc AS1 \u2229\u02dc AS2) = p! m!m!(p \u22122m)!P \u0010 \u02dc A{1,...,m} \u00112 + \u0012 p m \u0013 P \u0010 \u02dc A{1,...,m} \u0011 + m\u22121 X l=1 p! l!(m \u2212l)!(m \u2212l)!(p \u22122m + l)!P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 . (4.21) On the other hand, E \u02dc Qm,p = \u0000 p m \u0001 P \u0000 \u02dc A{1,...,m} \u0001 and hence (E \u02dc Qm,p)2 = p!2 m!2(p \u2212m)!2P \u0010 \u02dc A{1,...,m} \u00112 = p! m!m!(p \u22122m)!P \u0010 \u02dc A{1,...,m} \u00112 \u00b7 p!(p \u22122m)! (p \u2212m)!2 . (4.22) 17 \fCombining (4.20), (4.21) and (4.22), we arrive at V ar( \u02dc Qm,p) =(E \u02dc Qm,p)2 \u0012 (p \u2212m)!2 p!(p \u22122m)! \u22121 \u0013 + E \u02dc Qm,p + m\u22121 X l=1 p! 
l!(m \u2212l)!(m \u2212l)!(p \u22122m + l)!P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 . Observe that p! (p\u22122m+l)! = p(p\u22121) \u00b7 \u00b7 \u00b7 (p\u22122m+l \u22121) \u2264p2m\u2212l and 1 l!(m\u2212l)!(m\u2212l)! \u22641. It follows that V ar( \u02dc Qm,p) \u2264E \u02dc Qm,p + (E \u02dc Qm,p)2 \u0012 (p \u2212m)!2 p!(p \u22122m)! \u22121 \u0013 + m max l=1,...,m\u22121 p2m\u2212lP \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 . (4.23) Similar to (4.15) we have P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 =\u00af \u03a6 \u0012 1 \u221a 2\u03c4m,p,t \u00132m\u2212l \u00af \u03a6 (\u03c4m,p,t) m(m\u22121) 2 \u00b72\u2212l(l\u22121) 2 . (4.24) Again, we \ufb01nd an approximation for the above display by using (4.16) and simplifying it. We arrive at log P \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 \u2264\u22121 m(2m2 \u2212l2)(1 \u2212\u03b5m,p,t)2 log p + O(m2 log log p). Therefore, for the last term in (4.23), we see log h mp2m\u2212lP \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011i \u2264log m + (2m \u2212l) log p \u22121 m(2m2 \u2212l2)(1 \u2212\u03b5m,p,t)2log p + O(m2 log log p) = \u0014 2m \u2212l \u22121 m(1 \u2212\u03b5m,p,t)2(2m2 \u2212l2) \u0015 log p + O(m2 log log p). (4.25) The following lemma enables us to evaluate the coe\ufb03cient of log p. 18 \fLemma 4. For any 0 < \u03b5 < 1 and m \u22652, we have max l=1,...,m\u22121 n (2m \u2212l) \u22122m2 \u2212l2 m (1 \u2212\u03b5)2o =(2m \u22121) \u2212 \u00002m \u22121 m \u0001 (1 \u2212\u03b5)2 =2m \u0002 1 \u2212(1 \u2212\u03b5)2\u0003 \u2212 h 1 \u22121 m(1 \u2212\u03b5)2i . 
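Several ingredients of the moment calculation above are finite combinatorial facts that can be checked exactly: the overlap decomposition of (4.21) sums to the total pair count C(p, m)², Lemma 3 bounds log C(p, m), and the second-moment bound P(Y = 0) ≤ Var(Y)/(EY)² behind (4.11)–(4.12) can be tested on a Bernoulli sum:

```python
import math

def pair_count(p, m):
    """Sum over overlap sizes l of #{(S1, S2): |S1| = |S2| = m, |S1 cap S2| = l}, cf. (4.21)."""
    return sum(math.comb(p, l) * math.comb(p - l, m - l) * math.comb(p - m, m - l)
               for l in range(m + 1))

def lemma3_holds(p, m, tol=1e-9):
    """Lemma 3: m log p - m log m <= log C(p, m) <= m log p + m - m log m."""
    lc = math.log(math.comb(p, m))
    lo = m * math.log(p) - m * math.log(m)
    hi = m * math.log(p) + m - m * math.log(m)
    return lo - tol <= lc <= hi + tol

# Second-moment method on Y = sum of N i.i.d. Bernoulli(q): P(Y = 0) <= Var(Y)/(E Y)^2
N, q = 10, 0.3
p_zero = (1 - q) ** N               # exact P(Y = 0)
second_moment_bound = (1 - q) / (N * q)   # Var(Y)/(E Y)^2 = N q (1-q) / (N q)^2
```

The identity pair_count(p, m) = C(p, m)² follows from Vandermonde's identity and confirms that (4.21) partitions all ordered pairs of size-m subsets by overlap.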
Applying the above lemma to (4.25), we get m max l=1,...,m\u22121 p2m\u2212lP \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 \u2264exp n 2m \u0002 1 \u2212(1 \u2212\u03b5m,p,t)2\u0003 log p \u2212 \u0002 1 \u22121 m(1 \u2212\u03b5m,p,t)2\u0003 log p + O(m2 log log p) o . This inequality together with (4.19) implies that \u0010 E \u02dc Qm,p \u0011\u22122 m max l=1,...,m\u22121 p2m\u2212lP \u0010 \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l} \u0011 \u2264exp \u001a \u2212 \u0014 1 \u22121 m(1 \u2212\u03b5m,p,t)2 \u0015 log p + O(m2 log log p) \u001b . (4.26) Combining the above display with (4.23), we arrive at V ar \u0010 \u02dc Qm,p \u0011 \u0010 E \u02dc Qm,p \u00112 \u2264exp \u001a \u2212 \u0014 1 \u22121 m(1 \u2212\u03b5m,p,t)2 \u0015 log p + O(m2 log log p) \u001b + \u0010 E \u02dc Qm,p \u0011\u22121 + (p \u2212m)!2 p!(p \u22122m)! \u22121. Lemma 5. For all integers p \u2265m \u22651 satisfying 2m < p, we have (p \u2212m)!2 p!(p \u22122m)! < 1. 19 \fTherefore, V ar( \u02dc Qm,p) (E \u02dc Qm,p)2 \u2264exp \u001a \u2212 \u0014 1 \u22121 m(1 \u2212\u03b5m,p,t)2 \u0015 log p + O(m2 log log p) \u001b + \u0010 E \u02dc Qm,p \u0011\u22121 . (4.27) We now study the last two terms one by one. For m \u22652, \u2212 \u0014 1 \u22121 m(1 \u2212\u03b5m,p,t)2 \u0015 log p + O(m2 log log p) \u2264\u22121 2 log p + O(m2 log log p) \u2264\u22121 4 log p (4.28) for n su\ufb03ciently large under Assumption 2 in (3.2). Recalling \u03b5m,p,t = (4m log p)\u22121/2t, we see from (4.19) that log \u0010 E \u02dc Qm,p \u0011\u22121 = \u2212 \u0002 1 \u2212(1 \u2212\u03b5m,p,t)2\u0003 m log p + O(m2 log log p) \u2264\u2212\u03b5m,p,tm log p + O(m2 log log p) \u2264\u2212t 2 p m log p + O(m2 log log p). (4.29) Combining (4.27), (4.28) and (4.29), we arrive at V ar( \u02dc Qm,p) (E \u02dc Qm,p)2 \u2264exp \u001a \u2212t 2 p m log p + O(m2 log log p) \u001b + exp \u001a \u22121 4 log p \u001b . 
This together with (4.10) and (4.12) yields P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 \u2264exp \u001a \u2212t 2 p m log p + O(m2 log log p) \u001b + 1 p1/4 uniformly for all \u03b4 \u2264t \u22642\u221am log p \u2212m\u03bep. Consequently, we get (4.4). The proof of (4.5). For any S \u2282{1, ..., p} with |S| = m, write \u02dc WS = ( \u02dc Wij)i,j\u2208S. Note that 20 \f\u03bb1( \u02dc WS) \u2265maxi\u2208S \u02dc Wii. Thus, \u02dc Tm,p \u2265 max S\u2282{1,...,p},|S|=m \u03bb1( \u02dc WS) \u2265max 1\u2264i\u2264p \u02dc Wii. As a result, P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 \u2264 P \u0010 max 1\u2264i\u2264p \u02dc Wii \u22642 p m log p \u2212t \u0011 = \u03a6 \u0010p 2m log p \u22121 \u221a 2t \u0011p , where the function \u03a6(z) = R z \u2212\u221e 1 \u221a 2\u03c0e\u2212s2 2 ds for z \u2208R. To proceed, we discuss two scenarios: 2\u221am log p\u2212m\u03bep \u2264t \u22644\u221am log p and t > 4\u221am log p. For 2\u221am log p\u2212m\u03bep \u2264t \u22644\u221am log p, we have \u03a6 \u0012p 2m log p \u2212 t \u221a 2 \u0013p \u2264\u03a6 \u0012p 2m log p \u22122\u221am log p \u2212m\u03bep \u221a 2 \u0013p =\u03a6 \u0012m\u03bep \u221a 2 \u0013p = exp \u001a p log \u0012 1 \u2212\u00af \u03a6 \u0012m\u03bep \u221a 2 \u0013\u0013\u001b \u2264exp \u001a \u2212p\u00af \u03a6 \u0012m\u03bep \u221a 2 \u0013\u001b , where \u00af \u03a6(z) = 1\u2212\u03a6(z) for any z \u2208R and the inequality log(1\u2212x) \u2264\u2212x for any x < 1 is used in the last step. Note \u00af \u03a6 \u0010 1 \u221a 2m\u03bep \u0011 = (1 + o(1)) 1 \u221a 4\u03c0m\u03bepe\u2212m2\u03bep2 4 and p0.1(\u03bep)\u22121e\u2212m2\u03bep2 4 \u2192\u221e since \u03bep = log log log p. Thus, \u03a6 \u0010p 2m log p \u2212 t \u221a 2 \u0011p \u2264exp n \u2212p0.9m o , (4.30) for su\ufb03ciently large p. 
This further implies lim p\u2192\u221e sup 2\u221am log p\u2212m\u03bep\u2264t\u22644\u221am log p e\u03b1tt2P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 \u2264lim sup p\u2192\u221eexp n \u2212p0.9m + \u03b1 \u00b7 4 p m log p + 2 log \u0010 4 p m log p \u0011o =0 (4.31) 21 \ffor any \u03b1 > 0. Note that \u03a6(\u2212x) = \u00af \u03a6(x) \u2264 1 \u221a 2\u03c0 xe\u2212x2/2 \u2264e\u2212x2/2 for any x \u22651. Then, for the other scenario where t \u22654\u221am log p, we have \u03a6 \u0012p 2m log p \u2212 t \u221a 2 \u0013p \u2264\u03a6 \u0012 \u2212 t 2 \u221a 2 \u0013p \u2264exp \u001a \u2212pt2 16 \u001b as n is large enough. Thus, lim p\u2192\u221e sup t\u22654\u221am log p e\u03b1tt2P \u0010 \u02dc Tm,p \u22642 p m log p \u2212t \u0011 \u2264lim sup p\u2192\u221e sup t\u22654\u221am log p exp \u001a \u2212pt2 16 + \u03b1t + 2 log t \u001b =0 (4.32) for any \u03b1 > 0. Joining (4.31) and (4.32), we see (4.5). This completes the whole proof. 4.2.2 Proof of Theorem 1 To prove Theorem 1, we need the following two propositions. Proposition 3. Suppose Assumption 1 in (3.1) holds. Recall Tm,n,p de\ufb01ned as in (2.1). Then, lim n\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0012 1 \u221an(Tm,n,p \u2212n) \u22652 p m log p + t \u0013 = 0 for any \u03b1 > 0 and \u03b4 > 0. Proposition 4. Suppose Assumption 1 in (3.1) holds. Recall Tm,n,p de\ufb01ned as in (2.1). Then, lim n\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0012 1 \u221an(Tm,n,p \u2212n) \u22642 p m log p \u2212t \u0013 = 0 for any \u03b1 > 0 and \u03b4 > 0. Proof of Theorem 1. Similar to the proof of Theorem 2, it is su\ufb03cient to prove (3.3). By the same argument as in the proof of Theorem 2, with the upper bound for P( Tm,n,p\u2212n \u221an \u2265 2\u221am log p + t) given in Proposition 3 and the upper bound for P( Tm,n,p\u2212n \u221an \u22642\u221am log p \u2212t) for t > \u03b4 given in Proposition 4, we get (3.3). In the following we start to prove Propositions 3 and 4. 
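Two elementary inequalities used repeatedly in the tail estimates above, the Gaussian tail bound Phi_bar(x) <= exp(-x^2/2)/(sqrt(2 pi) x) <= exp(-x^2/2) for x >= 1 and log(1 - u) <= -u for u < 1, can be checked numerically. A small illustrative sketch (not from the paper):

```python
from math import erfc, exp, log, pi, sqrt

def Phi_bar(x):
    # upper Gaussian tail P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2))

# Phi_bar(x) <= exp(-x^2/2) / (sqrt(2*pi) * x) <= exp(-x^2/2) for x >= 1
for i in range(80):
    x = 1.0 + 0.25 * i
    mills = exp(-x * x / 2) / (sqrt(2 * pi) * x)
    assert Phi_bar(x) <= mills <= exp(-x * x / 2)

# log(1 - u) <= -u for u < 1
for u in (-3.0, -0.5, 0.0, 0.25, 0.9, 0.999):
    assert log(1 - u) <= -u + 1e-12
```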
22 \fProof of Proposition 3. Without loss of generality, we assume \u03b4 < 1 since the expectation in (3.3) is monotonically decreasing in \u03b4. Let W{1,...,m} be as WS above (2.1) with S = {1, 2, \u00b7 \u00b7 \u00b7 , m}. Analogous to (4.2), we have P \u0012 1 \u221an(Tm,n,p \u2212n) \u22652 p m log p + t \u0013 \u2264pmP \u0012 1 \u221an(\u03bb1(W{1,...,m}) \u2212n) \u22652 p m log p + t \u0013 . (4.33) We now bound the last probability. Since the above tail probability involve moderate bound and large deviation bound for di\ufb00erent ranges of t, we will discuss three di\ufb00erent cases and use di\ufb00erent proof strategies. Recall \u03bep = log log log p. Set \u03c9n = \u0010 m log p \u00111/2 \u03bep log n. (4.34) The three cases are: (1) t > \u03b4\u221an 100 , (2) \u03b4 \u2228\u03c9n \u2264t \u2264\u03b4\u221an 100 , and (3) \u03b4 \u2264t < \u03b4 \u2228\u03c9n. They cover all situations for t \u2265\u03b4. For the \ufb01rst two cases, the upper bound is based on the next lemma, which gives a moderate deviation bound for the spectrum of 1 \u221anW{1,...,m} from the identity matrix Im. Lemma 6. There exists a constant \u03ba > 0 such that for all n, p, m, r \u22651, 0 < d < 1/2 and y > 2dmr, we have P \u0012\u03bb1(W{1,...,m}) \u2212n n \u2265y \u0013 \u22642 \u00b7 exp n \u2212nI(1 + y \u22122dmr) + \u03bam log 1 d o + 2 \u00b7 e\u2212mnI(r) (4.35) and P \u0012\u03bbm(W{1,...,m}) \u2212n n \u2264\u2212y \u0013 \u22642 \u00b7 exp n \u2212nI(1 \u2212y + 2dmr) + \u03bam log 1 d o + 2 \u00b7 e\u2212mnI(r) (4.36) where I(s) = 1 2(s \u22121 \u2212log s) for s > 0 and I(s) = \u221efor s \u22640. Case 1: t > \u03b4\u221an 100 . Let \u03b1 > 0 be given. Choose r = max(2, 1 + 80\u03b1t mn ), d = min(1 2, t 4m\u221anr), and y = 2\u221am log p+t \u221an in Lemma 6. The choice of r, d, and y satis\ufb01es that 2dmr \u2264 t 2\u221an and hence y \u22122dmr \u22652\u221am log p \u221an + t 2\u221an. Set z = 2\u221am log p \u221an + t 2\u221an. 
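The bookkeeping just claimed, that these choices of r and d give 2dmr <= t/(2 sqrt(n)) and hence y - 2dmr >= z, can be spot-checked numerically. The parameter values below are hypothetical samples, not from the paper:

```python
from math import sqrt, log

# check 2*d*m*r <= t/(2*sqrt(n)) for r = max(2, 1 + 80*alpha*t/(m*n)) and
# d = min(1/2, t/(4*m*sqrt(n)*r)), and that y - 2*d*m*r >= z as in the text
for m in (2, 5, 10):
    for n in (10 ** 3, 10 ** 5):
        for p in (10 ** 2, 10 ** 4):
            for alpha in (0.1, 1.0, 10.0):
                for t in (0.01 * sqrt(n), 0.1 * sqrt(n), sqrt(n), 10 * sqrt(n)):
                    r = max(2.0, 1 + 80 * alpha * t / (m * n))
                    d = min(0.5, t / (4 * m * sqrt(n) * r))
                    assert 2 * d * m * r <= t / (2 * sqrt(n)) + 1e-12
                    y = (2 * sqrt(m * log(p)) + t) / sqrt(n)
                    z = 2 * sqrt(m * log(p)) / sqrt(n) + t / (2 * sqrt(n))
                    assert y - 2 * d * m * r >= z - 1e-12
```

Both branches of the minimum defining d are exercised by these samples: when d = t/(4 m sqrt(n) r) the first assertion holds with equality, and when d = 1/2 the constraint t > 2 m sqrt(n) r makes it strict.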
Notice that I(s) from Lemma 6 23 \fis increasing for s \u22651. Then, by the lemma, t2e\u03b1tpmP \u0012\u03bb1(W{1,...,m}) \u2212n \u221an \u22652 p m log p + t \u0013 =t2e\u03b1tpmP \u0012\u03bb1(W{1,...,m}) \u2212n \u221an \u2265\u221any \u0013 \u22642 \u00b7 exp \u001a \u2212n 2[z \u2212log(1 + z)] + \u03bam log 1 d + \u03b1t + 2 log t + m log p \u001b + 2 \u00b7 exp \u001a \u22121 2(r \u22121 \u2212log r)mn + \u03b1t + 2 log t + m log p \u001b . (4.37) The following lemma says that both of the last two terms go to zero. Lemma 7. Suppose Assumption 1 in (3.1) holds. Let \u03b1 > 0 and \u03b4 > 0 be given. For r = max(2, 1 + 80\u03b1t mn ), d = min( 1 2, t 4m\u221anr) and z = 2\u221am log p \u221an + t 2\u221an, we have lim n\u2192\u221esup t> \u03b4\u221an 100 exp \u001a \u2212n 2[z \u2212log(1 + z)] + \u03bam log 1 d + \u03b1t + 2 log t + m log p \u001b = 0 (4.38) and lim n\u2192\u221esup t> \u03b4\u221an 100 exp \u001a \u22121 2(r \u22121 \u2212log r)mn + \u03b1t + 2 log t + m log p \u001b = 0. (4.39) Combining (4.33), (4.37)-(4.39), we conclude lim n\u2192\u221esup t> \u03b4\u221an 100 t2e\u03b1tP \u0012 1 \u221an(Tm,n,p \u2212n) \u22652 p m log p + t \u0013 = 0. (4.40) Case 2: \u03b4 \u2228\u03c9n \u2264t \u2264\u03b4\u221an 100 . Review \u03c9n in (4.34). Now we choose r = 2, d = t 8m\u221an < 1 2 and y = 2\u221am log p+t \u221an . Then y > t 2\u221an = 2dmr. By (4.35), t2e\u03b1tpmP \u0012\u03bb1(W{1,...,m}) \u2212n \u221an \u22652 p m log p + t \u0013 \u22642 \u00b7 exp \u001a \u2212n 2[z \u2212log(1 + z)] + \u03bam log 1 d + \u03b1t + 2 log t + m log p \u001b + 2 \u00b7 exp \u001a \u22121 2(1 \u2212log 2)mn + \u03b1t + 2 log t + m log p \u001b (4.41) where z := y \u22122dmr = 2\u221am log p \u221an + t 2\u221an. The last two terms are analyzed in the next lemma. Lemma 8. Suppose Assumption 1 in (3.1) holds. Let \u03c9n be as in (4.34). 
For \u03b4 \u2228\u03c9n \u2264t \u2264 24 \f\u03b4\u221an 100 , z = 2\u221am log p \u221an + t 2\u221an and d = t 8m\u221an, we have exp \u001a \u2212n 2[z \u2212log(1 + z)] + \u03bam log 1 d + \u03b1t + 2 log t + m log p \u001b \u2264exp \u001a \u22121 4t p m log p \u001b (4.42) as n is su\ufb03ciently large. In addition, \u22121 2(1 \u2212log 2)mn + \u03b1t + 2 log t + m log p = \u22121 \u2212log 2 2 [1 + o(1)]mn (4.43) as n \u2192\u221e. Joining (4.41)-(4.43), we obtain lim n\u2192\u221e sup \u03b4\u2228\u03c9n\u2264t\u2264\u03b4\u221an 100 pmt2e\u03b1tP \u0012\u03bb1(W{1,...,m}) \u2212n \u221an \u22652 p m log p + t \u0013 = 0, (4.44) which together with (4.33) implies that lim n\u2192\u221e sup \u03b4\u2228\u03c9n\u2264t\u2264\u03b4\u221an 100 t2e\u03b1tP \u0012 1 \u221an(Tm,n,p \u2212n) \u22652 p m log p + t \u0013 = 0. This completes our analysis for Case 2. By using the same argument as obtaining (4.44), we have the following limit, which will be used later on. lim n\u2192\u221e sup \u03b4\u2228\u03c9n\u2264t\u2264\u03b4\u221an 100 t2e\u03b1tpmP \u0012\u03bbm(W{1,...,m}) \u2212n \u221an \u2264\u22122 p m log p \u2212t \u0013 = 0. (4.45) We next study Case 3. Case 3: \u03b4 \u2264t < \u03b4\u2228\u03c9n. Note that this case is only possible if n \u2265exp{((log p)/m)1/2\u03bep \u22121\u03b4}. We point out that Lemma 6 is not a suitable approach for bounding the tail probability in this case because the term m log(1/d), which cannot be easily controlled, will dominate the other terms in the error bound for very large n. Instead, we will use another approach to obtain an upper bound of P \u0000\u03bb1(W{1,...,m}) \u22652\u221am log p + t \u0001 . The main step here is to quantify the approximation of the extreme eigenvalue of a Wishart matrix to that of a Wigner matrix. We will analyze their density functions and leverage them with the results in the proof of 25 \fTheorem 2. 
Let \u00b5 = (\u00b51, ..., \u00b5m) be the order statistics of the eigenvalues of W{1,...,m} such that \u00b51 > \u00b52 > ... > \u00b5m. Write \u03bd = (\u03bd1, ..., \u03bdm) with \u03bdi = (\u00b5i \u2212n)/\u221an. Let \u02dc W{1,...,m} = ( \u02dc wij)1\u2264i,j\u2264m where \u02dc wij\u2019s are as in (2.3). Let the eigenvalues of \u02dc W{1,...,m} be \u03bb1 > ... > \u03bbm. Set \u03bb = (\u03bb1, ..., \u03bbm). Intuitively, the law of \u03bd is close to that of \u03bb when n is large. The next lemma quanti\ufb01es the approximation speed. Review \u2225x\u2225\u221e= max1\u2264i\u2264m |xi| for any x = (x1, \u00b7 \u00b7 \u00b7 , xm) \u2208Rm. Lemma 9. Let gn,m(\u00b7) be the density function of \u03bd, and let hm(\u00b7) be the density function of \u03bb. Assume m3 = o(n). Then, log gn,m(v) \u2212log hm(v) = o(1) + O \u0000m2n\u22121/2\u2225v\u2225\u221e+ m2n\u22121\u2225v\u22252 \u221e+ mn\u22121/2\u2225v\u22253 \u221e \u0001 for all v \u2208Rm with \u2225v\u2225\u221e\u22642 3 \u221an. Let rm,n = 2\u221am log p + \u03c9n, where \u03c9n is as in (4.34). Then for t such that \u03b4 \u2264t \u2264\u03c9n, P \u0012 1 \u221an \u0000\u03bb1(W{1,...,m}) \u2212n \u0001 \u22652 p m log p + t \u0013 \u2264P \u0012 1 \u221an \u0000\u03bb1(W{1,...,m}) \u2212n \u0001 \u22652 p m log p + t, max 1\u2264i\u2264m |\u03bdi| \u2264rm,n \u0013 + P \u0000max 1\u2264i\u2264m |\u03bdi| > rm,n \u0001 . (4.46) There are three probabilities above, denote the second one by Hn. For Hn, we use the change-of-measure argument. 
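The change-of-measure identity behind this step, P_g(A) = E_h[(g/h) 1_A] for densities g and h, can be illustrated with a toy Monte Carlo sketch using two Gaussian densities. This is purely illustrative and not code from the paper:

```python
import random
from math import exp, pi, sqrt, erf

random.seed(1)

def g(x):  # N(0,1) density (target measure)
    return exp(-x * x / 2) / sqrt(2 * pi)

def h(x):  # N(1,1) density (sampling measure)
    return exp(-(x - 1) ** 2 / 2) / sqrt(2 * pi)

# estimate P_g(X > 2) by sampling from h and reweighting by the ratio g/h
sims = 200000
est = sum(g(x) / h(x) * (x > 2.0)
          for x in (random.gauss(1.0, 1.0) for _ in range(sims))) / sims
truth = 0.5 * (1 - erf(2.0 / sqrt(2)))  # P(N(0,1) > 2), about 0.02275
assert abs(est - truth) < 0.002
```

The proof uses the same reweighting with the exponential of log g_{n,m} - log h_m playing the role of the density ratio.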
In fact, Hn = Z v1\u22652\u221am log p+t,\u2225v\u2225\u221e\u2264rm,n gn,m(v)dv = Z v1\u22652\u221am log p+t,\u2225v\u2225\u221e\u2264rm,n exp{log gn,m(v) \u2212log hm(v)}hm(v)dv = exp n o(1)+O(m2n\u22121/2rm,n) + O(m2n\u22121r2 m,n) + O(mn\u22121/2r3 m,n) o \u00b7 Z v1\u22652\u221am log p+t,\u2225v\u2225\u221e\u2264rm,n hm(v)dv \u22642\u00b7 exp n O(m2n\u22121/2rm,n) + O(m2n\u22121r2 m,n) + O(mn\u22121/2r3 m,n) o \u00b7 P \u0010 \u03bb1( \u02dc W{1,...m}) \u22652 p m log p + t \u0011 . 26 \fNow O(m2n\u22121/2rm,n) + O(m2n\u22121r2 m,n) + O(mn\u22121/2r3 m,n) = \u0010 m r2 m,n + m \u221anrm,n + 1 \u0011 \u00b7 O(mn\u22121/2r3 m,n) = O(mn\u22121/2r3 m,n) since rm,n > \u221am log p and m = o(n). By the de\ufb01nition of \u03c9n in (4.34), mn\u22121/2r3 m,n = mn\u22121/2 \u00b7 O \u0010 (m log p)3/2 + m3/2(log n)3(log log log p)3(log p)\u22123/2\u0011 = p m log p \u00b7 O \u0010m2 log p \u221an + m2(log n)3(log log log p)3 \u221an(log p)2 \u0011 = o( p m log p) where Assumption 1 from (3.1) is used. Therefore, Hn \u2264exp \u0010 o \u0000p m log p \u0001\u0011 \u00b7 P \u0010 \u03bb1( \u02dc W{1,...m}) \u22652 p m log p + t \u0011 . Note that t \u22641 \u03b2e\u03b2t for any \u03b2 > 0 and t > 0. It follows from (4.3) that sup t\u2265\u03b4 \b pme\u03b1tt2Hn \t \u2264sup t\u2265\u03b4 exp n \u22121 2t p m log p + O(m log log p) + o( p m log p ) o \u2264sup t\u2265\u03b4 exp n \u22121 2 p m log p \u00b7 (\u03b4 + o(1)) + o( p m log p ) o =o(1) by the fact t \u2265\u03b4 and Assumption 1. Combining this with (4.46), we have sup \u03b4\u2264t\u2264\u03b4\u2228\u03c9n \u001a pme\u03b1tt2 \u00b7 P \u0010\u03bb1(W{1,...,m}) \u2212n \u221an \u22652 p m log p + t \u0011\u001b \u2264o(1) + pme\u03b1\u03c9n+2 log \u03c9n \u00b7 P \u0000max 1\u2264i\u2264m |\u03bdi| \u2265rm,n \u0001 . (4.47) We next analyze P \u0000max1\u2264i\u2264m |\u03bdi| \u2265rm,n \u0001 . Recall rm,n = 2\u221am log p + \u03c9n, where \u03c9n is as in (4.34). 
Recall that we only discuss Case 3 when \u03b4 \u2264t < \u03b4 \u2228\u03c9n, and this is only meaningful 27 \fwhen \u03c9n > \u03b4. Thus, \u03b4 \u2228\u03c9n = \u03c9n \u2264 \u221an\u03b4 100 . Thus, from (4.44) we have lim n\u2192\u221epme\u03b1\u03c9n+2 log \u03c9nP \u0012\u03bb1(W{1,...,m}) \u2212n \u221an \u2265rm,n \u0013 = 0. (4.48) By (4.45), lim n\u2192\u221epme\u03b1\u03c9n+2 log \u03c9nP \u0012\u03bbm(W{1,...,m}) \u2212n \u221an \u2264\u2212rm,n \u0013 = 0. (4.49) Since max1\u2264i\u2264m |\u03bdi| = max(\u03bd1, \u2212\u03bdm), by combining (4.48) and (4.49), we see that lim n\u2192\u221epme\u03b1\u03c9n+2 log \u03c9nP \u0012 max 1\u2264i\u2264m |\u03bdi| \u2265rm,n \u0013 = 0. Combining this with (4.47), we further have lim n\u2192\u221e sup \u03b4\u2264t\u2264\u03b4\u2228\u03c9n pme\u03b1tt2P \u0012 1 \u221an(\u03bb1(W{1,...,m}) \u2212n) \u22652 p m log p + t \u0013 = 0. (4.50) This completes our analysis for Case 3. Now, we combine (4.40), (4.44) and (4.50), and arrive at lim n\u2192\u221esup t\u2265\u03b4 pme\u03b1tt2P \u0012 1 \u221an(\u03bb1(W{1,...,m}) \u2212n) \u22652 p m log p + t \u0013 = 0. This and (4.33) conclude lim n\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0012 1 \u221an(Tm,n,p \u2212n) \u22652 p m log p + t \u0013 = 0. Proof of Proposition 4. Noticing the expectation in (3.3) is non-increasing in \u03b4. Without loss of generality, we assume \u03b4 < 1. Here we discuss two scenarios that are similar to those in the proof of Theorem 2. They are 1) \u03b4 \u2264t \u22642\u221am log p \u2212m\u03bep and 2) t > 2\u221am log p \u2212m\u03bep, where \u03bep = log log log p. Scenario 1: \u03b4 \u2264t \u22642\u221am log p \u2212m\u03bep. Similar to the proof of Theorem 2, we de\ufb01ne the event AS as follows. 
For each S \u2282{1, ..., p} with |S| = m, set AS = n 1 \u221an(Wkk \u2212n) \u2265\u03c4m,p,t, Wij \u221an \u2265\u03c4m,p,t for all i, j, k \u2208S and i < j o , (4.51) 28 \fwhere \u03c4m,p,t = (1 \u2212\u03b5m,p,t) q 4 log p m and \u03b5m,p,t = (4m log p)\u22121/2t. We also de\ufb01ne Qm,n,p = X S\u2282{1,...,p}: |S|=m 1AS. (4.52) Similar to the discussion between (4.6) and (4.12) in the proof of Theorem 2, we have P \u0010 Tm,n,p \u22642 p m log p \u2212t \u0011 \u2264V ar(Qm,n,p) E(Qm,n,p)2 . (4.53) In the rest of the discussion under Scenario 1, we will develop a lower bound for E(Qm,n,p) and an upper bound for V ar(Qm,n,p) in two steps. Step 1: the estimate of E(Qm,n,p). For a m \u00d7 m symmetric matrix M, we use \u2225M\u2225to denote its spectral norm. Set S0 = {1, 2, \u00b7 \u00b7 \u00b7 , m}. Review \u03c9n in (4.34). Since {1AS; S \u2282 {1, ..., p} with |S| = m} are identically distributed, we have E(Qm,n,p) = \u0012 p m \u0013 P(AS0) \u2265 \u0012 p m \u0013 P(AS0 \u2229Lm,n,p), (4.54) where Lm,n,p := \u001a\u2225W{1,...,m} \u2212nIm\u2225 \u221an \u2264sm,n,p \u001b and sm,n,p = max n 10 p m log p, 2 p m log p + \u03c9n o . It is easy to check that Assumption 1 in (3.1) implies sm,n,p \u221an \u21920 and \u221ams3 m,n,p \u221an log p \u21920. (4.55) Similar to Lemma 9, we need the following lemma, which quanti\ufb01es the speed that a Wishart matrix converges to a Wigner matrix. The di\ufb00erence is that the spectral norm \u2225\u00b7 \u2225is used here instead of \u2225\u00b7 \u2225\u221ein Lemma 9. Write W{1,...,m} for WS above (2.1) with S = {1, 2, \u00b7 \u00b7 \u00b7 , m}. Review that the Wigner matrix \u02dc W{1,...,m} = ( \u02dc wij)m\u00d7m, where \u02dc wij\u2019s are as in (2.3). Lemma 10. Let fm,n(w) be the density function of 1 \u221an(W{1,...,m} \u2212nIm) and \u02dc fm(w) be the density function of \u02dc W{1,...,m}. 
If m3 = o(n), then log fm,n(w) \u2212log \u02dc fm(w) = o(1) + O \u0000m2n\u22121/2\u2225w\u2225+ m2n\u22121\u2225w\u22252 + mn\u22121/2\u2225w\u22253\u0001 29 \ffor all m \u00d7 m symmetric matrix w with \u2225w\u2225\u22642 3 \u221an. Below, we combine the above lemma and some change of measure arguments to obtain a lower bound of P(AS0 \u2229Lm,n,p). De\ufb01ne a non-random set Bm,p = {wij : wij \u2265\u03c4m,p,t, 1 \u2264i \u2264 j \u2264m}. By the \ufb01rst limit from (4.55), sm,n,p \u22642 3 \u221an. Therefore, from Lemma 10 we have P (AS0 \u2229Lm,n,p) = Z w\u2208Bm,p,\u2225w\u2225\u2264sm,n,p elog fm,n(w)dw = Z w\u2208Bm,p,\u2225w\u2225\u2264sm,n,p \u02dc fm(w) \u00b7 exp{log fm,n(w) \u2212log \u02dc fm(w)}dw = exp \u001a o(1) + O \u0012m2sm,n,p \u221an + m2s2 m,n,p n + ms3 m,n,p \u221an \u0013\u001b \u00b7 P( \u02dc A{1,...,m} \u2229\u02dc Lm,n,p) \u22651 2 \u00b7 exp \u001a O \u0012m2sm,n,p \u221an + m2s2 m,n,p n + ms3 m,n,p \u221an \u0013\u001b \u00b7 h P( \u02dc A{1,...,m}) \u2212P( \u02dc Lc m,n,p) i , where \u02dc A{1,...,m} is as in (4.7) with S = {1, \u00b7 \u00b7 \u00b7 , m} and \u02dc Lm,n,p = {\u2225\u02dc W{1,...,m}\u2225\u2264sm,n,p}. Under Assumption 1 in (3.1), evidently m s2 m,n,p \u21920 and m \u221an sm,n,p \u21920. This implies that m2sm,n,p \u221an + m2s2 m,n,p n + ms3 m,n,p \u221an = ms3 m,n,p \u221an \u0012 m s2 m,n,p + m \u221an sm,n,p + 1 \u0013 = O \u0012ms3 m,n,p \u221an \u0013 . Thus, we have P (AS0 \u2229Lm,n,p) \u22651 2 \u00b7 eO(ms3 m,n,p/\u221an) n P( \u02dc A{1,...,m}) \u2212P( \u02dc Lc m,n,p) o . (4.56) Obviously, E(Qm,n,p) = \u0000 p m \u0001 P(A{1,...,m}). Recalling \u02dc A{1,...,m} and \u02dc Qm,p as in (4.7) and (4.9), respectively, we see that E( \u02dc Qm,p) = \u0000 p m \u0001 P( \u02dc A{1,...,m}). Thus, we further have from (4.54) and (4.56) that E(Qm,n,p) \u22651 2 \u00b7 eO(ms3 m,n,p/\u221an) \u001a E( \u02dc Qm,p) \u2212 \u0012 p m \u0013 P( \u02dc Lc m,n,p) \u001b . 
(4.57) 30 \fTo further obtain a lower bound of the above expression, we analyze each term on the right-hand side. Recall the de\ufb01nition of \u03b5m,p,t below (4.7), we know \u03b5m,p,t \u2208(0, 1). By (4.19), E( \u02dc Qm,p) \u2265 exp \b \u03b5m,p,tm log p + O(m2 log log p) \t \u2265 exp \u001a\u03b4 2 p m log p + O(m2 log log p) \u001b \u2265 exp \u001a\u03b4 4 p m log p \u001b (4.58) where the condition m = o((log p)1/3/ log log p) from Assumption 1 in (3.1) is essentially used in the last step. Now, \u0012 p m \u0013 P \u0010 \u02dc Lc m,n,p \u0011 \u2264pmP \u0010 \u2225\u02dc W{1,...,m}\u2225\u2265sm,n,p \u0011 \u2264pmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265sm,n,p \u0011 + pmP \u0010 \u03bbm( \u02dc W{1,...,m} \u2264\u2212sm,n,p) \u0011 = 2pmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265sm,n,p \u0011 , (4.59) where the fact that \u02dc W{1,..,m} and \u2212\u02dc W{1,...,m} have the same distribution is used in the last step. The following lemma help us estimate the last probability. Lemma 11. [Lemma 4.1 from Jiang and Li (2015)] Let \u02dc W{1,...,m} be de\ufb01ned by \u02dc WS above (2.4) with S = {1, ..., m}. Then there is a constant \u03ba > 0 such that P \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265x or \u03bbm( \u02dc W{1,...,m}) \u2264\u2212x \u0011 \u2264\u03ba \u00b7 e\u2212x2 4 +\u03ba\u221amx for all x > 0 and all m \u22652. By letting x = sm,n,p in Lemma 11, we have P \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265sm,n,p \u0011 \u2264exp \u001a \u2212s2 m,n,p 4 + \u03ba\u221amsm,n,p \u001b . Combining the above inequality with (4.59), we arrive at \u0012 p m \u0013 P( \u02dc Lc m,n,p) \u22642 \u00b7 exp \u001a m log p \u2212s2 m,n,p 4 + \u03ba\u221amsm,n,p \u001b . Since sm,n,p \u226510\u221am log p, we know m log p \u22121 4s2 m,n,p \u2264\u22126 25s2 m,n,p. Moreover, \u221amsm,n,p = 31 \fo(s2 m,n,p). Consequently, \u0012 p m \u0013 P \u0010 \u02dc Lc m,n,p \u0011 \u2264exp \u001a \u2212 \u0010 6 25 + o(1) \u0011 s2 m,n,p \u001b . 
(4.60) Comparing the above inequality with (4.58), we arrive at \u0012 p m \u0013 P \u0010 \u02dc Lc m,n,p \u0011 = o(1) = o(E( \u02dc Qm,p)). This result, combined with (4.57), gives E(Qm,n,p) \u22651 3 \u00b7 eO(ms3 m,n,p/\u221an)E( \u02dc Qm,p), (4.61) which joint with (4.19) concludes E(Qm,n,p) \u2265 1 3 exp \b [1 \u2212(1 \u2212\u03b5m,p,t)2]m log p + O(m2 log log p + mn\u22121/2s3 m,n,p) \t . (4.62) This completes our analysis for E(Qm,n,p). Step 2: the estimate of V ar(Qm,n,p). Replacing \u201c \u02dc AS\u201d in (4.7) with \u201cAS\u201d in (4.52), and using the same argument as obtaining (4.23), we have from Lemma 5 that V ar(Qm,n,p) \u2264 E(Qm,n,p) + m max l=1,...,m\u22121 p2m\u2212lP \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u0001 . (4.63) Now we bound the last term above. Review L2m,n,p below (4.54). Trivially, P \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u0001 \u2264P \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u2229L2m,n,p \u0001 + P(Lc 2m,n,p). (4.64) By (4.60), mp2mP(Lc 2m,n,p) = o(1). Let f2m,n(w) be the density function of 1 \u221an(W{1,...,2m} \u2212nI2m) and \u02dc f2m(w) be the density function of \u02dc W{1,...,2m}. Review (4.51). De\ufb01ne (non-random) set BS = \b (wij)i,j\u2208S; wij \u2265\u03c4m,p,t for all i, j \u2208S with i \u2264j \t . 32 \fThen, P \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u2229L2m,n,p \u0001 = Z B \u02dc f2m(w) \u00b7 exp{log f2m,n(w) \u2212log \u02dc f2m(w)}dw where B := B{1,\u00b7\u00b7\u00b7 ,m} \u2229B{1,...,l,m+1,...,2m\u2212l} \u2229{\u2225w\u2225\u2264s2m,n,p}. By Lemma 10 and by a changemeasure argument similar to the one getting (4.56), we see P \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u2229L2m,n,p \u0001 \u22642 \u00b7 eO(ms3 m,n,p/\u221an)P( \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l}). (4.65) The bene\ufb01t of the above step is transferring the probability on the Wishart matrix to that on the Wigner matrix up to a certain error. 
Combining (4.64)-(4.65), we have mp2m\u2212lP \u0000A{1,...,m} \u2229A{1,...,l,m+1,...,2m\u2212l} \u0001 \u22642 \u00b7 eO(ms3 m,n,p/\u221an) \u00b7 mp2m\u2212lP( \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l}) + o(1). Combining this with (4.63), we have V ar(Qm,n,p) \u2264E(Qm,n,p) + 2 \u00b7 eO(s3 m,n,p/\u221an) \u00b7 Gm + o(1). where Gm := max l=1,...,m\u22121 n mp2m\u2212lP( \u02dc A{1,...,m} \u2229\u02dc A{1,...,l,m+1,...,2m\u2212l}) o . Thus, V ar(Qm,n,p) E(Qm,n,p)2 \u2264eO(ms3 m,n,p/\u221an)\u0002 E(Qm,n,p) \u0003\u22122 \u00b7 Gm + \u0002 E(Qm,n,p) \u0003\u22121 + o \u0010\u0002 E(Qm,n,p) \u0003\u22122\u0011 . (4.66) According to (4.26) and (4.61), the \ufb01rst term on the right-hand side of the above inequality is no more than 9 \u00b7 exp \u001a \u2212 h 1 \u22121 m(1 \u2212\u03b5m,p,t)2i log p + O \u0010 m2 log log p + ms3 m,n,p \u221an \u0011\u001b . Notice that 1 \u2212\u03b5m,p,t \u22641 and m \u22652 and O(m2 log log p + mn\u22121/2s3 m,n,p) = o(log p). Thus, 33 \fthe above display further implies eO(s3 m,n,p/\u221an)\u0000E(Qm,n,p) \u0001\u22122 \u00b7 Gm \u2264exp \u001a \u2212 \u00101 2 + o(1) \u0011 log p \u001b . We next study the last two terms from (4.66). By the condition m = o((log p)1/3/ log log p) from Assumption 1 in (3.1) and the second limit in (4.55), O \u0012ms3 m,n,p \u221an + m2 log log p \u0013 = o \u0000p m log p \u0001 . (4.67) Recall (4.8). It is readily seen that [1 \u2212(1 \u2212\u03b5m,p,t)2]m log p \u2265\u03b5m,p,tm log p \u2265 t 2 \u221am log p. Consequently, it is known from (4.62) that E(Qm,n,p) \u22651 3 \u00b7 exp n t 2 p m log p o uniformly over \u03b4 \u2264t \u22642\u221am log p\u2212m\u03bep. Therefore, we conclude from (4.62) and (4.67) that \u0000E(Qm,n,p) \u0001\u22121 + o \u0010\u0000E(Qm,n,p) \u0001\u22122\u0011 =(1 + o(1)) \u0000E(Qm,n,p) \u0001\u22121 \u22643 \u00b7 exp \u001a \u2212 \u00101 2 + o(1) \u0011 t p m log p \u001b . 
(4.68) Combining (4.66)-(4.68), we see V ar(Qm,n,p) E(Qm,n,p)2 \u2264exp n \u2212 \u00101 2 + o(1) \u0011 log p o + 3 \u00b7 exp n \u2212 \u00101 2 + o(1) \u0011 t p m log p o . By (4.53) and the above inequality, P \u0010 Tm,n,p \u22642 p m log p \u2212t \u0011 \u2264exp n \u2212 \u00101 2 + o(1) \u0011 log p o + 3 \u00b7 exp n \u2212 \u00101 2 + o(1) \u0011 t p m log p o . Finally, from the inequality t2 \u22642et we have that lim sup n\u2192\u221e sup \u03b4\u2264t\u22642\u221am log p\u2212m\u03bep e\u03b1tt2P \u0010 Tm,n,p \u22642 p m log p \u2212t \u0011 = 0 (4.69) 34 \ffor any \u03b1 > 0 and \u03b4 > 0. Scenario 2: t > 2\u221am log p \u2212m\u03bep. Review (2.1). By the fact that \u03bb1(M) \u2265max1\u2264i\u2264m Mii for any non-negative de\ufb01nite matrix M = (Mij)m\u00d7m, we have Tm,n,p \u2265 max S\u2282{1,...,p},|S|=m \u03bb1(WS) \u2265max 1\u2264i\u2264p Wii, where Wii = Pn j=1 x2 ji and {Wii; 1 \u2264i \u2264m} are i.i.d. random variables. Thus, by independence, P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u2264 P \u0012W11 \u2212n \u221an \u22642 p m log p \u2212t \u0013p . (4.70) Note that W11 = Pn j=1 x2 j1 is a sum of i.i.d. random variables with V ar(x11) = 2 and E(x6 11) < \u221e. We discuss two situations: 2\u221am log p \u2212m\u03bep \u2264t \u22644\u221am log p and t \u22654\u221am log p. Assuming 2\u221am log p\u2212m\u03bep \u2264t \u22644\u221am log p for now. Recalling \u03a6(x) = (2\u03c0)\u22121/2 R x \u2212\u221ee\u2212t2/2 dt, we get from the Berry-Essen Theorem that P \u0012W11 \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u2264\u03a6 \u0012p 2m log p \u2212 t \u221a 2 \u0013 + \u03ba \u221an \u22642 \u00b7 max \u001a \u03a6 \u0012p 2m log p \u2212 t \u221a 2 \u0013 , \u03ba \u221an \u001b for some constant \u03ba > 0. Combine the above inequalities with (4.30) to see P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u22642 \u00b7 max n e\u2212mp0.9, e\u2212p log n 2 (1+o(1))o . 
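The lower tail of W11 (a chi-squared variable with n degrees of freedom) drives this scenario; the Laurent-Massart-type bound P(W11 - n <= -2 sqrt(nx)) <= e^{-x} invoked next can be sanity-checked by simulation. An illustrative sketch with hypothetical sample sizes, not from the paper:

```python
import random
from math import sqrt, exp

random.seed(0)
n, sims = 100, 5000
# W ~ chi^2_n realized as a sum of n squared standard normals
draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) for _ in range(sims)]
for x in (0.5, 1.0, 2.0, 4.0):
    thresh = n - 2 * sqrt(n * x)
    freq = sum(w <= thresh for w in draws) / sims
    assert freq <= exp(-x)  # empirical lower tail sits below the bound
```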
By (3.1), \u221am log p \u2264log p. It is easy to check lim n\u2192\u221e sup 2\u221am log p\u2212m\u03bep\u2264t\u22644\u221am log p e\u03b1tt2P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 = 0. (4.71) We proceed to the second situation: t \u22654\u221am log p. In this case, 2\u221am log p \u2212t \u2264 35 \f\u22122\u221am log p. By Lemma 1 from Laurent and Massart (2000), P(W11 \u2212n \u2264\u22122\u221anx) \u2264e\u2212x for any x > 0. Thus, P \u0012W11 \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u2264 exp ( \u2212 \u0012 t 2 \u2212 p m log p \u00132) \u2264 exp \u001a \u2212t2 16 \u001b . This inequality and (4.70) yield P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u2264exp \u001a \u2212pt2 16 \u001b . Consequently, sup t\u22654\u221am log p e\u03b1tt2P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 \u2264 sup t\u22654\u221am log p exp \u001a \u2212pt2 16 + \u03b1t + 2 log t \u001b \u2264exp {\u2212mp(log p)(1 + o(1))} . Hence, lim n\u2192\u221e sup t\u22654\u221am log p e\u03b1tt2P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 = 0. (4.72) By collecting (4.69), (4.71) and (4.72) together, we arrive at lim n\u2192\u221esup t\u2265\u03b4 e\u03b1tt2P \u0012Tm,n,p \u2212n \u221an \u22642 p m log p \u2212t \u0013 = 0. The proof is completed. 4.2.3 Proofs of Theorem 3 and Remark 3 The following lemma serves the proof of Theorem 3. Its own proof is placed in Appendix. 36 \fLemma 12. Let \u02dc W = \u02dc Wm\u00d7m be as de\ufb01ned in (3.7) with 0 \u2264\u03b7 \u22642. Then P(\u03bb1( \u02dc W) \u2265x) \u2264m1.5 log m \u03b4m \u00b7 exp n \u2212 (x \u22122r\u03b4)2 2m\u22121(\u03b7 \u22122) + 4 o + 2 \u00b7 e\u2212r2/8 for all r \u22654m, \u03b4 \u2208(0, 1) and x > 2r\u03b4 + 1. Proof of Theorem 3. For any 0 < \u03b5 < 1, we \ufb01rst show that P \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265(1 + \u03b5) p {4m + 2(\u03b7 \u22122)} log p \u0011 = o(p\u2212m) (4.73) by using Lemma 12. 
To do so, set x = (1 + \u03b5) p [4m + 2(\u03b7 \u22122)] log p, r = \u221a128m log p and \u03b4 = (8r)\u22121\u03b5 p [4m + 2(\u03b7 \u22122)] log p. Rewrite \u03b4 such that \u03b4 = 1 64 r 2m + \u03b7 \u22122 m ! \u03b5. It is easy to check that the coe\ufb03cient of \u03b5 is always sitting in [1/64, 2/64] for any m \u22652 and \u03b7 \u2208[0, 2]. This, the fact that supk\u22652(k1.5 log k) \u00b7 \u03b4k < \u221eand the de\ufb01nition of r lead to m1.5 log m \u03b4m = O(\u03b5\u22122m) and e\u2212r2/8 = o(p\u22126m). (4.74) We can see that x \u2212r\u03b4 = \u0010 1 + 7 8\u03b5 \u0011 \u00b7 p [4m + 2(\u03b7 \u22122)] log p. It follows that exp \u001a \u2212 (x \u22122r\u03b4)2 2m\u22121(\u03b7 \u22122) + 4 \u001b \u2264exp ( \u2212 \u00001 + \u03b5 2 \u00012[4m + 2(\u03b7 \u22122)] log p 2m\u22121(\u03b7 \u22122) + 4 ) =p\u2212[1+(\u03b5/2)]2m. This and (4.74) implies (4.73). Consequently, P \u0010 \u02dc Tm,p \u2265(1 + \u03b5) p [4m + 2(\u03b7 \u22122)] log p \u0011 \u2264pmP \u0010 \u03bb1( \u02dc W{1,...,m}) \u2265(1 + \u03b5) p [4m + 2(\u03b7 \u22122)] log p \u0011 \u21920. 37 \fTo complete the proof, it is enough to check that P \u0010 \u02dc Tm,p < (1 \u2212\u03b5) p [4m + 2(\u03b7 \u22122)] log p \u0011 \u21920 (4.75) for each \u03b5 \u2208(0, 1). For notational simplicity, let Km = 4m + 2(\u03b7 \u22122) and \u03c4m,p = log p/Km. Similar to the proof of Theorem 2, de\ufb01ne \u02dc AS = n \u02dc Wii \u22652(1 \u2212\u03b5)\u03b7\u221a\u03c4m,p, \u02dc Wij \u22654(1 \u2212\u03b5)\u221a\u03c4m,p for all i, j \u2208S and i \u2264j o for each S \u2282{1, ..., p} with |S| = m. We next compute P( \u02dc AS0) and P( \u02dc AS0\u2229\u02dc AS1), respectively, where S0 = {1, ..., m} and S1 = {1, ..., l, m + 1, ..., 2m \u2212l}. By independence, P \u0010 \u02dc AS0 \u0011 = m Y i=1 P \u0010 \u02dc Wii \u22652(1 \u2212\u03b5)\u03b7\u221a\u03c4m,p \u0011 \u00b7 Y 1\u2264i 0 and \u03b4 > 0. (ii) lim p\u2192\u221eE(e\u03b1Zp) = 1 for all \u03b1 > 0. 
(iii) lim p\u2192\u221eE(Z\u03b1 p ) = 0 for all \u03b1 > 0. (iv) lim p\u2192\u221eP(Zp \u2265\u03b4) = 0 for all \u03b4 > 0. (v) lim p\u2192\u221eVar(Zp) = 0 for all \u03b1 > 0. Then, (i)\u21d0 \u21d2(ii) = \u21d2(iii) = \u21d2(iv) and (v). Here, \u201cA \u21d0 \u21d2B\u201d means two statements A and B are equivalent, and A = \u21d2B means statement A implies statement B. Acknowledgment The research of Tony Cai was supported in part by NSF Grant DMS-1712735 and NIH grants R01-GM129781 and R01-GM123056. Tiefeng Jiang is partially supported by NSF Grant DMS-1406279. Xiaoou Li is partially supported by NSF Grant DMS-1712657." + }, + { + "url": "http://arxiv.org/abs/1804.03018v1", + "title": "High-dimensional Linear Discriminant Analysis: Optimality, Adaptive Algorithm, and Missing Data", + "abstract": "This paper aims to develop an optimality theory for linear discriminant\nanalysis in the high-dimensional setting. A data-driven and tuning free\nclassification rule, which is based on an adaptive constrained $\\ell_1$\nminimization approach, is proposed and analyzed. Minimax lower bounds are\nobtained and this classification rule is shown to be simultaneously rate\noptimal over a collection of parameter spaces. In addition, we consider\nclassification with incomplete data under the missing completely at random\n(MCR) model. An adaptive classifier with theoretical guarantees is introduced\nand optimal rate of convergence for high-dimensional linear discriminant\nanalysis under the MCR model is established. The technical analysis for the\ncase of missing data is much more challenging than that for the complete data.\nWe establish a large deviation result for the generalized sample covariance\nmatrix, which serves as a key technical tool and can be of independent\ninterest. An application to lung cancer and leukemia studies is also discussed.", + "authors": "T. 
Tony Cai, Linjun Zhang", + "published": "2018-04-09", + "updated": "2018-04-09", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "main_content": "Introduction Classi\ufb01cation is one of the most important tasks in statistics and machine learning with applications in a broad range of \ufb01elds. See, for example, Hastie et al. (2009). The problem has been well studied in the low-dimensional setting. In particular, consider the Gaussian case where one wishes to classify a new random vector Z drawn with equal probability from 1Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104. The research was supported in part by NSF Grant DMS-1712735 and NIH Grant R01 GM-123056. 1 arXiv:1804.03018v1 [stat.ME] 9 Apr 2018 \fone of two Gaussian distributions Np(\u00b51, \u03a3) (class 1) and Np(\u00b52, \u03a3) (class 2). In the ideal setting where all the parameters \u03b8 = (\u00b51, \u00b52, \u03a3) are known, Fisher\u2019s linear discriminant rule, which is given by C\u03b8(Z) = \uf8f1 \uf8f2 \uf8f3 1, \u03b4\u22a4\u2126(Z \u2212\u00b51+\u00b52 2 ) < 0 2, \u03b4\u22a4\u2126(Z \u2212\u00b51+\u00b52 2 ) \u22650, (1) where \u03b4 = \u00b52 \u2212\u00b51, and \u2126= \u03a3\u22121 is the precision matrix, is well known to be optimal (Anderson, 2003). Fisher\u2019s rule separates the two classes by a linear combination of features and its misclassi\ufb01cation error is Ropt(\u03b8) = \u03a6 \u0000\u22121 2\u2206 \u0001 , where \u03a6 is the cumulative distribution function of the standard normal distribution and \u2206= \u221a \u03b4\u22a4\u2126\u03b4 is the signal-to-noise ratio. Although Fisher\u2019s rule can serve as a useful performance benchmark, it is not practical for real data analysis as the parameters \u00b51, \u00b52 and \u03a3 are typically unknown and need to be estimated from the data. 
In applications, it is desirable to construct a data-driven classi\ufb01cation rule based on two observed random samples, X(1) 1 , ..., X(1) n1 i.i.d. \u223cNp(\u00b51, \u03a3) and X(2) 1 , ..., X(2) n2 i.i.d. \u223c Np(\u00b52, \u03a3). In the conventional low-dimensional setting, this is easily achieved by plugging in Fisher\u2019s linear discriminant rule (1) the corresponding sample means and pooled sample covariance matrix for the parameters \u00b51, \u00b52 and \u03a3 respectively. This classi\ufb01cation rule is asymptotically optimal when the dimension p is \ufb01xed. See, for example, Anderson (2003). Driven by many contemporary applications, much recent attention has been on the high-dimensional setting where the dimension is much larger than the sample size. In this case, the sample covariance matrix is not even invertible and it is di\ufb03cult to estimate the precision matrix \u2126. The standard linear discriminant rule thus fails completely. Several regularized classi\ufb01cation methods, including the regularized logistic regression (Shevade and Keerthi, 2003), Naive Bayes method (Bickel and Levina, 2004), hard thresholding (Shao et al., 2011), direct estimation methods in (Cai and Liu, 2011; Mai et al., 2012), have been proposed for classi\ufb01cation of high-dimensional data. In particular, Cai and Liu (2011) introduced a direct estimation method for the high-dimensional linear discriminant analysis based on the key observation that the ideal Fisher\u2019s discriminant rule given in (1) depends on the parameters \u00b51, \u00b52 and \u03a3 only through the discriminant direction \u03b2 = \u2126\u03b4. They proposed to estimate the discriminant direction \u03b2 directly instead of estimating \u03a3 and \u03b4 separately, under the assumption that \u03b2 is sparse. It was shown that their classi\ufb01cation rule is consistent. 
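In the known-parameter setting, Fisher's rule (1) and its misclassification error Φ(−Δ/2) are straightforward to code. A minimal numpy sketch; the Gaussian parameters below are illustrative, not from the paper:

```python
import numpy as np
from math import erf, sqrt

def fisher_rule(z, mu1, mu2, Sigma):
    # Oracle rule (1): classify via beta = Omega * delta with
    # delta = mu2 - mu1 and Omega = Sigma^{-1}.
    beta = np.linalg.solve(Sigma, mu2 - mu1)
    score = beta @ (z - (mu1 + mu2) / 2)
    return 1 if score < 0 else 2

def oracle_error(mu1, mu2, Sigma):
    # R_opt = Phi(-Delta/2), where Delta^2 = delta' Omega delta.
    delta = mu2 - mu1
    Delta = sqrt(delta @ np.linalg.solve(Sigma, delta))
    return 0.5 * (1 + erf(-Delta / 2 / sqrt(2)))  # standard normal CDF

mu1, mu2 = np.zeros(3), np.array([2.0, 0.0, 0.0])
Sigma = np.eye(3)
err = oracle_error(mu1, mu2, Sigma)  # Delta = 2, so Phi(-1) ~ 0.159
```

Here the signal-to-noise ratio is Δ = 2, so the oracle risk is Φ(−1) ≈ 0.159.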
Despite much recent progress in methodological development on high-dimensional classi\ufb01cation problems, there has been relatively little fundamental study on the optimality theory for the discriminant analysis. Minimax study of high-dimensional discriminant analysis has been considered in Azizyan et al. (2013) and Li et al. (2017) in the special case 2 \fwhere the covariance matrix \u03a3 = \u03c32I for some \u03c3 > 0. However, even in this relatively simple setting there is still a gap between the minimax upper and lower bounds. It is unclear what the optimal rate of convergence for the minimax misclassi\ufb01cation risk is and which classi\ufb01cation rule is rate optimal under the general Gaussian distribution. The \ufb01rst major goal of the present paper is to provide answers to these questions. Furthermore, although the problem of missing data arises frequently in the analysis of high-dimensional data, compared to the conventional low-dimensional setting, there is a paucity of methods for inference with incomplete high-dimensional data. The second goal of this paper is to develop an optimality theory for high-dimensional discriminant analysis with incomplete data and to construct in this setting a data-driven adaptive classi\ufb01er with theoretical guarantees. Given two random samples, X(1) 1 , ..., X(1) n1 i.i.d. \u223cNp(\u00b51, \u03a3) and X(2) 1 , ..., X(2) n2 i.i.d. \u223cNp(\u00b52, \u03a3), we wish to construct a classi\ufb01er \u02c6 C to classify a future data point Z drawn from these two distributions with equal prior probabilities, into one of the two classes. Given the observed data, the performance of the classi\ufb01cation rule is measured by the misclassi\ufb01cation error R\u03b8( \u02c6 C) = P\u03b8(label(Z) \u0338= \u02c6 C(Z)), (2) where \u03b8 = (\u00b51, \u00b52, \u03a3), P\u03b8 denotes the probability with respect to Z \u223c 1 2Np(\u00b51, \u03a3) + 1 2Np(\u00b52, \u03a3) and Z is independent of the observed X\u2019s. 
label(Z) denotes the true class of Z. For a given classi\ufb01er \u02c6 C, we use the excess misclassi\ufb01cation risk relative to the oracle rule (1), R\u03b8( \u02c6 C) \u2212Ropt(\u03b8), to measure the performance of the classi\ufb01er \u02c6 C. Let n = min{n1, n2}. We consider in this paper a collection of the parameter spaces G(s, Mn,p) de\ufb01ned by G(s, Mn,p) = {\u03b8 = (\u00b51, \u00b52, \u03a3) : \u00b51, \u00b52 \u2208Rp, \u03a3 \u2208Rp\u00d7p, \u03a3 \u227b0, \u2225\u03b2\u22250 \u2264s, M\u22121 \u2264\u03bbmin(\u03a3) \u2264\u03bbmax(\u03a3) \u2264M, Mn,p \u2264\u2206\u22643Mn,p}, (3) where M > 1 is a constant, Mn,p > 0 can potentially grow with n and p, and \u03bbmax(\u03a3) and \u03bbmin(\u03a3) are respectively the largest and smallest eigenvalue of \u03a3. The notation \u03a3 \u227b0 means that \u03a3 is symmetric and positive de\ufb01nite. Recall that \u2206= \u221a \u03b4\u22a4\u2126\u03b4 and \u03b2 = \u2126\u03b4. Combining the upper and lower bounds results given in Section 3 leads to the following minimax rates of convergence for the excess misclassi\ufb01cation risk. Theorem 1. Consider the parameter space G(s, Mn,p), s and p approach in\ufb01nity as n grows to in\ufb01nity, and Mn,p q s log p n = o(1) with n \u2192\u221e, 1. If Mn,p is a \ufb01xed constant not depending on n and p, then for any constant \u03b1 > 0, there exist two constants C(2) \u03b1 > C(1) \u03b1 > 0, such that inf \u02c6 C sup \u03b8\u2208G(s,Mn,p) P \u0012 C(1) \u03b1 \u00b7 s log p n < R\u03b8( \u02c6 C) \u2212Ropt(\u03b8) < C(2) \u03b1 \u00b7 s log p n \u0013 \u22651 \u2212\u03b1. 3 \f2. 
If Mn,p \u2192\u221eas n \u2192\u221e, then these exists a sequence \u03b4n with limn\u2192\u221e\u03b4n = 0, such that for any constant \u03b1 > 0, there exist two constants C(2) \u03b1 > C(1) \u03b1 > 0 satisfying, for su\ufb03ciently large n, inf \u02c6 C sup \u03b8\u2208G(s,Mn,p) P \u0012 C(1) \u03b1 s log p n e\u2212( 1 8 +\u03b4n)M 2 n,p < R\u03b8( \u02c6 C) \u2212Ropt(\u03b8) < C(2) \u03b1 s log p n e\u2212( 1 8 \u2212\u03b4n)M 2 n,p \u0013 \u22651\u2212\u03b1. It is worth noting that Mn,p controls the magnitude of \u2206, which is interpreted as the signal-to-noise ratio. As shown in the second case, when the signal-to-noise ratio grows, the classi\ufb01cation problem becomes easier and our result precisely characterizes that the convergence rate is exponentially faster with an additional factor exp \u0000\u2212(1/8 + o(1)) M2 n,p \u0001 . Furthermore, we propose a three-step data-driven classi\ufb01cation rule, called AdaLDA, by using an adaptive constrained \u21131 minimization approach which takes into account the variability of individual entries. This classi\ufb01cation rule is shown to be simultaneously rate optimal over the collection of parameter spaces G(s, Mn,p). To the best of our knowledge, this is the \ufb01rst optimality result for classi\ufb01cation of high-dimensional Gaussian data. Furthermore, in contrast to many classi\ufb01cation rules proposed in the literature, which require to choose tuning parameters, this procedure is data-driven and tuning free. In addition, we also consider classi\ufb01cation in the presence of missing data. As in the conventional low-dimensional setting, the problem of missing data also arises frequently in the analysis of high-dimensional data from in a range of \ufb01elds such as genomics, epidemiology, engineering, and social sciences (Graham, 2009; Libbrecht and Noble, 2015; White et al., 2011). 
Compared to the low-dimensional setting, there are relatively few inferential methods for missing data in the high-dimensional setting. Examples include high-dimensional linear regression (Loh and Wainwright, 2012), sparse principal component analysis (Lounici, 2013), covariance matrix estimation (Cai and Zhang, 2016), and vector autoregressive (VAR) processes (Rao et al., 2017). In this paper, following the missing mechanism considered in the aforementioned papers, we investigate high-dimensional discriminant analysis in the presence of missing observations under the missing completely at random (MCR) model. We construct a data-driven adaptive classi\ufb01er with theoretical guarantees based on incomplete data and also develop an optimality theory for high-dimensional linear discriminant analysis under the MCR model. The technical analysis for the case of missing data is much more challenging than that for the complete data, although the classi\ufb01cation procedure and the resulting convergence rates look similar. To facilitate the theoretical analysis, we establish a key technical tool, which is a large deviation result for the generalized sample covariance matrix. This is related to the masked covariance matrix estimator considered in Levina and Vershynin (2012) and Chen et al. (2012), see further discussions in Section 2.3. 4 \fThis technical tool can be of independent interest as it is potentially useful for other related problems in high-dimensional statistical inference with missing data. The proposed adaptive classi\ufb01cation algorithms can be cast as linear programs and are thus easy to implement. Simulation studies are carried out to investigate the numerical performance of the classi\ufb01cation rules. The results show that the proposed classi\ufb01ers enjoy superior \ufb01nite sample performance in comparison to existing methods for high-dimensional linear discriminant analysis. 
The proposed classi\ufb01ers are also illustrated through an application to the analysis of lung cancer and leukemia datasets. The results show that they outperform existing methods. The rest of the paper is organized as follows. In Section 2, after basic notation and de\ufb01nitions are reviewed, we introduce an adaptive algorithm for high-dimensional discriminant analysis with the complete data and then propose a more general procedure for the setting of incomplete data. Section 3 studies the theoretical properties of these classi\ufb01cation rules and related estimators. In addition, minimax lower bounds are given. The upper and lower bounds together establish the optimal rates of convergence for the minimax misclassi\ufb01cation risk. Numerical performance of the classi\ufb01cation rules are investigated in Section 4 and the proofs of the main results are given in Section 5. Technical lemmas are proved in the Supplementary Material (Cai and Zhang, 2018). 2 Methodology In this section, we \ufb01rstly introduce an adaptive algorithm for high-dimensional linear discriminant analysis with the complete data. This algorithm is called AdaLDA (Adaptive Linear Discriminant Analysis rule). We then propose a data-driven classi\ufb01er, called ADAM (Adaptive linear Discriminant Analysis with randomly Missing data), for the incomplete data under the MCR model. 2.1 Notation and de\ufb01nitions We begin with basic notation and de\ufb01nitions. Throughout the paper, vectors are denoted by boldface letters. For a vector x \u2208Rp, the usual vector \u21130, \u21131, \u21132 and \u2113\u221enorms are denoted respectively by \u2225x\u22250, \u2225x\u22251, \u2225x\u22252 and \u2225x\u2225\u221e. Here the \u21130 norm counts the number of nonzero entries in a vector. The support of a vector x is denoted by supp(x). The symbol \u25e6denotes the Hadamard product. For p \u2208N, [p] denotes the set {1, 2, ..., p}. For j \u2208[p], denote by ej the j-th canonical basis in Rp. 
For a matrix \u03a3 = (\u03c3ij)1\u2264i,j\u2264p \u2208Rp\u00d7p, the Frobenius norm is de\ufb01ned as \u2225\u03a3\u2225F = qP i,j \u03c32 ij and the spectral norm is de\ufb01ned to be \u2225\u03a3\u22252 = sup\u2225x\u22252=1 \u2225\u03a3x\u22252. The vector \u2113\u221enorm of the matrix \u03a3 is |\u03a3|\u221e= maxi,j |\u03c3ij|. For 5 \fa symmetric matrix \u03a3, we use \u03bbmax(\u03a3) and \u03bbmin(\u03a3) to denote respectively the largest and smallest eigenvalue of \u03a3. \u03a3 \u227b0 means that \u03a3 is positive de\ufb01nite. For a positive integer s < p, let \u0393(s) = {u \u2208Rp : \u2225uSC\u22251 \u2264\u2225uS\u22251, for some S \u2282[p] with |S| = s}, where uS denotes the subvector of u con\ufb01ned to S. For two sequences of positive numbers an and bn, an \u2272bn means that for some constant c > 0, an \u2264cbn for all n, and an \u224dbn if an \u2272bn and bn \u2272an. We say an event An holds with high probability if lim inf n\u2192\u221eP(An) = 1. Finally, c0, c1, c2, C, C1, C2, . . . denote generic positive constants that may vary from place to place. The complete data X(1) 1 , ..., X(1) n1 and X(2) 1 , ..., X(2) n2 are independent realizations of X(1) \u223cNp(\u00b51, \u03a3) and X(2) \u223cNp(\u00b52, \u03a3). We assume n1 \u224dn2 and de\ufb01ne n = min{n1, n2}. In our asymptotic framework, we let n be the driving asymptotic parameter, s and p approach in\ufb01nity as n grows to in\ufb01nity. The missing completely at random (MCR) model assumes that one observes samples {X(1) 1 , ..., X(1) n1 } and {X(2) 1 , ..., X(2) n2 } with missing values, where the observed coordinates of X(k) t are indicated by an independent vector S(k) t \u2208 {0, 1}p for t = 1, ..., nk, k = 1, 2, that is, X(k) tj is observed if S(k) tj = 1 and X(k) tj is missing if S(k) tj = 0; t \u2208[nk], j \u2208[p], k = 1, 2. (4) Here X(k) tj and S(k) tj are respectively the j-th coordinate of the vectors X(k) t and S(k) t . 
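Under the observation model (4), an MCR dataset can be simulated by drawing the mask S independently of the data. A small sketch for one class; the uniform missingness rate `eps` is an illustrative assumption (the MCR model itself allows more general masks):

```python
import numpy as np

def apply_mcr_mask(X, eps, rng):
    # Draw S_tj ~ Bernoulli(1 - eps) i.i.d., independently of X, and
    # mark the unobserved coordinates (S_tj = 0) as NaN, as in (4).
    S = rng.random(X.shape) >= eps
    X_star = np.where(S, X, np.nan)
    return X_star, S

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
X_star, S = apply_mcr_mask(X, eps=0.3, rng=rng)
```

Entries with S_tj = 1 coincide with the complete data; entries with S_tj = 0 are missing.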
Generally, we use the superscript \u201c\u2217\u201d to denote objects related to missing values. The incomplete samples with missing values are denoted by X(1)\u2217= {X(1)\u2217 1 , ..., X(1)\u2217 n1 } and X(2)\u2217= {X(2)\u2217 1 , ..., X(2)\u2217 n2 }. Regarding the mechanism for missingness, the MCR model is formally stated as below. This assumption is more general than the one considered previously by Loh and Wainwright (2012) and Lounici (2013). Assumption 1. (Missing Completely at Random (MCR)) S = \b S(k) t \u2208{0, 1}p: t = 1, ..., nk, k = 1, 2 \t is independent of the values of X(1) t and X(2) t for t = 1, ..., nk, k = 1, 2. Here S(k) t can be either deterministic or random, but independent of X(1) t and X(2) t . A major goal of the present paper is to construct a classi\ufb01cation rule \u02c6 C in the high dimensional setting where p \u226bn for both complete and incomplete data. 2.2 Data-driven adaptive classi\ufb01er for complete data We \ufb01rst consider the case of complete data. In this setting, as mentioned in the introduction, a number of high-dimensional linear discriminant rules have been proposed in the literature. In particular, Cai and Liu (2011) introduced a classi\ufb01cation rule called LPD (Linear Programming Discriminant) rule by directly estimating the discriminant direction 6 \f\u03b2 through solving the following optimization problem: \u02c6 \u03b2LPD = arg min \u03b2 n \u2225\u03b2\u22251 : subject to \u2225\u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51)\u2225\u221e\u2264\u03bbn o , (5) where \u02c6 \u00b51, \u02c6 \u00b52, \u02c6 \u03a3 are sample means and pooled sample covariance matrix respectively, and \u03bbn = C p log p/n is the tuning parameter with some constant C. 
Based on \u02c6 \u03b2LPD, the LPD rule is then given by \u02c6 CLPD(Z) = \uf8f1 \uf8f2 \uf8f3 1, \u02c6 \u03b2\u22a4 LPD(Z \u2212\u02c6 \u00b51+\u02c6 \u00b52 2 ) < 0 2, \u02c6 \u03b2\u22a4 LPD(Z \u2212\u02c6 \u00b51+\u02c6 \u00b52 2 ) \u22650 . (6) The LPD rule is easy to implement and possesses a number of desirable properties as shown in Cai and Liu (2011). It has, however, three drawbacks. A major shortcoming of the LPD rule is that it uses a common constraint \u03bbn for all coordinates of a = \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51). This essentially treats the random vector a as homoscedastic, while in fact a is intrinsically heteroscedastic and the coordinates could have a wide range of variability. The resulting estimator \u02c6 \u03b2LPD obtained in (5) of the discriminant direction \u03b2 has yet to be shown as rate optimal; secondly, the procedure is not adaptive in the sense that the tuning parameter \u03bbn is not fully speci\ufb01ed and needs to be chosen through an empirical method such as crossvalidation. The third drawback is that the LPD rule \u02c6 CLPD does not come with theoretical optimality guarantees. To overcome these drawbacks, we now introduce an adaptive algorithm for the highdimensional linear discriminant analysis with complete data, called AdaLDA (Adaptive Linear Discriminant Analysis rule), which takes into account the heteroscedasticity of the random vector a = \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51). The AdaLDA is fully data-driven and adaptive to the variability of individual entries. Before we describe the classi\ufb01er in detail, it is helpful to state the following key technical result which provides the motivation for the new procedure. Lemma 1. Suppose {X(1) t }n1 t=1 and {X(2) t }n2 t=1 are i.i.d. random samples from Np(\u00b51, \u03a3) and Np(\u00b52, \u03a3) respectively with \u03a3 = (\u03c3ij)1\u2264i,j\u2264p. 
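Constrained ℓ1 problems of the form (5) can be cast as linear programs by splitting β = β⁺ − β⁻ with β⁺, β⁻ ≥ 0. A minimal scipy sketch with toy inputs (the data here are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def lpd_direction(Sigma_hat, d_hat, lam):
    # Solve  min ||beta||_1  s.t.  ||Sigma_hat beta - d_hat||_inf <= lam
    # as an LP in (beta_plus, beta_minus), each componentwise nonnegative.
    p = len(d_hat)
    c = np.ones(2 * p)                       # objective: sum of |beta_j|
    A = np.block([[Sigma_hat, -Sigma_hat],   #  Sigma beta - d <= lam
                  [-Sigma_hat, Sigma_hat]])  # -(Sigma beta - d) <= lam
    b = np.concatenate([lam + d_hat, lam - d_hat])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]

Sigma_hat = np.eye(3)
d_hat = np.array([1.0, 0.0, 0.0])
beta = lpd_direction(Sigma_hat, d_hat, lam=0.1)
```

With an identity covariance the LP reduces to componentwise soft-thresholding of d̂ at level λ, which is a convenient correctness check.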
Let \u03b4 = \u00b52 \u2212\u00b51, \u03b2 = \u2126\u03b4, \u2206= p \u03b2\u22a4\u03b4 and a = \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51), where \u02c6 \u00b51, \u02c6 \u00b52, \u02c6 \u03a3 are sample means and pooled sample covariance matrix respectively. Then with probability at least 1 \u22124p\u22121, |aj| \u22644\u221a\u03c3jj \u00b7 r 25\u22062 2 + 1 ! \u00b7 r log p n , j = 1, ..., p. (7) A major step in the construction of the adaptive data-driven procedure is to make the constraint in (5) adaptive to the variability of individual entries based on Lemma 1, instead of using a uniform upper bound \u03bbn for all the entries. In order to apply Lemma 1, we need 7 \fto estimate the diagonal elements of \u03a3, \u03c3jj (j = 1, ..., p) and \u22062. Note that \u03c3jj can be easily estimated by the sample variances \u02c6 \u03c3jj , but \u22062 is harder to estimate. The data-driven adaptive classi\ufb01cation procedure AdaLDA is constructed in three steps. Step 1 (Estimating \u22062). Fix \u03bb0 = 25/2, we estimate \u03b2 by a preliminary estimator \u02dc \u03b2 = arg min \u03b2 \u2225\u03b2\u22251 subject to |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 r log p n \u00b7 p \u02c6 \u03c3jj \u00b7 (\u03bb0\u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51) + 1), j \u2208[p]. (8) Then we estimate \u22062 by \u02c6 \u22062 = | \u02dc \u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51)|. Step 2 (Adaptive estimation of \u03b2). Given \u02c6 \u22062, the \ufb01nal estimator \u02c6 \u03b2AdaLDA of \u03b2 is constructed through the following linear optimization \u02c6 \u03b2AdaLDA = arg min \u03b2 \u2225\u03b2\u22251 subject to |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 r log p n \u00b7 q \u03bb0\u02c6 \u03c3jj \u02c6 \u22062 + \u02c6 \u03c3jj, j \u2208[p]. (9) Step 3 (Construction of AdaLDA). 
The AdaLDA classi\ufb01cation rule is obtained by plugging \u02c6 \u03b2AdaLDA into Fisher\u2019s rule (1), \u02c6 CAdaLDA(Z) = \uf8f1 \uf8f2 \uf8f3 1, \u02c6 \u03b2\u22a4 AdaLDA \u0000Z \u2212\u02c6 \u00b51+\u02c6 \u00b52 2 \u0001 \u22650, 2, \u02c6 \u03b2\u22a4 AdaLDA \u0000Z \u2212\u02c6 \u00b51+\u02c6 \u00b52 2 \u0001 < 0. (10) This classi\ufb01cation rule does not require a tuning parameter and the estimator \u02c6 \u03b2AdaLDA adapts to the variability of individual entries by using an entry-dependent threshold for each individual coordinate of \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51). Note that the optimization problems (8) and (9) can be cast as linear programs, so the proposed AdaLDA rule is computationally easy to implement. It will be shown in Section 3 that the AdaLDA classi\ufb01cation rule is adaptively minimax rate optimal. Our theoretical analysis also shows that the resulting estimator \u02c6 \u03b2AdaLDA is rate optimally adaptive whenever \u03bb0 is a su\ufb03ciently large constant. In particular, it can be taken as \ufb01xed at \u03bb0 = 25/2, which is derived from the concentration inequality given in Lemma 1. Remark 1. The LPD rule uses a universal tuning parameter \u03bbn = C p log p/n which does not take into account the heteroscedasticity of the random vector a = \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51), and 8 \fthe optimality of estimation is unknown. The cross-validation method can be used to choose the tuning parameter in LPD. However, the estimator obtained through cross-validation can be variable and its theoretical properties are unclear. In contrast, the AdaLDA procedure does not depend on any unknown parameter and the estimator will be shown to be minimax rate optimal. 2.3 ADAM with randomly missing data We now turn to the case of incomplete data under the MCR model. To generalize AdaLDA to the incomplete data case, we proceed by \ufb01rstly estimating \u00b51, \u00b52 and \u03a3. 
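The three AdaLDA steps can be sketched end-to-end with the same LP machinery. Caveat: the sketch below replaces the Step 1 program (8), whose constraint is jointly linear in β, with a short fixed-point pass that alternates between estimating Δ² and re-solving (9) — an illustrative shortcut under identity-covariance toy data, not the paper's exact procedure:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_entrywise(Sigma_hat, d_hat, tau):
    # min ||beta||_1  s.t.  |e_j'(Sigma_hat beta - d_hat)| <= tau_j for all j
    p = len(d_hat)
    A = np.block([[Sigma_hat, -Sigma_hat], [-Sigma_hat, Sigma_hat]])
    b = np.concatenate([tau + d_hat, tau - d_hat])
    res = linprog(np.ones(2 * p), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]

def adalda_direction(Sigma_hat, d_hat, n, lam0=25 / 2):
    p = len(d_hat)
    sjj = np.diag(Sigma_hat)
    rate = 4 * np.sqrt(np.log(p) / n)
    # crude initial beta, then refine the entrywise thresholds of (9)
    beta = l1_min_entrywise(Sigma_hat, d_hat, rate * np.sqrt(sjj))
    for _ in range(2):
        Delta2 = abs(beta @ d_hat)                 # Step 1: estimate Delta^2
        tau = rate * np.sqrt(lam0 * sjj * Delta2 + sjj)
        beta = l1_min_entrywise(Sigma_hat, d_hat, tau)   # Step 2: solve (9)
    return beta

beta = adalda_direction(np.eye(3), np.array([2.0, 0.0, 0.0]), n=10000)
```

The resulting β̂ would then be plugged into the rule (10) exactly as in Step 3; note that, unlike this sketch, the paper's Step 1 needs no iteration.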
The following estimators follow the idea in Cai and Zhang (2016), and for completeness, we present their proposed estimators below. Let n(k)\u2217 ij = nk X t=1 S(k) ti S(k) tj , 1 \u2264i, j \u2264p, k = 1, 2. Here n(k)\u2217 ij is the number of vectors X(k) t in which the ith and jth entries are both observed. In addition, we denote n(k)\u2217 i = n(k)\u2217 ii for simplicity and n\u2217 min = min i,j,k n(k)\u2217 ij . (11) In the presence of missing values, the usual sample mean and sample covariance matrix can no longer be calculated. Instead, the \u201cgeneralized sample mean\u201d is proposed, de\ufb01ned by \u02c6 \u00b51 = (\u02c6 \u00b5\u2217 1i)1\u2264i\u2264p with \u02c6 \u00b5\u2217 1i = 1 n(1)\u2217 i n1 X t=1 X(1) ti S(1) ti , 1 \u2264i \u2264p; \u02c6 \u00b52 = (\u02c6 \u00b5\u2217 2i)1\u2264i\u2264p with \u02c6 \u00b5\u2217 2i = 1 n(2)\u2217 i n2 X t=1 X(2) ti S(2) ti , 1 \u2264i \u2264p. The \u201cgeneralized sample covariance matrix\u201d is then de\ufb01ned by \u02c6 \u03a3 = (\u02c6 \u03c3\u2217 ij)1\u2264i,j\u2264p with \u02c6 \u03c3\u2217 ij = 1 n(1)\u2217 ij + n(2)\u2217 ij n1 X t=1 (X(1) ti \u2212\u02c6 \u00b5\u2217 1i)(X(1) tj \u2212\u02c6 \u00b5\u2217 1j)S(1) ti S(1) tj + n2 X t=1 (X(2) ti \u2212\u02c6 \u00b5\u2217 2i)(X(2) tj \u2212\u02c6 \u00b5\u2217 2j)S(2) ti S(2) tj ! . For these generalized estimators, we have the following bound under the MCR model. Lemma 2. Let \u03b4 = \u00b52 \u2212\u00b51, \u03b2 = \u2126\u03b4, \u2206= \u221a \u03b4\u22a4\u2126\u03b4 and a\u2217= \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51). Then conditioning on S, we have with high probability, |a\u2217 j| \u22644\u221a\u03c3jj \u00b7 \u0010p 64\u22062 + 1 \u0011 \u00b7 s log p n\u2217 min , j = 1, ..., p. (12) 9 \fRemark 2. Although the above result has a form that is similar to Lemma 1, its derivation is quite di\ufb00erent and relies on a new technical tool, the large deviation bound for \u02c6 \u03a3. 
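The generalized moments can be computed directly from a NaN-masked sample. A single-class sketch (the paper pools two classes in σ̂*_ij; pooling is omitted here for brevity, and every coordinate pair is assumed to be co-observed at least once so that n*_ij > 0):

```python
import numpy as np

def generalized_moments(X_star):
    # X_star: (n, p) array with NaN marking missing entries (MCR model).
    S = ~np.isnan(X_star)
    n_j = S.sum(axis=0)                       # n_i^*: per-coordinate counts
    mu_hat = np.nansum(X_star, axis=0) / n_j  # generalized sample mean
    # Generalized sample covariance: pairwise-complete cross-products,
    # with entry (i, j) normalized by n_ij^* = #{t : both i, j observed}.
    R = np.where(S, X_star - mu_hat, 0.0)
    n_ij = S.astype(float).T @ S.astype(float)
    Sigma_hat = (R.T @ R) / n_ij
    return mu_hat, Sigma_hat
```

With complete data (no NaNs) this reduces to the ordinary sample mean and the sample covariance normalized by n, which is a useful sanity check.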
This is of independent interest and is related to that of the masked sample covariance estimator considered in Levina and Vershynin (2012) and Chen et al. (2012). In particular, the masked sample covariance estimator considered in Chen et al. (2012) applies the mask matrix to the sample covariance maxtrix, while our proposed estimator \u02c6 \u03a3 can be interpreted as applying the mask matrix to each i.i.d. sample, and thus is more general. The proof of Lemma 2 uses the idea of Lemma 2.1 in Cai and Zhang (2016), but yields a sharper bound. The detailed proof is given in Section A.3.2 in the supplement (Cai and Zhang, 2018). We propose to estimate \u03b2 adaptively and construct ADAM (Adaptive linear Discriminant Analysis with randomly Missing data) in the following way: Step 1 (Estimating \u22062). Fix \u03bb1 = 64. We estimate \u03b2 by a preliminary estimator \u02dc \u03b2 = arg min \u03b2 \u2225\u03b2\u22251 subject to |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 q \u02c6 \u03c3\u2217 jj \u00b7 s log p n\u2217 min \u00b7 (\u03bb1\u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51) + 1), j \u2208[p]. (13) Then we estimate \u22062 by \u02c6 \u2206\u22172 = | \u02dc \u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51)|. Step 2 (Adaptive estimation of \u03b2). Given \u02c6 \u2206\u22172, the \ufb01nal estimator \u02c6 \u03b2ADAM of \u03b2 is constructed by the following linear optimization problem \u02c6 \u03b2ADAM = arg min \u03b2 \u2225\u03b2\u22251 subject to |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 s log p n\u2217 min \u00b7 q \u03bb1\u02c6 \u03c3\u2217 jj \u02c6 \u2206\u22172 + \u02c6 \u03c3\u2217 jj, j \u2208[p]. (14) Step 3 (Construction of ADAM). 
Given the estimator \u02c6 \u03b2ADAM of the discriminant direction \u03b2, we then construct the following ADAM classi\ufb01cation rule by plugging \u02c6 \u03b2ADAM into the oracle rule (1): \u02c6 CADAM(Z) = \uf8f1 \uf8f2 \uf8f3 1, \b Z \u2212(\u02c6 \u00b51 + \u02c6 \u00b52)/2 \t\u22a4\u02c6 \u03b2ADAM \u22650, 2, \b Z \u2212(\u02c6 \u00b51 + \u02c6 \u00b52)/2 \t\u22a4\u02c6 \u03b2ADAM < 0. (15) As shown in Section 3, \u02c6 CADAM has the similar theoretical performance as \u02c6 CAdaLDA. 10 \f3 Theoretical properties of AdaLDA and ADAM In this section, we develop an optimality theory for high-dimensional linear discriminant analysis for both the complete data and the incomplete data settings. We \ufb01rst investigate the theoretical properties of the AdaLDA and ADAM algorithms proposed in Section 2 and obtain the upper bounds for the excess misclassi\ufb01cation risk. We then establish the lower bounds for the rate of convergence. The upper and lower bounds together yield the minimax rates of convergence and show that AdaLDA and ADAM are adaptively rate optimal. 3.1 Theoretical Analysis of AdaLDA We begin by considering the properties of the estimator \u02c6 \u03b2AdaLDA of the discriminant direction \u03b2. The following theorem shows that \u02c6 \u03b2AdaLDA attains the convergence rate of Mn,p p s log p/n over the class of sparse discriminant vectors G(s, Mn,p) de\ufb01ned in (3). The matching lower bound given in Section 3.3 implies that this rate is optimal. Therefore, AdaLDA adapts to both the sparsity pattern of \u03b2 as well as the signal-to-noise ratio \u2206. Theorem 2. Consider the parameter space G(s, Mn,p) with Mn,p > cL for some cL > 0. Suppose X(1) 1 , ..., X(1) n1 i.i.d. \u223cNp(\u00b51, \u03a3), X(2) 1 , ..., X(2) n2 i.i.d. \u223cNp(\u00b52, \u03a3) and n1 \u224dn2. Assume that Mn,p q s log p n = o(1). Then sup \u03b8\u2208G(s,Mn,p) E[\u2225\u02c6 \u03b2AdaLDA \u2212\u03b2\u22252] \u2272Mn,p r s log p n . 
We then proceed to characterize the accuracy of the classi\ufb01cation rule \u02c6 CAdaLDA, measured by the excess misclassi\ufb01cation risk R\u03b8( \u02c6 C) \u2212Ropt(\u03b8). Note that the conditional misclassi\ufb01cation rate of \u02c6 CAdaLDA given the two samples can be analytically calculated as R\u03b8( \u02c6 CAdaLDA) = 1 2\u03a6( (\u02c6 \u00b5 \u2212\u00b51)\u22a4\u02c6 \u03b2AdaLDA q \u02c6 \u03b2\u22a4 AdaLDA\u03a3 \u02c6 \u03b2AdaLDA ) + 1 2 \u00af \u03a6( (\u02c6 \u00b5 \u2212\u00b52)\u22a4\u02c6 \u03b2AdaLDA q \u02c6 \u03b2\u22a4 AdaLDA\u03a3 \u02c6 \u03b2AdaLDA ), where \u02c6 \u00b5 = (\u02c6 \u00b51 + \u02c6 \u00b52)/2. We are interested in the excess misclassi\ufb01cation risk R\u03b8( \u02c6 CAdaLDA) \u2212Ropt(\u03b8). That is, we compare \u02c6 CAdaLDA with the oracle Fisher\u2019s rule, whose risk is given by Ropt(\u03b8) def = R\u03b8(C\u03b8) = \u03a6 \u0012 \u22121 2\u2206 \u0013 . The following theorem provides an upper bound for the excess misclassi\ufb01cation risk of the AdaLDA rule. 11 \fTheorem 3. Consider the parameter space G(s, Mn,p) with Mn,p > cL for some cL > 0 and assume the conditions in Theorem 2 hold. 1. If Mn,p \u2264Cb for some Cb > 0, then there exists some constant C > 0, inf \u03b8\u2208G(s,Mn,p) P \u0012 R\u03b8( \u02c6 CAdaLDA) \u2212Ropt(\u03b8) \u2264C \u00b7 s log p n \u0013 \u22651 \u22128p\u22121. 2. If Mn,p \u2192\u221eas n \u2192\u221e, then there exist some constant C > 0 and \u03b4n = o(1), such that inf \u03b8\u2208G(s,Mn,p) P \u0012 R\u03b8( \u02c6 CAdaLDA) \u2212Ropt(\u03b8) \u2264C \u00b7 e\u2212( 1 8 +\u03b4n)M2 n,p \u00b7 s log p n \u0013 \u22651 \u22128p\u22121. Remark 3. The result in Theorem 3 improves the convergence rate of the misclassi\ufb01cation risk of the LPD rule given in Cai and Liu (2011). 
Consider the first case, where $M_{n,p}$ is a constant not depending on $n$ and $p$: Theorem 3 of Cai and Liu (2011) shows that the convergence rate is $R_\theta(\hat C_{\mathrm{LPD}}) - R_{\mathrm{opt}}(\theta) = O_P\big((s\log p/n)^{1/2}\big)$, while Theorem 3 here gives the faster rate $O_P(s\log p/n)$ when $M_{n,p}$ is a constant. The lower bounds given in Section 3.3 show that both convergence rates in Theorem 3 are indeed optimal.

3.2 Theoretical Analysis of ADAM

We now investigate the theoretical properties of the ADAM procedure in the presence of missing data. Similar rates of convergence for estimation and for the excess misclassification risk can be obtained, but the technical analysis is much more involved under the MCR model. Under the MCR model, suppose that the missingness pattern $S \in \{0,1\}^{n_1 \times p} \times \{0,1\}^{n_2 \times p}$ is a realization of a distribution $F$. We consider the distribution space $\Psi(n_0; n, p)$ given by

$$\Psi(n_0; n, p) = \{F : P_{S \sim F}(c_1 n_0 \le n^*_{\min}(S) \le c_2 n_0) \ge 1 - p^{-1}\},$$

for some constants $c_1, c_2 > 0$, where $n^*_{\min}(S)$ is defined for $S$ as in (11).

Remark 4. This distribution space includes the missing uniformly and completely at random (MUCR) model considered in Loh and Wainwright (2012), Lounici (2013), and Lounici (2014). More specifically, the MUCR model assumes each entry $X^{(k)}_{i,j}$ ($k \in [2]$, $i \in [n_k]$, $j \in [p]$) is missing independently with probability $\epsilon$. As shown in Section A.6 in the supplement, when $\frac{1}{(1-\epsilon)^2}\sqrt{\log p/n} = o(1)$ as $n \to \infty$, the MUCR model lies in the distribution space $\Psi(n(1-\epsilon)^2; n, p)$. In addition, this distribution space allows a more general variant of the MUCR model in which each entry $X^{(k)}_{i,j}$ is missing independently with a different probability $\epsilon^{(k)}_{ij}$.
If we assume $\tilde c_1 \cdot \epsilon \le \min_{i,j,k} \epsilon^{(k)}_{ij} \le \max_{i,j,k} \epsilon^{(k)}_{ij} \le \tilde c_2 \cdot \epsilon$ for some constants $\tilde c_1, \tilde c_2 > 0$, then by a similar technique this missingness pattern is also included in $\Psi(n(1-\epsilon)^2; n, p)$ when $\frac{1}{(1-\epsilon)^2}\sqrt{\log p/n} = o(1)$ as $n \to \infty$. The following two theorems provide, respectively, the convergence rates for the discriminant vector estimator $\hat\beta_{\mathrm{ADAM}}$ and the excess misclassification rate of $\hat C_{\mathrm{ADAM}}$ over the parameter space $G(s, M_{n,p})$ for $\theta$ and the distribution space $\Psi(n_0; n, p)$.

Theorem 4. Consider the parameter space $G(s, M_{n,p})$ with $M_{n,p} > c_L$ for some $c_L > 0$ and the distribution space $\Psi(n_0; n, p)$ with $M_{n,p}\sqrt{s\log p/n_0} = o(1)$. Suppose $X^{(1)}_1, \dots, X^{(1)}_{n_1}$ and $X^{(2)}_1, \dots, X^{(2)}_{n_2}$ are i.i.d. samples from $N_p(\mu_1, \Sigma)$ and $N_p(\mu_2, \Sigma)$, respectively. Assume that $X^{*(1)}_1, \dots, X^{*(1)}_{n_1}$ and $X^{*(2)}_1, \dots, X^{*(2)}_{n_2}$ defined in (4) are observed and that Assumption 1 with $S = \{S^{(k)}_t\}_{t \in [n_k], k \in [2]}$ holds. Then the risk of estimating the discriminant direction $\beta$ by ADAM satisfies

$$\sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} \mathbb{E}\big[\|\hat\beta_{\mathrm{ADAM}} - \beta\|_2\big] \lesssim M_{n,p}\sqrt{\frac{s\log p}{n_0}}.$$

Theorem 5. Suppose the conditions of Theorem 4 hold.

1. If $M_{n,p} \le C_b$ for some $C_b > 0$, then there exists some constant $C > 0$ such that

$$\inf_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\!\left( R_\theta(\hat C_{\mathrm{ADAM}}) - R_{\mathrm{opt}}(\theta) \le C \cdot \frac{s\log p}{n_0} \right) \ge 1 - 12p^{-1}.$$

2. If $M_{n,p} \to \infty$ as $n \to \infty$, then there exist some constant $C > 0$ and $\delta_n = o(1)$ such that

$$\inf_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\!\left( R_\theta(\hat C_{\mathrm{ADAM}}) - R_{\mathrm{opt}}(\theta) \le C \cdot e^{-(\frac{1}{8} + \delta_n) M_{n,p}^2} \cdot \frac{s\log p}{n_0} \right) \ge 1 - 12p^{-1}.$$
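To make the distribution space $\Psi(n_0; n, p)$ concrete, the MUCR mechanism of Remark 4 is easy to simulate: draw an observation mask with i.i.d. Bernoulli$(1-\epsilon)$ entries and check how many samples jointly observe each pair of coordinates. A small Python sketch; the helper names are ours, and `pairwise_min` is one plausible reading of the quantity $n^*_{\min}(S)$ defined in (11), which is not reproduced here:

```python
import numpy as np

def mucr_mask(n, p, eps, rng):
    """MUCR missingness: each entry is missing independently with
    probability eps; S[i, j] = 1 means entry (i, j) is observed."""
    return (rng.random((n, p)) >= eps).astype(int)

def pairwise_min(S):
    """Smallest number of samples jointly observing a pair of coordinates.
    (Our guess at the flavor of n*_min(S); the paper's (11) is authoritative.)"""
    counts = S.T @ S  # counts[j, l] = #{i : columns j and l both observed in row i}
    return int(counts.min())

rng = np.random.default_rng(0)
S = mucr_mask(n=200, p=10, eps=0.1, rng=rng)
# With eps = 0.1 the observed fraction concentrates near 1 - eps = 0.9,
# and pairwise_min(S) concentrates near n * (1 - eps)^2 = 162, matching
# the effective sample size n(1 - eps)^2 appearing in the rates.
print(S.mean(), pairwise_min(S))
```

This is why the MUCR model sits inside $\Psi(n(1-\epsilon)^2; n, p)$: each pair of coordinates is jointly observed in roughly $n(1-\epsilon)^2$ samples.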
In the complete-data case we have $n_0 = n$, so the rates of convergence in Theorems 4 and 5 match those in Theorems 2 and 3. In addition, in the special case of the MUCR model, Theorems 4 and 5 imply the following result.

Corollary 1. Under the conditions of Theorem 3, consider the MUCR model with missing probability $\epsilon$. If $\big(M_{n,p}^2 \frac{s\log p}{n} \vee \sqrt{\frac{\log p}{n}}\big) \cdot \frac{1}{(1-\epsilon)^2} = o(1)$, then the risk of estimating the discriminant direction $\beta$ by ADAM over the class $G(s, M_{n,p})$ satisfies

$$\sup_{\theta \in G(s, M_{n,p})} \mathbb{E}\big[\|\hat\beta_{\mathrm{ADAM}} - \beta\|_2\big] \lesssim M_{n,p}\sqrt{\frac{s\log p}{n(1-\epsilon)^2}}.$$

Moreover, there exist a constant $C > 0$ and $\delta_n = o(1)$ such that the excess misclassification risk over the class $G(s, M_{n,p})$ satisfies

$$\inf_{\theta \in G(s, M_{n,p})} P\!\left( R_\theta(\hat C_{\mathrm{ADAM}}) - R_{\mathrm{opt}}(\theta) \le C \cdot e^{-(\frac{1}{8} + \delta_n) M_{n,p}^2} \cdot \frac{s\log p}{n(1-\epsilon)^2} \right) \ge 1 - 13p^{-1}.$$

This result shows that, although only a proportion $\epsilon$ of the entries are missing, the effective sample size in the convergence rates for both the estimation risk and the misclassification rate is reduced from $n$ to $n(1-\epsilon)^2$ under the MUCR model.

3.3 Minimax lower bounds

To understand the difficulty of the classification problem and the related estimation problem, as well as to establish the optimality of the AdaLDA and ADAM classifiers, it is essential to obtain minimax lower bounds for the estimation risk and the excess misclassification risk. In this section we only state the results for the missing-data setting, as the complete-data setting can be treated as a special case. The following lower bound results show that the rates of convergence attained by the AdaLDA and ADAM algorithms are indeed optimal, both for estimation of the discriminant direction $\beta$ and for classification.

Theorem 6.
Consider the parameter space $G(s, M_{n,p})$ with $M_{n,p} > c_L$ for some $c_L > 0$ and the distribution space $\Psi(n_0; n, p)$ with $M_{n,p}\sqrt{s\log p/n_0} = o(1)$. For any $n_0 > 1$, suppose $1 \le s \le o(\frac{n_0}{\log p})$ and $\frac{\log p}{\log(p/s)} = O(1)$. Then under the MCR model, the minimax risk of estimating the discriminant direction $\beta$ over the class $G(s, M_{n,p})$ and $\Psi(n_0; n, p)$ satisfies

$$\inf_{\hat\beta} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} \mathbb{E}\big[\|\hat\beta - \beta\|_2\big] \gtrsim M_{n,p}\sqrt{\frac{s\log p}{n_0}}.$$

Theorem 7. Consider the parameter space $G(s, M_{n,p})$ with $M_{n,p} > c_L$ for some $c_L > 0$ and the distribution space $\Psi(n_0; n, p)$ with $M_{n,p}\sqrt{s\log p/n_0} = o(1)$. For any $n_0 \ge 1$, suppose $1 \le s \le o(\frac{n_0}{\log p})$ and $\frac{\log p}{\log(p/s)} = O(1)$. Then under the MCR model, the minimax risk of the excess misclassification error over the class $G(s, M_{n,p})$ and $\Psi(n_0; n, p)$ satisfies the following.

1. If $M_{n,p} \le C_b$ for some $C_b > 0$, then for any $\alpha > 0$ there is some constant $C_\alpha > 0$ such that

$$\inf_{\hat C} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\!\left( R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha \cdot \frac{s\log p}{n_0} \right) \ge 1 - \alpha.$$

2. If $M_{n,p} \to \infty$ as $n \to \infty$, then for any $\alpha > 0$ there are some constant $C_\alpha > 0$ and $\tilde\delta_n = o(1)$ such that

$$\inf_{\hat C} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\!\left( R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha \cdot e^{-(\frac{1}{8} + \tilde\delta_n) M_{n,p}^2} \cdot \frac{s\log p}{n_0} \right) \ge 1 - \alpha.$$

Remark 5. In the complete-data case, $n^*_{\min} = \min\{n_1, n_2\} = n$, so Theorems 6 and 7 together with Theorems 1-4 imply that both the AdaLDA and ADAM algorithms attain the optimal rates of convergence in terms of estimation and classification error.

We should also note that the proof of Theorem 7 is not straightforward.
This is partially due to the fact that the excess risk $R_\theta(\hat C) - R_{\mathrm{opt}}(\theta)$ does not satisfy the triangle inequality required by standard lower bound techniques. A key technique here is to make a connection to an alternative risk function. For a generic classification rule $\hat C$, we define

$$L_\theta(\hat C) = P_\theta(\hat C(Z) \ne C_\theta(Z)), \quad (16)$$

where $C_\theta(Z)$ is Fisher's linear discriminant rule in (1). The following lemma enables us to reduce the loss $R_\theta(\hat C) - R_{\mathrm{opt}}(\theta)$ to the risk function $L_\theta(\hat C)$.

Lemma 3. Let $Z \sim \frac{1}{2}N_p(\mu_1, \Sigma) + \frac{1}{2}N_p(\mu_2, \Sigma)$ with parameter $\theta = (\mu_1, \mu_2, \Sigma)$. If a classifier $\hat C$ satisfies $L_\theta(\hat C) = o(1)$ as $n \to \infty$, then for sufficiently large $n$,

$$R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge \frac{\sqrt{2\pi}\,\Delta}{8}\, e^{\Delta^2/8} \cdot L_\theta^2(\hat C).$$

Lemma 3 relates the risk function $R_\theta(\hat C) - R_{\mathrm{opt}}(\theta)$ to the more "standard" risk function $L_\theta(\hat C)$, which has the following property, serving the same purpose as the triangle inequality.

Lemma 4. Let $\theta = (\mu, -\mu, I_p)$ and $\tilde\theta = (\tilde\mu, -\tilde\mu, I_p)$ with $\|\mu\|_2 = \|\tilde\mu\|_2 = \Delta/2$. For any classifier $C$, if $\|\mu - \tilde\mu\|_2 = o(1)$ as $n \to \infty$, then for sufficiently large $n$,

$$L_\theta(C) + L_{\tilde\theta}(C) \ge \frac{1}{\Delta}\, e^{-\Delta^2/8} \cdot \|\mu - \tilde\mu\|_2.$$

Using Lemmas 3 and 4, we can then apply Fano's inequality to complete the proof of Theorem 7; the details are given in Section 5. In addition, similar minimax lower bounds for estimating $\beta$ and for the excess misclassification error can be established under the MUCR model. The following result shows that the convergence rates in Corollary 1 are minimax rate optimal.

Theorem 8.
Under the conditions of Theorem 6 and the MUCR model with missing probability $\epsilon$, assume further that $\big( (M_{n,p}^2 \frac{s\log p}{n}) \vee \sqrt{\frac{\log p}{n}} \big) \cdot \frac{1}{(1-\epsilon)^2} = o(1)$. Then the minimax risk of estimating the discriminant direction $\beta$ over the class $G(s, M_{n,p})$ under the MUCR model satisfies

$$\inf_{\hat\beta} \sup_{\theta \in G(s, M_{n,p})} \mathbb{E}\big[\|\hat\beta - \beta\|_2\big] \gtrsim M_{n,p}\sqrt{\frac{s\log p}{n(1-\epsilon)^2}}.$$

Moreover, if $M_{n,p} \to \infty$ and $\epsilon < 1 - c_B$ for some $c_B \in (0, 1)$, the minimax risk of the misclassification error over the class $G(s, M_{n,p})$ satisfies the following: for any $\alpha, \delta > 0$, there is some constant $C_\alpha > 0$ such that

$$\inf_{\hat C} \sup_{\theta \in G(s, M_{n,p})} P\!\left( R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha \cdot e^{-(\frac{1}{8} + \delta) M_{n,p}^2} \cdot \frac{s\log p}{n(1-\epsilon)^2} \right) \ge 1 - \alpha.$$

4 Numerical results

The proposed AdaLDA and ADAM classifiers are easy to implement, and the MATLAB code is available at https://github.com/linjunz/ADAM. In this section we investigate the numerical performance of AdaLDA and ADAM using both simulated and real data.

4.1 Simulations

In all the simulations, the sample size is $n_1 = n_2 = 200$, while the number of variables $p$ varies over 100, 200, and 400. The probability of being in either of the two classes is equal. The discriminant vector $\beta = (1, \dots, 1, 0, \dots, 0)^\top$ is sparse, with only the first $s = 10$ entries nonzero. We consider the following three models for the covariance matrix $\Sigma$.

Model 1 (Erdős-Rényi random graph): Let $\tilde\Omega = (\tilde\omega_{ij})$, where $\tilde\omega_{ij} = u_{ij}\delta_{ij}$, $\delta_{ij} \sim \mathrm{Ber}(\rho)$ is a Bernoulli random variable with success probability $\rho = 0.2$, and $u_{ij} \sim \mathrm{Unif}([0.5, 1] \cup [-1, -0.5])$. After symmetrizing $\tilde\Omega$, set $\Omega = \tilde\Omega + \{\max(-\varphi_{\min}(\tilde\Omega), 0) + 0.05\} I_p$ to ensure positive definiteness.
Finally, $\Omega$ is standardized to have unit diagonals, and $\Sigma = \Omega^{-1}$.

Model 2 (Block sparse model): $\Omega = (B + \delta I_p)/(1 + \delta)$, where $b_{ij} = b_{ji} = 0.5 \times \mathrm{Ber}(1, 0.2)$ for $1 \le i \le p/2$, $i < j \le p$; $b_{ij} = b_{ji} = 0.5$ for $p/2 + 1 \le i < j \le p$; and $b_{ii} = 1$ for $1 \le i \le p$. In other words, only the first $p/2$ rows and columns of $\Omega$ are sparse, whereas the rest of the matrix is not sparse. Here $\delta = \max(-\varphi_{\min}(B), 0) + 0.05$. The matrix $\Omega$ is also standardized to have unit diagonals, and $\Sigma = \Omega^{-1}$.

Model 3 (AR(1) model): $\Sigma = (\Sigma_{ij})_{p \times p}$ with $\Sigma_{ij} = 0.9^{|i-j|}$.

Given the covariance matrix $\Sigma$ generated by one of the models above, the means are $\mu_1 = (0, \dots, 0)^\top$ and $\mu_2 = \mu_1 - \Sigma\beta$. The missing mechanism is chosen such that each entry $X_{ki}$ is observed with probability $1 - \epsilon \in (0, 1)$; we vary the missing proportion $\epsilon$ from 0 to 0.2. We apply the AdaLDA rule when the data are complete, i.e., $\epsilon = 0$, and the ADAM rule when $\epsilon > 0$. The AdaLDA rule is then compared with the LPD (Cai and Liu, 2011), SLDA (Shao et al., 2011), FAIR (Fan and Fan, 2008), and NSC (Tibshirani et al., 2002) rules, whose tuning parameters are chosen by cross-validation. We also note that one commonly used method, the naive Bayes rule, is a special case of the NSC rule with tuning parameter $\lambda_\Delta = 0$, so it is not included in the comparison. The misclassification errors are recorded in the tables below. For each setting, the number of repetitions is set to 100.
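The simulation design above fully determines the signal strength $\Delta = (\beta^\top\Sigma\beta)^{1/2}$ and hence the oracle risk $\Phi(-\Delta/2)$. A minimal Python sketch of the Model 3 (AR(1)) part of the design (the function name is ours; $\Phi$ is written via the standard-library `math.erf`):

```python
import numpy as np
from math import erf, sqrt

def ar1_design(p=100, s=10, rho=0.9):
    """Model 3 (AR(1)): Sigma_ij = rho^{|i-j|}; beta has s leading ones;
    mu1 = 0 and mu2 = mu1 - Sigma @ beta, as in the simulation setup."""
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    beta = np.zeros(p)
    beta[:s] = 1.0
    mu1 = np.zeros(p)
    mu2 = mu1 - Sigma @ beta
    Delta = float(np.sqrt(beta @ Sigma @ beta))            # signal-to-noise ratio
    oracle_risk = 0.5 * (1.0 + erf((-Delta / 2.0) / sqrt(2.0)))  # Phi(-Delta/2)
    return Sigma, beta, mu1, mu2, Delta, oracle_risk

Sigma, beta, mu1, mu2, Delta, oracle_risk = ar1_design()
```

Drawing $n_1 = n_2 = 200$ samples from $N_p(\mu_k, \Sigma)$ and applying a plug-in rule then gives one Monte Carlo repetition; the paper averages 100 such repetitions per setting.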
Table 1: Misclassification errors (%) for Model 1

Method      ADAM                        AdaLDA  LPD    SLDA   FAIR   NSC    Oracle
(s,p)\ϵ     0.2    0.15   0.1    0.05   0       0      0      0      0      0
(10,100)    16.82  15.97  15.10  14.33  12.89   12.23  18.42  16.34  17.43  10.22
(20,100)    12.63  12.17  11.92  11.68  11.61   12.08  28.76  15.02  15.60  7.06
(10,200)    29.43  28.11  27.94  26.58  25.90   27.32  39.87  33.22  37.78  21.50
(20,200)    18.89  17.78  17.72  17.20  15.42   14.67  26.78  18.87  21.13  11.63
(10,400)    31.34  30.28  29.40  29.09  28.92   30.89  37.33  46.45  36.12  25.46
(20,400)    26.22  25.78  24.21  23.54  22.07   23.96  34.78  32.45  33.82  15.63

Table 2: Misclassification errors (%) for Model 2

Method      ADAM                        AdaLDA  LPD    SLDA   FAIR   NSC    Oracle
(s,p)\ϵ     0.2    0.15   0.1    0.05   0       0      0      0      0      0
(10,100)    3.69   3.59   3.42   3.38   3.31    3.40   3.48   4.03   4.41   3.00
(20,100)    0.33   0.30   0.29   0.25   0.20    0.22   0.24   0.24   0.37   0.14
(10,200)    5.07   4.73   4.13   4.03   3.80    3.73   3.98   4.95   5.59   3.39
(20,200)    0.60   0.53   0.43   0.42   0.40    0.46   0.53   0.82   0.67   0.33
(10,400)    5.54   5.18   5.05   4.21   4.04    4.11   4.20   5.20   6.89   3.68
(20,400)    1.12   1.00   0.76   0.67   0.61    0.65   0.81   1.38   1.30   0.41

Table 3: Misclassification errors (%) for Model 3

Method      ADAM                        AdaLDA  LPD    SLDA   FAIR   NSC    Oracle
(s,p)\ϵ     0.2    0.15   0.1    0.05   0       0      0      0      0      0
(10,100)    18.91  18.72  18.58  18.16  18.04   17.99  23.08  23.72  23.00  11.78
(20,100)    19.93  18.98  18.92  18.13  17.63   17.69  23.01  23.44  22.65  10.78
(10,200)    19.15  18.38  18.00  17.84  17.41   17.94  22.67  23.87  23.71  11.78
(20,200)    18.50  18.36  18.20  17.81  17.66   17.77  23.00  24.36  24.04  10.74
(10,400)    18.95  18.73  18.59  18.27  17.86   18.66  24.93  27.29  26.08  11.71
(20,400)    18.47  18.39  18.25  18.10  17.82   18.14  24.78  26.77  26.49  10.68

It can be seen from the simulation results that the proposed AdaLDA algorithm, which is purely data-driven and tuning-free, performs very similarly to the LPD algorithm with optimally chosen tuning parameters and outperforms all the other methods. In addition, ADAM does not lose much accuracy in the presence of missing data.
4.2 Real data analysis

In addition to the simulation studies, we illustrate the merits of the AdaLDA and ADAM classifiers through the analysis of two real datasets. One dataset, available at www.chestsurg.org, is the lung cancer data analyzed by Gordon et al. (2002). The other is the leukemia data from high-density Affymetrix oligonucleotide arrays, previously analyzed in Golub et al. (1999) and available at www.broad.mit.edu/cgi-bin/cancer/datasets.cgi. These two datasets have frequently been used to illustrate the empirical performance of classifiers for high-dimensional data in the recent literature. We compare AdaLDA and ADAM with the existing methods.

4.2.1 Lung cancer data

We evaluate the proposed methods by classifying between malignant pleural mesothelioma (MPM) and adenocarcinoma (ADCA) of the lung. There are 181 tissue samples (31 MPM and 150 ADCA), and each sample is described by 12533 genes in the lung cancer dataset of Gordon et al. (2002). This dataset has been analyzed in Fan and Fan (2008) using FAIR and NSC. In this section we apply the AdaLDA and ADAM rules to this dataset for disease classification. When the ADAM rule is used, we make each entry in the dataset missing uniformly and independently with probability $\epsilon$; given the small sample size, we choose $\epsilon = 0.05$ and $\epsilon = 0.1$. The sample variances of the genes range over a wide interval. We first compute the sample variance of each gene and drop the genes in the lower and upper 6-quantiles to control the condition number of $\hat\Sigma$. The average misclassification errors are computed using 5-fold cross-validation with 50 repetitions for the various methods. To reduce the computational cost, in each repetition only the 1500 genes with the largest absolute values of the two-sample t statistics are used.
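The gene screening step just described, keeping the genes with the largest absolute two-sample t statistics, can be sketched as follows (the function name is ours, and we use the unequal-variance t statistic since the paper does not spell out the exact variant):

```python
import numpy as np

def screen_top_t(X1, X2, k):
    """Return the indices of the k features with the largest absolute
    two-sample t statistics between the rows of X1 and X2."""
    n1, n2 = X1.shape[0], X2.shape[0]
    diff = X1.mean(axis=0) - X2.mean(axis=0)
    se = np.sqrt(X1.var(axis=0, ddof=1) / n1 + X2.var(axis=0, ddof=1) / n2)
    t = diff / se
    return np.argsort(-np.abs(t))[:k]

rng = np.random.default_rng(1)
X1 = rng.normal(size=(50, 30))
X1[:, 3] += 5.0                   # only gene 3 carries signal in this toy example
X2 = rng.normal(size=(50, 30))
print(screen_top_t(X1, X2, 1))    # gene 3 dominates the ranking
```

In the paper this screening is applied within each cross-validation repetition, so the gene selection is recomputed on each training split.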
As seen in Table 4, the classification result of AdaLDA is better than those of the existing methods, including the LPD (Cai and Liu, 2011), FAIR (Fan and Fan, 2008), and NSC (Tibshirani et al., 2002) methods, even though only 1500 genes were used. Moreover, in the incomplete-data case, ADAM still attains satisfactory accuracy.

Table 4: Classification error of the lung cancer data by various methods

                ADAM (ϵ=0.1)  ADAM (ϵ=0.05)  AdaLDA  LPD    SLDA   FAIR   NSC
Testing error   5.53%         3.22%          2.09%   2.11%  4.88%  3.64%  7.30%

Table 5: Classification error of the leukemia data by various methods

                ADAM (ϵ=0.1)  ADAM (ϵ=0.05)  AdaLDA  LPD    FAIR   SLDA   NSC
Testing error   8.47%         7.53%          2.94%   3.09%  2.94%  5.76%  8.82%

4.2.2 Leukemia data

Golub et al. (1999) applied gene expression microarray techniques to study human acute leukemia and discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL). There are 72 tissue samples (47 ALL and 25 AML) and 7129 genes in the leukemia dataset. In this section, we apply the AdaLDA rule to this dataset and compare the classification results with those obtained by the LPD (Cai and Liu, 2011), FAIR, NSC (Fan and Fan, 2008), SLDA (Shao et al., 2011), and naive Bayes (NB) methods. As in the analysis of the lung cancer data, when the ADAM rule is used we make each entry in the dataset missing independently with probability $\epsilon \in \{0.05, 0.1\}$, and we first drop the genes with sample variances outside the lower and upper 6-quantiles. The average misclassification errors are computed using two-fold cross-validation with 50 repetitions for the various methods; to control the computational cost, we use the 2000 genes with the largest absolute values of the two-sample t statistics in each repetition. Classification results are summarized in Table 5.
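The cross-validated error estimates reported in Tables 4 and 5 follow the standard k-fold recipe (5-fold for the lung data, 2-fold for the leukemia data, each repeated 50 times). A generic sketch, with `fit`/`predict` standing in for any classifier such as AdaLDA (all names are ours):

```python
import numpy as np

def cv_error(X, y, fit, predict, n_folds=5, seed=0):
    """Average misclassification rate over n_folds held-out folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[test]) != y[test]))
    return float(np.mean(errs))

# Toy run with a nearest-class-mean classifier on well-separated data.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(size=(40, 2)) + [5, 0], rng.normal(size=(40, 2)) - [5, 0]])
y = np.repeat([1, 2], 40)
fit = lambda X, y: (X[y == 1].mean(axis=0), X[y == 2].mean(axis=0))
predict = lambda m, X: np.where(((X - m[0])**2).sum(1) <= ((X - m[1])**2).sum(1), 1, 2)
print(cv_error(X, y, fit, predict))  # essentially 0 for well-separated classes
```

Repeating this with fresh random fold assignments and averaging, as the paper does 50 times, reduces the variance of the error estimate.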
AdaLDA performs similarly to the LPD rule and FAIR, all attaining a misclassification error of about 3%. In contrast, the naive Bayes rule misclassifies 20.59% of the testing samples and SLDA misclassifies 5.76%. Fan and Fan (2008) report a test error rate of 2.94% for FAIR and a test error rate of 8.82% for the NSC method proposed by Tibshirani et al. (2002). In the presence of missing data, ADAM misclassifies 7.53% and 8.47% of the testing samples when the missing proportion is 0.05 and 0.1, respectively.

5 Proofs

In this section we prove the main results, Theorems 2, 3, 4, 5, 6, and 7. Theorem 1 follows from Theorems 3 and 7. Since $n_1 \asymp n_2$, without loss of generality we assume $n_1 = n_2 = n$ in the proofs. For reasons of space, the proofs of the technical lemmas are given in the Supplementary Material (Cai and Zhang, 2018).

5.1 Proof of Theorem 2

To prove Theorem 2, we begin by collecting a few important technical lemmas that will be used in the main proofs.

5.1.1 Auxiliary Lemmas

Lemma 5. Suppose $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} N_p(\mu, \Sigma)$, and let $\hat\mu$ and $\hat\Sigma$ be the sample mean and sample covariance matrix, respectively. Let $\Gamma(s) = \{u \in \mathbb{R}^p : \|u_{S^c}\|_1 \le \|u_S\|_1 \text{ for some } S \subset [p] \text{ with } |S| = s\}$. Then with probability at least $1 - p^{-1}$,

$$\sup_{u \in \Gamma(s)} u^\top(\hat\mu - \mu) \lesssim \sqrt{\frac{s\log p}{n}}; \qquad \sup_{u, v \in \Gamma(s)} u^\top(\hat\Sigma - \Sigma)v \lesssim \sqrt{\frac{s\log p}{n}}.$$

Lemma 6. Suppose $x, y \in \mathbb{R}^p$. Let $h = x - y$ and $S = \mathrm{supp}(y)$. If $\|x\|_1 \le \|y\|_1$, then $h \in \Gamma(s)$ with $s = |S|$; that is, $\|h_{S^c}\|_1 \le \|h_S\|_1$.

5.1.2 Main proof of Theorem 2

Recall that $\hat\beta_{\mathrm{AdaLDA}}$ is constructed by the following two steps.

Step 1.
Estimating \u22062 \u02dc \u03b2 = arg min \u03b2 ( |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 r log p n \u00b7 p \u02c6 \u03c3jj \u00b7 (\u03bb0\u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51) + 1), j \u2208[p] ) . (17) Then we estimate \u22062 by \u02c6 \u22062 = |\u27e8\u02dc \u03b2, \u02c6 \u00b52 \u2212\u02c6 \u00b51\u27e9|. Step 2. Adaptive estimation of \u03b2. Given \u02c6 \u22062, the \ufb01nal estimator \u02c6 \u03b2AdaLDA of \u03b2 is constructed by the following linear optimization problem \u02c6 \u03b2AdaLDA = arg min \u03b2 ( |e\u22a4 j \u0010 \u02c6 \u03a3\u03b2 \u2212(\u02c6 \u00b52 \u2212\u02c6 \u00b51) \u0011 | \u22644 r log p n \u00b7 q \u03bb0\u02c6 \u03c3jj \u02c6 \u22062 + \u02c6 \u03c3jj, j \u2208[p] ) . (18) Firstly, let\u2019s show the consistency of estimating \u22062. Recall the de\ufb01nition of \u02dc \u03b2 and using 20 \fLemma 5, we have with high probability at least 1 \u22123p\u22121, |( \u02dc \u03b2 \u2212\u03b2)\u22a4\u03a3( \u02dc \u03b2 \u2212\u03b2)| \u2264|( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u02dc \u03b2 \u2212\u02c6 \u03b4)| + |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u2212\u03a3) \u02dc \u03b2)| + |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u03b4 \u2212\u02c6 \u03b4)| \u2264\u2225\u02dc \u03b2 \u2212\u03b2\u22251\u2225\u02c6 \u03a3 \u02dc \u03b2 \u2212\u02c6 \u03b4\u2225\u221e+ |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u2212\u03a3)( \u02dc \u03b2 \u2212\u03b2))| + |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u2212\u03a3)\u03b2)| + |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u03b4 \u2212\u02c6 \u03b4)| \u2272\u221as\u2225\u02dc \u03b2 \u2212\u03b2\u22252 \u00b7 \u2225\u02c6 \u03a3 \u02dc \u03b2 \u2212\u02c6 \u03b4\u2225\u221e+ \u2225\u02dc \u03b2 \u2212\u03b2\u22252 \u00b7 r s log p n \u00b7 \u2225\u03b2 \u2212\u02dc \u03b2\u22252 + \u2225\u03b2 \u2212\u02dc \u03b2\u22252 r s log p n \u00b7 \u2225\u03b2\u22252 + \u2225\u03b2 \u2212\u02dc 
\u03b2\u22252 r s log p n , (19) where the third inequality uses Lemma 5 and the fact that \u03b2, \u02dc \u03b2 \u2212\u03b2 \u2208\u0393(s). In fact, \u03b2 is a feasible solution to (8) due to Lemma 1 and thus \u2225\u02dc \u03b2\u22251 \u2264\u2225\u03b2\u22251. Then by Lemma 6, we have \u02dc \u03b2 \u2212\u03b2 \u2208\u0393(s). In addition, \u2225\u03b2\u22250 \u2264s, so we have \u03b2 \u2208\u0393(s). In addition, by standard derivation of the accuracy of sample variance, since M\u22121 \u2264 \u03bbmin(\u03a3) \u2264\u03bbmax(\u03a3) \u2264M, by using the union bound technique, we have with probability at least 1 \u2212p\u22121, max i\u2208[p] |\u02c6 \u03c3ii \u2212\u03c3ii| \u2272 r log p n , which implies with probability at least 1 \u2212p\u22121, max i\u2208[p] |\u02c6 \u03c3ii| \u22642M. In addition, since \u2206\u2265Mn,p \u2265cL > 0, then with probability at least 1 \u22123p\u22121, \u2225\u02c6 \u03a3 \u02dc \u03b2 \u2212\u02c6 \u03b4\u2225\u221e\u22644 r log p n \u00b7 p \u02c6 \u03c3jj \u00b7 (\u03bb0 \u02dc \u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51) + 1) \u2272 r log p n \u00b7 |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51) + 1| + r log p n \u00b7 |\u03b2\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51)| \u2264 r log p n \u00b7 (|( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u00b52 \u2212\u00b51)| + |( \u02dc \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u00b52 \u2212\u02c6 \u00b51 \u2212\u00b52 + \u00b51)| + 1) + r log p n \u00b7 (|\u03b2\u22a4(\u00b52 \u2212\u00b51)| + |\u03b2\u22a4(\u00b52 \u2212\u00b51 \u2212\u02c6 \u00b52 + \u02c6 \u00b51)|) \u2272 r log p n \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 + \u221as \u00b7 log p n \u2225\u02dc \u03b2 \u2212\u03b2\u22252 + r log p n \u22062 + \u221as \u00b7 log p n \u2206, where the last inequality uses the fact that \u2225\u00b52\u2212\u00b51\u22252, \u2225\u03b2\u22252 \u2272\u2206, since \u2206= p (\u00b52 \u2212\u00b51)\u22a4\u2126(\u00b52 \u2212\u00b51) \u2265 1 \u221a M 
\u2225\u00b52 \u2212\u00b51\u22252, and \u2206= p \u03b2\u22a4\u03a3\u03b2 \u2265 1 \u221a M \u2225\u03b2\u22252. 21 \fIt follows that with probability at least 1 \u22126p\u22121, |( \u02dc \u03b2 \u2212\u03b2)\u22a4\u03a3( \u02dc \u03b2 \u2212\u03b2)| \u2272 r s log p n \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 + s log p n \u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 + r s log p n \u22062\u2225\u02dc \u03b2 \u2212\u03b2\u22252 + s log p n \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 + r s log p n \u00b7 \u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 + r s log p n \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 + \u2225\u02dc \u03b2 \u2212\u03b2\u22252 \u00b7 r s log p n \u2272 r s log p n \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 + r s log p n \u22062\u2225\u02dc \u03b2 \u2212\u03b2\u22252, where the last inequality uses the fact that \u2206\u2265Mn,p \u2265cL > 0. On the other hand, since |( \u02dc \u03b2 \u2212\u03b2)\u22a4\u03a3( \u02dc \u03b2 \u2212\u03b2)| \u2265\u03bbmin(\u03a3)\u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 \u22651 M \u2225\u02dc \u03b2 \u2212\u03b2\u22252 2. We then have, with probability at least 1 \u22126p\u22121, \u2225\u02dc \u03b2 \u2212\u03b2\u22252 \u2272 r s log p n \u0010 \u2206\u2225\u02dc \u03b2 \u2212\u03b2\u22252 + \u22062\u0011 , Assuming Mn,p q s log p n = o(1), which implies \u2206 q s log p n = o(1), then we have \u2225\u02dc \u03b2 \u2212\u03b2\u22252 \u2272 \u22062 q s log p n 1 \u2212\u2206 q s log p n . 
Since \u2225\u02dc \u03b2\u22251 \u2264\u2225\u03b2\u22251 and combining with Lemma 5, we then have with probability at least 1 \u22127p\u22121, | \u02c6 \u22062 \u2212\u22062 \u22062 | \u2264| \u02dc \u03b2\u22a4(\u03b4 \u2212\u02c6 \u03b4)| + |\u03b4\u22a4( \u02dc \u03b2 \u2212\u03b2)| \u22062 \u2264\u2225\u03b2\u22251 \u00b7 \u2225\u03b4 \u2212\u02c6 \u03b4\u2225\u221e+ \u2225\u03b4\u22252 \u00b7 \u2225\u03b2 \u2212\u02dc \u03b2\u22252 \u22062 \u2264 \u221as \u00b7 \u2225\u03b2\u22252 \u00b7 \u2225\u03b4 \u2212\u02c6 \u03b4\u2225\u221e+ \u2225\u03b4\u22252 \u00b7 \u2225\u03b2 \u2212\u02dc \u03b2\u22252 \u22062 \u2272 \u221as \u00b7 \u2206 q log p n + \u2206\u00b7 \u22062 q s log p n 1\u2212\u2206 q s log p n \u22062 = o(1), given \u2206\u2265cL and \u2206 q s log p n = o(1). Secondly, let\u2019s proceed to showing the accuracy of \u02c6 \u03b2AdaLDA. We use \u02c6 \u03b2 to denote \u02c6 \u03b2AdaLDA in this subsection for simplicity. By Lemma 1, \u03b2 lies in the feasible set of (9), so \u2225\u02c6 \u03b2\u22251 \u2264 22 \f\u2225\u03b2\u22251. By a similar argument as in (19), we have that with probability at least 1 \u22123p\u22121, |( \u02c6 \u03b2 \u2212\u03b2)\u22a4\u03a3( \u02c6 \u03b2 \u2212\u03b2)| \u2264|( \u02c6 \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u02c6 \u03b2 \u2212\u02c6 \u03b4)| + |( \u02c6 \u03b2 \u2212\u03b2)\u22a4(\u02c6 \u03a3 \u2212\u03a3) \u02c6 \u03b2)| + |( \u02c6 \u03b2 \u2212\u03b2)\u22a4(\u03b4 \u2212\u02c6 \u03b4)| \u2272\u221as\u2225\u02c6 \u03b2 \u2212\u03b2\u22252 \u00b7 \u2225\u02c6 \u03a3 \u02c6 \u03b2 \u2212\u02c6 \u03b4\u2225\u221e+ \u2225\u02c6 \u03b2 \u2212\u03b2\u22252 \u00b7 r s log p n \u00b7 \u2225\u03b2 \u2212\u02c6 \u03b2\u22252 + \u2225\u03b2 \u2212\u02c6 \u03b2\u22252 r s log p n \u00b7 \u2225\u03b2\u22252 + \u2225\u03b2 \u2212\u02c6 \u03b2\u22252 r s log p n . 
(20) Now since we have | \u02c6 \u22062\u2212\u22062 \u22062 | = o(1) with probability at least 1 \u22127p\u22121, this implies with probability at least 1 \u221210p\u22121, \u2225\u02c6 \u03a3 \u02c6 \u03b2 \u2212\u02c6 \u03b4\u2225\u221e\u2264 r log p n \u00b7 q \u02c6 \u03c3jj \u02c6 \u22062 + 2\u02c6 \u03c3jj \u2272\u2206 r log p n . Then using the fact |( \u02dc \u03b2 \u2212\u03b2)\u22a4\u03a3( \u02dc \u03b2 \u2212\u03b2)| \u2265\u03bbmin(\u03a3)\u2225\u02dc \u03b2 \u2212\u03b2\u22252 2 again, we have with probability at least 1 \u221210p\u22121, \u2225\u02c6 \u03b2 \u2212\u03b2\u22252 2 \u2272\u2206 r s log p n \u00b7 \u2225\u02c6 \u03b2 \u2212\u03b2\u22252 + r s log p n \u00b7 \u2225\u02c6 \u03b2 \u2212\u03b2\u22252 2. This implies that there exists some constant C > 0, such that with probability at least 1 \u221210p\u22121, \u2225\u02c6 \u03b2AdaLDA \u2212\u03b2\u22252 \u2264C\u2206\u00b7 r s log p n . In addition, since \u2225\u02c6 \u03b2AdaLDA\u22251 \u2264\u2225\u03b2\u22251 \u2264\u221ap\u2225\u03b2\u22252 \u2264\u221apM \u00b7 \u2206, we then have E[\u2225\u02c6 \u03b2AdaLDA \u2212\u03b2\u22252] \u2264E[\u2225\u02c6 \u03b2AdaLDA \u2212\u03b2\u22252 \u00b7 1 {\u2225\u02c6 \u03b2AdaLDA\u2212\u03b2\u22252>C\u2206\u00b7 q s log p n }] + E[\u2225\u02c6 \u03b2AdaLDA \u2212\u03b2\u22252 \u00b7 1 {\u2225\u02c6 \u03b2AdaLDA\u2212\u03b2\u22252\u2264C\u2206\u00b7 q s log p n }] \u2264 p pM \u00b7 \u2206\u00b7 10p\u22121 + C\u2206\u00b7 r s log p n \u2272\u2206\u00b7 r s log p n \u2272Mn,p \u00b7 r s log p n . 5.2 Proofs of Theorem 3 For a vector x \u2208Rp, we de\ufb01ne \u2225x\u22252,s = sup\u2225y\u22252=1,y\u2208\u0393(s) |x\u22a4y|. We start with the following lemma. Lemma 7. 
For two vectors \u03b3 and \u02c6 \u03b3, if \u2225\u03b3 \u2212\u02c6 \u03b3\u22252 = o(1) as n \u2192\u221e, and \u2225\u03b3\u22252 \u2265c for some constant c > 0, then when n \u2192\u221e, \u2225\u03b3\u22252 \u00b7 \u2225\u02c6 \u03b3\u22252 \u2212\u03b3\u22a4\u02c6 \u03b3 \u224d\u2225\u03b3 \u2212\u02c6 \u03b3\u22252 2. 23 \fWe postpone the proof of Lemma 7 to Section A.6 in the supplement, and continue the proof of Theorem 3. Let \u03b4n = \u2225\u02c6 \u03b2 \u2212\u03b2\u22252 \u2228\u2225\u02c6 \u00b51 \u2212\u00b51\u22252,s \u2228\u2225\u02c6 \u00b52 \u2212\u00b52\u22252,s. We are going to show R\u03b8( \u02c6 C) \u2212Ropt(\u03b8) \u2272e\u2212\u22062/8 \u00b7 \u2206\u00b7 \u03b42 n. Given the estimators \u02c6 \u03c9, \u02c6 \u00b5k, and \u02c6 \u03b2, the sample Z is classi\ufb01ed as \u02c6 C(Z) = \uf8f1 \uf8f2 \uf8f3 1, (Z \u2212(\u02c6 \u00b51 + \u02c6 \u00b52)/2)\u22a4\u02c6 \u03b2 \u22650 2, (Z \u2212(\u02c6 \u00b51 + \u02c6 \u00b52)/2)\u22a4\u02c6 \u03b2 < 0. Let \u02c6 \u2206= q \u02c6 \u03b2\u22a4\u03a3 \u02c6 \u03b2 and \u02c6 \u00b5 = \u02c6 \u00b51+\u02c6 \u00b52 2 . The misclassi\ufb01cation error is R\u03b8( \u02c6 C) = 1 2\u03a6 \u0010 \u2212(\u02c6 \u00b5 \u2212\u00b51)\u22a4\u02c6 \u03b2 \u02c6 \u2206 \u0011 + 1 2 \u00af \u03a6 \u0010 \u2212(\u02c6 \u00b5 \u2212\u00b52)\u22a4\u02c6 \u03b2 \u02c6 \u2206 \u0011 , with Ropt(\u03b8) = 1 2\u03a6 \u0010 \u2212\u2206/2 \u0011 + 1 2 \u00af \u03a6 \u0010 \u2206/2 \u0011 . De\ufb01ne an intermediate quantity R\u2217= 1 2\u03a6 \u0010 \u2212\u03b4\u22a4\u02c6 \u03b2/2 \u02c6 \u2206 \u0011 + 1 2 \u00af \u03a6 \u0010\u03b4\u22a4\u02c6 \u03b2/2 \u02c6 \u2206 \u0011 . We \ufb01rst show that R\u2217\u2212Ropt(\u03b8) \u2272e\u2212\u22062/8 \u00b7 \u2206\u00b7 \u03b42 n. 
Applying Taylor\u2019s expansion to the two terms in R\u2217at \u2206 2 and \u2212\u2206 2 respectively, we obtain R\u2217\u2212Ropt(\u03b8) = 1 2 \u0010\u2206 2 \u2212\u03b4\u22a4\u02c6 \u03b2 2 \u02c6 \u2206 \u0011 \u03a6\u2032\u0010\u2206 2 \u0011 + 1 2 \u0010 \u2212\u03b4\u22a4\u02c6 \u03b2 2 \u02c6 \u2206 + \u2206 2 \u0011 \u03a6\u2032\u0010 \u2212\u2206 2 \u0011 + O \u0010 e\u2212\u22062/8 1 \u2206\u00b7 \u03b44 n \u0011 , (21) In fact, the remaining term can be written as 1 2 \u0010\u03b4\u22a4\u02c6 \u03b2 2 \u02c6 \u2206 \u2212\u2206 2 \u00112 \u03a6\u2032\u2032(t1,n) + \u0010\u03b4\u22a4\u02c6 \u03b2 2 \u02c6 \u2206 \u2212\u2206 2 \u00112 \u03a6\u2032\u2032(t2,n), where t1,n, t2,n are some constants satisfying |t1,n|, |t2,n| are between \u2206 2 and \u03b4\u22a4\u02c6 \u03b2/2 \u02c6 \u2206 . Therefore, the remaining term can be bounded by using the facts that \f \f \f\u03b4\u22a4\u02c6 \u03b2 2 \u02c6 \u2206 \u2212\u2206 2 \f \f \f = O( 1 \u2206\u03b42 n), and \u03a6\u2032\u2032(tn) = O(e\u2212\u22062/8\u2206), for |tn| is between \u2206 2 and \u03b4\u22a4\u02c6 \u03b2/2 \u02c6 \u2206 . In fact, for the \ufb01rst term, we can obtain this inequality by letting \u03b3 = \u03a31/2\u03b2 and \u02c6 \u03b3 = \u03a31/2 \u02c6 \u03b2 in Lemma 7. Then \f \f \f\u2206\u2212\u03b4\u22a4\u02c6 \u03b2 \u02c6 \u2206 \f \f \f = \f \f\u2225\u03b3\u22252 \u2212\u03b3\u22a4\u02c6 \u03b3 \u2225\u02c6 \u03b3\u22252 \f \f = \f \f\u2225\u03b3\u22252\u2225\u02c6 \u03b3\u22252 \u2212\u03b3\u22a4\u02c6 \u03b3 \u2225\u02c6 \u03b3\u22252 \f \f \u22721 \u2206\u2225\u02c6 \u03b3 \u2212\u03b3\u22252 2 \u22721 \u2206\u03b42 n. 24 \fIn addition, since as \u03b4n \u21920, (\u03b4\u2217)\u22a4\u02c6 \u03b2/2 \u02c6 \u2206 \u2192\u2206 2 , we then have |\u03a6\u2032\u2032(tn)| \u224d\u2206\u00b7 e\u2212(\u2206/2)2 2 = \u2206\u00b7 e\u2212\u22062/8. 
Then (21) can be further expanded:
$$R^* - R_{\mathrm{opt}}(\theta) \asymp \Big(-\frac{\delta^\top \hat\beta/2}{\hat\Delta} + \frac{\Delta}{2}\Big)e^{-\frac{1}{2}(\frac{\Delta}{2})^2} + \Big(-\frac{\delta^\top \hat\beta/2}{\hat\Delta} + \frac{\Delta}{2}\Big)e^{-\frac{1}{2}(-\frac{\Delta}{2})^2} + O\Big(e^{-\Delta^2/8}\frac{1}{\Delta} \cdot \delta_n^4\Big)$$
$$= \exp\Big(-\frac{\Delta^2}{8}\Big) \cdot \Big(-\frac{\delta^\top \hat\beta}{\hat\Delta} + \Delta\Big) + O\Big(e^{-\Delta^2/8}\frac{1}{\Delta} \cdot \delta_n^4\Big) \lesssim e^{-\Delta^2/8} \cdot \Big|\frac{\delta^\top \hat\beta}{\hat\Delta} - \Delta\Big| + O\Big(e^{-\Delta^2/8}\frac{1}{\Delta} \cdot \delta_n^4\Big) \lesssim e^{-\Delta^2/8} \cdot \delta_n^2.$$
Eventually we obtain $R^* - R_{\mathrm{opt}}(\theta) \lesssim e^{-\Delta^2/8}\Delta \cdot \delta_n^2$. To upper bound $R_\theta(\hat C) - R^*$, we apply Taylor's expansion to $R_\theta(\hat C)$:
$$R_\theta(\hat C) = \frac{1}{2}\Bigg\{\Phi\Big(\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + \frac{(\hat\mu - \mu_1)^\top \hat\beta - \delta^\top \hat\beta/2}{\hat\Delta}\Phi'\Big(\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + O\Big(e^{-\Delta^2/8}\Delta \cdot \delta_n^2\Big)\Bigg\}$$
$$- \frac{1}{2}\Bigg\{\bar\Phi\Big(-\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + \frac{(\hat\mu - \mu_2)^\top \hat\beta + \delta^\top \hat\beta/2}{\hat\Delta}\Phi'\Big(-\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + O\Big(e^{-\Delta^2/8}\Delta \cdot \delta_n^2\Big)\Bigg\},$$
where the remainder term can be obtained similarly to (21) by using the facts that $\big|\frac{(\hat\mu - \mu_1)^\top \hat\beta - \delta^\top \hat\beta/2}{\hat\Delta}\big| = O(\delta_n)$ and $|\Phi''(\cdot)| = O(e^{-\Delta^2/8}\Delta)$.
In fact, when $|\hat\Delta - \Delta| \le \big|\sqrt{(\hat\beta - \beta)^\top \Sigma (\hat\beta - \beta)}\big| \lesssim \|\hat\beta - \beta\|_2 \lesssim \delta_n = o(1)$, we have
$$\Big|\frac{(\hat\mu - \mu_1)^\top \hat\beta - \delta^\top \hat\beta/2}{\hat\Delta}\Big| \le \frac{1}{2\Delta}\big|(\hat\mu_1 + \hat\mu_2 - \mu_1 - \mu_2)^\top \hat\beta\big| \lesssim \delta_n.$$
This leads to
$$|R_\theta(\hat C) - R^*| \lesssim \Big|\frac{\delta^\top \hat\beta/2 - (\hat\mu - \mu_1)^\top \hat\beta}{\hat\Delta}\Phi'\Big(\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + \frac{\delta^\top \hat\beta/2 + (\hat\mu - \mu_2)^\top \hat\beta}{\hat\Delta}\Phi'\Big(-\frac{\delta^\top \hat\beta/2}{\hat\Delta}\Big) + O\Big(e^{-\Delta^2/8}\Delta \cdot \delta_n^2\Big)\Big|$$
$$= \Big|\frac{\delta^\top \hat\beta/2 - (\hat\mu - \mu_1)^\top \hat\beta}{\hat\Delta}e^{-\frac{1}{2}\{\frac{\delta^\top \hat\beta/2}{\hat\Delta}\}^2} + \frac{\delta^\top \hat\beta/2 + (\hat\mu - \mu_2)^\top \hat\beta}{\hat\Delta}e^{-\frac{1}{2}\{\frac{\delta^\top \hat\beta/2}{\hat\Delta}\}^2} + O\Big(e^{-\Delta^2/8}\Delta \cdot \delta_n^2\Big)\Big|.$$
Since
$$\delta/2 - (\hat\mu - \mu_1) + \delta/2 + (\hat\mu - \mu_2) = \delta - (\mu_2 - \mu_1) = 0,$$
it follows that $|R_\theta(\hat C) - R^*| \lesssim e^{-\Delta^2/8}\Delta \cdot \delta_n^2$. Combining the pieces, we obtain
$$R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \lesssim e^{-\Delta^2/8} \cdot \Delta \cdot \delta_n^2.$$
Finally, by Lemma 5 and the derivation in Theorem 2, with probability at least $1 - 12p^{-1}$, $\delta_n \lesssim M_{n,p}\sqrt{\frac{s \log p}{n}}$. In addition, $\Delta \in [M_{n,p}, 3M_{n,p}]$, so with probability at least $1 - 12p^{-1}$,
$$R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \lesssim e^{-M_{n,p}^2/8} \cdot M_{n,p}^3 \cdot \frac{s \log p}{n}.$$
Now we consider the two cases.
On the one hand, when $M_{n,p}$ is bounded by $C_b$, we have
$$R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \lesssim e^{-M_{n,p}^2/8} \cdot \frac{s \log p}{n}.$$
On the other hand, when $M_{n,p} \to \infty$ as $n$ grows,
$$R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \lesssim e^{-\big(\frac{1}{8} - \frac{3 \log M_{n,p}}{M_{n,p}^2}\big)M_{n,p}^2} \cdot \frac{s \log p}{n},$$
where $\frac{3 \log M_{n,p}}{M_{n,p}^2}$ is an $o(1)$ term as $n \to \infty$.

5.3 Proofs of Theorems 4 and 5

We proceed to prove Theorems 4 and 5 under the event $\{c_1 n_0 \le n^*_{\min}(S) \le c_2 n_0\}$, which happens with probability at least $1 - p^{-1}$. The results then rely on the following lemma.

Lemma 8. Consider the MCR model and assume that $\hat\mu$, $\hat\Sigma$ are the generalized sample mean and sample covariance matrix respectively. If $c_1 n_0 \le n^*_{\min}(S) \le c_2 n_0$, then with probability at least $1 - p^{-1}$,
$$\sup_{u \in \Gamma(s)} u^\top(\hat\mu - \mu) \lesssim \sqrt{\frac{s \log p}{n_0}}; \qquad \sup_{u, v \in \Gamma(s)} u^\top(\hat\Sigma - \Sigma)v \lesssim \sqrt{\frac{s \log p}{n_0}}.$$
Given Lemma 8, the derivation of Theorem 4 is very similar to the AdaLDA case in Section 5.1, and Theorem 5 can be derived from Theorem 4 by the same logic as in Section 5.2; the details are thus omitted.

5.4 Proofs of the minimax lower bound results (Theorems 6 and 7)

In this section we prove Theorems 6 and 7. We start by providing lemmas that will be used in the proof.

5.4.1 Auxiliary lemmas

The proof of Theorem 6 relies on the following Fano's lemma.

Lemma 9 (Tsybakov (2009)). Suppose $\Theta_p$ is a parameter space containing parameters $\theta_0, \theta_1, \ldots, \theta_M \in \Theta_p$ for some $M > 0$, and $d(\cdot, \cdot): \Theta_p \times \Theta_p \to \mathbb{R}_+$ is some distance. Denote by $P_\theta$ a probability measure parametrized by $\theta$.
If for some constants $\alpha \in (0, 1/8)$ and $\gamma > 0$ we have $\mathrm{KL}(P_{\theta_i}, P_{\theta_0}) \le \alpha \log M / n$ for all $1 \le i \le M$, and $d(\theta_i, \theta_j) \ge \gamma$ for all $0 \le i \ne j \le M$, then
$$\inf_{\hat\theta} \sup_{i \in [M]} \mathbb{E}_{\theta_i}[d(\hat\theta, \theta_i)] \gtrsim \gamma.$$
The proof of Theorem 7, however, is not straightforward, since the excess risk $R_\theta(\hat C) - R_{\mathrm{opt}}(\theta)$ is not a distance as required in Lemma 9. The key step in our proof of Theorem 7 is to reduce the excess risk $R_\theta(\hat C) - R_{\mathrm{opt}}(\theta)$ to $L_\theta(\hat C)$, defined in (16). It suffices to provide a lower bound for $L_\theta(\hat C)$, and $L_\theta(\hat C)$ satisfies an approximate triangle inequality (Lemma 4). Although $L_\theta(\hat C)$ is not a distance function and does not satisfy an exact triangle inequality, the following lemma provides a variant of Fano's lemma.

Lemma 10 (Tsybakov (2009)). Let $M \ge 0$ and $\theta_0, \theta_1, \ldots, \theta_M \in \Theta_p$. For some constants $\alpha_0 \in (0, 1/8]$, $\gamma > 0$, and any classifier $\hat C$, if $\mathrm{KL}(P_{\theta_i}, P_{\theta_0}) \le \alpha_0 \log M / n$ for all $1 \le i \le M$, and $L_{\theta_i}(\hat C) < \gamma$ implies $L_{\theta_j}(\hat C) \ge \gamma$ for all $0 \le i \ne j \le M$, then
$$\inf_{\hat C} \sup_{i \in [M]} P_{\theta_i}\big(L_{\theta_i}(\hat C) \ge \gamma\big) \ge \frac{\sqrt{M}}{\sqrt{M} + 1}\Big(1 - 2\alpha_0 - \sqrt{\frac{2\alpha_0}{\log M}}\Big).$$

Lemma 11 (Tsybakov (2009)). Define $A_{p,s} = \{u : u \in \{0,1\}^p, \|u\|_0 \le s\}$. If $p \ge 4s$, then there exists a subset $\{u_0, u_1, \ldots, u_M\} \subset A_{p,s}$ such that $u_0 = (0, \ldots, 0)^\top$, $\rho_H(u_i, u_j) \ge s/2$, and $\log(M+1) \ge \frac{s}{5}\log(\frac{p}{s})$, where $\rho_H$ denotes the Hamming distance.

5.4.2 Proof of Theorem 6

In this section we prove the lower bound for the estimation of $\beta$. First we construct a subset of the parameter space $\Theta$ that characterizes the hardness of the problem.
By Lemma 11, there exist $u_0, u_1, \ldots, u_M \in A_{p,s} = \{u \in \{0,1\}^p : \|u\|_0 \le s\}$ such that $\rho_H(u_i, u_j) > s/2$ and $\log(M+1) \ge \frac{s}{5}\log(\frac{p}{s})$; denote this collection of $u_i$ by $\tilde A_{p,s}$, and set $u_0 = 0_p$. Since $\frac{\log p}{\log(p/s)} = O(1)$, for sufficiently large $p$ we have $s < p/2$. Define $b_0$ to be the $p$-dimensional vector with the last $s$ entries equal to $\frac{M_{n,p}}{\sqrt{s}}$ and the rest $0$, so that $\|b_0\|_2 = M_{n,p}$. Let $r = \lceil p/2 \rceil$. For $u \in \tilde A_{p,s} = \{u_0, u_1, \ldots, u_M\}$, let $B_u$ be the $p \times p$ symmetric matrix whose $i$-th row and column are both $\epsilon \cdot u_i \cdot \frac{b_0}{M_{n,p}}$ for $i \in \{1, \ldots, r\}$, where $\epsilon$ is to be determined later. The parameter set we consider is
$$\Theta_0 = \big\{\theta = (\mu_1, \mu_2, \Sigma) : \mu_1 = b_0, \mu_2 = -b_0, \Sigma = (I_p + B_u)^{-1}; u \in \tilde A_{p,s} \cup \{0_p\}\big\}.$$
For a given $u$, the corresponding discriminating direction is $\beta_u = -2(I_p + B_u)b_0$, which implies
$$\|\beta_u - \beta_{\tilde u}\|_2^2 = 4\|(B_u - B_{\tilde u})b_0\|_2^2 \ge 4\rho_H(u, \tilde u)\epsilon^2 \|b_0\|_2^2 \ge 2sM_{n,p}^2\epsilon^2.$$
In addition, when $\|B_u\|_2 = o(1)$, for sufficiently large $n$ we have $\Delta = \sqrt{4b_0^\top(I_p + B_u)b_0} \in (M_{n,p}, 3M_{n,p})$, which implies that $\Theta_0 \subset G(s, M_{n,p})$. We then proceed to bound $\mathrm{KL}(P_{\theta_{u_i}}, P_{\theta_{u_0}})$ for $i \in [M]$, where $P_{\theta_{u_i}}$ and $P_{\theta_{u_0}}$ denote the distributions $N_p(b_0, (I_p + B_{u_i})^{-1})$ and $N_p(b_0, I_p)$ respectively. We then have
$$\mathrm{KL}(P_{\theta_{u_i}}, P_{\theta_{u_0}}) = \frac{1}{2}\left[-\log|I_p + B_{u_i}| - p + \mathrm{tr}(I_p + B_{u_i})\right].$$
Note that $\frac{b_0}{M_{n,p}}$ is a unit vector. Take $\epsilon$ such that $\|B_{u_i}\|_2 \le \|B_{u_i}\|_F \le \sqrt{2s\epsilon^2} = o(1)$, and denote the eigenvalues of $I_p + B_{u_i}$ by $1 + \Delta_{\lambda_1}, \ldots, 1 + \Delta_{\lambda_p}$ with $\Delta_{\lambda_j} = o(1)$.
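The Gaussian KL formula just stated is driven entirely by the eigenvalues of $B_{u_i}$, and the next step of the argument approximates it by $\frac{1}{4}\|B_u\|_F^2$. The short sketch below (an illustrative numerical check with made-up dimensions, not part of the proof) evaluates the formula exactly and confirms the second-order approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 50
# A small random symmetric perturbation B with ||B||_2 = o(1).
B = rng.normal(scale=1e-3, size=(p, p))
B = (B + B.T) / 2

# (1/2)[-log|I+B| - p + tr(I+B)] = (1/2) * sum(d - log(1+d)) over eigenvalues d of B.
d = np.linalg.eigvalsh(B)
kl_exact = 0.5 * np.sum(d - np.log1p(d))

# Second-order Taylor: log(1+x) = x - x^2/2 + O(x^3), so KL ~ (1/4)||B||_F^2.
kl_approx = 0.25 * np.sum(d ** 2)

assert kl_exact > 0
assert abs(kl_exact - kl_approx) / kl_approx < 0.05   # agreement up to o(1) relative error
```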
We then have
$$\mathrm{KL}(P_{\theta_{u_i}}, P_{\theta_{u_0}}) = \frac{1}{2}\Big[-\sum_{j=1}^p \log(1 + \Delta_{\lambda_j}) - p + \sum_{j=1}^p (1 + \Delta_{\lambda_j})\Big] \asymp \frac{1}{4}\sum_{j=1}^p \Delta_{\lambda_j}^2 = \frac{1}{4}\|B_u\|_F^2 \le \frac{1}{2}s\epsilon^2,$$
where we use the fact that $\log(1+x) \asymp x - \frac{x^2}{2}$ when $x = o(1)$. Now let $\epsilon = \frac{1}{5\sqrt{2}}\sqrt{\frac{\log p}{n}}$; then $\mathrm{KL}(P_{\theta_{u_i}}, P_{\theta_{u_0}}) \le \alpha \log M / n$ for $\alpha = 1/8$. In addition, let $\gamma = \frac{1}{10}M_{n,p}\sqrt{\frac{s \log p}{n}}$. Then for $0 \le i \ne j \le M$ and any $\hat\beta \in \mathbb{R}^p$ such that $\|\hat\beta - \beta_{u_i}\|_2 \le \gamma$, we have
$$\|\hat\beta - \beta_{u_j}\|_2 \ge \|\beta_{u_i} - \beta_{u_j}\|_2 - \|\hat\beta - \beta_{u_i}\|_2 \ge \frac{1}{5}M_{n,p}\sqrt{\frac{s \log p}{n}} - \frac{1}{10}M_{n,p}\sqrt{\frac{s \log p}{n}} = \frac{1}{10}M_{n,p}\sqrt{\frac{s \log p}{n}} = \gamma.$$
Then by Fano's lemma (Lemma 9), we have $\inf_{\hat\beta} \sup_{i \in [M]} \mathbb{E}\|\hat\beta - \beta_{u_i}\|_2 \gtrsim M_{n,p}\sqrt{\frac{s \log p}{n}}$.

For the incomplete-data case with $n_0 \ge 1$, we consider a special pattern of missingness $S_0$: $(S_0)_{ij} = \mathbf{1}\{1 \le i \le n_0, 1 \le j \le p\}$ with probability $1$. Under this missingness pattern, $n^*_{\min} = n_0$ with probability $1$, and the problem essentially becomes the complete-data problem with $n_0$ samples, which implies
$$\inf_{\hat\beta} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ S \in \Psi(n_0; n, p)}} \mathbb{E}[\|\hat\beta - \beta\|_2] \gtrsim M_{n,p}\sqrt{\frac{s \log p}{n_0}}.$$

5.4.3 Proof of Theorem 7

We proceed by applying Lemma 10 to obtain the minimax lower bound for the excess misclassification error. We first construct a subset of the parameter space $\Theta$ that characterizes the hardness of the problem. Let $e_1$ be the basis vector in the standard Euclidean space whose first entry is $1$ and which is zero elsewhere. By Lemma 11, there exist $u_1, \ldots, u_M \in \check A_{p,s} = \{u \in \{0,1\}^p : u^\top e_1 = 0, \|u\|_0 = s\}$ such that $\rho_H(u_i, u_j) > s/2$ and $\log(M+1) \ge \frac{s}{5}\log(\frac{p-1}{s})$.
Note that the first entry of $u_j$ is $0$ for all $j = 1, \ldots, M$. Define the parameter space
$$\Theta_1 = \big\{\theta = (\mu_1, \mu_2, \Sigma) : \mu_1 = \epsilon u + \lambda e_1, \mu_2 = -\mu_1, \Sigma = \sigma^2 I_p; u \in \check A_{p,s}\big\},$$
where $\epsilon = \sigma\sqrt{\log p / n}$, $\sigma^2 = O(1)$, and $\lambda$ is chosen to ensure $\theta \in G(s, M_{n,p})$, namely such that
$$(\mu_1 - \mu_2)^\top \Sigma^{-1}(\mu_1 - \mu_2) = \frac{4\|\epsilon u + \lambda e_1\|_2^2}{\sigma^2} = M_{n,p}^2.$$
To apply Lemma 10, we need to verify two conditions: (i) the upper bound on the KL divergence between $P_{\theta_u}$ and $P_{\theta_v}$, and (ii) the lower bound on $L_{\theta_u}(\hat C) + L_{\theta_v}(\hat C)$ for $u \ne v \in \check A_{p,s}$. We calculate the KL divergence first. For $u \in \check A_{p,s}$, denote $\mu_u = \epsilon u + \lambda e_1$. For $\theta_u = (\mu_u, -\mu_u, \sigma^2 I_p) \in \Theta_1$, we consider the distribution $N_p(\mu_u, \sigma^2 I_p)$. Then the KL divergence between $P_{\theta_u}$ and $P_{\theta_v}$ can be bounded by
$$\mathrm{KL}(P_{\theta_u}, P_{\theta_v}) \le \frac{1}{2}\|\mu_u - \mu_v\|_2^2 \le \sigma^2 \cdot \frac{s \log p}{n}. \tag{22}$$
In addition, by applying Lemma 4, we have that for any $u, v \in \check A_{p,s}$,
$$L_{\theta_u}(\hat C) + L_{\theta_v}(\hat C) \gtrsim \frac{1}{M_{n,p}}e^{-M_{n,p}^2/8}\sqrt{\frac{s \log p}{n}}.$$
So far we have verified the aforementioned conditions (i) and (ii). Lemma 10 immediately implies that there is some $C_\alpha \ge 0$ such that
$$\inf_{\hat C} \sup_{\theta \in G(s, M_{n,p})} P\Big(L_\theta(\hat C) \ge C_\alpha \frac{1}{M_{n,p}}e^{-M_{n,p}^2/8}\sqrt{\frac{s \log p}{n}}\Big) \ge 1 - \alpha. \tag{23}$$
Finally, combining (23) with Lemma 3, we obtain the desired lower bound for the excess misclassification error:
$$\inf_{\hat C} \sup_{\theta \in G(s, M_{n,p})} P\Big(R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha \frac{1}{M_{n,p}}e^{-M_{n,p}^2/8}\frac{s \log p}{n}\Big) \ge 1 - \alpha.$$
For the missing-data case, we consider the same missingness pattern $S_0$ as described in Section 5.4.2, with $n_{\min} = n_0$.
Then we have
$$\inf_{\hat C} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ S \in \Psi(n_0; n, p)}} P\Big(R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha \frac{1}{M_{n,p}}e^{-M_{n,p}^2/8}\frac{s \log p}{n_0}\Big) \ge 1 - \alpha.$$
This implies the following:

1. If $M_{n,p} \le C_b$ for some $C_b > 0$, then
$$\inf_{\hat C} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\Big(R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha e^{-\frac{1}{8}M_{n,p}^2} \cdot \frac{s \log p}{n_0}\Big) \ge 1 - \alpha.$$

2. If $M_{n,p} \to \infty$ as $n \to \infty$, then for any $\delta > 0$,
$$\inf_{\hat C} \sup_{\substack{\theta \in G(s, M_{n,p}) \\ F \in \Psi(n_0; n, p)}} P\Big(R_\theta(\hat C) - R_{\mathrm{opt}}(\theta) \ge C_\alpha e^{-(\frac{1}{8} + \delta)M_{n,p}^2} \cdot \frac{s \log p}{n_0}\Big) \ge 1 - \alpha.$$
" + }, + { + "url": "http://arxiv.org/abs/1801.00518v1", + "title": "Statistical and Computational Limits for Sparse Matrix Detection", + "abstract": "This paper investigates the fundamental limits for detecting a\nhigh-dimensional sparse matrix contaminated by white Gaussian noise from both\nthe statistical and computational perspectives. We consider $p\times p$\nmatrices whose rows and columns are individually $k$-sparse. We provide a tight\ncharacterization of the statistical and computational limits for sparse matrix\ndetection, which precisely describe when achieving optimal detection is easy,\nhard, or impossible, respectively. Although the sparse matrices considered in\nthis paper have no apparent submatrix structure and the corresponding\nestimation problem has no computational issue at all, the detection problem has\na surprising computational barrier when the sparsity level $k$ exceeds the\ncubic root of the matrix size $p$: attaining the optimal detection boundary is\ncomputationally at least as hard as solving the planted clique problem.\n The same statistical and computational limits also hold in the sparse\ncovariance matrix model, where each variable is correlated with at most $k$\nothers.
A key step in the construction of the statistically optimal test is a\nstructural property for sparse matrices, which can be of independent interest.", "authors": "T. Tony Cai, Yihong Wu", "published": "2018-01-01", "updated": "2018-01-01", "primary_cat": "math.ST", "cats": [ "math.ST", "cs.IT", "math.IT", "stat.TH" ], "main_content": "Contents: 1 Introduction (1.1 Setup; 1.2 Statistical and computational limits; 1.3 Related work; 1.4 Organization and notations); 2 Main results; 3 Test procedures and upper bounds (3.1 A structural property of sparse matrices; 3.2 Highly sparse regime; 3.3 Moderately sparse regime); 4 Minimax lower bound (4.1 General strategy; 4.2 Least favorable prior; 4.3 Key lemmas; 4.4 Proof of Theorem 1: lower bound); 5 Detecting sparse covariance matrices (5.1 Test procedures and upper bounds; 5.2 Proof of the lower bound); 6 Computational limits; 7 Discussions (7.1 Alternative hypothesis defined by the Frobenius norm; 7.2 Localization and denoising).

* T.T. Cai is with the Statistics Department, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104, USA, tcai@wharton.upenn.edu. The research of T.T. Cai was supported in part by NSF Grant DMS-1712735 and NIH Grant R01 GM-123056. Y. Wu is with the Department of Statistics and Data Science, Yale University, New Haven, CT, yihong.wu@yale.edu. The research of Y. Wu was supported in part by NSF Grants IIS-1447879 and CCF-1527105, and an NSF CAREER award CCF-1651588.

1 Introduction

The problem of detecting sparse signals arises frequently in a wide range of fields and has been particularly well studied in the Gaussian sequence setting (cf. the monograph [38]). For example, detection of unstructured sparse signals under the Gaussian mixture model was studied in [37, 25] for the homoskedastic case and in [14] for the heteroscedastic case, where sharp detection boundaries were obtained and adaptive detection procedures were proposed. Optimal detection of structured signals in the Gaussian noise model has also been investigated in [7, 6, 19]. One common feature of these vector detection problems is that the optimal statistical performance can always be achieved by computationally efficient procedures such as thresholding or convex optimization. Driven by contemporary applications, much recent attention has been devoted to inference for high-dimensional matrices, including covariance matrix estimation, principal component analysis (PCA), image denoising, and multi-task learning, all of which rely on detecting or estimating high-dimensional matrices with low-dimensional structures such as low-rankness or sparsity.
For a suite of matrix problems, including sparse PCA [10], biclustering [9, 45, 52, 15], sparse canonical correlation analysis (CCA) [30], and community detection [32], a new phenomenon known as computational barriers has recently been discovered: in certain regimes, attaining the statistical optimum is computationally intractable unless the planted clique problem can be solved efficiently. (The planted clique problem [2] refers to detecting or locating a clique of size $o(\sqrt{n})$ planted in the Erdős-Rényi graph $G(n, 1/2)$. Conjectured to be computationally intractable [39, 28], this problem has frequently been used as a basis for quantifying the hardness of average-case problems [35, 1].) In a nutshell, the source of computational difficulty in the aforementioned problems is their submatrix sparsity, where the signal of interest is concentrated on a submatrix within a large noisy matrix. This combinatorial structure provides a direct connection to, and allows these matrix problems to be reduced in polynomial time from, the planted clique problem, thereby creating computational gaps not only for detection but also for support recovery and estimation. In contrast, another sparsity structure for matrices postulates that the rows and columns are individually sparse; this has been well studied in covariance matrix estimation [12, 40, 20, 26]. The motivation is that in many real-data applications each variable is correlated with only a few others. Consequently, each row and each column of the covariance matrix are individually sparse but, unlike in sparse PCA, biclustering, or group-sparse regression, their support sets need not be aligned. Therefore this sparsity model does not postulate any submatrix structure of the signal; indeed, it has been shown for covariance matrix estimation that entrywise thresholding of the sample covariance matrix, proposed in [12], attains the minimax estimation rate [20].
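The entrywise thresholding estimator mentioned above is a one-liner to sketch. The snippet below is our illustrative implementation (the threshold constant is hypothetical; [12, 20] specify it only up to universal factors), checked on a simulated covariance with a single off-diagonal correlation.

```python
import numpy as np

def threshold_cov(X, c=3.0):
    """Entrywise thresholding of the sample covariance at level c*sqrt(log p / n)."""
    n, p = X.shape
    S = X.T @ X / n
    S_th = S * (np.abs(S) > c * np.sqrt(np.log(p) / n))
    np.fill_diagonal(S_th, np.diag(S))    # keep the diagonal untouched
    return S_th

rng = np.random.default_rng(6)
n, p, rho = 4000, 50, 0.4
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = rho           # a single off-diagonal correlation
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S_th = threshold_cov(X)

off = ~np.eye(p, dtype=bool)
support = np.abs(S_th[off]) > 0
true_support = np.abs(Sigma[off]) > 0
assert (support == true_support).all()    # exact support recovery in this easy regime
```

The point of the paper is that, although this estimator has no computational issue, the corresponding detection problem does.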
The focus of the present paper is to understand the fundamental limits of detecting sparse matrices from both the statistical and computational perspectives. While achieving the optimal estimation rate does not suffer from any computational barrier, it turns out that the detection counterpart does, when and only when the sparsity level exceeds the cubic root of the matrix size. This is perhaps surprising because the sparsity model itself does not explicitly enforce any submatrix structure, which has been responsible for problems such as sparse PCA being reducible from the planted clique problem. Our main result is a tight characterization of the statistical and computational limits of detecting sparse matrices in both the Gaussian noise model and the covariance matrix model, which precisely describes when achieving optimal detection is easy, hard, and impossible, respectively.

1.1 Setup

We start by formally defining the sparse matrix model.

Definition 1. We say a $p \times p$ matrix $M$ is $k$-sparse if all of its rows and columns are $k$-sparse vectors, i.e., have no more than $k$ non-zeros. Formally, denote the $i$th row of $M$ by $M_{i*}$ and the $i$th column by $M_{*i}$. The parameter set
$$\mathcal{M}(p, k) = \{M \in \mathbb{R}^{p \times p} : \|M_{i*}\|_0 \le k, \|M_{*i}\|_0 \le k, \forall i \in [p]\} \tag{1}$$
denotes the collection of all $k$-sparse $p \times p$ matrices, where $\|x\|_0 \triangleq \sum_{i \in [p]} \mathbf{1}\{x_i \ne 0\}$ for $x \in \mathbb{R}^p$.

Consider the following "signal + noise" model, where we observe a sparse matrix contaminated with Gaussian noise:
$$X = M + Z, \tag{2}$$
where $M$ is a $p \times p$ unknown mean matrix and $Z$ consists of i.i.d. entries normally distributed as $N(0, \sigma^2)$. Without loss of generality, we shall assume that $\sigma = 1$ throughout the paper. Given the noisy observation $X$, the goal is to test whether the mean matrix is zero or a $k$-sparse nonzero matrix, with signal strength measured in the spectral norm.
Formally, we consider the following hypothesis testing problem:
$$H_0: M = 0 \quad \text{versus} \quad H_1: \|M\|_2 \ge \lambda, \ M \text{ is } k\text{-sparse}, \tag{3}$$
where the mean matrix $M$ belongs to the parameter space
$$\Theta(p, k, \lambda) = \{M \in \mathbb{R}^{p \times p} : M \in \mathcal{M}(p, k), \|M\|_2 \ge \lambda\}. \tag{4}$$
Here we use the spectral norm $\|\cdot\|_2$, namely the largest singular value, to measure the signal strength under the alternative hypothesis. It turns out that if we use the Frobenius norm to define the alternative hypothesis, the sparsity structure does not help detection, in the sense that the minimal $\lambda$ required to detect $1$-sparse matrices is within a constant factor of that in the non-sparse case; furthermore, the matrix problem collapses to its vector version (see Section 7.1 for details).

For the covariance model, the counterpart of the detection problem (4) is the following. Consider the Gaussian covariance model, where we observe $n$ independent samples drawn from the $p$-variate normal distribution $N(0, \Sigma)$ with an unknown covariance matrix $\Sigma$. In the sparse covariance matrix model, each coordinate is correlated with at most $k$ others; therefore each row of the covariance matrix $\Sigma$ has at most $k$ non-zero off-diagonal entries. This motivates the following detection problem:
$$H_0: \Sigma = I \quad \text{versus} \quad H_1: \|\Sigma - I\|_2 \ge \lambda, \ \Sigma - I \text{ is } k\text{-sparse}. \tag{5}$$
Under the null hypothesis, the samples are pure noise; under the alternative, there exists at least one significant factor and the entire covariance matrix is $k$-sparse. The goal is to determine the smallest $\lambda$ so that the factor can be detected from the samples.
1.2 Statistical and computational limits

For ease of exposition, let us focus on the additive Gaussian noise model and consider the following asymptotic regime, wherein the sparsity and the signal level grow polynomially in the dimension: $k = p^\alpha$ and $\lambda = p^\beta$, with $\alpha \in [0, 1]$ and $\beta > 0$ held fixed and $p \to \infty$. Theorem 1 in Section 2 implies that the critical exponent of $\lambda$ behaves according to the following piecewise linear function:
$$\beta^* = \begin{cases} \alpha, & \alpha \le \frac{1}{3}, \\ \frac{1+\alpha}{4}, & \alpha \ge \frac{1}{3}, \end{cases}$$
in the sense that if $\beta > \beta^*$, there exists a test that achieves vanishing probability of detection error uniformly over all $k$-sparse matrices; conversely, if $\beta < \beta^*$, no test can outperform random guessing asymptotically.

[Figure 1: Statistical and computational limits in detecting sparse matrices. The phase diagram of $\alpha$ versus $\beta$ shows four regions: impossible, planted-clique hard, thresholding + spectrum, and spectrum.]

More precisely, as shown in Figure 1, the phase diagram of $\alpha$ versus $\beta$ is divided into four regimes:

(I) $\beta > \alpha$: The test based on the largest singular value of the entrywise thresholding estimator succeeds. In particular, we reject if $\|X^{\mathrm{Th}}\|_2 \gtrsim k\sqrt{\log p}$, where $X^{\mathrm{Th}}_{ij} = X_{ij}\mathbf{1}\{|X_{ij}| = \Omega(\sqrt{\log p})\}$.

(II) $\beta > \frac{1}{2}$: The test based on the largest singular value of the direct observation succeeds. In particular, we reject if $\|X\|_2 \gtrsim \sqrt{p}$.

(III) $\frac{1+\alpha}{4} < \beta < \alpha \wedge \frac{1}{2}$: detection is as hard as solving the planted clique problem.

(IV) $\beta < \alpha \wedge \frac{1+\alpha}{4}$: detection is information-theoretically impossible.
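Tests (I) and (II) are both one-line procedures on the observed matrix. The following sketch is our illustrative implementation with hypothetical constants (the paper specifies the thresholds only up to universal factors); it simulates the null and a $k$-sparse alternative of the block form used above.

```python
import numpy as np

def thresholding_test(X, k, c1=2.0, c2=2.0):
    """Test (I): zero out small entries, then reject for a large spectral norm."""
    p = X.shape[0]
    X_th = X * (np.abs(X) > c1 * np.sqrt(np.log(p)))
    return np.linalg.norm(X_th, 2) > c2 * k * np.sqrt(np.log(p))

def spectral_test(X, c=3.0):
    """Test (II): reject when the largest singular value exceeds c*sqrt(p)."""
    return np.linalg.norm(X, 2) > c * np.sqrt(X.shape[0])

rng = np.random.default_rng(3)
p, k = 400, 5
Z = rng.normal(size=(p, p))                   # null: pure noise
# Alternative: a k-sparse mean matrix (a k x k constant block), so every row
# and column has at most k nonzeros and the spectral norm equals lam.
lam = 4 * k * np.sqrt(np.log(p))
M = np.zeros((p, p))
M[:k, :k] = lam / k
X = M + rng.normal(size=(p, p))

assert not thresholding_test(Z, k)            # accept under the null
assert thresholding_test(X, k)                # reject under this alternative
assert not spectral_test(Z)                   # the plain spectral test also accepts noise
```

Here $\lambda \approx 4k\sqrt{\log p}$ is detectable by test (I) but, since $\lambda \ll \sqrt{p}$, not by test (II), illustrating why the two tests cover different regimes.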
As mentioned earlier, the computational intractability in detecting sparse matrices is perhaps surprising because (a) achieving the optimal estimation rate does not present any computational difficulty; and (b) unlike problems such as sparse PCA, the sparse matrix model in Definition 1 does not explicitly impose any submatrix sparsity pattern, as the rows are individually sparse and need not share a common support. The result in Figure 1 shows that in the moderately sparse regime of $p^{1/3} \ll k \ll p$, outperforming entrywise thresholding is at least as hard as solving planted clique. However, it is possible to improve over entrywise thresholding using computationally inefficient tests. We briefly describe the construction of the optimal test. The first stage is a standard $\chi^2$-test, which rejects the null hypothesis if the mean matrix $M$ has a large Frobenius norm. Under the alternative, if the data survive this test, meaning $\|M\|_F$ is small, then $M$ has small stable rank (i.e., $\|M\|_F^2/\|M\|_2^2$), thanks to the assumption that $\|M\|_2$ is large. The key observation is that for sparse matrices with small stable rank there exists a sparse approximate singular vector $v$, in the sense that $\|Mv\| \gtrsim \|M\|_2 \|v\|$. Then in the second stage we perform a scan test designed in a similar spirit to those for detecting submatrices or sparse principal components. The key structural property of sparse matrices is established using a celebrated result of Rudelson and Vershynin [49] in randomized numerical linear algebra, which shows that the Gram matrix of any matrix $M$ of low stable rank can be approximated by that of a small submatrix of $M$. This shows the existence of a sparse approximate singular vector by means of the probabilistic method, but it does not provide a constructive way to find it, which, as Figure 1 suggests, is likely to be computationally intractable.
To conclude this part, we mention that the same statistical and computational limits in Figure 1 also apply to detecting sparse covariance matrices when $\lambda$ is replaced by $\lambda\sqrt{n}$, under appropriate assumptions on the sample size; see Section 6 for details.

1.3 Related work

As opposed to the vector case, there exist various notions of sparsity for matrices, motivated by specific applications, such as:

• Vector sparsity: the total number of nonzeros in the matrix is constrained [21], e.g., in robust PCA.

• Row sparsity: each row of the matrix is sparse, e.g., in matrix denoising [41].

• Group sparsity: each row of the matrix is sparse and shares a common support, e.g., in group-sparse regression [44].

• Submatrix sparsity: the matrix is zero except for a small submatrix, e.g., in sparse PCA [11, 17], biclustering [13, 9, 45, 52], sparse SVD [53], sparse CCA [30], and community detection [33].

The sparse matrix model (Definition 1) studied in this paper is stronger than vector or row sparsity and weaker than submatrix sparsity. The statistical and computational aspects of detecting matrices with submatrix sparsity have been investigated in the literature for the Gaussian mean, covariance, and Bernoulli models. In particular, for the spiked covariance model, where the leading singular vector is assumed to be sparse, the optimal detection rate was obtained in [11, 18]. Detecting submatrices in additive Gaussian noise was studied by Butucea and Ingster [13], who not only found the optimal rate but also determined the sharp constants. In the random graph (Bernoulli) setting, the problem of detecting the presence of a small denser community planted in an Erdős-Rényi graph was studied in [8]; here the entry of the mean adjacency matrix is $p$ on a small submatrix and $q < p$ everywhere else.
The computational lower bounds in all three models were established in [10, 45, 33] by means of reduction to the planted clique problem. Another line of work closely related to the present paper is [3, 4], where the goal is to detect covariance matrices with sparse correlation. Specifically, in the $n$-sample Gaussian covariance model, the null hypothesis is the identity covariance matrix, and the alternative hypothesis consists of covariance matrices whose off-diagonal entries are equal to a positive constant on a submatrix and zero otherwise. Assuming various combinatorial structures on the support set, the optimal tradeoff between the sample size, dimension, sparsity, and correlation level has been studied. Other work on testing high-dimensional covariance matrices that does not assume sparse alternatives includes testing independence and sphericity, with specific focus on asymptotic power analysis and the limiting distribution of test statistics [22, 16, 47, 48]. Finally, we mention yet another two-dimensional detection problem in Gaussian noise [5], where the sparse alternative corresponds to paths in a large graph.

1.4 Organization and notations

We introduce the main notation used in this paper. For any sequences $\{a_n\}$ and $\{b_n\}$ of positive numbers, we write $a_n \gtrsim b_n$ if $a_n \ge c b_n$ holds for all $n$ and some absolute constant $c > 0$, $a_n \lesssim b_n$ if $b_n \gtrsim a_n$, and $a_n \asymp b_n$ if both $a_n \gtrsim b_n$ and $a_n \lesssim b_n$ hold. In addition, we use $\asymp_k$ to indicate that the constant depends only on $k$. For any $q \in [1, \infty]$, the $\ell_q \to \ell_q$ induced operator norm of a matrix $M$ is defined as $\|M\|_q \triangleq \max_{\|x\|_{\ell_q} \le 1} \|Mx\|_{\ell_q}$. In particular, $\|M\|_2$ is the spectral norm, i.e., the largest singular value of $M$, and $\|M\|_1$ (resp. $\|M\|_\infty$) is the largest $\ell_1$-norm of the columns (resp. rows) of $M$.
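These induced norms have simple closed forms (maximum column or row $\ell_1$ sum, and largest singular value), which a small sanity check makes concrete; `numpy`'s matrix-norm `ord` conventions match the definitions above. The random matrix below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6))

norm_1 = np.abs(M).sum(axis=0).max()             # largest column l1 norm
norm_inf = np.abs(M).sum(axis=1).max()           # largest row l1 norm
norm_2 = np.linalg.svd(M, compute_uv=False)[0]   # largest singular value

assert np.isclose(norm_1, np.linalg.norm(M, 1))
assert np.isclose(norm_inf, np.linalg.norm(M, np.inf))
assert np.isclose(norm_2, np.linalg.norm(M, 2))
# Schur-type interpolation bound: ||M||_2^2 <= ||M||_1 * ||M||_inf.
assert norm_2 ** 2 <= norm_1 * norm_inf + 1e-12
```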
For any $p \times p$ matrix $M$ and $I, J \subset [p]$, let $M_{IJ}$ denote the submatrix $(M_{ij})_{i \in I, j \in J}$. Let $\mathbf{I}$ and $\mathbf{J}$ denote the identity and the all-one matrix, respectively, and let $\mathbf{1}$ denote the all-one vector. Let $\mathcal{S}_p$ denote the set of $p \times p$ positive-semidefinite matrices.

The rest of the paper is organized as follows. Section 2 presents the main results of the paper in terms of the minimax detection rates for both the Gaussian noise model and the covariance matrix model. Minimax upper bounds together with the testing procedures for the mean model are presented in Section 3 and shown to be optimal by the lower bounds in Section 4; in particular, Section 3.1 introduces a structural property of sparse matrices which underpins the optimal tests in the moderately sparse regime. Results for the covariance model are given in Section 5 together with additional proofs. Section 6 discusses the computational aspects and explains how to deduce the computational limit in Figure 1 from that of submatrix detection and sparse PCA. Section 7 concludes the paper with a discussion of related problems.

2 Main results

We begin with the Gaussian noise model. To quantify the fundamental limit of the hypothesis testing problem (3), we define $\epsilon^*(p, k, \lambda)$ as the optimal sum of Type-I and Type-II error probabilities:
$$\epsilon^*(p, k, \lambda) = \inf_\phi \Big\{ P_0(\phi = 1) + \sup_{M \in \Theta(p, k, \lambda)} P_M(\phi = 0) \Big\}, \tag{6}$$
where $P_M$ denotes the distribution of the observation $X = M + Z$ conditioned on the mean matrix $M$, and the infimum is taken over all decision rules $\phi: \mathbb{R}^{p \times p} \to \{0, 1\}$. Our main result is a tight characterization of the optimal detection threshold for $\lambda$.
De\ufb01ne the following upper and lower bounds, which di\ufb00er by at most a factor of O \u0010q log p log log p \u0011 : \u03bb1(k, p) \u225c ( k\u221alog p k \u2264( p log p) 1 3 \u0000kp log ep k \u0001 1 4 k \u2265( p log p) 1 3 (7) and \u03bb0(k, p) \u225c \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 k r log \u0010 p log p k3 \u0011 k \u2264(p log p) 1 3 \u0000kp log ep k \u0001 1 4 k \u2265(p log p) 1 3 . (8) Theorem 1 (Gaussian noise model). There exists absolute constant k0, c0, c1, such that the following holds for all k0 \u2264k \u2264p: 1. For any c > c1, if \u03bb \u2265c\u03bb1(k, p), (9) then \u01eb\u2217(k, p, \u03bb) \u2264\u01eb1(c), where \u01eb1(c) \u21920 as c \u2192\u221e. 2. Conversely, for any c > c0, if \u03bb \u2264c\u03bb0(p, k), (10) then \u01eb\u2217(k, p, \u03bb) \u2265\u01eb0(c) \u2212op\u2192\u221e(1), where \u01eb0(c) \u21921 as c \u21920. To parse the result of Theorem 1, let us denote by \u03bb\u2217(p, k) the optimal detection threshold, i.e., the minimal value of \u03bb so that the optimal probability of error \u01eb\u2217(p, k, \u03bb) is at most a constant, say, 0.1. Then we have the following characterization: \u2022 High sparsity: k \u2264p1/3\u2212\u03b4: \u03bb\u2217\u224d\u03b4 k p log p \u2022 Moderate sparsity: k \u2273(p log p)1/3: \u03bb\u2217\u224d \u0010 kp log ep k \u0011 1 4 \u2022 Boundary case: ( p log p)1/3 \u2272k \u2272(p log p)1/3: k r log ep log p k3 \u2272\u03bb\u2217\u2272 \u0010 kp log ep k \u0011 1 4 , where the upper and lower bounds are within a factor of O \u0010q log p log log p \u0011 . 7 \fFurthermore, two generalizations of Theorem 1 will be evident from the proof: (a) the upper bound in Theorem 1 as well as the corresponding optimal tests apply as long as the noise matrix consists of independent entries with subgaussian distribution with constant proxy variance; (b) the lower bound in Theorem 1 continues to hold up even if the mean matrix is constrained to be symmetric. 
Thus, symmetry does not improve the minimax detection rate.

Next we turn to the sparse covariance model. Given $n$ independent samples drawn from $N(0,\Sigma)$, the goal is to test the hypotheses
$$H_0: \Sigma = \mathbf{I} \quad \text{versus} \quad H_1: \Sigma \in \Xi(p,k,\lambda,\tau),$$
where the parameter space of sparse covariance matrices is
$$\Xi(p,k,\lambda,\tau) = \{\Sigma \in \mathcal{S}_p : \Sigma \in \mathcal{M}(p,k),\ \|\Sigma - \mathbf{I}\|_2 \ge \lambda,\ \|\Sigma\| \le \tau\}. \qquad (11)$$
In other words, under the alternative, the covariance is equal to the identity plus a sparse perturbation. Throughout the paper, the parameter $\tau$ is assumed to be a constant. Define the minimax probability of error as
$$\epsilon_n^*(p,k,\lambda) = \inf_\phi \Big\{ P_{\mathbf{I}}(\phi = 1) + \sup_{\Sigma \in \Xi(p,k,\lambda,\tau)} P_\Sigma(\phi = 0) \Big\} \qquad (12)$$
where $\phi \in \{0,1\}$ is a function of the samples $(X_1,\dots,X_n) \overset{i.i.d.}{\sim} N(0,\Sigma)$. Analogous to Theorem 1, the next result characterizes the optimal detection threshold for sparse covariance matrices.

Theorem 2 (Covariance model). There exist absolute constants $k_0, C, c_0, c_1$ such that the following holds for all $k_0 \le k \le p$.
1. Assume that $n \ge C\log p$. For any $c > c_1$, if $\lambda \ge \frac{c}{\sqrt{n}}\lambda_1(k,p)$, (13) then $\epsilon_n^*(k,p,\lambda) \le \epsilon_1(c)$, where $\epsilon_1(c) \to 0$ as $c \to \infty$.
2. Assume that $n \ge C\lambda_0(p,k)^2\log p$ (14) and
$$n \ge C \cdot \begin{cases} \frac{k^6}{p}\big(\frac{p}{k^3}\big)^{2\delta}\log^2 p & k \le p^{1/3} \\ p & k \ge p^{1/3}, \end{cases} \qquad (15)$$
where $\delta$ is any constant in $(0, \frac{2}{3}]$. If $\lambda \le \frac{c}{\sqrt{n}}\lambda_0(k,p)$, (16) then $\epsilon^*(k,p,\lambda) \ge \epsilon_0(c) - o_{p\to\infty}(1)$, where $\epsilon_0(c) \to 1$ as $c \to 0$.

In comparison with Theorem 1, we note that the rate-optimal lower bound in Theorem 2 holds under the assumption that the sample size is sufficiently large.
In particular, the condition (14) is very mild because, by the assumption that $\|\Sigma\|_2$ is at most a constant, in order for the right-hand side of (16) to be bounded, it is necessary to have $n \ge \lambda_0(p,k)^2$. The extra assumption (15), when $k \ge p^{1/4}$, does impose a non-trivial constraint on the sample size. This assumption is due to the current lower bound technique based on the $\chi^2$-divergence. In fact, the lower bound in [16] for testing covariance matrices without sparsity uses the same method and also requires $n \gtrsim p$.

The results of Theorems 1 and 2 also demonstrate the phenomenon of separation between detection and estimation, which is well known in the Gaussian sequence model. The minimax estimation of sparse matrices has been systematically studied by Cai and Zhou [20] in the covariance model, where it is shown that entrywise thresholding achieves the minimax rate $k\sqrt{\frac{\log p}{n}}$ in the spectral norm loss provided that $n \gtrsim k^2\log^3 p$ and $\log n \lesssim \log p$; the similar rate $k\sqrt{\log p}$ also holds for the Gaussian noise model. In view of this result, an interesting question is whether a "plug-in" approach to testing, namely, using the spectral norm of the minimax estimator as the test statistic, achieves the optimal detection rate. This method is indeed optimal in the very sparse regime $k \ll p^{1/3}$, but fails to achieve the optimal detection rate in the moderately sparse regime $k \gg p^{1/3}$, which, in turn, can be attained by a computationally intensive test procedure. This observation should also be contrasted with the behavior in the vector case. To detect the presence of a $k$-sparse $p$-dimensional vector in Gaussian noise, entrywise thresholding, which is the optimal estimator for all sparsity levels, achieves the minimax detection rate in $\ell_2$-norm when $k \ll \sqrt{p}$, while the $\chi^2$-test, which disregards sparsity, is optimal when $k \gg \sqrt{p}$.
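The plug-in approach discussed above, thresholding entrywise and then taking the spectral norm, can be simulated in the mean model; the following numpy sketch is illustrative (the constants mirror Theorem 4 in Section 3.2, while the planted alternative is our own choice):

```python
import numpy as np

def threshold_test(X, tau, lam):
    # entrywise thresholding followed by a spectral-norm test,
    # in the spirit of the plug-in approach discussed above
    M_hat = X * (np.abs(X) >= tau)
    return np.linalg.norm(M_hat, 2) >= lam

rng = np.random.default_rng(1)
p, k, eps = 200, 3, 0.05
tau = np.sqrt(2 * np.log(4 * p ** 2 / eps))  # threshold level, as in Theorem 4
lam = 2 * k * tau                            # detection level, as in Theorem 4

Z = rng.standard_normal((p, p))              # null: pure noise
M = np.zeros((p, p))
M[:k, :k] = 2 * lam / k                      # alternative: k x k block, ||M||_2 = 2*lam
print(threshold_test(Z, tau, lam), threshold_test(M + Z, tau, lam))
```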
3 Test procedures and upper bounds

In this section we consider the two sparsity regimes separately and design the corresponding rate-optimal testing procedures. In the highly sparse regime $k \lesssim \big(\frac{p}{\log p}\big)^{1/3}$, tests based on componentwise thresholding turn out to achieve the optimal rate of detection. In the moderately sparse regime $k \gtrsim \big(\frac{p}{\log p}\big)^{1/3}$, the $\chi^2$-test combined with the approximate singular vector property in Section 3.1 is optimal.

3.1 A structural property of sparse matrices

Before we proceed to the construction of the rate-optimal tests, we first present a structural property of sparse matrices, which may be of independent interest. Recall that a matrix $M$ is $k$-sparse in the sense of Definition 1 if its rows and columns are sparse but need not have a common support. If we further know that $M$ has low rank, then the row support sets must be highly aligned, and therefore $M$ has a sparse eigenvector. The main result of this section is an extension of this result to approximately low-rank matrices and their approximate eigenvectors.

Definition 2. We say $v \in \mathbb{R}^p$ is an $\epsilon$-approximate singular vector of $\Sigma$ if $\|\Sigma v\|_2 \ge (1-\epsilon)\|\Sigma\|_2\|v\|_2$.

We also need the notion of stable rank (also known as numerical rank):
$$\mathrm{sr}(M) \triangleq \frac{\|M\|_F^2}{\|M\|_2^2}, \qquad (17)$$
which is always a lower bound on $\mathrm{rank}(M)$. The following lemma gives sufficient conditions for a sparse matrix to have sparse approximate singular vectors. The key ingredient of the proof is a celebrated result of Rudelson and Vershynin [49] in randomized numerical linear algebra, which shows that the Gram matrix of any matrix $M$ of stable rank at most $r$ can be approximated by that of a submatrix of $M$ formed by $O(r\log r)$ rows. The following is a restatement of [49, Theorem 3.1] without the normalization:

Lemma 1. There exists an absolute constant $C_0$ such that the following holds.
Let $y \in \mathbb{R}^n$ be a random vector with covariance matrix $K = \mathbb{E}[yy^\top]$. Assume that $\|y\|_2 \le M$ holds almost surely. Let $y_1, \dots, y_d$ be i.i.d. copies of $y$. Then
$$\mathbb{E}\Big\|\frac{1}{d}\sum_{i=1}^d y_i y_i^\top - K\Big\|_2 \le C_0 M\sqrt{\frac{\|K\|_2\log d}{d}},$$
provided that the right-hand side is less than $\|K\|_2$.

Theorem 3 (Concentration of operator norm on small submatrices). Let $k \in [p]$. Let $M$ be a $p \times p$ $k$-sparse matrix (not necessarily symmetric) in the sense that all rows and columns are $k$-sparse. Let $r = \mathrm{sr}(M)$. Then there exist $I, J \subset [p]$ such that
$$\|M_{IJ}\|_2 \ge \frac{1}{8}\|M\|_2, \qquad |I| \le Ckr, \qquad |J| \le Ckr\log r,$$
where $C$ is an absolute constant.

Remark 1. The intuition behind the above result is the following: consider the ideal case where $X$ is low-rank, say, $\mathrm{rank}(X) \le r$. Then its right singular vector belongs to the span of at most $r$ rows and is hence $kr$-sparse; so is the left singular vector. Theorem 3 extends this simple observation to stable rank with an extra log factor. Furthermore, the result in Theorem 3 cannot be improved beyond this log factor. To see this, consider a matrix $M$ consisting of an $m \times m$ submatrix with independent $\mathrm{Bern}(q)$ entries and zero elsewhere, where $q = k/(2m) \ll 1$. Then with high probability, $M$ is $k$-sparse, $\|M\|_2 \approx qm$, and $\|M\|_F^2 \approx qm^2$. Although the rank of $M$ is approximately $m$, its stable rank is much lower, $\mathrm{sr}(M) \approx \frac{1}{q}$, and the leading singular vector of $M$ is $m$-sparse, with $m = \Theta(k\,\mathrm{sr}(M))$. In fact, this example plays a key role in constructing the least favorable prior for proving the minimax lower bound in Section 4.

Proof. Denote the $i$th row of $M$ by $M_{i*}$ and the $j$th column of $M$ by $M_{*j}$. Let
$$I_0 \triangleq \{i \in [p] : \|M_{i*}\|_2 \ge \tau\}, \qquad J_0 \triangleq \{j \in [p] : \|M_{*j}\|_2 \ge \tau\},$$
where $\tau > 0$ is to be chosen later. Then $|I_0| \vee |J_0| \le \frac{\|M\|_F^2}{\tau^2}$.
(18) Since the operator norm and the Frobenius norm are invariant under permutations of rows and columns, we may and will assume that $I_0, J_0$ correspond to the first few rows and columns of $M$. Write $M = \begin{pmatrix} A & C \\ D & B \end{pmatrix}$ where $B = M_{I_0^c J_0^c}$. Since each row of $B$ is $k$-sparse, by the Cauchy-Schwarz inequality its $\ell_1$-norm is at most $\sqrt{k}\tau$. Consequently its $\ell_\infty \to \ell_\infty$ operator norm satisfies $\|B\|_\infty = \max_i \|B_{i*}\|_1 \le \sqrt{k}\tau$. Likewise, $\|B\|_1 = \max_j \|B_{*j}\|_1 \le \sqrt{k}\tau$. By duality (see, e.g., [31, Corollary 2.3.2]),
$$\|B\|_2 \le \sqrt{\|B\|_1\|B\|_\infty} \le \sqrt{k}\tau. \qquad (19)$$
Let $X = (A\ \ C)$ and $Y = \begin{pmatrix} A \\ D \end{pmatrix}$. By the triangle inequality, we have $\|M\|_2 \le \|X\|_2 + \|Y\|_2 + \|B\|_2$. Setting $\tau = \frac{\|M\|_2}{2\sqrt{k}}$, we have $\|B\|_2 \le \|M\|_2/2$ and hence $\|X\|_2 \vee \|Y\|_2 \ge \frac{\|M\|_2}{4}$. Without loss of generality, assume henceforth that $\|X\|_2 \ge \frac{\|M\|_2}{4}$. Set $I = I_0$. Note that $X \in \mathbb{R}^{\ell \times p}$, where $\ell = |I| \le \frac{\|M\|_F^2}{\tau^2} = \frac{4k\|M\|_F^2}{\|M\|_2^2} = 4kr$. Furthermore, $\mathrm{sr}(X) = \frac{\|X\|_F^2}{\|X\|_2^2} \le \frac{\|M\|_F^2}{\|M\|_2^2/16} = 16r$. Next we show that $X$ has a submatrix formed by a few columns whose operator norm is large. We proceed as in the proof of [49, Theorem 1.1]. Write
$$X = \begin{pmatrix} x_1^\top \\ \vdots \\ x_\ell^\top \end{pmatrix}, \qquad \tilde X = \frac{1}{\sqrt{d}}\begin{pmatrix} y_1^\top \\ \vdots \\ y_d^\top \end{pmatrix}.$$
Define the random vector $y$ by
$$\mathbb{P}\Big\{y = \frac{\|X\|_F}{\|x_i\|_2}x_i\Big\} = \frac{\|x_i\|_2^2}{\|X\|_F^2},$$
and let $y_1, \dots, y_d$ be i.i.d. copies of $y$. Then $X^\top X = \mathbb{E}[yy^\top]$ and $\tilde X^\top\tilde X = \frac{1}{d}\sum_{i=1}^d y_i y_i^\top$. Furthermore, $\|y\|_2 \le \|X\|_F$ almost surely and $\|\mathbb{E}[yy^\top]\|_2 = \|X\|_2^2$.
By Lemma 1,
$$\mathbb{E}\big\|\tilde X^\top\tilde X - X^\top X\big\|_2 \le C_0\sqrt{\frac{\log d}{d}}\,\|X\|_F\|X\|_2 \le \frac{1}{4}\|X\|_2^2,$$
where the last inequality follows by choosing $d = \lceil Cr\log r\rceil$ with $C$ a sufficiently large universal constant. Therefore there exists a realization of $\tilde X$ for which the above inequality holds. Let $J$ be the column support of $\tilde X$. Since the rows of $\tilde X$ are scaled versions of those of $X$, which are $k$-sparse, we have $|J| \le dk$. Let $v$ denote a leading right singular vector of $\tilde X$, i.e., $\tilde X^\top\tilde X v = \|\tilde X\|_2^2 v$ and $\|v\|_2 = 1$. Then $\mathrm{supp}(v) \subset J$. Note that
$$\|Xv\|_2^2 = v^\top X^\top Xv = v^\top\tilde X^\top\tilde Xv + v^\top(X^\top X - \tilde X^\top\tilde X)v \ge \|\tilde X\|_2^2 - \|X^\top X - \tilde X^\top\tilde X\|_2 \ge \|X\|_2^2 - 2\|X^\top X - \tilde X^\top\tilde X\|_2 \ge \frac{1}{2}\|X\|_2^2.$$
Therefore $\|X_{*J}\|_2 \ge \|Xv\|_2 \ge \frac{1}{\sqrt{2}}\|X\|_2 \ge \frac{1}{4\sqrt{2}}\|M\|_2$. The proof is completed by noting that $X_{*J} = M_{IJ}$.

3.2 Highly sparse regime

It has been shown that, in the covariance model, entrywise thresholding is rate-optimal for estimating the matrix itself with respect to the spectral norm [20]. It turns out that in the very sparse regime entrywise thresholding is optimal for testing as well. Define $\hat M = (X_{ij}\mathbf{1}\{|X_{ij}| \ge \tau\})$ and the following test:
$$\psi(X) = \mathbf{1}\big\{\|\hat M\|_2 \ge \lambda\big\}. \qquad (20)$$

Theorem 4. For any $\epsilon \in (0,1)$, if
$$\lambda > 2k\sqrt{2\log\frac{4p^2}{\epsilon}}, \qquad (21)$$
then the test (20) with $\tau = \sqrt{2\log\frac{4p^2}{\epsilon}}$ satisfies
$$P_0(\psi = 1) + \sup_{M \in \Theta(p,k,\lambda)} P_M(\psi = 0) \le \epsilon$$
for all $1 \le k \le p$.

Proof. Denote the event $E = \{\|Z\|_{\ell_\infty} \le \tau\}$.
Conditioned on $E$, for any $k$-sparse matrix $M \in \mathcal{M}(p,k)$, we have $\hat M \in \mathcal{M}(p,k)$ and
$$\|\hat M - M\|_2 \le k\tau. \qquad (22)$$
To see this, note that for any $i,j$, $\hat M_{ij} = 0$ whenever $M_{ij} = 0$. Therefore $\|\hat M_{i*} - M_{i*}\|_{\ell_1} \le k\|Z\|_{\ell_\infty} \le k\tau$ and, consequently, $\|\hat M - M\|_\infty = \max_i \|\hat M_{i*} - M_{i*}\|_{\ell_1} \le k\tau$. Similarly, $\|\hat M - M\|_1 = \max_j \|\hat M_{*j} - M_{*j}\|_{\ell_1} \le k\tau$. Therefore (22) follows from the fact that $\|\cdot\|_2^2 \le \|\cdot\|_1\|\cdot\|_\infty$ for matrix induced norms. Hence if $\lambda > 2k\tau$, then
$$P_0(\psi = 1) + \sup_{M \in \Theta(p,k,\lambda)} P_M(\psi = 0) \le 2\mathbb{P}\{\|Z\|_{\ell_\infty} > \tau\} \le 4p^2 e^{-\tau^2/2}.$$
This completes the proof.

3.3 Moderately sparse regime

Our test in the moderately sparse regime relies on the existence of sparse approximate eigenvectors established in Theorem 3. More precisely, the test procedure is a combination of the matrix-wise $\chi^2$-test and the scan test based on the largest spectral norm of $m \times m$ submatrices, detailed as follows. Let
$$m = C\sqrt{kp\log\frac{ep}{k}},$$
where $C$ is the universal constant from Theorem 3. Define the test statistic
$$T_m(X) = \max\{\|X_{IJ}\|_2 : I, J \subset [p],\ |I| = |J| = m\} \qquad (23)$$
and the test
$$\psi(X) = \mathbf{1}\big\{\|X\|_F^2 \ge p^2 + s\big\} \vee \mathbf{1}\{T_m(X) \ge t\} \qquad (24)$$
where
$$s \triangleq 2\log\frac{1}{\epsilon} + 2p\sqrt{\log\frac{1}{\epsilon}}, \qquad t \triangleq 2\sqrt{m} + 4\sqrt{m\log\frac{ep}{m}}. \qquad (25)$$

Theorem 5. There exists a universal constant $C_0$ such that the following holds. For any $\epsilon \in (0,1/2)$, if
$$\lambda \ge C_0\Big\{kp\log\frac{1}{\epsilon}\log\Big(\frac{p}{k}\log\frac{1}{\epsilon}\Big)\Big\}^{1/4}, \qquad (26)$$
then the test (24) satisfies
$$P_0(\psi = 1) + \sup_{M \in \Theta(p,k,\lambda)} P_M(\psi = 0) \le \epsilon$$
for all $1 \le k \le p$.

Proof.
First consider the null hypothesis, where $M = 0$ and $X = Z$ has i.i.d. standard normal entries, so that $\|Z\|_F^2 - p^2 = O_P(p)$. By a standard concentration inequality for the $\chi^2$ distribution, we have $\mathbb{P}\{|\|Z\|_F^2 - p^2| > s\} \le \epsilon$, where $s \triangleq 2\log\frac{1}{\epsilon} + 2p\sqrt{\log\frac{1}{\epsilon}}$. Consequently the false alarm probability satisfies
$$P_0(\psi = 1) \le \underbrace{\mathbb{P}\{\|Z\|_F^2 - p^2 > s\}}_{\le\epsilon} + \binom{p}{m}^2\mathbb{P}\{\|W\|_2 \ge t\},$$
where $t = 2\sqrt{m} + 4\sqrt{m\log\frac{ep}{m}}$ and $W \triangleq Z_{[m],[m]}$. By the Davidson-Szarek inequality [24, Theorem II.7], $\|W\|_2$ is stochastically dominated by $N(2\sqrt{m}, 1)$. Then $\mathbb{P}\{\|W\|_2 \ge t\} \le (\frac{em}{p})^m$. Hence the false alarm probability vanishes.

Next consider the alternative hypothesis, where, by assumption, $M$ is row/column $k$-sparse and $\|M\|_2 \ge \lambda$. To begin, suppose that $\|M\|_F \ge 2\sqrt{s}$. Then, since $\|X\|_F^2 - p^2 = \|M\|_F^2 + 2\langle M, Z\rangle + \|Z\|_F^2 - p^2$, we have
$$\mathbb{P}\{\|M + Z\|_F^2 - p^2 < s\} \le \mathbb{P}\{\|M\|_F^2 + 2\langle M, Z\rangle < 2s\} + \mathbb{P}\{\|Z\|_F^2 - p^2 < -s\} \le \exp(-s/8) + \epsilon.$$
Therefore, as usual, if $\|M\|_F$ is large, the $\chi^2$-test succeeds with high probability. Next assume that $\|M\|_F < 2\sqrt{s}$. Then $M$ is approximately low-rank: $\mathrm{sr}(M) \le r \triangleq \frac{4s}{\lambda^2}$. By Theorem 3, there exist an absolute constant $C$ and $I, J \subset [p]$ of cardinality at most $m = Ckr\log r = Ck\frac{4s}{\lambda^2}\log\frac{4s}{\lambda^2}$ such that $\|M_{IJ}\|_2 \ge \frac{1}{8}\lambda$. Therefore the statistic defined in (23) satisfies $T_m(X) \ge \|X_{IJ}\|_2 \ge \frac{\lambda}{8} - \|Z_{IJ}\|_2$, and hence $T_m(X) \ge \frac{\lambda}{8} - 3\sqrt{m}$ with probability at least $1 - \exp(-\Omega(m))$.
Choose $\lambda$ so that $\frac{\lambda}{8} - 3\sqrt{m} \ge t$. Since $t + 3\sqrt{m} = 5\sqrt{m} + 4\sqrt{m\log\frac{ep}{m}} \le 9\sqrt{m\log\frac{ep}{m}}$, it suffices to ensure that $\lambda \ge c_0\sqrt{m\log\frac{ep}{m}}$ for some absolute constant $c_0$. Plugging in the expression for $m$, we find that a sufficient condition is $\lambda \ge C_0(ks\log\frac{es}{k})^{1/4}$ for some absolute constant $C_0$. The proof is completed by noting that $s \le 2p(\log\frac{1}{\epsilon} + \sqrt{\log\frac{1}{\epsilon}})$ and that $s \mapsto s\log\frac{es}{k}$ is increasing.

4 Minimax lower bound

In this section we prove the lower bound part of Theorem 1. In Section 4.1, we first present a general strategy for deriving minimax lower bounds for functional hypothesis testing problems, which involves priors not necessarily supported on the parameter space. To apply this strategy, in Section 4.2 we specify a prior under which the matrix is $k$-sparse with high probability. The lower bound is proved by bounding the $\chi^2$-divergence between the null distribution and the mixture of the alternatives.

4.1 General strategy

We begin by providing a general strategy for constructing lower bounds for composite hypothesis testing problems, which is particularly useful for testing functional values. Given an experiment $\{P_\theta : \theta \in \Theta\}$ and two parameter subsets $\Theta_0, \Theta_1 \subset \Theta$, consider the following composite hypothesis testing problem:
$$H_0: \theta \in \Theta_0 \quad \text{vs.} \quad H_1: \theta \in \Theta_1 \qquad (27)$$
Define the minimax sum of Type-I and Type-II error probabilities as
$$\mathcal{E}(\Theta_0, \Theta_1) \triangleq \inf_A \sup\{P_{\theta_0}(A) + P_{\theta_1}(A^c) : \theta_i \in \Theta_i,\ i = 0, 1\},$$
where the infimum is taken over all measurable sets $A$. By the minimax theorem (cf., e.g., [43, p. 476]), the minimax probability of error is given by the least favorable Bayesian problem:
$$\mathcal{E}(\Theta_0, \Theta_1) = 1 - \inf\{\mathrm{TV}(P_{\pi_0}, P_{\pi_1}) : \pi_0 \in \mathcal{M}(\Theta_0),\ \pi_1 \in \mathcal{M}(\Theta_1)\}.$$
(28) where $\mathcal{M}(\Theta_i)$ denotes the set of all probability measures supported on $\Theta_i$, and $P_\pi(\cdot) = \int P_\theta(\cdot)\,\pi(d\theta)$ denotes the mixture induced by the prior $\pi$. Therefore, any pair of priors gives rise to a lower bound on the probability of error; however, sometimes it is difficult to construct priors that are supported on the respective parameter subsets. To overcome this hurdle, the following lemma, which is essentially the same as [50, Theorem 2.15(i)], allows priors to be supported on a possibly extended parameter space $\Theta'$. For completeness, we state a simpler and self-contained proof using the data processing inequality [23] and the triangle inequality for the total variation.

Lemma 2 (Lower bound for testing). Let $\Theta' \supset \Theta$. Let $\pi_0, \pi_1$ be priors supported on $\Theta'$. If
$$\mathrm{TV}(P_{\pi_0}, P_{\pi_1}) \le 1 - \delta \qquad (29)$$
and
$$\pi_i(\Theta_i) \ge 1 - \epsilon_i, \quad i = 0, 1, \qquad (30)$$
then
$$\mathcal{E}(\Theta_0, \Theta_1) \ge \delta - \epsilon_0 - \epsilon_1. \qquad (31)$$

Proof. Define the following priors by conditioning: for $i = 0, 1$, let $\tilde\pi_i = \pi_i|_{\Theta_i}$, i.e., $\tilde\pi_i(\cdot) = \frac{\pi_i(\cdot\,\cap\,\Theta_i)}{\pi_i(\Theta_i)}$. Then, by the triangle inequality,
$$\mathrm{TV}(P_{\tilde\pi_0}, P_{\tilde\pi_1}) \le \mathrm{TV}(P_{\pi_0}, P_{\pi_1}) + \mathrm{TV}(P_{\tilde\pi_0}, P_{\pi_0}) + \mathrm{TV}(P_{\tilde\pi_1}, P_{\pi_1}) \overset{(a)}{\le} \mathrm{TV}(P_{\pi_0}, P_{\pi_1}) + \mathrm{TV}(\tilde\pi_0, \pi_0) + \mathrm{TV}(\tilde\pi_1, \pi_1) \overset{(b)}{\le} 1 - \delta + \epsilon_0 + \epsilon_1,$$
where (a) follows from the data-processing inequality (or convexity) of the total variation distance, and (b) follows from $\mathrm{TV}(\tilde\pi_i, \pi_i) = \pi_i(\Theta_i^c)$. The lower bound (31) then follows from the characterization (28).
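Step (b) in the proof of Lemma 2, namely $\mathrm{TV}(\tilde\pi_i, \pi_i) = \pi_i(\Theta_i^c)$, can be verified on a discrete toy example (the numbers below are ours, purely illustrative):

```python
import numpy as np

# a prior pi on three parameter values; Theta = {first two points}
pi = np.array([0.5, 0.3, 0.2])          # pi(Theta^c) = 0.2
on_theta = np.array([True, True, False])

# the conditioned prior pi~ = pi | Theta
pi_cond = np.where(on_theta, pi, 0.0)
pi_cond = pi_cond / pi_cond.sum()

# total variation between pi~ and pi
tv = 0.5 * np.abs(pi_cond - pi).sum()
print(tv)   # matches pi(Theta^c) = 0.2
```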
4.2 Least favorable prior

To apply Lemma 2 to the detection problem (3), we take $\Theta_0 = \{0\}$ and $\Theta_1 = \Theta(p,k,\lambda)$, with $\pi_0 = \delta_0$ and $\pi_1$ a prior distribution under which the matrix is sparse with high probability. Next we describe the prior that leads to the optimal lower bound in Theorem 1. Let $I$ be chosen uniformly at random from all subsets of $[p]$ of cardinality $m$. Let $u = (u_1, \dots, u_p)$ be independent Rademacher random variables. Let $B$ be a $p \times p$ matrix with i.i.d. $\mathrm{Bern}(\frac{k}{m})$ entries, and let $(u, I, B)$ be independent. Let $U_I$ denote the diagonal matrix defined by $(U_I)_{ii} = u_i\mathbf{1}\{i \in I\}$. Let $t > 0$ be specified later. Let the prior $\pi_1$ be the distribution of the following random sparse matrix:
$$M = t\,U_I B U_I. \qquad (32)$$
Equivalently, we can define $M$ entrywise by $m_{ij} = \mathbf{1}\{i \in I\}\mathbf{1}\{j \in I\}u_i u_j b_{ij}$. Therefore the non-zero pattern of $M$ has the desired marginal distribution $\mathrm{Bern}(\frac{k}{p})$, but the entries of $M$ are dependent. Alternatively, $M$ can be generated as follows: first choose an $m \times m$ principal submatrix with uniformly chosen support $I$, fill it with i.i.d. $\mathrm{Bern}(\frac{k}{m})$ entries, then pre- and post-multiply by a diagonal matrix consisting of independent Rademacher variables, which are used to randomize the signs of the leading eigenvector. By construction, with high probability, the matrix $M$ is $O(k)$-sparse and, furthermore, its operator norm satisfies $\|M\|_2 \gtrsim kt$; the corresponding eigenvector is approximately $\mathbf{1}_I$, which is $m$-sparse.

The construction of this prior is based on the following intuition. The operator norm of a matrix depends heavily on the correlation of the rows. Given the $\ell_2$-norms of the rows, the largest spectral norm is achieved when all rows are aligned (rank one), while the smallest spectral norm is achieved when all rows are orthogonal. In the sparse case, aligned supports result in a large spectral norm, while disjoint supports result in a small one.
However, if all rows are aligned, then the signal is prominent enough to be distinguished from noise. A submatrix structure strikes a precise balance between the extremal cases of completely aligned and disjoint supports: it enforces that the row support sets are contained in a set of cardinality $m$, which is much larger than the row sparsity $k$ but much smaller than the matrix size $p$. In fact, the optimal choice of the submatrix size is $m \asymp k^2 \wedge \sqrt{kp}$, which matches the structural property given in Theorem 3. The structure of the least favorable prior, in a way, shows that the optimality of tests based on approximate singular vectors is not a coincidence.

Another perspective is that the sparsity constraint on the matrix forces the marginal distribution of each entry of the nonzero pattern ($\mathbf{1}\{M_{ij} \ne 0\}$) to be $\mathrm{Bern}(\frac{k}{p})$. However, if all the entries were independent, then the matrix would be very easy to distinguish from noise. Indeed, perhaps the most straightforward choice of prior is $M_{ij} \overset{i.i.d.}{\sim} t\cdot\mathrm{Bern}(\frac{k}{p})$. However, the linear test statistic based on $\sum_{ij} X_{ij}$ then succeeds unless $\lambda \lesssim 1$. We can improve the prior by randomizing the signs, i.e., $M_{ij} \overset{i.i.d.}{\sim} t\,u_i u_j\,\mathrm{Bern}(\frac{k}{p})$, but the $\chi^2$-test in Theorem 5 succeeds unless $\lambda \lesssim \sqrt{k}$, which still falls short of the desired $\lambda \asymp (kp)^{1/4}$. Thus, we see that the coupling between the entries is useful for making the mixture distribution closer to the null hypothesis.

4.3 Key lemmas

The main tool for our lower bound is the $\chi^2$-divergence, defined by
$$\chi^2(P\,\|\,Q) \triangleq \int\Big(\frac{dP}{dQ} - 1\Big)^2 dQ,$$
which is the variance of the likelihood ratio $\frac{dP}{dQ}$ under $Q$. The $\chi^2$-divergence is related to the total variation via the following inequality [27, p. 1496]:
$$\chi^2 \ge \mathrm{TV}\log\frac{1 + \mathrm{TV}}{1 - \mathrm{TV}} \qquad (33)$$
Therefore the total variation distance cannot go to one unless the $\chi^2$-divergence diverges.
Furthermore, if the $\chi^2$-divergence vanishes, then the total variation also vanishes, which means, in view of (28), that $P$ cannot be distinguished from $Q$ better than random guessing. The following lemma, due to Ingster and Suslina (see, e.g., [38, p. 97]), gives a formula for the $\chi^2$-divergence of a normal location mixture with respect to the standard normal distribution.

Lemma 3. Let $P$ be an arbitrary distribution on $\mathbb{R}^m$. Then
$$\chi^2\big(N(0, I_m) * P\,\|\,N(0, I_m)\big) = \mathbb{E}[\exp(\langle X, \tilde X\rangle)] - 1,$$
where $*$ denotes convolution and $X$ and $\tilde X$ are drawn independently from $P$.

The proof of the lower bound in Theorem 1 relies on the following lemmas. These results give non-asymptotic conditions, both necessary and sufficient, for certain moment generating functions involving hypergeometric distributions to be bounded, which show up in the $\chi^2$-divergence calculation. Let $H \sim \mathrm{Hypergeometric}(p, m, m)$, with
$$\mathbb{P}\{H = i\} = \frac{\binom{m}{i}\binom{p-m}{m-i}}{\binom{p}{m}}, \quad i = 0, \dots, m.$$

Lemma 4 ([18, Lemma 1]). Let $p \in \mathbb{N}$ and $m \in [p]$. Let $B_1, \dots, B_m$ be independent Rademacher random variables. Denote by $G_m \triangleq \sum_{i=1}^m B_i$ the position of a symmetric random walk on $\mathbb{Z}$ starting at $0$ after $m$ steps. Then there exist an absolute constant $a_0 > 0$ and a function $A: (0, a_0) \to \mathbb{R}_+$ with $A(0+) = 0$, such that if $t = \frac{a}{m}\log\frac{ep}{m}$ and $a < a_0$, then
$$\mathbb{E}\big[\exp\big(tG_H^2\big)\big] \le A(a). \qquad (34)$$

Lemma 5 ([32, Lemma 15, Appendix C]). Let $p \in \mathbb{N}$ and $m \in [p]$. Then there exist an absolute constant $b_0 > 0$ and a function $B: (0, b_0) \to \mathbb{R}_+$ with $B(0+) = 0$, such that if $\lambda = b\big(\frac{1}{m}\log\frac{ep}{m} \wedge \frac{p^2}{m^4}\big)$ and $b < b_0$, then
$$\mathbb{E}\big[\exp\big(\lambda H^2\big)\big] \le B(b). \qquad (35)$$

Remark 2 (Tightness of Lemmas 4-5). The purpose of Lemma 4 is to seek the largest $t$, as a function of $p$ and $m$, such that $\mathbb{E}[\exp(tG_H^2)]$ is upper bounded by a constant non-asymptotically.
The condition $t \asymp \frac{1}{m}\log\frac{ep}{m}$ is in fact both necessary and sufficient. To see the necessity, note that $\mathbb{P}\{G_H = H \mid H = i\} = 2^{-i}$. Therefore
$$\mathbb{E}\big[\exp\big(tG_H^2\big)\big] \ge \mathbb{E}\big[\exp(tH^2)2^{-H}\big] \ge \exp(tm^2)2^{-m}\mathbb{P}\{H = m\} \ge \exp\Big(tm^2 - m\log\frac{2p}{m}\Big),$$
which cannot be upper bounded by an absolute constant unless $t \lesssim \frac{1}{m}\log\frac{ep}{m}$. Similarly, the condition $\lambda \lesssim \frac{1}{m}\log\frac{ep}{m} \wedge \frac{p^2}{m^4}$ in Lemma 5 is also necessary. To see this, note that $\mathbb{E}[H] = \frac{m^2}{p}$. By Jensen's inequality, we have $\mathbb{E}[\exp(\lambda H^2)] \ge \exp(\lambda\frac{m^4}{p^2})$. Therefore a necessary condition for (35) is $\lambda \le \frac{p^2\log B}{m^4}$. On the other hand, we have $\mathbb{E}[\exp(\lambda H^2)] \ge \exp(\lambda m^2 - m\log\frac{p}{m})$, which implies that $\lambda \lesssim \frac{1}{m}\log\frac{ep}{m}$.

4.4 Proof of Theorem 1: lower bound

Proof. Step 1: Fix $t > 0$ to be determined later. Recall the random sparse matrix $M = tU_I B U_I$ defined in (32), where $I$ is chosen uniformly at random from all subsets of $[p]$ of cardinality $m$, $u = (u_1, \dots, u_p)^\top$ consists of independent Rademacher entries, $B$ is a $p \times p$ matrix with i.i.d. $\mathrm{Bern}(\frac{k}{m})$ entries, and $(u, I, B)$ are independent. Equivalently, $m_{ij} = \mathbf{1}\{i \in I\}\mathbf{1}\{j \in I\}u_i u_j b_{ij}$. Next we show that the hypotheses $H_0: X = Z$ versus $H_1: X = M + Z$ cannot be tested with vanishing probability of error, by showing that the $\chi^2$-divergence is bounded. Let $(\tilde U, \tilde I, \tilde B)$ be an independent copy of $(U, I, B)$; then $\tilde M = \tilde U_{\tilde I}\tilde B\tilde U_{\tilde I}$ is an independent copy of $M$. Put $s = t^2$.
By Lemma 3, we have
$$\chi^2(P_{X|H_1}\,\|\,P_{X|H_0}) + 1 = \mathbb{E}\big[\exp\big(\langle M, \tilde M\rangle\big)\big] = \mathbb{E}\big[\exp\big(t^2\langle U_I B U_I, \tilde U_{\tilde I}\tilde B\tilde U_{\tilde I}\rangle\big)\big] = \mathbb{E}\Big[\exp\Big(s\sum_{i \in I\cap\tilde I}\sum_{j \in I\cap\tilde I} u_i\tilde u_i u_j\tilde u_j b_{ij}\tilde b_{ij}\Big)\Big]$$
$$\overset{(a)}{=} \mathbb{E}\Big[\exp\Big(s\sum_{i \in I\cap\tilde I}\sum_{j \in I\cap\tilde I} u_i u_j a_{ij}\Big)\Big] \overset{(b)}{=} \mathbb{E}\Big[\prod_{i \in I\cap\tilde I}\prod_{j \in I\cap\tilde I}\Big(1 + \frac{k^2}{m^2}(e^{s u_i u_j} - 1)\Big)\Big] \overset{(c)}{\le} \mathbb{E}\Big[\exp\Big\{\frac{k^2}{m^2}\sum_{i \in I\cap\tilde I}\sum_{j \in I\cap\tilde I}(e^{s u_i u_j} - 1)\Big\}\Big]$$
$$\overset{(d)}{=} \mathbb{E}\Big[\exp\Big\{\frac{k^2}{m^2}\sum_{i \in I\cap\tilde I}\sum_{j \in I\cap\tilde I}\big(u_i u_j\sinh(s) + \cosh(s) - 1\big)\Big\}\Big] = \mathbb{E}\Big[\exp\Big\{\frac{k^2\sinh(s)}{m^2}\Big(\sum_{i \in I\cap\tilde I} u_i\Big)^2 + \frac{k^2(\cosh(s)-1)}{m^2}|I\cap\tilde I|^2\Big\}\Big], \qquad (36)$$
where (a) is due to $(u_1\tilde u_1, \dots, u_p\tilde u_p) \overset{d}{=} (u_1, \dots, u_p)$; (b) follows from $a_{ij} \triangleq b_{ij}\tilde b_{ij} \overset{i.i.d.}{\sim} \mathrm{Bern}(\frac{k^2}{m^2})$; (c) follows from the fact that $\log(1+x) \le x$ for all $x > -1$; (d) is because, for $b \in \{\pm 1\}$, $\exp(sb) = b\sinh(s) + \cosh(s)$.

Recall from Lemma 4 that $\{G_m : m \ge 0\}$ denotes the symmetric random walk on $\mathbb{Z}$. Since $I, \tilde I$ are independently and uniformly drawn from all subsets of $[p]$ of cardinality $m$, we have $H \triangleq |I\cap\tilde I| \sim \mathrm{Hypergeometric}(p, m, m)$. Define
$$A(m, s) \triangleq \mathbb{E}\Big[\exp\Big\{\frac{2k^2\sinh(s)}{m^2}G_H^2\Big\}\Big], \qquad (37)$$
$$B(m, s) \triangleq \mathbb{E}\Big[\exp\Big\{\frac{2k^2(\cosh(s)-1)}{m^2}H^2\Big\}\Big]. \qquad (38)$$
Applying the Cauchy-Schwarz inequality to the right-hand side of (36), we obtain
$$\chi^2(P_{X|H_1}\,\|\,P_{X|H_0}) + 1 \le \sqrt{A(m, s)B(m, s)}.$$
(39) Therefore upper bounding the $\chi^2$-divergence boils down to controlling the expectations in (37) and (38) separately. Applying Lemma 4 to $A(m,s)$ and Lemma 5 to $B(m,s)$, we conclude that
$$\frac{k^2\sinh(s)}{m^2} \le \frac{c}{m}\log\frac{ep}{m} \;\Longrightarrow\; A(m, s) \le C, \qquad (40)$$
$$\frac{k^2(\cosh(s)-1)}{m^2} \le c\Big(\frac{1}{m}\log\frac{ep}{m} \wedge \frac{p^2}{m^4}\Big) \;\Longrightarrow\; B(m, s) \le C, \qquad (41)$$
where $c, C$ are constants such that $C \to 0$ as $c \to 0$. Therefore the best lower bound we can obtain for $s$ is
$$s^* = \max_{k \le m \le p}\Big\{(\cosh - 1)^{-1}\Big(\frac{cm}{k^2}\log\frac{ep}{m} \wedge \frac{cp^2}{m^2 k^2}\Big) \wedge \sinh^{-1}\Big(\frac{cm}{k^2}\log\frac{ep}{m}\Big)\Big\}, \qquad (42)$$
where the inverses $\sinh^{-1}$ and $(\cosh - 1)^{-1}$ are defined with the domain restricted to $\mathbb{R}_+$. To simplify the maximization in (42), we use the following bounds on the hyperbolic functions:
$$\sinh^{-1}(y) \ge \log(2y), \qquad (\cosh - 1)^{-1}(y) \ge \log y, \qquad y \ge 0. \qquad (43)$$
Therefore
$$s^* \ge \log\max_{k \le m \le p}\Big(\frac{cm}{k^2}\log\frac{ep}{m} \wedge \frac{cp^2}{m^2 k^2}\Big).$$
Choosing $m = \big(\frac{p^2}{\log p}\big)^{1/3}$ yields
$$s^* \gtrsim \log_+\Big(\frac{p\log p}{k^3}\Big), \qquad (44)$$
where $\log_+ \triangleq \max\{\log, 0\}$. Note that the above lower bound is vacuous unless $k \le (p\log p)^{1/3}$. To produce a non-trivial lower bound for $k \ge (p\log p)^{1/3}$, note that (43) can be improved as follows. If the argument $y$ is restricted to the unit interval, then
$$\sinh^{-1}(y) \ge \sinh^{-1}(1)\,y, \qquad (\cosh - 1)^{-1}(y) \ge \sqrt{y}, \qquad y \in [0, 1], \qquad (45)$$
which follows from the Taylor expansion of $\cosh$ and the convexity of $\sinh$. Applying (45) to (42),
$$s^* \ge \max_{m:\ \frac{cm}{k^2}\log\frac{ep}{m} \le 1}\Big(\sqrt{\frac{cp^2}{m^2 k^2}} \wedge \sinh^{-1}(1)\frac{cm}{k^2}\log\frac{ep}{m}\Big).$$
Choosing $m = \sqrt{\frac{pk}{4c^2\log\frac{ep}{k}}}$ yields $\frac{cm}{k^2}\log\frac{ep}{k} \le 1$. We then obtain
$$s^* \gtrsim \sqrt{\frac{p}{k^3}\log\frac{ep}{k}}. \qquad (46)$$

Step 2: We invoke Lemma 2 to conclude that $kt$ is a valid lower bound for $\lambda$, with $t = \sqrt{s^*}$ given by (44) and (46).
To this end, we need to show that with high probability, M is O(k)-sparse and \u2225M\u22252 = \u2126(kt). De\ufb01ne events E1 = {M \u2208M(p, 2k)}, E2 = {\u2225M\u22252 \u2265kt/2}. 18 \fIt remains to show that both are high-probability events. Since I is independent of B, we shall assume, without loss of generality, that I = [m]. For the event E1, by the union bound and Hoe\ufb00ding\u2019s inequality, we have P {Ec 1} = P {BII / \u2208\u0398(m, 2k)} \u2264m2P ( m X i=1 bi1 \u22652k ) \u2264m2 exp(\u2212mk2) = o(1), (47) where bi1 i.i.d. \u223cBern( k m). For the event E2, again by Hoe\ufb00ding\u2019s inequality, P {E2} = P {\u2225BII\u22252 \u2265k/2} \u2265P \u001a \u2225M1I\u22252 \u2265k 2\u22251I\u22252 \u001b \u2265P \uf8f1 \uf8f2 \uf8f3 m X j=1 bij \u2265k 2, \u2200i \u2208[m] \uf8fc \uf8fd \uf8fe\u22651 \u2212m P \uf8f1 \uf8f2 \uf8f3 m X j=1 b1j < k 2 \uf8fc \uf8fd \uf8fe (48) \u22651 \u2212m exp(\u2212mk2/4) = 1 \u2212o(1). (49) The desired lower bound now follows from Lemma 2. Finally, we note that the lower bound continues to hold up to constant factors even if M is constrained to be symmetric. Indeed, we can replace M with the symmetrized version M\u2032 = [ 0 M M\u22a40 ] and note that the bound on \u03c72-divergence remains valid since \u27e8M\u2032, \u02dc M\u2032\u27e9= 2\u27e8M, \u02dc M\u27e9. 5 Detecting sparse covariance matrices 5.1 Test procedures and upper bounds Let X1, . . . , Xn be independently sampled from N(0, \u03a3). De\ufb01ne the sample covariance matrix as S = 1 n n X i=1 XiX\u22a4 i , (50) which is a su\ufb03cient statistic for \u03a3. The following result is the counterpart of Theorem 4 for entrywise thresholding that is optimal in the highly sparse regime: Theorem 6. Let C, C\u2032 be constants that only depend on \u03c4. Let \u01eb \u2208(0, 1). De\ufb01ne \u02c6 \u03a3 = (Sij1{|Sij| \u2265t}), where \u03c4 = q C log p \u01eb. Assume that n \u2265C\u2032 log p. 
If \u03bb\u221an > 2kt, then the test \u03c8(S) = 1 n \u2225\u02c6 \u03a3\u22252 \u2265\u03bb o satis\ufb01es PI(\u03c8 = 1) + sup \u03a3\u2208\u039e(p,k,\u03bb,\u03c4) P\u03a3(\u03c8 = 0) \u2264\u01eb for all 1 \u2264k \u2264p. To extend the test (24) to covariance model, we need a test statistic for \u2225\u03a3 \u2212I\u22252 F. Consider the following U-statistic proposed in [22, 16]: Q(S) = p + 1 \u0000n 2 \u0001 X 1\u2264i 0 and k \u2264m \u2264p are to be speci\ufb01ed later. De\ufb01ne the event E3 = \u001a \u2225M\u22252 \u22641 2 \u001b (61) E4 = \uf8f1 \uf8f2 \uf8f3\u2225BII \u2212pJII\u22252 \u2264 v u u tc0 k \u2228 log m log e log m k !\uf8fc \uf8fd \uf8fe (62) 21 \fand set E\u22c6= E3 \u2229E4. Next we show that P {E\u22c6} \u22651 \u2212o(1). (63) For E3, note that \u2225M\u22252 = t \u2225BII\u2225. Since I is independent of B, we shall assume that I = [m] and let B\u2032 = BII. Since \u2225BII\u22252 \u2264\u2225BII\u22251\u2225BII\u2225\u221e, similar to (49), Hoe\ufb00ding inequality implies that P {\u2225BII\u22251 \u22652k} = P {\u2225BII\u2225\u221e\u22652k} \u2264m2 exp(\u2212mk2/4). Therefore \u2225M\u22252 \u22642tk with probability at least 1 \u22122m2 exp(\u2212mk2/4). For E4, it follows2 from [34, Theorem 5] that \u2225BII \u2212E[BII]\u22252 \u2264 v u u tc0 k \u2228 log m log e log m k ! , with probability at least 1 \u2212exp(\u2212c1(k \u2228 log m log e log m k )). Thus P {E4} = 1 \u2212o(1). Consider \u03a3 = I2p + T, where T = \u0014 0 M M\u22a4 0 \u0015 . Note that TT \u22a4= \u0014MM\u22a4 0 0 M\u22a4M \u0015 and hence \u2225T\u22252 = \u2225M\u22252. Let Q denote the law of T conditioned on the event E\u22c6, and By (47) and (49), we have \u03a3 \u2208\u039e(2p, 2k, kt/2, 3/2) with probability tending to one. Thus it remains to bound the \u03c72-divergence. Let \u03c0n Q denote the mixture of N(0, I2p + T)\u2297n induced by the prior Q. 
Let \u02dc M, \u02dc S and \u02dc E\u22c6are independent copies of M, T, and E\u22c6, respectively. Applying Lemma 6 with \u03bb = 1 and \u03b4 = 1 2, we have \u03c72(\u03c0n Q \u2225N(0, I2p)\u2297n) \u2264E h exp \u0010 n\u27e8T, \u02dc T\u27e9 \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 E h exp \u0010 2n\u2225T \u02dc T\u22252 F \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 \u22121 = E h exp \u0010 2n\u27e8M, \u02dc M\u27e9 \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 E h exp \u0010 2n(\u2225M \u02dc M\u22a4\u22252 F + \u2225M\u22a4\u02dc M\u22252 F) \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 \u22121 (a) \u2264E h exp \u0010 2n\u27e8M, \u02dc M\u27e9 \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 E h exp \u0010 4n\u2225M \u02dc M\u22252 F \u0011 \f \f \fE\u22c6, \u02dc E\u22c6i 1 2 \u22121 \u2264 1 P {E\u22c6}2 E h exp \u0010 2n\u27e8M, \u02dc M\u27e9 \u0011i 1 2 E h exp \u0010 4n\u2225M \u02dc M\u22252 F \u0011i 1 2 \u22121 where (a) follows from the Cauchy-Schwarz inequality and the fact that M and M\u22a4have the same distribution. In Section 4.4 we have already shown that E h exp \u0010 2n\u27e8M, \u02dc M\u27e9 \u0011i is bounded by a constant, provided that (40) and (41) holds. To complete the proof of the lower bound, it remains to show that E h exp \u0010 4n\u2225M \u02dc M\u22252 F \u0011i (64) is bounded under the condition of (14) and (15). To this end, recall from (32) that M = tH and \u02dc M = t \u02dc H, where H = UIBUI and \u02dc H = \u02dc U\u02dc I \u02dc B \u02dc U\u02dc I. Note that E[B] = \u01ebJ, where \u01eb = k m. Write H = UIBUI = UI(B \u2212\u01ebJ)UI + \u01ebUIJUI = UI(B \u2212\u01ebJ)UI + \u01ebvv\u22a4 2The result in [34, Theorem 5] deals with symmetric matrices. Here, since BII is an m \u00d7 m matrix consisting of iid Bern(k/m) entries, the result follows from combining [34, (15)] and Talagrand\u2019s concentration inequality at the end of the proof therein. 22 \fwhere v = UI1 is supported on I with m independent Rademacher non-zeros. 
Then H \u02dc H = \u01eb2vv\u22a4\u02dc v\u02dc v\u22a4 | {z } h1 + \u01ebUI(B \u2212\u01ebJ)UI\u02dc v\u02dc v\u22a4 | {z } h2 + \u01ebvv\u22a4\u02dc U\u02dc I( \u02dc B \u2212\u01ebJ) \u02dc U\u02dc I | {z } h3 + UI(B \u2212\u01ebJ)UI \u02dc U\u02dc I( \u02dc B \u2212\u01ebJ) \u02dc U\u02dc I | {z } h4 where the \ufb01rst three terms are rank-one matrices. Since \u2225H \u02dc H\u22252 F \u22644 P4 i=1 \u2225h1\u22252 F, H\u00a8 older\u2019s inequality implies E h exp \u0010 4n\u2225M \u02dc M\u22252 F \u0011i \u2264 4 Y i=1 \u0002 E \u0002 exp \u000064nt4\u2225hi\u22252 F \u0001\u0003\u00031/4 We proceed to bound the four terms separately. First note that h1 = \u01eb2\u27e8v, \u02dc v\u27e9v\u02dc v\u22a4and hence \u2225h1\u22252 F = \u01eb4\u27e8v, \u02dc v\u27e92\u2225v\u22252 2\u2225\u02dc v\u22252 2 = \u01eb4m2\u27e8v, \u02dc v\u27e92. Note that \u27e8v, \u02dc v\u27e9 (d) =GH, where GH, de\ufb01ned in Lemma 4, is the sum of Hypergeometric(p, m, m) number of independent Rademacher random variables. In view of Lemma 4, we have E \u0002 exp \u000064nt4\u2225h1\u22252 F \u0001\u0003 \u22721, provided that nt4\u01eb4m2 = k4s2 m2n \u22721 m log ep m. (65) Next we bound h2 and h3, which have the same distribution. Note that h2 = \u01ebUI(B \u2212\u01ebJ)UI\u02dc v\u02dc v\u22a4 and hence \u2225h2\u22252 F = \u01eb2\u2225UI(B \u2212\u01ebJ)UI\u02dc v\u22252 2\u2225\u02dc v\u22252 = \u01eb2m\u22251I(B \u2212\u01ebJ)UI\u02dc v\u22252 2. Therefore E \u0002 exp \u000064nt4\u2225h2\u22252 F \u0001\u0003 = E \u0002 exp \u000064nt4\u01eb2m\u22251I(B \u2212\u01ebJ)UI\u02dc v\u22252 F \u0001\u0003 = E h exp \u0010 64nt4\u01eb2m\u2225(B \u2212\u01ebJ)I,I\u2229\u02dc I\u22252 F \u0011i , = E \u0002 E \u0002 exp \u0000\u03c4(S \u2212\u01ebL)2\u0001 |L \u0003m\u0003 , where L \u225c|I \u2229\u02dc I|, S \u223cBinom(L, \u01eb), and \u03c4 \u225c64nt4\u01eb2m = 64k2s2 mn . Assume that \u03c4 \u22721 m. 
(66) Recall that for any \u01eb, Bern(\u01eb) is subgaussian with parameter at most a constant c. Therefore S is subgaussian with parameter at most cL. By the equivalent characterization of subgaussian random variables [51, Proposition 2.5.2], we have E \u0002 exp \u0000\u03c4(S \u2212\u01ebL)2\u0001 |L \u0003 \u2264exp \u0000c\u03c4 2L \u0001 provided that \u03c4 2L \u2264c\u2032. Therefore E \u0002 exp \u000064nt4\u2225h2\u22252 F \u0001\u0003 \u2264E \u0002 exp \u0000c\u03c4 2mL \u0001\u0003 (a) \u2264 \u0012 1 + m p (exp \u0000c\u03c4 2m \u0001 \u22121) \u0013m (b) \u2264exp \u0012 c\u03c4 2m3 p \u0013 (c) \u22721 where (a) follows from the fact that hypergeometric distribution is stochastically dominated by binomial in the convex ordering [36]; (b) and (c) follow from (66) and hence \u03c4 2m \u22641 and \u03c4 2m3 p \u2264 \u03c4 2m2 \u22721. 23 \fFinally, we deal with h4, which is the term that requires the extra condition (15) on the sample size. Note that rank(h4) \u2264|I \u2229\u02dc I| and that \u2225h4\u22252 \u2264\u2225BII \u2212\u01ebJII\u22252 \u2225\u02dc B\u02dc I \u02dc I \u2212\u01ebJ\u02dc I \u02dc I\u22252. In view of the event E\u22c6we have conditioned on, Therefore \u2225h4\u22252 F \u2264\u2225BII \u2212\u01ebJII\u22252 2 \u2225\u02dc B\u02dc I \u02dc I \u2212\u01ebJ\u02dc I \u02dc I\u22252 2|I \u2229\u02dc I| \u2264c\u03c12|I \u2229\u02dc I|, where \u03c1 \u225ck \u2228 log m log e log m k . Hence E \u0002 exp \u000064nt4\u2225h4\u22252 F \u0001\u0003 \u2264E h exp \u0010 cnt4\u03c12|I \u2229\u02dc I| \u0011i = E \u0014 exp \u0012c\u03c12s2 n |I \u2229\u02dc I| \u0013\u0015 \u2264 \u0012 1 + m p \u0012 exp \u0012c\u03c12s2 n \u0013 \u22121 \u0013\u0013m \u2264exp \u0012cm2\u03c12s2 np \u0013 \u22721, provided that \u03c12s2 n \u22721 (67) and m2\u03c12s2 np \u22721. (68) To \ufb01nish the proof, we need to choose the parameters to ensure that that (40), (41), (65)\u2013(68) hold simultaneously. 
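Step (a) above rests on the fact that the hypergeometric distribution is dominated by the binomial in the convex ordering, so its moment generating function is bounded by the binomial one. A minimal numerical sketch of this bound, computed exactly from the pmf (the parameter values p = 20, m = 6 are illustrative, not from the paper):

```python
import math

def hypergeom_pmf(p, m, l):
    # P(L = l) for L ~ Hypergeometric(p, m, m): l marked items when drawing
    # m items without replacement from p items, m of which are marked.
    return math.comb(m, l) * math.comb(p - m, m - l) / math.comb(p, m)

def mgf_hypergeom(p, m, c):
    # E[exp(c L)], computed exactly by summing over the support {0, ..., m}.
    return sum(math.exp(c * l) * hypergeom_pmf(p, m, l) for l in range(m + 1))

def mgf_binom_bound(p, m, c):
    # E[exp(c S)] for S ~ Binom(m, m/p): (1 + (m/p)(e^c - 1))^m, the upper
    # bound implied by convex-order domination (exp is convex).
    return (1 + (m / p) * math.expm1(c)) ** m
```

For any c, `mgf_hypergeom(p, m, c) <= mgf_binom_bound(p, m, c)`, which is exactly the inequality used in step (a).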
Let s = \u03b4 log p ck3 , m = k2 \u0010 p k3 \u0011\u03b4 , if k \u2264p1/3 s = rcp k3 log ep k , m = s cpk log ep k , if k \u2265p1/3 so that (40) and (41) hold; here \u03b4 is any constant in (0, 2 3]. Moreover, the basic assumption on the sample size n \u2273k2s log2 p (69) guarantees (65), (66) and (67). Finally, the extra assumption on the sample size that n \u2273m2k2s2 p = ( k6 p \u0000 p k3 \u00012\u03b4 log2 p k \u2264p1/3 p k \u2265p1/3 (70) ensures (68) hold simultaneously. The lower bound kp s n follows from Lemma 2 and Lemma 6, completing the proof. 6 Computational limits In this section we address the computational aspects of detecting sparse matrices in both the Gaussian noise and the covariance model. 24 \fGaussian noise model The computational hardness of the red region (reducibility from planted clique) in Figure 1 follows from that of submatrix detection in Gaussian noise [13, 45], which is a special case of the model considered here. The statistical and computational boundary of submatrix detection is shown in Figure 2(b), in terms of the tradeo\ufb00between the sparsity k = p\u03b1 and the spectral norm of the signal \u03bb = p\u03b2. Below we explain how Figure 2(b) follows from the results in [45]. The setting in [45] also deals with the additive Gaussian noise model (2), where, under the alternative, the entries of the mean matrix M is at least \u03b8 on a k \u00d7k submatrix and zero elsewhere, with k = p\u03b1 and \u03b8 = p\u2212\u03b3. Since \u2225M\u22252 \u2265k\u03b8, this instance is included in the alternative hypothesis in (3) with \u03bb = p\u03b2 and \u03b2 = \u03b1 \u2212\u03b3. It is shown that (see [45, Theorem 2 and Fig. 1]) detection is computationally at least as hard as solving the planted clique problem when \u03b3 > 0 \u2228(2\u03b1 \u22121), i.e., \u03b2 < \u03b1 \u2227(1 \u2212\u03b1). 
Note that this bound is not monotone in α, and it can be readily improved to β < α ∧ 1/2, corresponding to the computational limit in Figure 2(b). Similarly, detection is statistically impossible when γ > (α/2) ∨ (2α − 1), i.e., β < (α/2) ∧ (1 − α). Taking the monotone upper envelope leads to β < (α/2) ∧ 1/3, yielding the statistical limit in Figure 2(a). Finally, Figure 1 can be obtained by superimposing the statistical-computational limits of Figure 2(b) on top of the statistical limit obtained in the present paper as plotted in Figure 2(a). Figure 2: Detection boundary for k-sparse matrices and k×k submatrices M in noise, where k = p^α and ∥M∥_2 = λ = p^β; panel (a) shows the statistical boundary for detecting sparse matrices (this paper), and panel (b) shows the statistical-computational boundary for detecting submatrices [45]. Sparse covariance model For the problem of detecting sparse covariance matrices, which is defined by the 4-tuple (n, p, k, λ), the picture is less complete than for the additive-noise counterpart; this is mainly due to the extra parameter n. Indeed, the statistical lower bound in Theorem 2 holds under the extra assumptions (14) and (15) that the sample size is sufficiently large, while the current computational lower bounds for sparse PCA in the literature [13, 52, 30] also require a number of conditions, including the assumption that n ≤ p. Nevertheless, if we still let k = p^α and λ√n = p^β and focus on the tradeoff between the (α, β) pair, the statistical and computational limits in Figure 1 continue to hold. Next we explain how to deduce the computational hardness of the red region from that of sparse PCA in the spiked Gaussian covariance model [30].
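The monotone envelopes for submatrix detection restated above can be summarized by a small classifier over the (α, β) plane; the function name and the three region labels are illustrative, chosen to match the regions described for Figure 2(b):

```python
def submatrix_detection_regime(alpha, beta):
    """Classify detection of a k x k planted submatrix with k = p^alpha and
    spectral-norm signal lambda = p^beta, per the envelopes restated in the
    text: statistically impossible when beta < min(alpha/2, 1/3); otherwise
    planted-clique hard when beta < min(alpha, 1/2); easy otherwise."""
    if beta < min(alpha / 2, 1 / 3):
        return "impossible"
    if beta < min(alpha, 1 / 2):
        return "PC-hard"
    return "easy"
```

For instance, at α = 0.8 the three thresholds are β = 1/3 and β = 1/2, matching the axis values in the figure.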
To this end, due to monotonicity, it su\ufb03ces to demonstrate a \u201chard instance\u201d, i.e., a sequence 25 \fof triples (n, \u03bb, k) indexed by p, for every (\u03b1, \u03b2) such that 1 3 < \u03b1 < 1 2 and \u03b2 < 1. Given samples X1, . . . , Xn i.i.d. \u223cN(0, \u03a3), the computational aspect of testing H0 : \u03a3 = I, versus H1 : \u03a3 = I + \u03bbuu\u22a4, (71) where the eigenvector u is both k-sparse and unit-norm, has been studied in [30]. Fix \u03b1 \u2208(1 3, 1 2). Let n = p\u03b7, k = p\u03b1 and \u03bb = ck2 n log2 n, so that \u03b2 = 2\u03b1 \u2212\u03b7, and let 1 a \u2264\u03b7 \u22641 to be chosen later; here a > 1 and c > 0 are absolute constants from [30, Theorem 5.4]. By assumption, (2\u03b1, 4\u03b1)\u2229( 1 a, 1) \u0338= \u2205; pick any \u03b7 therein. Then we have \u03bb \u226a1 and (71) is indeed an instance of (5). By the choice of the parameters, the conditions of [30, Theorem 5.4] are ful\ufb01lled, namely, \u03b2 < \u03b1 and \u03b1 > \u03b7 4, and the detection problem (71) and hence (5) are at least as hard as the planted clique problem. 7 Discussions In this paper, we studied the fundamental limits for sparse matrix detection from both the statistical and computational perspectives, where the alternative hypothesis is de\ufb01ned in terms of the spectral norm. The sparse matrices considered here have no apparent combinatorial structure and the corresponding estimation problem has no computational issue at all, but the detection problem has a surprising computational barrier when the sparsity level exceeds the cubic root of the matrix size. In this section we discuss two related problems, one is the detection problem when the alternative hypothesis is de\ufb01ned in terms of the Frobenius norm and another is the localization and estimation of a sparse matrix. 
7.1 Alternative hypothesis de\ufb01ned by the Frobenius norm As opposed to the alternative hypothesis in (3) for k-sparse matrices de\ufb01ned by the spectral norm, one can consider the detection problem with the alternative hypothesis de\ufb01ned in terms of the Frobenius norm: ( H0 : M = 0 H1 : \u2225M\u2225F \u2265\u03bb, M \u2208\u0398(p, k, \u03bb). (72) It turns out that in this case the sparsity plays no role in improving the detection boundary, in the sense that the optimal separation scales as \u03bb\u2217(k, p) \u224d\u221ap for all k \u22651. The intuition behind this result is the well-known fact that in the Gaussian sequence model, the sparsity of the signal does not help in the so-called \u201cdense regime\u201d when the sparsity level exceeds the square-root of the dimension [37, 25]. Here for k-sparse p \u00d7 p matrices in the sense of De\ufb01nition 1, the number of nonzeros can be as large as kp (e.g., block diagonal consisting of p/k number of k \u00d7 k blocks), which, since the ambient dimension is p2, lies in the dense regime. This result can be proved rigorously as follows. By the classical result of detection in the Gaussian sequence model (cf. e.g. [38, Sec. 3.3.6]), without sparsity, the optimal \u03bb for (72) is \u0398(p), achieved by the \u03c72-test, namely, thresholding on \u2225X\u2225F. Next we show that this is optimal even when k = 1. To see this, consider the prior where M is a random permutation matrix, which is 1-sparse by de\ufb01nition and \u2225M\u2225F = p with probability one. By Lemma 3, the \u03c72-divergence between the null and the alternative is \u03c72(PX|H0 \u2225PX|H1) + 1 = E h exp \u0010 \u27e8M, \u02dc M\u27e9 \u0011i = E [exp(Sp)] (73) where Sp is the number of \ufb01xed points of a uniform random permutation over p elements. Furthermore, it is well-known that (cf. [29, Section IV.4]) Sp converges in distribution to Poisson(1) as 26 \fp \u2192\u221eand, furthermore, P {Sn = \u2113} = 1 \u2113! 
Σ_{m=0}^{n−ℓ} (−1)^m/m! ≤ 2/ℓ! for any ℓ ≥ 0, which decays faster than any exponential tail. Therefore, by [42, Theorem 1], the moment generating function of Sn converges to that of Poisson(1) pointwise. In particular, E[exp(Sp)] → e^{e−1} as p → ∞. Hence the probability of error for testing is non-vanishing in view of (33). 7.2 Localization and denoising A problem that is closely related to detecting the presence of a sparse matrix is localization: the goal is to identify the support of the mean or covariance matrix with high probability. Under the row/column-wise sparsity assumption, if we measure the signal strength by the minimum non-zero entrywise magnitude, then it is easy to show that entrywise thresholding attains the minimax rate and there is no computational issue. In contrast, in the submatrix model, achieving the optimal rate for localization is again computationally difficult, as shown in [15] and [33] in the context of the Gaussian noise model and the community detection model, respectively. Denoising high-dimensional matrices with submatrix sparsity was studied in [46], where the goal is to estimate the mean matrix M based on the noisy observation in (2). It turns out that the computational difficulty of attaining the optimal rates crucially depends on the loss function [45, Section 5.2]. For instance, for the Frobenius norm loss, entrywise thresholding is rate-optimal, while achieving the optimal rate for the spectral norm loss is no easier than planted clique whenever k = p^α for any fixed 0 < α < 1. In contrast, as mentioned earlier, for the sparsity model studied in this paper, entrywise thresholding achieves the minimax rate simultaneously for both the Frobenius norm and the spectral norm losses [20]."
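The fixed-point computation above can be checked numerically: the stated pmf of the number of fixed points of a uniform random permutation lets one evaluate E[exp(S_p)] exactly, and it indeed approaches e^{e−1} ≈ 5.5749. A minimal sketch (function names are illustrative):

```python
import math

def pmf_fixed_points(p, l):
    # P(S_p = l) = (1/l!) * sum_{m=0}^{p-l} (-1)^m / m!, the probability that
    # a uniform random permutation of p elements has exactly l fixed points.
    return sum((-1) ** m / math.factorial(m) for m in range(p - l + 1)) / math.factorial(l)

def mgf_fixed_points(p):
    # E[exp(S_p)], computed exactly from the pmf over the support {0, ..., p}.
    return sum(math.exp(l) * pmf_fixed_points(p, l) for l in range(p + 1))
```

Already at p = 30 the value agrees with the Poisson(1) limit e^{e−1} to better than 10^{-6}, consistent with the non-vanishing testing error.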
+ }, + { + "url": "http://arxiv.org/abs/1709.03907v1", + "title": "Weighted Message Passing and Minimum Energy Flow for Heterogeneous Stochastic Block Models with Side Information", + "abstract": "We study the misclassification error for community detection in general\nheterogeneous stochastic block models (SBM) with noisy or partial label\ninformation. We establish a connection between the misclassification rate and\nthe notion of minimum energy on the local neighborhood of the SBM. We develop\nan optimally weighted message passing algorithm to reconstruct labels for SBM\nbased on the minimum energy flow and the eigenvectors of a certain Markov\ntransition matrix. The general SBM considered in this paper allows for\nunequal-size communities, degree heterogeneity, and different connection\nprobabilities among blocks. We focus on how to optimally weigh the message\npassing to improve misclassification.", + "authors": "T. Tony Cai, Tengyuan Liang, Alexander Rakhlin", + "published": "2017-09-12", + "updated": "2017-09-12", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction The stochastic block model (SBM), or planted partition model, is a celebrated model that captures the clustering or community structure in large networks. Fundamental phase transition phenomena and limitations for ef\ufb01cient algorithms have been established for the \u201cvanilla\u201d SBM, with equal-size communities [9, 10, 27, 30, 31, 24, 1, 16, 2, 11]. However, when applying the algorithms to real network datasets, one needs to carefully examine the validity of the vanilla SBM model. First, real networks are heterogeneous and imbalanced; they are often characterized by unequal community size, degree heterogeneity, and distinct connectivity strengths across communities. Second, in real networks, additional side information is often available. 
This additional information may come, for instance, in the form of a small portion of revealed community memberships, or in the form of node features, or both. In this paper, we aim to address the above concerns by answering the following questions: Algorithm: For a general stochastic block model that allows for heterogeneity and contains noisy or partial side information, how to utilize this information to achieve better classification performance? Theory: What is the transition boundary on the signal-to-noise ratio for a general heterogeneous stochastic block model? Is there a physical explanation for the optimal misclassification error one can achieve? 1.1 Problem Formulation We define the general SBM with parameter bundle (n, k, N ∈ R^k, Q ∈ R^{k×k}) as follows. Let n denote the number of nodes and k the number of communities. The vector N = [n_1, n_2, ..., n_k]^T denotes the number of nodes in each community. The symmetric matrix Q = [Q_ij] represents the connection probabilities: Q_ij is the probability of a connection between a node in community i and a node in community j. Specifically, one observes a graph G(V, E) with |V| = n, generated from the SBM as follows. There is a latent disjoint partition that divides V = ∪_{l=1}^k V_l into k communities. Define ℓ(·) : V → [k] to be the label (or community) of a node v. For any two nodes v, u ∈ V, there is an edge (u ↔ v) ∈ E with probability Q_{ℓ(u),ℓ(v)}. The goal is to recover the latent label ℓ(v) for each node v. Here we consider the following kinds of heterogeneity: unequal-size communities (represented by [n_i]), different connection probabilities across communities (as given by [Q_ij]), and degree heterogeneity (due to both [n_i] and [Q_ij]).
We study the problem when either noisy or partial label information is available in addition to the graph structure, and show how to “optimally” improve the classification result (in terms of misclassification error). We argue that this is common for many practical problems. First, in real network datasets, a small portion of labels (or community memberships) is often available. Second, a practitioner often has a certain initial guess of the membership, either through training regression models using node features and partially revealed labels as side information, or by running certain clustering algorithms (for example, spectral clustering using the non-backtracking matrix, semi-definite programs, or the modularity method) on a subset of or the whole network. We will show that as long as these initial guesses are better than random assignments, one can “optimally weigh” the initial guess according to the network structure to achieve small misclassification error. Formally, the noisy (or partial) information is defined as a labeling ℓ̃_prior on the nodes of the graph with the following stochastic description. The parameter δ quantifies either (a) the portion of randomly revealed true labels (with the rest of the entries in ℓ̃_prior missing), or (b) the accuracy of the noisy labeling ℓ̃_prior, meaning P(ℓ̃_prior(v) = ℓ(v)) = (1 − δ)/k + δ, and when ℓ̃_prior(v) ≠ ℓ(v), each label occurs with equal probability. 1.2 Prior Work In the literature on the vanilla SBM (equal-size communities, symmetric case), there are two major criteria: weak and strong consistency. Weak consistency asks for recovery better than random guessing in a sparse random graph regime (p, q ≍ 1/n), and strong consistency requires exact recovery for each node above the connectedness threshold (p, q ≍ log n/n).
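For concreteness, a minimal sketch of sampling a graph from the general SBM with parameter bundle (N, Q) and of the noisy-label model in case (b); the function names are illustrative, and the noise scheme (keep the true label with probability δ, otherwise draw uniformly over [k]) is one concrete realization of the stated accuracy P(ℓ̃_prior(v) = ℓ(v)) = (1 − δ)/k + δ:

```python
import random

def sample_general_sbm(N, Q, seed=0):
    """Sample a graph from the general SBM with community sizes N = [n_1,...,n_k]
    and symmetric connection probabilities Q[i][j]; returns (labels, adjacency)."""
    rng = random.Random(seed)
    labels = [i for i, n_i in enumerate(N) for _ in range(n_i)]
    n = len(labels)
    adj = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):  # undirected, no self-loops
            if rng.random() < Q[labels[u]][labels[v]]:
                adj[u][v] = adj[v][u] = 1
    return labels, adj

def noisy_prior(labels, k, delta, seed=1):
    # Keep the true label with probability delta, else draw uniformly over [k],
    # so that P(noisy == true) = delta + (1 - delta)/k.
    rng = random.Random(seed)
    return [l if rng.random() < delta else rng.randrange(k) for l in labels]
```
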
Interesting phase transition phenomena in weak consistency for the SBM were discovered in [10] via the insightful cavity method from statistical physics. Sharp phase transitions for weak consistency have been thoroughly investigated in [9, 30, 31, 32, 27]. In particular, for k = 2, spectral algorithms on the non-backtracking matrix were studied in [27] and the non-backtracking walk in [32]. In these two fundamental papers, the authors resolved the conjecture on the transition boundary for weak consistency posed in [10]. Spectral algorithms as initialization and belief propagation as further refinement to achieve better recovery were established in [31]. Recent work of [3] establishes the positive detectability result down to the Kesten-Stigum bound for all k via a detailed analysis of a modified version of belief propagation. For strong consistency, [1, 16, 17] established the phase transition using information-theoretic tools and semi-definite programming (SDP) techniques. In the statistics literature, [40, 14] investigated the misclassification rate of the standard SBM. For the general SBM with connectivity matrix Q, [15, 5, 7] provided sharp non-asymptotic upper bound analysis on the performance of a certain semi-definite program. They investigated the conditions on Q for a targeted recovery accuracy, quantified as the loss (as a matrix norm) between the SDP solution and the ground truth. The results are more practical for heterogeneous real networks. However, for the analysis of the SDP to work, these results all assume certain density gap conditions on the entries of Q. The energy E(r, i) of a unit flow i at resistance level r > 0 is defined as E(r, i) := Σ_{v∈T} i(⇝v)² r^{|v|}. The minimum energy E*(r) is E*(r) := inf_i E(r, i), where the infimum is over all valid unit flows. Denote the minimum energy flow by i*. When assigning resistance r^d to edges that are at depth d from the root, the energy enjoys a natural physical interpretation.
We also remark that, for a given resistance level, one can calculate the minimum energy flow i* on the tree using Thomson's principle. We identify the reciprocal of the resistance level with the conductance level. Now we are ready to define the branching number of a tree T through the minimum energy. Definition 3 (Branching Number). The branching number br(T) can be defined as br(T) := sup{r : E*(r) < ∞} = sup{r : inf_i Σ_{v∈T} i(⇝v)² r^{|v|} < ∞}. It is well known that the branching number captures not only the growth rate of the tree, but also more detailed structure, such as imbalance [26]. 2.2 Broadcasting Trees and SBM When viewed locally, stochastic block models in the sparse regime share similarities with a label broadcasting process on a Galton-Watson tree. In fact, the local neighborhood of the SBM can be coupled with a broadcasting tree with high probability as n → ∞. This phenomenon has been investigated in studying the detectability and reconstruction thresholds for the vanilla SBM (equal-size communities, symmetric case), as in [30]. Let us formally define the label broadcasting process conditioned on a tree T(o). Definition 4 (Label Broadcasting). Given a tree T(o), the k-broadcasting process on T with Markov transition matrix K ∈ R^{k×k} describes the following process of label evolution. Conditioning on a node v and its label ℓ(v) ∈ [k], the labels of the children u ∈ C(v) are sampled independently from P(ℓ(u) | ℓ_{T_{|v|}}(o)) = P(ℓ(u) | ℓ(v)) = K_{ℓ(v),ℓ(u)}, where the first equality is the Markov property. Let us review the definition of the multi-type Galton-Watson tree. We shall only consider the Poisson branching process. Definition 5 (Multi-type Galton-Watson Tree). Consider a k-type Galton-Watson process with mean matrix M ∈ R^{k×k}.
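Thomson's principle lets one compute E*(r) and the minimum energy flow i* by series/parallel reduction: the minimum energy of a unit flow equals the effective resistance of the tree when the edge into a depth-d node carries resistance r^d. A minimal sketch on a small imbalanced tree (the tree, parameter values, and function names are illustrative):

```python
def min_energy_flow(children, resistance, node=0, depth=0):
    """Effective resistance (= minimum energy, by Thomson's principle) of the
    subtree below `node`, where the edge into a depth-d node has resistance
    resistance**d. Also returns the minimum-energy unit flow on each edge."""
    kids = children.get(node, [])
    if not kids:
        return 0.0, {}
    branch, flows = {}, {}
    for c in kids:
        r_sub, f_sub = min_energy_flow(children, resistance, c, depth + 1)
        branch[c] = resistance ** (depth + 1) + r_sub  # series combination
        flows.update(f_sub)
    total_conductance = sum(1.0 / r for r in branch.values())
    r_eff = 1.0 / total_conductance  # parallel combination of the branches
    for c in kids:
        share = (1.0 / branch[c]) / total_conductance  # current splits by conductance
        flows[c] = share
        for d in children.get(c, []):
            _scale_subtree(children, flows, d, share)
    return r_eff, flows

def _scale_subtree(children, flows, node, factor):
    flows[node] *= factor
    for c in children.get(node, []):
        _scale_subtree(children, flows, c, factor)
```

On the tree with root 0, children {0: [1, 2], 1: [3, 4], 2: [5]} and resistance level r = 2, the two branches have resistances 2 + 2²/2 = 4 and 2 + 2² = 6, so E* = (1/4 + 1/6)^{-1} = 2.4 and the flow splits 0.6 / 0.4; imbalance shows up directly in the energy.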
For a node v, given its type ℓ(v) = i, the number of type-j children of v follows a Poisson(m_ij) distribution, independently across types. Start the process recursively for t generations from the root o. The tree T_t(o) is called a multi-type Galton-Watson tree. 2.3 Notation The moment generating function (MGF) of a random variable X is denoted by Ψ_X(λ) = E e^{λX}. For asymptotic order of magnitude, we use a(n) = O(b(n)) to denote that a(n) ≤ C b(n) for all n and some universal constant C, and use O*(·) to omit poly-logarithmic dependence. As for the notation ≾, ≿: a(n) ≾ b(n) if and only if lim_{n→∞} a(n)/b(n) ≤ C for some constant C > 0, and vice versa. The square bracket [·] is used to represent the index set [k] := {1, 2, ..., k}; in particular, when k = 2, [2] := {+, −} for convenience. Recall that the hyperbolic tangent is tanh x = (e^x − e^{−x})/(e^x + e^{−x}). The message-passing algorithm in the following sections involves a non-linear update rule defined through the function f_{θ1,θ2}(x) := log[(1 + θ1 tanh(x/2))/(1 − θ2 tanh(x/2))], (3) for 0 < θ1, θ2 < 1. Note that the derivative f′_{θ1,θ2}(0) = (θ1 + θ2)/2. 3 Two Communities In this section we illustrate the main results for the case of two, possibly imbalanced, communities. We motivate the weighted message passing algorithm and its relation to minimum energy flow. We investigate the connection between misclassification and minimum energy, as well as the corresponding transition threshold for the general SBM. 3.1 Main Algorithmic and Theoretical Results This section serves as an informal summary of the results for k = 2. As a start, we introduce the following weighted message passing (WMP) algorithm, Algorithm 1. Algorithm 1: Weighted Message Passing. Data: Graph G(V, E) with noisy label information ℓ̃_prior. Parameters: neighborhood radius t̄ and conductance level θ̄².
Result: the labeling for each node o ∈ V. For each node o ∈ V: open the tree neighborhood T_t̄(o) induced by the graph G(V, E). Layer t̄: for every node u ∈ C_t̄(o) at distance t̄ from the root on T_t̄(o), initialize its message M(u, 0) = θ̄^{−2|u|} · i*(⇝u) · sign[ℓ̃_prior(u)], where i*(⇝u) is the minimum energy flow to u, calculated via Thomson's principle on T_t̄(o) with conductance level θ̄². For t = 1, ..., t̄, layer t̄ − t: for every node u ∈ C_{t̄−t}(o), calculate the message M(u, t) through the linearized update rule M(u, t) = Σ_{v∈C(u)} θ̄ M(v, t−1). Output ℓ̂_wmp(o) = sign[M(o, t̄)]. We remark that WMP can run in parallel for all nodes due to its decentralized nature. For fixed depth t̄ and a sparse SBM (when n max_{i,j} Q_ij ≾ log n), the algorithm runs in O*(n) time. The following theorem is a simplified version of Theorems 2 and 3 below. Theorem 1 (General SBM: k = 2). Consider the general stochastic block model G(V, E) with parameter bundle (n, k = 2, N, Q), with either partial or noisy label information ℓ̃_prior with parameter 0 < δ < 1. Assume that n max_{i,j} Q_ij ≾ n^{o(1)}. For any node o ∈ V and its depth-t leaf labels ℓ̃_prior(C_t(o)), define the worst-case misclassification error of a local estimator σ_t(o) : ℓ̃_prior(C_t(o)) → {+, −} as Err(σ_t) := max_{l∈{+,−}} P(σ_t(o) ≠ ℓ(o) | ℓ(o) = l). (4) Define θ̄ := (1/4) [(n_1 Q_11 − n_2 Q_12)/(n_1 Q_11 + n_2 Q_12) + (n_2 Q_22 − n_1 Q_21)/(n_1 Q_21 + n_2 Q_22)] (5) and λ := λ_1([[n_1 Q_11, n_2 Q_12], [n_1 Q_21, n_2 Q_22]]), (6) where λ_1(·) denotes the largest eigenvalue. Let E*(θ̄^{−2}) be the minimum energy on T_t(o) with conductance level θ̄² as t → ∞.
The transition boundary for this general SBM depends on the value $\mathrm{SNR} = \lambda \bar\theta^2$. On the one hand, if $\lambda \bar\theta^2 > 1$, the WMP Algorithm 1, denoted $\hat\ell_{\mathrm{wmp}}$, enjoys the following upper bound on misclassification:
$$\limsup_{t \to \infty} \limsup_{n \to \infty} \mathrm{Err}(\hat\ell_{\mathrm{wmp}}) \le \exp\left( -\frac{1}{2 E^*(1/\bar\theta^2)} \right) \wedge \frac{1}{2}, \qquad (7)$$
for any fixed $\delta > 0$. On the other hand, if $\lambda \bar\theta^2 < 1$, for any local estimator $\sigma_t$ that uses only label information on depth-$t$ leaves, the minimax misclassification error is lower bounded by
$$\liminf_{t \to \infty} \liminf_{n \to \infty} \inf_{\sigma_t} \mathrm{Err}(\sigma_t) = \frac{1}{2}. \qquad (8)$$

Remark 1. We remark that Algorithm 1 is stated for the case when noisy label information is known for all nodes in layer $\bar t$. For the case of partial label information, there are two options to modify the initialization of the algorithm: (1) view the partial label information with parameter $\delta$ as noisy label information on layer $\bar t$ only, with $\mathbb{P}(\tilde\ell_{\mathrm{prior}}(u) = \ell(u)) = \delta + (1 - \delta) \cdot \frac{1}{2}$ (with probability $\delta$ the label is revealed exactly, and with probability $1 - \delta$ it is decided by a coin flip), then proceed with the algorithm; (2) view the partial information as a $\delta$ portion of nodes on each layer whose labels are revealed exactly. Call the set of these nodes $V^l(T_{\bar t}(o))$. Then we need to initialize the message $M(u)$ for all $u \in V^l(T_{\bar t}(o))$ first, before using the recursion $M(u) = \sum_{v \in C(u)} \bar\theta M(v)$. It can be shown that these two treatments enjoy similar asymptotic performance in terms of misclassification error above the SNR threshold. However, the latter performs better numerically for fixed-depth trees, as it utilizes more information.
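To make Algorithm 1 concrete, here is a minimal runnable sketch (ours, for illustration only) of the WMP recursion on a $b$-ary regular tree, where the minimum-energy flow is uniform over the leaves ($i^*(\rightsquigarrow u) = b^{-\bar t}$); the function and parameter names are our own.

```python
from itertools import product

def wmp_regular(b, d, theta_bar, leaf_sign):
    # Leaves are tuples in {0,...,b-1}^d; the root is the empty tuple.
    # Layer d: M(u,0) = theta_bar^{-2|u|} * i*(u) * sign(noisy label),
    # with uniform flow i*(u) = b^{-d} on a regular tree.
    msgs = {u: theta_bar ** (-2 * d) * b ** (-d) * leaf_sign(u)
            for u in product(range(b), repeat=d)}
    # Layers d-1,...,0: linearized update M(u,t) = sum_v theta_bar * M(v,t-1).
    for depth in range(d - 1, -1, -1):
        msgs = {u: sum(theta_bar * msgs[u + (i,)] for i in range(b))
                for u in product(range(b), repeat=depth)}
    return msgs[()]  # root message; classify the root by its sign

root_msg = wmp_regular(b=3, d=4, theta_bar=0.8, leaf_sign=lambda u: +1)
print(root_msg > 0)  # True: all-plus leaf labels give a positive root message
```

The decentralized structure is visible here: each node's message depends only on its children, so the per-root computation can run in parallel across roots.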
We decompose the proof of Theorem 1 into several building steps: (1) conditioned on the local tree structure, prove concentration-of-measure for WMP messages when labels propagate according to a Markov transition matrix $K$; (2) for a typical tree instance generated from the multi-type Galton-Watson process, establish the connection among the misclassification rate, the transition boundary, and the minimum energy through the concentration result; (3) show that in the sparse graph regime of interest, the local neighborhood of the general SBM can be coupled with a multi-type Galton-Watson tree with Markov transition matrix
$$K := \begin{bmatrix} \frac{n_1 Q_{11}}{n_1 Q_{11} + n_2 Q_{12}} & \frac{n_2 Q_{12}}{n_1 Q_{11} + n_2 Q_{12}} \\ \frac{n_1 Q_{21}}{n_1 Q_{21} + n_2 Q_{22}} & \frac{n_2 Q_{22}}{n_1 Q_{21} + n_2 Q_{22}} \end{bmatrix}$$
for label broadcasting (the explicit expression based on Eq. (1)). We remark that (3) follows a similar proof strategy as in [30], where the coupling for the vanilla SBM has been established. The lower bound follows from Le Cam's testing argument, and the difficulty lies in analyzing the distance between measures recursively on the local tree.

Remark 2. When the local tree is regular and symmetric and $\lambda \bar\theta^2 > 1$, the minimum energy can be evaluated exactly as
$$E^*(\bar\theta^{-2}) = \frac{1}{\lambda \bar\theta^2 - 1},$$
which implies that the misclassification error takes the exponentially decaying form $\exp\left( -\frac{\mathrm{SNR} - 1}{2} \right)$. Hence, the result provides a detailed understanding of the strength of the SNR and its effect on misclassification, i.e., the inference guarantee. More concretely, for the vanilla SBM in the regime $p = a/n$, $q = b/n$, the boundary is $\mathrm{SNR} = \frac{n(p-q)^2}{2(p+q)} > 1$, which is equivalent to the boundary $\frac{(a-b)^2}{2(a+b)} > 1$ for weak consistency in [32, 27]. In addition, one observes that $\mathrm{SNR} > 1 + 2\log n$ implies $\mathrm{Err}(\hat\ell) < 1/n \to 0$, which asserts strong consistency.
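The algebraic reduction in Remark 2 is easy to check numerically; the sketch below (ours, with illustrative $a$, $b$) confirms that $n(p-q)^2 / (2(p+q))$ with $p = a/n$, $q = b/n$ equals $(a-b)^2 / (2(a+b))$.

```python
def snr_vanilla(n, a, b):
    # SNR = n(p - q)^2 / (2(p + q)) with p = a/n, q = b/n.
    p, q = a / n, b / n
    return n * (p - q) ** 2 / (2 * (p + q))

n, a, b = 10**6, 9.0, 2.0
print(snr_vanilla(n, a, b))               # equals (a-b)^2 / (2(a+b))
print((a - b) ** 2 / (2 * (a + b)))       # 49/22, above the threshold 1
```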
This condition on the SNR is satisfied, for instance, by taking $p = a \log n / n$, $q = b \log n / n$ in the vanilla SBM and computing the relationship between $a, b$ that ensures $\mathrm{SNR} = \frac{n(p-q)^2}{2(p+q)} > 1 + 2\log n$. This relationship is precisely
$$\frac{\sqrt{a} - \sqrt{b}}{\sqrt{2}} > \sqrt{1 + \frac{1}{2\log n}} \cdot \frac{\sqrt{2(a+b)}}{\sqrt{a} + \sqrt{b}} > 1.$$
The above agrees with the threshold for strong recovery in [1, 16].

3.2 Weighted Message Passing & Minimum Energy Flow

In this section, we motivate our proposed weighted message passing (WMP) from the well-known belief propagation (BP) on trees. There are two interesting components in the WMP Algorithm 1: the linearization and the initialization. We discuss each in detail in this section. Recall Definition 4 of the label broadcasting process on a tree $T(o)$ with $k = 2$. For convenience, denote the Markov transition matrix $K$ by
$$K = \begin{bmatrix} \frac{1+\theta_1}{2} & \frac{1-\theta_1}{2} \\ \frac{1-\theta_2}{2} & \frac{1+\theta_2}{2} \end{bmatrix}. \qquad (9)$$
The BP algorithm is the Bayes optimal algorithm on trees given the labels of the leaves. Define for a node $u \in V$ the BP message
$$B(u, t) := \log \frac{\mathbb{P}(\ell(u) = + \mid \ell_{\mathrm{obs}}(T_t(u)))}{\mathbb{P}(\ell(u) = - \mid \ell_{\mathrm{obs}}(T_t(u)))},$$
which is the posterior logit of $u$'s label given the observed labels $\ell_{\mathrm{obs}}(T_t(u))$. Using the Bayes rule and conditional independence, one can write out the explicit evolution of the BP message through $f_{\theta_1, \theta_2}$ in (3):
$$B(u, t) = \sum_{v \in C(u)} \log \frac{1 + \theta_1 \tanh \frac{B(v, t-1)}{2}}{1 - \theta_2 \tanh \frac{B(v, t-1)}{2}} = \sum_{v \in C(u)} f_{\theta_1, \theta_2}(B(v, t-1)), \qquad (10)$$
with $\theta_1, \theta_2$ as in the Markov transition matrix $K$. While the method is Bayes optimal, the density of the messages $B(u, t)$ is difficult to analyze, due to the blended effect of the dependence on revealed labels and the non-linearity of $f_{\theta_1, \theta_2}$. However, the WMP Algorithm 1, a linearized BP, shares the same transition threshold with BP and is easier to analyze.
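As a sanity check (ours, not the paper's), the linearization slope $f'_{\theta_1,\theta_2}(0) = (\theta_1 + \theta_2)/2$ coincides with the second eigenvalue of the transition matrix $K$ in Eq. (9); the values of $\theta_1, \theta_2$ below are illustrative.

```python
import math

def f(x, t1, t2):
    # BP update rule f_{theta1,theta2} of Eqs. (3) and (10).
    th = math.tanh(x / 2)
    return math.log((1 + t1 * th) / (1 - t2 * th))

def second_eig(K):
    # Smaller eigenvalue of a 2x2 matrix via trace and determinant.
    tr = K[0][0] + K[1][1]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return (tr - (tr * tr - 4 * det) ** 0.5) / 2

t1, t2 = 0.6, 0.5
K = [[(1 + t1) / 2, (1 - t1) / 2],
     [(1 - t2) / 2, (1 + t2) / 2]]
slope = (f(1e-6, t1, t2) - f(-1e-6, t1, t2)) / 2e-6  # finite-difference f'(0)
print(slope, second_eig(K), (t1 + t2) / 2)           # all approximately 0.55
```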
Above a certain threshold, WMP succeeds, which implies that the optimal BP also works. Below the same threshold, even the optimal BP fails, and so does WMP. The updating rule for WMP messages $M(u, t)$ is simply a replacement of Eq. (10) by its linearized version,
$$M(u, t) = \sum_{v \in C(u)} \frac{\theta_1 + \theta_2}{2} M(v, t-1).$$
The initialization of the WMP messages $M(u, 0)$ on the leaves whose labels have been observed is crucial to the control of the misclassification error at the root node, especially for the general SBM with heterogeneous degrees. For the general SBM, one should expect to initialize the messages according to the detailed local tree structure, where the degree of each node can be very different. It turns out that the optimal misclassification for WMP is related to a notion called the minimum energy $E^*$. Moreover, the optimal initialization for the leaf message at $u$ is proportional to the minimum energy flow $i^*(\rightsquigarrow u)$ on the local tree with conductance level $\bar\theta^2$. In plain language, $i^*(\rightsquigarrow u)$ provides a quantitative statement of the importance of the vote that $u$ casts for the root. Note that for imbalanced trees, $i^*$ can vary significantly from node to node, and it can be computed efficiently given the tree structure $T_t(o)$ for a specified conductance level.

3.3 Concentration, Misclassification & Energy

We now prove the concentration-of-measure phenomenon for WMP messages. Through the concentration, we show the close connection between misclassification and energy. We first state the result conditioned on the tree structure $T_t(o)$.

Lemma 1 (Concentration on Messages). Recall the label broadcasting process with Markov transition kernel $K \in \mathbb{R}^{2 \times 2}$ on the tree $T_{\bar t}(o)$.
Assume the MGF of the messages $M(u, 0)$ on the leaves satisfies
$$\mathbb{E}\left[ e^{\lambda M(u,0)} \mid \ell(u) = + \right] \le e^{\lambda \mu_0(u,+)} e^{\frac{\lambda^2 \sigma_0^2(u)}{2}}, \qquad \mathbb{E}\left[ e^{\lambda M(u,0)} \mid \ell(u) = - \right] \le e^{\lambda \mu_0(u,-)} e^{\frac{\lambda^2 \sigma_0^2(u)}{2}}$$
for any $\lambda$, with parameters $\mu_0(u) = \begin{bmatrix} \mu_0(u,+) \\ \mu_0(u,-) \end{bmatrix} \in \mathbb{R}^2$ and $\sigma_0^2(u) \in \mathbb{R}$. Define the following updating rules for a node $v$:
$$\mu_t(v) = \sum_{u \in C(v)} \bar\theta K \mu_{t-1}(u), \qquad (11)$$
$$\sigma_t^2(v) = \sum_{u \in C(v)} \bar\theta^2 \left\{ \sigma_{t-1}^2(u) + \left[ \frac{\mu_{t-1}(u,+) - \mu_{t-1}(u,-)}{2} \right]^2 \right\}. \qquad (12)$$
Then the following concentration-of-measure holds for the root message $M(o, \bar t)$:
$$M(o, \bar t)\big|_{\ell(o)=+} \ge \mu_{\bar t}(o,+) - x \cdot \sigma_{\bar t}(o), \qquad M(o, \bar t)\big|_{\ell(o)=-} \le \mu_{\bar t}(o,-) + x \cdot \sigma_{\bar t}(o),$$
each with probability $1 - \exp(-\frac{x^2}{2})$. In addition, if we choose $\frac{\mu_{\bar t}(o,+) + \mu_{\bar t}(o,-)}{2}$ as the cut-off to produce the classification $\hat\ell_{\mathrm{wmp}}$, then the misclassification error is upper bounded by
$$\exp\left( -\frac{\left[\mu_{\bar t}(o,+) - \mu_{\bar t}(o,-)\right]^2}{8 \sigma_{\bar t}^2(o)} \right). \qquad (13)$$
The above lemma provides an expression for the classification error. The next theorem shows that, with the "optimal" initialization of WMP, the misclassification error is connected to the minimum energy.

Theorem 2 (Connection between Misclassification & Energy). Define the current flow
$$i(\rightsquigarrow v) = \frac{\bar\theta^{2|v|} \left[ \mu_{t-|v|}(v,+) - \mu_{t-|v|}(v,-) \right]}{\mu_t(o,+) - \mu_t(o,-)}.$$
Then it is a valid unit flow on $T_t(o)$, and the following equation holds:
$$\frac{\sigma_t^2(o)}{\left[ \frac{\mu_t(o,+) - \mu_t(o,-)}{2} \right]^2} = (1 + o_t(1)) \sum_{v \in T_t(o)} i(\rightsquigarrow v)^2 \left( \bar\theta^{-2} \right)^{|v|} = (1 + o_t(1)) \, E_t(i, \bar\theta^{-2})$$
when $\lim_{t \to \infty} E_t(i, \bar\theta^{-2}) < \infty$.
Moreover, if we choose $\mu_0(v)$ so that $i$ is the minimum energy flow, then under the condition $\mathrm{br}[T(o)] \bar\theta^2 > 1$ we have $E^*(\bar\theta^{-2}) < \infty$ and
$$\lim_{t \to \infty} \inf_i \frac{\sigma_t^2(o)}{\left[ \frac{\mu_t(o,+) - \mu_t(o,-)}{2} \right]^2} \le \sum_{v \in T(o)} i^*(\rightsquigarrow v)^2 \left( \bar\theta^{-2} \right)^{|v|} = E^*(\bar\theta^{-2}). \qquad (14)$$

Remark 3. Theorem 2 and Lemma 1 together state that if $\mathrm{br}[T(o)] \bar\theta^2 > 1$, then $E^*(\bar\theta^{-2})$ is finite, and the optimal initialization of WMP enjoys the asymptotic misclassification error bound
$$\exp\left( -\frac{1}{2 E^*(\bar\theta^{-2})} \right).$$
Qualitatively, the smaller the minimum energy, the smaller the misclassification error, and it decays exponentially. On the contrary, if the minimum energy is infinite ($\mathrm{br}[T(o)] \bar\theta^2 < 1$), the misclassification error bound for WMP becomes vacuous. Another remark is that when the tree is regular, the minimum energy takes the simple form
$$E^*(\bar\theta^{-2}) = \frac{1}{\mathrm{br}[T(o)] \bar\theta^2 - 1},$$
which implies the upper bound $\exp\left( -\frac{\mathrm{br}[T(o)] \bar\theta^2 - 1}{2} \right)$ on the asymptotic misclassification error.

3.4 Below the Threshold: Limitation of Local Algorithms

In this section, we show that the SNR threshold (for the WMP algorithm) is indeed sharp for the class of local algorithms. The argument is based on Le Cam's method. We prove a generic lower bound for any fixed tree $T_t(o)$ and for the $k = 2$ label broadcasting process with transition matrix $K$ (as in Eq. (9)).

Theorem 3 (Limitation of Local Algorithms). Recall the label broadcasting process with Markov transition kernel $K$ on the tree $T_t(o)$. Consider the case when noisy label information (with parameter $\delta$) is known on the depth-$t$ layer of leaf nodes. Denote by $\pi^+_{\ell_{T_t}(o)}, \pi^-_{\ell_{T_t}(o)}$ the distributions of the leaf labels given $\ell(o) = +, -$, respectively.
Under the condition $\mathrm{br}[T(o)] \bar\theta^2 < 1$, if $\log\left(1 + \frac{4\delta^2}{1 - \delta^2}\right) \le 1 - \mathrm{br}[T(o)] \bar\theta^2$, the following equality on total variation holds:
$$\lim_{t \to \infty} d_{\mathrm{TV}}^2 \left( \pi^+_{\ell_{T_t}(o)}, \pi^-_{\ell_{T_t}(o)} \right) = 0.$$
Furthermore, the above equation implies
$$\lim_{t \to \infty} \inf_{\sigma_t} \sup_{l \in \{+,-\}} \mathbb{P}(\sigma_t(o) \ne \ell(o) \mid \ell(o) = l) = \frac{1}{2},$$
where $\sigma_t(o) : \tilde\ell_{\mathrm{prior}}(C_t(o)) \to \{+, -\}$ is any estimator mapping the prior labels in the local tree to a decision.

The above theorem is stated for the case when the noisy label information is known (and only known) for all nodes in layer $t$. One can interpret the result as follows: below the threshold $\mathrm{br}[T(o)] \bar\theta^2 < 1$, one cannot do better than random guessing for the root's label based on the noisy leaf labels at depth $t$ as $t \to \infty$. The proof relies on a technical lemma on the branching number and cutsets as in [37]. We remark that the condition $\log(1 + \frac{4\delta^2}{1-\delta^2}) \le 1 - \mathrm{br}[T(o)] \bar\theta^2$ can be satisfied when $\delta$ is small.

4 General Number of Communities

In this section, we extend the algorithmic and theoretical results to the general SBM for any fixed $k$, or for $k$ growing slowly with respect to $n$. There are several differences between the general-$k$ case and the $k = 2$ case. First, algorithmically, the procedure for general $k$ requires another layer of weighted aggregation beyond the weights introduced by the minimum energy flow (which account for the detailed tree irregularity). The proposed procedure introduces weights on the types of revealed labels ($k$ types), and then aggregates the information in the most "informative direction" to distinguish the root's label.
Second, the theoretical tools we employ enable us to formally describe the intuition that, in some cases of the general SBM, one can distinguish the communities $i, j$ from $l$ while not being able to tell $i$ and $j$ apart. We call this set identification.

4.1 Summary of Results

We summarize in this section the main results for the general SBM with $k$ unequal-size communities, and introduce the corresponding weighted message passing (WMP) algorithm. We need one additional piece of notation before stating the main result. For a vector $w \in \mathbb{R}^k$, assume there are $m$ unique values among $w_l, l \in [k]$. Denote by $S_i, 1 \le i \le m$, the sets of equivalent values associated with $w$: for any $l, l' \in [k]$, $w_l = w_{l'}$ if and only if $l, l' \in S_i$ for some $i \in [m]$. Denote by $w_{S_i}$ the equivalent value $w_l, l \in S_i$.

Theorem 4 (General SBM: $k$ communities). Consider the general stochastic block model $G(V, E)$ with parameter bundle $(n, k, N, Q)$, with either partial or noisy label information $\tilde\ell_{\mathrm{prior}}$ with parameter $0 < \delta < 1$. Assume that $n \max_{i,j} Q_{ij} \precsim n^{o(1)}$. For any node $o \in V$ and its depth-$t$ leaf labels $\tilde\ell_{\mathrm{prior}}(C_t(o))$, define the set misclassification error of a local estimator $\sigma_t(o) : \tilde\ell_{\mathrm{prior}}(C_t(o)) \to [k]$ as
$$\mathrm{Err}_{S,T}(\sigma_t) := \max\{ \mathbb{P}(\sigma_t(o) \in S \mid \ell(o) \in T), \ \mathbb{P}(\sigma_t(o) \in T \mid \ell(o) \in S) \}, \qquad (15)$$
where $S, T \subset [k]$ are two disjoint subsets. Define
$$K := \left[ \mathrm{diag}(QN) \right]^{-1} Q \, \mathrm{diag}(N), \qquad M = Q \, \mathrm{diag}(N), \qquad (16)$$
$$\theta := \lambda_2(K), \qquad \lambda := \lambda_1(M). \qquad (17)$$
Let $E^*(1/\theta^2)$ be the minimum energy on $T_t(o)$ with conductance level $\theta^2$ as $t \to \infty$. Assume that $K$ is symmetric, and denote by $V \subset \mathbb{R}^k$ the space spanned by the second eigenvectors of $K$. Choose any $w \in V$, $w \perp \mathbf{1}$, as the initialization vector in the WMP Algorithm 2.
On the one hand, when $\lambda\theta^2 > 1$, the WMP Algorithm 2 initialized with $w$ outputs $\hat\ell_{\mathrm{wmp}}$ that can distinguish the index sets $S_i, 1 \le i \le m$:
$$\limsup_{t \to \infty} \limsup_{n \to \infty} \max_{i,j \in [m]} \mathrm{Err}_{S_i, S_j}(\hat\ell_{\mathrm{wmp}}) \le \exp\left( -\frac{R^2}{2 E^*(1/\theta^2)} \right), \qquad (18)$$
for any fixed $\delta > 0$, where $R = \frac{\min_{i \ne j} |w_{S_i} - w_{S_j}|}{\max_{i,j} |w_{S_i} - w_{S_j}|}$. On the other hand, if $\lambda\theta^2 < 1$, for any $t$-local estimator $\sigma_t$ based only on layer $t$'s noisy labels, the minimax misclassification error is lower bounded by
$$\liminf_{t \to \infty} \liminf_{n \to \infty} \inf_{\sigma_t} \sup_{i,j \in [k], i \ne j} \mathrm{Err}_{i,j}(\sigma_t) \ge \frac{1}{2k}. \qquad (19)$$

The proof for the general-$k$ case requires several new ideas compared to the $k = 2$ case. Let us first explain the intuition behind some of the quantities. Again we focus on the case when the network is sparse, i.e., $n \max_{i,j} Q_{ij} \precsim n^{o(1)}$. According to the coupling Proposition 1, one can focus on the coupled multi-type Galton-Watson tree for a shallow local neighborhood of a node $o$. Then $K \in \mathbb{R}^{k \times k}$ denotes the transition kernel of the label broadcasting process on the tree, and $\lambda$ denotes the branching number of the multi-type Galton-Watson tree. The transition threshold $\lambda\theta^2 = 1$, also called the Kesten-Stigum bound, has been well studied for reconstruction on trees [21, 22, 29, 18]. Our contribution lies in establishing the connection between the set misclassification error, the minimum energy flow, and the second eigenvectors of $K$. This is done by analyzing Algorithm 2 (introduced next) with a novel initialization of the messages, using both the minimum energy flow and the eigenvectors of $K$.

Remark 4. One distinct difference between the general-$k$ case and the $k = 2$ case is the notion of set misclassification error, or set identification.
This formalizes the intuition that, for a general SBM that is asymmetric and imbalanced, it may be possible to distinguish communities $i, j$ from community $l$, yet impossible to tell $i$ and $j$ apart. The above theorem provides a mathematical description of this phenomenon, for any initialization using vectors in the eigenspace corresponding to the second eigenvalue. The key new ingredient compared to Algorithm 1 is the introduction of additional weights $w \in \mathbb{R}^k$ on the labels. The choice of $w$ will become clear in a moment.

Algorithm 2: Weighted Message Passing for Multiple Communities
Data: Same as in Algorithm 1, plus an additional weight vector $w \in \mathbb{R}^k$.
Result: The labeling for each node $o \in V$.
for each node $o \in V$ do
  Open the tree neighborhood $T_{\bar t}(o)$;
  Layer $\bar t$: for every node $u \in C_{\bar t}(o)$, initialize its message as $M(u, 0) = \theta^{-2|u|} \cdot i^*(\rightsquigarrow u) \cdot w_{\tilde\ell_{\mathrm{prior}}(u)}$, where $w_{\tilde\ell_{\mathrm{prior}}(u)}$ denotes the $\tilde\ell_{\mathrm{prior}}(u)$-th coordinate of the weight vector $w$ and $i^*(\rightsquigarrow u)$ is the minimum energy flow;
  Initialize the parameters $\mu_0(u) \in \mathbb{R}^k$, $\sigma_0^2(u) \in \mathbb{R}$ as
  $$\mu_0(u, l) = \delta \cdot \theta^{-2|u|} i^*(\rightsquigarrow u) \cdot w_l \quad \text{for } l \in [k], \qquad \sigma_0^2(u) = \left( \theta^{-2|u|} i^*(\rightsquigarrow u) \right)^2 \cdot \max_{i,j \in [k]} |w_i - w_j|^2;$$
  for $t = 1, \ldots, \bar t$ do
    Layer $\bar t - t$: for every node $u \in C_{\bar t - t}(o)$, update the message $M(u, t)$ through the linearized rule
    $$M(u, t) = \sum_{v \in C(u)} \theta M(v, t-1).$$
    Update the parameters $\mu_t(u) \in \mathbb{R}^k$, $\sigma_t^2(u) \in \mathbb{R}$:
    $$\mu_t(u) = \sum_{v \in C(u)} \theta K \mu_{t-1}(v), \qquad \sigma_t^2(u) = \sum_{v \in C(u)} \theta^2 \left\{ \sigma_{t-1}^2(v) + \left[ \frac{\max_{i,j \in [k]} |\mu_{t-1}(v, i) - \mu_{t-1}(v, j)|}{2} \right]^2 \right\}.$$
  end
  Output $\hat\ell_{\mathrm{wmp}}(o) = \arg\min_{l \in [k]} |M(o, \bar t) - \mu_{\bar t}(o, l)|$.
end

4.2 Vector Evolution & Concentration

As in the $k = 2$ case, we establish the recursion formula for the parameter updates. However, unlike the $k = 2$ case, for a general initialization $\mu_0$ it is much harder to characterize $\mu_t(u), \sigma_t^2(u)$ analytically, and thus to relate the misclassification error to the minimum energy. We will show that this goal can be achieved by a judicious choice of $\mu_0$. We start with the following lemma, which describes the vector evolution and concentration-of-measure.

Lemma 2 (Concentration, general $k$). Recall the label broadcasting process with Markov transition kernel $K \in \mathbb{R}^{k \times k}$ on the tree $T_{\bar t}(o)$. Assume the MGF of the messages $M(u, 0)$ on the leaves satisfies, for any $l \in [k]$,
$$\mathbb{E}\left[ e^{\lambda M(u,0)} \mid \ell(u) = l \right] \le e^{\lambda \mu_0(u,l)} e^{\frac{\lambda^2 \sigma_0^2(u)}{2}}$$
for any $\lambda$, with parameters $\mu_0(u) = [\mu_0(u,1), \ldots, \mu_0(u,k)] \in \mathbb{R}^k$ and $\sigma_0^2(u) \in \mathbb{R}$. Define the following updating rules for a node $v$:
$$\mu_t(v) = \sum_{u \in C(v)} \theta K \mu_{t-1}(u), \qquad \sigma_t^2(v) = \sum_{u \in C(v)} \theta^2 \left\{ \sigma_{t-1}^2(u) + \left[ \frac{\max_{i,j \in [k]} |\mu_{t-1}(u,i) - \mu_{t-1}(u,j)|}{2} \right]^2 \right\}.$$
Then the following concentration-of-measure holds for the root message $M(o, \bar t)$:
$$M(o, \bar t)\big|_{\ell(o) = l} \in \mu_{\bar t}(o, l) \pm x \cdot \sigma_{\bar t}(o)$$
with probability $1 - 2\exp(-\frac{x^2}{2})$. In addition, if we classify the root's label as $\hat\ell_{\mathrm{wmp}}(o) = \arg\min_{l \in [k]} |M(o, \bar t) - \mu_{\bar t}(o, l)|$, then the worst-case misclassification error is upper bounded by
$$\exp\left( -\frac{\min_{i,j \in [k]} |\mu_{\bar t}(o,i) - \mu_{\bar t}(o,j)|^2}{8 \sigma_{\bar t}^2(o)} \right). \qquad (20)$$

Remark 5. Unlike the $k = 2$ case, it is in general hard to quantitatively analyze this evolution system for $\mu_t(u), \sigma_t^2(u)$.
The main difficulty stems from the fact that the coordinates attaining the maximum in $\max_{i,j \in [k]} |\mu_{t-1}(u,i) - \mu_{t-1}(u,j)|$ vary with $u, t$. Hence, it is challenging to provide sharp bounds on $\sigma_t^2(u)$. In some sense, the difficulty is introduced by the instability of the relative ordering of the coordinates of the vector $\mu_t(u)$ for an arbitrary initialization. As will be shown in the next section, one can resolve this problem by initializing $\mu_0(u,l), l \in [k]$, in a "most informative" way. This initialization represents the additional weights on the label types, beyond the weights given by the minimum energy flow.

4.3 Additional Weighting via Eigenvectors

We show in this section that the vector evolution system with noisy initialization is indeed tractable if we weight the label types according to the second right eigenvector of $K \in \mathbb{R}^{k \times k}$.

Theorem 5 (Weighting by Eigenvector). Assume that the second eigenvalue $\theta = \lambda_2(K)$ of the Markov transition kernel $K$ is real, and denote the associated second right eigenvector by $w \in \mathbb{R}^k$, $\|w\| = 1$, $w^T \mathbf{1} = 0$. Denote by $i^*$ the minimum energy flow on the tree $T(o)$ with conductance level $\theta^2$. In the case of noisy label information with parameter $\delta$, if we initialize
$$\mu_0(u, l) = \delta \cdot \theta^{-2|u|} i^*(\rightsquigarrow u) \cdot w_l \quad \text{for } l \in [k],$$
and $\sigma_0^2(u) = \left( \theta^{-2|u|} i^*(\rightsquigarrow u) \right)^2 \cdot \max_{i,j \in [k]} |w_i - w_j|^2$, then the worst-case misclassification error is upper bounded by
$$\limsup_{t \to \infty} \max_{i,j \in [k], i \ne j} \mathbb{P}(\hat\ell_{\mathrm{wmp}}(o) = i \mid \ell(o) = j) \le \exp\left( -\frac{R^2}{2 E^*(\theta^{-2})} \right),$$
with $R = \frac{\min_{i,j} |w_i - w_j|}{\max_{i,j} |w_i - w_j|}$.

Remark 6. Observe that the upper bound becomes trivial when $\min_{i,j} |w_i - w_j| = 0$.
In this case, one can easily modify the proof of Theorem 5 so that the following non-trivial guarantee on the set misclassification error holds. Assume $w$ has $m$ distinct values, and denote by $S_i, 1 \le i \le m$, the distinct-value sets associated with $w$. Then one has the following upper bound on the set misclassification error:
$$\limsup_{t \to \infty} \max_{i,j \in [m], i \ne j} \mathbb{P}(\hat\ell_{\mathrm{wmp}}(o) \in S_i \mid \ell(o) \in S_j) \le \exp\left( -\frac{R_S^2}{2 E^*(\theta^{-2})} \right), \qquad (21)$$
with $R_S = \frac{\min_{i \ne j} |w_{S_i} - w_{S_j}|}{\max_{i,j} |w_{S_i} - w_{S_j}|}$.

4.4 Lower Bound: Sharp Threshold

In this section we provide a new lower bound analysis by bounding the $\chi^2$ distance to the "average measure". The lower bound shows that the transition boundary $\lambda\theta^2 = 1$ achieved by WMP is sharp for any $k$. To the best of our knowledge, the first lower bound for the general-$k$ case was achieved in [18] using a notion of weighted $\chi^2$ distance. For completeness of the presentation, we provide here a different proof using the usual $\chi^2$ distance. In addition, our approach admits a clear connection to the upper bound analysis through matrix power iterations.

Theorem 6 (Limitation of Local Algorithms, $k$ communities). Recall the label broadcasting process with Markov transition kernel $K$ on the tree $T_t(o)$. Assume $K \in \mathbb{R}^{k \times k}$ is symmetric. Consider the case when noisy label information (with parameter $\delta$) is known on the depth-$t$ layer of leaf nodes. Under the conditions $\mathrm{br}[T(o)] \theta^2 < 1$ and
$$k\delta^2 \left( \frac{1}{\delta + \frac{1-\delta}{k}} + \frac{1}{\frac{1-\delta}{k}} \right) < 1 - \mathrm{br}[T(o)] \theta^2,$$
we have
$$\liminf_{t \to \infty} \inf_{\sigma_t} \max_{l \in [k]} \mathbb{P}(\sigma_t(o) \ne \ell(o) \mid \ell(o) = l) \ge \frac{1}{2} \left( 1 - \frac{1}{k} \right),$$
where $\sigma_t(o) : \tilde\ell_{\mathrm{prior}}(C_t(o)) \to [k]$ is any estimator mapping the prior labels on the leaves of the local tree to a decision.
The above inequality also implies
$$\liminf_{t \to \infty} \inf_{\sigma_t} \max_{i,j \in [k], i \ne j} \mathbb{P}(\sigma_t(o) = i \mid \ell(o) = j) \ge \frac{1}{2k}.$$
This result shows that even belief propagation suffers an error of at least $\frac{1}{2k}$ in distinguishing $i, j$, which is within a factor of 2 of random guessing. We remark in addition that the condition $k\delta^2 \left( \frac{1}{\delta + \frac{1-\delta}{k}} + \frac{1}{\frac{1-\delta}{k}} \right) < 1 - \mathrm{br}[T(o)] \theta^2$ can be satisfied when $\delta$ is small.

4.5 Further Discussion

Local versus Global Algorithms. In the balanced case with $k$ equal-size communities, with $p, q$ denoting the within- and between-community connection probabilities, the Kesten-Stigum threshold for the local algorithm class takes the expression
$$\mathrm{SNR} := \frac{n(p-q)^2}{k^2 \left( q + \frac{p-q}{k} \right)} = 1.$$
However, it is known that the limit for the global algorithm class with a growing number of communities is $\mathrm{SNR} \asymp O(\frac{\log k}{k})$ ([3], weak consistency) and $\mathrm{SNR} \asymp O(\frac{\log n}{k})$ ([8], strong consistency). Therefore, as $k$ grows, there is an interesting gap between local and global algorithms in terms of SNR. An interesting direction is to determine whether one can solve the problem down to the information-theoretic threshold $O^*(\frac{1}{k})$ with computationally efficient algorithms.

5 Numerical Studies

We apply the message passing Algorithm 1 to the political blog dataset [4] (with a total of 1222 nodes) in the partial label information setting, with a $\delta$ portion of labels revealed at random. In the literature, the state-of-the-art result for a global algorithm appears in [19], where the misclassification rate is $58/1222 = 4.75\%$. Here we run a weaker version of our WMP algorithm, as it is much easier to implement and does not require parameter tuning. Specifically, we initialize the messages with a uniform flow on the leaves (the minimum energy flow corresponding to a regular tree). We call this algorithm approximate message passing (AMP) within this section.
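As an aside, the balanced-case SNR expression from Section 4.5 can be checked numerically by assembling $\theta = \lambda_2(K)$ and $\lambda = \lambda_1(M)$ in closed form; this sketch is ours, and the parameter values are illustrative.

```python
def ks_snr(n, k, p, q):
    # For the balanced k-block model: theta = lambda_2(K) = (p-q)/(p+(k-1)q)
    # and lambda = lambda_1(M) = n(p+(k-1)q)/k; the KS quantity is lambda*theta^2.
    theta = (p - q) / (p + (k - 1) * q)
    lam = n * (p + (k - 1) * q) / k
    return lam * theta ** 2

n, k, p, q = 10000, 4, 0.01, 0.002
print(ks_snr(n, k, p, q))                               # approximately 10
print(n * (p - q) ** 2 / (k ** 2 * (q + (p - q) / k)))  # same value
```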
We run AMP in three settings, $\delta = 0.1, 0.05, 0.025$, repeating each experiment 50 times. As a benchmark, we compare the results to the spectral algorithm on the $(1-\delta)n$ sub-network. We focus on local trees of depth 1 to 5, and report the error of message passing at each depth. The results are summarized as box-plots in Figure 1. The left panel compares AMP at depths 1 to 5 with the spectral algorithm, with red, green, and blue boxes corresponding to $\delta = 0.025, 0.05, 0.1$, respectively. The right panel zooms in on AMP depths 2 to 4 and the spectral algorithm, to better emphasize the difference. Remark that if we only look at depth 1, some nodes may have no revealed neighbors; in this setting we count such a node as misclassified (which explains why the depth-1 error can exceed $1/2$).

We now present some statistics of the experiments, extracted from Figure 1. In the case $\delta = 0.1$, at depths 2-4 the AMP algorithm produces misclassification error rates (taking the median over the experiments, for robustness) of $6.31\%, 5.22\%, 5.01\%$, while the spectral algorithm produces an error rate of $6.68\%$. When $\delta = 0.05$, i.e., about 60 node labels revealed, the error rates are $7.71\%, 5.44\%, 5.08\%$ at depths 2 to 4, compared with the spectral algorithm's $6.66\%$. In the more extreme case $\delta = 0.025$, when only about 30 node labels are revealed, AMP at depths 2-4 has errors $10.20\%, 5.71\%, 5.66\%$, while the spectral algorithm achieves $6.63\%$. In general, the AMP algorithm at depths 3-4 uniformly beats the vanilla spectral algorithm. Note that our AMP algorithm is a distributed, decentralized algorithm that can be run in parallel. We acknowledge that the error of roughly $5\%$ (when $\delta$ is very small) is still slightly worse than the state-of-the-art degree-corrected SCORE algorithm in [19], which achieves $4.75\%$.
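Before the technical proofs, here is a quick numerical check (ours, with an illustrative grid of $\delta$ and $\lambda$ values) of the Hoeffding bound of Lemma 3 below, applied to the centered sign variable that appears later in Remark 7.

```python
import math

def mgf_centered_sign(delta, lam):
    # MGF of X = S - E[S], where S = +1 w.p. (1+delta)/2 and -1 otherwise.
    # X lies in [-1-delta, 1-delta], so b - a = 2 in Lemma 3.
    return ((1 + delta) / 2 * math.exp(lam * (1 - delta))
            + (1 - delta) / 2 * math.exp(lam * (-1 - delta)))

for delta in (0.1, 0.3, 0.6):
    for lam in (0.2, 0.8, 2.0):
        # Lemma 3: E[e^{lam X}] <= exp(lam^2 (b-a)^2 / 8) with (b-a)^2 = 4.
        assert mgf_centered_sign(delta, lam) <= math.exp(lam ** 2 * 4 / 8)
print("Hoeffding bound holds on the grid")
```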
Figure 1: AMP algorithm on the Political Blog Dataset. Left: error rates (error_rate) of AMP at depths 1-5 (depth_AMP) and of the spectral algorithm, for $\delta = 0.025, 0.05, 0.1$. Right: zoomed-in view of depths 2-4 and the spectral algorithm.

6 Technical Proofs

We start with two useful results. The first is a coupling proposition. The proof follows exactly the same idea as Proposition 4.2 in [30]. The intuition is that when the depth of the tree is shallow, the SBM in the sparse regime can be coupled to a Galton-Watson tree with Poisson branching (as there are many nodes outside the radius $R$ for the Poisson-Multinomial coupling when $R$ is small). We prove a more general version for the SBM with unequal-size communities; the proof is deferred to Appendix 7.

Proposition 1. Let $R = R(n) = \left\lfloor \frac{\log n}{4 \log[2np_0 + 2\log n]} \right\rfloor$, where $p_0 = \max_{i,j} Q_{ij}$. Denote by $(T, \sigma_T)$ the multi-type Galton-Watson tree (with Poisson branching) with mean matrix $Q \, \mathrm{diag}(N)$ and label transition kernel $K = \left[ \mathrm{diag}(QN) \right]^{-1} Q \, \mathrm{diag}(N)$. Denote by $G_R$ the neighborhood of depth up to $R$ induced by the graph $G$ for a particular node. There exists a coupling between $(G_R, \ell_{G_R})$ and $(T, \sigma_T)$ such that $(G_R, \ell_{G_R}) = (T_R, \sigma_{T_R})$ with high probability as $n \to \infty$. Here the tree equivalence is up to a label-preserving homomorphism.

Lemma 3 (Hoeffding's Inequality). Let $X$ be any real-valued random variable with expected value $\mathbb{E}X = 0$ and such that $a \le X \le b$ almost surely. Then, for all $\lambda > 0$,
$$\mathbb{E}\left[ e^{\lambda X} \right] \le \exp\left( \frac{\lambda^2 (b-a)^2}{8} \right).$$

Proof of Lemma 1. Recall the linearized message passing rule that "approximates" the Bayes optimal algorithm:
$$M(u, t) = \sum_{v \in C(u)} \bar\theta \cdot M(v, t-1), \qquad \text{where } \bar\theta = \frac{\theta_1 + \theta_2}{2}.$$
Let us analyze the behavior of the linearized messages $M(u, t)$ for a particular node $u$. The proof follows by induction on $t$.
The case $t = 0$ follows from the assumption on $\mu_0(u), \sigma_0^2(u)$ and the Chernoff bound. Now assume that the induction premise holds for $t - 1$. Note that
$$\mathbb{E}\left[ e^{\lambda M(u,t)} \mid \ell(u) = + \right] = \prod_{v \in C(u)} \mathbb{E}\left[ e^{\lambda \bar\theta M(v,t-1)} \mid \ell(u) = + \right]$$
$$= \prod_{v \in C(u)} \left\{ \mathbb{E}\left[ e^{\lambda \bar\theta M(v,t-1)} \mid \ell(v) = + \right] \frac{1+\theta_1}{2} + \mathbb{E}\left[ e^{\lambda \bar\theta M(v,t-1)} \mid \ell(v) = - \right] \frac{1-\theta_1}{2} \right\}$$
$$\le \prod_{v \in C(u)} e^{(\lambda\bar\theta)^2 \frac{\sigma_{t-1}^2(v)}{2}} \left\{ e^{\lambda\bar\theta \mu_{t-1}(v,+)} \frac{1+\theta_1}{2} + e^{\lambda\bar\theta \mu_{t-1}(v,-)} \frac{1-\theta_1}{2} \right\}$$
$$\le \prod_{v \in C(u)} e^{(\lambda\bar\theta)^2 \frac{\sigma_{t-1}^2(v)}{2}} \, e^{\lambda\bar\theta \left[ \mu_{t-1}(v,+) \frac{1+\theta_1}{2} + \mu_{t-1}(v,-) \frac{1-\theta_1}{2} \right]} \, e^{(\lambda\bar\theta)^2 \frac{[\mu_{t-1}(v,+) - \mu_{t-1}(v,-)]^2}{8}},$$
where the last step uses Hoeffding's lemma. Rearranging the terms,
$$\mathbb{E}\left[ e^{\lambda M(u,t)} \mid \ell(u) = + \right] \le e^{\lambda \sum_{v \in C(u)} \bar\theta \langle K_{1\cdot}, \mu_{t-1}(v) \rangle} \, e^{\frac{\lambda^2 \bar\theta^2}{2} \sum_{v \in C(u)} \left\{ \sigma_{t-1}^2(v) + \left[ \frac{\mu_{t-1}(v,+) - \mu_{t-1}(v,-)}{2} \right]^2 \right\}} = e^{\lambda \mu_t(u,+)} e^{\frac{\lambda^2 \sigma_t^2(u)}{2}},$$
where $K_{1\cdot}$ denotes the first row of the transition matrix $K$. Clearly, the same derivation holds with $\ell(u) = -$. Applying the Chernoff bound and optimizing over $\lambda$, one arrives at the exponential concentration bound. This completes the induction. To upper bound the misclassification error, simply plug in the standardized absolute value of the difference, namely $x = \left| \frac{\mu_{\bar t}(o,+) - \mu_{\bar t}(o,-)}{2\sigma_{\bar t}(o)} \right|$.

Remark 7. Let us now propose the choice of $\mu_0(u)$ and $\sigma_0^2(u)$ for the case of noisy label information with parameter $\delta$. In the WMP algorithm, choose $M(u, 0) = c(u) \, \mathrm{sign}(\tilde\ell_{\mathrm{prior}})$ with a factor $c(u)$ that depends on the node $u$.
Using simple Hoeffding concentration for Bernoulli random variables, one has $\mu_0(u,+) = c(u)\delta$, $\mu_0(u,-) = -c(u)\delta$, and $\sigma_0^2(u) = c(u)^2$.

Proof of Theorem 2. Using the result of Lemma 1, the proof analyzes the evolution of the noise-to-signal ratio $\sigma_t^2(o) \big/ \left[\frac{\mu_t(o,+) - \mu_t(o,-)}{2}\right]^2$. First, let us derive the expression for $\mu_t(o,+) - \mu_t(o,-)$. Denoting $w = [1,-1]^\top$, it is easy to verify that $w^\top K = \bar\theta w^\top$. We have
$$\mu_t(o,+) - \mu_t(o,-) = \sum_{v \in C(o)} \bar\theta\, w^\top K \mu_{t-1}(v) = \sum_{v \in C(o)} \bar\theta^2 w^\top \mu_{t-1}(v) = \bar\theta^2 \sum_{v \in C(o)} [\mu_{t-1}(v,+) - \mu_{t-1}(v,-)].$$
Using the above equation recursively, one can easily see that for any $d$, $1 \le d \le t$,
$$\mu_t(o,+) - \mu_t(o,-) = \bar\theta^{2d} \sum_{v \in C^d(o)} [\mu_{t-d}(v,+) - \mu_{t-d}(v,-)]. \tag{22}$$
Now for $\sigma_t^2(o)$, one has
$$\sigma_t^2(o) = \bar\theta^2 \sum_{v \in C(o)} \left\{ \sigma_{t-1}^2(v) + \left[\frac{\mu_{t-1}(v,+) - \mu_{t-1}(v,-)}{2}\right]^2 \right\},$$
which can be written, in turn, as
$$\bar\theta^2 \sum_{v \in C(o)} \bar\theta^2 \sum_{u \in C(v)} \left\{ \sigma_{t-2}^2(u) + \left[\frac{\mu_{t-2}(u,+) - \mu_{t-2}(u,-)}{2}\right]^2 \right\} + \bar\theta^2 \sum_{v \in C(o)} \left[\frac{\mu_{t-1}(v,+) - \mu_{t-1}(v,-)}{2}\right]^2$$
$$= \cdots = \sum_{v \in T_t(o)} \bar\theta^{2|v|} \left[\frac{\mu_{t-|v|}(v,+) - \mu_{t-|v|}(v,-)}{2}\right]^2 + \sum_{u \in C^t(o)} \bar\theta^{2t} \sigma_0^2(u).$$
Using the above equation one can bound \u03c32 t (o) h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 = P v\u2208Tt(o) \u00af \u03b82|v| h\u00b5t\u2212|v|(v,+)\u2212\u00b5t\u2212|v|(v,\u2212) 2 i2 h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 + P u\u2208C t(o) \u00af \u03b82t\u03c32 0(u) h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 = X v\u2208Tt(o) \u00af \u03b82|v| \u00a3 \u00b5t\u2212|v|(v,+)\u2212\u00b5t\u2212|v|(v,\u2212) \u00a42 \u00a3 [\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] \u00a42 +R = X v\u2208Tt(o) \u00a1\u00af \u03b82|v|[\u00b5t\u2212|v|(v,+)\u2212\u00b5t\u2212|v|(v,\u2212)] \u00a22 \u00a1 [\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] \u00a22 \u00af \u03b8\u22122|v| +R (23) where the remainder R = P u\u2208C t(o) \u00af \u03b82t\u03c32 0(u) h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 . Recall the de\ufb01nition of i(\u21ddv) = \u00af \u03b82|v|[\u00b5t\u2212|v|(v,+)\u2212\u00b5t\u2212|v|(v,\u2212)] [\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] . It is clear from Eq.(22) that i is a valid unit \ufb02ow, in the sense of De\ufb01nition 1. Continuing with Eq. (23), one has inf i \u03c32 t (o) h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 \u2264 X v\u2208Tt(o) i\u2217(\u21ddv)2 \u00af \u03b8\u22122|v| +R = Et(i\u2217, \u00af \u03b8\u22122)+R. (24) 20 \fLet us now estimate R: R = P u\u2208C t(o) \u00af \u03b82t\u03c32 0(u) h[\u00b5t(o,+)\u2212\u00b5t(o,\u2212)] 2 i2 \u2264 X u\u2208C t(o) i\u2217(\u21ddu)2 \u00af \u03b8\u22122t \u00b7 max u\u2208C t(o) \u03c32 0(u) h[\u00b50(u,+)\u2212\u00b50(u,\u2212)] 2 i2 = X u\u2208C t(o) i\u2217(\u21ddu)2 \u00af \u03b8\u22122t 1 \u03b42 . The last step is because for noisy label information with parameter \u03b4, \u03c32 0(u) h[\u00b50(u,+)\u2212\u00b50(u,\u2212)] 2 i2 = 1 \u03b42 . In the case when limt\u2192\u221eEt(i\u2217, \u00af \u03b8\u22122) < \u221e, we know P u\u2208C t(o) i\u2217(\u21ddu)2 \u00af \u03b8\u22122t = Et(i\u2217, \u00af \u03b8\u22122)\u2212Et\u22121(i\u2217, \u00af \u03b8\u22122) \u21920. Therefore, R = 1 \u03b42 ot(1). 
Going back to Eq. (24), to minimize the LHS (the ratio between noise and signal), one needs to make sure that $i = i^*$, the minimum energy flow. Therefore, the optimal strategy is to initialize $\mu_0(u)$ according to $i^*(\leadsto u)$. Thus, if we choose $\mu_0(u,+) = \delta \bar\theta^{-2|u|} i^*(\leadsto u)$ and $\mu_0(u,-) = -\delta \bar\theta^{-2|u|} i^*(\leadsto u)$, we obtain
$$\lim_{t \to \infty} \inf_i \frac{\sigma_t^2(o)}{\left[\frac{\mu_t(o,+) - \mu_t(o,-)}{2}\right]^2} = E^*(\bar\theta^{-2}).$$
From Definition 3, $E^*(\bar\theta^{-2}) < \infty$ iff $\bar\theta^{-2} < \mathrm{br}[T(o)]$.

Proof of Theorem 5. Note that by the Perron-Frobenius Theorem, we have $|\theta| = |\lambda_2(K)| < 1$. Thanks to the choice of $w$,
$$\mathbb{E}[M_0(u) \mid \ell(u) = l] = \delta \theta^{-2|u|} i^*(\leadsto u)\, w_l + \frac{1-\delta}{k}\, \theta^{-2|u|} i^*(\leadsto u)\, w^\top \mathbf{1} = \delta \theta^{-2|u|} i^*(\leadsto u)\, w_l.$$
Let us first derive the formula for $\mu_t(o) \in \mathbb{R}^k$ under the chosen initialization $\mu_0(u)$. We claim that $\mu_{t-|v|}(v) = \delta \cdot \theta^{-2|v|} i^*(\leadsto v) \cdot w$. The proof is via induction. The base case $|u| = t$ is exactly the choice of the initialization. Assume the claim is true for all $u$ with $|u| > |v|$, and prove it for $v$:
$$\mu_{t-|v|}(v) = \sum_{u \in C(v)} \theta K \mu_{t-|v|-1}(u) = \sum_{u \in C(v)} \theta K w \cdot \delta \theta^{-2|v|-2} i^*(\leadsto u) = \sum_{u \in C(v)} \theta^2 w \cdot \delta \theta^{-2|v|-2} i^*(\leadsto u) = \delta \cdot \theta^{-2|v|} i^*(\leadsto v) \cdot w,$$
where the last step uses $Kw = \theta w$ and the flow conservation $\sum_{u \in C(v)} i^*(\leadsto u) = i^*(\leadsto v)$, completing the induction. Now let us bound $\sigma_t^2(o)$. Observe that in our derived formula for $\mu_{t-|v|}(v)$, all the coordinates are proportional to $w$; in other words, $\mu_{t-|v|}(v)$ stays in the direction of $w$ for all $v$. This greatly simplifies the expression for $\sigma_t^2(o)$.
We have \u03c32 t (o) = X v\u2208Tt(o) \u03b82|v| \u00b7maxi,j\u2208[k] |\u00b5t\u2212|v|(v,i)\u2212\u00b5t\u2212|v|(v, j)| 2 \u00b82 + X u\u2208C t(o) \u03b82t\u03c32 0(u) = \u03b42 \u00b7maxi,j\u2208[k] |w(i)\u2212w(j)| 2 \u00b82 X v\u2208Tt(o) i\u2217(\u21ddv)2\u03b8\u22122|v| + \u00b7maxi,j\u2208[k] |w(i)\u2212w(j)| 2 \u00b82 X v\u2208C t(o) i\u2217(\u21ddv)2\u03b8\u22122|v|. Plugging in the de\ufb01nition R = mini,j |wi \u2212w j | maxi,j |wi \u2212w j |, under the condition br[T (o)]\u03b82 > 1, we have E(i\u2217,\u03b8\u22122) < \u221e, and \u03c32 \u00af t (o) hmini,j\u2208[k] |\u00b5\u00af t(o,i)\u2212\u00b5\u00af t(o,j)| 2 i2 = 1 R2 E(i\u2217,\u03b8\u22122)+ 1 \u03b42R2 ot(1). Proof of Theorem 6. Recall that \u03c0(\u2113\u2202Tt(o)\u2229Tt\u2212|u|(u)|\u2113(u) = i) denotes the probability measure on the leaf labels on depth t, given \u2113(u) = i. For a node u, when there is no confusion, we abbreviate the measure \u03c0(\u2113\u2202Tt(o)\u2229Tt\u2212|u|(u)|\u2113(u) = i) as \u03c0u(i). According to Perron-Frobenius Theorem, there is a unique left eigenvector for K with eigenvalue 1, denote this by w \u2208Rk. Under the assumption K being symmetric, we know that w = 1 k 1. Denote \u00af \u03c0u = Pk j=1 w(j)\u03c0u(j). Let us bound the d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) by deriving a recursive bound: log \u00a3 1+d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) \u00a4 = X v\u2208C (u) log \" 1+d\u03c72 \u00c3 k X l=1 Kil\u03c0v(l)|| k X j=1 k X l=1 w(j)K jl\u03c0v(l) !# = X v\u2208C (u) log \" 1+d\u03c72 \u00c3 k X l=1 Kil\u03c0v(l)|| \u00af \u03c0v !# since wT K = wT . 
By definition, the above expression is
$$\sum_{v \in C(u)} \log\left[1 + \int \frac{\left(\sum_{l=1}^k K_{il}\pi_v(l) - \bar\pi_v\right)^2}{\bar\pi_v}\right] = \sum_{v \in C(u)} \log\left[1 + \int \frac{\left(\sum_{l=1}^k K_{il}(\pi_v(l) - \bar\pi_v)\right)^2}{\bar\pi_v}\right] \le \sum_{v \in C(u)} \int \frac{\left(\sum_{l=1}^k K_{il}(\pi_v(l) - \bar\pi_v)\right)^2}{\bar\pi_v}.$$
Now we know that
$$\sum_{i=1}^k \log\left[1 + d_{\chi^2}(\pi_u(i) \,\|\, \bar\pi_u)\right] \le \sum_{v \in C(u)} \int \sum_{i=1}^k \frac{\left(\sum_{l=1}^k K_{il}(\pi_v(l) - \bar\pi_v)\right)^2}{\bar\pi_v}.$$
Recall the following fact: for any $z_1, z_2, \ldots, z_k \ge 0$,
$$\log\left(1 + \sum_{i=1}^k z_i\right) \le \sum_{i=1}^k \log(1 + z_i).$$
Using this fact to lower bound the LHS, we reach
$$\log\left[1 + \sum_{i=1}^k d_{\chi^2}(\pi_u(i) \,\|\, \bar\pi_u)\right] \le \sum_{v \in C(u)} \int \sum_{i=1}^k \frac{\left(\sum_{l=1}^k K_{il}(\pi_v(l) - \bar\pi_v)\right)^2}{\bar\pi_v} \le \sum_{v \in C(u)} \int \frac{\|K(\pi_v(\cdot) - \bar\pi_v \mathbf{1})\|^2}{\bar\pi_v} \le \theta^2 \sum_{v \in C(u)} \int \frac{\|\pi_v(\cdot) - \bar\pi_v \mathbf{1}\|^2}{\bar\pi_v} = \theta^2 \sum_{v \in C(u)} \sum_{i=1}^k d_{\chi^2}(\pi_v(i) \,\|\, \bar\pi_v),$$
where the last two lines use the fact that $\pi_v(\cdot) - \bar\pi_v \mathbf{1} \perp \mathbf{1}$, and therefore $\|K(\pi_v(\cdot) - \bar\pi_v \mathbf{1})\|^2 \le \theta^2 \|\pi_v(\cdot) - \bar\pi_v \mathbf{1}\|^2$. We will need the following lemma, which describes the branching number through cutsets.

Lemma 4 ([37], Lemma 3.3). Assume $\mathrm{br}[T] < \lambda$. Then for all $\epsilon > 0$, there exists a cutset $C$ such that
$$\sum_{x \in C} \left(\frac{1}{\lambda}\right)^{|x|} \le \epsilon \tag{25}$$
and, for all $v$ such that $|v| \le \max_{x \in C} |x|$,
$$\sum_{x \in C \cap T(v)} \left(\frac{1}{\lambda}\right)^{|x| - |v|} \le 1. \tag{26}$$
Here the notation $|v|$ denotes the depth of $v$. Let us use the cutset argument to prove that $\sum_{i=1}^k d_{\chi^2}(\pi_u(i) \,\|\, \bar\pi_u) \to 0$ when $|u| \to \infty$.
Fix any \u03bb such that \u03b8\u22122 > \u03bb > br[T (o)]. For any \u03f5 small, the above Lemma claims the existence of cutset C\u03f5 such that Eq. (25) and (26) hold. Let us prove through induction on maxx\u2208C\u03f5 |x| \u2212|v| that for any v such that |v| \u2264maxx\u2208C\u03f5 |x|, we have k X i=1 d\u03c72 (\u03c0v(i)|| \u00af \u03c0v) \u2264\u03b7 X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| \u2264\u03b7. (27) with the choice \u03b7 = k\u03b42( 1 \u03b4+ 1\u2212\u03b4 k + 1 1\u2212\u03b4 k ). First for the base case, the claim is true because of the choice of \u03b7. 23 \fPreceding with the induction, assume for v such that maxx\u2208C\u03f5 |x|\u2212|v| = t \u22121 equation (29) is satis\ufb01ed, and let us prove for v : maxx\u2208C\u03f5 |x|\u2212|u| = t. We recall the linearized recursion log \" 1+ k X i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) # \u2264\u03b82 X v\u2208C (u) k X i=1 d\u03c72 (\u03c0v(i)|| \u00af \u03c0v) \u2264\u03b82 X v\u2208C (u) \u03b7 X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| = \u03b82\u03bb\u00b7\u03b7 X x\u2208C\u03f5\u2229T (u) \u00b5 1 \u03bb \u00b6|x|\u2212|u| Using the assumption \u03b82\u03bb < 1 1+\u03b7, the above can be upper bounded by \u03b7P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| 1+\u03b7 \u2264 \u03b7P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| 1+\u03b7P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| where the last inequality uses the fact that P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| < 1. Now we know that Pk i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) 1+Pk i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) \u2264log \" 1+ k X i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) # \u2264 \u03b7P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| 1+\u03b7P x\u2208C\u03f5\u2229T (u) \u00a1 1 \u03bb \u00a2|x|\u2212|u| . 
By monotonicity of x/(1+ x) we have proved the induction claim holds as k X i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) \u2264\u03b7 X x\u2208C\u03f5\u2229T (u) \u00b5 1 \u03bb \u00b6|x|\u2212|u| . Take \u03f5 \u21920,\u03bb \u2192br[T (o)]. De\ufb01ne t\u03f5 := min{|x|,x \u2208C\u03f5}, it is also easy to see from equation (25) that \u00b5 1 \u03bb \u00b6t\u03f5 \u2264 X x\u2208C\u03f5 \u00b5 1 \u03bb \u00b6|x| \u2264\u03f5 \u21d2t\u03f5 > log(1/\u03f5) log\u03bb \u2192\u221e. Putting things together, under the condition \u03b7 \u22641\u2212br[T (o)]\u03b82, we have lim t\u2192\u221e 1 k k X i=1 d\u03c72 (\u03c0u(i)|| \u00af \u03c0u) \u2264\u03b7 k \u00b7lim \u03f5\u21920 X x\u2208C\u03f5\u2229T (o) \u00b5 1 \u03bb \u00b6|x| = 0. Finally, we invoke the multiple testing argument Theorem 2.6 in [39]). Lemma 5 ([39], Proposition 2.4, Theorem 2.6). Let P0,P1,...,Pk be probability measures on (X ,A ) satisfying 1 k k X i=1 d\u03c72(P j,P0) \u2264k\u03b1\u2217 then we have for any selector \u03c8 : X \u2192[k] max i\u2208[k] Pi(\u03c8 \u0338= i) \u22651 2(1\u2212\u03b1\u2217\u22121 k ). 24 \fPlugging in the result with P0 = \u00af \u03c0o and Pi = \u03c0o(i), we conclude that liminf t\u2192\u221e inf \u03c3 max l\u2208[k] P(\u03c3(o) \u0338= \u2113(o)|\u2113(o) = l) \u22651 2(1\u22121 k ). Proof of Theorem 1. Given Proposition 1, Theorem 2 and Theorem 3, the proof of Theorem 1 is simple. By Proposition 1, one can couple the local neighborhood of SBM with multi-type Galton Watson process asymptotically almost surely as n \u2192\u221e, where the label transition matrix is K := \" n1Q11 n1Q11+n2Q12 n2Q12 n1Q11+n2Q12 n1Q21 n1Q21+n2Q22 n2Q22 n1Q21+n2Q22 # . For the upper bound, Theorem 2 shows that the misclassi\ufb01cation error is upper bounded by exp \u00b3 \u2212 1 E\u2217(\u00af \u03b8\u22122) \u00b4 as the depth of the tree goes to in\ufb01nity. 
Note if we \ufb01rst send n \u2192\u221e, due to Proposition 1, the coupling is valid even when R \u2192\u221ewith a slow rate logn/loglogn. Therefore, the upper bound on misclassi\ufb01cation error holds. One can establish the lower bound using the same argument together with Theorem 3. Finally, for the expression on transition boundary, we know that condition on non-extinction, the branching number for this coupled multi-type Galton Watson tree is \u03bb1(Qdiag(N)) almost surely. Proof is completed. 7 Additional Proofs Proof of Lemma 2. The proof logic here is similar to the k = 2 case. Again, we analyze the message M(u,t) for a particular node u. Use induction on t for the claim E h e\u03bbM(u,t)|\u2113(u) = l i \u2264e\u03bb\u00b5t(u,l)e \u03bb2\u03c32 t (u) 2 . The case for t = 0 follows from the assumption about \u00b50(u),\u03c32 0(u) and Chernoff bound. Assume that the induction is true for t \u22121, and prove the case for t. Note that E h e\u03bbM(u,t)|\u2113(u) = l i = Y v\u2208C (u) E h e\u03bb\u03b8M(v,t\u22121)|\u2113(u) = l i = Y v\u2208C (u) ( k X i=1 E h e\u03bb\u03b8M(v,t\u22121)|\u2113(v) = i i Kli ) \u2264 Y v\u2208C (u) e(\u03bb\u03b8)2 \u03c32 t\u22121(v) 2 ( k X i=1 e\u03bb\u03b8\u00b5t\u22121(v,i)Kli ) \u2264 Y v\u2208C (u) e(\u03bb\u00af \u03b8)2 \u03c32 t\u22121(v) 2 e\u03bb\u03b8[Pk i=1 \u00b5t\u22121(v,i)Kli ]e(\u03bb\u03b8)2 maxi,j\u2208[k] |\u00b5t\u22121(v,i)\u2212\u00b5t\u22121(v,j)|2 8 , 25 \fwhere the last step uses the Hoeffding\u2019s Lemma. Rearrange the terms, one can see that the above equation implies E h e\u03bbM(u,t)|\u2113(u) = l i \u2264e\u03bbP v\u2208C (u) \u03b8\u2329Kl\u00b7,\u00b5t\u22121(u)\u232ae \u03bb2\u03b82 P v\u2208C (u) ( \u03c32 t\u22121(v)+maxi,j\u2208[k] \u00af \u00af \u00af \u00af \u00b5t\u22121(v,+)\u2212\u00b5t\u22121(v,\u2212) 2 \u00af \u00af \u00af \u00af 2) 2 = e\u03bb\u00b5t(u,l)e \u03bb2\u03c32 t (u) 2 , where Kl\u00b7 denotes the l\u2212row of transition matrix K . 
Applying the Chernoff bound and optimizing over $\lambda$, one arrives at the exponential concentration bound. This completes the induction. To upper bound the misclassification error, simply plug in
$$|x| = \frac{\min_{i,j \in [k]} |\bar\mu_t(o,i) - \bar\mu_t(o,j)|}{2\bar\sigma_t(o)}.$$

Proof of Theorem 3. We give the proof of Theorem 3 (for the case of $\delta$-noisy label information) here. Define the measure $\pi^+_{\ell_{T_t(o)}}$ on the revealed labels, for a depth-$t$ tree rooted at $o$ with label $\ell(o) = +$ (and similarly define $\pi^-_{\ell_{T_t(o)}}$). We have the following recursion formula:
$$\pi^+_{\ell_{T_t(o)}} = \prod_{v \in C(o)} \left[\frac{1+\theta_1}{2}\, \pi^+_{\ell_{T_{t-1}(v)}} + \frac{1-\theta_1}{2}\, \pi^-_{\ell_{T_{t-1}(v)}}\right].$$
Recall that the $\chi^2$ distance between two absolutely continuous measures $\mu(x), \nu(x)$ is $d_{\chi^2}(\mu, \nu) = \int \frac{\mu^2}{\nu}\, dx - 1$, and the total variation distance between these two measures is upper bounded by the $\chi^2$ distance:
$$d_{TV}(\mu, \nu) \le \sqrt{d_{\chi^2}(\mu, \nu)}.$$
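The bound $d_{TV}(\mu,\nu) \le \sqrt{d_{\chi^2}(\mu,\nu)}$ (a consequence of the Cauchy-Schwarz inequality) is easy to sanity-check numerically for discrete measures. The sketch below is our own illustration, not part of the proof; the probability vectors are arbitrary examples.

```python
def tv_distance(p, q):
    # Total variation distance between two discrete distributions,
    # given as probability vectors of equal length.
    return sum(abs(pi - qi) for pi, qi in zip(p, q)) / 2

def chi2_divergence(p, q):
    # Chi-square divergence d_chi2(p, q) = sum_i p_i^2 / q_i - 1,
    # assuming q_i > 0 for all i.
    return sum(pi * pi / qi for pi, qi in zip(p, q)) - 1

p = [0.5, 0.3, 0.2]
q = [0.25, 0.25, 0.5]
tv = tv_distance(p, q)        # about 0.3
chi2 = chi2_divergence(p, q)  # about 0.44
assert tv <= chi2 ** 0.5
```

Note that $d_{\chi^2}$ is asymmetric in its arguments, which is why the proof works with the symmetrized version $D_{T_t(o)}$.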
Let us upper bound the symmetric version of \u03c72 distance de\ufb01ned as DTt(o) := max n d\u03c72 \u00b3 \u03c0+ \u2113Tt (o),\u03c0\u2212 \u2113Tt (o) \u00b4 ,d\u03c72 \u00b3 \u03c0\u2212 \u2113Tt (o),\u03c0+ \u2113Tt (o) \u00b4o (abbreviate as Dt(o) when there is no confusion), we have the following recursion log h 1+d\u03c72 \u00b3 \u03c0+ \u2113Tt (o),\u03c0\u2212 \u2113Tt (o) \u00b4i = X v\u2208C (o) log \u00b7 1+d\u03c72 \u00b51+\u03b81 2 \u03c0+ \u2113Tt\u22121(v) + 1\u2212\u03b81 2 \u03c0\u2212 \u2113Tt\u22121(v), 1\u2212\u03b82 2 \u03c0+ \u2113Tt\u22121(v) + 1+\u03b82 2 \u03c0\u2212 \u2113Tt\u22121(v) \u00b6\u00b8 d\u03c72 \u00b51+\u03b81 2 \u03c0+ \u2113Tt\u22121(v) + 1\u2212\u03b81 2 \u03c0\u2212 \u2113Tt\u22121(v), 1\u2212\u03b82 2 \u03c0+ \u2113Tt\u22121(v) + 1+\u03b82 2 \u03c0\u2212 \u2113Tt\u22121(v) \u00b6 = \u00af \u03b82 Z \u00b3 \u03c0+ \u2113Tt\u22121(v) \u2212\u03c0\u2212 \u2113Tt\u22121(v) \u00b42 1\u2212\u03b82 2 \u03c0+ \u2113Tt\u22121(v) + 1+\u03b82 2 \u03c0\u2212 \u2113Tt\u22121(v) dx \u2264\u00af \u03b82 Z\u00b3 \u03c0+ \u2113Tt\u22121(v) \u2212\u03c0\u2212 \u2113Tt\u22121(v) \u00b42 \" 1\u2212\u03b82 2 1 \u03c0+ \u2113Tt\u22121(v) + 1+\u03b82 2 1 \u03c0\u2212 \u2113Tt\u22121(v) # dx \u2264\u00af \u03b82DTt\u22121(v), 26 \fwhere the second to last step follows from Jensen\u2019s inequality for function 1/x. Now we have the following recursion relationship log(1+DTt(o)) \u2264 X v\u2208C (o) log(1+ \u00af \u03b82 \u00b7DTt\u22121(v)). Invoke the following fact, log(1+\u03b82x) \u03b82 \u2264(1+\u03b7)log(1+ x) for all 0 \u2264x \u2264\u03b7, \u2200\u03b8, whose proof is in one line log(1+\u03b82x) \u03b82 \u2264x \u2264(1+\u03b7) x 1+ x \u2264(1+\u03b7)log(1+ x). Thus if DTt\u22121(v) \u2264\u03b7,\u2200v \u2208C (o), then the following holds log(1+DTt(o)) \u2264(1+\u03b7)\u00af \u03b82 X v\u2208C u(\u03c1) log(1+DTt\u22121(v)). 
(28) Denoting dTt(o) := log(1+DTt(o)), Equation (28) becomes dTt(o) \u2264(1+\u03b7)\u00af \u03b82 X v\u2208C u(\u03c1) dTt\u22121(v). We will again need the Lemma 4 that describes the branching number through the cutset. Fix any \u03bb such that \u00af \u03b8\u22122 > \u03bb > br[T (o)]. For any \u03f5 small, Lemma 4 claims the existence of cutset C\u03f5 such that Eq. (25) and (26) holds. Let\u2019s prove through induction on maxx\u2208C\u03f5 |x|\u2212|v| that for any v such that |v| \u2264maxx\u2208C\u03f5 |x|, we have dTC\u03f5(v) \u2264 \u03b7 1+\u03b7 X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| \u2264 \u03b7 1+\u03b7. (29) Note for the start of induction v \u2208C\u03f5, dTC\u03f5(v) = log(1+ 4\u03b42 1\u2212\u03b42 ) < \u03b7 1+\u03b7. Now precede with the induction, assume for u such that maxx\u2208C\u03f5 |x|\u2212|u| = t \u22121 equation (29) is satis\ufb01ed, let\u2019s prove for v : maxx\u2208C\u03f5 |x|\u2212|v| = t. Due to the fact for all u \u2208C (v), dTC\u03f5(u) \u2264 \u03b7 1+\u03b7 \u21d2DTC\u03f5(u) \u2264\u03b7, we can recall the linearized recursion dTC\u03f5(v) \u2264(1+\u03b7)\u00af \u03b82 X u\u2208C (v) dT\u2264C\u03f5(u) \u2264(1+\u03b7)\u00af \u03b82 X u\u2208C (v) \" \u03b7 1+\u03b7 X x\u2208C\u03f5\u2229T (u) \u00b5 1 \u03bb \u00b6|x|\u2212|u|# \u2264 \u03b7 1+\u03b7 \u00b7(1+\u03b7)\u00af \u03b82\u03bb X u\u2208C (v) X x\u2208C\u03f5\u2229T (u) \u00b5 1 \u03bb \u00b6|x|\u2212|u|+1 \u2264\u03b7\u00af \u03b82\u03bb X u\u2208C (v) X x\u2208C\u03f5\u2229T (u) \u00b5 1 \u03bb \u00b6|x|\u2212|v| \u2264\u03b7\u00af \u03b82\u03bb X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| \u2264 \u03b7 1+\u03b7 X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| , 27 \fif \u00af \u03b82\u03bb \u2264 1 1+\u03b7. 
So far we have proved for any v, such that |v| \u2264maxx\u2208C\u03f5 |x| dT\u2264C\u03f5(v) \u2264 \u03b7 1+\u03b7 X x\u2208C\u03f5\u2229T (v) \u00b5 1 \u03bb \u00b6|x|\u2212|v| \u2264 \u03b7 1+\u03b7 which implies DT\u2264C\u03f5(v) \u2264\u03b7 so that the linearized recursion (28) always holds. Take \u03f5 \u21920,\u03bb \u2192br[T (o)]. De\ufb01ne t\u03f5 := min{|x|,x \u2208C\u03f5}, it is also easy to see from equation (25) that \u00b5 1 \u03bb \u00b6t\u03f5 \u2264 X x\u2208C\u03f5 \u00b5 1 \u03bb \u00b6|x| \u2264\u03f5 \u21d2t\u03f5 > log(1/\u03f5) log\u03bb \u2192\u221e. Putting things together, under the condition log \u00b5 1+ 4\u03b42 1\u2212\u03b42 \u00b6 \u22641\u2212br[T (o)]\u00af \u03b82, we have lim t\u2192\u221eDTt(o) = lim \u03f5\u21920DTC\u03f5(o) \u2264 \u03b7 1+\u03b7 \u00b7lim \u03f5\u21920 X x\u2208C\u03f5\u2229T (o) \u00b5 1 \u03bb \u00b6|x| = 0. Proof of Proposition 1. The proof is a standard exercise following the idea from Proposition 4.2 in [30]. First, let\u2019s recall Bernstein inequality. Consider X \u223cBinom(n,p0), then the following concentration inequality holds P(X \u2265np0 + t) \u2264exp(\u2212 t2 2(np0 + t/3)). Hence if we plug in t = 2 3 logn + p 2np0 logn, we know |\u2202G1| sto. \u2264X \u2264np0 + 2 3 logn + q 2np0 logn \u22642np0 +2logn with probability at least 1\u2212n\u22121. Now, through union bound, we can prove that P \u00a1 \u2200r \u2264R,|\u2202Gr | \u2264(2np0 +2logn)r \u00a2 \u22651\u2212C \u00b7(2np0 +2logn)Rn\u22121 \u22651\u2212O(n\u22123/4). And we know that on the same event, |\u2202Gr | \u2264n1/4,\u2200r \u2264R. It is clear that bad events that GR is not a tree (with cycles) for each layer is bounded above by p2 0|\u2202Gr | + p0|\u2202Gr |2. Take a further union bound over all layers, we know this probability is bounded by O(n\u22121/8) provided p0 = o(n\u22125/8). Now we need to recursively use the Poisson-Binomial coupling (to achieve Poisson-Multinomial coupling). 
The following Lemma is taken from [30] (Lemma 4.6). Lemma 6. If m,n are positive integers then \u2225Binom(m, c n )\u2212Poisson(c)\u2225TV \u2264O(c2m n2 +c|m n \u22121|) 28 \fNow we condition on all the good events up to layer Gr\u22121, which happens with probability at least 1\u2212 n\u22121/8\u2212n\u22123/4. We can couple the next layer for nodes in \u2202Gr . Take a node v \u2208\u2202Gr as an example. Assume it is of color i, then the number of color j nodes in his children follows Binom(|V i >r |,pi j). Comparing to the Poisson version Poisson(ni pi j), we know with probability at least 1\u2212O(ni p2 i j + pi j|V i >r \u2212ni|), one can couple the Poisson and Binomial in the same probability space. Note that |V i >r \u2212ni| \u2264|\u2202Gr |. Repeat this recursively, and use the union bound, we can couple (GR,\u2113GR) = (TR,\u2113TR) with probability at least 1\u2212O(k maxi(ni)p2 0 +kp0n1/4)n1/4 logn = 1\u2212o(1). Therefore if np0 = no(1) and k \u227elogn, we have the bad event (when we cannot couple) happens with probability going to 0 as n \u2192\u221e. And if p0 = no(1), we can allow R to grow to in\ufb01nity at a slow rate as R \u227e logn log[no(1)+logn]. Acknowledgements The authors want to thank Elchanan Mossel for many valuable discussions." + }, + { + "url": "http://arxiv.org/abs/1605.04358v1", + "title": "Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data", + "abstract": "Missing data occur frequently in a wide range of applications. 
In this paper,\nwe consider estimation of high-dimensional covariance matrices in the presence\nof missing observations under a general missing completely at random model in\nthe sense that the missingness is not dependent on the values of the data.\nBased on incomplete data, estimators for bandable and sparse covariance\nmatrices are proposed and their theoretical and numerical properties are\ninvestigated.\n Minimax rates of convergence are established under the spectral norm loss and\nthe proposed estimators are shown to be rate-optimal under mild regularity\nconditions. Simulation studies demonstrate that the estimators perform well\nnumerically. The methods are also illustrated through an application to data\nfrom four ovarian cancer studies. The key technical tools developed in this\npaper are of independent interest and potentially useful for a range of related\nproblems in high-dimensional statistical inference with missing data.", + "authors": "T. Tony Cai, Anru Zhang", + "published": "2016-05-14", + "updated": "2016-05-14", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction The problem of missing data arises frequently in a wide range of \ufb01elds, including biomedical studies, social science, engineering, economics, and computer science. Statistical inference in the presence of missing observations has been well studied in classical statistics. See, e.g., Ibrahim and Molenberghs [18] for a review of missing data methods in longitudinal studies and Schafer [26] for literature on handling multivariate data with missing observations. See Little and Rubin [20] and the references therein for a comprehensive treatment of missing data problems. Missing data also occurs in contemporary high-dimensional inference problems, whose dimension p can be comparable to or even much larger than the sample size n. 
For example, in large-scale genome-wide association studies (GWAS), it is common for many subjects to have missing values on some genetic markers due to various reasons, including insu\ufb03cient resolution, image corruption, and experimental error during the laboratory process. Also, di\ufb00erent studies may have di\ufb00erent volumes of genomic data available by design. For instance, the four genomic ovarian cancer studies discussed in Section 4 have throughput measurements of mRNA gene expression levels, but only one of these also has microRNA measurements (Cancer Genome Atlas Research Network [11], Bonome et al. [4], Tothill et al. [27] and Dressman et al. [15]). Discarding samples with any missingness is highly ine\ufb03cient and could induce bias due to non-random missingness. It is of signi\ufb01cant interest to integrate multiple high-throughput studies of the same disease, not only to boost statistical power but also to improve the biological interpretability. However, considerable challenges arise when integrating such studies due to missing data. Although there have been signi\ufb01cant recent e\ufb00orts to develop methodologies and theories 2 \ffor high dimensional data analysis, there is a paucity of methods with theoretical guarantees for statistical inference with missing data in the high-dimensional setting. Under the assumption that the components are missing uniformly and completely at random (MUCR), Loh and Wainwright [21] proposed a non-convex optimization approach to high-dimensional linear regression, Lounici [23] introduced a method for estimating a low-rank covariance matrix and Lounici [22] considered sparse principal component analysis. In these papers, theoretical properties of the procedures were analyzed. These methods and theoretical results critically depend on the MUCR assumption. Covariance structures play a fundamental role in high-dimensional statistics. 
It is of direct interest in a wide range of applications including genomic data analysis, particularly for hypothesis generation. Knowledge of the covariance structure is critical to many statistical methods, including discriminant analysis, principal component analysis, clustering analysis, and regression analysis. In the high-dimensional setting with complete data, inference on the covariance structure has been actively studied in recent years. See Cai, Ren and Zhou [7] for a survey of recent results on minimax and adaptive estimation of high-dimensional covariance and precision matrices under various structural assumptions. Estimation of high-dimensional covariance matrices in the presence of missing data also has wide applications in biomedical studies, particularly in integrative genomic analysis which holds great potential in providing a global view of genome function (see Hawkins et al. [17]). In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random (MCR) model in the sense that the missingness is not dependent on the values of the data. Let X1, . . . , Xn be n independent copies of a p dimensional random vector X with mean \u00b5 and covariance matrix \u03a3. Instead of observing the complete sample {X1, . . . , Xn}, one observes the sample with missing values, where the observed coordinates of Xk are indicated by a vector Sk \u2208{0, 1}p, k = 1, ..., n. That is, Xjk is observed if Sjk = 1 and Xjk is missing if Sjk = 0. (1) 3 \fHere Xjk and Sjk are respectively the jth coordinate of the vectors Xk and Sk. We denote the incomplete sample with missing values by X\u2217= {X\u2217 1, . . . , X\u2217 n}. The major goal of the present paper is to estimate \u03a3, the covariance matrix of X, with theoretical guarantees based on the incomplete data X\u2217in the high-dimensional setting where p can be much larger than n. 
This paper focuses on estimation of high-dimensional bandable covariance matrices and sparse covariance matrices in the presence of missing data. These two classes of covariance matrices arise frequently in many applications, including genomics, econometrics, signal processing, temporal and spatial data analyses, and chemometrics. Estimation of these highdimensional structured covariance matrices have been well studied in the setting of complete data in a number of recent papers, e.g., Bickel and Levina [2, 3], Karoui [16], Rothman et al. [24], Cai and Zhou [10], Cai and Liu [5], Cai et al. [6, 9] and Cai and Yuan [8]. Given an incomplete sample X\u2217with missing values, we introduced a \u201cgeneralized\u201d sample covariance matrix, which can be viewed as an analog of the usual sample covariance matrix in the case of complete data. For estimation of bandable covariance matrices, where the entries of the matrix decay as they move away from the diagonal, a blockwise tridiagonal estimator is introduced and is shown to be rate-optimal. We then consider estimation of sparse covariance matrices. An adaptive thresholding estimator based on the generalized sample covariance matrix is proposed. The estimator is shown to achieve the optimal rate of convergence over a large class of approximately sparse covariance matrices under mild conditions. The technical analysis for the case of missing data is much more challenging than that for the complete data, although some of the basic ideas are similar. To facilitate the theoretical analysis of the proposed estimators, we establish two key technical results, \ufb01rst, a large deviation result for a sub-matrix of the generalized sample covariance matrix and second, a large deviation bound for the self-normalized entries of the generalized sample covariance matrix. 
These technical tools are not only important for the present paper, but also useful for other related problems in high-dimensional statistical inference with missing data. A simulation study is carried out to examine the numerical performance of the proposed 4 \festimation procedures. The results show that the proposed estimators perform well numerically. Even in the MUCR setting, our proposed procedures for estimating bandable, sparse covariance matrices, which do not rely on the information of the missingness mechanism, outperform the ones speci\ufb01cally designed for MUCR. The advantages are more signi\ufb01cant under the setting of missing completely at random but not uniformly. We also illustrate our procedure with an application to data from four ovarian cancer studies that have di\ufb00erent volumes of genomic data by design. The proposed estimators enable us to estimate the covariance matrix by integrating the data from all four studies and lead to a more accurate estimator. Such high-dimensional covariance matrix estimation with missing data is also useful for other types of data integration. See further discussions in Section 4.4. The rest of the paper is organized as follows. Section 2 considers estimation of bandable covariance matrices with incomplete data. The minimax rate of convergence is established for the spectral norm loss under regularity conditions. Section 3 focuses on estimation of highdimensional sparse covariance matrices and introduces an adaptive thresholding estimator in the presence of missing observations. Asymptotic properties of the estimator under the spectral norm loss is also studied. Numerical performance of the proposed methods is investigated in Section 4 through both simulation studies and an analysis of an ovarian cancer dataset. Section 5 discusses a few related problems. Finally the proofs of the main results are given in Section 6 and the Supplement. 
2 Estimation of Bandable Covariance Matrices

In this section, we consider estimation of bandable covariance matrices with incomplete data. Bandable covariance matrices, whose entries decay as they move away from the diagonal, arise frequently in temporal and spatial data analysis. See, e.g., Bickel and Levina [2] and Cai et al. [7] and the references therein. The procedure relies on a "generalized" sample covariance matrix. We begin with basic notation and definitions that will be used throughout the rest of the paper.

2.1 Notation and Definitions

Matrices and vectors are denoted by boldface letters. For a vector $\beta \in \mathbb{R}^p$, we denote the $q$-norm by $\|\beta\|_q$, i.e., $\|\beta\|_q = \left(\sum_{i=1}^p |\beta_i|^q\right)^{1/q}$. Let $A = UDV^\top = \sum_i \lambda_i(A) u_i v_i^\top$ be the singular value decomposition of a matrix $A \in \mathbb{R}^{p_1 \times p_2}$, where $D = \mathrm{diag}\{\lambda_1(A), \ldots\}$ with $\lambda_1(A) \ge \cdots \ge 0$ being the singular values. For $1 \le q \le \infty$, the Schatten-$q$ norm $\|A\|_q$ is defined by $\|A\|_q = \{\sum_i \lambda_i^q(A)\}^{1/q}$. In particular, $\|A\|_2 = \sqrt{\sum_i \lambda_i^2(A)}$ is the Frobenius norm of $A$ and will be denoted by $\|A\|_F$; $\|A\|_\infty = \lambda_1(A)$ is the spectral norm of $A$ and will be simply denoted by $\|A\|$. For $1 \le q \le \infty$ and $A \in \mathbb{R}^{p_1 \times p_2}$, we denote the operator $\ell_q$ norm of $A$ by $\|A\|_{\ell_q}$, defined as $\|A\|_{\ell_q} = \max_{x \in \mathbb{R}^{p_2}} \|Ax\|_q / \|x\|_q$. The following are well-known facts about the various norms of a matrix $A = (a_{ij})$:
$$\|A\|_{\ell_1} = \max_j \sum_{i=1}^{p_1} |a_{ij}|, \qquad \|A\|_{\ell_2} = \|A\| = \lambda_1(A), \qquad \|A\|_{\ell_\infty} = \max_i \sum_{j=1}^{p_2} |a_{ij}|, \qquad (2)$$
and, if $A$ is symmetric, $\|A\|_{\ell_1} = \|A\|_{\ell_\infty} \ge \|A\|_{\ell_2}$. When $R_1$ and $R_2$ are two subsets of $\{1, \ldots, p_1\}$ and $\{1, \ldots, p_2\}$ respectively, we write $A_{R_1 \times R_2} = (a_{ij})_{i \in R_1, j \in R_2}$ for the sub-matrix of $A$ with indices $R_1$ and $R_2$. In addition, we simply write $A_{R_1 \times R_1}$ as $A_{R_1}$.

We denote by $X_1, \ldots, X_n$ a complete random sample (without missing observations) from a $p$-dimensional distribution with mean $\mu$ and covariance matrix $\Sigma$. The sample mean and sample covariance matrix are defined as
$$\bar{X} = \frac{1}{n} \sum_{k=1}^n X_k, \qquad \hat{\Sigma} = \frac{1}{n} \sum_{k=1}^n \left(X_k - \bar{X}\right) \left(X_k - \bar{X}\right)^\top. \qquad (3)$$

Now we introduce the notation related to the incomplete data with missing observations. Generally, we use the superscript "$*$" to denote objects related to missing values. Let $S_1, \ldots, S_n \in \{0,1\}^p$ be the indicator vectors for the observed values (see (1)) and let $X^* = \{X^*_1, \ldots, X^*_n\}$ be the observed incomplete data, where the observed entries are indexed by the vectors $S_1, \ldots, S_n$. In addition, we define
$$n^*_{ij} = \sum_{k=1}^n S_{ik} S_{jk}, \qquad 1 \le i, j \le p. \qquad (4)$$
Here $n^*_{ij}$ is the number of vectors $X^*_k$ in which the $i$th and $j$th entries are both observed. For convenience, we also denote
$$n^*_i = n^*_{ii}, \qquad n^*_{\min} = \min_{i,j} n^*_{ij}. \qquad (5)$$

Given a sample $X^* = \{X^*_1, \ldots, X^*_n\}$ with missing values, the sample mean and sample covariance matrix can no longer be calculated in the usual way. Instead, we propose the "generalized sample mean" $\bar{X}^*$ defined by $\bar{X}^* = (\bar{X}^*_i)_{1 \le i \le p}$ with
$$\bar{X}^*_i = \frac{1}{n^*_i} \sum_{k=1}^n X_{ik} S_{ik}, \qquad 1 \le i \le p, \qquad (6)$$
where $X_{ik}$ is the $i$th entry of $X_k$, and the "generalized sample covariance matrix" $\hat{\Sigma}^*$ defined by $\hat{\Sigma}^* = (\hat{\sigma}^*_{ij})_{1 \le i,j \le p}$ with
$$\hat{\sigma}^*_{ij} = \frac{1}{n^*_{ij}} \sum_{k=1}^n (X_{ik} - \bar{X}^*_i)(X_{jk} - \bar{X}^*_j) S_{ik} S_{jk}. \qquad (7)$$

As will be seen later, the generalized sample mean $\bar{X}^*$ and the generalized sample covariance matrix $\hat{\Sigma}^*$ play similar roles to those of the conventional sample mean and sample covariance matrix in inference problems, but the technical analysis can be much more involved. Some distinctions between the generalized sample covariance matrix $\hat{\Sigma}^*$ and the usual sample covariance matrix $\hat{\Sigma}$ are that $\hat{\Sigma}^*$ is in general not non-negative definite, and that each entry $\hat{\sigma}^*_{ij}$ is the average of a varying number ($n^*_{ij}$) of samples, which creates additional difficulties in the technical analysis.

Regarding the mechanism of missingness, the assumption we use for the theoretical analysis is missing completely at random. This is a more general setting than the one considered previously by Loh and Wainwright [21] and Lounici [22].

Assumption 2.1 (Missing Completely at Random (MCR)) $S = \{S_1, \ldots, S_n\}$ does not depend on the values of $X$. Here $S$ can be either deterministic or random, but independent of $X$.

We adopt Assumption 1 in Chen et al. [13] and assume that the random vector $X$ is sub-Gaussian, satisfying the following assumption.

Assumption 2.2 (Sub-Gaussian Assumption) $X = \{X_1, \ldots, X_n\}$, where the columns $X_k$ are i.i.d. and can be expressed as
$$X_k = \Gamma Z_k + \mu, \qquad k = 1, \ldots, n, \qquad (8)$$
where $\mu$ is a fixed $p$-dimensional mean vector, $\Gamma \in \mathbb{R}^{p \times q}$ is a fixed matrix with $q \ge p$ such that $\Gamma \Gamma^\top = \Sigma$, and $Z_k = (Z_{1k}, \ldots, Z_{qk})^\top$ is a $q$-dimensional random vector whose components are i.i.d. sub-Gaussian with mean 0 and variance 1, with the exception of the i.i.d. Rademacher case. More specifically, each $Z_{ik}$ satisfies $\mathbb{E} Z_{ik} = 0$, $\mathrm{var}(Z_{ik}) = 1$, $0 < \mathrm{var}(Z^2_{ik}) < \infty$, and there exists $\tau > 0$ such that $\mathbb{E} e^{t Z_{ik}} \le \exp(\tau t^2 / 2)$ for all $t > 0$.
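The generalized sample mean (6) and generalized sample covariance matrix (7) can be computed directly from the zero-filled data matrix and the observation mask. The following is a minimal NumPy sketch; the function name `generalized_mean_cov` is ours, not the paper's.

```python
import numpy as np

def generalized_mean_cov(X, S):
    """Generalized sample mean (6) and covariance matrix (7) from
    incomplete data.

    X : (n, p) data matrix; entries where S == 0 are treated as missing.
    S : (n, p) binary mask, S[k, i] = 1 iff entry i of sample k is observed.

    Returns (xbar, sigma, n_pair) where n_pair[i, j] = n*_ij as in (4), the
    number of samples in which coordinates i and j are both observed.
    """
    X = np.asarray(X, dtype=float)
    S = np.asarray(S, dtype=float)
    n_i = S.sum(axis=0)                  # n*_i: observations per coordinate
    xbar = (X * S).sum(axis=0) / n_i     # generalized sample mean (6)
    n_pair = S.T @ S                     # n*_ij as in (4)
    Xc = (X - xbar) * S                  # centered; missing entries zeroed
    sigma = (Xc.T @ Xc) / n_pair         # generalized sample covariance (7)
    return xbar, sigma, n_pair
```

With complete data ($S \equiv 1$) these quantities reduce exactly to the usual sample mean and the sample covariance matrix in (3).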
Note that the exclusion of the Rademacher distribution in Assumption 2.2 is only required for estimation of sparse covariance matrices. See Remark 3.3 for further discussions.

2.2 Rate-optimal Blockwise Tridiagonal Estimator

We follow Bickel and Levina [2] and Cai et al. [9] and consider estimating the covariance matrix $\Sigma$ over the parameter space $\mathcal{U}_\alpha = \mathcal{U}_\alpha(M_0, M)$, where
$$\mathcal{U}_\alpha(M_0, M) = \left\{ \Sigma : \max_j \sum_i \left\{|\sigma_{ij}| : |i - j| > k\right\} \le M k^{-\alpha} \text{ for all } k, \ \|\Sigma\| \le M_0 \right\}. \qquad (9)$$
Suppose we have $n$ i.i.d. samples with missing values $X^*_1, \ldots, X^*_n$ with covariance matrix $\Sigma \in \mathcal{U}_\alpha(M_0, M)$. We propose a blockwise tridiagonal estimator $\hat{\Sigma}^{bt}$ to estimate $\Sigma$. We begin by dividing the generalized sample covariance matrix $\hat{\Sigma}^*$ given by (7) into blocks of size $k \times k$ for some $k$. More specifically, pick an integer $k$ and let $N = \lceil p/k \rceil$. Set $I_j = \{(j-1)k + 1, \ldots, jk\}$ for $1 \le j \le N - 1$, and $I_N = \{(N-1)k + 1, \ldots, p\}$. For $1 \le j, j' \le N$ and $A = (a_{i_1, i_2})_{p \times p}$, define $A_{I_j \times I_{j'}} = (a_{i_1, i_2})_{i_1 \in I_j, i_2 \in I_{j'}}$, and define the blockwise tridiagonal estimator $\hat{\Sigma}^{bt}$ by
$$\hat{\Sigma}^{bt}_{I_j \times I_{j'}} = \begin{cases} \hat{\Sigma}^*_{I_j \times I_{j'}}, & \text{if } |j - j'| \le 1; \\ 0, & \text{otherwise.} \end{cases} \qquad (10)$$
That is, $\hat{\Sigma}^{bt}_{I_j \times I_{j'}}$ is estimated by its sample counterpart if and only if $j$ and $j'$ differ by at most 1. The weight matrix of the blockwise tridiagonal estimator $\hat{\Sigma}^{bt}$ is illustrated in Figure 1.

Figure 1: Weight matrix for the blockwise tridiagonal estimator.

Theorem 2.1 Suppose Assumptions 2.1 and 2.2 hold. Then, conditioning on $S$, the blockwise tridiagonal estimator $\hat{\Sigma}^{bt}$ with $k = (n^*_{\min})^{1/(2\alpha+1)}$ satisfies
$$\sup_{\Sigma \in \mathcal{U}_\alpha(M, M_0)} \mathbb{E}\,\|\hat{\Sigma}^{bt} - \Sigma\|^2 \le C (n^*_{\min})^{-2\alpha/(2\alpha+1)} + C \frac{\ln p}{n^*_{\min}}, \qquad (11)$$
where $C$ is a constant depending only on $M$, $M_0$, and $\tau$ from Assumption 2.2.

The optimal choice of block size $k$ depends on the unknown "smoothness parameter" $\alpha$. In practice, $k$ can be chosen by cross-validation. See Section 4.1 for further discussions. Moreover, the convergence rate in (11) is optimal, as we also have the following lower bound result.

Proposition 2.1 For any $n_0 \ge 1$ such that $p \le \exp(\gamma n_0)$ for some constant $\gamma > 0$, conditioning on $S$ we have
$$\inf_{\hat{\Sigma}} \sup_{\substack{\Sigma \in \mathcal{U}_\alpha(M, M_0) \\ S : n^*_{\min} \ge n_0}} \mathbb{E}\left( \|\hat{\Sigma} - \Sigma\|^2 \right) \ge C (n_0)^{-2\alpha/(2\alpha+1)} + C \frac{\ln p}{n_0}.$$

Remark 2.1 (Tapering and banding estimators) It should be noted that the same rate of convergence can also be attained by tapering and banding estimators with suitable choices of tapering and banding parameters. Specifically, let $\hat{\Sigma}^{tp}$ and $\hat{\Sigma}^{bd}$ be respectively the tapering and banding estimators proposed in Cai et al. [9] and Bickel and Levina [2] with
$$\hat{\Sigma}^{tp} = \hat{\Sigma}^{tp}_k = (w^{tp}_{ij} \hat{\sigma}^*_{ij})_{1 \le i,j \le p} \qquad \text{and} \qquad \hat{\Sigma}^{bd} = \hat{\Sigma}^{bd}_k = (w^{bd}_{ij} \hat{\sigma}^*_{ij})_{1 \le i,j \le p}, \qquad (12)$$
where the weights $w^{tp}_{ij}$ and $w^{bd}_{ij}$ are defined as
$$w^{tp}_{ij} = \begin{cases} 1, & \text{when } |i - j| \le k/2, \\ 2 - \dfrac{|i - j|}{k/2}, & \text{when } k/2 < |i - j| < k, \\ 0, & \text{otherwise,} \end{cases} \qquad w^{bd}_{ij} = \begin{cases} 1, & \text{when } |i - j| \le k, \\ 0, & \text{otherwise.} \end{cases} \qquad (13)$$
Then the estimators $\hat{\Sigma}^{tp}$ and $\hat{\Sigma}^{bd}$ with $k = (n^*_{\min})^{1/(2\alpha+1)}$ attain the rate given in (11).
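The blockwise tridiagonal estimator (10) and the tapering weights of (13) can be sketched in a few lines of NumPy (function names and 0-based indexing are ours, not the paper's):

```python
import numpy as np

def blockwise_tridiagonal(sigma_star, k):
    """Blockwise tridiagonal estimator (10): keep the k x k diagonal blocks
    of the generalized sample covariance matrix together with the blocks
    directly adjacent to them (|j - j'| <= 1); zero out all other blocks."""
    p = sigma_star.shape[0]
    N = -(-p // k)  # ceil(p / k): number of blocks
    est = np.zeros_like(sigma_star)
    for j in range(N):
        rows = slice(j * k, min((j + 1) * k, p))
        for jp in range(max(0, j - 1), min(N, j + 2)):  # |j - jp| <= 1
            cols = slice(jp * k, min((jp + 1) * k, p))
            est[rows, cols] = sigma_star[rows, cols]
    return est

def tapering_weights(p, k):
    """Tapering weights w^tp_ij of (13): 1 on the band |i - j| <= k/2,
    decaying linearly to 0 at |i - j| = k."""
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.clip(2.0 - d / (k / 2.0), 0.0, 1.0)
```

A tapering estimator in the sense of (12) is then simply `tapering_weights(p, k) * sigma_star`.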
The proof of Theorem 2.1 shares some basic ideas with that for the complete data case (see, e.g., Theorem 2 in Cai et al. [9]). However, it relies on a new key technical tool: a large deviation result for a sub-matrix of the generalized sample covariance matrix under the spectral norm. This random matrix result for the case of missing data, stated in the following lemma, can be potentially useful for other related high-dimensional missing data problems. The proof of Lemma 2.1, given in Section 6, is more involved than in the complete data case, as each entry $\hat{\sigma}^*_{ij}$ of the generalized sample covariance matrix is the average of a varying number of samples.

Lemma 2.1 Suppose Assumptions 2.1 and 2.2 hold. Let $\hat{\Sigma}^*$ be the generalized sample covariance matrix defined in (7) and let $A$ and $B$ be two subsets of $\{1, \ldots, p\}$. Then, conditioning on $S$, the submatrix $\hat{\Sigma}^*_{A \times B}$ satisfies
$$\Pr\left( \|\hat{\Sigma}^*_{A \times B} - \Sigma_{A \times B}\| \le x \right) \ge 1 - C \cdot 49^{|A \cup B|} \exp\left\{ -c\, n^*_{\min} \min\left( \frac{x^2}{\tau^4 \|\Sigma_A\| \|\Sigma_B\|}, \frac{x}{\tau^2 \left(\|\Sigma_A\| \|\Sigma_B\|\right)^{1/2}} \right) \right\} \qquad (14)$$
for all $x > 0$. Here $C > 0$ and $c > 0$ are two absolute constants.

3 Estimation of Sparse Covariance Matrices

In this section, we consider estimation of high-dimensional sparse covariance matrices in the presence of missing data. We introduce an adaptive thresholding estimator based on incomplete data and investigate its asymptotic properties.

3.1 Adaptive Thresholding Procedure

Sparse covariance matrices arise naturally in a range of applications including genomics. Estimation of sparse covariance matrices has been considered in several recent papers in the setting of complete data (see, e.g., Bickel and Levina [3], El Karoui [16], Rothman et al. [24], Cai and Zhou [10] and Cai and Liu [5]).
Estimation of a sparse covariance matrix is intrinsically a heteroscedastic problem in the sense that the variances of the entries of the sample covariance matrix can vary over a wide range. To treat the heteroscedasticity of the sample covariances, Cai and Liu [5] introduced an adaptive thresholding procedure which adapts to the variability of the individual entries of the sample covariance matrix and outperforms the universal thresholding method. Their estimator is simultaneously rate-optimal over collections of sparse covariance matrices. In the present setting of missing data, the usual sample covariance matrix is not available. Instead, we apply the idea of adaptive thresholding to the generalized sample covariance matrix $\hat{\Sigma}^*$. The procedure can be described as follows. Since $\hat{\Sigma}^*$ defined in (7) is a nearly unbiased estimate of $\Sigma$, we may write it element-wise as
$$\hat{\sigma}^*_{ij} \approx \sigma_{ij} + \sqrt{\frac{\theta_{ij}}{n^*_{ij}}}\, z_{ij}, \qquad 1 \le i, j \le p,$$
where $z_{ij}$ is approximately normal with mean 0 and variance 1, and $\theta_{ij}$ describes the uncertainty of the estimator $\hat{\sigma}^*_{ij}$ for $\sigma_{ij}$:
$$\theta_{ij} = \mathrm{var}\left\{ (X_i - \mu_i)(X_j - \mu_j) - \sigma_{ij} \right\}.$$
We can estimate $\theta_{ij}$ by
$$\hat{\theta}^*_{ij} = \frac{1}{n^*_{ij}} \sum_{k=1}^n \left\{ (X_{ik} - \bar{X}^*_i)(X_{jk} - \bar{X}^*_j) - \hat{\sigma}^*_{ij} \right\}^2 S_{ik} S_{jk}. \qquad (15)$$
Lemma 3.1, given at the end of this section, shows that $\hat{\theta}^*_{ij}$ is a good estimate of $\theta_{ij}$. Since the covariance matrix $\Sigma$ is assumed to be sparse, it is natural to estimate $\Sigma$ by individually thresholding each $\hat{\sigma}^*_{ij}$ according to its own variability as measured by $\hat{\theta}^*_{ij}$. Define the thresholding level $\lambda_{ij}$ by
$$\lambda_{ij} = \delta \sqrt{\frac{\hat{\theta}^*_{ij} \ln p}{n^*_{ij}}}, \qquad 1 \le i, j \le p,$$
where $\delta$ is a thresholding constant which can be taken as 2.
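The full pipeline, from the variability estimates (15) to the entrywise levels $\lambda_{ij}$, can be sketched as follows. This is a minimal NumPy sketch, not the paper's code; it uses the soft thresholding rule (one of the admissible rules discussed below), and it leaves the diagonal unthresholded, a common convention since the variances are not assumed sparse.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft thresholding rule T_lam(z) = sgn(z) (|z| - lam)_+."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def adaptive_threshold(X, S, delta=2.0):
    """Adaptive thresholding of the generalized sample covariance matrix:
    computes sigma*_ij as in (7), the variability estimates theta*_ij (15),
    the entrywise levels lambda_ij = delta * sqrt(theta*_ij ln(p) / n*_ij),
    and thresholds each entry at its own level."""
    X, S = np.asarray(X, float), np.asarray(S, float)
    n, p = X.shape
    n_pair = S.T @ S                                  # n*_ij
    xbar = (X * S).sum(axis=0) / S.sum(axis=0)        # generalized mean (6)
    Xc = (X - xbar) * S                               # centered, zero-filled
    sigma = (Xc.T @ Xc) / n_pair                      # generalized covariance (7)
    theta = np.zeros((p, p))
    for k in range(n):                                # theta*_ij as in (15)
        prod = np.outer(Xc[k], Xc[k])
        theta += ((prod - sigma) ** 2) * np.outer(S[k], S[k])
    theta /= n_pair
    lam = delta * np.sqrt(theta * np.log(p) / n_pair)
    est = soft_threshold(sigma, lam)
    np.fill_diagonal(est, np.diag(sigma))             # keep variances as-is
    return est
```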
Let $T_\lambda$ be a thresholding function satisfying the following conditions:

(1) $|T_\lambda(z)| \le c_T |y|$ for all $z, y$ such that $|z - y| \le \lambda$;
(2) $T_\lambda(z) = 0$ for $|z| \le \lambda$;
(3) $|T_\lambda(z) - z| \le \lambda$ for all $z \in \mathbb{R}$.

These conditions are met by many commonly used thresholding functions, including the soft thresholding rule $T_\lambda(z) = \mathrm{sgn}(z)(|z| - \lambda)_+$, where $\mathrm{sgn}(z)$ is the sign function with $\mathrm{sgn}(z) = 1$ if $z > 0$, $\mathrm{sgn}(z) = 0$ if $z = 0$, and $\mathrm{sgn}(z) = -1$ if $z < 0$, and the adaptive lasso rule $T_\lambda(z) = z(1 - |\lambda/z|^\eta)_+$ with $\eta \ge 1$ (see Rothman et al. [24]). The hard thresholding function does not satisfy Condition (1), but our analysis also applies to hard thresholding under similar conditions.

The covariance matrix $\Sigma$ is estimated by $\hat{\Sigma}^{at} = (\hat{\sigma}^{at}_{ij})_{1 \le i,j \le p}$, where $\hat{\sigma}^{at}_{ij}$ is the thresholding estimator defined by
$$\hat{\sigma}^{at}_{ij} = T_{\lambda_{ij}}(\hat{\sigma}^*_{ij}). \qquad (16)$$
Note that here each entry $\hat{\sigma}^*_{ij}$ is thresholded according to its own variability.

3.2 Asymptotic Properties

We now investigate the properties of the thresholding estimator $\hat{\Sigma}^{at}$ over the following parameter space for sparse covariance matrices,
$$\mathcal{H}(c_{n,p}) = \left\{ \Sigma = (\sigma_{ij}) : \max_{1 \le i \le p} \sum_{j=1}^p \min\left\{ (\sigma_{ii}\sigma_{jj})^{1/2}, \frac{|\sigma_{ij}|}{\sqrt{(\ln p)/n}} \right\} \le c_{n,p} \right\}. \qquad (17)$$
The parameter space $\mathcal{H}(c_{n,p})$ contains a large collection of sparse covariance matrices and does not impose any constraint on the variances $\sigma_{ii}$, $i = 1, \ldots, p$.
The collection $\mathcal{H}(c_{n,p})$ contains other commonly used classes of sparse covariance matrices in the literature, including the $\ell_q$ ball condition $\max_i \sum_{j=1}^p |\sigma_{ij}|^q \le s_{n,p}$ in Bickel and Levina [3], and the weak $\ell_q$ ball condition $\max_{1 \le j \le p} |\sigma_{j[k]}|^q \le s_{n,p}/k$ for each integer $k$ in Cai and Zhou [10], where $|\sigma_{j[k]}|$ is the $k$th largest entry in magnitude of the $j$th row $(\sigma_{ij})_{1 \le i \le p}$. See Cai et al. [7] for more discussions. We have the following result on the performance of $\hat{\Sigma}^{at}$ over the parameter space $\mathcal{H}(c_{n,p})$.

Theorem 3.1 Suppose that $\delta \ge 2$, $\ln p = o((n^*_{\min})^{1/3})$, and Assumptions 2.1 and 2.2 hold. Then, conditioning on $S$, there exists a constant $C$ not depending on $p$, $n^*_{\min}$ or $n$ such that for any $\Sigma \in \mathcal{H}(c_{n,p})$,
$$\Pr\left( \left\| \hat{\Sigma}^{at} - \Sigma \right\| \le C c_{n,p} \sqrt{\frac{\ln p}{n^*_{\min}}} \right) \ge 1 - O\left\{ (\ln p)^{-1/2} p^{-\delta+2} \right\}. \qquad (18)$$
Moreover, if we further assume that $p \ge (n^*_{\min})^\xi$ and $\delta \ge 4 + 1/\xi$, we in addition have
$$\mathbb{E}\left( \|\hat{\Sigma}^{at} - \Sigma\|^2 \right) \le C c^2_{n,p} \frac{\ln p}{n^*_{\min}}. \qquad (19)$$

Moreover, the lower bound result below shows that the rate in (19) is optimal.

Proposition 3.1 For any $n_0 \ge 1$ and $c_{n,p} > 0$ such that $c_{n,p} \le M n_0^{1/2} (\ln p)^{-3/2}$ for some constant $M > 0$, conditioning on $S$ we have
$$\inf_{\hat{\Sigma}} \sup_{\substack{\Sigma \in \mathcal{H}(c_{n,p}) \\ S : n^*_{\min} \ge n_0}} \mathbb{E}\left( \|\hat{\Sigma} - \Sigma\|^2 \right) \ge C c^2_{n,p} \frac{\ln p}{n_0}.$$

Remark 3.1 ($\ell_q$ norm loss) We focus in this paper on estimation under the spectral norm loss. The results given in Theorem 3.1 can be easily generalized to the general matrix $\ell_q$ norm: the bounds in (18) and (19) remain valid when the spectral norm is replaced by the matrix $\ell_q$ norm for any $1 \le q \le \infty$.
Remark 3.2 (Positive definiteness) Under mild conditions on $\Sigma$, the estimator $\hat{\Sigma}^{at}$ is positive definite with high probability. However, $\hat{\Sigma}^{at}$ is not guaranteed to be positive definite for a given data set. Whenever $\hat{\Sigma}^{at}$ is not positive semi-definite, a simple extra step can make the final estimator $\hat{\Sigma}^{at}_+$ positive definite and also rate-optimal. Write the eigen-decomposition of $\hat{\Sigma}^{at}$ as $\hat{\Sigma}^{at} = \sum_{i=1}^p \hat{\lambda}_i \hat{v}_i \hat{v}_i^\top$, where $\hat{\lambda}_1 \ge \cdots \ge \hat{\lambda}_p$ are the eigenvalues and $\hat{v}_i$ are the corresponding eigenvectors. Define the final estimator
$$\hat{\Sigma}^{at}_+ = \hat{\Sigma}^{at} + \left( |\hat{\lambda}_p| + \frac{\ln p}{n^*_{\min}} \right) I\{\hat{\lambda}_p < 0\} \cdot I_{p \times p},$$
where $I_{p \times p}$ is the $p \times p$ identity matrix. Then $\hat{\Sigma}^{at}_+$ is a positive definite matrix with the same structure as that of $\hat{\Sigma}^{at}$, and it is easy to show that $\hat{\Sigma}^{at}_+$ and $\hat{\Sigma}^{at}$ attain the same rate of convergence over $\mathcal{H}(c_{n,p})$. See Cai, Ren and Zhou [7] for further discussions.

Remark 3.3 (Exclusion of the Rademacher Distribution) To guarantee that $\hat{\theta}^*_{ij}$ is a good estimate of $\theta_{ij}$, one important condition needed in the theoretical analysis is that $\theta_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}$ is bounded from below by a positive constant. However, when the components of $Z_k$ in (8) are i.i.d. Rademacher, it is possible that $\theta_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}} = 0$. For example, if $Z_1$ and $Z_2$ are i.i.d. Rademacher and $X_i = Z_1 + Z_2$, $X_j = Z_1 - Z_2$, then $\mathrm{var}(X_i X_j) = \mathrm{var}(Z_1^2 - Z_2^2) = 0$, which implies $\theta_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}} = 0$.

A key technical tool in the analysis of the adaptive thresholding estimator is a large deviation result for the self-normalized entries of the generalized sample covariance matrix.
The following lemma, proved in Section 6, plays a critical role in the proof of Theorem 3.1 and can be useful for other high-dimensional inference problems with missing data.

Lemma 3.1 Suppose $\ln p = o((n^*_{\min})^{1/3})$ and Assumptions 2.1 and 2.2 hold. For any constants $\delta \ge 2$, $\varepsilon > 0$, $M > 0$, conditioning on $S$, we have
$$\Pr\left( \frac{|\hat{\sigma}^*_{ij} - \sigma_{ij}|}{(\hat{\theta}^*_{ij})^{1/2}} \ge \delta \sqrt{\frac{\ln p}{n^*_{ij}}} \ \text{for some } 1 \le i, j \le p \right) = O\left\{ (\ln p)^{-1/2} p^{-\delta+2} \right\}, \qquad (20)$$
$$\Pr\left( \max_{i,j} \frac{|\hat{\theta}^*_{ij} - \theta_{ij}|}{\sigma_{ii}\sigma_{jj}} \ge \varepsilon \right) = O(p^{-M}). \qquad (21)$$

In addition to optimal estimation of a sparse covariance matrix $\Sigma$ under the spectral norm loss, it is also of significant interest to recover the support of $\Sigma$, i.e., the locations of the nonzero entries of $\Sigma$. The problem has been studied in the case of complete data in, e.g., Cai and Liu [5] and Rothman et al. [24]. With incomplete data, the support can be similarly recovered through adaptive thresholding. Specifically, define the support of $\Sigma = (\sigma_{ij})_{1 \le i,j \le p}$ by $\mathrm{supp}(\Sigma) = \{(i,j) : \sigma_{ij} \ne 0\}$. Under the condition that the nonzero entries of $\Sigma$ are sufficiently bounded away from zero, the adaptive thresholding estimator $\hat{\Sigma}^{at}$ recovers the support $\mathrm{supp}(\Sigma)$ consistently. It is noteworthy that the sparsity assumption is not directly needed in the support recovery analysis.

Theorem 3.2 (Support Recovery) Suppose $\ln p = o((n^*_{\min})^{1/3})$ and Assumptions 2.1 and 2.2 hold. Let $\gamma$ be any positive constant and suppose $\Sigma$ satisfies
$$|\sigma_{ij}| > (4 + \gamma) \sqrt{\frac{\theta_{ij} \ln p}{n^*_{ij}}} \qquad \text{for all } (i,j) \in \mathrm{supp}(\Sigma). \qquad (22)$$
Let $\hat{\Sigma}^{at}$ be the adaptive thresholding estimator with $\delta = 2$. Then, conditioning on $S$, we have
$$\Pr\left\{ \mathrm{supp}(\hat{\Sigma}^{at}) = \mathrm{supp}(\Sigma) \right\} \to 1 \qquad \text{as } n, p \to \infty.$$
(23)

4 Numerical Results

We investigate in this section the numerical performance of the proposed estimators through simulations. The proposed adaptive thresholding procedure is also illustrated with an estimation of the covariance matrix based on data from four ovarian cancer studies. The estimators $\hat{\Sigma}^{bt}$ and $\hat{\Sigma}^{at}$ introduced in the previous sections both require specification of a tuning parameter ($k$ or $\delta$). Cross-validation is a simple and practical data-driven method for the selection of these tuning parameters. Numerical results indicate that the proposed estimators with the tuning parameter selected by cross-validation perform well empirically. We begin by introducing the following $K$-fold cross-validation method for the empirical selection of the tuning parameters.

4.1 Cross-validation

For a pre-specified positive integer $N$, we construct a grid $T$ of non-negative numbers. For bandable covariance matrix estimation, we set $T = \{1, \lceil p^{1/N} \rceil, \ldots, \lceil p^{N/N} \rceil\}$, and for sparse covariance matrix estimation, we let $T = \{0, 1/N, \ldots, 4N/N\}$. Given $n$ samples $X^* \in \mathbb{R}^{p \times n}$ with missing values and a given positive integer $K$, we randomly divide them into two groups of sizes $n_1 \approx n(K-1)/K$ and $n_2 \approx n/K$, repeated $H$ times. For $h = 1, \ldots, H$, we denote by $J^h_1, J^h_2 \subseteq \{1, \ldots, n\}$ the index sets of the two groups for the $h$th split. The proposed estimator, $\hat{\Sigma}^{bt}$ for bandable covariance matrices or $\hat{\Sigma}^{at}$ for sparse covariance matrices, is then applied to the first group of data $X^*_{J^h_1}$ with each value of the tuning parameter $t \in T$; denote the result by $\hat{\Sigma}^{bt}_h(t)$ or $\hat{\Sigma}^{at}_h(t)$ respectively.
Denote by $\hat{\Sigma}^*_h$ the generalized sample covariance matrix of the second group of data $X^*_{J^h_2}$, and set
$$\hat{R}(t) = \frac{1}{H} \sum_{h=1}^H \left\| \hat{\Sigma}_h(t) - \hat{\Sigma}^*_h \right\|_F^2, \qquad (24)$$
where $\hat{\Sigma}_h(t)$ is either $\hat{\Sigma}^{bt}_h(t)$ for bandable covariance matrices or $\hat{\Sigma}^{at}_h(t)$ for sparse covariance matrices. The final tuning parameter is chosen as $t^* = \arg\min_{t \in T} \hat{R}(t)$, and the final estimator $\hat{\Sigma}^{bt}$ (or $\hat{\Sigma}^{at}$) is calculated using this choice $t^*$. In the following numerical studies, we use 5-fold cross-validation (i.e., $K = 5$) to select the tuning parameters.

Remark 4.1 The Frobenius norm used in (24) can be replaced by other losses such as the spectral norm. Our simulation results indicate that using the Frobenius norm in (24) works well, even when the true loss is the spectral norm loss.

4.2 Simulation Studies

In the simulation studies, we consider the following two settings for the missingness. The first is MUCR, where each entry $X_{ik}$ is observed with probability $0 < \rho \le 1$. The second is missing not uniformly but completely at random (MCR), where the complete data matrix $X$ is divided into four equal-size blocks,
$$X = \begin{bmatrix} X^{(11)} & X^{(12)} \\ X^{(21)} & X^{(22)} \end{bmatrix}, \qquad X^{(11)}, X^{(12)}, X^{(21)}, X^{(22)} \in \mathbb{R}^{\frac{p}{2} \times \frac{n}{2}},$$
and each entry of $X^{(11)}$ and $X^{(22)}$ is observed with probability $\rho^{(1)}$, while each entry of $X^{(12)}$ and $X^{(21)}$ is observed with probability $\rho^{(2)}$, for some $0 < \rho^{(1)}, \rho^{(2)} \le 1$. As mentioned in the introduction, high-dimensional inference for missing data has been studied in the MUCR case, and we compare our estimators with the corresponding estimators based on a different sample covariance matrix designed specifically for the MUCR case.
Under the assumption that $\mathbb{E}X = 0$ and each entry of $X$ is observed independently with probability $\rho$, Loh and Wainwright [21] and Lounici [22] introduced the following substitute for the usual sample covariance matrix, $\hat{\Sigma}^\bullet = (\hat{\sigma}^\bullet_{ij})_{1 \le i,j \le p}$ with
$$\hat{\sigma}^\bullet_{ij} = \begin{cases} \dfrac{1}{n\rho^2} \sum_{k=1}^n X^*_{ik} X^*_{jk}, & i \ne j, \\[2mm] \dfrac{1}{n\rho} \sum_{k=1}^n X^*_{ik} X^*_{jk}, & i = j, \end{cases} \qquad (25)$$
where the missing entries of $X^*$ are replaced by 0's. It is easy to show that $\hat{\Sigma}^\bullet$ is a consistent estimator of $\Sigma$ under MUCR and can be used in the same way as the sample covariance matrix in the complete data setting. For more general settings where $\mathbb{E}X \ne 0$ and the coordinates $X_1, X_2, \ldots, X_p$ are observed with different probabilities $\rho_1, \ldots, \rho_p$, $\hat{\Sigma}^\bullet$ can be generalized as $\hat{\Sigma}^\bullet = (\hat{\sigma}^\bullet_{ij})_{1 \le i,j \le p}$ with
$$\hat{\sigma}^\bullet_{ij} = \begin{cases} \dfrac{1}{n \hat{\rho}_i \hat{\rho}_j} \sum_{k=1}^n X^*_{ik,c} X^*_{jk,c}, & i \ne j, \\[2mm] \dfrac{1}{n \hat{\rho}_i} \sum_{k=1}^n X^*_{ik,c} X^*_{jk,c}, & i = j, \end{cases} \qquad (26)$$
where, for $i = 1, \ldots, p$ and $k = 1, \ldots, n$, $\hat{\rho}_i = \frac{1}{n} \sum_{k=1}^n S_{ik}$ and $X^*_{ik,c} = X^*_{ik} - \bar{X}^*_i$. Based on $\hat{\Sigma}^\bullet$, we can analogously define the corresponding blockwise tridiagonal estimator $\hat{\Sigma}^{bt\bullet}$ for bandable covariance matrices and the adaptive thresholding estimator $\hat{\Sigma}^{at\bullet}$ for sparse covariance matrices.

We first consider estimation of bandable covariance matrices and compare the proposed blockwise tridiagonal estimator $\hat{\Sigma}^{bt}$ with the corresponding estimator $\hat{\Sigma}^{bt\bullet}$. For both methods, the tuning parameter $k$ is selected by 5-fold cross-validation with $N$ varying from 20 to 50. The following bandable covariance matrices are considered:

1. (Linear decaying bandable model) $\Sigma = (\sigma_{ij})_{1 \le i,j \le p}$ with $\sigma_{ij} = \max\{0, 1 - |i - j|/5\}$.

2. (Squared decaying bandable model) $\Sigma = (\sigma_{ij})_{1 \le i,j \le p}$ with $\sigma_{ij} = (|i - j| + 1)^{-2}$.

For missingness, both MUCR and MCR are considered, and (25) and (26) are used to calculate $\hat{\Sigma}^\bullet$ respectively. The proposed procedure $\hat{\Sigma}^{bt}$ is compared with the estimator $\hat{\Sigma}^{bt\bullet}$, which is based on $\hat{\Sigma}^\bullet$. The results for the spectral norm, $\ell_1$ norm and Frobenius norm losses are reported in Table 1. It is easy to see from Table 1 that the proposed estimator $\hat{\Sigma}^{bt}$ generally outperforms $\hat{\Sigma}^{bt\bullet}$, especially in the fast decaying setting. We now consider estimation of sparse covariance matrices with missing values under the following two models:

1. (Permutation Bandable Model) $\Sigma = (\sigma_{ij})_{1 \le i,j \le p}$, where $\sigma_{ij} = \max(0, 1 - 0.2 \cdot |s(i) - s(j)|)$ and $s(i)$, $i = 1, \ldots, p$, is a random permutation of $\{1, \ldots, p\}$.
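The MUCR-specific competitor built from the inverse-probability-weighted sample covariance matrix (26) above can be sketched as follows. This is a minimal NumPy sketch under our own naming (`ipw_cov`): missing entries are zero-filled, columns are centered by the generalized sample mean, and each entry is rescaled by the estimated observation frequencies $\hat{\rho}_i$.

```python
import numpy as np

def ipw_cov(Xstar, S):
    """Inverse-probability-weighted covariance estimator in the spirit of
    (26): zero-fill missing entries, center by the generalized sample mean,
    rescale off-diagonal entries by n * rho_i * rho_j and diagonal entries
    by n * rho_i, where rho_i is the observed fraction of coordinate i."""
    Xstar = np.asarray(Xstar, float)
    S = np.asarray(S, float)
    n, p = Xstar.shape
    rho = S.mean(axis=0)                        # estimated observation frequencies
    xbar = (Xstar * S).sum(axis=0) / S.sum(axis=0)
    Xc = (Xstar - xbar) * S                     # centered, missing entries = 0
    M = (Xc.T @ Xc) / (n * np.outer(rho, rho))  # off-diagonal scaling
    np.fill_diagonal(M, (Xc ** 2).sum(axis=0) / (n * rho))  # diagonal scaling
    return M
```

With complete data ($\rho_i \equiv 1$) this reduces to the ordinary (biased) sample covariance matrix, and the estimators $\hat{\Sigma}^{bt\bullet}$ and $\hat{\Sigma}^{at\bullet}$ are obtained by feeding its output into the blocking or thresholding step in place of $\hat{\Sigma}^*$.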
                         Spectral norm              l1 norm                    Frobenius norm
(p, n)                   Sigma-bt     Sigma-bt*     Sigma-bt     Sigma-bt*     Sigma-bt      Sigma-bt*

Linear Decay Bandable Model, MUCR rho = .5
(50, 50)      2.78(0.17)  2.88(0.18)   4.37(0.57)  4.57(0.76)   7.73(0.85)   7.85(0.80)
(50, 200)     1.44(0.06)  1.56(0.07)   2.52(0.17)  2.71(0.19)   3.91(0.18)   4.16(0.16)
(200, 100)    2.25(0.13)  2.44(0.16)   3.83(0.32)  4.22(0.46)  10.27(0.29)  10.89(0.29)
(200, 200)    1.67(0.07)  1.82(0.08)   2.81(0.19)  3.08(0.22)   7.19(0.19)   7.68(0.14)
(500, 200)    2.00(0.07)  2.18(0.10)   3.45(0.16)  3.74(0.27)  12.10(0.36)  12.87(0.42)

Squared Decay Bandable Model, MUCR rho = .5
(50, 50)      1.34(0.08)  1.40(0.11)   2.28(0.16)  2.37(0.21)   3.78(0.19)   3.91(0.18)
(50, 200)     0.82(0.01)  0.84(0.01)   1.47(0.03)  1.49(0.02)   2.24(0.02)   2.30(0.02)
(200, 100)    1.13(0.01)  1.17(0.02)   2.12(0.05)  2.18(0.07)   5.74(0.04)   5.91(0.05)
(200, 200)    0.92(0.00)  0.94(0.00)   1.66(0.02)  1.72(0.03)   4.49(0.02)   4.61(0.01)
(500, 200)    0.97(0.00)  0.98(0.00)   1.80(0.02)  1.86(0.02)   7.15(0.01)   7.35(0.01)

Linear Decay Bandable Model, MCR rho(1) = .8, rho(2) = .2
(50, 50)      2.76(0.26)  3.46(1.43)   4.24(0.73)  5.87(2.91)   7.03(1.25)   8.47(1.29)
(50, 200)     1.51(0.11)  2.64(0.40)   2.52(0.30)  4.29(0.99)   3.62(0.30)   5.77(0.45)
(200, 100)    2.32(0.22)  3.93(0.67)   3.73(0.47)  6.21(1.11)   9.04(0.48)  13.47(0.84)
(200, 200)    1.67(0.10)  3.23(0.27)   2.71(0.26)  4.91(0.49)   6.32(0.11)  11.32(0.49)
(500, 200)    1.98(0.09)  3.78(0.20)   3.19(0.20)  5.70(0.42)  10.39(0.12)  18.48(0.49)

Squared Decay Bandable Model, MCR rho(1) = .8, rho(2) = .2
(50, 50)      1.26(0.08)  1.49(0.13)   2.21(0.23)  2.60(0.28)   3.48(0.14)   4.18(0.23)
(50, 200)     0.82(0.01)  0.88(0.04)   1.47(0.05)  1.77(0.11)   2.18(0.04)   2.68(0.11)
(200, 100)    1.06(0.01)  1.30(0.04)   1.96(0.04)  2.44(0.07)   5.32(0.02)   6.51(0.06)
(200, 200)    0.90(0.00)  0.96(0.03)   1.60(0.02)  1.99(0.06)   4.27(0.02)   5.26(0.15)
(500, 200)    0.93(0.00)  1.03(0.01)   1.69(0.01)  2.11(0.03)   6.73(0.01)   8.25(0.04)

Table 1: Comparison between $\hat{\Sigma}^{bt}$ and $\hat{\Sigma}^{bt\bullet}$ in different settings of bandable covariance matrix estimation.

2. (Randomly Sparse Model) $\Sigma = I_p + (D + D^\top)/(\|D + D^\top\| + 0.01)$, where $D = (d_{ij})_{1 \le i,j \le p}$ is randomly generated with $d_{ii} = 0$ and, for $i \ne j$,
$$d_{ij} = \begin{cases} 1 & \text{w.p. } 0.1, \\ 0 & \text{w.p. } 0.8, \\ -1 & \text{w.p. } 0.1. \end{cases}$$

As in the bandable covariance matrix estimation, we consider both MUCR and MCR for the missingness. The results for the spectral norm, matrix $\ell_1$ norm and Frobenius norm losses are summarized in Table 2. It can be seen from Table 2 that, even under the MUCR setting, the proposed estimator $\hat{\Sigma}^{at}$ based on the generalized sample covariance matrix is uniformly better than the one based on $\hat{\Sigma}^\bullet$. In the more general MCR setting, the difference in performance between the two estimators is even more significant.

4.3 Comparison with Complete Samples

For covariance matrix estimation with missing data, an interesting question is: what is the "effective sample size"? That is, for samples with missing values, we would like to know the equivalent size of complete samples such that the accuracy of covariance matrix estimation is approximately the same. We now compare the performance of the proposed estimator based on incomplete data with the corresponding estimator based on complete data for various sample sizes. We fix the dimension $p = 100$. For the incomplete data, we consider $n = 1000$ and MUCR with $\rho = .5$. The covariance matrix $\Sigma$ is chosen as:

- Linear Decaying Bandable Model (in bandable covariance matrix estimation);
- Permutation Bandable Model (in sparse covariance matrix estimation).

Correspondingly, we consider similar settings for the complete data with the same $\Sigma$ and $p$ but different sample size $n_c$, where $n_c$ can be one of the following three values:

1.
n\u2217 pair = Pn i,j=1 n\u2217 ij/p2: the average number of pairs of (xi, xj)\u2019s that can be observed within the same sample; 20 \fSpectral norm \u21131 norm Frobenius norm (p, n) \u02c6 \u03a3at \u02c6 \u03a3at\u2022 \u02c6 \u03a3at \u02c6 \u03a3at\u2022 \u02c6 \u03a3at \u02c6 \u03a3at\u2022 Permutation Bandable Model, MUCR \u03c1 = .5 (50, 50) 4.26(0.24) 4.45(0.41) 5.58(0.58) 6.19(7.54) 11.34(0.79) 11.73(1.08) (50, 200) 1.70(0.05) 1.74(0.06) 3.31(0.32) 3.42(0.38) 4.93(0.09) 5.07(0.16) (200, 100) 3.48(0.07) 3.66(0.58) 5.80(0.39) 6.23(14.89) 18.34(0.81) 19.37(5.50) (200, 200) 2.12(0.04) 2.20(0.03) 4.17(0.29) 4.44(0.32) 11.46(0.14) 11.94(0.13) (500, 200) 2.28(0.03) 3.51(0.17) 4.17(0.15) 6.55(0.72) 16.85(0.10) 21.96(0.49) Randomly Sparse Model, MUCR \u03c1 = .5 (50, 50) 1.76(0.07) 1.96(0.62) 3.69(0.24) 4.20(5.89) 5.75(0.51) 6.27(2.95) (50, 200) 1.05(0.00) 1.06(0.00) 2.73(0.04) 2.74(0.05) 3.75(0.03) 3.77(0.04) (200, 100) 1.40(0.01) 1.45(0.01) 4.88(0.08) 4.94(0.09) 8.34(0.07) 8.50(0.07) (200, 200) 1.07(0.00) 1.09(0.01) 4.44(0.03) 4.46(0.03) 7.42(0.02) 7.43(0.02) (500, 200) 1.14(0.01) 1.31(0.01) 6.39(0.04) 6.65(0.08) 11.73(0.01) 12.23(0.05) Permutation Bandable Model, MCR \u03c1(1) = .8, \u03c1(2) = .2 (50, 50) 4.23(0.38) 4.71(1.17) 6.67(2.30) 7.46(8.92) 11.22(1.34) 11.71(2.01) (50, 200) 1.64(0.05) 2.79(0.39) 2.94(0.21) 4.52(0.95) 4.41(0.13) 6.29(0.46) (200, 100) 3.17(0.06) 4.16(0.57) 5.73(0.66) 8.11(1.87) 15.93(0.53) 18.03(0.77) (200, 200) 2.00(0.03) 3.22(0.18) 3.65(0.16) 5.70(0.60) 9.83(0.11) 13.29(0.55) (500, 200) 2.22(0.03) 3.45(0.17) 4.09(0.17) 6.44(0.96) 16.80(0.14) 21.93(0.45) Randomly Sparse Model, MCR \u03c1(1) = .8, \u03c1(2) = .2 (50, 50) 2.15(0.46) 2.19(0.49) 4.21(0.94) 4.47(4.65) 6.36(0.96) 7.25(1.57) (50, 200) 1.09(0.02) 1.16(0.04) 2.82(0.19) 2.99(0.32) 3.83(0.10) 4.00(0.20) (200, 100) 1.46(0.02) 1.82(0.03) 4.96(0.12) 5.61(0.21) 8.45(0.07) 10.10(0.14) (200, 200) 1.08(0.00) 1.20(0.01) 4.46(0.04) 4.57(0.05) 7.43(0.02) 7.66(0.04) (500, 200) 1.12(0.01) 
1.33(0.01) 6.35(0.04) 6.60(0.07) 11.71(0.02) 12.20(0.06)

Table 2: Comparison between Σ̂_at and Σ̂_at• in different settings of sparse covariance matrix estimation.

2. n∗_s = (1/p) Σ_i n∗_i: the average number of samples in which a single x_i can be observed;
3. n: the same number of samples, with the missing values.

The results for all the settings are summarized in Table 3. It can be seen that the equivalent sample size depends on the loss function and in general lies between n∗_pair and n∗_s. Overall, the average risk under the missing data setting is most comparable to that under the complete data setting with sample size n_c = n∗_pair, the average number of observed pairs.

Setting                                     sample size      Spectral norm   ℓ1 norm      Frobenius norm
Bandable Covariance Matrix Estimation
  Missing Data                              n = 1000         0.72(0.01)      1.25(0.03)   2.40(0.01)
  Complete Data                             n_c = n∗_pair    0.97(0.03)      1.49(0.05)   2.48(0.04)
  Complete Data                             n_c = n∗_s       0.65(0.01)      1.01(0.03)   1.69(0.03)
  Complete Data                             n_c = n          0.48(0.01)      0.73(0.01)   1.22(0.01)
Sparse Covariance Matrix Estimation
  Missing Data                              n = 1000         0.75(0.01)      1.37(0.04)   2.90(0.02)
  Complete Data                             n_c = n∗_pair    0.83(0.02)      1.31(0.05)   2.94(0.04)
  Complete Data                             n_c = n∗_s       0.65(0.01)      1.01(0.03)   1.86(0.04)
  Complete Data                             n_c = n          0.45(0.01)      0.64(0.01)   1.12(0.01)

Table 3: Comparison between incomplete samples and complete samples.

4.4 Analysis of Ovarian Cancer Data

In this section, we illustrate the proposed adaptive thresholding procedure with an application to data from four ovarian cancer genomic studies: Cancer Genome Atlas Research Network [11] (TCGA), Bonome et al. [4] (BONO), Dressman et al. [15] (DRES) and Tothill et al. [27] (TOTH). The method introduced in Section 3 enables us to estimate the covariance matrix by integrating data from all four studies and thus yields a more accurate estimator. The data structure is illustrated in Figure 2.
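Before turning to the data analysis, the simulation design above can be sketched in code. The covariance model, the MUCR mask, and the two effective-sample-size summaries n∗_pair and n∗_s follow the definitions in the text; the pairwise-complete covariance mirrors the generalized sample covariance analyzed in the proofs, though its exact centering convention and all function names here are our own.

```python
import numpy as np

def randomly_sparse_sigma(p, rng):
    """Randomly Sparse Model: Sigma = I_p + (D + D^T) / (||D + D^T|| + 0.01),
    with off-diagonal d_ij = +1 w.p. 0.1, -1 w.p. 0.1, 0 w.p. 0.8."""
    D = rng.choice([1.0, 0.0, -1.0], size=(p, p), p=[0.1, 0.8, 0.1])
    np.fill_diagonal(D, 0.0)
    S = D + D.T
    return np.eye(p) + S / (np.linalg.norm(S, 2) + 0.01)

def effective_sample_sizes(S):
    """n_pair = (1/p^2) sum_ij n*_ij and n_s = (1/p) sum_i n*_i
    for a 0/1 observation mask S of shape (n, p)."""
    S = np.asarray(S, dtype=float)
    n_pair = (S.T @ S).mean()          # n*_ij = number of samples observing both i and j
    n_single = S.sum(axis=0).mean()    # n*_i  = number of samples observing i
    return n_pair, n_single

def generalized_sample_cov(X, S):
    """Pairwise-complete sample covariance: entry (i, j) averages
    (X_ik - xbar_i)(X_jk - xbar_j) over the n*_ij samples in which both
    coordinates are observed (centering convention is an assumption here)."""
    S = np.asarray(S, dtype=float)
    Xo = np.where(S > 0, X, 0.0)
    xbar = Xo.sum(axis=0) / np.maximum(S.sum(axis=0), 1.0)
    R = Xo - S * xbar                  # observed deviations, zeros where missing
    n_ij = np.maximum(S.T @ S, 1.0)
    return (R.T @ R) / n_ij

rng = np.random.default_rng(0)
p, n = 50, 1000
Sigma = randomly_sparse_sigma(p, rng)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
mask = (rng.random((n, p)) < 0.5).astype(int)   # MUCR: each entry observed w.p. rho = .5
Sigma_star = generalized_sample_cov(X, mask)
n_pair, n_single = effective_sample_sizes(mask)
```

With a full mask the estimator reduces to the usual (biased) sample covariance, which is a quick sanity check on the pairwise construction.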
The gene expression markers (the first 426 rows) are observed in all four studies without any missingness (the top black block in Figure 2). The miRNA expression markers are observed in 552 samples from the TCGA study (the bottom left black block in Figure 2) and completely missing in the 881 samples from the TOTH, DRES, BONO and part of the TCGA studies (the white block in Figure 2).

Figure 2: Illustration of the ovarian cancer dataset (gene expression markers: p1 = 426; miRNA expression markers: p − p1 = 799; samples with both types observed: n1 = 552; samples with miRNA missing: n − n1 = 881). Black block = completely observed; White block = completely missing.

Our goal is to estimate the covariance matrix Σ of the 1225 variables, with particular interest in the cross-covariances between the gene and miRNA expression markers. It is clear that the missingness here is not uniformly at random. On the other hand, it is reasonable to assume that the missingness does not depend on the value of the data, so missing completely at random (Assumption 2.1) can be assumed. We apply the adaptive thresholding procedure with δ = 2 to estimate the covariance matrix and recover its support based on all the observations. The support of the estimate is shown as a heatmap in Figure 3. The left panel is for the whole covariance matrix and the right panel zooms into the cross-covariances between the gene and miRNA expression markers. It can be seen from Figure 3 that the two diagonal blocks, with 12.24% and 8.39% nonzero off-diagonal entries respectively, are relatively dense.

(a) Covariance matrix of the gene and miRNA expression markers. The gene expression markers are marked with lines. (b) Cross-covariances between the gene and miRNA expression markers. 1294 (.38%) gene-miRNA pairs were detected.
Figure 3: Heatmaps of the covariance matrix estimate with all the observed data.

This indicates that the relationships among the gene expression markers, and those among the miRNA expression markers, as measured by their covariances, are relatively close. In contrast, the cross-covariances between gene and miRNA expression markers are very sparse, with only 0.38% of gene-miRNA pairs significant. Since the gene and miRNA expression markers affect each other through different mechanisms, the cross-covariances between the gene and miRNA markers are of significant interest (see Ko et al. [19]). It is worthwhile to take a closer look at the cross-covariance matrix displayed in the right panel of Figure 3. For each given gene, we count the number of miRNAs whose covariances with this gene are significant, and then rank all the genes by the counts. Similarly, we rank all the miRNAs. The top 5 genes and the top 5 miRNA expression markers are shown in Table 4. Many of these gene and miRNA expression markers have been studied before in the literature. For example, the miRNA expression markers hsa-miR-142-5p and hsa-miR-142-3p have been shown by Andreopoulos and Anastassiou [1] to stand out among the miRNA markers as having higher correlations with more genes, as well as with methylation sites. Carraro et al. [12] find that inhibition of miR-142-3p leads to ectopic expression of the gene marker ACTA2, which indicates strong interaction between miR-142-3p and ACTA2.

Gene Expression Marker   Counts      miRNA Expression Marker   Counts
ACTA2                    61          hsa-miR-142-5p            31
INHBA                    57          hsa-miR-142-3p            29
COL10A1                  53          hsa-miR-22                26
BGN                      46          hsa-miR-21*               24
NID1                     41          hsa-miR-146a              21

Table 4: Genes and miRNAs with the most selected pairs.

To further demonstrate the robustness of our proposed procedure against missingness, we consider a setting with additional missing observations.
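The per-marker counting behind Table 4, and the MCC-based support comparison used in the robustness check that follows, can be sketched as below. Both operate on a 0/1 support matrix with rows indexed by genes and columns by miRNAs; the function names are our own.

```python
import numpy as np

def top_markers(support, row_names, col_names, k=5):
    """For a 0/1 cross-covariance support matrix, count the significant
    pairs each row (gene) and column (miRNA) belongs to, and return the
    k highest-count names on each side."""
    support = np.asarray(support)
    rows = sorted(zip(row_names, support.sum(axis=1)), key=lambda t: -t[1])
    cols = sorted(zip(col_names, support.sum(axis=0)), key=lambda t: -t[1])
    return rows[:k], cols[:k]

def mcc(A, B):
    """Matthews correlation coefficient between two 0/1 support matrices,
    treating every entry as one binary prediction."""
    a = np.asarray(A).ravel().astype(bool)
    b = np.asarray(B).ravel().astype(bool)
    tp = np.sum(a & b); tn = np.sum(~a & ~b)
    fp = np.sum(~a & b); fn = np.sum(a & ~b)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom > 0 else 0.0
```

Identical supports give an MCC of 1, complementary ones −1, so a value like 0.9441 indicates near-identical recovered supports.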
We first randomly select half of the 552 complete samples (where both gene and miRNA expression markers are observed) and half of the 881 incomplete samples (where only gene expression markers are observed), and then independently mask each entry of the selected samples with probability 0.05. The proposed adaptive thresholding procedure is then applied to the data with these additional missing values. The estimated covariance matrix is shown in heatmaps in Figure 4. These additional missing observations do not significantly affect the estimation accuracy: Figure 4 is visually very similar to Figure 3. To quantify the similarity between the two estimates, we calculate the Matthews correlation coefficient (MCC) between them. The value of the MCC is 0.9441, which indicates that the estimate based on the data with the additional missingness is very close to the estimate based on the original samples. We also pay close attention to the cross-covariance matrix displayed in the right panel of Figure 4 and rank the gene and miRNA expression markers in the same way as before. The top 5 genes and the top 5 miRNA expression markers, listed in Table 5, are nearly identical to those given in Table 4, which are based on the original samples. These results indicate that the proposed method is robust against additional missingness.

(a) Covariance matrix of the gene and miRNA expression markers. The gene expression markers are marked with lines. (b) Cross-covariances between the gene and miRNA expression markers. 1176 (.35%) gene-miRNA pairs were detected.

Figure 4: Heatmaps of the covariance matrix estimate with additional missing values.

5 Discussions

We considered in the present paper estimation of bandable and sparse covariance matrices in the presence of missing observations.
The pivotal quantity is the generalized sample covariance matrix defined in (7). The technical analysis is more challenging due to the missing data. We have mainly focused on the spectral norm loss in the theoretical analysis. Performance under other losses such as the Frobenius norm can also be analyzed. To illustrate the proposed methods, we integrated four ovarian cancer studies. These methods for high-dimensional covariance matrix estimation with missing data are also useful for other types of data integration. For example, linking multiple data sources such as electronic data records, Medicare data, registry data and patient-reported outcomes could greatly increase the power of exploratory studies such as phenome-wide association studies (Denny et al. [14]). However, missing data inevitably arise and may hinder the potential of integrative analysis. In addition to random missingness due to unavailable information on a small fraction of patients, many variables such as the genetic measurements may only exist in one or two data sources and are hence structurally missing in the other data sources. Our proposed methods could potentially provide accurate recovery of the covariance matrix in the presence of such missingness.

Gene Expression Marker   Counts      miRNA Expression Marker   Counts
ACTA2                    60          hsa-miR-142-3p            31
INHBA                    56          hsa-miR-142-5p            30
COL10A1                  50          hsa-miR-146a              21
BGN                      43          hsa-miR-150               21
NID1                     40          hsa-miR-21*               21

Table 5: Genes and miRNAs with the most selected pairs after masking.

In this paper, we allowed the proportion of missing values to be non-negligible as long as the minimum number of occurrences of any pair of variables, n∗_min, is of order n. An interesting question is what happens when the number of observed values is large but n∗_min is small (or even zero). We believe that the covariance matrix Σ can still be well estimated under certain global structural assumptions.
This is out of the scope of the present paper and is an interesting problem for future research. The key ideas and techniques developed in this paper can be used for a range of other related problems in high-dimensional statistical inference with missing data. For example, the same techniques can also be applied to estimation of other structured covariance matrices such as Toeplitz matrices, which have been studied in the literature in the case of complete data. When there are missing data, we can construct similar estimators using the generalized sample covariance matrix. The large deviation bounds for a sub-matrix and for self-normalized entries of the generalized sample covariance matrix developed in Lemmas 3.1 and 2.1 would be helpful for analyzing the properties of the estimators. The techniques can also be used on two-sample problems such as estimation of differential correlation matrices and hypothesis testing on the covariance structures. The generalized sample covariance matrix can be standardized to form the generalized sample correlation matrix, which can then be used to estimate the differential correlation matrix in the two-sample case. It is also of significant interest in some applications to test the covariance structures in both one- and two-sample settings based on incomplete data. In the one-sample case, it is of interest to test the hypothesis {H0 : Σ = I} or {H0 : R = I}, where R is the correlation matrix. In the two-sample case, one wishes to test the equality of two covariance matrices {H0 : Σ1 = Σ2}. These are interesting problems for further exploration in the future.

6 Proofs

We prove Theorem 2.1 and the key technical result Lemma 6.1 for the bandable covariance matrix estimation in this section.

6.1 Proof of Lemma 2.1

To prove this lemma, we first introduce the following technical tool for the spectral norm of the sub-matrices.
Lemma 6.1 Suppose Σ ∈ R^{p×p} is any positive semi-definite matrix and A, B ⊆ {1, . . . , p}. Then

∥Σ_{A×B}∥ ≤ (∥Σ_A∥ ∥Σ_B∥)^{1/2}. (27)

The proof of Lemma 6.1 is provided later; we now move back to the proof of Lemma 2.1. Without loss of generality, we assume that µ = EX = 0. We further define

Σ̃∗ = (σ̃∗_ij)_{1≤i,j≤p},   σ̃∗_ij = (1/n∗_ij) Σ_{k=1}^n X_ik X_jk S_ik S_jk. (28)

Also, for convenience of presentation, we use C, C1, c, . . . to denote uniform constants, whose exact values may vary in different scenarios. The lemma is now proved in the following steps:

1. We first consider, for fixed unit vectors a, b ∈ R^p with supp(a) ⊆ A, supp(b) ⊆ B, the tail bound of a⊤(Σ̂∗ − Σ)b. We would like to show that there exist uniform constants C1, c > 0 such that for all x > 0,

Pr{ |a⊤(Σ̂∗ − Σ)b| ≥ x } ≤ C1 exp{ −c n∗_min min( x² / (τ⁴ ∥Σ_A∥ ∥Σ_B∥), x / (τ² (∥Σ_A∥ ∥Σ_B∥)^{1/2}) ) }. (29)

Specifically, we will bound a⊤(Σ̃∗ − Σ̂∗)b and a⊤(Σ̃∗ − Σ)b separately in the next two steps.

2. We consider a⊤(Σ̃∗ − Σ̂∗)b first.
Since \u02d8 \u03c3\u2217 ij \u2212\u02c6 \u03c3\u2217 ij = 1 n\u2217 ij n X k=1 (Xjk \u00af X\u2217 i + Xik \u00af X\u2217 j )SikSjk \u2212\u00af X\u2217 i \u00af X\u2217 j , a\u22a4( \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217)b can be written as a\u22a4( \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217)b = p X i,j=1 aibj(\u02d8 \u03c3\u2217 ij \u2212\u02c6 \u03c3\u2217 ij) = p X i,j=1 aibj Pn k=1 XikSik n\u2217 i \u00b7 Pn l=1 XjlSilSjl n\u2217 ij + Pn k=1 XikSikSjk n\u2217 ij \u00b7 Pn l=1 XjlSjl n\u2217 j \u2212 Pn k=1 XikSik n\u2217 i \u00b7 Pn l=1 XjlSjl n\u2217 j ! = p X i,j=1 n X k,l=1 XikXjlaibj \u0012SikSilSjl n\u2217 i n\u2217 ij + SikSjkSjl n\u2217 ijn\u2217 j \u2212SikSjl n\u2217 i n\u2217 j \u0013 . (30) 29 \fWe can calculate from (30) that \f \f \fEa\u22a4( \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217)b \f \f \f = \f \f \f \f \f p X i,j=1 n X k=1 \u03c3ijaibj \u0012SikSikSjk n\u2217 i n\u2217 ij + SikSjkSjk n\u2217 ijn\u2217 j \u2212SikSjk n\u2217 i n\u2217 j \u0013\f \f \f \f \f = \f \f \f \f \f p X i,j=1 \u03c3ij ai n\u2217 i bj + p X i,j=1 ai bj n\u2217 j \u03c3ij \u2212 n X k=1 p X i,j=1 Sikai n\u2217 i Sjkbj n\u2217 j \u03c3ij \f \f \f \f \f \u2264 \f \f \f \f \u0012a1 n\u2217 1 , . . . , ap n\u2217 p \u0013 \u03a3b \f \f \f \f + \f \f \f \f \fa\u22a4\u03a3 \u0012 b1 n\u2217 1 , . . . , bp n\u2217 p \u0013\u22a4\f \f \f \f \f + n X k=1 \f \f \f \f \f \u0012S1ka1 n\u2217 1 , . . . , Spkap n\u2217 p \u0013 \u03a3 \u0012S1kb1 n\u2217 1 , . . . , Spkbp n\u2217 p \u0013\u22a4\f \f \f \f \f \u2264\u2225\u03a3A\u00d7B\u2225\u2225a\u22252\u2225b\u22252 n\u2217 min + \u2225\u03a3A\u00d7B\u2225\u2225a\u22252\u2225b\u22252 n\u2217 min + n X k=1 \u2225\u03a3A\u00d7B\u2225\u00b7 1 2 (\r \r \r \r \u0012S1ka1 n\u2217 1 , . . . , Spkap n\u2217 p \u0013\r \r \r \r 2 2 + \r \r \r \r \u0012S1kb1 n\u2217 1 , . . . , Spkbp n\u2217 p \u0013\r \r \r \r 2 2 ) . 
(31) For the last term in (31), we have the following bound, n X k=1 \u2225\u03a3A\u00d7B\u2225\u00b7 1 2 (\r \r \r \r \u0012S1ka1 n\u2217 1 , . . . , Spkap n\u2217 p \u0013\r \r \r \r 2 2 + \r \r \r \r \u0012S1kb1 n\u2217 1 , . . . , Spkbp n\u2217 p \u0013\r \r \r \r 2 2 ) =\u2225\u03a3A\u00d7B\u2225 n X k=1 p X i=1 1 2 \u0012Sika2 i n\u22172 i + Sikb2 i n\u22172 i \u0013 =\u2225\u03a3A\u00d7B\u2225 p X i=1 1 2 \u0012a2 i + b2 i n\u2217 i \u0013 \u2264\u2225\u03a3A\u00d7B\u2225 p X i=1 a2 i + b2 i 2n\u2217 min \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 n\u2217 min . Thus, by (31) and the inequality above, we have \f \f \fEa\u22a4( \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217)b \f \f \f \u22643(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 n\u2217 min . (32) The last term of (30) can be treated as a quadratic form of the vectorization of X : vec(X) \u2208Rpn. We note the last term as vec(X)\u22a4Qvec(X), where Q \u2208Rpn\u00d7pn and Q(i,k),(j,l) = aibj \u0012SikSilSjl n\u2217 i n\u2217 ij + SikSjkSjl n\u2217 ijn\u2217 j \u2212SikSjl n\u2217 i n\u2217 j \u0013 , 1 \u2264i, j \u2264p, 1 \u2264k, l \u2264n. 30 \fQ has the following properties, \u2225Q\u22252 F = p X i,j=1 n X k,l=1 a2 i b2 j \u0012SikSilSjl n\u2217 i n\u2217 ij + SikSjkSjl n\u2217 ijn\u2217 j \u2212SikSjl n\u2217 i n\u2217 j \u00132 \u2264 p X i,j=1 a2 i b2 j n X k,l=1 \u0012 2SikSilSjl n\u22172 i n\u22172 ij + 2SikSjkSjl n\u22172 ij n\u22172 j + SikSjl n\u22172 i n\u22172 j \u0013 , since Sik \u2208{0, 1}; \u2264 p X i,j=1 a2 i b2 j 5 n\u22172 min = 5\u2225a\u22252 2\u2225b\u22252 2 n\u22172 min = 5 n\u22172 min ; (33) \u2225Q\u2225\u2264\u2225Q\u2225F \u2264 \u221a 5\u2225a\u22252\u2225b\u22252 n\u2217 min \u2264 \u221a 5 n\u2217 min . (34) For vec(X) \u2208Rpn, since its segments {Xk, k = 1, . . . , p} are independent and Xk = \u0393Zk, we can further write vec(X) = D\u0393vec(Z), where D\u0393 \u2208Rpn\u00d7qn is with n diagonal blocks of \u0393, vec(Z) is a (qn)-dimensional i.i.d. 
sub-Gaussian random vector. Based on Hanson-Wright\u2019s inequality (Theorem 1.1 in Rudelson and Vershynin [25]), Pr n\f \f \fa\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \u2212Ea\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \f \f \f \u2265x o = Pr \b\f \fvec(X)\u22a4Qvec(X) \u2212Evec(X)\u22a4Qvec(X) \f \f \u2265x \t = Pr \u0002\f \fvec(Z)\u22a4D\u22a4 \u0393QD\u0393vec(Z) \u2212E \b vec(Z)\u22a4D\u22a4 \u0393QD\u0393vec(Z) \t\f \f \u2265x \u0003 \u22642 exp \u001a \u2212c min \u0012 x2 \u03c4 4\u2225D\u22a4 \u0393QD\u0393\u22252 F , x \u03c4 2\u2225D\u0393QD\u0393\u2225 \u0013\u001b . (35) Here c > 0 is a uniform constant. Since Q is supported on {(i, k), (j, l) : i \u2208A, j \u2208 B}, we have D\u22a4 \u0393QD\u0393 = D\u22a4 \u0393AQA\u00d7BD\u0393B. Here D\u0393A \u2208R|A|n\u00d7qn, D\u0393B \u2208R|B|n\u00d7qn are with n diagonal block \u0393A\u00d7[q] and \u0393B\u00d7[q], respectively, where [q] = {1, . . . , q}. Since \u0393A\u00d7[q]\u0393\u22a4 A\u00d7[q] = \u03a3A, \u0393B\u00d7[q]\u0393\u22a4 B\u00d7[q] = \u03a3B, we know \u2225D\u0393A\u2225= \u2225\u0393A\u00d7[q]\u2225\u2264\u2225\u03a3A\u22251/2, \u2225\u0393B\u00d7[q]\u2225\u2264\u2225D\u0393B\u2225\u2264\u2225\u03a3B\u22251/2. 31 \fThen we further have Pr n\f \f \fa\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \u2212Ea\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \f \f \f \u2265x o \u22642 exp ( \u2212c min x2 \u03c4 4\u2225D\u22a4 \u0393AQA\u00d7BD\u0393B\u22252 F , x \u03c4 2\u2225D\u22a4 \u0393AQA\u00d7BD\u0393B\u2225 !) \u22642 exp ( \u2212c min x2 \u03c4 4\u2225D\u0393B\u22252\u2225D\u22a4 \u0393A\u22252\u2225Q\u22252 F , x \u03c4 2\u2225D\u0393B\u2225\u2225D\u22a4 \u0393A\u2225\u2225Q\u2225 !) 
\u22642 exp \u0014 \u2212c min \u001a x2 \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225\u2225Q\u22252 F , x \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2\u2225Q\u2225 \u001b\u0015 \u22642 exp \u0014 \u2212c min \u001a x2n\u22172 min \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, xn\u2217 min \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u001b\u0015 . (36) We de\ufb01ne x\u2032 = max \b x \u22123(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2/n\u2217 min, 0 \t , combining the inequality above and (32), we have Pr n\f \f \fa\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \f \f \f \u2265x o \u2264Pr n\f \f \fa\u22a4\u0010 \u02d8 \u03a3\u2217\u2212\u02c6 \u03a3\u2217\u0011 b \u2212Ea\u22a4\u02c6 \u03a3\u2217b \f \f \f \u2265x\u2032o \u22642 exp \u0014 \u2212c min \u001a (x\u2032)2n\u22172 min \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, x\u2032n\u2217 min \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u001b\u0015 \u22642 exp \u0014 \u2212c\u2032 min \u001a x2n\u22172 min \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, xn\u2217 min \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u001b + C max \u0012 1 \u03c4 4, 1 \u03c4 2 \u0013\u0015 \u2264C exp \u0014 \u2212c\u2032 min \u001a x2n\u22172 min \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, xn\u2217 min \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u001b\u0015 . (37) In the last inequality above, we used a fact that \u03c4 is lower bounded by a uniform constant. This is due to Assumption 2.2 that E(Z) = 0, var(Z) = 1, E exp(tZ) \u2264 exp(t2\u03c4 2/2). Then, exp(4\u03c4 2/2) \u22651 2 {E exp(2Z) + E exp(\u22122Z)} = \u221e X k=0 22kEZ2k (2k)! \u22652EZ2 = 2, which implies \u03c4 2 \u22651 2 ln(2). 32 \f3. It is easy to see that E \u02d8 \u03a3\u2217= \u03a3, so Ea\u22a4( \u02d8 \u03a3\u2217\u2212\u03a3)b = 0. Then a\u22a4( \u02d8 \u03a3\u2217\u2212\u03a3)b = p X i,j=1 aibj 1 n\u2217 ij n X k=1 XikXjkSikSjk ! \u2212E p X i,j=1 aibj 1 n\u2217 ij n X k=1 XikXjkSikSjk ! 
= n X k=1 p X i,j=1 \u0012aibjSikSjk n\u2217 ij XikXjk \u2212EaibjSikSjk n\u2217 ij XikXjk \u0013 \u225c n X k=1 \u0000X\u22a4 k CkXk \u2212EX\u22a4 k CkXk \u0001 = n X k=1 \u0000Z\u22a4 k \u0393\u22a4Ck\u0393Zk \u2212EZ\u22a4 k \u0393\u22a4Ck\u0393Zk \u0001 . (38) Here Ck \u2208Rp\u00d7p is a matrix such that Ck ij = aibjSikSjk/n\u2217 ij. Note that Ck is supported on A \u00d7 B, we can prove the following properties of Ck. \u2225\u0393\u22a4Ck\u0393\u2225F = q tr \u0000Ck\u0393\u0393\u22a4Ck\u22a4\u0393\u0393\u22a4\u0001 = q tr \u0000Ck A\u00d7B\u0393B\u00d7[q]\u0393\u22a4 B\u00d7[q]Ck\u22a4 A\u00d7B\u0393A\u00d7[q]\u0393\u22a4 A\u00d7[q] \u0001 \u2264\u2225\u0393B\u00d7[q]\u2225\u2225\u0393A\u00d7[q]\u2225 p tr(CkCk\u22a4) \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2p tr(CkCk\u22a4); (39) n X k=1 \u2225\u0393\u22a4Ck\u0393\u22252 F \u2264\u2225\u03a3A\u2225\u2225\u03a3B\u2225\u2225Ck\u22252 F = \u2225\u03a3A\u2225\u2225\u03a3B\u2225 n X k=1 p X i,j=1 \u0012aibjSikSjk n\u2217 ij \u00132 =\u2225\u03a3A\u2225\u2225\u03a3B\u2225 n X k=1 p X i,j=1 SikSjka2 i b2 j n\u22172 ij = \u2225\u03a3A\u2225\u2225\u03a3B\u2225 p X i,j=1 a2 i b2 j n\u2217 ij \u2264\u2225\u03a3A\u2225\u2225\u03a3B\u2225\u2225a\u22252 2\u2225b\u22252 2 n\u2217 min = \u2225\u03a3A\u2225\u2225\u03a3B\u2225 n\u2217 min ; (40) \u2225\u0393\u22a4Ck\u0393\u2225\u2264\u2225\u0393\u22a4Ck\u0393\u2225F \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2p tr(CkCk\u22a4) \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 v u u t p X i,j=1 \u0012aibjSikSjk n\u2217 ij \u00132 \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 v u u t p X i,j=1 a2 i b2 j n\u22172 min \u2264 p \u2225\u03a3A\u2225\u2225\u03a3B\u2225 n\u2217 min . (41) 33 \fNow, note that the last line of (38) can be also equivalently written as vec(Z)\u22a4Cconvec(Z)\u22a4\u2212Evec(Z)\u22a4Cconvec(Z)\u22a4, Ccon = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 \u0393\u22a4C1\u0393 ... 
\u0393\u22a4Cn\u0393 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb\u2208R(qn)\u00d7(qn), where vec(Z) is the vectorization of Z, which is an qn-dimensional i.i.d. sub-Gaussian vector. Based on the properties of Ck above, we have \u2225Ccon\u22252 F = n X k=1 \u2225\u0393\u22a4Ck\u0393\u22252 F (40) \u2264\u2225\u03a3A\u2225\u2225\u03a3B\u2225 n\u2217 min , \u2225Ccon\u2225\u2264max 1\u2264k\u2264n \u2225\u0393\u22a4Ck\u0393\u2225 (41) \u2264(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 n\u2217 min . Now applying Hanson-Wright\u2019s inequality (Theorem 1.1 in Rudelson and Vershynin [25]), we have Pr \b\f \fvec(Zk)\u22a4Cconvec(Zk) \u2212Evec(Zk)\u22a4Cconvec(Zk) \f \f \u2265x \t \u22642 exp \u001a \u2212c min \u0012 x2 \u03c4 4\u2225Ccon\u22252 F , x \u03c4 2\u2225Ccon\u2225 \u0013\u001b \u22642 exp \u001a \u2212cn\u2217 min min \u0012 x2 \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, x \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u0013\u001b . (42) Thus, Pr n\f \f \fa\u22a4( \u02d8 \u03a3\u2217\u2212\u03a3)b \f \f \f \u2265x o \u22642 exp \u001a \u2212cn\u2217 min min \u0012 x2 \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, x \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u0013\u001b . (43) Here c is a uniform constant. Combining (43) and (37), we have (29). 4. Next, we use the \u03b5-net technique to give the bound on \u2225\u02c6 \u03a3\u2217 A\u00d7B \u2212\u03a3A\u00d7B\u2225. Denote D\u2217= \u02c6 \u03a3\u2217 A\u00d7B \u2212\u03a3A\u00d7B. Suppose SA 1/3 is the (1/3)-net for all unit vectors in R|A|; similarly SB 1/3 is the (1/3)-net for all unit vectors in R|B|. Based on the proof of Lemma 3 in Cai et al. 34 \f[9], we can let Card(SA 1/3) \u22647k, Card(SB 1/3) \u22647k. 
Since for all a, a0 \u2208R|A|, b, b0 \u2208R|B|, \f \fa\u22a4D\u2217b \f \f \u2212 \f \fa\u22a4 0 D\u2217b0 \f \f \u2264 \f \fa\u22a4D\u2217b \u2212a\u22a4 0 D\u2217b0 \f \f \u2264 \f \f(a \u2212a0)\u22a4D\u2217b \f \f + \f \fa\u22a4 0 D\u2217(b \u2212b0) \f \f \u2264(\u2225a \u2212a0\u22252 + \u2225b \u2212b0\u22252) \u2225D\u2217\u2225, (44) we have for all a \u2208R|A|, b \u2208R|B|, \u2225a\u22252 = \u2225b\u22252 = 1, we can \ufb01nd a0 \u2208SA 1/3, b0 \u2208SB 1/3 such that \u2225a0 \u2212a\u22252 \u22641/3, \u2225b0 \u2212b\u22252 \u22641/3, then |a\u22a4D\u2217b| \u2264|a\u22a4 0 D\u2217b0| + 2 3\u2225D\u2217\u2225\u2264 sup a0\u2208SA 1/3,b0\u2208SB 1/3 |a\u22a4 0 D\u2217b0| + 2 3\u2225D\u2217\u2225, \u2225D\u2217\u2225= sup a\u2208R|A|,b\u2208R|B|, \u2225a\u22252=\u2225b\u22252=1 |a\u22a4D\u2217b| \u2264 sup a0\u2208SA 1/3,b0\u2208SB 1/3 |a\u22a4 0 D\u2217b0| + 2 3\u2225D\u2217\u2225, which yields \u2225\u02c6 \u03a3\u2217 A\u00d7B \u2212\u03a3A\u00d7B\u2225= \u2225D\u2217\u2225\u22643 sup a0\u2208SA 1/3,b0\u2208SB 1/3 |a\u22a4 0 D\u2217b0|. (45) Finally, by combining (29) and the inequality above, we know there exist uniform constants C1, c > 0 such that for all t > 0, Pr \u0010 \u2225\u02c6 \u03a3\u2217 A\u00d7B \u2212\u03a3A\u00d7B\u2225\u2265x \u0011 \u2264Pr \uf8eb \uf8ed sup a0\u2208SA 1/3,b0\u2208SB 1/3 |a\u22a4 0 D\u2217b0| \u2265x 3 \uf8f6 \uf8f8 \u2264C1(7)|A|+|B| exp \u0014 \u2212cn\u2217 min min \u001a x2 \u03c4 4\u2225\u03a3A\u2225\u2225\u03a3B\u2225, x \u03c4 2(\u2225\u03a3A\u2225\u2225\u03a3B\u2225)1/2 \u001b\u0015 . (46) Since |A| + |B| \u22642|A \u222aB|, we have \ufb01nished the proof of Lemma 2.1. \u25a1 Proof of Lemma 6.1. Since \u03a3 is positive semi-de\ufb01nite, we can \ufb01nd the Cholesky decomposition such that \u03a3 = VV\u22a4. 
Then \u03a3A\u00d7B = VA\u00d7[p]V\u22a4 B\u00d7[p] and \u2225\u03a3A\u00d7B\u2225= max x\u2208R|A|,y\u2208R|B| \u2225x\u22252=\u2225y\u22252=1 x\u22a4VA\u00d7[p]V\u22a4 B\u00d7[p]y \u2264 max x\u2208R|A|,y\u2208R|B| \u2225x\u22252=\u2225y\u22252=1 \u0000x\u22a4VA\u00d7[p]V\u22a4 A\u00d7[p]x \u00011/2 \u0000y\u22a4VB\u00d7[p]V\u22a4 B\u00d7[p]y \u00011/2 = max x\u2208R|A|,y\u2208R|B| \u2225x\u22252=\u2225y\u22252=1 \u0000x\u22a4\u03a3Ax \u00011/2 \u0000y\u22a4\u03a3By \u00011/2 = \u2225\u03a3A\u2225\u2225\u03a3B\u2225. Here we have used the Cauchy-Schwarz inequality. \u25a1 35 \f6.2 Proof of Theorem 2.1 De\ufb01ne B = (bij)1\u2264i,j\u2264p such that bij = \u03c3ij if i \u2208Is, j \u2208Is\u2032 and |s \u2212s\u2032| \u22641, and 0 otherwise. Let \u2206= \u03a3 \u2212B. Then \u2225\u02c6 \u03a3bt \u2212\u03a3\u2225\u2264\u2225\u02c6 \u03a3bt \u2212B\u2225+ \u2225\u2206\u2225. It is easy to see that \u2225\u2206\u2225\u2264\u2225\u2206\u2225\u21131 \u2264max i X j:|i\u2212j|\u2265k |\u03c3ij| \u2264Mk\u2212\u03b1. To bound \u2225\u02c6 \u03a3bt \u2212B\u2225, note that \u2225\u02c6 \u03a3bt \u2212B\u2225= sup u\u2208Rp:\u2225u\u22252=1 \f \f \f\u27e8u, ( \u02c6 \u03a3bt \u2212B)u\u27e9 \f \f \f . For any u \u2208Rp, \u2225u\u22252 = 1, we have \f \f \f\u27e8u, ( \u02c6 \u03a3bt \u2212B)u\u27e9 \f \f \f \u2264 X s,s\u2032:|s\u2212s\u2032|\u22641 \f \f \f D uIs, ( \u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032)uIs\u2032 E\f \f \f \u2264 X s,s\u2032:|s\u2212s\u2032|\u22641 \u2225uIs\u22252\u2225uIs\u2032\u22252\u2225\u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032\u2225 \u2264 \uf8eb \uf8ed X s,s\u2032:|s\u2212s\u2032|\u22641 \u2225uIs\u22252\u2225uIs\u2032\u22252 \uf8f6 \uf8f8 \u0012 max |s\u2212s\u2032|\u22641 \u2225\u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032\u2225 \u0013 . 
The Cauchy-Schwarz inequality yields X s,s\u2032:|s\u2212s\u2032|\u22641 \u2225uIs\u22252\u2225uIs\u2032\u22252 \u22641 2 X s,s\u2032:|s\u2212s\u2032|\u22641 \u0000\u2225uIs\u22252 2 + \u2225uIs\u2032\u22252 2 \u0001 \u22643 N X s=1 \u2225uIs\u22252 2 = 3. (47) Therefore, \u2225\u02c6 \u03a3bt \u2212\u03a3\u2225\u2264\u2225\u02c6 \u03a3\u2217\u2212B\u2225+ \u2225\u2206\u2225\u22643 max |s\u2212s\u2032|\u22641 \r \r \r \u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032 \r \r \r + Mk\u2212\u03b1, which yields E\u2225\u02c6 \u03a3bt \u2212\u03a3\u22252 \u226418E \u0012 max |s\u2212s\u2032|\u22641 \r \r \r \u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032 \r \r \r \u00132 + 2M 2k\u22122\u03b1. 36 \fAccording to lemma 2.1, there exists constant C, c > 0 which only depend on \u03c4 such that for all x > 0, Pr \u0012 max |s\u2212s\u2032|\u22641 \u2225\u02c6 \u03a3Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032\u2225\u2265x \u0013 \u2264C\u2308p k\u2309(49)k exp \u001a \u2212cn\u2217 min min \u0012 x2 \u2225\u03a3\u22252, x \u2225\u03a3\u2225 \u0013\u001b . (48) Now we set t = C\u2032(k + ln p)/n\u2217 min for C\u2032 large enough. 
The spectral norm risk satis\ufb01es E\u2225\u02c6 \u03a3bt \u2212\u03a3\u22252 \u226418E max |s\u2212s\u2032|\u22641 \r \r \r \u02c6 \u03a3\u2217 Is\u00d7Is\u2032 \r \r \r + 2M 2k\u22122\u03b1 \u226418 Z \u221e 0 Pr \u0012 max |s\u2212s\u2032|\u22641 \u2225\u02c6 \u03a3Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032\u22252 \u2265x \u0013 dx + 2M 2k\u22122\u03b1 \u226418t + 18 Z \u221e t Pr \u0012 max |s\u2212s\u2032|\u22641 \u2225\u02c6 \u03a3Is\u00d7Is\u2032 \u2212\u03a3Is\u00d7Is\u2032\u22252 \u2265x \u0013 dx + 2M 2k\u22122\u03b1 \u226418t + C\u2308p k\u2309(49)k Z \u221e t exp n \u2212c\u2032n\u2217 min min \u0010 x, x 1 2 \u0011o dx + 2M 2k\u22122\u03b1 \u226418t + C\u2308p k\u2309(49)k 1 n\u2217 min exp (\u2212c\u2032n\u2217 mint) + 2M 2k\u22122\u03b1, (49) then (49) yields E\u2225\u02c6 \u03a3bt \u2212\u03a3\u22252 \u2264C \u0012k + ln p n\u2217 min + k\u22122\u03b1 \u0013 , (50) where C only depends on \u03c4, M, M0. We can \ufb01nally \ufb01nish the proof of Theorem 2.1 by taking k = (n\u2217 min)1/(2\u03b1+1). \u25a1 Acknowledgments We thank Tianxi Cai for the ovarian cancer data set and for helpful discussions. We also thank the Editor, the Associate editor, one referee and Zoe Russek for useful comments which have helped to improve the presentation of the paper." + }, + { + "url": "http://arxiv.org/abs/1604.06474v1", + "title": "On Detection and Structural Reconstruction of Small-World Random Networks", + "abstract": "In this paper, we study detection and fast reconstruction of the celebrated\nWatts-Strogatz (WS) small-world random graph model \\citep{watts1998collective}\nwhich aims to describe real-world complex networks that exhibit both high\nclustering and short average length properties. The WS model with neighborhood\nsize $k$ and rewiring probability probability $\\beta$ can be viewed as a\ncontinuous interpolation between a deterministic ring lattice graph and the\nErd\\H{o}s-R\\'{e}nyi random graph. 
We study both the computational and\nstatistical aspects of detecting the deterministic ring lattice structure (or\nlocal geographical links, strong ties) in the presence of random connections\n(or long range links, weak ties), and for its recovery. The phase diagram in\nterms of $(k,\\beta)$ is partitioned into several regions according to the\ndifficulty of the problem. We propose distinct methods for the various regions.", + "authors": "T. Tony Cai, Tengyuan Liang, Alexander Rakhlin", + "published": "2016-04-21", + "updated": "2016-04-21", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.LG", + "stat.TH" + ], + "main_content": "Introduction The \u201csmall-world\u201d phenomenon aims to describe real-world complex networks that exhibit both high clustering and short average length properties. While most of the pairs of nodes are not friends, any node can be reached from another in a small number of hops. The Watts-Strogatz (WS) model, introduced in (Watts and Strogatz, 1998; Newman and Watts, 1999), is a popular generative model for networks that exhibit the small-world phenomenon. The WS model interpolates between the two extremes\u2014the regular lattice graph on the one hand, and the random graph on the other. Considerable effort has been spent on studying the asymptotic statistical behavior (degree distribution, average path length, clustering coef\ufb01cient, etc.) and the empirical performance of the WS model (Watts, 1999; Amaral et al., 2000; Barrat and Weigt, 2000; Latora and Marchiori, 2001; Van Der Hofstad, 2009). Successful applications of the WS model have been found in a range of disciplines, such as psychology (Milgram, 1967), epidemiology (Moore and Newman, 2000), medicine and health (Stam et al., 2007), to name a few. 
In one of the \ufb01rst algorithmic studies of the small-world networks, Kleinberg (2000) investigated the theoretical dif\ufb01culty of \ufb01nding the shortest path between any two nodes when one is restricted to use local algorithms, and further related the small-world notion to long range percolation on graphs (Benjamini and Berger, 2000; Coppersmith et al., 2002). The focus of the present paper is on statistical and computational aspects of the detection and recovery problems. Given a network, the \ufb01rst statistical and computational challenge is to detect whether it enjoys the small world property, or whether the observation may be explained by the Erd\u02dd os-R\u00e9nyi random graph (the null hypothesis). The second question is concerned with the reconstruction of the neighborhood structure if the network does exhibit the small world phenomenon. In the language of social network analysis, the detection problem corresponds to detecting the existence of local geographical links (or 1 arXiv:1604.06474v1 [math.ST] 21 Apr 2016 \fclose friend connections, strong ties) in the presence of long range links (or random connections, weak ties). The reconstruction problem corresponds to distinguishing between these local links and long range links. The problem is statistically and computationally dif\ufb01cult due to the high-dimensional unobserved latent variable\u2014the permutation matrix\u2014 which blurs the natural ordering of the ring structure. Let us parametrize the WS model in the following way: the number of nodes is denoted by n, the neighborhood size by k, and the rewiring probability by \u03b2. Provided the adjacency matrix A \u2208Rn\u00d7n, we are interested in identifying the choices of (n,k,\u03b2) when detection and reconstruction of the small world random graph is possible. Speci\ufb01cally, we focus on the following two questions. 
Detection Given the adjacency matrix A up to a permutation, when (in terms of n,k,\u03b2) and how (in terms of procedures) can one statistically distinguish whether it is a small world graph (\u03b2 < 1), or a usual random graph with matching degree (\u03b2 = 1). What if we restrict our attention to computationally ef\ufb01cient procedures? Reconstruction Once the presence of the neighborhood structure is con\ufb01rmed, when (in terms of n,k,\u03b2) and how (in terms of procedures) can one estimate the deterministic neighborhood structure? If one only aims to estimate the structure consistently (asymptotically correct), are there computationally ef\ufb01cient procedures, and what are their limitations? We address the above questions by presenting a phase diagram in Figure 1. The phase diagram divides the parameter space into four disjoint regions according to the dif\ufb01culty of the problem. We propose distinct methods for the regions where solutions are possible. 1.1 Why Small World Graph? Finding and analyzing the appropriate statistical models for real-world complex networks is one of the main themes in network science. Many real empirical networks\u2014for example, internet architecture, social networks, and biochemical pathways\u2014exhibit two features simultaneously: high clustering among individual nodes and short distance between any two nodes. Consider the local tree rooted at a person. The high clustering property suggests prevalent existence of triadic closure, which signi\ufb01cantly reduces the number of reachable people within a certain depth (in contrast to the regular tree case where this number grows exponentially with the depth), contradicting the short average length property. In a pathbreaking paper, Watts and Strogatz (1998) provided a mathematical model that resolves the above seemingly contradictory notions. The solution is surprisingly simple \u2014 interpolating between structural ring lattice graph and a random graph. 
The ring lattice provides the strong ties (i.e., homophily, connection to people who are similar to us) and triadic closure, while the random graph generates the weak ties (connection to people who are otherwise far away), preserving the local-regular-branching-tree-like structure that induces short paths between pairs. Given the small world model, it is natural to ask the statistical question of distinguishing the local links (geographical) and long range links (non-geographical) based on the observed graph. 1.2 Rewiring Model Let us now define the WS model. Consider a ring lattice with n nodes, where each node is connected with its k nearest neighbors (k/2 on the left and k/2 on the right, k even for convenience). The rewiring process contains two procedures: erase and reconnect. First, erase each currently connected edge with probability β, independently. Next, reconnect each edge pair with probability βk/(n−1), allowing multiplicity.¹ The observed symmetric adjacency matrix A ∈ {0,1}^{n×n} has the following structure under some unobserved permutation matrix P_π ∈ {0,1}^{n×n}. For 1 ≤ i < j ≤ n, [P_π A P_π^T]_{ij} = 1 with probability 1 − β(1 − βk/(n−1)) if 0 < |i − j| ≤ k/2 (mod n − 1 − k/2), and with probability βk/(n−1) otherwise, with entries independent of each other. Equivalently, we have for 1 ≤ i < j ≤ n, A_{ij} = κ([P_π B P_π^T]_{ij}), (1) where κ(·) is the entry-wise i.i.d. Markov channel with κ(0) ∼ Bernoulli(βk/(n−1)) and κ(1) ∼ Bernoulli(1 − β(1 − βk/(n−1))), and B ∈ {0,1}^{n×n} indicates the support of the structural ring lattice: B_{ij} = 1 if 0 < |i − j| ≤ k/2 (mod n − 1 − k/2), and B_{ij} = 0 otherwise.
(2) We denote by WS(n,k,\u03b2) the distribution of the random graph generated from the rewiring model, and denote by ER(n, k n\u22121) the Erd\u02dd os-R\u00e9nyi random graph distribution (with matching average degree k). Remark that if \u03b2 = 1, the small world graph WS(n,k,\u03b2) reduces to ER(n, k n\u22121), with no neighborhood structure. In contrast, if \u03b2 = 0, the small world graph WS(n,k,\u03b2) corresponds to the deterministic ring lattice, without random connections. We focus on the dependence of the gap 1 \u2212\u03b2 = o(1) on n and k, such that distinguishing between WS(n,k,\u03b2) and ER(n, k n\u22121) or reconstructing the ring lattice structure is statistically and computationally possible. 1.3 Summary of Results The main theoretical and algorithmic results are summarized in this section. We \ufb01rst introduce several regions in terms of (n,k,\u03b2), according to the dif\ufb01culty of the problem instance, and then we present the results using the phase diagram in Figure 1. Except for the impossible region, we will introduce different algorithms with distinct computational properties. Impossible region: 1 \u2212\u03b2 \u227a q logn n \u2228logn k . Inside this region no multiple testing procedure (regardless of computational budget) can succeed in distinguishing among the class of models including all of WS(n,k,\u03b2) and ER(n, k n\u22121) with vanishing error. Hard region: q logn n \u2228logn k \u2aaf1\u2212\u03b2 \u227a q 1 k \u2228 p logn k . It is possible to detect between WS(n,k,\u03b2) and ER(n, k n\u22121) statistically with vanishing error; however the evaluation of the test statistic (5) (below) requires exponential time complexity (to the best of our knowledge). Easy region: q 1 k \u2228 p logn k \u2aaf1\u2212\u03b2 \u2aaf rq logn n \u2228logn k . 
There exists an efficient spectral test that can distinguish between the small world random graph WS(n,k,β) and the Erdős-Rényi graph ER(n, k/(n−1)) in near linear time (in the matrix size). ¹The original rewiring process in Watts and Strogatz (1998) does not allow multiplicity; however, for the simplicity of technical analysis, we focus on reconnection allowing multiplicity. These two rewiring processes are asymptotically equivalent. Reconstructable region: (log n/n ∨ log n/k)^{1/4} ≺ 1 − β ⪯ 1. In this region, not only is it possible to detect the existence of the lattice structure in a small-world graph, but it is also possible computationally to consistently estimate/reconstruct the neighborhood structure via a novel correlation thresholding procedure. The following phase diagram provides an intuitive illustration of the above theoretical results. If we parametrize k ≍ n^x, 0 < x < 1 and 1 − β ≍ n^{−y}, 0 < y < 1, each point (x, y) ∈ [0,1]² corresponds to a particular problem instance with parameter bundle (n, k = n^x, β = 1 − n^{−y}). According to the location of (x, y), the difficulty of the problem changes; for instance, the larger the x and the smaller the y is, the easier the problem becomes. The various above regions (plotted in [0,1]²) are: impossible region (red region I), hard region (blue region II), easy region (green region III), reconstructable region (cyan region IV). Figure 1: Phase diagram for small-world network: impossible region (red region I), hard region (blue region II), easy region (green region III), and reconstructable region (cyan region IV). 1.4 Notation A, B, Z ∈ R^{n×n} denote symmetric matrices: A is the adjacency matrix, B is the structural signal matrix as in Equation (2), and Z = A − EA is the noise matrix. We denote the matrix of all ones by J.
Notations \u2aaf, \u2ab0, \u227a, \u227bdenote the asymptotic order: a(n) \u2aafb(n) if and only if limsup n\u2192\u221e a(n) b(n) \u2264c, with some constant c > 0, a(n) \u227ab(n) if and only if limsup n\u2192\u221e a(n) b(n) = 0. C,C \u2032 > 0 are universal constants that may change from line to line. For a symmetric matrix A, \u03bbi(A),1 \u2264i \u2264n denotes the ranked eigenvalues, in a decreasing order. The inner-product \u2329A,B\u232a= tr(AT B) overloads the usual Euclidian inner-product and matrix inner-product. For any integer n, [n] := {0,1,...,n \u22121} denotes the index set. Denote the permutation in symmetric group \u03c0 \u2208Sn and its associated matrix form as P\u03c0. For a graph G(V,E) generated from the Watts-Strogatz model WS(n,k,\u03b2) with associated permuta4 \ftion \u03c0, for each node vi \u2208V,1 \u2264i \u2264|V |, we denote N (vi) := \u00bd v j : 0 < |\u03c0\u22121(i)\u2212\u03c0\u22121(j)| \u2264k 2 mod n \u22121\u2212k 2 \u00be as the ring neighborhood nodes with respect to node vi, before the permutation \u03c0 applied. 1.5 Organization of the Paper The following sections are dedicated to the theoretical justi\ufb01cation of the various regions in Section 1.3. Speci\ufb01cally, Section 2 establishes the boundary for the impossible region I, where the problem is impossible to solve information theoretically. We contrast the hard region II with the regions III and IV in Section 3; here, the difference arises in statistical and computational aspects of detecting the strong tie structure inside random graph. Section 4 studies a correlation thresholding algorithm that reconstructs the neighborhood structure consistently when the parameters lie within the reconstructable region IV. We also study a spectral ordering algorithm which succeeds in reconstruction in a part of region III. Whether the remaining part of region III admits a recovery procedure is an open problem. Additional further directions are listed in Section 5. 
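The rewiring model of Section 1.2 is straightforward to simulate, which is handy for checking the detection and reconstruction procedures of the later sections on synthetic data. The sketch below is our illustration (NumPy assumed), not code from the paper; for simplicity it samples edges independently with the marginal probabilities of Equation (1), i.e., it pushes the signal matrix B of Equation (2) through the Markov channel κ rather than mimicking the erase/reconnect mechanics. The function names are ours.

```python
import numpy as np

def ring_lattice(n, k):
    """Structural signal matrix B of Equation (2): node i is linked to the
    k/2 nearest nodes on each side of the ring (k even)."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                      # circular ring distance
    return ((d > 0) & (d <= k // 2)).astype(int)

def sample_ws(n, k, beta, seed=0):
    """Draw A ~ WS(n, k, beta) via the marginals of Equation (1): a lattice
    edge is present with probability 1 - beta*(1 - beta*k/(n-1)); any other
    pair is connected with probability beta*k/(n-1).  beta = 1 recovers
    ER(n, k/(n-1)); beta = 0 gives the deterministic ring lattice."""
    rng = np.random.default_rng(seed)
    B = ring_lattice(n, k)
    p = np.where(B == 1,
                 1 - beta * (1 - beta * k / (n - 1)),
                 beta * k / (n - 1))
    A = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return A + A.T, B                             # symmetric, zero diagonal

A, B = sample_ws(100, 8, 0.2)
```

At the two extremes of β this degenerates as the text describes: β = 0 returns the lattice itself, and β = 1 returns a matched-degree Erdős-Rényi graph.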
2 The Impossible Region: Lower Bounds We start with an information theoretic result that describes the dif\ufb01culty of distinguishing among a class of models. The following Theorem 1 characterizes the impossible region, as in Section 1.3, in the language of minimax multiple testing error. The proof is postponed to Section 6. Theorem 1 (Impossible Region). Consider the following statistical models: P0 denotes the probability measure of the Erd\u02dd os-R\u00e9nyi random graph ER(n, k n\u22121), and P\u03c0,\u03c0 \u2208Sn\u22121 denote the probability measures of the Watts-Strogatz small-world graph WS(n,k,\u03b2) as in Equation (1) with different permutations \u03c0. Consider any selector \u03c6 : {0,1}n\u00d7n \u2192Sn\u22121 \u222a{0} that maps from the adjacency matrix A \u2208{0,1}n\u00d7n to a decision in Sn\u22121 \u222a{0}. Then for any \ufb01xed 0 < \u03b1 < 1/8, the following lower bound on multiple testing error holds: lim n\u2192\u221e min \u03c6 max ( P0(\u03c6 \u0338= 0), 1 (n \u22121)! X \u03c0\u2208Sn\u22121 P\u03c0(\u03c6 \u0338= \u03c0) ) \u22651\u22122\u03b1, when the parameters satisfy 1\u2212\u03b2 \u2264C\u03b1 \u00b7 s logn n or 1\u2212\u03b2 \u2264C \u2032 \u03b1 \u00b7 logn k \u00b7 1 log n logn k2 , with constants C\u03b1,C \u2032 \u03b1 that only depend on \u03b1. In other words, if 1\u2212\u03b2 \u227a s logn n \u2228logn k , no multiple testing procedure can succeed in distinguishing, with vanishing error, the class of models containing all of WS(n,k,\u03b2) and ER(n, k n\u22121). 5 \fThe missing latent random variable, the permutation matrix P\u03c0, is the object we are interested in recovering. A permutation matrix P\u03c0 induces a certain distribution on the adjacency matrix A. Thus the parameter space of interest, including models WS(n,k,\u03b2) and ER(n, k n\u22121), is of cardinality (n \u22121)! + 1. 
Based on the observed adjacency matrix, distinguishing among the models including all of WS(n,k,\u03b2) and ER(n, k n\u22121) is equivalent to a multiple testing problem. The impossible region characterizes the information theoretic dif\ufb01culty of this reconstruction problem by establishing the condition when minimax testing error does not vanish as n,k(n) \u2192\u221e. The \u201chigh dimensional\u201d nature of this problem is mainly driven by the unknown permutation matrix, and this latent structure introduces dif\ufb01culty both statistically and computationally. Statistically, via Le Cam\u2019s method, one can build a distance metric on permutation matrices using the distance between the corresponding measures (measures on adjacency matrices induced by the permutation structure). In order to characterize the intrinsic dif\ufb01culty of estimating the permutation structure, one needs to understand the richness of the set of permutation matrices within certain distance to one particular element, a combinatorial task. Computationally, the combinatorial nature of the problem makes the \u201cnaive\u201d approach computationally intensive. 3 Hard v.s. Easy Regions: Detection Statistics This section studies the hard and easy regions in Section 1.3. First, we propose a near optimal test, the maximum likelihood test, that detects the ring structure above the information boundary derived in Theorem 1. However, the evaluation of the maximum likelihood test requires O(nn) time complexity. The maximum likelihood test succeeds outside of region I, and, in particular, succeeds (statistically) in the hard region II. We then propose another ef\ufb01cient test, the spectral test, that detects the ring structure in time O\u2217(n2) via the power method. The method succeeds in regions III and IV. Theorem 2 below combines the results of Lemma 1 and Lemma 2. Theorem 2 (Detection: Easy and Hard Boundaries). 
Consider the following statistical models: P₀ denotes the distribution of the Erdős-Rényi random graph ER(n, k/(n−1)), and P_π, π ∈ S_{n−1}, denote the distributions of the Watts-Strogatz small-world graph WS(n,k,β) with hidden permutation π. Consider any selector φ : {0,1}^{n×n} → {0,1} that maps an adjacency matrix to a binary decision (detection decision). We say that minimax detection for the small-world random model is possible when lim_{n→∞} min_φ max{ P₀(φ ≠ 0), (1/(n−1)!) Σ_{π∈S_{n−1}} P_π(φ ≠ 1) } = 0. (3) If the parameter (n,k,β) satisfies the hard boundary: 1 − β ⪰ √(log n/n) ∨ (log n/k), minimax detection is possible, and an exponential time maximum likelihood test (5) ensures (3). If, in addition, the parameter (n,k,β) satisfies the easy boundary: 1 − β ⪰ √(1/k) ∨ √(log n)/k, then a near-linear time spectral test (7) ensures (3). The proof of Theorem 2 consists of two parts, which will be addressed in the following two sections, respectively. 3.1 Maximum Likelihood Test Consider the test statistic T₁ given by the objective value of the following optimization: T₁(A) := max_{P_π} ⟨P_π B P_π^T, A⟩, (4) where P_π ∈ {0,1}^{n×n} is taken over all permutation matrices and A is the observed adjacency matrix. The maximum likelihood test φ₁ : A → {0,1} based on T₁ is given by φ₁(A) = 1 if T₁(A) ≥ (k/(n−1))·nk + 2√((k/(n−1))·nk·log n!) + (2/3)·log n!, and φ₁(A) = 0 otherwise. (5) The threshold is chosen at the rate k² + O(√(k²n log(n/e)) ∨ n log(n/e)): if the objective value is of a greater order, then we believe the graph is generated from the small-world rewiring process with strong ties; otherwise we cannot reject the null, the random graph model with only weak ties. Lemma 1 (Guarantee for Maximum Likelihood Test).
The maximum likelihood test \u03c61 in Equation (5) succeeds in detecting the small world random structure when 1\u2212\u03b2 \u2ab0 s logn n \u2228logn k , in the sense that lim n,k(n)\u2192\u221emax ( P0(\u03c61 \u0338= 0), 1 (n \u22121)! X \u03c0\u2208Sn\u22121 P\u03c0(\u03c61 \u0338= 1) ) = 0. Remark 1. Lemma 1 can be viewed as the condition on the signal and noise separation. By solving the combinatorial optimization problem, the test statistics aggregates the signal that separates from the noise the most. An interesting open problem is, if we solve a relaxed version of the combinatorial optimization problem (4) within polynomial time complexity \u03c6rel 1 , how much stronger the condition on 1\u2212\u03b2 needs to be to ensure power. 3.2 Spectral Test For the spectral test, we calculate the second largest eigenvalue of the adjacency matrix A as the test statistic T2(A) := \u03bb2(A). (6) The spectral test \u03c62 : A \u2192{0,1} is \u03c62(A) = \u00bd 1 if T2(A) \u2ab0 p k \u2228 p logn 0 o.w. (7) Namely, if \u03bb2(A) passes a certain threshold, we classify the graph as a small-world graph. Evaluation of (7) only requires near-linear time O\u2217(n2). 7 \fLemma 2 (Guarantee for Spectral Test). The second eigenvalue test \u03c62 in Equation (7) satis\ufb01es lim n,k(n)\u2192\u221emax ( P0(\u03c62 \u0338= 0), 1 (n \u22121)! X \u03c0\u2208Sn\u22121 P\u03c0(\u03c62 \u0338= 1) ) = 0 whenever 1\u2212\u03b2 \u2ab0 r 1 k \u2228 p logn k . The main idea behind Lemma 2 is as follows. Let us look at the expectation of the adjacency matrix, EA = (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u00b7PT \u03c0 BP\u03c0 +\u03b2 k n \u22121 \u00b7(J \u2212I), where J is the matrix of all ones. The main structure matrix PT \u03c0 BP\u03c0 is a permuted version of the circulant matrix (see e.g. (Gray, 2006)). The spectrum of the circulant matrix B is highly structured, and is of distinct nature in comparison to the noise matrix A \u2212EA. 
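Both detection statistics of this section can be prototyped in a few lines. The sketch below is ours, not the authors' code: the spectral statistic T₂(A) = λ₂(A) of Equation (6) is computed by a dense eigendecomposition, and the combinatorial statistic T₁(A) of Equation (4) is evaluated by brute force over all n! permutations, feasible only for toy n, which is exactly the exponential cost noted in the text. The constant c = 2 in the spectral threshold is a hypothetical choice; the paper specifies only the rate √k ∨ √(log n).

```python
import itertools
import numpy as np

def ring_lattice(n, k):
    """Signal matrix B of Equation (2)."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)
    return ((d > 0) & (d <= k // 2)).astype(int)

def t2(A):
    """Spectral statistic of Equation (6): the second largest eigenvalue."""
    return np.sort(np.linalg.eigvalsh(A))[-2]

def spectral_test(A, k, c=2.0):
    """Equation (7): declare small-world structure when lambda_2(A)
    exceeds c * (sqrt(k) + sqrt(log n)); the constant c is our choice."""
    n = A.shape[0]
    return int(t2(A) > c * (np.sqrt(k) + np.sqrt(np.log(n))))

def t1_bruteforce(A, B):
    """Equation (4): max over permutations of <P B P^T, A>, by full
    enumeration of S_n -- exponential time, toy sizes only."""
    n = A.shape[0]
    I = np.eye(n, dtype=int)
    return max(int(((I[list(pi)] @ B @ I[list(pi)].T) * A).sum())
               for pi in itertools.permutations(range(n)))

n, k = 300, 20
lattice = ring_lattice(n, k)                          # beta = 0 extreme
rng = np.random.default_rng(1)
er = np.triu((rng.random((n, n)) < k / (n - 1)).astype(int), 1)
er = er + er.T                                        # ER(n, k/(n-1)) null
flag_lattice, flag_er = spectral_test(lattice, k), spectral_test(er, k)
t1_toy = t1_bruteforce(ring_lattice(6, 2), ring_lattice(6, 2))  # 6! terms
```

On the toy identity-permutation input, T₁ attains ⟨B, B⟩ = nk, the number of ones in B, matching the signal term in the threshold calculation.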
4 Reconstructable Region: Fast Structural Reconstruction In this section, we discuss reconstruction of the ring structure in the Watts-Strogatz model. We show that in the reconstructable region (region IV in Figure 1), a correlation thresholding procedure succeeds in reconstructing the ring neighborhood structure. As a by-product, once the neighborhood structure is known, one can distinguish between random edges and neighborhood edges for each node. A natural question is whether there is another algorithm that can work in a region (beyond region IV) where correlation thresholding fails. We show that in a certain regime with large k, a spectral ordering procedure outperforms the correlation thresholding procedure and succeeds in parts of regions III and IV (as depicted in Figure 2 below). 4.1 Correlation Thresholding Consider the following correlation thresholding procedure for neighborhood reconstruction. Algorithm 1: Correlation Thresholding for Neighborhood Reconstruction. Data: an adjacency matrix A ∈ R^{n×n} for the graph G(V,E). Result: for each node v_i, 1 ≤ i ≤ n, an estimated neighborhood set N̂(v_i). 1. For each node v_i, calculate the correlation ⟨A_i, A_j⟩ for all j ≠ i; 2. Sort {⟨A_i, A_j⟩ : j ∈ [n]\{i}} in decreasing order, and select the largest k ones to form the estimated set N̂(v_i). Output: N̂(v_i) for all i ∈ [n]. The following lemma proves consistency of the above Algorithm 1. Note that the computational complexity is O(n·min{log n, k}) for each node using quick-sort, with a total runtime O*(n²). Lemma 3 (Consistency of Correlation Thresholding). Consider the Watts-Strogatz random graph WS(n,k,β).
Under the reconstructable regime IV (in Figure 1), that is, 1\u2212\u03b2 \u227b s logn k \u2228 \u00b5logn n \u00b61/4 , (8) 8 \fcorrelation thresholding provides a consistent estimate of the neighborhood set N (vi) w.h.p in the sense that lim n,k(n)\u2192\u221emax i\u2208[n] | \u02c6 N (vi)\u25b3N (vi)| |N (vi)| = 0, where \u25b3denotes the symmetric set difference. One interesting question in small-world networks is to distinguish between strong ties (structural edges induced by the ring lattice structure) and weak ties (edges due to random connections). The above lemma addresses this question by providing a consistent estimate of the neighborhood set for each node. The condition under which consistency of correlation thresholding is ensured corresponds to the reconstructable region in Figure 1. One may ask if there is another algorithm that can provide a consistent estimate of the neighborhood set beyond region IV. The answer is yes, and we will show in the following section that under the regime when k is large (for instance, k \u2ab0n 15 16 ), indeed it is possible to slightly improve on Algorithm 1. 4.2 Spectral Ordering Consider the following spectral ordering procedure, which approximately reconstructs the ring lattice structure when k is large, i.e., k \u227bn 7 8 . Algorithm 2: Spectral Reconstruction of Ring Structure Data: An adjacency matrix A \u2208Rn\u00d7n for the graph G(V,E). Result: A ring embedding of the nodes V . 1. Calculate top 3 eigenvectors in the SVD A =U\u03a3U T . Denote second and third eigenvectors as u \u2208Rn and v \u2208Rn, respectively; 2. For each node i and the associated vector A\u00b7i \u2208Rn, calculate the associated angle \u03b8i for vector (uT A\u00b7i,vT A\u00b7i); Output: the sorted sequence {\u03b8i}n i=1 and the corresponding ring embedding of the nodes. For each node vi, \u02c6 N (vi) are the closest k nodes in the ring embedding. 
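Algorithm 1 is a one-liner in matrix form, since ⟨A_i, A_j⟩ is the (i, j) entry of A², the common-neighbour count of i and j. The sketch below, including the toy check on a sample drawn from the marginals of the rewiring model of Equation (1), is ours; the particular instance (n, k, β) = (400, 20, 0.1) is an illustrative choice, not a quantity from the paper.

```python
import numpy as np

def correlation_threshold(A, k):
    """Algorithm 1: estimate each neighbourhood N(v_i) by the k indices j
    maximizing <A_i, A_j> = (A @ A)[i, j] (common-neighbour counts)."""
    C = A @ A
    np.fill_diagonal(C, -1)          # never select the node itself
    order = np.argsort(-C, axis=1, kind="stable")
    return [set(row[:k]) for row in order]

# Toy check on a sample close to the ring lattice (beta small).
n, k, beta = 400, 20, 0.1
rng = np.random.default_rng(0)
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)
B = ((d > 0) & (d <= k // 2)).astype(int)          # signal of Equation (2)
p = np.where(B == 1, 1 - beta * (1 - beta * k / (n - 1)), beta * k / (n - 1))
A = np.triu((rng.random((n, n)) < p).astype(int), 1)
A = A + A.T
est = correlation_threshold(A, k)
truth = [set(np.flatnonzero(B[i])) for i in range(n)]
overlap = float(np.mean([len(est[i] & truth[i]) / k for i in range(n)]))
```

In our toy runs the mean per-node overlap sits a little below 1 even for tiny β, because the two nodes at ring distance exactly k/2 have slightly smaller common-neighbour counts than some nodes just outside the neighborhood; this O(1)-node boundary effect is compatible with Lemma 3, which only requires the symmetric-difference ratio to vanish.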
The following Lemma 4 shows that when k is large, Algorithm 2 also provides consistent reconstruction of the ring lattice. Its computational complexity is O*(n²). Lemma 4 (Guarantee for Spectral Ordering). Consider the Watts-Strogatz graph WS(n,k,β). Assume k is large enough in the following sense: 1 > limsup_{n,k(n)→∞} log k/log n ≥ liminf_{n,k(n)→∞} log k/log n > 7/8. Under the regime 1 − β ≻ n^{3.5}/k⁴, (9) the spectral ordering provides a consistent estimate of the neighborhood set N(v_i) w.h.p. in the sense that lim_{n,k(n)→∞} max_{i∈[n]} |N̂(v_i) △ N(v_i)|/|N(v_i)| = 0, where △ denotes the symmetric set difference. In Lemma 4, we can only prove consistency of spectral ordering under the technical condition that k is large. We do not believe this is merely an artifact of the proof. Even though the structural matrix (the signal) has large eigenvalues, the eigen-gap is not large enough. The spectral ordering succeeds when the spectral gap stands out over the noise level, which implies that k needs to be large enough. Let us compare the region described in Equation (9) with the reconstructable region in Equation (8). We observe that spectral ordering pushes slightly beyond the reconstructable region when k ≻ n^{15/16}, as shown in Figure 2. Figure 2: Phase diagram for small-world networks: impossible region (red region I), hard region (blue region II), easy region (green region III), and reconstructable region (cyan regions IV and IV′). Compared to Figure 1, the spectral ordering procedure extends the reconstructable region (IV) when k ≻ n^{15/16} (IV′).
5 Discussion Reconstructable region We addressed the reconstruction problem via two distinct procedures, correlation thresholding and spectral ordering; however, whether there exist other computationally efficient algorithms that can significantly improve upon the current reconstructable region is still unknown. Designing new algorithms requires a deeper insight into the structure of the small-world model, and will probably shed light on better algorithms for mixed membership models. Comparison to stochastic block model Recently, stochastic block models (SBM) have attracted a considerable amount of attention from researchers in various fields. Community detection in stochastic block models focuses on recovering the hidden community information from an adjacency matrix that contains both noise and the latent permutation. The hidden community structure for the classic SBM is illustrated in Figure 3 (left), as a block diagonal matrix. An interesting but theoretically more challenging extension of the classic SBM is the mixed membership SBM, where each node may simultaneously belong to several communities. The problem becomes more difficult when there are a growing number of communities and when each node belongs to several communities at the same time. Consider one easy case of the mixed membership model, where the mixed membership occurs only within neighboring communities, as shown in the middle image of Figure 3. The small-world network we are investigating in this paper can be seen as an extreme case (shown in the right-most figure) of this easy mixed membership SBM, where each node falls in effectively k local clusters. Figure 3: The structural matrices for the stochastic block model (left), mixed membership SBM (middle), and small-world model (right). The black locations denote the support of the structural matrix.
In the small-world networks, identifying the structural links and random links becomes challenging since that are many local clusters (in constrast to relative small number of communities in SBM). This multitude of local clusters makes it dif\ufb01cult to analyze the effect of the hidden permutation on the structural matrix. We view the current paper as an initial attempt at attacking this problem. 6 Technical Proofs Proof of Theorem 1. Denote the circulant matrix by B (it is B\u03c0 for any \u03c0 \u2208Sn\u22121). The likelihood on X \u2208 Rn\u00d7n for WS model is Ln,k,\u03b2(X |B) = exp ( log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2(1\u2212\u03b2 k n\u22121) \u00b7\u2329X ,B\u232a+log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 \u00b7\u2329X , J \u2212I \u2212B\u232a +nk log(\u03b2(1\u2212\u03b2 k n \u22121))+n(n \u22121\u2212k)log(1\u2212\u03b2 k n \u22121) \u00be = exp (\u00c3 log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2(1\u2212\u03b2 k n\u22121) \u2212log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 ! \u00b7\u2329X ,B\u232a+log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 \u00b7\u2329X , J \u2212I\u232a +nk log(\u03b2(1\u2212\u03b2 k n \u22121))+n(n \u22121\u2212k)log(1\u2212\u03b2 k n \u22121) \u00be . For the Erd\u02dd os-R\u00e9nyi model, the likelihood is Ln,k(X ) = exp ( log k n\u22121 1\u2212 k n\u22121 \u00b7\u2329X , J \u2212I\u232a+n(n \u22121)log(1\u2212 k n \u22121) ) . 11 \fThe Kullback-Leibler divergence between this two model is expressed in the following KL(PB||P0) = EX \u223cPB log PB(X ) P0(X ) = EX \u223cPB ( \u2212 \u00c3 log k n\u22121 1\u2212 k n\u22121 \u2212log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 ! \u00b7\u2329X , J \u2212I\u232a\u2212n(n \u22121)log(1\u2212 k n \u22121) + \u00c3 log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2(1\u2212\u03b2 k n\u22121) \u2212log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 ! 
\u00b7\u2329X ,B\u232a+nk log(\u03b2(1\u2212\u03b2 k n \u22121))+n(n \u22121\u2212k)log(1\u2212\u03b2 k n \u22121) ) = \u2212 \u00c3 log k n\u22121 1\u2212 k n\u22121 \u2212log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 ! \u00b7 \u00bf (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)B +\u03b2 k n \u22121(J \u2212I), J \u2212I \u00c0 + \u00c3 log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2(1\u2212\u03b2 k n\u22121) \u2212log \u03b2 k n\u22121 1\u2212\u03b2 k n\u22121 ! \u00b7 \u00bf (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)B +\u03b2 k n \u22121(J \u2212I),B \u00c0 \u2212n(n \u22121)log(1\u2212 k n \u22121)+nk log(\u03b2(1\u2212\u03b2 k n \u22121))+n(n \u22121\u2212k)log(1\u2212\u03b2 k n \u22121) = n(n \u22121)log 1\u2212\u03b2 k n\u22121 1\u2212 k n\u22121 \u2212nk log 1 \u03b2 \u2212 \" log 1 \u03b2 +log 1\u2212\u03b2 k n\u22121 1\u2212 k n\u22121 # nk \u00b7 1\u2212(1\u2212\u03b2)\u03b2 k n \u22121 \u00b8 + \" log 1 \u03b2 +log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 # nk \u00b7 1\u2212\u03b2(1\u2212\u03b2 k n \u22121) \u00b8 = \u2212log 1 \u03b2 \u00b7nk \u00b7 1+\u03b2\u2212\u03b2 k n \u22121 \u00b8 +log 1\u2212\u03b2 k n\u22121 1\u2212 k n\u22121 n \u00b7 (n \u22121\u2212k)+(1\u2212\u03b2)\u03b2 k2 n \u22121 \u00b8 +log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 \u00b7nk \u00b7 1\u2212\u03b2(1\u2212\u03b2 k n \u22121) \u00b8 . 
(10) Via the inequality log(1+ x) < x for all x > \u22121, we can further simplify the above expression as KL(PB||P0) \u2264nk(1\u2212\u03b2) \u00b7 \u2212\u03b2+\u03b2 k n \u22121 +(1\u2212\u03b2)\u03b2 k2 n(n \u22121\u2212k) \u00b8 + (1\u2212\u03b2)(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 nk \u00b7 (1\u2212\u03b2)+\u03b22 k n \u22121 \u00b8 (11) \u2264nk(1\u2212\u03b2) \u00b7 (1\u2212\u03b2)\u03b2 k n \u22121 +(1\u2212\u03b2)\u03b2 k2 n(n \u22121\u2212k) \u00b8 + (1\u2212\u03b2)2(1\u2212\u03b2 k n\u22121) \u03b2 n(n \u22121) \u2264C \u00b7n2(1\u2212\u03b2)2, (12) where 0 < C < 1 2 k2 n(n\u22121) + 1 \u03b2 is some universal constant (note we are interested in the case when \u03b2 is close to 1). Remark that when k \u2aafn1/2, the above bound can be further strengthened, in the following sense (recall equation (10)) KL(PB||P0) \u2264nk(1\u2212\u03b2) \u00b7 \u2212\u03b2+\u03b2 k n \u22121 +(1\u2212\u03b2)\u03b2 k2 n(n \u22121\u2212k) \u00b8 +log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 \u00b7nk \u00b7 1\u2212\u03b2(1\u2212\u03b2 k n \u22121) \u00b8 \u2264 ( log 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 \u00b7 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) \u03b2 k n\u22121 ) \u00b7k2\u03b2 n n \u22121. 12 \fDenote t := 1\u2212\u03b2(1\u2212\u03b2 k n\u22121 ) \u03b2 k n\u22121 = 1\u2212\u03b2 \u03b2 n\u22121 k +\u03b2. Thus we have KL(PB||P0) \u2264t logt \u00b7k2\u03b2 n n \u22121. (13) Suppose for some constant \u03b1\u2217> 0, and \u03b1 = \u03b1\u2217\u00b7 1 \u03b2(1\u22121 n )2, we have the following t \u2264\u03b1 n log n e k2 \u00b7 1 log\u03b1 n log n e k2 (14) and t logt \u2264\u03b1 n log n e k2 \u00b7 \uf8eb \uf8ed1\u2212 loglog\u03b1 n log n e k2 log\u03b1 n logn k2 \uf8f6 \uf8f8< \u03b1 n log n e k2 . 
(15) Plug in the expression for t into (14), if 1\u2212\u03b2 \u03b2 \u2264\u03b1(1+ 1 n \u22121)\u00b7 log n e k \u00b7 1 log\u03b1 n log n e k2 \u2212 k n \u22121 \u224dlogn k 1 log n log n e k2 (16) we have t \u2264\u03b1 n log n e k2 \u00b7 1 log\u03b1 n log n e k2 \u21d2t logt < \u03b1 n log n e k2 which further implies (via equation (13)) 1 (n \u22121)! X \u03c0\u2208Sn\u22121 KL(PB\u03c0||P0) \u2264t logt \u00b7k2\u03b2 n n \u22121 \u2264\u03b1\u2217\u00b7log(n \u22121)!. Recalling Equation (10), if 1\u2212\u03b2 \u2264 s \u03b1\u2217 C \u00b7 (n \u22121)log n e n2 \u224d s logn n (17) we have 1 (n \u22121)! X \u03c0\u2208Sn\u22121 KL(PB\u03c0||P0) \u2264n2(1\u2212\u03b2)2 \u2264\u03b1\u2217\u00b7log(n \u22121)!. We invoke the following Lemma on minimax error through Kullbak-Leibler divergence. Lemma 5 (Tsybakov (2009), Proposition 2.3). Let P0, P1,..., PM be probability measures on (X ,A ) satisfying 1 M M X j=1 KL(P j||P0) \u2264\u03b1\u00b7logM (18) with 0 < \u03b1 < 1 8. Then for any \u03c8 : X \u2192[M +1] max ( P0(\u03c8 \u0338= 0), 1 M M X j=1 P j(\u03c8 \u0338= j) ) \u2265 p M p M +1 \u00c3 1\u22122\u03b1\u2212 s 2\u03b1 logM ! . 13 \fCollecting Equations (16) and (17), if either one of the conditions in Equations (16) and (17) holds, we have 1 (n \u22121)! X \u03c0\u2208Sn\u22121 KL(PB\u03c0||P0) \u2264\u03b1\u2217\u00b7log(n \u22121)!. (19) Putting things together, if 1\u2212\u03b2 \u227a s logn n \u2228logn k , we have that Equation (19) hold. Applying Lemma 5, we complete the proof lim n\u2192\u221e min \u03c6 max ( P0(\u03c6 \u0338= 0), 1 (n \u22121)! (n\u22121)! X i=1 Pi(\u03c6 \u0338= i) ) \u2265lim n\u2192\u221e p (n \u22121)! 1+ p (n \u22121)! \u00c3 1\u22122\u03b1\u2212 s 2\u03b1 log(n \u22121)! ! = 1\u22122\u03b1. Proof of Lemma 1. Let us state the well-known Bernstein\u2019s inequality (Boucheron et al. (2013), Theorem 2.10), which will be used in the proof of this lemma. Lemma 6 (Bernstein\u2019s inequality). 
Let X1,...,Xn be independent bounded real-valued random variables. Assume that there exist positive numbers v and c such that n X i=1 E[X 2 i ] \u2264v, Xi \u22643c,\u22001 \u2264i \u2264n a.s. then we have, for all t > 0, P \u00c3 n X i=1 (Xi \u2212EXi) \u2265 p 2vt +ct ! \u2264e\u2212t. (20) First, let us consider the case when the adjacency matrix A is generated from the Erd\u02dd os-R\u00e9nyi random graph ER(n, k n\u22121). Recall Bernstein\u2019s inequality Lemma 6, for any P\u03c0 with \u03c0 \u2208Sn\u22121, we know \u2329P\u03c0BPT \u03c0 , A\u232a has the same distribution as \u2329B, A\u232a. Thus \u2329P\u03c0BPT \u03c0 , A\u232a in law = = \u2329B, A\u232a= 2 X i>j Ai jBi j = 2 X i>j E[Ai j]Bi j +2 X i>j (Ai j \u2212E[Ai j])Bi j \u2264 k n \u22121nk +2 s k n \u22121nkt + 2 3t with probability at least 1\u2212exp(\u2212t). Here the last step is through Bernstein\u2019s inequality. There are nk/2 non-zero Bi,j,i > j, and it is clear that Ai j \u223cBernoulli( k n\u22121), 2P i>j E[Ai j]Bi j = nk k n\u22121. Thus we can take c = 1 3 and v = X i 0 such that (1\u2212\u03b2+\u03b22 k n \u22121)nk \u2212 q nk \u00b7logn > T > k n \u22121nk +2 s k n \u22121nk \u00b7logn!+ 2 3 \u00b7logn! (21) we have that lim n,k(n)\u2192\u221emax ( P0(\u03c61 \u0338= 0), 1 (n \u22121)! (n\u22121)! X i=1 Pi(\u03c61 \u0338= 1) ) \u2264 lim n,k(n)\u2192\u221e 1 n = 0. The detailed calculation of Equation (21) yields that the test succeeds with high probability whenever 1\u2212\u03b2 \u2ab0 s logn n \u2228logn k . Proof of Lemma 2. Under the rewiring model (Watts-Strogatz model) WS(n,k,\u03b2) with permutation P\u03c0 P\u03c0APT \u03c0 = (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u00b7B +\u03b2 k n \u22121 \u00b7(J \u2212I)+ Z where J = 11T \u2208Rn\u00d7n, B is the ring structured signal matrix de\ufb01ned in Equation (2). 
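The way Bernstein's inequality (Lemma 6) is applied to the edge statistic above can be sanity-checked numerically. The sketch below (all parameter choices are ours, purely for illustration) draws sums of n independent Bernoulli(k/(n-1)) edge indicators and verifies that deviations beyond sqrt(2vt) + ct occur with frequency at most e^{-t}:

```python
import math
import random

# Illustrative parameters (ours, not the paper's): n terms, success prob k/(n-1).
n, k, t = 2000, 20, 3.0
p = k / (n - 1)
v = n * p       # v >= sum_i E[X_i^2] for Bernoulli(p) terms
c = 1.0 / 3.0   # X_i <= 1 <= 3c, matching the normalization in Lemma 6

threshold = math.sqrt(2 * v * t) + c * t

random.seed(0)
trials = 1000
exceed = 0
for _ in range(trials):
    s = sum(1 for _ in range(n) if random.random() < p)
    if s - n * p >= threshold:
        exceed += 1

# Bernstein's inequality predicts an exceedance frequency of at most e^{-t}.
print(exceed / trials, "<=", math.exp(-t))
```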
We denote in short A = EA + Z as this signal and the noise part, and Bi j = \u00bd 1 if 0 < |i \u2212j| \u2264k 2 mod n \u22121\u2212k 2 0 elsewhere B is a circulant matrix, whose spectrum is highly structured, and Z is a zero-mean noise random matrix. We \ufb01rst study the random \ufb02uctuation part, Z = A \u2212EA. Let us bound the expectation E\u2225A \u2212EA\u2225 as a starting step, for any adjacency matrix A \u2208Rn\u00d7n using the symmetrization trick. Denote A\u2032 \u223cA 15 \fas the independent copy of A sharing the same distribution. Take E,G \u2208Rn\u00d7n as random symmetric Rademacher and Gaussian matrices with entries Ei j, Gi j being, respectively, independent Rademacher and Gaussian. Denoting A \u25e6B as matrix Hadamard product, we have E\u2225A \u2212EA\u2225= E sup \u2225v\u2225\u21132=1 \u2329(A \u2212EA)v,v\u232a= E sup \u2225v\u2225\u21132=1 \u2329(A \u2212EA\u2032 A\u2032)v,v\u232a \u2264EAEA\u2032 sup \u2225v\u2225\u21132=1 \u2329(A \u2212A\u2032)v,v\u232a= EEEAEA\u2032 sup \u2225v\u2225\u21132=1 \u2329[E \u25e6(A \u2212A\u2032)]v,v\u232a \u2264EAEE sup \u2225v\u2225\u21132=1 \u2329[E \u25e6A]v,v\u232a+EA\u2032EE sup \u2225v\u2225\u21132=1 \u2329[\u2212E \u25e6A\u2032]v,v\u232a = 2EAEE sup \u2225v\u2225\u21132=1 \u2329[E \u25e6A]v,v\u232a\u2264 2 p 2/\u03c0 \u00b7EAEE sup \u2225v\u2225\u21132=1 \u2329[EG[|G|]\u25e6E \u25e6A]v,v\u232a \u2264 r\u03c0 2 \u00b7EAEEEG sup \u2225v\u2225\u21132=1 \u2329[|G|\u25e6E \u25e6A]v,v\u232a= r\u03c0 2 \u00b7EAEG sup \u2225v\u2225\u21132=1 \u2329[G \u25e6A]v,v\u232a = r\u03c0 2 \u00b7EA (EG\u2225G \u25e6A\u2225). Recall the following Lemma from Bandeira and van Handel (2014). Lemma 7 (Bandeira and van Handel (2014), Theorem 1.1). Let X be the n \u00d7n symmetric random matrix with X = G \u25e6A, where Gi j,i < j are i.i.d. N(0,1) and Ai j are given scalars. Then EG\u2225X \u2225\u2264max i sX j A2 i j +max i j |Ai j|\u00b7 q logn. 
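A quick numerical illustration of the sqrt(k) v sqrt(log n) scale appearing in this bound (sizes and the constant below are our own choices, not from the proof):

```python
import numpy as np

# Spectral norm of the centered adjacency matrix of ER(n, k/(n-1)),
# compared against the sqrt(k) v sqrt(log n) scale (illustrative sizes).
rng = np.random.default_rng(0)
n, k = 400, 20
p = k / (n - 1)

A = np.triu(rng.random((n, n)) < p, 1).astype(float)
A = A + A.T                                  # symmetric adjacency, zero diagonal

EA = p * (np.ones((n, n)) - np.eye(n))
op_norm = np.linalg.norm(A - EA, 2)          # largest singular value

scale = max(np.sqrt(k), np.sqrt(np.log(n)))
print(op_norm, scale)                        # op_norm is a small constant times scale
```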
Thus via Jensen\u2019s inequality and the above Lemma, we can continue E\u2225A \u2212EA\u2225\u2264 r\u03c0 2 \u00b7EA (EG\u2225G \u25e6A\u2225) \u227eEA max i sX j A2 i j +max i j |Ai j|\u00b7 q logn \u2264 s EA max i X j A2 i j + q logn \u2264 r k +C12 q k logn +C2 logn + q logn \u224d p k \u2228 q logn, where the last step uses Bernstein inequality Lemma 6. Moving from expectation E\u2225A \u2212EA\u2225to concentration on \u2225A\u2212EA\u2225is through Talagrand\u2019s concentration inequality (see, Talagrand (1996) and Tao (2012) Theorem 2.1.13), since \u2225\u00b7\u2225is 1\u2212Lipschitz convex function in our case (and the entries are bounded), thus with probability at least 1\u22121 n , \u2225A \u2212EA\u2225\u2264E\u2225A \u2212EA\u2225+C \u00b7 q logn \u224d p k \u2228 q logn. Now let us study the structural signal part. Matrix B is of the form circulant matrix, the associated polynomial is f (x) = (x + xn\u2212k/2)\u00b7 xk/2 \u22121 x \u22121 . 16 \fThe eigen-structure is: collect for all j = 0,1,...,n/2 (cos0,cos 2\u03c0j n ,cos 2\u03c02j n ,...,cos 2\u03c0n j n ) and (sin0,cos 2\u03c0j n ,sin 2\u03c02j n ,...,sin 2\u03c0n j n ) and the corresponding eigenvalue is \u03bbj = f (w j) = 2 k/2 X i=1 cos \u00b5 i 2\u03c0j n \u00b6 . Let us \ufb01rst assume k n \u22641 2, thus \u03bb is the second largest eigenvalue \u03bb = 2 k/2 X i=1 cos \u00b5 i 2\u03c0 n \u00b6 = 2sin k\u03c0 2n sin \u03c0 n cos (k +2)\u03c0 2n \u224dk. Using Weyl\u2019s interlacing inequality, if there exist a T > 0 such that \u03bb2(AWS) \u2265\u03bb2(E[AWS])\u2212\u2225Z\u2225> T > \u2225Z \u2032\u2225> \u03bb2(AER), where \u03bb2(M)\u2212\u2225Z\u2225\u2265(1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u03bb\u2212 p k \u2228 q logn, \u2225Z \u2032\u2225\u2264 p k \u2228 q logn, then we have lim n,k(n)\u2192\u221emax ( P0(\u03c62 \u0338= 0), 1 (n \u22121)! (n\u22121)! X i=1 Pi(\u03c62 \u0338= 1) ) = 0. 
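The closed form for the second eigenvalue of the ring matrix can be checked directly; the following sketch (sizes are illustrative) builds the circulant matrix B with B_ij = 1 iff the ring distance between i and j is at most k/2 and compares its numerical spectrum with the formula above:

```python
import numpy as np

# The ring matrix B (B_ij = 1 iff ring distance <= k/2) is circulant; its
# second largest eigenvalue should match the closed form
#   2 sin(k*pi/(2n)) / sin(pi/n) * cos((k+2)*pi/(2n)).
# Sizes here are illustrative.
n, k = 40, 8
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
ring_dist = np.minimum(dist, n - dist)
B = ((ring_dist > 0) & (ring_dist <= k // 2)).astype(float)

eigs = np.linalg.eigvalsh(B)     # ascending
lam_top = eigs[-1]               # = k (all-ones direction)
lam_second = eigs[-2]            # second largest, multiplicity two

closed_form = (2 * np.sin(k * np.pi / (2 * n)) / np.sin(np.pi / n)
               * np.cos((k + 2) * np.pi / (2 * n)))
print(lam_top, lam_second, closed_form)
```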
Therefore, we have the condition for which the second eigenvalue test succeeds: (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u03bb > p k \u2228 q logn (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121) > p k logn \u2228logn 2sin k\u03c0 2n sin \u03c0 n cos (k+2)\u03c0 2n \u224d r 1 k \u2228 p logn k . Proof of Lemma 3. Take any two vectors Ai\u00b7, A j\u00b7 that are two rows of the adjacency matrix. Denote the i, j-th rows have distance |\u03c0\u22121(i)\u2212\u03c0\u22121(j)|ring = x. This is equivalent to saying that the Hamming distance of the corresponding signal vectors satis\ufb01es H(Bi\u00b7,B j\u00b7) = 2x \u22122. Therefore the union of signal nodes for i, j-th row is of cardinality |Si \u222aS j| = k +x \u22121, common signal nodes is of cardinality |Si \u2229S j| = k \u2212x +1, 17 \funique signal is of cardinality |Si\u25b3S j| = 2x \u22122, and |Sc i \u2229Sc j| = n \u2212k \u2212x \u22121. The signal nodes is 1 with probability p = 1\u2212\u03b2(1\u2212\u03b2 k n\u22121), non signal is 1 with probability q = \u03b2 k n\u22121, and we have \u2329Ai\u00b7, A j\u00b7\u232a= X l\u2208Si \u2229S j Ail A jl + X l\u2208Si \u25b3S j Ail A jl + X l\u2208Sc i \u2229Sc j Ail A jl. Observe as long as l \u0338= i, j, Ail and A jl are independent, and \u00a9 Ail A jl,l \u2208[n]\\{i, j} \u00aa are independent of each other. Let us bound each term via Bernstein\u2019s inequality Lemma 6, X l\u2208Si \u2229S j Ail A jl \u2208p2|Si \u2229S j|\u00b1 \u00b5q 2p2|Si \u2229S j|t + 1 3t \u00b6 X l\u2208Si \u25b3S j Ail A jl \u2208pq|Si\u25b3S j|\u00b1 \u00b5q 2pq|Si\u25b3S j|t + 1 3t \u00b6 X l\u2208Sc i \u2229Sc j Ail A jl \u2208q2|Sc i \u2229Sc j|\u00b1 \u00b5q 2q2|Sc i \u2229Sc j|t + 1 3t \u00b6 with probability at least 1\u22126exp(\u2212t). We take t = (2+\u03f5)logn for any \u03f5 > 0, such that with probability at least 1\u2212Cn\u2212\u03f5, the above bound holds for all pairs (i, j). 
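The mean separation that these concentration bounds exploit can be checked deterministically. The sketch below (our own illustrative parameters) evaluates the expected inner products of near and far row pairs using the signal/noise probabilities p and q defined above:

```python
# Deterministic sketch of the mean separation exploited above (parameters are
# ours, purely illustrative).  Far pairs have expected inner product
# 2k*p*q + (n-2k-2)*q^2; a near pair at ring distance x <= k has expectation
# (k-x+1)*p^2 + (2x-2)*p*q + (n-k-x-1)*q^2.
n, k, beta = 1000, 20, 0.1
p = 1 - beta * (1 - beta * k / (n - 1))
q = beta * k / (n - 1)

far_mean = 2 * k * p * q + (n - 2 * k - 2) * q ** 2

x = k  # the hardest near pair: largest ring distance still inside the support
near_mean = (k - x + 1) * p ** 2 + (2 * x - 2) * p * q + (n - k - x - 1) * q ** 2

print(near_mean, far_mean)  # near_mean should clearly dominate far_mean
```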
Thus for all |\u03c0\u22121(i)\u2212\u03c0\u22121(j)|ring > k pairs, \u2329Ai\u00b7, A j\u00b7\u232a\u22642kpq +(n \u22122k \u22122)q2 + \u00b5q 4kpqt + q 2(n \u22122k \u22122)q2t + t \u00b6 , for |\u03c0\u22121(i)\u2212\u03c0\u22121(j)|ring \u2264x pairs \u2329Ai\u00b7, A j\u00b7\u232a\u2265(k \u2212x +1)p2 +(2x \u22122)pq +(n \u2212k \u2212x \u22121)q2 \u2212 \u00b5q 2(k \u2212x +1)p2t + p 2(2x \u22122)pqt + q 2(n \u2212k \u2212x \u22121)q2t + t \u00b6 . Thus, with t = (2+\u03f5)logn, p = 1\u2212\u03b2(1\u2212\u03b2 k n\u22121) and q = \u03b2 k n\u22121, if x < x0 with x0 k := 1\u2212C1 s logn k 1 1\u2212\u03b2 \u2212C2 s logn n 1 (1\u2212\u03b2)2 , we have (k \u2212x +1)(p \u2212q)2 \u22652t +(2 p 2+1) \u00b5q kp2 + q nq2 \u00b6p 2t \u22652t + \u00b5q 2kpq + q (n \u22122k \u22122)q2 + q (k \u2212x +1)p2 + p (2x \u22122)pq + q (n \u2212k \u2212x \u22121)q2 \u00b6p 2t, which further implies, min j:|\u03c0\u22121(i)\u2212\u03c0\u22121(j)|ring\u2264x0 \u2329Ai\u00b7, A j\u00b7\u232a\u2265max j\u2209N (vi ) \u2329Ai\u00b7, A j\u00b7\u232a,\u2200i max i\u2208[n] | \u02c6 N (vi)\u25b3N (vi)| |N (vi)| \u2264k \u2212x0 k = C1 s logn k 1 1\u2212\u03b2 +C2 s logn n 1 (1\u2212\u03b2)2 . 18 \fTherefore we can reconstruct the neighborhood consistently, under the condition 1\u2212\u03b2 \u227b s logn k \u2228 \u00b5logn n \u00b61/4 . Proof of Lemma 4. Since eigen-structure is not affected by permutation, we will work under the case when the true permutation is identity. We work under a mild technical assumption that we have two independent observation of the adjacency matrix, one used for calculated the eigen-vector, the other used for projection. Note this is only a technical assumption for simplicity, and does not affect the theoretical result. Recall that A = M + Z, where M = (1\u2212\u03b2)(1\u2212\u03b2 k n\u22121)\u00b7B +\u03b2 k n\u22121 \u00b7(J \u2212I) is the signal matrix. 
Denote the eigenvectors of M to beU \u2208Rn\u00d7n, and eigenvectors of A to be \u02c6 U \u2208Rn\u00d7n. Classic Davis-Kahan perturbation bound informs us that \u2225\u02c6 U\u00b72 \u2212U\u00b72\u2225\u2264 \u2225Z\u2225 \u2206\u03bb\u2212\u2225Z\u2225, \u2225\u02c6 U\u00b73 \u2212U\u00b73\u2225\u2264 \u2225Z\u2225 \u2206\u03bb\u2212\u2225Z\u2225, where the spectral gap \u2206\u03bb of M is \u2206\u03bb := (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u00b7(\u03bb2 \u2212\u03bb3) = (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)\u00b7 \" 2 k/2 X i=1 cos \u00b5 i 2\u03c0 n \u00b6 \u22122 k/2 X i=1 cos \u00b5 i 2\u03c0\u00b72 n \u00b6# = (1\u2212\u03b2)(1\u2212\u03b2 k n \u22121) \" 2sin k\u03c0 2n sin \u03c0 n cos (k +2)\u03c0 2n \u2212 2sin k\u03c0 n sin 2\u03c0 n cos (k +2)\u03c0 n # \u224d(1\u2212\u03b2)(1\u2212\u03b2 k n \u22121)k3 n2 . From the proof of Lemma 2, we know with high probability \u2225Z\u2225\u2aaf p k \u2228 q logn. We denote tan \u02c6 \u03b8i = \u02c6 U T \u00b73 A\u00b7i \u02c6 U T \u00b72 A\u00b7i , and tan\u03b8i = \u2329U\u00b73,M\u00b7i\u232a \u2329U\u00b72,M\u00b7i\u232a= \u03bb2 pn sin (i\u22121)2\u03c0 n \u03bb2 pn cos (i\u22121)2\u03c0 n = tan (i \u22121)2\u03c0 n . Observe that tan \u02c6 \u03b8i = \u2329\u02c6 U\u00b73, A\u00b7i\u232a \u2329\u02c6 U\u00b72, A\u00b7i\u232a , (22) and for both the denominator and numerator, we have the bound \u2329\u02c6 U\u00b72, A\u00b7i\u232a= \u2329( \u02c6 U\u00b72 \u2212U\u00b72)+U\u00b72,M\u00b7i + Z\u00b7i\u232a = \u2329U\u00b72,M\u00b7i\u232a+\u2329\u02c6 U\u00b72 \u2212U\u00b72,M\u00b7i\u232a+\u2329\u02c6 U\u00b72,Z\u00b7i\u232a \u2264U\u00b72,M\u00b7i\u232a+\u2225\u02c6 U\u00b72 \u2212U\u00b72\u2225\u2225M\u00b7i\u2225+|\u2329\u02c6 U\u00b72,Z\u00b7i\u232a|. 
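The angle statistic used in this proof can be illustrated in the noiseless case. Below (with our own sizes) the second eigenspace of the ring matrix is spanned by cosine/sine vectors, so the coordinate angles are equally spaced on the circle and recover the ring order up to rotation and reflection:

```python
import numpy as np

# Noiseless sketch (our sizes) of the angle statistic: for the ring matrix B,
# the second eigenspace is spanned by cosine/sine vectors, so the coordinate
# angles atan2(u3_i, u2_i) are equally spaced on the circle.
n, k = 40, 8
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
ring_dist = np.minimum(dist, n - dist)
B = ((ring_dist > 0) & (ring_dist <= k // 2)).astype(float)

vals, vecs = np.linalg.eigh(B)       # ascending eigenvalues
u2, u3 = vecs[:, -2], vecs[:, -3]    # orthonormal basis of the second eigenspace
angles = np.sort(np.arctan2(u3, u2))

gaps = np.diff(angles)
print(gaps)                          # every gap should be close to 2*pi/n
```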
19 \fThus we know max \u00a9 |\u2329\u02c6 U\u00b72, A\u00b7i\u232a\u2212\u2329U\u00b72,M\u00b7i\u232a|,|\u2329\u02c6 U\u00b73, A\u00b7i\u232a\u2212\u2329U\u00b73,M\u00b7i\u232a| \u00aa (23) \u2264max \u00a9 \u2225\u02c6 U\u00b72 \u2212U\u00b72\u2225\u2225M\u00b7i\u2225+|\u2329\u02c6 U\u00b72,Z\u00b7i\u232a|,\u2225\u02c6 U\u00b73 \u2212U\u00b73\u2225\u2225M\u00b7i\u2225+|\u2329\u02c6 U\u00b73,Z\u00b7i\u232a| \u00aa (24) \u2264 p k \u2228 p logn \u03bb2 \u2212\u03bb3 \u2212 p k \u2228 p logn \u00b7 p k(1\u2212\u03b2)+ q logn (25) where the last line follows from the de\ufb01nition of principal angle and Davis-Kahan bound and Hoeffding\u2019s inequality for \u2329\u02c6 U\u00b72,Z\u00b7i\u232a. Proceeding with Equation (22), without loss of generality, assume 0 \u2264(i\u22121)2\u03c0 n \u2264\u03c0 2 , for 0 \u2264(i\u22121)2\u03c0 n \u2264\u03c0 4 tan \u02c6 \u03b8i \u2264 \u03bb2 pn sin (i\u22121)2\u03c0 n + p k\u2228p logn \u03bb2\u2212\u03bb3\u2212 p k\u2228p logn \u00b7 p k(1\u2212\u03b2)+ p logn \u03bb2 pn cos (i\u22121)2\u03c0 n \u2212 p k\u2228p logn \u03bb2\u2212\u03bb3\u2212 p k\u2228p logn \u00b7 p k(1\u2212\u03b2)\u2212 p logn (26) = sin (i\u22121)2\u03c0 n +\u03b4 cos (i\u22121)2\u03c0 n \u2212\u03b4 , with \u03b4 = pn \u03bb2 \u00b7 ( p k \u2228 p logn \u03bb2 \u2212\u03bb3 \u2212 p k \u2228 p logn \u00b7 p k(1\u2212\u03b2)+ q logn ) (27) tan \u02c6 \u03b8i \u2265 sin (i\u22121)2\u03c0 n \u2212\u03b4 cos (i\u22121)2\u03c0 n +\u03b4 , (28) with similar bounds for cot \u02c6 \u03b8i with \u03c0 4 \u2264(i\u22121)2\u03c0 n \u2264\u03c0 2 . Here the stochastic eror is bounded in the sense \u03b4 \u224dn2.5 k3 1 1\u2212\u03b2 \u21920. From the above equation, we have |\u02c6 \u03b8i \u2212\u03b8i| \u2264min{|tan \u02c6 \u03b8i \u2212tan\u03b8i|,|cot \u02c6 \u03b8i \u2212cot\u03b8i|} \u2264min \u00bd\u03b4(1+tan\u03b8i) cos\u03b8i \u2212\u03b4 , \u03b4(1+cot\u03b8i) sin\u03b8i \u2212\u03b4 \u00be \u2264 2\u03b4 1 p 2 \u2212\u03b4 . 
For any i, we have the bound on the stochastic error |\u02c6 \u03b8i \u2212\u03b8i| < C \u00b7\u03b4 \u224dn2.5 k3 1 1\u2212\u03b2. And for all j \u2208N (i) in the neighborhood, the support is |\u03b8j \u2212\u03b8i| \u22642\u03c0k n . Fix any i, for any j \u2209N (vi), min j\u2209N (vi ) |\u02c6 \u03b8j \u2212\u02c6 \u03b8i| \u22652\u03c0k n \u2212C\u03b4 > (2\u03c0k n \u2212C \u2032\u03b4)+C\u03b4 \u2265 max |j\u2212i| 2C. Therefore, the following bound on symmetric set difference holds max i\u2208[n] | \u02c6 N (vi)\u25b3N (vi)| |N (vi)| \u2264C \u2032 \u00b7n\u03b4 k \u2264 C \u2032 \u00b7n \u00b7 n2.5 k3 1 1\u2212\u03b2 k . In summary under the condition 1\u2212\u03b2 \u227bn3.5 k4 , one can recover the neighborhood consistently w.h.p. in the sense lim n,k(n)\u2192\u221emax i\u2208[n] | \u02c6 N (vi)\u25b3N (vi)| |N (vi)| = 0. 20 \facknowledgement The authors thank Elchanan Mossel for many helpful discussions and suggestions for improving the paper." + }, + { + "url": "http://arxiv.org/abs/1603.06923v1", + "title": "Inference via Message Passing on Partially Labeled Stochastic Block Models", + "abstract": "We study the community detection and recovery problem in partially-labeled\nstochastic block models (SBM). We develop a fast linearized message-passing\nalgorithm to reconstruct labels for SBM (with $n$ nodes, $k$ blocks, $p,q$\nintra and inter block connectivity) when $\\delta$ proportion of node labels are\nrevealed. The signal-to-noise ratio ${\\sf SNR}(n,k,p,q,\\delta)$ is shown to\ncharacterize the fundamental limitations of inference via local algorithms. On\nthe one hand, when ${\\sf SNR}>1$, the linearized message-passing algorithm\nprovides the statistical inference guarantee with mis-classification rate at\nmost $\\exp(-({\\sf SNR}-1)/2)$, thus interpolating smoothly between strong and\nweak consistency. This exponential dependence improves upon the known error\nrate $({\\sf SNR}-1)^{-1}$ in the literature on weak recovery. 
On the other\nhand, when ${\\sf SNR}<1$ (for $k=2$) and ${\\sf SNR}<1/4$ (for general growing\n$k$), we prove that local algorithms suffer an error rate at least $\\frac{1}{2}\n- \\sqrt{\\delta \\cdot {\\sf SNR}}$, which is only slightly better than random\nguess for small $\\delta$.", + "authors": "T. Tony Cai, Tengyuan Liang, Alexander Rakhlin", + "published": "2016-03-22", + "updated": "2016-03-22", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction The stochastic block model (SBM) is a well-studied model that addresses the clustering phenomenon in large networks. Various phase transition phenomena and limitations for ef\ufb01cient algorithms have been established for this \u201cvanilla\u201d SBM (Coja-Oghlan, 2010; Decelle et al., 2011; Massouli\u00e9, 2014; Mossel et al., 2012, 2013a; Krzakala et al., 2013; Abbe et al., 2014; Hajek et al., 2014; Abbe and Sandon, 2015a; Deshpande et al., 2015). However, in real network datasets, additional side information is often available. This additional information may come, for instance, in the form of a small portion of revealed labels (or, community memberships), and this paper is concerned with methods for incorporating this additional information to improve recovery of the latent community structure. Many global algorithms studied in the literature are based on spectral analysis (with belief propagation as a further re\ufb01nement) or semide\ufb01nite programming. For these methods, it appears to be dif\ufb01cult to incorporate such additional side information, although some success has been reported (Cucuringu et al., 2012; Zhang et al., 2014). Incorporating the additional information within local algorithms, however, is quite natural. In this paper, we focus on local algorithms and study their fundamental limitations. Our model is a partially labeled stochastic block model (p-SBM), where \u03b4 portion of community labels are randomly revealed. 
We address the following questions: \u2217The research of Tony Cai was supported in part by NSF Grants DMS-1208982 and DMS-1403708, and NIH Grant R01 CA127334. \u2020Alexander Rakhlin gratefully acknowledges the support of NSF under grant CAREER DMS-0954737. Phase Boundary: Are there different phases of behavior in terms of the recovery guarantee, and what is the phase boundary for partially labeled SBM? How does the amount of additional information \u03b4 affect the phase boundary? Inference Guarantee: What is the optimal guarantee on the recovery results for p-SBM, and how does it interpolate between the notions of weak and strong consistency known in the literature? Is there an efficient and near-optimal parallelizable algorithm? Limitation for Local vs. Global Algorithms: While optimal local algorithms (belief propagation) are computationally efficient, some global algorithms may be computationally prohibitive. Is there a fundamental difference in the limits of local and global algorithms? An answer to this question gives insight into the computational and statistical trade-offs. 1.1 Problem Formulation We define p-SBM with parameter bundle (n,k,p,q,\u03b4) as follows. Let n denote the number of nodes, k the number of communities, and p and q the intra- and inter-block connectivity probabilities, respectively. The proportion of revealed labels is denoted by \u03b4. Specifically, one observes a partially labeled graph G(V,E) with |V| = n, generated as follows. There is a latent disjoint partition V = V_1 \u222a \u00b7\u00b7\u00b7 \u222a V_k into k equal-sized groups, with |V_l| = n/k. The partition induces the latent labeling \u2113(v) = l iff v \u2208 V_l. For any two nodes v_i, v_j, 1 \u2264 i, j \u2264 n, there is an edge between them with probability p if v_i and v_j are in the same part, and with probability q if not. Independently for each node v \u2208 V, its true label is revealed with probability \u03b4.
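The generative model just described can be sketched as a small sampler (a hedged illustration with our own parameter choices; the helper name `sample_psbm` is ours, not the paper's):

```python
import numpy as np

def sample_psbm(n, k, p, q, delta, seed=0):
    """Sample the partially labeled SBM described above (a sketch; the helper
    name and defaults are ours).  Assumes k divides n (equal-sized groups)."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), n // k)   # latent balanced partition
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    A = np.triu(rng.random((n, n)) < probs, 1).astype(int)
    A = A + A.T                                # symmetric adjacency, zero diagonal
    revealed = rng.random(n) < delta           # each label revealed w.p. delta
    return A, labels, revealed

A, labels, revealed = sample_psbm(n=200, k=2, p=0.5, q=0.05, delta=0.2)
same = labels[:, None] == labels[None, :]
within = A[same & ~np.eye(200, dtype=bool)].mean()
cross = A[~same].mean()
print(within, cross)   # within-block density should sit well above cross-block
```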
Denote the set of labeled nodes by V^l, its revealed labels by \u2113(V^l), and the unlabeled nodes by V^u (so that V = V^l \u222a V^u). Equivalently, denote by B \u2208 R^{n\u00d7n} the adjacency matrix, and let L \u2208 R^{n\u00d7n} be the structural block matrix with L_ij = 1_{\u2113(v_i)=\u2113(v_j)}, i.e., L_ij = 1 iff nodes i and j share the same label, and L_ij = 0 otherwise. Then, independently for 1 \u2264 i < j \u2264 n, B_ij \u223c Bernoulli(p) if L_ij = 1, and B_ij \u223c Bernoulli(q) if L_ij = 0. Given the graph G(V,E) and the partially revealed labels \u2113(V^l), we want to recover the remaining labels efficiently and accurately. We are interested in the case when \u03b4(n), p(n), q(n) decrease with n, and k(n) can either grow with n or stay fixed. (Footnote 1: The result can be generalized to the balanced case, |V_l| \u224d n/k; see Section 2.2.) 1.2 Prior Work In the existing literature on SBM without side information, there are two major criteria \u2013 weak and strong consistency. Weak consistency asks for recovery better than random guessing in a sparse random graph regime (p \u224d q \u224d 1/n), while strong consistency requires exact recovery of every node's label above the connectedness threshold (p \u224d q \u224d logn/n). Interesting phase transition phenomena in weak consistency for SBM were discovered in (Decelle et al., 2011) via the insightful cavity method from statistical physics. Sharp phase transitions for weak consistency have been thoroughly investigated in (Coja-Oghlan, 2010; Mossel et al., 2012, 2013a,b; Massouli\u00e9, 2014). In particular, spectral algorithms on the non-backtracking matrix have been studied in (Massouli\u00e9, 2014) and the non-backtracking walk in (Mossel et al., 2013b). Spectral algorithms as initialization with belief propagation as further refinement to achieve optimal recovery was established in (Mossel et al., 2013a). The work of Mossel et al.
(2012) draws a connection between SBM thresholds and broadcasting tree reconstruction thresholds through the observation that sparse random graphs are locally tree-like. Recent work of Abbe and Sandon (2015b) establishes the positive detectability result down to the Kesten-Stigum bound for all k via a detailed analysis of a modi\ufb01ed version of belief propagation. For strong consistency, (Abbe et al., 2014; Hajek et al., 2014, 2015) established the phase transition using information theoretic tools and semi-de\ufb01nite programming (SDP) techniques. In the statistical literature, Zhang and Zhou (2015); Gao et al. (2015) investigated the mis-classi\ufb01cation rate of the standard SBM. (Kanade et al., 2014) is one of the few papers that theoretically studied the partially labeled SBM. The authors investigated the stochastic block model where the labels for a vanishing fraction (\u03b4 \u21920) of the nodes are revealed. The results focus on the asymptotic case when \u03b4 is suf\ufb01ciently small and block number k is suf\ufb01ciently large, with no speci\ufb01ed growth rate dependence. Kanade et al. (2014) show that pushing below the Kesten-Stigum bound is possible in this setting, connecting to a similar phenomenon in k-label broadcasting processes (Mossel, 2001). In contrast to these works, the focus of our study is as follows. Given a certain parameter bundle p-SBM(n,k,p,q,\u03b4), we investigate the recovery thresholds as the fraction of labeled nodes changes, and determine the fraction of nodes that local algorithms can recover. The focus of this paper is on local algorithms. These methods, naturally suited for distributed computation (Linial, 1992), provide ef\ufb01cient (sub-linear time) solutions to computationally hard combinatorial optimization problems on graphs. For some of these problems, they are good approximations to global algorithms. 
We refer to (Kleinberg, 2000) on the shortest path problem for small-world random graphs, (Gamarnik and Sudan, 2014) for the maximum independent set problem for sparse random graphs, (Parnas and Ron, 2007) on the minimum vertex cover problem, as well as (Nguyen and Onak, 2008). Finally, let us briefly review the literature on broadcasting processes on trees, from which we borrow technical tools to study p-SBM. Consider a Markov chain on an infinite tree rooted at \u03c1 with branching number b. Given the label of the root \u2113(\u03c1), each vertex chooses its label by applying the Markov rule M to its parent\u2019s label, recursively and independently. This process is called the broadcasting process on trees. One is interested in reconstructing the root label \u2113(\u03c1) given all the n-th level leaf labels. Sharp reconstruction thresholds for the broadcasting process on general trees in the symmetric Ising model setting (each node\u2019s label is in {+,\u2212}) have been studied in (Evans et al., 2000). Mossel and Peres (2003) studied a general Markov channel on trees that subsumes the k-state Potts model and the symmetric Ising model as special cases; the authors established non-census-solvability below the Kesten-Stigum bound. Janson and Mossel (2004) extended the sharp threshold to robust reconstruction cases, where the vertices\u2019 labels are contaminated with noise. In general, the transition thresholds proved in the above literature correspond to the Kesten-Stigum bound b|\u03bb_2(M)|^2 = 1 (Kesten and Stigum, 1966b,a). We remark that for a general Markov channel M, b|\u03bb_2(M)|^2 < 1 does not always imply non-solvability \u2013 even though it indeed implies non-census-solvability (Mossel and Peres, 2003) \u2013 which is equivalent to the extremality of the free-boundary Gibbs measure. The non-solvability of the tree reconstruction problem below the Kesten-Stigum bound for a general Markov transition matrix M \u2208 R^{k\u00d7k} still remains open, especially for large k.
1.3 Our Contributions This section summarizes the results. In terms of methodology, we propose a new efficient linearized message-passing algorithm (Algorithm 1) that solves the label recovery problem of p-SBM in near-linear time. The algorithm shares the same transition boundary as the optimal local algorithm (belief propagation) and takes on the simple form of a weighted majority vote (with the weights depending on graph distance). This voting strategy is easy to implement (see Section 5). On the theoretical front, our focus is on establishing recovery guarantees according to the size of the Signal-to-Noise Ratio (SNR), defined as SNR(n,k,p,q,\u03b4) := (1\u2212\u03b4) \u00b7 n(p\u2212q)^2 / (k^2 (q + (p\u2212q)/k)). (1) Phase Boundary: For k = 2, the phase boundary for the recovery guarantee is SNR = 1. Above the threshold, the problem can be solved efficiently; below the threshold, the problem is intrinsically hard. For growing k, on the one hand, a linearized message-passing algorithm succeeds when SNR > 1, matching the well-established Kesten-Stigum bound for all k. On the other hand, no local algorithm works significantly better than random guessing if SNR < 1/4. Inference Guarantee: Above the SNR phase boundary, Algorithm 1, a fast linearized message-passing algorithm \u02c6 A with near-linear run-time O\u2217(n), provides near-optimal recovery. For k = 2, under the regime SNR > 1, the proportion of mis-classified labels is at most sup_{l\u2208{+,\u2212}} P_l(\u02c6 A \u0338= l) \u2264 exp(\u2212(SNR\u22121)/(2+o(1))) \u2227 1/2. Thus when SNR \u2208 (1, 2 logn), the recovery guarantee interpolates smoothly between weak and strong consistency. On the other hand, below the boundary SNR < 1, all local algorithms suffer a minimax classification error of at least inf_\u03a6 sup_{l\u2208{+,\u2212}} P_l(\u03a6 \u0338= l) \u2265 1/2 \u2212 O(\u221a(\u03b4/(1\u2212\u03b4) \u00b7 SNR/(1\u2212SNR))).
For growing k, above the phase boundary SNR > 1, the proportion of mis-classified labels via the approximate message-passing algorithm is at most sup_{l\u2208[k]} P_l(\u02c6 A \u0338= l) \u2264 (k\u22121) \u00b7 exp(\u2212(SNR\u22121)/(2+o(1))) \u2227 (k\u22121)/k. However, below the boundary SNR < 1/4, the minimax classification error is lower bounded by inf_\u03a6 sup_{l\u2208[k]} P_l(\u03a6 \u0338= l) \u2265 1/2 \u2212 O(\u03b4/(1\u2212\u03b4) \u00b7 SNR/(1\u22124\u00b7SNR) \u2228 1/k). Limitations of Local vs. Global Algorithms: It is known that the statistical boundary (the limitation for global, possibly exponential-time algorithms) for a growing number of communities is SNR \u224d O(logk/k) (Abbe and Sandon (2015b), weak consistency) and SNR \u224d O(logn/k) (Chen and Xu (2014), strong consistency). We show in this paper that the limitation for local algorithms (those that use neighborhood information up to depth logn) is 1/4 \u2264 SNR \u2264 1. In conclusion, as k grows, there is a factor-k gap between the boundaries for global and local algorithms. Local algorithms can be evaluated in near-linear time; however, the global algorithm achieving the statistical boundary requires exponential time. To put our results in the right context, let us make comparisons with the known literature. Most of the literature studies the standard SBM with no side labeling information. Here, many algorithms that achieve the sharp phase boundary are either global algorithms or a combination of global and local algorithms; see (Mossel et al., 2013b; Massouli\u00e9, 2014; Hajek et al., 2014; Abbe et al., 2014). However, from the theoretical perspective, it is not clear how to distinguish the limitations of global vs. local algorithms through the above studies.
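The quantities above can be collected into a short sketch (all numbers below are our own illustrations, and the o(1) term in the exponent is dropped):

```python
import math

def snr(n, k, p, q, delta):
    """Signal-to-noise ratio of Equation (1)."""
    return (1 - delta) * n * (p - q) ** 2 / (k ** 2 * (q + (p - q) / k))

def error_upper_bound(s, k=2):
    """Mis-classification bound (k-1)*exp(-(s-1)/2), capped at (k-1)/k.
    The o(1) term in the exponent is ignored for illustration."""
    return min((k - 1) * math.exp(-(s - 1) / 2), (k - 1) / k)

# Illustrative parameters (ours): a point above the boundary SNR > 1.
s = snr(n=1000, k=2, p=0.02, q=0.005, delta=0.1)
print(s, error_upper_bound(s))
```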
In addition, from the model and algorithmic perspective, many global algorithms such as spectral (Coja-Oghlan, 2010; Massouli\u00e9, 2014) and semi-de\ufb01nite programming (Abbe et al., 2014; Hajek et al., 2014) are not readily applicable in a principled way when there is partially revealed labels. We try to resolve the above concerns. First, we establish a detailed statistical inference guarantee for label recovery. Allowing for a vanishing \u03b4 amount of randomly revealed labels, we show that a fast local algorithm enjoys a good recovery guarantee that interpolates between weak and strong recovery precisely, down to the well-known Kesten-Stigum bound, for general k. The error bound exp(\u2212(SNR\u22121)/2) proved in this paper improves upon the best known result of (SNR \u22121)\u22121 in the weak recovery literature. We also prove that the limitation for local algorithms matches the Kesten-Stigum bound, which is sub-optimal compared to the limitation for global algorithms, when k grows. We also remark that the boundary we establish matches the best known result for the standard SBM when we plug in \u03b4 = 0. We study the message-passing algorithms for multi-label broadcasting tree when a fraction of nodes\u2019 labels have been revealed. Unlike the usual asymptotic results for belief propagation and approximate message-passing, we prove non-asymptotic concentration of measure phenomenon for messages on multilabel broadcasting trees. As the tree structure encodes detailed dependence among random variables, proving the concentration phenomenon requires new ideas. We further provide a lower bound on belief propagation for multi-label broadcasting trees. 1.4 Organization of the Paper The rest of the paper is organized as follows. Section 2 reviews the preliminary background and theoretical tools \u2013 broadcasting trees \u2013 that will be employed to solve the p-SBM problem. 
To better illustrate the main idea behind the theoretical analysis, we split the main result into two sections: Section 3 resolves the recovery transition boundary for k = 2, where the analysis is simple and best illustrates the main idea. In Section 4, we focus on the growing k = k(n) case, where a modified algorithm and a more detailed analysis are provided. In the growing k case, we establish a distinct gap in phase boundaries between global and local algorithms. 2 Preliminaries 2.1 Broadcasting Trees First, we introduce the notation for the tree broadcasting process. Let T_{\u2264t}(\u03c1) denote the tree up to depth t with root \u03c1. The collection of revealed labels for a broadcasting tree T_{\u2264t}(\u03c1) is denoted by \u2113(T_{\u2264t}(\u03c1)) (this is a collection of random variables). The labels for the binary broadcasting tree are [2] := {+,\u2212}, and for the k-label broadcasting tree, [k] := {1,2,...,k}. For a node v, the set of labeled children is denoted by C^l(v) and the unlabeled ones by C^u(v). We also denote the depth-t children of v by C_t(v). For a broadcasting tree T, denote by d its broadcasting number, whose rigorous definition is given in (Evans et al., 2000; Lyons and Peres, 2005). For a broadcasting tree with bias parameter \u03b8, the labels are broadcast in the following way: conditionally on the label of v, for any u \u2208 C(v), \u2113(u) = \u2113(v) with probability \u03b8 + (1\u2212\u03b8)/k, and \u2113(u) = l for each l \u2208 [k]\\\u2113(v) with probability (1\u2212\u03b8)/k. In words, the child copies the color of its parent with probability \u03b8 + (1\u2212\u03b8)/k, or changes to any of the remaining k\u22121 colors, each with probability (1\u2212\u03b8)/k. For the node v, N_{C^l(v)}(+) denotes the number of revealed positive nodes among its children. Similarly, we define N_{C^l(v)}(l) for l \u2208 [k] in multi-label trees. 2.2 Local Tree-like Graphs & Local Algorithms When viewed locally, stochastic block models share many properties with broadcasting trees.
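The broadcast rule just stated can be simulated in one step (parameters below are ours, and `broadcast_child` is our own helper name); the empirical copy frequency should match theta + (1-theta)/k:

```python
import random

# One step of the k-label broadcast rule: a child copies its parent's label
# w.p. theta + (1-theta)/k and switches to each of the remaining k-1 labels
# w.p. (1-theta)/k.  Parameters are illustrative.
def broadcast_child(parent_label, k, theta, rng):
    if rng.random() < theta + (1 - theta) / k:
        return parent_label
    others = [l for l in range(k) if l != parent_label]
    return rng.choice(others)

rng = random.Random(0)
k, theta, trials = 4, 0.5, 200000
root = 0
copies = sum(broadcast_child(root, k, theta, rng) == root for _ in range(trials))
print(copies / trials)   # should be close to theta + (1 - theta)/k = 0.625
```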
In fact, via the coupling lemma (see Lemma 2) from (Mossel et al., 2012), one can show the graph generated from the stochastic block model is locally a tree-like graph. For the rest of the paper, we abbreviate the following maximum coupling depth \u00af tn,k,p,q as \u00af t (see Lemma 2 for details). De\ufb01nition 1 (\u00af t-Local Algorithm Class for p-SBM). The \u00af t-local algorithm class is the collection of decentralized algorithms that run in parallel on nodes of the graph. To recover a node v\u2019s label in p-SBM, an algorithm may only utilize information (revealed labels, connectivity) of the local tree T\u2264\u00af t(v) rooted at v with depth at most \u00af t. In view of the coupling result, for the stochastic block model p-SBM(n,k = 2,p,q,\u03b4), as long as we focus on \u00af t-local algorithms, we can instead study the binary-label broadcasting process Treek=2(\u03b8,d,\u03b4) with broadcasting number d = n 2 (p +q) and bias parameter \u03b8 = p\u2212q p+q . Similarly, for the multi-label model p-SBM(n,k,p,q,\u03b4), we will study the k-label broadcasting process Treek(\u03b8,d,\u03b4) with broadcasting number d = n(q + p\u2212q k ) and bias parameter \u03b8 = p\u2212q k(q+ p\u2212q k ). 2 For each layer of the broadcasting tree, \u03b4 portion of nodes\u2019 labels are revealed. Our goal is to understand the condition under which message-passing algorithms on multi-label broadcasting trees succeed in recovering the root label. 2.3 Hyperbolic Functions and Other Notation In order to introduce the belief propagation and message-passing algorithms, let us recall several hyperbolic functions that will be used frequently. As we show, linearization of the hyperbolic function induces 2In the balanced SBM case, for each node, the local tree changes slightly with different branching number and bias parameter. 6 \fa new approximate message-passing algorithm. 
Recall that

$$\tanh x = \frac{e^x - e^{-x}}{e^x + e^{-x}}, \qquad \tanh^{-1} x = \frac{1}{2}\log\left(\frac{1+x}{1-x}\right),$$

and define

$$f_\theta(x) := 2\tanh^{-1}\left(\theta \tanh\frac{x}{2}\right) = \log\frac{1+\theta\cdot\frac{e^x-1}{e^x+1}}{1-\theta\cdot\frac{e^x-1}{e^x+1}}. \qquad (2)$$

Figure 1: Function f_θ for θ ∈ [0,1].

The function f_θ : ℝ → ℝ is a contraction with |f_θ(x) − f_θ(y)| ≤ θ|x − y|, since

$$\frac{d f_\theta(x)}{dx} = \frac{2\theta}{(1-\theta^2)\cosh(x) + (1+\theta^2)} \le \theta.$$

An illustration of f_θ is provided in Figure 1. The recursion rule for message passing can be written succinctly using the function f_θ, as we show in Section 3.2. Let us collect a few remaining definitions. The moment generating function (MGF) of a random variable X is denoted by Ψ_X(λ) = E e^{λX}, for λ > 0, and the cumulant generating function is defined as K_X(λ) = log Ψ_X(λ). For asymptotic order of magnitude, we use a(n) = O(b(n)) to mean a(n) ≤ C b(n) for all n and some universal constant C, and use O*(·) to omit the poly-logarithmic dependence. As for the notation ≾, ≿: a(n) ≾ b(n) if and only if lim_{n→∞} a(n)/b(n) ≤ c for some constant c > 0, and vice versa. The square bracket [·] represents the index set [k] := {1, 2, ..., k}; in particular, when k = 2 we write [2] := {+, −} for convenience.

3 Number of Communities k = 2: Message Passing with Partial Information

3.1 p-SBM Transition Thresholds

We propose a novel linearized message-passing algorithm that solves the p-SBM in near-linear time. The method employs Algorithms 3 and 4 as sub-routines, can run in parallel, and is easy to implement.

Algorithm 1: Message Passing for p-SBM
Data: A network graph G(V, E) with partial label information, where V = V^l ∪ V^u is composed of the labeled set and the unlabeled set. Denote ε = o(1) small, and t̄ ≾ log n / log(n(p+q)).
Result: The labeling for each node v \u2208V u. for each node v \u2208V u in the unlabeled set, do open the tree neighborhood T\u2264\u00af t(v) induced by the graph G(V,E) ; for each node u \u2208C(1\u2212\u03f5)\u00af t(v), i.e., depth (1\u2212\u03f5)\u00af t child of v, do focus on the subtree T\u2264\u03f5\u00af t(u), ; initialize the message for u via the labeled node \u2208V l in layer \u03f5\u00af t of the subtree a ; end run message-passing Algorithm 3 (Algorithm 4 for general k) on the tree T\u2264(1\u2212\u03f5)\u00af t(v) with initial message on layer (1\u2212\u03f5)\u00af t ; output \u2113(v). end aas there is at least one labeled node in layer \u03f5\u00af t, for all the subtrees rooted in u Now we are ready to present the main result. Theorem 1 (Transition Thresholds for p-SBM: k = 2). Consider the partially labeled stochastic block model G(V,E) and its revealed labels \u2113(V l) under the conditions (1) np \u224dnq \u227eno(1) and (2) \u03b4 \u227fn\u2212o(1). For any node \u03c1 \u2208V u and its locally tree-like neighborhood T\u2264\u00af t(\u03c1), de\ufb01ne the maximum mis-classi\ufb01cation error of a local estimator \u03a6 : \u2113T\u2264\u00af t(\u03c1) \u2192{+,\u2212} as Err(\u03a6) := max l\u2208{+,\u2212} P \u00a1 \u03a6(\u2113T\u2264t(\u03c1)) \u0338= \u2113(\u03c1)|\u2113(\u03c1) = l \u00a2 . The transition boundary for p-SBM depends on the value SNR = (1\u2212\u03b4)n(p \u2212q)2 2(p + q) . (k = 2 in Eq. (1)). On the one hand, if SNR > 1, (3) the \u00af tlocal message-passing Algorithm 1 \u2014 denoted as \u02c6 A(\u2113T\u2264\u00af t(\u03c1)) \u2014 recovers the true labels of the nodes with mis-classi\ufb01cation rate at most Err( \u02c6 A) \u2264exp \u00b5 \u2212SNR\u22121 2C +o\u00af t(1) \u00b6 \u22271 2, (4) where C > 0 is a constant and C \u22611 if the local tree is regular. 
On the other hand, when

$$\mathrm{SNR} < 1, \qquad (5)$$

for any t̄-local estimator Φ : ℓ_{T≤t̄}(ρ) → {+, −}, the minimax mis-classification error is lower bounded as

$$\inf_{\Phi} \mathrm{Err}(\Phi) \ge \frac{1}{2} - C\cdot\sqrt{\frac{\delta}{1-\delta}\cdot\frac{\mathrm{SNR}}{1-\mathrm{SNR}}\cdot\frac{(p+q)^2}{pq}} = \frac{1}{2} - C'\cdot\sqrt{\frac{\delta}{1-\delta}\cdot\frac{\mathrm{SNR}}{1-\mathrm{SNR}}}.$$

The above lower bound in the regime δ = o(1) implies that no local algorithm using information up to depth t̄ can do significantly better than 1/2 + O(√δ), i.e., better than close to random guessing. Let us compare the main result for p-SBM with the well-known result for the standard SBM with no partial label information. The boundary in Equations (3) and (5) is the phase transition boundary for the standard SBM when we plug in δ = 0. This also matches the well-known Kesten-Stigum bound. For the standard SBM in the k = 2 case, the Kesten-Stigum bound is proved to be sharp (even for global algorithms); see (Mossel et al., 2013b; Massoulié, 2014). The interesting case is when there is a vanishing amount of revealed label information, i.e., o(1) = δ ≿ n^{−o(1)}. In this case, the upper bound part of Theorem 1 states that this vanishing amount of initial information is enough to propagate the labeling information to all the nodes, above the same detection transition threshold as for the vanilla SBM. Moreover, the theoretical guarantee for label propagation pushes beyond weak consistency (detection), explicitly interpolating between weak and strong consistency. The result thus provides a detailed understanding of the strength of the SNR threshold and its effect on the percentage recovery guarantee, i.e., the inference guarantee. More concretely, for the regime p = a/n, q = b/n, the boundary

$$\mathrm{SNR} = \frac{(1-\delta)n(p-q)^2}{2(p+q)} > 1$$

is equivalent to

$$\frac{(a-b)^2}{2(a+b)} > \frac{1}{1-\delta}.$$
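The equivalence just stated is immediate from the definition of SNR; a quick numerical sanity check in Python (the parameter values are illustrative, chosen by us):

```python
def snr(n, p, q, delta):
    # SNR for k = 2: (1 - delta) * n * (p - q)^2 / (2 * (p + q))
    return (1.0 - delta) * n * (p - q) ** 2 / (2.0 * (p + q))

n, delta, a, b = 100_000, 0.05, 8.0, 2.0
s = snr(n, a / n, b / n, delta)
# In the sparse regime p = a/n, q = b/n, SNR = (1 - delta)(a - b)^2 / (2(a + b))
# exactly, so SNR > 1 is the same as (a - b)^2 / (2(a + b)) > 1/(1 - delta).
assert abs(s - (1 - delta) * (a - b) ** 2 / (2 * (a + b))) < 1e-9
assert s > 1  # here (a - b)^2 / (2(a + b)) = 1.8 > 1/0.95
```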
When \u03b4 = 0, this matches the boundary for weak consistency in (Mossel et al., 2013b; Massouli\u00e9, 2014). In addition, SNR > 1+2logn implies Err( \u02c6 A) < 1/n \u21920, which means strong consistency (recovery) in the regular tree case (C \u22611). This condition on SNR is satis\ufb01ed, for instance, by taking p = a logn/n,q = b logn/n and computing the relationship between a,b, and \u03b4 to ensure SNR = (1\u2212\u03b4)n(p \u2212q)2 2(p + q) > 1+2logn. This relationship is precisely pa \u2212 p b p 2 > s 1+ 1 2logn 1\u2212\u03b4 \u00b7 pa + p b p 2(a +b) \u227f r 1 1\u2212\u03b4. The above agrees with the scaling for strong recovery in (Abbe et al., 2014; Hajek et al., 2014). The following sections are dedicated to proving the theorem. The upper bound is established in Corollary 2.1 through a linearized belief propagation that serves as a subroutine for Algorithm 1. The lower bound is established by employing the classic Le Cam\u2019s theory, as shown in Theorem 3. 3.2 Belief Propagation & Message Passing In this section we introduce the belief propagation (BP) Algorithm 2 and motivate the new messagepassing Algorithm 3 that, while being easier to analyze, mimics the behavior of BP . Algorithm 3 serves as the key building block for Algorithm 1. 9 \fRecall the de\ufb01nition of the partially revealed binary broadcasting tree Treek=2(\u03b8,d,\u03b4) with broadcasting number d. The root \u03c1 is labeled \u2113(\u00b7) with either {+,\u2212} equally likely, and the label is not revealed. The labels are broadcasted along the tree with a bias parameter 0 < \u03b8 < 1: for a child v \u2208C (u) of u, \u2113(v) = \u2113(u) with probability 1+\u03b8 2 and \u2113(v) = \u2212\u2113(u) with probability 1\u2212\u03b8 2 . The tree is partially labeled in the sense that a fraction 0 < \u03b4 < 1 of labels are revealed for each layer and \u2113T\u2264t(\u03c1) stands for the revealed label information of tree rooted at \u03c1 with depth \u2264t. 
Let us formally introduce the BP algorithm, which is the Bayes optimal algorithm on trees. We define

$$M_i(\ell_{T_{\le i}}(v)) := \log\frac{P\left(\ell(v) = + \mid \ell_{T_{\le i}}(v)\right)}{P\left(\ell(v) = - \mid \ell_{T_{\le i}}(v)\right)}$$

as the belief of node v's label, abbreviated as M_i when the context is clear. The belief depends on the revealed information ℓ_{T≤i}(v). The following Algorithm 2 calculates the log ratio M_t(ℓ_{T≤t}(ρ)) based on the revealed labels up to depth t, recursively, as shown in Figure 2. The algorithm is derived through Bayes' rule and simple algebra, and the detailed derivation is included in Section 6.

Figure 2: Illustration of the recursion in Eq. (6) for messages on a d-regular tree. Here d = 3, with two unlabeled children ((1−δ)d = 2, denoted by blue) and one labeled child (δd = 1, denoted by black), and the depth is 2. C^t(ρ) denotes the depth-t children of the root ρ. The red arrows correspond to messages received from the labeled children, and the black arrows to messages from the unlabeled children.

Algorithm 2: Belief Propagation (BP) on Partially Labeled Binary Broadcasting Tree
Data: A partially labeled tree T≤t(ρ) with depth t, with labels ℓ_{T≤t}(ρ); the root label ℓ(ρ) is unknown.
Result: The logit of the posterior probability M_t(ℓ_{T≤t}(ρ)) = log [P(ℓ(ρ) = + | ℓ_{T≤t}(ρ)) / P(ℓ(ρ) = − | ℓ_{T≤t}(ρ))].
Initialization: i = 1, and M0(\u2113T\u22640(v)) = 0, M1(\u2113T\u22641(v)) = \u00a1 NC l(v)(+)\u2212NC l(v)(\u2212) \u00a2 log 1+\u03b8 1\u2212\u03b8, \u2200v \u2208T\u2264t(\u03c1); while i \u2264t do focus on (t \u2212i)-th layer; for v \u2208Ct\u2212i(\u03c1) and v unlabeled do update messages for the subtree: Mi(\u2113T\u2264i (v)) = M1(\u2113T1(v))+ X u\u2208C u(v) f\u03b8 \u00a1 Mi\u22121(\u2113T\u2264i\u22121(u)) \u00a2 (6) move one layer up: i = i +1 ; end end output Mt(\u2113T\u2264t(\u03c1)). The computational complexity of this algorithm is O \u00b5(\u03b4d +1)[(1\u2212\u03b4)d]t \u2212d (1\u2212\u03b4)d \u22121 \u00b6 . While the method is Bayes optimal, the density of the messages Mi is dif\ufb01cult to analyze, due to the dependence on revealed labels and the non-linearity of f\u03b8. However, the following linearized version, Algorithm 3, shares many theoretical similarities with Algorithm 2, and is easier to analyze. Both Algorithms 2, 3 require the prior knowledge of \u03b8. Algorithm 3: Approximate Message Passing (AMP) on Partially Labeled Binary Broadcasting Tree Data: A partially labeled tree T\u2264t(\u03c1) with depth t, with labels \u2113T\u2264t(\u03c1), the root label \u2113(\u03c1) is unknown. Result: Label \u2113(\u03c1) = sgn(Mt(\u2113T\u2264t(\u03c1))). Initialization: i = 1, and M0(\u2113T\u22640(v)) = 0, M1(\u2113T\u22641(v)) = \u00a1 NC l(v)(+)\u2212NC l(v)(\u2212) \u00a2 log 1+\u03b8 1\u2212\u03b8, \u2200v \u2208T\u2264t(\u03c1); while i \u2264t do focus on (t \u2212i)-th layer; for v \u2208Ct\u2212i(\u03c1) and v unlabeled do update messages for the subtree: Mi(\u2113T\u2264i (v)) = M1(\u2113T1(v))+\u03b8 \u00b7 X u\u2208C u(v) Mi\u22121(\u2113T\u2264i\u22121(u)) (7) move one layer up: i = i +1 ; end end output \u2113(\u03c1) = sgn(Mt(\u2113T\u2264t(\u03c1))). Algorithm 3 can also be viewed as a weight-adjusted majority vote algorithm. 
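The only difference between the exact recursion (6) and its linearization (7) is whether a child's message passes through f_θ or is simply scaled by θ. A minimal Python sketch of both updates, assuming a nested-dict tree representation of our own (not from the paper):

```python
import math

def f_theta(theta, x):
    # Eq. (2): f_theta(x) = 2 * artanh(theta * tanh(x/2)); note |f'| <= theta.
    return 2.0 * math.atanh(theta * math.tanh(x / 2.0))

def message(node, theta, linearized):
    """Root message: the exact BP recursion (6) if linearized=False,
    the AMP recursion (7) if linearized=True. A node is a dict with
    'revealed' (list of +1/-1 labels of its labeled children) and
    'children' (list of unlabeled subtrees)."""
    w = math.log((1 + theta) / (1 - theta))
    m1 = sum(node["revealed"]) * w  # M_1 = (N(+) - N(-)) * log((1+theta)/(1-theta))
    child_msgs = [message(c, theta, linearized) for c in node["children"]]
    if linearized:
        return m1 + theta * sum(child_msgs)
    return m1 + sum(f_theta(theta, m) for m in child_msgs)

# Depth-2 toy tree: one revealed '+' child at the root, two unlabeled subtrees.
tree = {"revealed": [+1],
        "children": [{"revealed": [+1, +1], "children": []},
                     {"revealed": [+1, -1], "children": []}]}
bp = message(tree, theta=0.6, linearized=False)
amp = message(tree, theta=0.6, linearized=True)
assert bp > 0 and amp > 0  # both recursions label the root '+' here
```

Since f_θ(x) ≈ θx near the origin, the two recursions agree when messages are small, which is the regime in which the concentration analysis below operates.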
We will prove in the next two sections that BP and AMP achieve the same transition boundary in the following sense. Above a certain threshold, the AMP algorithm succeeds, which implies that the optimal BP algorithm will also work. Below the same threshold, even the optimal BP algorithm will fail, and so does the AMP algorithm.

3.3 Concentration Phenomenon on Messages

We now prove Theorem 2, which shows the concentration of measure phenomenon for messages defined on the broadcasting tree. We focus on the simpler case of regular local trees; the result will be generalized to Galton-Watson trees with a matching branching number. We state the result under the stronger condition δd = O(1). In the case δd = o(1), a separate trick of aggregating the δ information in a subtree, described in Remark 1 below, will work.

Theorem 2 (Concentration of Messages for AMP). Consider the Approximate Message Passing (AMP) Algorithm 3 on the binary-label broadcasting tree Tree_{k=2}(θ, d, δ). Assume δd = O(1). Define parameters {μ_t, σ_t²}_{t≥0} as

$$\mu_t = \mu_1 + \alpha\cdot\mu_{t-1}, \qquad (8)$$

$$\sigma_t^2 = \sigma_1^2 + \alpha\cdot\sigma_{t-1}^2 + \alpha\cdot\mu_{t-1}^2, \qquad (9)$$

with the initialization

$$\mu_1 = \theta\delta d\cdot\log\frac{1+\theta}{1-\theta}, \qquad \sigma_1^2 = \delta d\cdot\log^2\frac{1+\theta}{1-\theta}, \qquad \alpha := (1-\delta)\theta^2 d.$$

The explicit formulas for μ_t and σ_t² are

$$\mu_t = \frac{\alpha^t - 1}{\alpha - 1}\cdot\mu_1, \qquad (10)$$

$$\sigma_t^2 = \frac{\alpha^t - 1}{\alpha - 1}\cdot\sigma_1^2 + \frac{\frac{\alpha^{2t} - \alpha^{t+1} + \alpha^t - \alpha}{\alpha - 1} - 2(t-1)\alpha^t}{(\alpha - 1)^2}\cdot\mu_1^2.$$
(11)

For a certain depth t, conditionally on ℓ(ρ) = +, the messages in Algorithm 3 concentrate as

$$\mu_t - x\cdot\sigma_t \le M_t(\ell_{T_{\le t}}(\rho)) \le \mu_t + x\cdot\sigma_t,$$

and conditionally on ℓ(ρ) = −,

$$-\mu_t - x\cdot\sigma_t \le M_t(\ell_{T_{\le t}}(\rho)) \le -\mu_t + x\cdot\sigma_t,$$

both with probability at least 1 − 2 exp(−x²/2).

Using Theorem 2, we establish the following positive result for approximate message passing.

Corollary 2.1 (Recovery Proportions for AMP, α > 1). Assume α := (1−δ)θ²d > 1, and for any t define

$$\epsilon(t) = \frac{(\alpha-1)^2}{\theta^2\delta d}\cdot\frac{1}{\alpha^t - 1} + O(\alpha^{-t}), \qquad \text{with } \lim_{t\to\infty}\epsilon(t) = 0.$$

Algorithm 3 recovers the label of the root node with probability at least

$$1 - \exp\left(-\frac{\alpha - 1}{2(1+\epsilon(t))}\right),$$

and its computational complexity is

$$O\left(\frac{(\delta d + 1)[(1-\delta)d]^t - d}{(1-\delta)d - 1}\right).$$

Remark 1. For the sparse case δd = o(1), we employ the following technique. Take t₀ > 0 to be the smallest integer such that δ[(1−δ)d]^{t₀} > 1. For each leaf node v, open a depth-t₀ subtree rooted at v, with the number of labeled nodes Poisson(δ[(1−δ)d]^{t₀}). Then we have the following parameter updating rule

$$\mu_t = \alpha\cdot\mu_{t-1}, \qquad \sigma_t^2 = \alpha\cdot\sigma_{t-1}^2 + \alpha\cdot\mu_{t-1}^2,$$

with initialization

$$\mu_1 = \theta^{t_0}\cdot\log\frac{1+\theta}{1-\theta}, \qquad \sigma_1^2 = \log^2\frac{1+\theta}{1-\theta}, \qquad \alpha := (1-\delta)\theta^2 d.$$

The explicit formulas for μ_t and σ_t² based on the above updating rules are

$$\mu_t = \alpha^{t-1}\cdot\mu_1, \qquad \sigma_t^2 = \alpha^{t-1}\cdot\sigma_1^2 + \frac{\alpha^{t-1}(\alpha^{t-1} - 1)}{\alpha - 1}\cdot\mu_1^2.$$

Corollary 2.1 changes as follows: the value ε(t) is now

$$\epsilon(t) = \frac{1}{\theta^{2t_0}}\cdot\frac{1}{\alpha^t - 1}, \qquad \text{with } \lim_{t\to\infty}\epsilon(t) = 0.$$
This slightly modi\ufb01ed algorithm recovers the label of the root node with probability at least 1\u2212exp \u00b3 \u2212 \u03b1\u22121 2(1+\u03f5(t)) \u00b4 . 3.4 Lower Bound for Local Algorithms: Le Cam\u2019s Method In this section we show that the SNR threshold in Theorem 1 and Corollary 2.1 is sharp for all local algorithms. The limitation for local algorithms is proved along the lines of Le Cam\u2019s method. If we can show a small upper bound on total variation distance between two tree measures \u00b5\u2113\u2264t(+),\u00b5\u2113\u2264t(\u2212), then no algorithm utilizing the information on the tree can distinguish these two measures well. Theorem 3 formalizes this idea. Theorem 3 (Limits of Local Algorithms). Consider the following two measures of revealed labels de\ufb01ned on trees: \u00b5+ \u2113T\u2264t (\u03c1),\u00b5\u2212 \u2113T\u2264t (\u03c1). Assume that \u03b4d > 1, (1\u2212\u03b4)\u03b82d < 1, and 2\u03b4d log \u00b3 1+ 4\u03b82 1\u2212\u03b82 \u00b4 < [1\u2212(1\u2212\u03b4)\u03b82d]2. Then for any t > 0, the following bound on total variation holds d2 TV \u00b3 \u00b5+ \u2113T\u2264t (\u03c1),\u00b5\u2212 \u2113T\u2264t (\u03c1) \u00b4 \u2264 2\u03b4d log \u00b3 1+ 4\u03b82 1\u2212\u03b82 \u00b4 1\u2212(1\u2212\u03b4)\u03b82d . 13 \fThe above bound implies inf \u03a6 sup l(\u03c1)\u2208{+,\u2212} P \u00a1 \u03a6(\u2113T\u2264t(\u03c1)) \u0338= \u2113(\u03c1) \u00a2 \u22651 2 \u2212C \u00b7 \uf8f1 \uf8f2 \uf8f3 \u03b4d log \u00b3 1+ 4\u03b82 1\u2212\u03b82 \u00b4 1\u2212(1\u2212\u03b4)\u03b82d \uf8fc \uf8fd \uf8fe 1/2 , where \u03a6 : \u2113T \u2264t(\u03c1) \u2192{+,\u2212} is any estimator mapping the revealed labels in the local tree to a decision, and C > 0 is some universal constant. We defer the proof of the Theorem 3 to Section 6. Theorem 3 assures the optimality of Algorithm 3. 4 Growing Number of Communities In this section, we extend the algorithmic and theoretical results to p-SBM with general k. 
There is a distinct difference between the case of large k and k = 2: there is a factor gap between the boundary achievable by local and global algorithms. The main Algorithm that solves p-SBM for general k is still Algorithm 1, but this time it takes Algorithm 4 as a subroutine. We will \ufb01rst state Theorem 4, which summarizes the main result. 4.1 p-SBM Transition Thresholds The transition boundary for partially labeled stochastic block model depends on the critical value SNR de\ufb01ned in Equation (1). Theorem 4 (Transition Thresholds for p-SBM: general k). Assume (1) np \u224dnq \u227eno(1), (2) \u03b4 \u227fn\u2212o(1), (3) k \u227eno(1), and consider the partially labeled stochastic block model G(V,E) and the revealed labels \u2113(V l). For any node \u03c1 \u2208V u and its locally tree-like neighborhood T\u2264\u00af t(\u03c1), de\ufb01ne the maximum mis-classi\ufb01cation error for a local estimator \u03a6 : \u2113T\u2264\u00af t(\u03c1) \u2192[k] as Err(\u03a6) := max l\u2208[k] P \u00a1 \u03a6(\u2113T\u2264t(\u03c1)) \u0338= \u2113(\u03c1)|\u2113(\u03c1) = l \u00a2 . On the one hand, if SNR > 1, (12) the \u00af tlocal message-passing Algorithm 1, denoted by \u02c6 A(\u2113T\u2264\u00af t(\u03c1)), recovers the true labels of the nodes, with mis-classi\ufb01cation rate at most Err( \u02c6 A) \u2264(k \u22121)exp \u00b5 \u2212SNR\u22121 2C +o\u00af t(1) \u00b6 \u2227k \u22121 k , (13) where C \u22611 if the local tree is regular. On the other hand, if SNR < 1 4, (14) for any \u00af t-local estimator \u03a6 : \u2113T\u2264\u00af t(\u03c1) \u2192[k], the minimax mis-classi\ufb01cation error is lower bounded as inf \u03a6 Err(\u03a6) \u22651 2 \u00b5 1\u2212C \u00b7 \u03b4 1\u2212\u03b4 \u00b7 SNR 1\u22124\u00b7SNR \u00b7 (p + q)(q +(p \u2212q)/k) pq \u22121 k \u00b6 > 1 2 \u2212C \u2032 \u03b4 1\u2212\u03b4 \u00b7 SNR 1\u22124\u00b7SNR \u22281 k , where C = C \u2032 \u22611 if the local tree is regular. 
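The upper bound in Eq. (13) is easy to evaluate numerically; a small sketch of ours (dropping the o_{t̄}(1) term and taking C = 1, so this is an illustration of the bound's shape rather than the exact guarantee):

```python
import math

def err_upper_bound(snr, k, C=1.0):
    """Eq. (13) without the o(1) term: the mis-classification rate is at
    most (k - 1) * exp(-(SNR - 1)/(2C)), capped at (k - 1)/k."""
    return min((k - 1) * math.exp(-(snr - 1.0) / (2.0 * C)), (k - 1) / k)

# At the threshold SNR = 1 the bound is vacuous: (k - 1)/k, the error of
# guessing a label uniformly at random among k classes.
assert err_upper_bound(1.0, k=4) == 3 / 4
# Well above the threshold the bound decays exponentially in SNR.
assert err_upper_bound(1.0 + 2 * math.log(1000), k=4) < 0.0031
```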
14 \fWhen \u03b4 = o(1) and k > 2, the above lower bound says that no local algorithm (that uses information up to depth \u00af t) can consistently estimate the labels with vanishing error. As we did for k = 2, let us compare the main result for p-SBM with the well-known result for the standard SBM with no partial label information. The boundary in Equation (12) matches the detection bound in (Abbe and Sandon, 2015b) for standard SBM when we plug in \u03b4 = 0, which also matches the well-known Kesten-Stigum (K-S) bound. In contrast to the case k = 2, it is known that the K-S bound is not sharp when k is large, i.e., there exists an algorithm which can succeed below the K-S bound. A natural question is whether K-S bound is sharp within a certain local algorithm class. As we show in Equation (14), below a quarter of the K-S bound, the distributions (indexed by the root label) on the revealed labels for the local tree are bounded in the total variation distance sense, implying that no local algorithm can signi\ufb01cantly push below the K-S bound. In summary, 1/4 \u2264SNR \u22641 is the limitation for local algorithms. Remarkably, it is known in the literature (Chen and Xu, 2014; Abbe and Sandon, 2015b) that information-theoretically the limitation for global algorithms is SNR = O\u2217(1/k). This suggests a possible computational and statistical gap as k grows. 4.2 Belief Propagation & Message Passing In this section, we investigate the message-passing Algorithm 4 for p-SBM with k blocks, corresponding to multi-label broadcasting trees. Denote X (i) t (\u2113T\u2264t(v)) = P \u00a1 \u2113(v) = i|\u2113T\u2264t(v) \u00a2 . For u \u2208C (v), P(\u2113(u) = \u2113(v)|\u2113(v)) = \u03b8 + 1\u2212\u03b8 k P(\u2113(u) = l \u2208[k]\\\u2113(v)|\u2113(v)) = 1\u2212\u03b8 k . For any j \u0338= i \u2208[k] and general t, the following Lemma describes the recursion arising from the Bayes theorem. Lemma 1. 
It holds that log X (i) t (\u2113T\u2264t(v)) X (j) t (\u2113T\u2264t(v)) = log X (i) 1 (\u2113T1(v)) X (j) 1 (\u2113T1(v)) + X u\u2208C u(v) log 1+ k\u03b8 1\u2212\u03b8 X (i) t\u22121(\u2113T\u2264t\u22121(u)) 1+ k\u03b8 1\u2212\u03b8 X (j) t\u22121(\u2113T\u2264t\u22121(u)) . The above belief propagation formula for X (i) t (\u2113T\u2264t(v)) is exact. However, it turns out analyzing the density of X (i) t (\u2113T\u2264t(v)) is hard. Inspired by the \u201clinearization\u201d trick for k = 2, we analyze the following linearized message-passing algorithm. 15 \fAlgorithm 4: Approximate Message Passing on Partially Labeled k-Broadcasting Tree Data: A partially labeled tree T\u2264t(\u03c1) with depth t and labels \u2113T\u2264t(\u03c1), \ufb01xed j \u2208[k]. Result: The messages M(i\u2192j) t (\u2113T\u2264t(v)), for any i \u2208[k]/j. initialization: s = 1, and M0(\u2113T\u22640(v)) = 0,M(i\u2192j) 1 (\u2113T\u22641(v)) = \u00a1 NC l(v)(i)\u2212NC l(v)(j) \u00a2 log \u00b3 1+ k\u03b8 1\u2212\u03b8 \u00b4 , \u2200v,i \u0338= j; while s \u2264t do focus on (t \u2212s)-th layer; for v \u2208Ct\u2212s(\u03c1) and v unlabeled do update messages for the subtree: M(i\u2192j) s (\u2113T\u2264s(v)) = M(i\u2192j) 1 (\u2113T1(v))+\u03b8 \u00b7 P u\u2208C u(v) M(i\u2192j) s\u22121 (\u2113T\u2264s\u22121(u)); move one layer up: s = s +1 ; end end If maxi\u2208[k]/j M(i\u2192j) t (\u2113T\u2264t(\u03c1)) > 0, output \u2113(\u03c1) = argmaxi\u2208[k]/j M(i\u2192j) t (\u2113T\u2264t(\u03c1)); Else output \u2113(\u03c1) = j. For p-SBM with k blocks, Algorithm 1, which uses the above Algorithm 4 as a sub-routine, will succeed in recovering the labels in the regime above the threshold (12). The theoretical justi\ufb01cation is given in the following sections. 4.3 Concentration Phenomenon on Messages As in the case k = 2, here we provide the concentration result on the distribution of approximate messages recursively calculated based on the tree. 
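The concentration parameters here follow the same recursion as Eqs. (8)-(9) in the binary case, and the recursion can be cross-checked against the closed form (10) numerically; a sketch with our own helper name and illustrative parameter values:

```python
def iterate_moments(mu1, s21, alpha, t):
    """Iterate mu_t = mu1 + alpha*mu_{t-1} and
    sigma2_t = s21 + alpha*sigma2_{t-1} + alpha*mu_{t-1}^2 (Eqs. (8)-(9))."""
    mu, s2 = mu1, s21
    for _ in range(t - 1):
        # simultaneous update: s2 uses the previous mu, as in the recursion
        mu, s2 = mu1 + alpha * mu, s21 + alpha * s2 + alpha * mu ** 2
    return mu, s2

mu1, s21, alpha, t = 0.5, 0.3, 5.76, 6  # illustrative values, alpha > 1
mu, s2 = iterate_moments(mu1, s21, alpha, t)
# Closed form for the mean, Eq. (10): mu_t = (alpha^t - 1)/(alpha - 1) * mu1
closed = (alpha ** t - 1) / (alpha - 1) * mu1
assert abs(mu - closed) < 1e-9 * closed
assert s2 > 0
```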
Theorem 5 (Concentration of Messages for k-AMP , (1 \u2212\u03b4)\u03b82d > 1). Consider the Approximate Message Passing (AMP) Algorithm 4 on the k-label broadcasting tree Treek(\u03b8,d,\u03b4). Assume \u03b4d = O(1). With the initial values \u00b51 = \u03b8\u03b4d \u00b7log \u00b5 1+ k\u03b8 1\u2212\u03b8 \u00b6 , \u03c32 1 = \u03b4d \u00b7log2 \u00b5 1+ k\u03b8 1\u2212\u03b8 \u00b6 and the factor parameter \u03b1 := (1\u2212\u03b4)\u03b82d, the recursion of the parameters \u00b5t, \u03c32 t follows as in Eq. (8). For a certain depth t, conditionally on \u2113(v) = l, the moment generating function for M(i\u2192j) t (\u2113T\u2264t(v)) is upper bounded as \u03a8M(i\u2192j) t (\u03bb) \u2264 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 e\u03bb\u00b5t e \u03bb2\u03c32 t 2 , i = l e \u03bb2\u03c32 t 2 , i, j \u0338= l e\u2212\u03bb\u00b5t e \u03bb2\u03c32 t 2 , j = l The message-passing Algorithm 4 succeeds in recovering the label with probability at least 1\u2212(k\u22121)exp \u00b3 \u2212 \u03b1 2(1+o(1)) \u00b4 when (1\u2212\u03b4)\u03b82d > 1. 16 \fAgain, from Theorem 5 we can easily get the following recovery proportion guarantee. For the messagepassing Algorithm 3, assume \u03b1 := (1\u2212\u03b4)\u03b82d > 1, and de\ufb01ne for any t \u03f5(t) = (\u03b1\u22121)2 \u03b82\u03b4d 1 \u03b1t \u22121 +O(\u03b1\u2212t), with lim t\u2192\u221e\u03f5(t) = 0. Then Algorithm 4 recovers the label of the root node with probability at least 1\u2212(k \u22121)exp \u00b5 \u2212(1\u2212\u03b4)\u03b82d \u22121 2(1+\u03f5(t)) \u00b6 with time complexity O \u00b5 (k \u22121)(\u03b4d +1)[(1\u2212\u03b4)d]t \u2212d (1\u2212\u03b4)d \u22121 \u00b6 . 4.4 Multiple Testing Lower Bound on Local Algorithm Class We conclude the theoretical study with a lower bound for local algorithms for k-label broadcasting trees. 
We bound the distributions of leaf labels, indexed by different root colors and show that in total variation distance, the distributions are indistinguishable (below the threshold in equation (14)) from each other as \u03b4 vanishes. Theorem 6 (Limitation for Local Algorithms). Consider the following measures of revealed labels de\ufb01ned on trees indexed by the root\u2019s label: \u00b5(i) \u2113T\u2264t (\u03c1),i \u2208[k]. Assume \u03b4d > 1, (1\u2212\u03b4)\u03b82d < 1/4 and 2\u03b4d log \u00c3 1+\u03b82 \u00c3 1 \u03b8 + 1\u2212\u03b8 k + 1 1\u2212\u03b8 k !! < [1\u22124(1\u2212\u03b4)\u03b82d]2. Then for any t > 0, the following bound on the \u03c72 distance holds: max i,j\u2208[k]log \u00b3 1+d\u03c72 \u00b3 \u00b5(i) \u2113T\u2264t (\u03c1),\u00b5(j) \u2113T\u2264t (\u03c1) \u00b4\u00b4 \u2264 2\u03b4d log \u00b5 1+\u03b82 \u00b5 1 \u03b8+ 1\u2212\u03b8 k + 1 1\u2212\u03b8 k \u00b6\u00b6 1\u22124(1\u2212\u03b4)\u03b82d \u2264k \u00b7 2\u03b4\u03b82d 1\u22124(1\u2212\u03b4)\u03b82d \u00b5 1 1\u2212\u03b8 + 1 k\u03b8 +1\u2212\u03b8 \u00b6 . Furthermore, it holds that inf \u03a6 sup l(\u03c1)\u2208[k] P \u00a1 \u03a6(\u2113T\u2264t(\u03c1)) \u0338= \u2113(\u03c1) \u00a2 \u22651 2 \u00b5 1\u2212 2\u03b4\u03b82d 1\u22124(1\u2212\u03b4)\u03b82d \u00b5 1 1\u2212\u03b8 + 1 k\u03b8 +1\u2212\u03b8 \u00b6 \u22121 k \u00b6 , where \u03a6 : \u2113T \u2264t(\u03c1) \u2192[k] is any local estimator mapping from the revealed labels to a decision. The proof is based on a multiple testing argument in Le Cam\u2019s minimax lower bound theory. We would like to remark that condition 4\u00b7(1\u2212\u03b4)\u03b82d < 1 can be relaxed to (1\u2212\u03b4)\u03b82d \u00b7 \u00b5 1+3(1\u2212\u03b8)(1\u22122 k ) \u00b6 < 1. 17 \f5 Numerical Studies In this section we apply our approximate message-passing Algorithm 1 to the political blog dataset (Adamic and Glance, 2005), with a total of 1222 nodes. 
In the literature, the state-of-the-art result for a global algorithm appears in (Jin, 2015), where the mis-classification rate is 58/1222 = 4.75%. Here we run our message-passing Algorithm 1 with three different settings δ = 0.1, 0.05, 0.025, replicating each experiment 50 times (we sample the revealed nodes independently in the 50 experiments for each δ specification). As a benchmark, we compare our results to the spectral algorithm on the (1−δ)n sub-network. For our message-passing algorithm, we look at the local tree with depth 1 to 5. The results are summarized as boxplots in Figure 3. The left figure illustrates the comparison of AMP with depth 1 to 5 and the spectral algorithm, with red, green, and blue boxes corresponding to δ = 0.025, 0.05, 0.1, respectively. The right figure zooms in on the left plot with only AMP depth 2 to 4 and spectral, to better emphasize the difference. Remark that if we only look at depth 1, some of the nodes may have no revealed neighbors; in this setting, we classify such a node as wrong (this explains why the depth-1 error can be larger than 1/2).

Figure 3: AMP algorithm on the Political Blog Dataset.

We present in this paragraph some statistics of the experiments, extracted from Figure 3. In the case δ = 0.1, for depths 2-4, the AMP algorithm produces mis-classification error rates (we took the median over the experiments for robustness) of 6.31%, 5.22%, and 5.01%, while the spectral algorithm produces an error rate of 6.68%. When δ = 0.05, i.e., about 60 node labels revealed, the error rates are 7.71%, 5.44%, and 5.08% for the AMP algorithm with depth 2 to 4, contrasted with the spectral algorithm error of 6.66%.
In a more extreme case \u03b4 = 0.025 when there are only 30 node labels revealed, AMP depth 2-4 has error 10.20%,5.71%,5.66%, while spectral is 6.63%. In general, the AMP algorithm with depth 3-4 uniformly beats the vanilla spectral algorithm. Note our AMP algorithm is a distributed decentralized algorithm that can be run in parallel. We acknowledge that the error \u223c5% (when \u03b4 is very small) is still slightly worse than the state-of-the-art degree-corrected SCORE algorithm in (Jin, 2015), which is 4.75%. 18 \f6 Technical Proofs We will start with two useful Lemmas. Lemma 2 couples the local behavior of a stochastic block model with that of a Galton-Watson branching process. Lemma 3 is the well-known Hoeffding\u2019s inequality. Lemma 2 (Proposition 4.2 in (Mossel et al., 2012)). Take t = \u00af tn,k,p,q \u227e logn log[kn(q+ p\u2212q k )]. There exists a coupling between (G,\u03c3) and (T,\u2113) such that (G\u2264t,\u03c3G\u2264t ) = (T\u2264t,\u2113T\u2264t ) asymptotically almost surely. Here (T,\u2113) corresponds to the broadcast process on a Galton-Watson tree process T with offspring distribution Poisson \u00a1 n(q + p\u2212q k ) \u00a2 , and (G,\u03c3) corresponds to the SBM and its labels. Lemma 3 (Hoeffding\u2019s Inequality). Let X be any real-valued random variable with expected value EX = 0 and such that a \u2264X \u2264b almost surely. Then, for all \u03bb > 0, E h e\u03bbX i \u2264exp \u00b5\u03bb2(b \u2212a)2 8 \u00b6 . Let us now derive the algorithms for belief propagation. Derivation of Belief Propagation, k = 2. The algorithm we rely on is Belief Propagation (or MAP). We recursively calculate the posterior probability P \u00a1 \u2113(\u03c1) = +|\u2113T\u2264t(\u03c1) \u00a2 backwards from the leaf of the tree. 
The recursion from depth $t$ to $t-1$ follows from Bayes' theorem:
$$\mathbb{P}\big(\ell(\rho)=+\,\big|\,\ell_{T_{\le t}}(\rho)\big) = \frac{\mathbb{P}\big(\ell_{T_{\le t}}(\rho)\,\big|\,\ell(\rho)=+\big)\,\mathbb{P}(\ell(\rho)=+)}{\mathbb{P}\big(\ell_{T_{\le t}}(\rho)\,\big|\,\ell(\rho)=+\big)\,\mathbb{P}(\ell(\rho)=+) + \mathbb{P}\big(\ell_{T_{\le t}}(\rho)\,\big|\,\ell(\rho)=-\big)\,\mathbb{P}(\ell(\rho)=-)}$$
$$= \frac{\prod_{v\in C^{l}(\rho)} \mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=+\big) \prod_{v\in C^{u}(\rho)} \mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=+\big)}{\sum_{s\in\{+,-\}}\prod_{v\in C^{l}(\rho)} \mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=s\big) \prod_{v\in C^{u}(\rho)} \mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=s\big)}.$$
Here we denote, for the $\delta d$ labeled nodes,
$$M_1(\ell_{T_1}(\rho)) := \log\frac{\mathbb{P}\big(\ell(\rho)=+\,\big|\,\ell_{T_{\le 1}}(\rho)\big)}{\mathbb{P}\big(\ell(\rho)=-\,\big|\,\ell_{T_{\le 1}}(\rho)\big)} = \log\frac{\prod_{v\in C^{l}(\rho)}\mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=+\big)}{\prod_{v\in C^{l}(\rho)}\mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=-\big)} = \Big(N_{C^{l}(\rho)}(+) - N_{C^{l}(\rho)}(-)\Big)\log\frac{1+\theta}{1-\theta},$$
where $N_{C^{l}(\rho)}(+) \sim \mathrm{Binom}\big(\delta d, \frac{1+\theta}{2}\big)$ and $N_{C^{l}(\rho)}(-) = \delta d - N_{C^{l}(\rho)}(+)$.
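Conditionally on $\ell(\rho)=+$, the initial message has mean $\mathbb{E}[M_1] = \delta d\,\theta\log\frac{1+\theta}{1-\theta}$ (the quantity $\mu_1$ appearing later in the proof of Theorem 2). The following sketch checks this closed form against a direct expectation over the Binomial pmf; the parameter values and function names are illustrative choices of ours.

```python
import math

def binom_pmf(n, k, p):
    """P(Binom(n, p) = k)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def mean_initial_message(delta_d, theta):
    """Exact E[M_1 | root label +], summing over the Binomial law of N_+."""
    w = math.log((1 + theta) / (1 - theta))
    p = (1 + theta) / 2
    return sum(
        binom_pmf(delta_d, k, p) * (k - (delta_d - k)) * w
        for k in range(delta_d + 1)
    )

theta, delta_d = 0.4, 20
exact = mean_initial_message(delta_d, theta)
closed_form = delta_d * theta * math.log((1 + theta) / (1 - theta))
print(exact, closed_form)  # the two values agree
```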
For the unlabeled nodes $v \in C^{u}(\rho)$, define
$$M_{t-1}(\ell_{T_{\le t-1}}(v)) := \log\frac{\mathbb{P}\big(\ell(v)=+\,\big|\,\ell_{T_{\le t-1}}(v)\big)}{\mathbb{P}\big(\ell(v)=-\,\big|\,\ell_{T_{\le t-1}}(v)\big)} = \log\frac{1+R_{t-1}(v)}{1-R_{t-1}(v)}, \quad\text{where } R_{t-1}(v) := \frac{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=+\big)-\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=-\big)}{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=+\big)+\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=-\big)},$$
which means $R_{t-1}(v) = \tanh\big(M_{t-1}(\ell_{T_{\le t-1}}(v))/2\big)$. Now we have
$$\log\frac{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=+\big)}{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=-\big)} = \log\frac{\mathbb{P}\big(\ell_{T_{\le t-1}}(v),\ell(v)=+\,\big|\,\ell(\rho)=+\big)+\mathbb{P}\big(\ell_{T_{\le t-1}}(v),\ell(v)=-\,\big|\,\ell(\rho)=+\big)}{\mathbb{P}\big(\ell_{T_{\le t-1}}(v),\ell(v)=+\,\big|\,\ell(\rho)=-\big)+\mathbb{P}\big(\ell_{T_{\le t-1}}(v),\ell(v)=-\,\big|\,\ell(\rho)=-\big)}$$
$$= \log\frac{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=+\big)\cdot\frac{1+\theta}{2}+\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=-\big)\cdot\frac{1-\theta}{2}}{\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=+\big)\cdot\frac{1-\theta}{2}+\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=-\big)\cdot\frac{1+\theta}{2}} = \log\frac{1+\theta\,R_{t-1}(v)}{1-\theta\,R_{t-1}(v)}$$
$$= \log\frac{1+\theta\tanh\big(M_{t-1}(\ell_{T_{\le t-1}}(v))/2\big)}{1-\theta\tanh\big(M_{t-1}(\ell_{T_{\le t-1}}(v))/2\big)} =: f_\theta\big(M_{t-1}(\ell_{T_{\le t-1}}(v))\big).$$
Thus we have the recursion
$$M_t(\ell_{T_{\le t}}(\rho)) = M_1(\ell_{T_1}(\rho)) + \sum_{v\in C^{u}(\rho)} f_\theta\big(M_{t-1}(\ell_{T_{\le t-1}}(v))\big), \qquad |C^{u}(\rho)| = (1-\delta)d.$$
The quantity $M_t(\ell_{T_{\le t}}(\rho))$ can be viewed as the message (logit) for the root label $\ell(\rho)$ of the depth-$t$ tree rooted at $\rho$, and $M_1(\ell_{T_1}(\rho))$ denotes the message from the labels in the first layer below $\rho$. These messages encode the belief about the label of $\rho$ based on the revealed labels $\ell_{T_{\le t}}(\rho)$.

Derivation of Lemma 1 (BP, general $k$). Note that the $X^{(i)}_{t-1}(\ell_{T_{\le t-1}}(u))$ are $(1-\delta)d$ i.i.d. random variables conditionally on $\ell(v)$. The initial message is
$$\log\frac{X^{(i)}_1(\ell_{T_1}(v))}{X^{(j)}_1(\ell_{T_1}(v))} = \big(N_{C^{l}(v)}(i) - N_{C^{l}(v)}(j)\big)\cdot\log(1+\theta).$$
In the case $\ell(v) = l$, we have
$$N_{C^{l}(v)}(l) \sim \mathrm{Binom}\Big(\delta d,\ \theta+\frac{1-\theta}{k}\Big), \qquad N_{C^{l}(v)}(i) \sim \mathrm{Binom}\Big(\delta d,\ \frac{1-\theta}{k}\Big),\quad i\in[k]\setminus\{l\}.$$
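The $k=2$ recursion $M_t = M_1 + \sum_v f_\theta(M_{t-1}(v))$ can be sketched directly. The snippet below is a minimal illustration (function names are ours, not the paper's): it implements $f_\theta$ and one recursion step, and the key structural fact that $f_\theta$ is odd, so evidence for $+$ and $-$ propagates symmetrically.

```python
import math

def f_theta(m, theta):
    """One-step BP update for k = 2: the log-likelihood-ratio contribution
    of an unlabeled child carrying message m."""
    r = theta * math.tanh(m / 2.0)
    return math.log((1 + r) / (1 - r))

def bp_message(m1, child_messages, theta):
    """M_t = M_1 + sum over unlabeled children of f_theta(M_{t-1})."""
    return m1 + sum(f_theta(m, theta) for m in child_messages)

print(f_theta(0.0, 0.3))                       # 0.0 (no evidence in, none out)
print(bp_message(1.0, [2.0, -2.0, 0.5], 0.3))  # labeled-layer evidence + net child evidence
```

By oddness of $\tanh$, two children carrying equal and opposite messages cancel exactly, leaving only the labeled-layer term $M_1$.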
Then
$$\log\frac{X^{(i)}_t(\ell_{T_{\le t}}(\rho))}{X^{(j)}_t(\ell_{T_{\le t}}(\rho))} = \log\frac{\mathbb{P}\big(\ell_{T_{\le t}}(\rho)\,\big|\,\ell(\rho)=i\big)}{\mathbb{P}\big(\ell_{T_{\le t}}(\rho)\,\big|\,\ell(\rho)=j\big)} = \log\frac{\prod_{v\in C^{l}(\rho)}\mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=i\big)\prod_{v\in C^{u}(\rho)}\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=i\big)}{\prod_{v\in C^{l}(\rho)}\mathbb{P}\big(\ell(v)\,\big|\,\ell(\rho)=j\big)\prod_{v\in C^{u}(\rho)}\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(\rho)=j\big)}$$
$$= \log\frac{X^{(i)}_1(\ell_{T_1}(\rho))}{X^{(j)}_1(\ell_{T_1}(\rho))} + \sum_{v\in C^{u}(\rho)}\log\frac{\theta\cdot\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=i\big)+\frac{1-\theta}{k}\sum_{l\in[k]}\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=l\big)}{\theta\cdot\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=j\big)+\frac{1-\theta}{k}\sum_{l\in[k]}\mathbb{P}\big(\ell_{T_{\le t-1}}(v)\,\big|\,\ell(v)=l\big)} = \log\frac{X^{(i)}_1(\ell_{T_1}(\rho))}{X^{(j)}_1(\ell_{T_1}(\rho))} + \sum_{v\in C^{u}(\rho)}\log\frac{1+\theta X^{(i)}_{t-1}(\ell_{T_{\le t-1}}(v))}{1+\theta X^{(j)}_{t-1}(\ell_{T_{\le t-1}}(v))}.$$

Now we are ready to prove the main theoretical results. We first focus on the case $k=2$ and prove the broadcasting-tree versions of Theorem 2 and Theorem 3, under the assumption that the tree is regular. Later, based on these two theorems, Theorem 1 for the p-SBM ($k=2$) is proved.

Proof of Theorem 2. We focus on a regular tree where each node has $(1-\delta)d$ unlabeled children and $\delta d$ labeled children. For $t=1$, the result follows from Hoeffding's inequality directly, because
$$M_1(\ell_{T_1}(\rho)) = \big(N_{C^{l}(\rho)}(+) - N_{C^{l}(\rho)}(-)\big)\log\frac{1+\theta}{1-\theta}.$$
We use induction to prove the remaining claim. Assume that for a tree of depth $t-1$ rooted at $u$, for any $\lambda > 0$,
$$\mathbb{E}\big[e^{\lambda M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(u)=+\big] \le e^{\lambda\mu_{t-1}}\cdot e^{\frac{\lambda^2}{2}\sigma^2_{t-1}}, \qquad \mathbb{E}\big[e^{\lambda M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(u)=-\big] \le e^{-\lambda\mu_{t-1}}\cdot e^{\frac{\lambda^2}{2}\sigma^2_{t-1}}.$$
These bounds further imply that, conditionally on $\ell(u)=+$, $M_{t-1}(\ell_{T_{\le t-1}}(u)) \in \mu_{t-1} \pm x\cdot\sigma_{t-1}$; and conditionally on $\ell(u)=-$, $M_{t-1}(\ell_{T_{\le t-1}}(u)) \in -\mu_{t-1} \pm x\cdot\sigma_{t-1}$; both with probability at least $1-2\exp(-x^2/2)$.

Now recall the recursion for AMP:
$$M_t(\ell_{T_{\le t}}(v)) = M_1(\ell_{T_1}(v)) + \theta\cdot\sum_{u\in C^{u}(v)} M_{t-1}(\ell_{T_{\le t-1}}(u)).$$
For the moment generating function we have
$$\mathbb{E}\big[e^{\lambda M_t(\ell_{T_{\le t}}(v))}\,\big|\,\ell(v)=+\big] \le e^{\lambda\big(\theta\delta d\log\frac{1+\theta}{1-\theta}\big)}\,e^{\frac{\lambda^2}{2}\big(\sqrt{\delta d}\,\log\frac{1+\theta}{1-\theta}\big)^2}\cdot\prod_{u\in C^{u}(v)}\mathbb{E}\big[e^{\lambda\theta M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(v)=+\big] = e^{\lambda\mu_1}e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot\prod_{u\in C^{u}(v)}\mathbb{E}\big[e^{\lambda\theta M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(v)=+\big].$$
The last factor can be written as
$$\mathbb{E}\big[e^{\lambda\theta M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(v)=+\big] \le e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot\Big\{\frac{1+\theta}{2}e^{\lambda\theta\mu_{t-1}}+\frac{1-\theta}{2}e^{-\lambda\theta\mu_{t-1}}\Big\} \qquad (15)$$
$$\le e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot e^{\lambda\theta\big(\frac{1+\theta}{2}\mu_{t-1}-\frac{1-\theta}{2}\mu_{t-1}\big)}\cdot e^{\frac{\lambda^2\theta^2}{2}\mu^2_{t-1}} \qquad (16)$$
$$= e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot e^{\lambda\theta^2\mu_{t-1}}\cdot e^{\frac{\lambda^2\theta^2}{2}\mu^2_{t-1}},$$
where the step from (15) to (16) relies on Hoeffding's lemma: for a random variable $Y$ with $Y=\theta\mu_{t-1}$ with probability $\frac{1+\theta}{2}$ and $Y=-\theta\mu_{t-1}$ with probability $\frac{1-\theta}{2}$,
$$\Psi_Y(\lambda) = \mathbb{E}e^{\lambda Y} \le e^{\lambda\mathbb{E}Y}e^{\frac{\lambda^2}{2}\theta^2\mu^2_{t-1}} = e^{\lambda\big(\frac{1+\theta}{2}\theta\mu_{t-1}-\frac{1-\theta}{2}\theta\mu_{t-1}\big)}e^{\frac{\lambda^2}{2}\theta^2\mu^2_{t-1}}.$$
Thus, with $\alpha := (1-\delta)d\theta^2$,
$$\mathbb{E}\big[e^{\lambda M_t(\ell_{T_{\le t}}(v))}\,\big|\,\ell(v)=+\big] \le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{\lambda\mu_1}\cdot\Big\{e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot e^{\lambda\theta^2\mu_{t-1}}\cdot e^{\frac{\lambda^2\theta^2}{2}\mu^2_{t-1}}\Big\}^{(1-\delta)d} = e^{\lambda(\mu_1+\alpha\mu_{t-1})}\cdot e^{\frac{\lambda^2}{2}(\sigma_1^2+\alpha\sigma^2_{t-1}+\alpha\mu^2_{t-1})} = e^{\lambda\mu_t}\cdot e^{\frac{\lambda^2}{2}\sigma_t^2}.$$
When $\ell(v)=-$, we have
$$\mathbb{E}\big[e^{\lambda M_t(\ell_{T_{\le t}}(v))}\,\big|\,\ell(v)=-\big] \le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{-\lambda\mu_1}\cdot\prod_{u\in C^{u}(v)}\mathbb{E}\big[e^{\lambda\theta M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(v)=-\big]$$
$$\le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{-\lambda\mu_1}\cdot\Big\{e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot\Big(\frac{1+\theta}{2}e^{-\lambda\theta\mu_{t-1}}+\frac{1-\theta}{2}e^{\lambda\theta\mu_{t-1}}\Big)\Big\}^{(1-\delta)d} \le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{-\lambda\mu_1}\cdot\Big\{e^{\frac{\lambda^2\theta^2}{2}\sigma^2_{t-1}}\cdot e^{-\lambda\theta^2\mu_{t-1}}\cdot e^{\frac{\lambda^2\theta^2}{2}\mu^2_{t-1}}\Big\}^{(1-\delta)d} = e^{-\lambda\mu_t}\cdot e^{\frac{\lambda^2}{2}\sigma_t^2}.$$
This completes the proof.

Proof of Theorem 3. Define the measure $\mu^{+}_{\ell_{T_{\le t}}(\rho)}$ on the revealed labels, for a depth-$t$ tree rooted at $\rho$ with label $\ell(\rho)=+$ (and similarly define $\mu^{-}_{\ell_{T_{\le t}}(\rho)}$). We have the following recursion formula:
$$\mu^{+}_{\ell_{T_{\le t}}(\rho)} = \Big(\frac{1+\theta}{2}\Big)^{N_{C^{l}}(\rho)}\Big(\frac{1-\theta}{2}\Big)^{\delta d-N_{C^{l}}(\rho)}\prod_{v\in C^{u}(\rho)}\Big[\frac{1+\theta}{2}\,\mu^{+}_{\ell_{\le t-1}(v)} + \frac{1-\theta}{2}\,\mu^{-}_{\ell_{\le t-1}(v)}\Big].$$
Recall that the $\chi^2$ distance between two absolutely continuous measures $\mu(x), \nu(x)$ is
$$d_{\chi^2}(\mu,\nu) = \int\frac{\mu^2}{\nu}\,dx - 1,$$
and the total variation distance between these two measures is upper bounded via the $\chi^2$ distance:
$$d_{TV}\big(\mu^{+}_{\ell_{T_{\le t}}(\rho)},\mu^{-}_{\ell_{T_{\le t}}(\rho)}\big) \le \sqrt{d_{\chi^2}\big(\mu^{+}_{\ell_{T_{\le t}}(\rho)},\mu^{-}_{\ell_{T_{\le t}}(\rho)}\big)}.$$
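The bound $d_{TV} \le \sqrt{d_{\chi^2}}$ used here can be illustrated numerically on a toy discrete example (this is only a sanity check, not part of the proof; the inequality itself follows from Cauchy-Schwarz):

```python
def chi2_div(mu, nu):
    """Chi-square divergence sum(mu_i^2 / nu_i) - 1 for discrete distributions."""
    return sum(m * m / n for m, n in zip(mu, nu)) - 1.0

def tv_dist(mu, nu):
    """Total variation distance 0.5 * sum |mu_i - nu_i|."""
    return 0.5 * sum(abs(m - n) for m, n in zip(mu, nu))

mu = [0.5, 0.3, 0.2]
nu = [0.4, 0.4, 0.2]
print(tv_dist(mu, nu) <= chi2_div(mu, nu) ** 0.5)  # -> True
```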
Let us upper bound the symmetrized version of the $\chi^2$ distance, defined as
$$d^{t}_{\chi^2} := \max\Big\{d_{\chi^2}\big(\mu^{+}_{\ell_{T_{\le t}}(\rho)},\mu^{-}_{\ell_{T_{\le t}}(\rho)}\big),\ d_{\chi^2}\big(\mu^{-}_{\ell_{T_{\le t}}(\rho)},\mu^{+}_{\ell_{T_{\le t}}(\rho)}\big)\Big\}.$$
Note that
$$d_{\chi^2}\big(\mu^{+}_{\ell_{T_{\le t}}(\rho)},\mu^{-}_{\ell_{T_{\le t}}(\rho)}\big) = \Big(1+\frac{4\theta^2}{1-\theta^2}\Big)^{\delta d}\cdot\Big[1+d_{\chi^2}\Big(\frac{1+\theta}{2}\mu^{+}_{\ell_{\le t-1}(v)}+\frac{1-\theta}{2}\mu^{-}_{\ell_{\le t-1}(v)},\ \frac{1+\theta}{2}\mu^{-}_{\ell_{\le t-1}(v)}+\frac{1-\theta}{2}\mu^{+}_{\ell_{\le t-1}(v)}\Big)\Big]^{(1-\delta)d} - 1,$$
and for the inner term we have the expression
$$d_{\chi^2}\Big(\frac{1+\theta}{2}\mu^{+}_{\ell_{\le t-1}(v)}+\frac{1-\theta}{2}\mu^{-}_{\ell_{\le t-1}(v)},\ \frac{1+\theta}{2}\mu^{-}_{\ell_{\le t-1}(v)}+\frac{1-\theta}{2}\mu^{+}_{\ell_{\le t-1}(v)}\Big) = \theta^2\int\frac{\big(\mu^{+}_{\ell_{\le t-1}(v)}-\mu^{-}_{\ell_{\le t-1}(v)}\big)^2}{\frac{1+\theta}{2}\mu^{-}_{\ell_{\le t-1}(v)}+\frac{1-\theta}{2}\mu^{+}_{\ell_{\le t-1}(v)}}\,dx.$$
By convexity of $x\mapsto 1/x$ (Jensen's inequality), the right-hand side is further upper bounded by
$$\theta^2\int\big(\mu^{+}_{\ell_{\le t-1}(v)}-\mu^{-}_{\ell_{\le t-1}(v)}\big)^2\cdot\Big[\frac{1+\theta}{2}\cdot\frac{1}{\mu^{-}_{\ell_{\le t-1}(v)}}+\frac{1-\theta}{2}\cdot\frac{1}{\mu^{+}_{\ell_{\le t-1}(v)}}\Big]dx = \theta^2\Big[\frac{1+\theta}{2}\,d_{\chi^2}\big(\mu^{+}_{\ell_{\le t-1}(v)},\mu^{-}_{\ell_{\le t-1}(v)}\big)+\frac{1-\theta}{2}\,d_{\chi^2}\big(\mu^{-}_{\ell_{\le t-1}(v)},\mu^{+}_{\ell_{\le t-1}(v)}\big)\Big].$$
Thus, taking the maximum over the two orderings of the mixtures,
$$\max\{\cdot,\cdot\} \le \theta^2\max\Big\{d_{\chi^2}\big(\mu^{+}_{\ell_{\le t-1}(v)},\mu^{-}_{\ell_{\le t-1}(v)}\big),\ d_{\chi^2}\big(\mu^{-}_{\ell_{\le t-1}(v)},\mu^{+}_{\ell_{\le t-1}(v)}\big)\Big\} = \theta^2 d^{t-1}_{\chi^2}.$$
Therefore, we have
$$\log\big(1+d^{t}_{\chi^2}\big) \le \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)d\cdot\log\big(1+\theta^2\cdot d^{t-1}_{\chi^2}\big).$$
If $(1-\delta)\theta^2 d < 1$, denote the fixed point of the above equation by $c^*$ (its existence is manifested by the bound (18) below), i.e.,
$$\log(1+c^*) = \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)d\cdot\log\big(1+\theta^2 c^*\big).$$
Due to the fact that $x-\frac{1}{2}x^2 \le \log(1+x) \le x$, we have
$$c^*-\frac{1}{2}(c^*)^2 \le \log(1+c^*) = \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)d\cdot\log(1+\theta^2 c^*) \le \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)\theta^2 d\cdot c^* \qquad (17)$$
and thus
$$c^* \le \frac{\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{1-(1-\delta)\theta^2 d}\cdot\frac{2}{1+\sqrt{1-\frac{2\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{(1-(1-\delta)\theta^2 d)^2}}}. \qquad (18)$$
If $d^{t-1}_{\chi^2} \le c^*$, it is easy to see that
$$\log\big(1+d^{t}_{\chi^2}\big) \le \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)d\cdot\log\big(1+\theta^2 d^{t-1}_{\chi^2}\big) \le \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1-\delta)d\cdot\log(1+\theta^2 c^*) = \log(1+c^*),$$
which implies $d^{t}_{\chi^2} \le c^*$. Therefore we only need to verify $d^{1}_{\chi^2} \le c^*$, which is trivial. Thus we have the bound
$$\limsup_{t\to\infty} d^{t}_{\chi^2} \le c^* \le 2\,\frac{\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{1-(1-\delta)\theta^2 d},$$
provided $\frac{2\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{(1-(1-\delta)\theta^2 d)^2} < 1$. So far we have proved
$$\limsup_{t\to\infty} d^{t}_{TV} \le \limsup_{t\to\infty}\big(d^{t}_{\chi^2}\big)^{1/2} \le \Bigg\{\frac{2\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{1-(1-\delta)\theta^2 d}\Bigg\}^{1/2}.$$
Through Le Cam's lemma, the error rate of any local algorithm is at least
$$\inf_{\Phi}\sup_{l\in\{+,-\}} \mathbb{P}_l(\Phi\ne l) \ge \frac{1}{2}\Bigg[1-\Bigg\{\frac{2\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{1-(1-\delta)\theta^2 d}\Bigg\}^{1/2}\Bigg].$$

Now we are ready to prove Theorem 1 with the help of Lemma 2. The main task in the proof of Theorem 1 is to extend Theorems 2 and 3 from the regular-tree case to the general branching-tree case with the matching branching number. Since the general branching tree is a random tree with varying structure, we need to prove that versions of the upper and lower bounds from the earlier proofs hold almost surely for this random tree. The proof requires new ideas employing different notions of the "branching number" (Lyons and Peres, 2005).

Proof of Theorem 1. For the regular tree, the upper and lower bounds have already been proved in Theorem 2 and Theorem 3.
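Before proceeding, note that the fixed point $c^*$ from the proof of Theorem 3 is easy to compute numerically, and its closed-form bound can be checked. The snippet below is only a numerical illustration under illustrative parameter values (chosen so that $(1-\delta)\theta^2 d < 1$ and the condition accompanying (18) holds); the function name is ours.

```python
import math

def cstar_iteration(theta, delta, d, iters=200):
    """Approximate the fixed point of
       log(1+x) = delta*d*log(1 + 4*theta^2/(1-theta^2))
                  + (1-delta)*d*log(1 + theta^2*x)
    by straightforward fixed-point iteration starting from 0."""
    a = delta * d * math.log(1 + 4 * theta**2 / (1 - theta**2))
    x = 0.0
    for _ in range(iters):
        x = math.exp(a + (1 - delta) * d * math.log(1 + theta**2 * x)) - 1.0
    return x

theta, delta, d = 0.3, 0.1, 5            # (1-delta)*theta^2*d = 0.405 < 1
c_star = cstar_iteration(theta, delta, d)
bound = (2 * delta * d * math.log(1 + 4 * theta**2 / (1 - theta**2))
         / (1 - (1 - delta) * theta**2 * d))
print(c_star <= bound)  # -> True: the explicit bound on c* holds
```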
Instead of a $(1-\delta)d$-regular tree, we need to prove the theorem for the Galton-Watson tree with offspring distribution $\mathrm{Poisson}((1-\delta)d)$ (recall the a.a.s. coupling between the local tree of the SBM and the Galton-Watson tree from Lemma 2). Note that these two trees share the same branching number $br(T) = (1-\delta)d$ almost surely. We will use the following equivalent definitions of the branching number (Lyons and Peres, 2005) for a tree $T$ rooted at $\rho$:

Flow: a flow is a non-negative function on the edges of $T$ with the property that for each non-root vertex $x$ with parent $z$ and children $y_1,\dots,y_d$, $\mathrm{flow}((z,x)) = \sum_{i=1}^{d}\mathrm{flow}((x,y_i))$. We say that $\mathrm{flow}(e)$ is the amount of water flowing along edge $e$, with the total amount of water being $\mathrm{flow}(T) = \sum_{v\in C(\rho)}\mathrm{flow}((\rho,v))$. Consider the following restriction on a flow: given $\lambda \ge 1$, $\mathrm{flow}((x,z)) \le \lambda^{-n}$ for an edge $(x,z)$ at distance $n$ from $\rho$. The branching number $br(T)$ of a tree $T$ is the supremum over $\lambda$ that admits a positive total amount of water $\mathrm{flow}_\lambda(T) > 0$ flowing through $T$. For a node $v$ with parent $u$, denote $\mathrm{flow}(v) := \mathrm{flow}((u,v))$.

Cutset: a cutset is a set of vertices whose removal leaves the root $\rho$ in a finite component. The branching number $br(T)$ can equivalently be defined as
$$br(T) = \inf\Big\{\lambda > 0 : \inf_{\text{cutset } C}\sum_{x\in C}\lambda^{-|x|} = 0\Big\}.$$

Let us fix a particular node $\rho \in V^u$ in the p-SBM and focus on its depth-$t$ local tree $T_{\le t}(\rho)$. For $T_{\le t}(\rho)$, denote the number of its labeled children at depth $i$ by $N_i(\rho)$. Consider the case $(1-\delta)\theta^2 d > 1$. Exactly as in the proof of Theorem 2, we have the following recursion for the cumulant-generating function $K(\lambda)$ (where the expectation is taken over the label broadcasting process, conditionally on the Galton-Watson tree structure):
$$K_{M_t(\rho)}(\lambda) = K_{M_1(\rho)}(\lambda) + \sum_{v\in C^{u}(\rho)} K_{M_{t-1}(v)}(\theta\lambda),$$
which implies the following mean $\mu_t(\rho)$ and deviation $\sigma_t^2(\rho)$ bounds for the message $M_t(\rho)$:
$$\mu_t(\rho) = N_1(\rho)\cdot\theta\log\frac{1+\theta}{1-\theta} + \theta^2\cdot\sum_{v\in C^{u}(\rho)}\mu_{t-1}(v), \qquad \sigma_t^2(\rho) = N_1(\rho)\cdot\log^2\frac{1+\theta}{1-\theta} + \theta^2\cdot\sum_{v\in C^{u}(\rho)}\big[\mu^2_{t-1}(v)+\sigma^2_{t-1}(v)\big].$$
Unrolling the recursion yields
$$\mu_t(\rho) = \Big[\sum_{i=1}^{t}\theta^{2(i-1)}N_i(\rho)\Big]\cdot\theta\log\frac{1+\theta}{1-\theta}, \qquad \sigma_t^2(\rho) = \Big[\sum_{i=1}^{t}\theta^{2(i-1)}N_i(\rho)\Big]\cdot\log^2\frac{1+\theta}{1-\theta} + \sum_{i=1}^{t-1}\theta^{2i}\sum_{v\in C^{(i)}(\rho)}\mu^2_{t-i}(v).$$
For the Galton-Watson tree (with $\mathrm{Poisson}((1-\delta)d)$ offspring distribution), $N_i(\rho)$ has growth rate $\asymp \delta d[(1-\delta)d]^{i-1}$, by the Kesten-Stigum theorem (Lyons and Peres, 2005). Moreover, it can be shown that $\mu_t(\rho) \asymp [(1-\delta)\theta^2 d]^t$, as follows. Recall the flow definition of the branching number for the Galton-Watson tree, as the maximum $\lambda$ admitting a positive $\mathrm{flow}_\lambda(T)$. Thus the following representation of $\mu_t$ in terms of flows holds for any $\lambda < br(T) = (1-\delta)d$:
$$\mu_t(\rho) = \Big[\sum_{i=1}^{t}[\theta^2\lambda]^{i-1}\cdot\big(N_i(\rho)\lambda^{-(i-1)}\big)\Big]\cdot\theta\log\frac{1+\theta}{1-\theta} \ge \Big[\sum_{i=1}^{t}[\theta^2\lambda]^{i-1}\,\mathrm{flow}_\lambda(T)\Big]\cdot\delta d\,\theta\log\frac{1+\theta}{1-\theta} = \frac{[\theta^2\lambda]^t-1}{\theta^2\lambda-1}\,\mathrm{flow}_\lambda(T)\cdot\delta d\,\theta\log\frac{1+\theta}{1-\theta} = \Omega\big([\theta^2\lambda]^t\big), \qquad (19)$$
due to the fact that $N_i(\rho)\lambda^{-(i-1)} \ge \delta d\cdot\mathrm{flow}_\lambda(T)$ for any layer $i-1$. Taking $\lambda \uparrow (1-\delta)d$ ensures that $\mu_t(\rho) \asymp [(1-\delta)\theta^2 d]^t$ holds almost surely for the Galton-Watson tree.

Now let us bound $\sigma_t^2/\mu_t^2$. In the regular-tree case, we have shown $\lim_{t\to\infty}\frac{\sigma_t^2(\rho)}{\mu_t^2(\rho)} = \frac{1}{(1-\delta)\theta^2 d-1}$. Here we want to show $\lim_{t\to\infty}\frac{\sigma_t^2(\rho)}{\mu_t^2(\rho)} \le C\cdot\frac{1}{(1-\delta)\theta^2 d-1}$. Conductance is a positive function $\mathrm{cond}(e)$ on the edges of $T$. Recall the energy definition $\mathrm{enrg}(T) := \sum_e [\mathrm{flow}(e)]^2/\mathrm{cond}(e)$. In addition to the earlier definitions, the branching number is the largest $\lambda$ such that the electric current flows with finite energy $\mathrm{enrg}_\lambda(T)$, given that edges at distance $n$ from the root of $T$ have conductance $\mathrm{cond}(e) = \lambda^{-n}$.
For any $\lambda < br(T) = (1-\delta)d$, we have
$$\frac{\sigma_t^2(\rho)}{\mu_t^2(\rho)} = \frac{1}{\theta^2\delta d}\cdot\epsilon(t) + \sum_{i=1}^{t-1}[\theta^2\lambda]^{-i}\cdot\xi(i,t),$$
where
$$\epsilon(t) := \frac{\delta d}{\sum_{i=1}^{t}\theta^{2(i-1)}N_i(\rho)} \asymp \frac{1}{[(1-\delta)\theta^2 d]^t} = o_t(1) \quad\text{by equation (19)}, \qquad (20)$$
$$\xi(i,t) := \sum_{v\in C^{(i)}(\rho)}\frac{[\theta^{2i}\mu_{t-i}(v)]^2}{\mu_t^2(\rho)}\cdot\frac{1}{\lambda^{-i}} \le \sum_{v\in C^{(i)}(\rho)}[\mathrm{flow}(v)]^2\cdot\frac{1}{\mathrm{cond}(v)} \le \mathrm{enrg}_\lambda(T) < \infty, \qquad (21)$$
and thus
$$\sum_{i=1}^{t-1}[\theta^2\lambda]^{-i}\cdot\xi(i,t) \le \frac{1}{\theta^2\lambda-1}\,\mathrm{enrg}_\lambda(T).$$
Equation (21) holds because the electric current satisfies $\mathrm{flow}(v) = \frac{\theta^{2i}\mu_{t-i}(v)}{\sum_{v\in C^{(i)}(\rho)}\theta^{2i}\mu_{t-i}(v)} \ge \frac{\theta^{2i}\mu_{t-i}(v)}{\mu_t(\rho)}$. It is easy to verify that this flow satisfies the flow definition and that $\sum_{v\in C^{(i)}(\rho)}\mathrm{flow}(v) = 1$, i.e., it is a unit flow. In view of (20), we have almost surely
$$\lim_{t\to\infty}\frac{\sigma_t^2(\rho)}{\mu_t^2(\rho)} \le C\cdot\frac{1}{(1-\delta)\theta^2 d-1}.$$
For a regular tree, $C \equiv 1$. In summary, conditionally on non-extinction, label recovery succeeds with probability at least $1-\exp\big(-\frac{\mathrm{SNR}-1}{2C}(1+o(1))\big)$. This establishes the upper bound.

Now let us prove the lower bound. Consider the case $(1-\delta)\theta^2 d < 1$. Recall the proof of Theorem 3 and define
$$D_{T_{\le t}}(\rho) := \max\Big\{d_{\chi^2}\big(\mu^{+}_{\ell_{T_{\le t}}(\rho)},\mu^{-}_{\ell_{T_{\le t}}(\rho)}\big),\ d_{\chi^2}\big(\mu^{-}_{\ell_{T_{\le t}}(\rho)},\mu^{+}_{\ell_{T_{\le t}}(\rho)}\big)\Big\}$$
(abbreviated as $D(\rho)$ when there is no confusion). We have the following recursion:
$$\log\big(1+D_{T_{\le t}}(\rho)\big) \le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + \theta^2\sum_{v\in C^{u}(\rho)}\frac{\log\big(1+\theta^2\cdot D_{T_{\le t-1}}(v)\big)}{\theta^2}.$$
Invoke the following fact:
$$\frac{\log(1+\theta^2 x)}{\theta^2} \le (1+\eta)\log(1+x) \quad\text{for all } 0\le x\le\eta \text{ and all } \theta,$$
whose proof is in one line:
$$\frac{\log(1+\theta^2 x)}{\theta^2} \le x \le (1+\eta)\frac{x}{1+x} \le (1+\eta)\log(1+x).$$
Thus if $D_{t-1}\le\eta$, then the following holds:
$$\log\big(1+D_{T_{\le t}}(\rho)\big) \le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1+\eta)\theta^2\sum_{v\in C^{u}(\rho)}\log\big(1+D_{T_{\le t-1}}(v)\big). \qquad (22)$$
Denoting $d_{T_{\le t}}(\rho) := \log(1+D_{T_{\le t}}(\rho))$, equation (22) becomes
$$d_{T_{\le t}}(\rho) \le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1+\eta)\theta^2\sum_{v\in C^{u}(\rho)} d_{T_{\le t-1}}(v).$$
We will need the cutset definition of the branching number $br(T) = (1-\delta)d$.

Lemma 4 (Pemantle and Steif (1999), Lemma 3.3). Assume $br(T) < \lambda$. Then for all $\epsilon > 0$, there exists a cutset $C$ such that
$$\sum_{x\in C}\Big(\frac{1}{\lambda}\Big)^{|x|} \le \epsilon \qquad (23)$$
and, for all $v$ such that $|v| \le \max_{x\in C}|x|$,
$$\sum_{x\in C\cap T(v)}\Big(\frac{1}{\lambda}\Big)^{|x|-|v|} \le 1. \qquad (24)$$
Here the notation $|v|$ denotes the depth of $v$. Fix any $\lambda$ such that $\frac{1}{\theta^2} > \lambda > br(T) = (1-\delta)d$ (this is doable because $(1-\delta)\theta^2 d < 1$). Define the function $g_\alpha(\eta) = \frac{\eta}{1+\eta}\big[1-(1+\eta)\alpha\big]$ for $\alpha < 1$; clearly it is monotone increasing in $\eta$ for $\eta < \sqrt{1/\alpha}-1$. Thus the inverse $g_\alpha^{-1}(y)$ exists if $y < g_\alpha\big(\sqrt{1/\alpha}-1\big) = (1-\sqrt{\alpha})^2$. Under the assumption
$$\delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) < \big(1-\sqrt{\theta^2\lambda}\big)^2, \qquad (25)$$
choose
$$\eta = g_{\theta^2\lambda}^{-1}\Big(\delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big)\Big),$$
which implies $\delta d\cdot\log\big(1+\frac{4\theta^2}{1-\theta^2}\big) = g_{\theta^2\lambda}(\eta)$, and we know $\eta < \sqrt{1/(\theta^2\lambda)}-1 \Rightarrow (1+\eta)\theta^2\lambda < 1$. The reason for this choice will become clear shortly. For any small $\epsilon$, Lemma 4 gives a cutset $C_\epsilon$ such that equations (23) and (24) hold. We prove, by induction on $\max_{x\in C_\epsilon}|x|-|v|$, that for any $v$ with $|v| \le \max_{x\in C_\epsilon}|x|$,
$$d_{T_{\le C_\epsilon}}(v) \le \frac{\eta}{1+\eta}\sum_{x\in C_\epsilon\cap T(v)}\Big(\frac{1}{\lambda}\Big)^{|x|-|v|} \le \frac{\eta}{1+\eta}. \qquad (26)$$
For the base case $v\in C_\epsilon$,
$$d_{T_{\le C_\epsilon}}(v) = \delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) = g_{\theta^2\lambda}(\eta) = \frac{\eta}{1+\eta}\big(1-(1+\eta)\theta^2\lambda\big) < \frac{\eta}{1+\eta}.$$
Now proceed with the induction: assume (26) holds for all $v$ with $\max_{x\in C_\epsilon}|x|-|v| = t-1$; we prove it for $\rho$ with $\max_{x\in C_\epsilon}|x|-|\rho| = t$. Since $d_{T_{\le C_\epsilon}}(v) \le \frac{\eta}{1+\eta}$ implies $D_{T_{\le C_\epsilon}}(v) \le \eta$ for all $v\in C^{u}(\rho)$, we can apply the linearized recursion:
$$d_{T_{\le C_\epsilon}}(\rho) \le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1+\eta)\theta^2\sum_{v\in C^{u}(\rho)} d_{T_{\le C_\epsilon}}(v)$$
$$\le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + (1+\eta)\theta^2\sum_{v\in C^{u}(\rho)}\Big[\frac{\eta}{1+\eta}\sum_{x\in C_\epsilon\cap T(v)}\Big(\frac{1}{\lambda}\Big)^{|x|-|v|}\Big]$$
$$= N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + \frac{\eta}{1+\eta}\cdot(1+\eta)\theta^2\lambda\sum_{v\in C^{u}(\rho)}\sum_{x\in C_\epsilon\cap T(v)}\Big(\frac{1}{\lambda}\Big)^{|x|-|v|+1}$$
$$\le N_1(\rho)\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) + \frac{\eta}{1+\eta}\cdot(1+\eta)\theta^2\lambda\sum_{x\in C_\epsilon\cap T(\rho)}\Big(\frac{1}{\lambda}\Big)^{|x|-|\rho|}$$
$$\le \frac{\eta}{1+\eta}\big(1-(1+\eta)\theta^2\lambda\big) + \frac{\eta}{1+\eta}\cdot(1+\eta)\theta^2\lambda \le \frac{\eta}{1+\eta}.$$
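The one-line inequality $\frac{\log(1+\theta^2 x)}{\theta^2} \le (1+\eta)\log(1+x)$ for $0\le x\le\eta$, on which the linearization rests, is easy to verify numerically on a grid; the snippet below is only a sanity check, with illustrative parameter values.

```python
import math

def lhs(x, theta):
    """log(1 + theta^2 * x) / theta^2."""
    return math.log(1 + theta**2 * x) / theta**2

def rhs(x, eta):
    """(1 + eta) * log(1 + x)."""
    return (1 + eta) * math.log(1 + x)

eta = 0.5
ok = all(
    lhs(i * eta / 100, theta) <= rhs(i * eta / 100, eta) + 1e-12
    for theta in (0.1, 0.5, 0.9)
    for i in range(101)
)
print(ok)  # -> True: the inequality holds on the grid 0 <= x <= eta
```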
So far we have proved that for any $v$ with $|v| \le \max_{x\in C_\epsilon}|x|$,
$$d_{T_{\le C_\epsilon}}(v) \le \frac{\eta}{1+\eta}\sum_{x\in C_\epsilon\cap T(v)}\Big(\frac{1}{\lambda}\Big)^{|x|-|v|} \le \frac{\eta}{1+\eta},$$
which implies $D_{T_{\le C_\epsilon}}(v) \le \eta$, so that the linearized recursion (22) always holds. Now take $\epsilon \to 0$ and $\lambda \to (1-\delta)d$. Define $t_\epsilon := \min\{|x| : x\in C_\epsilon\}$; it is also easy to see from equation (23) that
$$\Big(\frac{1}{\lambda}\Big)^{t_\epsilon} \le \sum_{x\in C_\epsilon}\Big(\frac{1}{\lambda}\Big)^{|x|} \le \epsilon \ \Longrightarrow\ t_\epsilon > \frac{\log(1/\epsilon)}{\log\lambda} \to \infty.$$
Putting things together, under the condition
$$\delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) \le \Big(1-\sqrt{(1-\delta)\theta^2 d}\Big)^2,$$
we have
$$\lim_{t\to\infty} D_{T_{\le t}}(\rho) = \lim_{\epsilon\to 0} D_{T_{\le C_\epsilon}}(\rho) \le \eta \le C\cdot\frac{\delta d\,\log\big(1+\frac{4\theta^2}{1-\theta^2}\big)}{1-(1-\delta)\theta^2 d}.$$
Here the last step is due to a simple bound on $\eta$ based on the inequality
$$\delta d\cdot\log\Big(1+\frac{4\theta^2}{1-\theta^2}\Big) = \frac{\eta}{1+\eta} - \eta\cdot(1-\delta)\theta^2 d > \eta\big[1-(1-\delta)\theta^2 d\big] - \eta^2.$$

Proof of Theorem 5. For $\alpha > 1$: we use an induction analysis. For $t=1$, the result follows from Hoeffding's lemma.
Assume the result holds for $t-1$. If $\ell(v) = l$, then
$$\mathbb{E}\big[e^{\lambda M_t(\ell_{T_{\le t}}(v))}\,\big|\,\ell(v)=l\big] \le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{\lambda\mu_1}\cdot\prod_{u\in C^{u}(v)}\mathbb{E}\big[e^{\lambda\theta M_{t-1}(\ell_{T_{\le t-1}}(u))}\,\big|\,\ell(v)=l\big]$$
$$= e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot e^{\lambda\mu_1}\cdot\prod_{u\in C^{u}(v)}\Big[\Big(\theta+\frac{1-\theta}{k}\Big)e^{\lambda\theta\mu_{t-1}}e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}} + \frac{1-\theta}{k}\,e^{-\lambda\theta\mu_{t-1}}e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}} + \frac{(k-2)(1-\theta)}{k}\,e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}}\Big]$$
$$\le e^{\lambda(\mu_1+\alpha\mu_{t-1})}\cdot e^{\frac{\lambda^2}{2}\big(\sigma_1^2+(1-\delta)d\theta^2\sigma^2_{t-1}\big)}\cdot e^{\frac{\lambda^2}{2}(1-\delta)d\theta^2\mu^2_{t-1}} \le e^{\lambda(\mu_1+\alpha\mu_{t-1})}\cdot e^{\frac{\lambda^2}{2}\big(\sigma_1^2+(1-\delta)d\theta^2\sigma^2_{t-1}+(1-\delta)d\theta^2\mu^2_{t-1}\big)},$$
where the last step uses Hoeffding's Lemma 3. When neither of the two labels being compared equals $\ell(v) = l$, we have the following bound:
$$\mathbb{E}\big[e^{\lambda M_t(\ell_{T_{\le t}}(v))}\,\big|\,\ell(v)=l\big] \le e^{\frac{\lambda^2}{2}\sigma_1^2}\cdot\prod_{u\in C^{u}(v)}\Big[\frac{1-\theta}{k}\,e^{\lambda\theta\mu_{t-1}}e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}} + \frac{1-\theta}{k}\,e^{-\lambda\theta\mu_{t-1}}e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}} + \Big(\theta+\frac{(k-2)(1-\theta)}{k}\Big)e^{\frac{\lambda^2\theta^2\sigma^2_{t-1}}{2}}\Big]$$
$$\le e^{\frac{\lambda^2}{2}\big(\sigma_1^2+(1-\delta)d\theta^2\sigma^2_{t-1}\big)}\cdot e^{\frac{\lambda^2}{2}(1-\delta)d\theta^2\mu^2_{t-1}}.$$
This completes the proof.

Proof of Theorem 6.
Borrowing the idea from the proof of Theorem 3, we can study the following testing problem:
$$d_{\chi^2}\big(\mu^{(i)}_{\ell_{T_{\le t}}(\rho)},\mu^{(j)}_{\ell_{T_{\le t}}(\rho)}\big) = \Bigg(1+\theta^2\Bigg(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Bigg)\Bigg)^{\delta d}\Big[1+d_{\chi^2}\big(\theta\mu^{(i)}_{\ell_{\le t-1}(v)}+(1-\theta)\bar\mu_{\ell_{\le t-1}(v)},\ \theta\mu^{(j)}_{\ell_{\le t-1}(v)}+(1-\theta)\bar\mu_{\ell_{\le t-1}(v)}\big)\Big]^{(1-\delta)d} - 1.$$
We know
$$d_{\chi^2}\big(\theta\mu^{(i)}_{\ell_{\le t-1}(v)}+(1-\theta)\bar\mu_{\ell_{\le t-1}(v)},\ \theta\mu^{(j)}_{\ell_{\le t-1}(v)}+(1-\theta)\bar\mu_{\ell_{\le t-1}(v)}\big) = \int\frac{\theta^2\big(\mu^{(i)}_{\ell_{\le t-1}(v)}-\mu^{(j)}_{\ell_{\le t-1}(v)}\big)^2}{\theta\mu^{(j)}_{\ell_{\le t-1}(v)}+(1-\theta)\bar\mu_{\ell_{\le t-1}(v)}}\,dx$$
$$\le \theta^2\cdot\Bigg[\Big(\theta+\frac{1-\theta}{k}\Big)d_{\chi^2}\big(\mu^{(i)}_{\ell_{\le t-1}(v)},\mu^{(j)}_{\ell_{\le t-1}(v)}\big) + \frac{1-\theta}{k}\,d_{\chi^2}\big(\mu^{(j)}_{\ell_{\le t-1}(v)},\mu^{(i)}_{\ell_{\le t-1}(v)}\big) + \frac{1-\theta}{k}\sum_{l\in[k]\setminus\{i,j\}}2\Big(d_{\chi^2}\big(\mu^{(i)}_{\ell_{\le t-1}(v)},\mu^{(l)}_{\ell_{\le t-1}(v)}\big)+d_{\chi^2}\big(\mu^{(j)}_{\ell_{\le t-1}(v)},\mu^{(l)}_{\ell_{\le t-1}(v)}\big)\Big)\Bigg]$$
$$\le \theta^2\Big(1+\frac{3(1-\theta)(k-2)}{k}\Big)\cdot d^{t-1}_{\chi^2}.$$
Thus, defining
$$d^{t}_{\chi^2} := \max_{i,j\in[k],\,i\ne j} d_{\chi^2}\big(\mu^{(i)}_{\ell_{T_{\le t}}(\rho)},\mu^{(j)}_{\ell_{T_{\le t}}(\rho)}\big),$$
we obtain
$$\log\big(1+d^{t}_{\chi^2}\big) \le \delta d\cdot\log\Bigg(1+\theta^2\Bigg(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Bigg)\Bigg) + (1-\delta)d\cdot\log\Big(1+\theta^2\Big(1+\frac{3(1-\theta)(k-2)}{k}\Big)\cdot d^{t-1}_{\chi^2}\Big).$$
Thus, if $(1-\delta)\theta^2 d\big(1+\frac{3(1-\theta)(k-2)}{k}\big) < 1$, denote by $c^*$ the fixed point of the equation
$$\log(1+c^*) = \delta d\cdot\log\Bigg(1+\theta^2\Bigg(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Bigg)\Bigg) + (1-\delta)d\cdot\log\Big(1+\theta^2\Big(1+\frac{3(1-\theta)(k-2)}{k}\Big)\cdot c^*\Big).$$
We have the following upper bound for $c^*$ via the fact that $x-\frac{1}{2}x^2 < \log(1+x) < x$:
$$c^*-\frac{1}{2}(c^*)^2 \le \delta d\cdot\log\Bigg(1+\theta^2\Bigg(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Bigg)\Bigg) + (1-\delta)\theta^2 d\Big(1+\frac{3(1-\theta)(k-2)}{k}\Big)\cdot c^*,$$
which implies
$$c^* < \frac{2\delta d\cdot\log\Big(1+\theta^2\Big(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Big)\Big)}{1-(1-\delta)\theta^2 d\big(1+\frac{3(1-\theta)(k-2)}{k}\big)}, \qquad \log\big(1+d^{t}_{\chi^2}\big) \le \frac{2\delta d\cdot\log\Big(1+\theta^2\Big(\frac{1}{\theta+\frac{1-\theta}{k}}+\frac{1}{\frac{1-\theta}{k}}\Big)\Big)}{1-(1-\delta)\theta^2 d\big(1+\frac{3(1-\theta)(k-2)}{k}\big)}.$$
We invoke the following lemma, Proposition 2.4 of Tsybakov (2009).

Lemma 5 (Tsybakov (2009), Proposition 2.4). Let $P_0, P_1, \dots, P_{k-1}$ be probability measures on $(\mathcal{X},\mathcal{A})$ satisfying
$$\frac{1}{k-1}\sum_{i=1}^{k-1} d_{\chi^2}(P_i,P_0) \le (k-1)\cdot\alpha^*.$$
Then for any selector $\psi: \mathcal{X}\to[k]$,
$$\max_{i\in[k]} P_i(\psi\ne i) \ge \frac{1}{2}\Big[1-\alpha^*-\frac{1}{k-1}\Big].$$
Since we have $\frac{1}{k-1}\sum_{i\in[k]\setminus\{j\}} d_{\chi^2}\big(\mu^{(i)}_{\ell_{T_{\le t}}(\rho)},\mu^{(j)}_{\ell_{T_{\le t}}(\rho)}\big) \le \alpha\cdot(k-1)$, we apply Lemma 5 and obtain
$$\inf_{\Phi}\sup_{l\in[k]} \mathbb{P}(\Phi\ne l) \ge \frac{1}{2}\Big(1-\alpha-\frac{1}{k-1}\Big), \qquad\text{where } \alpha = \frac{\delta}{1-\delta}\cdot\frac{\mathrm{SNR}}{1-4\cdot\mathrm{SNR}}\cdot\frac{2(p+q)\big(q+p/(k-1)\big)}{pq}.$$

Proof of Theorem 4.
The proof of Theorem 4 follows the same idea as the proof of Theorem 1, with Theorem 5 and Theorem 6 covering the case of regular trees. For brevity, we do not repeat the argument here. Acknowledgements The authors want to thank Elchanan Mossel for many valuable discussions."
    },
    {
        "url": "http://arxiv.org/abs/1512.02487v1",
        "title": "High-Dimensional Gaussian Copula Regression: Adaptive Estimation and Statistical Inference",
        "abstract": "We develop adaptive estimation and inference methods for high-dimensional\nGaussian copula regression that achieve the same performance without the\nknowledge of the marginal transformations as that for high-dimensional linear\nregression. Using a Kendall's tau based covariance matrix estimator, an\n$\\ell_1$ regularized estimator is proposed and a corresponding de-biased\nestimator is developed for the construction of the confidence intervals and\nhypothesis tests. Theoretical properties of the procedures are studied and the\nproposed estimation and inference methods are shown to be adaptive to the\nunknown monotone marginal transformations. Prediction of the response for a\ngiven value of the covariates is also considered. The procedures are easy to\nimplement and perform well numerically. The methods are also applied to analyze\nthe Communities and Crime Unnormalized Data from the UCI Machine Learning\nRepository.",
        "authors": [
            "T. Tony Cai",
            "Linjun Zhang"
        ],
        "published": "2015-12-08",
        "updated": "2015-12-08",
        "primary_cat": "stat.ME",
        "cats": [
            "stat.ME"
        ],
        "main_content": "Introduction Finding the relationship between a response and a set of covariates is a ubiquitous problem in scientific studies. Linear regression analysis, which occupies a central position in statistics, is arguably the most commonly used method. It has been well studied in both the conventional low-dimensional and contemporary high-dimensional settings.
However, the assumption of a linear relationship between the predictors and the response is often too restrictive and unrealistic. Data transformations, such as the Box-Cox transformation, Fisher's z transformation, and the variance stabilization transformation, have been frequently used to improve the linear fit and to correct violations of model assumptions such as constant error variance. These transformations are often required to be prespecified before applying the linear regression analysis. See, for example, Carroll and Ruppert [7] for detailed discussions on transformations. For a response $Y$ and predictors $X_1,\\ldots,X_p$, the following functional form of the relationship has been widely used in a range of applications, $$f_{\\lambda_0}(Y)=\\beta_0+\\sum_{j=1}^{p}\\beta_j f_{\\lambda_j}(X_j)+\\epsilon,\\qquad(1)$$ where the $f_{\\lambda_j}(\\cdot)$ are univariate functions and $\\lambda_j$ is the parameter associated with $f_{\\lambda_j}$. Examples of this model include the additive regression model, single index model, copula regression model, and semiparametric proportional hazards models [9,20,21,23,26,30,33,40-42]. For applications in econometrics, computational biology, criminology, and natural language processing, see for example [14,19,22,29,38]. In particular, [42] and [40] established the convergence rates for the minimax estimation risk under the high-dimensional additive regression model and the single index model, respectively. [27] proposes a plug-in approach for estimating a regression function based on copulas, and presents the asymptotic normality of the estimator. Their model and analysis are restricted to the low-dimensional setting and are not well adapted to the high-dimensional case. For data transformations, it is natural to consider transformations that are continuous and one-to-one on an interval. Indeed, functions satisfying these two conditions must be strictly monotonic [36].
In the present paper, we consider adaptive estimation and statistical inference for high-dimensional sparse Gaussian copula regression. The model can be formulated as follows. Suppose we have an independent and identically distributed random sample $Z_1=(Y_1,X_1),\\ldots,Z_n=(Y_n,X_n)\\in\\mathbb{R}^{p+1}$, where $Y_i\\in\\mathbb{R}$ are the responses and $X_i\\in\\mathbb{R}^p$ are the covariates. Set $d=p+1$. We say $(Y_i,X_i)$ satisfies a Gaussian copula regression model if there exists a set of strictly increasing functions $f=\\{f_0,f_1,\\ldots,f_p\\}$ such that the marginally transformed random vectors $\\tilde{Z}_i=(\\tilde{Y}_i,\\tilde{X}_i):=(f_0(Y_i),f_1(X_{i1}),\\ldots,f_p(X_{ip}))$ satisfy $\\tilde{Z}_i\\stackrel{i.i.d.}{\\sim}N_d(0,\\Sigma)$ for some positive-definite covariance matrix $\\Sigma\\in\\mathbb{R}^{d\\times d}$ with $\\mathrm{diag}(\\Sigma)=1$. The condition $\\mathrm{diag}(\\Sigma)=1$ is for identifiability, because scaling and shifting are absorbed in the marginal transformations. Note that under the Gaussian copula regression model, one has the following linear relationship for the transformed data: $$\\tilde{Y}_i=\\tilde{X}_i^{\\top}\\beta+\\epsilon_i,\\quad i=1,2,\\ldots,n,\\qquad(2)$$ where $\\beta\\in\\mathbb{R}^p$ and the $\\epsilon_i$ are i.i.d. zero-mean Gaussian variables. Writing in terms of the covariances, one has $\\beta=\\Sigma_{\\tilde{X}\\tilde{X}}^{-1}\\Sigma_{\\tilde{X}\\tilde{Y}}$ and $\\epsilon_i\\stackrel{i.i.d.}{\\sim}N(0,\\,1-\\Sigma_{\\tilde{Y}\\tilde{X}}\\Sigma_{\\tilde{X}\\tilde{X}}^{-1}\\Sigma_{\\tilde{X}\\tilde{Y}})$, where $\\Sigma_{\\tilde{X}\\tilde{X}}=\\mathrm{Cov}(\\tilde{X}_1,\\tilde{X}_1)$ and $\\Sigma_{\\tilde{X}\\tilde{Y}}=\\mathrm{Cov}(\\tilde{X}_1,\\tilde{Y}_1)$. We focus on the high-dimensional setting where $p$ is comparable to or much larger than $n$ and $\\beta$ is sparse. The fundamental difference between the Gaussian copula regression model and the conventional linear regression model (2) is that one observes $\\{(Y_1,X_1),\\ldots,(Y_n,X_n)\\}$, not $\\{(\\tilde{Y}_1,\\tilde{X}_1),\\ldots,(\\tilde{Y}_n,\\tilde{X}_n)\\}$, as the transformations $f_i$ are unknown.
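As an illustrative sketch (not from the paper; the dimensions and the particular monotone transforms below are hypothetical choices), data from this model can be generated by drawing the latent Gaussian vector and pushing each coordinate through the inverse marginal transformation $f_i^{-1}$:

```python
import numpy as np

def sample_copula_regression(n, Sigma, inv_transforms, rng):
    """Draw observed (Y, X): latent Z_tilde ~ N_d(0, Sigma) with unit diagonal,
    then apply the strictly increasing inverses f_i^{-1} coordinate-wise."""
    d = Sigma.shape[0]
    Zt = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    obs = np.column_stack([inv_transforms[i](Zt[:, i]) for i in range(d)])
    return obs[:, 0], obs[:, 1:]  # response Y, covariate matrix X

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
# hypothetical monotone marginals: f0 = log (inverse exp), f1 = cube root (inverse cube)
Y, X = sample_copula_regression(1000, Sigma, [np.exp, lambda z: z ** 3], rng)
```

Because the transforms are strictly increasing, the ranks of the observed data coincide with those of the latent Gaussians, which is what the rank-based estimation below exploits.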
The goal of the present paper is to develop adaptive estimation and inference methods that achieve, without knowledge of the marginal transformations, the same performance in terms of convergence rates as that for high-dimensional linear regression. The rank-based Kendall's tau is used to extract the covariance information on the transformed data without requiring estimation of the transformations. Based on the covariance matrix estimator, an $\\ell_1$ regularized estimator is proposed to estimate $\\beta$, and a corresponding de-biased estimator is developed for the construction of confidence intervals and hypothesis tests. In addition, prediction of the response for a given value of the covariates is also considered. Theoretical properties of the procedures for estimation, prediction, and statistical inference are studied. The proposed estimator is shown to be rate-optimal under regularity conditions. The proposed estimation and inference methods share properties similar to those of the optimal procedures for high-dimensional linear regression, and are more flexible in the sense that they are adaptive to unknown monotone marginal transformations. For example, it is of practical interest to test whether a given covariate $X_i$ is related to the response $Y$. The proposed testing procedure enables one to test this hypothesis without the need of knowing or estimating the marginal transformations. In addition, the procedures are easy to implement and perform well numerically. The methods are also applied to analyze the Communities and Crime Unnormalized Data from the UCI Machine Learning Repository. Compared with other methods, such as those for the additive regression model and the single index model, a significant advantage of our proposed estimation and inference procedures is that they do not require estimation of the marginal transformations.
For example, one can select the important variables $x_i$ without any knowledge of the transformations $f_i$. This makes the methods more flexible and adaptive, while achieving the same optimal rate as that for high-dimensional linear regression. The rest of the paper is organized as follows. After basic notations and definitions are introduced, Section 2 presents the $\\ell_1$ penalized minimization procedure for estimating $\\beta$ that uses a rank-based correlation matrix estimator. Prediction is also considered. Section 3 constructs a de-biased estimator and establishes an asymptotic normality result. Confidence intervals and hypothesis tests are developed based on the limiting distribution. The numerical performance of the proposed estimation and inference procedures is investigated in Section 4. A brief discussion is given in Section 5, and the main results are proved in Section 6. 2 Adaptive Estimation and Prediction We consider adaptive estimation and prediction in this section. We first introduce the rank-based correlation matrix estimator, which extracts covariance information on the transformed data without requiring estimation of the marginal transformations, and then present the estimation and prediction procedures and their theoretical properties. We begin with the basic notations and definitions. Throughout the paper, we use bold-faced letters for vectors. For a vector $u\\in\\mathbb{R}^p$ and $1\\le q\\le\\infty$, the $\\ell_q$ norm is defined as $\\|u\\|_q=\\left(\\sum_{i=1}^{p}|u_i|^q\\right)^{1/q}$, with $\\|u\\|_\\infty=\\max_i|u_i|$. In addition, $u[i:j]$ denotes the entries of $u$ from the $i$-th to the $j$-th coordinate, and $\\mathrm{supp}(u)$ is the support of $u$. For a matrix $A\\in\\mathbb{R}^{p\\times p}$ and $1\\le q\\le\\infty$, the matrix $\\ell_q$ operator norm is defined as $\\|A\\|_q=\\sup_{\\|u\\|_q=1}\\|Au\\|_q$. The spectral norm of $A$ is the $\\ell_2$ operator norm, and the $\\ell_1$ norm is the maximum absolute column sum.
For an integer $1\\le s\\le p$, the $s$-restricted spectral norm of $A$ is defined as $\\|A\\|_{2,s}=\\sup_{u\\in S^{p-1},\\,\\|u\\|_0=s}\\|Au\\|_2$, where $S^{p-1}$ is the unit sphere in $\\mathbb{R}^p$. The vector $\\ell_\\infty$ norm of a matrix $A$ is $|A|_\\infty=\\max_{i,j}|A_{ij}|$. For a symmetric matrix $A$, we use $\\lambda_{\\max}(A)$ and $\\lambda_{\\min}(A)$ to denote the largest and smallest eigenvalues of $A$, respectively, and $\\kappa(A)=\\lambda_{\\max}(A)/\\lambda_{\\min}(A)$ is the condition number. In addition, $\\circ$ denotes matrix element-wise multiplication and $\\otimes$ is the Kronecker product. Moreover, $\\mathrm{vec}(\\cdot)$ maps an $m\\times n$ matrix $A$ to a vector in $\\mathbb{R}^{mn}$ by stacking the columns of $A$ one by one. For sets of indices $I,J$, we let $A_{I,J}$ denote the submatrix formed by the rows in $I$ and the columns in $J$. $e^{(n)}_i$ is the $i$-th unit vector in $\\mathbb{R}^n$, with entries $e^{(n)}_{ij}=I\\{j=i\\}$ for $j=1,\\ldots,n$. $\\Phi(\\cdot)$ denotes the cumulative distribution function of the standard normal distribution. For two sequences of nonnegative real numbers, $a_n\\lesssim b_n$ means that there exists a constant $C$ not depending on $n$ such that $a_n\\le Cb_n$. Finally, we use $[d]$ to denote the set $\\{1,2,\\ldots,d\\}$. 2.1 Rank-Based Estimator of Correlation Matrix Recall the model (2). We use $(Y,X)$ to denote the observed data, with $Y\\in\\mathbb{R}^n$ and $X\\in\\mathbb{R}^{n\\times p}$ the design matrix with rows $X_1^{\\top},\\ldots,X_n^{\\top}$, and $(\\tilde{Y},\\tilde{X})$ to denote the latent data that possess the linear relationship. In addition, $Z_i^{\\top}:=(Y_i,X_i^{\\top})$ and $\\tilde{Z}_i^{\\top}:=(\\tilde{Y}_i,\\tilde{X}_i^{\\top})$. An essential quantity for the estimation of $\\beta$ and for inference under the Gaussian copula regression model (2) is the covariance matrix (or correlation matrix, as the diagonal is 1) $\\Sigma$ in (2). Since the marginal transformations $f_i$ are unknown and thus $(\\tilde{Y},\\tilde{X})$ are not directly accessible, the conventional sample covariance matrix is not available as an estimate of $\\Sigma$.
We thus need an alternative method to estimate the covariance/correlation matrix $\\Sigma$. Our approach is to use the rank-based Kendall's tau, which can be well estimated from the observed data $(Y_1,X_1^{\\top}),\\ldots,(Y_n,X_n^{\\top})$. This estimator is based on the following fact (see Section 3 of [15]): if $\\tilde{Z}_i\\stackrel{i.i.d.}{\\sim}N_d(0,\\Sigma)$ with $\\Sigma=(\\sigma_{jk})_{1\\le j,k\\le d}$, then $$\\sigma_{jk}=\\sin\\left(\\frac{\\pi}{2}\\tau_{jk}\\right),\\qquad(3)$$ where $\\tau_{jk}$ is called Kendall's tau and is defined as $$\\tau_{jk}=E[\\mathrm{sgn}(\\tilde{z}_{1j}-\\tilde{z}_{2j})\\,\\mathrm{sgn}(\\tilde{z}_{1k}-\\tilde{z}_{2k})],\\qquad(4)$$ with $\\tilde{Z}_i=(\\tilde{z}_{i1},\\tilde{z}_{i2},\\ldots,\\tilde{z}_{id})^{\\top}$, $i=1,2$, being two independent copies of $N_d(0,\\Sigma)$. Note that $\\tau_{jk}$ given in (4) is invariant under strictly increasing marginal transformations. This leads to an estimate of $\\tau_{jk}$ based on the observed data $Z_1,\\ldots,Z_n$ under the Gaussian copula regression model, $$\\hat{\\tau}_{jk}=\\frac{2}{n(n-1)}\\sum_{1\\le i_1<i_2\\le n}\\mathrm{sgn}(z_{i_1j}-z_{i_2j})\\,\\mathrm{sgn}(z_{i_1k}-z_{i_2k}),$$ together with the plug-in correlation estimate $\\hat{\\sigma}_{jk}=\\sin(\\frac{\\pi}{2}\\hat{\\tau}_{jk})$; the estimators $\\hat{\\Sigma}_{XX}$ and $\\hat{\\Sigma}_{XY}$ below denote the corresponding submatrices of $\\hat{\\Sigma}=(\\hat{\\sigma}_{jk})$. Algorithm 1: $\\ell_1$-regularized estimation of $\\beta$
Input: Observed pairs $(Y_1,X_1^{\\top}),\\ldots,(Y_n,X_n^{\\top})$, tuning parameter $\\lambda>0$.
Output: Regularized estimator $\\hat{\\beta}(\\lambda)$.
1: Construct the Kendall's tau based covariance estimators $\\hat{\\Sigma}_{XX}$ and $\\hat{\\Sigma}_{XY}$.
2: Set $$\\hat{\\beta}(\\lambda)=\\arg\\min_{\\beta\\in\\mathbb{R}^p}\\left\\{\\frac{1}{2}\\left(\\beta^{\\top}\\hat{\\Sigma}_{XX}\\beta-2\\hat{\\Sigma}_{YX}\\beta\\right)+\\lambda\\|\\beta\\|_1\\right\\}.\\qquad(9)$$
We now consider the properties of the estimator $\\hat{\\beta}(\\lambda)$ given in Algorithm 1. We first define the Restricted Strong Convexity (RSC) condition introduced in [25]. Definition 1 (RSC). For a given sparsity level $s\\le p$ and constant $\\alpha\\ge1$, define the set $\\mathcal{C}(s,\\alpha):=\\{\\theta\\in\\mathbb{R}^p:\\|\\theta_{S^c}\\|_1\\le\\alpha\\|\\theta_S\\|_1,\\,S\\subset\\{1,\\ldots,p\\},\\,|S|\\le s\\}$. We say a matrix $\\Sigma\\in\\mathbb{R}^{p\\times p}$ satisfies the restricted strong convexity (RSC) condition with constants $(\\gamma_1,s,\\alpha)$ if $\\theta^{\\top}\\Sigma\\theta\\ge\\gamma_1\\|\\theta\\|_2^2$ for all $\\theta\\in\\mathcal{C}(s;\\alpha)$.
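Algorithm 1 can be sketched end to end as follows. This is a minimal illustration under simplifying choices, not the authors' implementation: Kendall's tau is computed by brute force over all pairs, and (9) is solved by plain proximal gradient descent (ISTA) rather than a specialized solver.

```python
import numpy as np

def kendall_tau(x, y):
    # sample Kendall's tau: (2 / (n(n-1))) * sum over ordered pairs of sign concordances
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return np.sum(dx * dy) / (n * (n - 1))

def copula_correlation(Z):
    """Plug-in estimate sigma_jk = sin(pi/2 * tau_jk); rank-based, hence
    invariant under strictly increasing marginal transformations."""
    d = Z.shape[1]
    S = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            S[j, k] = S[k, j] = np.sin(np.pi / 2 * kendall_tau(Z[:, j], Z[:, k]))
    return S

def soft(v, t):
    # soft-thresholding, the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def copula_lasso(S_xx, s_xy, lam, n_iter=2000):
    """ISTA for (9): minimize 0.5 * b'S_xx b - s_xy'b + lam * ||b||_1."""
    step = 1.0 / np.linalg.eigvalsh(S_xx)[-1]
    b = np.zeros_like(s_xy)
    for _ in range(n_iter):
        b = soft(b - step * (S_xx @ b - s_xy), step * lam)
    return b

# invariance check: monotone marginal transforms leave the estimate unchanged
rng = np.random.default_rng(1)
Z = rng.multivariate_normal(np.zeros(2), [[1.0, 0.6], [0.6, 1.0]], size=1500)
S1 = copula_correlation(Z)
S2 = copula_correlation(np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3]))
assert np.allclose(S1, S2)
```

Because only ranks enter the estimator, transforming the columns by exp or a cube changes nothing, which is the adaptivity the paper emphasizes.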
The RSC condition is related to the restricted eigenvalue condition [2] used in the analysis of high-dimensional linear regression. See [25] for a more detailed discussion of the RSC. Theorem 2.1. Assume that $\\beta$ is $s$-sparse. Suppose that $\\kappa(\\Sigma)\\le M$ for some $M>0$, and that $\\Sigma_{XX}$ satisfies the RSC with constants $(\\gamma_1,s,3)$. Let $\\hat{\\beta}(\\lambda)$ be defined as in (9). If $s=o(\\frac{n}{\\log p})$ and the tuning parameter $\\lambda=C_1\\sqrt{\\frac{\\log p}{n}}$ is chosen with $C_1>2M$, then with probability at least $1-2p^{-1}$, $$\\|\\hat{\\beta}(\\lambda)-\\beta\\|_2\\lesssim\\sqrt{\\frac{s\\log p}{n}}\\quad\\text{and}\\quad\\|\\hat{\\beta}(\\lambda)-\\beta\\|_1\\lesssim s\\sqrt{\\frac{\\log p}{n}}.\\qquad(10)$$ Furthermore, if $|\\Sigma_{X_S,X_{S^c}}|_\\infty\\le1-\\alpha$ for some constant $\\alpha>0$, where $S=\\mathrm{supp}(\\beta)$ and $X_S$ is its corresponding index set in $\\Sigma$, and $\\min_{i\\in S}|\\beta_i|\\ge\\frac{8M}{\\gamma_1}\\left(1+\\frac{4(2-\\alpha)}{\\alpha}\\right)\\sqrt{\\frac{s\\log p}{n}}$, then for $\\lambda=\\frac{8M(2-\\alpha)}{\\alpha}\\sqrt{\\frac{s\\log p}{n}}$, with probability at least $1-2p^{-1}$, $$\\mathrm{sgn}(\\hat{\\beta}(\\lambda))=\\mathrm{sgn}(\\beta).\\qquad(11)$$ The convergence rates of $\\hat{\\beta}(\\lambda)$ under the $\\ell_1$ and $\\ell_2$ norm losses given in (10) match the minimax lower bounds for high-dimensional linear regression [32]. This implies that $\\hat{\\beta}(\\lambda)$ is minimax rate-optimal under the Gaussian copula regression model and achieves the same optimal rate attained by the regular Lasso for linear regression. In other words, the proposed procedure is adaptive to the unknown marginal transformations and gains this added flexibility for free in terms of the convergence rate. The result given in (11) shows that, under regularity conditions, $\\hat{\\beta}(\\lambda)$ is sign-consistent. 2.3 Prediction In addition to the estimation of $\\beta$, another problem of significant practical interest is predicting the response $Y^*$ for a given value of the covariates $x^*=(x^*_1,\\ldots,x^*_p)$ based on the Gaussian copula regression model (2).
In the oracle setting where the transformations $f_0,\\ldots,f_p$ and the coefficient vector $\\beta$ are known, the optimal prediction of the response is $$\\mu^*=f_0^{-1}\\left(\\sum_{i=1}^{p}f_i(x^*_i)\\beta_i\\right).$$ Our goal is to construct a predictor $\\hat{\\mu}^*$, based only on the observed data $(Y_1,X_1),\\ldots,(Y_n,X_n)$, that is close to the oracle predictor $\\mu^*$. Let $F_0$ be the cumulative distribution function of $Y$ and let $F_i$ be the cumulative distribution function of $X_i$ for $i=1,\\ldots,p$. For the sample version, let $\\hat{F}_0$ be the empirical cumulative distribution function of $\\{Y_1,\\ldots,Y_n\\}$ and let $\\hat{F}_i$ be the empirical cumulative distribution function of $\\{X_{i1},\\ldots,X_{in}\\}$ for $i=1,\\ldots,p$. Set $$\\hat{f}_i(t)=\\Phi^{-1}(\\hat{F}_i(t)).\\qquad(12)$$ For a given value of the covariates $x^*=(x^*_1,\\ldots,x^*_p)$, we define the predictor $$\\hat{\\mu}^*=\\hat{f}_0^{-1}\\left(\\sum_{i=1}^{p}\\hat{f}_i(x^*_i)\\hat{\\beta}(\\lambda)_i\\right),\\qquad(13)$$ where $\\hat{\\beta}(\\lambda)$ is the estimator given in (9) and $\\hat{f}_0^{-1}$ is the generalized inverse of $\\hat{f}_0$: $\\hat{f}_0^{-1}(t)=\\inf\\{x\\in\\mathbb{R}:\\hat{f}_0(x)\\ge t\\}$. We have the following result for the predictor $\\hat{\\mu}^*$. Theorem 2.2. Suppose the conditions in Theorem 2.1 hold. Suppose that for some constant $c>0$, $|f_0(v_1)-f_0(v_2)|\\ge c|v_1-v_2|$ for all $v_1,v_2\\in\\mathbb{R}$, and that $\\max_{i=1,\\ldots,p}F_i(x^*_i)\\in(\\delta^*,1-\\delta^*)$ for some constant $\\delta^*>0$. If $s=o\\left(\\sqrt{\\frac{n}{\\log p}}\\right)$, then the predictor $\\hat{\\mu}^*$ given in (13) satisfies, with probability at least $1-p^{-1}-n^{-1}$, $$|\\hat{\\mu}^*-\\mu^*|\\lesssim s\\sqrt{\\frac{\\log p}{n}}.$$ This error bound is tight. $f_0(\\mu^*)=\\sum_{i=1}^{p}f_i(x^*_i)\\beta_i$ can be viewed as a linear functional of $\\beta$ with unknown weights $f_i(x^*_i)$ (as the marginal transformations $f_i$ are unknown).
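The transform estimates (12) and the predictor (13) can be sketched as follows. The $n/(n+1)$ rescaling of the empirical CDF, used here to keep $\Phi^{-1}$ finite, is an implementation choice of this sketch, not something specified in the paper.

```python
import numpy as np
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf

def f_hat(sample):
    """Estimate f_i via (12): Phi^{-1} of the ECDF, rescaled by n/(n+1)."""
    srt = np.sort(np.asarray(sample, dtype=float))
    n = len(srt)
    return lambda t: PHI_INV(np.searchsorted(srt, t, side="right") / (n + 1))

def predict(y_sample, x_columns, beta_hat, x_star):
    """Predictor (13): fitted score, then the generalized inverse of f0_hat,
    i.e. the smallest observed y with f0_hat(y) >= score."""
    score = sum(f_hat(c)(x) * b for c, x, b in zip(x_columns, x_star, beta_hat))
    srt = np.sort(np.asarray(y_sample, dtype=float))
    n = len(srt)
    f0_vals = np.array([PHI_INV((i + 1) / (n + 1)) for i in range(n)])
    idx = int(np.searchsorted(f0_vals, score, side="left"))
    return srt[min(idx, n - 1)]

# sanity check: with no covariates the score is 0, so the prediction is the sample median
y = np.linspace(-3.0, 3.0, 999)
assert abs(predict(y, [], [], [])) < 0.01
```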
For high-dimensional linear regression, inference on linear functionals of $\\beta$ with known weights has been considered in [5], where a lower bound of order $s\\sqrt{\\frac{\\log p}{n}}$ was established for the estimation error and for the expected length of confidence intervals for linear functionals with \u201cdense\u201d weight vectors. 3 Statistical Inference We turn in this section to statistical inference for the Gaussian copula regression model. The Lasso estimator is inherently biased, as it is essential to trade off variance and bias in order to achieve the optimal estimation performance. For statistical inference such as confidence intervals and hypothesis tests, it is desirable to use (nearly) unbiased pivotal estimators. Such an approach has been used in the construction of confidence intervals for high-dimensional linear regression in the recent literature. See, for example, [5, 13, 37, 43]. We follow the same principle to de-bias the estimator $\\hat{\\beta}(\\lambda)$ given in Algorithm 1. We begin by noting that $\\hat{\\beta}(\\lambda)$ satisfies the Karush-Kuhn-Tucker (KKT) condition $$\\hat{\\Sigma}_{XX}\\hat{\\beta}(\\lambda)-\\hat{\\Sigma}_{XY}+\\lambda\\partial\\|\\hat{\\beta}(\\lambda)\\|_1=0,\\qquad(14)$$ where $\\partial\\|\\hat{\\beta}(\\lambda)\\|_1$ is the subgradient of the $\\ell_1$ norm $\\|\\cdot\\|_1$. Equation (14) can be rewritten as $$\\hat{\\Sigma}_{XX}(\\hat{\\beta}(\\lambda)-\\beta)+\\lambda\\partial\\|\\hat{\\beta}(\\lambda)\\|_1=\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\beta.$$
Suppose one has a good approximation of the \u201cinverse\u201d of $\\hat{\\Sigma}_{XX}$, say $M$; then $$M\\hat{\\Sigma}_{XX}(\\hat{\\beta}(\\lambda)-\\beta)+\\lambda M\\partial\\|\\hat{\\beta}(\\lambda)\\|_1=M(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\beta),$$ and it follows that $$\\left(\\hat{\\beta}(\\lambda)+\\lambda M\\partial\\|\\hat{\\beta}(\\lambda)\\|_1\\right)-\\beta=M(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\beta)+(I-M\\hat{\\Sigma}_{XX})(\\hat{\\beta}(\\lambda)-\\beta),\\qquad(15)$$ where $(I-M\\hat{\\Sigma}_{XX})(\\hat{\\beta}(\\lambda)-\\beta)$ is negligible under mild conditions. This analysis suggests the following de-biasing procedure: $$\\hat{\\beta}^u=\\hat{\\beta}(\\lambda)+\\lambda M\\partial\\|\\hat{\\beta}(\\lambda)\\|_1=\\hat{\\beta}(\\lambda)+M(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\hat{\\beta}(\\lambda)),$$ where the second equality follows from (14). We need to construct a matrix $M$ that is a good approximation of the \u201cinverse\u201d of $\\hat{\\Sigma}_{XX}$. We proceed with two objectives in mind: one is to control $|M\\hat{\\Sigma}_{XX}|_\\infty$, and the other is to control the variance of $\\hat{\\beta}^u_i$. The latter is for the precision of the statistical inference procedures; for example, the length of the confidence interval for $\\beta_i$ is proportional to the standard deviation of $\\hat{\\beta}^u_i$. Assuming that $(I-M\\hat{\\Sigma}_{XX})(\\hat{\\beta}(\\lambda)-\\beta)$ is negligible, the variance of $\\hat{\\beta}^u_i$ is determined by that of $m_i^{\\top}(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\beta)$, where $m_i$ is the $i$-th column of $M$. Let $u_i=(0,m_i^{\\top})^{\\top}$ and $v_0=(1,-\\beta^{\\top})^{\\top}\\in\\mathbb{R}^d$; then $$m_i^{\\top}(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\beta)=u_i^{\\top}\\hat{\\Sigma}v_0.$$
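The decomposition (15) behind the de-biasing step is purely algebraic, so it holds for any matrix $M$ and any pair of coefficient vectors; a quick numerical check, with all quantities below arbitrary stand-ins rather than the actual estimators:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 6
G = rng.standard_normal((p, p))
S_xx = G @ G.T + np.eye(p)        # stand-in for Sigma_hat_XX
s_xy = rng.standard_normal(p)     # stand-in for Sigma_hat_XY
beta = rng.standard_normal(p)     # "true" coefficients
beta_l = rng.standard_normal(p)   # any estimator beta_hat(lambda)
M = rng.standard_normal((p, p))   # any approximate inverse

beta_u = beta_l + M @ (s_xy - S_xx @ beta_l)   # de-biased estimator
rhs = M @ (s_xy - S_xx @ beta) + (np.eye(p) - M @ S_xx) @ (beta_l - beta)
assert np.allclose(beta_u - beta, rhs)         # identity (15)
```

The statistical content is entirely in choosing $M$ so that the second term is negligible; the identity itself requires nothing.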
It will be shown in Lemma 6.3 in Section 6 that the asymptotic variance of $\\sqrt{n}\\,u_i^{\\top}\\hat{\\Sigma}v_0$ is $$\\pi^2\\sigma^2_{g_1}(u_i):=\\pi^2\\mathrm{Var}(g_1(Z;u_i)),\\qquad(16)$$ where $g_1(Z;u_i)=E[g(Z,Z';u_i)\\,|\\,Z]$, and $g(Z,Z';u_i)$ is defined as $$g(Z,Z';u_i)=\\mathrm{sgn}(Z-Z')^{\\top}\\left(u_iv_0^{\\top}\\circ\\cos\\left(\\frac{\\pi}{2}T\\right)\\right)\\mathrm{sgn}(Z-Z')$$ for $Z,Z'\\stackrel{i.i.d.}{\\sim}N_d(0,\\Sigma)$ and $u_i\\in\\mathbb{R}^d$. We need a good estimate of $\\sigma^2_{g_1}(u_i)$. Note that (16) can be further expressed as $$\\sigma^2_{g_1}(u_i)=\\mathrm{Var}(g_1(Z;u_i))=\\mathrm{vec}\\left(u_iv_0^{\\top}\\circ\\cos\\left(\\frac{\\pi}{2}T\\right)\\right)^{\\top}\\cdot\\Sigma_{h_Z}\\cdot\\mathrm{vec}\\left(u_iv_0^{\\top}\\circ\\cos\\left(\\frac{\\pi}{2}T\\right)\\right).\\qquad(17)$$ Here $\\Sigma_{h_Z}=\\mathrm{Var}(h_Z(Z))\\in\\mathbb{R}^{d^2\\times d^2}$ is the covariance matrix of $h_Z(Z)=E[\\mathrm{sgn}(Z-Z')\\otimes\\mathrm{sgn}(Z-Z')\\,|\\,Z]\\in\\mathbb{R}^{d^2}$, which can be estimated by $$\\hat{\\Sigma}_{h_Z}=\\frac{1}{n}\\sum_{i=1}^{n}\\left(\\hat{h}_Z(Z_i)-\\frac{1}{n}\\sum_{i'=1}^{n}\\hat{h}_Z(Z_{i'})\\right)\\left(\\hat{h}_Z(Z_i)-\\frac{1}{n}\\sum_{i'=1}^{n}\\hat{h}_Z(Z_{i'})\\right)^{\\top},$$ where $\\hat{h}_Z(Z_i)=\\frac{1}{n-1}\\sum_{i'\\ne i}\\mathrm{sgn}(Z_i-Z_{i'})\\otimes\\mathrm{sgn}(Z_i-Z_{i'})$. Then a good estimate of $\\sigma^2_{g_1}(u_i)$ is given by $$\\hat{\\sigma}^2_{g_1}(u_i)=\\mathrm{vec}\\left(u_i\\hat{v}^{\\top}\\circ\\cos\\left(\\frac{\\pi}{2}\\hat{T}\\right)\\right)^{\\top}\\hat{\\Sigma}_{h_Z}\\,\\mathrm{vec}\\left(u_i\\hat{v}^{\\top}\\circ\\cos\\left(\\frac{\\pi}{2}\\hat{T}\\right)\\right),$$ where $\\hat{v}=(1,\\hat{\\beta}(\\lambda)^{\\top})^{\\top}$. We are ready to present the de-biasing procedure. To simplify the notation, we define $x(u):\\mathbb{R}^d\\to\\mathbb{R}^{d^2}$ with $x(u)=\\mathrm{vec}(uv_0^{\\top}\\circ\\cos(\\frac{\\pi}{2}T))$, and $\\hat{x}(u):\\mathbb{R}^d\\to\\mathbb{R}^{d^2}$ with $\\hat{x}(u)=\\mathrm{vec}(u\\hat{v}^{\\top}\\circ\\cos(\\frac{\\pi}{2}\\hat{T}))$. Then $\\sigma^2_{g_1}(u)=x(u)^{\\top}\\Sigma_{h_Z}x(u)$ and $\\hat{\\sigma}^2_{g_1}(u)=\\hat{x}(u)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u)$. Algorithm 2: De-biased estimator of $\\beta$
Input: Observed pairs $(Y_1,X_1^{\\top}),\\ldots,(Y_n,X_n^{\\top})$, parameters $a\\in(0,\\frac{1}{12})$, $b>0$, $\\mu>0$, $\\lambda>0$.
Output: De-biased estimator $\\hat{\\beta}^u$.
1: Construct the Kendall's tau based covariance estimators $\\hat{\\Sigma}_{XY}$ and $\\hat{\\Sigma}_{XX}$.
2: Let $$\\hat{\\beta}(\\lambda)=\\arg\\min_{\\beta\\in\\mathbb{R}^p}\\left\\{\\frac{1}{2}\\left(\\beta^{\\top}\\hat{\\Sigma}_{XX}\\beta-2\\hat{\\Sigma}_{YX}\\beta\\right)+\\lambda\\|\\beta\\|_1\\right\\}.\\qquad(18)$$
3: for $i=1,2,\\ldots,p$ do
4: Let $u_i$ be a solution of $$\\text{minimize}_{u\\in\\mathbb{R}^d}\\ \\hat{x}(u)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u)\\quad\\text{subject to}\\quad\\|\\hat{\\Sigma}_{XX}u[2:d]-e^{(p)}_i\\|_\\infty\\le\\mu,\\ u[1]=0,\\ b^{-1}n^{-a}\\le\\|u\\|_2\\le\\|u\\|_1\\le bn^{a/2}.\\qquad(19)$$
5: Set $M=(u_1[2:p+1],\\ldots,u_p[2:p+1])$. If any of the above problems is not feasible, then set $M=I_{p\\times p}$.
6: Define $\\hat{\\beta}^u$ as $$\\hat{\\beta}^u=\\hat{\\beta}(\\lambda)+M(\\hat{\\Sigma}_{XY}-\\hat{\\Sigma}_{XX}\\hat{\\beta}(\\lambda)).\\qquad(20)$$
Note that (19) is a convex program and can be solved efficiently. Let $K=\\cos(\\frac{\\pi}{2}\\hat{T})=(K_1,\\ldots,K_d)$, $\\check{u}=(u^{\\top},\\ldots,u^{\\top})^{\\top}\\in\\mathbb{R}^{d^2}$, and $\\check{D}=\\mathrm{diag}(v_1\\,\\mathrm{diag}(K_1),\\ldots,v_d\\,\\mathrm{diag}(K_d))$. Then $\\hat{\\sigma}^2_{g_1}(u)$ can be rewritten as $$\\hat{\\sigma}^2_{g_1}(u)=\\hat{x}(u)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u)=\\check{u}^{\\top}\\check{D}\\hat{\\Sigma}_{h_Z}\\check{D}\\check{u}.\\qquad(21)$$ Hence $\\hat{\\sigma}^2_{g_1}(u)$ is convex with respect to $u$. Since the constraints of (19) define a convex set of $u$, these two facts together imply that (19) is a convex program. Note that the first constraint in (19) ensures that $M$ is a good approximation of $\\hat{\\Sigma}_{XX}^{-1}$, and the third constraint is for the convenience of the theoretical analysis; in practice $b$ can be chosen sufficiently large so that it does not affect the numerical performance of the algorithm. The following theorem states the distributional property of $\\hat{\\beta}^u$ that will serve as the basis for the construction of the statistical inference procedures. Theorem 3.1.
Suppose for some constants $M_i>0$, $i=1,2,3$, that $\\frac{1}{M_1}\\le\\lambda_{\\min}(\\Sigma)\\le\\lambda_{\\max}(\\Sigma)\\le M_1$, $\\|\\Sigma^{-1}\\|_1<M_2$, and $\\lambda_{\\min}(\\Sigma_{h_Z})>M_3$. Suppose $s=o(\\frac{\\sqrt{n}}{\\log p})$, and that $\\mu=a\\sqrt{\\frac{\\log p}{n}}$ and $\\lambda=c\\sqrt{\\frac{\\log p}{n}}$ in Algorithm 2 are chosen with $a>4M_2$ and $c>2M_1^2$. Then for any fixed $1\\le i\\le p$ and for all $x\\in\\mathbb{R}$, $$\\lim_{n\\to\\infty}\\sup_{\\beta\\in\\mathbb{R}^{p-1},\\,\\|\\beta\\|_0\\le s}\\left|P\\left(\\frac{\\sqrt{n}(\\hat{\\beta}^u_i-\\beta_i)}{\\pi\\sqrt{\\hat{x}(u_i)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u_i)}}\\le x\\right)-\\Phi(x)\\right|=0.\\qquad(22)$$ Theorem 3.1 shows that the estimator $\\hat{\\beta}^u$ possesses a distributional property similar to that of the de-biased Lasso estimator in [13], although the observed data here have a linear relationship only after unknown transformations. The asymptotic normality result given in (22) can be used to construct confidence intervals and hypothesis tests for any given coordinate $\\beta_i$. Let $z_{\\alpha/2}=\\Phi^{-1}(1-\\alpha/2)$. Corollary 3.1. Suppose the conditions of Theorem 3.1 hold. Then for any given $1\\le i\\le p$, $$CI_i=\\left[\\hat{\\beta}^u_i-z_{\\alpha/2}\\pi\\sqrt{\\frac{\\hat{x}(u_i)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u_i)}{n}},\\ \\hat{\\beta}^u_i+z_{\\alpha/2}\\pi\\sqrt{\\frac{\\hat{x}(u_i)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u_i)}{n}}\\right]\\qquad(23)$$ is an asymptotically $(1-\\alpha)$ level confidence interval for $\\beta_i$. It is of practical interest to test whether a given covariate $X_i$ is related to the response $Y$. In the context of the Gaussian copula regression model, this can be formulated as testing the individual null hypothesis $H_{0,i}:\\beta_i=0$ versus the alternative $H_{1,i}:\\beta_i\\ne0$.
To test $H_{0,i}$ against $H_{1,i}$ at the nominal level $\\alpha$ for some $0<\\alpha<1$, based on the asymptotic normality result given in Theorem 3.1, we introduce the test $$\\hat{\\Psi}_i=I\\left(\\frac{\\sqrt{n}|\\hat{\\beta}^u_i|}{\\pi\\sqrt{\\hat{x}(u_i)^{\\top}\\hat{\\Sigma}_{h_Z}\\hat{x}(u_i)}}>z_{\\alpha/2}\\right).\\qquad(24)$$ Let $\\Psi_i$ be any test for testing $H_{0,i}:\\beta_i=0$ versus $H_{1,i}:\\beta_i\\ne0$. Define $\\alpha_n(\\Psi_i)$ to be the size of the test over the collection of $s$-sparse vectors, i.e., $\\alpha_n(\\Psi_i)=\\sup\\{P_{\\beta}(\\Psi_i=1):\\beta\\in\\mathbb{R}^p,\\|\\beta\\|_0\\le s,\\beta_i=0\\}$. For the power of the test, we consider the collection of $s$-sparse vectors with $|\\beta_i|\\ge\\gamma$ for some given $\\gamma>0$ and define the power $\\zeta_n(\\Psi_i;\\gamma)=\\inf\\{P_{\\beta}(\\Psi_i=1):\\beta\\in\\mathbb{R}^p,\\|\\beta\\|_0\\le s,|\\beta_i|\\ge\\gamma\\}$. Corollary 3.2. Suppose the conditions of Theorem 3.1 hold. The test $\\hat{\\Psi}_i$ defined in (24) satisfies $$\\lim_{n\\to\\infty}\\alpha_n(\\hat{\\Psi}_i)\\le\\alpha\\quad\\text{and}\\quad\\liminf_{n\\to\\infty}\\frac{\\zeta_n(\\hat{\\Psi}_i;\\gamma)}{\\zeta^*_n(\\gamma)}\\ge1,\\qquad(25)$$ where $\\zeta^*_n(\\gamma):=G\\left(\\alpha,\\frac{\\sqrt{n}\\gamma}{\\pi\\sigma_{g_1}(u)}\\right)$, with the function $G(\\cdot,\\cdot)$ defined by $G(\\alpha,u)=2-\\Phi(z_{\\alpha/2}+u)-\\Phi(z_{\\alpha/2}-u)$ for $0<\\alpha<1$ and $u\\in\\mathbb{R}^+$. Consider the problem of testing an individual null hypothesis $H_{0,i}:\\beta_i=0$ versus the alternative $H_{1,i}:\\beta_i\\ne0$ under the linear model $$\\tilde{Y}_i=\\tilde{X}_i^{\\top}\\beta+\\epsilon_i,\\quad i=1,2,\\ldots,n,\\qquad(26)$$ with $\\tilde{X}_i\\stackrel{i.i.d.}{\\sim}N(0,\\Sigma_{XX})$ and $\\epsilon_i\\sim N(0,\\sigma^2)$. As shown in [12], for any test $\\Psi_i$, if $\\alpha_n(\\Psi_i)\\le\\alpha$, then $$\\limsup_{n\\to\\infty}\\zeta_n(\\Psi_i;\\gamma)\\le G\\left(\\alpha,\\frac{\\sqrt{n}\\gamma}{\\sigma_d}\\right),$$ where $\\sigma_d=\\sigma\\sqrt{\\sigma_{ii}-\\Sigma_{i,S}\\Sigma_{S,S}^{-1}\\Sigma_{S,i}}$.
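Given the de-biased coordinate and the variance estimate $\hat{x}(u_i)^{\top}\hat{\Sigma}_{h_Z}\hat{x}(u_i)$, the interval (23) and the test (24) reduce to a few lines; the numeric inputs below are hypothetical values for illustration only.

```python
import numpy as np
from statistics import NormalDist

def ci_and_test(beta_u_i, var_hat, n, alpha=0.05):
    """CI (23) and test (24) for one coordinate: the asymptotic standard
    deviation of beta_u_i is pi * sqrt(var_hat / n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * np.pi * np.sqrt(var_hat / n)
    # rejecting when |beta_u_i| exceeds the half-width is the z-test (24)
    return (beta_u_i - half, beta_u_i + half), abs(beta_u_i) > half

ci, reject = ci_and_test(beta_u_i=0.4, var_hat=0.02, n=400)
assert ci[0] < 0.4 < ci[1]
```

Note the extra factor of pi relative to the usual de-biased Lasso interval, inherited from the sine transform of Kendall's tau in (16).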
Hence, our test $\\hat{\\Psi}_i$ has nearly optimal power in the following sense: it has power at least as large as the power of any other test $\\Psi_i$ based on a sample of size $\\frac{n}{C_d}$, where the factor $C_d=\\frac{\\pi\\sigma_{g_1}(u_i)}{\\sigma_d}$. These results show that the proposed confidence intervals and hypothesis tests share properties similar to those of the optimal procedures for high-dimensional linear regression. They are more flexible in the sense that they are adaptive to unknown monotone marginal transformations. 4 Numerical Performance The proposed estimation and inference procedures are easy to implement. In this section we investigate through simulations the numerical performance of the adaptive estimator (9), denoted by $\\hat{\\beta}_{Copula}(Y,X)$, as well as the confidence interval procedure. The procedures are also applied to the analysis of the Communities and Crime Unnormalized Data from the UCI Machine Learning Repository. 4.1 Simulation Results for Estimation Accuracy We first consider the performance of the proposed estimator for the Gaussian copula regression model defined in (9) by comparing its root mean square error and model selection error with those of the regular Lasso estimator $\\hat{\\beta}_{Lasso}(Y,X)$, which is applied to $(Y,X)$ directly, and the Lasso estimator $\\hat{\\beta}_{Lasso}(\\tilde{Y},\\tilde{X})$, which is applied to $(\\tilde{Y},\\tilde{X})$, in which case we assume the marginal transformations $f_i$ are known and $\\tilde{Y}$ is linear in $\\tilde{X}$. The simulation setup is as follows. Four cases, $(n,p,s)=(150,50,10)$, $(300,100,20)$, $(400,200,20)$ and $(400,400,40)$, are analyzed. In each case, we first generate a random Gaussian matrix $A=(a_{i,j})_{1\\le i,j\\le d}$, where $d=p+1$ and $a_{i,j}\\stackrel{i.i.d.}{\\sim}N(0,1)$; we then make the last $p-s$ columns of $A$ orthogonal to the first column of $A$ via the Gram-Schmidt process, obtaining a matrix $B$.
We then set the covariance matrix $\\Sigma=D^{-1/2}(B^{\\top}B+I)^{-1}D^{-1/2}$, where $D=\\mathrm{diag}((B^{\\top}B+I)^{-1})$. This procedure zeroes out the last $p-s$ entries in the first column of $\\Sigma^{-1}$ and guarantees that the diagonal of $\\Sigma$ is one. Finally, we generate $n$ i.i.d. samples $(\\tilde{Y}_i,\\tilde{X}_i^{\\top})\\sim N_d(0,\\Sigma)$. For each choice of $(n,p,s)$, we consider two settings. In the first setting, we set $Y_i=\\exp(\\tilde{Y}_i)$, $X_{ij}=2\\tilde{X}_{ij}^5+1$ for $j=1,2,\\ldots,10$, and $X_{ij}=-\\exp(\\tilde{X}_{ij})$ for $j=11,12,\\ldots,30$, except for $X_{i,21}=\\Phi(\\tilde{X}_{i,21})$, which is bounded by 0 and 1. In the second setting, we constrain $Y_i\\in[0,1]$ and set $Y_i=\\Phi(\\tilde{Y}_i)$, with the $X_{ij}$ transformed in the same way as in the first setting. In each setting, the simulation is repeated $N_{sim}=500$ times and the tuning parameter $\\lambda$ is selected via 5-fold cross-validation. The accuracy of the estimators is measured by the average root mean square error $$e_{est}=\\frac{1}{N_{sim}}\\sum_{i=1}^{N_{sim}}\\|\\hat{\\beta}-\\beta\\|_2,$$ and by the model selection error $$e_{selection}=\\frac{1}{N_{sim}}\\sum_{i=1}^{N_{sim}}\\left(\\frac{1}{p}\\sum_{j=1}^{p}I(\\hat{\\beta}_j\\ne\\beta_j)\\right).$$ The simulation results for the three estimators $\\hat{\\beta}_{Copula}(Y,X)$, $\\hat{\\beta}_{Lasso}(\\tilde{Y},\\tilde{X})$ and $\\hat{\\beta}_{Lasso}(Y,X)$ are summarized in Table 1.
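The covariance construction described above can be sketched directly; this is a reading of the described procedure, not the authors' code, and the dimensions below are smaller than in the paper.

```python
import numpy as np

def make_sigma(p, s, rng):
    """Gram-Schmidt the last p - s columns of a Gaussian matrix A against its
    first column, then rescale (B'B + I)^{-1} to unit diagonal."""
    d = p + 1
    A = rng.standard_normal((d, d))
    B = A.copy()
    a0 = A[:, 0] / np.linalg.norm(A[:, 0])
    for j in range(s + 1, d):               # the last p - s columns
        B[:, j] -= (a0 @ B[:, j]) * a0
    P = np.linalg.inv(B.T @ B + np.eye(d))
    Dinv = np.diag(1.0 / np.sqrt(np.diag(P)))
    return Dinv @ P @ Dinv

rng = np.random.default_rng(5)
s = 5
Sigma = make_sigma(p=20, s=s, rng=rng)
assert np.allclose(np.diag(Sigma), 1.0)
# Sigma^{-1} = D^{1/2}(B'B + I)D^{1/2}, so its first column vanishes on the
# orthogonalized coordinates, matching the sparsity claim in the text
Theta = np.linalg.inv(Sigma)
assert np.max(np.abs(Theta[s + 1:, 0])) < 1e-6
```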
                     $\\hat\\beta_{Copula}(Y,X)$      $\\hat\\beta_{Lasso}(\\tilde Y,\\tilde X)$   $\\hat\\beta_{Lasso}(Y,X)$
(n, p, s)            e_selection   e_est           e_selection   e_est                    e_selection   e_est
(150, 50, 10)_1      0.119         1.296           0.122         1.301                    0.236         1.680
(150, 50, 10)_2      0.119         1.296           0.122         1.301                    0.324         1.721
(300, 100, 20)_1     0.121         1.698           0.116         1.666                    0.247         2.306
(300, 100, 20)_2     0.121         1.698           0.116         1.666                    0.453         4.334
(400, 200, 20)_1     0.068         2.202           0.082         1.554                    0.143         1.799
(400, 200, 20)_2     0.068         2.202           0.082         1.554                    0.395         4.712
(400, 400, 40)_1     0.098         2.104           0.094         2.021                    0.123         2.114
(400, 400, 40)_2     0.098         2.104           0.094         2.021                    0.325         9.854
Table 1: Simulation results for the synthetic data described in Section 4. The results correspond to the model selection error $e_{selection}$ and the estimation error $e_{est}$ for $\\hat\\beta_{Copula}(Y,X)$, $\\hat\\beta_{Lasso}(\\tilde Y,\\tilde X)$ and $\\hat\\beta_{Lasso}(Y,X)$. The subscript $i$ ($i=1,2$) in $(n,p,s)_i$ denotes the $i$-th setting of transformations. Table 1 shows that the performance of the proposed estimator $\\hat\\beta_{Copula}(Y,X)$, which does not require knowledge of the marginal transformations $f_i$, is as good as that of the oracle estimator $\\hat\\beta_{Lasso}(\\tilde Y,\\tilde X)$, which assumes full knowledge of the transformations $f_i$. As expected, applying the Lasso estimator directly to the observed data leads to severely problematic model selection and parameter estimation. 4.2 Simulation Results for Statistical Inference We now consider the performance of the proposed confidence interval $CI_i$ for the $i$-th coordinate $\\beta_i$ given in (23), based on the observed data $(Y_i,X_i^{\\top})$, in terms of the coverage probability and the expected length. In this section we denote the de-biased estimator in (20) by $\\hat\\beta^u_{Copula}(Y,X)$.
This confidence interval is compared with the confidence interval proposed in [13] based on the transformed data (Y_i, X_i⊤) with the de-biased estimator β̂^u_Lasso(Y, X), and with that of β̂^u_Lasso(Ỹ, X̃) on the original data (Ỹ_i, X̃_i⊤), which assumes the marginal transformations f_i are known. In all simulations we set the significance level α = 0.05 and consider three cases: (n, p, s) = (150, 50, 10), (300, 100, 20) and (400, 200, 20). In each setting, the simulation is repeated 500 times. The tuning parameter λ is selected via 5-fold cross-validation, and μ, a, b in Algorithm 2 are set to (1/2)√(log p / n), 1/13 and 10, respectively. We find that the results are robust to the choice of μ, a and b. Recall that β is constructed with its first s elements nonzero; we therefore construct 95% confidence intervals for the nonzero (active) coefficient β_1. Table 2 summarizes the empirical coverage probability of the nominal 95% confidence intervals for β_1 and the corresponding average lengths. The results show that the empirical coverage probability of β̂^u_Copula(Y, X) is very close to the nominal confidence level, while constructing confidence intervals based on β̂^u_Lasso(Y, X) is problematic: when the de-biased Lasso estimator is applied directly to the observed data, the empirical coverage for an active coefficient falls far below the nominal level. The confidence interval constructed from β̂^u_Copula(Y, X) performs as well as that constructed from β̂^u_Lasso(Ỹ, X̃), which requires additional knowledge of the transformations. In particular, our method yields stable confidence interval lengths, while the lengths of the confidence intervals constructed from β̂^u_Lasso(Y, X) vary considerably with the scale of the data.
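The coverage and length summaries reported in Table 2 can be computed with a short helper. This is a sketch of the evaluation step only, taking the per-repetition confidence interval endpoints as given; the toy endpoints below are hypothetical.

```python
import numpy as np

def coverage_and_length(lower, upper, beta1):
    # Empirical coverage probability and average length of the confidence
    # intervals for beta_1, over repeated simulations.
    lower, upper = np.asarray(lower), np.asarray(upper)
    covered = (lower <= beta1) & (beta1 <= upper)
    return covered.mean(), (upper - lower).mean()

# Toy endpoints from three hypothetical repetitions, with true beta_1 = 1.
cov, length = coverage_and_length([0.90, 1.10, 0.80], [1.20, 1.30, 1.05], 1.0)
```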
                   CI(β̂^u_Copula(Y, X))  CI(β̂^u_Lasso(Ỹ, X̃))  CI(β̂^u_Lasso(Y, X))
(n, p, s)          l(β_1)   C(β_1)       l(β_1)   C(β_1)       l(β_1)   C(β_1)
(150, 50, 10)_1    0.252    0.950        0.333    0.946        0.025    0.150
(150, 50, 10)_2    0.252    0.950        0.333    0.946        0.020    0.020
(300, 100, 20)_1   0.284    0.942        0.312    0.968        0.014    0.076
(300, 100, 20)_2   0.284    0.942        0.312    0.968        0.001    0.014
(400, 200, 20)_1   0.263    0.958        0.282    0.942        0.016    0.082
(400, 200, 20)_2   0.263    0.958        0.282    0.942        0.001    0.012

Table 2: Simulation results for the synthetic data described in Section 4. The results correspond to the 95% confidence intervals. C(β_1) and l(β_1) stand for the coverage probability and the average length, respectively, of the confidence interval for β_1. The subscript i (i = 1, 2) in (n, p, s)_i denotes the i-th setting of transformations.

4.3 Analysis of Communities and Crime Unnormalized Data

We now apply our estimation and inference procedures to a real data example. The Communities and Crime Unnormalized Data from the UCI Machine Learning Repository combines socioeconomic data from the 1990 Census, law enforcement data from the 1990 Law Enforcement Management and Administration Stats survey, and crime data from the 1995 FBI UCR. This dataset has been analyzed in [4, 31]. In this example, we focus on explaining the response variable, the percentage of women who are divorced, using various community characteristics, such as the percentage of the population that is African American and the percent of people in owner-occupied households, as well as law enforcement and crime information, such as the percent of officers assigned to drug units. To further explore the high-dimensional setting, we use the state-level data of Pennsylvania, for which the number of predictors is at least as large as the number of observations.
After removing the variables with NAs and two variables directly related to the response (total and male divorce percentages), the data has 101 observations and 114 predictors. To evaluate the performance of the proposed methods, we randomly split the data into a training set with 70 observations and a test set with 31 observations. We perform such a split 100 times; each time the proposed model is fitted on the training set and the root mean square error (RMSE) of the prediction (13) is computed on the test set. Over the 100 random splits of the data, the average RMSE of our method is 1.38. In comparison, applying the regular Lasso to this dataset yields an average RMSE of 3.28. The predicted values of the proposed estimator and of the Lasso estimator are plotted against the observed values for one of the test sets in Figure 1.

Figure 1: Predicted values by the proposed estimator (top) and the Lasso estimator (bottom) plotted against the observed values in the test set, for the divorce percentage of women in the Pennsylvania Communities and Crime Data.

In addition, we use the proposed method for model selection. Applying the procedure to the Communities and Crime Unnormalized Data selects four variables to explain the percentage of women who are divorced: PctFam2Par (percentage of families headed by two parents); PctKidsBornNeverMar (percentage of kids born to never-married parents); PctPersOwnOccup (percent of people in owner-occupied households); and PctSameHouse85 (percent of people living in the same house as in 1985). This selection procedure correctly excludes all of the law enforcement and crime information, as well as irrelevant community characteristics such as the percentage of the population that is African American and the percentage of people 16 and over who are employed in manufacturing.
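The repeated random-split evaluation protocol above can be sketched as follows. `fit` and `predict` are placeholders for the actual fitting procedure; the ridge-regularized least squares below is a hypothetical stand-in, not the paper's copula estimator.

```python
import numpy as np

def repeated_split_rmse(X, y, fit, predict, n_splits=100, n_train=70, seed=0):
    # Average test RMSE over repeated random train/test splits.
    rng = np.random.default_rng(seed)
    n = len(y)
    rmses = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        tr, te = idx[:n_train], idx[n_train:]
        model = fit(X[tr], y[tr])
        resid = predict(model, X[te]) - y[te]
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(rmses))

# Ridge stand-in for the fitting step.
def ridge_fit(X, y, lam=1.0):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def ridge_predict(beta, X):
    return X @ beta
```

With 101 observations and n_train = 70, each split leaves 31 test observations, matching the protocol described in the text.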
In addition, the selected variables all concern family and housing, which are directly related to the divorce percentage.

5 Discussion

The Gaussian copula regression model is more flexible than the conventional linear model, as it allows for unknown marginal monotone transformations. The present paper proposes procedures for estimation and statistical inference that are adaptive to the unknown transformations. This is a significant advantage over other methods, such as those for the additive regression model and the single index model. An important observation is that the objective function for the penalized least squares in classical high-dimensional regression only requires the sample covariances between X and Y, which can be replaced by a Kendall's tau based estimator under the Gaussian copula regression model. This idea can also be generalized to high-dimensional sparse multivariate regression. For example, under the linear model, the regularized estimator proposed in [34] and the block-structured regularized estimator introduced in [28] only require knowledge of X⊤X and X⊤Y; these can be replaced by the Kendall's tau based estimators Σ̂_XX and Σ̂_XY under the Gaussian copula model, and an analogous analysis can be carried out to establish estimation consistency and inference results. Similar ideas can be applied to other related models, such as additive models in a Reproducing Kernel Hilbert Space (RKHS). In an RKHS, the fitting procedure only requires the inner products among data points, and the proposed Algorithm 2 can be modified, via the dual representation, for the construction of confidence intervals for additive models in an RKHS. It is also possible to extend the model to discrete data and mixed data, using an idea similar to that in [8]. These are interesting topics for future work.
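The key replacement described above, substituting the Kendall's tau based correlation estimator sin((π/2)τ̂) for the sample correlations, can be sketched as follows (a naive O(n²d²) implementation for illustration):

```python
import numpy as np

def kendall_tau_matrix(Z):
    # Pairwise Kendall's tau between the columns of Z, computed naively
    # from the signs of all pairwise differences.
    n, d = Z.shape
    iu = np.triu_indices(n, 1)
    T = np.empty((d, d))
    for j in range(d):
        for k in range(d):
            s = np.sign(Z[:, None, j] - Z[None, :, j]) * \
                np.sign(Z[:, None, k] - Z[None, :, k])
            T[j, k] = s[iu].mean()
    return T

def rank_correlation_estimate(Z):
    # Sigma-hat = sin(pi/2 * tau-hat): consistent for the latent Gaussian
    # correlation matrix under the copula model, and invariant to the
    # unknown monotone marginal transformations.
    return np.sin(np.pi / 2.0 * kendall_tau_matrix(Z))
```

Because Kendall's tau depends only on the signs of pairwise differences, applying any strictly increasing transformation to a coordinate leaves the estimate unchanged, which is exactly why the procedure adapts to the unknown marginals.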
Rank-based correlation matrix estimation has been studied in a number of settings, including the nonparanormal graphical model [1, 17, 39], high-dimensional structured covariance/precision matrix estimation [17, 18, 39], and the sparse PCA model [10, 24]. In the present paper, we only consider the Kendall's tau based estimator. Alternatively, one may use Spearman's rho; the results are similar and the same techniques apply.

6 Proofs

We prove the main results in this section. We begin by collecting a few technical lemmas that will be used in the proofs of the main results. These lemmas are proved at the end of this section.

6.1 Technical Tools

The first lemma shows that the sign vector of a Gaussian random vector is sub-Gaussian.

Lemma 6.1. If Z ∼ N_d(0, Σ), then sgn(Z) = (sgn(Z_1), ..., sgn(Z_d))⊤ is a random vector with sub-Gaussian constant less than π·κ(Σ); that is, for any w ∈ S^{d−1}, E[exp(t·w⊤sgn(Z))] ≤ exp(t²·π·κ(Σ)).

The next lemma characterizes the convergence rates of the Kendall's tau based correlation matrix estimator Σ̂ under different norms.

Lemma 6.2. If Σ̂ is an estimator of Σ based on Kendall's tau, and κ(Σ) ≤ M for some M > 0, then

1. P(|Σ̂ − Σ|_∞ ≲ √(log p / n)) ≥ 1 − 2p^{−2};

2. P(‖Σ̂ − Σ‖_2 ≲ max{√((p + t)/n), (p + t)/n}) ≥ 1 − e^{−t};

3. P(‖Σ̂ − Σ‖_{2,s} ≲ √(s log p / n)) ≥ 1 − p^{−s}.

Lemma 6.3 below captures the asymptotics of certain U-statistics, which will be used to establish the asymptotic results for the proposed estimator.

Lemma 6.3.
For i = 1, 2, ..., p, let Hi = ui[2 : d]\u22a4(\u02c6 \u03a3XY \u2212\u02c6 \u03a3XX\u03b2) = u\u22a4 i \u02c6 \u03a3v0, where v0 = (1, \u2212\u03b2\u22a4)\u22a4, then the asymptotic variance of \u221anHi is \u03c02\u03c32 g1(ui), and moreover, lim n\u2192\u221esup x\u2208R |P( \u221an(Hi \u2212E[Hi]) \u03c0\u03c3g1(ui) \u2264x) \u2212\u03a6(x)| = 0, 20 \fwhere \u03c3g1(ui) is de\ufb01ned in (17). Lemmas 6.4, 6.5, 6.6, and 6.7 control the vanishing terms in the construction of con\ufb01dence intervals for each coordinate \u03b2i, and all of these four lemmas are stated under the conditions of Theorem 3.1. We use u to denote ui the solution to (19) for any \ufb01xed i. Lemma 6.4. If we take \u00b5 = C q log p n and a, b > 0 in Algorithm 2 for large C, then with probability at least 1 \u22122p\u22122, the optimization problem (19) is feasible when n is large, that is, |\u03a3\u22121 XX \u02c6 \u03a3XX \u2212I|\u221e\u2264\u00b5, and b\u22121n\u2212a \u2264||u||2 \u2264||u||1 \u2264bna/2. Lemma 6.5. Let \u03a3hZ = Var(hZ(Z)) \u2208Rd2\u00d7d2 be the covariance matrix of hZ(Z) = E[sgn(Z \u2212 Z\u2032) \u2297sgn(Z \u2212Z\u2032)|Z], with \u2297being the Kronecker product, and its corresponding estimator \u02c6 \u03a3hZ is \u02c6 \u03a3hZ = 1 n P i(\u02c6 hZ(Zi)\u22121 n P i\u2032 \u02c6 hZ(Zi\u2032))(\u02c6 hZ(Zi)\u22121 n P i\u2032 \u02c6 hZ(Zi\u2032))\u22a4, with \u02c6 hZ(Zi) = 1 n\u22121 P i\u2032\u0338=i sgn(Zi \u2212 Zi\u2032) \u2297sgn(Zi \u2212Zi\u2032). Then with probability at least 1 \u22125p\u22122, |x(u)\u22a4(\u02c6 \u03a3hZ \u2212\u03a3hZ)x(u)| \u2272 r s log p n1\u22122a . Lemma 6.6. Let x(u) = vec(uv\u22a4 0 \u25e6cos( \u03c0 2 T)) and \u02c6 x(u) = vec(u\u02c6 v\u22a4\u25e6cos( \u03c0 2 \u02c6 T)), then with probability at least 1 \u2212p\u22122, ||x(u) \u2212\u02c6 x(u)||1 \u2272na r s log p n . Lemma 6.7. Let \u03c3g1(u) be de\ufb01ned as in (17) with u is the solution to (19) with any \ufb01xed i, then \u03c32 g1(u) \u2273n\u22122a. 
The following lemma provides a tight, pointwise deviation inequality of empirical cumulative distribution function, which will be used to establish the consistency of the proposed predictor. Lemma 6.8. (Adapted from [44]) Let \u02c6 fi be de\ufb01ned as (12) for i \u2208{0, 1, , ..., p}, then for any \u03f5 \u2208(0, \u221a 2\u03c0], and \u03b3 \u2208(0, 2), and t \u2208R such that |fi(t)| \u2264\u221a\u03b3 log n, we have P(| \u02c6 fi(t) \u2212fi(t)| \u2265\u03f5) \u22642 exp(\u2212 n1\u2212\u03b3/2 12\u03c0 \u221a 2\u03c0\u221a\u03b3 log n\u03f52) \u22123 log(8\u03c0n\u03b3 log n) exp(\u2212 1 64 \u221a 2\u03c0 n1\u2212\u03b3/2 \u221alog n), where Fi(t) = \u03a6(fi(t)). 21 \f6.2 Proof of Theorem 2.1 This proof relies on the Corollary 1 in [25] and Theorem 3.4 in [16]: Lemma 6.9. (An adapted version of Corollary 1 in [25] ) If the loss function L(\u03b2) = \u03b2\u22a4\u02c6 \u03a3XX\u03b2 \u22122\u02c6 \u03a3Y X\u03b2 + 1 satis\ufb01es restricted strong convexity (RSC), that is \u03b4L(\u2206, \u03b2) def = L(\u03b2 + \u2206) \u2212L(\u03b2) \u2212\u27e8\u2207L(\u03b2), \u2206\u27e9\u2265\u03baL||\u2206||2 2, (27) for some \u03baL > 0 and \u2206\u2208C(s) := {\u03b8 \u2208Rp : ||\u03b8Sc||1 \u22643||\u03b8S||1, |S| \u2264s}. Then for \u03bb \u22652||\u2207L(\u03b2)||\u221e, any optimal solution \u02c6 \u03b2(\u03bb) to the convex program (9) satis\ufb01es the bound || \u02c6 \u03b2(\u03bb) \u2212\u03b2||2 \u2272\u221as\u03bb, || \u02c6 \u03b2(\u03bb) \u2212\u03b2||1 \u2272s\u03bb. Lemma 6.10. (An adapted version of Theorem 3.4 in [16]) If we further assume |\u03a3XSXSc|\u221e\u2264 1 \u2212\u03b1 for some \u03b1 > 0 and S = supp(\u03b2) and mini\u2208S |\u03b2i| \u2265 8 \u03b31 (1 + 4(2\u2212\u03b1) \u03b1 )M q s log p n , then for \u03bb = 8(2\u2212\u03b1) \u03b1 M q s log p n , with probability at least 1 \u22122p\u22121, sgn( \u02c6 \u03b2) = sgn( \u02c6 \u03b2(\u03bb)). 
Therefore, to prove Theorem 2.1, it is su\ufb03cient to verify (27) and calculate ||\u2207L(\u03b2)||\u221e. We divide these into two steps. Step 1 By the de\ufb01nition of \u03b4L(\u2206, \u03b2), \u03b4L(\u2206, \u03b2) =L(\u03b2 + \u2206) \u2212L(\u03b2) \u2212\u27e8\u2207L(\u03b2), \u2206\u27e9 =1 2(\u03b2 + \u2206)\u22a4\u02c6 \u03a3XX(\u03b2 + \u2206) \u2212\u02c6 \u03a3Y X(\u03b2 + \u2206) \u22121 2\u03b2\u22a4\u02c6 \u03a3XX\u03b2 + \u02c6 \u03a3Y X\u03b2 \u2212\u2206\u22a4(\u02c6 \u03a3XX\u03b2 \u2212\u02c6 \u03a3XY ) =1 2\u2206\u22a4\u02c6 \u03a3XX\u2206. Before proving (27), we state the adapted version of reduction principle from [35]. 22 \fLemma 6.11. (The adapted version of Theorem 10 in [35]) Let \u03b4 \u2208(0, 1 5) and k0 = 3. Then there exists a constant C0 that is not dependent with n, p, s, such that \u02dc s = C0s and let E(\u02dc s) = {w \u2208Rp : ||w||0 = \u02dc s} for \u02dc s < p and E = Rp otherwise. If \u02c6 \u03a3XX satis\ufb01es \u2200w \u2208E(\u02dc s) (1 \u2212\u03b4)||w||2 2 \u2264w\u22a4\u02c6 \u03a3XXw \u2264(1 + \u03b4)||w||2 2. (28) Then for any w \u2208C(s), (1 \u22125\u03b4)||w||2 2 \u2264w\u22a4\u02c6 \u03a3XXw \u2264(1 + 3\u03b4)||w||2 2 (29) The above claim implies that it is su\ufb03cient to show, for \u2206\u2208E(\u02dc s) = {w \u2208Rp : ||w||0 = \u02dc s} and some \u03b4 \u2208(0, 1/5), |\u2206\u22a4\u02c6 \u03a3XX\u2206| \u2265(1 \u2212\u03b4)||\u2206||2. 
Then Lemma 6.2.2 together with the fact that the spectral norm of a submatrix is bounded by the spectral norm of the whole matrix, for \u2206\u2208{w \u2208Rp : ||w||0 = \u02dc s}, with probability at least 1 \u2212p\u22122, we have |\u2206\u22a4\u02c6 \u03a3XX\u2206| =|\u2206\u22a4\u03a3XX\u2206+ \u2206\u22a4(\u02c6 \u03a3XX \u2212\u03a3XX)\u2206| \u2265|\u2206\u22a4\u03a3XX\u2206| \u2212|\u2206\u22a4(\u02c6 \u03a3XX \u2212\u03a3XX)\u2206| \u2265|\u2206\u22a4\u03a3XX\u2206| \u2212||\u02c6 \u03a3XX \u2212\u03a3XX||2,\u02dc s \u00b7 ||\u2206||2 2 \u2265|\u2206\u22a4\u03a3XX\u2206| \u2212 r C0s log p n ||\u2206||2 2 \u2265\u03b31||\u2206||2 2 \u2212 r C0s log p n ||\u2206||2 2. Therefore (27) holds when s log p/n \u21920. 23 \fStep 2: ||\u2207L(\u03b2)||\u221e= ||\u02c6 \u03a3XX\u03b2 \u2212\u02c6 \u03a3XY ||\u221e= ||\u02c6 \u03a3XX\u03a3\u22121 XX\u03a3XY \u2212\u02c6 \u03a3XY ||\u221e =||(\u02c6 \u03a3XX \u2212\u03a3XX)\u03a3\u22121 XX\u03a3XY + \u03a3XY \u2212\u02c6 \u03a3XY ||\u221e =||(\u02c6 \u03a3XX \u2212\u03a3XX)\u03b2 + \u03a3XY \u2212\u02c6 \u03a3XY ||\u221e \u2264||(\u02c6 \u03a3 \u2212\u03a3)(1, \u2212\u03b2\u22a4)\u22a4||\u221e\u2264|\u02c6 \u03a3 \u2212\u03a3|\u221e||(1, \u2212\u03b2\u22a4)\u22a4||1 \u2264 r log p n \u00b7 (1 + ||\u03b2||1) \u2264 r log p n \u00b7 (1 + \u221as||\u03b2||2) = r log p n \u00b7 (1 + \u221as||\u03a3\u22121 XX\u03a3XY ||2) \u2264 r log p n \u00b7 (1 + \u221as||\u03a3\u22121 XX||2||\u03a3XY ||2) \u2264 r s log p n M. Therefore if we choose \u03bb such that \u03bb > 2M q s log p n , then we have \u03bbn \u22652||\u2207L(\u03b2)||\u221e. Then it follows from Theorem 6.9 that, when s log p/n \u21920, with probability at least 1 \u22122p\u22122, || \u02c6 \u03b2(\u03bb) \u2212\u03b2||2 \u2272\u221as\u03bb \u2272 r s log p n || \u02c6 \u03b2(\u03bb) \u2212\u03b2||1 \u2272s\u03bb \u2272s r log p n sgn( \u02c6 \u03b20) = sgn( \u02c6 \u03b2(\u03bb)). 
6.3 Proof of Theorem 2.2 According to Lemma 6.8 and by the union bound P( max i\u2208[0,1,2,...,p] | \u02c6 fi(t) \u2212fi(t)| \u2265\u03f5) \u22642 exp(log d \u2212 n1\u2212\u03b3/2 12\u03c0 \u221a 2\u03c0\u221a\u03b3 log n\u03f52) \u22123 log(8\u03c0n\u03b3 log n) exp(log d \u2212 1 64 \u221a 2\u03c0 n \u221an\u03b3 log n). Therefore by taking \u03f5 = q 24\u03c0 \u221a 2\u03c0\u221a\u03b3 log n log d n1\u2212\u03b3/2 , then for t \u2208R such that |fi(t)| \u2264\u221a\u03b3 log n, with probability at least 1 \u2212d\u22121 \u2212n\u22121, max i\u2208[0,1,2,...,p] | \u02c6 fi(t) \u2212fi(t)| \u2272(\u03b3 log n)1/4\u221alog d n1/2\u2212\u03b3/4 . (30) 24 \fSince maxi=1,...,p Fi(x\u2217 i ) \u2208(\u03b4\u2217, 1 \u2212\u03b4\u2217), there exists some constant M\u2217> 0, such that max i=1,...,p fi(x\u2217 i ) = max i=1,...,p \u03a6\u22121(Fi(x\u2217 i )) < M\u2217. Therefore, if we let \u03b3 = M2 \u2217 log n, we have maxi=1,...,p fi(x\u2217 i ) \u2264\u221a\u03b3 log n. Then by (30), with probability at least 1 \u2212d\u22121 \u2212n\u22121, max i\u2208[0,1,2,...,p] | \u02c6 fi(x\u2217 i ) \u2212fi(x\u2217 i )| \u2272 r log d n . 
Combining the result in Theorem 2.1, with probability at least 1 \u22122d\u22121 \u2212n\u22121, |y\u2217\u2212\u02c6 y\u2217| =| \u02c6 f\u22121 0 ( p X i=1 \u02c6 fi(x\u2217 i )\u02c6 \u03b2(\u03bb)i) \u2212f\u22121 0 ( p X i=1 fi(x\u2217 i )\u03b2(\u03bb)i)| \u2272| p X i=1 \u02c6 fi(x\u2217 i )\u02c6 \u03b2(\u03bb)i \u2212 p X i=1 fi(x\u2217 i )\u03b2(\u03bb)i| \u2264| p X i=1 \u02c6 fi(x\u2217 i )\u02c6 \u03b2(\u03bb)i \u2212 p X i=1 fi(x\u2217 i )\u02c6 \u03b2(\u03bb)i| + | p X i=1 fi(x\u2217 i )\u02c6 \u03b2(\u03bb)i \u2212 p X i=1 fi(x\u2217 i )\u03b2(\u03bb)i| \u2272(||\u03b2||1 + s r log p n ) \u00b7 max i\u2208[0,1,2,...,p] | \u02c6 fi(t) \u2212fi(t)| + || \u02c6 \u03b2(\u03bb) \u2212\u03b2||1 \u2264|| \u02c6 \u03b2(\u03bb) \u2212\u03b2||1 + (s||\u03b2||2 + s r log p n ) \u00b7 max i\u2208[0,1,2,...,p] | \u02c6 fi(t) \u2212fi(t)| \u2272s r log d n , where the last inequality results from the fact \u03b2 = \u03a3\u22121 XX\u03a3XY , and then ||\u03b2||2 = ||\u03a3\u22121 XX\u03a3XY ||2 \u2264\u03bbmax(\u03a3) \u03bbmin(\u03a3) \u2264M. What\u2019s more, the \ufb01rst inequality is due to the following claim. Claim: For two increasing functions f1, f2, if |f1(f\u22121 1 (t)) \u2212f2(f\u22121 1 (t))| < c1 for some t \u2208R and c1 > 0, and |f2(v1) \u2212f2(v2)| \u2265c2|v1 \u2212v2| for some c2 > 0, then |f\u22121 1 (t) \u2212f\u22121 2 (t)| \u2264c1 c2 . In e\ufb00ect, if |f\u22121 1 (t) \u2212f\u22121 2 (t)| > c1 c2 , then |f1(f\u22121 1 (t)) \u2212f2(f\u22121 1 (t))| =|f1(f\u22121 1 (t)) \u2212f2(f\u22121 2 (t)) + f2(f\u22121 2 (t)) \u2212f2(f\u22121 1 (t))| \u2265|f2(f\u22121 2 (t)) \u2212f2(f\u22121 1 (t))| \u2212|f1(f\u22121 1 (t)) \u2212f2(f\u22121 2 (t))| >c2 \u00b7 c1 c2 \u22120 = c1. 25 \fThis leads to a contradiction. 6.4 Proof of Theorem 3.1 Before we proceed, we should determine \u00b5 to make the optimization problem (19) feasible. By Lemma 6.4, it is su\ufb03cient to set \u00b5 = C q log p n for some su\ufb03cient large constant C. 
According to (20) in Algorithm 2, \u02c6 \u03b2u = \u02c6 \u03b2(\u03bb) + M(\u02c6 \u03a3XY \u2212\u02c6 \u03a3XX \u02c6 \u03b2(\u03bb)) = \u03b2 \u2212\u03b2 + \u02c6 \u03b2(\u03bb) + M \u02c6 \u03a3XY \u2212M \u02c6 \u03a3XX \u02c6 \u03b2(\u03bb) = \u03b2 + (M \u02c6 \u03a3XY \u2212M \u02c6 \u03a3XX\u03b2) + (M \u02c6 \u03a3XX \u2212I)(\u03b2 \u2212\u02c6 \u03b2(\u03bb)). This implies \u221an( \u02c6 \u03b2u \u2212\u03b2(\u03bb)) = \u221an(M \u02c6 \u03a3XY \u2212M \u02c6 \u03a3XX\u03b2) + \u221an(I \u2212M \u02c6 \u03a3XX)(\u03b2 \u2212\u02c6 \u03b2(\u03bb)). (31) We control the two terms on the right hand side separately. Step 1: ||\u221an(I \u2212M \u02c6 \u03a3XX)(\u03b2 \u2212\u02c6 \u03b2(\u03bb))||\u221e\u21920 with high probability. By Theorem 2.1 and Lemma 6.4, with probability at least 1 \u22123p\u22122, ||\u221an(I \u2212M \u02c6 \u03a3XX)(\u03b2 \u2212\u02c6 \u03b2(\u03bb))||\u221e\u2264\u221an||I \u2212M \u02c6 \u03a3XX||\u221e||\u03b2 \u2212\u02c6 \u03b2(\u03bb)||1 \u2264\u221an\u00b5 \u00b7 s r log p n \u2272\u221an r log p n \u00b7 s r log p n . Therefore, when s log p \u221an \u21920, with probability at least 1 \u22123p\u22122, ||\u221an(I \u2212M \u02c6 \u03a3XX)(\u03b2 \u2212\u02c6 \u03b2(\u03bb))||\u221e\u21920. Step 2: Asymptotics of \u221an(u\u2032 i \u02c6 \u03a3XY \u2212u\u2032 i \u02c6 \u03a3XX\u03b2). With Lemma 6.5, Lemma 6.6, and by |\u03a3hZ|\u221e\u22641, when s log p \u221an \u21920, we have with probability 26 \fat least 1 \u2212p\u22122, |\u03c32 g1(ui) \u2212\u02c6 \u03c32 g1(ui)| = |x(ui)\u22a4\u03a3hZx(ui) \u2212\u02c6 x(ui)\u22a4\u02c6 \u03a3hZ \u02c6 x(ui)| \u2264|(x(ui) \u2212\u02c6 x(ui))\u22a4\u03a3hZ(x(ui) \u2212\u02c6 x(ui))| + |x(ui)\u22a4(\u02c6 \u03a3hZ \u2212\u03a3hZ)x(ui)| \u2264||x(ui) \u2212\u02c6 x(ui)||2 1 + |x(ui)\u22a4(\u02c6 \u03a3hZ \u2212\u03a3hZ)x(ui)| \u2272n2a s log p n + r s log p n1\u22122a \u2272 r s log p n1\u22122a Lemma 6.7 shows \u03c32 g1(ui) \u2273n\u22122a. 
It follows | \u02c6 \u03c32 g1(ui) \u03c32 g1(ui) \u22121| \u2272 q s log p n1\u22126a . In addition, due to the positiveness of \u03c3g1 and \u02c6 \u03c3g1, when s log p \u221an \u21920 and a < 1 12, \u02c6 \u03c3g1(ui)/\u03c3g1(ui) \u21921 in probability. Then according to Lemma 6.3, for any \u03f5 > 0, P( \u221an(Hi \u2212E[Hi]) \u03c0\u02c6 \u03c3g1(ui) \u2264x) =P(\u03c3g1(ui) \u02c6 \u03c3g1(ui) \u221an(Hi \u2212E[Hi]) \u03c0\u03c3g1(ui) \u2264x) \u2264P( \u221an(Hi \u2212E[Hi]) \u03c0\u03c3g1(ui) \u2264 x 1 \u2212\u03f5) + P( \u02c6 \u03c3g1(ui) \u03c3g1(ui) \u2265 1 1 \u2212\u03f5) \u2192\u03a6( x 1 \u2212\u03f5) as n \u2192\u221e, where the last limit results from Lemma 6.3. Let \u03f5 \u21920, we have lim sup n\u2192\u221eP( \u221an(Hi \u2212E[Hi]) \u03c0\u02c6 \u03c3g1(ui) \u2264x) \u2264\u03a6(x). Similarly, we have P( \u221an(Hi \u2212E[Hi]) \u03c0\u02c6 \u03c3g1(ui) \u2264x) \u2265P( \u221an(Hi \u2212E[Hi]) \u03c0\u03c3g1(ui) \u2264x(1 \u2212\u03f5)) \u2212P( \u02c6 \u03c3g1(ui) \u03c3g1(ui) \u22641 \u2212\u03f5) This leads to lim inf n\u2192\u221eP( \u221an(Hi \u2212E[Hi]) \u03c0\u02c6 \u03c3g1(ui) \u2264x) \u2265\u03a6(x). In conclusion, when s log p \u221an \u21920, we have lim n\u2192\u221esup x\u2208R |P( \u221an(Hi \u2212E[Hi]) \u03c0\u02c6 \u03c3g1(ui) \u2264x) \u2212\u03a6(x)| = 0. 27 \f7 Proof of Auxiliary Lemmas Proof of Lemma 6.1 Let A1, A2 \u2208Rd\u00d72d with each row of Ai has unit norm, and for some diagonal matrix D = diag(m1, m2, ..., md), satisfy \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 A1P \u22a5 A2A\u22a4 1 = D (A1 + A2)(A1 + A2)\u22a4= \u03a3 rank(A1) = rank(A2) = d, (32) where P \u22a5 A2 = I2d \u2212A\u22a4 2 (A2A\u22a4 2 )\u22121A2. 
Therefore, if \uf8eb \uf8edX Y \uf8f6 \uf8f8\u223cN(0, \uf8eb \uf8ed\u03a311 \u03a312 \u03a321 \u03a322 \uf8f6 \uf8f8) with \u03a311 = A1A\u22a4 1 , \u03a322 = A2A\u22a4 2 , \u03a312 = A1A\u22a4 2 , then \u03a311\u00b72 def = \u03a311 \u2212\u03a312\u03a3\u22121 22 \u03a321 = D, \u03a311 + \u03a312 + \u03a321 + \u03a322 = \u03a3. This implies X|Y \u223cN(\u03a312\u03a3\u22121 22 Y , D), X + Y \u223cN(0, \u03a3). For v \u2208Rd with ||v||2 = 1, Eev\u22a4sgn(Z) = Eev\u22a4sgn(X+Y ) = E[E[ev\u22a4sgn(X+Y )|Y ] = E[E[e Pd i=1 visgn(Xi+Yi)|Y ] = E[ d Y i=1 E[evisgn(Xi+Yi)|Y ]]. We have sgn(Xi + Yi)|Y \u223c \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 1, with probability \u03a6(Yi+e\u22a4 i \u03a312\u03a3\u22121 22 Y \u221ami ), \u22121, with probability 1 \u2212\u03a6( Yi+e\u22a4 i \u03a312\u03a3\u22121 22 Y \u221ami ). (33) Let hi(Y ) def = E[sgn(Xi + Yi)|Y ] = 2\u03a6( Yi+e\u22a4 i \u03a312\u03a3\u22121 22 Y \u221ami ) \u22121 = 2\u03a6( e\u22a4 i (I+\u03a312\u03a3\u22121 22 )Y \u221ami ) \u22121. Therefore Eev\u22a4sgn(Z) =E[ d Y i=1 E[evi(sgn(Xi+Yi)\u2212hi(Y ))|Y ]evihi(Y )] = e1/2E[ d Y i=1 evihi(Y )] = e1/2E[e Pd i=1 vihi(Y )]. 28 \fLet g( \u02dc Y ) = Pd i=1 vihi(\u03a31/2 22 \u02dc Y ) where \u02dc Y = \u03a3\u22121/2 22 Y \u223cNd(0, I), then we have |g( \u02dc Y1) \u2212g( \u02dc Y2)| = | d X i=1 vi(hi(\u03a31/2 22 \u02dc Y1) \u2212hi(\u03a31/2 22 \u02dc Y2))| \u2264 v u u t d X i=1 (hi(\u03a31/2 22 \u02dc Y1) \u2212hi(\u03a31/2 22 \u02dc Y2))2 \u2264\u03c0||D\u22121/2(I + \u03a312\u03a3\u22121 22 )\u03a31/2 22 || \u00b7 || \u02dc Y1 \u2212\u02dc Y2|| \u2264\u03c0 q ||D\u22121/2(I + \u03a312\u03a3\u22121 22 )\u03a322(I + \u03a312\u03a3\u22121 22 )\u22a4D\u22121/2|| \u00b7 || \u02dc Y1 \u2212\u02dc Y2|| = \u03c0 q ||D\u22121/2\u03a3 \u22121|| \u00b7 || \u02dc Y1 \u2212\u02dc Y2||. From (1) we know that 0 \u2264(A1 + A2)P \u22a5 A2(A1 + A2)\u22a4= \u03a3 \u2212D. 
Then we have |g( \u02dc Y1) \u2212g( \u02dc Y2)| \u2264\u03bbmax(\u03a3) \u2212\u03bbmin(\u03a3) \u03bbmin(\u03a3) \u03c0|| \u02dc Y1 \u2212\u02dc Y2||. Thus, the Lipschitz norm of g(\u00b7) is bounded by \u03bbmax(\u03a3)\u2212\u03bbmin(\u03a3) \u03bbmin(\u03a3) \u03c0. By the Gaussian concentration inequality [3], E[e Pd i=1 vihi(Y )] = e\u03c0E[eg(\u03a3\u22121/2 22 Y )] \u2264e\u03c0M/2 with M = ||D\u22121\u03a3||2. If we let D = \u03bbmin(\u03a3)I, then M = \u03ba(\u03a3). Proof of Lemma 6.2 1. According to Taylor\u2019s expansion, |\u02c6 \u03a3 \u2212\u03a3|\u221ecan be bounded by | \u02c6 T \u2212T|\u221e: \u02c6 \u03a3 \u2212\u03a3 = sin(\u03c0 2 \u02c6 T) \u2212sin(\u03c0 2 T) = cos(\u03c0 2 T) \u25e6( \u02c6 T \u2212T) \u00b7 \u03c0 2 \u22121 2 sin(\u03c0 2 ) \u25e6\u03c0 2 ( \u02c6 T \u2212T) \u25e6\u03c0 2 ( \u02c6 T \u2212T). This implies |\u02c6 \u03a3 \u2212\u03a3|\u221e\u2272| \u02c6 T \u2212T|\u221e+ | \u02c6 T \u2212T|2 \u221e. By Hoe\ufb00ding inequality, P(|Tjk \u2212Tjk| > t) \u22642 exp(\u2212nt2/4). Therefore, P(| \u02c6 T \u2212T|\u221e> t) \u2264 p X j,k=1 P(|Tjk \u2212Tjk| > t) \u22642p2 exp(\u2212nt2/4) = 2 exp(2 log p \u2212nt2/4). Let t = 4 q log p n , the above inequality implies that with probability 1 \u22122p\u22122, | \u02c6 T \u2212T|\u221e\u2272 r log p n . This shows that with probability 1 \u22122p\u22122, |\u02c6 \u03a3 \u2212\u03a3|\u221e\u2264| \u02c6 T \u2212T|\u221e+ | \u02c6 T \u2212T|2 \u221e\u2272 r log p n . 29 \f2. Let d = p+1, and without loss of generality we assume n is even. For i, i\u2032 \u2208{1, 2, ..., n}, de\ufb01ne Si,i\u2032 = sgn(Zi\u2212Zi\u2032) = (sgn(Zi1\u2212Zi\u20321), ..., sgn(Zid\u2212Zi\u2032d))\u22a4, and \u02c6 \u2206i,i\u2032 = 1 n(n\u22121)(Si,i\u2032S\u22a4 i,i\u2032\u2212 T). Moreover, for any permutation \u03c3 \u2208Sn, where Sn is the permutation group of {1, ..., n}, let (i1, ..., in) = \u03c3(1, ..., n). 
For r = 1, ..., n/2 (without loss of generality, we assume n is even), we de\ufb01ne S\u03c3 r and \u02c6 \u2206\u03c3 r to be S\u03c3 r = S2ir,2ir\u22121, \u02c6 \u2206\u03c3 r = 1 n/2(S\u03c3 r S\u03c3T r \u2212T). Then \u2206= \u02c6 T \u2212T = X i,i\u2032 \u2206i,i\u2032 = 1 |Sn| X \u03c3\u2208Sn n/2 X r=1 \u02c6 \u2206\u03c3 r . and consequently, || \u02c6 T \u2212T|| \u2264 1 |Sn| X \u03c3\u2208Sn n/2 X r=1 \u02c6 \u2206\u03c3 r . Let N\u03f5 be the largest number of \u03f5-balls one can pack in the (1 + \u03f5)-ball centered at the origin and {w(j), j \u2264N\u03f5} be the centers of such \u03f5-balls in one of such con\ufb01gurations. From straight forward volume comparison we have N\u03f5 \u2264(1/\u03f5 + 1)d. For each w \u2208Sd\u22121, ||w \u2212w(j)||2 \u22642\u03f5 for some j \u2264N\u03f5, so that |w\u22a4\u2206w| \u2264|w\u22a4 (j)\u2206w(j)| + |(w \u2212w(j))\u22a4\u2206(w \u2212w(j))| \u2264|w\u22a4 (j)\u2206w(j)| + 4\u03f52||\u2206||2. This implies ||\u2206||2 \u2264sup j\u2264N\u03f5 |w\u22a4 (j)\u2206w(j)| 1 \u2212\u03f52 , (34) with N\u03f5 \u2264(1 + 1/\u03f5)d. In addition, for any w \u2208Sd\u22121, according to Lemma 6.1, we have E[etw\u22a4(Pn/2 r=1 \u02c6 \u2206\u03c3 r )w] = n/2 Y r=1 E[etw\u22a4\u02c6 \u2206\u03c3 r w] = n/2 Y r=1 E[e t n/2 w\u22a4(S\u03c3 r S\u03c3T r \u2212T)w] \u2264e 2t2M2\u03c0 n . Then by Jensen\u2019s inequality, E[etw\u22a4\u2206w] = E[etw\u22a4 1 |Sn| P \u03c3\u2208Sn Pn/2 r=1 \u02c6 \u2206\u03c3 r w] \u2264 1 |Sn| X \u03c3\u2208Sn E[etw\u22a4Pn/2 r=1 \u02c6 \u2206\u03c3 r w] \u2264e 2t2M2\u03c0 n . 30 \fTherefore, by the property of sub-gaussian random variable, for any w \u2208Sd\u22121, P(w\u22a4\u2206w > t) \u2264e\u2212 nt2 2M2\u03c0 . Then by (34) and let \u03f5 = 1/2, we have P(||\u2206||2 > t) \u22643de\u2212 nt2 2M2\u03c0 = ed log 3\u2212 nt2 2M2\u03c0 . Let t = q (2\u03c0 log 3M2) d+t n , then with probability at least 1 \u2212e\u2212t, ||\u2206|| \u2272 r d + t n . 3. 
By (2), for any A \u2282[p] with |A| = s, with probability at least 1 \u2212e\u2212t, ||\u2206A\u00d7A|| ||\u03a3A\u00d7A|| \u2272 r s + t n . Therefore ||\u2206||2,s ||\u03a3||2,s = sup|A|=s ||\u2206A\u00d7A|| sup|A|=s ||\u03a3A\u00d7A|| \u2264sup |A|=s ||\u2206A\u00d7A|| ||\u03a3A\u00d7A|| \u2272 r s + t n with probability at least 1 \u2212 \u0000p s \u0001 e\u2212t = 1 \u2212es log p\u2212t. This implies with probability at least 1 \u2212e\u2212t, ||\u2206||2,s ||\u03a3||2,s \u2272 r s log p + t n \u21d2||\u2206||2,s \u2272||\u03a3||2,s r s log p + t n . Proof of Lemma 6.3 Recall that \u03c32 g1(ui) = Var(g1(Z; ui)) = x(ui)\u22a4\u03a3hZx(ui) and for i = 1, 2, ..., p, Hi = u\u22a4 i \u02c6 \u03a3v0. Taylor expansion yields \u02c6 \u03a3 \u2212\u03a3 = sin(\u03c0 2 \u02c6 T) \u2212sin(\u03c0 2 T) = cos(\u03c0 2 T) \u25e6( \u02c6 T \u2212T) \u00b7 \u03c0 2 \u22121 2 sin(\u03c0 2 \u02c7 T) \u25e6\u03c0 2 ( \u02c6 T \u2212T) \u25e6\u03c0 2 ( \u02c6 T \u2212T). It follows Hi \u2212E[Hi] =u\u22a4 i (\u02c6 \u03a3 \u2212\u03a3)v0 =\u03c0 2 u\u22a4 i cos(\u03c0 2 T) \u25e6( \u02c6 T \u2212T)v0 \u22121 2u\u22a4 i sin(\u03c0 2 \u02c7 T) \u25e6\u03c0 2 ( \u02c6 T \u2212T) \u25e6\u03c0 2 ( \u02c6 T \u2212T)v0. (35) 31 \fLet Ji = u\u22a4 i cos( \u03c0 2 T) \u25e6\u02c6 Tv0 = 1 n(n\u22121)/2 P i 0. Then it is su\ufb03cient to show that (0, \u03a3\u22121 i,\u00b7 )\u22a4satis\ufb01es the constraint condition when \u00b5 = C q log p n with high probability. 33 \fBy Lemma 6.2.3, with probability at least 1 \u2212p\u22122 ||\u02c6 \u03a3XX\u03a3\u22121 \u00b7,i \u2212e(p) i ||\u221e= ||\u02c6 \u03a3XX\u03a3\u22121 \u00b7,i \u2212\u03a3XX\u03a3\u22121 \u00b7,i ||\u221e \u2264|\u02c6 \u03a3XX \u2212\u03a3XX|\u221e\u00b7 ||\u03a3\u22121 \u00b7,i ||1 \u2272 r log p n . 
In addition, due to the fact that ||\u03a3\u22121 i,\u00b7 ||1 \u2264||\u03a3\u22121||1 \u2264M2, and ||\u03a3\u22121 i,\u00b7 ||2 \u2265\u03bbmin(\u03a3\u22121) \u2265 1 M1 , we concludes that the constraints in the optimization problem 19 is feasible with probability at least 1 \u2212p\u22122. Proof of Lemma 6.5 Recall that x(u) = vec(uv\u22a4 0 \u25e6cos( \u03c0 2 T)), \u03c32 g1 = x(u)\u22a4\u03a3hZx(u), and \u02c6 \u03a3hZ = 1 n n X i=1 (\u02c6 hZ(Zi) \u22121 n n X i\u2032=1 \u02c6 hZ(Zi\u2032))(\u02c6 hZ(Zi) \u22121 n n X i\u2032=1 \u02c6 hZ(Zi\u2032))\u22a4, with \u02c6 hZ(Zi) = 1 n\u22121 P i\u0338=i\u2032 sgn(Zi \u2212Zi\u2032) \u2297sgn(Zi \u2212Zi\u2032) \u2208Rd2. We would like to prove with high probability |x(u)\u22a4(\u02c6 \u03a3hZ \u2212\u03a3hZ)x(u)| \u2264 r log p n1\u22122a . By de\ufb01nition, for a random vector Z = (Z(1), ..., Z(d)) with an independent copy Z\u2032, and any j, k \u2208[d], let\u2019s use hZ(Z)jk to denote the [(j \u22121)d + k]-th coordinate of hZ(Z) hZ(Z)jk = E[sgn(Z(j) \u2212Z\u2032 (j))sgn(Z(k) \u2212Z\u2032 (k))|Z], and \u02c6 hZ(Zi)jk = 1 n \u22121 n X i\u2032\u0338=i sgn(Zij \u2212Zi\u2032j)sgn(Zik \u2212Zi\u2032k). This implies 1 n Pn i=1 \u02c6 hZ(Zi)jk = \u02c6 \u03c4jk. Therefore \u02c6 \u03a3hZ(jk, j1k1) = 1 n X i (\u02c6 hZ(Zi)jk \u22121 n X i\u2032 \u02c6 hZ(Zi\u2032)jk)(\u02c6 hZ(Zi)j1,k1 \u22121 n X i\u2032 \u02c6 hZ(Zi\u2032)j1,k1) = 1 n X i (\u02c6 hZ(Zi)jk \u2212\u02c6 \u03c4jk)(\u02c6 hZ(Zi)j1,k1 \u2212\u02c6 \u03c4j1k1) = 1 n X i \u02c6 hZ(Zi)jk\u02c6 hZ(Zi)j1,k1 \u2212\u02c6 \u03c4jk\u02c6 \u03c4j1k1. 34 \fIt follows x(u)\u22a4\u02c6 \u03a3hZx(u) = 1 n X i x(u)\u22a4\u0010 \u02c6 hZ(Zi) \u22121 n X i\u2032 \u02c6 hZ(Zi\u2032) \u0011\u0010 \u02c6 hZ(Zi) \u22121 n X i\u2032 \u02c6 hZ(Zi\u2032) \u0011\u22a4 x(u) = 1 n X i x(u)\u22a4\u02c6 hZ(Zi)\u02c6 hZ(Zi)\u22a4x(u) \u2212x(u)\u22a4vec( \u02c6 T)vec( \u02c6 T)\u22a4x(u) = 1 n X i [x(u)\u22a4\u02c6 hZ(Zi)]2 \u2212[x(u)\u22a4vec( \u02c6 T)]2. 
Since
\[
x(u)^\top \hat h_Z(Z_i) = \frac{1}{n-1}\sum_{i' \ne i} x(u)^\top\big(\operatorname{sgn}(Z_i - Z_{i'}) \otimes \operatorname{sgn}(Z_i - Z_{i'})\big),
\]
conditional on $Z_i$, the terms $x(u)^\top \operatorname{sgn}(Z_i - Z_{i'}) \otimes \operatorname{sgn}(Z_i - Z_{i'})$ are $n-1$ i.i.d. random variables. In addition, as in the proof of Lemma 6.3,
\[
\big|x(u)^\top \operatorname{sgn}(Z - Z') \otimes \operatorname{sgn}(Z - Z')\big|
= \big|\operatorname{sgn}(Z - Z')^\top \big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big)\operatorname{sgn}(Z - Z')\big|
= \big|\operatorname{tr}\big(uv_0^\top \operatorname{diag}(\operatorname{sgn}(Z - Z'))\cos(\tfrac{\pi}{2}T)\operatorname{diag}(\operatorname{sgn}(Z - Z'))\big)\big|
= \big|v_0^\top \operatorname{diag}(\operatorname{sgn}(Z - Z'))\cos(\tfrac{\pi}{2}T)\operatorname{diag}(\operatorname{sgn}(Z - Z'))\,u\big|
\le \|v_0\|_2 \big\|\cos(\tfrac{\pi}{2}T)\big\|_2 \|u\|_2
\le 2M_1^3\|u\|_2.
\]
Therefore, by Hoeffding's inequality,
\[
\mathbb{P}\big(|x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)| > t \,\big|\, Z_i\big) \le e^{-\frac{nt^2}{4M_1^3\|u\|_2}}.
\]
This implies
\[
\mathbb{E}\big[e^{t(x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i))}\big] \le e^{\frac{4M_1^3\|u\|_2 t^2}{n}}.
\]
Therefore, the sub-Gaussian norm of $x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)$ satisfies
\[
\big\|x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\big\|_{\psi_2} \le \frac{4M_1^3\|u\|_2}{n},
\]
and this implies
\[
\Big\|\frac{1}{n}\sum_{i=1}^n x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\Big\|_{\psi_2}
\le \frac{1}{n}\sum_{i=1}^n \big\|x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\big\|_{\psi_2}
\le \frac{4M_1^3\|u\|_2}{n}.
\]
Therefore,
\[
\mathbb{P}\Big(\Big|\frac{1}{n}\sum_{i=1}^n x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\Big| > t\Big) \le e^{-\frac{nt^2}{4M_1^3\|u\|_2}}.
\]
Similarly, by Hoeffding's inequality,
\[
\mathbb{P}\big(|x(u)^\top \operatorname{vec}(\hat T) - x(u)^\top \operatorname{vec}(T)| > t\big) \le e^{-\frac{nt^2}{4M_1^3\|u\|_2}},
\qquad
\mathbb{P}\Big(\Big|\frac{1}{n}\sum_{i=1}^n \big(x(u)^\top h_Z(Z_i)\big)^2 - \mathbb{E}\big[(x(u)^\top h_Z(Z_i))^2\big]\Big| > t\Big) \le e^{-\frac{nt^2}{8M_1^6\|u\|_2^2}}.
\]
It follows that, with probability at least $1 - 4e^{-\frac{nt^2}{4M_1^3\|u\|_2}} - e^{-\frac{nt^2}{8M_1^6\|u\|_2^2}}$,
\begin{align*}
|x(u)^\top(\hat\Sigma_{h_Z} - \Sigma_{h_Z})x(u)|
&= \Big|\frac{1}{n}\sum_i \big[x(u)^\top \hat h_Z(Z_i)\big]^2 - \big[x(u)^\top \operatorname{vec}(\hat T)\big]^2 - \frac{1}{n}\sum_i \mathbb{E}\big[(x(u)^\top h_Z(Z_i))^2\big] + \big[x(u)^\top \operatorname{vec}(T)\big]^2\Big| \\
&= \Big|\frac{1}{n}\sum_i \big[x(u)^\top \hat h_Z(Z_i)\big]^2 - \big[x(u)^\top \operatorname{vec}(\hat T)\big]^2 - \frac{1}{n}\sum_i \big(x(u)^\top h_Z(Z_i)\big)^2 + \frac{1}{n}\sum_i \big(x(u)^\top h_Z(Z_i)\big)^2 - \frac{1}{n}\sum_i \mathbb{E}\big[(x(u)^\top h_Z(Z_i))^2\big] + \big[x(u)^\top \operatorname{vec}(T)\big]^2\Big| \\
&\le \Big|\frac{1}{n}\sum_i \big(x(u)^\top \hat h_Z(Z_i)\big)^2 - \frac{1}{n}\sum_i \big(x(u)^\top h_Z(Z_i)\big)^2\Big|
 + \Big|\frac{1}{n}\sum_i \big(x(u)^\top h_Z(Z_i)\big)^2 - \frac{1}{n}\sum_i \mathbb{E}\big[(x(u)^\top h_Z(Z_i))^2\big]\Big|
 + \Big|\big[x(u)^\top \operatorname{vec}(\hat T)\big]^2 - \big[x(u)^\top \operatorname{vec}(T)\big]^2\Big| \\
&= \Big|\frac{1}{n}\sum_i \big(x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\big)\big(x(u)^\top \hat h_Z(Z_i) + x(u)^\top h_Z(Z_i)\big)\Big|
 + \Big|\big[x(u)^\top \operatorname{vec}(\hat T)\big]^2 - \big[x(u)^\top \operatorname{vec}(T)\big]^2\Big|
 + \Big|\frac{1}{n}\sum_i \big(x(u)^\top h_Z(Z_i) - \mathbb{E}[x(u)^\top h_Z(Z_i)]\big)\big(x(u)^\top h_Z(Z_i) + \mathbb{E}[x(u)^\top h_Z(Z_i)]\big)\Big|.
\end{align*}
In addition,
\[
|x(u)^\top h_Z(Z_i)| \le \big\|\operatorname{vec}\big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big)\big\|_1 \le \|u\|_1\|v_0\|_1 \le \|u\|_1\|\beta\|_1 \lesssim \sqrt{s}\,\|u\|_1.
\]
Similarly, $|x(u)^\top \hat h_Z(Z_i)| \lesssim \sqrt{s}\,\|u\|_1$, $|x(u)^\top \operatorname{vec}(\hat T)| \lesssim \sqrt{s}\,\|u\|_1$, and $|x(u)^\top \operatorname{vec}(T)| \lesssim \sqrt{s}\,\|u\|_1$. Therefore,
\[
|x(u)^\top(\hat\Sigma_{h_Z} - \Sigma_{h_Z})x(u)|
\le \sqrt{s}\,\|u\|_1 \cdot \Big[\Big|\frac{1}{n}\sum_i x(u)^\top \hat h_Z(Z_i) - x(u)^\top h_Z(Z_i)\Big| + \big|x(u)^\top \operatorname{vec}(\hat T) - x(u)^\top \operatorname{vec}(T)\big| + \Big|\frac{1}{n}\sum_i x(u)^\top h_Z(Z_i) - \mathbb{E}[x(u)^\top h_Z(Z_i)]\Big|\Big]
\le 3\sqrt{s}\,\|u\|_1 t.
\]
Therefore,
\[
\mathbb{P}\big(|x(u)^\top(\hat\Sigma_{h_Z} - \Sigma_{h_Z})x(u)| \gtrsim \sqrt{s}\,\|u\|_1 t\big) \le 4e^{-\frac{nt^2}{4M_1^3\|u\|_2}} + e^{-\frac{nt^2}{8M_1^6\|u\|_2^2}}.
\]
Let $t = \sqrt{\frac{8M_1^3 \log p \cdot n^a}{n}}$. By the fact that $\|u\|_2 \le \|u\|_1 \le n^{a/2}$, we have for any $\epsilon > 0$,
\[
\mathbb{P}\Big(|x(u)^\top(\hat\Sigma_{h_Z} - \Sigma_{h_Z})x(u)| \gtrsim \sqrt{\frac{s\log p}{n^{1-2a}}}\Big) \le 5p^{-2}.
\]
Proof of Lemma 6.6. Recall that $x(u) = \operatorname{vec}\big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big)$ and $\hat x(u) = \operatorname{vec}\big(u\hat v_0^\top \circ \cos(\tfrac{\pi}{2}\hat T)\big)$; therefore
\[
\|x(u) - \hat x(u)\|_1
= \big\|\operatorname{vec}\big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big) - \operatorname{vec}\big(u\hat v^\top \circ \cos(\tfrac{\pi}{2}\hat T)\big)\big\|_1
= \big\|uv_0^\top \circ \cos(\tfrac{\pi}{2}T) - u\hat v^\top \circ \cos(\tfrac{\pi}{2}\hat T)\big\|_1
\le \|u(v_0 - \hat v)^\top\|_1
\le \|u\|_1\|v_0 - \hat v\|_\infty
\le \|u\|_1\|v_0 - \hat v\|_2
\lesssim n^a\sqrt{\frac{s\log p}{n}}.
\]

Proof of Lemma 6.7. Since $\|\beta\|_2 = \|\Sigma_{XX}^{-1}\Sigma_{XY}\|_2 \ge M_1^{-2}$, we have
\[
\sigma^2_{g_1}(u) = \operatorname{Var}(g_1(Z; u))
= \operatorname{vec}\big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big)^\top \Sigma_{h_Z}\, \operatorname{vec}\big(uv_0^\top \circ \cos(\tfrac{\pi}{2}T)\big)
\ge \frac{M_3}{M_1}\big\|(0, u[2:p])(1, -\beta)^\top \circ \cos(\tfrac{\pi}{2}T)\big\|_F^2
\ge \frac{M_3}{M_1^5}\|u\|_2^2
\ge \frac{M_3}{M_1^5 n^{2a}}.
\]

References

[1] Rina Foygel Barber and Mladen Kolar. ROCKET: Robust confidence intervals via Kendall's tau for transelliptical graphical models. arXiv preprint arXiv:1502.07641, 2015.
[2] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. The Annals of Statistics, 37:1705–1732, 2009.
[3] Christer Borell. The Brunn-Minkowski inequality in Gauss space. Inventiones Mathematicae, 30(2):207–216, 1975.
[4] Anna L. Buczak and Christopher M. Gifford. Fuzzy association rule mining for community crime pattern discovery. In ACM SIGKDD Workshop on Intelligence and Security Informatics, page 2. ACM, 2010.
[5] T. Tony Cai and Zijian Guo. Confidence intervals for high-dimensional linear regression: Minimax rates and adaptivity. arXiv preprint arXiv:1506.05539v1, 2015.
[6] Herman Callaert and Paul Janssen. The Berry-Esseen theorem for U-statistics. The Annals of Statistics, 6:417–421, 1978.
[7] Raymond J. Carroll and David Ruppert. Transformation and Weighting in Regression, volume 30. CRC Press, 1988.
[8] Jianqing Fan, Han Liu, Yang Ning, and Hui Zou. High dimensional semiparametric latent graphical model for mixed data. arXiv preprint arXiv:1404.7236, 2014.
[9] Jared C. Foster, Jeremy M. G. Taylor, and Bin Nan. Variable selection in monotone single-index models via the adaptive lasso. Statistics in Medicine, 32(22):3944–3954, 2013.
[10] Fang Han and Han Liu. Transelliptical component analysis. In Advances in Neural Information Processing Systems, pages 368–376, 2012.
[11] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012.
[12] Adel Javanmard and Andrea Montanari. Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. IEEE Transactions on Information Theory, 60(10):6522–6554, 2014.
[13] Adel Javanmard and Andrea Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. The Journal of Machine Learning Research, 15(1):2869–2909, 2014.
[14] John Johnston and John DiNardo. Econometric Methods. Cambridge University Press, 1997.
[15] William H. Kruskal. Ordinal measures of association. Journal of the American Statistical Association, 53(284):814–861, 1958.
[16] Jason D. Lee, Yuekai Sun, and Jonathan E. Taylor. On model selection consistency of M-estimators with geometrically decomposable penalties. arXiv preprint arXiv:1305.7477, 2013.
[17] Han Liu, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, 40(4):2293–2326, 2012.
[18] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295–2328, 2009.
[19] Yichao Lu, Paramveer Dhillon, Dean P. Foster, and Lyle Ungar. Faster ridge regression via the subsampled randomized Hadamard transform.
In Advances in Neural Information Processing Systems, pages 369–377, 2013.
[20] Shikai Luo and Subhashis Ghosal. Forward selection and estimation in high dimensional single index model. Submitted, 2015.
[21] Guido Masarotto and Cristiano Varin. Gaussian copula marginal regression. Electronic Journal of Statistics, 6:1517–1549, 2012.
[22] John H. McDonald. Handbook of Biological Statistics, volume 2. Sparky House Publishing, Baltimore, MD, 2009.
[23] Lukas Meier, Sara Van de Geer, Peter Bühlmann, et al. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779–3821, 2009.
[24] Ritwik Mitra and Cun-Hui Zhang. Multivariate analysis of nonparametric estimates of large correlation matrices. arXiv preprint arXiv:1403.6195, 2014.
[25] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348–1356, 2009.
[26] Liqiang Ni, R. Dennis Cook, and Chih-Ling Tsai. A note on shrinkage sliced inverse regression. Biometrika, 92(1):242–247, 2005.
[27] Hohsuk Noh, Anouar El Ghouch, and Taoufik Bouezmarni. Copula-based regression estimation and inference. Journal of the American Statistical Association, 108(502):676–688, 2013.
[28] Guillaume Obozinski, Martin J. Wainwright, and Michael I. Jordan. Support union recovery in high-dimensional multivariate regression. The Annals of Statistics, 39:1–47, 2011.
[29] D. Wayne Osgood. Poisson-based regression analysis of aggregate crime rates. Journal of Quantitative Criminology, 16(1):21–43, 2000.
[30] Michael Pitt, David Chan, and Robert Kohn. Efficient Bayesian inference for Gaussian copula regression models. Biometrika, 93(3):537–554, 2006.
[31] Peter Radchenko. High dimensional single index models. Journal of Multivariate Analysis, 139:266–282, 2015.
[32] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls. IEEE Transactions on Information Theory, 57(10):6976–6994, 2011.
[33] Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B, 71(5):1009–1030, 2009.
[34] Adam J. Rothman, Elizaveta Levina, and Ji Zhu. Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4):947–962, 2010.
[35] Mark Rudelson and Shuheng Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434–3447, 2013.
[36] Elias M. Stein and Rami Shakarchi. Real Analysis: Measure Theory, Integration, and Hilbert Spaces. Princeton University Press, 2009.
[37] Sara Van de Geer, Peter Bühlmann, Ya'acov Ritov, and Ruben Dezeure. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3):1166–1202, 2014.
[38] William Yang Wang and Zhenhao Hua. A semiparametric Gaussian copula regression model for predicting financial risks from earnings calls. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1155–1165, Baltimore, Maryland, June 2014.
[39] Lingzhou Xue and Hui Zou. Regularized rank-based estimation of high-dimensional nonparanormal graphical models. The Annals of Statistics, 40(5):2541–2571, 2012.
[40] Xinyang Yi, Zhaoran Wang, Constantine Caramanis, and Han Liu. Optimal linear estimation under unknown nonlinear transform. arXiv preprint arXiv:1505.03257, 2015.
[41] Zhou Yu, Liping Zhu, Heng Peng, and Lixing Zhu. Dimension reduction and predictor selection in semiparametric models. Biometrika, 100(3):641–654, 2013.
[42] Ming Yuan and Ding-Xuan Zhou.
Minimax optimal rates of estimation in high dimensional additive models: Universal phase transition. arXiv preprint arXiv:1503.02817, 2015. [43] Cun-Hui Zhang and Stephanie S Zhang. Con\ufb01dence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B, 76(1):217\u2013 242, 2014. [44] Yue Zhao and Marten Wegkamp. Semiparametric gaussian copula classi\ufb01cation. arXiv preprint arXiv:1411.2944, 2014. 41" + }, + { + "url": "http://arxiv.org/abs/1506.03382v1", + "title": "Optimal Rates of Convergence for Noisy Sparse Phase Retrieval via Thresholded Wirtinger Flow", + "abstract": "This paper considers the noisy sparse phase retrieval problem: recovering a\nsparse signal $x \\in \\mathbb{R}^p$ from noisy quadratic measurements $y_j =\n(a_j' x )^2 + \\epsilon_j$, $j=1, \\ldots, m$, with independent sub-exponential\nnoise $\\epsilon_j$. The goals are to understand the effect of the sparsity of\n$x$ on the estimation precision and to construct a computationally feasible\nestimator to achieve the optimal rates. Inspired by the Wirtinger Flow [12]\nproposed for noiseless and non-sparse phase retrieval, a novel thresholded\ngradient descent algorithm is proposed and it is shown to adaptively achieve\nthe minimax optimal rates of convergence over a wide range of sparsity levels\nwhen the $a_j$'s are independent standard Gaussian random vectors, provided\nthat the sample size is sufficiently large compared to the sparsity of $x$.", + "authors": "T. Tony Cai, Xiaodong Li, Zongming Ma", + "published": "2015-06-10", + "updated": "2015-06-10", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.IT", + "math.IT", + "math.NA", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction In a range of \ufb01elds in science and engineering, researchers face the problem of recovering a pdimensional signal of interest x by probing the signal via a set of p-dimensional sensing vectors aj, j = 1, . . . 
, m$, and hence the observations are the $(a_j'x)$'s contaminated with noise. This gives rise to the linear regression model in statistical terminology, where $x$ is the regression coefficient vector and $A = [a_1, \dots, a_m]'$ is the design matrix. There is an extensive literature on the theory and methods for the estimation/recovery of $x$ under such a linear model. However, in many important applications, including X-ray crystallography, microscopy, astronomy, diffraction and array imaging, interferometry, and quantum information, it is sometimes impossible to observe $a_j'x$ directly, and the measurements one is able to obtain are the magnitude/energy of $a_j'x$ contaminated with noise. In other words, the observations are generated by the following phase retrieval model:
\[
y_j = |a_j'x|^2 + \epsilon_j, \quad j = 1, \dots, m, \tag{1.1}
\]
where $\epsilon = (\epsilon_1, \dots, \epsilon_m)'$ is a vector of stochastic noise with $\mathbb{E}\,\epsilon = 0$. Note that $\mathbb{E}(y_j) = |a_j'x|^2$, so in the real case, (1.1) can be treated as a generalized linear model with the multi-value link function $g(z) := \pm\sqrt{z}$. We refer interested readers to [41] and the references therein for more detailed discussions of the scientific and engineering background of this model. In many applications, especially those related to imaging, the signal $x \in \mathbb{R}^p$ admits a sparse representation under some known and deterministic linear transformation. Without loss of generality, we assume in the rest of the paper that such a linear transform has already taken place and hence the signal $x$ is sparse itself. In this case, model (1.1) is referred to as the sparse phase retrieval model. In addition, we consider the case where the $\epsilon_j$ are independent centered sub-exponential random errors.
This is motivated by the observation that, in the application settings where model (1.1) is appropriate, especially in optics, heavy-tailed noise may arise due to photon counting. Efficient computational methods for phase retrieval have been proposed in the optics community, mostly based on the seminal work of Gerchberg, Saxton, and Fienup [21, 19]. The effectiveness of these methods relies on careful exploration of prior information about the signal in the spatial domain. Moreover, these methods were later revealed to be non-convex successive projection algorithms [30, 4]. This provides insight into the occasionally observed stagnation of iterates and failure of convergence. Recently, inspired by multiple illumination, novel computational methods were proposed for phase retrieval that do not explore or employ a priori information about the signal. These methods include semidefinite programming [14, 10, 11, 44, 13], polarization [2], alternating minimization [37], gradient methods [12], alternating projection [35], etc. More importantly, profound and remarkable theoretical guarantees for these methods have also been established. As for noiseless sparse phase retrieval, semidefinite programming has been proven effective, with theoretical guarantees [31, 38, 22]. Other empirical methods for sparse phase retrieval include belief propagation [39] and greedy methods [40]. Regarding noisy phase retrieval, some stability results have been established in the literature; see [9, 42, 15]. In particular, stability results were established in [16] for noisy sparse phase retrieval by semidefinite programming, though the authors did not study the optimal dependence of the convergence rates on the sparsity of the signal and the sample size. Nearly minimax convergence rates for sparse phase retrieval with Gaussian noise have been established in [28] under sub-Gaussian design matrices.
However, those optimal rates are achieved by empirical risk minimization under a sparsity constraint, in which both the objective function and the constraint are non-convex, implying that the procedure is not computationally feasible. In the present paper, we establish the minimax optimal rates of convergence for noisy sparse phase retrieval under sub-exponential noise, and propose a novel thresholded gradient descent method to estimate the signal $x$ under model (1.1). For conciseness, we focus on the case where the signal and the sensing vectors are all real-valued; the key ideas extend naturally to the complex case. The theoretical analysis sheds light on the effects of the sparsity of the signal $x$ and the presence of sub-exponential noise on the minimax rates for the estimation of $x$ under the $\ell_2$ loss, as long as the sensing vectors $a_j$ are independent standard Gaussian vectors. Combining the minimax upper and lower bounds given in Section 3, the optimal rate of convergence for estimating the signal $x$ under the $\ell_2$ loss is $\frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}}$, where $k$ is the sparsity of $x$, $\|\cdot\|_2$ is the usual Euclidean norm, and $\sigma$ characterizes the noise level. Moreover, it is shown that the thresholded gradient descent procedure is both rate-optimal and computationally efficient, and the sample size requirement matches the state-of-the-art result in computational sparse phase retrieval under structureless Gaussian design matrices. We now explain some notation used throughout the paper. For any $n$-dimensional vector $v = (v_1, \dots, v_n)'$ and a subset $S \subset \{1, \dots, n\}$, we denote by $v_S$ the $n$-dimensional vector that keeps the coordinates of $v$ with indices in $S$ unchanged, while setting all other components to zero. We also denote $\|v\|_q := (|v_1|^q + \dots + |v_n|^q)^{1/q}$ for $q \ge 1$, and $\|v\|_\infty = \max_{1\le k\le n}|v_k|$. Also, denote by $\|v\|_0$ the number of nonzero components of $v$.
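As a quick sanity check on this notation, the vector norms above can be evaluated directly. The following is a minimal Python sketch (NumPy's `linalg.norm` and `count_nonzero` are used for $\|\cdot\|_q$ and $\|\cdot\|_0$; it is only an illustration of the definitions, not part of the paper's method):

```python
import numpy as np

v = np.array([3.0, 0.0, -4.0])

# ||v||_q = (sum_k |v_k|^q)^(1/q) for q >= 1
assert np.isclose(np.linalg.norm(v, ord=1), 7.0)   # |3| + |0| + |-4|
assert np.isclose(np.linalg.norm(v, ord=2), 5.0)   # sqrt(9 + 16)
# ||v||_inf = max_k |v_k|
assert np.linalg.norm(v, ord=np.inf) == 4.0
# ||v||_0 = number of nonzero components
assert np.count_nonzero(v) == 2
```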
For any matrix $M \in \mathbb{R}^{n_1 \times n_2}$ and any subsets $S_1 \subset \{1, \dots, n_1\}$ and $S_2 \subset \{1, \dots, n_2\}$, $M_{S_1 S_2} \in \mathbb{R}^{n_1 \times n_2}$ is defined by keeping the submatrix of $M$ with row index set $S_1$ and column index set $S_2$, while setting all other entries to zero. For any $q_1 \ge 1$ and $q_2 \ge 1$, we denote by $\|M\|_{q_2 \to q_1}$ the induced norm from the Banach space $(\mathbb{R}^{n_2}, \|\cdot\|_{q_2})$ to $(\mathbb{R}^{n_1}, \|\cdot\|_{q_1})$. For simplicity, denote $\|M\| := \|M\|_{2\to 2}$. We also denote by $I_n$ the $n \times n$ identity matrix. The rest of the paper is organized as follows. In Section 2, we introduce in detail the thresholded gradient descent procedure, which consists of two steps. The first is an initialization step that applies a diagonal thresholding method to a matrix constructed from the available data. The second step applies an iterative thresholding procedure for the recovery of the sparse vector $x$. Section 3 establishes the minimax optimal rates of convergence for noisy sparse phase retrieval under the $\ell_2$ loss. The results show that the proposed thresholded gradient descent method is rate-optimal. In Section 4, numerical simulations illustrate the effectiveness of thresholding in denoising, and demonstrate how the relative estimation error depends on the thresholding parameter $\beta$, sample size $m$, sparsity $k$, and the noise-to-signal ratio $\sigma/\|x\|_2^2$. In Section 5, we discuss the connections between our thresholded gradient method for noisy sparse phase retrieval and related methods proposed in the literature for high-dimensional regression. The proofs are given in Section 6, with some technical details deferred to the appendix.

2 Methodology

The major component of our method is a thresholded gradient descent algorithm that obtains a sparse solution to a given non-convex empirical risk minimization problem.
Due to the non-convex nature of the problem, in order to avoid any local optimum that is far away from the truth, the initialization step is crucial. Thus, we also provide a candidate method that can be justified theoretically to yield a good initializer. The methodology is proposed assuming that $A$ has standard Gaussian entries, though it could potentially also be used when such an assumption does not necessarily hold.

2.1 Thresholded Wirtinger flow

Given the sensing vectors $a_j$ and the noisy magnitude measurements $y_j$ as in (1.1) for $j = 1, \dots, m$, one can consider estimating $x$ by minimizing the following empirical risk function:
\[
f(z) := \frac{1}{4m}\sum_{j=1}^m \big(|a_j'z|^2 - y_j\big)^2. \tag{2.1}
\]
Statistically speaking, in the low-dimensional setup with fixed $p$ and $m \to \infty$, if the additive noises are heavy-tailed, least-absolute-deviations (LAD) methods might be more robust than least-squares methods. However, recent progress in modern linear regression analysis shows that least squares can be preferable to LAD when $p$ and $m$ are proportional, even when the noises are sub-exponential [18]. Due to this surprising phenomenon, we simply take the least-squares empirical risk in (2.1), although phase retrieval is a nonlinear regression problem, which could be very different from linear regression. More importantly, closed-form gradient methods can be induced from the empirical risk function in (2.1), which is computationally convenient. To be specific, at any current value of $z$, one updates the estimator by taking a step along the gradient direction
\[
\nabla f(z) = \frac{1}{m}\sum_{j=1}^m \big(|a_j'z|^2 - y_j\big)(a_j'z)\,a_j \tag{2.2}
\]
until a stationary point is reached. Indeed, Candès et al. [12] showed that under appropriate conditions, initialized by an appropriate spectral method, a gradient method, referred to as Wirtinger flow, leads to accurate recovery of $x$ up to a global phase in the complex domain and noiseless setting.
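For concreteness, the empirical risk (2.1) and its gradient (2.2) take only a few lines to evaluate. The following is a minimal NumPy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def empirical_risk(z, A, y):
    """f(z) = (1/(4m)) * sum_j ((a_j' z)^2 - y_j)^2, as in eq. (2.1)."""
    residual = (A @ z) ** 2 - y
    return np.sum(residual ** 2) / (4 * len(y))

def wirtinger_gradient(z, A, y):
    """grad f(z) = (1/m) * sum_j ((a_j' z)^2 - y_j) (a_j' z) a_j, as in eq. (2.2)."""
    Az = A @ z
    return A.T @ ((Az ** 2 - y) * Az) / len(y)
```

A quick finite-difference check (perturb one coordinate of $z$ and compare the slope of $f$ with the corresponding entry of $\nabla f$) confirms that (2.2) is indeed the gradient of (2.1).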
However, the direct application of gradient descent is not ideal for noisy sparse phase retrieval, since it does not exploit the knowledge that the true signal $x$ is sparse in order to mitigate the contamination of the noise. To incorporate this a priori knowledge, it makes sense to seek a "sparse minimizer" of (2.1). To this end, suppose we have a sparse initial guess $x^{(0)}$ for $x$. To update $x^{(0)}$ to another sparse vector, we may take a step along $\nabla f(x^{(0)})$ and then sparsify the result by thresholding. Indeed, if we were given the oracle knowledge of the support $S$ of $x$, then we could reduce the problem to recovering $x_S$ based on $\{y_j, a_{jS}\}_{j=1}^m$. By avoiding estimating any coordinate of $x$ in $S^c$, we could greatly reduce the variance of the resulting estimator of $x$. In reality, we do not have such oracle knowledge, and the additional thresholding step added on top of gradient descent is intended to mimic the oracle behavior by hopefully restricting all the updated coordinates to $S$. Let $T_\tau$ be any thresholding function satisfying
\[
T_\tau(x) = 0 \ \ \forall x \in [-\tau, \tau], \quad\text{and}\quad |T_\tau(x) - x| \le \tau \ \ \forall x \in \mathbb{R}. \tag{2.5}
\]
For any vector $b = (b_1, \dots, b_p)'$, let $T_\tau(b) = (T_\tau(b_1), \dots, T_\tau(b_p))'$. With the foregoing definition, the proposed thresholded gradient descent method can be summarized as Algorithm 1.

Algorithm 1: Thresholded Wirtinger flow for noisy sparse phase retrieval
Input: Data $\{a_j, y_j\}_{j=1}^m$; initial estimator $\hat x_0$; thresholding function $T$; gradient tuning parameter $\mu$; thresholding tuning parameter $\beta$; number of iterations $T$.
Output: Final estimator $\hat x$.
1. Initialize $n \leftarrow 0$ and $\hat x^{(0)} = \hat x_0$.
repeat:
2. Compute the threshold level
\[
\tau(\hat x^{(n)}) = \sqrt{\frac{\beta\log(mp)}{m^2}\sum_{j=1}^m \big(|a_j'\hat x^{(n)}|^2 - y_j\big)^2\,|a_j'\hat x^{(n)}|^2}. \tag{2.3}
\]
3. Update
\[
\hat x^{(n+1)} = \varphi(\hat x^{(n)}) := T_{\frac{\mu}{\phi^2}\tau(\hat x^{(n)})}\Big(\hat x^{(n)} - \frac{\mu}{\phi^2}\nabla f(\hat x^{(n)})\Big), \tag{2.4}
\]
until $n = T$, where $\nabla f$ is defined in (2.2).
4. Return $\hat x = \hat x^{(T)}$.

In view of the Wirtinger flow method for noiseless phase retrieval [12], we name our approach the "Thresholded Wirtinger Flow" method. The data-driven choice of the threshold level in (2.3) is motivated by the following intuition. Recall that we assume the sensing vectors $\{a_j : j = 1, \dots, m\}$ are independent standard Gaussian vectors. For a fixed $z$, if we act as if each $(|a_j'z|^2 - y_j)(a_j'z)$ were a fixed constant, then the gradient in (2.2) is a linear combination of Gaussian vectors and hence has i.i.d. Gaussian entries with mean zero and variance $\frac{1}{m^2}\sum_{j=1}^m(|a_j'z|^2 - y_j)^2(a_j'z)^2$. Therefore, the threshold $\tau(z)$ is simply $\sqrt{\beta\log(mp)}$ times the standard deviation of these Gaussian random variables, which is essentially the universal thresholding in the Gaussian sequence model literature [24]. Although the above intuition is not exactly true, the resulting thresholds in (2.3) are indeed the right choices, as justified later in Section 3 and illustrated in Section 4. Notice that there are two tuning parameters, $\mu$ and $\beta$, which should be treated as absolute constants. We will validate some theoretical choices and also provide practical choices later.

2.2 Initialization

It is worth noting that the success of Algorithm 1 depends crucially on the initial estimator, for two reasons. First, the empirical risk (2.1) is a non-convex function of $z$ and hence could have multiple local minimizers, so the success of a gradient descent based approach depends naturally on the starting point. Moreover, an accurate initializer can reduce the required number of iterations in the thresholded Wirtinger flow algorithm. In view of its crucial role, we propose in Algorithm 2 an initialization method that can be proven to yield a decent starting point for Algorithm 1 under our modeling assumption.
Moreover, an accurate initializer can reduce the required number of iterations in the thresholded Wirtinger \ufb02ow algorithm. In view of its crucial rule, we propose in Algorithm 2 an initialization method which can be proven to yield a decent starting point for 5 \fAlgorithm 2: Initialization for Algorithm 1 Input: Data {aj, yj}m j=1; tuning parameter \u03b1. Output: Initial estimator b x0. 1 Compute \u03c62 = 1 m m X j=1 yj, (2.6) and Il = 1 m m X j=1 yja2 jl, l = 1, . . . , p. (2.7) 2 Select a set of coordinates b S0 = ( l \u2208[p] : Il > 1 + \u03b1 r log(mp) m ! \u03c62 ) . (2.8) 3 Compute a p \u00d7 p matrix Wb S0 b S0 := 1 m m X j=1 yjaj b S0a\u2032 j b S0. (2.9) 4 Return b x0 = \u03c6 b v1 (2.10) where b v1 as the leading eigenvector of Wb S0 b S0. Algorithm 1 under our modeling assumption. The motivation of the algorithm is similar to that of diagonal thresholding [25] for sparse PCA: we want to identify a small collection of coordinates with big marginal signals and then compute an estimator of x by focusing only on these coordinates. In particular, the quantity Il in (2.7) captures the marginal signal strength of the l-th coordinate and b S0 (2.8) selects all coordinates with big marginal signals. Last but not least, (2.9) and (2.10) computes the initial estimator by focusing only on the coordinates in b S0. There is a tuning parameter \u03b1 needed as input of the algorithm, which can be treated as an absolute constant. We will provide some justi\ufb01ed theoretical choice later. 3 Theory We \ufb01rst establish the statistical convergence rate for the thresholded Wirtinger \ufb02ow method under the case of \u201cGaussian design\u201d, i.e., aj iid \u223cN(0, Ip) for j = 1, . . . , m in (1.1). Moreover, we assume the signal x is k-sparse, i.e., \u2225x\u22250 = k, and the noises \u03f51, . . . 
, $\epsilon_m$ are $m$ independent centered sub-exponential random variables with maximum sub-exponential norm $\sigma$, i.e., $\sigma := \max_{1\le i\le m}\|\epsilon_i\|_{\psi_1}$. Here, for any random variable $X$, its sub-exponential norm is defined as $\|X\|_{\psi_1} := \sup_{p\ge 1} p^{-1}(\mathbb{E}|X|^p)^{1/p}$. This definition, as well as some fundamental properties of sub-exponential variables (such as the Bernstein inequality), can be found in Section 5.2.4 of [43].

Theorem 3.1. Suppose $\beta = 4$ in (2.3), and $\alpha = K\big(1 + \frac{\sigma}{\|x\|_2^2}\big)$ in (2.8) for some absolute constant $K$. Suppose $\mu \le \mu_0$ in (2.4) and $m \ge C\big(1 + \frac{\sigma^2}{\|x\|_2^4}\big)k^2\log(mp)$. For all $t = 1, 2, 3, \dots$, there holds
\[
\sup_{\|x\|_0 = k} \mathbb{P}_{(A,y\mid x)}\bigg(\min_{i=0,1}\big\|\hat x^{(t)} - (-1)^i x\big\|_2 > \frac{1}{6}\Big(1 - \frac{\mu}{16}\Big)^t\|x\|_2 + C_0\frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}}\bigg) \le \frac{46}{m} + \frac{10}{e^k} + \frac{t}{mp^2},
\]
where $\mu_0$, $C$, and $C_0$ are absolute constants.

The proof is given in Section 6. Lemma 6.3 guarantees the efficacy of the initialization step, Algorithm 2, and Lemmas 6.4 and 6.5 explain why the thresholded Wirtinger flow method leads to accurate estimation. Here $\beta = 4$ and $\alpha = K\big(1 + \frac{\sigma}{\|x\|_2^2}\big)$ are chosen for analytical convenience. The discussion of empirical choices of $\beta$, $\alpha$, and $\mu$ is deferred to Section 4. Let us interpret Theorem 3.1 by considering the following cases. In the noiseless case, with high probability, we obtain $\min_{i=0,1}\|\hat x^{(t)} - (-1)^i x\|_2 \le \frac{1}{6}\big(1 - \frac{\mu}{16}\big)^t\|x\|_2$. This implies that the thresholded gradient descent method converges linearly to the original signal, up to a global sign.
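To make the update (2.3)-(2.4) concrete, here is a minimal NumPy sketch of one iteration of Algorithm 1 (the names are ours; soft thresholding is used as one standard choice of $T_\tau$ satisfying (2.5), and $\phi^2$ is taken as the mean of the $y_j$'s as in (2.6)):

```python
import numpy as np

def soft_threshold(b, tau):
    # Soft thresholding: one valid choice of T_tau in (2.5), since it
    # vanishes on [-tau, tau] and moves each coordinate by at most tau.
    return np.sign(b) * np.maximum(np.abs(b) - tau, 0.0)

def twf_step(z, A, y, mu, beta):
    """One thresholded Wirtinger flow iteration, eqs. (2.2)-(2.4)."""
    m, p = A.shape
    phi2 = np.mean(y)                        # phi^2 as in eq. (2.6)
    Az = A @ z
    residual = Az ** 2 - y
    grad = A.T @ (residual * Az) / m         # gradient, eq. (2.2)
    # data-driven threshold level, eq. (2.3)
    tau = np.sqrt(beta * np.log(m * p) / m**2 * np.sum(residual**2 * Az**2))
    # gradient step followed by thresholding, eq. (2.4)
    return soft_threshold(z - (mu / phi2) * grad, (mu / phi2) * tau)
```

Note that in the noiseless case the true signal is a fixed point of this map: at $z = x$ every residual $|a_j'z|^2 - y_j$ vanishes, so both the gradient and the threshold level are zero and the iterate does not move.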
In the noisy case, if $\mu > 0$ is an absolute constant, by letting $t \asymp \log(1/\delta)$ where $\delta = \frac{\sigma}{\|x\|_2^2}\sqrt{\frac{k\log p}{m}}$, we obtain $\min_{i=0,1}\|\hat x^{(t)} - (-1)^i x\|_2 \lesssim \frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}}$ with high probability. If the knowledge of $\delta$ is not available, by choosing $t = O(\log p)$, we can obtain $\min_{i=0,1}\|\hat x^{(t)} - (-1)^i x\|_2 \lesssim \frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}} + \frac{1}{p^c}$ for any predetermined $c > 0$. The convergence rate $\frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}}$ is better than the upper bound established in [28], which is achieved by the intractable sparsity-constrained empirical risk minimization. Our contribution is to show that this rate can be attained tractably by a fast algorithm. Ignoring any polylog factor, the above convenient properties of thresholded Wirtinger flow are guaranteed by the sample size condition $m \gtrsim k^2$. When $m \ll p$, this condition is crucial for the effectiveness of the initialization, Algorithm 2. An immediate question is whether such a minimum sample size condition is in some sense necessary for any computationally efficient algorithm when the sensing matrix is random and structureless. A similar phenomenon has been previously observed in the related but different problem of sparse principal component analysis. Assuming the hardness of the planted clique problem [3], a series of papers [6, 45, 20] have shown that a comparable minimum sample size condition is necessary for any estimator computable in polynomial time to achieve consistency and optimal convergence rates uniformly over a parameter space of interest. In particular, it was shown in [20] that this is the case even for the most restrictive parameter space in sparse principal component analysis: the (discretized) Gaussian single spiked model with a sparse leading eigenvector.
Establishing comparable computational lower bounds for sparse phase retrieval, especially under the Gaussian design, is an interesting direction for future research.

In the case $m \gtrsim p$ (ignoring any log factor), it is well known that a consistent initializer can be obtained by spectral methods [37, 12], no matter whether $x$ is sparse or not. In other words, the diagonal thresholding idea in Algorithm 2 is not as crucial as in the case $m \ll p$. It is interesting to investigate whether $m \gtrsim k^2$ can be relaxed such that the optimal convergence rates can still be achieved by thresholded Wirtinger flow.

The convergence rate $\frac{\sigma}{\|x\|_2}\sqrt{\frac{k\log p}{m}}$ is essentially optimal. The following lower bound result has been essentially proven in [28]:

Theorem 3.2 ([28]) Let $\Theta(k, p, R) = \{x \in \mathbb{R}^p : \|x\|_2 = R, \|x\|_0 = k\}$. Suppose the $a_j$'s are i.i.d. $N(0, I_p)$, the $\epsilon_j$'s are i.i.d. $N(0, \sigma^2)$, and they are mutually independent. There holds under model (1.1),
$$\inf_{\hat{x}} \sup_{x \in \Theta(k,p,R)} \mathbb{P}_{(A,y|x)}\left( \min_{i=0,1} \|\hat{x} - (-1)^i x\|_2 \ge C_0 \frac{\sigma}{R}\sqrt{\frac{k \log(ep/k)}{m}} \right) \ge \frac{1}{5},$$
provided $m \ge C\big(\frac{\sigma^2}{\|x\|_2^4} + 1\big) k \log(ep/k)$, where both $C$ and $C_0$ are some absolute constants.

Notice that for a centered Gaussian variable with variance $\sigma^2$, its sub-exponential norm is a constant multiple of $\sigma$. For brevity, we do not scale the Gaussian noises so that their sub-exponential norms are strictly less than or equal to $\sigma$.

4 Numerical Simulation

In this section, we report numerical simulation results to demonstrate how the relative estimation error depends on the thresholding parameter $\beta$, the noise-to-signal ratio (NSR) $\sigma/\|x\|_2^2$, the sample size $m$, and the sparsity $k$. To guarantee fair comparison, we always fix the length of the signal $p = 1000$ and the initialization parameter $\alpha = 0.1$ (except for the first experiment on the thresholding effect).
Moreover, in each numerical experiment, we conservatively choose the gradient parameter $\mu = 0.01$ and the number of iterations $T = 1000$ for thresholded Wirtinger flow. The resulting estimator is denoted as $\hat{x} = \hat{x}^{(1000)}$. With each fixed $k$, the support of $x$ is chosen uniformly at random. The nonzero entries of $x$ are i.i.d. $N(0, 1)$. The noise $\epsilon \sim N(0, \sigma^2 I_m)$, where $\sigma$ is determined by $\|x\|_2$ and the choice of NSR $\sigma/\|x\|_2^2$. As discussed before, the design matrix $A$ consists of independent standard Gaussian random variables.

1. Thresholding effect: Fix $\alpha = 0.1$, $m = 7000$, $k = 100$, and $\sigma/\|x\|_2^2 = 1$. For each $\beta = 0, 0.25, 0.5, \ldots, 3$, we implement the algorithm 10 times with independently generated $A$, $x$, and $\epsilon$, and then take the average of the 10 independent relative errors $\min(\|\hat{x} - x\|_2, \|\hat{x} + x\|_2)/\|x\|_2$. The relation between the average relative error and the choice of $\beta$ is plotted as the red curve in Figure 1. The result shows that the average relative error essentially decreases from 0.2365 to 0.1151 as the thresholding parameter increases from 0 to 0.75, and then increases slowly up to 0.1684 as $\beta$ continues to increase to 3.

We implement the above experiments again with the only difference being $\alpha = 0.5$. The relation between the relative estimation error and $\beta$ is plotted as the blue curve in Figure 1. It is clear that the performance of the algorithm is very close to the case $\alpha = 0.1$.

Figure 1: The relation between the average relative error and the thresholding parameter $\beta$. Setup of parameters: $p = 1000$, $m = 7000$, $k = 100$, $\sigma/\|x\|_2^2 = 1$, $\mu = 0.01$, and $T = 1000$. Red curve with $\alpha = 0.1$; blue curve with $\alpha = 0.5$.

2. Noise effect: Fix $m = 7000$, $k = 100$, and $\beta = 1$.
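For readers who want to reproduce a scaled-down version of this setup, the following sketch runs a simplified thresholded Wirtinger flow on a small noiseless instance. The problem dimensions, step size, top-$k$ screening rule, and well-separated nonzero magnitudes are simplifications of Algorithm 2 and the update rule chosen for speed and robustness, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, m = 200, 5, 4000          # smaller than the paper's p = 1000, for speed
mu, beta, T = 0.05, 1.0, 300    # step size, thresholding parameter, iterations

# Sparse signal; well-separated nonzero entries so the simplified screening succeeds.
x = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
x[support] = rng.uniform(1.5, 2.5, size=k) * rng.choice([-1.0, 1.0], size=k)

A = rng.standard_normal((m, p))
y = (A @ x) ** 2                # noiseless case for a clean convergence check

phi2 = y.mean()                 # estimate of ||x||_2^2 (cf. Lemma 6.2)

# Initialization: screen coordinates by I_l = mean_j y_j a_{jl}^2, then take
# the leading eigenvector of the screened matrix W_{S0,S0} scaled to norm phi.
I = (y[:, None] * A**2).mean(axis=0)
S0 = np.argsort(I)[-k:]         # simplified screening: keep the k largest I_l
AS = A[:, S0]
W = (AS * y[:, None]).T @ AS / m
_, vecs = np.linalg.eigh(W)     # eigenvalues in ascending order
z = np.zeros(p)
z[S0] = np.sqrt(phi2) * vecs[:, -1]

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

err0 = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)

for _ in range(T):
    Az = A @ z
    r = Az**2 - y
    grad = (r * Az) @ A / m     # Wirtinger-flow gradient
    tau = np.sqrt(beta * np.log(p) / m**2 * np.sum(r**2 * Az**2))
    z = soft_threshold(z - (mu / phi2) * grad, (mu / phi2) * tau)

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

In the noiseless case the final relative error should fall well below the initialization error, consistent with the linear convergence predicted by Theorem 3.1.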
For each choice of NSR $\sigma/\|x\|_2^2 = 0, 0.1, \ldots, 1$, with 5 instances of $(A, x, \epsilon)$ generated independently, we take the average of the relative errors $\min(\|\hat{x} - x\|_2, \|\hat{x} + x\|_2)/\|x\|_2$. Figure 2 shows how the average relative error depends on the NSR. The average relative error strictly increases from 0.0000 to 0.1219 as the NSR increases from 0 to 1.

As a consequence, as long as $m/\log m \ge C(\delta)\big(1 + \frac{\sigma^2}{\|x\|_2^4}\big)$, there holds
$$\frac{9}{10} \le 1 - \delta \le \frac{\phi^2}{\|x\|_2^2} \le 1 + \delta \le \frac{11}{10}.$$

Proof By the definition of $\phi^2$ and $y_j$, $j = 1, \ldots, m$, we have
$$\phi^2 = \frac{1}{m}\sum_{j=1}^m (a_j'x)^2 + \frac{1}{m}\sum_{j=1}^m \epsilon_j.$$
As shown in Lemma A.7, with probability at least $1 - \frac{1}{m}$,
$$\Big|\frac{1}{m}\sum_{j=1}^m \epsilon_j\Big| \le C_0 \sigma \sqrt{\frac{\log m}{m}}$$
for some numerical constant $C_0 > 0$. Moreover, since $x$ is fixed, there holds $\sum_{j=1}^m (a_j'x)^2 / \|x\|_2^2 \sim \chi^2(m)$. By Lemma 4.1 of [27], with probability at least $1 - \frac{2}{m}$, we have
$$1 - 2\sqrt{\frac{\log m}{m}} \le \frac{\sum_{j=1}^m (a_j'x)^2}{m\|x\|_2^2} \le 1 + 2\sqrt{\frac{\log m}{m}} + \frac{2\log m}{m}.$$
The proof is done.

Lemma 6.3 Let $\alpha = K\big(1 + \frac{\sigma}{\|x\|_2^2}\big)$ for some large enough absolute constant $K$, and let $\hat{x}^{(0)}$ be defined in Algorithm 2. There exists a random vector $x^{(0)}$ that is independent of $A_{S^c}$ and satisfies $\mathrm{supp}(x^{(0)}) \subset S$, such that on an event $E_{01}$ with probability at least $1 - \frac{16}{m} - 2e^{-k}$, we have $x^{(0)} = \hat{x}^{(0)}$ and
$$\min(\|x^{(0)} - x\|_2, \|x^{(0)} + x\|_2) \le \frac{1}{6}\|x\|_2,$$
provided $m \ge C\big(1 + \frac{\sigma^2}{\|x\|_2^4}\big) k^2 \log(mp)$. Here $C$ is an absolute constant.

Proof Recall that $S = \{1, \ldots, k\}$ and $I_l = \frac{1}{m}\sum_{j=1}^m y_j a_{jl}^2$ for $l = 1, \ldots, p$. Define
$$S_0 = \Big\{ l \in S : I_l > \Big(1 + \alpha \sqrt{\frac{\log(mp)}{m}}\Big)\phi^2 \Big\} \subset S. \quad (6.2)$$
Since $\{I_1, \ldots, I_k, \phi\}$ is independent of $A_{S^c}$, $S_0$ is independent of $A_{S^c}$.
De\ufb01ne x(0) \u2208Rp as the leading eigenvector of WS0S0 := 1 m m X j=1 yjajS0a\u2032 jS0 \u2208Rp\u00d7p with 2-norm \u03c6. This easily implies supp(x(0)) \u2282S0 \u2282S. Since {WS0S0, \u03c6} | = ASc, we also have x(0) | = ASc. To simplify notation, let us write for any j \u2208[m], e yj := (a\u2032 jx)2 = (a\u2032 jSx)2, which implies yj = e yj + \u03f5j. Notice that Il \u2212\u03c62 = 1 m m X j=1 e yj(a2 jl \u22121) + 1 m m X j=1 \u03f5j(a2 jl \u22121), (6.3) in which we will \ufb01rst control the second term. For a given l \u2208[p], we know a2 1l \u22121, . . . , a2 ml \u22121 are i.i.d. centered sub-exponential random variables with sub-exponential norms being an absolute constant. Then, by Bernstein inequality (see, e.g., Proposition 16 in [43]), we have with probability at least 1 \u2212 2 mp, \f \f \f \f \f \f m X j=1 \u03f5j(a2 jl \u22121) \f \f \f \f \f \f \u2264C0 \u0010 \u2225\u03f5\u22252 p log(mp) + \u2225\u03f5\u2225\u221elog(mp) \u0011 13 \ffor some absolute constant C0. Then by Lemma A.7, with probability at least 1 \u22124/m, we have max 1\u2264l\u2264p \f \f \f \f \f \f 1 m m X j=1 \u03f5j(a2 jl \u22121) \f \f \f \f \f \f \u2264C0\u03c3 r log(mp) m + (log m)(log(mp)) m ! \u2264C0\u03c3 r log(mp) m , (6.4) provided m \u2265C(log p) for some absolute constant C. Next, we prove that with high probability x(0) = b x(0). It su\ufb03ces to prove b S0 = S0, i.e., b S0 \u2282S. For any l \u2208Sc, ajl and e yj are independent, and so conditional on {e yj, j \u2208[m]}, Pm j=1 e yja2 jl is a weighted sum of \u03c72 1 variables. By Lemma 4.1 of [27], P \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 m X j=1 e yj(a2 jl \u22121) > 2 \u221a t \uf8eb \uf8ed m X j=1 e y2 j \uf8f6 \uf8f8 1 2 + 2 \u0012 max j e yj \u0013 t \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe \u2264exp(\u2212t). 
Moreover, Chebyshev\u2019s inequality, the Gaussian tail bound and the union bound lead to P \uf8f1 \uf8f2 \uf8f3 m X j=1 e y2 j /\u2225x\u22254 2 > 3m + \u221a 96mt \uf8fc \uf8fd \uf8fe\u2264t\u22122, P \u001a max j e yj/\u2225x\u22252 2 > t \u001b \u22642m exp(\u2212t/2). Thus, with probability at least 1 \u22124 m, for all l \u2208Sc, 1 m m X j=1 e yj(a2 jl \u22121) \u22642 q 3 + \u221a 96\u2225x\u22252 2 r log(mp) m + 8\u2225x\u22252 2 (log(mp))2 m \u22648\u2225x\u22252 2 r log(mp) m . (6.5) Here the last inequality holds when m \u2265C for some absolute constant C. Since \u03b1 = K \u0010 1 + \u03c3 \u2225x\u22252 2 \u0011 with large enough K, by (6.3), (6.5), (6.4) and Lemma 6.2, we obtain that with probability at least 1 \u221211 m , for all l \u2208Sc, Il \u2212\u03c62 \u2264(8\u2225x\u22252 2 + C0\u03c3) r log(mp) m \u2264\u03b1\u03c62 r log(mp) m , which implies that b S0 \u2282S. Next, we prove that \u2225x(0) \u2212x\u22252/\u2225x\u22252 \u2264 1 6 with high probability. For any \ufb01xed l \u2208S, straightforward calculation yields E e yja2 jl = \u2225x\u22252 2 + 2x2 l . On the other hand, E e y2 j a4 jl = 105x4 l + 90x2 l (\u2225x\u22252 2 \u2212x2 l ) + 9(\u2225x\u22252 2 \u2212x2 l )2. So for Xj = \u2225x\u22252 2 + 2x2 l \u2212e yja2 jl, we have Xj \u2264\u2225x\u22252 2 + 2x2 l \u22643\u2225x\u22252 2, E Xi = 0 and E X2 i = 20x4 j + 68\u2225x\u22252 2x2 l + 8\u2225x\u22254 2 \u226496\u2225x\u22254 2. By Lemma A.1, P \uf8f1 \uf8f2 \uf8f3 m X j=1 e yja2 jl \u2212m(\u2225x\u22252 2 + 2x2 l ) \u2264\u2212t \uf8fc \uf8fd \uf8fe\u2264exp \u0012 \u2212 t2 192\u2225x\u22254 2m \u0013 . 14 \fNext, Lemma 4.1 of [27] leads to with probability at least 1 \u22121 m, 1 m m X j=1 e yj \u2212\u2225x\u22252 2 \u2264 2 r log m m + 2 log m m ! \u2225x\u22252 2 \u22642.1\u2225x\u22252 2 r log m m . 
The last two inequalities, together with (6.4) and (6.3), imply that with probability at least 1\u22126 m, for all l \u2208S, Il \u2212\u03c62 \u22652x2 l \u2212(16\u2225x\u22252 2 + C0\u03c3) r log(mp) m . De\ufb01ne S\u2212= \u001a l \u2208S : x2 l \u2265 \u000011 + 3 5\u03b1 \u0001 \u2225x\u22252 2 q log(mp) m \u001b . Then, for all l \u2208S\u2212we have Il \u2212\u03c62 \u2265(6 5\u03b1\u2225x\u22252 2 + 6\u2225x\u22252 2 \u2212C0\u03c3) r log(mp) m . Since \u03b1 = K \u0010 1 + \u03c3 \u2225x\u22252 2 \u0011 with su\ufb03ciently large absolute constant K, by lemma 6.2, we have or all l \u2208S\u2212, Il \u2212\u03c62 \u2265\u03b1\u03c62 r log(mp) m , with probability at least 1 \u22129/m. This implies S\u2212\u2282S0. Therefore, we have \u2225x\u2212xS0\u22252 2 \u2264\u2225x\u2212xS\u2212\u22252 2 \u2264(11+0.6\u03b1)\u2225x\u22252 2 q k2 log(mp) m \u2264\u03b42\u2225x\u22252 2, provided that m \u2265C(\u03b4) \u0010 1 + \u03c32 \u2225x\u22254 2 \u0011 k2 log(mp). Notice that E W = \u2225x\u22252 2Ip + 2xx\u2032, which implies that (E W )SS = \u2225x\u22252 2(Ip)SS + 2xx\u2032. Furthermore, by the de\ufb01nition of W , we have WSS = 1 m m X j=1 \f \faj\u2032 Sx \f \f2 ajSaj\u2032 S + 1 m m X j=1 \u03f5jajSaj\u2032 S. By Lemma A.6, with probability at least 1 \u22121/m, we have \r \r \r \r \r \r 1 m m X j=1 |aj\u2032 Sx|2ajSaj\u2032 S \u2212 \u0000\u2225x\u22252 2(Ip)SS + 2xx\u2032\u0001 \r \r \r \r \r \r \u2264\u03b4 2\u2225x\u22252 2, provided m \u2265C(\u03b4)k log p. Moreover, by Lemma A.7 and Lemma A.8, with probability at least 1 \u22122/m \u22122e\u2212k, we have \r \r \rPm j=1 \u03f5jajSa\u2032 jS \r \r \r \u2264C0\u03c3 p m(k + log m). By assuming m \u2265 C(\u03b4) \u03c32 \u2225x\u22254 2 k log(mp), we have 1 m \r \r \rPm j=1 \u03f5jajSa\u2032 jS \r \r \r \u2264\u03b4 2\u2225x\u22252 2. This implies that \u2225WS0S0 \u2212(E W )S0S0\u2225\u2264\u2225WSS \u2212(E W )SS\u2225\u2264\u03b4\u2225x\u22252 2. 
It is noteworthy that the leading eigenvector of (E W )SS with unit norm is xS0/\u2225xS0\u22252, and the eigengap between the leading two eigenvalues of (E W )S0S0 is 2\u2225xS0\u22252 2. Recall that x(0) is the leading eigenvector WS0S0 with norm \u03c6. Then by the Sin-Theta theorem, \r \r \r \r \r x(0)(x(0))T \u03c62 \u2212xS0xT S0 \u2225xS0\u22252 2 \r \r \r \r \r \u2264 \u03b4\u2225x\u22252 2 2\u2225xS0\u22252 2 \u2212\u03b4\u2225x\u22252 2 \u2264 \u03b4 2 \u22125\u03b4. 15 \fBy Lemma 6.2, we have 1 + \u03b4 \u2265\u03c6/\u2225x\u22252 \u22651 \u2212\u03b4. Together with 1 \u2265\u2225xS0\u22252/\u2225x\u22252 \u22651 \u2212\u03b4, we can easily obtain that min(\u2225x(0) \u2212x\u22252, \u2225x(0) + x\u22252) \u2264C0\u03b4\u2225x\u22252 for some absolute constant C0. By letting \u03b4 be small enough, we have min(\u2225x(0) \u2212x\u22252, \u2225x(0) + x\u22252) \u22641/6\u2225x\u22252. In conclusion, we have P \u0010 x(0) = b x(0) and min(\u2225x(0) \u2212x\u22252, \u2225x(0) + x\u22252) \u22641/6\u2225x\u22252 \u0011 \u22651 \u221216 m \u22122e\u2212k. Lemma 6.4 De\ufb01ne \u03b7(z) = T \u00b5 \u03c62 \u03c4(z) \u0010 z \u2212\u00b5 \u03c62 \u2207f(z)S \u0011 . With probability at least 1 \u221215 m \u22124e\u2212k, for all z \u2208Rp satisfying \u2225z \u2212x\u22252 \u22641 6\u2225x\u22252 and supp(z) \u2282S, we have \u2225\u03b7(z) \u2212x\u22252 \u2225x\u22252 \u2264 \u0010 1 \u2212\u00b5 8 \u0011 \u2225z \u2212x\u22252 \u2225x\u22252 + C0 \u00b5\u03c3 \u2225x\u22252 2 r k log p m , provided \u00b5 \u2264\u00b50 and m \u2265Ck2 log p. Here C0, C, and \u00b50 are numerical constants. 
This implies that, on an event E02 with probability at least 1 \u221230 m \u22128e\u2212k, for all z \u2208Rp satisfying min(\u2225z \u2212 x\u22252, \u2225z + x\u22252) \u22641 6\u2225x\u22252 and supp(z) \u2282S, we have min(\u2225\u03b7(z) \u2212x\u22252, \u2225\u03b7(z) + x\u22252) \u2264 \u0010 1 \u2212\u00b5 8 \u0011 min(\u2225z \u2212x\u22252, \u2225z + x\u22252) + C0 \u00b5\u03c3 \u2225x\u22252 r k log p m . Proof For z supported on S, de\ufb01ne u = \u03b7(z) = T \u00b5 \u03c62 \u03c4(z) \u0012 z \u2212\u00b5 \u03c62 \u2207f(z)S \u0013 = z \u2212\u00b5 \u03c62 \u2207f(z)S + \u00b5 \u03c62 \u03c4(z)v, where v \u2208Rp, supp(v) \u2282S and \u2225v\u2225\u221e\u22641. Since supp(z) \u2282S = {1, . . . , k}, we have \u2207f(z)S = 1 m m X j=1 \u0000|aj\u2032 Sz|2 \u2212yj \u0001 (aj\u2032 Sz)ajS. (6.6) For convenience, let ^ \u2207f(z)S = 1 m m X j=1 \u0000|aj\u2032 Sz|2 \u2212|aj\u2032 Sx|2\u0001 (aj\u2032 Sz)ajS, (6.7) and so \u2207f(z)S \u2212^ \u2207f(z)S = \u22121 m m X j=1 \u03f5j(aj\u2032 Sz)ajS. (6.8) Denote h = z \u2212x \u2208Rp, which implies supp(h) \u2282S and \u2225h\u22252 \u2264\u2225x\u22252/6. Straightforward calculation yields \u2225u \u2212x\u22252 \u2264 \r \r \r \rh \u2212\u00b5 \u03c62 ^ \u2207f(z)S \r \r \r \r 2 + \u00b5 \u03c62 \r \r \r\u2207f(z)S \u2212^ \u2207f(z)S \r \r \r 2 + \u00b5 \u221a k \u03c62 \u03c4(z) := T1 + \u00b5 \u03c62 T2 + \u00b5 \u221a k \u03c62 \u03c4(z). (6.9) It su\ufb03ces to bound T1, T2 and \u03c4(z). 16 \fBound for T1 By simple algebra, we have T 2 1 = \u2225h\u22252 2 \u2212\u00b5 \u03c62 1 m m X j=1 \u00002(aj\u2032 Sx)2(aj\u2032 Sh)2 + 3(aj\u2032 Sx)(aj\u2032 Sh)3 + (aj\u2032 Sh)4\u0001 + \u00b52 \u03c64 \r \r \r ^ \u2207f(z)S \r \r \r 2 2 := \u2225h\u22252 2 \u2212\u00b5 \u03c62 T11 + \u00b52 \u03c64 T12. (6.10) In what follows, we derive lower bound for T11 and upper bound for T12 separately. 
Notice that T11 = 1 m m X j=1 \u00002(aj\u2032 Sx)2(aj\u2032 Sh)2 + 3(aj\u2032 Sx)(aj\u2032 Sh)3 + (aj\u2032 Sh)4\u0001 . First, by Lemma A.6 with probability at least 1 \u22121/m, we have 1 m m X j=1 2(aj\u2032 Sx)2(aj\u2032 Sh)2 \u2265(2 \u22122\u03b4) \u00002(x\u2032h)2 + \u2225x\u22252 2\u2225h\u22252 2 \u0001 . By Lemma A.5, with probability at least 1 \u22122/m, we have 1 m m X j=1 3(aj\u2032 Sx)(aj\u2032 Sh)3 \u22643 m \uf8eb \uf8ed m X j=1 (aj\u2032 Sx)4 \uf8f6 \uf8f8 1 4 \uf8eb \uf8ed m X j=1 (aj\u2032 Sh)4 \uf8f6 \uf8f8 3 4 \u22643 m((3m) 1 4 + k 1 2 + p 2 log m)4\u2225x\u22252\u2225h\u22253 2 \u226410\u2225x\u22252\u2225h\u22253 2, provided m \u2265Ck2 for some su\ufb03ciently large numerical constant C. This implies T11 \u2265(2 \u22122\u03b4)\u2225x\u22252 2\u2225h\u22252 2 \u221210\u2225x\u22252\u2225h\u22253 2 \u2265(1/3 \u22122\u03b4)\u2225x\u22252 2\u2225h\u22252 2. As to the upper bound for T12, we can \ufb01nd \u2225w\u22252 = 1, such that T12 = \u2225^ \u2207f(z)S\u22252 2 \u22642 m2 \f \f \f \f \f \f m X j=1 |aj\u2032 Sh||aj\u2032 S(2x + h)||aj\u2032 S(x + h)||aj\u2032 Sw| \f \f \f \f \f \f 2 . By Holder\u2019s inequality and Lemma A.5, we have T12 \u22642 m2 \uf8eb \uf8ed m X j=1 |aj\u2032 Sh|4 \uf8f6 \uf8f8 1 2 \uf8eb \uf8ed m X j=1 |aj\u2032 S(2x + h)|4 \uf8f6 \uf8f8 1 2 \uf8eb \uf8ed m X j=1 |aj\u2032 S(x + h)|4 \uf8f6 \uf8f8 1 2 \uf8eb \uf8ed m X j=1 |aj\u2032 Sw|4 \uf8f6 \uf8f8 1 2 \u22642 m2 ((3m) 1 4 + k 1 2 + p 2 log m)8\u2225h\u22252 2\u22252x + h\u22252 2\u2225x + h\u22252 2\u2225w\u22252 2 \u2264C0\u2225h\u22252 2\u2225x\u22254 2, provided m \u2265Ck2, with su\ufb03ciently large constants C0 and C. To summarize, with probability at least 1 \u22123/m, T 2 1 \u2264\u2225h\u22252 2 \u2212\u00b5 \u03c62 (1/3 \u22122\u03b4)\u2225h\u22252 2\u2225x\u22252 2 + C0 \u00b52 \u03c64 \u2225x\u22254 2\u2225h\u22252 2. 
(6.11) 17 \fBy Lemma 6.2, letting \u03b4 small enough, we have with probability at least 1 \u22126/m, T1 \u2264(1 \u2212\u00b5/8)\u2225h\u22252, provided \u00b5 \u2264\u00b50 with su\ufb03ciently small absolute constant \u00b50 > 0. Bound for T2 Note that T2 \u2264 7 6m\u2225x\u22252 \r \r \r \r \r \r m X j=1 \u03f5jajSa\u2032 jS \r \r \r \r \r \r . By Lemma A.7 and Lemma A.8, with probability at least 1 \u22122/m \u22122e\u2212k, we have \r \r \r \r \r \r m X j=1 \u03f5jajSa\u2032 jS \r \r \r \r \r \r \u2264C0\u03c3 p m(k + log m) provided m/ log m \u2265k. In summary, by Lemma 6.2, we have that with probability at least 1 \u22125/m \u22122e\u2212k, \u00b5 \u03c62 T2 \u2264C0\u00b5 \u03c3 \u2225x\u22252 r k + log m m . Bound for \u03c4(z) By simple algebra, \u03c4 2(z) = \u03b2 log p m2 m X j=1 \u0000(aj\u2032 Sh)aj\u2032 S(2x + h) \u2212\u03f5j \u00012 |aj\u2032 S(x + h)|2 \u22642\u03b2 log p m2 \uf8f1 \uf8f2 \uf8f3 m X j=1 |aj\u2032 Sh|2|aj\u2032 S(2x + h)|2|aj\u2032 S(x + h)|2 + m X j=1 \u03f52 j|aj\u2032 S(x + h)|2 \uf8fc \uf8fd \uf8fe := 2\u03b2 log p m2 (T1 + T2). By Holder\u2019s inequality and Lemma A.5, with probability at least 1 \u22122/m, we have T1 \u2264 \uf8eb \uf8ed m X j=1 |aj\u2032 Sh|6 \uf8f6 \uf8f8 1 3 \uf8eb \uf8ed m X j=1 |aj\u2032 S(2x + h)|6 \uf8f6 \uf8f8 1 3 \uf8eb \uf8ed m X j=1 |aj\u2032 S(x + h)|6 \uf8f6 \uf8f8 1 3 \u2264C0\u2225AS\u22256 2\u21926\u2225h\u22252 2\u2225x\u22254 2 \u2264C0(m + k3)\u2225h\u22252 2\u2225x\u22254 2 for some numerical constant C0. By Lemma A.7 and Lemma A.8, with probability at least 1 \u22122/m \u22122e\u2212k, we have, T2 \u226449 36\u2225x\u22252 2 \r \r \r \r \r \r m X j=1 \u03f52 jajSa\u2032 jS \r \r \r \r \r \r \u2264C0m\u03c32\u2225x\u22252 2, for some numerical constant C0, provided m log2 m \u2265k. In summary, \u00b5 \u03c62 \u221a k\u03c4 \u2264C0\u00b5 p (mk + k4) log p m \u2225h\u22252 + \u03c3 \u2225x\u22252 r k log p m ! 
\u2264\u00b5\u2225h\u22252 16 + C0 \u00b5\u03c3 \u2225x\u22252 r k log p m , (6.12) provided m \u2265C max(k log p, k2\u221alog p). 18 \fSummary We can guarantee that, with probability at least 1 \u221215 m \u22124e\u2212k, \u2225u \u2212x\u22252 \u2225x\u22252 \u2264 \u0010 1 \u2212\u00b5 16 \u0011 \u2225z \u2212x\u22252 \u2225x\u22252 + C0\u00b5 r k log p m \u03c3 \u2225x\u22252 2 , (6.13) for some absolute constant C0 > 0, provided m \u2265Ck2 log(mp) and \u00b5 \u2264\u00b50. Suppose E0 is the intersection of the events E01 and E02 described by Lemmas 6.3 and 6.4, respectively. Then we have P(E0) \u22651 \u221246 m \u221210e\u2212k. The following induction argument guarantees the e\ufb00ectiveness of thresholded Wirtinger \ufb02ow: Lemma 6.5 Let \u03b2 = 4 and b x(n), n = 0, 1, 2, . . . are de\ufb01ned iteratively by (2.10) and (2.4). For \ufb01xed n \u22650, assume that there exists a random vector x(n) satisfying x(n) | = ASc and supp(x(n)) \u2282 S, and that on an event En \u2282E0 we have b x(n) = x(n) and min i=0,1 \u2225b x(n) \u2212(\u22121)ix\u22252 \u22641 6\u2225x\u22252. Then there exists a random vector x(n+1) satisfying x(n+1) | = ASc and supp(x(n+1)) \u2282S, and on an event En+1 \u2282En satisfying P(En/En+1) \u22641 \u2212 1 m2p, we have b x(n+1) = x(n+1) and min i=0,1 \u2225b x(n+1) \u2212(\u22121)ix\u22252 \u2264 \u0010 1 \u2212\u00b5 16 \u0011 min i=0,1 \u2225b x(n) \u2212(\u22121)ix\u22252 + C0 \u00b5\u03c3 \u2225x\u22252 r k log p m \u22641 6\u2225x\u22252, provided m \u2265C \u0010 1 + \u03c32 \u2225x\u22254 2 \u0011 k2 log(mp) for su\ufb03ciently large C. Proof The improved estimation is de\ufb01ned as b x(n+1) = T \u00b5 \u03c62 \u03c4(b x(n)) \u0012 b x(n) \u2212\u00b5 \u03c62 \u2207f(b x(n)) \u0013 . where T\u03c4 is the soft-thresholding operator. We now de\ufb01ne x(n+1) := \u03b7(x(n)) = T \u00b5 \u03c62 \u03c4(x(n)) \u0012 x(n) \u2212\u00b5 \u03c62 \u2207f(x(n))S \u0013 . 
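The soft-thresholding operator $T_\tau$ appearing in these updates acts coordinatewise; a minimal sketch:

```python
import numpy as np

def soft_threshold(v, tau):
    # T_tau(v)_i = sign(v_i) * max(|v_i| - tau, 0), applied coordinatewise.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

out = soft_threshold(np.array([3.0, -0.5, 0.2, -2.0]), 1.0)
# entries with |v_i| <= tau are set to zero; the rest shrink toward zero by tau
```

This is the standard shrinkage step; in the algorithm the threshold level is the data-driven quantity $\frac{\mu}{\phi^2}\tau(z)$.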
By the de\ufb01nition of \u2207f, \u03c4 and \u03c6, as well as the assumption that x(n) | = ASc and supp(x(n)) \u2282S, we can prove supp(x(n+1)) \u2282S as well as x(n+1) | = ASc. In fact, by the de\ufb01nition (2.3), we know if x(n) is supported on S and independent of ASc, then \u03c4(x(n)) is independent of ASc. Moreover, by the de\ufb01nition of the gradient (2.2), we know \u0000\u2207f(x(n)) \u0001 S is supported on S and independent of ASc. The assertion is established by the obvious fact \u03c6 | = ASc shown in Lemma 6.1. In the following, we will construct En+1 \u2282En such that b x(n+1) = x(n+1) on En+1. For any i = k + 1, k + 2, . . . , p, with probability 1 \u2212 1 m2p2 , \f \f \f \f \u2202 \u2202zi f(x(n)) \f \f \f \f = \f \f \f \f \f \f 1 m m X j=1 \u0010 |aj\u2032x(n)|2 \u2212yj \u0011 (aj\u2032x(n))(aj)i \f \f \f \f \f \f \u2264 p 4 log(mp) m v u u t m X j=1 \u0000|aj\u2032x(n)|2 \u2212yj \u00012 |aj\u2032x(n)|2 \u2264\u03c4(x(n)). 19 \fThe \ufb01rst inequality is due to supp(x(n)) \u2282S and x(n) | = ASc, and the second inequality is due to \u03b2 = 4. Then with probability at least 1 \u2212 1 m2p, max k+1\u2264i\u2264p \f \f \f \f \u2202 \u2202zi f(x(n)) \f \f \f \f \u2264\u03c4(x(n)), which implies T \u00b5 \u03c62 \u03c4(x(n)) \u0012 x(n) \u2212\u00b5 \u03c62 \u2207f(x(n)) \u0013 = T \u00b5 \u03c62 \u03c4(x(n)) \u0012 x(n) \u2212\u00b5 \u03c62 \u2207f(x(n))S \u0013 . Notice that on the event En, we have b x(n) = x(n), and hence b x(n+1) = T \u00b5 \u03c62 \u03c4(x(n)) \u0012 x(n) \u2212\u00b5 \u03c62 \u2207f(x(n)) \u0013 . Then there exists En+1 \u2282En, such that P(En/En+1) \u2264 1 m2p, and b x(n+1) = T \u00b5 \u03c62 \u03c4(x(n)) \u0012 x(n) \u2212\u00b5 \u03c62 \u2207f(x(n))S \u0013 = x(n+1). By the assumption, we have min(\u2225x(n) \u2212x\u22252, \u2225x(n) + x\u22252) \u22641 6\u2225x\u22252 on En. 
Since En \u2282E0 and x(n+1) = \u03b7(x(n)), by Lemma 6.4, we have min(\u2225x(n+1) \u2212x\u22252, \u2225x(n+1) + x\u22252) \u2264 \u0010 1 \u2212\u00b5 16 \u0011 min(\u2225x(n) \u2212x\u22252, \u2225x(n) + x\u22252) + C0 \u00b5\u03c3 \u2225x\u22252 r k log p m \u22641 6\u2225x\u22252 on En, provided m \u2265C(\u03c32/\u2225x\u22254 2)k log p for a su\ufb03ciently large absolute constant C. Since En+1 \u2282En, and b x(n+1) = x(n+1) on En+1, we have min i=0,1 \u2225b x(n+1) \u2212(\u22121)ix\u22252 \u2264 \u0010 1 \u2212\u00b5 16 \u0011 min i=0,1 \u2225b x(n) \u2212(\u22121)ix\u22252 + C0 \u00b5\u03c3 \u2225x\u22252 r k log p m \u22641 6\u2225x\u22252 on En+1. Theorem 3.1 can be directly implied by Lemma 6.5. In fact, by Lemma 6.3, we know the initial condition in 6.5 holds. For all t = 1, 2, 3, . . ., straight forward calculation yields min(\u2225b x(t) \u2212x\u22252, \u2225b x(t) + x\u22252) \u2225x\u22252 \u22641 6 \u0010 1 \u2212\u00b5 16 \u0011t + C0 \u03c3 \u2225x\u22252 2 r k log p m on Et for some universal constant C0, where P(Et) \u22651 \u221246 m \u221210e\u2212k \u2212 t mp2 . 20 \fA Preliminaries and supporting lemmas Lemma A.1 ([5]) Suppose X1, . . . , Xm are i.i.d. real-valued random variables obeying Xi \u2264b for some absolute constant b > 0, E Xi = 0 and E X2 i = v2. Setting \u03c32 = m(b2 \u2228v2), P {X1 + \u00b7 \u00b7 \u00b7 + Xm \u2265y} \u2264exp \u0012 \u2212y2 2\u03c32 \u0013 \u2227c0(1 \u2212\u03a6(y/\u03c3)) where one can take c0 = 25. Lemma A.2 (Proposition 34 [43]) Suppose that x \u223cN(0, In) is a standard normal random vector, and f : Rn \u2192R is a 1-Lipschitz function. Then P(f(x) \u2212E f(x) \u2265t) \u2264e\u2212t2 2 . Lemma A.3 (Proposition 33 [43]) Consider two centered Gaussian processes (Xt)t\u2208T and (Yt)t\u2208T whose increments satisfy the inequality E |Xs \u2212Xt|2 \u2264E |Ys \u2212Yt|2 for all s, t \u2208T. Then E sup t\u2208T Xt \u2264E sup t\u2208T Yt. 
Lemma A.4 (Proposition 35 [43]) Let AS \u2208Rm\u00d7p be de\ufb01ned in (6.1). Then, with probability at least 1 \u22122 exp(\u2212t2/2), we have the following inequality \u2225AS\u2225\u2264\u221am + \u221a k + t. (A.1) Lemma A.5 Let AS \u2208Rm\u00d7p be de\ufb01ned in (6.1). Then, with probability at least 1\u22124 exp(\u2212t2/2), the following inequalities hold \u2225AS\u22252\u21926 \u2264(15m)1/6 + \u221a k + t, (A.2) and \u2225AS\u22252\u21924 \u2264(3m)1/4 + \u221a k + t. (A.3) Proof The proof follows that of Theorem 32 in [43] step by step. De\ufb01ne Xu,v = \u27e8ASu, v\u27e9on T = {(u, v) : u \u2208Rp, supp(U) \u2282S, \u2225u\u22252 = 1, v \u2208Rm, \u2225v\u22256/5 = 1}. Then \u2225AS\u22252\u21926 = max(u,v)\u2208T Xu,v. De\ufb01ne Yu,v = \u27e8gS, u\u27e9+ \u27e8h, v\u27e9 where gS \u2208Rp with supp(gS) = S and h \u2208Rm are independent standard Gaussian random vectors. 21 \fFor any (u, v), (u\u2032, v\u2032) \u2208T, we have E |Xu,v \u2212Xu\u2032,v\u2032| = \u2225v\u22252 2 + \u2225v\u2032\u22252 2 \u22122\u27e8u, u\u2032\u27e9\u27e8v, v\u2032\u27e9 and E |Yu,v \u2212Yu\u2032,v\u2032| = 2 + \u2225v\u22252 2 + \u2225v\u2032\u22252 2 \u22122\u27e8u, u\u2032\u27e9\u2212\u27e8v, v\u2032\u27e9. Therefore, E |Xu,v \u2212Xu\u2032,v\u2032| \u2212E |Yu,v \u2212Yu\u2032,v\u2032| = 2(1 \u2212\u27e8u, u\u2032\u27e9)(1 \u2212\u27e8v, v\u2032\u27e9) \u22650, due to \u2225u\u22252 = \u2225u\u2032\u22252 = 1, \u2225v\u22252 \u2264\u2225v\u22256/5 = 1, and \u2225v\u2032\u22252 \u2264\u2225v\u2032\u22256/5 = 1. Then by Lemma A.3, we have E \u2225AS\u22252\u21926 \u2264E max (u,v)\u2208T Yu,v = E \u2225gS\u22252 + E \u2225h\u22256 \u2264 q E \u2225gS\u22252 2 + (E \u2225h\u22256 6)1/6 = \u221a k + (15m)1/6. Since \u2225\u00b7 \u22252\u21926 is a 1-Lipschitz function, by Lemma A.2, there holds with probability at least 1 \u22122 exp(\u2212t2/2) \u2225AS\u22252\u21926 \u2264 \u221a k + (15m)1/6 + t. 
Similarly, with probability at least 1 \u22122 exp(\u2212t2/2) \u2225AS\u22252\u21924 \u2264 \u221a k + (3m)1/4 + t. Lemma A.6 On an event with probability at least 1 \u22121/m, we have \r \r \r \r \r \r 1 m m X j=1 |aj\u2032 Sx|2ajSaj\u2032 S \u2212 \u0000\u2225x\u22252 2(Ip)S + 2xx\u2032\u0001 \r \r \r \r \r \r \u2264\u03b4\u2225x\u22252 2 provided m \u2265C(\u03b4)k log k, where C(\u03b4) is constant only depending on \u03b4. Here (Ip)S by de\ufb01nition is a diagonal matrix with \ufb01rst k diagonal entries equal to 1, whereas other entries being 0. Furthermore, it implies that 1 m m X j=1 (aj\u2032 Sx)2(aj\u2032 Sh)2 \u22652(x\u2032h)2 + (1 \u2212\u03b4)\u2225x\u22252 2\u2225h\u22252 2 for any h \u2208Rp that satis\ufb01es supp(h) \u2282S. The proof of this lemma is the same as that of Lemma 7.4 in [12]. Lemma A.7 Suppose \u03f51, . . . , \u03f5m are independent zero-mean sub-exponential random variables with \u03c3 := max 1\u2264i\u2264m \u2225\u03f5i\u2225\u03c81. 22 \fThen with probability at least 1 \u22123 m, we have \f \f \f \f \f \f 1 m m X j=1 \u03f5j \f \f \f \f \f \f \u2264C0\u03c3 r log m m , \u2225\u03f5\u2225\u221e\u2264C0\u03c3 log m, \f \f \f \f \f \f 1 m m X j=1 \u03f52 j \f \f \f \f \f \f \u2264C0\u03c32, and \f \f \f \f \f \f 1 m m X j=1 \u03f54 j \f \f \f \f \f \f \u2264C0\u03c34. provided m \u2265m0 for some numerical constants C0 and m0. Proof By Proposition 16 in [43], we have P \f \f \f \f \f m X i=1 \u03f5i \f \f \f \f \f \u2265t ! \u22642 exp \u0014 \u2212c min \u0012 t2 m\u03c32 , t \u03c3 \u0013\u0015 . This implies that with probability at least 1 \u2212 2 m10 , we have \f \f \f \f \f m X i=1 \u03f5i \f \f \f \f \f \u2264C0\u03c3 max \u0010p m log m, log m \u0011 \u2264C0\u03c3 p m log m provided m \u2265m0. This implies that \f \f \f \f \f \f 1 m m X j=1 \u03f5j \f \f \f \f \f \f \u2264C0\u03c3 r log m m . By the basic properties of sub-exponential random variables, for each j = 1, . . . 
, m, we have P (|\u03f5j| \u2265t) \u2264exp \u0012 1 \u2212c t \u03c3 \u0013 , which implies that |\u03f5j| \u2264C0\u03c3 log m with probability at least 1 \u2212e/m11. This implies that \u2225\u03f5\u2225\u221e\u2264C0\u03c3 log m with probability at least 1 \u2212e/m10. Since \u03c3 \u2265\u2225\u03f5j\u2225\u03a81 = sup p\u22651 p\u22121 (E |\u03f5j|p) 1 p , we have E \u03f52 j \u2264(2\u03c3)2 and E \u03f54 j \u2264(4\u03c3)4. De\ufb01ne X = 1 m m X j=1 \u03f52 j. Then we have E X \u2264(2\u03c3)2, and Var(X) \u2264(4\u03c3)4/m. By Chebyshev\u2019s inequality, we have P (|X \u2212E X| \u2265t) \u2264Var(X) t2 . 23 \fBy letting t = (4\u03c3)2, we obtain that with probability at least 1 \u22121/m, we have |X| \u226420\u03c32. Similarly, with probability at least 1 \u22121/m, we have \f \f \f 1 m Pm j=1 \u03f54 j \f \f \f \u2264C0\u03c34 for some absolute constant C0. Lemma A.8 Suppose zj \u2208Rk, j = 1, . . . , m are IID standard normal random vectors. For \ufb01xed a \u2208Rm, with probability at least 1 \u22122e\u2212k, we have \r \r \r \r \r \r m X j=1 ajzjz\u2032 j \u2212 \uf8eb \uf8ed m X j=1 aj \uf8f6 \uf8f8Ik \r \r \r \r \r \r \u2264C0 \u0012q k\u2225a\u22252 2 + k\u2225a\u2225\u221e \u0013 for some absolute constant C0. Proof De\ufb01ne A := m X j=1 ajzjz\u2032 j \u2212 \uf8eb \uf8ed m X j=1 aj \uf8f6 \uf8f8Ik. By Lemma 4 in [43], we have \u2225A\u2225\u22642 sup x\u2208N 1 4 |x\u2032Ax|, where N 1 4 is the 1/4-net of the unit sphere T k\u22121. For \ufb01xed x \u2208N 1 4 , let yj = |z\u2032 jx|2 \u22121. Then x\u2032Ax = m X j=1 ajyj. Notice that yj, j = 1, . . . , m are IID sub-exponential variables with \u2225yj\u2225\u03c81 \u2264K where K is an absolute constant. By Bernstein inequality (see, e.g., Proposition 16 in [43]), we have with probability at least 1 \u22122 exp(\u22124k), \f \f \f \f \f \f m X j=1 ajyj \f \f \f \f \f \f \u2264(C0/2) \u0012q k\u2225a\u22252 2 + k\u2225a\u2225\u221e \u0013 for some absolute constant C0. 
Since |N 1 4 | \u22649k, we know with probability at least 1 \u22122e\u2212k, we have \u2225A\u2225\u22642 sup x\u2208N 1 4 |x\u2032Ax| \u2264C0 \u0012q k\u2225a\u22252 2 + k\u2225a\u2225\u221e \u0013 . 24" + }, + { + "url": "http://arxiv.org/abs/1505.01585v1", + "title": "Optimal Estimation of A Quadratic Functional and Detection of Simultaneous Signals", + "abstract": "Motivated by applications in genomics, this paper studies the problem of\noptimal estimation of a quadratic functional of two normal mean vectors,\n$Q(\\mu, \\theta) = \\frac{1}{n}\\sum_{i=1}^n\\mu_i^2\\theta_i^2$, with a particular\nfocus on the case where both mean vectors are sparse. We propose optimal\nestimators of $Q(\\mu, \\theta)$ for different regimes and establish the minimax\nrates of convergence over a family of parameter spaces. The optimal rates\nexhibit interesting phase transitions in this family. The simultaneous signal\ndetection problem is also considered under the minimax framework. It is shown\nthat the proposed estimators for $Q(\\mu, \\theta)$ naturally lead to optimal\ntesting procedures.", + "authors": "T. Tony Cai, Xin Lu Tan", + "published": "2015-05-07", + "updated": "2015-05-07", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.TH" + ], + "main_content": "Introduction The problem of quadratic functional estimation occupies an important position in nonparametric and high-dimensional statistical inference. It is of signi\ufb01cant interest in its own right, and also has close connections to other important problems such as signal detection and construction of con\ufb01dence balls. The focus so far has been on the one-sequence case. Bickel and Ritov (1988) showed that there is an interesting phase transition in the density estimation setting where the minimax rate of convergence is the usual parametric rate when the density function is su\ufb03ciently smooth, and is otherwise slower than the parametric rate. 
Under the Gaussian sequence model $Y_i = \\theta_i + \\sigma_n z_i$, $i = 1, 2, \\dots$, (1) where $z_i \\stackrel{iid}{\\sim} N(0, 1)$, Donoho and Nussbaum (1990), Fan (1991), and Efromovich and Low (1996) further developed this theory for estimating $Q(\\theta) = \\sum \\theta_i^2$ over quadratically convex parameter spaces such as hyperrectangles or Sobolev balls. The Gaussian sequence model (1) is equivalent to the white noise with drift model and can be used to approximate nonparametric regression and density estimation models. Cai and Low (2005, 2006b) considered minimax and adaptive estimation of the quadratic functional $Q(\\theta)$ over parameter spaces that are not quadratically convex. It is shown that in such a setting optimal quadratic rules are often suboptimal and nonquadratic procedures may exhibit different phase transition phenomena than quadratic procedures. The results on estimating the quadratic functional $Q(\\theta)$ have important implications on hypothesis testing and construction of confidence balls. See, for example, Li (1989), D\u00fcmbgen (1998), Lepski and Spokoiny (1999), Ingster and Suslina (2003), Baraud (2004), Genovese and Wasserman (2005), and Cai and Low (2005, 2006a). arXiv:1505.01585v1 [math.ST] 7 May 2015. Motivated by contemporary applications in genomics, we consider in the present paper estimation of the functional $Q(\\mu, \\theta) = \\frac{1}{n}\\sum_{i=1}^n \\mu_i^2\\theta_i^2$ (2) under the Gaussian two-sequence model $X_i = \\mu_i + \\sigma z_i'$, $Y_i = \\theta_i + \\sigma z_i$, $i = 1, \\dots, n$, (3) where $z_1', \\dots, z_n', z_1, \\dots, z_n \\stackrel{iid}{\\sim} N(0, 1)$ and $\\sigma$ is the noise level. The goal is to optimally estimate the quadratic functional $Q(\\mu, \\theta)$ based on the observed data $(X_i, Y_i)$, $i = 1, \\dots, n$. (Strictly speaking, $Q(\\mu, \\theta)$ is a quartic functional, but we will refer to it as a quadratic functional in the two-sequence case, as it is quadratic in $\\mu$ given $\\theta$, and vice versa.)
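As a concrete illustration, the following sketch (our own; function names are ours, stdlib-only Python) simulates the two-sequence model (3) and evaluates the functional Q(mu, theta) of (2):

```python
import random

def simulate_two_sequence(mu, theta, sigma, rng):
    # Model (3): X_i = mu_i + sigma * z_i_prime, Y_i = theta_i + sigma * z_i
    x = [m + sigma * rng.gauss(0, 1) for m in mu]
    y = [t + sigma * rng.gauss(0, 1) for t in theta]
    return x, y

def q_functional(mu, theta):
    # Q(mu, theta) = (1/n) * sum_i mu_i^2 * theta_i^2; nonzero exactly when
    # mu and theta share a coordinate where both are nonzero
    return sum((m * t) ** 2 for m, t in zip(mu, theta)) / len(mu)

rng = random.Random(1)
mu = [3.0, 3.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]
theta = [3.0, 0.0, 2.0, 0.0, 2.0, 0.0, 0.0, 0.0]
x, y = simulate_two_sequence(mu, theta, 1.0, rng)
q_true = q_functional(mu, theta)  # only coordinates 0 and 4 contribute
```

Here Q(mu, theta) is positive precisely because coordinates 0 and 4 carry simultaneous signal; it would vanish if the supports of mu and theta were disjoint.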
We are particularly interested in the case where both mean vectors \u00b5 = (\u00b51, . . . , \u00b5n) and \u03b8 = (\u03b81, . . . , \u03b8n) are sparse. This estimation problem is motivated by the detection of simultaneous signals in genomics, where high-throughput technologies have generated a broad array of large-scale genome-wide datasets (Schena, Shalon, Davis, and Brown, 1995; Lockhart, Brown, Wong, Chee, and Gineras, 2002; Puig, Caspary, Rigaut, Rutz, Bouveret, Bragado-Nilsson, Wilm, and S\u00b4 eraphin, 2001). As the heterogeneous datasets provide distinct but often complementary views of biological systems, an integrative approach in data analysis is called for to obtain a coherent view of the underlying biology. As an example, it is of great interest to connect certain genotypes to speci\ufb01c phenotypic outcomes to infer causal relationship among genetic variation, expression and disease. With regard to this, many genome-wide association studies (GWAS) have identi\ufb01ed potential disease-associated SNPs, and a natural next step is to identify genes whose expression levels are regulated by the disease-associated SNPs. A possibly e\ufb00ective integrative approach exploits the potential overlap between SNPs associated with expression (expression SNPs) and the SNPs associated with disease (disease SNPs) for improved power in detecting gene-disease associations (He, Fuller, Song, Meng, Zhang, Yang, and Li, 2013). Recent \ufb01ndings also suggest overlapping SNPs between numerous human traits (Sivakumaran, Agakov, Theodoratou, Prendergast, Zgaga, Manolio, Rudan, McKeigue, Wilson, and Campbell, 2011) and disorders (Cotsapas, Voight, Rossin, Lage, Neale, Wallace, Abecasis, Barrett, Behrens, Cho, et al., 2011; Consortium, 2011). 
Thus combining GWAS statistics from two (or even multiple) disorders provides increased power for discovering genes associated with common biological mechanism, thereby informs on overlapping pathophysiological relationship between the disorders. Other examples where detecting simultaneously occurring signals is of interest include the detection of shared DNA copy number variation across samples and meta-analysis of multiple linkage studies (Zhang, Siegmund, Ji, and Li, 2010). In a simpli\ufb01ed statistical framework, a problem of particular interest is detecting simultaneous signals under the Gaussian two-sequence model (3). Speci\ufb01cally, let \u00b5 \u22c6\u03b8 = (\u00b51\u03b81, . . . , \u00b5n\u03b8n) be the coordinate-wise product of \u00b5 and \u03b8. For the mean vector \u00b5 (similarly, \u03b8), we say that there is a signal at location i if \u00b5i \u0338= 0 (similarly, \u03b8i \u0338= 0). Our goal is to detect the existence of simultaneous signal for \u00b5 and \u03b8, which corresponds to the presence of location i\u2019s with \u00b5i\u03b8i \u0338= 0. Equivalently, we want to distinguish between \u00b5 \u22c6\u03b8 = 0 and \u00b5 \u22c6\u03b8 \u0338= 0. Of particular interest is the setting where the proportion of signals is small, and the signal strengths are relatively weak. This is indeed the setting in the gene-disease associations context, as only a small number of SNPs are expected to be associated with a disease or to regulate gene expression level. Moreover, the association, if exists, is weak. As demonstrated in the single Gaussian sequence model setting considered in Cai and Low (2005), the minimax hypothesis testing problem is closely connected to the minimax estimation theory. Our interest in detecting the existence of simultaneous signals for the unknown mean vectors \u00b5 and \u03b8 motivates the estimation of the quadratic functional of (\u00b5, \u03b8) given in (2). 
Note that Q(\u00b5, \u03b8) = 0 if and only if \u00b5\u22c6\u03b8 = 0. Indeed, the study of estimation of Q(\u00b5, \u03b8) turns out to highlight some important 2 \ffeatures of the testing problem. We emphasize that the two-sequence estimation and detection problem are not straightforward extension of the one-sequence case, and is interesting in its own right. Our contribution is two-fold. First, we propose optimal estimators of Q(\u00b5, \u03b8) over a family of parameter spaces to be introduced, and establish the minimax rates of convergence. It is shown that the optimal rate exhibits interesting phase transitions in this family. Along with the establishment of the minimax rates of convergence, we explain the intuition behind the construction of the optimal estimators. Second, we study the simultaneous signal detection problem under the minimax framework, and show that the proposed estimators for Q(\u00b5, \u03b8) naturally lead to optimal testing procedures. Thus, we bridge the gap between estimation and detection in the two-sequence case. Our formulation of the simultaneous signal detection problem also provides an alternative view to that of Zhao, Cai, and Li (2014), where the problem is studied under the mixture model framework. The rest of the paper is organized as follows: Section 2 considers estimation of the functional Q(\u00b5, \u03b8) and establishes the minimax rates of convergence. An application of the estimators of Q(\u00b5, \u03b8) to the simultaneous signal detection problem is given in Section 3. Section 4 complements our theoretical study with some simulation results, and we conclude the paper with a discussion in Section 5. Some additional results that are not included in the main text are given in Appendix A. Proofs of some of the main results are given in Section 6, with the rest relegated to Appendix B for the reason of space. 
2 Optimal Estimation of Q(\u00b5, \u03b8) In this section, we consider the estimation of the quadratic functional Q(\u00b5, \u03b8) = 1 n Pn i=1 \u00b52 i \u03b82 i of two sparse normal mean vectors \u00b5 = (\u00b51, . . . , \u00b5n) and \u03b8 = (\u03b81, . . . , \u03b8n) under the Gaussian twosequence model (3). An additional constraint is also imposed on the number of coordinates that are simultaneously nonzero for both mean vectors. The noise level \u03c3 in model (3) is assumed to be known. Estimation of the noise level, \u03c3, is relatively easy under the sparse sequence model (3) and will be discussed in Section 4. We begin by introducing some notation that will be used throughout the paper. Given a vector \u03b8 = (\u03b81, . . . , \u03b8n), we denote by \u2225\u03b8\u22250 = Card({i : \u03b8i \u0338= 0}) the \u21130-quasi-norm of \u03b8, \u2225\u03b8\u22252 = qPn i=1 \u03b82 i its \u21132-norm, and \u2225\u03b8\u2225\u221e= max1\u2264i\u2264n |\u03b8i| its \u2113\u221e-norm. For any real number a and b, set a \u2227b = min{a, b}, a \u2228b = max{a, b} and a+ = a \u22280. Throughout, the notation an \u224dbn means that there exists some numerical constants c and C such that c \u2264an bn \u2264C when n is large. By \u201cnumerical constants\u201d we usually mean constants that might depend on the characteristics of the problem but whose speci\ufb01c values are of little interest to us. The precise values of the numerical constants c and C may also vary from line to line. Adopting an asymptotic framework where the vector size n is the driving variable, we parameterize the signal strength, sparsity, and simultaneous sparsity of \u00b5 and \u03b8 as functions of n. 
Specifically, we consider the family of parameter spaces $\\Omega(\\beta, \\epsilon, b) = \\{(\\mu, \\theta) \\in \\mathbb{R}^n \\times \\mathbb{R}^n : \\|\\mu\\|_0 \\leq k_n, \\|\\mu\\|_\\infty \\leq s_n, \\|\\theta\\|_0 \\leq k_n, \\|\\theta\\|_\\infty \\leq s_n, \\|\\mu \\star \\theta\\|_0 \\leq q_n\\}$, (4) indexed by three parameters $\\beta$, $\\epsilon$, and $b$. We have the sparsity parametrization $k_n = n^{\\beta}$, $0 < \\beta < \\frac{1}{2}$, (5) the simultaneous sparsity parametrization $q_n = n^{\\epsilon}$, $0 < \\epsilon \\leq \\beta$, (6) and the signal strength parametrization $s_n = n^{b}$, $b \\in \\mathbb{R}$. (7) In other words, $\\Omega(\\beta, \\epsilon, b)$ is the collection of vector pairs $(\\mu, \\theta) \\in \\mathbb{R}^n \\times \\mathbb{R}^n$ where both $\\mu$ and $\\theta$ have at most $k_n$ nonzero entries, each entry is bounded in magnitude by $s_n$, and the number of simultaneously nonzero entries of $\\mu$ and $\\theta$ is at most $q_n$. In principle, $\\beta$ can take any value between 0 and 1. We are primarily interested in the estimation problem for the range $0 < \\beta < \\frac{1}{2}$, as it is well known that this corresponds to the case of rare signals (Donoho and Jin, 2004). Also, even though we parametrize the signal strength at the algebraic order $s_n = n^b$, throughout we will remark on the estimation result for $s_n$ of order $\\sqrt{\\log n}$, since this is an interesting region in the one-sequence signal detection problem (Donoho and Jin, 2004). Our goal is to derive the minimax rate of convergence for $Q(\\mu, \\theta)$ over $\\Omega(\\beta, \\epsilon, b)$: $R^*(n, \\Omega(\\beta, \\epsilon, b)) = \\inf_{\\hat{Q}} \\sup_{(\\mu,\\theta) \\in \\Omega(\\beta,\\epsilon,b)} E_{(\\mu,\\theta)}(\\hat{Q} - Q(\\mu, \\theta))^2$. We will show that $R^*(n, \\Omega(\\beta, \\epsilon, b)) \\asymp \\gamma_n(\\beta, \\epsilon, b)$, (8) where $\\gamma_n(\\beta, \\epsilon, b)$ is a function of $n$ indexed by $\\beta$, $\\epsilon$ and $b$. There are two main tasks in establishing the minimax rate of convergence.
For each triple (\u03b2, \u03f5, b) satisfying 0 < \u03f5 \u2264\u03b2 < 1 2 and b \u2208R, we (a) construct an estimator b Q\u2217that satis\ufb01es sup (\u00b5,\u03b8)\u2208\u2126(\u03b2,\u03f5,b) E(\u00b5,\u03b8)( b Q\u2217\u2212Q(\u00b5, \u03b8))2 \u2264C\u03b3n(\u03b2, \u03f5, b), (b) and show that R\u2217(n, \u2126(\u03b2, \u03f5, b)) \u2265c\u03b3n(\u03b2, \u03f5, b), where C and c are numerical constants that depend only on \u03b2, \u03f5, b, and \u03c3. Combining the upper bound derived in task (a) and the lower bound derived in task (b) yields the minimax rate of convergence (8). In this case, we say that the estimator b Q\u2217attains the minimax rate of convergence over the parameter space \u2126(\u03b2, \u03f5, b). Interestingly, the estimation problem exhibits di\ufb00erent phase transitions for the minimax rate \u03b3n(\u03b2, \u03f5, b) in three regimes: the sparse regime where 0 < \u03f5 < \u03b2 2 , the moderately dense regime where \u03b2 2 \u2264\u03f5 \u22643\u03b2 4 , and the strongly dense regime where 3\u03b2 4 < \u03f5 \u2264\u03b2. Collectively, we call \u03b2 2 \u2264\u03f5 \u2264\u03b2 the dense regime. In the sparse regime, simultaneous signal is sparse in the sense that qn \u226a\u221akn, while in the dense regime, simultaneous signal is dense in the sense that qn \u226b\u221akn. This is analogous to the terminology used in the one-sequence model, where signal is called sparse if 0 < \u03b2 < 1 2 (kn \u226a\u221an), and dense if 1 2 \u2264\u03b2 \u22641 (kn \u226b\u221an). The key distinction is that, in the two-sequence case, the sparseness or denseness is used to describe the relationship between simultaneous sparsity qn and sparsity kn, as opposed to between kn and the vector size n. We also remark that our use of the terminology is not super\ufb01cial \u2014 a detailed analysis of lower bound and upper bound for the estimation problem does reveal intimate connection to the corresponding regimes in the one-sequence case. 
Intuitively, when b is very small (i.e., signal is very weak), we are better o\ufb00estimating Q(\u00b5, \u03b8) by b Q0 = 0, (9) since any attempt to estimate Q(\u00b5, \u03b8) will incur a greater estimation risk. On the other hand, when b is su\ufb03ciently large (i.e., signal is strong), it is desirable to estimate Q(\u00b5, \u03b8) based on the observed 4 \fdata (Xi, Yi), i = 1, . . . , n. With a slight abuse of terminology, we say that the signal is weak if it corresponds to the region where b Q0 is optimal, and we say that the signal is strong otherwise. We construct two estimators of Q(\u00b5, \u03b8) that respectively attain the minimax rates of convergence over the sparse and dense regimes when the signal is su\ufb03ciently large in Sections 2.1 and 2.2. Note that it is possible to generalize our parametrization to the case where \u00b5 and \u03b8 have di\ufb00erent levels of both sparsity and signal strengths. This amounts to estimating Q(\u00b5, \u03b8) over the parameter space \u2126(\u03b1, \u03b2, \u03f5, a, b) = {(\u00b5, \u03b8) \u2208Rn \u00d7 Rn : \u2225\u00b5\u22250 \u2264jn, \u2225\u00b5\u2225\u221e\u2264rn, \u2225\u03b8\u22250 \u2264kn, \u2225\u03b8\u2225\u221e\u2264sn, \u2225\u00b5 \u22c6\u03b8\u22250 \u2264qn}, (10) where jn = n\u03b1, kn = n\u03b2, qn = n\u03f5 with 0 < \u03f5 \u2264\u03b1 \u2227\u03b2 < 1 2, and rn = na, sn = nb with a, b \u2208R. In this section, however, we will focus on the simplest case where jn = kn = n\u03b2 and rn = sn = nb, since the technical analysis is similar to that for the more general case (10) but less tedious. We did derive the minimax rate of convergence for the case where jn = kn = n\u03b2 but rn and sn are allowed to di\ufb00er. As the phase transitions for the minimax rates of convergence in this case are much more sophisticated but also are less easily digestible, we opt to defer its presentation to Appendix A.1. 
The analysis for the general case (10), where no constraint is imposed on either the sparsity or signal strength of $\\mu$ and $\\theta$, follows similarly, provided that the magnitude of the simultaneous sparsity $\\epsilon$ is compared to $\\alpha$ if $a \\geq b$, and to $\\beta$ if $b \\geq a$, for the determination of the sparse and dense regimes. 2.1 Estimation in the Sparse Regime We begin with the estimation of $Q(\\mu, \\theta) = \\frac{1}{n}\\sum \\mu_i^2\\theta_i^2$ over the parameter space $\\Omega(\\beta, \\epsilon, b)$ in the sparse regime, where $q_n$ is calibrated as in expression (6) with $0 < \\epsilon < \\frac{\\beta}{2}$. To construct an optimal estimator for $Q(\\mu, \\theta)$, we base our intuition on the estimation of the quadratic functional $Q(\\theta) = \\frac{1}{n}\\sum \\theta_i^2$ in the case where we only have one sequence of observations $Y_i$, $i = 1, \\dots, n$, from model (3). Consider the family of parameter spaces indexed by $k_n = n^{\\beta}$, $0 < \\beta < 1$, and $s_n = n^b$, $b \\in \\mathbb{R}$: $\\Theta(\\beta, b) = \\{\\theta \\in \\mathbb{R}^n : \\|\\theta\\|_0 \\leq k_n, \\|\\theta\\|_\\infty \\leq s_n\\}$. (11) That is, $\\Theta(\\beta, b)$ is the collection of vectors in $\\mathbb{R}^n$ that have at most $k_n$ nonzero entries, uniformly bounded in magnitude by $s_n$. It can be shown that for $0 < \\beta < \\frac{1}{2}$, the minimax rate of convergence for $Q(\\theta)$ over $\\Theta(\\beta, b)$ satisfies $R^*(n, \\Theta(\\beta, b)) := \\inf_{\\hat{Q}} \\sup_{\\theta \\in \\Theta(\\beta,b)} E_\\theta(\\hat{Q} - Q(\\theta))^2 \\asymp \\gamma_n(\\beta, b)$, (12) where $\\gamma_n(\\beta, b) = n^{2\\beta+4b-2}$ if $b \\leq 0$; $n^{2\\beta-2}(\\log n)^2$ if $0 < b \\leq \\frac{\\beta}{2}$; and $n^{\\beta+2b-2}$ if $b > \\frac{\\beta}{2}$. (13) Moreover, the minimax rate of convergence when $s_n = \\sigma\\sqrt{d \\log n}$ for some $d > 0$ satisfies $\\inf_{\\hat{Q}} \\sup_{\\theta : \\|\\theta\\|_0 \\leq k_n, \\|\\theta\\|_\\infty \\leq \\sigma\\sqrt{d \\log n}} E_\\theta(\\hat{Q} - Q(\\theta))^2 \\asymp n^{2\\beta-2}(\\log n)^2$. Thus, the phase transition of $\\gamma_n(\\beta, b)$ from $b \\leq 0$ to $b > 0$ is smooth.
The special interest in signal strength of order $\\sqrt{\\log n}$ has its root in the one-sequence signal detection problem, which we discuss in more detail in Section 3. When $0 < \\beta < \\frac{1}{2}$, we have $k_n \\ll \\sqrt{n}$. Thus, we anticipate only very few coordinates of $\\theta$ to be nonzero. If, in addition, $b < 0$, then the signal is both rare and weak, and one can do no better than simply estimating $Q(\\theta)$ by $\\hat{Q}_0 = 0$. Nonetheless, when $b > 0$, the signal is rare but sufficiently strong, and the estimator $\\hat{Q}_1 = \\frac{1}{n}\\sum_{i=1}^n [(Y_i^2 - \\sigma^2\\tau_n)_+ - \\theta_0]$, where $\\theta_0 := E_0(Y_i^2 - \\sigma^2\\tau_n)_+$, (14) which performs coordinate-wise thresholding on $Y_i^2$ with tuning parameter $\\tau_n = 2\\log n$, is optimal. Note that each term $\\theta_i^2$ is estimated independently by $(Y_i^2 - \\sigma^2\\tau_n)_+ - \\theta_0$, since the sparsity pattern is unstructured. The estimator (14) involves a thresholding step, $(Y_i^2 - \\sigma^2\\tau_n)_+$, for denoising, and a de-bias step that subtracts $\\theta_0$ from the thresholded term so that the zero coordinates of $\\theta$ are estimated unbiasedly. This is important because the proportion of zero entries in this case is relatively large, and a biased estimator for these coordinates would unnecessarily inflate the estimation risk. When $s_n = \\sigma\\sqrt{d \\log n}$ for some $d > 0$, we are indifferent in terms of estimation, since both $\\hat{Q}_0$ and $\\hat{Q}_1$ attain the minimax rate of convergence. We now return to the estimation of $Q(\\mu, \\theta)$ in the two-sequence case, where $0 < \\epsilon < \\frac{\\beta}{2}$ and $0 < \\beta < \\frac{1}{2}$. In this case $k_n \\ll \\sqrt{n}$, so the signal of each individual sequence is rare. Moreover, the simultaneous sparsity $q_n \\ll \\sqrt{k_n}$, implying that knowledge about whether $\\mu_i$ is nonzero does not entail much about whether $\\theta_i$ is nonzero (and vice versa).
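A minimal numerical sketch of the thresholding estimator (14) follows (our own illustrative code; the de-bias constant theta0 is approximated by Monte Carlo rather than computed in closed form):

```python
import math
import random

def q1_hat(y, sigma, rng, mc=50000):
    # Estimator (14): (1/n) * sum_i [ (Y_i^2 - sigma^2 * tau_n)_+ - theta0 ],
    # tau_n = 2 * log n, theta0 = E_0 (Y^2 - sigma^2 * tau_n)_+ under Y ~ N(0, sigma^2)
    n = len(y)
    tau = 2.0 * math.log(n)
    theta0 = sum(max((sigma * rng.gauss(0, 1)) ** 2 - sigma ** 2 * tau, 0.0)
                 for _ in range(mc)) / mc
    return sum(max(yi ** 2 - sigma ** 2 * tau, 0.0) - theta0 for yi in y) / n

# Rare but strong signal: 3 nonzero coordinates of size 10 among n = 100
rng = random.Random(2)
theta = [10.0] * 3 + [0.0] * 97
y = [t + rng.gauss(0, 1) for t in theta]
estimate = q1_hat(y, 1.0, rng)  # roughly recovers Q(theta) = 3 * 10**2 / 100 = 3
```
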
This motivates the estimator $\\hat{Q}_2 = \\frac{1}{n}\\sum_{i=1}^n [(X_i^2 - \\sigma^2\\tau_n)_+ - \\mu_0][(Y_i^2 - \\sigma^2\\tau_n)_+ - \\theta_0]$, $\\mu_0 = \\theta_0 := E_0(Y_i^2 - \\sigma^2\\tau_n)_+$, (15) in the case of sufficiently strong signal, where the threshold level $\\tau_n = \\log n$. The construction of $\\hat{Q}_2$ is a straightforward extension of the construction of $\\hat{Q}_1$: each term $\\mu_i^2\\theta_i^2$ is estimated independently by the product $[(X_i^2 - \\sigma^2\\tau_n)_+ - \\mu_0][(Y_i^2 - \\sigma^2\\tau_n)_+ - \\theta_0]$. Since $q_n \\ll \\sqrt{k_n}$, following our previous argument, thresholding $X_i^2$ and $Y_i^2$ independently seems reasonable. We now present a theorem on the upper bound of the mean squared error of $\\hat{Q}_2$. Theorem 1 (Sparse Regime: Upper Bound). For $b > 0$, the estimator $\\hat{Q}_2$ as in (15) with $\\tau_n = \\log n$ satisfies $\\sup_{(\\mu,\\theta) \\in \\Omega(\\beta,\\epsilon,b)} E_{(\\mu,\\theta)}(\\hat{Q}_2 - Q(\\mu, \\theta))^2 \\leq C[n^{2\\epsilon+4b-2}(\\log n)^2 + n^{\\epsilon+6b-2}]$. (16) Straightforward calculation shows that for the estimator $\\hat{Q}_0 = 0$, $\\sup_{(\\mu,\\theta) \\in \\Omega(\\beta,\\epsilon,b)} E_{(\\mu,\\theta)}(\\hat{Q}_0 - Q(\\mu, \\theta))^2 = \\sup_{(\\mu,\\theta) \\in \\Omega(\\beta,\\epsilon,b)} (\\frac{1}{n}\\sum_{i=1}^n \\mu_i^2\\theta_i^2)^2 = q_n^2 s_n^8 n^{-2} = n^{2\\epsilon+8b-2}$, (17) for $0 < \\epsilon \\leq \\beta < \\frac{1}{2}$ and $b \\in \\mathbb{R}$. We now show that the combination of $\\hat{Q}_0$ (when $b < 0$) and $\\hat{Q}_2$ (when $b \\geq 0$) is optimal, by providing a matching lower bound. Theorem 2 (Sparse Regime: Lower Bound). Let $0 < \\epsilon < \\frac{\\beta}{2}$ and $0 < \\beta < \\frac{1}{2}$. Then $R^*(n, \\Omega(\\beta, \\epsilon, b)) \\geq c\\gamma_n(\\beta, \\epsilon, b)$, where $\\gamma_n(\\beta, \\epsilon, b) = n^{2\\epsilon+8b-2}$ if $b \\leq 0$; $n^{2\\epsilon+4b-2}(\\log n)^2$ if $0 < b \\leq \\frac{\\epsilon}{2}$; and $n^{\\epsilon+6b-2}$ if $b > \\frac{\\epsilon}{2}$. (18)
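The independently thresholded product in (15) can be sketched as follows (again our own illustrative code, with the common de-bias constant mu0 = theta0 approximated by Monte Carlo):

```python
import math
import random

def q2_hat(x, y, sigma, rng, mc=50000):
    # Estimator (15): product of independently thresholded, de-biased coordinates,
    # tau_n = log n, and mu0 = theta0 = E_0 (Y^2 - sigma^2 * tau_n)_+
    n = len(x)
    tau = math.log(n)
    null_mean = sum(max((sigma * rng.gauss(0, 1)) ** 2 - sigma ** 2 * tau, 0.0)
                    for _ in range(mc)) / mc
    return sum((max(xi ** 2 - sigma ** 2 * tau, 0.0) - null_mean)
               * (max(yi ** 2 - sigma ** 2 * tau, 0.0) - null_mean)
               for xi, yi in zip(x, y)) / n

rng = random.Random(3)
mu = [10.0] * 2 + [0.0] * 98      # sparse signal in mu
theta = [10.0] * 2 + [0.0] * 98   # simultaneous signal at coordinates 0 and 1
x = [m + rng.gauss(0, 1) for m in mu]
y = [t + rng.gauss(0, 1) for t in theta]
estimate = q2_hat(x, y, 1.0, rng)  # true Q(mu, theta) = 2 * 10**4 / 100 = 200
```
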
To apply CRI, it su\ufb03ces to construct two priors supported on \u2126(\u03b2, \u03f5, b) that have small chi-square distance but a large di\ufb00erence in the expected values of the resulting quadratic functionals. The cases b \u2264\u03f5 2 and b > \u03f5 2 correspond to choices of distinct pairs of priors. For b > \u03f5 2, the CRI boils down to the standard technique of inscribing a hardest hyperrectangle, with the Bayes risk for a simple prior supported on the hyperrectangle being a lower bound for the minimax risk. Nevertheless, the case b \u2264\u03f5 2 requires the use of a rich collection of hyperrectangles and a mixture prior which mixes over the vertices of the hyperrectangles in this collection. Mixing increases the di\ufb03culty of the Bayes estimation problem and is needed here to attain a sharp lower bound. Remark 1. Combining (16), (17) and (18), we see that when 0 < \u03f5 < \u03b2 2 and 0 < \u03b2 < 1 2, b Q2 attains the optimal rate of convergence over \u2126(\u03b2, \u03f5, b) when b > 0. In contrast, b Q0 attains the optimal rate of convergence over \u2126(\u03b2, \u03f5, b) when b \u22640. Remark 2. Interestingly, there is no dependence on \u03b2 in the minimax rate of convergence \u03b3n(\u03b2, \u03f5, b) in the sparse regime, except for the requirement that 0 < \u03f5 < \u03b2 2 . 2.2 Estimation in the Dense Regime We now consider estimating Q(\u00b5, \u03b8) in the dense regime, where qn is calibrated as in expression (6) with \u03b2 2 \u2264\u03f5 \u2264\u03b2. The dense regime is subdivided into two cases: the moderately dense case with \u03b2 2 \u2264\u03f5 \u22643\u03b2 4 and the strongly dense case with 3\u03b2 4 < \u03f5 \u2264\u03b2. In the dense regime, the estimator b Q2 de\ufb01ned in (15) is suboptimal, as the thresholding step in both X2 i and Y 2 i ends up thresholding too many coordinates when the signal is weak. 
Note that the simultaneous sparsity $q_n \\gg \\sqrt{k_n}$ suggests that for each coordinate $i$ with $\\mu_i \\neq 0$, it is usually the case that $\\theta_i \\neq 0$, and vice versa. Therefore, it is no longer reasonable to perform thresholding on $X_i^2$ and $Y_i^2$ independently. The additional knowledge of a relatively high proportion of simultaneously nonzero entries suggests that whenever we observe a large value of $X_i^2$ (an indication that $\\mu_i \\neq 0$), then even if $Y_i^2$ is small, we should still estimate $\\mu_i^2\\theta_i^2$ rather than setting it equal to zero. The same reasoning applies to the case where $X_i^2$ is small but $Y_i^2$ is large. To construct an optimal estimator in the dense regime, we again borrow intuition from the estimation of the quadratic functional $Q(\\theta) = \\frac{1}{n}\\sum \\theta_i^2$ in the one-sequence case. We consider the family of parameter spaces given in (11), but for $\\frac{1}{2} \\leq \\beta < 1$. The minimax rate of convergence once again satisfies (12), but with $\\gamma_n(\\beta, b) = n^{2\\beta+4b-2}$ if $b \\leq \\frac{1-2\\beta}{4}$; $n^{-1}$ if $\\frac{1-2\\beta}{4} < b \\leq \\frac{1-\\beta}{2}$; and $n^{\\beta+2b-2}$ if $b > \\frac{1-\\beta}{2}$. (19) When $\\frac{1}{2} \\leq \\beta < 1$, we have $k_n \\gg \\sqrt{n}$, meaning that many coordinates of $\\theta$ are nonzero. The characterization of weak versus strong signal is no longer $b < 0$ versus $b \\geq 0$, as in the case $0 < \\beta < \\frac{1}{2}$, but $b \\leq \\frac{1-2\\beta}{4}$ versus $b > \\frac{1-2\\beta}{4}$. That is, given the same signal strength $b$, the vast number of nonzero coordinates of $\\theta$ when $k_n \\gg \\sqrt{n}$ collectively represents stronger signal as compared to the case $k_n \\ll \\sqrt{n}$. Thus, the threshold of \u201cstrong\u201d signal as encoded by $b$ is lowered when $k_n \\gg \\sqrt{n}$. It is not surprising that for the range of weak signal $b \\leq \\frac{1-2\\beta}{4}$, the estimator $\\hat{Q}_0 = 0$ is optimal.
On the other hand, when $b > \\frac{1-2\\beta}{4}$, the optimal estimator for $Q(\\theta)$ is the unbiased estimator $\\hat{Q}_3 = \\frac{1}{n}\\sum_{i=1}^n (Y_i^2 - \\sigma^2)$. (20) An optimal estimator is often one that strikes an appropriate balance between bias and variance in its mean squared error. The estimators $\\hat{Q}_0$ and $\\hat{Q}_3$ represent two extremes of this bias-variance tradeoff: $\\hat{Q}_0$, optimal for exceedingly weak signal, has zero variance, while $\\hat{Q}_3$, optimal for sufficiently strong signal, has zero bias. Due to the denseness of the nonzero coordinates when $k_n \\gg \\sqrt{n}$, one cannot afford to introduce bias into the estimator in the hope of achieving smaller variance. Without additional information about the sparsity structure, the unbiased estimator $\\hat{Q}_3$ is necessary for optimal estimation of $Q(\\theta)$. We now return to the two-sequence setting for the estimation of $Q(\\mu, \\theta)$, for the case $\\frac{\\beta}{2} \\leq \\epsilon \\leq \\beta$ and $0 < \\beta < \\frac{1}{2}$. Although the signal of each individual sequence is sparse ($k_n \\ll \\sqrt{n}$), the simultaneous signal is dense in the sense that $q_n \\gg \\sqrt{k_n}$. The intuition garnered from the one-sequence case motivates the following estimator: $\\hat{Q}_4 = \\frac{1}{n}\\sum_{i=1}^n [(X_i^2 - \\sigma^2)(Y_i^2 - \\sigma^2)\\mathbf{1}(X_i^2 \\vee Y_i^2 > \\sigma^2\\tau_n) - \\eta]$, (21) where $\\eta = E_{(0,0)}[(X_i^2 - \\sigma^2)(Y_i^2 - \\sigma^2)\\mathbf{1}(X_i^2 \\vee Y_i^2 > \\sigma^2\\tau_n)]$. From $\\hat{Q}_4$, we see that each term $\\mu_i^2\\theta_i^2$ is estimated unbiasedly (modulo $\\eta$) by $(X_i^2 - \\sigma^2)(Y_i^2 - \\sigma^2)$ whenever at least one of $X_i^2$ and $Y_i^2$ is sufficiently large. This is in accordance with our previous argument that estimation should be done whenever we have at least one large value of $X_i^2$ or $Y_i^2$.
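The dense-regime construction in (21) can be sketched as follows (our own illustrative code; the null-mean correction eta is approximated by Monte Carlo under mu = theta = 0):

```python
import math
import random

def q4_hat(x, y, sigma, rng, mc=50000):
    # Estimator (21): keep the unbiased product (X_i^2 - sigma^2)(Y_i^2 - sigma^2)
    # whenever max(X_i^2, Y_i^2) > sigma^2 * tau_n (tau_n = 4 * log n), then
    # subtract the null mean eta of the kept product
    n = len(x)
    tau = 4.0 * math.log(n)

    def term(a, b):
        if max(a * a, b * b) > sigma ** 2 * tau:
            return (a * a - sigma ** 2) * (b * b - sigma ** 2)
        return 0.0

    eta = sum(term(sigma * rng.gauss(0, 1), sigma * rng.gauss(0, 1))
              for _ in range(mc)) / mc
    return sum(term(xi, yi) - eta for xi, yi in zip(x, y)) / n

rng = random.Random(4)
mu = [8.0] * 10 + [0.0] * 90      # dense simultaneous signal: q_n = 10 > sqrt(k_n)
theta = [8.0] * 10 + [0.0] * 90
x = [m + rng.gauss(0, 1) for m in mu]
y = [t + rng.gauss(0, 1) for t in theta]
estimate = q4_hat(x, y, 1.0, rng)  # true Q(mu, theta) = 10 * 8**4 / 100 = 409.6
```
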
The threshold $\\tau_n$ is a tuning parameter whose value is determined in the analysis of the mean squared error of $\\hat{Q}_4$; it turns out that $\\tau_n = c \\log n$ for any $c \\geq 4$ attains the optimal rate of convergence. The subtraction of $\\eta$ from $(X_i^2 - \\sigma^2)(Y_i^2 - \\sigma^2)\\mathbf{1}(X_i^2 \\vee Y_i^2 > \\sigma^2\\tau_n)$ is needed because the majority of coordinates $i$ have $\\mu_i = \\theta_i = 0$; a biased estimator for these coordinates unavoidably inflates the estimation risk. Nevertheless, due to the rarity of nonzero coordinates in the individual sequences, the naive unbiased estimator $\\hat{Q}_5 = \\frac{1}{n}\\sum_{i=1}^n (X_i^2 - \\sigma^2)(Y_i^2 - \\sigma^2)$ (22) is not optimal, as one would have expected. A thresholding step $\\mathbf{1}(X_i^2 \\vee Y_i^2 > \\sigma^2\\tau_n)$ is needed to guard against estimating entries with $\\mu_i = \\theta_i = 0$ with noise. Note that $\\hat{Q}_2$ defined in (15) can be written as $\\frac{1}{n}\\sum_{i=1}^n [(X_i^2 - \\sigma^2\\tau_n)\\mathbf{1}(X_i^2 > \\sigma^2\\tau_n) - \\mu_0][(Y_i^2 - \\sigma^2\\tau_n)\\mathbf{1}(Y_i^2 > \\sigma^2\\tau_n) - \\theta_0]$. Comparing this expression with $\\hat{Q}_4$, we see that when both $X_i^2$ and $Y_i^2$ are large, the term $\\mu_i^2\\theta_i^2$ is roughly estimated as $(X_i^2 - \\sigma^2\\tau_n)(Y_i^2 - \\sigma^2\\tau_n)$. Moreover, $(X_i^2 - \\sigma^2\\tau_n)(Y_i^2 - \\sigma^2\\tau_n)$ is a biased estimator of $\\mu_i^2\\theta_i^2$ when $\\tau_n > 1$. We present an upper bound on the mean squared error of $\\hat{Q}_4$ in the following theorem. Theorem 3 (Dense Regime: Upper Bound). For $b > 0$, the estimator $\\hat{Q}_4$ as in (21) with $\\tau_n = 4 \\log n$ satisfies $\\sup_{(\\mu,\\theta) \\in \\Omega(\\beta,\\epsilon,b)} E_{(\\mu,\\theta)}(\\hat{Q}_4 - Q(\\mu, \\theta))^2 \\leq C \\max\\{n^{2\\epsilon-2}(\\log n)^4, n^{\\epsilon+6b-2}, n^{\\beta+4b-2}\\}$. (23) We now provide a matching lower bound to complement the upper bound in the dense regime. Theorem 4 (Dense Regime: Lower Bound). Let $\\frac{\\beta}{2} \\leq \\epsilon \\leq \\beta$ and $0 < \\beta < \\frac{1}{2}$.
Then $R^*(n, \\Omega(\\beta, \\epsilon, b)) \\geq c\\gamma_n(\\beta, \\epsilon, b)$, where $\\gamma_n(\\beta, \\epsilon, b) = n^{2\\epsilon+8b-2}$ if $b \\leq 0$; $n^{2\\epsilon-2}(\\log n)^4$ if $0 < b \\leq \\frac{2\\epsilon-\\beta}{4}$; $n^{\\beta+4b-2}$ if $\\frac{2\\epsilon-\\beta}{4} < b \\leq \\frac{\\beta-\\epsilon}{2}$; and $n^{\\epsilon+6b-2}$ if $b > \\frac{\\beta-\\epsilon}{2}$, (24) when $\\frac{\\beta}{2} \\leq \\epsilon \\leq \\frac{3\\beta}{4}$, and $\\gamma_n(\\beta, \\epsilon, b) = n^{2\\epsilon+8b-2}$ if $b \\leq 0$; $n^{2\\epsilon-2}(\\log n)^4$ if $0 < b \\leq \\frac{\\epsilon}{6}$; and $n^{\\epsilon+6b-2}$ if $b > \\frac{\\epsilon}{6}$, (25) when $\\frac{3\\beta}{4} < \\epsilon \\leq \\beta$. The minimax rates of convergence display different phase transitions within the two subdivisions of the dense regime. In the moderately dense regime, where $\\frac{\\beta}{2} \\leq \\epsilon \\leq \\frac{3\\beta}{4}$, there are phase transitions at $b = \\frac{2\\epsilon-\\beta}{4}$ and $b = \\frac{\\beta-\\epsilon}{2}$, given in (24). Note that $\\frac{2\\epsilon-\\beta}{4} \\leq \\frac{\\beta-\\epsilon}{2}$ if and only if $\\epsilon \\leq \\frac{3\\beta}{4}$. In the strongly dense regime, where $\\epsilon > \\frac{3\\beta}{4}$, the phase $\\frac{2\\epsilon-\\beta}{4} < b \\leq \\frac{\\beta-\\epsilon}{2}$ is non-existent, and we only have one intermediate phase $0 < b \\leq \\frac{\\epsilon}{6}$, given in (25). We establish the lower bound by constructing least favorable priors and applying the CRI. Except for the rate $n^{\\epsilon+6b-2}$, which is obtained through the inscription of a hardest hyperrectangle, all other cases require some form of mixing over the vertices of a rich collection of hyperrectangles. Remark 3. Combining (17), (23), (24), and (25), we see that for the parameter space $\\Omega(\\beta, \\epsilon, b)$ with $\\frac{\\beta}{2} \\leq \\epsilon \\leq \\beta < \\frac{1}{2}$, $\\hat{Q}_4$ attains the minimax rate of convergence when $b > 0$. In contrast, $\\hat{Q}_0 = 0$ attains the minimax rate of convergence when $b \\leq 0$. Remark 4.
Similar to the sparse regime, there is no dependence on \u03b2 in the minimax rate of convergence \u03b3n(\u03b2, \u03f5, b) in the strongly dense regime, except for the requirement that 3\u03b2 4 < \u03f5 \u2264\u03b2. In contrast, \u03b3n(\u03b2, \u03f5, b) does depend explicitly on \u03b2 in the moderately dense regime \u03b2 2 \u2264\u03f5 \u22643\u03b2 4 . Remark 5. For either the sparse or dense regime, i.e., 0 < \u03f5 \u2264\u03b2, the minimax rate of convergence of Q(\u00b5, \u03b8) over \u2126(\u03b2, \u03f5, b) diverges to in\ufb01nity when b > 2\u2212\u03f5 6 . Hence when the signal strength is too large, it is impossible to consistently estimate Q(\u00b5, \u03b8). Remark 6. For either the sparse or dense regime, i.e., 0 < \u03f5 \u2264\u03b2, when rn = sn = \u03c3\u221ad log n for some d > 0, we have inf b Q sup (\u00b5,\u03b8):\u2225\u00b5\u22250\u2264kn,\u2225\u03b8\u22250\u2264kn, \u2225\u00b5\u2225\u221e\u2264\u03c3\u221ad log n,\u2225\u03b8\u2225\u221e\u2264\u03c3\u221ad log n E(\u00b5,\u03b8)( b Q \u2212Q(\u00b5, \u03b8))2 \u224dn2\u03f5\u22122(log n)4. The minimax rate of estimation is attained by b Q0, b Q2 for 0 < \u03f5 \u2264\u03b2 and by b Q4 for \u03b2 2 \u2264\u03f5 \u2264\u03b2. 2.3 Phase Transitions in the Minimax Rates of Convergence We see from Sections 2.1 and 2.2 that within each regime, the minimax rates of convergence exhibit several phase transitions. In addition, each transition is governed by a change in the relative magnitudes of the sparsity parameter \u03b2, the simultaneous sparsity parameter \u03f5, and the signal strength parameter b. In fact, it is the way phase transitions occur within each regime that characterizes the regime itself. Furthermore, the phase transitions actually display \u201ccontinuity\u201d across the boundaries of di\ufb00erent regimes. 
To depict this graphically, first note from Sections 2.1 and 2.2 that the minimax rate of convergence satisfies
$$\gamma_n(\beta, \epsilon, b) \asymp n^{r(\beta, \epsilon, b)}, \tag{26}$$
modulo a factor involving log n when applicable. In Figure 1, we plot the rate exponent r(β, ε, b) against b for the sparse, moderately dense, and strongly dense regimes. Specifically, fixing β = 0.45, we plot r(β, ε, b) against b for a range of ε values in (0, β). The left panel of Figure 1 provides a continuum view of r(β, ε, b) as ε increases from 0 to β. Each piecewise straight line corresponds to an ε value in the considered range. To highlight the discrepancy among the three regimes, we color the sparse regime (0 < ε < β/2) in red, the moderately dense regime (β/2 ≤ ε ≤ 3β/4) in green, and the strongly dense regime (3β/4 < ε ≤ β) in blue. We see that the three regimes have somewhat different behaviors for small positive values of b. In particular, the sparse regime and the strongly dense regime each experience two transitions (three different slopes), while the moderately dense regime experiences three transitions (four different slopes). Note that the difference in the number of transitions is restored at the intersection of the blue region and the red region. Thus, the phase transition is in some sense "continuous" across the regime boundaries: the piecewise straight lines corresponding to the r(β, ε, b)'s exhibit a smooth transition as ε increases from 0 to β. The right panel of Figure 1 provides a static view of each regime. We plot r(β, ε, b) against b for three values of ε, one for each regime: ε = 0.12 (sparse regime), ε = 0.28 (moderately dense regime), and ε = 0.4 (strongly dense regime).
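The piecewise rate exponent r(β, ε, b) described above and plotted in Figure 1 can be sketched directly in Python (a minimal sketch assembled from the sparse-regime rates and from (24) and (25); the function name is ours). A quick numerical check confirms that the exponent is continuous in b at each transition point.

```python
def rate_exponent(beta, eps, b):
    """Rate exponent r(beta, eps, b) with gamma_n ~ n^r (log factors omitted).

    Piecewise formulas taken from the sparse-regime results and from
    (24)-(25); requires 0 < eps <= beta < 1/2.
    """
    if b <= 0:
        return 2 * eps + 8 * b - 2           # weak-signal phase, all regimes
    if eps < beta / 2:                       # sparse regime
        if b <= eps / 2:
            return 2 * eps + 4 * b - 2
        return eps + 6 * b - 2
    if eps <= 3 * beta / 4:                  # moderately dense regime
        if b <= (2 * eps - beta) / 4:
            return 2 * eps - 2
        if b <= (beta - eps) / 2:
            return beta + 4 * b - 2
        return eps + 6 * b - 2
    # strongly dense regime: 3*beta/4 < eps <= beta
    if b <= eps / 6:
        return 2 * eps - 2
    return eps + 6 * b - 2

# Continuity in b at every transition point of the moderately dense regime
# (beta = 0.45, eps = 0.3, as in Figure 1):
beta, eps = 0.45, 0.3
for knot in [0.0, (2 * eps - beta) / 4, (beta - eps) / 2]:
    left = rate_exponent(beta, eps, knot)
    right = rate_exponent(beta, eps, knot + 1e-12)
    assert abs(left - right) < 1e-9
```

The same check goes through for the sparse and strongly dense regimes, since each pair of adjacent branches agrees at its knot (e.g. 2ε + 4b − 2 equals ε + 6b − 2 at b = ε/2).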
Figure 1: Plot of the rate exponent r(β, ε, b) against the signal strength b. In the sparse regime, r(β, ε, b) changes in the order 2ε+8b−2, 2ε+4b−2, ε+6b−2. In the moderately dense regime, r(β, ε, b) changes in the order 2ε+8b−2, 2ε−2, β+4b−2, ε+6b−2. In the strongly dense regime, r(β, ε, b) changes in the order 2ε+8b−2, 2ε−2, ε+6b−2. Left panel: a continuum view of r(β, ε, b) as ε increases from 0 to β = 0.45 (color changes from red to blue). Right panel: a static view of each regime: sparse (ε = 0.12), moderately dense (ε = 0.28), and strongly dense (ε = 0.4). Transition points are indicated by the knots on the dashed lines.

Interestingly, in the two-sequence case, the regions {b : b ≤ 0} and {b : b > 0} appear to constitute the regions of weak signal and strong signal, respectively, regardless of the level of simultaneous sparsity. This is in contrast to the one-sequence case, where the dividing line is b = 0 when k_n ≪ √n, and b = (1−2β)/4 when k_n ≫ √n. We caution that this apparent "reconciliation" in the two-sequence case arises simply because the signal strengths are taken to be the same for both sequences µ and θ in the simplified results presented above.

Remark 7.
When the signal strengths r_n = n^a and s_n = n^b of µ and θ are allowed to differ, it turns out that {(a, b) : a ∧ b ≤ 0} characterizes the region of weak signal when q_n ≪ √k_n, while {(a, b) : a ∨ b ≤ 0} ∪ {(a, b) : a ∧ b ≤ (β−2ε)/4} comprises the region of weak signal when q_n ≫ √k_n. We refer the reader to Appendix A.1 for more details.

3 Detection

There are strong connections between the problem of estimation and that of testing for quadratic functionals in the single Gaussian sequence model setting. In this setting, the primary goal of the testing problem is to distinguish between θ = 0 and θ ≠ 0. Therefore, one can view the Gaussian sequence model as a signal-plus-noise model, where θ = 0 means that there is no signal, and testing whether θ is nonzero amounts to a signal detection problem. It has been shown that test procedures based on estimators of the quadratic functional Q(θ) can be effective in detecting signals under various specifications of the parameter space (see, for example, Cai and Low (2005) and the references therein). In this section, we explore the links between estimation and testing for quadratic functionals in the Gaussian two-sequence model. In contrast to the one-sequence case, the main interest of testing in the two-sequence case is to distinguish between µ ⋆ θ = 0 and µ ⋆ θ ≠ 0, where µ ⋆ θ = (µ_1θ_1, . . . , µ_nθ_n) is the coordinate-wise product of µ and θ. Note that this is in effect a simultaneous signal detection problem: we are only interested in the case where both sequences contain signal. As we shall see, the estimators Q̂_2 in (15) and Q̂_4 in (21) can be useful in the construction of test procedures.
We will consider the hypothesis testing problem under an asymptotic minimax framework, where the size n of the mean vector θ is the driving variable. We first introduce some notation, which is applicable to both the one-sequence and two-sequence testing problems; i.e., think of θ below as a generic parameter. Consider the testing problem H_0 : θ ∈ Θ_0(n), H_1 : θ ∈ Θ_1(n), where Θ_0(n) and Θ_1(n) are parameter spaces whose specification depends on n, and Θ_0(n) ∩ Θ_1(n) = ∅. A test ψ is a rule to accept or reject the null hypothesis based on the observed data. Therefore, it is a measurable function of the observed data with values in {0, 1}. The value ψ = 1 means that we reject H_0, and the value ψ = 0 means that we do not reject H_0. We measure the quality of a test ψ by the sum of its maximal type I error (over Θ_0(n)) and maximal type II error (over Θ_1(n)):
$$S_n(\Theta_0(n), \Theta_1(n))(\psi) = \sup_{\theta \in \Theta_0(n)} E_\theta \psi + \sup_{\theta \in \Theta_1(n)} E_\theta(1 - \psi).$$
We define the minimax total error probability for the hypothesis testing problem as the infimum of this total error probability over all tests:
$$S_n(\Theta_0(n), \Theta_1(n)) = \inf_\psi S_n(\Theta_0(n), \Theta_1(n))(\psi).$$
The goal is to establish the asymptotic detection boundary, i.e., the conditions on Θ_0(n) and Θ_1(n) which separate the undetectable region (where S_n(Θ_0(n), Θ_1(n)) → 1 as n → ∞) from the detectable region (where S_n(Θ_0(n), Θ_1(n)) → 0 as n → ∞). In the interior of the undetectable region, the signal is so weak that no test can successfully separate H_0 from H_1: the sum of maximal type I and maximal type II errors of any test tends to one as n tends to infinity.
On the contrary, in the interior of the detectable region, it is possible to find a test whose sum of maximal type I and maximal type II errors tends to zero as n diverges. Along with the establishment of the asymptotic detection boundary, we want to find a test ψ* which can perfectly distinguish between H_0 and H_1 when n is large, i.e., S_n(Θ_0(n), Θ_1(n))(ψ*) → 0 as n → ∞, in the detectable region. We motivate our formulation of the simultaneous signal detection problem by the corresponding framework that has been established for the one-sequence signal detection problem (Ingster, 1997; Hall and Jin, 2010):
$$H_0: \theta = 0, \qquad H_1: \theta \in \Theta_1(\beta, b), \tag{27}$$
where
$$\Theta_1(\beta, b) = \{\theta \in \mathbb{R}^n : \|\theta\|_0 = k_n,\ \theta \in \{0, \pm s_n\}^n\}, \tag{28}$$
with k_n = n^β for 0 < β < 1 and s_n = n^b for b ∈ ℝ. In this formulation, there is no signal under the null hypothesis, whereas the signal is constrained in terms of both sparsity and magnitude under the alternative hypothesis. Intuitively, for a fixed β, the signal detection problem becomes easier as b increases. Similarly, for a fixed b, an increase in β makes the detection problem easier. In fact, the detection boundary under this framework is a mathematical formula describing the precise relationship between sparsity and signal strength. It is a curve b = ρ*(β) that partitions {(β, b) : 0 < β < 1, b ∈ ℝ} into two regions: the detectable region, where b > ρ*(β), and the undetectable region, where b < ρ*(β). Similar to the problem of estimating the quadratic functional Q(θ) over the parameter space Θ(β, b) defined in (11), the detection problem (27) behaves differently over two regimes. In the dense regime, 1/2 ≤ β < 1, and the detection boundary is ρ*(β) = (1−2β)/4.
A simple test based on the quadratic functional Q̂_3 defined in (20) can be used here, by letting ψ* = 1(Q̂_3 ≥ λ_n), where λ_n → 0 satisfies lim sup_{n→∞} λ_n n^{1−β−2b} < 1. With such a choice of λ_n, E_0 ψ* + sup_{θ∈Θ_1(β,b)} E_θ(1 − ψ*) → 0 whenever b > ρ*(β). In the sparse regime, 0 < β < 1/2, the detection boundary is ρ*(β) = 0. Note that ρ*(β) is independent of β in this case. The explanation here is that we are not using our microscope at the right resolution. A more refined analysis reveals that if we parametrize s_n at the order s_n = σ√(d log n), d > 0, rather than at the algebraic order s_n = n^b, b ∈ ℝ, then the signal is detectable whenever d > ρ*(β), where
$$\rho^*(\beta) = \begin{cases} 2(1 - \sqrt{\beta})^2 & \text{if } 0 < \beta \le \frac{1}{4}, \\ 1 - 2\beta & \text{if } \frac{1}{4} < \beta < \frac{1}{2}. \end{cases} \tag{29}$$
The detection boundary (29) is the same as that given in Donoho and Jin (2004), where a mixture model is used. A more general result extending to heteroscedastic normal mixtures can be found in Cai, Jeng, and Jin (2011). The higher criticism test statistic is known to be optimally adaptive for the detection problem under the mixture model framework considered in Donoho and Jin (2004) and Cai et al. (2011). Its optimal adaptivity is established in a regression context in Arias-Castro, Candès, and Plan (2011), where the single Gaussian sequence signal detection problem (27) serves as a special case, whereas the max-type test statistic is only optimal over the region 0 < β ≤ 1/4 (see, e.g., Theorem 1 in Arias-Castro et al. (2011)). In the interest of connecting the testing problem with the estimation of quadratic functionals, one might wonder whether some form of sum-of-squares type test statistic, such as that used in the dense regime, can be effective in the sparse regime.
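The one-sequence detection boundary (29) is continuous at β = 1/4, which is easy to check directly (a minimal sketch; the function name is ours):

```python
import math

def rho_star(beta):
    """One-sequence detection boundary (29) on the sigma*sqrt(d log n) scale,
    valid for the sparse regime 0 < beta < 1/2."""
    if beta <= 0.25:
        return 2 * (1 - math.sqrt(beta)) ** 2
    return 1 - 2 * beta

# The two branches agree at beta = 1/4: 2*(1 - 1/2)**2 = 1/2 = 1 - 2*(1/4).
assert abs(rho_star(0.25) - 0.5) < 1e-12
assert abs(rho_star(0.25 + 1e-12) - 0.5) < 1e-9
```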
A natural candidate to consider is the estimator Q̂_1 of Q(θ) defined in (14). Indeed, the test ψ* = 1(Q̂_1 ≥ λ_n), where λ_n = (log n)/n, can asymptotically distinguish between H_0 and H_1 if b > 0, or if s_n = σ√(d log n) with d sufficiently large. A rough analysis shows that the test procedure works for d > 4. Since we only have an upper bound for the mean and variance of Q̂_1 (and not the exact values), it will be challenging, if not impossible, to derive the smallest possible value of d for which ψ* can be effective. To generalize the detection problem of the form (27), (28) to the two-sequence case, consider the following parameter spaces:
$$\begin{aligned} \Omega_0(\beta, a, b) &= \{(\mu, \theta) : \|\mu\|_0 \le k_n, \|\theta\|_0 \le k_n, \mu \in \{0, \pm r_n\}^n, \theta \in \{0, \pm s_n\}^n, \|\mu \star \theta\|_0 = 0\}, \\ \Omega_1(\beta, \epsilon, a, b) &= \{(\mu, \theta) : \|\mu\|_0 \le k_n, \|\theta\|_0 \le k_n, \mu \in \{0, \pm r_n\}^n, \theta \in \{0, \pm s_n\}^n, \|\mu \star \theta\|_0 = q_n\}, \end{aligned} \tag{30}$$
where k_n = n^β and q_n = n^ε with 0 < ε ≤ β < 1/2, and r_n = n^a, s_n = n^b with a, b ∈ ℝ. Suppose that we want to test
$$H_0: (\mu, \theta) \in \Omega_0(\beta, a, b), \qquad H_1: (\mu, \theta) \in \Omega_1(\beta, \epsilon, a, b). \tag{31}$$
Essentially, we are testing whether q_n = 0, on the condition that (µ, θ) satisfies the required sparsity and signal strength constraints. It is, perhaps, unsurprising that the testing problem is characterized by two regimes: the sparse regime, where 0 < ε < β/2, and the dense regime, where β/2 ≤ ε ≤ β. As the testing problem now involves four parameters, a, b, β, and ε, it is easier to describe the detectable versus undetectable regions instead of the detection boundary.

Theorem 5.
We state the detectable and undetectable regions in two cases:

(a) In the sparse regime, 0 < ε < β/2. The undetectable region is {(a, b) : a ∧ b ≤ 0}, and the detectable region is {(a, b) : a ∧ b > 0}. The test ψ* = 1(Q̂_2 ≥ λ_n), where Q̂_2 is as defined in (15) with τ_n = log n and λ_n = (1/2) n^{ε+2a+2b−1}, asymptotically separates H_0 from H_1 over the detectable region.

(b) In the dense regime, β/2 ≤ ε ≤ β. The undetectable region is {(a, b) : a ∧ b < (β−2ε)/4 or a ∨ b ≤ 0}, and the detectable region is {(a, b) : a ∧ b > (β−2ε)/4 and a ∨ b > 0}. The test ψ* = 1(Q̂_4 ≥ λ_n), where Q̂_4 is as defined in (21) with τ_n = 4 log n and λ_n = (1/2) n^{ε+2a+2b−1}, asymptotically separates H_0 from H_1 over the detectable region.

In Figure 2, we plot the detectable and undetectable regions for both the sparse and dense regimes. The detectable region in the dense regime is larger than that in the sparse regime. Interestingly, in the dense regime, signal is detectable provided that the signal strength of at least one of the sequences is large enough (and the signal strength of the other sequence is not too weak). In contrast, in the sparse regime, signal is only detectable when both sequences admit sufficiently strong signals. Note that the undetectable region for the detection problem (31) is essentially the same as the region where Q̂_0 = 0 attains the optimal rate of estimation for Q(µ, θ) over the parameter space Ω(β, ε, a, b) defined in (36) of Appendix A.1. To see the connection, compare Figure 2 with Remark 7 given at the end of Section 2.3, and the results in Appendix A.1.
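The region classification of Theorem 5 can be encoded directly (a minimal sketch; the function name and return labels are ours, and points on the region boundary, where the theorem's strict inequalities fail in both directions, are reported separately):

```python
def detection_region(beta, eps, a, b):
    """Classify (a, b) per Theorem 5. Requires 0 < eps <= beta < 1/2."""
    lo, hi = min(a, b), max(a, b)            # a ∧ b and a ∨ b
    if eps < beta / 2:                       # sparse regime
        return "detectable" if lo > 0 else "undetectable"
    # dense regime: beta/2 <= eps <= beta
    if lo > (beta - 2 * eps) / 4 and hi > 0:
        return "detectable"
    if lo < (beta - 2 * eps) / 4 or hi <= 0:
        return "undetectable"
    return "boundary"

# Dense regime (beta = 0.4, eps = 0.3): one strong sequence can compensate
# for a mildly weak one, since (beta - 2*eps)/4 = -0.05 < 0.
assert detection_region(0.4, 0.3, -0.02, 0.2) == "detectable"
# Sparse regime (beta = 0.4, eps = 0.1): both sequences must carry signal.
assert detection_region(0.4, 0.1, -0.02, 0.2) == "undetectable"
```

This mirrors the qualitative contrast drawn from Figure 2: in the dense regime, detectability only requires a ∨ b > 0 together with a ∧ b not too negative, while the sparse regime requires both exponents to be positive.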
One may notice that the resolution of our microscope is not high enough in some sense: the condition a ∧ b = 0 when 0 < ε < β/2 and the condition a ∨ b = 0 when β/2 ≤ ε ≤ β do not involve ε and β. A more refined analysis may involve the parameterization r_n ∧ s_n = σ√(c log n) or r_n ∨ s_n = σ√(d log n) for some c, d > 0. One can show that in such cases, the test procedures based on Q̂_2 and Q̂_4 are still effective for c, d sufficiently large. A detailed analysis of the exact detection boundary at this order is beyond the scope of the paper.

Figure 2: Plot of detectable versus undetectable regions for the sparse regime (left) and the dense regime (right). The shaded area corresponds to the detectable region, while the unshaded area corresponds to the undetectable region. In the dense panel, the corner of the detectable region is marked at ((β−2ε)/4, 0) and (0, (β−2ε)/4).

4 Simulation

In this section, we perform simulation studies to compare the performance of the three estimators Q̂_0 = 0, Q̂_2 as in (15), and Q̂_4 as in (21) under different scenarios. We compute the mean squared error (MSE) of the three estimators and show that our simulation results are compatible with the theoretical results given in Section 2. So far, we have assumed that the noise level σ is known. In practice, σ is typically unknown and needs to be estimated. Under the sparse setting of the present paper, σ is easily estimable. Denote by M ∈ ℝ^{2n} the vector with M_{2i−1} = X_i and M_{2i} = Y_i for i = 1, ..., n. A simple robust estimator of the noise level σ is the following median absolute deviation (MAD) estimator:
$$\hat\sigma = \frac{\mathrm{median}_j\, |M_j - \mathrm{median}(M)|}{0.6745}.$$
We consider simulation studies over a range of sample sizes n, sparsity k_n = n^β, simultaneous sparsity q_n = n^ε, and signal strength s_n = n^b. More specifically, we take n ∈ {10^3, 10^4, . . .
, 10^7}, β = 0.45 for the individual sequences, b ∈ {−0.1, 0.15, 0.2}, and three values of simultaneous sparsity, one for each regime: ε = 0.02 (sparse regime), ε = 0.3 (moderately dense regime), and ε = 0.44 (strongly dense regime). Figure 3 plots the MSE (averaged over 500 replications) of the three estimators over different sample sizes (in the log-log scale), for each combination of simultaneous sparsity and signal strength. The theoretical results in Section 2 indicate that for Q̂ = Q̂_0, Q̂_2, or Q̂_4,
$$\sup_{(\mu,\theta) \in \Omega(\beta,\epsilon,b)} E\big(\hat Q - Q(\mu, \theta)\big)^2 \asymp n^{r(\beta,\epsilon,b)}$$
for some rate exponent r(β, ε, b) (modulo a logarithmic factor when applicable). Thus, it is not surprising that the results in Figure 3 (mostly) exhibit a linear pattern.

Figure 3: Plot of MSE for the estimators Q̂_0, Q̂_2, and Q̂_4 over different sample sizes n ∈ {10^3, . . . , 10^7}, in the log-log scale. The columns are ordered from left to right as ε = 0.02 (sparse regime), ε = 0.3 (moderately dense regime), and ε = 0.44 (strongly dense regime). The rows are ordered from top to bottom in increasing signal strength: b ∈ {−0.1, 0.15, 0.2}. The horizontal grey line corresponds to MSE = 1, and it serves to distinguish between MSE → 0 (negative slope) and MSE → ∞ (positive slope).

When the signal is weak, with b = −0.1 (see the first row of Figure 3), we see that Q̂_0 (solid red line) and Q̂_4 (dotted blue line) have the lowest mean squared error. Note that we expect Q̂_0 to be optimal when the signal is weak. We observe from Figure 3 that Q̂_4 is nearly as good as Q̂_0. This is because when the signal is weak, the thresholding step 1(X_i^2 ∨ Y_i^2 ≥ σ^2 τ_n) thresholds both noise and weak signals, and the de-biasing term η is extremely small when n is moderately large, resulting in Q̂_4 ≈ Q̂_0 = 0. As the signal becomes sufficiently strong (b ∈ {0.15, 0.2}), Q̂_2 starts to dominate in the sparse regime (ε = 0.02), while Q̂_4 dominates in the moderately dense and strongly dense regimes (ε ∈ {0.3, 0.44}). When the signal is sufficiently large (b ∈ {0.15, 0.2}), Q̂_0 is clearly suboptimal. In particular, in the case where the signal is both dense and strong (b = 0.2, ε ∈ {0.3, 0.44}), the MSE of Q̂_0 diverges to infinity, as indicated by the positive slope of the solid red line. Note also that the MSE increases as either ε or b increases, as can be seen from the flattening or reversing of slopes towards the right end or bottom of the plot panel. This is compatible with the fact that r(β, ε, b) increases with respect to both ε and b.

5 Discussion

In this paper, we discuss the estimation of the quadratic functional Q(µ, θ) = (1/n) Σ µ_i^2 θ_i^2 over a family of parameter spaces where the magnitude and sparsity of both µ and θ are constrained. We show that the minimax rates of convergence display interesting phase transitions over three distinct regimes: the sparse regime, the moderately dense regime, and the strongly dense regime.
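As a side note on the simulation setup, the MAD noise-level estimator σ̂ described in Section 4 can be sketched as follows (a minimal sketch on synthetic data; the variable names and test values are ours, and the sparse signal placement is purely illustrative):

```python
import random
import statistics

def mad_sigma(m):
    """MAD estimate of the noise level: median |M_j - median(M)| / 0.6745."""
    med = statistics.median(m)
    return statistics.median(abs(x - med) for x in m) / 0.6745

random.seed(0)
sigma, n = 2.0, 2000
# Sparse means: a handful of strong signals barely distort the MAD estimate,
# which is the point of using a robust scale estimator here.
x = [random.gauss(5.0 if i < 20 else 0.0, sigma) for i in range(n)]
y = [random.gauss(5.0 if i < 20 else 0.0, sigma) for i in range(n)]
m = [v for pair in zip(x, y) for v in pair]   # M_{2i-1} = X_i, M_{2i} = Y_i
sigma_hat = mad_sigma(m)
assert abs(sigma_hat - sigma) < 0.25
```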
We also demonstrate an application of the estimators of the quadratic functional to a closely related simultaneous signal detection problem, and show that the resulting test procedures are effective in detecting simultaneous signals over the detectable region. Throughout our analysis, we highlight distinctions and similarities between the one-sequence and two-sequence problems, for both estimation and testing. It would be interesting to generalize our study of the two-sequence estimation problem in numerous directions. In Appendix A, we show that the optimal rates of estimation for Q(µ, θ) continue to subsume the aforementioned three regimes when µ and θ are allowed unequal signal strengths. Nonetheless, the distinction between the sparse and dense regimes is more apparent in this setting. In the sparse regime, estimation is only desirable when the signal strengths of both sequences are sufficiently strong. In contrast, in the dense regime, estimation is desirable whenever at least one sequence admits a sufficiently strong signal (and the signal strength of the other sequence is not too weak). We also examine in Appendix A the behavior of the estimation problem when signal strength is incorporated through the ℓ_2-norm rather than the ℓ_∞-norm. Unlike in the one-sequence estimation problem, in the two-sequence case the minimax rates of convergence are to some extent degenerate under the ℓ_2-norm constraint. Thus, it is reasonable to suspect that the one-sequence and two-sequence estimation problems are not so similar after all. A more refined analysis of the characteristics of both one-sequence and two-sequence problems demands an examination of their respective behaviors under the ℓ_p-norm constraint on the signal strength, for p ∈ (0, ∞], and is beyond the scope of the paper.
6 Proofs of Theorem 1 and Theorem 2 In this section, we present the proofs of Theorem 1 and Theorem 2, which concern estimation results of Q(\u00b5, \u03b8) in the sparse regime. For reason of space, we relegate the proofs of other main results in the paper to Appendix B. Henceforth, we omit the subscripts n in kn, qn, sn and \u03c4n that signi\ufb01es their dependence on the sample size. We denote by \u03c8\u00b5 the density of a Gaussian distribution with mean \u00b5 and variance \u03c32, and we denote by \u2113(n, k) the class of all subsets of {1, . . . , n} of k distinct elements. Finally, c and C denote constants that may vary for each occurrence. 6.1 Proof of Theorem 1 The proof of Theorem 1 involves a careful analysis of the bias and variance of the estimator b Q2 (de\ufb01ned in (15)). We need the following lemma from Cai and Low (2005) (Lemma 1, page 2939) for proving Theorem 1. 16 \fLemma 1. Let Y \u223cN(\u03b8, \u03c32) and let \u03b80 = E0(Y 2\u2212\u03c32\u03c4)+, where the expectation is taken under \u03b8 = 0. Then for \u03c4 \u22651 and b \u03b82 = (Y 2 \u2212\u03c32\u03c4)+ \u2212\u03b80, |\u03b80| \u2264 4\u03c32 \u221a 2\u03c0\u03c4 1/2e\u03c4/2 , |E( b \u03b82) \u2212\u03b82| \u2264min{2\u03c32\u03c4, \u03b82}, Var( b \u03b82) \u22646\u03c32\u03b82 + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 . Lemma 2 is an immediate consequence of Lemma 1. Lemma 2. Let Y \u223cN(\u03b8, \u03c32) and let \u03b80 = E0(Y 2\u2212\u03c32\u03c4)+, where the expectation is taken under \u03b8 = 0. Then for \u03c4 \u22651, (E(Y 2 \u2212\u03c32\u03c4)+ \u2212\u03b80)2 \u2264max \u001a 6\u03c32\u03b82 + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 , 10\u03b84 \u001b . To streamline the presentation of proof of Theorem 1, we defer the proof of Lemma 2 to the end of Appendix B.2. Proof of Theorem 1. We \ufb01rst bound the bias of the estimator b Q2 de\ufb01ned in (15). 
Using the equality AB \u2212ab = (A \u2212a)(B \u2212b) + a(B \u2212b) + b(A \u2212a), the independence of Xi and Yi, and the triangle inequality, we get \f \f \fE(\u00b5i,\u03b8i){[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50][(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80]} \u2212\u00b52 i \u03b82 i \f \f \f \u2264 \f \f \fE\u00b5i[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50] \u2212\u00b52 i \f \f \f \u00b7 \f \f \fE\u03b8i[(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80] \u2212\u03b82 i \f \f \f + \u00b52 i \f \f \fE\u03b8i[(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80] \u2212\u03b82 i \f \f \f + \u03b82 i \f \f \fE\u00b5i[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50] \u2212\u00b52 i \f \f \f \u2264min{2\u03c32\u03c4, \u00b52 i } min{2\u03c32\u03c4, \u03b82 i } + \u00b52 i min{2\u03c32\u03c4, \u03b82 i } + \u03b82 i min{2\u03c32\u03c4, \u00b52 i } \u22642\u00b52 i min{2\u03c32\u03c4, \u03b82 i } + 2\u03b82 i min{2\u03c32\u03c4, \u00b52 i }, the second inequality follows from Lemma 1. It follows that for (\u00b5, \u03b8) \u2208\u2126(\u03b2, \u03f5, b) and \u03c4 \u22651, |E(\u00b5,\u03b8)( b Q2) \u2212Q(\u00b5, \u03b8)| = \f \f \f \f 1 n n X i=1 E(\u00b5i,\u03b8i){[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50][(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80]} \u22121 n n X i=1 \u00b52 i \u03b82 i \f \f \f \f \u22642 n n X i=1 \u0002 \u00b52 i min{2\u03c32\u03c4, \u03b82 i } + \u03b82 i min{2\u03c32\u03c4, \u00b52 i } \u0003 \u22644 n min{2\u03c32qs2\u03c4, qs4}, the second inequality follows from the fact that for (\u00b5, \u03b8) \u2208\u2126(\u03b2, \u03f5, b), there are at most q entries that are simultaneously nonzero for \u00b5 and \u03b8. We now proceed to bounding the variance of b Q2. 
Applying the equality Var(AB) = Var(A)Var(B) + [E(A)]2Var(B) + [E(B)]2Var(A), 17 \ffor \u03c4 \u22651, we have Var(\u00b5i,\u03b8i){[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50][(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80]} = Var\u00b5i[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50]Var\u03b8i[(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80] + [E\u00b5i(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50]2Var\u03b8i[(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80] + [E\u03b8i(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80]2Var\u00b5i[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50] \u22643 \u0014 6\u03c32\u00b52 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015\u0014 6\u03c32\u03b82 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 + 10\u00b54 i \u0014 6\u03c32\u03b82 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 + 10\u03b84 i \u0014 6\u03c32\u00b52 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 , the inequality follows from Lemma 1 and Lemma 2. Thus, for (\u00b5, \u03b8) \u2208\u2126(\u03b2, \u03f5, b) and \u03c4 \u22651, Var\u00b5,\u03b8( b Q2) = 1 n2 n X i=1 Var(\u00b5i,\u03b8i){[(X2 i \u2212\u03c32\u03c4)+ \u2212\u00b50][(Y 2 i \u2212\u03c32\u03c4)+ \u2212\u03b80]} \u22643 n2 n X i=1 \u0014 6\u03c32\u00b52 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015\u0014 6\u03c32\u03b82 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 + 10 n2 n X i=1 \u00b54 i \u0014 6\u03c32\u03b82 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 + 10 n2 n X i=1 \u03b84 i \u0014 6\u03c32\u00b52 i + \u03c34 4\u03c4 1/2 + 18 e\u03c4/2 \u0015 \u22643 n2 \u0014 36\u03c34qs4 + 12\u03c36ks2 \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013 + n\u03c38 \u00124\u03c4 1/2 + 18 e\u03c4/2 \u00132\u0015 + 20 n2 \u0014 6\u03c32qs6 + \u03c34ks4 \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013\u0015 . 
Combining the bias and variance term, we get, for \u03c4 \u22651, sup (\u00b5,\u03b8)\u2208\u2126(\u03b2,\u03f5,b) E(\u00b5,\u03b8)( b Q2 \u2212Q(\u00b5, \u03b8))2 \u2264C n2 \u0014 min{q2s4\u03c4 2, q2s8} + max \u001a qs4, qs6, ks2 \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013 , ks4 \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013 , n \u00124\u03c4 1/2 + 18 e\u03c4/2 \u00132\u001b\u0015 = C n2 \u0014 min{n2\u03f5+4b\u03c4 2, n2\u03f5+8b} + max \u001a n\u03f5+4b, n\u03f5+6b, n\u03b2+2b \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013 , n\u03b2+4b \u00124\u03c4 1/2 + 18 e\u03c4/2 \u0013 , n \u00124\u03c4 1/2 + 18 e\u03c4/2 \u00132\u001b\u0015 . Suppose that b > 0. Then letting \u03c4 = log n leads to sup (\u00b5,\u03b8)\u2208\u2126(\u03b2,\u03f5,b) E(\u00b5,\u03b8)( b Q2 \u2212Q(\u00b5, \u03b8))2 \u2264C \u0014 n2\u03f5+4b\u22122(log n)2 + n\u03f5+6b\u22122 \u0015 \u2264C\u03b3n(\u03b2, \u03f5, b), where \u03b3n(\u03b2, \u03f5, b) = \u001a n2\u03f5+4b\u22122(log n)2 if 0 < b \u2264\u03f5 2, n\u03f5+6b\u22122 if b > \u03f5 2. (32) 18 \fFrom the calculation above, one can also check that when 0 < \u03f5 \u2264\u03b2 and s = \u03c3\u221ad log n, sup (\u00b5,\u03b8):\u2225\u00b5\u22250\u2264kn,\u2225\u03b8\u22250\u2264kn, \u2225\u00b5\u2225\u221e\u2264\u03c3\u221ad log n,\u2225\u03b8\u2225\u221e\u2264\u03c3\u221ad log n E(\u00b5,\u03b8)( b Q2 \u2212Q(\u00b5, \u03b8))2 \u2264Cn2\u03f5\u22122(log n)4. 6.2 Proof of Theorem 2 To prove Theorem 2, it su\ufb03ces to show that for 0 < \u03b2 < 1 2, \u03b3n(\u03b2, \u03f5, b) \u2265 \uf8f1 \uf8f2 \uf8f3 n2\u03f5+4b\u22122(log n)2 if b > 0, for 0 < \u03f5 < \u03b2 2 , (Case 1) n2\u03f5+8b\u22122 if b \u22640, for 0 < \u03f5 \u2264\u03b2, (Case 2) n\u03f5+6b\u22122 if b > 0, for 0 < \u03f5 \u2264\u03b2. (Case 3) For individual regions in {(\u03b2, \u03f5, b) : 0 < \u03f5 < \u03b2 2 , 0 < \u03b2 < 1 2, b \u2208R}, the minimax rate of convergence is then given by the sharpest rate among all cases in which the region belongs to. 
For instance, the region $\{(\beta, \epsilon, b) : 0 < \epsilon < \frac{\beta}{2},\ 0 < \beta < \frac{1}{2},\ b > \frac{\epsilon}{2}\}$ is included in both Case 1 and Case 3, hence $\gamma_n(\beta, \epsilon, b) \geq \max\{n^{2\epsilon+4b-2}(\log n)^2,\ n^{\epsilon+6b-2}\} = n^{\epsilon+6b-2}$. To establish the desired lower bounds, for each case we construct two priors $f$ and $g$ that have small chi-square distance but a large difference in the expected values of the resulting quadratic functionals, and then apply the Constrained Risk Inequality (CRI) of Brown and Low (1996). The choice of the priors $f$ and $g$ is crucial for deriving sharp lower bounds for the estimation problem. In fact, the fundamental difference between the phases in the sparse regime for the estimation of $Q(\mu, \theta)$ can be seen from the choices of $f$ and $g$. For background on the lower bound technique, see Appendix B.1.1. Proof of Case 1. Our proof builds on arguments similar to those used in Cai and Low (2004) and Baraud (2002), who considered the one-sequence estimation problem. We first follow the lines of the proof of Theorem 7 in Cai and Low (2004), and then apply a result from Aldous (1985) as was done in Baraud (2002). Let $$f(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{k} \psi_s(x_i) \prod_{i=k+1}^{n} \psi_0(x_i) \prod_{i=1}^{n} \psi_0(y_i).$$ For $I \in \ell(k,q)$, let $$g_I(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{k} \psi_s(x_i) \prod_{i=k+1}^{n} \psi_0(x_i) \prod_{i=1}^{k} \psi_{\theta_i}(y_i) \prod_{i=k+1}^{n} \psi_0(y_i),$$ where $\theta_i = \rho \mathbf{1}(i \in I)$ with $\rho > 0$, and let $$g = \binom{k}{q}^{-1} \sum_{I \in \ell(k,q)} g_I.$$ In both $f$ and $g$, the sequence $\mu = (s, \dots, s, 0, \dots, 0)$ is taken to be the same. However, $\theta$ is taken to be all zeros in $f$ but is taken as a mixture in $g$: the nonzero coordinates of $\theta$ are mixed uniformly over the support of $\mu$ at a common magnitude $\rho$, whose value is yet to be determined. Our choice of $f$ and $g$ essentially reduces the two-sequence problem to the case of a single Gaussian mean sequence of length $k$ with $q$ nonzero coordinates, which explains the correspondence between the sparse regime in the two-sequence case ($q < \sqrt{k}$) and the sparse regime in the one-sequence case ($k < \sqrt{n}$). We now compute the chi-square affinity between $f$ and $g$, which bears the expression $$\int \frac{g^2}{f} = \binom{k}{q}^{-2} \sum_{I \in \ell(k,q)} \sum_{J \in \ell(k,q)} \int \frac{g_I g_J}{f}. \quad (33)$$ For $I, J \in \ell(k,q)$, let $m = \mathrm{Card}(I \cap J)$. Then $$\int \frac{g_I g_J}{f} = \prod_{i=1}^{k} \int \frac{\psi_{\rho \mathbf{1}(i \in I)}(y_i) \cdot \psi_{\rho \mathbf{1}(i \in J)}(y_i)}{\psi_0(y_i)}\, dy_i = \left[\int \psi_0(y)\, dy\right]^{k-2q+m} \left[\int \psi_\rho(y)\, dy\right]^{2q-2m} \left[\int \frac{\psi_\rho^2(y)}{\psi_0(y)}\, dy\right]^{m} = \exp\left(\frac{m\rho^2}{\sigma^2}\right).$$ It follows that $$\int \frac{g^2}{f} = \mathbb{E}\left[\exp\left(\frac{M\rho^2}{\sigma^2}\right)\right],$$ where $M$ has the hypergeometric distribution $$\mathbb{P}(M = m) = \frac{\binom{q}{m}\binom{k-q}{q-m}}{\binom{k}{q}}. \quad (34)$$ As shown in Aldous (1985), $M$ has the same distribution as the conditional expectation $\mathbb{E}(\tilde{M} \mid \mathcal{B})$, where $\tilde{M}$ is a $\mathrm{Binomial}(q, \frac{q}{k})$ random variable and $\mathcal{B}$ is a suitable $\sigma$-algebra. Coupled with Jensen's inequality, this implies that $$\int \frac{g^2}{f} \leq \mathbb{E}\left[\exp\left(\frac{\tilde{M}\rho^2}{\sigma^2}\right)\right] = \left(1 - \frac{q}{k} + \frac{q}{k} e^{\rho^2/\sigma^2}\right)^q.$$ Taking $\rho = \sigma\sqrt{(\beta - 2\epsilon)\log n}$ gives $e^{\rho^2/\sigma^2} = n^{\beta - 2\epsilon} = \frac{k}{q^2}$, hence $$\int \frac{g^2}{f} \leq \left(1 + \frac{1}{q}\right)^q \leq e.$$ Since $Q(\mu, \theta) = 0$ under $f$ and $Q(\mu, \theta) = \frac{1}{n} q s^2 \rho^2$ under $g$, it follows from the CRI that $$R^*(n, \Omega(\beta, \epsilon, b)) \geq c \left(\frac{1}{n} q s^2 \rho^2\right)^2 = c\, n^{2\epsilon+4b-2}(\log n)^2.$$ Proof of Case 2. Let $$f(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{n} \psi_0(x_i) \prod_{i=1}^{n} \psi_0(y_i).$$ For $I \in \ell(n,q)$, let $$g_I(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{n} \psi_{\mu_i}(x_i) \prod_{i=1}^{n} \psi_{\theta_i}(y_i),$$ where $\mu_i = \theta_i = \rho \mathbf{1}(i \in I)$ with $\rho > 0$, and let $$g = \binom{n}{q}^{-1} \sum_{I \in \ell(n,q)} g_I.$$ Contrast the choice of $f$ and $g$ here with that used in the proof of Case 1. Rather than fixing $\mu$ and mixing the nonzero coordinates of $\theta$ over the support of $\mu$, in this case the mixing is done over all $n$ positions, using nonzero coordinates of $\mu$ and $\theta$ simultaneously. A calculation similar to that used in the proof of Case 1 yields $$\int \frac{g^2}{f} \leq \left(1 - \frac{q}{n} + \frac{q}{n} e^{2\rho^2/\sigma^2}\right)^q. \quad (35)$$ Now take $\rho = s = n^b$. Since $b < 0$, it follows that when $n$ is sufficiently large, $e^{2\rho^2/\sigma^2} \leq n^{1-2\epsilon} = \frac{n}{q^2}$, hence $$\int \frac{g^2}{f} \leq \left(1 + \frac{1}{q}\right)^q \leq e.$$ Since $Q(\mu, \theta) = 0$ under $f$ and $Q(\mu, \theta) = \frac{1}{n} q \rho^4$ under $g$, it follows from the CRI that $$R^*(n, \Omega(\beta, \epsilon, b)) \geq c \left(\frac{1}{n} q \rho^4\right)^2 = c\, n^{2\epsilon+8b-2}.$$ In fact, when $0 < \epsilon \leq \beta < \frac{1}{2}$ and $s = \sigma\sqrt{d \log n}$ for some $d > 0$, we also have $$\inf_{\hat{Q}} \sup_{\substack{(\mu,\theta):\, \|\mu\|_0 \leq k_n,\, \|\theta\|_0 \leq k_n,\\ \|\mu\|_\infty \leq \sigma\sqrt{d\log n},\, \|\theta\|_\infty \leq \sigma\sqrt{d\log n}}} \mathbb{E}_{(\mu,\theta)}(\hat{Q} - Q(\mu, \theta))^2 \geq c\, n^{2\epsilon-2}(\log n)^4.$$ This can be shown by letting $\rho = \sigma\sqrt{\tfrac{1}{2}\min\{d,\, 1-2\epsilon\}\log n}$ in (35). Proof of Case 3. The priors used in this case are very different from those considered in the proofs of Case 1 and Case 2. Let $$f(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{q} \psi_s(x_i) \prod_{i=q+1}^{n} \psi_0(x_i) \prod_{i=1}^{q} \psi_s(y_i) \prod_{i=q+1}^{n} \psi_0(y_i),$$ and $$g(x_1, \dots, x_n, y_1, \dots, y_n) = \prod_{i=1}^{q} \psi_s(x_i) \prod_{i=q+1}^{n} \psi_0(x_i) \prod_{i=1}^{q} \psi_{s-\delta}(y_i) \prod_{i=q+1}^{n} \psi_0(y_i),$$ where $0 < \delta < s$. Note that no mixing is performed in this case. Instead, we fix the sequence $\mu = (s, \dots, s, 0, \dots, 0)$ in both $f$ and $g$, and perturb the nonzero entries of $\theta$ by a small amount $\delta$ in $g$. This set of priors provides the sharpest rate when the signal is strong, i.e., when $s = n^b$ is large. The intuition is that when $s$ is large, estimation of $Q(\mu, \theta)$ is most difficult due to the indistinguishability between $\theta_i = s$ and $\theta_i = s - \delta$ with $\delta \approx 0$. The chi-square affinity between $f$ and $g$ is given by $$\int \frac{g^2}{f} = e^{q\delta^2/\sigma^2}.$$ Let $\delta = \sigma/\sqrt{q} = \sigma n^{-\epsilon/2}$. Then we have $$\int \frac{g^2}{f} = e < \infty.$$ Since $Q(\mu, \theta) = \frac{1}{n} q s^4$ under $f$ and $Q(\mu, \theta) = \frac{1}{n} q s^2 (s-\delta)^2$ under $g$, it follows from the CRI that $$R^*(n, \Omega(\beta, \epsilon, b)) \geq c \left(\frac{1}{n} q s^2 \left(s^2 - (s-\delta)^2\right)\right)^2 = c \left(\frac{1}{n} \sqrt{q}\, s^3\right)^2 (1 + o(1)) = c\, n^{\epsilon+6b-2}(1 + o(1)).$$"
 + },
 + {
 + "url": "http://arxiv.org/abs/1502.01988v2",
 + "title": "Computational and Statistical Boundaries for Submatrix Localization in a Large Noisy Matrix",
 + "abstract": "The interplay between computational efficiency and statistical accuracy in\nhigh-dimensional inference has drawn increasing attention in the literature. In\nthis paper, we study computational and statistical boundaries for submatrix\nlocalization. Given one observation of (one or multiple non-overlapping) signal\nsubmatrix (of magnitude $\\lambda$ and size $k_m \\times k_n$) contaminated with\na noise matrix (of size $m \\times n$), we establish two transition thresholds\nfor the signal to noise $\\lambda/\\sigma$ ratio in terms of $m$, $n$, $k_m$, and\n$k_n$. The first threshold, $\\sf SNR_c$, corresponds to the computational\nboundary. Below this threshold, it is shown that no polynomial time algorithm\ncan succeed in identifying the submatrix, under the \\textit{hidden clique\nhypothesis}. 
We introduce adaptive linear time spectral algorithms that\nidentify the submatrix with high probability when the signal strength is above\nthe threshold $\\sf SNR_c$. The second threshold, $\\sf SNR_s$, captures the\nstatistical boundary, below which no method can succeed with probability going\nto one in the minimax sense. The exhaustive search method successfully finds\nthe submatrix above this threshold. The results show an interesting phenomenon\nthat $\\sf SNR_c$ is always significantly larger than $\\sf SNR_s$, which implies\nan essential gap between statistical optimality and computational efficiency\nfor submatrix localization.",
 + "authors": [
 + "T. Tony Cai",
 + "Tengyuan Liang",
 + "Alexander Rakhlin"
 + ],
 + "published": "2015-02-06",
 + "updated": "2015-10-21",
 + "primary_cat": "math.ST",
 + "cats": [
 + "math.ST",
 + "stat.ML",
 + "stat.TH"
 + ],
 + "main_content": "1 Introduction The \u201csignal + noise\u201d model $$X = M + Z, \quad (1)$$ where $M$ is the signal of interest and $Z$ is noise, is ubiquitous in statistics and is used in a wide range of applications. When $M$ and $Z$ are matrices, many interesting problems arise under a variety of structural assumptions on $M$ and the distribution of $Z$. Examples include sparse principal component analysis (PCA) (Vu and Lei, 2012; Berthet and Rigollet, 2013b; Birnbaum et al., 2013; Cai et al., 2013, 2015), non-negative matrix factorization (Lee and Seung, 2001), and non-negative PCA (Zass and Shashua, 2006; Montanari and Richard, 2014). Under the conventional statistical framework, one is looking for optimal statistical procedures for recovering the signal or detecting its presence. [Footnotes: The research of Tony Cai was supported in part by NSF Grants DMS-1208982 and DMS-1403708, and NIH Grant R01 CA127334. Tengyuan Liang acknowledges the support of a Winkelman Fellowship. Alexander Rakhlin gratefully acknowledges the support of NSF under grant CAREER DMS-0954737.]
As the dimensionality of the data becomes large, the computational concerns associated with statistical procedures come to the forefront. In particular, problems with a combinatorial structure or non-convex constraints pose a significant computational challenge because naive methods based on exhaustive search are typically not computationally efficient. The trade-off between computational efficiency and statistical accuracy in high-dimensional inference has drawn increasing attention in the literature. In particular, Chandrasekaran et al. (2012) and Wainwright (2014) considered a general class of linear inverse problems, with different emphases on convex geometry and on the decomposition of statistical and computational errors. Chandrasekaran and Jordan (2013) studied an approach for trading off computational demands with statistical accuracy via relaxation hierarchies. Berthet and Rigollet (2013a); Ma and Wu (2013); Zhang et al. (2014) focused on computational requirements for various statistical problems, such as detection and regression. In the present paper, we study the interplay between computational efficiency and statistical accuracy in submatrix localization based on a noisy observation of a large matrix. The problem considered in this paper is formalized as follows. 1.1 Problem Formulation Consider the matrix $X$ of the form $X = M + Z$, where $$M = \lambda \cdot \mathbf{1}_{R_m}\mathbf{1}_{C_n}^T \quad (2)$$ and $\mathbf{1}_{R_m} \in \mathbb{R}^m$ is the indicator vector with $1$ on the index set $R_m$ and zero otherwise (with $|R_m| = k_m$ and $|C_n| = k_n$). Here, the entries $Z_{ij}$ of the noise matrix are i.i.d. zero-mean sub-Gaussian random variables with parameter $\sigma$ (defined formally in Equation (4)). Given the parameters $m, n, k_m, k_n, \lambda/\sigma$, the set of all distributions described above, over all possible choices of $R_m$ and $C_n$, forms the submatrix model $\mathcal{M}(m, n, k_m, k_n, \lambda/\sigma)$.
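As a concrete illustration of model (2), one observation from the submatrix model can be simulated in a few lines. This is a minimal sketch, not code from the paper: the function name is ours, and Gaussian noise is used as one convenient sub-Gaussian choice.

```python
import random

def sample_submatrix_model(m, n, k_m, k_n, lam, sigma=1.0, seed=0):
    """Draw one instance of model (2): X = lam * 1_R 1_C^T + Z.

    The row set R and column set C of the planted submatrix are chosen
    uniformly at random; Z has i.i.d. N(0, sigma^2) entries.
    Returns (X, R, C) with X as a list of lists.
    """
    rng = random.Random(seed)
    rows = set(rng.sample(range(m), k_m))
    cols = set(rng.sample(range(n), k_n))
    X = [[(lam if (i in rows and j in cols) else 0.0) + rng.gauss(0.0, sigma)
          for j in range(n)] for i in range(m)]
    return X, rows, cols
```

With a strong signal ($\lambda/\sigma$ large), the planted block is visible to the eye; the interesting regimes of the paper are exactly those where it is not.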
This model can be further extended to the case of multiple non-overlapping submatrices as $$M = \sum_{s=1}^{r} \lambda_s \cdot \mathbf{1}_{R_s}\mathbf{1}_{C_s}^T, \quad (3)$$ where $|R_s| = k_s^{(m)}$ and $|C_s| = k_s^{(n)}$ denote the sizes of the support sets of the $s$-th submatrix. For simplicity, we first focus on a single submatrix and then extend the analysis to the model (3) in Section 2.5. There are two fundamental questions associated with the submatrix model (2). One is the detection problem: given one observation of the matrix $X$, decide whether it is generated from a distribution in the submatrix model or from the pure noise model. Precisely, the detection problem considers testing the hypotheses $$H_0: M = 0 \quad \text{v.s.} \quad H_\alpha: M \in \mathcal{M}(m, n, k_m, k_n, \lambda/\sigma).$$ The other is the localization problem, where the goal is to exactly recover the signal index sets $R_m$ and $C_n$ (the support of the mean matrix $M$). It is clear that the localization problem is at least as hard, both computationally and statistically, as the detection problem. As we show in this paper, the localization problem requires a larger signal to noise ratio $\lambda/\sigma$, as well as a more detailed exploitation of the submatrix structure. If the signal to noise ratio is sufficiently large, it is computationally easy to localize the submatrix. On the other hand, if this ratio is small, the localization problem is statistically impossible. To quantify this phenomenon, we identify two distinct thresholds ($\sf SNR_s$ and $\sf SNR_c$) for $\lambda/\sigma$ in terms of the parameters $m, n, k_m, k_n$. The first threshold, $\sf SNR_s$, captures the statistical boundary, below which no method (possibly of exponential time) can succeed with probability going to one in the minimax sense. The exhaustive search method successfully finds the submatrix above this threshold. The second threshold, $\sf SNR_c$, corresponds to the computational boundary, above which an adaptive (with respect to the parameters) linear time spectral algorithm finds the signal.
Below this threshold, no polynomial time algorithm can succeed, under the hidden clique hypothesis described later. 1.2 Prior Work There is a growing body of work in the statistical literature on submatrix problems. Shabalin et al. (2009) provided a fast iterative maximization algorithm to solve the submatrix localization problem; however, as with many EM-type algorithms, the theoretical result is very sensitive to initialization. Arias-Castro et al. (2011) studied the detection problem for a cluster inside a large matrix. Butucea and Ingster (2013); Butucea et al. (2013) formulated the submatrix detection and localization problems under Gaussian noise and determined sharp statistical transition boundaries. For the detection problem, Ma and Wu (2013) provided a computational lower bound result under the assumption that hidden clique detection is computationally difficult. Balakrishnan et al. (2011); Kolar et al. (2011) focused on statistical and computational trade-offs for the submatrix localization problem. They provided a computationally feasible entry-wise thresholding algorithm, a row/column averaging algorithm, and a convex relaxation for sparse SVD to investigate the minimum signal to noise ratio required to localize the submatrix. In the sparse regime $k_m \precsim m^{1/2}$ and $k_n \precsim n^{1/2}$, entry-wise thresholding turns out to be the \u201cnear optimal\u201d polynomial-time algorithm (though we will show in Section 2.4 that a de-noised spectral algorithm performs slightly better). However, in the dense regime $k_m \succsim m^{1/2}$ and $k_n \succsim n^{1/2}$, the algorithms provided in Kolar et al. (2011) are not optimal, in the sense that other polynomial-time algorithms can succeed in finding the submatrix with smaller SNR. Concurrently with our work, Chen and Xu (2014) provided a convex relaxation algorithm that improves the SNR boundary of Kolar et al. (2011) in the dense regime.
On the downside, the implementation of the method requires a full SVD at each iteration, and therefore does not scale well with the dimensionality of the problem. Furthermore, there is no computational lower bound in the literature to guarantee the optimality of the SNR boundary achieved in Chen and Xu (2014). A problem similar to submatrix localization is that of clique finding. Deshpande and Montanari (2013) presented an iterative approximate message passing algorithm to solve the latter problem with sharp boundaries on the SNR. However, in contrast to submatrix localization, where the signal submatrix can be located anywhere within the matrix, the clique finding problem requires the signal to be centered on the diagonal. We would like to emphasize the difference between the detection and localization problems. When $M$ is a vector, Donoho and Jin (2004) proposed the \u201chigher criticism\u201d approach to solve the detection problem under the Gaussian sequence model. Combining the results in (Donoho and Jin, 2004; Ma and Wu, 2013), in the computationally efficient region there is no loss in treating $M$ in model (2) as a vector and applying the higher criticism method to the vectorized matrix for the problem of submatrix detection. In fact, this procedure achieves sharper constants in the Gaussian setting. However, in contrast to the detection problem, we will show that for localization it is crucial to utilize the matrix structure, even in the computationally efficient region. 1.3 Notation Let $[m]$ denote the index set $\{1, 2, \dots, m\}$. For a matrix $X \in \mathbb{R}^{m \times n}$, $X_{i\cdot} \in \mathbb{R}^n$ denotes its $i$-th row and $X_{\cdot j} \in \mathbb{R}^m$ denotes its $j$-th column. For any $I \subseteq [m]$, $J \subseteq [n]$, $X_{IJ}$ denotes the submatrix corresponding to the index set $I \times J$. For a vector $v \in \mathbb{R}^n$, $\|v\|_{\ell_p} = (\sum_{i \in [n]} |v_i|^p)^{1/p}$, and for a matrix $M \in \mathbb{R}^{m \times n}$, $\|M\|_{\ell_p} = \sup_{v \neq 0} \|Mv\|_{\ell_p} / \|v\|_{\ell_p}$.
When $p = 2$, the latter is the usual spectral norm, abbreviated as $\|M\|_2$. The nuclear norm of a matrix $M$, denoted $\|M\|_*$, is the sum of its singular values and serves as a convex surrogate for the rank. The Frobenius norm of a matrix $M$ is defined as $\|M\|_F = \sqrt{\sum_{i,j} M_{ij}^2}$. The inner product associated with the Frobenius norm is $\langle A, B \rangle = \mathrm{tr}(A^T B)$. Denote the asymptotic notation $a(n) = \Theta(b(n))$ if there exist two universal constants $c_l, c_u$ such that $c_l \leq \liminf_{n \to \infty} a(n)/b(n) \leq \limsup_{n \to \infty} a(n)/b(n) \leq c_u$. $\Theta^*$ is asymptotic equivalence hiding logarithmic factors, in the following sense: $a(n) = \Theta^*(b(n))$ iff there exists $c > 0$ such that $a(n) = \Theta(b(n) \log^c n)$. Additionally, we use the notation $a(n) \asymp b(n)$ as equivalent to $a(n) = \Theta(b(n))$, $a(n) \succsim b(n)$ iff $\lim_{n \to \infty} a(n)/b(n) = \infty$, and $a(n) \precsim b(n)$ iff $\lim_{n \to \infty} a(n)/b(n) = 0$. We define a zero-mean sub-Gaussian random variable $z$ with sub-Gaussian parameter $\sigma$ in terms of its Laplace transform: if there exists a universal constant $c > 0$ such that $$\mathbb{E} e^{\lambda z} \leq \exp(\sigma^2 \lambda^2 / 2c) \quad \text{for all } \lambda > 0, \quad (4)$$ then we have $\mathbb{P}(|z| > \sigma t) \leq 2 \exp(-c t^2 / 2)$. We call a random vector $Z \in \mathbb{R}^n$ isotropic with parameter $\sigma$ if $$\mathbb{E}(v^T Z)^2 = \sigma^2 \|v\|_{\ell_2}^2 \quad \text{for all } v \in \mathbb{R}^n.$$ Clearly, Gaussian and Bernoulli measures, and more generally product measures of zero-mean sub-Gaussian random variables, satisfy this isotropic definition up to a constant scalar factor. 1.4 Our Contributions To state our main results, let us first define a hierarchy of algorithms in terms of their worst-case running time on instances of the submatrix localization problem: $$\textsf{LinAlg} \subset \textsf{PolyAlg} \subset \textsf{ExpoAlg} \subset \textsf{AllAlg}.$$
The set LinAlg contains algorithms $\mathcal{A}$ that produce an answer (in our case, the localization subsets $\hat{R}^{\mathcal{A}}_m, \hat{C}^{\mathcal{A}}_n$) in time linear in $m \times n$ (the minimal computation required to read the matrix). The classes PolyAlg and ExpoAlg contain algorithms that terminate in polynomial and exponential time, respectively, while AllAlg has no restriction. Combining Theorems 3 and 4 in Section 2 and Theorem 5 in Section 3, the statistical and computational boundaries for submatrix localization can be summarized as follows. Theorem 1 (Computational and Statistical Boundaries). Consider the submatrix localization problem under the model (2). The computational boundary $\sf SNR_c$ for the dense case when $\min\{k_m, k_n\} \succsim \max\{m^{1/2}, n^{1/2}\}$ is $$\mathsf{SNR}_c \asymp \sqrt{\frac{m \vee n}{k_m k_n}} + \sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}, \quad (5)$$ in the sense that $$\lim_{m,n,k_m,k_n \to \infty}\ \inf_{\mathcal{A} \in \textsf{LinAlg}}\ \sup_{M \in \mathcal{M}} \mathbb{P}\left(\hat{R}^{\mathcal{A}}_m \neq R_m \text{ or } \hat{C}^{\mathcal{A}}_n \neq C_n\right) = 0, \quad \text{if } \lambda/\sigma \succsim \mathsf{SNR}_c, \quad (6)$$ $$\lim_{m,n,k_m,k_n \to \infty}\ \inf_{\mathcal{A} \in \textsf{PolyAlg}}\ \sup_{M \in \mathcal{M}} \mathbb{P}\left(\hat{R}^{\mathcal{A}}_m \neq R_m \text{ or } \hat{C}^{\mathcal{A}}_n \neq C_n\right) > 0, \quad \text{if } \lambda/\sigma \precsim \mathsf{SNR}_c, \quad (7)$$ where (7) holds under the Hidden Clique hypothesis HCl (see Section 2.1). For the sparse case when $\max\{k_m, k_n\} \precsim \min\{m^{1/2}, n^{1/2}\}$, the computational boundary is $\mathsf{SNR}_c = \Theta^*(1)$, more precisely $$1 \precsim \mathsf{SNR}_c \precsim \sqrt{\log \frac{m \vee n}{k_m k_n}}.$$ The statistical boundary $\sf SNR_s$ is $$\mathsf{SNR}_s \asymp \sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}, \quad (8)$$ in the sense that $$\lim_{m,n,k_m,k_n \to \infty}\ \inf_{\mathcal{A} \in \textsf{ExpoAlg}}\ \sup_{M \in \mathcal{M}} \mathbb{P}\left(\hat{R}^{\mathcal{A}}_m \neq R_m \text{ or } \hat{C}^{\mathcal{A}}_n \neq C_n\right) = 0, \quad \text{if } \lambda/\sigma \succsim \mathsf{SNR}_s, \quad (9)$$ $$\lim_{m,n,k_m,k_n \to \infty}\ \inf_{\mathcal{A} \in \textsf{AllAlg}}\ \sup_{M \in \mathcal{M}} \mathbb{P}\left(\hat{R}^{\mathcal{A}}_m \neq R_m \text{ or } \hat{C}^{\mathcal{A}}_n \neq C_n\right) > 0, \quad \text{if } \lambda/\sigma \precsim \mathsf{SNR}_s, \quad (10)$$ under the minimal assumption $\max\{k_m, k_n\} \precsim \min\{m, n\}$.
If we parametrize the submatrix model as $m = n$, $k_m \asymp k_n \asymp k = \Theta^*(n^\alpha)$, $\lambda/\sigma = \Theta^*(n^{-\beta})$ for some $0 < \alpha, \beta < 1$, we can summarize the results of Theorem 1 in a phase diagram, as illustrated in Figure 1. [Figure 1: Phase diagram for submatrix localization in the $(\alpha, \beta)$ plane. Red region (C): statistically impossible, where the problem is hard even without any computational budget. Blue region (B): statistically possible but computationally expensive (under the hidden clique hypothesis), where the problem is hard for every polynomial time algorithm but easy with an exponential time algorithm. Green region (A): statistically possible and computationally easy, where a fast polynomial time algorithm solves the problem.] To explain the diagram, consider the following cases. First, the statistical boundary is $\sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}$, which gives the line separating the red and the blue regions. For the dense regime $\alpha \geq 1/2$, the computational boundary given by Theorem 1 is $\sqrt{\frac{m \vee n}{k_m k_n}} + \sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}$, which corresponds to the line separating the blue and the green regions. For the sparse regime $\alpha < 1/2$, the computational boundary is $\Theta(1) \precsim \mathsf{SNR}_c \precsim \Theta(\sqrt{\log \frac{m \vee n}{k_m k_n}})$, which is the horizontal line connecting $(\alpha = 0, \beta = 0)$ to $(\alpha = 1/2, \beta = 0)$. As a key part of Theorem 1, we provide various linear time spectral algorithms that succeed in localizing the submatrix with high probability in the regime above the computational threshold. Furthermore, the method is adaptive: it does not require prior knowledge of the size of the submatrix. This should be contrasted with the method of Chen and Xu (2014), which requires prior knowledge of $k_m, k_n$; furthermore, the running time of their SDP-based method is superlinear in $nm$.
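Ignoring logarithmic factors, the three regions of the phase diagram reduce to a small decision rule in $(\alpha, \beta)$. The function below is our paraphrase of Figure 1, not code from the paper; assigning boundary lines to the harder region is a convention we chose for definiteness.

```python
def phase_region(alpha, beta):
    """Classify (alpha, beta) in the Figure 1 parametrization:
    m = n, k ~ n^alpha, lambda/sigma ~ n^(-beta), log factors ignored.
    Returns "A" (easy), "B" (poly-time hard, statistically easy),
    or "C" (statistically impossible)."""
    if not (0 < alpha < 1 and 0 < beta < 1):
        raise ValueError("the diagram is drawn for 0 < alpha, beta < 1")
    if beta >= alpha / 2:
        # Below SNR_s ~ n^(-alpha/2): no method can succeed.
        return "C"
    if alpha > 0.5 and beta < alpha - 0.5:
        # Above SNR_c ~ n^(1/2 - alpha): linear-time spectral works.
        return "A"
    # Sparse regime (alpha < 1/2, any beta > 0) or the dense gap.
    return "B"
```

For example, with $\alpha = 0.8$: $\beta = 0.1$ lands in the easy region, $\beta = 0.35$ in the hard-but-possible gap, and $\beta = 0.5$ below the statistical boundary.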
Under the hidden clique hypothesis, we prove that below the computational threshold there is no polynomial time algorithm that can succeed in localizing the submatrix. This is a new result that has not been established in the literature. We remark that the computational lower bound for localization requires a technique different from the lower bound for detection; the latter has been resolved in Ma and Wu (2013). Beyond the localization of a single submatrix, we generalize both the computational and the statistical story to a growing number of submatrices in Section 2.5. As mentioned earlier, the statistical boundary for single submatrix localization was investigated by Butucea et al. (2013) in the Gaussian case. Our result focuses on the intrinsic computational difficulty of localization for a growing number of submatrices, at the expense of not providing the exact constants for the thresholds. The phase transition diagram in Figure 1 for localization should be contrasted with the corresponding result for detection, as shown in (Butucea and Ingster, 2013; Ma and Wu, 2013). For a large enough submatrix size (as quantified by $\alpha > 2/3$), the computationally-intractable-but-statistically-possible region collapses for the detection problem, but not for localization. In plain words, detecting the presence of a large submatrix becomes both computationally and statistically easy beyond a certain size, while for localization there is always a gap between the statistically possible and the computationally feasible regions. This phenomenon also appears to be distinct from that of other problems such as estimation of sparse principal components (Cai et al., 2013), where computational and statistical easiness coincide over a large region of the parameter space. 1.5 Organization of the Paper The paper is organized as follows.
Section 2 establishes the computational boundary, with the computational lower bounds given in Section 2.1 and the upper bound results in Sections 2.2-2.4. An extension to the case of multiple submatrices is presented in Section 2.5. The upper and lower bounds for the statistical boundary for multiple submatrices are discussed in Section 3. A discussion is given in Section 4. Technical proofs are deferred to Section 5. In addition to the spectral methods given in Sections 2.2 and 2.4, Appendix A contains a new analysis of a known method based on a convex relaxation (Chen and Xu, 2014). A comparison of the computational lower bounds for localization and detection is included in Appendix B. 2 Computational Boundary We characterize in this section the computational boundaries for the submatrix localization problem. Sections 2.1 and 2.2 consider, respectively, the computational lower bound and the computational upper bound. The computational lower bound given in Theorem 2 is based on the hidden clique hypothesis. 2.1 Algorithmic Reduction and Computational Lower Bound Theoretical Computer Science identifies a range of problems which are believed to be \u201chard,\u201d in the sense that in the worst case the required computation grows exponentially with the size of the problem. Faced with a new computational problem, one might try to reduce any of the \u201chard\u201d problems to the new problem, and thereby claim that the new problem is as hard as the rest of this family. Since statistical procedures typically deal with a random (rather than worst-case) input, it is natural to seek token problems that are believed to be computationally difficult on average with respect to some distribution on instances. The hidden clique problem is one such example (for recent results on this problem, see Feldman et al. (2013); Deshpande and Montanari (2013)). While there exists a quasi-polynomial algorithm, no polynomial-time method (for the appropriate regime, described below) is known.
Following several other works on reductions for statistical problems, we work under the hypothesis that no polynomial-time method exists. Let us make the discussion more precise. Consider the hidden clique model $G(N, \kappa)$, where $N$ is the total number of nodes and $\kappa$ is the number of clique nodes. In the hidden clique model, a random graph instance is generated in the following way: choose $\kappa$ clique nodes uniformly at random from all possible choices and connect all the edges within the clique; every other edge is connected with probability $1/2$. Hidden Clique Hypothesis for Localization (HCl). Consider a random instance of the hidden clique model $G(N, \kappa)$. For any sequence $\kappa(N)$ such that $\kappa(N) \leq N^{\beta}$ for some $0 < \beta < 1/2$, there is no randomized polynomial time algorithm that can find the planted clique with probability tending to $1$ as $N \to \infty$. Mathematically, define the randomized polynomial time algorithm class PolyAlg as the class of algorithms $\mathcal{A}$ that satisfy $$\lim_{N, \kappa(N) \to \infty}\ \sup_{\mathcal{A} \in \textsf{PolyAlg}} \mathbb{E}_{\text{Clique}}\, \mathbb{P}_{G(N,\kappa)|\text{Clique}}\left(\text{runtime of } \mathcal{A} \text{ not polynomial in } N\right) = 0.$$ Then $$\lim_{N, \kappa(N) \to \infty}\ \inf_{\mathcal{A} \in \textsf{PolyAlg}} \mathbb{E}_{\text{Clique}}\, \mathbb{P}_{G(N,\kappa)|\text{Clique}}\left(\text{clique set returned by } \mathcal{A} \text{ not correct}\right) > 0,$$ where $\mathbb{P}_{G(N,\kappa)|\text{Clique}}$ is the (possibly more detailed, due to the randomness of the algorithm) probability measure conditioned on the clique location, and $\mathbb{E}_{\text{Clique}}$ is taken with respect to the uniform distribution over all possible clique locations. Hidden Clique Hypothesis for Detection (HCd). Consider the hidden clique model $G(N, \kappa)$. For any sequence $\kappa(N)$ such that $\kappa(N) \leq N^{\beta}$ for some $0 < \beta < 1/2$, there is no randomized polynomial time algorithm that can distinguish between $$H_0: \mathbb{P}_{ER} \quad \text{v.s.} \quad H_\alpha: \mathbb{P}_{HC}$$
Here PER is the Erd\u02dd os-R\u00e9nyi model, while PHC is the hidden clique model with uniform distribution on all the possible locations of the clique. More precisely, lim N,\u03ba(N)\u2192\u221e inf A \u2208PolyAlg ECliquePG(N,\u03ba)|Clique \u00a1detection decision returned by A wrong \u00a2 > 0, where PG(N,\u03ba)|Clique and EClique are the same as de\ufb01ned in HCl. The hidden clique hypothesis has been used recently by several authors to claim computational intractability of certain statistical problems. In particular, Berthet and Rigollet (2013a); Ma and Wu (2013) assumed the hypothesis HCd and Wang et al. (2014) used HCl. Localization is harder than detection, in the sense that if an algorithm A solves the localization problem with high probability, it also correctly solves the detection problem. Assuming that no polynomial time algorithm can solve the detection problem implies impossibility results in localization as well. In plain language, HCl is a milder hypothesis than HCd. We will provide two computational lower bound results, one for localization and the other for detection, in Theorems 2 and 6. The latter one will be deferred to Appendix B to contrast the difference of constructions between localization and detection. The detection computational lower bound was \ufb01rst proved in Ma and Wu (2013). For the localization computational lower bound, to the best of our knowledge, there is no proof in the literature. Theorem 2 ensures the upper bound in Lemma 1 being sharp. Theorem 2 (Computational Lower Bound for Localization). Consider the submatrix model (2) with parameter tuple (m = n,km \u224dkn \u224dn\u03b1,\u03bb/\u03c3 = n\u2212\u03b2), where 1 2 < \u03b1 < 1, \u03b2 > 0. Under the computational assumption HCl, if \u03bb \u03c3 \u227e s m +n kmkn \u21d2\u03b2 > \u03b1\u22121 2, it is not possible to localize the true support of the submatrix with probability going to 1 within polynomial time. 
Our algorithmic reduction for localization relies on a bootstrapping idea built on the matrix structure, together with a cleaning-up procedure introduced in Lemma 13 in Section 5. These two key ideas offer new insights beyond the usual computational lower bound arguments. Bootstrapping introduces additional randomness on top of the randomness in the hidden clique. A careful examination of these two $\sigma$-fields allows us to write the resulting object as a mixture of submatrix models. For submatrix localization we need to transform the submatrix support back to the original hidden clique support exactly, with high probability. In plain language, even though we lose track of the exact location of the support when reducing the hidden clique to the submatrix model, we can still recover the exact location of the hidden clique with high probability. For the technical details of the proof, please refer to Section 5. 2.2 Adaptive Spectral Algorithm and Computational Upper Bound In this section, we introduce a linear time algorithm that solves the submatrix localization problem above the computational boundary $\sf SNR_c$. Our proposed localization Algorithms 1 and 2 are motivated by the spectral algorithm in random graphs (McSherry, 2001; Ng et al., 2002). Algorithm 1: Vanilla Spectral Projection Algorithm for Dense Regime. Input: $X \in \mathbb{R}^{m \times n}$, the data matrix. Output: a subset of the row indexes $\hat{R}_m$ and a subset of the column indexes $\hat{C}_n$ as the localization sets of the submatrix. 1. Compute the top left and top right singular vectors $U_{\cdot 1}$ and $V_{\cdot 1}$, respectively (these correspond to the SVD $X = U \Sigma V^T$). 2. To compute $\hat{C}_n$, calculate the inner products $U_{\cdot 1}^T X_{\cdot j}$, $1 \leq j \leq n$; these values form two clusters. Similarly, for $\hat{R}_m$, calculate $X_{i\cdot} V_{\cdot 1}$, $1 \leq i \leq m$, and obtain two separated clusters. A simple thresholding procedure returns the subsets $\hat{C}_n$ and $\hat{R}_m$.
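The two steps above can be sketched in plain Python. This is an illustrative implementation, not the paper's code: we use power iteration in place of a full SVD, and a largest-gap split of the sorted scores as the "simple thresholding procedure".

```python
import math, random

def _matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def _transpose(M):
    return [list(col) for col in zip(*M)]

def _top_left_singular_vector(X, iters=100):
    # Power iteration on X X^T, started from the all-ones vector.
    Xt = _transpose(X)
    u = [1.0] * len(X)
    for _ in range(iters):
        u = _matvec(X, _matvec(Xt, u))
        nrm = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nrm for x in u]
    return u

def _split_by_largest_gap(scores):
    # Two clusters: cut the sorted scores at the largest consecutive gap
    # and return the indices in the high-score cluster.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    gaps = [scores[order[i + 1]] - scores[order[i]] for i in range(len(order) - 1)]
    cut = gaps.index(max(gaps))
    return set(order[cut + 1:])

def spectral_localize(X):
    """Sketch of Algorithm 1 (vanilla spectral projection)."""
    u = _top_left_singular_vector(X)              # top left singular vector
    v = _top_left_singular_vector(_transpose(X))  # top right singular vector
    col_scores = [abs(sum(u[i] * X[i][j] for i in range(len(X)))) for j in range(len(X[0]))]
    row_scores = [abs(sum(v[j] * X[i][j] for j in range(len(X[0])))) for i in range(len(X))]
    return _split_by_largest_gap(row_scores), _split_by_largest_gap(col_scores)
```

On an instance with a strong planted block, the recovered row and column sets match the planted support; near the boundary rates of Lemma 1, the two clusters of inner products begin to merge.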
The proposed algorithm has several advantages over the localization algorithms that have appeared in the literature. First, it is a linear time algorithm (that is, of $\Theta(mn)$ time complexity): the top singular vectors can be evaluated using fast iterative power methods, which is efficient both in terms of space and time. Secondly, this algorithm does not require prior knowledge of $k_m$ and $k_n$ and automatically adapts to the true submatrix size. Lemma 1 below justifies the effectiveness of the spectral algorithm. Lemma 1 (Guarantee for Spectral Algorithm). Consider the submatrix model (2) and Algorithm 1, and assume $\min\{k_m, k_n\} \succsim \max\{m^{1/2}, n^{1/2}\}$. There exists a universal constant $C > 0$ such that when $$\frac{\lambda}{\sigma} \geq C \cdot \left(\sqrt{\frac{m \vee n}{k_m k_n}} + \sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}\right),$$ the spectral method succeeds, in the sense that $\hat{R}_m = R_m$, $\hat{C}_n = C_n$ with probability at least $1 - m^{-c} - n^{-c} - 2\exp(-c(m+n))$. 2.3 Dense Regime We are now ready to state the SNR boundary for polynomial-time algorithms (under an appropriate computational assumption), thus excluding the exhaustive search procedure. The results hold under the dense regime when $k \succsim n^{1/2}$. Theorem 3 (Computational Boundary for Dense Regime). Consider the submatrix model (2) and assume $\min\{k_m, k_n\} \succsim \max\{m^{1/2}, n^{1/2}\}$. There exists a critical rate $$\mathsf{SNR}_c \asymp \sqrt{\frac{m \vee n}{k_m k_n}} + \sqrt{\frac{\log n}{k_m} \vee \frac{\log m}{k_n}}$$ for the signal to noise ratio such that for $\lambda/\sigma \succsim \mathsf{SNR}_c$, both the adaptive linear time Algorithm 1 and the robust polynomial time Algorithm 5 will succeed in submatrix localization, i.e., $\hat{R}_m = R_m$, $\hat{C}_n = C_n$, with high probability. For $\lambda/\sigma \precsim \mathsf{SNR}_c$, there is no polynomial time algorithm that will work, under the hidden clique hypothesis HCl.
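Up to constants, the statistical rate (8) and the dense-regime computational rate in Theorem 3 can be compared numerically; the helper names below are ours, and constants are dropped.

```python
import math

def snr_s(m, n, k_m, k_n):
    # Statistical boundary rate (8), up to constants.
    return math.sqrt(max(math.log(n) / k_m, math.log(m) / k_n))

def snr_c_dense(m, n, k_m, k_n):
    # Computational boundary rate for the dense regime, up to constants:
    # sqrt((m v n) / (k_m k_n)) plus the statistical rate.
    return math.sqrt(max(m, n) / (k_m * k_n)) + snr_s(m, n, k_m, k_n)
```

For $m = n$ and $k = n^{3/4}$, the ratio $\mathsf{SNR}_c / \mathsf{SNR}_s$ grows with $n$, which is the essential statistical-computational gap highlighted in the abstract.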
The proof of the above theorem is based on the theoretical justification of the spectral Algorithm 1 and the convex relaxation Algorithm 5, together with the new computational lower bound result for localization in Theorem 2. We remark that the analyses can be extended to the case of multiple, even growing numbers of, submatrices. We postpone this extension to Section 2.5 for simplicity and focus here on the case of a single submatrix. 2.4 Sparse Regime Under the sparse regime when $k \precsim n^{1/2}$, a naive plug-in of Lemma 1 requires $\mathsf{SNR}_c$ to be larger than $\Theta(n^{1/2}/k) \succsim \sqrt{\log n}$, which implies that the vanilla spectral Algorithm 1 is outperformed by simple entry-wise thresholding. However, a modified version with entry-wise soft-thresholding as a preprocessing de-noising step turns out to provide near optimal performance in the sparse regime. Before we introduce the formal algorithm, let us define the soft-thresholding function at level $t$ to be $$\eta_t(y) = \mathrm{sign}(y)(|y| - t)_+. \quad (11)$$ Soft-thresholding as a de-noising step achieving the optimal bias-variance trade-off has been widely studied in the wavelet literature; see, for example, Donoho and Johnstone (1998). We are now ready to state the following de-noised spectral Algorithm 2 to localize the submatrix under the sparse regime when $k \precsim n^{1/2}$. Algorithm 2: De-noised Spectral Algorithm for Sparse Regime. Input: $X \in \mathbb{R}^{m \times n}$, the data matrix, and a thresholding level $t = \Theta(\sigma \sqrt{\log \frac{m \vee n}{k_m k_n}})$. Output: a subset of the row indexes $\hat{R}_m$ and a subset of the column indexes $\hat{C}_n$ as the localization sets of the submatrix. 1. Soft-threshold each entry of the matrix $X$ at level $t$; denote the resulting matrix by $\eta_t(X)$. 2. Compute the top left and top right singular vectors $U_{\cdot 1}$ and $V_{\cdot 1}$ of the matrix $\eta_t(X)$, respectively (these correspond to the SVD $\eta_t(X) = U \Sigma V^T$). 3. To compute $\hat{C}_n$, calculate the inner products $U_{\cdot 1}^T \eta_t(X_{\cdot j})$, $1 \leq j \leq n$.
These values form two clusters. Similarly, for $\hat{R}_m$, calculate $\eta_t(X_{i\cdot})\,V_{\cdot 1}$, $1\le i\le m$, and obtain two separated clusters. A simple thresholding procedure returns the subsets $\hat{C}_n$ and $\hat{R}_m$.

Lemma 2 below provides the theoretical guarantee for the above algorithm when $k \lesssim n^{1/2}$.

Lemma 2 (Guarantee for De-noised Spectral Algorithm). Consider the submatrix model (2) and the soft-thresholded spectral Algorithm 2 with threshold level $\sigma t$, and assume $\min\{k_m,k_n\} \lesssim \max\{m^{1/2},n^{1/2}\}$. There exists a universal $C>0$ such that when
$$\frac{\lambda}{\sigma} \ge C\cdot\left(\left[\sqrt{\frac{m\vee n}{k_m k_n}} + \sqrt{\frac{\log n}{k_m}\vee\frac{\log m}{k_n}}\right]\cdot e^{-t^2/2} + t\right),$$
the spectral method succeeds in the sense that $\hat{R}_m = R_m$, $\hat{C}_n = C_n$ with probability at least $1-m^{-c}-n^{-c}-2\exp(-c(m+n))$. Further, if we choose $t = \Theta(\sigma\sqrt{\log\frac{m\vee n}{k_m k_n}})$ as the optimal thresholding level, the de-noised spectral algorithm works when
$$\frac{\lambda}{\sigma} \gtrsim \sqrt{\log\frac{m\vee n}{k_m k_n}}.$$
Combining the hidden clique hypothesis $\mathbf{HC}_l$ with Lemma 2, the following theorem holds in the sparse regime $k \lesssim n^{1/2}$.

Theorem 4 (Computational Boundary for Sparse Regime). Consider the submatrix model (2) and assume $\max\{k_m,k_n\} \lesssim \min\{m^{1/2},n^{1/2}\}$. There exists a critical rate for the signal-to-noise ratio $\mathrm{SNR}_c$ between
$$1 \lesssim \mathrm{SNR}_c \lesssim \sqrt{\log\frac{m\vee n}{k_m k_n}}$$
such that for $\lambda/\sigma \gtrsim \sqrt{\log\frac{m\vee n}{k_m k_n}}$, the linear-time Algorithm 2 succeeds in submatrix localization, i.e., $\hat{R}_m = R_m$, $\hat{C}_n = C_n$, with high probability. For $\lambda/\sigma \lesssim 1$, there is no polynomial-time algorithm that works under the hidden clique hypothesis $\mathbf{HC}_l$.

Remark 4.1. The upper bound achieved by the de-noised spectral Algorithm 2 is optimal in the two boundary cases $k=1$ and $k \asymp n^{1/2}$. When $k=1$, the information-theoretic and computational boundaries meet at $\sqrt{\log n}$.
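The soft-thresholding step (11) and the two-cluster readout of Algorithm 2 are straightforward to prototype. Below is a minimal sketch; the largest-gap split standing in for the "simple thresholding procedure", and the toy problem sizes, are our own illustrative choices.

```python
import numpy as np

def soft_threshold(Y, t):
    # eta_t(y) = sign(y) * (|y| - t)_+, applied entrywise, cf. (11)
    return np.sign(Y) * np.maximum(np.abs(Y) - t, 0.0)

def denoised_spectral_localize(X, t):
    """Sketch of Algorithm 2: soft-threshold, take the top singular
    vectors of the de-noised matrix, then split the inner-product
    scores into two clusters."""
    Xt = soft_threshold(X, t)
    U, _, Vt = np.linalg.svd(Xt, full_matrices=False)
    u1, v1 = U[:, 0], Vt[0, :]
    col_scores = u1 @ Xt   # U_{.1}^T eta_t(X_{.j}) for each column j
    row_scores = Xt @ v1   # eta_t(X_{i.}) V_{.1}  for each row i

    def split(scores):
        order = np.argsort(scores)
        cut = int(np.argmax(np.diff(scores[order])))  # largest gap
        lo, hi = order[:cut + 1], order[cut + 1:]
        chosen = lo if abs(scores[lo].mean()) > abs(scores[hi].mean()) else hi
        return set(int(i) for i in chosen)

    return split(row_scores), split(col_scores)

# sparse-regime toy example: k = 8 < sqrt(100) = 10, sigma = 1,
# threshold t ~ sigma * sqrt(log((m v n) / (km * kn)))
rng = np.random.default_rng(0)
m = n = 100
km = kn = 8
X = rng.normal(size=(m, n))
X[:km, :kn] += 4.0
t = np.sqrt(np.log(max(m, n) / (km * kn)))
rows, cols = denoised_spectral_localize(X, t)
```

In practice the noise level feeding into the threshold would itself be estimated, e.g. by the MAD estimator discussed later in the paper.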
When $k \asymp n^{1/2}$, the computational lower and upper bounds in Theorem 4 match, suggesting the near-optimality of Algorithm 2 within the class of polynomial-time algorithms. The potential logarithmic gap is due to the crudeness of the hidden clique hypothesis. Precisely, for $k=2$, hidden clique is hard not only for $G(n,p)$ with $p=1/2$ but also for $G(n,p)$ with $p = 1/\log n$. Similarly, for $k = n^\alpha$, $\alpha < 1/2$, hidden clique is hard not only for $G(n,p)$ with $p=1/2$ but also for some $0<p<1/2$.

2.5 Extension to Growing Number of Submatrices

The computational boundaries established in the previous sections for a single submatrix can be extended to the non-overlapping multiple submatrices model (3). The non-overlapping assumption means that for any $1\le s\ne t\le r$, $R_s\cap R_t=\emptyset$ and $C_s\cap C_t=\emptyset$. Algorithm 3 below extends the spectral projection Algorithm 1 to the multiple submatrices localization problem.

Algorithm 3: Spectral Algorithm for Multiple Submatrices
Input: the data matrix $X\in\mathbb{R}^{m\times n}$; a pre-specified number of submatrices $r$.
Output: subsets of the row indexes $\{\hat{R}^s_m, 1\le s\le r\}$ and of the column indexes $\{\hat{C}^s_n, 1\le s\le r\}$ as the localization of the submatrices.
1. Calculate the top $r$ left and right singular vectors in the SVD $X=U\Sigma V^T$. Denote these vectors by $U_r\in\mathbb{R}^{m\times r}$ and $V_r\in\mathbb{R}^{n\times r}$, respectively.
2. For the $\hat{C}^s_n$, $1\le s\le r$: calculate the projections $U_r(U_r^TU_r)^{-1}U_r^T X_{\cdot j}$, $1\le j\le n$, and run a $k$-means clustering algorithm (with $k=r+1$) on these $n$ vectors in $\mathbb{R}^m$. For the $\hat{R}^s_m$, $1\le s\le r$: calculate $V_r(V_r^TV_r)^{-1}V_r^T X_{i\cdot}^T$, $1\le i\le m$, and run $k$-means (with $k=r+1$) on these $m$ vectors in $\mathbb{R}^n$ (while the effective dimension is $\mathbb{R}^r$).

We emphasize that the following Lemma 3 holds even when the number of submatrices $r$ grows with $m,n$.
Lemma 3 (Spectral Algorithm for Non-overlapping Submatrices Case). Consider the non-overlapping multiple submatrices model (3) and Algorithm 3. Assume $k_s^{(m)}\asymp k_m$, $k_s^{(n)}\asymp k_n$, $\lambda_s\asymp\lambda$ for all $1\le s\le r$, and $\min\{k_m,k_n\}\gtrsim\max\{m^{1/2},n^{1/2}\}$. There exists a universal $C>0$ such that when
$$\frac{\lambda}{\sigma} \ge C\cdot\left(\sqrt{\frac{r}{k_m\wedge k_n}} + \sqrt{\frac{\log n}{k_m}}\vee\sqrt{\frac{\log m}{k_n}} + \sqrt{\frac{m\vee n}{k_m k_n}}\right), \quad (12)$$
the spectral method succeeds in the sense that $\hat{R}^{(s)}_m=R^{(s)}_m$, $\hat{C}^{(s)}_n=C^{(s)}_n$, $1\le s\le r$, with probability at least $1-m^{-c}-n^{-c}-2\exp(-c(m+n))$.

Remark 4.2. Under the non-overlapping assumption, $r k_m \lesssim m$ and $r k_n \lesssim n$ hold in most cases, so the first term in Equation (12) is dominated by the latter two terms. Thus a growing number $r$ does not affect the bound in Equation (12) as long as the non-overlapping assumption holds.

3 Statistical Boundary

In this section we study the statistical boundary. As mentioned in the introduction, in the Gaussian noise setting the statistical boundary for single-submatrix localization has been established in Butucea et al. (2013). In this section we generalize to localization of a growing number of submatrices, as well as sub-Gaussian noise, at the expense of having non-exact constants for the threshold.

3.1 Information Theoretic Bound

We begin with the information-theoretic lower bound for the localization accuracy.

Lemma 4 (Information Theoretic Lower Bound). Consider the submatrix model (2) with Gaussian noise $Z_{ij}\sim N(0,\sigma^2)$.
For any fixed $0<\alpha<1$, there exists a universal constant $C_\alpha$ such that if
$$\frac{\lambda}{\sigma} \le C_\alpha\cdot\sqrt{\frac{\log(m/k_m)}{k_n} + \frac{\log(n/k_n)}{k_m}}, \quad (13)$$
any algorithm $\mathcal{A}$ will fail to localize the submatrix with probability at least $1-\alpha-\frac{\log 2}{k_m\log(m/k_m)+k_n\log(n/k_n)}$ in the following minimax sense:
$$\inf_{\mathcal{A}\in\mathrm{AllAlg}}\;\sup_{M\in\mathcal{M}}\;\mathbb{P}\left(\hat{R}^{\mathcal{A}}_m \ne R_m \text{ or } \hat{C}^{\mathcal{A}}_n \ne C_n\right) > 1-\alpha-\frac{\log 2}{k_m\log(m/k_m)+k_n\log(n/k_n)}.$$

3.2 Combinatorial Search for Growing Number of Submatrices

Combinatorial search over all submatrices of size $k_m\times k_n$ finds the location with the strongest aggregate signal and is statistically optimal (Butucea et al., 2013; Butucea and Ingster, 2013). Unfortunately, it requires computational complexity $\Theta\left(\binom{m}{k_m}+\binom{n}{k_n}\right)$, which is exponential in $k_m,k_n$. The search Algorithm 4 was introduced and analyzed under the Gaussian setting for a single submatrix in Butucea and Ingster (2013); it can be applied iteratively to solve multiple submatrices localization.

Algorithm 4: Combinatorial Search Algorithm
Input: the data matrix $X\in\mathbb{R}^{m\times n}$.
Output: a subset of the row indexes $\hat{R}_m$ and a subset of the column indexes $\hat{C}_n$ as the localization of the submatrix.
For all index subsets $I\times J$ with $|I|=k_m$ and $|J|=k_n$, calculate the sum of the entries of the submatrix $X_{IJ}$. Report the index subset $\hat{R}_m\times\hat{C}_n$ with the largest sum.

For the case of multiple submatrices, the submatrices can be extracted in a greedy fashion in order of largest sum. Lemma 5 below provides a theoretical guarantee for Algorithm 4 to achieve the information-theoretic lower bound.

Lemma 5 (Guarantee for Search Algorithm). Consider the non-overlapping multiple submatrices model (3) and iterative application of Algorithm 4 in a greedy fashion for $r$ times.
Assume $k_s^{(m)}\asymp k_m$, $k_s^{(n)}\asymp k_n$, $\lambda_s\asymp\lambda$ for all $1\le s\le r$, and $\max\{k_m,k_n\}\lesssim\min\{m,n\}$. There exists a universal constant $C>0$ such that if
$$\frac{\lambda}{\sigma}\ge C\cdot\sqrt{\frac{\log(em/k_m)}{k_n}+\frac{\log(en/k_n)}{k_m}},$$
then Algorithm 4 succeeds in returning the correct location of the submatrix with probability at least $1-\frac{2k_mk_n}{mn}$.

To complete Theorem 1, we include the following Theorem 5 capturing the statistical boundary. It is proved by exhibiting the information-theoretic lower bound of Lemma 4 and analyzing Algorithm 4.

Theorem 5 (Statistical Boundary). Consider the submatrix model (2). There exists a critical rate
$$\mathrm{SNR}_s \asymp \sqrt{\frac{\log n}{k_m}\vee\frac{\log m}{k_n}}$$
for the signal-to-noise ratio such that for any problem with $\lambda/\sigma\gtrsim\mathrm{SNR}_s$, the statistical search Algorithm 4 succeeds in submatrix localization, i.e., $\hat{R}_m=R_m$, $\hat{C}_n=C_n$, with high probability. On the other hand, if $\lambda/\sigma\lesssim\mathrm{SNR}_s$, no algorithm works (in the minimax sense) with probability tending to 1.

4 Discussion

In this paper we established the computational and statistical boundaries for submatrix localization in the setting of a growing number of submatrices with sub-Gaussian noise. The primary goals are to demonstrate the intrinsic gap between what is statistically possible and what is computationally feasible, and to contrast the interplay between computational efficiency and statistical accuracy for localization with that for detection.

Submatrix Localization vs. Detection. As pointed out in Section 1.4, for any $k=n^\alpha$, $0<\alpha<1$, there is an intrinsic SNR gap between the computational and statistical boundaries for submatrix localization. This is unlike the submatrix detection problem, where in the regime $2/3<\alpha<1$ there is no gap between what is computationally possible and what is statistically possible. The inevitable gap in submatrix localization is due to the combinatorial structure of the problem.
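For very small instances, the exhaustive search of Algorithm 4 can be written down directly. The sketch below enumerates row subsets and, for each fixed row set, picks the $k_n$ largest column sums; this selects the same maximizer as full enumeration over $I\times J$ while halving the enumeration work. Names and the toy problem sizes are illustrative.

```python
import itertools
import numpy as np

def combinatorial_search(X, km, kn):
    """Sketch of Algorithm 4. For each fixed row subset I, the optimal
    column subset J consists of the kn largest column sums over I, so
    enumerating row subsets alone finds the global maximizer. Still
    exponential in km, hence only feasible for tiny examples."""
    m, _ = X.shape
    best_rows, best_cols, best_sum = None, None, -np.inf
    for I in itertools.combinations(range(m), km):
        col_sums = X[list(I), :].sum(axis=0)
        J = np.argsort(col_sums)[-kn:]
        total = col_sums[J].sum()
        if total > best_sum:
            best_rows = set(I)
            best_cols = set(int(j) for j in J)
            best_sum = total
    return best_rows, best_cols

# tiny example: 10x12 matrix, a 3x3 elevated block at rows {2,5,7}, cols {1,4,9}
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 12))
X[np.ix_([2, 5, 7], [1, 4, 9])] += 5.0
rows, cols = combinatorial_search(X, 3, 3)
```

This illustrates why the procedure is statistically optimal yet computationally infeasible: the run above already evaluates $\binom{10}{3}=120$ row subsets, and the count grows exponentially in $k_m$.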
This phenomenon is also seen in some network-related problems, for instance stochastic block models with a growing number of communities. Compared to the submatrix detection problem, the algorithms solving the localization problem are more complicated, and the techniques required for the analysis are considerably more involved.

Detection for Growing Number of Submatrices. The current paper solves localization of a growing number of submatrices. In comparison, for detection the only known results are for the case of a single submatrix, as considered in Butucea and Ingster (2013) for the statistical boundary and in Ma and Wu (2013) for the computational boundary. The detection problem in the setting of a growing number of submatrices is of significant interest; in particular, it would be interesting to understand the computational and statistical trade-offs in such a setting. This will need further investigation.

Estimation of the Noise Level $\sigma$. Although Algorithms 1 and 3 do not require the noise level $\sigma$ as an input, Algorithm 2 does require knowledge of $\sigma$. The noise level $\sigma$ can be estimated robustly. In the Gaussian case, a simple robust estimator of $\sigma$ is the following median absolute deviation (MAD) estimator, exploiting the fact that $M$ is sparse:
$$\hat\sigma = \mathrm{median}_{ij}\,|X_{ij}-\mathrm{median}_{ij}(X_{ij})|\,/\,\Phi^{-1}(0.75) \approx 1.4826\times\mathrm{median}_{ij}\,|X_{ij}-\mathrm{median}_{ij}(X_{ij})|.$$

5 Proofs

We prove in this section the main results given in the paper. We first collect and prove a few important technical lemmas that will be used in the proofs of the main results.

5.1 Prerequisite Lemmas

We start with Lemmas 6 and 7, which are due to perturbation theory.

Lemma 6 (Stewart and Sun (1990), Theorem 4.1).
Suppose that $\tilde{A} = A + E$, all matrices of the same size, with the singular value decompositions
$$[U_1,U_2,U_3]^T A [V_1,V_2] = \begin{bmatrix}\Sigma_1 & 0\\ 0 & \Sigma_2\\ 0 & 0\end{bmatrix} \quad (14)$$
and
$$[\tilde{U}_1,\tilde{U}_2,\tilde{U}_3]^T \tilde{A} [\tilde{V}_1,\tilde{V}_2] = \begin{bmatrix}\tilde\Sigma_1 & 0\\ 0 & \tilde\Sigma_2\\ 0 & 0\end{bmatrix}. \quad (15)$$
Let $\Phi$ be the matrix of canonical angles between $\mathcal{R}(U_1)$ and $\mathcal{R}(\tilde{U}_1)$, and let $\Theta$ be the matrix of canonical angles between $\mathcal{R}(V_1)$ and $\mathcal{R}(\tilde{V}_1)$ (here $\mathcal{R}$ denotes the linear space spanned by the columns). Define
$$R = A\tilde{V}_1 - \tilde{U}_1\tilde\Sigma_1, \quad (16)$$
$$S = A^T\tilde{U}_1 - \tilde{V}_1\tilde\Sigma_1. \quad (17)$$
Suppose there is a number $\delta>0$ such that $\min|\sigma(\tilde\Sigma_1)-\sigma(\Sigma_2)|\ge\delta$ and $\min\sigma(\tilde\Sigma_1)\ge\delta$. Then
$$\sqrt{\|\sin\Phi\|_F^2+\|\sin\Theta\|_F^2} \le \frac{\sqrt{\|R\|_F^2+\|S\|_F^2}}{\delta}.$$
Further, suppose there are numbers $\alpha,\delta$ such that $\min\sigma(\tilde\Sigma_1)\ge\delta+\alpha$ and $\max\sigma(\Sigma_2)\le\alpha$; then for the 2-norm, or any unitarily invariant norm, we have
$$\max\{\|\sin\Phi\|_2,\|\sin\Theta\|_2\} \le \frac{\max\{\|R\|_2,\|S\|_2\}}{\delta}.$$

Let us use the above version of the perturbation bound to derive a lemma that is particularly useful in our case. Simple algebra tells us that
$$\tilde{A}[\tilde{V}_1,\tilde{V}_2] = [\tilde{U}_1,\tilde{U}_2,\tilde{U}_3]\begin{bmatrix}\tilde\Sigma_1&0\\0&\tilde\Sigma_2\\0&0\end{bmatrix} = [\tilde{U}_1\tilde\Sigma_1,\tilde{U}_2\tilde\Sigma_2].$$
(18)
$$A[\tilde{V}_1,\tilde{V}_2] = [A\tilde{V}_1, A\tilde{V}_2] \quad (19)$$
$$(\tilde{A} - A)[\tilde{V}_1,\tilde{V}_2] = [\tilde{U}_1\tilde\Sigma_1 - A\tilde{V}_1,\; \tilde{U}_2\tilde\Sigma_2 - A\tilde{V}_2] \quad (20)$$
$$\|\tilde{A} - A\|_F^2 = \mathrm{Tr}\left((\tilde{A}-A)[\tilde{V}_1,\tilde{V}_2][\tilde{V}_1,\tilde{V}_2]^T(\tilde{A}-A)^T\right) \quad (21)$$
$$= \|R\|_F^2 + \|\tilde{U}_2\tilde\Sigma_2 - A\tilde{V}_2\|_F^2 \ge \|R\|_F^2. \quad (22)$$
Similarly, we have
$$(\tilde{A} - A)^T[\tilde{U}_1,\tilde{U}_2,\tilde{U}_3] = [\tilde{V}_1\tilde\Sigma_1 - A^T\tilde{U}_1,\; \tilde{V}_2\tilde\Sigma_2 - A^T\tilde{U}_2,\; -A^T\tilde{U}_3] \quad (23)$$
$$\|\tilde{A} - A\|_F^2 = \mathrm{Tr}\left((\tilde{A}-A)^T[\tilde{U}_1,\tilde{U}_2,\tilde{U}_3][\tilde{U}_1,\tilde{U}_2,\tilde{U}_3]^T(\tilde{A}-A)\right) \quad (24)$$
$$= \|S\|_F^2 + \|\tilde{V}_2\tilde\Sigma_2 - A^T\tilde{U}_2\|_F^2 + \|A^T\tilde{U}_3\|_F^2 \ge \|S\|_F^2. \quad (25)$$
Thus it holds that $\|\tilde{A} - A\|_F \ge \max(\|R\|_F, \|S\|_F)$, and similarly (since the operator norm of a whole matrix is larger than that of a submatrix) $\|\tilde{A} - A\|_2 \ge \max(\|R\|_2, \|S\|_2)$. Thus the following version of Wedin's theorem holds.

Lemma 7 (Davis-Kahan-Wedin's Type Perturbation Bound). It holds that
$$\sqrt{\|\sin\Phi\|_F^2 + \|\sin\Theta\|_F^2} \le \frac{\sqrt{2}\,\|E\|_F}{\delta},$$
and also the following holds for the 2-norm (or any unitarily invariant norm):
$$\max\{\|\sin\Phi\|_2, \|\sin\Theta\|_2\} \le \frac{\|E\|_2}{\delta}.$$

We will then introduce some concentration inequalities. Lemmas 8 and 9 are concentration-of-measure results from random matrix theory.

Lemma 8 (Vershynin (2010), Theorem 39). Let $Z\in\mathbb{R}^{m\times n}$ be a matrix whose rows $Z_{i\cdot}$ are independent sub-Gaussian isotropic random vectors in $\mathbb{R}^n$ with parameter $\sigma$.
Then for every $t\ge 0$, with probability at least $1-2\exp(-ct^2)$, one has
$$\|Z\|_2 \le \sigma(\sqrt{m} + C\sqrt{n} + t),$$
where $C,c>0$ are universal constants.

Lemma 9 (Hsu et al. (2012), Projection Lemma). Assume $Z\in\mathbb{R}^n$ is an isotropic sub-Gaussian vector with i.i.d. entries and parameter $\sigma$, and $P$ is a projection operator onto a subspace of dimension $r$. Then we have the concentration inequality
$$\mathbb{P}\left(\|PZ\|_{\ell_2}^2 \ge \sigma^2(r + 2\sqrt{rt} + 2t)\right) \le \exp(-ct),$$
where $c>0$ is a universal constant. The proof of this lemma is a simple application of Theorem 2.1 in Hsu et al. (2012) to the case where $P$ is a rank-$r$ positive semidefinite projection matrix.

The following two are standard Chernoff-type bounds for bounded random variables.

Lemma 10 (Hoeffding (1963), Hoeffding's Inequality). Let $X_i$, $1\le i\le n$, be independent random variables with $a_i\le X_i\le b_i$, $1\le i\le n$. Then for $S_n = \sum_{i=1}^n X_i$,
$$\mathbb{P}(|S_n - \mathbb{E}S_n| > u) \le 2\exp\left(-\frac{2u^2}{\sum_{i=1}^n (b_i-a_i)^2}\right). \quad (26)$$

Lemma 11 (Bennett (1962), Bernstein's Inequality). Let $X_i$, $1\le i\le n$, be independent zero-mean random variables with $|X_i|\le M$, $1\le i\le n$. Then
$$\mathbb{P}\left(\sum_{i=1}^n X_i > u\right) \le \exp\left(-\frac{u^2/2}{\sum_{i=1}^n \mathbb{E}X_i^2 + Mu/3}\right). \quad (27)$$

We end this section by stating Fano's information inequality, which plays a key role in many information-theoretic lower bounds.

Lemma 12 (Tsybakov (2009), Corollary 2.6). Let $P_0,P_1,\dots,P_M$ be probability measures on the same probability space $(\Theta,\mathcal{F})$, $M\ge 2$. If for some $0<\alpha<1$
$$\frac{1}{M+1}\sum_{i=0}^M d_{\mathrm{KL}}(P_i\,\|\,\bar{P}) \le \alpha\cdot\log M, \quad (28)$$
where $\bar{P} = \frac{1}{M+1}\sum_{i=0}^M P_i$, then
$$p_{e,M} \ge \bar{p}_{e,M} \ge \frac{\log(M+1)-\log 2}{\log M} - \alpha, \quad (29)$$
where $p_{e,M}$ is the minimax error for the multiple testing problem.

5.2 Main Proofs

Proof of Lemma 1.
Recall the matrix form of the submatrix model, with the SVD of the mean signal matrix $M$:
$$X = \lambda\sqrt{k_m k_n}\, UV^T + Z.$$
The largest singular value of $\lambda\sqrt{k_mk_n}\,UV^T$ is $\lambda\sqrt{k_m k_n}$, and all the other singular values are 0. The Davis-Kahan-Wedin perturbation bound tells us how close the singular space of $X$ is to the singular space of $M$.

Let us apply the derived Lemma 7 to $X=\lambda\sqrt{k_mk_n}\,UV^T+Z$. Denote the top left and right singular vectors of $X$ by $\tilde{U}$ and $\tilde{V}$. One can see that $\mathbb{E}\|Z\|_2\asymp\sigma(\sqrt{m}+\sqrt{n})$ under very mild finite fourth-moment conditions, through a result in Latała (2005). Lemma 8 provides a more explicit probabilistic bound for the concentration of the largest singular value of an i.i.d. sub-Gaussian random matrix. Because the rows $Z_{i\cdot}$ are sampled from a product measure of mean-zero sub-Gaussians, they naturally satisfy the isotropic condition. Hence, with probability at least $1-2\exp(-c(m+n))$, via Lemma 8 we reach
$$\|Z\|_2 \le C\cdot\sigma(\sqrt{m}+\sqrt{n}). \quad (30)$$
Using Weyl's interlacing inequality, we have $|\sigma_i(X)-\sigma_i(M)|\le\|Z\|_2$, and thus
$$\sigma_1(X)\ge\lambda\sqrt{k_mk_n}-\|Z\|_2, \qquad \sigma_2(X)\le\|Z\|_2.$$
Applying Lemma 7, we have
$$\max\left\{|\sin\angle(U,\tilde{U})|,\; |\sin\angle(V,\tilde{V})|\right\} \le \frac{C\sigma(\sqrt{m}+\sqrt{n})}{\lambda\sqrt{k_mk_n}-C\sigma(\sqrt{m}+\sqrt{n})} \asymp \frac{\sigma(\sqrt{m}+\sqrt{n})}{\lambda\sqrt{k_mk_n}}.$$
In addition, $\|U-\tilde{U}\|_{\ell_2} = \sqrt{2-2\cos\angle(U,\tilde{U})} = 2|\sin\tfrac12\angle(U,\tilde{U})|$, which means
$$\max\left\{\|U-\tilde{U}\|_{\ell_2},\;\|V-\tilde{V}\|_{\ell_2}\right\} \le C\cdot\frac{\sigma(\sqrt{m}+\sqrt{n})}{\lambda\sqrt{k_mk_n}}.$$
And according to the definition of the canonical angles, we have
$$\max\left\{\|UU^T-\tilde{U}\tilde{U}^T\|_2,\;\|VV^T-\tilde{V}\tilde{V}^T\|_2\right\} \le C\cdot\frac{\sigma(\sqrt{m}+\sqrt{n})}{\lambda\sqrt{k_mk_n}}.$$
Now let us assume we have two observations of $X$.
We use the first observation $\tilde{X}$ to solve for the singular vectors $\tilde{U},\tilde{V}$, and the second observation $X$ to project onto the singular vectors $\tilde{U},\tilde{V}$. When the noise is Gaussian, we can use Tsybakov's sample cloning argument (Tsybakov (2014), Lemma 2.1) to create two independent observations of $X$ as follows: create a pure Gaussian matrix $Z'$ and define $X_1 = X+Z' = M+(Z+Z')$ and $X_2 = X-Z' = M+(Z-Z')$, making $X_1,X_2$ independent with the variance doubled. This step is not essential, because we can instead perform random subsampling as in Vu (2014); having two observations instead of one changes the picture neither statistically nor computationally.

Recall $X = M+Z = \lambda\sqrt{k_mk_n}\,UV^T+Z$. Denoting the projection operator by $P$, we start the analysis by decomposing, for $1\le j\le n$,
$$\|P_{\tilde{U}}X_{\cdot j} - M_{\cdot j}\|_{\ell_2} \le \|P_{\tilde{U}}(X_{\cdot j}-M_{\cdot j})\|_{\ell_2} + \|(P_{\tilde{U}}-I)M_{\cdot j}\|_{\ell_2}. \quad (31)$$
For the first term of (31), note that $X_{\cdot j}-M_{\cdot j} = Z_{\cdot j}\in\mathbb{R}^m$ is an i.i.d. isotropic sub-Gaussian vector, and thus Lemma 9 with $t=(1+1/c)\log n$, $Z_{\cdot j}\in\mathbb{R}^m$, $1\le j\le n$, and $r=1$ gives
$$\mathbb{P}\left(\|P_{\tilde{U}}(X_{\cdot j}-M_{\cdot j})\|_{\ell_2} \ge \sigma\sqrt{r}\,\sqrt{1+2\sqrt{1+1/c}\cdot\sqrt{\frac{\log n}{r}}+2(1+1/c)\cdot\frac{\log n}{r}}\right) \le n^{-c-1}. \quad (32)$$
We invoke the union bound over all $1\le j\le n$ to obtain
$$\max_{1\le j\le n}\|P_{\tilde{U}}(X_{\cdot j}-M_{\cdot j})\|_{\ell_2} \le \sigma\sqrt{r}+\sqrt{2(1+1/c)}\cdot\sigma\sqrt{\log n} \quad (33)$$
$$\le \sigma + C\cdot\sigma\sqrt{\log n} \quad (34)$$
with probability at least $1-n^{-c}$. For the second term of (31), with $M_{\cdot j} = \tilde{X}_{\cdot j}-\tilde{Z}_{\cdot j}$, there are two ways of obtaining an upper bound.
The first approach is to split
$$\|(P_{\tilde{U}}-I)M\|_2 \le \|(P_{\tilde{U}}-I)\tilde{X}\|_2 + \|(P_{\tilde{U}}-I)\tilde{Z}\|_2 \le 2\|\tilde{Z}\|_2. \quad (35)$$
The first term of (35) is $\sigma_2(\tilde{X}) \le \sigma_2(M)+\|\tilde{Z}\|_2$ through Weyl's interlacing inequality, while the second term is bounded by $\|\tilde{Z}\|_2$. We also know that $\|\tilde{Z}\|_2 \le C_3\cdot\sigma(\sqrt{m}+\sqrt{n})$. Recall the definition of the induced $\ell_2$ norm of the matrix $(P_{\tilde{U}}-I)M$:
$$\|(P_{\tilde{U}}-I)M\|_2 \ge \frac{\|(P_{\tilde{U}}-I)MV\|_{\ell_2}}{\|V\|_{\ell_2}} = \|(P_{\tilde{U}}-I)\lambda\sqrt{k_mk_n}\,U\|_{\ell_2} \ge \sqrt{k_n}\,\|(P_{\tilde{U}}-I)M_{\cdot j}\|_{\ell_2}.$$
In the second approach, the second term of (31) can be handled through the sin-theta perturbation bound, Lemma 7:
$$\|(P_{\tilde{U}}-I)M_{\cdot j}\|_{\ell_2} = \|(P_{\tilde{U}}-P_U)M_{\cdot j}\|_{\ell_2} \le \|\tilde{U}\tilde{U}^T-UU^T\|_2\cdot\|M_{\cdot j}\|_{\ell_2} \le C\,\frac{\sigma\sqrt{m+n}}{\lambda\sqrt{k_mk_n}}\,\lambda\sqrt{k_m}.$$
This second approach will be used in the multiple submatrices analysis. Combining all the above, we have with probability at least $1-n^{-c}-m^{-c}$, for all $1\le j\le n$,
$$\|P_{\tilde{U}}X_{\cdot j}-M_{\cdot j}\|_{\ell_2} \le C\cdot\left(\sigma\sqrt{\log n}+\sigma\sqrt{\frac{m\vee n}{k_n}}\right). \quad (36)$$
Similarly, for all $1\le i\le m$,
$$\|P_{\tilde{V}}X_{i\cdot}^T - M_{i\cdot}^T\|_{\ell_2} \le C\cdot\left(\sigma\sqrt{\log m}+\sigma\sqrt{\frac{m\vee n}{k_m}}\right). \quad (37)$$
Clearly we know that for $i\in R_m$ and $i'\in[m]\setminus R_m$, $\|M_{i\cdot}^T - M_{i'\cdot}^T\|_{\ell_2} = \lambda\sqrt{k_n}$, and for $j\in C_n$ and $j'\in[n]\setminus C_n$, $\|M_{\cdot j}-M_{\cdot j'}\|_{\ell_2} = \lambda\sqrt{k_m}$. Thus if
$$\lambda\sqrt{k_m} \ge 6C\cdot\left(\sigma\sqrt{\log n}+\sigma\sqrt{\frac{m\vee n}{k_n}}\right) \quad (38)$$
$$\lambda\sqrt{k_n} \ge 6C\cdot\left(\sigma\sqrt{\log m}+\sigma\sqrt{\frac{m\vee n}{k_m}}\right)$$
(39)
hold, then we have learned a metric $d$ (a one-dimensional line) such that on this line the data form clusters, in the sense that
$$2\max_{i,i'\in R_m}|d_i-d_{i'}| \le \min_{i\in R_m,\; i'\in[m]\setminus R_m}|d_i-d_{i'}|.$$
In this case, a simple cut-off clustering recovers the nodes exactly. In summary, if
$$\lambda \ge C\cdot\sigma\left(\sqrt{\frac{\log n}{k_m}}+\sqrt{\frac{\log m}{k_n}}+\sqrt{\frac{m+n}{k_mk_n}}\right),$$
the spectral algorithm succeeds with probability at least $1-m^{-c}-n^{-c}-2\exp(-c(m+n))$.

Proof of Lemma 2. The validity of the thresholded spectral algorithm at level $\sigma t$ follows readily from the proof of Lemma 1. First we have the decomposition
$$\eta_{\sigma t}(X) = M + \eta_{\sigma t}(Z) + B = \lambda 1_{R_m}1_{C_n}^T + \eta_{\sigma t}(Z) + B,$$
where $B$ is the bias matrix satisfying
$$B_{ij}=0 \text{ if } (i,j)\notin R_m\times C_n, \qquad |B_{ij}|\le 2\sigma t \text{ if } (i,j)\in R_m\times C_n.$$
Let us prove this fact. Clearly if $(i,j)\notin R_m\times C_n$, then $B_{ij}=0$. If $(i,j)\in R_m\times C_n$, we have
$$|B_{ij}| = |\eta_{\sigma t}(\lambda+Z_{ij})-\lambda-\eta_{\sigma t}(Z_{ij})| \le |\eta_{\sigma t}(\lambda+Z_{ij})-(\lambda+Z_{ij})| + |(\lambda+Z_{ij})-\lambda-Z_{ij}| + |Z_{ij}-\eta_{\sigma t}(Z_{ij})| \le 2\sigma t,$$
where the last step uses $|\eta_t(y)-y|\le t$ for any $y$. Let us bound the variance of each thresholded entry $\eta_{\sigma t}(Z_{ij})$:
$$\mathbb{E}\left[\eta_{\sigma t}(Z_{ij})\right]^2 = \int_0^\infty 2z\cdot 2\,\mathbb{P}(\eta_{\sigma t}(Z_{ij})>z)\,dz = \int_0^\infty 4z\,\mathbb{P}(Z_{ij}>z+\sigma t)\,dz = \int_0^\infty 4z\exp\left\{-c\cdot\frac{(z+\sigma t)^2}{2\sigma^2}\right\}dz \le C\cdot\sigma^2\cdot\exp\left(-\frac{t^2}{2}\right)$$
for some universal constant $C$. Clearly after thresholding $\eta_{\sigma t}(Z)$ still has i.i.d. entries, but the variance has been significantly reduced as $t\to\infty$.
Via the perturbation analysis established in the proof of Lemma 1,
$$\|\eta_{\sigma t}(Z)+B\|_2 \le \|\eta_{\sigma t}(Z)\|_2+\|B\|_2 \le C\cdot\sigma\sqrt{m\vee n}\,e^{-t^2/2} + 2\sqrt{k_mk_n}\,\sigma t,$$
since $B$ has only $k_mk_n$ nonzero entries. Thus, applying Lemma 7, we have
$$\max\left\{|\sin\angle(U,\tilde{U})|,\;|\sin\angle(V,\tilde{V})|\right\} \le \frac{C\cdot\sigma\sqrt{m\vee n}\,e^{-t^2/2}+2\sqrt{k_mk_n}\,\sigma t}{\lambda\sqrt{k_mk_n}-C\cdot\sigma\sqrt{m\vee n}\,e^{-t^2/2}-2\sqrt{k_mk_n}\,\sigma t} \lesssim \frac{\sqrt{m\vee n}\,\sigma e^{-t^2/2}+\sqrt{k_mk_n}\,\sigma t}{\lambda\sqrt{k_mk_n}}.$$
As usual, we continue the analysis by decomposing (following the steps leading to Lemma 7, but with the additional bias term $B$), for $1\le j\le n$,
$$\|P_{\tilde{U}}\eta(X_{\cdot j})-M_{\cdot j}\|_{\ell_2} \le \|P_{\tilde{U}}(\eta(X_{\cdot j})-M_{\cdot j})\|_{\ell_2}+\|(P_{\tilde{U}}-I)M_{\cdot j}\|_{\ell_2} \le \|P_{\tilde{U}}\eta(Z_{\cdot j})\|_{\ell_2}+\|B_{\cdot j}\|_{\ell_2}+\|(P_{\tilde{U}}-I)M_{\cdot j}\|_{\ell_2}$$
$$\le C\cdot\left\{\sigma e^{-t^2/2}\sqrt{\log n}+\sqrt{k_m}\,\sigma t+\frac{\sqrt{m\vee n}\,\sigma e^{-t^2/2}+\sqrt{k_mk_n}\,\sigma t}{\lambda\sqrt{k_mk_n}}\,\lambda\sqrt{k_m}\right\}.$$
We know that for $j\in C_n$ and $j'\in[n]\setminus C_n$, $\|M_{\cdot j}-M_{\cdot j'}\|_{\ell_2}=\lambda\sqrt{k_m}$. Thus if
$$\lambda\sqrt{k_m} \ge 6C\cdot\left\{\sigma e^{-t^2/2}\sqrt{\log n}+\sqrt{k_m}\,\sigma t+\frac{\sqrt{m\vee n}\,\sigma e^{-t^2/2}+\sqrt{k_mk_n}\,\sigma t}{\sqrt{k_n}}\right\} \quad (40)$$
holds, then we have learned a metric $d$ (a one-dimensional line) on which the data form clusters. In this case, a simple cut-off clustering recovers the nodes exactly. In summary, if
$$\frac{\lambda}{\sigma} \ge C\cdot\left(\left[\sqrt{\frac{m\vee n}{k_mk_n}}+\sqrt{\frac{\log n}{k_m}\vee\frac{\log m}{k_n}}\right]\cdot e^{-t^2/2}+t\right),$$
the thresholded spectral algorithm succeeds with probability at least $1-m^{-c}-n^{-c}-2\exp(-c(m+n))$.

Proof of Theorem 2.
The computational lower bound for localization (support recovery) is of a different nature than the computational lower bound for detection (two-point testing). The idea is to design a randomized polynomial-time algorithmic reduction relating an instance of the hidden clique problem to our submatrix localization problem. The proof proceeds in the following way: we construct a randomized polynomial-time transformation $T$ mapping a random instance of $G(N,\kappa)$ to a random instance of our submatrix model $M(m=n,\,k_m\asymp k_n\asymp k,\,\lambda/\sigma)$ (abbreviated $M(n,k,\lambda/\sigma)$). We then provide a quantitative computational lower bound by showing that if there is a polynomial-time algorithm that pushes below the hypothesized computational boundary for localization in the submatrix model, there is a polynomial-time algorithm that solves hidden clique localization with high probability, contradicting $\mathbf{HC}_l$.

Denote the randomized polynomial-time transformation by $T: G(N,\kappa(N)) \to M(n,\,k=n^\alpha,\,\lambda/\sigma = n^{-\beta})$. The construction of the algorithmic reduction has several stages. First we define a graph $G^e(N,\kappa(N))$ that is stochastically equivalent to the hidden clique graph $G(N,\kappa(N))$ but is easier for theoretical analysis. $G^e$ has the property that each node independently is a clique node with probability $\kappa(N)/N$, and a non-clique node with the remaining probability. Using Bernstein's inequality and inequality (46) proved below, with probability at least $1-2N^{-1}$ the number of clique nodes $\kappa^e$ in $G^e$ satisfies
$$\kappa\left(1-\sqrt{\frac{4\log N}{\kappa}}\right) \le \kappa^e \le \kappa\left(1+\sqrt{\frac{4\log N}{\kappa}}\right) \;\Rightarrow\; \kappa^e \asymp \kappa \quad (41)$$
as long as $\kappa \gtrsim \log N$.

Consider a hidden clique graph $G^e(2N, 2\kappa(N))$ with $N=n$ and $\kappa(N)=\kappa$. Denote the set of clique nodes of $G^e(2N,2\kappa(N))$ by $C_{N,\kappa}$.
Represent the hidden clique graph using the symmetric adjacency matrix $G\in\{-1,1\}^{2N\times 2N}$, where $G_{ij}=1$ if $i,j\in C_{N,\kappa}$, and otherwise $G_{ij}$ is $-1$ or $1$ with equal probability. As remarked before, with probability at least $1-2N^{-1}$ we have planted $2\kappa(1\pm o(1))$ clique nodes in the graph $G^e$ with $2N$ nodes. Take out the upper-right submatrix of $G$, denoted $G_{UR}$, where $U$ is the index set $1\le i\le N$ and $R$ is the index set $N+1\le j\le 2N$. Now $G_{UR}$ has independent entries.

The construction of $T$ employs a bootstrapping idea. Generate $l^2$ (with $l\asymp n^\beta$, $0<\beta<1/2$) matrices through bootstrap subsampling as follows. Generate $l-1$ independent index vectors $\psi^{(s)}\in\mathbb{R}^n$, $1\le s<l$, where each element $\psi^{(s)}(i)$, $1\le i\le n$, is a random draw with replacement from the row indices $[n]$; denote by $\psi^{(0)}(i)=i$, $1\le i\le n$, the original index set. Similarly, define independently the column index vectors $\phi^{(t)}$, $1\le t<l$. We remark that these bootstrap samples can be generated in polynomial time $\Omega(l^2n^2)$. The transformation is a weighted average of $l^2$ matrices of size $n\times n$ generated from the original adjacency matrix $G_{UR}$:
$$T:\; M_{ij} = \frac{1}{l}\sum_{0\le s,t}$$