M-Best-Diverse Labelings for Submodular Energies and Beyond

Alexander Kirillov¹  Dmitrij Schlesinger¹  Dmitry Vetrov²  Carsten Rother¹  Bogdan Savchynskyy¹
¹ TU Dresden, Dresden, Germany   ² Skoltech, Moscow, Russia
alexander.kirillov@tu-dresden.de

Abstract

We consider the problem of finding the M best diverse solutions of energy minimization problems for graphical models. Contrary to the sequential method of Batra et al., which greedily finds one solution after another, we infer all M solutions jointly. It was shown recently that such jointly inferred labelings not only have smaller total energy but also qualitatively outperform the sequentially obtained ones. The only obstacle to using this new technique is the complexity of the corresponding inference problem, since it is a considerably slower algorithm than the method of Batra et al. In this work we show that the joint inference of M best diverse solutions can be formulated as submodular energy minimization if the original MAP-inference problem is submodular, hence fast inference techniques can be used. In addition to the theoretical results we provide practical algorithms that outperform the current state of the art and can be used in both the submodular and the non-submodular case.

1 Introduction

A variety of tasks in machine learning can be formulated as an energy minimization problem, known also as maximum a posteriori (MAP) or maximum likelihood estimation (MLE) inference in undirected graphical models (related to Markov or conditional random fields). Its modeling power and importance are well recognized, which has resulted in specialized benchmarks, e.g. [18], and computational challenges [8] for its solvers. This underlines the importance of finding the most probable solution.
Following [3] and [25] we argue, however, that finding M > 1 diverse configurations with low energies is also of importance in a number of scenarios, such as: (a) expressing uncertainty of the found solution [27]; (b) faster training of model parameters [14]; (c) ranking of inference results [32]; (d) empirical risk minimization [26].

We build on the new formulation for finding M-best-diverse configurations, which was recently proposed in [19]. In this formulation all M configurations are inferred jointly, contrary to the established method [3], where a sequential greedy procedure is used. As shown in [19], the new formulation not only reliably produces configurations with lower total energy, but also leads to better results in several application scenarios. In particular, for the image segmentation scenario the results of [19] significantly outperform those of [3]. This is true even when [19] uses a plain Hamming distance as a diversity measure and [3] uses more powerful diversity measures.

Our contributions.
• We show that finding M-best-diverse configurations of a binary submodular energy minimization problem can be formulated as a submodular MAP-inference problem, and hence can be solved efficiently for any node-wise diversity measure.
• We show that for certain diversity measures, such as e.g. the Hamming distance, the M-best-diverse configurations of a multilabel submodular energy minimization problem can be formulated as a submodular MAP-inference problem, which also implies applicability of efficient graph-cut-based solvers.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 647769). D. Vetrov was supported by RFBR proj. No. 15-31-20596 and by Microsoft (RPD 1053945).
• We give the insight that if the MAP-inference problem is submodular, then the M-best-diverse configurations can always be fully ordered with respect to the natural partial order induced on the space of all configurations.
• We show experimentally that if the MAP-inference problem is submodular, we are quantitatively at least as good as [19] and considerably better than [3]. The main advantage of our method is a major speed-up over [19], up to two orders of magnitude. Our method has a run-time of the same order of magnitude as [3]. In the non-submodular case our results are slightly inferior to [19], but the advantage in speed-up still holds.

Related work. The importance of the considered problem may be justified by the fact that a procedure for computing the M best solutions to discrete optimization problems was proposed in [23], which dates back to 1972. Later, more efficient specialized procedures were introduced for MAP-inference on a tree [29, Ch. 8], junction trees [24] and general graphical models [33, 12, 2]. Such methods are, however, not suited for scenarios where diversity of the solutions is required (as in machine translation, search engines, or producing the M best hypotheses in cascaded algorithms), since they do not enforce it explicitly. Structured Determinantal Point Processes [22] are a tool to model probability distributions over structured models. Unfortunately, an efficient sampling procedure is feasible for tree-structured graphical models only. The recently proposed algorithm [7] for finding the M best modes of a distribution is limited to the same narrow class of problems. Training M independent graphical models to produce diverse solutions was proposed in [13, 15]. In contrast, we assume a single fixed model supporting reasonable MAP-solutions.
Along with [3], most related to our work is the recent paper [25], which proposes a subclass of new diversity penalties for which the greedy nature of the algorithm [3] can be substantiated due to submodularity of the used diversity measures. In contrast to [25], we do not limit ourselves to diversity measures fulfilling such properties; moreover, we define a class of problems for which our joint inference approach leads to polynomially and, in practice, efficiently solvable problems. We build on top of the work [19], which is explained in detail in Section 2.

Organization of the paper. Section 2 provides the background necessary for the formulation of our results: energy minimization for graphical models and existing approaches to obtain diverse solutions. In Section 3 we introduce submodularity for graphical models and formulate the main results of our work. Finally, Sections 4 and 5 are devoted to the experimental evaluation of our technique and to conclusions. The supplementary material contains proofs of all mathematical claims and the concurrent submission [19].

2 Preliminaries

Energy minimization. Let 2^A denote the powerset of a set A. The pair G = (V, F) is called a hyper-graph and has V as a finite set of variable nodes and F ⊆ 2^V as a set of factors. Each variable node v ∈ V is associated with a variable y_v taking its values in a finite set of labels L_v. The set L_A = ∏_{v∈A} L_v denotes the Cartesian product of the sets of labels corresponding to the subset A ⊆ V of variables. Functions θ_f : L_f → R, associated with factors f ∈ F, are called potentials and define local costs on values of variables and their combinations. Potentials θ_f with |f| = 1 are called unary, with |f| = 2 pairwise, and with |f| > 2 higher order. The set {θ_f : f ∈ F} of all potentials is referred to as θ. For any factor f ∈ F the corresponding set of variables {y_v : v ∈ f} will be denoted by y_f.
The energy minimization problem consists of finding a labeling y* = {y_v : v ∈ V} ∈ L_V which minimizes the total sum of the corresponding potentials:

  y* = argmin_{y∈L_V} E(y) = argmin_{y∈L_V} ∑_{f∈F} θ_f(y_f) .  (1)

Problem (1) is also known as MAP-inference. A labeling y* satisfying (1) will later be called a solution of the energy minimization or MAP-inference problem, in short a MAP-labeling or MAP-solution.

[Figure 1: factor-graph drawings; the graphics are not recoverable from the text.]

Figure 1: Examples of factor graphs for 3 diverse solutions of the original MRF (1) with different diversity measures. The circles represent nodes of the original model that are copied 3 times. For clarity, the diversity factors of order higher than 2 are shown as squares. Pairwise factors are depicted by edges connecting the nodes. We omit λ for readability. (a) The most general diversity measure (4), (b) the node-wise diversity measure (6), (c) the Hamming distance as a diversity measure (5).

Finally, a model is defined by the triple (G, L_V, θ), i.e. the underlying hyper-graph, the sets of labels and the potentials. In the following, we use brackets to distinguish between an upper index and a power, i.e.
(A)^n means the n-th power of A, whereas n is an upper index in the expression A^n. We keep, however, the standard notation R^n for the n-dimensional vector space.

Sequential computation of M best diverse solutions [3]. Instead of looking for a single labeling with lowest energy, one might ask for a set of labelings with low energies, yet significantly different from each other. In order to find such M diverse labelings y^1, ..., y^M, the method proposed in [3] solves a sequence of problems of the form

  y^m = argmin_y [ E(y) − λ ∑_{i=1}^{m−1} ∆(y, y^i) ]  (2)

for m = 1, 2, ..., M, where λ > 0 determines a trade-off between diversity and energy, y^1 is the MAP-solution, and the function ∆ : L_V × L_V → R defines the diversity of two labelings. In other words, ∆(y, y′) takes a large value if y and y′ are diverse, in a certain sense, and a small value otherwise. This problem can be seen as an energy minimization problem where, in addition to the initial potentials θ, the potentials −λ∆(·, y^i), associated with an additional factor V, are used.

In the simplest and most commonly used form, ∆(y, y′) is represented by a sum of node-wise diversity measures ∆_v : L_v × L_v → R,

  ∆(y, y′) = ∑_{v∈V} ∆_v(y_v, y′_v) ,  (3)

and the potentials split into a sum of unary potentials, i.e. those associated with the additional factors {v}, v ∈ V. This implies that if efficient graph-cut based inference methods (including α-expansion [6], α-β-swap [6] or their generalizations [1, 10]) are applicable to the initial problem (1), then they remain applicable to the augmented problem (2), which assures the efficiency of the method.

Joint computation of M-best-diverse labelings. The notation f^M({y}) will be used as a shortcut for f^M(y^1, ..., y^M), for any function f^M : (L_V)^M → R. Instead of the greedy sequential procedure (2), it was suggested in [19] to infer all M labelings jointly, by minimizing

  E^M({y}) = ∑_{i=1}^M E(y^i) − λ∆^M({y})  (4)

over y^1, ..., y^M for some λ > 0.
The function ∆^M defines the total diversity of any M labelings. It was shown in [19] that the M labelings obtained according to (4) both have lower total energy ∑_{i=1}^M E(y^i) and are better from the applied point of view than those obtained by the sequential method (2). Hence we build on the formulation (4) in this work.

Though the expression (4) looks complicated, it can be nicely represented in the form (1) and hence constitutes an energy minimization problem. To achieve this, one creates M copies (G^i, L^i_V, θ^i) = (G, L_V, θ) of the initial model (G, L_V, θ). The hyper-graph G^M_1 = (V^M_1, F^M_1) for the new task is defined as follows. The set of nodes in the new graph is the union of the node sets from the considered copies, V^M_1 = ∪_{i=1}^M V^i. The factors are F^M_1 = ∪_{i=1}^M F^i ∪ {V^M_1}, i.e. again the union of the initial ones, extended by a special factor corresponding to the diversity penalty, which depends on all nodes of the new graph. Each node v ∈ V^i is associated with the label set L^i_v = L_v. The corresponding potentials θ^M_1 are defined as {−λ∆^M, θ^1, ..., θ^M}; see Fig. 1a for an illustration. The model (G^M_1, L_{V^M_1}, θ^M_1) corresponds to the energy (4). An optimal M-tuple of labelings, corresponding to a minimum of (4), is a trade-off between low energies of the individual labelings y^i and their total diversity.

Complexity of the diversity problem (4). Though the formulation (4) leads to better results than (2), minimizing E^M is computationally demanding even if the original energy E can be easily (approximately) optimized. This is due to the intrinsically repulsive structure of the diversity potentials −λ∆^M: according to the intuitive meaning of diversity, similar labels are penalized more than different ones. Consider the simplest case with the Hamming distance applied node-wise as a diversity measure,

  ∆^M({y}) = ∑_{i=1}^{M−1} ∑_{j=i+1}^M ∑_{v∈V} ∆_v(y^i_v, y^j_v),  where ∆_v(y, y′) = ⟦y ≠ y′⟧ .  (5)

Here the expression ⟦A⟧ equals 1 if A is true and 0 otherwise.
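To make the formulations (1), (2), (4) and (5) concrete, here is a minimal brute-force sketch on a toy binary model. The potentials and λ below are illustrative inventions, not taken from the paper, and exhaustive search stands in for the efficient graph-cut solvers the paper relies on:

```python
import itertools

# Illustrative toy energy on two binary variables: two unary terms plus one
# Potts edge (values chosen for demonstration only).
def energy(y):
    return 0.2 * y[0] + 0.3 * y[1] + 0.6 * (y[0] != y[1])

labelings = list(itertools.product([0, 1], repeat=2))

def hamming(y, yp):
    """Node-wise Hamming distance between two labelings, as in (5)."""
    return sum(a != b for a, b in zip(y, yp))

def div_m_best(M, lam):
    """Sequential DivMBest (2): greedily pick each labeling by minimizing its
    energy minus lam times its Hamming diversity (3) to the ones found so far."""
    found = []
    for _ in range(M):
        found.append(min(labelings,
                         key=lambda y: energy(y) - lam * sum(hamming(y, yi)
                                                             for yi in found)))
    return found

def joint_m_best(M, lam):
    """Joint formulation (4)+(5): minimize the sum of individual energies
    minus lam times the total pairwise Hamming diversity of the M-tuple."""
    def EM(tup):
        div = sum(hamming(tup[i], tup[j])
                  for i in range(M - 1) for j in range(i + 1, M))
        return sum(energy(y) for y in tup) - lam * div
    return min(itertools.product(labelings, repeat=M), key=EM)

y_map = min(labelings, key=energy)      # MAP-solution of (1)
seq = div_m_best(M=2, lam=1.0)          # y^1 is the MAP-solution
joint = joint_m_best(M=2, lam=1.0)
```

On this toy instance both schemes recover the same diverse pair; the paper's point is that on real models the joint objective (4) attains lower total energy, at the price of a harder inference problem.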
The corresponding factor graph is sketched in Fig. 1c. Such potentials cannot be optimized with efficient graph-cut based methods; moreover, as shown in [19], the bounds delivered by solvers based on the LP relaxation [31] are very loose in practice. Indeed, the solutions delivered by such solvers are significantly inferior even to the results of the sequential method (2).

To cope with this issue, a clique encoding representation of (4) was proposed in [19]. In this representation the M-tuples of labels y^1_v, ..., y^M_v (in the M nodes corresponding to a single initial node v) are considered as the new labels. In this way the difficult diversity factors are incorporated into the unary factors of the new representation, and the pairwise factors are adjusted accordingly. This allows to (approximately) solve the problem (4) with graph-cut based techniques whenever those techniques are applicable to the energy E of a single labeling. The disadvantage of the clique encoding representation is the exponential growth of the label space, which is reflected in a significantly higher inference time for the problem (4) compared to the procedure (2). In what follows, we show an alternative transformation of the problem (4), which (i) does not have this drawback (its size is basically the same as that of (4)) and (ii) allows to solve (4) exactly in case the energy E is submodular.

Node-wise diversity. In what follows we mainly consider node-wise diversity measures, i.e. those which can be represented in the form

  ∆^M({y}) = ∑_{v∈V} ∆^M_v({y}_v)  (6)

for some node diversity measures ∆^M_v : (L_v)^M → R; see Fig. 1b for an illustration.

3 M-Best-Diverse Labelings for Submodular Problems

Submodularity. In what follows we assume that the label sets L_v, v ∈ V, are completely ordered. This implies that for any s, t ∈ L_v their maximum and minimum, denoted by s ∨ t and s ∧ t respectively, are well-defined.
Similarly, let y^1 ∨ y^2 and y^1 ∧ y^2 denote the node-wise maximum and minimum of any two labelings y^1, y^2 ∈ L_A, A ⊆ V. A potential θ_f is called submodular if for any two labelings y^1, y^2 ∈ L_f it holds¹ that

  θ_f(y^1) + θ_f(y^2) ≥ θ_f(y^1 ∨ y^2) + θ_f(y^1 ∧ y^2) .  (7)

A potential θ will be called supermodular if (−θ) is submodular.

¹ Pairwise binary potentials satisfying θ_f(0, 1) + θ_f(1, 0) ≥ θ_f(0, 0) + θ_f(1, 1) form an important special case of this definition.

An energy E is called submodular if for any two labelings y^1, y^2 ∈ L_V it holds:

  E(y^1) + E(y^2) ≥ E(y^1 ∨ y^2) + E(y^1 ∧ y^2) .  (8)

Submodularity of the energy trivially follows from the submodularity of all its non-unary potentials θ_f, f ∈ F, |f| > 1. In the pairwise case the converse also holds: submodularity of the energy implies submodularity of all its (pairwise) potentials (e.g. [31, Thm. 12]). There are efficient methods for solving energy minimization problems with submodular potentials, based on a transformation into a min-cut/max-flow problem [21, 28, 16] in case all potentials are either unary or pairwise, or into a submodular max-flow problem in the higher-order case [20, 10, 1].

Ordered M solutions. In what follows we write z^1 ≤ z^2 for any two vectors z^1 and z^2, meaning that the inequality holds coordinate-wise. For an arbitrary set A we call a function f : (A)^n → R of n variables permutation invariant if for any (x_1, x_2, ..., x_n) ∈ (A)^n and any permutation π it holds that f(x_1, x_2, ..., x_n) = f(x_{π(1)}, x_{π(2)}, ..., x_{π(n)}). In what follows we consider mainly permutation invariant diversity measures.

Let us consider two arbitrary labelings y^1, y^2 ∈ L_V and their node-wise minimum y^1 ∧ y^2 and maximum y^1 ∨ y^2. Since (y^1_v ∧ y^2_v, y^1_v ∨ y^2_v) is either equal to (y^1_v, y^2_v) or to (y^2_v, y^1_v), for any permutation invariant node diversity measure it holds that ∆^2_v(y^1_v, y^2_v) = ∆^2_v(y^1_v ∧ y^2_v, y^1_v ∨ y^2_v). This in turn implies ∆^2(y^1 ∧ y^2, y^1 ∨ y^2) = ∆^2(y^1, y^2) for any node-wise diversity measure of the form (6).
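The pairwise form of condition (7) is easy to check numerically. The following sketch (the helper name and example potentials are our own, not from the paper) verifies it exhaustively for a pairwise potential over an ordered label set; note that the Potts term is submodular for two labels but, as exploited later in Section 4.2, not for three or more:

```python
import itertools

def is_submodular_pairwise(theta, labels):
    """Exhaustively check (7) for a pairwise potential theta on an ordered
    label set: theta(s1,t1)+theta(s2,t2) >= theta(s1 v s2, t1 v t2)
                                           + theta(s1 ^ s2, t1 ^ t2)."""
    for s1, t1, s2, t2 in itertools.product(labels, repeat=4):
        lhs = theta(s1, t1) + theta(s2, t2)
        rhs = (theta(max(s1, s2), max(t1, t2))
               + theta(min(s1, s2), min(t1, t2)))
        if lhs < rhs - 1e-12:
            return False
    return True

potts = lambda s, t: float(s != t)       # Potts term [y != y']
attract = lambda s, t: float(s == t)     # rewards equal labels: supermodular
```

For binary labels `potts` passes the check (this is exactly the footnote condition θ(0,1)+θ(1,0) ≥ θ(0,0)+θ(1,1), here 2 ≥ 0), `attract` fails it, and `potts` over three labels fails as well.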
If E is submodular, then from (8) it additionally follows that

  E^2(y^1 ∧ y^2, y^1 ∨ y^2) ≤ E^2(y^1, y^2) ,  (9)

where E^2 is defined as in (4). Note that (y^1 ∧ y^2) ≤ (y^1 ∨ y^2). Generalizing these considerations to M labelings, one obtains

Theorem 1. Let E be submodular and ∆^M be a node-wise diversity measure with each component ∆^M_v being permutation invariant. Then there exists an ordered M-tuple (y^1, ..., y^M), y^i ≤ y^j for 1 ≤ i < j ≤ M, such that for any (z^1, ..., z^M) ∈ (L_V)^M it holds that

  E^M({y}) ≤ E^M({z}) ,  (10)

where E^M is defined as in (4).

Theorem 1 in particular claims that in the binary case L_v = {0, 1}, v ∈ V, the optimal M labelings define nested subsets of nodes corresponding to the label 1.

Submodular formulation of the M-best-diverse problem. Due to Theorem 1, for submodular energies and node-wise diversity measures it is sufficient to consider only ordered M-tuples of labelings. This order can be enforced by modifying the diversity measure accordingly:

  ∆̂^M_v(y^1, ..., y^M) := { ∆^M_v(y^1, ..., y^M),  if y^1 ≤ y^2 ≤ ... ≤ y^M;
                            −∞,                     otherwise } ,  (11)

and using it in place of the initial measure ∆^M_v. Note that ∆̂^M_v is not permutation invariant. In practice one can use sufficiently big numbers in place of ∞ in (11). This implies

Lemma 1. Let E be submodular and ∆^M be a node-wise diversity measure with each component ∆^M_v being permutation invariant. Then any solution of the ordering-enforcing M-best-diverse problem

  Ê^M({y}) = ∑_{i=1}^M E(y^i) − λ ∑_{v∈V} ∆̂^M_v(y^1_v, ..., y^M_v)  (12)

is a solution of the corresponding M-best-diverse problem (4),

  E^M({y}) = ∑_{i=1}^M E(y^i) − λ ∑_{v∈V} ∆^M_v(y^1_v, ..., y^M_v) ,  (13)

where ∆̂^M_v and ∆^M_v are related by (11).

We will say that a vector (y^1, ..., y^M) ∈ (L_v)^M is ordered if it holds that y^1 ≤ y^2 ≤ ... ≤ y^M. Given submodularity of E, the submodularity (and hence solvability) of E^M in (13) would trivially follow from the supermodularity of ∆^M. However, there hardly exist supermodular diversity measures.
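Theorem 1 can be sanity-checked numerically on a small instance: restricting the search to coordinate-wise ordered tuples does not change the optimum of E^M. A brute-force sketch for M = 2 under illustrative potentials of our own choosing (a submodular binary chain with Potts edges):

```python
import itertools

def energy(y):
    """Illustrative submodular binary chain energy: unaries plus Potts edges."""
    return (0.2 * y[0] + 0.3 * y[1] - 0.4 * y[2]
            + 0.5 * (y[0] != y[1]) + 0.5 * (y[1] != y[2]))

def EM(tup, lam):
    """E^M of (4) with the node-wise Hamming diversity (5), here for M = 2."""
    div = sum(a != b for a, b in zip(tup[0], tup[1]))
    return energy(tup[0]) + energy(tup[1]) - lam * div

labelings = list(itertools.product([0, 1], repeat=3))
pairs = list(itertools.product(labelings, repeat=2))

best_all = min(EM(t, 0.3) for t in pairs)
# Ordered pairs only: y^1 <= y^2 coordinate-wise, as in Theorem 1.
ordered = [t for t in pairs if all(a <= b for a, b in zip(t[0], t[1]))]
best_ordered = min(EM(t, 0.3) for t in ordered)
```

Since the energy is submodular and the Hamming diversity is node-wise and permutation invariant, the two minima coincide, even though the ordered set of pairs is strictly smaller.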
The ordering provided by Theorem 1 and the corresponding form of the ordering-enforcing diversity measure ∆̂^M significantly weaken this condition, which is precisely stated by the following lemma. In the lemma we substitute the ∞ of (11) with sufficiently big values, such as C_∞ ≥ max_{y} E^M({y}), for the sake of numerical implementation. Moreover, these values differ from each other in order to keep ∆̂^M_v supermodular.

Lemma 2. Let for any two ordered vectors y = (y^1, ..., y^M) ∈ (L_v)^M and z = (z^1, ..., z^M) ∈ (L_v)^M it hold that

  ∆_v(y ∨ z) + ∆_v(y ∧ z) ≥ ∆_v(y) + ∆_v(z) ,  (14)

where y ∨ z and y ∧ z are the element-wise maximum and minimum respectively. Then ∆̂_v, defined as

  ∆̂_v(y^1, ..., y^M) = ∆_v(y^1, ..., y^M) − C_∞ · [ ∑_{i=1}^{M−1} ∑_{j=i+1}^M ( 3^{max(0, y^i − y^j)} − 1 ) ] ,  (15)

is supermodular. Note that (11) and (15) coincide up to the infinity values in (11).

Though condition (14) resembles the supermodularity condition, it has to be fulfilled for ordered vectors only. The following corollaries of Lemma 2 give the two most important examples of diversity measures fulfilling (14).

Corollary 1. Let |L_v| = 2 for all v ∈ V. Then the statement of Lemma 2 holds for arbitrary ∆_v : (L_v)^M → R.

Corollary 2. Let ∆^M_v(y^1, ..., y^M) = ∑_{i=1}^{M−1} ∑_{j=i+1}^M ∆_{ij}(y^i, y^j). Then the condition of Lemma 2 is equivalent to

  ∆_{ij}(y^i, y^j) + ∆_{ij}(y^i + 1, y^j + 1) ≥ ∆_{ij}(y^i + 1, y^j) + ∆_{ij}(y^i, y^j + 1)  for y^i < y^j  (16)

and 1 ≤ i < j ≤ M. In particular, condition (16) is satisfied for the Hamming distance ∆_{ij}(y, y′) = ⟦y ≠ y′⟧.

The following theorem trivially summarizes Lemmas 1 and 2:

Theorem 2. Let the energy E and the diversity measure ∆^M satisfy the conditions of Lemmas 1 and 2. Then the ordering-enforcing problem (12) delivers a solution to the M-best-diverse problem (13) and is submodular. Moreover, submodularity of all non-unary potentials of the energy E implies submodularity of all non-unary potentials of the ordering-enforcing energy Ê^M.
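Condition (16) is also simple to verify by enumeration. The sketch below (helper name ours) checks it for a pairwise diversity term over a small ordered label set; the Hamming distance passes, while a similarity-rewarding term ⟦y = y′⟧ does not:

```python
def satisfies_cond_16(delta, labels):
    """Check (16): delta(a,b) + delta(a+1,b+1) >= delta(a+1,b) + delta(a,b+1)
    for every pair a < b for which a+1 and b+1 are still valid labels."""
    for a in labels:
        for b in labels:
            if a < b and a + 1 in labels and b + 1 in labels:
                lhs = delta(a, b) + delta(a + 1, b + 1)
                rhs = delta(a + 1, b) + delta(a, b + 1)
                if lhs < rhs:
                    return False
    return True

hamming = lambda y, yp: int(y != yp)        # the measure named in Corollary 2
similarity = lambda y, yp: int(y == yp)     # counter-example: violates (16)
```

For the Hamming distance the inequality holds with equality when b > a + 1 and strictly (2 ≥ 1) when b = a + 1.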
4 Experimental evaluation

We have tested our algorithms in two application scenarios: (a) interactive foreground/background image segmentation, where annotation is available in the form of scribbles [3], and (b) category level segmentation on PASCAL VOC 2012 data [9]. As baselines we use: (i) the sequential method DivMBest (2) proposed in [3, 25] and (ii) the clique-encoding method CE [19] for the (approximate) joint computation of M-best-diverse labelings. As mentioned in Section 2, this method addresses the energy E^M defined in (4); however, it has the disadvantage that its label space grows exponentially with M. Our method, which solves the problem (12) with the Hamming diversity measure (5) by transforming it into a min-cut/max-flow problem [21, 28, 16] and running the solver [5], is denoted as Joint-DivMBest.

The diversity measures used in the experiments are: the Hamming distance (5) HD, Label Cost LC, Label Transitions LT and Hamming Ball HB. The last three measures are higher-order diversity potentials introduced in [25] and used only in connection with the DivMBest algorithm. If not stated otherwise, the Hamming distance (5) is used as a diversity measure. Both the clique encoding (CE) based approaches and the submodularity-based methods proposed in this work use only the Hamming distance as a diversity measure. As [25] suggests, certain combinations of different diversity measures may lead to better results. To denote such combinations, the signs ⊗ and ⊕ were used in [25]. We refer to [25] for a detailed description of this notation and treat such combined methods as black boxes in our comparison.

                     M=2              M=6              M=10
                     quality  time    quality  time    quality  time
  DivMBest           93.16    0.45    95.02    2.4     95.16    4.4
  CE                 95.13    2.9     96.01    47.6    96.19    1247
  Joint-DivMBest     95.13    0.77    96.01    5.2     96.19    20.4

Table 1: Interactive segmentation: per-pixel accuracies (quality) for the best segmentation out of M, and run-times. Compare to the average quality 91.57 of a single labeling.
The Hamming distance is used as a diversity measure. The run-time is in milliseconds (ms). Joint-DivMBest quantitatively outperforms DivMBest and is equal to CE; however, it is considerably faster than CE.

4.1 Interactive segmentation

Instead of returning a single segmentation corresponding to a MAP-solution, diversity methods provide to the user a small number of possible low-energy results based on the scribbles. Following [3], we model only the first iteration of such an interactive procedure, i.e. we consider the user scribbles to be given and compare the sets of segmentations returned by the compared diversity methods. The authors of [3] kindly provided us their 50 graphical model instances, corresponding to the MAP-inference problem (1). They are based on a subset of the PASCAL VOC 2010 [9] segmentation challenge with manually added scribbles. The pairwise potentials constitute contrast sensitive Potts terms [4], which are submodular. This implies that (i) the MAP-inference is solvable by min-cut/max-flow algorithms [21] and (ii) Theorem 2 is applicable and the M-best-diverse solutions can be found by reducing the ordering-preserving problem (12) to min-cut/max-flow and applying the corresponding algorithm.

A quantitative comparison and run-times of the considered methods are provided in Table 1, where each method was used with the parameter λ (see (2), (4)) optimally tuned via cross-validation. Following [3], as a quality measure we used the per-pixel accuracy of the best solution for each sample, averaged over all test images. The methods CE and Joint-DivMBest gave the same quality, which confirms the observation made in [19] that CE returns an exact MAP-solution for each sample in this dataset. Combined methods with more sophisticated diversity measures return results that are either inferior to DivMBest or only negligibly improved ones, hence we omitted them. The run-times provided are also averaged over all samples.
The max-flow algorithm was used for DivMBest and Joint-DivMBest, and α-expansion for CE.

Summary. It can be seen that Joint-DivMBest qualitatively outperforms DivMBest and is equal to CE. However, it is considerably faster than the latter (the difference grows exponentially with M), and its run-time is of the same order of magnitude as that of DivMBest.

4.2 Category level segmentation

The category level segmentation task of the PASCAL VOC 2012 challenge [9] contains 1449 validation images with known ground truth, which we used for the evaluation of the diversity methods. The corresponding pairwise models with contrast sensitive Potts terms of the form θ_{uv}(y, y′) = w_{uv}⟦y ≠ y′⟧, uv ∈ F, were used in [25] and kindly provided to us by the authors. Contrary to interactive segmentation, the label sets contain 21 elements, and hence the respective MAP-inference problem (1) is not submodular anymore. However, it still can be approximately solved by α-expansion or α-β-swap.

Since the MAP-inference problem (1) is not submodular in this experiment, Theorem 2 is not applicable. We used two ways to overcome this. First, we modified the diversity potentials according to (15), as if Theorem 2 were applicable. This basically means that we explicitly looked for ordered M best diverse labelings. The resulting inference problem was addressed with α-β-swap (since neither max-flow nor the α-expansion algorithm is applicable). We refer to this method as Joint-DivMBest-ordered.

The second way to overcome the non-submodularity problem is based on learning. Using the structured SVM technique, we trained pairwise potentials with additional constraints enforcing their submodularity, as done e.g. in [11]. We kept the contrast terms w_{uv} and learned only a single submodular function θ̂(y, y′), which we used in place of ⟦y ≠ y′⟧. After the learning, all our potentials had the form θ_{uv}(y, y′) = w_{uv} θ̂(y, y′), uv ∈ F. We refer to this method as Joint-DivMBest-learned. For this model we use max-flow [5] as an exact inference method and α-expansion [4] as a fast approximate inference method.

A quantitative comparison and run-times of the considered methods are provided in Table 2, where each method was used with the parameter λ (see (2), (4)) optimally tuned via cross-validation on the PASCAL VOC 2012 validation set. Following [3], we used the intersection-over-union quality measure, averaged over all images. Among the combined methods with higher-order diversity measures we selected only those providing the best results. The method CE3 [19] is a hybrid of DivMBest and CE, delivering a reasonable trade-off between running time and inference accuracy for the model E^M (4).

                                   MAP inference          M=5             M=15            M=16
                                                          quality  time   quality  time   quality  time
  DivMBest                         α-exp [4]              51.21    0.01   52.90    0.03   53.07    0.03
  HB*                              HB: HOP-MAP [30]       51.71    -      55.32    -      -        -
  DivMBest* ⊕ HB*                  HB: HOP-MAP [30]       -        -      -        -      55.89    -
  HB* ⊗ LC* ⊗ LT*                  LT: coop. cuts [17]    -        -      -        -      56.97    -
  DivMBest* ⊗ HB* ⊗ LC* ⊗ LT*      LT: coop. cuts [17]    -        -      -        -      57.39    -
  CE                               α-exp [4]              54.22    733    -        -      -        -
  CE3                              α-exp [4]              54.14    2.28   57.76    5.87   58.36    7.24
  Joint-DivMBest-ordered           α-β-swap [4]           53.81    0.01   56.08    0.08   56.31    0.08
  Joint-DivMBest-learned           max-flow [5]           53.85    0.38   56.14    35.47  56.33    38.67
  Joint-DivMBest-learned           α-exp [4]              53.84    0.01   56.08    0.08   56.31    0.08

Table 2: PASCAL VOC 2012. Intersection-over-union quality measure / running time. The best segmentation out of M is considered. Compare to the average quality 43.51 of a single labeling. Time is in seconds (s). The notation '-' corresponds to the absence of a result due to computational reasons or inapplicability of the method. Methods marked (*) were not run by us; their results were taken from [25] directly. The MAP-inference column references the slowest inference technique among those used by the method.
The quantitative results delivered by Joint-DivMBest-ordered and Joint-DivMBest-learned are very similar (though the latter is negligibly better); they significantly outperform those of DivMBest and are only slightly inferior to those of CE3. However, the run-times of Joint-DivMBest-ordered and of the α-expansion version of Joint-DivMBest-learned are comparable to those of DivMBest and outperform all other competitors, due to the use of fast inference algorithms and a linearly growing label space, contrary to the label space of CE3, which grows as (L_v)^3. Though we do not know the exact run-times for the combined methods (where ⊕ and ⊗ are used), we expect them to be significantly higher than those of DivMBest and Joint-DivMBest-ordered because of the intrinsically slow MAP-inference techniques used. However, contrary to the latter, the inference in Joint-DivMBest-learned can be exact due to submodularity of the underlying energy.

5 Conclusions

We have shown that submodularity of the MAP-inference problem implies a fully ordered set of M best diverse solutions, given a node-wise permutation invariant diversity measure. Enforcing such an ordering leads to a submodular formulation of the joint M-best-diverse problem and implies its efficient solvability. Moreover, we have shown that even in non-submodular cases, when the MAP-inference is (approximately) solvable with efficient graph-cut based methods, enforcing this ordering leads to an M-best-diverse problem which is (approximately) solvable with graph-cut based methods as well. In our test cases (and there are likely others), such an approximative technique leads to notably better results than those provided by the established sequential DivMBest technique [3], whereas its run-time remains comparable to the run-time of DivMBest and is much smaller than the run-times of the other competitors.

References

[1] C. Arora, S. Banerjee, P. Kalra, and S. Maheshwari. Generalized flows for optimal inference in higher order MRF-MAP. TPAMI, 2015.
[2] D.
Batra. An efficient message-passing algorithm for the M-best MAP problem. arXiv:1210.4841, 2012.
[3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-best solutions in Markov random fields. In ECCV. Springer Berlin/Heidelberg, 2012.
[4] Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In ICCV, 2001.
[5] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. TPAMI, 26(9):1124–1137, 2004.
[6] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. TPAMI, 23(11):1222–1239, 2001.
[7] C. Chen, V. Kolmogorov, Y. Zhu, D. N. Metaxas, and C. H. Lampert. Computing the M most probable modes of a graphical model. In AISTATS, 2013.
[8] G. Elidan and A. Globerson. The probabilistic inference challenge (PIC2011).
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results.
[10] A. Fix, A. Gruber, E. Boros, and R. Zabih. A graph cut algorithm for higher-order Markov random fields. In ICCV, 2011.
[11] V. Franc and B. Savchynskyy. Discriminative learning of max-sum classifiers. JMLR, 9:67–104, 2008.
[12] M. Fromer and A. Globerson. An LP view of the M-best MAP problem. In NIPS 22, 2009.
[13] A. Guzman-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In NIPS 25, 2012.
[14] A. Guzman-Rivera, P. Kohli, and D. Batra. DivMCuts: Faster training of structural SVMs with diverse M-best cutting-planes. In AISTATS, 2013.
[15] A. Guzman-Rivera, P. Kohli, D. Batra, and R. A. Rutenbar. Efficiently enforcing diversity in multi-output structured prediction. In AISTATS, 2014.
[16] H. Ishikawa. Exact optimization for Markov random fields with convex priors. TPAMI, 2003.
[17] S. Jegelka and J. Bilmes.
Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR, 2011.
[18] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother. A comparative study of modern inference techniques for structured discrete energy minimization problems. IJCV, pages 1–30, 2015.
[19] A. Kirillov, B. Savchynskyy, D. Schlesinger, D. Vetrov, and C. Rother. Inferring M-best diverse labelings in a single one. In ICCV, 2015.
[20] V. Kolmogorov. Minimizing a sum of submodular functions. Discrete Applied Mathematics, 2012.
[21] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? TPAMI, 2004.
[22] A. Kulesza and B. Taskar. Structured determinantal point processes. In NIPS 23, 2010.
[23] E. L. Lawler. A procedure for computing the K best solutions to discrete optimization problems and its application to the shortest path problem. Management Science, 18(7), 1972.
[24] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2):159–173, 1998.
[25] A. Prasad, S. Jegelka, and D. Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In NIPS 27, 2014.
[26] V. Premachandran, D. Tarlow, and D. Batra. Empirical minimum Bayes risk prediction: How to extract an extra few % performance from vision models with just three more parameters. In CVPR, 2014.
[27] V. Ramakrishna and D. Batra. Mode-marginals: Expressing uncertainty via diverse M-best solutions. In NIPS Workshop on Perturbations, Optimization, and Statistics, 2012.
[28] D. Schlesinger and B. Flach. Transforming an arbitrary minsum problem into a binary one. TU Dresden, Fak. Informatik, 2006.
[29] M. I. Schlesinger and V. Hlavac. Ten lectures on statistical and structural pattern recognition, volume 24. Springer Science & Business Media, 2002.
[30] D.
Tarlow, I. E. Givoni, and R. S. Zemel. HOP-MAP: Efficient message passing with high order potentials. In AISTATS, 2010.
[31] T. Werner. A linear programming approach to max-sum problem: A review. TPAMI, 29(7), 2007.
[32] P. Yadollahpour, D. Batra, and G. Shakhnarovich. Discriminative re-ranking of diverse segmentations. In CVPR, 2013.
[33] C. Yanover and Y. Weiss. Finding the M most probable configurations using loopy belief propagation. In NIPS 17, 2004.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations

Matthieu Courbariaux, École Polytechnique de Montréal, matthieu.courbariaux@polymtl.ca
Yoshua Bengio, Université de Montréal, CIFAR Senior Fellow, yoshua.bengio@gmail.com
Jean-Pierre David, École Polytechnique de Montréal, jean-pierre.david@polymtl.ca

Abstract

Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. −1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space- and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which gradients are accumulated. Like other dropout schemes, BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with it on the permutation-invariant MNIST, CIFAR-10 and SVHN.

1 Introduction

Deep Neural Networks (DNN) have substantially pushed the state-of-the-art in a wide range of tasks, especially in speech recognition [1, 2] and computer vision, notably object recognition from images [3, 4]. More recently, deep learning is making important strides in natural language processing, especially statistical machine translation [5, 6, 7].
Interestingly, one of the key factors that enabled this major progress has been the advent of Graphics Processing Units (GPUs), with speed-ups on the order of 10 to 30-fold, starting with [8], and similar improvements with distributed training [9, 10]. Indeed, the ability to train larger models on more data has enabled the kind of breakthroughs observed in the last few years. Today, researchers and developers designing new deep learning algorithms and applications often find themselves limited by computational capability. This, along with the drive to put deep learning systems on low-power devices (unlike GPUs), is greatly increasing the interest in research and development of specialized hardware for deep networks [11, 12, 13]. Most of the computation performed during training and application of deep networks concerns the multiplication of a real-valued weight by a real-valued activation (in the recognition or forward propagation phase of the back-propagation algorithm) or gradient (in the backward propagation phase of the back-propagation algorithm). This paper proposes an approach called BinaryConnect to eliminate the need for these multiplications by forcing the weights used in these forward and backward propagations to be binary, i.e., constrained to only two values (not necessarily 0 and 1). We show that state-of-the-art results can be achieved with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN. What makes this workable are two ingredients:

1. Sufficient precision is necessary to accumulate and average a large number of stochastic gradients, but noisy weights (and we can view discretization into a small number of values as a form of noise, especially if we make this discretization stochastic) are quite compatible with Stochastic Gradient Descent (SGD), the main type of optimization algorithm for deep learning.
SGD explores the space of parameters by making small and noisy steps, and that noise is averaged out by the stochastic gradient contributions accumulated in each weight. Therefore, it is important to keep sufficient resolution for these accumulators, which at first sight suggests that high precision is absolutely required. [14] and [15] show that randomized or stochastic rounding can be used to provide unbiased discretization. [14] have shown that SGD requires weights with a precision of at least 6 to 8 bits, and [16] successfully train DNNs with 12-bit dynamic fixed-point computation. Besides, the estimated precision of the brain's synapses varies between 6 and 12 bits [17].

2. Noisy weights actually provide a form of regularization which can help generalize better, as previously shown with variational weight noise [18], Dropout [19, 20] and DropConnect [21], which add noise to the activations or to the weights. For instance, DropConnect [21], which is closest to BinaryConnect, is a very efficient regularizer that randomly substitutes half of the weights with zeros during propagations. What these previous works show is that only the expected value of the weight needs to have high precision, and that noise can actually be beneficial.

The main contributions of this article are the following.
• We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations (Section 2).
• We show that BinaryConnect is a regularizer and we obtain near state-of-the-art results on the permutation-invariant MNIST, CIFAR-10 and SVHN (Section 3).
• We make the code for BinaryConnect available.¹

2 BinaryConnect

In this section we give a more detailed view of BinaryConnect, considering which two values to choose, how to discretize, how to train, and how to perform inference.

2.1 +1 or −1

Applying a DNN mainly consists in convolutions and matrix multiplications.
The key arithmetic operation of DL is thus the multiply-accumulate operation: artificial neurons are basically multiply-accumulators computing weighted sums of their inputs. BinaryConnect constrains the weights to either +1 or −1 during propagations. As a result, many multiply-accumulate operations are replaced by simple additions (and subtractions). This is a huge gain, as fixed-point adders are much less expensive both in terms of area and energy than fixed-point multiply-accumulators [22].

2.2 Deterministic vs stochastic binarization

The binarization operation transforms the real-valued weights into the two possible values. A very straightforward binarization operation would be based on the sign function:

    w_b = { +1 if w ≥ 0,
            −1 otherwise,        (1)

where w_b is the binarized weight and w the real-valued weight. Although this is a deterministic operation, averaging this discretization over the many input weights of a hidden unit could compensate for the loss of information. An alternative that allows a finer and more correct averaging process to take place is to binarize stochastically:

    w_b = { +1 with probability p = σ(w),
            −1 with probability 1 − p,        (2)

where σ is the "hard sigmoid" function:

    σ(x) = clip((x + 1)/2, 0, 1) = max(0, min(1, (x + 1)/2)).        (3)

We use such a hard sigmoid rather than the soft version because it is far less computationally expensive (both in software and specialized hardware implementations) and yielded excellent results in our experiments. It is similar to the "hard tanh" non-linearity introduced by [23]. It is also piece-wise linear and corresponds to a bounded form of the rectifier [24].

¹ https://github.com/MatthieuCourbariaux/BinaryConnect

2.3 Propagations vs updates

Let us consider the different steps of back-propagation with SGD updates and whether it makes sense, or not, to discretize the weights at each of these steps.

1.
Given the DNN input, compute the unit activations layer by layer, leading to the top layer which is the output of the DNN given its input. This step is referred to as the forward propagation.
2. Given the DNN target, compute the training objective's gradient w.r.t. each layer's activations, starting from the top layer and going down layer by layer until the first hidden layer. This step is referred to as the backward propagation or backward phase of back-propagation.
3. Compute the gradient w.r.t. each layer's parameters and then update the parameters using their computed gradients and their previous values. This step is referred to as the parameter update.

Algorithm 1: SGD training with BinaryConnect. C is the cost function for the minibatch, and the functions binarize(w) and clip(w) specify how to binarize and clip the weights. L is the number of layers.
Require: a minibatch of (inputs, targets), previous parameters w_{t−1} (weights) and b_{t−1} (biases), and learning rate η.
Ensure: updated parameters w_t and b_t.
1. Forward propagation:
   w_b ← binarize(w_{t−1})
   For k = 1 to L, compute a_k knowing a_{k−1}, w_b and b_{t−1}
2. Backward propagation:
   Initialize the output layer's activations gradient ∂C/∂a_L
   For k = L to 2, compute ∂C/∂a_{k−1} knowing ∂C/∂a_k and w_b
3. Parameter update:
   Compute ∂C/∂w_b and ∂C/∂b_{t−1} knowing ∂C/∂a_k and a_{k−1}
   w_t ← clip(w_{t−1} − η ∂C/∂w_b)
   b_t ← b_{t−1} − η ∂C/∂b_{t−1}

A key point to understand with BinaryConnect is that we only binarize the weights during the forward and backward propagations (steps 1 and 2) but not during the parameter update (step 3), as illustrated in Algorithm 1. Keeping good-precision weights during the updates is necessary for SGD to work at all. These parameter changes are tiny by virtue of being obtained by gradient descent, i.e., SGD performs a large number of almost infinitesimal changes in the direction that most improves the training objective (plus noise).
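To make Algorithm 1 concrete, the following sketch runs BinaryConnect SGD steps on a toy linear model with squared loss; the model, loss and helper names are our own simplifications for illustration, not the paper's DNN setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_sigmoid(x):
    # sigma(x) = clip((x + 1) / 2, 0, 1), cf. Eq. (3)
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binarize(w, stochastic=False):
    # Eq. (1): sign of w; Eq. (2): sample +/-1 with P(+1) = sigma(w)
    if stochastic:
        return np.where(rng.random(w.shape) < hard_sigmoid(w), 1.0, -1.0)
    return np.where(w >= 0.0, 1.0, -1.0)

def sgd_step(w, x, y, lr=0.1, stochastic=False):
    # Step 1: binarize the real-valued weights before the propagations.
    wb = binarize(w, stochastic)
    # Forward and backward passes see only the binary weights wb.
    pred = x @ wb
    grad = x.T @ (pred - y) / len(y)   # gradient w.r.t. wb
    # Step 3: update the *real-valued* weights, then clip them to [-1, 1].
    return np.clip(w - lr * grad, -1.0, 1.0)

# Toy usage: the stored weights stay in [-1, 1] while only +/-1 values
# ever enter the forward/backward propagations.
x = rng.standard_normal((32, 4))
y = x @ np.array([1.0, -1.0, 1.0, -1.0])
w = rng.uniform(-1.0, 1.0, 4)
for _ in range(50):
    w = sgd_step(w, x, y)
print(np.abs(w).max() <= 1.0)  # True
```

On binary hardware the `x @ wb` products reduce to additions and subtractions, which is the gain the paper targets.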
One way to picture all this is to hypothesize that what matters most at the end of training is the sign of the weights, w*, but that in order to figure it out we perform a lot of small changes to a continuous-valued quantity w, and only at the end consider its sign:

    w* = sign( Σ_t g_t ),        (4)

where g_t is a noisy estimator of ∂C(f(x_t, w_{t−1}, b_{t−1}), y_t)/∂w_{t−1}, C(f(x_t, w_{t−1}, b_{t−1}), y_t) is the value of the objective function on (input, target) example (x_t, y_t) when w_{t−1} are the previous weights, and w* is the final discretized value of the weights. Another way to conceive of this discretization is as a form of corruption, and hence as a regularizer, and our empirical results confirm this hypothesis. In addition, we can make the discretization errors on different weights approximately cancel each other while keeping a lot of precision by randomizing the discretization appropriately. We propose a form of randomized discretization that preserves the expected value of the discretized weight. Hence, at training time, BinaryConnect randomly picks one of two values for each weight, for each minibatch, for both the forward and backward propagation phases of backprop. However, the SGD update is accumulated in a real-valued variable storing the parameter. An interesting analogy for understanding BinaryConnect is the DropConnect algorithm [21]. Just like BinaryConnect, DropConnect only injects noise into the weights during the propagations. Whereas DropConnect's noise is additive Gaussian noise, BinaryConnect's noise is a binary sampling process. In both cases the corrupted value has the clean original value as its expected value.
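The expectation-preserving property is easy to verify analytically: for w ∈ [−1, 1], E[w_b] = (+1)·σ(w) + (−1)·(1 − σ(w)) = 2σ(w) − 1 = 2·(w + 1)/2 − 1 = w. A quick numerical check of this claim (our own sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def hard_sigmoid(x):
    # sigma(x) = clip((x + 1) / 2, 0, 1), cf. Eq. (3)
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

# Stochastic binarization (Eq. 2): +1 with probability sigma(w), -1 otherwise.
w = np.array([-0.8, -0.2, 0.0, 0.5])
samples = np.where(rng.random((200_000, w.size)) < hard_sigmoid(w), 1.0, -1.0)

# The sample mean of the binary weights recovers the real-valued weights.
print(samples.mean(axis=0))  # approximately [-0.8, -0.2, 0.0, 0.5]
```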
2.4 Clipping

Since the binarization operation is not influenced by variations of the real-valued weight w when its magnitude is beyond the binary values ±1, and since it is common practice to bound weights (usually the weight vector) in order to regularize them, we have chosen to clip the real-valued weights to the [−1, 1] interval right after the weight updates, as per Algorithm 1. The real-valued weights would otherwise grow very large without any impact on the binary weights.

2.5 A few more tricks

Table 1: Test error rates of a (small) CNN trained on CIFAR-10, depending on the optimization method and on whether the learning rate is scaled with the weights initialization coefficients from [25].

    Optimization         No learning rate scaling    Learning rate scaling
    SGD                  —                           11.45%
    Nesterov momentum    15.65%                      11.30%
    ADAM                 12.81%                      10.47%

We use Batch Normalization (BN) [26] in all of our experiments, not only because it accelerates the training by reducing internal covariate shift, but also because it reduces the overall impact of the weights scale. Moreover, we use the ADAM learning rule [27] in all of our CNN experiments. Last but not least, we scale the weights learning rates respectively with the weights initialization coefficients from [25] when optimizing with ADAM, and with the squares of those coefficients when optimizing with SGD or Nesterov momentum [28]. Table 1 illustrates the effectiveness of those tricks.

2.6 Test-Time Inference

Up to now we have introduced different ways of training a DNN with on-the-fly weight binarization. What are reasonable ways of using such a trained network, i.e., performing test-time inference on new examples? We have considered three reasonable alternatives:
1. Use the resulting binary weights w_b (this makes most sense with the deterministic form of BinaryConnect).
2. Use the real-valued weights w, i.e., the binarization only helps to achieve faster training but not faster test-time performance.
3.
In the stochastic case, many different networks can be sampled by sampling a w_b for each weight according to Eq. 2. The ensemble output of these networks can then be obtained by averaging the outputs of the individual networks.
We use the first method with the deterministic form of BinaryConnect. As for the stochastic form of BinaryConnect, we focused on the training advantage and used the second method in the experiments, i.e., test-time inference using the real-valued weights. This follows the practice of Dropout methods, where at test time the "noise" is removed.

Table 2: Test error rates of DNNs trained on MNIST (no convolution and no unsupervised pretraining), CIFAR-10 (no data augmentation) and SVHN, depending on the method.

    Method                        MNIST           CIFAR-10    SVHN
    No regularizer                1.30 ± 0.04%    10.64%      2.44%
    BinaryConnect (det.)          1.29 ± 0.08%    9.90%       2.30%
    BinaryConnect (stoch.)        1.18 ± 0.04%    8.27%       2.15%
    50% Dropout                   1.01 ± 0.04%    —           —
    Maxout Networks [29]          0.94%           11.68%      2.47%
    Deep L2-SVM [30]              0.87%           —           —
    Network in Network [31]       —               10.41%      2.35%
    DropConnect [21]              1.94%           —           —
    Deeply-Supervised Nets [32]   —               9.78%       1.92%

We see that in spite of using only a single bit per weight during propagation, performance is not worse than that of ordinary (no regularizer) DNNs; it is actually better, especially with the stochastic version, suggesting that BinaryConnect acts as a regularizer.

Figure 1: Features of the first layer of an MLP trained on MNIST depending on the regularizer. From left to right: no regularizer, deterministic BinaryConnect, stochastic BinaryConnect and Dropout.

3 Benchmark results

In this section, we show that BinaryConnect acts as a regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.

3.1 Permutation-invariant MNIST

MNIST is a benchmark image classification dataset [33]. It consists in a training set of 60000 and a test set of 10000 28 × 28 gray-scale images representing digits ranging from 0 to 9.
Permutation-invariance means that the model must be unaware of the image (2-D) structure of the data (in other words, CNNs are forbidden). Besides, we do not use any data augmentation, preprocessing or unsupervised pretraining. The MLP we train on MNIST consists in 3 hidden layers of 1024 Rectifier Linear Units (ReLU) [34, 24, 3] and an L2-SVM output layer (L2-SVM has been shown to perform better than Softmax on several classification benchmarks [30, 32]). The square hinge loss is minimized with SGD without momentum. We use an exponentially decaying learning rate. We use Batch Normalization with a minibatch of size 200 to speed up the training. As typically done, we use the last 10000 samples of the training set as a validation set for early stopping and model selection. We report the test error rate associated with the best validation error rate after 1000 epochs (we do not retrain on the validation set). We repeat each experiment 6 times with different initializations. The results are in Table 2. They suggest that the stochastic version of BinaryConnect can be considered a regularizer, although a slightly less powerful one than Dropout, in this context.

Figure 2: Histogram of the weights of the first layer of an MLP trained on MNIST depending on the regularizer. In both cases, it seems that the weights are trying to become deterministic to reduce the training error. It also seems that some of the weights of deterministic BinaryConnect are stuck around 0, hesitating between −1 and 1.

Figure 3: Training curves of a CNN on CIFAR-10 depending on the regularizer. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. Both versions of BinaryConnect significantly augment the training cost, slow down the training and lower the validation error rate, which is what we would expect from a Dropout scheme.

3.2 CIFAR-10

CIFAR-10 is a benchmark image classification dataset.
It consists in a training set of 50000 and a test set of 10000 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. We preprocess the data using global contrast normalization and ZCA whitening. We do not use any data augmentation (which can really be a game changer for this dataset [35]). The architecture of our CNN is:

    (2×128C3)−MP2−(2×256C3)−MP2−(2×512C3)−MP2−(2×1024FC)−10SVM        (5)

where C3 is a 3 × 3 ReLU convolution layer, MP2 is a 2 × 2 max-pooling layer, FC a fully connected layer, and SVM an L2-SVM output layer. This architecture is greatly inspired by VGG [36]. The square hinge loss is minimized with ADAM. We use an exponentially decaying learning rate. We use Batch Normalization with a minibatch of size 50 to speed up the training. We use the last 5000 samples of the training set as a validation set. We report the test error rate associated with the best validation error rate after 500 training epochs (we do not retrain on the validation set). The results are in Table 2 and Figure 3.

3.3 SVHN

SVHN is a benchmark image classification dataset. It consists in a training set of 604K and a test set of 26K 32 × 32 color images representing digits ranging from 0 to 9. We follow the same procedure that we used for CIFAR-10, with a few notable exceptions: we use half the number of hidden units and we train for 200 epochs instead of 500 (because SVHN is quite a big dataset). The results are in Table 2.

4 Related works

Training DNNs with binary weights has been the subject of very recent works [37, 38, 39, 40]. Even though we share the same objective, our approaches are quite different. [37, 38] do not train their DNN with Backpropagation (BP) but with a variant called Expectation Backpropagation (EBP). EBP is based on Expectation Propagation (EP) [41], which is a variational Bayes method used to do inference in probabilistic graphical models.
Let us compare their method to ours:
• It optimizes the weights posterior distribution (which is not binary). In this regard, our method is quite similar, as we keep a real-valued version of the weights.
• It binarizes both the neurons' outputs and the weights, which is more hardware friendly than just binarizing the weights.
• It yields a good classification accuracy for fully connected networks (on MNIST) but not (yet) for ConvNets.
[39, 40] retrain neural networks with ternary weights during forward and backward propagations, i.e.:
• They train a neural network with high precision,
• After training, they ternarize the weights to three possible values −H, 0 and +H, and adjust H to minimize the output error,
• And eventually, they retrain with ternary weights during propagations and high-precision weights during updates.
By comparison, we train all the way with binary weights during propagations, i.e., our training procedure could be implemented with efficient specialized hardware avoiding the forward and backward propagation multiplications, which amounts to about 2/3 of the multiplications (cf. Algorithm 1).

5 Conclusion and future works

We have introduced a novel binarization scheme for weights during forward and backward propagations called BinaryConnect. We have shown that it is possible to train DNNs with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN datasets and achieve nearly state-of-the-art results. The impact of such a method on specialized hardware implementations of deep networks could be major, by removing the need for about 2/3 of the multiplications, and thus potentially allowing a speed-up by a factor of 3 at training time.
With the deterministic version of BinaryConnect the impact at test time could be even more important: getting rid of the multiplications altogether, and reducing the memory requirement of deep networks by a factor of at least 16 (from 16-bit single-float precision to single-bit precision), which has an impact on the memory-to-computation bandwidth and on the size of the models that can be run. Future works should extend those results to other models and datasets, and explore getting rid of the multiplications altogether during training, by removing their need from the weight update computation.

6 Acknowledgments

We thank the reviewers for their many constructive comments. We also thank Roland Memisevic for helpful discussions. We thank the developers of Theano [42, 43], a Python library which allowed us to easily develop a fast and optimized code for GPU. We also thank the developers of Pylearn2 [44] and Lasagne, two Deep Learning libraries built on top of Theano. We are also grateful for funding from NSERC, the Canada Research Chairs, Compute Canada, and CIFAR.

References

[1] Geoffrey Hinton, Li Deng, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6):82–97, Nov. 2012.
[2] Tara Sainath, Abdel-rahman Mohamed, Brian Kingsbury, and Bhuvana Ramabhadran. Deep convolutional neural networks for LVCSR. In ICASSP 2013, 2013.
[3] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS'2012, 2012.
[4] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. Technical report, arXiv:1409.4842, 2014.
[5] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul.
Fast and robust neural network joint models for statistical machine translation. In Proc. ACL'2014, 2014.
[6] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS'2014, 2014.
[7] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015.
[8] Rajat Raina, Anand Madhavan, and Andrew Y. Ng. Large-scale deep unsupervised learning using graphics processors. In ICML'2009, 2009.
[9] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
[10] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. A. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS'2012, 2012.
[11] Sang Kyun Kim, Lawrence C. McAfee, Peter Leonard McMahon, and Kunle Olukotun. A highly scalable restricted Boltzmann machine FPGA implementation. In Field Programmable Logic and Applications (FPL 2009), pages 367–372. IEEE, 2009.
[12] Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 269–284. ACM, 2014.
[13] Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, et al. DaDianNao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609–622. IEEE, 2014.
[14] Lorenz K. Muller and Giacomo Indiveri. Rounding methods for neural networks with low resolution synaptic weights. arXiv preprint arXiv:1504.05767, 2015.
[15] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML'2015, 2015.
[16] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Low precision arithmetic for deep learning. In arXiv:1412.7024, ICLR'2015 Workshop, 2015.
[17] Thomas M. Bartol, Cailey Bromer, Justin P. Kinney, Michael A. Chirillo, Jennifer N. Bourne, Kristen M. Harris, and Terrence J. Sejnowski. Hippocampal spine head sizes are highly precise. bioRxiv, 2015.
[18] Alex Graves. Practical variational inference for neural networks. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348–2356. Curran Associates, Inc., 2011.
[19] Nitish Srivastava. Improving neural networks with dropout. Master's thesis, U. Toronto, 2013.
[20] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
[21] Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using DropConnect. In ICML'2013, 2013.
[22] J.-P. David, K. Kalach, and N. Tittley. Hardware complexity of modular multiplication and exponentiation. Computers, IEEE Transactions on, 56(10):1308–1319, Oct 2007.
[23] R. Collobert. Large Scale Machine Learning. PhD thesis, Université de Paris VI, LIP6, 2004.
[24] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS'2011, 2011.
[25] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS'2010, 2010.
[26] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
[27] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[28] Yu. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR (translated as Soviet Math. Docl.), 269:543–547, 1983.
[29] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. Technical Report arXiv:1302.4389, Université de Montréal, February 2013.
[30] Yichuan Tang. Deep learning using linear support vector machines. Workshop on Challenges in Representation Learning, ICML, 2013.
[31] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[32] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
[33] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[34] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML'2010, 2010.
[35] Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070, 2014.
[36] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[37] Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS'2014, 2014.
[38] Zhiyong Cheng, Daniel Soudry, Zexi Mao, and Zhenzhong Lan. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.
[39] Kyuyeon Hwang and Wonyong Sung. Fixed-point feedforward deep neural network design using weights +1, 0, and −1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pages 1–6. IEEE, 2014.
[40] Jonghong Kim, Kyuyeon Hwang, and Wonyong Sung.
X1000 real-time phoneme recognition VLSI using feed-forward deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 7510–7514. IEEE, 2014.
[41] Thomas P. Minka. Expectation propagation for approximate Bayesian inference. In UAI'2001, 2001.
[42] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
[43] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[44] Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, and Yoshua Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
2015
109
5,602
Differentially Private Subspace Clustering

Yining Wang, Yu-Xiang Wang and Aarti Singh
Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA
{yiningwa,yuxiangw,aarti}@cs.cmu.edu

Abstract

Subspace clustering is an unsupervised learning problem that aims at grouping data points into multiple "clusters" so that data points in a single cluster lie approximately on a low-dimensional linear subspace. It was originally motivated by 3D motion segmentation in computer vision, but has recently been applied generically to a wide range of statistical machine learning problems, which often involve sensitive datasets about human subjects. This raises a dire concern for data privacy. In this work, we build on the framework of differential privacy and present two provably private subspace clustering algorithms. We demonstrate via both theory and experiments that one of the presented methods enjoys formal privacy and utility guarantees, while the other asymptotically preserves differential privacy and has good performance in practice. Along the course of the proof, we also obtain two new provable guarantees, for agnostic subspace clustering and for the graph connectivity problem, which might be of independent interest.

1 Introduction

Subspace clustering was originally proposed to solve very specific computer vision problems having a union-of-subspace structure in the data, e.g., motion segmentation under an affine camera model [11] or face clustering under Lambertian illumination models [15]. As it gains increasing attention in the statistics and machine learning community, researchers have started to use it as an agnostic learning tool in social networks [5], movie recommendation [33] and biological datasets [19]. The growing applicability of subspace clustering in these new domains inevitably raises the concern of data privacy, as many such applications involve dealing with sensitive information.
For example, [19] applies subspace clustering to identify diseases from personalized medical data, and [33] in fact uses subspace clustering as an effective tool to conduct linkage attacks on individuals in movie rating datasets. Nevertheless, privacy issues in subspace clustering have been little explored in the literature, the only exception being a brief analysis and discussion in [29]. However, the algorithms and analysis presented in [29] have several notable deficiencies. For example, data points are assumed to be incoherent, and only the differential privacy of individual features of a user is protected, rather than the entire user profile in the database. The latter means it is possible for an attacker to infer with high confidence whether a particular user is in the database, given sufficient side information.

It is perhaps understandable that little work has focused on private subspace clustering, which is by all means a challenging task. For example, a negative result in [29] shows that if utility is measured in terms of exact clustering, then no private subspace clustering algorithm exists when neighboring databases are allowed to differ on an entire user profile. In addition, state-of-the-art subspace clustering methods such as Sparse Subspace Clustering (SSC, [11]) lack a complete analysis of their clustering output, due to the notorious "graph connectivity" problem [21]. Finally, clustering can have high global sensitivity even if only cluster centers are released, as depicted in Figure 1. As a result, general private data releasing schemes like output perturbation [7, 8, 2] do not apply.

In this work, we present a systematic and principled treatment of differentially private subspace clustering. To circumvent the negative result in [29], we use the perturbation of the recovered low-dimensional subspaces from the ground truth as the utility measure. Our contributions are two-fold.
First, we analyze two efficient algorithms based on the sample-aggregate framework [22] and establish formal privacy and utility guarantees when data are generated from some stochastic model or satisfy certain deterministic separation conditions. New results on (non-private) subspace clustering are obtained along the way, including a fully agnostic subspace clustering guarantee on well-separated datasets using stability arguments, and an exact clustering guarantee for threshold-based subspace clustering (TSC, [14]) in the noisy setting. In addition, we employ the exponential mechanism [18] and propose a novel Gibbs sampler for sampling from the resulting distribution, which involves a novel tweak in sampling from a matrix Bingham distribution. The method works well in practice, and we show it is closely related to the well-known mixtures of probabilistic PCA model [27].

Related work. Subspace clustering can be thought of as a generalization of PCA and k-means clustering: the former aims at finding a single low-dimensional subspace, while the latter uses zero-dimensional subspaces as cluster centers. There has been extensive research on private PCA [2, 4, 10] and k-means [2, 22, 26]. Perhaps the most similar work to ours is [22, 4]: [22] applies the sample-aggregate framework to k-means clustering, and [4] employs the exponential mechanism to recover private principal vectors. In this paper we give non-trivial generalizations of both works to the private subspace clustering setting.

2 Preliminaries

2.1 Notations

For a vector x ∈ R^d, its p-norm is defined as ∥x∥_p = (Σ_i x_i^p)^{1/p}. If p is not explicitly specified then the 2-norm is used. For a matrix A ∈ R^{n×m}, we use σ_1(A) ≥ · · · ≥ σ_n(A) ≥ 0 to denote its singular values (assuming without loss of generality that n ≤ m). We use ∥·∥_ξ to denote matrix norms, with ξ = 2 the matrix spectral norm and ξ = F the Frobenius norm. That is, ∥A∥_2 = σ_1(A) and ∥A∥_F = (Σ_{i=1}^n σ_i(A)^2)^{1/2}.
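As a quick sanity check of these definitions, both matrix norms can be read directly off the singular values (a minimal NumPy sketch; the toy matrix is our own example):

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 4.0, 0.0]])            # n = 2 <= m = 3, as assumed above
sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending

spectral = sigma[0]                        # ||A||_2 = sigma_1(A)
frobenius = np.sqrt(np.sum(sigma ** 2))    # ||A||_F = (sum_i sigma_i(A)^2)^{1/2}

assert np.isclose(spectral, np.linalg.norm(A, 2))
assert np.isclose(frobenius, np.linalg.norm(A, 'fro'))
```

Here the singular values are 4 and 3, so the spectral norm is 4 and the Frobenius norm is 5.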
For a q-dimensional subspace S ⊆ R^d, we associate with it a basis U ∈ R^{d×q}, where the q columns of U are orthonormal and S = range(U). We use S^d_q to denote the set of all q-dimensional subspaces of R^d. Given x ∈ R^d and S ⊆ R^d, the distance d(x, S) is defined as d(x, S) = inf_{y∈S} ∥x − y∥_2. If S is a subspace associated with a basis U, then we have d(x, S) = ∥x − P_S(x)∥_2 = ∥x − UU^⊤x∥_2, where P_S(·) denotes the projection operator onto the subspace S. For two subspaces S, S′ of dimension q, the distance d(S, S′) is defined as the Frobenius norm of the sine of the matrix of principal angles; i.e.,

    d(S, S′) = ∥sin Θ(S, S′)∥_F = ∥UU^⊤ − U′U′^⊤∥_F,    (1)

where U, U′ are orthonormal bases associated with S and S′, respectively.

2.2 Subspace clustering

Given n data points x_1, · · · , x_n ∈ R^d, the task of subspace clustering is to cluster the data points into k clusters so that data points within a cluster lie approximately on a low-dimensional subspace. Without loss of generality, we assume ∥x_i∥_2 ≤ 1 for all i = 1, · · · , n. We use X = {x_1, · · · , x_n} to denote the dataset and X ∈ R^{d×n} to denote the data matrix obtained by stacking all data points column-wise. Subspace clustering seeks to find k q-dimensional subspaces Ĉ = {Ŝ_1, · · · , Ŝ_k} so as to minimize the squared Wasserstein distance, defined as

    d_W^2(Ĉ, C*) = min_{π:[k]→[k]} Σ_{i=1}^k d^2(Ŝ_i, S*_{π(i)}),    (2)

where the minimum is taken over all permutations π on [k] and S* are the optimal/ground-truth subspaces. In a model-based approach, C* is fixed and the data points {x_i}_{i=1}^n are generated either deterministically or stochastically from one of the ground-truth subspaces in C* with noise corruption; in a completely agnostic setting, C* is defined as the minimizer of the k-means subspace clustering objective:

    C* := argmin_{C={S_1,··· ,S_k}⊆S^d_q} cost(C; X) = argmin_{C={S_1,··· ,S_k}⊆S^d_q} (1/n) Σ_{i=1}^n min_j d^2(x_i, S_j).    (3)

To simplify notation, we use Δ_k^2(X) = cost(C*; X) to denote the cost of the optimal solution.
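The two distances above can be sketched directly in NumPy (an illustrative sketch; `subspace_dist` and `wasserstein_dist` are our own names, and the brute-force minimum over permutations is only feasible for small k):

```python
import itertools
import numpy as np

def subspace_dist(U, V):
    """d(S, S') = ||U U^T - V V^T||_F for orthonormal bases U, V (Eq. 1)."""
    return np.linalg.norm(U @ U.T - V @ V.T, 'fro')

def wasserstein_dist(C_hat, C_star):
    """d_W of Eq. (2): minimum over permutations of the summed squared
    subspace distances (brute force over all k! matchings)."""
    k = len(C_hat)
    best = min(sum(subspace_dist(C_hat[i], C_star[p[i]]) ** 2 for i in range(k))
               for p in itertools.permutations(range(k)))
    return best ** 0.5

# Two 1-D subspaces of R^2: the x-axis and the y-axis.
ex = np.array([[1.0], [0.0]])
ey = np.array([[0.0], [1.0]])
d = wasserstein_dist([ex, ey], [ey, ex])  # the swap permutation matches -> 0.0
```

The permutation in Eq. (2) is what makes d_W invariant to the arbitrary ordering of the estimated subspaces.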
Algorithm 1  The sample-aggregate framework [22]
1: Input: X = {x_i}_{i=1}^n ⊆ R^d, number of subsets m, privacy parameters ε, δ; f, d_M.
2: Initialize: s = √m, α = ε/(5·√(2 ln(2/δ))), β = ε/(4(D + ln(2/δ))).
3: Subsampling: Select m random subsets of X of size n/m, independently and uniformly at random without replacement. Repeat this step until no single data point appears in more than √m of the sets. Denote the subsampled subsets X_{S_1}, · · · , X_{S_m}.
4: Separate queries: Compute B = {s_i}_{i=1}^m ⊆ R^D, where s_i = f(X_{S_i}).
5: Aggregation: Compute g(B) = s_{i*}, where i* = argmin_{i=1}^m r_i(t_0) with t_0 = (m+s)/2 + 1. Here r_i(t_0) denotes the distance d_M(·, ·) between s_i and the t_0-th nearest neighbor of s_i in B.
6: Noise calibration: Compute S(B) = 2 max_k (ρ(t_0 + (k+1)s) · e^{−βk}), where ρ(t) is the mean of the top ⌊s/β⌋ values in {r_1(t), · · · , r_m(t)}.
7: Output: A(X) = g(B) + (S(B)/α)·u, where u is a standard Gaussian random vector.

2.3 Differential privacy

Definition 2.1 (Differential privacy, [7, 8]). A randomized algorithm A is (ε, δ)-differentially private if for all X, Y satisfying d(X, Y) = 1 and all sets S of possible outputs the following holds:

    Pr[A(X) ∈ S] ≤ e^ε Pr[A(Y) ∈ S] + δ.    (4)

In addition, if δ = 0 then the algorithm A is ε-differentially private.

In our setting, the distance d(·, ·) between two datasets X and Y is defined as the number of different columns in X and Y. Differential privacy ensures that the output distribution is obfuscated to the point that every user has plausible deniability about being in the dataset, and in addition any inference about an individual user will have nearly the same confidence before and after the private release.

3 Sample-aggregation based private subspace clustering

In this section we first summarize the sample-aggregate framework introduced in [22] and argue why it should be preferred to conventional output perturbation mechanisms [7, 8] for subspace clustering.
We then analyze two efficient algorithms based on the sample-aggregate framework and prove formal privacy and utility guarantees. We also prove new results in our analysis regarding the stability of k-means subspace clustering (Lem. 3.3) and the graph connectivity (i.e., consistency) of noisy threshold-based subspace clustering (TSC, [14]) under a stochastic model (Lem. 3.5).

3.1 Smooth local sensitivity and the sample-aggregate framework

Figure 1: Illustration of the instability of k-means subspace clustering solutions (d = 2, k = 2, q = 1). Blue dots represent evenly spaced data points on the unit circle; blue crosses indicate an additional data point. Red lines are optimal solutions.

Most existing privacy frameworks [7, 8] are based on the idea of global sensitivity, which is defined as the maximum output perturbation ∥f(X_1) − f(X_2)∥_ξ, where the maximum is over all neighboring databases X_1, X_2 and ξ = 1 or 2. Unfortunately, the global sensitivity of clustering problems is usually high even if only cluster centers are released. For example, Figure 1 shows that the global sensitivity of k-means subspace clustering can be as high as O(1), which ruins the algorithm's utility.

To circumvent the above-mentioned challenges, Nissim et al. [22] introduce the sample-aggregate framework based on the concept of a smooth version of local sensitivity. Unlike global sensitivity, local sensitivity measures the maximum perturbation ∥f(X) − f(X′)∥_ξ over all databases X′ neighboring the input database X. The proposed sample-aggregate framework (pseudocode in Alg. 1) exploits local sensitivity and comes with the following guarantee:

Theorem 3.1 ([22], Theorem 4.2). Let f : D → R^D be an efficiently computable function, where D is the collection of all databases and D is the output dimension. Let d_M(·, ·) be a semimetric on the output space of f.¹ Set ε > 2D/√m and m = ω(log² n). The sample-aggregate algorithm A in Algorithm 1 is an efficient (ε, δ)-differentially private algorithm.
Furthermore, if f and m are chosen such that the ℓ_1 norm of the output of f is bounded by Λ and

    Pr_{X_S⊆X} [ d_M(f(X_S), c) ≤ r ] ≥ 3/4    (5)

for some c ∈ R^D and r > 0, then the standard deviation of the added Gaussian noise is upper bounded by O(r/ε) + (Λ/ε)·e^{−Ω(ε√m/D)}. In addition, when m satisfies m = ω(D² log²(r/Λ)/ε²), with high probability each coordinate of A(X) − c̄ is upper bounded by O(r/ε), where c̄, depending on A(X), satisfies d_M(c, c̄) = O(r).

Let f be any subspace clustering solver that outputs k estimated low-dimensional subspaces, and let d_M be the Wasserstein distance defined in Eq. (2). Theorem 3.1 provides a privacy guarantee for an efficient meta-algorithm with any f. In addition, the utility guarantee holds under some further assumptions on the input dataset X. In the following sections we establish such utility guarantees. The main idea is to prove stability results of the form outlined in Eq. (5) for particular subspace clustering solvers and then apply Theorem 3.1.

3.2 The agnostic setting

We first consider the setting where the data points {x_i}_{i=1}^n are arbitrarily placed. Under such an agnostic setting, the optimal solution C* is defined as the one that minimizes the k-means cost as in Eq. (3). The solver f is taken to be any (1+ϵ)-approximation² of optimal k-means subspace clustering; that is, f always outputs subspaces Ĉ satisfying cost(Ĉ; X) ≤ (1 + ϵ)·cost(C*; X). Efficient core-set based approximation algorithms exist, for example, in [12]. The key task of this section is to identify assumptions under which the stability condition in Eq. (5) holds with respect to an approximate solver f. The example given in Figure 1 also suggests that an identifiability issue arises when the input data X itself cannot be well clustered. For example, no two straight lines can well approximate data uniformly distributed on a circle.
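The (1+ϵ)-approximation guarantee above is stated in terms of the objective of Eq. (3), which is straightforward to evaluate once each subspace is given by an orthonormal basis (a minimal sketch; `cost` is our own name):

```python
import numpy as np

def cost(C, X):
    """k-means subspace clustering objective of Eq. (3):
    (1/n) * sum_i min_j d^2(x_i, S_j), each S_j given as an orthonormal basis."""
    n = X.shape[1]
    total = 0.0
    for i in range(n):
        x = X[:, i]
        # d^2(x, S) = ||x - U U^T x||^2 via the projection onto range(U)
        total += min(np.sum((x - U @ (U.T @ x)) ** 2) for U in C)
    return total / n

ex = np.array([[1.0], [0.0]])              # the x-axis of R^2
X = np.array([[1.0, 0.0],
              [0.0, 0.5]])                 # one point on the axis, one off it
print(cost([ex], X))  # -> 0.125 : (0 + 0.5^2) / 2
```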
To circumvent this difficulty, we impose the following well-separation condition on the input data X:

Definition 3.2 (Well-separation condition for k-means subspace clustering). A dataset X is (φ, η, ψ)-well separated if there exist constants φ, η and ψ, all between 0 and 1, such that

    Δ²_k(X) ≤ min{ φ²Δ²_{k−1}(X), Δ²_{k,−}(X) − ψ, Δ²_{k,+}(X) + η },    (6)

where Δ_{k−1}, Δ_{k,−} and Δ_{k,+} are defined as Δ²_{k−1}(X) = min_{S_{1:k−1}∈S^d_q} cost({S_i}; X); Δ²_{k,−}(X) = min_{S_1∈S^d_{q−1}, S_{2:k}∈S^d_q} cost({S_i}; X); and Δ²_{k,+}(X) = min_{S_1∈S^d_{q+1}, S_{2:k}∈S^d_q} cost({S_i}; X).

The first condition in Eq. (6), Δ²_k(X) ≤ φ²Δ²_{k−1}(X), requires that the input dataset X cannot be well clustered using k − 1 instead of k clusters. It was introduced in [23] to analyze the stability of k-means solutions. For subspace clustering, we need two further conditions regarding the intrinsic dimension of each subspace. The condition Δ²_k(X) ≤ Δ²_{k,−}(X) − ψ asserts that replacing a q-dimensional subspace with a (q−1)-dimensional one is not sufficient, while Δ²_k(X) ≤ Δ²_{k,+}(X) + η means that an additional subspace dimension does not help much with clustering X.

The following lemma is our main stability result for subspace clustering on well-separated datasets. It states that when a candidate clustering Ĉ is close to the optimal clustering C* in terms of clustering cost, they are also close in terms of the Wasserstein distance defined in Eq. (2).

Lemma 3.3 (Stability of agnostic k-means subspace clustering). Assume X is (φ, η, ψ)-well separated with φ² < 1/1602 and ψ > η. Suppose a candidate clustering Ĉ = {Ŝ_1, · · · , Ŝ_k} ⊆ S^d_q satisfies cost(Ĉ; X) ≤ a · cost(C*; X) for some a < (1 − 802φ²)/(800φ²). Then the following holds:

    d_W(Ĉ, C*) ≤ 600√2·φ²·√k / ((1 − 150φ²)(ψ − η)).    (7)

The following theorem is then a simple corollary, with a complete proof in Appendix B.

¹ d_M(·, ·) satisfies d_M(x, y) ≥ 0, d_M(x, x) = 0 and d_M(x, y) ≤ d_M(x, z) + d_M(y, z) for all x, y, z.
² Here ϵ is an approximation constant and is not related to the privacy parameter ε.
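Returning to Algorithm 1, its aggregation step (step 5) simply picks the candidate output lying in the densest region of B. A toy sketch of that single step (our own function names, with a plain Euclidean d_M for illustration; the subsampling and noise-calibration steps are omitted):

```python
import numpy as np

def aggregate(B, dist, t0):
    """Aggregation step (step 5 of Algorithm 1): return the candidate s_i whose
    distance to its t0-th nearest neighbour in B is smallest."""
    m = len(B)
    r = np.empty(m)
    for i in range(m):
        d = sorted(dist(B[i], B[j]) for j in range(m))
        r[i] = d[t0]                      # d[0] = 0 is the distance to itself
    return B[int(np.argmin(r))]

# Toy candidates: two nearby outputs and one outlier; d_M is plain Euclidean here.
B = [np.array([0.0]), np.array([0.1]), np.array([5.0])]
euclid = lambda a, b: float(np.linalg.norm(a - b))
g = aggregate(B, euclid, t0=1)            # picks a point in the dense region, not 5.0
```

The robustness of this step is exactly why a single unstable subsample (e.g., the outlier produced by one bad subset) does not dominate the released output.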
Algorithm 2  Threshold-based subspace clustering (TSC), a simplified version
1: Input: X = {x_i}_{i=1}^n ⊆ R^d, number of clusters k and number of neighbors s.
2: Thresholding: construct G ∈ {0, 1}^{n×n} by connecting each x_i to the s other data points in X with the largest absolute inner products |⟨x_i, x′⟩|. Complete G so that it is undirected.
3: Clustering: Let X^(1), · · · , X^(ℓ) be the connected components in G. Construct X̄^(ℓ) by sampling q points from X^(ℓ) uniformly at random without replacement.
4: Output: subspaces Ĉ = {Ŝ^(ℓ)}_{ℓ=1}^k; Ŝ^(ℓ) is the subspace spanned by the q points in X̄^(ℓ).

Theorem 3.4. Fix a (φ, η, ψ)-well separated dataset X with n data points, φ² < 1/1602 and ψ > η. Suppose X_S ⊆ X is a subset of X of size m, sampled uniformly at random without replacement. Let Ĉ = {Ŝ_1, · · · , Ŝ_k} be a (1 + ϵ)-approximation of optimal k-means subspace clustering computed on X_S. If m = Ω(kqd·log(qd/(γ′Δ²_k(X))) / (γ′²Δ⁴_k(X))) with γ′ < (1 − 802φ²)/(800φ²) − 2(1 + ϵ), then we have

    Pr_{X_S} [ d_W(Ĉ, C*) ≤ 600√2·φ²·√k / ((1 − 150φ²)(ψ − η)) ] ≥ 3/4,    (8)

where C* = {S*_1, · · · , S*_k} is the optimal clustering on X; that is, cost(C*; X) = Δ²_k(X).

Consequently, applying Theorem 3.4 together with the sample-aggregate framework, we obtain a weak polynomial-time ε-differentially private algorithm for agnostic k-means subspace clustering, with the additional amount of per-coordinate Gaussian noise upper bounded by O(φ²√k/(ε(ψ − η))). Our bound is comparable to the one obtained in [22] for private k-means clustering, except for the (ψ − η) term, which characterizes the well-separatedness under the subspace clustering scenario.

3.3 The stochastic setting

We further consider the case when data points are stochastically generated from some underlying "true" subspace set C* = {S*_1, · · · , S*_k}. Such settings were extensively investigated in the previous development of subspace clustering algorithms [24, 25, 14].
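The simplified TSC of Algorithm 2, which will serve as the solver f in this setting, can be sketched as follows (our own minimal implementation; for robustness it fits a top-q SVD basis per component rather than spanning q sampled points, and it assumes the number of components matches k):

```python
import numpy as np

def tsc(X, s, q):
    """Simplified TSC (Algorithm 2): build an s-nearest-neighbour graph using
    |<x_i, x_j>|, take connected components as clusters, fit a basis per cluster."""
    n = X.shape[1]
    G = np.zeros((n, n), dtype=bool)
    for i in range(n):
        ips = np.abs(X.T @ X[:, i])
        ips[i] = -1.0                     # exclude the point itself
        G[i, np.argsort(ips)[-s:]] = True
    G |= G.T                              # complete G so that it is undirected
    labels = -np.ones(n, dtype=int)       # connected components via DFS
    comp = 0
    for root in range(n):
        if labels[root] >= 0:
            continue
        stack = [root]
        while stack:
            v = stack.pop()
            if labels[v] >= 0:
                continue
            labels[v] = comp
            stack.extend(np.nonzero(G[v])[0])
        comp += 1
    # basis per cluster: top-q left singular vectors of the cluster's data matrix
    bases = [np.linalg.svd(X[:, labels == c])[0][:, :q] for c in range(comp)]
    return labels, bases

# Two orthogonal 1-D clusters in R^2.
X = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.95]])
labels, bases = tsc(X, s=1, q=1)
```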
Below we give a precise definition of the considered stochastic subspace clustering model.

The stochastic model. For every cluster ℓ associated with subspace S*_ℓ, a data point x^(ℓ)_i ∈ R^d belonging to cluster ℓ can be written as x^(ℓ)_i = y^(ℓ)_i + ε^(ℓ)_i, where y^(ℓ)_i is sampled uniformly at random from {y ∈ S*_ℓ : ∥y∥_2 = 1} and ε_i ∼ N(0, σ²/d · I_d) for some noise parameter σ.

Under the stochastic setting we take the solver f to be the Threshold-based Subspace Clustering (TSC, [14]) algorithm; a simplified version of TSC is presented in Alg. 2. An alternative idea is to apply the results of the previous section, since the stochastic model implies a well-separated dataset when the noise level σ is small. However, the running time of TSC is O(n²d), which is much more efficient than core-set based methods. TSC is provably correct in that the similarity graph G has no false connections and is connected per cluster, as shown in the following lemma:

Lemma 3.5 (Connectivity of TSC). Fix γ > 1 and assume max_ℓ 0.04·n_ℓ ≤ s ≤ min_ℓ n_ℓ/6. If for every ℓ ∈ {1, · · · , k} the number of data points n_ℓ and the noise level σ satisfy

    n_ℓ / log n_ℓ > γ·π·√(2q)·(12π)^{q−1} / (0.01·(q/2 − 1)(q − 1));

    σ(1 + σ)·√(log n)·√q / √d ≤ 1/(15 log n) − √(1 − min_{ℓ≠ℓ′} d²(S*_ℓ, S*_{ℓ′})/q);

    σ̄ < √(d/(24 log n)) · [ cos( (12π/(γ√(2πq)) · log n_ℓ/n_ℓ)^{1/(q−1)} ) − cos( (0.01(q/2 − 1)(q − 1)/√π)^{1/(q−1)} ) ],

where σ̄ = 2√5·σ + σ², then with probability at least 1 − n²e^{−√d} − n·Σ_ℓ e^{−n_ℓ/400} − Σ_ℓ n_ℓ^{1−γ}/(γ log n_ℓ) − 12/n − Σ_ℓ n_ℓ·e^{−c(n_ℓ−1)}, the connected components in G correspond exactly to the k subspaces.

The conditions in Lemma 3.5 characterize the interaction between the sample complexity n_ℓ, the noise level σ and the "signal" level min_{ℓ≠ℓ′} d(S*_ℓ, S*_{ℓ′}). Theorem 3.6 is then a simple corollary of Lemma 3.5; complete proofs are deferred to Appendix C.

Theorem 3.6 (Stability of TSC on stochastic data). Assume the conditions in Lemma 3.5 hold with respect to n′ = n/m for ω(log² n) ≤ m ≤ o(n).
Assume in addition that lim_{n→∞} n_ℓ = ∞ for all ℓ = 1, · · · , k and that the failure probability does not exceed 1/8. Then for every ϵ > 0 we have

    lim_{n→∞} Pr_{X_S} [ d_W(Ĉ, C*) > ϵ ] = 0.    (9)

Compared to Theorem 3.4 for the agnostic model, Theorem 3.6 shows that one can achieve consistent estimation of the underlying subspaces under a stochastic model. It is an interesting question to derive finite sample bounds for the differentially private TSC algorithm.

3.4 Discussion

It is worth noting that the sample-aggregate framework is an (ε, δ)-differentially private mechanism for any computational subroutine f. However, the utility claim (i.e., the O(r/ε) bound on each coordinate of A(X) − c̄) requires the stability of the particular subroutine f, as outlined in Eq. (5). It is unfortunately hard to theoretically argue for the stability of state-of-the-art subspace clustering methods such as sparse subspace clustering (SSC, [11]), due to the "graph connectivity" issue [21].³ Nevertheless, we observe satisfactory performance of SSC based algorithms in simulations (see Sec. 5). It remains an open question to derive a utility guarantee for (user-level) differentially private SSC.

4 Private subspace clustering via the exponential mechanism

In Section 3 we analyzed two algorithms with provable privacy and utility guarantees for subspace clustering based on the sample-aggregate framework. However, empirical evidence shows that sample-aggregate based private clustering suffers from poor utility in practice [26]. In this section, we propose a practical private subspace clustering algorithm based on the exponential mechanism [18]. In particular, given the dataset X with n data points, we propose to sample parameters θ = ({S_ℓ}_{ℓ=1}^k, {z_i}_{i=1}^n), where S_ℓ ∈ S^d_q and z_i ∈ {1, · · · , k}, from the following distribution:

    p(θ; X) ∝ exp( −(ε/2) · Σ_{i=1}^n d²(x_i, S_{z_i}) ),    (10)

where ε > 0 is the privacy parameter. The following proposition shows that exact sampling from the distribution in Eq.
(10) results in a provably differentially private algorithm. Its proof is trivial and is deferred to Appendix D.1. Note that, unlike sample-aggregate based methods, the exponential mechanism can privately release the clustering assignment z. This does not violate the lower bound in [29], because the released clustering assignment z is not guaranteed to be exactly correct.

Proposition 4.1. The randomized algorithm A : X ↦ θ that outputs one sample from the distribution defined in Eq. (10) is ε-differentially private.

4.1 A Gibbs sampling implementation

It is hard in general to sample parameters from distributions as complicated as that in Eq. (10). We present a Gibbs sampler that iteratively samples subspaces {S_ℓ} and cluster assignments {z_i} from their conditional distributions.

Update of z_i: When {S_ℓ} and z_{−i} are fixed, the conditional distribution of z_i is

    p(z_i | {S_ℓ}_{ℓ=1}^k, z_{−i}; X) ∝ exp( −(ε/2) · d²(x_i, S_{z_i}) ).    (11)

Since d(x_i, S_{z_i}) can be computed efficiently (given an orthonormal basis of S_{z_i}), the update of z_i reduces to sampling from a categorical distribution.

Update of S_ℓ: Let X̃^(ℓ) = {x_i ∈ X : z_i = ℓ} denote the data points assigned to cluster ℓ and let ñ_ℓ = |X̃^(ℓ)|. Denote by X̃^(ℓ) ∈ R^{d×ñ_ℓ} the matrix whose columns are the data points in X̃^(ℓ). The distribution over S_ℓ conditioned on z can then be written as

    p(S_ℓ = range(U_ℓ) | z; X) ∝ exp( (ε/2) · tr(U_ℓ^⊤ A_ℓ U_ℓ) );   U_ℓ ∈ R^{d×q}, U_ℓ^⊤U_ℓ = I_{q×q},    (12)

where A_ℓ = X̃^(ℓ) X̃^(ℓ)⊤ is the unnormalized sample covariance matrix. The distribution in Eq. (12) is a special case of the matrix Bingham distribution, which admits a Gibbs sampler [16]. We give implementation details in Appendix D.2, with modifications so that the resulting Gibbs sampler is empirically more efficient for a wide range of parameter settings.

³ Recently, [28] established a full clustering guarantee for SSC, however, under strong assumptions.

4.2 Discussion

The proposed Gibbs sampler resembles the k-plane algorithm for subspace clustering [3].
It is in fact a "probabilistic" version of k-plane, since sampling is performed at each iteration rather than deterministic updates. Furthermore, the proposed Gibbs sampler can be viewed as posterior sampling for the following generative model: first sample U_ℓ uniformly at random from S^d_q for each subspace S_ℓ; afterwards, cluster assignments {z_i}_{i=1}^n are sampled such that Pr[z_i = j] = 1/k, and x_i is set as x_i = U_ℓ y_i + P_{U_ℓ^⊥} w_i, where y_i is sampled uniformly at random from the q-dimensional unit ball⁴ and w_i ∼ N(0, I_d/ε). The connection between this generative model and the Gibbs sampler is formally justified in Appendix D.3.

The generative model is strikingly similar to the well-known mixtures of probabilistic PCA (MPPCA, [27]) model obtained by setting the variance parameters σ_ℓ in MPPCA to √(1/ε). The only differences are that the y_i are sampled uniformly at random from a unit ball, and that the noise w_i is constrained to U_ℓ^⊥, the orthogonal complement of U_ℓ. Note that this is closely related to the earlier observation that "posterior sampling is private" [20, 6, 31], but differs in that we constructed a model from a private procedure rather than the other way round.

As the privacy parameter ε → ∞ (i.e., no privacy guarantee), we arrive immediately at the exact k-plane algorithm, and the posterior distribution concentrates around the optimal k-means solution (C*, z*). This behavior is similar to what a small-variance asymptotic analysis of MPPCA models reveals [30]. On the other hand, the proposed Gibbs sampler is significantly different from previous Bayesian probabilistic PCA formulations [34, 30] in that the subspaces are sampled from a matrix Bingham distribution. Finally, we remark that the proposed Gibbs sampler is only asymptotically private, because Proposition 4.1 requires exact (or nearly exact [31]) sampling from Eq. (10).

5 Numerical results

We provide numerical results for both the sample-aggregate and Gibbs sampling algorithms on synthetic and real-world datasets.
We also compare with a baseline method implemented on top of the k-plane algorithm [3], with a sample covariance matrix perturbed via the SuLQ framework [2] (details presented in Appendix E). Three solvers are considered for the sample-aggregate framework: threshold-based subspace clustering (TSC, [14]), which has a provable utility guarantee with sample-aggregation on stochastic models, along with sparse subspace clustering (SSC, [11]) and low-rank representation (LRR, [17]), the two state-of-the-art methods for subspace clustering. For Gibbs sampling, we use non-private SSC and LRR solutions as initializations for the Gibbs sampler. All methods are implemented in Matlab.

For synthetic datasets, we first generate k random q-dimensional linear subspaces. Each subspace is generated by first sampling a d × q random Gaussian matrix and then recording its column space. The n data points are then assigned to one of the k subspaces (clusters) uniformly at random. To generate a data point x_i assigned to subspace S_ℓ, we first sample y_i ∈ R^q with ∥y_i∥_2 = 1 uniformly at random from the q-dimensional unit sphere. Afterwards, x_i is set as x_i = U_ℓ y_i + w_i, where U_ℓ ∈ R^{d×q} is an orthonormal basis associated with S_ℓ and w_i ∼ N(0, σ²I_d) is a noise vector.

Figure 2 compares the utility (measured in terms of the k-means objective cost(Ĉ; X) and the Wasserstein distance d_W(Ĉ, C*)) of sample aggregation, Gibbs sampling and SuLQ subspace clustering. As shown in the plots, sample-aggregation algorithms have poor utility unless the privacy parameter ε is truly large (which means very little privacy protection). On the other hand, both Gibbs sampling and SuLQ subspace clustering give reasonably good performance. Figure 2 also shows that SuLQ scales poorly with the ambient dimension d. This is because SuLQ subspace clustering requires calibrating noise to a d × d sample covariance matrix, which induces a large error when d is large. Gibbs sampling appears robust across the various d settings.
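The synthetic data generation described above can be sketched in a few lines of NumPy (variable names are our own; the QR factorization is one standard way to record the column space of a Gaussian matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, q, sigma = 100, 10, 3, 3, 0.01

# k random q-dimensional subspaces: the column space of a d x q Gaussian matrix
bases = [np.linalg.qr(rng.standard_normal((d, q)))[0] for _ in range(k)]

# assign each point to one of the k subspaces uniformly at random
labels = rng.integers(k, size=n)

X = np.empty((d, n))
for i, l in enumerate(labels):
    y = rng.standard_normal(q)
    y /= np.linalg.norm(y)                    # uniform on the unit sphere of S_l
    X[:, i] = bases[l] @ y + sigma * rng.standard_normal(d)   # add isotropic noise
```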
We also experiment on real-world datasets. The right two plots in Figure 2 report utility on a subset of the extended Yale Face Dataset B [13] for face clustering: 5 random individuals are picked, forming a subset of the original dataset with n = 320 data points (images). The dataset is preprocessed by projecting each individual onto a 9D affine subspace via PCA. Such a preprocessing step was adopted in [32, 29] and was theoretically justified in [1]. Afterwards, the ambient dimension of the entire dataset is reduced to d = 50 by random Gaussian projection. The plots show that Gibbs sampling significantly outperforms the other algorithms.

⁴ In MPPCA, latent variables y_i are sampled from a normal distribution N(0, ρ²I_q).

Figure 2: Utility under a fixed privacy budget ε. Top row shows the k-means cost and bottom row shows the Wasserstein distance d_W(Ĉ, C*). From left to right: synthetic dataset, n = 5000, d = 5, k = 3, q = 3, σ = 0.01; synthetic dataset, n = 1000, d = 10, k = 3, q = 3, σ = 0.1; extended Yale Face Dataset B (a subset), n = 320, d = 50, k = 5, q = 9, σ = 0.01. δ is set to 1/(n ln n) for (ε, δ)-private algorithms.
"s.a." stands for smooth sensitivity and "exp." stands for the exponential mechanism. "SuLQ-10" and "SuLQ-50" stand for the SuLQ framework performing 10 and 50 iterations. Gibbs sampling is run for 10000 iterations and the mean of the last 100 samples is reported.

Figure 3: Test statistics, k-means cost and d_W(Ĉ, C*) of 8 trials of the Gibbs sampler under different privacy settings. Synthetic dataset setting: n = 1000, d = 10, k = 3, q = 3, σ = 0.1.

In Figure 3 we investigate the mixing behavior of the proposed Gibbs sampler. For multiple trials of Gibbs sampling we plot the k-means objective, the Wasserstein distance and the test statistic

    (1/√(kq)) · ( Σ_{ℓ=1}^k ∥ (1/T) · Σ_{t=1}^T U_ℓ^{(t)} ∥²_F )^{1/2},

where U_ℓ^{(t)} is a basis sample of S_ℓ at the t-th iteration. The test statistic has mean zero under the distribution in Eq. (10), and a similar statistic was used in [4] as a diagnostic of the mixing behavior of another Gibbs sampler. Figure 3 shows that under various privacy parameter settings, the proposed Gibbs sampler mixes quite well after 10000 iterations.

6 Conclusion

In this paper we considered subspace clustering subject to formal differential privacy constraints. We analyzed two sample-aggregate based algorithms with provable utility guarantees under agnostic and stochastic data models. We also proposed a Gibbs sampling subspace clustering algorithm based on the exponential mechanism that works well in practice. Interesting future directions include utility bounds for state-of-the-art subspace clustering algorithms such as SSC and LRR.
Acknowledgement. This research is supported in part by grant NSF CAREER IIS-1252412, NSF Award BCS-0941518, and a grant by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative administered by the IDM Programme Office.

References
[1] R. Basri and D. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, 2003.
[2] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In PODS, 2005.
[3] P. S. Bradley and O. L. Mangasarian. k-plane clustering. Journal of Global Optimization, 16(1), 2000.
[4] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal algorithms for differentially private principal components. In NIPS, 2012.
[5] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. The Journal of Machine Learning Research, 15(1):2213–2238, 2014.
[6] C. Dimitrakakis, B. Nelson, A. Mitrokotsa, and B. I. Rubinstein. Robust and private Bayesian inference. In Algorithmic Learning Theory, pages 291–305. Springer, 2014.
[7] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT, 2006.
[8] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.
[9] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
[10] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze Gauss: Optimal bounds for privacy-preserving principal component analysis. In STOC, 2014.
[11] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2765–2781, 2013.
[12] D. Feldman, M. Schmidt, and C. Sohler.
Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In SODA, 2013.
[13] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.
[14] R. Heckel and H. Bölcskei. Robust subspace clustering via thresholding. arXiv:1307.4891, 2013.
[15] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman. Clustering appearances of objects under varying illumination conditions. In CVPR, 2003.
[16] P. Hoff. Simulation of the matrix Bingham-von Mises-Fisher distribution, with applications to multivariate and relational data. Journal of Computational and Graphical Statistics, 18(2):438–456, 2009.
[17] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Ma, and Y. Yu. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):171–184, 2012.
[18] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, 2007.
[19] B. McWilliams and G. Montana. Subspace clustering of high-dimensional data: a predictive approach. Data Mining and Knowledge Discovery, 28(3):736–772, 2014.
[20] D. J. Mir. Differential privacy: an exploration of the privacy-utility landscape. PhD thesis, Rutgers University, 2013.
[21] B. Nasihatkon and R. Hartley. Graph connectivity in sparse subspace clustering. In CVPR, 2011.
[22] K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In STOC, 2007.
[23] R. Ostrovsky, Y. Rabani, L. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In FOCS, 2006.
[24] M. Soltanolkotabi, E. J. Candes, et al. A geometric analysis of subspace clustering with outliers. The Annals of Statistics, 40(4):2195–2238, 2012.
[25] M. Soltanolkotabi, E. Elhamifar, and E. Candes. Robust subspace clustering.
The Annals of Statistics, 42(2):669–699, 2014.
[26] D. Su, J. Cao, N. Li, E. Bertino, and H. Jin. Differentially private k-means clustering. arXiv, 2015.
[27] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443–482, 1999.
[28] Y. Wang, Y.-X. Wang, and A. Singh. Clustering consistent sparse subspace clustering. arXiv, 2015.
[29] Y. Wang, Y.-X. Wang, and A. Singh. A deterministic analysis of noisy sparse subspace clustering for dimensionality-reduced data. In ICML, 2015.
[30] Y. Wang and J. Zhu. DP-space: Bayesian nonparametric subspace clustering with small-variance asymptotic analysis. In ICML, 2015.
[31] Y.-X. Wang, S. Fienberg, and A. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In ICML, 2015.
[32] Y.-X. Wang and H. Xu. Noisy sparse subspace clustering. In ICML, pages 89–97, 2013.
[33] A. Zhang, N. Fawaz, S. Ioannidis, and A. Montanari. Guess who rated this movie: Identifying users through subspace clustering. arXiv, 2012.
[34] Z. Zhang, K. L. Chan, J. Kwok, and D.-Y. Yeung. Bayesian inference on principal component analysis using reversible jump Markov chain Monte Carlo. In AAAI, 2004.
No-Regret Learning in Bayesian Games

Jason Hartline, Northwestern University, Evanston, IL, hartline@northwestern.edu
Vasilis Syrgkanis, Microsoft Research, New York, NY, vasy@microsoft.com
Éva Tardos, Cornell University, Ithaca, NY, eva@cs.cornell.edu

Abstract

Recent price-of-anarchy analyses of games of complete information suggest that coarse correlated equilibria, which characterize outcomes resulting from no-regret learning dynamics, have near-optimal welfare. This work provides two main technical results that lift this conclusion to games of incomplete information, a.k.a., Bayesian games. First, near-optimal welfare in Bayesian games follows directly from the smoothness-based proof of near-optimal welfare in the same game when the private information is public. Second, no-regret learning dynamics converge to Bayesian coarse correlated equilibrium in these incomplete information games. These results are enabled by interpretation of a Bayesian game as a stochastic game of complete information.

1 Introduction

A recent confluence of results from game theory and learning theory gives a simple explanation for why good outcomes in large families of strategically-complex games can be expected. The advance comes from (a) a relaxation of the classical notion of equilibrium in games to one that corresponds to the outcome attained when players’ behavior ensures asymptotic no-regret, e.g., via standard online learning algorithms such as weighted majority, and (b) an extension theorem that shows that the standard approach for bounding the quality of classical equilibria automatically implies the same bounds on the quality of no-regret equilibria. This paper generalizes these results from static games to Bayesian games, for example, auctions. Our motivation for considering learning outcomes in Bayesian games is the following. Many important games model repeated interactions between an uncertain set of participants.
Sponsored search, and more generally, online ad-auction market places, are important examples of such games. Platforms are running millions of auctions, with each individual auction slightly different and of only very small value, but such market places have high enough volume to be the financial basis of large industries. This online auction environment is best modeled by a repeated Bayesian game: the auction game is repeated over time, with the set of participants slightly different each time, depending on many factors from budgets of the players to subtle differences in the opportunities. A canonical example to which our methods apply is a single-item first-price auction with players’ values for the item drawn from a product distribution. In such an auction, players simultaneously submit sealed bids and the player with the highest bid wins and pays her bid. The utility of the winner is her value minus her bid; the utilities of the losers are zero. When the values are drawn from non-identical continuous distributions the Bayes-Nash equilibrium is given by a differential equation that is not generally analytically tractable, cf. [8] (and generalizations of this model are computationally hard, see [3]). Again, though their Bayes-Nash equilibria are complex, we show that good outcomes can be expected in these kinds of auctions. Our approach to proving that good equilibria can be expected in repeated Bayesian games is to extend an analogous result for static games,1 i.e., the setting where the same game with the same payoffs and the same players is repeated. Nash equilibrium is the classical model of equilibrium for each stage of the static game. In such an equilibrium the strategies of players may be randomized; however, the randomizations of the players are independent. To measure the quality of outcomes in games Koutsoupias and Papadimitriou [9] introduced the price of anarchy, the ratio of the quality of the worst Nash equilibrium over a socially optimal solution.
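The single-item first-price auction just described is simple to state in code; a minimal sketch (tie-breaking in favor of the lowest index is an arbitrary choice of ours):

```python
def first_price_auction(bids, values):
    """Single-item sealed-bid first-price auction: the highest bidder wins
    and pays her own bid; losers pay nothing. Ties go to the lowest index
    (an arbitrary choice for this sketch). Returns the utility of each
    player: value minus bid for the winner, zero for everyone else."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return [values[i] - bids[i] if i == winner else 0.0
            for i in range(len(bids))]
```

Note that the social welfare, i.e., the sum of these utilities plus the revenue (the winning bid), always equals the winner's value for the item.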
Price of anarchy results have been shown for large families of games, with a focus on those relevant for computer networks. Roughgarden [11] identified the canonical approach for bounding the price of anarchy of a game as showing that it satisfies a natural smoothness condition. There are two fundamental flaws with Nash equilibrium as a description of strategic behavior. First, computing a Nash equilibrium can be PPAD-hard and, thus, neither should efficient algorithms for computing a Nash equilibrium be expected nor should any dynamics (of players with bounded computational capabilities) converge to a Nash equilibrium. Second, natural behavior tends to introduce correlations in strategies and therefore does not converge to Nash equilibrium even in the limit. Both of these issues can be resolved for large families of games. First, there are relaxations of Nash equilibrium which allow for correlation in the players’ strategies. Of these, this paper will focus on coarse correlated equilibrium, which requires that the expected payoff of a player for the correlated strategy be no worse than the expected payoff of any action at the player’s disposal. Second, it was proven by Blum et al. [2] that the (asymptotic) no-regret property of many online learning algorithms implies convergence to the set of coarse correlated equilibria.2 Blum et al. [2] extended the definition of the price of anarchy to outcomes obtained when each player follows a no-regret learning algorithm.3 As coarse correlated equilibria generalize Nash equilibria, it could be that the worst-case equilibrium under the former is worse than under the latter. Roughgarden [11], however, observed that there is often no degradation; specifically, the very same smoothness property that he identified as implying good welfare in Nash equilibrium also proves good welfare of coarse correlated equilibrium (equivalently: for outcomes from no-regret learners).
Thus, for a large family of static games, we can expect strategic behavior to lead to good outcomes. This paper extends this theory to Bayesian games. Our contribution is two-fold: (i) We show an analog of the convergence of no-regret learning to coarse correlated equilibria in Bayesian games, which is of interest independently of our price of anarchy analysis; and (ii) we show that the coarse correlated equilibria of the Bayesian version of any smooth static game have good welfare. Combining these results, we conclude that no-regret learning in smooth Bayesian games achieves good welfare. These results are obtained as follows. It is possible to view a Bayesian game as a stochastic game, i.e., where the payoff structure is fixed but there is a random action on the part of Nature. This viewpoint applied to the above auction example considers a population of bidders associated for each player and, in each stage, Nature uniformly at random selects one bidder from each population to participate in the auction. We re-interpret and strengthen a result of Syrgkanis and Tardos [12] by showing that the smoothness property of the static game (for any fixed profile of bidder values) implies smoothness of this stochastic game. From the perspective of coarse correlated equilibrium, there is no difference between a stochastic game and the non-stochastic game with each random variable replaced with its expected value. Thus, the smoothness framework of Roughgarden [11] extends this result to imply that the coarse correlated equilibria of the stochastic game are good. To show that we can expect good outcomes in Bayesian games, it suffices to show that no-regret learning converges to the coarse correlated equilibrium of this stochastic game. 
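For intuition, the core smoothness-to-welfare calculation (made formal in Definition 4 and Theorem 5 below) is only a few lines; this sketch assumes nonnegative utilities and revenue, so that R(a) ≤ SW(a; v):

```latex
% a is a coarse correlated equilibrium; a*_i(v) is the smoothness deviation,
% which satisfies  sum_i U_i(a*_i(v), a_{-i}; v_i) >= lambda * OPT(v) - mu * R(a).
\mathbb{E}[SW(\mathbf{a}; v)]
  = \sum_{i} \mathbb{E}[U_i(\mathbf{a}; v_i)] + \mathbb{E}[R(\mathbf{a})]
  \ge \sum_{i} \mathbb{E}[U_i(a_i^*(v), \mathbf{a}_{-i}; v_i)] + \mathbb{E}[R(\mathbf{a})]
  \ge \lambda\,\mathrm{OPT}(v) + (1 - \mu)\,\mathbb{E}[R(\mathbf{a})].
% If mu <= 1 the second term is nonnegative, so E[SW] >= lambda * OPT(v).
% If mu > 1, bound R <= SW to get E[SW] >= lambda * OPT(v) - (mu - 1) * E[SW],
% i.e. E[SW] >= (lambda / mu) * OPT(v). Together: E[SW] >= (lambda / max{1, mu}) * OPT(v).
```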
Importantly, when we consider learning algorithms there is a distinction between the stochastic game where players’ payoffs are random variables and the non-stochastic game where players’ payoffs are the expectation of these variables. Our analysis addresses this distinction and, in particular, shows that, in the stochastic game on populations, no-regret learning converges almost surely to the set of coarse correlated equilibria. This result implies that the average welfare of no-regret dynamics will be good, almost surely, and not only in expectation over the random draws of Nature.
Footnote 1: In the standard terms of the game theory literature, we extend results for learning in games of complete information to games of incomplete information.
Footnote 2: This result is a generalization of one of Foster and Vohra [7].
Footnote 3: They referred to this price of anarchy for no-regret learners as the price of total anarchy.

2 Preliminaries

This section describes a general game theoretic environment which includes auctions and resource allocation mechanisms. For this general environment we review the results from the literature for analyzing the social welfare that arises from no-regret learning dynamics in repeated game play. The subsequent sections of the paper will generalize this model and these results to Bayesian games, a.k.a., games of incomplete information.

General Game Form. A general game M is specified by a mapping from a profile a ∈ A ≡ A1 × · · · × An of allowable actions of players to an outcome. Behavior in a game may result in (possibly correlated) randomized actions $\mathbf{a} \in \Delta(\mathcal{A})$.4 Player i’s utility in this game is determined by a profile of individual values v ∈ V ≡ V1 × · · · × Vn and the (implicit) outcome of the game; it is denoted $U_i(\mathbf{a}; v_i) = \mathbb{E}_{a \sim \mathbf{a}}[U_i(a; v_i)]$. In games with a social planner or principal who does not take an action in the game, the utility of the principal is $R(\mathbf{a}) = \mathbb{E}_{a \sim \mathbf{a}}[R(a)]$.
In many games of interest, such as auctions or allocation mechanisms, the utility of the principal is the revenue from payments from the players. We will use the terms mechanism and game interchangeably. In a static game the payoffs of the players (given by v) are fixed. Subsequent sections will consider Bayesian games in the independent private value model, i.e., where player i’s value vi is drawn independently from the other players’ values and is known only privately to player i. Classical game theory assumes complete information for static games, i.e., that v is known, and incomplete information in Bayesian games, i.e., that the distribution over V is known. For our study of learning in games no assumptions of knowledge are made; however, to connect to the classical literature we will use its terminology of complete and incomplete information to refer to static and Bayesian games, respectively.

Social Welfare. We will be interested in analyzing the quality of the outcome of the game as defined by the social welfare, which is the sum of the utilities of the players and the principal. We will denote by $SW(\mathbf{a}; v) = \sum_{i \in [n]} U_i(\mathbf{a}; v_i) + R(\mathbf{a})$ the expected social welfare of mechanism M under a randomized action profile a. For any valuation profile v ∈ V we will denote the optimal social welfare, i.e., the maximum over outcomes of the game of the sum of utilities, by OPT(v).

No-regret Learning and Coarse Correlated Equilibria. For complete information games, i.e., a fixed valuation profile v, Blum et al. [2] analyzed repeated play of players using no-regret learning algorithms, and showed that this play converges to a relaxation of Nash equilibrium, namely, coarse correlated equilibrium.

Definition 1 (no regret). A player achieves no regret in a sequence of play a1, . . . , aT if his regret against any fixed strategy a′_i vanishes to zero:
$\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \big( U_i(a'_i, a^t_{-i}; v_i) - U_i(a^t; v_i) \big) = 0. \quad (1)$

Definition 2 (coarse correlated equilibrium, CCE).
A randomized action profile $\mathbf{a} \in \Delta(\mathcal{A})$ is a coarse correlated equilibrium of a complete information game with valuation profile v if for every player i and every a′_i ∈ Ai:
$\mathbb{E}_{\mathbf{a}}[U_i(\mathbf{a}; v_i)] \ge \mathbb{E}_{\mathbf{a}}[U_i(a'_i, \mathbf{a}_{-i}; v_i)] \quad (2)$

Theorem 3 (Blum et al. [2]). The empirical distribution of actions of any no-regret sequence in a repeated game converges to the set of CCE of the static game.

Price of Anarchy of CCE. Roughgarden [11] gave a unifying framework for comparing the social welfare, under various equilibrium notions including coarse correlated equilibrium, to the optimal social welfare by defining the notion of a smooth game. This framework was extended to games like auctions and allocation mechanisms by Syrgkanis and Tardos [12].
Footnote 4: Bold-face symbols denote random variables.

Figure 1: Examples of smooth games and mechanisms

Game/Mechanism                                              (λ, µ)        POA         Reference
Simultaneous First Price Auction with Submodular Bidders    (1 − 1/e, 1)  e/(e − 1)   [12]
First Price Multi-Unit Auction                              (1 − 1/e, 1)  e/(e − 1)   [5]
First Price Position Auction                                (1/2, 1)      2           [12]
All-Pay Auction                                             (1/2, 1)      2           [12]
Greedy Combinatorial Auction with d-complements             (1 − 1/e, d)  de/(e − 1)  [10]
Proportional Bandwidth Allocation Mechanism                 (1/4, 1)      4           [12]
Submodular Welfare Games                                    (1, 1)        2           [13, 11]
Congestion Games with Linear Delays                         (5/3, 1/3)    5/2         [11]

Definition 4 (smooth mechanism). A mechanism M is (λ, µ)-smooth for some λ, µ ≥ 0 if there exists an independent randomized action profile a∗(v) ∈ ∆(A1) × · · · × ∆(An) for each valuation profile v, such that for any action profile a ∈ A and valuation profile v ∈ V:
$\sum_{i \in [n]} U_i(a^*_i(v), a_{-i}; v_i) \ge \lambda \cdot \mathrm{OPT}(v) - \mu \cdot R(a). \quad (3)$
Many important games and mechanisms satisfy this smoothness definition for various parameters λ and µ (see Figure 1); the following theorem shows that the welfare of any coarse correlated equilibrium in any of these games is nearly optimal.

Theorem 5 (efficiency of CCE; [12]).
If a mechanism is (λ, µ)-smooth then the social welfare of any coarse correlated equilibrium is at least λ/max{1, µ} of the optimal welfare, i.e., the price of anarchy satisfies POA ≤ max{1, µ}/λ.

Price of Anarchy of No-regret Learning. Following Blum et al. [2], Theorem 3 and Theorem 5 imply that no-regret learning dynamics have near-optimal social welfare.

Corollary 6 (efficiency of no-regret dynamics; [12]). If a mechanism is (λ, µ)-smooth then any no-regret dynamics of the repeated game with a fixed player set and valuation profile achieves average social welfare at least λ/max{1, µ} of the optimal welfare, i.e., the price of anarchy satisfies POA ≤ max{1, µ}/λ.

Importantly, Corollary 6 holds the valuation profile v ∈ V fixed throughout the repeated game play. The main contribution of this paper is in extending this theory to games of incomplete information, e.g., where the values of the players are drawn at random in each round of game play.

3 Population Interpretation of Bayesian Games

In the standard independent private value model of a Bayesian game there are n players. Player i has type vi drawn uniformly from the set of types Vi (and this distribution is denoted Fi).5 We will restrict attention to the case when the type space Vi is finite. A player’s strategy in this Bayesian game is a mapping si : Vi → Ai from a valuation vi ∈ Vi to an action ai ∈ Ai. We will denote with Σi = Ai^{Vi} the strategy space of each player and with Σ = Σ1 × . . . × Σn. In the game, each player i realizes his type vi from the distribution and then plays action si(vi) in the game. In the population interpretation of the Bayesian game, also called the agent normal form representation [6], there are n finite populations of players. Each player in population i has a type vi, which we assume to be distinct for each player in each population and across populations.6 The set of players in population i is denoted Vi.
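Stepping back to Definition 4: smoothness claims are easy to spot-check numerically. For instance, the single-item first-price auction from the introduction is (1/2, 1)-smooth with the simple deterministic deviation a∗_i = v_i/2 (a weaker constant than the (1 − 1/e, 1) listed in Figure 1 for the randomized deviation). A sketch of such a check, with all names and parameters our own:

```python
import random

def check_half_smoothness(values, trials=500, seed=1):
    """Numeric spot-check (not a proof) that the single-item first-price
    auction is (1/2, 1)-smooth in the sense of Definition 4, using the
    deterministic deviation a*_i = v_i / 2:
        sum_i U_i(a*_i, a_{-i}; v_i) >= (1/2) * OPT(v) - R(a)
    for random action profiles a, where R(a) is the winning bid."""
    rng = random.Random(seed)
    n = len(values)
    opt = max(values)                     # single item: OPT(v) = highest value
    for _ in range(trials):
        a = [rng.uniform(0.0, v) for v in values]
        revenue = max(a)                  # first price: revenue = winning bid
        lhs = 0.0
        for i in range(n):
            dev = list(a)
            dev[i] = values[i] / 2.0      # the smoothness deviation for player i
            winner = max(range(n), key=lambda j: (dev[j], -j))
            if winner == i:
                lhs += values[i] - dev[i]
        if lhs < 0.5 * opt - revenue - 1e-9:
            return False
    return True
```

The inequality in fact holds for every profile: the highest-value player either wins with bid v∗/2 (contributing OPT/2 to the left side) or loses to a bid of at least v∗/2, in which case the revenue already exceeds OPT/2.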
The player in population i with type vi is called player vi. In the population game, each player vi chooses an action si(vi).
Footnote 5: The restriction to the uniform distribution is without loss of generality for any finite type space and for any distribution over the type space that involves only rational probabilities.
Footnote 6: The restriction to distinct types is without loss of generality, as we can always augment a type space with an index that does not affect player utilities.
Nature uniformly draws one player from
The set of chosen players then participate in an instance of a mechanism M. We assume that each player vi ∈Vi, uses some no-regret learning rule to play in this repeated game.8 In Definition 8, we describe the structure of the game and our notation more elaborately. Definition 8. The repeated Bayesian game of M proceeds as follows. In stage t: 1. Each player vi ∈Vi in each population i picks an action st i(vi) ∈Ai. We denote with st i ∈A|Vi| i the function that maps a player vi ∈Vi to his action. 2. From each population i one player vt i ∈Vi is selected uniformly at random. Let vt = (vt 1, . . . , vt n) be the chosen profile of players and st(vt) = (st 1(vt 1), . . . , st n(vt n)) be the profile of chosen actions. 3. Each player vt i participates in an instance of game M, in the role of player i ∈[n], with action st i(vt i) and experiences a utility of Ui(st(vt); vt i). All players not selected in Step 2 experience zero utility. Remark. We point out that for each player in a population to achieve no-regret he does not need to know the distribution of values in other populations. There exist algorithms that can achieve the no-regret property and simply require an oracle that returns the utility of a player at each iteration. Thus all we need to assume is that each player receives as feedback his utility at each iteration. Remark. We also note that our results would extend to the case where at each period multiple matchings are sampled independently and players potentially participate in more than one instance of the mechanism M and potentially with different players from the remaining population. The only thing that the players need to observe in such a setting is their average utility that resulted from their action st i(vi) ∈Ai from all the instances that they participated at the given period. Such a scenario seems an appealing model in online ad auction marketplaces where players receive only average utility feedback from their bids. 
Footnote 7: This notion is the coarse analog of the agent normal form Bayes correlated equilibrium defined in Section 4.2 of Forges [6].
Footnote 8: An equivalent and standard way to view a Bayesian game is that each player draws his value independently from his distribution each time the game is played. In this interpretation the player plays by choosing a strategy that maps his value to an action (or distribution over actions). In this interpretation our no-regret condition requires that the player not regret his actions for each possible value.

Bayesian Price of Anarchy for No-regret Learners. In this repeated game setting we want to compare the average social welfare of any sequence of play where each player uses a vanishing-regret algorithm to the average optimal welfare. Moreover, we want to quantify the worst case of this average welfare over all possible valuation distributions within each population:
$\sup_{F_1, \ldots, F_n} \limsup_{T \to \infty} \frac{\sum_{t=1}^{T} \mathrm{OPT}(v^t)}{\sum_{t=1}^{T} SW^{M}(s^t(v^t); v^t)} \quad (6)$
We will refer to this quantity as the Bayesian price of anarchy for no-regret learners. The numerator of this term is simply the average optimal welfare when players from each population are drawn independently in each stage; it converges almost surely to the expected ex-post optimal welfare Ev[OPT(v)] of the stage game. Our main theorem is that if the mechanism is smooth and players follow no-regret strategies then the expected welfare is guaranteed to be close to the optimal welfare.

Theorem 9 (Main Theorem). If a mechanism is (λ, µ)-smooth then any no-regret dynamics of the repeated Bayesian game achieves average (over time) social welfare at least λ/max{1, µ} of the average optimal welfare, i.e., POA ≤ max{1, µ}/λ, almost surely.

Roadmap of the proof. In Section 5, we show that any vanishing-regret sequence of play of the repeated Bayesian game will converge almost surely to the Bayesian version of a coarse correlated equilibrium of the incomplete information stage game.
Therefore the Bayesian price of total anarchy will be upper bounded by the efficiency guarantee of any Bayesian coarse correlated equilibrium. Finally, in Section 6 we show that the price of anarchy bound of smooth mechanisms directly extends to Bayesian coarse correlated equilibria, thereby providing an upper bound on the Bayesian price of total anarchy of the repeated game.

Remark. We point out that our definition of BAYES-CCE is inherently different and more restricted than the one defined in Caragiannis et al. [4]. There, a BAYES-CCE is defined as a joint distribution D over V × A, such that if (v, a) ∼ D then for any vi ∈ Vi and a′_i(vi) ∈ Ai:
$\mathbb{E}_{(\mathbf{v},\mathbf{a})}\big[ U_i(\mathbf{a}; \mathbf{v}_i) \big] \ge \mathbb{E}_{(\mathbf{v},\mathbf{a})}\big[ U_i(a'_i(\mathbf{v}_i), \mathbf{a}_{-i}; \mathbf{v}_i) \big] \quad (7)$
The main difference is that the product of a distribution in ∆(Σ) and the distribution of values cannot produce every possible joint distribution over (V, A); rather, the joint distributions are restricted to satisfy a conditional independence property described by [6], namely that player i’s action is conditionally independent of some other player j’s value, given player i’s type. Such a conditional independence property is essential for the guarantees that we present in this work to extend to a BAYES-CCE, and hence they do not seem to extend to the notion given in [4]. However, as we will show in Section 5, the no-regret dynamics that we analyze, which are mathematically equivalent to the dynamics in [4], do converge to this smaller set of BAYES-CCE that we define and for which our efficiency guarantees extend. This extra convergence property is not needed when the mechanism satisfies the stronger semi-smoothness property defined in [4] and thereby was not needed to show efficiency bounds in their setting.

5 Convergence of Bayesian No-Regret to BAYES-CCE

In this section we show that no-regret learning in the repeated Bayesian game converges almost surely to the set of Bayesian coarse correlated equilibria.
Any given sequence of play of the repeated Bayesian game, which we defined in Definition 8, gives rise to a sequence of strategy-value pairs (s^t, v^t), where s^t = (s^t_1, . . . , s^t_n) and s^t_i ∈ Ai^{|Vi|} captures the actions that each player vi in population i would have chosen, had they been picked. Then observe that all that matters to compute the average social welfare of the game for any given time step T is the empirical distribution of pairs (s, v) up till time step T, denoted DT; i.e., if (s^T, v^T) is a random sample from DT:
$\frac{1}{T} \sum_{t=1}^{T} SW(s^t(v^t); v^t) = \mathbb{E}_{(\mathbf{s}^T, \mathbf{v}^T)}\big[ SW(\mathbf{s}^T(\mathbf{v}^T); \mathbf{v}^T) \big] \quad (8)$

Lemma 10 (Almost sure convergence to BAYES-CCE). Consider a sequence of play of the random matching game, where each player uses a vanishing-regret algorithm, and let DT be the empirical distribution of (strategy, valuation) profile pairs up till time step T. Consider any subsequence of {DT}T that converges in distribution to some distribution D. Then, almost surely, D is a product distribution, i.e., D = Ds × Dv, with Ds ∈ ∆(Σ) and Dv ∈ ∆(V), such that Dv = F and Ds is a BAYES-CCE of the static incomplete information game with distributional beliefs F.

Proof. We will denote with r_i(a^*_i, a; vi) = Ui(a^*_i, a_{−i}; vi) − Ui(a; vi) the regret of player vi from population i for action a^*_i at action profile a. For a vi ∈ Vi let x^t_i(vi) = 1{v^t_i = vi}. Since the sequence has vanishing regret for each player vi in population i, it must be that for any s^*_i ∈ Σi:
$\sum_{t=1}^{T} x^t_i(v_i) \cdot r_i\big( s^*_i(v_i), s^t(v^t); v_i \big) \le o(T) \quad (9)$
For any fixed T, let D^T_s ∈ ∆(Σ) denote the empirical distribution of s^t and let s be a random sample from D^T_s. For each s ∈ Σ, let Ts ⊂ [T] denote the time steps t such that s^t = s. Then we can re-write Equation (9) as:
$\mathbb{E}_{\mathbf{s}}\Big[ \frac{1}{|T_{\mathbf{s}}|} \sum_{t \in T_{\mathbf{s}}} x^t_i(v_i) \cdot r_i\big( s^*_i(v_i), s^t(v^t); v_i \big) \Big] \le \frac{o(T)}{T} \quad (10)$
For any s ∈ Σ and w ∈ V, let T_{s,w} = {t ∈ Ts : v^t = w}.
Then we can re-write Equation (10) as:
$\mathbb{E}_{\mathbf{s}}\Big[ \sum_{w \in V} \frac{|T_{\mathbf{s},w}|}{|T_{\mathbf{s}}|} \mathbf{1}\{w_i = v_i\} \cdot r_i\big( s^*_i(v_i), \mathbf{s}(w); v_i \big) \Big] \le \frac{o(T)}{T} \quad (11)$
Now we observe that |T_{s,w}|/|T_s| is the empirical frequency of the valuation vector w ∈ V, filtered at the time steps where the strategy vector was s. Since at each time step t the valuation vector v^t is picked independently from the distribution of valuation profiles F, this is the empirical frequency of |T_s| independent samples from F. By standard arguments from empirical process theory, if |T_s| → ∞ then this empirical distribution converges almost surely to the distribution F. On the other hand, if |T_s| does not go to ∞, then the empirical frequency of strategy s vanishes to 0 as T → ∞ and therefore has measure zero in the above expectation as T → ∞. Thus for any convergent subsequence of {DT}, if D is the limit distribution and s is in the support of D, then almost surely the distribution of w conditional on strategy s is F. Thus we can write D as a product distribution Ds × F. Moreover, if we denote with w the random variable that follows distribution F, then taking the limit of Equation (11) along any convergent subsequence gives, almost surely:
$\mathbb{E}_{\mathbf{s} \sim D_s}\mathbb{E}_{\mathbf{w} \sim F}\big[ \mathbf{1}\{\mathbf{w}_i = v_i\} \cdot r_i\big( s^*_i(v_i), \mathbf{s}(\mathbf{w}); v_i \big) \big] \le 0$
Equivalently, Ds satisfies, for all vi ∈ Vi and all s^*_i, almost surely:
$\mathbb{E}_{\mathbf{s} \sim D_s}\mathbb{E}_{\mathbf{w} \sim F}\big[ r_i\big( s^*_i(\mathbf{w}_i), \mathbf{s}(\mathbf{w}); \mathbf{w}_i \big) \mid \mathbf{w}_i = v_i \big] \le 0$
The latter is exactly the BAYES-CCE condition from Definition 7. Thus Ds is in the set of BAYES-CCE of the static incomplete information game among n players, where the type profile is drawn from F.

Given the latter convergence result we can conclude the following theorem, whose proof is given in the supplementary material.

Theorem 11. The price of anarchy for Bayesian no-regret dynamics is upper bounded by the price of anarchy of Bayesian coarse correlated equilibria, almost surely.
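Lemma 10 takes the vanishing-regret property as given; standard full-information learners such as Hedge (multiplicative weights) supply it. A minimal sketch, assuming payoffs in [0, 1] (the function and its interface are ours, for illustration):

```python
def hedge_average_regret(utilities, eta=0.05):
    """Full-information multiplicative-weights (Hedge) learner.
    `utilities` is a T x A table: utilities[t][a] in [0, 1] is the payoff of
    action a at round t (revealed after play). Returns the average regret of
    the learner's expected play against the best fixed action in hindsight,
    which vanishes as T grows (Definition 1)."""
    A = len(utilities[0])
    w = [1.0] * A
    earned = 0.0
    cum = [0.0] * A                      # cumulative payoff of each fixed action
    for row in utilities:
        z = sum(w)
        earned += sum(w[a] / z * row[a] for a in range(A))  # expected payoff
        for a in range(A):
            cum[a] += row[a]
            w[a] *= (1.0 + eta) ** row[a]                   # multiplicative update
    return (max(cum) - earned) / len(utilities)
```

For a utility table with a fixed best action, the returned average regret decays roughly like O(1/T) here, and like O(sqrt(log A / T)) in general for an appropriately tuned η.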
6 Efficiency of Smooth Mechanisms at Bayes Coarse Correlated Equilibria

In this section we show that smoothness of a mechanism M implies that any BAYES-CCE of the incomplete information setting achieves at least λ/max{1, µ} of the expected optimal welfare. To show this we will adopt the interpretation of BAYES-CCE that we used in the previous section, as coarse correlated equilibria of a more complex normal form game: the stochastic agent normal form representation of the Bayesian game. We can interpret this complex normal form game as the game that arises from a complete information mechanism MAG among $\sum_i |V_i|$ players, which randomly samples one player from each of the n populations and where the utility of a player in the complete information mechanism MAG is given by Equation (4). The set of possible outcomes in this agent game corresponds to the set of mappings from a profile of chosen players to an outcome in the underlying mechanism M. The optimal welfare of this game is then the expected ex-post optimal welfare OPT^{AG} = Ev[OPT(v)]. The main theorem that we will show is that whenever mechanism M is (λ, µ)-smooth, mechanism MAG is also (λ, µ)-smooth. Then we will invoke a theorem of [12, 11], which shows that any coarse correlated equilibrium of a complete information mechanism achieves at least λ/max{1, µ} of the optimal welfare. By the equivalence between BAYES-CCE and CCE of this complete information game, we get that every BAYES-CCE of the Bayesian game achieves at least λ/max{1, µ} of the expected optimal welfare.

Theorem 12 (From complete information to Bayesian smoothness). If a mechanism M is (λ, µ)-smooth, then for any vector of independent valuation distributions F = (F1, . . . , Fn), the complete information mechanism MAG is also (λ, µ)-smooth.

Proof. Consider the following randomized deviation for each player vi ∈ Vi in population i: he randomly samples a valuation profile w ∼ F.
Then he plays according to the randomized action $s_i^*(v_i, w_{-i})$, i.e., the player deviates using the randomized action guaranteed by the smoothness property of mechanism M for his type $v_i$ and the random sample of the types of the others $w_{-i}$. Consider an arbitrary action profile $s = (s_1, \dots, s_n)$ for all players in all populations. In this context it is better to think of each $s_i$ as a $|\mathcal{V}_i|$-dimensional vector in $A_i^{|\mathcal{V}_i|}$, and to view $s$ as a $\sum_i |\mathcal{V}_i|$-dimensional vector. Then with $s_{-v_i}$ we will denote all the components of this large vector except the ones corresponding to player $v_i \in \mathcal{V}_i$. Moreover, we will be denoting with $\mathbf{v}$ the sample from $F$ drawn by mechanism $M^{AG}$. We now argue about the expected utility of player $v_i$ from this deviation, which is:
\[
\mathbb{E}_w\big[U^{AG}_{i,v_i}(s_i^*(v_i, w_{-i}), s_{-v_i})\big]
= \mathbb{E}_w \mathbb{E}_{\mathbf{v}}\big[U_i(s_i^*(v_i, w_{-i}), s_{-i}(\mathbf{v}_{-i}); v_i)\cdot \mathbf{1}\{\mathbf{v}_i = v_i\}\big]
\]
Summing the latter over all players $v_i \in \mathcal{V}_i$ in population $i$:
\[
\begin{aligned}
\sum_{v_i\in\mathcal{V}_i} \mathbb{E}_w\big[U^{AG}_{i,v_i}(s_i^*(v_i, w_{-i}), s_{-v_i})\big]
&= \mathbb{E}_{w,\mathbf{v}}\Big[\sum_{v_i\in\mathcal{V}_i} U_i(s_i^*(v_i, w_{-i}), s_{-i}(\mathbf{v}_{-i}); v_i)\cdot \mathbf{1}\{\mathbf{v}_i = v_i\}\Big] \\
&= \mathbb{E}_{\mathbf{v},w}\big[U_i(s_i^*(\mathbf{v}_i, w_{-i}), s_{-i}(\mathbf{v}_{-i}); \mathbf{v}_i)\big] \\
&= \mathbb{E}_{\mathbf{v},w}\big[U_i(s_i^*(w_i, w_{-i}), s_{-i}(\mathbf{v}_{-i}); w_i)\big]
= \mathbb{E}_{\mathbf{v},w}\big[U_i(s_i^*(w), s_{-i}(\mathbf{v}_{-i}); w_i)\big],
\end{aligned}
\]
where the second to last equation is an exchange of variable names and regrouping using independence. Summing over populations and using smoothness of M, we get smoothness of $M^{AG}$:
\[
\sum_{i\in[n]} \sum_{v_i\in\mathcal{V}_i} \mathbb{E}_w\big[U^{AG}_{i,v_i}(s_i^*(v_i, w_{-i}), s_{-v_i})\big]
= \mathbb{E}_{\mathbf{v},w}\Big[\sum_{i\in[n]} U_i(s_i^*(w), s_{-i}(\mathbf{v}_{-i}); w_i)\Big]
\ge \mathbb{E}_{\mathbf{v},w}\big[\lambda\, OPT(w) - \mu R(s(\mathbf{v}))\big]
= \lambda\, \mathbb{E}_w[OPT(w)] - \mu R^{AG}(s)
\]

Corollary 13. Every BAYES-CCE of the incomplete information setting of a smooth mechanism M achieves expected welfare at least $\frac{\lambda}{\max\{1,\mu\}}$ of the expected optimal welfare.

7 Finite Time Analysis and Convergence Rates

In the previous section we argued about the limit average efficiency of the game as time goes to infinity. In this section we analyze the convergence rate to BAYES-CCE and show approximate efficiency results even for finite time, when players are allowed to have some $\epsilon$-regret.

Theorem 14.
Consider the repeated matching game with a $(\lambda,\mu)$-smooth mechanism. Suppose that for any $T \ge T^0$, each player in each of the $n$ populations has regret at most $\frac{\epsilon}{n}$. Then for every $\delta$ and $\rho$, there exists a $T^*(\delta,\rho)$, such that for any $T \ge \max\{T^0, T^*\}$, with probability $1-\rho$:
\[
\frac{1}{T}\sum_{t=1}^T SW(s^t(v^t); v^t) \ge \frac{\lambda}{\max\{1,\mu\}}\,\mathbb{E}_v[OPT(v)] - \delta - \mu\cdot\epsilon \tag{12}
\]
Moreover, $T^*(\delta,\rho) \le \frac{54\, n^3 |\Sigma| |\mathcal{V}|^2 H^3}{\delta^3}\log\big(\frac{2}{\rho}\big)$.

References
[1] Dirk Bergemann and Stephen Morris. Correlated equilibrium in games with incomplete information. Cowles Foundation Discussion Papers 1822, Cowles Foundation for Research in Economics, Yale University, October 2011.
[2] Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 373–382, New York, NY, USA, 2008. ACM.
[3] Yang Cai and Christos Papadimitriou. Simultaneous Bayesian auctions and computational complexity. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pages 895–910, New York, NY, USA, 2014. ACM.
[4] Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos, Maria Kyropoulou, Brendan Lucier, Renato Paes Leme, and Éva Tardos. Bounding the inefficiency of outcomes in generalized second price auctions. Journal of Economic Theory, 2014.
[5] Bart de Keijzer, Evangelos Markakis, Guido Schäfer, and Orestis Telelis. Inefficiency of standard multi-unit auctions. In Hans L. Bodlaender and Giuseppe F. Italiano, editors, Algorithms – ESA 2013, volume 8125 of Lecture Notes in Computer Science, pages 385–396. Springer Berlin Heidelberg, 2013.
[6] Françoise Forges. Five legitimate definitions of correlated equilibrium in games with incomplete information. Theory and Decision, 35(3):277–310, 1993.
[7] Dean P. Foster and Rakesh V. Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998.
[8] Todd R. Kaplan and Shmuel Zamir.
Asymmetric first-price auctions with uniform distributions: analytic solutions to the general case. Economic Theory, 50(2):269–302, 2012.
[9] Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Conference on Theoretical Aspects of Computer Science, STACS '99, pages 404–413, Berlin, Heidelberg, 1999. Springer-Verlag.
[10] B. Lucier and A. Borodin. Price of anarchy for greedy auctions. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '10, pages 537–553, Philadelphia, PA, USA, 2010. Society for Industrial and Applied Mathematics.
[11] T. Roughgarden. Intrinsic robustness of the price of anarchy. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC '09, pages 513–522, New York, NY, USA, 2009. ACM.
[12] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 211–220, New York, NY, USA, 2013. ACM.
[13] A. Vetta. Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions. In Foundations of Computer Science, 2002. Proceedings. The 43rd Annual IEEE Symposium on, pages 416–425, 2002.
Robust Gaussian Graphical Modeling with the Trimmed Graphical Lasso Eunho Yang IBM T.J. Watson Research Center eunhyang@us.ibm.com Aurélie C. Lozano IBM T.J. Watson Research Center aclozano@us.ibm.com

Abstract

Gaussian Graphical Models (GGMs) are popular tools for studying network structures. However, many modern applications such as gene network discovery and social interactions analysis often involve high-dimensional noisy data with outliers or heavier tails than the Gaussian distribution. In this paper, we propose the Trimmed Graphical Lasso for robust estimation of sparse GGMs. Our method guards against outliers by an implicit trimming mechanism akin to the popular Least Trimmed Squares method used for linear regression. We provide a rigorous statistical analysis of our estimator in the high-dimensional setting. In contrast, existing approaches for robust sparse GGMs estimation lack statistical guarantees. Our theoretical results are complemented by experiments on simulated and real gene expression data which further demonstrate the value of our approach.

1 Introduction

Gaussian graphical models (GGMs) form a powerful class of statistical models for representing distributions over a set of variables [1]. These models employ undirected graphs to encode conditional independence assumptions among the variables, which is particularly convenient for exploring network structures. GGMs are widely used in a variety of domains, including computational biology [2], natural language processing [3], image processing [4, 5, 6], statistical physics [7], and spatial statistics [8]. In many modern applications, the number of variables p can exceed the number of observations n. For instance, the number of genes in microarray data is typically larger than the sample size. In such high-dimensional settings, sparsity constraints are particularly pertinent for estimating GGMs, as they encourage only a few parameters to be non-zero and induce graphs with few edges.
The most widely used estimator (among others; see e.g. [9]) minimizes the Gaussian negative log-likelihood regularized by the $\ell_1$ norm of the entries (or the off-diagonal entries) of the precision matrix (see [10, 11, 12]). This estimator enjoys strong statistical guarantees (see e.g. [13]). The corresponding optimization problem is a log-determinant program that can be solved with interior point methods [14] or by coordinate descent algorithms [11, 12]. Alternatively, neighborhood selection [15, 16] can be employed to estimate conditional independence relationships separately for each node in the graph, via Lasso linear regression [17]. Under certain assumptions, the sparse GGM structure can still be recovered even in high-dimensional settings. The aforementioned approaches rest on a fundamental assumption: the multivariate normality of the observations. However, outliers and corruption are frequently encountered in high-dimensional data (see e.g. [18] for gene expression data). Contamination of a few observations can drastically affect the quality of model estimation. It is therefore imperative to devise procedures that can cope with observations deviating from the model assumption. Despite this fact, little attention has been paid to robust estimation of high-dimensional graphical models. Relevant work includes [19], which leverages multivariate t-distributions for robustified inference and the EM algorithm. They also propose an alternative t-model which adds flexibility to the classical t but requires the use of Monte Carlo EM or variational approximation as the likelihood function is not available explicitly. Another pertinent work is that of [20], which introduces a robustified likelihood function.
A two-stage procedure is proposed for model estimation, where the graphical structure is first obtained via coordinate gradient descent and the concentration matrix coefficients are subsequently re-estimated using iterative proportional fitting so as to guarantee positive definiteness of the final estimate. In this paper, we propose the Trimmed Graphical Lasso method for robust Gaussian graphical modeling in the sparse high-dimensional setting. Our approach is inspired by the classical Least Trimmed Squares method used for robust linear regression [21], in the sense that it disregards the observations that are judged less reliable. More specifically, the Trimmed Graphical Lasso seeks to minimize a weighted version of the negative log-likelihood, regularized by the $\ell_1$ penalty on the concentration matrix of the GGM, under some simple constraints on the weights. These weights implicitly induce the trimming of certain observations. Our key contributions can be summarized as follows.
• We introduce the Trimmed Graphical Lasso formulation, along with two strategies for solving the objective. One involves solving a series of graphical lasso problems; the other is more efficient and leverages composite gradient descent in conjunction with partial optimization.
• As our key theoretical contribution, we provide statistical guarantees on the consistency of our estimator. To the best of our knowledge, this is in stark contrast with prior work on robust sparse GGM estimation (e.g. [19, 20]), which does not provide any statistical analysis.
• Experimental results under various data corruption scenarios further demonstrate the value of our approach.

2 Problem Setup and Robust Gaussian Graphical Models

Notation. For matrices $U \in \mathbb{R}^{p\times p}$ and $V \in \mathbb{R}^{p\times p}$, $\langle\langle U, V \rangle\rangle$ denotes the trace inner product $\mathrm{tr}(U V^\top)$. For a matrix $U \in \mathbb{R}^{p\times p}$ and parameter $a \in [1,\infty]$, $\|U\|_a$ denotes the element-wise $\ell_a$ norm, and $\|U\|_{a,\text{off}}$ denotes the element-wise $\ell_a$ norm restricted to off-diagonal entries.
For example, $\|U\|_{1,\text{off}} := \sum_{i\ne j} |U_{ij}|$. Finally, we use $\|U\|_F$ and $|||U|||_2$ to denote the Frobenius and spectral norms, respectively.

Setup. Let $X = (X_1, X_2, \dots, X_p)$ be a zero-mean Gaussian random field parameterized by the $p\times p$ concentration matrix $\Theta^*$:
\[
P(X; \Theta^*) = \exp\Big(-\tfrac{1}{2}\big\langle\big\langle \Theta^*, XX^\top \big\rangle\big\rangle - A(\Theta^*)\Big) \tag{1}
\]
where $A(\Theta^*)$ is the log-partition function of the Gaussian random field. Here, the probability density function in (1) is that of the $p$-variate Gaussian distribution $N(0, \Sigma^*)$ where $\Sigma^* = (\Theta^*)^{-1}$. Given $n$ i.i.d. samples $\{X^{(1)}, \dots, X^{(n)}\}$ from the high-dimensional Gaussian random field (1), the standard way to estimate the inverse covariance matrix is the $\ell_1$-regularized maximum likelihood estimator (MLE), which can be written as the following regularized log-determinant program:
\[
\underset{\Theta\in\Omega}{\text{minimize}}\quad \Big\langle\Big\langle \Theta,\ \tfrac{1}{n}\sum_{i=1}^n X^{(i)}(X^{(i)})^\top \Big\rangle\Big\rangle - \log\det(\Theta) + \lambda\|\Theta\|_{1,\text{off}} \tag{2}
\]
where $\Omega$ is the space of symmetric positive definite matrices, and $\lambda$ is a regularization parameter that encourages a sparse graph model structure. In this paper, we consider the case where the number of random variables $p$ may be substantially larger than the sample size $n$; however, the concentration parameter of the underlying distribution is sparse:

(C-1) The number of non-zero off-diagonal entries of $\Theta^*$ is at most $k$, that is, $|\{(i,j) : \Theta^*_{ij} \ne 0 \text{ for } i \ne j\}| \le k$.

Now, suppose that $n$ samples are drawn from this underlying distribution (1) with true parameter $\Theta^*$. We further allow that some samples are corrupted and not drawn from (1). Specifically, the set of sample indices $\{1, 2, \dots, n\}$ is separated into two disjoint subsets: if the $i$-th sample is in the set of "good" samples, which we name $G$, then it is a genuine sample from (1) with the parameter $\Theta^*$. On the other hand, if the $i$-th sample is in the set of "bad" samples, $B$, the sample is corrupted. The identities of $G$ and $B$ are hidden from us. However, we naturally assume that only a small number of samples are corrupted:

(C-2) Let $h$ be the number of good samples: $h := |G|$ and hence $|B| = n - h$. Then, we assume that the larger portion of samples is genuine and uncorrupted, so that $\frac{|G| - |B|}{|G|} \ge \alpha$ where $0 < \alpha \le 1$. If we assume that 40% of samples are corrupted, then $\alpha = \frac{0.6n - 0.4n}{0.6n} = \frac{1}{3}$.

In later sections, we will derive a robust estimator for corrupted samples of sparse Gaussian graphical models and provide statistical guarantees for our estimator under the conditions (C-1) and (C-2).

Algorithm 1 Trimmed Graphical Lasso in (3)
  Initialize $\Theta^{(0)}$ (e.g. $\Theta^{(0)} = (S + \lambda I)^{-1}$)
  repeat
    Compute $w^{(t)}$ given $\Theta^{(t-1)}$, by assigning a weight of one to the $h$ observations with lowest negative log-likelihood and a weight of zero to the remaining ones.
    $\nabla L^{(t)} \leftarrow \frac{1}{h}\sum_{i=1}^n w_i^{(t)} X^{(i)}(X^{(i)})^\top - (\Theta^{(t-1)})^{-1}$
    Line search. Choose $\eta^{(t)}$ (see Nesterov (2007) for a discussion of how the stepsize may be chosen), checking that the following update maintains positive definiteness. This can be verified via Cholesky factorization (as in [23]).
    Update. $\Theta^{(t)} \leftarrow S_{\eta^{(t)}\lambda}\big(\Theta^{(t-1)} - \eta^{(t)}\nabla L^{(t)}\big)$, where $S$ is the soft-thresholding operator $[S_\nu(U)]_{i,j} = \mathrm{sign}(U_{i,j})\max(|U_{i,j}| - \nu, 0)$, applied only to the off-diagonal elements of the matrix $U$.
    Compute $(\Theta^{(t)})^{-1}$ reusing the Cholesky factor.
  until stopping criterion is satisfied

2.1 Trimmed Graphical Lasso

We now propose a Trimmed Graphical Lasso for robust estimation of sparse GGMs:
\[
\underset{\Theta\in\Omega,\, w}{\text{minimize}}\quad \Big\langle\Big\langle \Theta,\ \tfrac{1}{h}\sum_{i=1}^n w_i X^{(i)}(X^{(i)})^\top \Big\rangle\Big\rangle - \log\det(\Theta) + \lambda\|\Theta\|_{1,\text{off}}
\quad \text{s.t.}\quad w \in [0,1]^n,\ \mathbf{1}^\top w = h,\ \|\Theta\|_1 \le R \tag{3}
\]
where $\lambda$ is a regularization parameter that decides the sparsity of our estimate, and $h$ is another parameter, which decides the number of samples (or sum of weights) used in the training. Ideally $h$ is set to the number of uncorrupted samples $|G|$, but in practice we can tune the parameter $h$ by cross-validation. Here, the constraint $\|\Theta\|_1 \le R$ is required to analyze this non-convex optimization problem, as discussed in [22]. For the other tuning parameter $R$, any positive real value is sufficient as long as $\|\Theta^*\|_1 \le R$.
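For concreteness, the objective of (3) can be evaluated directly for a given pair $(\Theta, w)$. The following is a small NumPy sketch (the function name and variable layout are ours, not from the paper):

```python
import numpy as np

def trimmed_objective(Theta, w, X, lam):
    """Objective of problem (3):
    <<Theta, (1/h) sum_i w_i x_i x_i^T>> - log det(Theta) + lam * ||Theta||_{1,off},
    where h is the sum of the weights."""
    h = w.sum()
    S_w = (X * w[:, None]).T @ X / h  # weighted empirical second-moment matrix
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    off_l1 = np.abs(Theta).sum() - np.abs(np.diag(Theta)).sum()
    return np.trace(S_w @ Theta) - logdet + lam * off_l1
```

With $w = \mathbf{1}$ and $\lambda$ the same, this reduces to the vanilla objective (2), consistent with the remark below about fixing $h = n$.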
Finally, note that when $h$ is fixed to $n$ (and $R$ is set to infinity), the optimization problem (3) simply reduces to the vanilla $\ell_1$-regularized MLE for sparse GGMs, with no provision for outliers.

The optimization problem (3) is convex in $w$ for fixed $\Theta$ and convex in $\Theta$ for fixed $w$, but it is not jointly convex. Nevertheless, we will show later that any local optimum of (3) is guaranteed to be strongly consistent under some fairly mild conditions.

Optimization. As briefly discussed above, problem (3) is not jointly convex but biconvex. One possible approach to solving (3) is thus to alternate between solving for $\Theta$ with fixed $w$ and solving for $w$ with fixed $\Theta$. Given $\Theta$, solving for $w$ is straightforward and boils down to assigning a weight of one to the $h$ observations with lowest negative log-likelihood and a weight of zero to the remaining ones. Given $w$, solving for $\Theta$ can be accomplished by any algorithm solving the "vanilla" graphical Lasso program, e.g. [11, 12]. Each step solves a convex problem, hence the objective is guaranteed to decrease at each iteration and will converge to a local minimum.

A more efficient optimization approach can be obtained by adopting a partial minimization strategy for $\Theta$: rather than solving to completion for $\Theta$ each time $w$ is updated, one performs a single update step. This approach stems from considering the following equivalent reformulation of our objective:
\[
\underset{\Theta\in\Omega}{\text{minimize}}\quad \Big\langle\Big\langle \Theta,\ \tfrac{1}{h}\sum_{i=1}^n w_i(\Theta)\, X^{(i)}(X^{(i)})^\top \Big\rangle\Big\rangle - \log\det(\Theta) + \lambda\|\Theta\|_{1,\text{off}}
\quad \text{s.t.}\quad w(\Theta) = \underset{w\in[0,1]^n,\ \mathbf{1}^\top w = h}{\operatorname{argmin}} \Big\langle\Big\langle \Theta,\ \tfrac{1}{h}\sum_{i=1}^n w_i X^{(i)}(X^{(i)})^\top \Big\rangle\Big\rangle,\quad \|\Theta\|_1 \le R \tag{4}
\]
One can then leverage standard first-order methods such as projected and composite gradient descent [24] that will converge to local optima. The overall procedure is depicted in Algorithm 1. Therein we assume that we pick $R$ sufficiently large, so one does not need to enforce the constraint $\|\Theta\|_1 \le R$ explicitly. If needed, the constraint can be enforced by an additional projection step [22].
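A minimal NumPy sketch of one iteration of this scheme (the $w$-update followed by a composite gradient step, as in Algorithm 1) is given below. A fixed step size with halving stands in for the line search, the Cholesky factor is not reused, and the optional projection onto $\{\|\Theta\|_1 \le R\}$ is omitted; names are ours.

```python
import numpy as np

def trim_glasso_step(Theta, X, h, lam, eta=0.1):
    """One trimming + composite-gradient iteration for problem (3)."""
    # w-step: for fixed Theta the per-sample negative log-likelihood is
    # 0.5 * x^T Theta x plus a term shared by all samples, so keeping the h
    # samples with smallest quadratic form gives the optimal 0/1 weights.
    quad = np.einsum('ij,jk,ik->i', X, Theta, X)
    keep = np.argsort(quad)[:h]
    S_w = X[keep].T @ X[keep] / h
    # Theta-step: gradient of the smooth part, then soft-threshold off-diagonals.
    grad = S_w - np.linalg.inv(Theta)
    while True:
        U = Theta - eta * grad
        T_new = np.sign(U) * np.maximum(np.abs(U) - eta * lam, 0.0)
        np.fill_diagonal(T_new, np.diag(U))  # penalty applies off-diagonal only
        try:
            np.linalg.cholesky(T_new)        # check positive definiteness
            return T_new
        except np.linalg.LinAlgError:
            eta /= 2                         # backtrack if PD is lost
```

Iterating this map from $\Theta^{(0)} = (S + \lambda I)^{-1}$ until the change in $\Theta$ is small gives a local optimum of (3).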
3 Statistical Guarantees of Trimmed Graphical Lasso

One of the main contributions of this paper is to provide statistical guarantees for our Trimmed Graphical Lasso estimator for GGMs. The optimization problem (3) is non-convex, and therefore gradient-type methods solving (3) will find local minima. Hence, our theory in this section provides statistical error bounds for any local minimum, measured in the $\|\cdot\|_F$ and $\|\cdot\|_{1,\text{off}}$ norms simultaneously. Suppose that we have some local optimum $(\tilde\Theta, \tilde w)$ of (3), obtained by an arbitrary gradient-based method. While $\Theta^*$ is fixed unconditionally, we define $w^*$ as follows: for a sample index $i \in G$, $w^*_i$ is simply set to $\tilde w_i$ so that $w^*_i - \tilde w_i = 0$. Otherwise, for a sample index $i \in B$, we set $w^*_i = 0$. Hence, $w^*$ depends on $\tilde w$.

In order to derive the upper bound on the Frobenius norm error, we first need to assume the standard restricted strong convexity condition of (3) with respect to the parameter $\Theta$:

(C-3) (Restricted strong convexity condition) Let $\Delta$ be an arbitrary error of the parameter $\Theta$, that is, $\Delta := \Theta - \Theta^*$. Then, for any possible error $\Delta$ such that $\|\Delta\|_F \le 1$,
\[
\Big\langle\Big\langle (\Theta^*)^{-1} - (\Theta^* + \Delta)^{-1},\ \Delta \Big\rangle\Big\rangle \ge \kappa_l \|\Delta\|_F^2 \tag{5}
\]
where $\kappa_l$ is a curvature parameter.

Note that in order to guarantee Frobenius-norm-based error bounds, (C-3) is required even for vanilla Gaussian graphical models without outliers; it has been well studied in several works, as in the following lemma:

Lemma 1 (Section B.4 of [22]). For any $\Delta \in \mathbb{R}^{p\times p}$ such that $\|\Delta\|_F \le 1$,
\[
\Big\langle\Big\langle (\Theta^*)^{-1} - (\Theta^* + \Delta)^{-1},\ \Delta \Big\rangle\Big\rangle \ge \big(|||\Theta^*|||_2 + 1\big)^{-2} \|\Delta\|_F^2,
\]
thus (C-3) holds with $\kappa_l = \big(|||\Theta^*|||_2 + 1\big)^{-2}$.

While (C-3) is a standard condition that is also imposed for conventional estimators on a clean set of samples, we additionally require the following condition for successful estimation by (3) on corrupted samples:

(C-4) Consider an arbitrary local optimum $(\tilde\Theta, \tilde w)$. Let $\tilde\Delta := \tilde\Theta - \Theta^*$ and $\tilde\Gamma := \tilde w - w^*$.
Then,
\[
\Big\langle\Big\langle \tfrac{1}{h}\sum_{i=1}^n \tilde\Gamma_i X^{(i)}(X^{(i)})^\top,\ \tilde\Delta \Big\rangle\Big\rangle \le \tau_1(n,p)\|\tilde\Delta\|_F + \tau_2(n,p)\|\tilde\Delta\|_1
\]
for some positive quantities $\tau_1(n,p)$ and $\tau_2(n,p)$ depending on $n$ and $p$. These will be specified below for some concrete examples.

(C-4) can be understood as a structural incoherence condition between the model parameter $\Theta$ and the weight parameter $w$. Such a condition is usually imposed when analyzing estimators with multiple parameters (for example, see [25] for a robust linear regression estimator). Since $w^*$ is defined depending on $\tilde w$, each local optimum has its own (C-4) condition. We will see in the sequel that in some reasonable cases, this condition holds for any local optimum with high probability. Also note that in the case of clean samples, the condition (C-4) is trivially satisfied, since $\tilde\Gamma_i = 0$ for all $i \in \{1, \dots, n\}$ and hence the left-hand side is 0.

Armed with these conditions, we now state our main theorem on the error bounds of our estimator (3):

Theorem 1. Consider corrupted Gaussian graphical models. Let $(\tilde\Theta, \tilde w)$ be any local optimum of the M-estimator (3). Suppose that $(\tilde\Theta, \tilde w)$ satisfies the condition (C-4). Suppose also that the regularization parameter $\lambda$ in (3) is set such that
\[
4\max\Big\{ \Big\| \tfrac{1}{h}\sum_{i=1}^n w^*_i X^{(i)}(X^{(i)})^\top - (\Theta^*)^{-1} \Big\|_\infty,\ \tau_2(n,p) \Big\} \le \lambda \le \frac{\kappa_l - \tau_1(n,p)}{3R}. \tag{6}
\]
Then, this local optimum $(\tilde\Theta, \tilde w)$ is guaranteed to be consistent as follows:
\[
\|\tilde\Theta - \Theta^*\|_F \le \frac{1}{\kappa_l}\Big( \frac{3\lambda\sqrt{k+p}}{2} + \tau_1(n,p) \Big)
\quad\text{and}\quad
\|\tilde\Theta - \Theta^*\|_{1,\text{off}} \le \frac{2}{\lambda\kappa_l}\Big( 3\lambda\sqrt{k+p} + \tau_1(n,p) \Big)^2. \tag{7}
\]
The statement in Theorem 1 holds deterministically; the probabilistic statement arises when we show that (C-4) and (6) are satisfied for a given $(\tilde\Theta, \tilde w)$. Note that, defining $L(\Theta, w) := \big\langle\big\langle \Theta,\ \tfrac{1}{h}\sum_{i=1}^n w_i X^{(i)}(X^{(i)})^\top \big\rangle\big\rangle - \log\det(\Theta)$, it is standard to choose $\lambda$ based on $\|\nabla_\Theta L(\Theta^*, w^*)\|_\infty$ (see [26] for details). It is also important to note that the term $\sqrt{k+p}$ captures the relation between the element-wise $\ell_1$ norm and the error norm $\|\cdot\|_F$ including diagonal entries.
Due to space limits, the proof of Theorem 1 (and all other proofs) is provided in the Supplements [27]. Now, it is natural to ask how easily the conditions in Theorem 1 can be satisfied. Intuitively, it is impossible to recover the true parameter by a weighting approach as in (3) when the amount of corruption exceeds that of normal observation errors. To this end, suppose that we have some upper bound on the corruptions:

(C-B1) For some function $f(\cdot)$, we have $|||X_B|||_2^2 \le f(X_B)\sqrt{h\log p}$, where $X_B$ denotes the sub-design matrix in $\mathbb{R}^{|B|\times p}$ corresponding to the outliers.

Under this assumption, we can properly choose the regularization parameter $\lambda$ satisfying (6) as follows:

Corollary 1. Consider corrupted Gaussian graphical models with conditions (C-2) and (C-B1). Suppose that we choose the regularization parameter
\[
\lambda = 4\max\Bigg\{ 8\big(\max_i \Sigma^*_{ii}\big)\sqrt{\frac{10\tau\log p}{h - |B|}} + \frac{|B|}{h}\|\Sigma^*\|_\infty,\ f(X_B)\sqrt{\frac{\log p}{h}} \Bigg\} \le \frac{\kappa_l - f(X_B)\sqrt{\frac{|B|\log p}{h}}}{3R}.
\]
Then, any local optimum of (3) is guaranteed to satisfy (C-4) and to have the error bounds in (7) with probability at least $1 - c_1\exp(-c_1' h\lambda^2)$ for some universal positive constants $c_1$ and $c_1'$.

If we further assume that the number of corrupted samples scales at most with $\sqrt{n}$:

(C-B2) $|B| \le a\sqrt{n}$ for some constant $a \ge 0$,

then we can derive the following result as another corollary of Theorem 1:

Corollary 2. Consider corrupted Gaussian graphical models. Suppose that the conditions (C-2), (C-B1) and (C-B2) hold. Also suppose that the regularization parameter $\lambda$ is set as $c\sqrt{\frac{\log p}{n}}$, where $c := 4\max\Big\{ 16\big(\max_i \Sigma^*_{ii}\big)\sqrt{5\tau} + \frac{2a\|\Sigma^*\|_\infty}{\sqrt{\log p}},\ \sqrt{2}\,f(X_B) \Big\}$. Then, if the sample size $n$ is lower bounded as
\[
n \ge \max\Big\{ 16a^2,\ \big(|||\Theta^*|||_2 + 1\big)^4 \big(3Rc + f(X_B)\sqrt{2|B|}\big)^2 \log p \Big\},
\]
then any local optimum of (3) is guaranteed to satisfy (C-4) and to have the following error bound:
\[
\|\tilde\Theta - \Theta^*\|_F \le \frac{1}{\kappa_l}\Bigg( \frac{3c}{2}\sqrt{\frac{(k+p)\log p}{n}} + f(X_B)\sqrt{\frac{2|B|\log p}{n}} \Bigg) \tag{8}
\]
with probability at least $1 - c_1\exp(-c_1' h\lambda^2)$ for some universal positive constants $c_1$ and $c_1'$.
Note that the $\|\cdot\|_{1,\text{off}}$ norm based error bound can also be derived easily from (7) using this selection of $\lambda$. Corollary 2 reveals an interesting result: even when $O(\sqrt{n})$ samples out of a total of $n$ are corrupted, our estimator (3) can successfully recover the true parameter with the guaranteed error in (8). The first term in this bound is $O\Big(\sqrt{\frac{(k+p)\log p}{n}}\Big)$, which exactly recovers the Frobenius error bound for the case without outliers (see [13, 22] for example). Due to the outliers, performance degrades by the second term, which is $O\Big(\sqrt{\frac{|B|\log p}{n}}\Big)$. To the best of our knowledge, these are the first statistical error bounds on parameter estimation for Gaussian graphical models with outliers. Also note that Corollary 1 concerns a single local optimum derived by an arbitrary optimization algorithm. For simultaneous guarantees over multiple local optima, we may use a union bound with the corollary.

When Outliers Follow a Gaussian Graphical Model. Now let us provide a concrete example and show how $f(X_B)$ in (C-B1) is precisely specified in this case:

(C-B3) Outliers in the set $B$ are drawn from another Gaussian graphical model (1) with a parameter $(\Sigma_B)^{-1}$.

This can be understood as a Gaussian mixture model where most of the samples are drawn from the distribution with covariance $(\Theta^*)^{-1}$, whose precision matrix we want to estimate, and a small portion of samples are drawn from the one with covariance $\Sigma_B$. In this case, Corollary 2 can be further shaped as follows:

Corollary 3. Suppose that the conditions (C-2), (C-B2) and (C-B3) hold. Then the statement in Corollary 2 holds with $f(X_B) := \frac{4\sqrt{2a}\,\big(1 + \sqrt{\log p}\big)^2\, |||\Sigma_B|||_2}{\sqrt{\log p}}$.

4 Experiments

In this section we corroborate the performance of our Trimmed Graphical Lasso (trim-glasso) algorithm on simulated data. We compare against glasso: the vanilla Graphical Lasso [11]; the t-lasso and t*-lasso methods [19]; and robust-LL: the robustified-likelihood approach of [20].

4.1 Simulated data

Our simulation setup is similar to [20] and is akin to gene regulatory networks.
Namely, we consider four different scenarios where the outliers are generated from models with different graphical structures. Specifically, each sample is generated from the following mixture distribution:
\[
y_k \sim (1 - p_o)\, N_p(0, \Theta^{-1}) + \frac{p_o}{2}\, N_p(-\mu, \Theta_o^{-1}) + \frac{p_o}{2}\, N_p(\mu, \Theta_o^{-1}), \quad k = 1, \dots, n,
\]
where $p_o = 0.1$, $n = 100$, and $p = 150$. Four different outlier distributions are considered:
M1: $\mu = (1, \dots, 1)^T$, $\Theta_o = \tilde\Theta$;
M2: $\mu = (1.5, \dots, 1.5)^T$, $\Theta_o = \tilde\Theta$;
M3: $\mu = (1, \dots, 1)^T$, $\Theta_o = I_p$;
M4: $\mu = (1.5, \dots, 1.5)^T$, $\Theta_o = I_p$.
We also consider the scenario where the outliers are not symmetric about the mean and simulate data from the following model:
\[
\text{M5:}\quad y_k \sim (1 - p_o)\, N_p(0, \Theta^{-1}) + p_o\, N_p(\mathbf{2}, I_p), \quad k = 1, \dots, n.
\]

[Figure 1: Average ROC curves for the comparison methods for contamination scenarios M1–M4. Panels (a)–(d) correspond to M1–M4; axes are 1-specificity (x) vs. sensitivity (y); methods shown: glasso, t-lasso, t*-lasso, robust-LL (best), trim-glasso (best).]

For each simulation run, $\Theta$ is a randomly generated precision matrix corresponding to a network with 9 hub nodes, simulated as follows. Let $A$ be the adjacency matrix of the network. For all $i < j$ we set $A_{ij} = 1$ with probability 0.03, and zero otherwise. We set $A_{ji} = A_{ij}$. We then randomly select 9 hub nodes and set the elements of the corresponding rows and columns of $A$ to one with probability 0.4 and zero otherwise. Using $A$, the simulated nonzero coefficients of the precision matrix are sampled as follows. First we create a matrix $E$ so that $E_{ij} = 0$ if $A_{ij} = 0$, and $E_{ij}$ is sampled uniformly from $[-0.75, -0.23] \cup [0.25, 0.75]$ if $A_{ij} \ne 0$. Then we set $E = \frac{E + E^T}{2}$. Finally we set $\Theta = E + (0.1 - \Lambda_{\min}(E)) I_p$, where $\Lambda_{\min}(E)$ is the smallest eigenvalue of $E$.
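The precision-matrix construction just described can be sketched as follows. This is a best-effort NumPy rendering: the endpoint $-0.23$ is taken verbatim from the text, and the choice between the two sampling intervals, which the text leaves unspecified, is made with a fair coin.

```python
import numpy as np

def simulate_hub_precision(p=150, n_hubs=9, seed=0):
    rng = np.random.default_rng(seed)
    # Adjacency: background edges with prob 0.03, then hub rows/columns
    # whose entries are switched on with prob 0.4.
    A = (rng.random((p, p)) < 0.03).astype(float)
    hubs = rng.choice(p, size=n_hubs, replace=False)
    for hub in hubs:
        on = (rng.random(p) < 0.4).astype(float)
        A[hub, :] = np.maximum(A[hub, :], on)
        A[:, hub] = np.maximum(A[:, hub], on)
    A = np.triu(A, k=1)
    A = A + A.T                               # symmetric, zero diagonal
    # Nonzero coefficients sampled from [-0.75, -0.23] U [0.25, 0.75].
    neg = rng.uniform(-0.75, -0.23, size=(p, p))
    pos = rng.uniform(0.25, 0.75, size=(p, p))
    E = np.where(rng.random((p, p)) < 0.5, neg, pos) * A
    E = (E + E.T) / 2
    lam_min = np.linalg.eigvalsh(E).min()
    return E + (0.1 - lam_min) * np.eye(p)    # smallest eigenvalue becomes 0.1
```

The final shift guarantees positive definiteness: all eigenvalues of $E$ are moved up so the smallest one equals 0.1.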
$\tilde\Theta$ is a randomly generated precision matrix, generated in the same way as $\Theta$. For the robustness parameter $\beta$ of the robust-LL method, we consider $\beta \in \{0.005, 0.01, 0.02, 0.03\}$ as recommended in [20]. For the trim-glasso method we consider $\frac{100h}{n} \in \{90, 85, 80\}$. Since all the robust comparison methods converge to a stationary point, we tested various initialization strategies for the concentration matrix, including $I_p$, $(S + \lambda I_p)^{-1}$ and the estimate from glasso. We did not observe any noticeable impact on the results.

Figure 1 presents the average ROC curves of the comparison methods over 100 simulated data sets for scenarios M1–M4 as the tuning parameter $\lambda$ varies. In the figure, for the robust-LL and trim-glasso methods, we depict the best curves with respect to the parameters $\beta$ and $h$, respectively. Due to space constraints, the detailed results for all the values of $\beta$ and $h$ considered, as well as the results for model M5, are provided in the Supplements [27]. From the ROC curves we can see that our proposed approach is competitive compared to the alternative robust approaches t-lasso, t*-lasso and robust-LL. The edge over glasso is even more pronounced for scenarios M2, M4 and M5. Surprisingly, trim-glasso with $h/n = 80\%$ achieves superior sensitivity for nearly any specificity.

[Figure 2: (a) Histogram of standardized gene expression levels for gene ORC3 (x-axis: rescaled ORC3 gene expression; y-axis: frequency). (b) Network estimated by trim-glasso.]

Computationally, the trim-glasso method is also competitive with the alternatives. The average run-time over the path of tuning parameters $\lambda$ is 45.78s for t-lasso, 22.14s for t*-lasso, 11.06s for robust-LL, 1.58s for trim-glasso, and 1.04s for glasso. Experiments were run in R on a single computing node with an Intel Core i5 2.5GHz CPU and 8GB memory. For t-lasso, t*-lasso and robust-LL we used the R implementations provided by the methods' authors. For glasso we used the glassopath package.
4.2 Application to the Analysis of Yeast Gene Expression Data

We analyze a yeast microarray dataset generated by [28]. The dataset concerns $n = 112$ yeast segregants (instances). We focused on $p = 126$ genes (variables) belonging to the cell-cycle pathway as provided by the KEGG database [29]. For each of these genes we standardize the gene expression data to zero mean and unit standard deviation. We observed that the expression levels of some genes are clearly not symmetric about their means and might include outliers. For example, the histogram for gene ORC3 is presented in Figure 2(a). For the robust-LL method we set $\beta = 0.05$, and for trim-glasso we use $h/n = 80\%$. We use 5-fold CV to choose the tuning parameters for each method. After $\lambda$ is chosen for each method, we rerun the methods using the full dataset to obtain the final precision matrix estimates.

Figure 2(b) shows the cell-cycle pathway estimated by our proposed method. For comparison, the cell-cycle pathway from KEGG [29] is provided in the Supplements [27]. It is important to note that the KEGG graph corresponds to what is currently known about the pathway; it should not be treated as the ground truth. Certain discrepancies between the KEGG and estimated graphs may also be caused by inherent limitations in the dataset used for modeling. For instance, some edges in the cell-cycle pathway may not be observable from gene expression data. Additionally, the perturbation of cellular systems might not be strong enough to enable accurate inference of some of the links. glasso tends to estimate more links than the robust methods. We postulate that the lack of robustness might result in inaccurate network reconstruction and the identification of spurious links. Robust methods tend to estimate networks that are more consistent with KEGG (F1 score of 0.23 for glasso, 0.37 for t*-lasso, 0.39 for robust-LL and 0.41 for trim-glasso, where the F1 score is the harmonic mean of precision and recall).
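The edge-recovery F1 score quoted above (harmonic mean of precision and recall over recovered edges) can be computed as in the following sketch; the function name and the comparison via 0/1 adjacency matrices are ours.

```python
import numpy as np

def edge_f1(A_est, A_ref):
    """F1 score between two symmetric 0/1 adjacency matrices,
    counting each off-diagonal edge once (upper triangle)."""
    iu = np.triu_indices_from(A_est, k=1)
    est = A_est[iu].astype(bool)
    ref = A_ref[iu].astype(bool)
    tp = np.sum(est & ref)                    # true-positive edges
    if est.sum() == 0 or ref.sum() == 0 or tp == 0:
        return 0.0
    precision = tp / est.sum()
    recall = tp / ref.sum()
    return 2 * precision * recall / (precision + recall)
```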
Indeed, our approach recovers several characteristics of the KEGG pathway. For instance, genes CDC6 (a key regulator of DNA replication, playing important roles in the activation and maintenance of the checkpoint mechanisms coordinating S phase and mitosis) and PDS1 (an essential gene for meiotic progression and mitotic cell cycle arrest) are identified as hub genes, while genes CLB3, BRN1 and YCG1 are unconnected to any other genes.

References
[1] S. L. Lauritzen. Graphical models. Oxford University Press, USA, 1996.
[2] Jung Hun Oh and Joseph O. Deasy. Inference of radio-responsive gene regulatory networks using the graphical lasso algorithm. BMC Bioinformatics, 15(S-7):S5, 2014.
[3] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[4] J. W. Woods. Markov image modeling. IEEE Transactions on Automatic Control, 23:846–850, October 1978.
[5] M. Hassner and J. Sklansky. Markov random field models of digitized image texture. In ICPR78, pages 538–540, 1978.
[6] G. Cross and A. Jain. Markov random field texture models. IEEE Trans. PAMI, 5:25–39, 1983.
[7] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.
[8] B. D. Ripley. Spatial statistics. Wiley, New York, 1981.
[9] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for graphical models. In Neur. Info. Proc. Sys. (NIPS), 27, 2014.
[10] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[11] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics, 2007.
[12] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Jour. Mach. Lear. Res., 9:485–516, March 2008.
[13] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu.
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011. [14] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, UK, 2004. [15] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006. [16] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur. Info. Proc. Sys. (NIPS), 25, 2012. [17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996. [18] Z.J. Daye, J. Chen, and H. Li. High-dimensional heteroscedastic regression with an application to eQTL data analysis. Biometrics, 68:316–326, 2012. [19] Michael Finegold and Mathias Drton. Robust graphical modeling of gene networks using classical and alternative t-distributions. The Annals of Applied Statistics, 5(2A):1057–1080, 2011. [20] H. Sun and H. Li. Robust Gaussian graphical modeling via ℓ1 penalization. Biometrics, 68:1197–1206, 2012. [21] A. Alfons, C. Croux, and S. Gelper. Sparse least trimmed squares regression for analyzing high-dimensional large data sets. Ann. Appl. Stat., 7:226–248, 2013. [22] P-L Loh and M. J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. arXiv preprint arXiv:1305.2436v2, 2013. [23] C. J. Hsieh, M. Sustik, I. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In Neur. Info. Proc. Sys. (NIPS), 24, 2011. [24] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic Univ. Louvain (UCL), 2007. [25] N. H. Nguyen and T. D. Tran. Robust Lasso with missing and grossly corrupted observations. IEEE Trans. Info. Theory, 59(4):2036–2058, 2013. [26] S. 
Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012. [27] E. Yang and A. C. Lozano. Robust Gaussian graphical modeling with the trimmed graphical Lasso. arXiv:1510.08512, 2015. [28] Rachel B Brem and Leonid Kruglyak. The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proceedings of the National Academy of Sciences of the United States of America, 102(5):1572–1577, 2005. [29] M. Kanehisa, S. Goto, Y. Sato, M. Kawashima, M. Furumichi, and M. Tanabe. Data, information, knowledge and principle: back to metabolism in KEGG. Nucleic Acids Res., 42:D199–D205, 2014.
Parallelizing MCMC with Random Partition Trees Xiangyu Wang Dept. of Statistical Science Duke University xw56@stat.duke.edu Fangjian Guo Dept. of Computer Science Duke University guo@cs.duke.edu Katherine A. Heller Dept. of Statistical Science Duke University kheller@stat.duke.edu David B. Dunson Dept. of Statistical Science Duke University dunson@stat.duke.edu Abstract The modern scale of data has brought new challenges to Bayesian inference. In particular, conventional MCMC algorithms are computationally very expensive for large data sets. A promising approach to solve this problem is embarrassingly parallel MCMC (EP-MCMC), which first partitions the data into multiple subsets and runs independent sampling algorithms on each subset. The subset posterior draws are then aggregated via some combining rules to obtain the final approximation. Existing EP-MCMC algorithms are limited by approximation accuracy and difficulty in resampling. In this article, we propose a new EP-MCMC algorithm PART that solves these problems. The new algorithm applies random partition trees to combine the subset posterior draws, which is distribution-free, easy to resample from and can adapt to multiple scales. We provide theoretical justification and extensive experiments illustrating empirical performance. 1 Introduction Bayesian methods are popular for their success in analyzing complex data sets. However, for large data sets, Markov Chain Monte Carlo (MCMC) algorithms, widely used in Bayesian inference, can suffer from huge computational expense. With large data, there is increasing time per iteration, increasing time to convergence, and difficulties with processing the full data on a single machine due to memory limits. To ameliorate these concerns, various methods such as stochastic gradient Monte Carlo [1] and sub-sampling based Monte Carlo [2] have been proposed. Among directions that have been explored, embarrassingly parallel MCMC (EP-MCMC) seems most promising. 
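The divide-sample-combine pattern behind EP-MCMC can be illustrated on a toy conjugate-Gaussian model, where each subset posterior is available in closed form. This is only a sketch of the workflow: the model, the underpowered prior π(θ)^(1/m), and the naive averaging step are illustrative stand-ins for the combining rules discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_j ~ N(theta, 1) with a conjugate N(0, 10^2) prior on theta.
theta_true = 2.0
x = rng.normal(theta_true, 1.0, size=10_000)

m = 10
subsets = np.array_split(x, m)          # step 1: partition the data

def subset_posterior_draws(xs, n_draws=5_000, prior_var=100.0):
    # Subset posterior under the underpowered prior pi(theta)^(1/m),
    # whose variance is m * prior_var; exact because the model is conjugate.
    prec = 1.0 / (m * prior_var) + len(xs)   # posterior precision
    mean = xs.sum() / prec
    return rng.normal(mean, prec ** -0.5, size=n_draws)

draws = [subset_posterior_draws(s) for s in subsets]   # step 2: sample per subset
combined = np.mean(draws, axis=0)                      # step 3: combine (naive)
# combined.mean() lands close to the full-data posterior mean in this toy case
```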
EP-MCMC algorithms typically divide the data into multiple subsets and run independent MCMC chains simultaneously on each subset. The posterior draws are then aggregated according to some rules to produce the final approximation. This approach is clearly more efficient as now each chain involves a much smaller data set and the sampling is communication-free. The key to a successful EP-MCMC algorithm lies in the speed and accuracy of the combining rule. Existing EP-MCMC algorithms can be roughly divided into three categories. The first relies on asymptotic normality of posterior distributions. [3] propose a “Consensus Monte Carlo” algorithm, which produces the final approximation by a weighted averaging over all subset draws. This approach is effective when the posterior distributions are close to Gaussian, but could suffer from huge bias when skewness or multiple modes are present. The second category relies on calculating an appropriate variant of a mean or median of the subset posterior measures [4, 5]. These approaches rely on asymptotics (size of data increasing to infinity) to justify accuracy, and lack guarantees in finite samples. The third category relies on the product density equation (PDE) in (1). Assuming X is the observed data and θ is the parameter of interest, when the observations are iid conditioned on θ, for any partition of X = X(1) ∪X(2) ∪· · · ∪X(m), the following identity holds, p(θ|X) ∝π(θ)p(X|θ) ∝p(θ|X(1))p(θ|X(2)) · · · p(θ|X(m)), (1) if the prior on the full data and subsets satisfies π(θ) = Qm i=1 πi(θ). [6] proposes using kernel density estimation on each subset posterior and then combining via (1). They use an independent Metropolis sampler to resample from the combined density. [7] apply the Weierstrass transform directly to (1) and develop two sampling algorithms based on the transformed density. These methods guarantee the approximation density converges to the true posterior density as the number of posterior draws increases. 
However, as both are kernel-based, the two methods are limited by two major drawbacks. The first is the inefficiency of resampling. Kernel density estimators are essentially mixture distributions. Assuming we have collected 10,000 posterior samples on each machine, then multiplying just two densities already yields a mixture distribution containing 108 components, each of which is associated with a different weight. The resampling requires the independent Metropolis sampler to search over an exponential number of mixture components and it is likely to get stuck at one “good” component, resulting in high rejection rates and slow mixing. The second is the sensitivity to bandwidth choice, with one bandwidth applied to the whole space. In this article, we propose a novel EP-MCMC algorithm termed “parallel aggregation random trees” (PART), which solves the above two problems. The algorithm inhibits the explosion of mixture components so that the aggregated density is easy to resample. In addition, the density estimator is able to adapt to multiple scales and thus achieve better approximation accuracy. In Section 2, we motivate the new methodology and present the algorithm. In Section 3, we present error bounds and prove consistency of PART in the number of posterior draws. Experimental results are presented in Section 4. Proofs and part of the numerical results are provided in the supplementary materials. 2 Method Recall the PDE identity (1) in the introduction. When data set X is partitioned into m subsets X = X(1) ∪· · · ∪X(m), the posterior distribution of the ith subset can be written as f (i)(θ) ∝π(θ)1/mp(X(i)|θ), (2) where π(θ) is the prior assigned to the full data set. Assuming observations are iid given θ, the relationship between the full data posterior and subset posteriors is captured by p(θ|X) ∝π(θ) m Y i=1 p(X(i)|θ) ∝ m Y i=1 f (i)(θ). 
(3) Due to the flaws of applying kernel-based density estimation to (3) mentioned above, we propose to use random partition trees or multi-scale histograms. Let FK be the collection of all Rppartitions formed by K disjoint rectangular blocks, where a rectangular block takes the form of Ak def = (lk,1, rk,1] × (lk,2, rk,2] × · · · (lk,p, rk,p] ⊆Rp for some lk,q < rk,q. A K-block histogram is then defined as ˆf (i)(θ) = K X k=1 n(i) k N|Ak|1(θ ∈Ak), (4) where {Ak : k = 1, 2, · · · , K} ∈FK are the blocks and N, n(i) k are the total number of posterior samples on the ith subset and of those inside the block Ak respectively (assuming the same N across subsets). We use | · | to denote the area of a block. Assuming each subset posterior is approximated by a K-block histogram, if the partition {Ak} is restricted to be the same across all subsets, then the aggregated density after applying (3) is still a K-block histogram (illustrated in the supplement), ˆp(θ|X) = 1 Z m Y i=1 ˆf (i)(θ) = 1 Z K X k=1  m Y i=1 n(i) k |Ak|  1(θ ∈Ak) = K X k=1 wkgk(θ), (5) where Z = PK k=1 Qm i=1 n(i) k /|Ak|m−1 is the normalizing constant, wk’s are the updated weights, and gk(θ) = unif(θ; Ak) is the block-wise distribution. Common histogram blocks across subsets control the number of mixture components, leading to simple aggregation and resampling procedures. Our PART algorithm consists of space partitioning followed by density aggregation, with aggregation simply multiplying densities across subsets for each block and then normalizing. 2 2.1 Space Partitioning To find good partitions, our algorithm recursively bisects (not necessarily evenly) a previous block along a randomly selected dimension, subject to certain rules. Such partitioning is multi-scale and related to wavelets [8]. Assume we are currently splitting the block A along the dimension q and denote the posterior samples in A by {θ(i) j }j∈A for the ith subset. 
The cut point on dimension q is determined by a partition rule φ({θ(1) j,q }, {θ(2) j,q }, · · · , {θ(m) j,q }). The resulting two blocks are subject to further bisecting under the same procedure until one of the following stopping criteria is met — (i) nk/N < δρ or (ii) the area of the block |Ak| becomes smaller than δ|A|. The algorithm returns a tree with K leafs, each corresponding to a block Ak. Details are provided in Algorithm 1. Algorithm 1 Partition tree algorithm 1: procedure BUILDTREE({θ(1) j }, {θ(2) j }, · · · , {θ(m) j }, φ(·), δρ, δa, N, L, R) 2: D ←{1, 2, · · · , p} 3: while D not empty do 4: Draw q uniformly at random from D. ▷Randomly choose the dimension to cut 5: θ∗ q ←φ({θ(1) j,q }, {θ(2) j,q }, · · · , {θ(m) j,q }), T .n(i) ←Cardinality of {θ(i) j } for all i 6: if θ∗ q −Lq > δa, Rq −θ∗ q > δa and min(P j 1(θ(i) j,q ≤θ∗ q), P j 1(θ(i) j,q > θ∗ q)) > Nδρ for all i then 7: L′ ←L, L′ q ←θ∗ q, R′ ←R, R′ q ←θ∗ q ▷Update left and right boundaries 8: T .L ←BUILDTREE({θ(1) j : θ(1) j,q ≤θ∗ q}, · · · , {θ(m) j : θ(m) j,q ≤θ∗ q}, · · · , N, L, R′) 9: T .R ←BUILDTREE({θ(1) j : θ(1) j,q > θ∗ q}, · · · , {θ(m) j : θ(m) j,q > θ∗ q}, · · · , N, L′, R) 10: return T 11: else 12: D ←D \ {q} ▷Try cutting at another dimension 13: end if 14: end while 15: T .L ←NULL, T .R ←NULL, return T ▷Leaf node 16: end procedure In Algorithm 1, δ|A| becomes the minimum edge length of a block δa (possibly different across dimensions). Quantities L, R ∈Rp are the left and right boundaries of the samples respectively, which take the sample minimum/maximum when the support is unbounded. We consider two choices for the partition rule φ(·) — maximum (empirical) likelihood partition (ML) and median/KD-tree partition (KD). Maximum Likelihood Partition (ML) ML-partition searches for partitions by greedily maximizing the empirical log likelihood at each iteration. 
For m = 1 we have θ∗= φML({θj,q, j = 1, · · · , n}) = arg max n1+n2=n,A1∪A2=A  n1 n|A1| n1 n2 n|A2| n2 , (6) where n1 and n2 are counts of posterior samples in A1 and A2, respectively. The solution to (6) falls inside the set {θj}. Thus, a simple linear search after sorting samples suffices (by book-keeping the ordering, sorting the whole block once is enough for the entire procedure). For m > 1, we have φq,ML(·) = arg max θ∗∈∪m i=1{θ(i) j } m Y i=1  n(i) 1 n(i)|A1| n(i) 1  n(i) 2 n(i)|A2| n(i) 2 , (7) similarly solved by a linear search. This is dominated by sorting and takes O(n log n) time. Median/KD-Tree Partition (KD) Median/KD-tree partition cuts at the empirical median of posterior samples. When there are multiple subsets, the median is taken over pooled samples to force {Ak} to be the same across subsets. Searching for median takes O(n) time [9], which is faster than ML-partition especially when the number of posterior draws is large. The same partitioning strategy is adopted by KD-trees [10]. 3 2.2 Density Aggregation Given a common partition, Algorithm 2 aggregates all subsets in one stage. However, assuming a single “good” partition for all subsets is overly restrictive when m is large. Hence, we also consider pairwise aggregation [6, 7], which recursively groups subsets into pairs, combines each pair with Algorithm 2, and repeats until one final set is obtained. Run time of PART is dominated by space partitioning (BUILDTREE), with normalization and resampling very fast. 
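A minimal sketch of the single-subset ML rule (6): scan candidate cuts at the sorted sample positions and keep the one maximizing the empirical log-likelihood. The function name and the explicit block boundaries are illustrative simplifications; the paper's implementation additionally book-keeps the sort order across recursive calls.

```python
import numpy as np

def ml_cut_point(samples, left, right):
    """Single-subset ML partition rule (6): scan cuts at sorted sample
    positions and return the one maximizing the empirical log-likelihood
    n1*log(n1/(n*|A1|)) + n2*log(n2/(n*|A2|)) for the block (left, right]."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    best_ll, best_cut = -np.inf, None
    for k in range(1, n):                 # n1 = k samples fall left of the cut
        cut = s[k - 1]
        a1, a2 = cut - left, right - cut  # |A1|, |A2|
        if a1 <= 0 or a2 <= 0:
            continue
        ll = k * np.log(k / (n * a1)) + (n - k) * np.log((n - k) / (n * a2))
        if ll > best_ll:
            best_ll, best_cut = ll, cut
    return best_cut

# The densest admissible split wins: here the best cut hugs the right
# edge of the cluster {1, 2, 3} inside the block (0, 10].
print(ml_cut_point([1, 2, 3, 9], 0.0, 10.0))   # prints 3.0
```

The multi-subset rule (7) is the same scan with the per-subset log-likelihood terms summed; the median/KD rule simply replaces the search with the pooled empirical median.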
Algorithm 2 Density aggregation algorithm (drawing N ′ samples from the aggregated posterior) 1: procedure ONESTAGEAGGREGATE({θ(1) j }, {θ(2) j }, · · · , {θ(m) j }, φ(·), δρ, δa, N, N ′, L, R) 2: T ←BUILDTREE({θ(1) j }, {θ(2) j }, · · · , {θ(m) j }, φ(·), δρ, δa, N, L, R), Z ←0 3: ({Ak}, {n(i) k }) ←TRAVERSELEAF(T ) 4: for k = 1, 2, · · · , K do 5: ˜wk ←Qm i=1 n(i) k /|Ak|m−1, Z ←Z + ˜wk ▷Multiply inside each block 6: end for 7: wk ←˜wk/Z for all k ▷Normalize 8: for t = 1, 2, · · · , N ′ do 9: Draw k with weights {wk} and then draw θt ∼gk(θ) 10: end for 11: return {θ1, θ2, · · · , θN ′} 12: end procedure 2.3 Variance Reduction and Smoothing Random Tree Ensemble Inspired by random forests [11, 12], the full posterior is estimated by averaging T independent trees output by Algorithm 1. Smoothing and averaging can reduce variance and yield better approximation accuracy. The trees can be built in parallel and resampling in Algorithm 2 only additionally requires picking a tree uniformly at random. Local Gaussian Smoothing As another approach to increase smoothness, the blockwise uniform distribution in (5) can be replaced by a Gaussian distribution gk = N(θ; µk, Σk), with mean and covariance estimated “locally” by samples within the block. A multiplied Gaussian approximation is used: Σk = (Pm i=1 ˆΣ(i)−1 k )−1, µk = Σk(Pm i=1 ˆΣ(i)−1 k ˆµ(i) k ), where ˆΣ(i) k and ˆµ(i) k are estimated with the ith subset. We apply both random tree ensembles and local Gaussian smoothing in all applications of PART in this article unless explicitly stated otherwise. 3 Theory In this section, we provide consistency theory (in the number of posterior samples) for histograms and the aggregated density. We do not consider the variance reduction and smoothing modifications in these developments for simplicity in exposition, but extensions are possible. 
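The core of Algorithm 2 (multiplying counts inside each shared block as in (5), normalizing, and resampling block indices), together with the multiplied Gaussian of Section 2.3, can be sketched as follows. Function names and the toy counts are illustrative assumptions.

```python
import numpy as np

def aggregate_blocks(counts, areas, n_draws, rng):
    """Block weights from (5): w_k ∝ prod_i n_k^(i) / |A_k|^(m-1), then
    block indices are resampled with those weights. `counts` is an (m, K)
    array of per-subset block counts; `areas` holds the block volumes |A_k|."""
    counts = np.asarray(counts, dtype=float)
    m = counts.shape[0]
    w = counts.prod(axis=0) / np.asarray(areas, dtype=float) ** (m - 1)
    w /= w.sum()                              # normalize (the constant Z)
    return rng.choice(len(w), size=n_draws, p=w)

def multiply_gaussians(means, covs):
    """Local Gaussian smoothing (Section 2.3): precision-weighted product
    Sigma_k = (sum_i S_i^-1)^-1, mu_k = Sigma_k * sum_i S_i^-1 mu_i."""
    precs = [np.linalg.inv(c) for c in covs]
    cov = np.linalg.inv(sum(precs))
    mean = cov @ sum(p @ mu for p, mu in zip(precs, means))
    return mean, cov

# Toy check: two subsets sharing two unit-volume blocks.
rng = np.random.default_rng(1)
counts = [[90, 10], [80, 20]]
idx = aggregate_blocks(counts, [1.0, 1.0], 1000, rng)
# block 0 carries weight 90*80 / (90*80 + 10*20) ≈ 0.973 of the draws
```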
Section 3.1 provides error bounds on ML and KD-tree partitioning-based histogram density estimators constructed from N independent samples from a single joint posterior; modified bounds can be obtained for MCMC samples incorporating the mixing rate, but will not be considered here. Section 3.2 then provides corresponding error bounds for our PART-aggregrated density estimators in the one-stage and pairwise cases. Detailed proofs are provided in the supplementary materials. Let f(θ) be a p-dimensional posterior density function. Assume f is supported on a measurable set Ω⊆Rp. Since one can always transform Ωto a bounded region by scaling, we simply assume Ω= [0, 1]p as in [8, 13] without loss of generality. We also assume that f ∈C1(Ω). 3.1 Space partitioning Maximum likelihood partition (ML) For a given K, ML partition solves the following problem: ˆfML = arg max 1 N K X k=1 nk log  nk N|Ak|  , s.t. nk/N ≥c0ρ, |Ak| ≥ρ/D, (8) 4 for some c0 and ρ, where D = ∥f∥∞< ∞. We have the following result. Theorem 1. Choose ρ = 1/K1+1/(2p). For any δ > 0, if the sample size satisfies that N > 2(1 −c0)−2K1+1/(2p) log(2K/δ), then with probability at least 1 −δ, the optimal solution to (8) satisfies that DKL(f∥ˆfML) ≤(C1 + 2 log K)K−1 2p + C2 max  log D, 2 log K s K N log 3eN K  log 8 δ  , where C1 = log D + 4pLD with L = ∥f ′∥∞and C2 = 48√p + 1. When multiple densities f (1)(θ), · · · , f (m)(θ) are presented, our goal of imposing the same partition on all functions requires solving a different problem, ( ˆf (i) ML)m i=1 = arg max m X i=1 1 Ni K X k=1 n(i) k log  n(i) k Ni|Ak|  , s.t. n(i) k /Ni ≥c0ρ, |Ak| ≥ρ/D, (9) where Ni is the number of posterior samples for function f (i). A similar result as Theorem 1 for (9) is provided in the supplementary materials. Median partition/KD-tree (KD) The KD-tree ˆfKD cuts at the empirical median for different dimensions. We have the following result. Theorem 2. For any ε > 0, define rε = log2  1 + 1 2+3L/ε  . 
For any δ > 0, if N > 32e2(log K)2K log(2K/δ), then with probability at least 1 −δ, we have ∥ˆfKD −fKD∥1 ≤ε + pLK−rε/p + 4e log K s 2K N log 2K δ  . If f(θ) is further lower bounded by some constant b0 > 0, we can then obtain an upper bound on the KL-divergence. Define rb0 = log2  1 + 1 2+3L/b0  and we have DKL(f∥ˆfKD) ≤pLD b0 K−rb0/p + 8e log K s 2K N log 2K δ  . When there are multiple functions and the median partition is performed on pooled data, the partition might not happen at the empirical median on each subset. However, as long as the partition quantiles are upper and lower bounded by α and 1 −α for some α ∈[1/2, 1), we can establish results similar to Theorem 2. The result is provided in the supplementary materials. 3.2 Posterior aggregation The previous section provides estimation error bounds on individual posterior densities, through which we can bound the distance between the true posterior conditional on the full data set and the aggregated density via (3). Assume we have m density functions {f (i), i = 1, 2, · · · , m} and intend to approximate their aggregated density fI = Q i∈I f (i)/ R Q i∈I f (i), where I = {1, 2, · · · , m}. Notice that for any I′ ⊆I, fI′ = p(θ| S i∈I′ X(i)). Let D = maxI′⊆I ∥fI′∥∞, i.e., D is an upper bound on all posterior densities formed by a subset of X. Also define ZI′ = R Q i∈I′ f (i). These quantities depend only on the model and the observed data (not posterior samples). We denote ˆfML and ˆfKD by ˆf as the following results apply similarly to both methods. The “one-stage” aggregation (Algorithm 2) first obtains an approximation for each f (i) (via either ML-partition or KD-partition) and then computes ˆfI = Q i∈I ˆf (i)/ R Q i∈I ˆf (i). 5 Theorem 3 (One-stage aggregation). Denote the average total variation distance between f (i) and ˆf (i) by ε. Assume the conditions in Theorem 1 and 2 and for ML-partition √ N ≥32c−1 0 p 2(p + 1)K 3 2 + 1 2p s log 3eN K  log 8 δ  and for KD-partition N > 128e2K(log K)2 log(K/δ). 
Then with high probability the total variation distance between fI and ˆfI is bounded by ∥fI−ˆfI∥1 ≤ 2 ZI m(2D)m−1ε, where ZI is a constant that does not depend on the posterior samples. The approximation error of Algorithm 2 increases dramatically with the number of subsets. To ameliorate this, we introduce the pairwise aggregation strategy in Section 2, for which we have the following result. Theorem 4 (Pairwise aggregation). Denote the average total variation distance between f (i) and ˆf (i) by ε. Assume the conditions in Theorem 3. Then with high probability the total variation distance between fI and ˆfI is bounded by ∥fI −ˆfI∥1 ≤(4C0D)log2 m+1ε, where C0 = maxI′′⊂I′⊆I ZI′′ZI′\I′′ ZI′ is a constant that does not depend on posterior samples. 4 Experiments In this section, we evaluate the empirical performance of PART1 and compare the two algorithms PART-KD and PART-ML to the following posterior aggregation algorithms. 1. Simple averaging (average): each aggregated sample is an arithmetic average of M samples coming from M subsets. 2. Weighted averaging (weighted): also called Consensus Monte Carlo algorithm [3], where each aggregated sample is a weighted average of M samples. The weights are optimally chosen for a Gaussian posterior. 3. Weierstrass rejection sampler (Weierstrass): subset posterior samples are passed through a rejection sampler based on the Weierstrass transform to produce the aggregated samples [7]. We use its R package2 for experiments. 4. Parametric density product (parametric): aggregated samples are drawn from a multivariate Gaussian, which is a product of Laplacian approximations to subset posteriors [6]. 5. Nonparametric density product (nonparametric): aggregated posterior is approximated by a product of kernel density estimates of subset posteriors [6]. Samples are drawn with an independent Metropolis sampler. 6. 
Semiparametric density product (semiparametric): similar to the nonparametric, but with subset posteriors estimated semiparametrically [6, 14]. All experiments except the two toy examples use adaptive MCMC [15, 16] 3 for posterior sampling. For PART-KD/ML, one-stage aggregation (Algorithm 2) is used only for the toy examples (results from pairwise aggregation are provided in the supplement). For other experiments, pairwise aggregation is used, which draws 50,000 samples for intermediate stages and halves δρ after each stage to refine the resolution (the value of δρ listed below is for the final stage). The random ensemble of PART consists of 40 trees. 4.1 Two Toy Examples The two toy examples highlight the performance of our methods in terms of (i) recovering multiple modes and (ii) correctly locating posterior mass when subset posteriors are heterogeneous. The PART-KD/PART-ML results are obtained from Algorithm 2 without local Gaussian smoothing. 1MATLAB implementation available from https://github.com/richardkwo/random-tree-parallel-MCMC 2https://github.com/wwrechard/weierstrass 3http://helios.fmi.fi/˜lainema/mcmc/ Bimodal Example Figure 1 shows an example consisting of m = 10 subsets. Each subset consists of 10,000 samples drawn from a mixture of two univariate normals 0.27N(µi,1, σ2 i,1) + 0.73N(µi,2, σ2 i,2), with the means and standard deviations slightly different across subsets, given by µi,1 = −5 + ϵi,1, µi,2 = 5 + ϵi,2 and σi,1 = 1 + |δi,1|, σi,2 = 4 + |δi,2|, where ϵi,l ∼N(0, 0.5), δi,l ∼N(0, 0.1) independently for i = 1, · · · , 10 and l = 1, 2. The resulting true combined posterior (red solid) consists of two modes with different scales. In Figure 1, the left panel shows the subset posteriors (dashed) and the true posterior; the right panel compares the results with various methods to the truth. A few are omitted in the graph: average and weighted average overlap with parametric, and Weierstrass overlaps with PART-KD/PART-ML. 
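The subset construction for the bimodal example can be sketched as below. This is an illustrative simulation following the stated parameters, not the authors' script.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10

def subset_samples(n=10_000):
    # Perturbed mixture 0.27*N(mu_1, s_1^2) + 0.73*N(mu_2, s_2^2) with
    # mu_1 = -5 + eps, mu_2 = 5 + eps, s_1 = 1 + |d|, s_2 = 4 + |d|.
    mu1 = -5.0 + rng.normal(0.0, 0.5)
    mu2 = 5.0 + rng.normal(0.0, 0.5)
    s1 = 1.0 + abs(rng.normal(0.0, 0.1))
    s2 = 4.0 + abs(rng.normal(0.0, 0.1))
    comp = rng.random(n) < 0.27           # mixture weights 0.27 / 0.73
    return np.where(comp, rng.normal(mu1, s1, n), rng.normal(mu2, s2, n))

subsets = [subset_samples() for _ in range(m)]
# The true combined posterior is proportional to the product of the
# (slightly different) subset densities, giving two modes at unequal scales.
```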
Figure 1: Bimodal posterior combined from 10 subsets. Left: the true posterior and subset posteriors (dashed). Right: aggregated posterior output by various methods compared to the truth. Results are based on 10,000 aggregated samples. Rare Bernoulli Example We consider N = 10,000 Bernoulli trials xi iid ∼Ber(θ) split into m = 15 subsets. The parameter θ is chosen to be 2m/N so that on average each subset only contains 2 successes. By random partitioning, the subset posteriors are rather heterogeneous, as plotted in dashed lines in the left panel of Figure 2. The prior is set as π(θ) = Beta(θ; 2, 2). The right panel of Figure 2 compares the results of various methods. PART-KD, PART-ML and Weierstrass capture the true posterior shape, while parametric, average and weighted average are all biased. The nonparametric and semiparametric methods produce flat densities near zero (not visible in Figure 2 due to the scale). Figure 2: The posterior for the probability θ of a rare event. Left: the full posterior (solid) and m = 15 subset posteriors (dashed). Right: aggregated posterior output by various methods. All results are based on 20,000 aggregated samples. 4.2 Bayesian Logistic Regression Synthetic dataset The dataset {(xi, yi)}N i=1 consists of N = 50,000 observations in p = 50 dimensions. All features xi ∈Rp−1 are drawn from Np−1(0, Σ) with p = 50 and Σk,l = 0.9|k−l|. The model intercept is set to −3 and the other coefficients θ∗j are drawn randomly from N(0, 52). Conditional on xi, yi ∈{0, 1} follows p(yi = 1) = 1/(1 + exp(−θ∗T [1, xi])). The dataset is randomly split into m = 40 subsets. 
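The synthetic dataset just described can be generated as follows. This is a hedged reconstruction from the stated parameters (covariance 0.9^|k−l|, intercept −3, coefficients from N(0, 5^2)), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, m = 50_000, 50, 40

# Features: x_i ~ N_{p-1}(0, Sigma) with Sigma_{k,l} = 0.9^{|k-l|}.
idx = np.arange(p - 1)
Sigma = 0.9 ** np.abs(idx[:, None] - idx[None, :])
X = rng.multivariate_normal(np.zeros(p - 1), Sigma, size=N)

theta = np.concatenate([[-3.0], rng.normal(0.0, 5.0, p - 1)])  # intercept -3
logits = theta[0] + X @ theta[1:]
probs = 0.5 * (1.0 + np.tanh(0.5 * logits))   # numerically stable sigmoid
y = (rng.random(N) < probs).astype(int)

subsets = np.array_split(rng.permutation(N), m)  # random split into m subsets
```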
For both full chain and subset chains, we run adaptive MCMC for 200,000 iterations after 100,000 burn-in. Thinning by 4 results in T = 50,000 samples. The samples from the full chain (denoted as {θj}T j=1) are treated as the ground truth. To compare the accuracy of different methods, we resample T points {ˆθj} from each aggregated posterior and then compare them using the following metrics: (1) RMSE of posterior mean ∥ 1 pT (P j ˆθj −P j θj)∥2; (2) approximate KL divergence DKL(p(θ)∥ˆp(θ)) and DKL(ˆp(θ)∥p(θ)), where ˆp and p are both approximated by multivariate Gaussians; (3) the posterior concentration ratio, defined as r = √(P j ∥ˆθj −θ∗∥2 2 / P j ∥θj −θ∗∥2 2), which measures how the posterior spreads out around the true value (with r = 1 being ideal). The results are provided in Table 1. Figure 4 shows DKL(p∥ˆp) versus the length of subset chains supplied to the aggregation algorithm. The results of PART are obtained with δρ = 0.001, δa = 0.0001 and 40 trees. Figure 3 showcases the aggregated posterior for two parameters in terms of joint and marginal distributions.

Method          RMSE    DKL(p∥ˆp)     DKL(ˆp∥p)     r
PART (KD)       0.587   3.95 × 10^2   6.45 × 10^2   3.94
PART (ML)       1.399   8.05 × 10^1   5.47 × 10^2   9.17
average         29.93   2.53 × 10^3   5.41 × 10^4   184.62
weighted        38.28   2.60 × 10^4   2.53 × 10^5   236.15
Weierstrass     6.47    7.20 × 10^2   2.62 × 10^3   39.96
parametric      10.07   2.46 × 10^3   6.12 × 10^3   62.13
nonparametric   25.59   3.40 × 10^4   3.95 × 10^4   157.86
semiparametric  25.45   2.06 × 10^4   3.90 × 10^4   156.97
Table 1: Accuracy of posterior aggregation on logistic regression.

Figure 3: Posterior of θ1 and θ17. 
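The posterior-mean RMSE, the Gaussian-approximated KL divergence, and the concentration ratio r can be computed as below. Function names are illustrative; the closed-form Gaussian KL is an assumption about how the approximation in metric (2) was evaluated.

```python
import numpy as np

def posterior_mean_rmse(agg, full):
    """Metric (1): || (1/(pT)) (sum_j agg_j - sum_j full_j) ||_2
    for (T, p) arrays of aggregated / full-chain draws."""
    p = agg.shape[1]
    return np.linalg.norm(agg.mean(axis=0) - full.mean(axis=0)) / p

def concentration_ratio(agg, full, theta_star):
    """Metric (3): r = sqrt(sum_j ||agg_j - t*||^2 / sum_j ||full_j - t*||^2)."""
    num = ((agg - theta_star) ** 2).sum()
    den = ((full - theta_star) ** 2).sum()
    return np.sqrt(num / den)

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0,S0) || N(mu1,S1)) between the Gaussian fits
    used for metric (2)."""
    p = len(mu0)
    S1inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - p
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```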
Real datasets We also run experiments on two real datasets: (1) the Covertype dataset4 [17] consists of 581,012 observations in 54 dimensions, and the task is to predict the type of forest cover with cartographic measurements; (2) the MiniBooNE dataset5 [18, 19] consists of 130,065 observations in 50 dimensions, whose task is to distinguish electron neutrinos from muon neutrinos with experimental data. For both datasets, we reserve 1/5 of the data as the test set. The training set is randomly split into m = 50 and m = 25 subsets for Covertype and MiniBooNE, respectively. Figure 5 shows the prediction accuracy versus total runtime (parallel subset MCMC + aggregation time) for different methods. For each MCMC chain, the first 20% of iterations are discarded before aggregation as burn-in. The aggregated chain is required to be of the same length as the subset chains. As a reference, we also plot the results for the full chain and lasso [20] run on the full training set. Figure 4: Approximate KL divergence between the full chain and the combined posterior versus the length of subset chains. Figure 5: Prediction accuracy versus total runtime (running chain + aggregation) on the Covertype and MiniBooNE datasets (semiparametric is not compared due to its long running time). Plots against the length of chain are provided in the supplement. 5 Conclusion In this article, we propose a new embarrassingly parallel MCMC algorithm, PART, that can efficiently draw posterior samples for large data sets. PART is simple to implement, efficient in subset combining and has theoretical guarantees. Compared to existing EP-MCMC algorithms, PART has substantially improved performance. 
Possible future directions include (1) exploring other multi-scale density estimators which share similar properties as partition trees but with a better approximation accuracy (2) developing a tuning procedure for choosing good δρ and δa, which are essential to the performance of PART. 4http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/binary.html 5https://archive.ics.uci.edu/ml/machine-learning-databases/00199 8 References [1] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011. [2] Dougal Maclaurin and Ryan P Adams. Firefly Monte Carlo: Exact MCMC with subsets of data. Proceedings of the conference on Uncertainty in Artificial Intelligence (UAI), 2014. [3] Steven L Scott, Alexander W Blocker, Fernando V Bonassi, Hugh A Chipman, Edward I George, and Robert E McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. In EFaBBayes 250 conference, volume 16, 2013. [4] Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, and David Dunson. Scalable and robust bayesian inference via the median posterior. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014. [5] Sanvesh Srivastava, Volkan Cevher, Quoc Tran-Dinh, and David B Dunson. WASP: Scalable Bayes via barycenters of subset posteriors. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 38, 2015. [6] Willie Neiswanger, Chong Wang, and Eric Xing. Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the Thirtieth Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-14), pages 623–632, Corvallis, Oregon, 2014. AUAI Press. [7] Xiangyu Wang and David B Dunson. Parallel MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605, 2013. [8] Linxi Liu and Wing Hung Wong. 
Multivariate density estimation based on adaptive partitioning: Convergence rate, variable selection and spatial adaptation. arXiv preprint arXiv:1401.2597, 2014. [9] Manuel Blum, Robert W Floyd, Vaughan Pratt, Ronald L Rivest, and Robert E Tarjan. Time bounds for selection. Journal of Computer and System Sciences, 7(4):448–461, 1973. [10] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975. [11] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001. [12] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. [13] Xiaotong Shen and Wing Hung Wong. Convergence rate of sieve estimates. The Annals of Statistics, pages 580–615, 1994. [14] Nils Lid Hjort and Ingrid K Glad. Nonparametric density estimation with a parametric start. The Annals of Statistics, pages 882–904, 1995. [15] Heikki Haario, Marko Laine, Antonietta Mira, and Eero Saksman. DRAM: efficient adaptive MCMC. Statistics and Computing, 16(4):339–354, 2006. [16] Heikki Haario, Eero Saksman, and Johanna Tamminen. An adaptive Metropolis algorithm. Bernoulli, pages 223–242, 2001. [17] Jock A Blackard and Denis J Dean. Comparative accuracies of neural networks and discriminant analysis in predicting forest cover types from cartographic variables. In Proc. Second Southern Forestry GIS Conf, pages 189–199, 1998. [18] Byron P Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. Boosted decision trees as an alternative to artificial neural networks for particle identification. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 543(2):577–584, 2005. [19] M. Lichman. UCI machine learning repository, 2013. [20] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996. 9
Convergence rates of sub-sampled Newton methods

Murat A. Erdogdu, Department of Statistics, Stanford University, erdogdu@stanford.edu
Andrea Montanari, Department of Statistics and Electrical Engineering, Stanford University, montanari@stanford.edu

Abstract

We consider the problem of minimizing a sum of $n$ functions via projected iterations onto a convex parameter set $C \subset \mathbb{R}^p$, where $n \gg p \gg 1$. In this regime, algorithms which utilize sub-sampling techniques are known to be effective. In this paper, we use sub-sampling techniques together with low-rank approximation to design a new randomized batch algorithm which possesses comparable convergence rate to Newton's method, yet has much smaller per-iteration cost. The proposed algorithm is robust in terms of starting point and step size, and enjoys a composite convergence rate, namely, quadratic convergence at start and linear convergence when the iterate is close to the minimizer. We develop its theoretical analysis, which also allows us to select near-optimal algorithm parameters. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling based algorithms as well. We demonstrate how our results apply to well-known machine learning problems. Lastly, we evaluate the performance of our algorithm on several datasets under various scenarios.

1 Introduction

We focus on the following minimization problem,
$$\mathrm{minimize}\; f(\theta) := \frac{1}{n}\sum_{i=1}^{n} f_i(\theta), \qquad (1.1)$$
where $f_i : \mathbb{R}^p \to \mathbb{R}$. Most machine learning models can be expressed as above, where each function $f_i$ corresponds to an observation. Examples include logistic regression, support vector machines, neural networks and graphical models. Many optimization algorithms have been developed to solve the above minimization problem [Bis95, BV04, Nes04]. For a given convex set $C \subset \mathbb{R}^p$, we denote the Euclidean projection onto this set by $P_C$.
We consider updates of the form
$$\hat\theta_{t+1} = P_C\!\left(\hat\theta_t - \eta_t Q^t \nabla_\theta f(\hat\theta_t)\right), \qquad (1.2)$$
where $\eta_t$ is the step size and $Q^t$ is a suitable scaling matrix that provides curvature information. Updates of the form Eq. (1.2) have been extensively studied in the optimization literature (for simplicity, we assume $C = \mathbb{R}^p$ throughout the introduction). The case where $Q^t$ is equal to the identity matrix corresponds to Gradient Descent (GD) which, under smoothness assumptions, achieves a linear convergence rate with $O(np)$ per-iteration cost. More precisely, GD with ideal step size yields
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^{t,\mathrm{GD}} \|\hat\theta_t - \theta_*\|_2,$$
where $\lim_{t\to\infty} \xi_1^{t,\mathrm{GD}} = 1 - \lambda_p^*/\lambda_1^*$, and $\lambda_i^*$ is the $i$-th largest eigenvalue of the Hessian of $f(\theta)$ at the minimizer $\theta_*$. Second order methods such as Newton's Method (NM) and Natural Gradient Descent (NGD) [Ama98] can be recovered by taking $Q^t$ to be the inverse Hessian and the Fisher information evaluated at the current iterate, respectively. Such methods may achieve quadratic convergence rates with $O(np^2 + p^3)$ per-iteration cost [Bis95, Nes04]. In particular, for $t$ large enough, Newton's method yields
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_2^{\mathrm{NM}} \|\hat\theta_t - \theta_*\|_2^2,$$
and it is insensitive to the condition number of the Hessian. However, when the number of samples grows large, computing $Q^t$ becomes extremely expensive. A popular line of research tries to construct the matrix $Q^t$ in a way that the update is computationally feasible, yet still provides sufficient second order information. Such attempts resulted in Quasi-Newton methods, in which only gradients and iterates are utilized, resulting in an efficient update on $Q^t$. A celebrated Quasi-Newton method is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, which requires $O(np + p^2)$ per-iteration cost [Bis95, Nes04]. An alternative approach is to use sub-sampling techniques, where the scaling matrix $Q^t$ is based on a randomly selected set of data points [Mar10, BCNN11, VP12, Erd15].
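The family of updates in Eq. (1.2) can be illustrated concretely. The sketch below uses a toy quadratic objective with $C = \mathbb{R}^p$ (so $P_C$ is the identity): taking $Q = I$ recovers gradient descent, while $Q = H^{-1}$ recovers Newton's method. All variable names and problem sizes here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
H = A.T @ A / 50 + 0.1 * np.eye(5)            # Hessian of the toy quadratic
theta_star = rng.standard_normal(5)

def grad(th):
    # gradient of f(th) = 0.5 * (th - theta_star)' H (th - theta_star)
    return H @ (th - theta_star)

def run(Q, eta, steps):
    th = np.zeros(5)
    for _ in range(steps):
        th = th - eta * Q @ grad(th)          # Eq. (1.2) with P_C = identity
    return np.linalg.norm(th - theta_star)

err_gd = run(np.eye(5), eta=1.0 / np.linalg.eigvalsh(H)[-1], steps=50)  # Q = I: GD
err_nm = run(np.linalg.inv(H), eta=1.0, steps=1)  # Q = H^{-1}: Newton step
```

On a quadratic, a single Newton step with $\eta = 1$ lands on the minimizer exactly, while GD contracts the error at a linear rate governed by the condition number, matching the $\xi_1^{t,\mathrm{GD}}$ coefficient above.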
Sub-sampling is widely used in first order methods, but is not as well studied for approximating the scaling matrix. In particular, theoretical guarantees are still missing. A key challenge is that the sub-sampled Hessian is close to the actual Hessian along the directions corresponding to large eigenvalues (large curvature directions in $f(\theta)$), but is a poor approximation in the directions corresponding to small eigenvalues (flatter directions in $f(\theta)$). In order to overcome this problem, we use low-rank approximation. More precisely, we treat all the eigenvalues below the $r$-th as if they were equal to the $(r+1)$-th. This yields the desired stability with respect to the sub-sample: we call our algorithm NewSamp. In this paper, we establish the following:
1. NewSamp has a composite convergence rate: quadratic at start and linear near the minimizer, as illustrated in Figure 1. Formally, we prove a bound of the form
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^t \|\hat\theta_t - \theta_*\|_2 + \xi_2^t \|\hat\theta_t - \theta_*\|_2^2$$
with coefficients that are explicitly given (and are computable from data).
2. The asymptotic behavior of the linear convergence coefficient is $\lim_{t\to\infty} \xi_1^t = 1 - (\lambda_p^*/\lambda_{r+1}^*) + \delta$, for $\delta$ small. The condition number $(\lambda_1^*/\lambda_p^*)$, which controls the convergence of GD, has been replaced by the milder $(\lambda_{r+1}^*/\lambda_p^*)$. For datasets with strong spectral features, this can be a large improvement, as shown in Figure 1.
3. The above results are achieved without tuning the step-size, in particular, by setting $\eta_t = 1$.
4. The complexity per iteration of NewSamp is $O(np + |S|p^2)$, with $|S|$ the sample size.
5. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling algorithms.
The rest of the paper is organized as follows: Section 1.1 surveys the related work. In Section 2, we describe the proposed algorithm and provide the intuition behind it.
Next, we present our theoretical results in Section 3, i.e., convergence rates corresponding to different sub-sampling schemes, followed by a discussion on how to choose the algorithm parameters. Two applications of the algorithm are discussed in Section 4. We compare our algorithm with several existing methods on various datasets in Section 5. Finally, in Section 6, we conclude with a brief discussion.

1.1 Related Work

Even a synthetic review of optimization algorithms for large-scale machine learning would go beyond the page limits of this paper. Here, we emphasize that the method of choice depends crucially on the amount of data to be used, and their dimensionality (i.e., respectively, on the parameters $n$ and $p$). In this paper, we focus on a regime in which $n$ and $p$ are large, but not so large as to make gradient computations (of order $np$) and matrix manipulations (of order $p^3$) prohibitive. Online algorithms are the option of choice for very large $n$, since the computation per update is independent of $n$. In the case of Stochastic Gradient Descent (SGD), the descent direction is formed by a randomly selected gradient. Improvements to SGD have been developed by incorporating the previous gradient directions in the current update equation [SRB13, Bot10, DHS11]. Batch algorithms, on the other hand, can achieve faster convergence and exploit second order information. They are competitive for intermediate $n$. Several methods in this category aim at quadratic, or at least super-linear, convergence rates. In particular, Quasi-Newton methods have proven effective [Bis95, Nes04]. Another approach towards the same goal is to utilize sub-sampling to form an approximate Hessian [Mar10, BCNN11, VP12, Erd15]. If the sub-sampled Hessian is close to the true Hessian, these methods can approach NM in terms of convergence rate; nevertheless, they enjoy much smaller complexity per update. No convergence rate analysis is available for these methods: this analysis is the main contribution of our paper. To the best of our knowledge, the best result in this direction is proven in [BCNN11], which establishes asymptotic convergence without quantitative bounds (exploiting general theory from [GNS09]). As for further improvements of sub-sampling algorithms, a common approach is to use Conjugate Gradient (CG) methods and/or Krylov sub-spaces [Mar10, BCNN11, VP12]. Lastly, there are various hybrid algorithms that combine two or more techniques to increase the performance. Examples include sub-sampling and Quasi-Newton [BHNS14], SGD and GD [FS12], NGD and NM [LRF10], and NGD and low-rank approximation [LRMB08].

2 NewSamp: Newton-Sampling method via rank thresholding

In the regime we consider, $n \gg p$, there are two main drawbacks associated with classical second order methods such as Newton's method. The dominant issue is the computation of the Hessian matrix, which requires $O(np^2)$ operations; the other issue is inverting the Hessian, which requires $O(p^3)$ computation. Sub-sampling is an effective and efficient way of tackling the first issue. Recent empirical studies show that sub-sampling the Hessian provides significant improvement in terms of computational cost, yet preserves the fast convergence rate of second order methods [Mar10, VP12]. If a uniform sub-sample is used, the sub-sampled Hessian is a random matrix with expected value equal to the true Hessian, and so can be viewed as a sample estimator of the mean.

Algorithm 1 NewSamp
Input: $\hat\theta_0$, $r$, $\epsilon$, $\{\eta_t\}_t$, $t = 0$.
1. Define: $P_C(\theta) = \mathrm{argmin}_{\theta' \in C} \|\theta - \theta'\|_2$ is the Euclidean projection onto $C$; $[U_k, \Lambda_k] = \mathrm{TruncatedSVD}_k(H)$ is the rank-$k$ truncated SVD of $H$, with $\Lambda_{ii} = \lambda_i$.
2. while $\|\hat\theta_{t+1} - \hat\theta_t\|_2 \ge \epsilon$ do
   Sub-sample a set of indices $S_t \subset [n]$.
   Let $H_{S_t} = \frac{1}{|S_t|}\sum_{i \in S_t} \nabla^2_\theta f_i(\hat\theta_t)$, and $[U_{r+1}, \Lambda_{r+1}] = \mathrm{TruncatedSVD}_{r+1}(H_{S_t})$,
   $Q^t = \lambda_{r+1}^{-1} I_p + U_r\left(\Lambda_r^{-1} - \lambda_{r+1}^{-1} I_r\right) U_r^T$,
   $\hat\theta_{t+1} = P_C\!\left(\hat\theta_t - \eta_t Q^t \nabla_\theta f(\hat\theta_t)\right)$, $t \leftarrow t + 1$.
3. end while
Output: $\hat\theta_t$.
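The rank-$(r+1)$ truncation step of Algorithm 1 can be checked numerically: the matrix $Q^t$ built from the top-$r$ eigenpairs is exactly the inverse of the Hessian with its trailing spectrum flattened to $\lambda_{r+1}$. Below is a small self-contained sketch of this construction (illustrative sizes and names, not a reference implementation; `eigh` stands in for the truncated SVD since the matrix is symmetric PSD).

```python
import numpy as np

rng = np.random.default_rng(1)
p, r = 8, 3
B = rng.standard_normal((p, p))
H = B @ B.T + np.eye(p)                       # symmetric PSD "sub-sampled Hessian"

vals, vecs = np.linalg.eigh(H)                # eigh returns eigenvalues ascending
vals, vecs = vals[::-1], vecs[:, ::-1]        # reorder to descending
U_r, lam_top, lam_r1 = vecs[:, :r], vals[:r], vals[r]

# Eq. (2.1): scaled identity plus a rank-r correction
Q = np.eye(p) / lam_r1 + U_r @ (np.diag(1.0 / lam_top) - np.eye(r) / lam_r1) @ U_r.T

# Q^{-1} should have spectrum (lambda_1, ..., lambda_r, lambda_{r+1}, ..., lambda_{r+1})
spec = np.sort(np.linalg.eigvalsh(np.linalg.inv(Q)))[::-1]
expected = np.concatenate([lam_top, np.full(p - r, lam_r1)])
```

This makes the "fill the flat directions with $\lambda_{r+1}$" idea concrete: $Q$ never divides by an arbitrarily small eigenvalue, which is the failure mode of the plain sub-sampled Newton update.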
Recent advances in statistics have shown that the performance of various estimators can be significantly improved by simple procedures such as shrinkage and/or thresholding [CCS10, DGJ13]. To this end, we use low-rank approximation, as the important second order information is generally contained in the largest few eigenvalues/eigenvectors of the Hessian. NewSamp is presented as Algorithm 1. At iteration step $t$, the sub-sampled set of indices, its size and the corresponding sub-sampled Hessian are denoted by $S_t$, $|S_t|$ and $H_{S_t}$, respectively. Assuming that the functions $f_i$ are convex, the eigenvalues of the symmetric matrix $H_{S_t}$ are non-negative; therefore, SVD and eigenvalue decomposition coincide. The operation $\mathrm{TruncatedSVD}_k(H_{S_t}) = [U_k, \Lambda_k]$ is the best rank-$k$ approximation, i.e., it takes $H_{S_t}$ as input and returns the largest $k$ eigenvalues $\Lambda_k \in \mathbb{R}^{k\times k}$ with the corresponding $k$ eigenvectors $U_k \in \mathbb{R}^{p\times k}$. This procedure requires $O(kp^2)$ computation [HMT11]. The operator $P_C$ projects the current iterate onto the feasible set $C$ using Euclidean projection; we assume that this projection can be done efficiently. To construct the curvature matrix $[Q^t]^{-1}$, instead of using the basic rank-$r$ approximation, we fill its zero eigenvalues with the $(r+1)$-th eigenvalue of the sub-sampled Hessian, which is the largest eigenvalue below the threshold. If we compute a truncated SVD with $k = r+1$ and $\Lambda_{ii} = \lambda_i$, the described operation results in
$$Q^t = \lambda_{r+1}^{-1} I_p + U_r\left(\Lambda_r^{-1} - \lambda_{r+1}^{-1} I_r\right) U_r^T, \qquad (2.1)$$
which is simply the sum of a scaled identity matrix and a rank-$r$ matrix. Note that the low-rank approximation suggested to improve the curvature estimation has been further utilized to reduce the cost of computing the inverse matrix. The final per-iteration cost of NewSamp is
$$O\!\left(np + (|S_t| + r)p^2\right) \approx O\!\left(np + |S_t| p^2\right).$$
NewSamp takes the parameters $\{\eta_t, |S_t|\}_t$ and $r$ as inputs. We discuss in Section 3.4 how to choose them optimally, based on the theory in Section 3.
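Putting Eq. (2.1) and the update Eq. (1.2) together, a minimal NewSamp-style loop on a synthetic least-squares problem might look as follows. This is a sketch under simplifying assumptions ($C = \mathbb{R}^p$, $\eta_t = 1$, scheme S1 sub-sampling, `numpy.linalg.eigh` in place of a truncated SVD); problem sizes are illustrative, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r, m = 2000, 10, 4, 600                 # m = |S_t|, the sub-sample size
X = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p)
y = X @ theta_star                            # noiseless least squares for clarity

theta = np.zeros(p)
for t in range(30):
    grad = X.T @ (X @ theta - y) / n          # full gradient, O(np)
    S = rng.choice(n, size=m, replace=False)  # scheme S1: uniform sub-sample
    H_S = X[S].T @ X[S] / m                   # sub-sampled Hessian
    vals, vecs = np.linalg.eigh(H_S)
    vals, vecs = vals[::-1], vecs[:, ::-1]    # descending eigenvalues
    U_r, lam_r1 = vecs[:, :r], vals[r]
    Q = np.eye(p) / lam_r1 + U_r @ (np.diag(1.0 / vals[:r]) - np.eye(r) / lam_r1) @ U_r.T
    theta = theta - Q @ grad                  # Eq. (1.2) with eta_t = 1, P_C = identity

err = np.linalg.norm(theta - theta_star)
```

Even though only $m$ of the $n$ rows enter each Hessian estimate, the iterate converges to $\theta_*$; the rank thresholding keeps $\|Q\|_2$ bounded by $1/\lambda_{r+1}$ regardless of how small the trailing sub-sampled eigenvalues are.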
Figure 1: The left plot (log(Error) vs. iterations for sub-sample sizes $|S_t| = 100, 200, 500$) demonstrates the convergence rate of NewSamp, which starts with a quadratic rate and transitions into linear convergence near the true minimizer. The right plot (coefficients $\xi_1$, linear, and $\xi_2$, quadratic, vs. rank) shows the effect of eigenvalue thresholding on the convergence coefficients, up to a scaling constant; the x-axis shows the number of kept eigenvalues. Plots are obtained using the Covertype dataset.

By the construction of $Q^t$, NewSamp will always be a descent algorithm. It enjoys a quadratic convergence rate at start, which transitions into a linear rate in the neighborhood of the minimizer. This behavior can be observed in Figure 1. The left plot in Figure 1 shows the convergence behavior of NewSamp over different sub-sample sizes. We observe that larger sub-samples result in better convergence rates, as expected. As the sub-sample size increases, the slope of the linear phase decreases, getting closer to that of the quadratic phase. We will explain this phenomenon in Section 3, by Theorems 3.2 and 3.3. The right plot in Figure 1 demonstrates how the coefficients of the two phases depend on the thresholded rank. The coefficient of the quadratic phase increases with the rank threshold, whereas for the linear phase the relation is reversed.

3 Theoretical results

In this section, we provide the convergence analysis of NewSamp based on two different sub-sampling schemes:
S1: Independent sub-sampling: At each iteration $t$, $S_t$ is uniformly sampled from $[n] = \{1, 2, ..., n\}$, independently from the sets $\{S_\tau\}_{\tau<t}$, with or without replacement.
S2: Sequentially dependent sub-sampling: At each iteration $t$, $S_t$ is sampled from $[n]$, based on a distribution which might depend on the previous sets $\{S_\tau\}_{\tau<t}$, but not on any randomness in the data.
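The two schemes differ only in how $S_t$ is drawn. A hypothetical sketch (scheme S2 is realized here as nested samples of increasing size that sequentially cover unused indices, one of several possibilities the definition allows):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

def scheme_s1(m, steps):
    # S1: each S_t uniform and independent of the previous sets
    return [rng.choice(n, size=m, replace=False) for _ in range(steps)]

def scheme_s2(m0, steps):
    # S2 (one variant): nested sub-samples of increasing size over a fixed permutation
    perm, samples = rng.permutation(n), []
    for t in range(steps):
        samples.append(perm[: m0 * (t + 1)])
    return samples

s1 = scheme_s1(100, 3)
s2 = scheme_s2(100, 3)
```

The key distinction for the analysis is that under S2 the sets may depend on each other, but not on the data, so the sub-sampled Hessians remain controllable by uniform bounds over the parameter set.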
The first sub-sampling scheme is simple and commonly used in optimization. One drawback is that the sub-sampled set at the current iteration is independent of the previous sub-samples, and hence does not take into account which samples were previously used to form the approximate curvature information. In order to prevent cycles and obtain better performance near the optimum, one might want to increase the sample size as the iteration advances [Mar10], including previously unused samples. This process results in a sequence of dependent sub-samples, which falls into the sub-sampling scheme S2. In our theoretical analysis, we make the following assumptions:

Assumption 1 (Lipschitz continuity). For any subset $S \subset [n]$, there exists a constant $M_{|S|}$, depending on the size of $S$, such that for all $\theta, \theta' \in C$,
$$\|H_S(\theta) - H_S(\theta')\|_2 \le M_{|S|}\,\|\theta - \theta'\|_2.$$

Assumption 2 (Bounded Hessian). For all $i \in [n]$, $\nabla^2_\theta f_i(\theta)$ is upper bounded by a constant $K$, i.e.,
$$\max_{i \le n} \left\|\nabla^2_\theta f_i(\theta)\right\|_2 \le K.$$

3.1 Independent sub-sampling

In this section, we assume that $S_t \subset [n]$ is sampled according to the sub-sampling scheme S1. In fact, many stochastic algorithms assume that $S_t$ is a uniform subset of $[n]$, because in this case the sub-sampled Hessian is an unbiased estimator of the full Hessian. That is, for all $\theta \in C$, $\mathbb{E}[H_{S_t}(\theta)] = H_{[n]}(\theta)$, where the expectation is over the randomness in $S_t$. We next show that for any scaling matrix $Q^t$ formed from the sub-samples $S_t$, iterations of the form Eq. (1.2) will have a composite convergence rate, i.e., a combination of a linear and a quadratic phase.

Lemma 3.1. Assume that the parameter set $C$ is convex and $S_t \subset [n]$ is based on sub-sampling scheme S1 and sufficiently large. Further, let Assumptions 1 and 2 hold and $\theta_* \in C$. Then, for an absolute constant $c > 0$, with probability at least $1 - 2/p$, the updates of the form Eq. (1.2) satisfy
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^t \|\hat\theta_t - \theta_*\|_2 + \xi_2^t \|\hat\theta_t - \theta_*\|_2^2,$$
for coefficients $\xi_1^t$ and $\xi_2^t$ defined as
$$\xi_1^t = \left\|I - \eta_t Q^t H_{S_t}(\hat\theta_t)\right\|_2 + \eta_t c K \|Q^t\|_2 \sqrt{\frac{\log(p)}{|S_t|}}, \qquad \xi_2^t = \eta_t \frac{M_n}{2} \|Q^t\|_2.$$

Remark 1.
If the initial point $\hat\theta_0$ is close to $\theta_*$, the algorithm will start with a quadratic rate of convergence, which will transform into a linear rate later, in the close neighborhood of the optimum. The above lemma holds for any matrix $Q^t$. In particular, if we choose $Q^t = H_{S_t}^{-1}$, we obtain a bound for the simple sub-sampled Hessian method. In this case, the coefficients $\xi_1^t$ and $\xi_2^t$ depend on $\|Q^t\|_2 = 1/\lambda_p^t$, where $\lambda_p^t$ is the smallest eigenvalue of the sub-sampled Hessian. Note that $\lambda_p^t$ can be arbitrarily small, which might blow up both coefficients. In the following, we will see how NewSamp remedies this issue.

Theorem 3.2. Let the assumptions in Lemma 3.1 hold. Denote by $\lambda_i^t$ the $i$-th eigenvalue of $H_{S_t}(\hat\theta_t)$, where $\hat\theta_t$ is given by NewSamp at iteration step $t$. If the step size satisfies
$$\eta_t \le \frac{2}{1 + \lambda_p^t/\lambda_{r+1}^t}, \qquad (3.1)$$
then we have, with probability at least $1 - 2/p$,
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^t \|\hat\theta_t - \theta_*\|_2 + \xi_2^t \|\hat\theta_t - \theta_*\|_2^2,$$
for an absolute constant $c > 0$, where the coefficients $\xi_1^t$ and $\xi_2^t$ are defined as
$$\xi_1^t = 1 - \eta_t \frac{\lambda_p^t}{\lambda_{r+1}^t} + \eta_t \frac{cK}{\lambda_{r+1}^t}\sqrt{\frac{\log(p)}{|S_t|}}, \qquad \xi_2^t = \eta_t \frac{M_n}{2\lambda_{r+1}^t}.$$

NewSamp has a composite convergence rate where $\xi_1^t$ and $\xi_2^t$ are the coefficients of the linear and the quadratic terms, respectively (see the right plot in Figure 1). We observe that the sub-sampling size has a significant effect on the linear term, whereas the quadratic term is governed by the Lipschitz constant. We emphasize that the case $\eta_t = 1$ is feasible for the conditions of Theorem 3.2.

3.2 Sequentially dependent sub-sampling

Here, we assume that the sub-sampling scheme S2 is used to generate $\{S_\tau\}_{\tau \ge 1}$. The distributions of the sub-sampled sets may depend on each other, but not on any randomness in the dataset. Examples include fixed sub-samples as well as sub-samples of increasing size, sequentially covering unused data. In addition to Assumptions 1-2, we assume the following.

Assumption 3 (i.i.d. observations). Let $z_1, z_2, ..., z_n \in Z$ be i.i.d. observations from a distribution $D$.
For a fixed $\theta \in \mathbb{R}^p$ and for all $i \in [n]$, we assume that the functions $\{f_i\}_{i=1}^n$ satisfy $f_i(\theta) = \varphi(z_i, \theta)$, for some function $\varphi : Z \times \mathbb{R}^p \to \mathbb{R}$.

Most statistical learning algorithms can be formulated as above; e.g., in classification problems, one has access to i.i.d. samples $\{(y_i, x_i)\}_{i=1}^n$, where $y_i$ and $x_i$ denote the class label and the covariate, and $\varphi$ measures the classification error (see Section 4 for examples). For sub-sampling scheme S2, an analogue of Lemma 3.1 is stated in the Appendix as Lemma B.1, which leads to the following result.

Theorem 3.3. Assume that the parameter set $C$ is convex and $S_t \subset [n]$ is based on the sub-sampling scheme S2. Further, let Assumptions 1, 2 and 3 hold, almost surely. Conditioned on the event $E = \{\theta_* \in C\}$, if the step size satisfies Eq. (3.1), then for $\hat\theta_t$ given by NewSamp at iteration $t$, with probability at least $1 - c_E e^{-p}$ for $c_E = c/\mathbb{P}(E)$, we have
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^t \|\hat\theta_t - \theta_*\|_2 + \xi_2^t \|\hat\theta_t - \theta_*\|_2^2,$$
for the coefficients $\xi_1^t$ and $\xi_2^t$ defined as
$$\xi_1^t = 1 - \eta_t \frac{\lambda_p^t}{\lambda_{r+1}^t} + \eta_t \frac{c' K}{\lambda_{r+1}^t}\sqrt{\frac{p}{|S_t|}\log\!\left(\frac{\mathrm{diam}(C)^2 \left(M_n + M_{|S_t|}\right)^2 |S_t|}{K^2}\right)}, \qquad \xi_2^t = \eta_t \frac{M_n}{2\lambda_{r+1}^t},$$
where $c, c' > 0$ are absolute constants and $\lambda_i^t$ denotes the $i$-th eigenvalue of $H_{S_t}(\hat\theta_t)$.

Compared to Theorem 3.2, we observe that the coefficient of the quadratic term does not change. This is due to Assumption 1. However, the bound on the linear term is worse, since we use the uniform bound over the convex parameter set $C$.

3.3 Dependence of coefficients on t and convergence guarantees

The coefficients $\xi_1^t$ and $\xi_2^t$ depend on the iteration step, which is an undesirable aspect of the above results. However, these constants can be well approximated by their analogues $\xi_1^*$ and $\xi_2^*$ evaluated at the optimum, which are defined by simply replacing $\lambda_j^t$ with $\lambda_j^*$ in their definitions, where the latter is the $j$-th eigenvalue of the full Hessian at $\theta_*$. For the sake of simplicity, we only consider the case where the functions $\theta \mapsto f_i(\theta)$ are quadratic.

Theorem 3.4.
Assume that the functions $f_i(\theta)$ are quadratic, $S_t$ is based on scheme S1 and $\eta_t = 1$. Let the full Hessian at $\theta_*$ be lower bounded by $k$. Then, for sufficiently large $|S_t|$ and absolute constants $c_1, c_2$, with probability $1 - 2/p$,
$$\left|\xi_1^t - \xi_1^*\right| \le \frac{c_1 K \sqrt{\log(p)/|S_t|}}{k\left(k - c_2 K \sqrt{\log(p)/|S_t|}\right)} := \delta.$$
Theorem 3.4 implies that, when the sub-sampling size is sufficiently large, $\xi_1^t$ will concentrate around $\xi_1^*$. Generalizing the above theorem to non-quadratic functions is straightforward, in which case one would get additional terms involving the difference $\|\hat\theta_t - \theta_*\|_2$. In the case of scheme S2, if one uses fixed sub-samples, then the coefficient $\xi_1^t$ does not depend on $t$. The following corollary gives a sufficient condition for convergence. A detailed discussion on the number of iterations until convergence and further local convergence properties can be found in [Erd15, EM15].

Corollary 3.5. Assume that $\xi_1^t$ and $\xi_2^t$ are well-approximated by $\xi_1^*$ and $\xi_2^*$ with an error bound of $\delta$, i.e., $\xi_i^t \le \xi_i^* + \delta$ for $i = 1, 2$, as in Theorem 3.4. For the initial point $\hat\theta_0$, a sufficient condition for convergence is
$$\|\hat\theta_0 - \theta_*\|_2 < \frac{1 - \xi_1^* - \delta}{\xi_2^* + \delta}.$$

3.4 Choosing the algorithm parameters

Step size: Let $\gamma = O(\log(p)/|S_t|)$. We suggest the following step size for NewSamp at iteration $t$:
$$\eta_t(\gamma) = \frac{2}{1 + \lambda_p^t/\lambda_{r+1}^t + \gamma}. \qquad (3.2)$$
Note that $\eta_t(0)$ is the upper bound in Theorems 3.2 and 3.3, and it minimizes the first component of $\xi_1^t$. The other terms in $\xi_1^t$ and $\xi_2^t$ depend linearly on $\eta_t$; to compensate for that, we shrink $\eta_t(0)$ towards 1. Contrary to most algorithms, the optimal step size of NewSamp is larger than 1. A rigorous derivation of Eq. (3.2) can be found in [EM15].

Sample size: By Theorem 3.2, a sub-sample of size $O((K/\lambda_p^*)^2 \log(p))$ should be sufficient to obtain a small coefficient for the linear phase. Also note that the sub-sample size $|S_t|$ scales quadratically with the condition number.
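The step-size rule of Eq. (3.2) is simple to implement. The helper below is a hypothetical sketch: it takes the eigenvalues of the current sub-sampled Hessian in descending order and uses $\log(p)/|S_t|$ as a concrete stand-in for the $O(\log(p)/|S_t|)$ term.

```python
import numpy as np

def newsamp_step_size(eigvals_desc, r, sample_size):
    # eta_t(gamma) = 2 / (1 + lam_p / lam_{r+1} + gamma), Eq. (3.2)
    lam_r1, lam_p = eigvals_desc[r], eigvals_desc[-1]
    gamma = np.log(len(eigvals_desc)) / sample_size   # gamma = O(log(p)/|S_t|)
    return 2.0 / (1.0 + lam_p / lam_r1 + gamma)

# example: spectrum (5, 4, 3, 2, 1), threshold r = 2, |S_t| = 1000
eta = newsamp_step_size(np.array([5.0, 4.0, 3.0, 2.0, 1.0]), r=2, sample_size=1000)
```

Since $\lambda_p \le \lambda_{r+1}$ and $\gamma$ is small, the resulting step size lies between 1 and 2, consistent with the remark that NewSamp's optimal step size exceeds 1.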
Rank threshold: For a full Hessian with effective rank $R$ (trace divided by the largest eigenvalue), it suffices to use $O(R\log(p))$ samples [Ver10]. The effective rank is upper bounded by the dimension $p$; hence, one can use $p\log(p)$ samples to approximate the full Hessian and choose a rank threshold which retains the important curvature information.

4 Examples

4.1 Generalized Linear Models (GLM)

Maximum likelihood estimation in a GLM setting is equivalent to minimizing the negative log-likelihood $\ell(\theta)$:
$$\underset{\theta \in C}{\mathrm{minimize}}\; f(\theta) = \frac{1}{n}\sum_{i=1}^{n}\left[\Phi(\langle x_i, \theta\rangle) - y_i \langle x_i, \theta\rangle\right], \qquad (4.1)$$
where $\Phi$ is the cumulant generating function, $x_i \in \mathbb{R}^p$ denote the rows of the design matrix $X \in \mathbb{R}^{n\times p}$, and $\theta \in \mathbb{R}^p$ is the coefficient vector. Here, $\langle x, \theta\rangle$ denotes the inner product between the vectors $x$ and $\theta$. The function $\Phi$ defines the type of GLM, i.e., $\Phi(z) = z^2$ gives ordinary least squares (OLS) and $\Phi(z) = \log(1 + e^z)$ gives logistic regression (LR). Using the results from Section 3, we perform a convergence analysis of our algorithm on a GLM problem.

Corollary 4.1. Let $S_t \subset [n]$ be a uniform sub-sample, and $C = \mathbb{R}^p$ be the parameter set. Assume that the second derivative of the cumulant generating function, $\Phi^{(2)}$, is bounded by 1, and that it is Lipschitz continuous with Lipschitz constant $L$. Further, assume that the covariates are contained in a ball of radius $\sqrt{R_x}$, i.e., $\max_{i \in [n]} \|x_i\|_2 \le \sqrt{R_x}$. Then, for $\hat\theta_t$ given by NewSamp with constant step size $\eta_t = 1$ at iteration $t$, with probability at least $1 - 2/p$, we have
$$\|\hat\theta_{t+1} - \theta_*\|_2 \le \xi_1^t \|\hat\theta_t - \theta_*\|_2 + \xi_2^t \|\hat\theta_t - \theta_*\|_2^2,$$
for constants $\xi_1^t$ and $\xi_2^t$ defined as
$$\xi_1^t = 1 - \frac{\lambda_p^t}{\lambda_{r+1}^t} + \frac{c R_x}{\lambda_{r+1}^t}\sqrt{\frac{\log(p)}{|S_t|}}, \qquad \xi_2^t = \frac{L R_x^{3/2}}{2\lambda_{r+1}^t},$$
where $c > 0$ is an absolute constant and $\lambda_i^t$ is the $i$-th eigenvalue of $H_{S_t}(\hat\theta_t)$.

4.2 Support Vector Machines (SVM)

A linear SVM provides a separating hyperplane which maximizes the margin, i.e., the distance between the hyperplane and the support vectors.
Although the vast majority of the literature focuses on the dual problem [SS02], SVMs can be trained using the primal as well. Since the dual problem does not scale well with the number of data points (some approaches have $O(n^3)$ complexity), the primal might be better suited for optimization of linear SVMs [Cha07]. The primal problem for the linear SVM can be written as
$$\underset{\theta \in C}{\mathrm{minimize}}\; f(\theta) = \frac{1}{2}\|\theta\|_2^2 + \frac{C}{2}\sum_{i=1}^{n} \ell(y_i, \langle\theta, x_i\rangle), \qquad (4.2)$$
where $(y_i, x_i)$ denote the data samples, $\theta$ defines the separating hyperplane, $C > 0$, and $\ell$ could be any loss function. The most commonly used loss functions include Hinge-$p$ loss, Huber loss and their smoothed versions [Cha07]. Smoothing or approximating such losses with more stable functions is sometimes crucial in optimization. In the case of NewSamp, which requires the loss function to be twice differentiable (almost everywhere), we suggest either smoothed Huber loss or Hinge-2 loss [Cha07]. In the case of Hinge-2 loss, i.e., $\ell(y, \langle\theta, x\rangle) = \max\{0, 1 - y\langle\theta, x\rangle\}^2$, by combining the offset and the normal vector of the hyperplane into a single parameter vector $\theta$, and denoting by $\mathrm{SV}_t$ the set of indices of all the support vectors at iteration $t$, we may write the Hessian as
$$\nabla^2_\theta f(\theta) = \frac{1}{|\mathrm{SV}_t|}\left\{I + C\sum_{i \in \mathrm{SV}_t} x_i x_i^T\right\}, \qquad \text{where } \mathrm{SV}_t = \{i : y_i\langle\theta_t, x_i\rangle < 1\}.$$
When $|\mathrm{SV}_t|$ is large, the problem falls into our setup and can be solved efficiently using NewSamp. Note that, unlike the GLM setting, the Lipschitz condition of our theorems does not apply here. However, we empirically demonstrate that NewSamp works regardless of such assumptions.

5 Experiments

In this section, we validate the performance of NewSamp through numerical studies. We experimented on two optimization problems, namely, Logistic Regression (LR) and SVM. LR minimizes Eq. (4.1) for the logistic function, whereas SVM minimizes Eq. (4.2) for the Hinge-2 loss. In the following, we briefly describe the algorithms that are used in the experiments:
1. Gradient Descent (GD), at each iteration, takes a step proportional to the negative of the full gradient evaluated at the current iterate. Under certain regularity conditions, GD exhibits a linear convergence rate.
2. Accelerated Gradient Descent (AGD) was proposed by Nesterov [Nes83]; it improves over gradient descent by using a momentum term.
3. Newton's Method (NM) achieves a quadratic convergence rate by utilizing the inverse Hessian evaluated at the current iterate.
4. Broyden-Fletcher-Goldfarb-Shanno (BFGS) is the most popular and stable Quasi-Newton method; $Q^t$ is formed by accumulating information from the iterates and gradients.
5. Limited Memory BFGS (L-BFGS) is a variant of BFGS which uses only the recent iterates and gradients to construct $Q^t$, providing an improvement in terms of memory usage.
6. Stochastic Gradient Descent (SGD) is a simplified version of GD where, at each iteration, a randomly selected gradient is used. We follow the guidelines of [Bot10] for the step size.

[Figure 2 shows, for each dataset (Synthetic, CT Slices, MSD), plots of log(Error) versus Time(sec) for Logistic Regression and SVM at ranks 3 and 60, comparing NewSamp, BFGS, L-BFGS, Newton, GD, AGD, SGD and AdaGrad.]
Figure 2: Performance of several algorithms on different datasets. NewSamp is represented with red color.

7. Adaptive Gradient Scaling (AdaGrad) uses an adaptive learning rate based on the previous gradients.
AdaGrad significantly improves the performance and stability of SGD [DHS11]. For batch algorithms, we used a constant step size, and for all the algorithms, the step size that provides the fastest convergence is chosen. For stochastic algorithms, we optimized over the parameters that define the step size. The parameters of NewSamp are selected following the guidelines in Section 3.4. We experimented over various datasets that are given in Table 1. Each dataset consists of a design matrix $X \in \mathbb{R}^{n\times p}$ and the corresponding observations (classes) $y \in \mathbb{R}^n$. The synthetic data is generated through a multivariate Gaussian distribution. As a methodological choice, we selected moderate values of $p$, for which Newton's method can still be implemented, and nevertheless we can demonstrate an improvement. For larger values of $p$, the comparison is even more favorable to our approach. The effects of the sub-sampling size $|S_t|$ and the rank threshold are demonstrated in Figure 1. A thorough comparison of the aforementioned optimization techniques is presented in Figure 2. In the case of LR, we observe that stochastic methods enjoy fast convergence at start, but slow down after several epochs. The algorithm that comes closest to NewSamp in terms of performance is BFGS. In the case of SVM, NM is the closest algorithm to NewSamp. Note that the global convergence of BFGS is not better than that of GD [Nes04]. The condition for a super-linear rate is $\sum_t \|\theta_t - \theta_*\|_2 < \infty$, which requires an initial point close to the optimum [DM77]. This condition can rarely be satisfied in practice, which also affects the performance of other second order methods. For NewSamp, even though rank thresholding provides a level of robustness, we found that the initial point is still an important factor. Details about Figure 2 and additional experiments can be found in Appendix C.
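The LR objective minimized in these experiments is Eq. (4.1) with $\Phi(z) = \log(1+e^z)$; its gradient and Hessian, which NewSamp sub-samples, take a simple closed form. The sketch below is illustrative (hypothetical data and names): the Hessian is $\frac{1}{n}X^T D X$ with $D_{ii} = \Phi^{(2)}(\langle x_i, \theta\rangle) \le 1/4$.

```python
import numpy as np

def glm_logistic(theta, X, y):
    # f(theta) = (1/n) sum_i [Phi(<x_i,theta>) - y_i <x_i,theta>], Phi(z) = log(1+e^z)
    z = X @ theta
    f = np.mean(np.logaddexp(0.0, z) - y * z)      # logaddexp: stable log(1 + e^z)
    mu = 1.0 / (1.0 + np.exp(-z))                  # Phi'(z), the logistic function
    grad = X.T @ (mu - y) / len(y)
    D = mu * (1.0 - mu)                            # Phi''(z), bounded by 1/4
    hess = X.T @ (D[:, None] * X) / len(y)
    return f, grad, hess

rng = np.random.default_rng(5)
X = rng.standard_normal((20, 3))
y = (rng.random(20) < 0.5).astype(float)
theta = 0.1 * rng.standard_normal(3)
f, grad, hess = glm_logistic(theta, X, y)
```

Restricting the row sum in `hess` to a sub-sample $S_t$ gives exactly the $H_{S_t}$ that Algorithm 1 eigendecomposes and thresholds.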
Table 1: Datasets used in the experiments.

Dataset     n       p    r    Reference
CT slices   53500   386  60   [GKS+11, Lic13]
Covertype   581012  54   20   [BD99, Lic13]
MSD         515345  90   60   [MEWL, Lic13]
Synthetic   500000  300  3    –

6 Conclusion

In this paper, we proposed a sub-sampling based second order method utilizing low-rank Hessian estimation. The proposed method targets the regime $n \gg p$ and has $O(np + |S|p^2)$ complexity per iteration. We showed that the convergence rate of NewSamp is composite for two widely used sub-sampling schemes, i.e., it starts as quadratic convergence and transforms into linear convergence near the optimum. Convergence behavior under other sub-sampling schemes is an interesting line of research. Numerical experiments demonstrate the performance of the proposed algorithm, which we compared to classical optimization methods.

References

[Ama98] Shun-Ichi Amari, Natural gradient works efficiently in learning, Neural Computation 10 (1998).
[BCNN11] Richard H Byrd, Gillian M Chin, Will Neveitt, and Jorge Nocedal, On the use of stochastic Hessian information in optimization methods for machine learning, SIAM Journal on Optimization (2011).
[BD99] Jock A Blackard and Denis J Dean, Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables, Compag (1999).
[BHNS14] Richard H Byrd, S L Hansen, Jorge Nocedal, and Yoram Singer, A stochastic quasi-Newton method for large-scale optimization, arXiv preprint arXiv:1401.7020 (2014).
[Bis95] Christopher M. Bishop, Neural networks for pattern recognition, Oxford University Press, 1995.
[Bot10] Léon Bottou, Large-scale machine learning with stochastic gradient descent, COMPSTAT, 2010.
[BV04] Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge University Press, 2004.
[CCS10] Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization 20 (2010), no. 4, 1956–1982.
[Cha07] Olivier Chapelle, Training a support vector machine in the primal, Neural Computation (2007).
[DE15] Lee H Dicker and Murat A Erdogdu, Flexible results for quadratic forms with applications to variance components estimation, arXiv preprint arXiv:1509.04388 (2015).
[DGJ13] David L Donoho, Matan Gavish, and Iain M Johnstone, Optimal shrinkage of eigenvalues in the spiked covariance model, arXiv preprint arXiv:1311.0851 (2013).
[DHS11] John Duchi, Elad Hazan, and Yoram Singer, Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res. 12 (2011), 2121–2159.
[DM77] John E Dennis, Jr and Jorge J Moré, Quasi-Newton methods, motivation and theory, SIAM Review 19 (1977), 46–89.
[EM15] Murat A Erdogdu and Andrea Montanari, Convergence rates of sub-sampled Newton methods, arXiv preprint arXiv:1508.02810 (2015).
[Erd15] Murat A. Erdogdu, Newton-Stein Method: A second order method for GLMs via Stein's lemma, NIPS, 2015.
[FS12] Michael P Friedlander and Mark Schmidt, Hybrid deterministic-stochastic methods for data fitting, SIAM Journal on Scientific Computing 34 (2012), no. 3, A1380–A1405.
[GKS+11] Franz Graf, Hans-Peter Kriegel, Matthias Schubert, Sebastian Pölsterl, and Alexander Cavallaro, 2D image registration in CT images using radial image descriptors, MICCAI 2011, Springer, 2011.
[GN10] David Gross and Vincent Nesme, Note on sampling without replacing from a finite collection of matrices, arXiv preprint arXiv:1001.2738 (2010).
[GNS09] Igor Griva, Stephen G Nash, and Ariela Sofer, Linear and nonlinear optimization, SIAM, 2009.
[HMT11] Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, no. 2, 217–288.
[Lic13] M. Lichman, UCI machine learning repository, 2013.
[LRF10] Nicolas Le Roux and Andrew W Fitzgibbon, A fast natural Newton method, ICML, 2010.
[LRMB08] Nicolas Le Roux, Pierre-A Manzagol, and Yoshua Bengio, Topmoumoute online natural gradient algorithm, NIPS, 2008.
[Mar10] James Martens, Deep learning via Hessian-free optimization, ICML, 2010, pp. 735–742.
[MEWL] Thierry B. Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere, The million song dataset, ISMIR-11.
[Nes83] Yurii Nesterov, A method for unconstrained convex minimization problem with the rate of convergence O(1/k²), Doklady AN SSSR, vol. 269, 1983, pp. 543–547.
[Nes04] Yurii Nesterov, Introductory lectures on convex optimization: A basic course, vol. 87, Springer, 2004.
[SRB13] Mark Schmidt, Nicolas Le Roux, and Francis Bach, Minimizing finite sums with the stochastic average gradient, arXiv preprint arXiv:1309.2388 (2013).
[SS02] Bernhard Schölkopf and Alexander J Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond, MIT Press, 2002.
[Tro12] Joel A Tropp, User-friendly tail bounds for sums of random matrices, Foundations of Computational Mathematics (2012).
[Ver10] Roman Vershynin, Introduction to the non-asymptotic analysis of random matrices, arXiv:1011.3027 (2010).
[VP12] Oriol Vinyals and Daniel Povey, Krylov Subspace Descent for Deep Learning, AISTATS, 2012.
2015
113
5,607
Learning Theory and Algorithms for Forecasting Non-Stationary Time Series Vitaly Kuznetsov Courant Institute New York, NY 10011 vitaly@cims.nyu.edu Mehryar Mohri Courant Institute and Google Research New York, NY 10011 mohri@cims.nyu.edu Abstract We present data-dependent learning bounds for the general scenario of nonstationary non-mixing stochastic processes. Our learning guarantees are expressed in terms of a data-dependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. 1 Introduction Time series forecasting plays a crucial role in a number of domains ranging from weather forecasting and earthquake prediction to applications in economics and finance. The classical statistical approaches to time series analysis are based on generative models such as the autoregressive moving average (ARMA) models, or their integrated versions (ARIMA) and several other extensions [Engle, 1982, Bollerslev, 1986, Brockwell and Davis, 1986, Box and Jenkins, 1990, Hamilton, 1994]. Most of these models rely on strong assumptions about the noise terms, often assumed to be i.i.d. random variables sampled from a Gaussian distribution, and the guarantees provided in their support are only asymptotic. An alternative non-parametric approach to time series analysis consists of extending the standard i.i.d. statistical learning theory framework to that of stochastic processes. In much of this work, the process is assumed to be stationary and suitably mixing [Doukhan, 1994]. Early work along this approach consisted of the VC-dimension bounds for binary classification given by Yu [1994] under the assumption of stationarity and β-mixing. 
Under the same assumptions, Meir [2000] presented bounds in terms of covering numbers for regression losses and Mohri and Rostamizadeh [2009] proved general data-dependent Rademacher complexity learning bounds. Vidyasagar [1997] showed that PAC learning algorithms in the i.i.d. setting preserve their PAC learning property in the β-mixing stationary scenario. A similar result was proven by Shalizi and Kontorovitch [2013] for mixtures of β-mixing processes and by Berti and Rigo [1997] and Pestov [2010] for exchangeable random variables. Alquier and Wintenberger [2010] and Alquier et al. [2014] also established PAC-Bayesian learning guarantees under weak dependence and stationarity. A number of algorithm-dependent bounds have also been derived for the stationary mixing setting. Lozano et al. [2006] studied the convergence of regularized boosting. Mohri and Rostamizadeh [2010] gave data-dependent generalization bounds for stable algorithms for φ-mixing and β-mixing stationary processes. Steinwart and Christmann [2009] proved fast learning rates for regularized algorithms with α-mixing stationary sequences and Modha and Masry [1998] gave guarantees for certain classes of models under the same assumptions. However, stationarity and mixing are often not valid assumptions. For example, even for Markov chains, which are among the most widely used types of stochastic processes in applications, stationarity does not hold unless the Markov chain is started with an equilibrium distribution. Similarly, long-memory models such as ARFIMA may not be mixing, or mixing may be arbitrarily slow [Baillie, 1996]. In fact, it is possible to construct first-order autoregressive processes that are not mixing [Andrews, 1983]. Additionally, the mixing assumption is defined only in terms of the distribution of the underlying stochastic process and ignores the loss function and the hypothesis set used.
This suggests that mixing may not be the right property to characterize learning in the setting of stochastic processes. A number of attempts have been made to relax the assumptions of stationarity and mixing. Adams and Nobel [2010] proved asymptotic guarantees for stationary ergodic sequences. Agarwal and Duchi [2013] gave generalization bounds for asymptotically stationary (mixing) processes in the case of stable on-line learning algorithms. Kuznetsov and Mohri [2014] established learning guarantees for fully non-stationary β- and φ-mixing processes. In this paper, we consider the general case of non-stationary non-mixing processes. We are not aware of any prior work providing generalization bounds in this setting. In fact, our bounds appear to be novel even when the process is stationary (but not mixing). The learning guarantees that we present hold for both bounded and unbounded memory models. Deriving generalization bounds for unbounded memory models even in the stationary mixing case was an open question prior to our work [Meir, 2000]. Our guarantees cover the majority of approaches used in practice, including various autoregressive and state space models. The key ingredients of our generalization bounds are a data-dependent measure of sequential complexity (expected sequential covering number or sequential Rademacher complexity [Rakhlin et al., 2010]) and a measure of discrepancy between the sample and target distributions. Kuznetsov and Mohri [2014] also give generalization bounds in terms of discrepancy. However, unlike the result of Kuznetsov and Mohri [2014], our analysis does not require any mixing assumptions, which are hard to verify in practice. More importantly, under some additional mild assumptions, the discrepancy measure that we propose can be estimated from data, which leads to data-dependent learning guarantees for the non-stationary non-mixing case.
We devise new algorithms for non-stationary time series forecasting that benefit from our data-dependent guarantees. The parameters of generative models such as ARIMA are typically estimated via the maximum likelihood technique, which often leads to non-convex optimization problems. In contrast, our objective is convex and leads to an optimization problem with a unique global solution that can be found efficiently. Another issue with standard generative models is that they address non-stationarity in the data via a differencing transformation which does not always lead to a stationary process. In contrast, we address the problem of non-stationarity in a principled way using our learning guarantees. The rest of this paper is organized as follows. The formal definition of the time series forecasting learning scenario as well as that of several key concepts is given in Section 2. In Section 3, we introduce and prove our new generalization bounds. In Section 4, we give data-dependent learning bounds based on the empirical discrepancy. These results, combined with a novel analysis of kernel-based hypotheses for time series forecasting (Appendix B), are used to devise new forecasting algorithms in Section 5. In Appendix C, we report the results of preliminary experiments using these algorithms.

2 Preliminaries

We consider the following general time series prediction setting where the learner receives a realization (X_1, Y_1), ..., (X_T, Y_T) of some stochastic process, with (X_t, Y_t) ∈ Z = X × Y. The objective of the learner is to select out of a specified family H a hypothesis h: X → Y that achieves a small generalization error E[L(h(X_{T+1}), Y_{T+1}) | Z_1, ..., Z_T] conditioned on observed data, where L: Y × Y → [0, ∞) is a given loss function. The path-dependent generalization error that we consider in this work is a finer measure of the generalization ability than the averaged generalization error E[L(h(X_{T+1}), Y_{T+1})] = E[E[L(h(X_{T+1}), Y_{T+1}) | Z_1, . . .
, Z_T]]], since it only takes into consideration the realized history of the stochastic process and does not average over the set of all possible histories. The results that we present in this paper also apply to the setting where the time parameter t can take non-integer values and the prediction lag is an arbitrary number l ≥ 0. That is, the error is defined by E[L(h(X_{T+l}), Y_{T+l}) | Z_1, ..., Z_T], but for notational simplicity we set l = 1. Our setup covers a large number of scenarios commonly used in practice. The case X = Y^p corresponds to a large class of autoregressive models. Taking X = ∪_{p=1}^∞ Y^p leads to growing-memory models which, in particular, include state space models. More generally, X may contain both the history of the process {Y_t} and some additional side information. To simplify the notation, in the rest of the paper we will use the shorter notation f(z) = L(h(x), y) for any z = (x, y) ∈ Z and introduce the family F = {(x, y) → L(h(x), y): h ∈ H} containing such functions f. We will assume a bounded loss function, that is, |f| ≤ M for all f ∈ F, for some M ∈ R₊. Finally, we will use the shorthand Z_a^b to denote a sequence of random variables Z_a, Z_{a+1}, ..., Z_b. The key quantity of interest in the analysis of generalization is the following supremum of the empirical process:

Φ(Z_1^T) = sup_{f∈F} ( E[f(Z_{T+1}) | Z_1^T] − Σ_{t=1}^T q_t f(Z_t) ),   (1)

where q_1, ..., q_T are real numbers, which in the standard learning scenarios are chosen to be uniform. In our general setting, different Z_t's may follow different distributions, thus distinct weights could be assigned to the errors made on different sample points depending on their relevance to forecasting the future Z_{T+1}. The generalization bounds that we present below are for an arbitrary sequence q = (q_1, ..., q_T), which, in particular, covers the case of uniform weights. Remarkably, our bounds do not even require the non-negativity of q.
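The weighted empirical error Σ_{t=1}^T q_t f(Z_t) appearing in (1) is straightforward to compute once per-point losses are available. The following is a minimal illustration of our own (not code from the paper); the function name and the loss values are hypothetical:

```python
import numpy as np

def weighted_empirical_risk(losses, q):
    """Weighted empirical error sum_t q_t * f(Z_t) for one hypothesis.

    losses: per-point losses f(Z_1), ..., f(Z_T)
    q:      weights q_1, ..., q_T (need not form a probability distribution)
    """
    losses = np.asarray(losses, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.dot(q, losses))

# Toy example: with uniform weights q_t = 1/T this reduces to the mean loss.
T = 4
losses = [0.5, 0.1, 0.4, 0.2]
q_uniform = np.full(T, 1.0 / T)
print(weighted_empirical_risk(losses, q_uniform))  # close to 0.3, the mean loss
```

Non-uniform weights simply emphasize the sample points deemed more relevant to forecasting Z_{T+1}, as discussed above.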
Our generalization bounds are expressed in terms of data-dependent measures of sequential complexity such as expected sequential covering number or sequential Rademacher complexity [Rakhlin et al., 2010]. We give a brief overview of the notion of sequential covering number and refer the reader to the aforementioned reference for further details. We adopt the following definition of a complete binary tree: a Z-valued complete binary tree z is a sequence (z_1, ..., z_T) of T mappings z_t: {±1}^{t−1} → Z, t ∈ [1, T]. A path in the tree is σ = (σ_1, ..., σ_{T−1}). To simplify the notation we will write z_t(σ) instead of z_t(σ_1, ..., σ_{t−1}), even though z_t depends only on the first t−1 elements of σ. The following definition generalizes the classical notion of covering numbers to the sequential setting. A set V of R-valued trees of depth T is a sequential α-cover (with respect to the q-weighted ℓ_p norm) of a function class G on a tree z of depth T if for all g ∈ G and all σ ∈ {±1}^T, there is v ∈ V such that

( Σ_{t=1}^T |v_t(σ) − g(z_t(σ))|^p )^{1/p} ≤ ‖q‖_q^{−1} α,

where ‖·‖_q is the dual norm. The (sequential) covering number N_p(α, G, z) of a function class G on a given tree z is defined to be the size of the minimal sequential cover. The maximal covering number is then taken to be N_p(α, G) = sup_z N_p(α, G, z). One can check that in the case of uniform weights this definition coincides with the standard definition of sequential covering numbers. Note that this is a purely combinatorial notion of complexity which ignores the distribution of the process in the given learning problem. Data-dependent sequential covering numbers can be defined as follows. Given a stochastic process distributed according to the distribution p with p_t(·|z_1^{t−1}) denoting the conditional distribution at time t, we sample a Z × Z-valued tree of depth T according to the following procedure.
Draw two independent samples Z_1, Z'_1 from p_1; in the left child of the root draw Z_2, Z'_2 according to p_2(·|Z_1) and in the right child according to p_2(·|Z'_1). More generally, for a node that can be reached by a path (σ_1, ..., σ_t), we draw Z_t, Z'_t according to p_t(·|S_1(σ_1), ..., S_{t−1}(σ_{t−1})), where S_t(1) = Z_t and S_t(−1) = Z'_t. Let z denote the tree formed using the Z_t's and define the expected covering number to be E_{z∼T(p)}[N_p(α, G, z)], where T(p) denotes the distribution of z. In a similar manner, one can define other measures of complexity such as sequential Rademacher complexity and the Littlestone dimension [Rakhlin et al., 2015] as well as their data-dependent counterparts [Rakhlin et al., 2011]. The final ingredient needed for expressing our learning guarantees is the notion of discrepancy between the target distribution and the distribution of the sample:

Δ = sup_{f∈F} ( E[f(Z_{T+1}) | Z_1^T] − Σ_{t=1}^T q_t E[f(Z_t) | Z_1^{t−1}] ).   (2)

The discrepancy Δ is a natural measure of the non-stationarity of the stochastic process Z with respect to both the loss function L and the hypothesis set H. In particular, note that if the process Z is i.i.d., then we simply have Δ = 0 provided that the q_t's form a probability distribution. It is also possible to give bounds on Δ in terms of other natural distances between distributions. For instance, Pinsker's inequality yields

Δ ≤ M ‖P_{T+1}(·|Z_1^T) − Σ_{t=1}^T q_t P_t(·|Z_1^{t−1})‖_TV ≤ M √( (1/2) D( P_{T+1}(·|Z_1^T) ‖ Σ_{t=1}^T q_t P_t(·|Z_1^{t−1}) ) ),

where ‖·‖_TV is the total variation distance and D(·‖·) the relative entropy, P_{t+1}(·|Z_1^t) the conditional distribution of Z_{t+1}, and Σ_{t=1}^T q_t P_t(·|Z_1^{t−1}) the mixture of the sample marginals.
Alternatively, if the target distribution at lag l, P = P_{T+l}, is the stationary distribution of an asymptotically stationary process Z [Agarwal and Duchi, 2013, Kuznetsov and Mohri, 2014], then for q_t = 1/T we have

Δ ≤ (M/T) Σ_{t=1}^T ‖P − P_{t+l}(·|Z_1^t)‖_TV ≤ M φ(l),

where φ(l) = sup_s sup_z ‖P − P_{l+s}(·|z_1^s)‖_TV is the coefficient of asymptotic stationarity. The process is asymptotically stationary if lim_{l→∞} φ(l) = 0. However, the most important property of the discrepancy Δ is that, as shown later in Section 4, it can be estimated from data under some additional mild assumptions. [Kuznetsov and Mohri, 2014] also give generalization bounds for non-stationary mixing processes in terms of a related notion of discrepancy. It is not known if the discrepancy measure used in [Kuznetsov and Mohri, 2014] can be estimated from data.

3 Generalization Bounds

In this section, we prove new generalization bounds for forecasting non-stationary time series. The first step consists of using decoupled tangent sequences to establish concentration results for the supremum of the empirical process Φ(Z_1^T). Given a sequence of random variables Z_1^T, we say that Z'_1^T is a decoupled tangent sequence if Z'_t is distributed according to P(·|Z_1^{t−1}) and is independent of Z_t^T. It is always possible to construct such a sequence of random variables [De la Peña and Giné, 1999]. The next theorem is the main result of this section.

Theorem 1. Let Z_1^T be a sequence of random variables distributed according to p. Fix ε > 2α > 0. Then, the following holds:

P( Φ(Z_1^T) − Δ ≥ ε ) ≤ E_{v∼T(p)}[N_1(α, F, v)] exp( −(ε − 2α)² / (2M²‖q‖₂²) ).

Proof. The first step is to observe that, since the difference of the suprema is upper bounded by the supremum of the difference, it suffices to bound the probability of the following event:

{ sup_{f∈F} ( Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ) ≥ ε }.

By Markov's inequality, for any λ > 0, the following inequality holds:

P( sup_{f∈F} ( Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ) ≥ ε ) ≤
exp(−λε) E[ exp( λ sup_{f∈F} Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ) ].

Since Z'_1^T is a tangent sequence, the following equalities hold: E[f(Z_t)|Z_1^{t−1}] = E[f(Z'_t)|Z_1^{t−1}] = E[f(Z'_t)|Z_1^T]. Using these equalities and Jensen's inequality, we obtain the following:

E[ exp( λ sup_{f∈F} Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ) ]
= E[ exp( λ sup_{f∈F} E[ Σ_{t=1}^T q_t (f(Z'_t) − f(Z_t)) | Z_1^T ] ) ]
≤ E[ exp( λ sup_{f∈F} Σ_{t=1}^T q_t (f(Z'_t) − f(Z_t)) ) ],

where the last expectation is taken over the joint measure of Z_1^T and Z'_1^T. Applying Lemma 5 (Appendix A), we can further bound this expectation by

E_{(z,z')∼T(p)} E_σ[ exp( λ sup_{f∈F} Σ_{t=1}^T σ_t q_t (f(z'_t(σ)) − f(z_t(σ))) ) ]
≤ E_{(z,z')∼T(p)} E_σ[ exp( λ sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z'_t(σ)) + λ sup_{f∈F} Σ_{t=1}^T −σ_t q_t f(z_t(σ)) ) ]
≤ (1/2) E_{(z,z')} E_σ[ exp( 2λ sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z'_t(σ)) ) ] + (1/2) E_{(z,z')} E_σ[ exp( 2λ sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z_t(σ)) ) ]
= E_{z∼T(p)} E_σ[ exp( 2λ sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z_t(σ)) ) ],

where for the second inequality we used Young's inequality and for the last equality we used symmetry. Given z, let C denote the minimal α-cover with respect to the q-weighted ℓ₁-norm of F on z. Then, the following bound holds:

sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z_t(σ)) ≤ max_{c∈C} Σ_{t=1}^T σ_t q_t c_t(σ) + α.

By the monotonicity of the exponential function,

E_σ[ exp( 2λ sup_{f∈F} Σ_{t=1}^T σ_t q_t f(z_t(σ)) ) ] ≤ exp(2λα) E_σ[ exp( 2λ max_{c∈C} Σ_{t=1}^T σ_t q_t c_t(σ) ) ] ≤ exp(2λα) Σ_{c∈C} E_σ[ exp( 2λ Σ_{t=1}^T σ_t q_t c_t(σ) ) ].

Since c_t(σ) depends only on σ_1, ..., σ_{t−1}, by Hoeffding's bound,

E_σ[ exp( 2λ Σ_{t=1}^T σ_t q_t c_t(σ) ) ] = E[ exp( 2λ Σ_{t=1}^{T−1} σ_t q_t c_t(σ) ) E_{σ_T}[ exp( 2λ σ_T q_T c_T(σ) ) | σ_1^{T−1} ] ] ≤ E[ exp( 2λ Σ_{t=1}^{T−1} σ_t q_t c_t(σ) ) ] exp(2λ²q_T²M²),

and iterating this inequality and using the union bound, we obtain the following:

P( sup_{f∈F} Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ≥ ε ) ≤ E_{v∼T(p)}[N_1(α, G, v)] exp( −λ(ε − 2α) + 2λ²M²‖q‖₂² ).

Optimizing over λ completes the proof.

An immediate consequence of Theorem 1 is the following result.
Corollary 2. For any δ > 0, with probability at least 1 − δ, for all f ∈ F and all α > 0,

E[f(Z_{T+1}) | Z_1^T] ≤ Σ_{t=1}^T q_t f(Z_t) + Δ + 2α + M‖q‖₂ √( 2 log( E_{v∼T(p)}[N_1(α, G, v)] / δ ) ).

We are not aware of other finite sample bounds in the non-stationary non-mixing case. In fact, our bounds appear to be novel even in the stationary non-mixing case. Using chaining techniques, Theorem 1 and Corollary 2 can be further improved and we will present these results in the full version of this paper. While Rakhlin et al. [2015] give high probability bounds for a different quantity than the quantity of interest in time series prediction,

sup_{f∈F} ( Σ_{t=1}^T q_t (E[f(Z_t)|Z_1^{t−1}] − f(Z_t)) ),   (3)

their analysis of this quantity can also be used in our context to derive high probability bounds for Φ(Z_1^T) − Δ. However, this approach results in bounds that are in terms of purely combinatorial notions such as the maximal sequential covering number N_1(α, F). While at first sight this may seem a minor technical detail, the distinction is crucial in the setting of time series prediction. Consider the following example. Let Z_1 be drawn from a uniform distribution on {0, 1} and Z_t ∼ p(·|Z_{t−1}) with p(·|y) being a distribution over {0, 1} such that p(x|y) = 2/3 if x = y and 1/3 otherwise. Let G be defined by G = {g(x) = 1_{x≥θ}: θ ∈ [0, 1]}. Then, one can check that E_{v∼T(p)}[N_1(α, G, v)] = 2, while N_1(α, G) ≥ 2^T. The data-dependent bounds of Theorem 1 and Corollary 2 highlight the fact that the task of time series prediction lies in between the familiar i.i.d. scenario and the adversarial on-line learning setting. However, the key component of our learning guarantees is the discrepancy term Δ. Note that in the general non-stationary case, the bounds of Theorem 1 may not converge to zero due to the discrepancy between the target and sample distributions. This is also consistent with the lower bounds of Barve and Long [1996] that we discuss in more detail in Section 4.
However, convergence can be established in some special cases. In the i.i.d. case our bounds reduce to the standard covering-number learning guarantees. In the drifting scenario, with Z_1^T being a sequence of independent random variables, our discrepancy measure coincides with the one used and studied in [Mohri and Muñoz Medina, 2012]. Convergence can also be established in asymptotically stationary and stationary mixing cases. However, as we show in Section 4, the most important advantage of our bounds is that the discrepancy measure we use can be estimated from data.

4 Estimating Discrepancy

In Section 3, we showed that the discrepancy Δ is crucial for forecasting non-stationary time series. In particular, if we could select a distribution q over the sample Z_1^T that would minimize the discrepancy Δ and use it to weight training points, then we would have a better learning guarantee for an algorithm trained on this weighted sample. In some special cases, the discrepancy Δ can be computed analytically. However, in general, we do not have access to the distribution of Z_1^T and hence we need to estimate the discrepancy from the data. Furthermore, in practice, we never observe Z_{T+1} and it is not possible to estimate Δ without some further assumptions. One natural assumption is that the distribution P_t of Z_t does not change drastically with t on average. Under this assumption the last s observations Z_{T−s+1}^T are effectively drawn from a distribution close to P_{T+1}. More precisely, we can write

Δ ≤ sup_{f∈F} ( (1/s) Σ_{t=T−s+1}^T E[f(Z_t)|Z_1^{t−1}] − Σ_{t=1}^T q_t E[f(Z_t)|Z_1^{t−1}] ) + sup_{f∈F} ( E[f(Z_{T+1})|Z_1^T] − (1/s) Σ_{t=T−s+1}^T E[f(Z_t)|Z_1^{t−1}] ).

We will assume that the second term, denoted by Δ_s, is sufficiently small and will show that the first term can be estimated from data. But, we first note that our assumption is necessary for learning in this setting.
Observe that

sup_{f∈F} ( E[f(Z_{T+1})|Z_1^T] − E[f(Z_r)|Z_1^{r−1}] ) ≤ Σ_{t=r}^T sup_{f∈F} ( E[f(Z_{t+1})|Z_1^t] − E[f(Z_t)|Z_1^{t−1}] ) ≤ M Σ_{t=r}^T ‖P_{t+1}(·|Z_1^t) − P_t(·|Z_1^{t−1})‖_TV,

for all r = T−s+1, ..., T. Therefore, we must have

Δ_s ≤ (1/s) Σ_{t=T−s+1}^T sup_{f∈F} ( E[f(Z_{T+1})|Z_1^T] − E[f(Z_t)|Z_1^{t−1}] ) ≤ ((s+1)/2) Mγ,

where γ = sup_t ‖P_{t+1}(·|Z_1^t) − P_t(·|Z_1^{t−1})‖_TV. Barve and Long [1996] showed that [VC-dim(H)γ]^{1/3} is a lower bound on the generalization error in the setting of binary classification where Z_1^T is a sequence of independent but not identically distributed random variables (drifting). This setting is a special case of the more general scenario that we are considering. The following result shows that we can estimate the first term in the upper bound on Δ.

Theorem 3. Let Z_1^T be a sequence of random variables. Then, for any δ > 0, with probability at least 1 − δ, the following holds for all α > 0:

sup_{f∈F} ( Σ_{t=1}^T (p_t − q_t) E[f(Z_t)|Z_1^{t−1}] ) ≤ sup_{f∈F} ( Σ_{t=1}^T (p_t − q_t) f(Z_t) ) + B,

where B = 2α + M‖q − p‖₂ √( 2 log( E_{z∼T(p)}[N_1(α, G, z)] / δ ) ) and where p is the uniform distribution over the last s points.

The proof of this result is given in Appendix A. Theorem 1 and Theorem 3 combined with the union bound yield the following result.

Corollary 4. Let Z_1^T be a sequence of random variables. Then, for any δ > 0, with probability at least 1 − δ, the following holds for all f ∈ F and all α > 0:

E[f(Z_{T+1})|Z_1^T] ≤ Σ_{t=1}^T q_t f(Z_t) + Δ̃ + Δ_s + 4α + M( ‖q‖₂ + ‖q − p‖₂ ) √( 2 log( 2 E_{v∼T(p)}[N_1(α, G, v)] / δ ) ),

where Δ̃ = sup_{f∈F} ( Σ_{t=1}^T (p_t − q_t) f(Z_t) ).

5 Algorithms

In this section, we use our learning guarantees to devise algorithms for forecasting non-stationary time series. We consider a broad family of kernel-based hypothesis classes with regression losses. We present the full analysis of this setting in Appendix B, including novel bounds on the sequential Rademacher complexity.
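When the supremum over F is restricted to a finite set of candidate hypotheses, the plug-in discrepancy estimate sup_f Σ_t (p_t − q_t) f(Z_t) from Theorem 3 can be computed directly. The following is a minimal sketch of our own under that finite-class assumption; the function name, loss values, and weights are illustrative, not from the paper:

```python
import numpy as np

def empirical_discrepancy(loss_matrix, q, s):
    """Plug-in discrepancy estimate sup_f sum_t (p_t - q_t) f(Z_t),
    with the sup taken over a finite candidate set of hypotheses and
    p the uniform distribution over the last s sample points.

    loss_matrix: shape (n_hypotheses, T); entry [i, t] holds f_i(Z_t)
    q:           length-T weight vector
    s:           number of trailing points defining p
    """
    L = np.asarray(loss_matrix, dtype=float)
    q = np.asarray(q, dtype=float)
    T = L.shape[1]
    p = np.zeros(T)
    p[-s:] = 1.0 / s
    return float(np.max(L @ (p - q)))

# Two candidate hypotheses over T = 4 points; p uniform on the last s = 2.
losses = [[1.0, 1.0, 0.0, 0.0],   # f_1: accurate late, poor early
          [0.0, 0.0, 1.0, 1.0]]   # f_2: the reverse
q = [0.25, 0.25, 0.25, 0.25]
print(empirical_discrepancy(losses, q, s=2))  # 0.5, achieved by f_2
```

Minimizing this quantity over q (as in the algorithms of Section 5) would down-weight early points on which the candidate losses differ most from their values on the recent window.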
The learning bounds of Theorem 1 can be generalized to hold uniformly over q at the price of an additional term in O( ‖q − u‖₁ √( log₂ log₂ ‖q − u‖₁^{−1} ) ). We prove this result in Theorem 8 (Appendix B). Suppose L is the squared loss and H = {x → w · Ψ(x): ‖w‖_H ≤ Λ}, where Ψ: X → H is a feature mapping from X to a Hilbert space H. By Lemma 6 (Appendix B), we can bound the complexity term in our generalization bounds by

O( (log³ T) Λr/√T + (log³ T)‖q − u‖₁ ),

where K is a PDS kernel associated with H such that sup_x K(x, x) ≤ r and u is the uniform distribution over the sample. Then, we can formulate a joint optimization problem over both q and w based on the learning guarantee of Theorem 8, which holds uniformly over all q:

min_{0≤q≤1, w} { Σ_{t=1}^T q_t (w · Ψ(x_t) − y_t)² + λ₁ Σ_{t=1}^T d_t q_t + λ₂‖w‖²_H + λ₃‖q − u‖₁ }.   (4)

Here, we have upper bounded the empirical discrepancy term by Σ_{t=1}^T d_t q_t, with each d_t defined by sup_{‖w'‖≤Λ} | Σ_{s=1}^T p_s (w' · Ψ(x_s) − y_s)² − (w' · Ψ(x_t) − y_t)² |. Each d_t can be precomputed using DC-programming. For general loss functions, the DC-programming approach only guarantees convergence to a stationary point. However, for the squared loss, our problem can be cast as an instance of the trust region problem, which can be solved globally using the DCA algorithm of Tao and An [1998]. Note that problem (4) is not jointly convex in q and w. However, using the dual problem associated to w, it can be rewritten as follows:

min_{0≤q≤1} { max_α ( −λ₂ Σ_{t=1}^T α_t²/q_t − α^T K α + 2λ₂ α^T Y ) + λ₁ (d · q) + λ₃‖q − u‖₁ },   (5)

where d = (d₁, ..., d_T)^T, K is the kernel matrix and Y = (y₁, ..., y_T)^T. We use the change of variables r_t = 1/q_t and further upper bound λ₃‖q − u‖₁ by λ'₃‖r − T²u‖₂, which follows from |q_t − u_t| = |q_t u_t (r_t − T)| and Hölder's inequality.
Then, this yields the following optimization problem:

min_{r∈D} { max_α ( −λ₂ Σ_{t=1}^T r_t α_t² − α^T K α + 2λ₂ α^T Y ) + λ₁ Σ_{t=1}^T d_t/r_t + λ₃‖r − T²u‖₂² },   (6)

where D = {r: r_t ≥ 1, t ∈ [1, T]}. The optimization problem (6) is convex since D is a convex set and the first term in (6) is convex as a maximum of convex (linear) functions of r. This problem can be solved using standard descent methods, where, at each iteration, we solve a standard QP in α, which admits a closed-form solution. Parameters λ₁, λ₂, and λ₃ are selected through cross-validation. An alternative, simpler algorithm based on the data-dependent bounds of Corollary 4 consists of first finding a distribution q minimizing the (regularized) discrepancy and then using it to find a hypothesis minimizing the (regularized) weighted empirical risk. This leads to the following two-stage procedure. First, we find a solution q* of the following convex optimization problem:

min_{q≥0} { sup_{‖w'‖≤Λ} ( Σ_{t=1}^T (p_t − q_t)(w' · Ψ(x_t) − y_t)² ) + λ₁‖q − u‖₁ },   (7)

where λ₁ and Λ are parameters that can be selected via cross-validation. Our generalization bounds hold for arbitrary weights q, but we restrict them to being positive sequences. Note that other regularization terms such as ‖q‖₂² and ‖q − p‖₂² from the bound of Corollary 4 can be incorporated in the optimization problem, but we discard them to minimize the number of parameters. This problem can be solved using standard descent optimization methods, where, at each step, we use DC-programming to evaluate the supremum over w'. Alternatively, one can upper bound the supremum by Σ_{t=1}^T q_t d_t and then solve the resulting optimization problem. The solution q* of (7) is then used to solve the following (weighted) kernel ridge regression problem:

min_w { Σ_{t=1}^T q*_t (w · Ψ(x_t) − y_t)² + λ₂‖w‖²_H }.   (8)

Note that, in order to guarantee the convexity of this problem, we require q* ≥ 0.
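For intuition, the weighted problem (8) has a closed-form solution in the finite-dimensional case. The sketch below is our own illustration: it takes Ψ to be the identity feature map and solves the weighted normal equations directly. The helper name is hypothetical, and this is not the paper's kernelized implementation:

```python
import numpy as np

def weighted_ridge(X, y, q, lam):
    """Solve min_w sum_t q_t (w . x_t - y_t)^2 + lam * ||w||^2 in closed
    form via the weighted normal equations (X^T Q X + lam I) w = X^T Q y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    q = np.asarray(q, dtype=float)
    XtQ = X.T * q                      # broadcasting computes X^T Q
    d = X.shape[1]
    return np.linalg.solve(XtQ @ X + lam * np.eye(d), XtQ @ y)

# Sanity check: the gradient of the objective must vanish at the solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)
q = rng.uniform(0.5, 1.5, size=50)   # positive weights, as required for (8)
w = weighted_ridge(X, y, q, lam=0.1)
grad = 2 * (X.T * q) @ (X @ w - y) + 2 * 0.1 * w
print(np.max(np.abs(grad)) < 1e-6)   # prints True
```

Points with larger q_t pull the fit harder toward them, which is exactly how the discrepancy-minimizing weights q* of (7) emphasize the sample points most relevant to the target distribution.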
6 Conclusion

We presented a general theoretical analysis of learning in the broad scenario of non-stationary non-mixing processes, the realistic setting for a variety of applications. We discussed in detail several algorithms benefitting from the learning guarantees presented. Our theory can also provide a finer analysis of several existing algorithms and help devise alternative principled learning algorithms.

Acknowledgments

This work was partly funded by NSF IIS-1117591 and CCF-1535987, and the NSERC PGS D3.

References

T. M. Adams and A. B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling. The Annals of Probability, 38(4):1345–1367, 2010. A. Agarwal and J. Duchi. The generalization ability of online algorithms for dependent data. Information Theory, IEEE Transactions on, 59(1):573–587, 2013. P. Alquier and O. Wintenberger. Model selection for weakly dependent time series forecasting. Technical Report 2010-39, Centre de Recherche en Economie et Statistique, 2010. P. Alquier, X. Li, and O. Wintenberger. Prediction of time series by statistical learning: general losses and fast rates. Dependence Modelling, 1:65–93, 2014. D. Andrews. First order autoregressive processes and strong mixing. Cowles Foundation Discussion Papers 664, Cowles Foundation for Research in Economics, Yale University, 1983. R. Baillie. Long memory processes and fractional integration in econometrics. Journal of Econometrics, 73(1):5–59, 1996. R. D. Barve and P. M. Long. On the complexity of learning from drifting distributions. In COLT, 1996. P. Berti and P. Rigo. A Glivenko-Cantelli theorem for exchangeable random variables. Statistics & Probability Letters, 32(4):385–391, 1997. T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. J Econometrics, 1986. G. E. P. Box and G. Jenkins. Time Series Analysis, Forecasting and Control. Holden-Day, Incorporated, 1990. P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods.
Springer-Verlag, New York, 1986. V. H. De la Peña and E. Giné. Decoupling: from dependence to independence: randomly stopped processes, U-statistics and processes, martingales and beyond. Probability and its Applications. Springer, NY, 1999. P. Doukhan. Mixing: properties and examples. Lecture Notes in Statistics. Springer-Verlag, New York, 1994. R. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50(4):987–1007, 1982. J. D. Hamilton. Time series analysis. Princeton, 1994. V. Kuznetsov and M. Mohri. Generalization bounds for time series prediction with non-stationary processes. In ALT, 2014. A. C. Lozano, S. R. Kulkarni, and R. E. Schapire. Convergence and consistency of regularized boosting algorithms with stationary β-mixing observations. In NIPS, pages 819–826, 2006. R. Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, pages 5–34, 2000. D. Modha and E. Masry. Memory-universal prediction of stationary random processes. Information Theory, IEEE Transactions on, 44(1):117–133, Jan 1998. M. Mohri and A. Muñoz Medina. New analysis and algorithm for learning with drifting distributions. In ALT, 2012. M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In NIPS, 2009. M. Mohri and A. Rostamizadeh. Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research, 11:789–814, 2010. V. Pestov. Predictive PAC learnability: A paradigm for learning from exchangeable input data. In GRC, 2010. A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In NIPS, 2010. A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Stochastic, constrained, and smoothed adversaries. In NIPS, 2011. A. Rakhlin, K. Sridharan, and A. Tewari. Sequential complexities and uniform martingale laws of large numbers.
Probability Theory and Related Fields, 2015. C. Shalizi and A. Kontorovitch. Predictive PAC learning and process decompositions. In NIPS, 2013. I. Steinwart and A. Christmann. Fast learning from non-i.i.d. observations. In NIPS, 2009. P. D. Tao and L. T. H. An. A D.C. optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476–505, 1998. M. Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Springer-Verlag New York, Inc., 1997. B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, 1994.
Equilibrated adaptive learning rates for non-convex optimization Yann N. Dauphin1 Université de Montréal dauphiya@iro.umontreal.ca Harm de Vries1 Université de Montréal devries@iro.umontreal.ca Yoshua Bengio Université de Montréal yoshua.bengio@umontreal.ca Abstract Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we examine how accounting for the presence of negative eigenvalues of the Hessian can help us design better-suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well as or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent. 1 Introduction One of the challenging aspects of deep learning is the optimization of the training criterion over millions of parameters: the difficulty stems both from the size of these neural networks and from the fact that the training objective is non-convex in the parameters. Stochastic gradient descent (SGD) has remained the method of choice for most practitioners of neural networks since the 1980s, in spite of a rich literature in numerical optimization. Although it is well known that first-order methods considerably slow down when the objective function is ill-conditioned, it remains unclear how to best exploit second-order structure when training deep networks.
Because of the large number of parameters, storing the full Hessian or even a low-rank approximation is not practical, making parameter-specific learning rates, i.e., diagonal preconditioners, one of the viable alternatives. One of the open questions is how to set the learning rate for SGD adaptively, both over time and for different parameters, and several methods have been proposed (see e.g. Schaul et al. (2013) and references therein). On the other hand, recent work (Dauphin et al., 2014; Choromanska et al., 2014) has brought theoretical and empirical evidence suggesting that local minima are with high probability not the main obstacle to optimizing large and deep neural networks, contrary to what was previously believed: instead, saddle points are the most prevalent critical points on the optimization path (except when we approach the value of the global minimum). These saddle points can considerably slow down training, mostly because the objective function tends to be ill-conditioned in the neighborhood of these saddle points. This raises the question: can we take advantage of the saddle structure to design good and computationally efficient preconditioners? In this paper, we bring these threads together. We first study diagonal preconditioners for saddle point problems, and find that the popular Jacobi preconditioner has unsuitable behavior in the presence of both positive and negative curvature.

1 Denotes first authors.

Figure 1: Contour lines of a saddle point (black point) problem for (a) the original function and (b) the transformed function (by the equilibration preconditioner). Gradient descent slowly escapes the saddle point in (a) because it oscillates along the high positive curvature direction. For the better conditioned function (b) these oscillations are reduced, and gradient descent makes faster progress.
Instead, we propose to use the so-called equilibration preconditioner and provide new theoretical justifications for its use in Section 4. We provide specific arguments why equilibration is better suited to non-convex optimization problems than the Jacobi preconditioner and empirically demonstrate this for small neural networks in Section 5. Using this new insight, we propose a new adaptive learning rate schedule for SGD, called ESGD, that is based on the equilibration preconditioner. In Section 7 we evaluate the proposed method on two deep autoencoder benchmarks. The results, presented in Section 8, confirm that ESGD performs as well as or better than RMSProp. In addition, we empirically find that the update direction of RMSProp is very similar to equilibrated update directions, which might explain its success in training deep neural networks. 2 Preconditioning It is well known that gradient descent makes slow progress when the curvature of the loss function is very different in separate directions. The negative gradient will be mostly pointing in directions of high curvature, and a small enough learning rate has to be chosen in order to avoid divergence in the largest positive curvature direction. As a consequence, the gradient step makes very little progress in small curvature directions, leading to the slow convergence often observed with first-order methods. Preconditioning can be thought of as a geometric solution to the problem of pathological curvature. It aims to locally transform the optimization landscape so that its curvature is equal in all directions. This is illustrated in Figure 1 for a two-dimensional saddle point problem using the equilibration preconditioner (Section 4). The gradient descent method slowly escapes the saddle point due to the typical oscillations along the high positive curvature direction. By transforming the function to be more equally curved, it is possible for gradient descent to move much faster.
More formally, we are interested in minimizing a function $f$ with parameters $\theta \in \mathbb{R}^N$. We introduce preconditioning by a linear change of variables $\hat\theta = D^{1/2}\theta$ with a non-singular matrix $D^{1/2}$. We use this change of variables to define a new function $\hat f$, parameterized by $\hat\theta$, that is equivalent to the original function $f$:

$$\hat f(\hat\theta) = f(D^{-1/2}\hat\theta) = f(\theta) \qquad (1)$$

The gradient and the Hessian of this new function $\hat f$ are (by the chain rule):

$$\nabla \hat f(\hat\theta) = D^{-1/2} \nabla f(\theta) \qquad (2)$$

$$\nabla^2 \hat f(\hat\theta) = D^{-1/2\,\top} H D^{-1/2} \quad \text{with } H = \nabla^2 f(\theta) \qquad (3)$$

A gradient descent iteration $\hat\theta_t = \hat\theta_{t-1} - \eta \nabla \hat f(\hat\theta)$ for the transformed function corresponds to

$$\theta_t = \theta_{t-1} - \eta D^{-1} \nabla f(\theta) \qquad (4)$$

for the original parameter $\theta$. In other words, by left-multiplying the original gradient with a positive definite matrix $D^{-1}$, we effectively apply gradient descent to the problem after a change of variables $\hat\theta = D^{1/2}\theta$. The curvature of this transformed function is given by the Hessian $D^{-1/2\,\top} H D^{-1/2}$, and we aim to seek a preconditioning matrix $D$ such that the new Hessian has equal curvature in all directions. One way to assess the success of $D$ in doing so is to compute the relative difference between the biggest and smallest curvature direction, which is measured by the condition number of the Hessian:

$$\kappa(H) = \frac{\sigma_{\max}(H)}{\sigma_{\min}(H)} \qquad (5)$$

where $\sigma_{\max}(H)$ and $\sigma_{\min}(H)$ denote respectively the biggest and smallest singular values of $H$ (which are the absolute values of the eigenvalues). It is important to stress that the condition number is defined for both definite and indefinite matrices. The famous Newton step corresponds to a change of variables $D^{1/2} = H^{1/2}$, which makes the new Hessian perfectly conditioned. However, such a change of variables only exists² when the Hessian $H$ is positive semi-definite. This is a problem for non-convex loss surfaces where the Hessian might be indefinite.
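As a small illustration of the gradient step in Eq. (4) and the condition number in Eq. (5), consider a hypothetical ill-conditioned quadratic (the matrix and learning rate below are illustrative, not from the paper):

```python
import numpy as np

# Toy quadratic f(theta) = 0.5 * theta^T H theta with an ill-conditioned H.
H = np.diag([100.0, 1.0])
grad = lambda theta: H @ theta

# Diagonal preconditioner D = |diag(H)|; step: theta <- theta - eta * D^{-1} grad (Eq. 4).
D_inv = np.diag(1.0 / np.abs(np.diag(H)))

theta = np.array([1.0, 1.0])
eta = 0.9
for _ in range(50):
    theta = theta - eta * D_inv @ grad(theta)

# The transformed Hessian D^{-1/2} H D^{-1/2} is perfectly conditioned here (Eq. 5).
D_inv_half = np.sqrt(D_inv)
svals = np.linalg.svd(D_inv_half @ H @ D_inv_half, compute_uv=False)
print(svals.max() / svals.min())   # kappa of the transformed Hessian is 1
print(np.linalg.norm(theta))       # fast convergence despite kappa(H) = 100
```

Without the preconditioner, a stable learning rate for the high-curvature coordinate would make progress along the low-curvature coordinate a hundred times slower.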
In fact, recent studies (Dauphin et al., 2014; Choromanska et al., 2014) have shown that saddle points dominate the optimization landscape of deep neural networks, implying that the Hessian is most likely indefinite. In such a setting, $H^{-1}$ is not a valid preconditioner, and applying Newton's step without modification would move the iterate towards the saddle point. Nevertheless, it is important to realize that the concept of preconditioning extends to non-convex problems, and reducing ill-conditioning around saddle points will often speed up gradient descent. At this point, it is natural to ask whether there exists a valid preconditioning matrix that always perfectly conditions the new Hessian. The answer is yes, and the corresponding preconditioning matrix is the inverse of the absolute Hessian

$$|H| = \sum_j |\lambda_j| \, q_j q_j^\top, \qquad (6)$$

which is obtained by an eigendecomposition of $H$ and taking the absolute values of the eigenvalues. See Proposition 1 in Appendix A for a proof that $|H|^{-1}$ is the only (up to a scalar³) symmetric positive definite preconditioning matrix that perfectly reduces the condition number. Practically, there are several computational drawbacks to using $|H|^{-1}$ as a preconditioner. Neural networks typically have millions of parameters, rendering it infeasible to store the Hessian ($O(N^2)$), perform an eigendecomposition ($O(N^3)$) and invert the matrix ($O(N^3)$). Except for the eigendecomposition, other full-rank preconditioners face the same computational issues. We therefore look for more computationally affordable preconditioners that remain efficient in reducing the condition number of indefinite matrices. In this paper, we focus on diagonal preconditioners, which can be stored, inverted and multiplied by a vector in linear time. When diagonal preconditioners are applied in an online optimization setting (i.e. in conjunction with SGD), they are often referred to as adaptive learning rates in the neural network literature.
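The absolute Hessian of Eq. (6) can be checked numerically on a small indefinite matrix (the 2×2 saddle Hessian below is a hypothetical example):

```python
import numpy as np

# |H| = sum_j |lambda_j| q_j q_j^T for an indefinite (saddle) Hessian.
H = np.diag([10.0, -0.1])
lam, Q = np.linalg.eigh(H)
H_abs = Q @ np.diag(np.abs(lam)) @ Q.T

# kappa(H) = sigma_max / sigma_min (Eq. 5), defined for indefinite matrices too.
svals = np.linalg.svd(H, compute_uv=False)
kappa_before = svals.max() / svals.min()

# Preconditioning with |H|^{-1/2} on both sides perfectly conditions the Hessian.
P = Q @ np.diag(np.abs(lam) ** -0.5) @ Q.T
svals_after = np.linalg.svd(P @ H @ P, compute_uv=False)
kappa_after = svals_after.max() / svals_after.min()

print(kappa_before, kappa_after)  # condition number drops from 100 to 1
```

Note that the transformed matrix is still indefinite (its eigenvalues become ±1); the preconditioner equalizes the magnitude of the curvature without hiding its sign.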
3 Related work The Jacobi preconditioner is one of the most well-known preconditioners. It is given by the diagonal of the Hessian, $D^J = |\mathrm{diag}(H)|$, where $|\cdot|$ is the element-wise absolute value. LeCun et al. (1998) propose an efficient approximation of the Jacobi preconditioner using the Gauss-Newton matrix. The Gauss-Newton matrix has been shown to approximate the Hessian under certain conditions (Pascanu & Bengio, 2014). The merit of this approach is that it is efficient, but it is not clear what is lost by the Gauss-Newton approximation. Moreover, the Jacobi preconditioner has not been found to be competitive for indefinite matrices (Bradley & Murray, 2011). This will be further explored for neural networks in Section 5.

2 A real square root $H^{1/2}$ only exists when $H$ is positive semi-definite.
3 It can be incorporated into the learning rate.

A recent revival of interest in adaptive learning rates was started by AdaGrad (Duchi et al., 2011). AdaGrad collects information from the gradients across several parameter updates to tune the learning rate. This gives the diagonal preconditioning matrix $D^A = \big(\sum_t \nabla f_{(t)}^2\big)^{-1/2}$, which relies on the sum of gradients $\nabla f_{(t)}$ at each timestep $t$. Duchi et al. (2011) rely strongly on convexity to justify this method, which makes the application to neural networks difficult from a theoretical perspective. RMSProp (Tieleman & Hinton, 2012) and AdaDelta (Zeiler, 2012) were follow-up methods introduced as practical adaptive learning rate methods to train large neural networks. Although RMSProp has been shown to work very well (Schaul et al., 2013), there is not much understanding of its success in practice. Preconditioning might be a good framework to get a better understanding of such adaptive learning rate methods. 4 Equilibration Equilibration is a preconditioning technique developed in the numerical mathematics community (Sluis, 1969).
When solving a linear system $Ax = b$ with Gaussian elimination, significant round-off errors can be introduced when small numbers are added to big numbers (Datta, 2010). To circumvent this issue, it is advised to properly scale the rows of the matrix before starting the elimination process. This step is often referred to as row equilibration, which formally scales the rows of $A$ to unit magnitude in some $p$-norm. Throughout the following we consider the 2-norm. Row equilibration is equivalent to multiplying $A$ from the left by the matrix $D^{-1}$ with $D^{-1}_{ii} = \frac{1}{\|A_{i,\cdot}\|_2}$. Instead of solving the original system, we now solve the equivalent left-preconditioned system $\hat A x = \hat b$ with $\hat A = D^{-1}A$ and $\hat b = D^{-1}b$. In this paper, we apply the equilibration preconditioner in the context of large-scale non-convex optimization. However, it is not straightforward how to apply the preconditioner. By choosing the preconditioning matrix

$$D^E_{ii} = \|H_{i,\cdot}\|_2, \qquad (7)$$

the Hessian of the transformed function $(D^E)^{-1/2\,\top} H (D^E)^{-1/2}$ (see Section 2) does not have equilibrated rows. Nevertheless, its spectrum (i.e. eigenvalues) is equal to the spectrum of the row-equilibrated Hessian $(D^E)^{-1}H$ and the column-equilibrated Hessian $H(D^E)^{-1}$. Consequently, if row equilibration successfully reduces the condition number, then the condition number of the transformed Hessian $(D^E)^{-1/2\,\top} H (D^E)^{-1/2}$ will be reduced by the same amount. The proof is given by Proposition 2. From the above observation, it seems more natural to seek a diagonal preconditioning matrix $D$ such that $D^{-1/2} H D^{-1/2}$ is row- and column-equilibrated. In Bradley & Murray (2011) an iterative stochastic procedure is proposed for finding such a matrix. However, we did not find it to work very well in an online optimization setting, and therefore stick to the original equilibration matrix $D^E$. Although the original motivation for row equilibration is to prevent round-off errors, our interest is in how well it is able to reduce the condition number.
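Row equilibration can be sketched on a hypothetical badly row-scaled matrix (matrix, scaling factors and seed below are illustrative):

```python
import numpy as np

# Row equilibration: scale each row of A to unit 2-norm, i.e. multiply from
# the left by D^{-1} with D_ii = ||A_{i,.}||_2.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) * np.array([1e3, 1.0, 1e-3, 1.0, 10.0])[:, None]

D_inv = np.diag(1.0 / np.linalg.norm(A, axis=1))
A_eq = D_inv @ A                # every row of A_eq now has unit 2-norm

print(np.linalg.cond(A))        # inflated by the disparate row scales
print(np.linalg.cond(A_eq))     # equilibration removes that scaling
```

Here the ill-conditioning is entirely due to row scaling, so equilibration reduces the condition number dramatically; in general the reduction depends on the matrix.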
Intuitively, ill-conditioning can be a result of matrix elements that are of completely different orders of magnitude. Scaling the rows to have equal norm could therefore significantly reduce the condition number. Although we are not aware of any proofs that row equilibration improves the condition number, there are theoretical results that motivate its use. In Sluis (1969) it is shown that the condition number of a row-equilibrated matrix is at most a factor $\sqrt{N}$ worse than that obtained with the diagonal preconditioning matrix that optimally reduces the condition number. Note that the bound grows sublinearly in the dimension of the matrix, and can be quite loose for the extremely large matrices we consider. In this paper, we provide an alternative justification using the following upper bound on the condition number from Guggenheimer et al. (1995):

$$\kappa(H) < \frac{2}{|\det H|} \left( \frac{\|H\|_F}{\sqrt{N}} \right)^N \qquad (8)$$

The proof in Guggenheimer et al. (1995) provides useful insight into when we can expect the upper bound to be tight: when all singular values, except for the smallest, are roughly equal. We prove in Proposition 4 that row equilibration improves this upper bound by a factor $\det(D^E)^{-1} \left( \|H\|_F / \sqrt{N} \right)^N$. It is easy to see that the bound is reduced more when the norms of the rows are more varied. Note that the proof can be easily extended to column equilibration, and to combined row and column equilibration. In contrast, we cannot prove that the Jacobi preconditioner improves the upper bound, which provides another justification for using the equilibration preconditioner.

Figure 2: Histogram of the condition number reduction (lower is better) for random Hessians in a (a) convex and (b) non-convex setting. Equilibration clearly outperforms the other methods in the non-convex case.

A deterministic implementation to calculate the 2-norm of all matrix rows needs to access all matrix elements. This is prohibitive for very large Hessians that cannot even be stored.
We therefore resort to a matrix-free estimator of the equilibration matrix that only uses matrix-vector multiplications of the form $(Hv)^2$, where the square is element-wise and $v_i \sim \mathcal{N}(0, 1)$.⁴ As shown by Bradley & Murray (2011), this estimator is unbiased, i.e.

$$\|H_{i,\cdot}\|_2^2 = \mathbb{E}\big[(Hv)^2\big]_i. \qquad (9)$$

Since multiplying the Hessian by a vector can be done efficiently without ever computing the Hessian, this method can be used efficiently in the context of neural networks via the R-operator (Schraudolph, 2002). The R-operator computation only uses gradient-like computations and costs about the same as two backpropagations. 5 Equilibrated learning rates are well suited to non-convex problems In this section, we demonstrate that equilibrated learning rates are well suited to non-convex optimization, particularly compared to the Jacobi preconditioner. First, the diagonal equilibration matrix can be seen as an approximation to the diagonal of the absolute Hessian. Reformulating the equilibration matrix as

$$D^E_{ii} = \|H_{i,\cdot}\|_2 = \sqrt{\mathrm{diag}(H^2)_i} \qquad (10)$$

reveals an interesting connection: changing the order of the square root and the diagonal would give us the diagonal of $|H|$. In other words, the equilibration preconditioner can be thought of as the Jacobi preconditioner of the absolute Hessian. Recall that the inverse of the absolute Hessian, $|H|^{-1}$, is the only symmetric positive definite matrix that reduces the condition number to 1 (the proof of which can be found in Proposition 1 in the Appendix). It can be considered the gold standard, if we do not take computational costs into account. For indefinite matrices, the diagonal of the Hessian $H$ and the diagonal of the absolute Hessian $|H|$ will be very different, and therefore the behavior of the Jacobi and equilibration preconditioners will also be very different. In fact, we argue that the Jacobi preconditioner can cause divergence because it underestimates curvature.
We can measure the amount of curvature in a given direction with the Rayleigh quotient

$$R(H, v) = \frac{v^\top H v}{v^\top v}. \qquad (11)$$

4 Any random variable $v_i$ with zero mean and unit variance can be used.

Algorithm 1 Equilibrated Gradient Descent
Require: function $f(\theta)$ to minimize, learning rate $\epsilon$ and damping factor $\lambda$
  $D \leftarrow 0$
  for $i = 1 \to K$ do
    $v \sim \mathcal{N}(0, 1)$
    $D \leftarrow D + (Hv)^2$
    $\theta \leftarrow \theta - \epsilon \, \nabla f(\theta) \, / \, (\sqrt{D/i} + \lambda)$
  end for

This quotient is large when there is a lot of curvature in the direction $v$. The Rayleigh quotient can be decomposed into $R(H, v) = \sum_{j}^{N} \lambda_j v^\top q_j q_j^\top v$, where $\lambda_j$ and $q_j$ are the eigenvalues and eigenvectors of $H$. It is easy to show that each element of the Jacobi matrix is given by $D^J_{ii} = |R(H, I_{\cdot,i})|^{-1} = \big|\sum_{j}^{N} \lambda_j q_{j,i}^2\big|^{-1}$. An element $D^J_{ii}$ is thus the inverse of a sum over the eigenvalues $\lambda_j$. Negative eigenvalues will reduce the total sum and make the step much larger than it should be. Specifically, imagine a diagonal element for which there are large positive and large negative curvature eigendirections. The contributions of these directions will cancel each other, and a large step will be taken in that direction. However, the function will probably also change fast in that direction (because of the high curvature), so the step is too large for the local quadratic approximation we have considered. Equilibration methods never diverge this way because they do not underestimate curvature. In equilibration, the curvature information is given by the Rayleigh quotient of the squared Hessian, $D^E_{ii} = \big(R(H^2, I_{\cdot,i})\big)^{-1/2} = \big(\sum_j \lambda_j^2 q_{j,i}^2\big)^{-1/2}$. Note that all the terms are positive and so will not cancel. Jensen's inequality then gives us an upper bound

$$D^E_{ii} \le |H|^{-1}_{ii}, \qquad (12)$$

which ensures that the equilibrated adaptive learning rate will in fact be more conservative than the Jacobi preconditioner of the absolute Hessian (see Proposition 2 for a proof).
This strengthens the links between equilibration and the absolute Hessian and may explain why equilibration has been found to work well for indefinite matrices (Bradley & Murray, 2011). We have verified this claim experimentally for random neural networks. The neural networks have one hidden layer of 100 sigmoid units, with zero-mean unit-variance Gaussian-distributed inputs, weights and biases. The output layer is a softmax with the targets generated randomly. We also give results for similarly sampled logistic regressions. We compare the reductions of the condition number between the methods. Figure 2 gives the histograms of the condition number reductions. We obtained these graphs by sampling a hundred networks and computing the ratio of the condition number before and after preconditioning. On the left we have the convex case, and on the right the non-convex case. We clearly observe that the Jacobi and equilibration methods are closely matched in the convex case. However, in the non-convex case equilibration significantly outperforms the other methods. Note that the poor performance of the Gauss-Newton diagonal only means that its success in optimization is not due to preconditioning. As we will see in Section 8, these results extend to practical high-dimensional problems. 6 Implementation We propose to build a scalable algorithm for preconditioning neural networks using equilibration. This method estimates the curvature information $\sqrt{\mathrm{diag}(H^2)}$ with the unbiased estimator described in Equation 9. It is prohibitive to compute the full expectation at each learning step. Instead, we simply update a running average at each learning step, much like RMSProp. The pseudo-code is given in Algorithm 1. The additional costs are one product with the Hessian, which is roughly the cost of two additional gradient calculations, and sampling a random Gaussian vector. In practice we greatly amortize the cost by only performing the update every 20 iterations.
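The running-average update of Algorithm 1 can be sketched on a toy quadratic, where the Hessian-vector product $Hv$ is available in closed form (the problem, seed and hyper-parameters below are illustrative; in a network the product would come from the R-operator):

```python
import numpy as np

# Equilibrated SGD (Algorithm 1) on f(theta) = 0.5 * theta^T H theta.
H = np.diag([100.0, 1.0])          # ill-conditioned toy Hessian
grad = lambda theta: H @ theta
hvp = lambda v: H @ v              # stand-in for an R-operator Hessian-vector product

rng = np.random.default_rng(0)
theta = np.array([1.0, 1.0])
eps, lam = 0.5, 1e-4               # learning rate and damping factor
D = np.zeros(2)
for i in range(1, 201):
    v = rng.normal(size=2)
    D += hvp(v) ** 2               # accumulates an estimate of diag(H^2)
    theta = theta - eps * grad(theta) / (np.sqrt(D / i) + lam)

print(np.sqrt(D / 200))            # approaches sqrt(diag(H^2)) = (100, 1)
print(np.linalg.norm(theta))       # converges despite the poor conditioning
```

For simplicity this sketch updates $D$ every iteration; as described above, performing the $(Hv)^2$ update only every 20 iterations amortizes its cost.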
This brings the cost of equilibration very close to that of regular SGD. The only added hyper-parameter is the damping $\lambda$. We find that a good setting for this hyper-parameter is $\lambda = 10^{-4}$, and it is robust over the tasks we considered.

Figure 3: Learning curves for deep auto-encoders on (a) MNIST and (b) CURVES, comparing the different preconditioned SGD methods.

In the interest of comparison, we also evaluate SGD preconditioned with the Jacobi preconditioner. This allows us to verify the claim that the equilibration preconditioner is better suited for non-convex problems. Bekas et al. (2007) show that the diagonal of a matrix can be recovered by the expression

$$\mathrm{diag}(H) = \mathbb{E}[v \odot Hv] \qquad (13)$$

where the $v$ are random vectors with entries $\pm 1$ and $\odot$ is the element-wise product. We use this estimator to precondition SGD in the same fashion as described in Algorithm 1. The variance of this estimator for an element $i$ is $\sum_j H_{ji}^2 - H_{ii}^2$, while the method in Martens et al. (2012) has variance $H_{ii}^2$. Therefore, the optimal method depends on the situation. The computational complexity is the same as that of ESGD. 7 Experimental setup We consider the challenging optimization benchmark of training very deep neural networks. Following Martens (2010); Sutskever et al. (2013); Vinyals & Povey (2011), we train deep auto-encoders which have to reconstruct their input under the constraint that one layer is very low-dimensional. The networks have up to 11 layers of sigmoidal hidden units and have on the order of a million parameters. We use the standard network architectures described in Martens (2010) for the MNIST and CURVES datasets. These datasets have 784 input dimensions and 60,000 and 20,000 examples, respectively. We tune the hyper-parameters of the optimization methods with random search. We sampled the learning rate from a logarithmic scale between [0.1, 0.01] for stochastic gradient descent (SGD) and equilibrated SGD (ESGD).
The learning rates for RMSProp and the Jacobi preconditioner are sampled from [0.001, 0.0001]. The damping factor $\lambda$ used before dividing the gradient is taken from $\{10^{-4}, 10^{-5}, 10^{-6}\}$, while the exponential decay rate of RMSProp is taken from $\{0.9, 0.95\}$. The networks are initialized using the sparse initialization described in Martens (2010). The minibatch size for all methods is 200. We do not make use of momentum in these experiments in order to evaluate the strength of each preconditioning method on its own. Similarly, we do not use any regularization because we are only concerned with optimization performance. For these reasons, we report training error in our graphs. The networks and algorithms were implemented using Theano (Bastien et al., 2012), simplifying the use of the R-operator in Jacobi and equilibrated SGD. All experiments were run on GPUs. 8 Results 8.1 Comparison of preconditioned SGD methods We compare the different adaptive learning rates for training deep auto-encoders in Figure 3. We do not use momentum, to better isolate the performance of each method. We believe this is important because RMSProp has been found not to mix well with momentum (Tieleman & Hinton, 2012). Thus the results presented are not state-of-the-art, but they do reach the state of the art when momentum is used.

Figure 4: Cosine distance between the diagonals estimated by each method during the training of a deep auto-encoder on (a) MNIST and (b) CURVES. We can see that RMSProp estimates a quantity close to the equilibration matrix.

Our results on MNIST show that the proposed ESGD method significantly outperforms both RMSProp and Jacobi SGD. The difference in performance becomes especially notable after 250 epochs. Sutskever et al. (2013) reported a training MSE of 2.1 for SGD without momentum, and we can see all adaptive learning rates improve on this result, with equilibration reaching 0.86.
We observe a convergence speed that is approximately three times faster than that of our baseline SGD. ESGD also performs best on CURVES, although the difference from RMSProp and Jacobi SGD is not as significant as on MNIST. We show in the next section that the smaller performance gap is due to the different preconditioners behaving in the same way on this dataset. 8.2 Measuring the similarity of the methods We train deep autoencoders with RMSProp and measure, every 10 epochs, the equilibration matrix $D^E = \sqrt{\mathrm{diag}(H^2)}$ and the Jacobi matrix $D^J = \sqrt{\mathrm{diag}(H)^2}$, using 100 samples of the unbiased estimators described in Equations 9 and 13, respectively. We then measure the pairwise differences between these quantities in terms of the cosine distance $\mathrm{cosine}(u, v) = 1 - \frac{u \cdot v}{\|u\| \, \|v\|}$, which measures the angle between two vectors and ignores their norms. Figure 4 shows the resulting cosine distances over training on MNIST and CURVES. For the latter dataset we observe that RMSProp remains remarkably close (around 0.05) to equilibration, while it is significantly different from Jacobi (on the order of 0.2). The same order of difference is observed when we compare equilibration and Jacobi, confirming the observation of Section 5 that both quantities are rather different in practice. For the MNIST dataset we see that RMSProp fairly well estimates $\sqrt{\mathrm{diag}(H^2)}$ in the beginning of training, but then quickly diverges. After 1000 epochs this difference has exceeded the difference between Jacobi and equilibration, and RMSProp no longer matches equilibration. Interestingly, at the same time that RMSProp starts diverging, we observe in Figure 3 that the performance of the optimizer also drops in comparison to ESGD. This may suggest that the success of RMSProp as an optimizer is tied to its similarity to the equilibration matrix. 9 Conclusion We have studied diagonal preconditioners for saddle point problems, i.e. indefinite matrices.
We have shown by theoretical and empirical arguments that the equilibration preconditioner is comparatively better suited to this kind of problem than the Jacobi preconditioner. Using this insight, we have proposed a novel adaptive learning rate schedule for non-convex optimization problems, called ESGD, which empirically outperformed RMSProp on two competitive deep autoencoder benchmarks. Interestingly, we have found that the update direction of RMSProp was in practice very similar to the equilibrated update direction, which might provide more insight into why RMSProp has been so successful in training deep neural networks. More research is required to confirm these results. However, we hope that our findings will contribute to a better understanding of SGD's adaptive learning rate schedules for large-scale, non-convex optimization problems. References Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012. Bekas, Costas, Kokiopoulou, Effrosyni, and Saad, Yousef. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214–1229, 2007. Bradley, Andrew M. and Murray, Walter. Matrix-free approximate equilibration. arXiv preprint arXiv:1110.2805, 2011. Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gérard Ben, and LeCun, Yann. The loss surface of multilayer networks, 2014. Datta, Biswa Nath. Numerical Linear Algebra and Applications, Second Edition. SIAM, 2nd edition, 2010. ISBN 0898716853, 9780898716856. Dauphin, Yann, Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, Ganguli, Surya, and Bengio, Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS'2014, 2014. Duchi, John, Hazan, Elad, and Singer, Yoram.
Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011. Guggenheimer, Heinrich W., Edelman, Alan S., and Johnson, Charles R. A simple estimate of the condition number of a linear system. The College Mathematics Journal, 26(1):2–5, 1995. ISSN 07468342. URL http://www.jstor.org/stable/2687283. LeCun, Yann, Bottou, Léon, Orr, Genevieve B., and Müller, Klaus-Robert. Efficient backprop. In Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer Verlag, 1998. Martens, J. Deep learning via Hessian-free optimization. In ICML'2010, pp. 735–742, 2010. Martens, James, Sutskever, Ilya, and Swersky, Kevin. Estimating the Hessian by back-propagating curvature. arXiv preprint arXiv:1206.6464, 2012. Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. In International Conference on Learning Representations 2014 (Conference Track), April 2014. Schaul, Tom, Antonoglou, Ioannis, and Silver, David. Unit tests for stochastic optimization. arXiv preprint arXiv:1312.6055, 2013. Schraudolph, Nicol N. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723–1738, 2002. Sluis, A. van der. Condition numbers and equilibration of matrices. Numerische Mathematik, 14(1):14–23, 1969. Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initialization and momentum in deep learning. In ICML, 2013. Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Vinyals, Oriol and Povey, Daniel. Krylov subspace descent for deep learning. arXiv preprint arXiv:1111.4259, 2011. Zeiler, Matthew D. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. URL http://arxiv.org/abs/1212.5701.
Optimal Linear Estimation under Unknown Nonlinear Transform Xinyang Yi The University of Texas at Austin yixy@utexas.edu Zhaoran Wang Princeton University zhaoran@princeton.edu Constantine Caramanis The University of Texas at Austin constantine@utexas.edu Han Liu Princeton University hanliu@princeton.edu Abstract Linear regression studies the problem of estimating a model parameter $\beta^* \in \mathbb{R}^p$ from $n$ observations $\{(y_i, x_i)\}_{i=1}^n$ generated by the linear model $y_i = \langle x_i, \beta^* \rangle + \epsilon_i$. We consider a significant generalization in which the relationship between $\langle x_i, \beta^* \rangle$ and $y_i$ is noisy, quantized to a single bit, potentially nonlinear, noninvertible, as well as unknown. This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing. We propose a novel spectral-based estimation procedure and show that we can recover $\beta^*$ in settings (i.e., classes of link function $f$) where previous algorithms fail. In general, our algorithm requires only very mild restrictions on the (unknown) functional relationship between $y_i$ and $\langle x_i, \beta^* \rangle$. We also consider the high-dimensional setting where $\beta^*$ is sparse, and introduce a two-stage nonconvex framework that addresses estimation challenges in high-dimensional regimes where $p \gg n$. For a broad class of link functions between $\langle x_i, \beta^* \rangle$ and $y_i$, we establish minimax lower bounds that demonstrate the optimality of our estimators in both the classical and high-dimensional regimes. 1 Introduction We consider a generalization of the one-bit quantized regression problem, where we seek to recover the regression coefficient $\beta^* \in \mathbb{R}^p$ from one-bit measurements. Specifically, suppose that $X$ is a random vector in $\mathbb{R}^p$ and $Y$ is a binary random variable taking values in $\{-1, 1\}$. We assume the conditional distribution of $Y$ given $X$ takes the form

$$\mathbb{P}(Y = 1 \mid X = x) = \frac{1}{2} f(\langle x, \beta^* \rangle) + \frac{1}{2}, \qquad (1.1)$$

where $f : \mathbb{R} \to [-1, 1]$ is called the link function. We aim to estimate $\beta^*$ from $n$ i.i.d.
observations {(y_i, x_i)}_{i=1}^n of the pair (Y, X). In particular, we assume the link function f is unknown. Without any loss of generality, we take β∗ to be on the unit sphere S^{p−1}, since its magnitude can always be incorporated into the link function f. The model in (1.1) is simple but general. Under specific choices of the link function f, (1.1) immediately leads to many practical models in machine learning and signal processing, including logistic regression and one-bit compressed sensing. In settings where the link function is assumed to be known, a popular estimation procedure is to calculate an estimator that minimizes a certain loss function. However, for particular link functions, this approach involves minimizing a nonconvex objective function whose global minimizer is in general intractable to obtain. Furthermore, it is difficult or even impossible to know the link function in practice, and a poor choice of link function may result in inaccurate parameter estimation and high prediction error. We take a more general approach, and in particular, target the setting where f is unknown. We propose an algorithm that can estimate the parameter β∗ in the absence of prior knowledge of the link function f. As our results make precise, our algorithm succeeds as long as the function f satisfies a single moment condition. As we demonstrate, this moment condition is only a mild restriction on f. In particular, our methods and theory are widely applicable even to settings where f is non-smooth, e.g., f(z) = sign(z), or noninvertible, e.g., f(z) = sin(z). In particular, as we show in §2, our restrictions on f are sufficiently flexible that our results provide a unified framework encompassing a broad range of problems, including logistic regression, one-bit compressed sensing, one-bit phase retrieval, as well as their robust extensions. We use these important examples to illustrate our results, and discuss them at several points throughout the paper.
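The observation model (1.1) is easy to simulate, which is useful for sanity-checking the estimators developed later. Below is a minimal sketch; the helper name `sample_model` and the use of NumPy are our own choices for illustration, not part of the paper:

```python
import numpy as np

def sample_model(n, beta, f, rng):
    """Draw n i.i.d. pairs (y_i, x_i) from model (1.1):
    P(Y = 1 | X = x) = f(<x, beta>)/2 + 1/2, with X ~ N(0, I_p)."""
    X = rng.standard_normal((n, beta.shape[0]))
    prob_one = 0.5 * f(X @ beta) + 0.5          # f maps into [-1, 1]
    y = np.where(rng.random(n) < prob_one, 1.0, -1.0)
    return y, X

# Two of the example link functions from Section 2 (intercept zeta = 0):
logistic_link = lambda z: (np.exp(z) - 1.0) / (np.exp(z) + 1.0)
sign_link = np.sign                              # noiseless one-bit CS

rng = np.random.default_rng(0)
beta_star = np.array([0.6, 0.8])                 # a unit vector on S^{p-1}
y, X = sample_model(1000, beta_star, sign_link, rng)
```

With the sign link the draw is deterministic given X, reproducing the noiseless one-bit compressed sensing model (2.2).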
Main contributions. The key conceptual contribution of this work is a novel use of the method of moments. Rather than considering moments of the covariate, X, and the response variable, Y, we look at moments of differences of covariates, and differences of response variables. Such a simple yet critical observation enables everything that follows and leads to our spectral-based procedure. We also make two theoretical contributions. First, we simultaneously establish the statistical and computational rates of convergence of the proposed spectral algorithm. We consider both the low dimensional setting, where the number of samples exceeds the dimension, and the high dimensional setting, where the dimensionality may (greatly) exceed the number of samples. In both settings, our proposed algorithm achieves the same statistical rate of convergence as that of linear regression applied to data generated by the linear model without quantization. Second, we provide minimax lower bounds for the statistical rate of convergence, and thereby establish the optimality of our procedure within a broad model class. In the low dimensional setting, our results obtain the optimal rate with the optimal sample complexity. In the high dimensional setting, our algorithm requires estimating a sparse eigenvector, and thus our sample complexity coincides with what is believed to be the best achievable via polynomial time methods [2]; the error rate itself, however, is information-theoretically optimal. We discuss this further in §3.4. Related works. Our model in (1.1) is close to the single-index model (SIM) in statistics. In the SIM, we assume that the response-covariate pair (Y, X) is determined by Y = f(⟨X, β∗⟩) + W (1.2) with unknown link function f and noise W. Our setting is a special case of this, as we restrict Y to be a binary random variable. The single-index model is a classical topic, and the literature is too extensive to review exhaustively.
We therefore outline the pieces of work most relevant to our setting and our results. For estimating β∗ in (1.2), a feasible approach is M-estimation [8, 9, 12], in which the unknown link function f is jointly estimated using nonparametric estimators. Although these M-estimators have been shown to be consistent, they are not computationally efficient since they involve solving a nonconvex optimization problem. Another approach to estimating β∗ is the average derivative estimator (ADE; [24]). Further improvements of ADE are considered in [13, 22]. ADE and its related methods require that the link function f is at least differentiable, which excludes important models such as one-bit compressed sensing with f(z) = sign(z). Beyond estimating β∗, the works [15, 16] focus on iteratively estimating a function f and vector β that are good for prediction, and they attempt to control the generalization error. Their algorithms are based on isotonic regression, and are therefore only applicable when the link function is monotonic and satisfies Lipschitz constraints. The work discussed above focuses on the low dimensional setting where p ≪ n. Another related line of work is sufficient dimension reduction, where the goal is to find a subspace U of the input space such that the response Y only depends on the projection U⊤X. The single-index model and our problem can be regarded as special cases of this problem, as we are primarily interested in recovering a one-dimensional subspace. Due to space limits, we refer readers to the long version of this paper for a detailed survey [29]. In the high dimensional regime with p ≫ n, where β∗ has some structure (for us this means sparsity), we note there is some recent progress [1] on estimating f via PAC-Bayesian methods. In the special case when f is a linear function, sparse linear regression has attracted extensive study over the years. The recent work by Plan et al. [21] is closest to our setting.
They consider the setting of normal covariates, X ∼ N(0, I_p), and propose a marginal regression estimator for estimating β∗ that, like our approach, requires no prior knowledge about f. Their proposed algorithm relies on the assumption that E_{z∼N(0,1)}[z f(z)] ≠ 0, and hence cannot work for link functions that are even. As we describe below, our algorithm is based on a novel moment-based estimator and avoids requiring such a condition, thus allowing us to handle even link functions under a very mild moment restriction, which we describe in detail below. Generally, the work in [21] requires different conditions, and thus, beyond the discussion above, is not directly comparable to the work here. In cases where both approaches apply, the results are minimax optimal. 2 Example models In this section, we discuss several popular (and important) models in machine learning and signal processing that fall into our general model (1.1) under specific link functions. Variants of these models have been studied extensively in the recent literature. These examples trace through the paper, and we use them to illustrate the details of our algorithms and results. Logistic regression. In logistic regression (LR), we assume that P(Y = 1 | X = x) = 1/(1 + exp(−⟨x, β∗⟩ − ζ)), where ζ is the intercept. The link function corresponds to f(z) = (exp(z + ζ) − 1)/(exp(z + ζ) + 1). One robust variant of LR is called flipped logistic regression, where we assume that the labels Y generated from the standard LR model are flipped with probability p_e, i.e., P(Y = 1 | X = x) = (1 − p_e)/(1 + exp(−⟨x, β∗⟩ − ζ)) + p_e/(1 + exp(⟨x, β∗⟩ + ζ)). This reduces to the standard LR model when p_e = 0. For flipped LR, the link function f can be written as f(z) = (exp(z + ζ) − 1)/(exp(z + ζ) + 1) + 2p_e · (1 − exp(z + ζ))/(1 + exp(z + ζ)). (2.1) Flipped LR has been studied by [19, 25]. In both papers, estimating β∗ is based on minimizing some surrogate loss function involving a certain tuning parameter connected to p_e.
However, p_e is unknown in practice. In contrast to their approaches, our method does not hinge on the unknown parameter p_e. Our approach has the same formulation for both standard and flipped LR, and thus unifies the two models. One-bit compressed sensing. One-bit compressed sensing (CS) aims at recovering sparse signals from quantized linear measurements (see e.g., [11, 20]). In detail, we define B₀(s, p) := {β ∈ R^p : |supp(β)| ≤ s} as the set of sparse vectors in R^p with at most s nonzero elements. We assume (Y, X) ∈ {−1, 1} × R^p satisfies Y = sign(⟨X, β∗⟩), (2.2) where β∗ ∈ B₀(s, p). In this paper, we also consider its robust version with noise ϵ, i.e., Y = sign(⟨X, β∗⟩ + ϵ). Assuming ϵ ∼ N(0, σ²), the link function f of robust one-bit CS corresponds to f(z) = 2 ∫₀^∞ (1/(√(2π) σ)) exp(−(u − z)²/(2σ²)) du − 1. (2.3) Note that (2.2) also corresponds to the probit regression model without the sparsity constraint on β∗. Throughout the paper, we do not distinguish between the two model names; model (2.2) is referred to as one-bit compressed sensing even in the case where β∗ is not sparse. One-bit phase retrieval. The goal of phase retrieval (e.g., [5]) is to recover signals from linear measurements with phase information erased, i.e., the pair (Y, X) ∈ R × R^p is determined by the equation Y = |⟨X, β∗⟩|. Analogous to one-bit compressed sensing, we consider a new model named one-bit phase retrieval, where the linear measurement with phase information erased is quantized to one bit. In detail, the pair (Y, X) ∈ {−1, 1} × R^p is linked through Y = sign(|⟨X, β∗⟩| − θ), where θ is the quantization threshold. Compared with one-bit compressed sensing, this problem is more difficult because Y depends on β∗ only through the magnitude of ⟨X, β∗⟩ instead of the value of ⟨X, β∗⟩. It is also more difficult than the original phase retrieval problem due to the additional quantization. In our general model, the link function thus corresponds to f(z) = sign(|z| − θ).
(2.4) It is worth noting that, unlike the previous models, here f is neither odd nor monotonic. 3 Main results We now turn to our algorithms for estimating β∗ in both low and high dimensional settings. We first introduce a second moment estimator based on pairwise differences. We prove that the eigenstructure of the constructed second moment estimator encodes the information of β∗. We then propose algorithms to estimate β∗ based upon this second moment estimator. In the high dimensional setting where β∗ is sparse, computing the top eigenvector of our pairwise-difference matrix reduces to computing a sparse eigenvector. Beyond algorithms, we discuss the minimax lower bound in §3.5. We present simulation results in §3.6. 3.1 Conditions for success We now introduce several key quantities, which allow us to state precisely the conditions required for the success of our algorithm. Definition 3.1. For any (unknown) link function f, define the quantity φ(f) as follows: φ(f) := µ₁² − µ₀µ₂ + µ₀², (3.1) where µ₀, µ₁ and µ₂ are given by µ_k := E[f(Z)Z^k], k = 0, 1, 2, . . . , (3.2) with Z ∼ N(0, 1). As we discuss in detail below, the key condition for the success of our algorithm is φ(f) ≠ 0. As we show below, this is a relatively mild condition, and in particular, it is satisfied by the three examples introduced in §2. For odd and monotonic f, φ(f) > 0 unless f(z) = 0 for all z, in which case no algorithm is able to recover β∗. For even f, we have µ₁ = 0; thus φ(f) ≠ 0 if and only if µ₀ ≠ µ₂. 3.2 Second moment estimator We describe a novel moment estimator that enables our algorithm. Let {(y_i, x_i)}_{i=1}^n be n i.i.d. observations of (Y, X). Assuming without loss of generality that n is even, we consider the following key transformation Δy_i := y_{2i} − y_{2i−1}, Δx_i := x_{2i} − x_{2i−1}, (3.3) for i = 1, 2, . . . , n/2. Our procedure is based on the following second moment M := (2/n) Σ_{i=1}^{n/2} Δy_i² Δx_i Δx_i^⊤ ∈ R^{p×p}. (3.4) The intuition behind this second moment is as follows.
By (1.1), the variation of X along the direction β∗ has the largest impact on the variation of ⟨X, β∗⟩. Thus, the variation of Y directly depends on the variation of X along β∗. Consequently, {(Δy_i, Δx_i)}_{i=1}^{n/2} encodes the information of such a dependency relationship. In the following, we make this intuition more rigorous by analyzing the eigenstructure of E(M) and its relationship with β∗. Lemma 3.2. For β∗ ∈ S^{p−1}, we assume that (Y, X) ∈ {−1, 1} × R^p satisfies (1.1). For X ∼ N(0, I_p), we have E(M) = 4φ(f) · β∗β∗⊤ + 4(1 − µ₀²) · I_p, (3.5) where µ₀ and φ(f) are defined in (3.2) and (3.1). Lemma 3.2 proves that β∗ is the leading eigenvector of E(M) as long as the eigengap φ(f) is positive. If instead we have φ(f) < 0, we can use a related moment estimator with analogous properties. To this end, define M′ := (2/n) Σ_{i=1}^{n/2} (y_{2i} + y_{2i−1})² Δx_i Δx_i^⊤. In parallel to Lemma 3.2, we have a similar result for M′, as stated below. Corollary 3.3. Under the setting of Lemma 3.2, E(M′) = −4φ(f) · β∗β∗⊤ + 4(1 + µ₀²) · I_p. Corollary 3.3 therefore shows that when φ(f) < 0, we can construct another second moment estimator M′ such that β∗ is the leading eigenvector of E(M′). As discussed above, this is precisely the setting for one-bit phase retrieval when the quantization threshold satisfies θ < θ_m. For simplicity of the discussion, hereafter we assume that φ(f) > 0 and focus on the second moment estimator M defined in (3.4). A natural question to ask is whether φ(f) ≠ 0 holds for specific models. The following lemma demonstrates exactly this for the example models introduced in §2. Lemma 3.4. (a) Consider flipped logistic regression where f is given in (2.1). Setting the intercept to ζ = 0, we have φ(f) ≳ (1 − 2p_e)². (b) For robust one-bit compressed sensing where f is given in (2.3), we have φ(f) ≳ min{((1 − σ²)/(1 + σ²))², C′σ⁴/(1 + σ³)²}. (c) For one-bit phase retrieval where f is given in (2.4), let θ_m be the median of |Z| for Z ∼ N(0, 1), i.e., P(|Z| ≥ θ_m) = 1/2.
We have |φ(f)| ≳ θ|θ − θ_m| exp(−θ²) and sign[φ(f)] = sign(θ − θ_m). We thus obtain φ(f) > 0 for θ > θ_m. 3.3 Low dimensional recovery We consider estimating β∗ in the classical (low dimensional) setting where p ≪ n. Based on the second moment estimator M defined in (3.4), estimating β∗ amounts to solving a noisy eigenvalue problem. We solve this by a simple iterative algorithm: provided an initial vector β₀ ∈ S^{p−1} (which may be chosen at random), we perform power iterations as shown in Algorithm 1. Theorem 3.5. We assume X ∼ N(0, I_p) and (Y, X) follows (1.1). Let {(y_i, x_i)}_{i=1}^n be n i.i.d. samples of the response-input pair (Y, X). Consider any link function f in (1.1) with µ₀ and φ(f) defined in (3.2) and (3.1), and with φ(f) > 0.¹ We let γ := (1/2)·[(1 − µ₀²)/(φ(f) + 1 − µ₀²) + 1], and ξ := [γφ(f) + (γ − 1)(1 − µ₀²)] / [(1 + γ)(φ(f) + 1 − µ₀²)]. (3.6) There exist constants C_i such that when n ≥ C₁ p/ξ², for Algorithm 1 we have, with probability at least 1 − 2 exp(−C₂ p), ∥β_t − β∗∥₂ ≤ C₃ · [(φ(f) + 1 − µ₀²)/φ(f)] · √(p/n) (statistical error) + √((1 − α²)/α²) · γ^t (optimization error), for t = 1, . . . , T_max. (3.7) Here α = ⟨β₀, β̂⟩, where β̂ is the leading eigenvector of M. Note that by (3.6) we have γ ∈ (0, 1). Thus, the optimization error term in (3.7) decreases at a geometric rate to zero as t increases. For T_max sufficiently large that the statistical error and optimization error terms in (3.7) are of the same order, we have ∥β_{T_max} − β∗∥₂ ≲ √(p/n). This statistical rate of convergence matches the rate of estimating a p-dimensional vector in linear regression without any quantization, and will later be shown to be optimal. This result shows that the lack of prior knowledge of the link function and the information loss from quantization do not keep our procedure from obtaining the optimal statistical rate. 3.4 High dimensional recovery Next we consider the high dimensional setting where p ≫ n and β∗ is sparse, i.e., β∗ ∈ S^{p−1} ∩ B₀(s, p), with s the support size.
Although this high dimensional estimation problem is closely related to the well-studied sparse PCA problem, the existing works [4, 6, 17, 23, 27, 28, 31, 32] on sparse PCA do not provide a direct solution to our problem. In particular, they either lack statistical guarantees on the convergence rate of the obtained estimator [6, 23, 28] or rely on properties of the sample covariance matrix of Gaussian data [4, 17], which are violated by the second moment estimator defined in (3.4). For the sample covariance matrix of sub-Gaussian data, [27] prove that the convex relaxation proposed by [7] achieves a suboptimal s·√(log p/n) rate of convergence. Yuan and Zhang [31] propose the truncated power method, and show that it attains the optimal √(s log p/n) rate locally; that is, it exhibits this rate of convergence only in a neighborhood of the true solution where ⟨β₀, β∗⟩ > C for some constant C > 0. It is well understood that for a random initialization on S^{p−1}, such a condition fails with probability going to one as p → ∞. (¹Recall that we have an analogous treatment, and thus analogous results, for φ(f) < 0.)

Algorithm 1: Low dimensional recovery. Input: {(y_i, x_i)}_{i=1}^n, number of iterations T_max. 1: Second moment estimation: construct M from the samples according to (3.4). 2: Initialization: choose a random vector β₀ ∈ S^{p−1}. 3: For t = 1, 2, . . . , T_max do 4: β_t ← M · β_{t−1}; 5: β_t ← β_t/∥β_t∥₂; 6: end for. Output: β_{T_max}.

Algorithm 2: Sparse recovery. Input: {(y_i, x_i)}_{i=1}^n, number of iterations T_max, regularization parameter ρ, sparsity level ŝ. 1: Second moment estimation: construct M from the samples according to (3.4). 2: Initialization: 3: Π₀ ← argmin_{Π ∈ R^{p×p}} {−⟨M, Π⟩ + ρ∥Π∥_{1,1} : Tr(Π) = 1, 0 ⪯ Π ⪯ I}; (3.8) 4: β₀ ← leading eigenvector of Π₀; 5: β₀ ← trunc(β₀, ŝ); 6: β₀ ← β₀/∥β₀∥₂. 7: For t = 1, 2, . . . , T_max do 8: β_t ← trunc(M · β_{t−1}, ŝ); 9: β_t ← β_t/∥β_t∥₂; 10: end for. Output: β_{T_max}.

Instead, we propose a two-stage procedure for estimating β∗ in our setting.
In the first stage, we adapt the convex relaxation proposed by [27] and use it as an initialization step, in order to obtain a good enough initial point satisfying the condition ⟨β₀, β∗⟩ > C. The convex optimization problem can be easily solved by the alternating direction method of multipliers (ADMM) algorithm (see [3, 27] for details). We then adapt the truncated power method. This procedure is illustrated in Algorithm 2. In particular, we define the truncation operator trunc(·, ·) as [trunc(β, s)]_j = 1(j ∈ S)β_j, where S is the index set of the top s largest values of |β_j|. The initialization phase of our algorithm requires O(s² log p) samples (see below for more precise details) to succeed. As the work in [2] suggests, it is unlikely that a polynomial time algorithm can avoid such a dependence. However, once we are near the solution, as we show, this two-step procedure achieves the optimal error rate of √(s log p/n). Theorem 3.6. Let κ := [4(1 − µ₀²) + φ(f)] / [4(1 − µ₀²) + 3φ(f)] < 1, (3.9) and let the minimum sample size be n_min := C · s² log p · [(1 − µ₀²) + φ(f)]² / (φ(f)² · min{κ(1 − κ^{1/2})/2, κ/8}). (3.10) Suppose ρ = C[φ(f) + (1 − µ₀²)]√(log p/n) with a sufficiently large constant C, where φ(f) and µ₀ are specified in (3.1) and (3.2). Meanwhile, assume the sparsity parameter ŝ in Algorithm 2 is set to ŝ = C′′ max{1/(κ^{−1/2} − 1)², 1} · s. For n ≥ n_min with n_min defined in (3.10), we have ∥β_t − β∗∥₂ ≤ C · [(φ(f) + (1 − µ₀²))^{5/2}(1 − µ₀²)^{1/2}/φ(f)³] · √(s log p/n) (statistical error) + κ^t · √(min{(1 − κ^{1/2})/2, 1/8}) (optimization error) (3.11) with high probability, where κ is defined in (3.9). The first term on the right-hand side of (3.11) is the statistical error, while the second term gives the optimization error. Note that the optimization error decays at a geometric rate since κ < 1. For T_max sufficiently large, we have ∥β_{T_max} − β∗∥₂ ≲ √(s log p/n).
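Putting the transformation (3.3), the second moment (3.4), and the two iteration schemes together, a compact sketch of the estimators might look as follows. This is plain NumPy written for illustration; in particular, the convex ADMM initialization of (3.8) is omitted, so the sparse branch below only mirrors the truncated power iterations of Algorithm 2's second stage, started at random:

```python
import numpy as np

def second_moment(y, X):
    """Pairwise-difference second moment M of (3.3)-(3.4); n assumed even."""
    n = y.shape[0]
    dy = y[0::2] - y[1::2]                    # Delta y_i (sign is irrelevant)
    dX = X[0::2] - X[1::2]                    # Delta x_i, one row per pair
    return (2.0 / n) * (dX * (dy ** 2)[:, None]).T @ dX

def trunc(beta, s):
    """Keep the s largest-magnitude entries of beta, zero out the rest."""
    out = np.zeros_like(beta)
    keep = np.argsort(-np.abs(beta))[:s]
    out[keep] = beta[keep]
    return out

def power_recovery(M, t_max=100, s=None, rng=None):
    """Power iterations of Algorithm 1; with s set, the truncated
    iterations from Algorithm 2 (random start instead of (3.8))."""
    if rng is None:
        rng = np.random.default_rng(0)
    beta = rng.standard_normal(M.shape[0])
    beta /= np.linalg.norm(beta)
    for _ in range(t_max):
        beta = M @ beta
        if s is not None:
            beta = trunc(beta, s)
        beta /= np.linalg.norm(beta)
    return beta

# Quick check on synthetic noiseless one-bit data y = sign(<x, beta*>).
rng = np.random.default_rng(1)
p, n = 8, 4000
beta_star = np.ones(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = np.sign(X @ beta_star)
beta_hat = power_recovery(second_moment(y, X))
```

The estimate is identifiable only up to sign, so alignment is naturally measured by |⟨β̂, β∗⟩|.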
In the sequel, we show that the right-hand side gives the optimal statistical rate of convergence for a broad model class in the high dimensional setting with p ≫ n. 3.5 Minimax lower bound We establish the minimax lower bound for estimating β∗ in the model defined in (1.1). In the sequel, we consider the family of link functions that are Lipschitz continuous and bounded away from ±1. Formally, for any m ∈ (0, 1) and L > 0, we define F(m, L) := {f : |f(z)| ≤ 1 − m, |f(z) − f(z′)| ≤ L|z − z′|, for all z, z′ ∈ R}. (3.12) Let X^n_f := {(y_i, x_i)}_{i=1}^n be n i.i.d. realizations of (Y, X), where X follows N(0, I_p) and Y satisfies (1.1) with link function f. Correspondingly, we denote the estimator of β∗ ∈ B by β̂(X^n_f), where B is the domain of β∗. We define the minimax risk for estimating β∗ as R(n, m, L, B) := inf_{f∈F(m,L)} inf_{β̂(X^n_f)} sup_{β∗∈B} E∥β̂(X^n_f) − β∗∥₂. (3.13) In the above definition, we take the infimum not only over all possible estimators β̂, but also over all possible link functions in F(m, L). For a fixed f, our formulation recovers the standard definition of minimax risk [30]. By taking the infimum over all link functions, our formulation characterizes the minimax lower bound under the least challenging f in F(m, L). In the sequel we prove that our procedure attains such a minimax lower bound for the least challenging f given any unknown link function in F(m, L). That is to say, even when f is unknown, our estimation procedure is as accurate as in the setting where we are provided the least challenging f, and the achieved accuracy is not improvable due to the information-theoretic limit. The following theorem establishes the minimax lower bound in the high dimensional setting. Theorem 3.7. Let B = S^{p−1} ∩ B₀(s, p). We assume that n > m(1 − m)/(2L²) · [Cs log(p/s)/2 − log 2]. For any s ∈ (0, p/4], the minimax risk defined in (3.13) satisfies R(n, m, L, B) ≥ C′ · [√(m(1 − m))/L] · √(s log(p/s)/n). Here C and C′ are absolute constants, while m and L are defined in (3.12).
Theorem 3.7 establishes the minimax optimality of the statistical rate attained by our procedure for p ≫ n and s-sparse β∗. In particular, for arbitrary f ∈ F(m, L) ∩ {f : φ(f) > 0}, the estimator β̂ attained by Algorithm 2 is minimax-optimal in the sense that its √(s log p/n) rate of convergence is not improvable, even when information on the link function f is available. For general β∗ ∈ R^p, one can show that the best possible convergence rate is Ω(√(m(1 − m)p/n)/L), by setting s = p/4 in Theorem 3.7. It is worth noting that our lower bound becomes trivial for m = 0, i.e., when there exists some z such that |f(z)| = 1. One example is noiseless one-bit compressed sensing, for which we have f(z) = sign(z). In fact, for noiseless one-bit compressed sensing, the √(s log p/n) rate is not optimal. For example, Jacques et al. [14] provide an algorithm (with exponential running time) that achieves the rate s log p/n. Understanding such a rate transition phenomenon for link functions with zero margin, i.e., m = 0 in (3.12), is an interesting future direction. 3.6 Numerical results We now turn to the numerical results that support our theory. For the three models introduced in §2, we apply Algorithm 1 and Algorithm 2 to perform parameter estimation in the classical and high dimensional regimes. Our simulations are based on synthetic data. For classical recovery, β∗ is randomly chosen from S^{p−1}; for sparse recovery, we set β∗_j = s^{−1/2}·1(j ∈ S) for all j ∈ [p], where S is a random index subset of [p] of size s. In Figure 1, as predicted by Theorem 3.5, we observe that the same √(p/n) leads to nearly identical estimation error. Figure 2 demonstrates similar results for the predicted rate √(s log p/n) of sparse recovery, and thus validates Theorem 3.6.
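The quantities driving these guarantees, µ_k and φ(f) from Definition 3.1, can also be evaluated numerically for each example link function. A small sketch using Gauss-Hermite quadrature follows; the quadrature degree and the choice ζ = 0 are our own conveniences, not prescribed by the paper:

```python
import numpy as np

def phi(f, deg=80):
    """phi(f) = mu_1^2 - mu_0*mu_2 + mu_0^2 from (3.1), with
    mu_k = E[f(Z) Z^k], Z ~ N(0, 1), via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(deg)
    z = np.sqrt(2.0) * x                      # change of variables to N(0, 1)
    mu = [np.sum(w * f(z) * z ** k) / np.sqrt(np.pi) for k in (0, 1, 2)]
    return mu[1] ** 2 - mu[0] * mu[2] + mu[0] ** 2

# Link functions of the three example models (intercept zeta = 0):
flipped_lr = lambda z, pe=0.1: (1.0 - 2.0 * pe) * np.tanh(z / 2.0)  # (2.1)
one_bit_cs = np.sign                          # (2.2), the noiseless limit
phase_ret = lambda z, theta=1.0: np.sign(np.abs(z) - theta)         # (2.4)
```

For the sign link, µ₀ = µ₂ = 0 and µ₁ = √(2/π), so φ(f) = 2/π ≈ 0.64; for the phase retrieval link with θ = 1 > θ_m ≈ 0.67, φ(f) comes out positive, consistent with Lemma 3.4(c).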
Figure 1: Estimation error of low dimensional recovery, plotted against √(p/n) for p = 10, 20, 40. (a) Flipped logistic regression, p_e = 0.1. (b) One-bit compressed sensing, δ² = 0.1. (c) One-bit phase retrieval, θ = 1.
Figure 2: Estimation error of sparse recovery, plotted against √(s log p/n) for p ∈ {100, 200} and s ∈ {5, 10}. (a) Flipped logistic regression, p_e = 0.1. (b) One-bit compressed sensing, δ² = 0.1. (c) One-bit phase retrieval, θ = 1.
4 Discussion Sample complexity. In the high dimensional regime, while our algorithm achieves the optimal convergence rate, the sample complexity it needs is Ω(s² log p). A natural question is whether this can be reduced to O(s log p). We note that breaking the s² log p barrier is challenging. Consider the simpler problem of sparse phase retrieval, where y_i = |⟨x_i, β∗⟩|: despite a fairly extensive body of literature, the state-of-the-art efficient algorithms (i.e., with polynomial running time) for recovering sparse β∗ require sample complexity Ω(s² log p) [10]. It remains open whether consistent sparse recovery with O(s log p) samples is possible for any polynomial time algorithm. Acknowledgment XY and CC would like to acknowledge NSF grants 1056028, 1302435 and 1116955. This research was also partially supported by the U.S. Department of Transportation through the Data-Supported Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center.
HL is grateful for the support of NSF CAREER Award DMS1454377, NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. ZW was partially supported by an MSR PhD fellowship while this work was done.
References
[1] Alquier, P. and Biau, G. (2013). Sparse single-index model. Journal of Machine Learning Research, 14 243–280.
[2] Berthet, Q. and Rigollet, P. (2013). Complexity theoretic lower bounds for sparse principal component detection. In Conference on Learning Theory.
[3] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3 1–122.
[4] Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation. Annals of Statistics, 41 3074–3110.
[5] Candès, E. J., Eldar, Y. C., Strohmer, T. and Voroninski, V. (2013). Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6 199–225.
[6] d'Aspremont, A., Bach, F. and El Ghaoui, L. (2008). Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9 1269–1294.
[7] d'Aspremont, A., El Ghaoui, L., Jordan, M. I. and Lanckriet, G. R. (2007). A direct formulation for sparse PCA using semidefinite programming. SIAM Review 434–448.
[8] Delecroix, M., Hristache, M. and Patilea, V. (2000). Optimal smoothing in semiparametric index approximation of regression functions. Tech. rep., Interdisciplinary Research Project: Quantification and Simulation of Economic Processes.
[9] Delecroix, M., Hristache, M. and Patilea, V. (2006). On semiparametric M-estimation in single-index regression. Journal of Statistical Planning and Inference, 136 730–769.
[10] Eldar, Y. C. and Mendelson, S. (2014). Phase retrieval: Stability and recovery guarantees. Applied and Computational Harmonic Analysis, 36 473–494.
[11] Gopi, S., Netrapalli, P., Jain, P. and Nori, A. (2013). One-bit compressed sensing: Provable support and vector recovery. In International Conference on Machine Learning.
[12] Hardle, W., Hall, P. and Ichimura, H. (1993). Optimal smoothing in single-index models. Annals of Statistics, 21 157–178.
[13] Hristache, M., Juditsky, A. and Spokoiny, V. (2001). Direct estimation of the index coefficient in a single-index model. Annals of Statistics, 29 595–623.
[14] Jacques, L., Laska, J. N., Boufounos, P. T. and Baraniuk, R. G. (2011). Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. arXiv preprint arXiv:1104.3160.
[15] Kakade, S. M., Kanade, V., Shamir, O. and Kalai, A. (2011). Efficient learning of generalized linear and single index models with isotonic regression. In Advances in Neural Information Processing Systems.
[16] Kalai, A. T. and Sastry, R. (2009). The isotron algorithm: High-dimensional isotonic regression. In Conference on Learning Theory.
[17] Ma, Z. (2013). Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41 772–801.
[18] Massart, P. and Picard, J. (2007). Concentration inequalities and model selection, vol. 1896. Springer.
[19] Natarajan, N., Dhillon, I., Ravikumar, P. and Tewari, A. (2013). Learning with noisy labels. In Advances in Neural Information Processing Systems.
[20] Plan, Y. and Vershynin, R. (2013). One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66 1275–1297.
[21] Plan, Y., Vershynin, R. and Yudovina, E. (2014). High-dimensional estimation with geometric constraints. arXiv preprint arXiv:1404.3749.
[22] Powell, J. L., Stock, J. H. and Stoker, T. M. (1989). Semiparametric estimation of index coefficients. Econometrica, 57 1403–1430.
[23] Shen, H. and Huang, J. (2008). Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99 1015–1034.
[24] Stoker, T. M. (1986). Consistent estimation of scaled coefficients. Econometrica, 54 1461–1481.
[25] Tibshirani, J. and Manning, C. D. (2013). Robust logistic regression using shift parameters. arXiv preprint arXiv:1305.4987.
[26] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
[27] Vu, V. Q., Cho, J., Lei, J. and Rohe, K. (2013). Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In Advances in Neural Information Processing Systems.
[28] Witten, D., Tibshirani, R. and Hastie, T. (2009). A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10 515–534.
[29] Yi, X., Wang, Z., Caramanis, C. and Liu, H. (2015). Optimal linear estimation under unknown nonlinear transform. arXiv preprint arXiv:1505.03257.
[30] Yu, B. (1997). Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam. Springer, 423–435.
[31] Yuan, X.-T. and Zhang, T. (2013). Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14 899–925.
[32] Zou, H., Hastie, T. and Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15 265–286.
2015
116
5,610
Analysis of Robust PCA via Local Incoherence Huishuai Zhang Department of EECS Syracuse University Syracuse, NY 13244 hzhan23@syr.edu Yi Zhou Department of EECS Syracuse University Syracuse, NY 13244 yzhou35@syr.edu Yingbin Liang Department of EECS Syracuse University Syracuse, NY 13244 yliang06@syr.edu Abstract We investigate the robust PCA problem of decomposing an observed matrix into the sum of a low-rank matrix and a sparse error matrix via the convex program Principal Component Pursuit (PCP). In contrast to previous studies that assume the support of the error matrix is generated by uniform Bernoulli sampling, we allow non-uniform sampling, i.e., entries of the low-rank matrix are corrupted by errors with unequal probabilities. We characterize conditions on the error corruption of each individual entry based on the local incoherence of the low-rank matrix, under which correct matrix decomposition by PCP is guaranteed. Such a refined analysis of robust PCA captures how well each entry of the low-rank matrix combats error corruption. In order to deal with non-uniform error corruption, our technical proof introduces a new weighted norm and develops/exploits the concentration properties that such a norm satisfies. 1 Introduction We consider the problem of robust Principal Component Analysis (PCA). Suppose an n-by-n¹ data matrix M can be decomposed into a low-rank matrix L and a sparse matrix S as M = L + S. (1) Robust PCA aims to find L and S with M given. This problem has been extensively studied recently. In [1, 2], Principal Component Pursuit (PCP) has been proposed to solve the robust PCA problem via the following convex program PCP: minimize_{L,S} ∥L∥∗ + λ∥S∥₁ subject to M = L + S, (2) where ∥·∥∗ denotes the nuclear norm, i.e., the sum of singular values, and ∥·∥₁ denotes the ℓ₁ norm, i.e., the sum of the absolute values of all entries.
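The PCP program (2) is often solved by an augmented Lagrangian / ADMM scheme that alternates singular value thresholding on L with entrywise soft thresholding on S. The sketch below follows that common recipe; the step size mu, the default λ = 1/√max(n₁, n₂), and the stopping rule are standard defaults from the broader literature, not choices made in this paper:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(A, tau):
    """Entrywise soft thresholding: prox of tau * l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Minimal ADMM sketch for PCP: min ||L||_* + lam*||S||_1, L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    if mu is None:
        mu = 0.25 * M.size / np.abs(M).sum()  # common heuristic penalty
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy decomposition: rank-2 L0 plus a few sparse spikes S0.
rng = np.random.default_rng(2)
n = 30
L0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
S0 = np.zeros((n, n))
support = rng.choice(n * n, size=45, replace=False)   # ~5% corrupted
S0.flat[support] = 3.0 * rng.standard_normal(45)
L_hat, S_hat = pcp(L0 + S0)
```

On such incoherent, lightly corrupted toys the iteration typically separates the two components to high accuracy, which is exactly the regime the guarantees in [1, 2] describe.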
It was shown in [1, 2] that PCP successfully recovers L and S if the two matrices are distinguishable from each other in their properties, i.e., L is not sparse and S is not low-rank. One important quantity that determines the similarity of L to a sparse matrix is the incoherence of L, which measures how the column and row spaces of L are aligned with the canonical basis and with each other. Namely, suppose that L is a rank-r matrix with SVD L = UΣV*, where Σ is an r×r diagonal matrix with the singular values as its diagonal entries, U is an n×r matrix whose columns are the left singular vectors of L, V is an n×r matrix whose columns are the right singular vectors of L, and V* denotes the transpose of V. (In this paper, we focus on square matrices for simplicity. Our results can be extended to rectangular matrices in a standard way.) The incoherence of L is measured by µ = max{µ0, µ1}, where µ0 and µ1 are defined via

‖U*e_i‖ ≤ √(µ0 r / n),  ‖V*e_j‖ ≤ √(µ0 r / n),  for all i, j = 1, ..., n, (3)
‖UV*‖_∞ ≤ √(µ1 r / n²). (4)

Previous studies suggest that the incoherence crucially determines conditions on the sparsity of S in order for PCP to succeed. For example, Theorem 2 in [3] explicitly shows that a matrix L with larger µ can tolerate only a smaller error density to guarantee correct matrix decomposition by PCP. In all previous work on robust PCA, the incoherence is defined as a maximum over all column and row spaces of L as in (3) and (4), which can be viewed as a global parameter for the entire matrix L; consequently, the characterization of the admissible error density is based on such global (and in fact worst-case) incoherence. In fact, each (i, j) entry of the low-rank matrix L can be associated with a local incoherence parameter µ_ij, which is less than or equal to the global parameter µ, and then the allowable entry-wise error density can be potentially higher than that characterized based on the global incoherence.
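For intuition, the global incoherence parameters in (3)-(4) can be read off a rank-r SVD as the smallest constants making those bounds hold. A numerical sketch (our own illustration, not code from the paper):

```python
import numpy as np

def incoherence(L, r):
    # smallest mu0, mu1 that make the bounds (3) and (4) hold for a rank-r matrix L
    n = L.shape[0]
    U, s, Vt = np.linalg.svd(L)
    U, V = U[:, :r], Vt[:r, :].T
    mu0 = (n / r) * max(np.max(np.sum(U**2, axis=1)),   # max_i ||U^T e_i||^2
                        np.max(np.sum(V**2, axis=1)))   # max_j ||V^T e_j||^2
    mu1 = (n**2 / r) * np.max((U @ V.T)**2)             # from ||UV^T||_inf
    return max(mu0, mu1), mu0, mu1
```

For example, a block-diagonal matrix of two all-ones 4×4 blocks (a rank-2 cluster matrix, cf. Section 2.3) has µ0 = 1 and µ1 = r = 2.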
Thus, the total number of errors that the matrix can tolerate in robust PCA can be much higher than that characterized based on the global incoherence, provided the errors are distributed accordingly. Motivated by this observation, this paper aims to characterize conditions on the error corruption of each entry of the low-rank matrix, based on the corresponding local incoherence parameter, which guarantee success of PCP. Such conditions indicate how robust each individual entry of L is to error corruption. Naturally, the error corruption probability is allowed to be non-uniform over the matrix (i.e., the locations of non-zero entries in S are sampled non-uniformly). We note that the notion of local incoherence was first introduced in [4] for studying the matrix completion problem, in which local incoherence determines the local sampling density needed to guarantee correct matrix completion. Here, local incoherence plays a similar role, and determines the maximum allowable error density at each entry to guarantee correct matrix decomposition. The difference lies in that local incoherence here depends on both localized µ0 and µ1, rather than only on localized µ0 as in matrix completion, due to the additional difficulty of robust PCA, in which the locations of error-corrupted entries are unknown, as pointed out in [1, 3].

Our Contribution. In this paper, we investigate a more general robust PCA problem, in which entries of the low-rank matrix are corrupted by non-uniformly distributed Bernoulli errors. We characterize the conditions that guarantee correct matrix decomposition by PCP. Our result identifies the local incoherence (defined by localized µ0 and µ1 for each entry of the low-rank matrix) as the quantity determining the condition that each local Bernoulli error corruption parameter should satisfy.
Our results provide the following useful understanding of the robust PCA problem:

• Our characterization provides a localized (and hence more refined) view of robust PCA, and determines how robustly each entry of the low-rank matrix combats error corruption.
• Our results suggest that the total number of errors that the low-rank matrix can tolerate depends on how the errors are distributed over the matrix.
• Via cluster problems, our results provide evidence that µ1 is necessary in characterizing conditions for robust PCA.

In order to deal with non-uniform error corruption, our technical proof introduces a new weighted norm, denoted ℓ_{w(∞)}, which involves the information of both localized µ0 and µ1 and is hence different from the weighted norms introduced in [4] for matrix completion. Thus, our proof necessarily involves new technical developments associated with this norm.

Related Work. A closely related but different problem from robust PCA is matrix completion, in which a low-rank matrix is partially observed and is to be completed. This problem was previously studied in [5-8], where it was shown that a rank-r n-by-n matrix is provably recoverable by convex optimization with as few as Θ(max{µ0, µ1} n r log² n) observed entries. (Here f(n) ∈ Θ(g(n)) means k1·g(n) ≤ f(n) ≤ k2·g(n) for some positive k1, k2.) Later, it was shown in [4] that µ1 does not affect the sample complexity of matrix completion, and hence Θ(µ0 n r log² n) observed entries suffice to guarantee correct matrix completion. It was further shown in [9] that a coherent low-rank matrix (i.e., one with large µ0) can be recovered with Θ(n r log² n) observations as long as the sampling probability is proportional to the leverage score (i.e., localized µ0). Our problem can be viewed as its counterpart in robust PCA, with the difference that the local incoherence in our problem depends on both localized µ0 and µ1.
Robust PCA aims to decompose an observed matrix into the sum of a low-rank matrix and a sparse matrix. In [2, 10], robust PCA with a fixed error matrix was studied, and it was shown that the maximum number of errors in any row or column should be bounded from above in order to guarantee correct decomposition by PCP. Robust PCA with a random error matrix was investigated in a number of studies. It was shown in [1] that the decomposition can be exact with high probability if the percentage of corrupted entries is small enough, under the assumptions that the low-rank matrix is incoherent and the support set of the sparse matrix is uniformly distributed. It was further shown in [11] that if the signs of the nonzero entries in the sparse matrix are randomly chosen, then an adjusted convex optimization can produce exact decomposition even when the percentage of corrupted entries approaches one (i.e., the error is dense). The problem was further studied in [1, 3, 12] for the case where the error-corrupted low-rank matrix is only partially observed. Our work provides a more refined (i.e., entry-wise) view of robust PCA with a random error matrix, aiming to understand how local incoherence affects the susceptibility of each matrix entry to error corruption.

2 Model and Main Result

2.1 Problem Statement

We consider the robust PCA problem introduced in Section 1. Namely, suppose an n-by-n matrix M can be decomposed into two parts: M = L + S, where L is a low-rank matrix and S is a sparse (error) matrix. We assume that the rank of L is r, and that the support of S is selected randomly but non-uniformly. More specifically, let Ω denote the support of S, so that Ω ⊆ [n]×[n], where [n] denotes the set {1, 2, ..., n}. The event {(i, j) ∈ Ω} is independent across different pairs (i, j), and

P((i, j) ∈ Ω) = ρ_ij, (5)

where ρ_ij represents the probability that the (i, j)-entry of L is corrupted by error. Hence, Ω is determined by Bernoulli sampling with non-uniform probabilities.
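The non-uniform Bernoulli support model (5) is straightforward to simulate; a small sketch (our own, for illustration):

```python
import numpy as np

def sample_support(rho, rng):
    # Omega: entry (i, j) is included with probability rho[i, j], independently (eq. (5))
    return rng.random(rho.shape) < rho
```

With a constant probability matrix this reduces to the uniform Bernoulli model of earlier work, and the empirical fraction of sampled entries concentrates around that constant.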
We study both the random sign and fixed sign models for S. For the fixed sign model, we assume the signs of the nonzero entries in S are arbitrary and fixed, whereas for the random sign model, we assume that the signs of the nonzero entries in S are independent random variables, taking the values +1 or −1 with equal probability:

[sgn(S)]_ij = +1 with probability ρ_ij/2; 0 with probability 1 − ρ_ij; −1 with probability ρ_ij/2. (6)

In this paper, our goal is to characterize conditions on ρ_ij that guarantee correct recovery of L and S from the observation of M. We now list some notation used throughout the paper. A matrix X is associated with five norms: ‖X‖_F denotes the Frobenius norm, ‖X‖_* denotes the nuclear norm (i.e., the sum of singular values), ‖X‖ denotes the spectral norm (i.e., the largest singular value), and ‖X‖_1 and ‖X‖_∞ denote, respectively, the ℓ_1 and ℓ_∞ norms of the long vector obtained by stacking the entries of X. The inner product between two matrices is defined as ⟨X, Y⟩ := trace(X*Y). For a linear operator A acting on the space of matrices, ‖A‖ denotes the operator norm ‖A‖ = sup_{‖X‖_F = 1} ‖A(X)‖_F.

2.2 Main Theorems

We adopt PCP to solve the robust PCA problem. We define the following local incoherence parameters, which play a central role in our characterization of the conditions on the entry-wise ρ_ij:

µ0_ij := (n / 2r) (‖U*e_i‖² + ‖V*e_j‖²),  µ1_ij := n² ([UV*]_ij)² / r, (7)
µ_ij := max{µ0_ij, µ1_ij}. (8)

It is clear that µ0_ij ≤ µ0 and µ1_ij ≤ µ1 for all i, j = 1, ..., n. We note that although max_{i,j} µ_ij ≥ 1, some µ_ij may take values as small as zero. We first consider the robust PCA problem under the random sign model introduced in Section 2.1. The following theorem characterizes the condition that guarantees correct recovery by PCP.

Theorem 1. Consider the robust PCA problem under the random sign model.
If

1 − ρ_ij ≥ max{ C0 √(µ_ij r / n) · log n, 1/n³ }

for some sufficiently large constant C0 and for all i, j ∈ [n], then PCP with λ = 1/(32√(n log n)) yields correct matrix recovery with probability at least 1 − c n^{−10} for some constant c.

We note that the term 1/n³ is introduced to justify the dual certificate conditions in the proof (see Appendix A.2). We further note that the condition in Theorem 1 implies C0 √(µr/n) log n ≤ 1, which is an essential bound required in our proof and coincides with the conditions in previous studies [1, 12]. Although we set λ = 1/(32√(n log n)) for the sake of the proof, in practice λ is often determined via cross validation. The above theorem suggests that the local incoherence parameter µ_ij is closely related to how robust each entry of L is to error corruption in matrix recovery: an entry with smaller µ_ij tolerates a larger error density ρ_ij. This is consistent with the result in [4] for matrix completion, in which a smaller local incoherence parameter requires a lower local sampling rate. The difference lies in that here both µ0_ij and µ1_ij play roles in µ_ij, whereas only µ0_ij matters in matrix completion. The necessity of µ1_ij for robust PCA is further demonstrated in Section 2.3 via an example. Theorem 1 also provides a more refined view of robust PCA in the dense error regime, in which the error corruption probability approaches one. This interesting regime was previously studied in [3, 11]. In [11], it is argued that PCP with adaptive λ yields exact recovery even when the error corruption probability approaches one, provided the errors take random signs and the dimension n is sufficiently large. In [3], it is further shown that PCP with a fixed λ also yields exact recovery, and the scaling behavior of the error corruption probability is characterized.
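The local incoherence parameters (7)-(8) appearing in this condition can be evaluated for every entry at once; a vectorized sketch (our own illustration, taking the n×r singular factors U and V as input):

```python
import numpy as np

def local_incoherence(U, V):
    # entrywise mu_ij = max{mu0_ij, mu1_ij} of eqs. (7)-(8)
    n, r = U.shape
    row_u = np.sum(U**2, axis=1)                      # ||U^* e_i||^2 for each row i
    row_v = np.sum(V**2, axis=1)                      # ||V^* e_j||^2 for each row j
    mu0 = (n / (2 * r)) * (row_u[:, None] + row_v[None, :])
    mu1 = (n**2 / r) * (U @ V.T)**2
    return np.maximum(mu0, mu1)
```

On the equal-size cluster matrix discussed in Section 2.3, this reproduces µ_ij = r on the diagonal blocks and µ_ij = 1 on the off-diagonal blocks.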
The above Theorem 1 further provides the scaling behavior of the local entry-wise error corruption probability ρ_ij as it approaches one, and captures how this scaling behavior depends on the local incoherence parameters µ_ij. Such a result implies that the robustness of PCP depends not only on the error density but also on how the errors are distributed over the matrix with regard to µ_ij. We next consider the robust PCA problem under the fixed sign model introduced in Section 2.1. In this case, the non-zero entries of the error matrix S can take arbitrary and fixed values, and only the locations of the non-zero entries are random.

Theorem 2. Consider the robust PCA problem under the fixed sign model. If

1 − 2ρ_ij ≥ max{ C0 √(µ_ij r / n) · log n, 1/n³ }

for some sufficiently large constant C0 and for all i, j ∈ [n], then PCP with λ = 1/(32√(n log n)) yields correct recovery with probability at least 1 − c n^{−10} for some constant c.

Theorem 2 follows from Theorem 1 by adapting the elimination and derandomization arguments [1, Section 2.2] as follows. Let ρ be the matrix with (i, j)-entry equal to ρ_ij. If PCP yields exact recovery with a certain probability for the random sign model with parameter 2ρ, then it also yields exact recovery with at least the same probability for the fixed sign model with the locations of the non-zero entries sampled by the Bernoulli model with parameter ρ. We now compare Theorem 2 for robust PCA with non-uniform error corruption to Theorem 1.1 in [1] for robust PCA with uniform error corruption. Clearly, if we set ρ_ij = ρ for all i, j ∈ [n], then the two models coincide. It can then be easily checked that the conditions √(µr/n) log n ≤ ρ_r and ρ ≤ ρ_s in Theorem 1.1 of [1] imply the conditions in Theorem 2. Thus, Theorem 2 provides a more relaxed condition than Theorem 1.1 in [1].
Such a relaxed condition should be attributed to the new golfing scheme introduced in [3, 12], and this paper provides a more refined view of robust PCA by further taking advantage of this golfing scheme to analyze local conditions. More importantly, Theorem 2 characterizes the relationship between the local incoherence parameters and the local error corruption probabilities, which implies that different areas of the low-rank matrix have different abilities to resist errors: a more incoherent area (i.e., one with smaller µ_ij) can tolerate more errors. Thus, Theorem 2 illustrates the following interesting fact. Whether PCP yields correct recovery depends not only on the total number of errors but also on how the errors are distributed. If more errors are placed in more incoherent areas (i.e., those with smaller µ_ij), then more errors in total can be tolerated; if the errors are distributed in the opposite manner, then only a smaller number of errors can be tolerated.

2.3 Implications for Cluster Matrices

In this subsection, we further illustrate our result for the case where the low-rank matrix is a cluster matrix. Although robust PCA and even more sophisticated approaches have been applied to clustering problems, e.g., [13-15], our perspective here is to demonstrate how local incoherence affects entry-wise robustness to error corruption, which has not been illustrated in previous studies. Suppose there are n elements to be clustered. We use a cluster matrix L to represent the clustering relationship of these n elements, with L_ij = 1 if elements i and j are in the same cluster and L_ij = 0 otherwise. Thus, with an appropriate ordering of the elements, L is a block-diagonal matrix whose diagonal blocks contain all '1's and whose off-diagonal blocks contain all '0's. Hence, the rank r of L equals the number of clusters, which is typically small compared to n. Suppose the entries are corrupted by errors that flip entries from one to zero or from zero to one.
This can be thought of as adding a (possibly sparse) error matrix S to L, so that the observed matrix is L + S. Then PCP can be applied to recover the cluster matrix L. We first consider an example with clusters of equal size n/r. We set n = 600 and r = 4 (i.e., four equal-size clusters). We apply errors to diagonal-block entries and off-diagonal-block entries with probabilities ρ_d and ρ_od, respectively. In Fig. 1a, we plot the recovery accuracy of PCP for each pair (ρ_od, ρ_d). It is clear from the figure that failure occurs for larger ρ_od than ρ_d, which implies that off-diagonal blocks are more robust to errors than diagonal blocks. This can be explained by Theorem 2 as follows. For a cluster matrix with equal cluster size n/r, the local incoherence parameters are given by µ0_ij = 1 for all (i, j), and

µ1_ij = r if (i, j) lies in a diagonal block, and µ1_ij = 0 if (i, j) lies in an off-diagonal block,

and thus

µ_ij = max{µ0_ij, µ1_ij} = r if (i, j) lies in a diagonal block, and 1 if (i, j) lies in an off-diagonal block.

Figure 1: Error vulnerability of different parts of a cluster matrix. (a) Diagonal-block error ρ_d vs. off-diagonal-block error ρ_od, for n = 600 and r = 4 with equal cluster sizes. (b) Error vulnerability with respect to cluster sizes 500 vs. 100. In both cases, for each probability pair we generate 10 trials of independent random error matrices and count the number of successes of PCP. A trial is declared successful if the recovered L̂ satisfies ‖L̂ − L‖_F / ‖L‖_F ≤ 10^{−3}. Color from white to black represents the number of successful trials changing from 10 to 0.

Based on Theorem 2, it is clear that diagonal-block entries are more locally coherent and hence more vulnerable to errors, whereas off-diagonal-block entries are more locally incoherent and hence more robust to errors.
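A cluster matrix as used above is simple to construct, which makes the rank and block structure easy to verify numerically (sketch ours, for illustration):

```python
import numpy as np

def cluster_matrix(sizes):
    # block-diagonal cluster matrix: L[i, j] = 1 iff elements i and j share a cluster
    n = sum(sizes)
    L = np.zeros((n, n))
    start = 0
    for k in sizes:
        L[start:start + k, start:start + k] = 1.0
        start += k
    return L
```

Its rank equals the number of clusters, matching the statement above.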
Moreover, this example also demonstrates the necessity of µ1 in the robust PCA problem. [4] showed that µ1 is not necessary for matrix completion, and argued informally that µ1 is necessary for robust PCA by connecting the robust PCA problem to the hardness of finding a small clique in a large random graph. The above example provides concrete evidence for this fact: here the µ0_ij are the same over the entire matrix, so it is µ1_ij that differentiates the incoherence of the diagonal blocks from that of the off-diagonal blocks, and thus differentiates their robustness to errors. We then consider the case of two clusters with different sizes: cluster 1 has size 500 and cluster 2 has size 100; hence r = 2. We apply errors to the block-diagonal entries corresponding to clusters 1 and 2 with probabilities ρ1 and ρ2, respectively. In Fig. 1b, we plot the recovery accuracy of PCP for each pair (ρ1, ρ2). It is clear from the figure that failure occurs for larger ρ1 than ρ2, which implies that entries corresponding to the larger cluster are more robust to errors than entries corresponding to the smaller cluster. This can be explained by Theorem 2, because the local incoherence of a block-diagonal entry is given by µ_ij = n²/(rK²), where K is the corresponding cluster size, and hence the error corruption probability should satisfy 1 − 2ρ_ij > C0 (√n / K) log n for correct recovery. Thus, a larger cluster can resist denser errors. This also coincides with the results on graph clustering in [13, 16].

2.4 Outline of the Proof of Theorem 1

The proof of Theorem 1 follows the idea established in [1] and further developed in [3, 12]. Our main technical development lies in the analysis of non-uniform error corruption based on local incoherence parameters, for which we introduce a new weighted norm ℓ_{w(∞)} and establish concentration properties and bounds associated with this norm.
As a generalization of the matrix infinity norm, ℓ_{w(∞)} incorporates both µ0_ij and µ1_ij, and hence differs from the weighted norms ℓ_{µ(∞)} and ℓ_{µ(∞,2)} introduced in [9], as does its role in the analysis of the robust PCA problem. We next outline the proof; the detailed proofs are provided in Appendix A. We first introduce some notation. Define the subspace T := {UX* + YV* : X, Y ∈ R^{n×r}}, where U and V are the left and right singular matrices of L. Then T induces a projection operator P_T given by

P_T(M) = UU*M + MVV* − UU*MVV*.

Moreover, T⊥, the complement subspace of T, induces an orthogonal projection operator P_{T⊥} with

P_{T⊥}(M) = (I − UU*) M (I − VV*).

We further define two operators associated with Bernoulli sampling. Let Ω0 denote a generic subset of [n]×[n]. We define a corresponding projection operator P_{Ω0} as

P_{Ω0}(M) = Σ_{ij} I{(i,j)∈Ω0} ⟨M, e_i e_j*⟩ e_i e_j*,

where I{·} is the indicator function. If Ω0 is a random set generated by Bernoulli sampling with P((i, j) ∈ Ω0) = t_ij, where 0 < t_ij ≤ 1 for all i, j ∈ [n], we further define a linear operator R_{Ω0} as

R_{Ω0}(M) = Σ_{ij} (1/t_ij) I{(i,j)∈Ω0} ⟨M, e_i e_j*⟩ e_i e_j*.

Throughout this paper, "with high probability" means "with probability at least 1 − c n^{−10}", where the constant c may differ between contexts. Our proof consists of two main steps: establishing that the existence of a certain dual certificate is sufficient to guarantee correct recovery, and constructing such a dual certificate. For the first step, we establish the following proposition.

Proposition 1. If 1 − ρ_ij ≥ max{ C0 √(µ_ij r / n) · log n, 1/n³ }, then PCP with λ = 1/(32√(n log n)) yields a unique solution that agrees with the correct (L, S) with high probability, provided there exists a dual certificate Y obeying

P_Ω(Y) = 0, (9)
‖Y‖_∞ ≤ λ/4, (10)
‖P_{T⊥}(λ sgn(S) + Y)‖ ≤ 1/4, (11)
‖P_T(Y + λ sgn(S) − UV*)‖_F ≤ λ/n². (12)

The proof of the above proposition adapts the idea in [1, 12] for uniform errors to non-uniform errors.
In particular, the proof exploits the properties of R_Ω associated with non-uniform errors, which are presented as Lemma 1 (established in [9]) and Lemma 2 in Appendix A.1. Proposition 1 suggests that it suffices to prove Theorem 1 if we can find a dual certificate Y satisfying the conditions (9)-(12). Thus, the second step is to construct Y via the golfing scheme. Although we adapt the steps in [12] to construct the dual certificate Y, our analysis requires new technical development based on local incoherence parameters. Recall from Section 2.1 that P((i, j) ∈ Ω) = ρ_ij and P((i, j) ∈ Γ) = p_ij, where Γ = Ω^c and p_ij = 1 − ρ_ij. Consider the golfing scheme with non-uniform sizes, as suggested in [12], to establish bounds with fewer log factors. Let Γ = Γ1 ∪ Γ2 ∪ ··· ∪ Γl, where the {Γk} are independent random sets given by

P((i, j) ∈ Γ1) = p_ij / 6,  P((i, j) ∈ Γk) = q_ij, for k = 2, ..., l.

Thus, if ρ_ij = (1 − p_ij/6)(1 − q_ij)^{l−1}, the two sampling strategies are equivalent. Due to the overlap between the {Γk}, we have q_ij ≥ (5/6) p_ij / (l − 1). We set l = ⌊5 log n + 1⌋ and construct a dual certificate Y in the following iterative way:

Z_0 = P_T(UV* − λ sgn(S)), (13)
Z_k = (P_T − P_T R_{Γk} P_T) Z_{k−1}, for k = 1, ..., l, (14)
Y = Σ_{k=1}^{l} R_{Γk} Z_{k−1}. (15)

It is then sufficient to show that such a constructed Y satisfies the dual certificate conditions (9)-(12). Condition (9) holds by the construction of Y. Condition (12) can be shown via a concentration property of each iteration step (14) in ‖·‖_F, characterized in Lemma 3 in Appendix A.1. In order to show that Y satisfies conditions (10) and (11), we introduce the following weighted norm. Let ŵ_ij = √(µ_ij r / n²) and w_ij = max{ŵ_ij, ε}, where ε is the smallest nonzero ŵ_ij, introduced to avoid singularity. Then for any matrix Z, define

‖Z‖_{w(∞)} = max_{i,j} |Z_ij| / w_ij. (16)

It is easy to verify that ‖·‖_{w(∞)} is a well-defined norm.
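The operators P_T, P_{T⊥}, R_{Ω0} and the weighted norm (16) defined above translate directly into code; a numerical sketch (ours, with U and V given as explicit n×r orthonormal factors):

```python
import numpy as np

def P_T(M, U, V):
    # projection onto T = {U X^* + Y V^*}
    PU, PV = U @ U.T, V @ V.T
    return PU @ M + M @ PV - PU @ M @ PV

def P_Tperp(M, U, V):
    # orthogonal complement: (I - UU^*) M (I - VV^*)
    return M - P_T(M, U, V)

def R_Omega(M, mask, t):
    # reweighted restriction R_Omega0: entry (i, j) scaled by 1/t_ij on Omega0, zero elsewhere
    return np.where(mask, M / t, 0.0)

def w_inf_norm(Z, mu, r):
    # weighted norm of eq. (16): max_ij |Z_ij| / w_ij with w_ij = max(w_hat_ij, eps)
    n = Z.shape[0]
    w_hat = np.sqrt(mu * r / n**2)
    eps = np.min(w_hat[w_hat > 0])   # smallest nonzero w_hat, avoids division by zero
    w = np.maximum(w_hat, eps)
    return np.max(np.abs(Z) / w)
```

Two sanity checks follow from the definitions: P_T is idempotent, and P_T(M) is orthogonal to P_{T⊥}(M) for any M.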
We can then show that each iteration step (14) satisfies, in the norms ‖·‖ and ‖·‖_{w(∞)}, two concentration properties characterized respectively in Lemmas 4 and 5, which are essential to prove conditions (10) and (11).

3 Numerical Experiments

In this section, we provide numerical experiments to demonstrate our theoretical results. In these experiments, we adopt the augmented Lagrange multiplier algorithm of [17] to solve PCP. We set λ = 1/√(n log n). A trial of PCP (for a given realization of error locations) is declared successful if the L̂ recovered by PCP satisfies ‖L̂ − L‖_F / ‖L‖_F ≤ 10^{−3}. We apply the following three models to construct the low-rank matrix L.

• Bernoulli model: L = XX*, where X is an n×r matrix with entries independently taking the values +1/√n and −1/√n equally likely.
• Gaussian model: L = XX*, where X is an n×r matrix with entries independently sampled from the Gaussian distribution N(0, 1/n).
• Cluster model: L is a block-diagonal matrix with r equal-size blocks containing all '1's.

In order to demonstrate that the local incoherence parameter affects local robustness to error corruption, we study the following two error corruption models.

• Uniform error corruption: sgn(S_ij) is generated as in (6) with ρ_ij = ρ for all i, j ∈ [n], and S = sgn(S).
• Adaptive error corruption: sgn(S_ij) is generated as in (6) with ρ_ij = ρ n² √(1/µ_ij) / Σ_ij √(1/µ_ij) for all i, j ∈ [n], and S = sgn(S).

In both cases, the error matrix has the same average error corruption percentage ρ, but in adaptive error corruption, the local error corruption probability is adapted to the local incoherence. Our first experiment demonstrates that the robustness of PCP to error corruption depends not only on the number of errors but also on how the errors are distributed over the matrix.
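The adaptive corruption probabilities above can be generated as follows (sketch ours; note that for very small µ_ij the formula can exceed one and would need clipping in practice):

```python
import numpy as np

def adaptive_rho(mu, rho_avg):
    # rho_ij proportional to sqrt(1/mu_ij), normalized so that the average equals rho_avg
    inv = np.sqrt(1.0 / mu)
    return rho_avg * mu.size * inv / inv.sum()
```

By construction, the resulting probability matrix has the same average corruption level as the uniform model, so any difference in recovery is due purely to how the errors are placed.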
For all three low-rank matrix models, we set n = 1200 and rank r = 10.

Figure 2: Recovery failure frequency of PCP versus error corruption percentage ρ, comparing uniform and adaptive error corruption for (a) the Bernoulli model, (b) the Gaussian model, and (c) the cluster model.

For each low-rank matrix model, we apply the uniform and adaptive error matrices, and plot the failure frequency of PCP versus the error corruption percentage ρ in Fig. 2. For each value of ρ, we perform 50 trials of independent error corruption and count the number of failures of PCP. Each plot of Fig. 2 compares the robustness of PCP to uniform error corruption (the red square line) and adaptive error corruption (the blue circle line). We observe that PCP can tolerate more errors in the adaptive case. This is because the adaptive error matrix is distributed according to the local incoherence parameter, with higher error density in areas where the matrix can tolerate more errors. Furthermore, a comparison among the three plots in Fig. 2 shows that the gap between uniform and adaptive error matrices is smallest for the Bernoulli model and largest for the cluster model. Our theoretical results suggest that the gap is due to the variation of the local incoherence parameter across the matrix, which can be measured by the variance of µ_ij: a larger variance of µ_ij should yield a larger gap. Our numerical calculation of the variances for the three models yields Var(µ_Bernoulli) = 1.2109, Var(µ_Gaussian) = 2.1678, and Var(µ_cluster) = 7.29, which confirms this explanation.
Figure 3: Largest allowable error corruption percentage ρ versus the rank r of L such that PCP yields correct recovery, comparing uniform and adaptive error corruption for (a) the Bernoulli model, (b) the Gaussian model, and (c) the cluster model.

We next study the phase transition in rank and error corruption probability. For the three low-rank matrix models, we set n = 1200. In Fig. 3, we plot the error corruption percentage versus the rank of L for both the uniform and adaptive error corruption models. Each point on a curve records the maximum allowable error corruption percentage under the corresponding rank such that PCP yields correct recovery. We count an (r, ρ) pair as successful if nine trials out of ten are successful. We first observe that in each plot of Fig. 3, PCP is more robust under adaptive error corruption, for the same reason explained above. We further observe that the gap between uniform and adaptive error corruption changes as the rank changes. In the low-rank regime, the gap is largely determined by the variance of the incoherence parameter µ_ij, as argued before. As the rank increases, the gap is more dominated by the rank and less affected by the local incoherence. Eventually, for large enough rank, no error can be tolerated no matter how the errors are distributed.

4 Conclusion

We characterize refined conditions under which PCP succeeds in solving the robust PCA problem. Our result shows that the ability of PCP to correctly recover a low-rank matrix from errors is related not only to the total number of corrupted entries but also to the locations of the corrupted entries, and more essentially to the local incoherence of the low-rank matrix. This result is well supported by our numerical experiments.
Moreover, our result has rich implications when the low-rank matrix is a cluster matrix, where it coincides with state-of-the-art studies on clustering problems via low-rank cluster matrices. Our result may also motivate the development of weighted PCP to improve recovery performance, similar to the weighted algorithms developed for matrix completion in [9, 18].

References

[1] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[2] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[3] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013.
[4] Y. Chen. Incoherence-optimal matrix completion. IEEE Transactions on Information Theory, 61(5):2909–2923, May 2015.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[7] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[8] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[9] Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Completing any low-rank matrix, provably. arXiv preprint arXiv:1306.2979, 2013.
[10] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57(11):7221–7234, 2011.
[11] A. Ganesh, J. Wright, X. Li, E. J. Candès, and Y. Ma.
Dense error correction for low-rank matrices via principal component pursuit. In IEEE International Symposium on Information Theory (ISIT), pages 1513–1517, Austin, TX, USA, June 2010.
[12] X. Li. Compressed sensing and matrix completion with constant proportion of corruptions. Constructive Approximation, 37(1):73–99, 2013.
[13] S. Oymak and B. Hassibi. Finding dense clusters via "low rank + sparse" decomposition. arXiv preprint arXiv:1104.5186, 2011.
[14] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems (NIPS), pages 2204–2212, Lake Tahoe, Nevada, USA, December 2012.
[15] Y. Chen, S. Sanghavi, and H. Xu. Improved graph clustering. IEEE Transactions on Information Theory, 60(10):6440–6455, October 2014.
[16] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. Journal of Machine Learning Research, 15(1):2213–2238, 2014.
[17] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[18] N. Srebro and R. R. Salakhutdinov. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems (NIPS), pages 2056–2064, Vancouver, Canada, December 2010.
[19] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[20] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
Probabilistic Variational Bounds for Graphical Models Qiang Liu Computer Science Dartmouth College qliu@cs.dartmouth.edu John Fisher III CSAIL MIT fisher@csail.mit.edu Alexander Ihler Computer Science Univ. of California, Irvine ihler@ics.uci.edu Abstract Variational algorithms such as tree-reweighted belief propagation can provide deterministic bounds on the partition function, but are often loose and difficult to use in an “any-time” fashion, expending more computation for tighter bounds. On the other hand, Monte Carlo estimators such as importance sampling have excellent any-time behavior, but depend critically on the proposal distribution. We propose a simple Monte Carlo based inference method that augments convex variational bounds by adding importance sampling (IS). We argue that convex variational methods naturally provide good IS proposals that “cover” the target probability, and reinterpret the variational optimization as designing a proposal to minimize an upper bound on the variance of our IS estimator. This both provides an accurate estimator and enables construction of any-time probabilistic bounds that improve quickly and directly on state-of-the-art variational bounds, and provide certificates of accuracy given enough samples relative to the error in the initial bound. 1 Introduction Graphical models such as Bayesian networks, Markov random fields and deep generative models provide a powerful framework for reasoning about complex dependency structures over many variables [see e.g., 14, 13]. A fundamental task is to calculate the partition function, or normalization constant. This task is #P-complete in the worst case, but in many practical cases it is possible to find good deterministic or Monte Carlo approximations. The most useful approximations should give not only accurate estimates, but some form of confidence interval, so that for easy problems one has a certificate of accuracy, while harder problems are identified as such. 
Broadly speaking, approximations fall into two classes: variational optimization and Monte Carlo sampling. Variational inference [29] provides a spectrum of deterministic estimates and upper and lower bounds on the partition function; these include loopy belief propagation (BP), which is often quite accurate; its convex variants, such as tree reweighted BP (TRW-BP), which give upper bounds on the partition function; and mean field type methods, which give lower bounds. Unfortunately, these methods often lack useful accuracy assessments; although in principle a pair of upper and lower bounds (such as TRW-BP and mean field) taken together give an interval containing the true solution, the gap is often too large to be practically useful. Also, improving these bounds typically means using larger regions, which quickly runs into memory constraints.

Monte Carlo methods, often based on some form of importance sampling (IS), can also be used to estimate the partition function [e.g., 15]. In principle, IS provides unbiased estimates, with the potential for a probabilistic bound: a bound which holds with some user-selected probability 1 − δ. Sampling estimates can also easily trade time for increased accuracy, without using more memory. Unfortunately, choosing the proposal distribution in IS is often both crucial and difficult; if poorly chosen, not only is the estimator high-variance, but the samples' empirical variance estimate is also misleading, resulting in both poor accuracy and poor confidence estimates; see e.g., [35, 1].

We propose a simple algorithm that combines the advantages of variational and Monte Carlo methods. Our result is based on the observation that convex variational methods, including TRW-BP and its generalizations, naturally provide good importance sampling proposals that "cover" the probability of the target distribution; the simplest example is a mixture of spanning trees constructed by TRW-BP.
We show that the importance weights of this proposal are uniformly bounded by the convex upper bound itself, which admits a bound on the variance of the estimator, and more importantly, allows the use of exponential concentration inequalities such as the empirical Bernstein inequality to provide explicit confidence intervals. Our method provides several important advantages: First, the upper bounds resulting from our sampling approach improve directly on the initial variational upper bound. This allows our bound to start at a state-of-the-art value, and be quickly and easily improved in an any-time, memory efficient way. Additionally, using a two-sided concentration bound provides a “certificate of accuracy” which improves over time at an easily analyzed rate. Our upper bound is significantly better than existing probabilistic upper bounds, while our corresponding lower bound is typically worse with few samples but eventually outperforms state-of-the-art probabilistic bounds [11]. Our approach also results in improved estimates of the partition function. As in previous work [32, 34, 31], applying importance sampling serves as a “bias correction” to variational approximations. Here, we interpret the variational bound optimization as equivalent to minimizing an upper bound on the IS estimator’s variance. Empirically, this translates into estimates that can be significantly more accurate than IS using other variational proposals, such as mean field or belief propagation. Related Work. Importance sampling and related approaches have been widely explored in the Bayesian network literature, in which the partition function corresponds to the probability of observed evidence; see e.g., [8, 26, 33, 11] and references therein. 
Dagum and Luby [4] derive a sample size to ensure a probabilistic bound with given relative accuracy; however, they use the normalized Bayes net distribution as a proposal, leading to prohibitively large numbers of samples when the partition function is small, and making it inapplicable to Markov random fields. Cheng [2] refines this result, including a user-specified bound on the importance weights, but leaves the choice of proposal unspecified. Some connections between IS and variational methods are also explored in Yuan and Druzdzel [32, 34], Wexler and Geiger [31], and Gogate and Dechter [11], in which proposals are constructed based on loopy BP or mean field methods. While straightforward in principle, we are not aware of any prior work which uses variational upper bounds to construct a proposal, or, more importantly, analyzes their properties. An alternative probabilistic upper bound can be constructed using "perturb and MAP" methods [23, 12] combined with recent concentration results [22]; however, in our experiments the resulting bounds were quite loose. Although not directly related to our work, there are also methods that connect variational inference with MCMC [e.g., 25, 6]. Our work is orthogonal to the line of research on adaptive importance sampling, which refines the proposal as more samples are drawn [e.g., 21, 3]; we focus on developing a good fixed proposal based on variational ideas, and leave adaptive improvement as a possible future direction.

Outline. We introduce background on graphical models in Section 2. Our main result is presented in Section 3, where we construct a tree reweighted IS proposal, discuss its properties, and propose our probabilistic bounds based on it. We give a simple extension of our method to higher order cliques based on the weighted mini-bucket framework in Section 4. We then show experimental comparisons in Section 5 and conclude with Section 6.

2 Background

2.1 Undirected Probabilistic Graphical Models

Let x = [x_1, \ldots, x_p] be a discrete random vector taking values in \mathcal{X} := \mathcal{X}_1 \times \cdots \times \mathcal{X}_p; a probabilistic graphical model on x, in an over-complete exponential family form, is

p(x;\theta) = \frac{f(x;\theta)}{Z(\theta)}, \quad f(x;\theta) = \exp\Big( \sum_{\alpha \in I} \theta_\alpha(x_\alpha) \Big), \quad Z(\theta) = \sum_{x \in \mathcal{X}} f(x;\theta), \qquad (1)

where I = {α} is a set of subsets of variable indices, and θ_α : X_α → R are functions of x_α; we denote by θ = {θ_α(x_α) : ∀α ∈ I, x_α ∈ X_α} the vector formed by the elements of θ_α(·), called the natural parameters. Our goal is to calculate the partition function Z(θ) that normalizes the distribution; we often drop the dependence on θ and write p(x) = f(x)/Z for convenience. The factorization of p(x; θ) can be represented by an undirected graph G = (V, E_G), called its Markov graph, where each vertex k ∈ V is associated with a variable x_k, and nodes k, l ∈ V are connected (i.e., (kl) ∈ E_G) iff there exists some α ∈ I that contains both k and l; then I is a set of cliques of G. A simple special case of (1) is the pairwise model, in which I = V ∪ E_G:

f(x;\theta) = \exp\Big( \sum_{k \in V} \theta_k(x_k) + \sum_{(kl) \in E_G} \theta_{kl}(x_k, x_l) \Big). \qquad (2)

2.2 Monte Carlo Estimation via Importance Sampling

Importance sampling (IS) is at the core of many Monte Carlo methods for estimating the partition function. The idea is to take a tractable, normalized distribution q(x), called the proposal, and estimate Z using samples {x^i}_{i=1}^{n} ∼ q(x):

\hat{Z} = \frac{1}{n} \sum_{i=1}^{n} w(x^i), \qquad w(x^i) = \frac{f(x^i)}{q(x^i)},

where w(x) is called the importance weight. It is easy to show that Ẑ is an unbiased estimator of Z, in that E[Ẑ] = Z, if q(x) > 0 whenever p(x) > 0, and has an MSE of E(Ẑ − Z)^2 = var(w(x))/n. Unfortunately, the IS estimator often has very high variance if the choice of proposal distribution is very different from the target, especially when the proposal is more peaked or has thinner tails than the target. In these cases, there exist configurations x such that q(x) ≪ p(x), giving importance weights w(x) = f(x)/q(x) with extremely large values but very small probabilities.
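As a toy illustration of the IS estimator Ẑ = (1/n) Σ_i f(x^i)/q(x^i), the following sketch (a hypothetical 3-variable chain model with made-up parameters, not one of the paper's benchmarks) checks the estimate against the exact partition function obtained by brute-force enumeration, using a uniform proposal:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Tiny 3-variable chain with x_k in {-1, +1}; parameters are invented.
theta_node = np.array([0.2, -0.1, 0.3])
theta_edge = np.array([0.5, -0.4])  # edges (0,1) and (1,2)

def log_f(x):
    x = np.asarray(x, dtype=float)
    return theta_node @ x + theta_edge @ (x[:-1] * x[1:])

states = [np.array(s) for s in itertools.product([-1.0, 1.0], repeat=3)]
f_vals = np.array([np.exp(log_f(x)) for x in states])
Z = f_vals.sum()  # exact partition function, feasible only for tiny models

# Importance sampling with a uniform proposal q(x) = 1/8, so w(x) = 8 f(x):
n = 100_000
idx = rng.integers(0, len(states), size=n)
Z_hat = (f_vals[idx] * len(states)).mean()
print(abs(Z_hat - Z) / Z)  # small relative error
```

With a proposal this close to flat the weights are well behaved; the paper's point is precisely that for peaked targets a poorly matched proposal makes this same estimator unreliable.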
Due to the low probability of seeing these large weights, a "typical" run of IS often underestimates Z in practice; that is, Ẑ ≤ Z with high probability, despite the estimator being unbiased. Similarly, the empirical variance of {w(x^i)} can also severely underestimate the true variance var(w(x)), and so fail to capture the true uncertainty of the estimator. For this reason, concentration inequalities that make use of the empirical variance (see Section 3) also require that w, or its variance, be bounded. It is thus desirable to construct proposals that are similar to, and less peaked than, the target distribution p(x). The key observation of this work is that tree reweighted BP and its generalizations provide an easy way to construct such good proposals.

2.3 Tree Reweighted Belief Propagation

Next we describe the tree reweighted (TRW) upper bound on the partition function, restricting to pairwise models (2) for notational ease. In Section 4 we give an extension that includes both more general factor graphs and more general convex upper bounds. Let T = {T} be a set of spanning trees T = (V, E_T) of G that covers G: ∪_T E_T = E_G. We assign a set of nonnegative weights {ρ_T : T ∈ T} on T such that Σ_T ρ_T = 1. Let θ^T = {θ^T : T ∈ T} be a set of natural parameters that satisfies Σ_T ρ_T θ^T = θ, where each θ^T respects the structure of T (so that θ^T_{kl}(x_k, x_l) ≡ 0 for all (kl) ∉ E_T). Define

p^T(x) := p(x;\theta^T) = \frac{f(x;\theta^T)}{Z(\theta^T)}, \quad f(x;\theta^T) = \exp\Big( \sum_{k \in V} \theta^T_k(x_k) + \sum_{(kl) \in E_T} \theta^T_{kl}(x_k, x_l) \Big);

then p^T(x) is a tree structured graphical model with Markov graph T. Wainwright et al. [30] use the fact that log Z(θ) is a convex function of θ to propose the upper bound

\log Z_{\mathrm{trw}}(\theta^{\mathcal{T}}) = \sum_{T \in \mathcal{T}} \rho_T \log Z(\theta^T) \;\ge\; \log Z\Big( \sum_{T \in \mathcal{T}} \rho_T \theta^T \Big) = \log Z(\theta),

via Jensen's inequality. Wainwright et al. [30] find the tightest bound via a convex optimization:

\log Z^*_{\mathrm{trw}}(\theta) = \min_{\theta^{\mathcal{T}}} \Big\{ \log Z_{\mathrm{trw}}(\theta^{\mathcal{T}}) \;\; \text{s.t.} \;\; \sum_T \rho_T \theta^T = \theta \Big\}. \qquad (3)

Wainwright et al.
[30] solve this optimization by a tree reweighted belief propagation (TRW-BP) algorithm, and note that the optimality condition of (3) is equivalent to enforcing a marginal consistency condition on the trees: a θ^T optimizes (3) if and only if there exists a set of common singleton and pairwise "pseudo-marginals" {b_k(x_k), b_{kl}(x_k, x_l)}, corresponding to the fixed point of TRW-BP in Wainwright et al. [30], such that

b_{kl}(x_k, x_l) = p^T(x_k, x_l), \;\;\forall (kl) \in E_T, \qquad b_k(x_k) = p^T(x_k), \;\;\forall k \in V,

where p^T(x_k) and p^T(x_k, x_l) are the marginals of p^T(x). Thus, after running TRW-BP, we can calculate p^T(x) via

p^T(x) = p(x;\theta^T) = \prod_{k \in V} b_k(x_k) \prod_{(kl) \in E_T} \frac{b_{kl}(x_k, x_l)}{b_k(x_k)\, b_l(x_l)}. \qquad (4)

Because TRW provides a convex upper bound, it is often well-suited to the inner loop of learning algorithms [e.g., 28]. However, it is often far less accurate than its non-convex counterpart, loopy BP; in some sense, this can be viewed as the cost of being a bound. In the next section, we show that our importance sampling procedure can "de-bias" the TRW bound, producing an estimator that significantly outperforms loopy BP; in addition, due to the nice properties of our TRW-based proposal, we can use an empirical Bernstein inequality to construct a non-asymptotic confidence interval for our estimator, turning the deterministic TRW bound into a much tighter probabilistic bound.

3 Tree Reweighted Importance Sampling

We propose to use the collection of trees p^T(x) and weights ρ_T in TRW to form an importance sampling proposal,

q(x;\theta^{\mathcal{T}}) = \sum_{T \in \mathcal{T}} \rho_T\, p^T(x), \qquad (5)

which defines an estimator Ẑ = (1/n) Σ_{i=1}^{n} w(x^i) with x^i drawn i.i.d. from q(x; θ^T). Our observation is that this proposal is good due to the special convex construction of TRW.
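Drawing a sample from the mixture proposal amounts to picking a tree T with probability ρ_T and then ancestral-sampling p^T(x) from its pseudo-marginals, since on a tree the product over b_kl/(b_k b_l) factorizes into chained conditionals. A minimal sketch for a single chain-structured tree, with made-up (but locally consistent) belief tables rather than actual TRW-BP output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pairwise pseudo-marginals on a 3-node chain tree 0 - 1 - 2,
# chosen so their singleton margins agree.
b01 = np.array([[0.30, 0.10],
                [0.20, 0.40]])      # b_01(x0, x1)
b12 = np.array([[0.35, 0.15],
                [0.25, 0.25]])      # b_12(x1, x2)
b0 = b01.sum(axis=1)                # b_0(x0) = (0.4, 0.6)
b1 = b01.sum(axis=0)                # b_1(x1) = (0.5, 0.5), matches b12 rows

def sample_tree():
    """Ancestral sampling: on a chain, p^T(x) = b0(x0) b01/(b0 b1) b12/(b1 b2)
    factorizes as p(x0) p(x1|x0) p(x2|x1)."""
    x0 = rng.choice(2, p=b0)
    x1 = rng.choice(2, p=b01[x0] / b0[x0])
    x2 = rng.choice(2, p=b12[x1] / b1[x1])
    return x0, x1, x2

samples = np.array([sample_tree() for _ in range(20_000)])
print(samples[:, 0].mean())  # empirical P(x0 = 1), close to b0[1] = 0.6
```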
To see this, note that the reparameterization constraint Σ_T ρ_T θ^T = θ can be rewritten as

f(x;\theta) = Z_{\mathrm{trw}}(\theta^{\mathcal{T}}) \prod_{T} \big( p^T(x) \big)^{\rho_T}, \qquad (6)

that is, f(x; θ) is the {ρ_T}-weighted geometric mean of the p^T(x), up to the constant Z_trw; on the other hand, q(x; θ^T), by its definition, is the arithmetic mean of the p^T(x), and hence is always at least the geometric mean by the AM-GM inequality, guaranteeing good coverage of the target's probability. To be specific, q(x; θ^T) is always no smaller than f(x; θ)/Z_trw(θ^T), and hence the importance weight w(x) is always upper bounded by Z_trw(θ^T). Note that (5)-(6) immediately imply that q(x; θ^T) > 0 whenever f(x; θ) > 0. We summarize our result as follows.

Proposition 3.1. (i) If Σ_T ρ_T θ^T = θ, with ρ_T ≥ 0 and Σ_T ρ_T = 1, then the importance weight w(x) = f(x; θ)/q(x; θ^T), with q(x; θ^T) defined in (5), satisfies

w(x) \le Z_{\mathrm{trw}}(\theta^{\mathcal{T}}), \qquad \forall x \in \mathcal{X}, \qquad (7)

that is, the importance weights of (5) are always bounded by the TRW upper bound; this reinterprets the TRW optimization (3) as finding the mixture proposal in (5) that has the smallest upper bound on the importance weights.

(ii) As a result, for x ∼ q(x; θ^T) we have max{var(w(x)), \widehat{var}(w(x))} ≤ (1/4) Z_trw^2, where \widehat{var}(w(x)) is the empirical variance of the weights. This implies that E(Ẑ − Z)^2 ≤ Z_trw^2 / (4n).

Proof. (i) Directly apply the AM-GM inequality to (5) and (6). (ii) Note that E[w(x)] = Z, and hence var(w(x)) = E[w(x)^2] − E[w(x)]^2 ≤ Z_trw Z − Z^2 ≤ (1/4) Z_trw^2.

Note that the TRW reparameterization (6) is key to establishing our results. Its advantage is two-fold: first, it provides a simple upper bound on w(x); for an arbitrary q(·), establishing such an upper bound may require a difficult combinatorial optimization over x. Second, it enables that bound to be optimized over q(·), resulting in a good proposal.

Empirical Bernstein Confidence Bound. The upper bound on w(x) in Proposition 3.1 allows us to use exponential concentration inequalities to construct tight finite-sample confidence bounds.
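Proposition 3.1 can be checked numerically on a tiny example. The sketch below uses invented parameters on a 3-node cycle, whose three spanning trees are the three 2-edge chains; since each edge lies in exactly two trees with ρ_T = 1/3, setting θ^T_e = 1.5 θ_e gives a valid reparameterization Σ_T ρ_T θ^T = θ, and enumeration verifies both log Z ≤ log Z_trw and the weight bound (7):

```python
import itertools
import numpy as np

# Binary +/-1 model on a 3-cycle; all parameters are invented for illustration.
edges = [(0, 1), (1, 2), (0, 2)]
theta_node = np.array([0.1, -0.2, 0.3])
theta_edge = {(0, 1): 0.6, (1, 2): -0.5, (0, 2): 0.4}
trees = [[(0, 1), (1, 2)], [(1, 2), (0, 2)], [(0, 1), (0, 2)]]
rho = np.full(3, 1.0 / 3.0)

def log_f(x, tree_edges=None, scale=1.0):
    """log f under the full model, or under one tree with rescaled edges."""
    use = edges if tree_edges is None else tree_edges
    pair = sum(theta_edge[e] * x[e[0]] * x[e[1]] for e in use)
    return float(theta_node @ x) + scale * pair

states = [np.array(s, dtype=float) for s in itertools.product([-1, 1], repeat=3)]
Z = sum(np.exp(log_f(x)) for x in states)
Z_T = np.array([sum(np.exp(log_f(x, t, scale=1.5)) for x in states)
                for t in trees])
log_Z_trw = float(rho @ np.log(Z_T))  # TRW upper bound on log Z

def q(x):  # mixture-of-trees proposal, eq. (5)
    return sum(r * np.exp(log_f(x, t, scale=1.5)) / z
               for r, t, z in zip(rho, trees, Z_T))

w_max = max(np.exp(log_f(x)) / q(x) for x in states)
print(np.log(Z) <= log_Z_trw, w_max <= np.exp(log_Z_trw) + 1e-9)
```

Both checks hold for any valid reparameterization, by Jensen and by AM-GM respectively; the enumeration merely makes the inequalities concrete.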
Based on the empirical Bernstein inequality of Maurer and Pontil [19], we have:

Corollary 3.2 (Maurer and Pontil [19]). Let Ẑ be the IS estimator resulting from q(x) in (5). Define

\Delta = \sqrt{ \frac{2\, \widehat{\mathrm{var}}(w(x)) \log(2/\delta)}{n} } + \frac{7\, Z_{\mathrm{trw}}(\theta^{\mathcal{T}}) \log(2/\delta)}{3(n-1)}, \qquad (8)

where \widehat{var}(w(x)) is the empirical variance of the weights. Then Ẑ^+ = Ẑ + Δ and Ẑ^- = Ẑ − Δ are upper and lower bounds on Z, each holding with probability at least (1 − δ); that is, Pr(Z ≤ Ẑ^+) ≥ 1 − δ and Pr(Ẑ^- ≤ Z) ≥ 1 − δ.

The quantity Δ is quite intuitive: the first term is proportional to the empirical standard deviation and decays at the classic 1/√n rate, while the second term captures the possibility that the empirical variance is inaccurate; it depends on the boundedness of w(x) and decays at rate 1/n. Since \widehat{var}(w) < Z_trw^2, the second term typically dominates for small n, and the first term for large n.

When Δ is large, the lower bound Ẑ − Δ may be negative; this is most common when n is small and Z_trw is much larger than Z. In this case, we may replace Ẑ^- with any deterministic lower bound, or with Ẑδ, which is a (1 − δ) probabilistic lower bound by the Markov inequality; see Gogate and Dechter [11] for more Markov inequality based lower bounds. However, once n is large enough, we expect Ẑ^- to be much tighter than bounds from Markov's inequality, since Ẑ^- also leverages boundedness and variance information.¹ On the other hand, the Bernstein upper bound Ẑ^+ readily gives a good upper bound, and is usually much tighter than Z_trw even with a relatively small n. For example, if Ẑ ≪ Z_trw (i.e., the TRW bound is not tight), our upper bound Ẑ^+ improves rapidly on Z_trw at rate 1/n and passes Z_trw when n ≥ (7/3) log(2/δ) + 1 (for example, for δ = 0.025 as used in our experiments, we have Ẑ^+ ≤ Z_trw by n = 12). Meanwhile, one can show that the lower bound must be non-trivial (Ẑ^- > 0) once n > 6 (Z_trw/Ẑ) log(2/δ) + 1. During sampling, we can roughly estimate the point at which it will become non-trivial by finding n such that Ẑ ≥ Δ.
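The interval in (8) is a direct computation from the weights. The sketch below transcribes it, using δ = 0.025 as in the paper's experiments, but with synthetic uniform weights standing in for actual importance weights (so the "true" value E[w] = 1 is known by construction):

```python
import numpy as np

def bernstein_interval(w, Z_trw, delta=0.025):
    """Empirical-Bernstein interval (eq. 8) for the IS estimate of Z.

    w     : importance weights f(x_i)/q(x_i), each bounded by Z_trw (eq. 7)
    Z_trw : TRW upper bound, which bounds the weight range
    Each one-sided bound holds with probability at least 1 - delta.
    """
    n = len(w)
    Z_hat = w.mean()
    var_hat = w.var(ddof=1)  # empirical variance of the weights
    Delta = (np.sqrt(2 * var_hat * np.log(2 / delta) / n)
             + 7 * Z_trw * np.log(2 / delta) / (3 * (n - 1)))
    return Z_hat - Delta, Z_hat, Z_hat + Delta

# Toy demonstration with synthetic weights in [0, Z_trw]; true Z = E[w] = 1.
rng = np.random.default_rng(0)
w = rng.uniform(0, 2.0, size=10_000)
lo, Z_hat, hi = bernstein_interval(w, Z_trw=2.0)
print(lo, Z_hat, hi)  # the interval should typically contain 1.0
```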
More rigorously, one can apply a stopping criterion [e.g., 5, 20] on n to guarantee a relative error ε with probability at least 1 − δ, using the bound on w(x); roughly, the expected number of samples will depend on Z_trw/Z, the relative accuracy of the variational bound.

4 Weighted Mini-bucket Importance Sampling

We have so far presented our results for tree reweighted BP on pairwise models, which approximates the model using combinations of trees. In this section, we give an extension of our results to general higher order models, and approximations based on combinations of low-treewidth graphs. Our extension is based on the weighted mini-bucket framework [7, 17, 16], but extensions based on other higher order generalizations of TRW, such as Globerson and Jaakkola [9], are also possible. We only sketch the main idea in this section.

We start by rewriting the distribution using the chain rule along some order o = [x_1, \ldots, x_p]:

f(x) = Z \prod_{k} p(x_k \mid x_{\mathrm{pa}(k)}), \qquad (9)

where pa(k), called the induced parent set of k, is the set of variables adjacent to x_k when it is eliminated along order o. The largest parent size ω := max_{k∈V} |pa(k)| is called the induced width of G along order o, and the computational complexity of exact variable elimination along order o is O(exp(ω)), which is intractable when ω is large. Weighted mini-bucket is an approximation method that avoids the O(exp(ω)) complexity by splitting each pa(k) into several smaller "mini-buckets" pa^ℓ(k), such that ∪_ℓ pa^ℓ(k) = pa(k), where the size of each pa^ℓ(k) is controlled by a predefined bound ibound ≥ |pa^ℓ(k)|; the ibound thus trades off computational complexity against approximation quality. We associate each pa^ℓ(k) with a nonnegative weight ρ_{kℓ}, such that Σ_ℓ ρ_{kℓ} = 1.

¹ The Markov lower bounds of Gogate and Dechter [11] have the undesirable property that they may not become tighter with increasing n, and may even decrease.
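The induced parent sets pa(k) and the induced width ω can be computed by simulating variable elimination: when a variable is eliminated, its not-yet-eliminated neighbours form its parent set and are connected pairwise ("fill-in" edges). A minimal sketch; the 3 × 3 grid and row-major order are our own illustrative choices, for which the induced width is 3:

```python
import itertools

def induced_parents(edge_list, order):
    """Induced parent sets pa(k) from eliminating variables along `order`."""
    adj = {v: set() for v in order}
    for a, b in edge_list:
        adj[a].add(b)
        adj[b].add(a)
    pa, eliminated = {}, set()
    for v in order:
        pa[v] = adj[v] - eliminated       # neighbours still alive
        for a, b in itertools.combinations(pa[v], 2):
            adj[a].add(b)                 # fill-in edges between parents
            adj[b].add(a)
        eliminated.add(v)
    return pa

# 3 x 3 grid graph, eliminated in row-major order.
grid_edges = ([((r, c), (r, c + 1)) for r in range(3) for c in range(2)]
              + [((r, c), (r + 1, c)) for r in range(2) for c in range(3)])
order = [(r, c) for r in range(3) for c in range(3)]
pa = induced_parents(grid_edges, order)
width = max(len(s) for s in pa.values())
print(width)  # 3
```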
The weighted mini-bucket algorithm in Liu [16] then frames a convex optimization to output an upper bound Z_wmb ≥ Z, together with a set of "pseudo-" conditional distributions b_{kℓ}(x_k | x_{pa^ℓ(k)}), such that

f(x) = Z_{\mathrm{wmb}} \prod_{k} \prod_{\ell} b_{k\ell}(x_k \mid x_{\mathrm{pa}^\ell(k)})^{\rho_{k\ell}}, \qquad (10)

which, intuitively speaking, can be viewed as approximating each conditional distribution p(x_k | x_{pa(k)}) with a geometric mean of the b_{kℓ}(x_k | x_{pa^ℓ(k)}); while we omit the details of weighted mini-bucket [17, 16] for space, what matters most for our purpose is the representation (10). As with TRW, we define a proposal distribution by replacing the geometric mean with an arithmetic mean:

q(x) = \prod_{k} \sum_{\ell} \rho_{k\ell}\, b_{k\ell}(x_k \mid x_{\mathrm{pa}^\ell(k)}). \qquad (11)

We can again use the AM-GM inequality to obtain the bound w(x) ≤ Z_wmb.

Proposition 4.1. Let w(x) = f(x)/q(x), where f(x) and q(x) satisfy (10) and (11), with Σ_ℓ ρ_{kℓ} = 1 and ρ_{kℓ} ≥ 0 for all k, ℓ. Then w(x) ≤ Z_wmb for all x ∈ X.

Proof. Apply the AM-GM inequality, ∏_ℓ b_{kℓ}(x_k | x_{pa^ℓ(k)})^{ρ_{kℓ}} ≤ Σ_ℓ ρ_{kℓ} b_{kℓ}(x_k | x_{pa^ℓ(k)}), for each k.

Note that the form of q(x) makes it convenient to sample by sequentially drawing each variable x_k from the mixture Σ_ℓ ρ_{kℓ} b_{kℓ}(x_k | x_{pa^ℓ(k)}) along the reverse order [x_p, \ldots, x_1]. The proposal q(x) can also be viewed as a mixture of a large number of models with induced width controlled by the ibound; this can be seen by expanding the product in (11):

q(x) = \sum_{\ell_1 \cdots \ell_p} \rho_{\ell_1 \cdots \ell_p}\, q_{\ell_1 \cdots \ell_p}(x), \quad \text{where} \;\; \rho_{\ell_1 \cdots \ell_p} = \prod_k \rho_{k\ell_k}, \;\; q_{\ell_1 \cdots \ell_p}(x) = \prod_k b_{k\ell_k}(x_k \mid x_{\mathrm{pa}^{\ell_k}(k)}).

5 Experiments

We demonstrate our algorithm on synthetic Ising models and real-world models from recent UAI inference challenges. We show that our TRW proposal can provide better estimates than other proposals constructed from mean field or loopy BP, particularly when those methods underestimate the partition function; in that case, the proposal may be too peaked and fail to approach the true value even for extremely large sample sizes n. Using the empirical Bernstein inequality, our TRW proposal also provides strong probabilistic upper and lower bounds.
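Sequential sampling from the mixture proposal (11) can be sketched as follows. The tiny binary model, the choice of two mini-buckets {x1} and {x2} for x3, and every probability table below are invented for illustration; they are not the output of a weighted mini-bucket run:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x1, x2 have empty parent sets; x3's parent set {x1, x2} is split
# into mini-buckets {x1} and {x2} with weights rho = (0.5, 0.5).
b1 = np.array([0.3, 0.7])                        # b(x1)
b2 = np.array([0.6, 0.4])                        # b(x2)
b3_mb1 = np.array([[0.8, 0.2], [0.1, 0.9]])      # b(x3 | x1)
b3_mb2 = np.array([[0.5, 0.5], [0.25, 0.75]])    # b(x3 | x2)
rho = np.array([0.5, 0.5])

def sample_q():
    """Draw x ~ q(x) = b(x1) b(x2) [rho1 b(x3|x1) + rho2 b(x3|x2)],
    sampling each variable from its mixture of mini-bucket conditionals."""
    x1 = rng.choice(2, p=b1)
    x2 = rng.choice(2, p=b2)
    mix = rho[0] * b3_mb1[x1] + rho[1] * b3_mb2[x2]
    x3 = rng.choice(2, p=mix)
    return (x1, x2, x3), b1[x1] * b2[x2] * mix[x3]

# Sanity check: q is a proper distribution over the 8 joint states.
total = sum(b1[x1] * b2[x2] * (rho[0] * b3_mb1[x1, x3] + rho[1] * b3_mb2[x2, x3])
            for x1, x2, x3 in itertools.product(range(2), repeat=3))
print(total)
```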
When the model is relatively easy or n is large, our upper and lower bounds are close, demonstrating that the estimate has high confidence.

5.1 MRFs on 10 × 10 Grids

We illustrate our method using pairwise Markov random fields (2) on a 10 × 10 grid. We start with a simple Ising model with θ_k(x_k) = σ_s x_k and θ_{kl}(x_k, x_l) = σ_p x_k x_l, with x_k ∈ {−1, 1}, where σ_s represents the external field and σ_p the correlation. We fix σ_s = 0.01 and vary σ_p from −1.5 (strong negative correlation) to 1.5 (strong positive correlation). Different σ_p lead to different inference hardness: inference is easy when the correlation is either very strong (|σ_p| large) or very weak (|σ_p| small), but difficult for an intermediate range of values, corresponding to a phase transition.

[Figure 1: Experiments on 10 × 10 Ising models with interaction strength σ_p ranging from strong negative (−1.5) to strong positive (1.5). Panels plot log Ẑ − log Z: (a) fixed n = 10^4; (b) fixed n = 10^4; (c) fixed n = 10^7; (d) fixed σ_p = −0.5 as a function of sample size n. Curves compare IS(TRW), IS(MF), IS(LBP), loopy BP, TRW, MF, the TRW/MF interval, the Bernstein interval, and Markov (TRW).]

We first run the standard variational algorithms, including loopy BP (LBP), tree reweighted BP (TRW), and mean field (MF). We then calculate importance sampling estimators based on each of the three algorithms. The TRW trees are chosen by adding random spanning trees until their union covers the grid; we assign uniform probability ρ_T to each tree. The LBP proposal follows Gogate [10], constructing a (randomly selected) tree structured proposal based on the LBP pseudomarginals. The MF proposal is q(x) = ∏_{k∈V} q_k(x_k), where the q_k(x_k) are the mean field beliefs. Figure 1(a) shows the results of the IS estimates based on a relatively small number of importance samples (n = 10^4).
In this case the TRW proposal outperforms both the MF and LBP proposals; all the methods degrade when σ_p ≈ ±0.5, corresponding to inherently more difficult inference. However, the TRW proposal converges to the correct values when the correlation is strong (e.g., |σ_p| > 1), while the MF and LBP proposals underestimate the true value, indicating that these proposals are too peaked and miss a significant amount of the probability mass of the target. Examining the deterministic estimates, we note that the LBP approximation, which can be shown to be a lower bound on these models [27, 24], is also significantly worse than IS with the TRW proposal, and slightly worse than IS based on the LBP proposal. The TRW and MF bounds, of course, are far less accurate than either LBP or the IS methods, and are shown separately in Figure 1(b). This suggests that it is often beneficial to follow the variational procedure with an importance sampling process, and to use the corresponding IS estimators, rather than the variational approximations, to estimate the partition function.

Figure 1(b) also compares the 95% confidence interval of the IS based on the TRW proposal (filled with red) with the interval formed by the TRW upper bound and the MF lower bound (filled with green). We can see that the Bernstein upper bound is much tighter than the TRW upper bound, although at the cost of turning a deterministic bound into a (1 − δ) probabilistic bound. On the other hand, the Bernstein interval fails to report a meaningful lower bound when the model is difficult (σ_p ≈ ±0.5), because n = 10^4 is small relative to the difficulty of the model. As shown in Figure 1(c), our method eventually produces both tight upper and lower bounds as the sample size increases. Figure 1(d) shows the Bernstein bound as we increase n on a fixed model with σ_p = −0.5, which is relatively difficult according to Figure 1. Of the methods shown, our IS estimator becomes the most accurate by around n = 10^3 samples.
We also show the Markov lower bound Ẑ_markov = Ẑδ suggested by Gogate [10]; it provides non-negative lower bounds for all sample sizes, but does not converge to the true value even as n → +∞ (in fact, it converges to Zδ).

[Figure 2: MRF with mixed interactions: log Ẑ − log Z vs. pairwise strength σ_p for IS(TRW) with its Bernstein interval, loopy BP, and IS(BP).]

In addition to the simple Ising model, we also tested grid models with normally distributed parameters: θ_k(x_k) ∼ N(0, σ_s^2) and θ_{kl}(x_k, x_l) ∼ N(0, σ_p^2). Figure 2 shows the results when σ_s = 0.01 and we vary σ_p. In this case, LBP tends to overestimate the partition function, and IS with the LBP proposal performs quite well (similarly to our TRW IS); together with the previous example, this illustrates that it is hard to know in advance whether BP will result in a high- or low-variance proposal. On this model, mean field IS is significantly worse and is not shown in the figure.

[Figure 3: The Bernstein interval (log Ẑ − log Z for WMB, IS(WMB), and Markov (WMB)) on (a) BN 6 and (b) BN 11, using ibound = 1 and different sample sizes n. These problems are relatively easy for variational approximations; we illustrate that our method gives tight bounds despite using no more memory than the original model.]

[Figure 4: Results on a harder instance, pedigree20, at (a) ibound = 8 and (b) ibound = 15 and different n, comparing IS(WMB) and GBP.]

5.2 UAI Instances

We test the weighted mini-bucket (WMB) version of our algorithm on instances from past UAI approximate inference challenges. For space reasons, we only report a few instances for illustration.

BN Instances.
Figure 3 shows two Bayes net instances, BN 6 (true log Z = −58.41) and BN 11 (true log Z = −39.37). These examples are very easy for loopy BP, which estimates log Z nearly exactly, but of course gives no accuracy guarantees. For comparison, we run our WMB IS estimator using ibound = 1, i.e., cliques equal to the original factors. We find that we obtain tight confidence intervals by around 10^4 to 10^5 samples. In contrast, the method of Dagum and Luby [4], using the normalized distribution as a proposal, would require a number of samples proportional to 1/Z: approximately 10^25 and 10^17, respectively.

Pedigree Instances. We next show results for our method on pedigree20 (log Z = −68.22, induced width ω = 21) at various ibounds; Figure 4 shows the results for ibound = 8 and 15. For comparison, we also evaluate GBP, defined on a junction graph with cliques found in the same way as for WMB [18], and with complexity controlled by the same ibound. Again, LBP and GBP generally give accurate estimates; the absolute error of LBP (not shown) is about 0.7, reducing to 0.4 and 0.2 at ibound = 8 and 15, respectively. The initial WMB bounds overestimate by 6.3 and 2.4 at ibound = 8 and 15, and are much less accurate. However, our method surpasses GBP's accuracy with a modest number of samples: for example, with ibound = 15 (Figure 4b), our IS estimator is more accurate than GBP with fewer than 100 samples, and our 95% Bernstein confidence interval passes GBP at roughly 1000 samples.

6 Conclusion

We propose a simple approximate inference method that augments convex variational bounds with importance sampling. Our formulation allows us to frame the variational optimization as designing a proposal that minimizes an upper bound on our estimator's variance, providing guarantees on the goodness of the resulting proposal.
More importantly, this enables the construction of any-time probabilistic bounds that improve quickly and directly on state-of-the-art variational bounds, and provide certificates of accuracy given enough samples relative to the error in the initial bound. One potential future direction is whether one can adaptively improve the proposal during sampling.

Acknowledgements

This work is supported in part by VITALITE, under the ARO MURI program (Award number W911NF-11-1-0391); NSF grants IIS-1065618 and IIS-1254071; and by the United States Air Force under Contract No. FA8750-14-C-0011 under the DARPA PPAML program.

References

[1] T. Bengtsson, P. Bickel, and B. Li. Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. In Probability and Statistics: Essays in Honor of David A. Freedman, pages 316–334. Institute of Mathematical Statistics, 2008.
[2] J. Cheng. Sampling algorithms for estimating the mean of bounded random variables. Computational Statistics, 16(1):1–23, 2001.
[3] J. Cheng and M. Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. Journal of Artificial Intelligence Research, 2000.
[4] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. Artificial Intelligence, 93(1):1–27, 1997.
[5] P. Dagum, R. Karp, M. Luby, and S. Ross. An optimal algorithm for Monte Carlo estimation. SIAM Journal on Computing, 29:1484–1496, 2000.
[6] N. De Freitas, P. Højen-Sørensen, M. Jordan, and S. Russell. Variational MCMC. In UAI, 2001.
[7] R. Dechter and I. Rish. Mini-buckets: A general scheme for bounded inference. Journal of the ACM, 50(2):107–153, 2003.
[8] R. Fung and K. Chang. Weighing and integrating evidence for stochastic simulation in Bayesian networks. In UAI, 1990.
[9] A. Globerson and T. Jaakkola. Approximate inference using conditional entropy decompositions. In UAI, pages 130–138, 2007.
[10] V. Gogate.
Sampling Algorithms for Probabilistic Graphical Models with Determinism. PhD thesis, UC Irvine, 2009.
[11] V. Gogate and R. Dechter. Sampling-based lower bounds for counting queries. Intelligenza Artificiale, 5(2):171–188, 2011.
[12] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.
[13] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[14] S. Lauritzen. Graphical Models. Oxford University Press, 1996.
[15] J. Liu. Monte Carlo Strategies in Scientific Computing. Springer Science & Business Media, 2008.
[16] Q. Liu. Reasoning and Decisions in Probabilistic Graphical Models: A Unified Framework. PhD thesis, UC Irvine, 2014.
[17] Q. Liu and A. Ihler. Bounding the partition function using Hölder's inequality. In ICML, 2011.
[18] R. Mateescu, K. Kask, V. Gogate, and R. Dechter. Join-graph propagation algorithms. JAIR, 37(1):279–328, 2010.
[19] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, pages 115–124, 2009.
[20] V. Mnih, C. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, 2008.
[21] M.-S. Oh and J. Berger. Adaptive importance sampling in Monte Carlo integration. J. Stat. Comput. Simul., 41(3-4):143–168, 1992.
[22] F. Orabona, T. Hazan, A. Sarwate, and T. Jaakkola. On measure concentration of random maximum a-posteriori perturbations. In ICML, 2014.
[23] G. Papandreou and A. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.
[24] N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In NIPS, 2012.
[25] T. Salimans, D. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[26] R. Shachter and M. Peot. Simulation approaches to general probabilistic inference on belief networks. In UAI, 1990.
[27] E. Sudderth, M. Wainwright, and A.
Willsky. Loop series and Bethe variational bounds in attractive graphical models. In NIPS, pages 1425–1432, 2007.
[28] M. Wainwright. Estimating the wrong graphical model: Benefits in the computation-limited setting. JMLR, 7:1829–1859, 2006.
[29] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[30] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[31] Y. Wexler and D. Geiger. Importance sampling via variational optimization. In UAI, 2007.
[32] C. Yuan and M. Druzdzel. An importance sampling algorithm based on evidence pre-propagation. In UAI, pages 624–631, 2002.
[33] C. Yuan and M. Druzdzel. Importance sampling algorithms for Bayesian networks: Principles and performance. Mathematical and Computer Modelling, 43(9):1189–1207, 2006.
[34] C. Yuan and M. Druzdzel. Generalized evidence pre-propagated importance sampling for hybrid Bayesian networks. In AAAI, volume 7, pages 1296–1302, 2007.
[35] C. Yuan and M. Druzdzel. Theoretical analysis and practical insights on importance sampling in Bayesian networks. International Journal of Approximate Reasoning, 46(2):320–333, 2007.
The Human Kernel Andrew Gordon Wilson CMU Christoph Dann CMU Christopher G. Lucas University of Edinburgh Eric P. Xing CMU Abstract Bayesian nonparametric models, such as Gaussian processes, provide a compelling framework for automatic statistical modelling: these models have a high degree of flexibility, and automatically calibrated complexity. However, automating human expertise remains elusive; for example, Gaussian processes with standard kernels struggle on function extrapolation problems that are trivial for human learners. In this paper, we create function extrapolation problems and acquire human responses, and then design a kernel learning framework to reverse engineer the inductive biases of human learners across a set of behavioral experiments. We use the learned kernels to gain psychological insights and to extrapolate in humanlike ways that go beyond traditional stationary and polynomial kernels. Finally, we investigate Occam’s razor in human and Gaussian process based function learning. 1 Introduction Truly intelligent systems can learn and make decisions without human intervention. Therefore it is not surprising that early machine learning efforts, such as the perceptron, have been neurally inspired [1]. In recent years, probabilistic modelling has become a cornerstone of machine learning approaches [2, 3, 4], with applications in neural processing [5, 6, 3, 7] and human learning [8, 9]. From a probabilistic perspective, the ability for a model to automatically discover patterns and perform extrapolation is determined by its support (which solutions are a priori possible), and inductive biases (which solutions are a priori likely). Ideally, we want a model to be able to represent many possible solutions to a given problem, with inductive biases which can extract intricate structure from limited data. 
For example, if we are performing character recognition, we would want our support to contain a large collection of potential characters, accounting even for rare writing styles, and our inductive biases to reasonably reflect the probability of encountering each character [10]. The support and inductive biases of a wide range of probabilistic models, and thus the ability for these models to learn and generalise, is implicitly controlled by a covariance kernel, which determines the similarities between pairs of datapoints. For example, Bayesian basis function regression (including, e.g., all polynomial models), splines, and infinite neural networks, can all exactly be represented as a Gaussian process with a particular kernel function [11, 10, 12]. Moreover, the Fisher kernel provides a mechanism to reformulate probabilistic generative models as kernel methods [13]. In this paper, we wish to reverse engineer human-like support and inductive biases for function learning, using a Gaussian process (GP) based kernel learning formalism. In particular: • We create new human function learning datasets, including novel function extrapolation problems and multiple-choice questions that explore human intuitions about simplicity and explanatory power, available at http://functionlearning.com/. • We develop a statistical framework for kernel learning from the predictions of a model, conditioned on the (training) information that model is given. The ability to sample multiple sets of posterior predictions from a model, at any input locations of our choice, given any dataset of our choice, provides unprecedented statistical strength for kernel learning. By contrast, standard kernel learning involves fitting a kernel to a fixed dataset that can only be viewed as a single realisation from a stochastic process. Our framework leverages spectral mixture kernels [14] and non-parametric estimates. 
• We exploit this framework to directly learn kernels from human responses, which contrasts with all prior work on human function learning, where one compares a fixed model to human responses. Further, we consider individual rather than averaged human extrapolations. • We interpret the learned kernels to gain scientific insights into human inductive biases, including the ability to adapt to new information for function learning. We also use the learned “human kernels” to inspire new types of covariance functions which can enable extrapolation on problems which are difficult for conventional GP models. • We study Occam’s razor in human function learning, and compare to GP marginal likelihood based model selection, which we show is biased towards under-fitting. • We provide an expressive quantitative means to compare existing machine learning algorithms with human learning, and a mechanism to directly infer human prior representations. Our work is intended as a preliminary step towards building probabilistic kernel machines that encapsulate human-like support and inductive biases. Since state of the art machine learning methods perform conspicuously poorly on a number of extrapolation problems which would be easy for humans [12], such efforts have the potential to help automate machine learning and improve performance on a wide range of tasks – including settings which are difficult for humans to process (e.g., big data and high dimensional problems). Finally, the presented framework can be considered in a more general context, where one wishes to efficiently reverse engineer interpretable properties of any model (e.g., a deep neural network) from its predictions. We further describe related work in section 2. In section 3 we introduce a framework for learning kernels from human responses, and employ this framework in section 4. In the supplement, we provide background on Gaussian processes [11], which we recommend as a review.
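The introduction's claim that Bayesian basis function regression is exactly a Gaussian process can be checked numerically: with weights w ~ N(0, I), the function f(x) = w·φ(x) is a zero-mean GP with kernel k(x, x′) = φ(x)·φ(x′). A minimal sketch, assuming a cubic polynomial basis purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bayesian basis-function regression f(x) = w . phi(x) with w ~ N(0, I)
# is a zero-mean GP with kernel k(x, x') = phi(x) . phi(x').
def phi(x):
    # Hypothetical cubic polynomial basis, purely for illustration.
    return np.stack([np.ones_like(x), x, x ** 2, x ** 3], axis=-1)

x = np.linspace(-1.0, 1.0, 7)
K_implied = phi(x) @ phi(x).T           # induced GP covariance matrix

# Monte Carlo check: the empirical covariance of many basis-regression
# draws matches the implied kernel matrix up to sampling error.
W = rng.standard_normal((200_000, 4))   # many weight draws w ~ N(0, I)
F = W @ phi(x).T                        # function draws at the inputs x
K_emp = F.T @ F / len(F)
max_dev = np.max(np.abs(K_emp - K_implied))
```

The same construction, with different (possibly infinite) bases, recovers splines and infinite neural networks as GPs.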
2 Related Work Historically, efforts to understand human function learning have focused on rule-based relationships (e.g., polynomial or power-law functions) [15, 16], or interpolation based on similarity learning [17, 18]. Griffiths et al. [19] were the first to note that a Gaussian process framework can be used to unify these two perspectives. They introduced a GP model with a mixture of RBF and polynomial kernels to reflect the human ability to learn arbitrary smooth functions while still identifying simple parametric functions. They applied this model to a standard set of evaluation tasks, comparing predictions on simple functions to averaged human judgments, and interpolation performance to human error rates. Lucas et al. [20, 21] extended this model to accommodate a wider range of phenomena, and to shed light on human predictions given sparse data. Our work complements these pioneering Gaussian process models and prior work on human function learning, but has many features that distinguish it from previous contributions: (1) rather than iteratively building models and comparing them to human predictions, based on fixed assumptions about the regularities humans can recognize, we are directly learning the properties of the human model through advanced kernel learning techniques; (2) essentially all models of function learning, including past GP models, are evaluated on averaged human responses, setting aside individual differences and erasing critical statistical structure in the data1. By contrast, our approach uses individual responses; (3) many recent model evaluations rely on relatively small and heterogeneous sets of experimental data. The evaluation corpora using recent reviews [22, 19] are limited to a small set of parametric forms, and more detailed analyses tend to involve only linear, quadratic and logistic functions. Other projects have collected richer data [23, 24], but we are only aware of coarse-grained, qualitative analyses using these data. 
Moreover, experiments that depart from simple parametric functions tend to use very noisy data. Thus it is unsurprising that participants tend to revert to the prior mode that arises in almost all function learning experiments: linear functions, especially with slope-1 and intercept-0 [23, 24] (but see [25]). In a departure from prior work, we create original function learning problems with no simple parametric description and no noise – where it is obvious that human learners cannot resort to simple rules – and acquire the human data ourselves. We hope these novel datasets will inspire more detailed findings on function learning; (4) we learn kernels from human responses, which (i) provide insights into the biases driving human function learning and the human ability to progressively adapt to new information, and (ii) enable human-like extrapolations on problems that are difficult for conventional GP models; and (5) we investigate Occam’s razor in human function learning and nonparametric model selection. (Footnote 1: For example, averaging prior draws from a Gaussian process would remove the structure necessary for kernel learning, leaving us simply with an approximation of the prior mean function.) 3 The Human Kernel The rule-based and associative theories for human function learning can be unified as part of a Gaussian process framework. Indeed, Gaussian processes contain a large array of probabilistic models, and have the non-parametric flexibility to produce infinitely many consistent (zero training error) fits to any dataset. Moreover, the support and inductive biases of a GP are encapsulated by a covariance kernel. Our goal is to learn GP covariance kernels from predictions made by humans on function learning experiments, to gain a better understanding of human learning, and to inspire new machine learning models, with improved extrapolation performance, and minimal human intervention.
3.1 Problem Setup A (human) learner is given access to data y at training inputs X, and makes predictions y_* at testing inputs X_*. We assume the predictions y_* are samples from the learner’s posterior distribution over possible functions, following results showing that human inferences and judgments resemble posterior samples across a wide range of perceptual and decision-making tasks [26, 27, 28]. We assume we can obtain multiple draws of y_* for a given X and y. 3.2 Kernel Learning In standard GP applications, one has access to a single realisation of data y, and performs kernel learning by optimizing the marginal likelihood of the data with respect to covariance function hyperparameters θ (supplement). However, with only a single realisation of data we are highly constrained in our ability to learn an expressive kernel function – requiring us to make strong assumptions, such as RBF covariances, to extract useful information from the data. One can see this by simulating N datapoints from a GP with a known kernel, and then visualising the empirical estimate y y^T of the known covariance matrix K. The empirical estimate, in most cases, will look nothing like K. However, perhaps surprisingly, if we have even a small number of multiple draws from a GP, we can recover a wide array of covariance matrices K using the empirical estimator Y Y^T / M − ȳ ȳ^T, where Y is an N × M data matrix, for M draws, and ȳ is a vector of empirical means. The typical goal in choosing kernels is to use training data to find one that minimizes some loss function, e.g., generalisation error, but here we want to reverse engineer the kernel of a model – here, whatever model human learners are tacitly using – that has been applied to training data, based on both training data and predictions of the model.
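The behaviour of the estimator Y Y^T / M − ȳ ȳ^T can be checked numerically. The sketch below (all sizes and the ground-truth RBF kernel are chosen only for illustration) draws M samples from a known GP prior and compares the empirical estimate to the true covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a known RBF covariance K at N inputs, and M
# independent zero-mean GP draws collected in an N x M matrix Y.
N, M = 30, 500
X = np.sort(rng.uniform(0.0, 1.0, N))
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.1 ** 2)
L = np.linalg.cholesky(K + 1e-10 * np.eye(N))   # jitter for stability
Y = L @ rng.standard_normal((N, M))             # N x M data matrix

# The estimator from the text: Y Y^T / M - ybar ybar^T.
ybar = Y.mean(axis=1, keepdims=True)
K_hat = Y @ Y.T / M - ybar @ ybar.T
rel_err = np.linalg.norm(K_hat - K) / np.linalg.norm(K)

# A single draw gives the rank-one estimate y y^T, which looks nothing
# like K; a few hundred draws recover K to within a few percent.
```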
If we have a single sample extrapolation, y_*, at test inputs X_*, based on training points y, and Gaussian noise, the probability p(y_* | y, k_θ) is given by the posterior predictive distribution of a Gaussian process, with f_* ≡ y_*. One can use this probability as a utility function for kernel learning, much like the marginal likelihood. See the supplement for details of these distributions. Our problem setup affords unprecedented opportunities for flexible kernel learning. If we have multiple sample extrapolations from a given set of training data, y_*^(1), y_*^(2), …, y_*^(W), then the predictive conditional marginal likelihood becomes ∏_{j=1}^{W} p(y_*^(j) | y, k_θ). One could apply this new objective, for instance, if we were to view different human extrapolations as multiple draws from a common generative model. Clearly this assumption is not entirely correct, since different people will have different biases, but it naturally suits our purposes: we are not as interested in the differences between people, as the shared inductive biases, and assuming multiple draws from a common generative model provides extraordinary statistical strength for learning these shared biases. Ultimately, we will study both the differences and similarities between the responses. One option for kernel learning is to specify a flexible parametric form for k and then learn θ by optimizing our chosen objective functions. For this approach, we choose the recent spectral mixture kernels of Wilson and Adams [14], which can model a wide range of stationary covariances, and are intended to help automate kernel selection. However, we note that our objective function can readily be applied to other parametric forms. We also consider empirical non-parametric kernel estimation, since non-parametric kernel estimators can have the flexibility to converge to any positive definite kernel, and thus become appealing when we have the signal strength provided by multiple draws from a stochastic process.
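As a concrete sketch of this objective, the snippet below scores candidate kernels by the predictive conditional marginal likelihood ∏_j p(y_*^(j) | y, k_θ). For brevity it uses a single-length-scale RBF kernel in place of a spectral mixture, and synthetic posterior draws stand in for human extrapolations; all names and parameter values are illustrative only:

```python
import numpy as np

def rbf(A, B, ell):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * d ** 2 / ell ** 2)

def posterior(X, y, Xs, ell, jitter=1e-4):
    """GP posterior predictive mean and covariance at test inputs Xs."""
    Kxx = rbf(X, X, ell) + jitter * np.eye(len(X))
    Ksx = rbf(Xs, X, ell)
    mu = Ksx @ np.linalg.solve(Kxx, y)
    S = rbf(Xs, Xs, ell) + jitter * np.eye(len(Xs)) - Ksx @ np.linalg.solve(Kxx, Ksx.T)
    return mu, S

def pred_cond_loglik(X, y, Xs, draws, ell):
    """log prod_j p(y*^(j) | y, k_ell): one Gaussian log-density per draw."""
    mu, S = posterior(X, y, Xs, ell)
    Sinv = np.linalg.inv(S)
    _, logdet = np.linalg.slogdet(S)
    n = len(mu)
    return sum(-0.5 * ((d - mu) @ Sinv @ (d - mu) + logdet + n * np.log(2 * np.pi))
               for d in draws)

# Synthetic "extrapolations": 20 posterior draws from the ell = 0.3 model.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * X)
Xs = np.linspace(1.1, 1.5, 5)
mu, S = posterior(X, y, Xs, 0.3)
Ls = np.linalg.cholesky(S)
draws = [mu + Ls @ rng.standard_normal(len(Xs)) for _ in range(20)]

# The objective prefers the length-scale that generated the draws.
scores = {ell: pred_cond_loglik(X, y, Xs, draws, ell) for ell in (0.05, 0.3, 1.0)}
```

The same scoring loop applies unchanged if `rbf` is replaced by a parametrized spectral mixture kernel and `draws` by recorded human extrapolations.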
4 Human Experiments We wish to discover kernels that capture human inductive biases for learning functions and extrapolating from complex or ambiguous training data. We start by testing the consistency of our kernel learning procedure in section 4.1. In section 4.2, we study progressive function learning. [Figure 1: Reconstructing a kernel used for predictions. Panels: (a) 1 Posterior Draw, (b) 10 Posterior Draws, (c) 20 Posterior Draws. Training data were generated with an RBF kernel (green), and multiple independent posterior predictions were drawn from a GP with a spectral-mixture prediction kernel (blue). As the number of posterior draws increases, the learned spectral-mixture kernel (red) converges to the prediction kernel.] Indeed, human participants will have a different representation (e.g., learned kernel) for different observed data, and examining how these representations progressively adapt with new information can shed light on our prior biases. In section 4.3, we learn human kernels to extrapolate on tasks which are difficult for Gaussian processes with standard kernels. In section 4.4, we study model selection in human function learning. All human participants were recruited using Amazon’s Mechanical Turk and saw experimental materials provided at http://functionlearning.com. When we are considering stationary ground truth kernels, we use a spectral mixture for kernel learning; otherwise, we use a non-parametric empirical estimate. 4.1 Reconstructing Ground Truth Kernels We use simulations with a known ground truth to test the consistency of our kernel learning procedure, and the effects of multiple posterior draws, in converging to a kernel which has been used to make predictions.
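The spectral mixture kernel [14] used throughout these experiments has, in one dimension, the closed form k(τ) = Σ_q w_q exp(−2π²τ²v_q) cos(2πτμ_q), where τ = x − x′. A minimal sketch, with component parameters chosen only for illustration:

```python
import numpy as np

def spectral_mixture(tau, w, mu, v):
    """1-D spectral mixture kernel of Wilson & Adams [14]:
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q)."""
    tau = np.asarray(tau, dtype=float)[..., None]
    return np.sum(w * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * v)
                  * np.cos(2.0 * np.pi * tau * mu), axis=-1)

tau = np.array([0.0, 0.25, 0.5, 1.0])

# A component with mean frequency mu_q = 0 gives pure RBF-style decay;
# a nonzero mu_q contributes quasi-periodic structure.
k_rbf_like = spectral_mixture(tau, w=np.array([1.0]), mu=np.array([0.0]), v=np.array([0.05]))
k_periodic = spectral_mixture(tau, w=np.array([1.0]), mu=np.array([1.0]), v=np.array([0.01]))
```

Sums of such components can approximate a wide range of stationary kernels, which is why the same family serves both as the prediction kernel and as the learned kernel in the reconstruction experiment below.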
We sample 20 datapoints y from a GP with RBF kernel (the supplement describes GPs), k_RBF(x, x′) = exp(−0.5 ‖x − x′‖² / ℓ²), at random input locations. Conditioned on these data, we then sample multiple posterior draws, y_*^(1), …, y_*^(W), each containing 20 datapoints, from a GP with a spectral mixture kernel [14] with two components (the prediction kernel). The prediction kernel has deliberately not been trained to fit the data kernel. To reconstruct the prediction kernel, we learn the parameters θ of a randomly initialized spectral mixture kernel with five components, by optimizing the predictive conditional marginal likelihood ∏_{j=1}^{W} p(y_*^(j) | y, k_θ) with respect to θ. Figure 1 compares the learned kernels for different numbers of posterior draws W against the data kernel (RBF) and the prediction kernel (spectral mixture). For a single posterior draw, the learned kernel captures the high-frequency component of the prediction kernel but fails at reconstructing the low-frequency component. Only with multiple draws does the learned kernel capture the longer-range dependencies. The fact that the learned kernel converges to the prediction kernel, which is different from the data kernel, shows the consistency of our procedure, which could be used to infer aspects of human inductive biases. 4.2 Progressive Function Learning We asked humans to extrapolate beyond training data in two sets of 5 functions, each drawn from GPs with known kernels. The learners extrapolated on these problems in sequence, and thus had an opportunity to progressively learn about the underlying kernel in each set. To further test progressive function learning, we repeated the first function at the end of the experiment, for six functions in each set. We asked for extrapolation judgments because they provide more information about inductive biases than interpolation, and pose difficulties for conventional GP kernels [14, 12, 29].
The observed functions are shown in black in Figure 2, the human responses in blue, and the true extrapolation in dashed black. In the first two rows, the black functions are drawn from a GP with a rational quadratic (RQ) kernel [11] (for heavy tailed correlations); there are 20 participants. We show the learned human kernel, the data generating kernel, the human kernel learned from a spectral mixture, and an RBF kernel trained only on the data, in Figures 2(g) and 2(h), respectively corresponding to Figures 2(a) and 2(f). Initially, both the human learners and RQ kernel show heavy tailed behaviour, and a bias for decreasing correlations with distance in the input space, but the human learners have a high degree of variance. [Figure 2: Progressive Function Learning. Humans are shown functions in sequence and asked to make extrapolations. Observed data are in black, human predictions in blue, and true extrapolations in dashed black. (a)-(f): observed data are drawn from a rational quadratic kernel, with identical data in (a) and (f). (g): learned human and RBF kernels on (a) alone, and (h): on (f), after seeing the data in (a)-(e). The true data generating rational quadratic kernel is shown in red. (i)-(n): observed data are drawn from a product of spectral mixture and linear kernels with identical data in (i) and (n). (o): the empirical estimate of the human posterior covariance matrix from all responses in (i)-(n). (p): the true posterior covariance matrix for (i)-(n).] By the time they have seen Figure 2(h), they are
more confident in their predictions, and more accurately able to estimate the true signal variance of the function. Visually, the extrapolations look more confident and reasonable. Indeed, the human learners adapt their representations (e.g., learned kernels) to the data they observe. However, we can see in Figure 2(f) that the human learners are still over-estimating the tails of the kernel, perhaps suggesting a strong prior bias for heavy-tailed correlations. The learned RBF kernel, by contrast, cannot capture the heavy tailed nature of the training data (long range correlations), due to its Gaussian parametrization. Moreover, the learned RBF kernel underestimates the signal variance of the data, because it overestimates the noise variance (not shown), to explain away the heavy tailed properties of the data (its model misspecification). In the second two rows, we consider a problem with highly complex structure, and only 10 participants. Here, the functions are drawn from a product of spectral mixture and linear kernels. As the participants see more functions, they appear to expect linear trends, and become more similar in their predictions. In Figures 2(o) and 2(p), we show the learned and true predictive correlation matrices using empirical estimators which indicate similar correlation structure. 4.3 Discovering Unconventional Kernels The experiments reported in this section follow the same general procedure described in Section 4.2. In this case, 40 human participants were asked to extrapolate from two single training sets, in counterbalanced order: a sawtooth function (Figure 3(a)), and a step function (Figure 3(b)), with training data shown as dashed black lines.
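The cluster-then-sample pipeline applied to these responses below can be sketched as follows; here a synthetic noisy sawtooth stands in for one cluster of human responses (the real clusters come from the behavioural data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for one cluster of human responses: W noisy
# sawtooth extrapolations over N test inputs.
N, W = 50, 25
x = np.linspace(0.0, 1.0, N)
saw = 2.0 * (5.0 * x - np.floor(5.0 * x)) - 1.0      # sawtooth in [-1, 1]
R = np.stack([saw + 0.1 * rng.standard_normal(N) for _ in range(W)])

# Empirical mean and covariance of the cluster: the "human kernel".
mu = R.mean(axis=0)
C = np.cov(R, rowvar=False) + 1e-6 * np.eye(N)       # jitter for Cholesky

# New draws from N(mu, C) replicate the cluster's qualitative behaviour.
sample = mu + np.linalg.cholesky(C) @ rng.standard_normal(N)
```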
[Figure 3: Learning Unconventional Kernels. (a)-(c): sawtooth function (dashed black), and three clusters of human extrapolations. (d): empirically estimated human covariance matrix for (a). (e)-(g): corresponding posterior draws for (a)-(c) from empirically estimated human covariance matrices. (h): posterior predictive draws from a GP with a spectral mixture kernel learned from the dashed black data. (i)-(j): step function (dashed black), and two clusters of human extrapolations. (k) and (l) are the empirically estimated human covariance matrices for (i) and (j), and (m) and (n) are posterior samples using these matrices. (o) and (p) are respectively spectral mixture and RBF kernel extrapolations from the data in black.] These types of functions are notoriously difficult for standard Gaussian process kernels [11], due to sharp discontinuities and non-stationary behaviour. In Figures 3(a), 3(b), 3(c), we used agglomerative clustering to process the human responses into three categories, shown in purple, green, and blue. The empirical covariance matrix of the first cluster (Figure 3(d)) shows the dependencies of the sawtooth form that characterize this cluster. In Figures 3(e), 3(f), 3(g), we sample from the learned human kernels, following the same colour scheme. The samples appear to replicate the human behaviour, and the purple samples provide reasonable extrapolations. By contrast, posterior samples from a GP with a spectral mixture kernel trained on the black data in this case quickly revert to a prior mean, as shown in Fig 3(h).
The data are sufficiently sparse, non-differentiable, and non-stationary, that the spectral mixture kernel is less inclined to produce a long range extrapolation than human learners, who attempt to generalise from a very small amount of information. For the step function, we clustered the human extrapolations based on response time and total variation of the predicted function. Responses that took between 50 and 200 seconds and did not vary by more than 3 units, shown in Figure 3(i), appeared reasonable. The other responses are shown in Figure 3(j). The empirical covariance matrices of both sets of predictions in Figures 3(k) and 3(l) show the characteristics of the responses. While the first matrix exhibits a block structure indicating step-functions, the second matrix shows fast changes between positive and negative dependencies characteristic for the high-frequency responses. Posterior sample extrapolations using the empirical human kernels are shown in Figures 3(m) and 3(n). In Figures 3(o) and 3(p) we show posterior samples from GPs with spectral mixture and RBF kernels, trained on the black data (e.g., given the same information as the human learners). The spectral mixture kernel is able to extract some structure (some horizontal and vertical movement), but is overconfident, and unconvincing compared to the human kernel extrapolations. The RBF kernel is unable to learn much structure in the data. 4.4 Human Occam’s Razor If you were asked to predict the next number in the sequence 9, 15, 21, …, you are likely more inclined to guess 27 than 149.5. However, we can produce either answer using different hypotheses that are entirely consistent with the data. Occam’s razor describes our natural tendency to favour the simplest hypothesis that fits the data, and is of foundational importance in statistical model selection.
For example, MacKay [30] argues that Occam’s razor is automatically embodied by the marginal likelihood in performing Bayesian inference: indeed, in our number sequence example, marginal likelihood computations show that 27 is millions of times more probable than 149.5, even if the prior odds are equal. Occam’s razor is vitally important in nonparametric models such as Gaussian processes, which have the flexibility to represent infinitely many consistent solutions to any given problem, but avoid overfitting through Bayesian inference. For example, the marginal likelihood of a Gaussian process (supplement) separates into automatically calibrated model fit and model complexity terms, sometimes referred to as automatic Occam’s razor [31]. [Figure 4: Bayesian Occam’s Razor. (a) The marginal likelihood (evidence) vs. all possible datasets. The dashed vertical line corresponds to an example dataset ỹ. (b) Posterior mean functions of a GP with RBF kernel and too short, too large, and maximum marginal likelihood length-scales. Data are denoted by crosses.] The marginal likelihood p(y|M) is the probability that, if we were to randomly sample parameters from M, we would create dataset y [e.g., 31]. Simple models can only generate a small number of datasets, but because the marginal likelihood must normalise, it will generate these datasets with high probability. Complex models can generate a wide range of datasets, but each with typically low probability. For a given dataset, the marginal likelihood will favour a model of more appropriate complexity. This argument is illustrated in Fig 4(a). Fig 4(b) illustrates this principle with GPs. Here we examine Occam’s razor in human learning, and compare the Gaussian process marginal likelihood ranking of functions, all consistent with the data, to human preferences.
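The automatic Occam's razor in the GP marginal likelihood can be made concrete: log p(y|X) = −½ yᵀK⁻¹y − ½ log|K| − (n/2) log 2π splits into a data-fit term and a complexity penalty. A toy sketch, with length-scales and data chosen only for illustration:

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, ell, noise=0.1):
    """log p(y | X) for a zero-mean GP with an RBF kernel, split into the
    automatically calibrated data-fit and complexity terms:
    log p(y) = -1/2 y^T K^-1 y - 1/2 log|K| - (n/2) log(2 pi)."""
    n = len(X)
    d = X[:, None] - X[None, :]
    K = np.exp(-0.5 * d ** 2 / ell ** 2) + noise ** 2 * np.eye(n)
    fit = -0.5 * y @ np.linalg.solve(K, y)        # data-fit term
    complexity = -0.5 * np.linalg.slogdet(K)[1]   # Occam penalty
    return fit + complexity - 0.5 * n * np.log(2 * np.pi)

# On smooth data, a too-short length-scale is penalized as over-complex,
# a too-long one fits poorly, and an intermediate length-scale wins.
X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
lml = {ell: gp_log_marginal_likelihood(X, y, ell) for ell in (0.01, 0.2, 5.0)}
```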
We generated a dataset sampled from a GP with an RBF kernel, and presented users with a subsample of 5 points, as well as seven possible GP function fits, internally labelled as follows: (1) the predictive mean of a GP after maximum marginal likelihood hyperparameter estimation; (2) the generating function; (3-7) the predictive means of GPs with larger to smaller length-scales (simpler to more complex fits). We repeated this procedure four times, to create four datasets in total, and acquired 50 human rankings on each, for 200 total rankings. Each participant was shown the same unlabelled functions but with different random orderings. [Figure 5: Human Occam’s Razor. (a) Number of first place (highest ranking) votes for each function. (b) Average human ranking (with standard deviations) of functions compared to first place ranking defined by (a). (c) Average human ranking vs. average GP marginal likelihood ranking of functions. ‘ML’ = marginal likelihood optimum, ‘Truth’ = true extrapolation. Blue numbers are offsets to the log length-scale from the ML optimum. Positive offsets correspond to simpler solutions.] Figure 5(a) shows the number of times each function was voted as the best fit to the data, which follows the internal (latent) ordering defined above. The maximum marginal likelihood solution receives the most (37%) first place votes. Functions 2, 3, and 4 received similar numbers (between 15% and 18%) of first place votes. The solutions which have a smaller length-scale (greater complexity) than the marginal likelihood best fit – represented by functions 5, 6, and 7 – received a relatively small number of first place votes.
These findings suggest that on average humans prefer overly simple explanations of the data. Moreover, participants generally agree with the GP marginal likelihood’s first choice preference, even over the true generating function. However, these data also suggest that participants have a wide array of prior biases, leading to variability in first choice preferences. Furthermore, 86% (43/50) of participants responded that their first ranked choice was “likely to have generated the data” and looks “very similar” to what they imagined. It’s possible for highly probable solutions to be underrepresented in Figure 5(a): we might imagine, for example, that a particular solution is never ranked first, but always second. In Figure 5(b), we show the average rankings, with standard deviations (the standard errors are stdev/√200), compared to the first choice rankings, for each function. There is a general correspondence between rankings, suggesting that although human distributions over functions have different modes, these distributions have a similar allocation of probability mass. The standard deviations suggest that there is relatively more agreement that the complex small length-scale functions (labels 5, 6, 7) are improbable, than about specific preferences for functions 1, 2, 3, and 4. Finally, in Figure 5(c), we compare the average human rankings with the average GP marginal likelihood rankings. There are clear trends: (1) humans agree with the GP marginal likelihood about the best fit, and that empirically decreasing the length-scale below the best fit value monotonically decreases a solution’s probability; (2) humans penalize simple solutions less than the marginal likelihood, with function 4 receiving a last (7th) place ranking from the marginal likelihood. Despite the observed human tendency to favour simplicity more than the GP marginal likelihood, Gaussian process marginal likelihood optimisation is surprisingly biased towards under-fitting in function space.
If we generate data from a GP with a known length-scale, the mode of the marginal likelihood, on average, will over-estimate the true length-scale (Figures 1 and 2 in the supplement). If we are unconstrained in estimating the GP covariance matrix, we will converge to the maximum likelihood estimator, K̂ = (y − ȳ)(y − ȳ)^T, which is degenerate and therefore biased. Parametrizing a covariance matrix by a length-scale (for example, by using an RBF kernel), restricts this matrix to a low-dimensional manifold on the full space of covariance matrices. A biased estimator will remain biased when constrained to a lower dimensional manifold, as long as the manifold allows movement in the direction of the bias. Increasing a length-scale moves a covariance matrix towards the degeneracy of the unconstrained maximum likelihood estimator. With more data, the low-dimensional manifold becomes more constrained, and less influenced by this under-fitting bias. 5 Discussion We have shown that (1) human learners have systematic expectations about smooth functions that deviate from the inductive biases inherent in the RBF kernels that have been used in past models of function learning; (2) it is possible to extract kernels that reproduce qualitative features of human inductive biases, including the variable sawtooth and step patterns; (3) human learners favour smoother or simpler functions, even in comparison to GP models that tend to over-penalize complexity; and (4) it is possible to build models that extrapolate in human-like ways which go beyond traditional stationary and polynomial kernels. We have focused on human extrapolation from noise-free nonparametric relationships. This approach complements past work emphasizing simple parametric functions and the role of noise [e.g., 24], but kernel learning might also be applied in these other settings.
In particular, iterated learning (IL) experiments [23] provide a way to draw samples that reflect human learners' a priori expectations. Like most function learning experiments, past IL experiments have presented learners with sequential data. Our approach, following Little and Shiffrin [24], instead presents learners with plots of functions. This method is useful in reducing the effects of memory limitations and other sources of noise (e.g., in perception). It is possible that people show different inductive biases across these two presentation modes. Future work, using multiple presentation formats with the same underlying relationships, will help resolve these questions. Finally, the ideas discussed in this paper could be applied more generally, to discover interpretable properties of unknown models from their predictions. Here one encounters fascinating questions at the intersection of active learning, experimental design, and information theory.

References

[1] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology, 5(4):115–133, 1943.
[2] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[3] K. Doya, S. Ishii, A. Pouget, and R.P.N. Rao. Bayesian Brain: Probabilistic Approaches to Neural Coding. MIT Press, 2007.
[4] Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, 2015.
[5] Daniel M Wolpert, Zoubin Ghahramani, and Michael I Jordan. An internal model for sensorimotor integration. Science, 269(5232):1880–1882, 1995.
[6] David C Knill and Whitman Richards. Perception as Bayesian Inference. Cambridge University Press, 1996.
[7] Sophie Deneve. Bayesian spiking neurons I: Inference. Neural Computation, 20(1):91–117, 2008.
[8] Thomas L Griffiths and Joshua B Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767–773, 2006.
[9] J.B. Tenenbaum, C. Kemp, T.L. Griffiths, and N.D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279–1285, 2011.
[10] R.M. Neal. Bayesian Learning for Neural Networks. Springer Verlag, 1996. ISBN 0387947248.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] Andrew Gordon Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014. http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf.
[13] Tommi Jaakkola, David Haussler, et al. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, pages 487–493, 1998.
[14] Andrew Gordon Wilson and Ryan Prescott Adams. Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning (ICML), 2013.
[15] J Douglas Carroll. Functional learning: The learning of continuous functional mappings relating stimulus and response continua. ETS Research Bulletin Series, 1963(2), 1963.
[16] Kyunghee Koh and David E Meyer. Function learning: Induction of continuous stimulus-response relations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(5):811, 1991.
[17] Edward L DeLosh, Jerome R Busemeyer, and Mark A McDaniel. Extrapolation: The sine qua non for abstraction in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4):968, 1997.
[18] Jerome R Busemeyer, Eunhee Byun, Edward L DeLosh, and Mark A McDaniel. Learning functional relations based on experience with input-output pairs by humans and artificial neural networks. Concepts and Categories, 1997.
[19] Thomas L Griffiths, Chris Lucas, Joseph Williams, and Michael L Kalish. Modeling human function learning with Gaussian processes. In Neural Information Processing Systems, 2009.
[20] Christopher G Lucas, Thomas L Griffiths, Joseph J Williams, and Michael L Kalish. A rational model of function learning. Psychonomic Bulletin & Review, pages 1–23, 2015.
[21] Christopher G Lucas, Douglas Sterling, and Charles Kemp. Superspace extrapolation reveals inductive biases in function learning. In Cognitive Science Society, 2012.
[22] Mark A McDaniel and Jerome R Busemeyer. The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. Psychonomic Bulletin & Review, 12(1):24–42, 2005.
[23] Michael L Kalish, Thomas L Griffiths, and Stephan Lewandowsky. Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14(2):288–294, 2007.
[24] Daniel R Little and Richard M Shiffrin. Simplicity bias in the estimation of causal functions. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 1157–1162, 2009.
[25] Samuel GB Johnson, Andy Jin, and Frank C Keil. Simplicity and goodness-of-fit in explanation: The case of intuitive curve-fitting. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, pages 701–706, 2014.
[26] Samuel J Gershman, Edward Vul, and Joshua B Tenenbaum. Multistability and perceptual inference. Neural Computation, 24(1):1–24, 2012.
[27] Thomas L Griffiths, Edward Vul, and Adam N Sanborn. Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21(4):263–268, 2012.
[28] Edward Vul, Noah Goodman, Thomas L Griffiths, and Joshua B Tenenbaum. One and done? Optimal decisions from very few samples. Cognitive Science, 38(4):599–637, 2014.
[29] Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, and John P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, 2014.
[30] David JC MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[31] Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In Neural Information Processing Systems (NIPS), 2001.
[32] Andrew Gordon Wilson. A process over all stationary kernels. Technical report, University of Cambridge, 2012.
Matrix Completion with Noisy Side Information

Kai-Yang Chiang∗  Cho-Jui Hsieh†  Inderjit S. Dhillon∗
∗University of Texas at Austin  †University of California at Davis
∗{kychiang,inderjit}@cs.utexas.edu  †chohsieh@ucdavis.edu

Abstract

We study the matrix completion problem with side information. Side information has been considered in several matrix completion applications, and has been empirically shown to be useful in many cases. Recently, researchers studied the effect of side information for matrix completion from a theoretical viewpoint, showing that sample complexity can be significantly reduced given completely clean features. However, since in reality most given features are noisy or only weakly informative, the development of a model that handles a general feature set, and an investigation of how much noisy features can help matrix recovery, remain important open issues. In this paper, we propose a novel model that balances features and observations simultaneously in order to leverage feature information yet remain robust to feature noise. Moreover, we study the effect of general features in theory and show that by using our model, the sample complexity can be lower than that of matrix completion as long as features are sufficiently informative. This result provides theoretical insight into the usefulness of general side information. Finally, we consider synthetic data and two applications — relationship prediction and semi-supervised clustering — and show that our model outperforms other feature-based matrix completion methods both in theory and in practice.

1 Introduction

Low rank matrix completion is an important topic in machine learning and has been successfully applied to many practical applications [22, 12, 11]. One promising direction in this area is to exploit side information, or features, to aid matrix completion tasks.
For example, in the famous Netflix problem, besides rating history, profiles of users and/or genres of movies might also be given, and one could leverage such side information for better prediction. Since such additional features are usually available in real applications, how best to incorporate them into matrix completion is an important problem with both theoretical and practical aspects. Several approaches have been proposed for matrix completion with side information, and most of them empirically show that features are useful for certain applications [1, 28, 9, 29, 33]. However, there is surprisingly little analysis of the effect of features on general matrix completion. More recently, Jain and Dhillon [18] and Xu et al. [35] provided non-trivial guarantees for matrix completion with side information. They showed that if "perfect" features are given, under certain conditions, one can substantially reduce the sample complexity by solving a feature-embedded objective. This result suggests that completely informative features are extremely powerful for matrix completion, and the algorithm has been successfully applied in many applications [29, 37]. However, this model is still quite restrictive: if features are not perfect, it fails to guarantee recoverability and can even perform poorly in practice. A more general model, with recovery analysis, that handles noisy features is thus desired. In this paper, we study the matrix completion problem with general side information. We propose a dirty statistical model that balances feature and observation information simultaneously to complete a matrix. As a result, our model can leverage feature information, yet is robust to noisy features. Furthermore, we provide a theoretical foundation showing the effectiveness of our model. We formally quantify the quality of features and show that the sample complexity of our model depends on feature quality.
Two notable results follow: first, unlike [18, 35], given any feature set, our model is guaranteed to achieve recovery with at most O(n^{3/2}) samples in a distribution-free manner, where n is the dimensionality of the matrix. Second, if features are reasonably good, we can improve the sample complexity to o(n^{3/2}). We emphasize that since Ω(n^{3/2}) is the lower bound on sample complexity for distribution-free, trace-norm regularized matrix completion [32], our result suggests that even noisy features can asymptotically reduce the number of observations needed for matrix completion. In addition, we empirically show that our model outperforms other completion methods on synthetic data as well as in two applications: relationship prediction and semi-supervised clustering. Our contributions can be summarized as follows:

• We propose a dirty statistical model for matrix completion with general side information, where the matrix is learned by balancing features and pure observations simultaneously.
• We quantify the effectiveness of features in the matrix completion problem.
• We show that our model is guaranteed to recover the matrix with any feature set, and moreover, that the sample complexity can be lower than standard matrix completion given informative features.

The paper is organized as follows. Section 2 reviews related work. In Section 3, we introduce our proposed model for matrix completion with general side information. We theoretically analyze the effectiveness of features in our model in Section 4, and show experimental results in Section 5.

2 Related Work

Matrix completion has been widely applied to many machine learning tasks, such as recommender systems [22], social network analysis [12] and clustering [11]. Several theoretical foundations have also been established. One remarkable milestone is the strong guarantee provided by Candès et al.
[7, 5], who proved that O(n polylog n) observations suffice for exact recovery provided entries are sampled uniformly at random. Subsequent work also studies recovery under non-uniform distributional assumptions [30, 10], in the distribution-free setting [32], and with noisy observations [21, 4]. Several works also consider side information in matrix completion [1, 28, 9, 29, 33]. Although most of them find features helpful for certain applications [28, 33] and for the cold-start setting [29] based on experimental evidence, the proposed methods focus on non-convex matrix factorization formulations without theoretical guarantees. Compared to them, our model focuses on a convex trace-norm regularized objective and on theoretical insight into the effect of features. On the other hand, Jain and Dhillon [18] (see also [38]) studied an inductive matrix completion objective to incorporate side information, and follow-up work [35] considers a similar formulation with a trace-norm regularized objective. Both show that recovery guarantees can be attained with lower sample complexity when features are perfect. However, if features are imperfect, such models cannot recover the underlying matrix and can perform poorly in practice. We discuss the inductive matrix completion model in detail in Section 3. Our proposed model is also related to the family of dirty statistical models [36], where the model parameter is expressed as the sum of a number of parameter components, each of which has its own structure. Dirty statistical models have been proposed mostly for robust matrix completion, graphical model estimation, and multi-task learning, to decompose the sparse component (noise) and the low-rank component (model parameters) [6, 8, 19]. Our decomposition is different: we aim to split the model into two parts, the part that can be described by side information and the part that has to be recovered purely from observations.
3 A Dirty Statistical Model for Matrix Completion with Features

Let R ∈ R^{n1×n2} be the underlying rank-k matrix we aim to recover, where k ≪ min(n1, n2) so that R is low rank. Let Ω be the set of observed entries sampled from R, with cardinality |Ω| = m. Furthermore, let X ∈ R^{n1×d1} and Y ∈ R^{n2×d2} be the feature sets, where each row x_i (or y_i) denotes the feature of the i-th row (or column) entity of R. Both d1, d2 ≤ min(n1, n2), but each can be either smaller or larger than k. Given the set of observations Ω and the feature sets X and Y as side information, the goal is to recover the underlying low rank matrix R.

To begin with, consider an ideal case where the given features are "perfect" in the following sense:

    col(R) ⊆ col(X) and row(R) ⊆ col(Y).    (1)

Such a feature set can be thought of as perfect since it fully describes the true latent feature space of R. Then, instead of recovering the low rank matrix R directly, one can recover a smaller matrix M ∈ R^{d1×d2} such that R = XMY^T. The resulting formulation, called inductive matrix completion (IMC for brevity) [18], has been shown to be both theoretically preferred [18, 35] and useful in real applications [37, 29]. Details of this model can be found in [18, 35]. However, in practice, most given features X and Y will not be perfect. In fact, they could be quite noisy or only weakly correlated with the latent feature space of R. Though in some cases applying IMC with imperfect X, Y might still yield decent performance, in many other cases the performance drops drastically as features become noisy. This weakness of IMC can also be seen empirically in Section 5. Therefore, a more robust model is needed to handle noisy features. We now introduce a dirty statistical model for matrix completion with (possibly noisy) features. The core concept of our model is to learn the underlying matrix by balancing feature information and observations.
Specifically, we propose to learn R jointly from two parts: one is the low rank estimate XMY^T from the feature space, and the other, N, is the part outside the feature space. Thus, N can capture the information that noisy features fail to describe, which is then estimated from pure observations. Naturally, both XMY^T and N should be low rank, since they are aggregated to estimate the low rank matrix R. This further suggests that M should be low rank as well, since one expects only a small subspace of X and a small subspace of Y to be jointly effective in forming the low rank space XMY^T. Putting all of the above together, we solve the following problem:

    min_{M,N}  Σ_{(i,j)∈Ω} ℓ((XMY^T + N)_{ij}, R_{ij}) + λ_M ‖M‖_* + λ_N ‖N‖_*,    (2)

where M and N are regularized with the trace norm because of the low rank prior. The underlying matrix R can then be estimated by XM*Y^T + N*. We refer to our model as DirtyIMC for convenience. To solve the convex problem (2), we propose an alternating minimization scheme that solves for N and M iteratively; the algorithm is stated in detail in Appendix A. One remark: the algorithm is guaranteed to converge to a global optimum, since the problem is jointly convex in M and N. The parameters λ_M and λ_N are crucial for controlling the relative importance of features and residual. When λ_M = ∞, M is forced to 0, so features are disregarded and (2) becomes a standard matrix completion objective. Another special case is λ_N = ∞, in which N is forced to 0 and the objective becomes IMC. Intuitively, with an appropriate ratio λ_M/λ_N, the proposed model can incorporate the useful part of the features, yet remain robust to the noisy part by compensating with pure observations. Some natural questions arise here: How do we quantify the quality of features? What are the right λ_M and λ_N given a feature set? And, beyond intuition, how much can we benefit from features using our model in theory? We answer these questions formally in Section 4.
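Appendix A is not included in this excerpt, so as a hedged illustration of how objective (2) can be attacked, here is a minimal alternating proximal-gradient sketch in NumPy. The helper names (`svt`, `dirty_imc`), the choice of squared loss for ℓ, and the step size are our own, not the authors':

```python
import numpy as np

def svt(A, tau):
    # Singular value thresholding: the proximal operator of tau * ||.||_* .
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def dirty_imc(R_obs, mask, X, Y, lam_M, lam_N, step=0.5, iters=400):
    """Alternating proximal-gradient steps for
    (1/2) * sum_{(i,j) in Omega} ((X M Y^T + N)_ij - R_ij)^2
        + lam_M * ||M||_*  +  lam_N * ||N||_* .
    `mask` is 1 on observed entries (Omega) and 0 elsewhere.
    """
    M = np.zeros((X.shape[1], Y.shape[1]))
    N = np.zeros(R_obs.shape)
    for _ in range(iters):
        resid = mask * (X @ M @ Y.T + N - R_obs)   # gradient of the squared loss
        M = svt(M - step * (X.T @ resid @ Y), step * lam_M)
        resid = mask * (X @ M @ Y.T + N - R_obs)   # refresh after updating M
        N = svt(N - step * resid, step * lam_N)
    return X @ M @ Y.T + N                         # estimate of R
```

The two extreme regimes discussed in the text are visible here: a huge `lam_M` shrinks M to zero (plain matrix completion on N), while a huge `lam_N` shrinks N to zero (IMC on M).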
4 Theoretical Analysis

We now analyze the usefulness of features in our model from a theoretical perspective. We first quantify the quality of features and show that with reasonably good features, our model achieves recovery with lower sample complexity. Finally, we compare our results to matrix completion and IMC. Due to space limitations, detailed proofs of theorems and lemmas are deferred to Appendix B.

4.1 Preliminaries

Recall that our goal is to recover a rank-k matrix R given the observed entry set Ω and the feature sets X and Y described in Section 3. Recovering the matrix with our model (Equation (2)) is equivalent to solving the hard-constraint problem:

    min_{M,N}  Σ_{(i,j)∈Ω} ℓ((XMY^T + N)_{ij}, R_{ij}),  subject to  ‖M‖_* ≤ M, ‖N‖_* ≤ N.    (3)

For simplicity, we consider d = max(d1, d2) = O(1) so that feature dimensions do not grow as a function of n. We assume each entry (i, j) ∈ Ω is sampled i.i.d. from an unknown distribution, with index set {(i_α, j_α)}_{α=1}^{m}. Also, each entry of R is assumed to be bounded, i.e. max_{ij} |R_{ij}| ≤ R (so that the trace norm of R is O(√(n1 n2))). This is consistent with real scenarios like the Netflix problem, where users rate movies on a scale from 1 to 5. For convenience, let θ = (M, N) be any feasible solution, and let Θ = {(M, N) | ‖M‖_* ≤ M, ‖N‖_* ≤ N} be the feasible solution set. Also, let f_θ(i, j) = x_i^T M y_j + N_{ij} be the estimation function for R_{ij} parameterized by θ, and let F_Θ = {f_θ | θ ∈ Θ} be the set of feasible functions. We are interested in the following two "ℓ-risk" quantities:

• Expected ℓ-risk: R_ℓ(f) = E_{(i,j)}[ℓ(f(i, j), R_{ij})].
• Empirical ℓ-risk: R̂_ℓ(f) = (1/m) Σ_{(i,j)∈Ω} ℓ(f(i, j), R_{ij}).

Thus, our model solves for the θ* that parameterizes f* = argmin_{f∈F_Θ} R̂_ℓ(f), and it suffices to show that recovery is attained if R_ℓ(f*) approaches zero for large enough n and m.
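As a concrete reading of these definitions, the empirical ℓ-risk of the estimator f_θ(i, j) = x_i^T M y_j + N_{ij} can be computed directly. This is a sketch with squared loss as an example ℓ (the paper keeps ℓ generic) and helper names of our own choosing:

```python
import numpy as np

def f_theta(X, Y, M, N, i, j):
    # f_theta(i, j) = x_i^T M y_j + N_ij, the estimate of R_ij.
    return X[i] @ M @ Y[j] + N[i, j]

def empirical_l_risk(X, Y, M, N, R, omega, loss=lambda p, r: (p - r) ** 2):
    # \hat{R}_l(f) = (1/m) * sum_{(i,j) in Omega} l(f(i, j), R_ij)
    return np.mean([loss(f_theta(X, Y, M, N, i, j), R[i, j]) for i, j in omega])
```

With identity features X = Y = I and M = R, N = 0, the estimator reproduces R exactly and the empirical risk is zero, matching the intuition that recovery corresponds to driving this quantity down.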
4.2 Measuring the Quality of Features

We now link the quality of features to Rademacher complexity, a learning-theoretic tool for measuring the complexity of a function class. We will show that quality features result in lower model complexity and thus a smaller error bound. From this viewpoint, the upper bound on the Rademacher complexity can be used to measure the quality of features. To begin, we apply the following lemma to bound the expected ℓ-risk.

Lemma 1 (Bound on Expected ℓ-risk [2]). Let ℓ be a loss function bounded by B and with Lipschitz constant L_ℓ with respect to its first argument, and let δ be a constant with 0 < δ < 1. Let R(F_Θ) be the Rademacher complexity of the function class F_Θ (w.r.t. Ω and associated with ℓ), defined as:

    R(F_Θ) = E_σ[ sup_{f∈F_Θ} (1/m) Σ_{α=1}^{m} σ_α ℓ(f(i_α, j_α), R_{i_α j_α}) ],    (4)

where each σ_α takes values ±1 with equal probability. Then with probability at least 1 − δ, for all f ∈ F_Θ we have:

    R_ℓ(f) ≤ R̂_ℓ(f) + 2 E_Ω[R(F_Θ)] + B √(log(1/δ) / (2m)).

Clearly, to guarantee a small enough R_ℓ, both R̂_ℓ and the model complexity E_Ω[R(F_Θ)] have to be bounded. The next key lemma shows that, in the matrix completion context, the model complexity term E_Ω[R(F_Θ)] is related to feature quality. Before diving into the details, we first give an intuition for what "good" features means. Consider any imperfect feature set that violates (1). One can imagine such a feature set as perturbed by misleading noise that is not correlated with the true latent features. The features should still be effective if this noise does not weaken the true latent feature information too much. Thus, if a large portion of the true latent features lies in the informative part of the feature spaces X and Y, the features should still be somewhat informative and helpful for recovering the matrix R. More formally, the model complexity can be bounded in terms of M and N by the following lemma:

Lemma 2. Let X = max_i ‖x_i‖_2, Y = max_i ‖y_i‖_2 and n = max(n1, n2).
Then the model complexity of the function class F_Θ is upper bounded by:

    E_Ω[R(F_Θ)] ≤ 2 L_ℓ M X Y √(log 2d / m) + min{ 2 L_ℓ N √(log 2n / m), √(9 C L_ℓ B N (√n1 + √n2) / m) }.

Then, by Lemmas 1 and 2, one can carefully construct a feasible solution set (by setting M and N) such that both R̂_ℓ(f*) and E_Ω[R(F_Θ)] are controlled to be reasonably small. We now suggest a witness pair M and N constructed as follows. Let γ be defined as:

    γ = min( min_i ‖x_i‖ / X, min_i ‖y_i‖ / Y ).

Let T_µ(·) : R_+ → R_+ be the thresholding operator where T_µ(x) = x if x ≥ µ and T_µ(x) = 0 otherwise. In addition, let X = Σ_{i=1}^{d1} σ_i u_i v_i^T be the reduced SVD of X, and define X_µ = Σ_{i=1}^{d1} σ_1 T_µ(σ_i/σ_1) u_i v_i^T to be the "µ-informative" part of X. The ν-informative part of Y, denoted Y_ν, is defined similarly. Now consider setting M = ‖M̂‖_* and N = ‖R − X_µ M̂ Y_ν^T‖_*, where

    M̂ = argmin_M ‖X_µ M Y_ν^T − R‖_F^2 = (X_µ^T X_µ)^{−1} X_µ^T R Y_ν (Y_ν^T Y_ν)^{−1}

is the optimal solution for approximating R in the informative feature spaces X_µ and Y_ν. The following lemma shows that the trace norm of M̂ does not grow as n increases.

Lemma 3. Fix µ, ν ∈ (0, 1], and let d̂ = min(rank(X_µ), rank(Y_ν)). Then for some universal constant C′:

    ‖M̂‖_* ≤ d̂ / (C′ µ² ν² γ² X Y).

Moreover, by combining Lemmas 1–3, we can upper bound R_ℓ(f*) of DirtyIMC as follows:

Theorem 1. Consider problem (3) with M = ‖M̂‖_* and N = ‖R − X_µ M̂ Y_ν^T‖_*. Then with probability at least 1 − δ, the expected ℓ-risk of an optimal solution (N*, M*) is bounded by:

    R_ℓ(f*) ≤ min{ 4 L_ℓ N √(log 2n / m), √(36 C L_ℓ B N (√n1 + √n2) / m) } + (4 L_ℓ d̂ / (C′ µ² ν² γ²)) √(log 2d / m) + B √(log(1/δ) / (2m)).

4.3 Sample Complexity Analysis

From Theorem 1, we can derive the following sample complexity guarantee for our model. For simplicity, we assume k = O(1), so it does not grow as n increases in the following discussion.

Corollary 1. Suppose we aim to "ϵ-recover" R, i.e. E_{(i,j)}[ℓ((XMY^T + N)_{ij}, R_{ij})] < ϵ for an arbitrarily small ϵ.
Then for the DirtyIMC model, O(min(N√n, N² log n)/ϵ²) observations are sufficient for ϵ-recovery, provided n is sufficiently large.

Corollary 1 shows that the sample complexity of our model depends only on the trace norm of the residual N. This matches the intuition about good features in Section 4.2: if features are good, X M̂ Y^T covers most of R, so N is small and one enjoys small sample complexity by exploiting quality features. We also compare our sample complexity result with other models. First, suppose features are perfect (so that N = O(1)); our result then says that only O(log n) samples are required for recovery. This matches the result of [35], where the authors show that given perfect features, O(log n) observations are enough for exact recovery by solving the IMC objective. However, IMC does not guarantee recovery when features are imperfect, while our result shows that recovery is still attainable by DirtyIMC with O(min(N√n, N² log n)/ϵ²) samples. We also justify this result empirically in Section 5. On the other hand, for standard matrix completion (i.e. no features), the best-known guarantee is that under certain conditions one can achieve O(n polylog n) sample complexity for both ϵ-recovery [34] and exact recovery [5]. However, these bounds only hold under distributional assumptions on the observed entries. For sample complexity without any distributional assumptions, Shamir et al. [32] recently showed that O(n^{3/2}) entries are sufficient for ϵ-recovery, and this bound is tight if no distribution over observed entries is assumed. Like those results, our analysis requires no assumptions on the distribution of observed entries, and our sample complexity is also O(n^{3/2}) in the worst case, by the fact that N ≤ ‖R‖_* = O(n).
Notice that it is reasonable to meet the lower bound Ω(n^{3/2}) even when features are given, since in an extreme case X, Y could be random matrices with no correlation to R, in which case the given information is the same as in standard matrix completion. However, in many applications features are far from random, and our result provides theoretical insight showing that features can be useful even when imperfect. Indeed, as long as features are informative enough that N = o(n), our sample complexity is asymptotically lower than O(n^{3/2}). Here we provide two concrete instances of this scenario. In the first, we consider a rank-k matrix R generated from the random orthogonal model [5]:

Theorem 2. Let R ∈ R^{n×n} be generated from the random orthogonal model, where U = {u_i}_{i=1}^{k}, V = {v_i}_{i=1}^{k} are random orthogonal bases and σ_1, …, σ_k are singular values of arbitrary magnitude. Let σ_t be the largest singular value such that lim_{n→∞} σ_t/√n = 0. Then, given noisy features X, Y where X_{:i} = u_i (and Y_{:i} = v_i) if i < t, and X_{:i} (and Y_{:i}) is any basis vector orthogonal to U (and V) if i ≥ t, o(n) samples are sufficient for DirtyIMC to achieve ϵ-recovery.

Theorem 2 says that, under the random orthogonal model, if features are not too noisy, in the sense that noise only corrupts the part of the true subspace associated with smaller singular values, we can approximately recover R with only o(n) observations. An empirical justification of this result is given in Appendix C. The second scenario takes R to be the product of two rank-k Gaussian matrices:

Theorem 3. Let R = UV^T be a rank-k matrix, where U, V ∈ R^{n×k} are true latent row/column features with each U_{ij}, V_{ij} ∼ N(0, σ²) i.i.d. Suppose we are given a feature set X, Y in which g(n) row items and h(n) column items have corrupted features.
Moreover, suppose each corrupted row/column item has perturbed features x_i = u_i + ∆u_i and y_i = v_i + ∆v_i, where ‖∆u‖_∞ ≤ ξ_1 and ‖∆v‖_∞ ≤ ξ_2 for some constants ξ_1 and ξ_2. Then for the DirtyIMC model (3), with high probability, O(max(√g(n), √h(n)) · n log n) observations are sufficient for ϵ-recovery.

Theorem 3 suggests that if the features have good quality, in the sense that items with corrupted features are not too numerous, e.g. g(n), h(n) = O(log n), then the sample complexity of DirtyIMC can be O(n log n √(log n)) = o(n^{3/2}) as well. Thus, Theorems 2 and 3 both provide concrete examples showing that given imperfect yet informative features, the sample complexity of our model can be asymptotically lower than the lower bound for pure matrix completion (which is Ω(n^{3/2})).

[Figure 1: six panels of relative error curves for SVDfeature, MC, IMC and DirtyIMC. Top row: error vs. feature noise level ρf at sparsity ρs = 0.1, 0.25, 0.4; bottom row: error vs. sparsity ρs at feature noise level ρf = 0.1, 0.5, 0.9.]
Figure 1: Performance of various methods for matrix completion under different sparsity and feature quality. Compared to other feature-based completion methods, the top figures show that DirtyIMC is less sensitive to noisy features for each ρs, and the bottom figures show that the error of DirtyIMC always decreases to 0 with more observations, regardless of feature quality.
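The witness construction of Section 4.2 (the µ-informative part X_µ and the least-squares fit M̂ of Lemma 3) can be sketched numerically. The function names are ours, and we use a pseudo-inverse, which coincides with the displayed (X_µ^T X_µ)^{−1} X_µ^T form when the thresholded matrices have full column rank but also handles the rank-deficient case:

```python
import numpy as np

def informative_part(A, mu):
    # Keep singular directions with sigma_i / sigma_1 >= mu: the "mu-informative" part.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s >= mu * s[0]
    return U[:, keep] @ np.diag(s[keep]) @ Vt[keep]

def witness_pair(R, X, Y, mu, nu):
    # M_hat = argmin_M ||X_mu M Y_nu^T - R||_F^2, computed via pseudo-inverses.
    Xm, Yn = informative_part(X, mu), informative_part(Y, nu)
    M_hat = np.linalg.pinv(Xm) @ R @ np.linalg.pinv(Yn).T
    M_norm = np.linalg.norm(M_hat, 'nuc')                  # M = ||M_hat||_*
    N_norm = np.linalg.norm(R - Xm @ M_hat @ Yn.T, 'nuc')  # N = ||residual||_*
    return M_hat, M_norm, N_norm
```

With perfect features (condition (1) holds and nothing is thresholded away), the residual trace norm `N_norm` is essentially zero, which is exactly the regime where Corollary 1 gives the O(log n) sample complexity.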
5 Experimental Results

In this section, we show the effectiveness of the DirtyIMC model (2) for matrix completion with features on both synthetic datasets and real-world applications. On synthetic datasets, we show that DirtyIMC better recovers low rank matrices under various qualities of features. For real applications, we consider relationship prediction and semi-supervised clustering, where the current state-of-the-art methods are based on matrix completion and IMC, respectively. We show that by applying the DirtyIMC model to these two problems, we can further improve performance by making better use of features.

5.1 Synthetic Experiments

We consider matrix recovery with features on synthetic data generated as follows. We create a low rank matrix R = UV^T with true latent row/column factors U, V ∈ R^{200×20}, U_{ij}, V_{ij} ∼ N(0, 1/20). We then randomly sample a fraction ρs of entries Ω from R as observations, and construct a perfect feature set X*, Y* ∈ R^{200×40} satisfying (1). To examine performance under different feature qualities, we generate features X, Y with a noise parameter ρf, where X and Y are derived by replacing a fraction ρf of the bases of X* (and Y*) with bases orthogonal to X* (and Y*). We then recover the underlying matrix R given X, Y and the subset Ω of R. We compare our DirtyIMC model (2) with standard trace-norm regularized matrix completion (MC) and two other feature-based completion methods: IMC [18] and SVDfeature [9]. The standard relative error ‖R̂ − R‖_F / ‖R‖_F is used to evaluate a recovered matrix R̂. For each method, we select parameters from the set {10^α}_{α=−3}^{2} and report the one with the best recovery. All results are averaged over 5 random trials. Figure 1 shows the recovery of each method at each sparsity level ρs = 0.1, 0.25, 0.4 and each feature noise level ρf = 0.1, 0.5, 0.9.
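The synthetic setup above can be sketched as follows. The exact way corrupted bases are chosen is our guess at the construction (one of several readings consistent with the text), and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 200, 20, 40

# Rank-k ground truth R = U V^T with latent factors as in the paper.
U = rng.normal(0.0, np.sqrt(1.0 / 20), size=(n, k))
V = rng.normal(0.0, np.sqrt(1.0 / 20), size=(n, k))
R = U @ V.T

def make_features(latent, d, rho_f, rng):
    # Orthonormalize [latent | random] so the first k columns span col(latent);
    # the first d columns then form a "perfect" feature set satisfying (1).
    n, k = latent.shape
    Q, _ = np.linalg.qr(np.hstack([latent, rng.normal(size=(n, n - k))]))
    feats = Q[:, :d].copy()
    # Corrupt a rho_f fraction of the bases with directions orthogonal
    # to the perfect set.
    c = int(round(rho_f * d))
    feats[:, :c] = Q[:, d:d + c]
    return feats

X = make_features(U, d, rho_f=0.5, rng=rng)
Y = make_features(V, d, rho_f=0.5, rng=rng)

rho_s = 0.25                          # fraction of observed entries
mask = rng.random((n, n)) < rho_s
```

With `rho_f = 0` the construction returns a perfect feature set, so condition (1) holds exactly; increasing `rho_f` progressively destroys the informative directions, which is the axis swept in Figure 1.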
We first observe that in the top figures, IMC and SVDfeature perform similarly under different ρs. This suggests that with sufficient observations, the performance of IMC and SVDfeature depends mainly on feature quality and is not affected much by the number of observations. As a result, given good features (Figure 1d), they achieve smaller error than MC with few observations, but as features become noisy (Figures 1e-1f), they suffer poor performance by trying to learn the underlying matrix in biased feature spaces. Another interesting finding is that even when good features are given (Figure 1d), IMC (and SVDfeature) still fails to achieve 0 relative error as the number of observations increases, which reconfirms that IMC cannot guarantee recoverability when features are not perfect. On the other hand, the performance of DirtyIMC improves with both better features and more observations. In particular, it makes use of informative features to achieve lower error than MC, and it is also less sensitive to noisy features than IMC and SVDfeature. Finer recovery results over ρs and ρf can be found in Appendix C.

5.2 Real-world Applications

Relationship Prediction in Signed Networks. As the first application, we consider the relationship prediction problem on the online review website Epinions [26], where people can write reviews and trust or distrust others based on their reviews. Such a social network can be modeled as a signed network, where trust/distrust are modeled as positive/negative edges between entities [24], and the problem is to predict the unknown relationship between any two users given the network.

Table 1: Relationship prediction on Epinions. Compared with other approaches, the DirtyIMC model gives the best performance in terms of both accuracy and AUC.

Method    DirtyIMC       MF-ALS [16]    IMC [18]       HOC-3          HOC-5 [12]
Accuracy  0.9474±0.0009  0.9412±0.0011  0.9139±0.0016  0.9242±0.0010  0.9297±0.0011
AUC       0.9506         0.9020         0.9109         0.9432         0.9480
A state-of-the-art approach is the low-rank model [16, 12], where one first conducts matrix completion on the adjacency matrix and then uses the sign of the completed matrix for relationship prediction. Therefore, if features of users are available, we can also consider the low-rank model while using our model for the matrix completion step. This approach can be regarded as an improvement over [16] that incorporates feature information.

In this dataset, there are about n = 105K users and m = 807K observed relationship pairs, of which 15% are distrust relationships. In addition to the who-trusts-whom information, we also have a user feature matrix Z ∈ R^{n×41}, where for each user a 41-dimensional feature vector is collected based on the user's review history, such as the number of positive/negative reviews the user gave/received. We then consider the low-rank model in [16] where matrix completion is conducted by DirtyIMC with the non-convex relaxation (5) (DirtyIMC), by IMC [18] (IMC), and by the matrix factorization proposed in [16] (MF-ALS), along with two other prediction methods, HOC-3 and HOC-5 [12]. Note that both row and column entities are users, so X = Y = Z is set for both the DirtyIMC and IMC models. We conduct the experiment using 10-fold cross validation on the observed edges, where the parameters are chosen from the set ∪_{α=−3,...,2} {10^α, 5 × 10^α}. The average accuracy and AUC of each method are reported in Table 1.

We first observe that IMC performs worse than MF-ALS even though IMC takes features into account. This is because the features are only weakly related to the relationship matrix, and as a result IMC is misled by such noisy features. On the other hand, DirtyIMC performs the best among all prediction methods. In particular, it performs slightly better than MF-ALS in terms of accuracy and much better in terms of AUC. This shows that DirtyIMC can still exploit weakly informative features without being trapped by feature noise.

Semi-supervised Clustering.
We now consider the semi-supervised clustering problem as another application. Given n items, an item feature matrix Z ∈ R^{n×d}, and m pairwise constraints specifying whether items i and j are similar or dissimilar, the goal is to find a clustering of the items such that most similar items fall within the same cluster. We notice that this problem can indeed be solved by matrix completion. Consider S ∈ R^{n×n}, the signed similarity matrix defined by S_ij = 1 (or −1) if items i and j are similar (or dissimilar), and 0 if the similarity is unknown. Solving semi-supervised clustering then becomes equivalent to clustering the symmetric signed graph S, where the goal is to cluster nodes so that most edges within a group are positive and most edges between groups are negative [12]. As a result, a matrix completion approach [12] can be applied to solve the signed graph clustering problem on S. Apparently, this solution is not optimal for semi-supervised clustering, as it disregards features. Many semi-supervised clustering algorithms have thus been proposed that take both item features and constraints into consideration [13, 25, 37].

Figure 2: Semi-supervised clustering on real-world datasets (pairwise error vs. number of observed pairs for K-means, SignMC, MCCC, and DirtyIMC on Mushroom, Segment, and Covtype). For the Mushroom dataset, where features are almost ideal, both MCCC and DirtyIMC achieve 0 error rate. For Segment and Covtype, where features are noisier, our model outperforms MCCC as its error decreases given more constraints.

Table 2: Statistics of semi-supervised clustering datasets.

Dataset   | number of items n | feature dimension d | number of clusters k
Mushrooms | 8124              | 112                 | 2
Segment   | 2319              | 19                  | 7
Covtype   | 11455             | 54                  | 7
The current state-of-the-art method is the MCCC algorithm [37], which essentially solves semi-supervised clustering with the IMC objective. In [37], the authors show that by running k-means on the top-k eigenvectors of the completed matrix ZMZ^T, MCCC outperforms other state-of-the-art algorithms.

We now consider solving semi-supervised clustering with our DirtyIMC model. Our algorithm, summarized in Algorithm 2 in Appendix D, first completes the pairwise matrix with the DirtyIMC objective (2) instead of IMC (with both X and Y set to Z), and then runs k-means on the top-k eigenvectors of the completed matrix to obtain a clustering. This algorithm can be viewed as an improved version of MCCC that can handle noisy features Z. We compare our algorithm with k-means, signed graph clustering with matrix completion [12] (SignMC), and MCCC [37]. Since MCCC has been shown in [37] to outperform most other state-of-the-art semi-supervised clustering algorithms, comparing with MCCC is sufficient to demonstrate the effectiveness of our algorithm.

We run each method on three real-world datasets: Mushrooms, Segment, and Covtype.¹ All of them are classification benchmarks for which both features and ground-truth classes of items are available; their statistics are summarized in Table 2. For each dataset, we randomly sample m = [1, 5, 10, 15, 20, 25, 30] × n pairwise constraints and run each algorithm to derive a clustering π, where π_i is the cluster index of item i. We then evaluate π by the following pairwise error with respect to the ground truth:

  (2 / (n(n−1))) [ Σ_{(i,j): π*_i = π*_j} 1(π_i ≠ π_j) + Σ_{(i,j): π*_i ≠ π*_j} 1(π_i = π_j) ],

where π*_i is the ground-truth class of item i. Figure 2 shows the result of each method on all three datasets.
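The pairwise error above can be computed directly; the sketch below (our code, not the authors') counts each unordered pair once:

```python
import numpy as np

def pairwise_error(pi, pi_star):
    """Fraction of unordered item pairs whose predicted same/different-cluster
    relation disagrees with the ground truth (the displayed metric)."""
    pi, pi_star = np.asarray(pi), np.asarray(pi_star)
    same_pred = pi[:, None] == pi[None, :]
    same_true = pi_star[:, None] == pi_star[None, :]
    iu = np.triu_indices(len(pi), k=1)   # each pair (i, j) with i < j, once
    return float(np.mean(same_pred[iu] != same_true[iu]))
```

Note that the metric is invariant to relabeling the clusters, since it only compares same-cluster/different-cluster relations.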
We first see that for the Mushrooms dataset, where features are perfect (100% training accuracy can be attained by a linear SVM for classification), both MCCC and DirtyIMC obtain a perfect clustering, which shows that MCCC is indeed effective with perfect features. For the Segment and Covtype datasets, we observe that the performance of k-means and MCCC is dominated by feature quality. Although MCCC still benefits from constraint information, as it outperforms k-means, it clearly does not make the best use of the constraints: its performance does not improve even as the number of constraints increases. On the other hand, the error rate of SignMC can always be driven down to 0 by increasing m. However, since it disregards features, it suffers a much higher error rate than the feature-based methods when constraints are few. We again see that DirtyIMC combines the advantages of MCCC and SignMC: it makes use of features when few constraints are observed, yet simultaneously leverages constraint information to avoid being trapped by feature noise. This experiment shows that our model outperforms state-of-the-art approaches for semi-supervised clustering.

Acknowledgement. We thank David Inouye and Hsiang-Fu Yu for helpful comments and discussions. This research was supported by NSF grants CCF-1320746 and CCF-1117055.

¹All datasets are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. For Covtype, we subsample from the entire dataset so that each cluster has balanced size.

References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. JMLR, 10:803–826, 2009.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, 2003.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
[4] E. Candès and Y. Plan. Matrix completion with noise.
Proceedings of the IEEE, 98(6):925–936, 2010.
[5] E. Candès and B. Recht. Exact matrix completion via convex optimization. Commun. ACM, 55(6):111–119, 2012.
[6] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):11:1–11:37, 2011.
[7] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theor., 56(5):2053–2080, 2010.
[8] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 2012.
[9] T. Chen, W. Zhang, Q. Lu, K. Chen, Z. Zheng, and Y. Yu. SVDFeature: A toolkit for feature-based collaborative filtering. JMLR, 13:3619–3622, 2012.
[10] Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Coherent matrix completion. In ICML, 2014.
[11] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. JMLR, 15(1):2213–2238, 2014.
[12] K.-Y. Chiang, C.-J. Hsieh, N. Natarajan, I. S. Dhillon, and A. Tewari. Prediction and clustering in signed networks: A local to global perspective. JMLR, 15:1177–1213, 2014.
[13] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, pages 209–216, 2007.
[14] U. Feige and G. Schechtman. On the optimality of the random hyperplane rounding technique for max cut. Random Struct. Algorithms, 20(3):403–440, 2002.
[15] L. Grippo and M. Sciandrone. Globally convergent block-coordinate techniques for unconstrained optimization. Optimization Methods and Software, 10:587–637, 1999.
[16] C.-J. Hsieh, K.-Y. Chiang, and I. S. Dhillon. Low rank modeling of signed networks. In KDD, 2012.
[17] C.-J. Hsieh and P. A. Olsen. Nuclear norm minimization via active subspace selection. In ICML, 2014.
[18] P. Jain and I. S. Dhillon. Provable inductive matrix completion. CoRR, abs/1306.0626, 2013.
[19] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan.
A dirty model for multi-task learning. In NIPS, 2010.
[20] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, pages 793–800, 2008.
[21] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. JMLR, 2010.
[22] Y. Koren, R. M. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer, 42:30–37, 2009.
[23] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
[24] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online social networks. In WWW, 2010.
[25] Z. Li and J. Liu. Constrained clustering by spectral kernel learning. In ICCV, 2009.
[26] P. Massa and P. Avesani. Trust-aware bootstrapping of recommender systems. In Proceedings of the ECAI 2006 Workshop on Recommender Systems, pages 29–33, 2006.
[27] R. Meir and T. Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 2003.
[28] A. K. Menon, K.-P. Chitrapura, S. Garg, D. Agarwal, and N. Kota. Response prediction using collaborative filtering with hierarchies and side-information. In KDD, pages 141–149, 2011.
[29] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene-disease associations. Bioinformatics, 30(12):60–68, 2014.
[30] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. JMLR, 13(1):1665–1697, 2012.
[31] M. Rudelson and R. Vershynin. Smallest singular value of a random rectangular matrix. Comm. Pure Appl. Math, pages 1707–1739, 2009.
[32] O. Shamir and S. Shalev-Shwartz. Matrix completion with the trace norm: Learning, bounding, and transducing. JMLR, 15(1):3401–3423, 2014.
[33] D. Shin, S. Cetintas, K.-C. Lee, and I. S. Dhillon. Tumblr blog recommendation with boosted inductive matrix completion. In CIKM, pages 203–212, 2015.
[34] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In COLT, pages 545–560, 2005.
[35] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In NIPS, 2013.
[36] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013.
[37] J. Yi, L. Zhang, R. Jin, Q. Qian, and A. Jain. Semi-supervised clustering by input pattern assisted pairwise similarity matrix completion. In ICML, 2013.
[38] K. Zhong, P. Jain, and I. S. Dhillon. Efficient matrix sensing using rank-1 Gaussian measurements. In International Conference on Algorithmic Learning Theory (ALT), 2015.
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization

Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu
Department of Computer Science, University of Rochester
{lianxiangru,huangyj0,raingomm,ji.liu.uwisc}@gmail.com

Abstract

Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural networks and have recently achieved many successes in practice. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical support, this paper studies two asynchronous parallel implementations of SG: one over a computer network and the other on a shared memory system. We establish an ergodic convergence rate O(1/√K) for both algorithms and prove that linear speedup is achievable if the number of workers is bounded by √K (K is the total number of iterations). Our results generalize and improve existing analyses for convex minimization.

1 Introduction

Asynchronous parallel optimization has recently received many successes and broad attention in machine learning and optimization [Niu et al., 2011, Li et al., 2013, 2014b, Yun et al., 2013, Fercoq and Richtárik, 2013, Zhang and Kwok, 2014, Marecek et al., 2014, Tappenden et al., 2015, Hong, 2014], mainly because asynchronous parallelism largely reduces system overhead compared to synchronous parallelism. The key idea of asynchronous parallelism is to allow all workers to work independently, with no need for synchronization or coordination.
Asynchronous parallelism has been successfully applied to speed up many state-of-the-art optimization algorithms, including stochastic gradient [Niu et al., 2011, Agarwal and Duchi, 2011, Zhang et al., 2014, Feyzmahdavian et al., 2015, Paine et al., 2013, Mania et al., 2015], stochastic coordinate descent [Avron et al., 2014, Liu et al., 2014a, Sridhar et al., 2013], dual stochastic coordinate ascent [Tran et al., 2015], and the randomized Kaczmarz algorithm [Liu et al., 2014b].

In this paper, we are particularly interested in the asynchronous parallel stochastic gradient algorithm (ASYSG) for nonconvex optimization, mainly due to its recent successes and popularity in deep neural networks [Bengio et al., 2003, Dean et al., 2012, Paine et al., 2013, Zhang et al., 2014, Li et al., 2014a] and matrix completion [Niu et al., 2011, Petroni and Querzoni, 2014, Yun et al., 2013]. While some research efforts have been made to study the convergence and speedup properties of ASYSG for convex optimization, its properties for nonconvex optimization remain largely unknown. Existing theories cannot explain its convergence and excellent speedup in practice, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. It is not even known whether its convergence is guaranteed for nonconvex optimization, although it is widely used to solve deep neural networks and implemented on different platforms such as computer networks and shared memory (for example, multicore and multi-GPU) systems. To fill these gaps in theory, this paper makes the first attempt to study ASYSG for the following nonconvex optimization problem:

  min_{x∈R^n} f(x) := E_ξ[F(x; ξ)]   (1)

where ξ ∈ Ξ is a random variable and f(x) is a smooth (but not necessarily convex) function.
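As a toy instantiation of problem (1) (our illustration with made-up data; the paper does not fix a specific loss), take Ξ = {1, ..., N} indexing training samples and a nonconvex per-sample loss, here a sigmoid regression loss:

```python
import numpy as np

# Xi = {0, ..., N-1} indexes training samples; F(x; xi) is the per-sample
# loss (sigmoid(a_xi^T x) - b_xi)^2, which is smooth but nonconvex in x.
rng = np.random.default_rng(0)
N, d = 100, 5
A = rng.normal(size=(N, d))   # sample inputs a_xi
b = rng.random(N)             # sample targets b_xi

def G(x, xi):
    """Stochastic gradient G(x; xi) = grad_x F(x; xi)."""
    s = 1.0 / (1.0 + np.exp(-(A[xi] @ x)))
    return 2.0 * (s - b[xi]) * s * (1.0 - s) * A[xi]

def grad_f(x):
    """Full gradient grad f(x) = E_xi[G(x; xi)] over uniform xi."""
    return np.mean([G(x, xi) for xi in range(N)], axis=0)
```

Averaging G(x; ξ) over uniformly drawn ξ recovers ∇f(x), which is exactly the unbiasedness required by Assumption 1 below.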
The most common specification is that Ξ is the index set of all training samples, Ξ = {1, 2, · · · , N}, and F(x; ξ) is the loss function with respect to the training sample indexed by ξ.

We consider two popular asynchronous parallel implementations of SG: one for computer networks, originally proposed in [Agarwal and Duchi, 2011], and one for shared memory (including multicore/multi-GPU) systems, originally proposed in [Niu et al., 2011]. Due to the architecture diversity, these lead to two different algorithms. The key difference lies in the fact that a computer network can naturally (and efficiently) ensure the atomicity of reading and writing the whole vector x, while a shared memory system cannot do so efficiently and usually only ensures efficient atomic reading and writing of a single coordinate of the parameter x. The implementation on a computer cluster is described by the "consistent asynchronous parallel SG" algorithm (ASYSG-CON), because the value of the parameter x used for stochastic gradient evaluation is consistent: an actual value of x at some time point. In contrast, we use the "inconsistent asynchronous parallel SG" algorithm (ASYSG-INCON) to describe the implementation on the shared memory platform, because the value of x used may be inconsistent, that is, it might not be the real state of x at any time point.

This paper studies the theoretical convergence and speedup properties of both algorithms. We establish an asymptotic convergence rate of O(1/√(KM)) for ASYSG-CON, where K is the total number of iterations and M is the minibatch size. Linear speedup¹ is proved to be achievable while the number of workers is bounded by O(√K). For ASYSG-INCON, we establish asymptotic convergence and speedup properties similar to those of ASYSG-CON.
The intuition behind the linear speedup of asynchronous parallelism for SG can be explained as follows. Recall that serial SG essentially uses a "stochastic" gradient as a surrogate for the accurate gradient. ASYSG brings additional deviation from the accurate gradient due to using "stale" (delayed) information. If this additional deviation is small relative to the deviation caused by the "stochastic" nature of SG, the total iteration complexity (or convergence rate) of ASYSG is comparable to that of serial SG, which implies a nearly linear speedup. This is the key reason why ASYSG works.

The main contributions of this paper are highlighted as follows:
• Our result for ASYSG-CON generalizes and improves the earlier analysis of ASYSG-CON for convex optimization in [Agarwal and Duchi, 2011]. In particular, we improve the upper bound on the maximal number of workers that ensures linear speedup from O(K^{1/4}M^{−3/4}) to O(K^{1/2}M^{−1/2}), i.e., by a factor K^{1/4}M^{1/4};
• The proposed ASYSG-INCON algorithm provides a more accurate description than HOGWILD! [Niu et al., 2011] of the lock-free implementation of ASYSG on shared memory systems. Although our result does not strictly dominate the result for HOGWILD! due to different problem settings, it applies to more scenarios (e.g., nonconvex optimization);
• Our analysis provides theoretical (convergence and speedup) guarantees for many recent successes of ASYSG in deep learning. To the best of our knowledge, this is the first work to offer such theoretical support.

Notation. x* denotes the global optimal solution to (1). ‖x‖₀ denotes the ℓ₀ norm of vector x, that is, the number of nonzeros in x; e_i ∈ R^n denotes the ith natural unit basis vector. We use E_{ξ_{k,*}}(·) to denote the expectation with respect to the set of variables {ξ_{k,1}, · · · , ξ_{k,M}}. E(·) means taking the expectation over all random variables. G(x; ξ) denotes ∇F(x; ξ) for short.
We use ∇_i f(x) and (G(x; ξ))_i to denote the ith element of ∇f(x) and G(x; ξ), respectively.

Assumptions. Throughout this paper, we make the following assumptions about the objective function; all of them are quite common in the analysis of stochastic gradient algorithms.

Assumption 1. We assume that the following holds:
• (Unbiased Gradient): The stochastic gradient G(x; ξ) is unbiased, that is,
  ∇f(x) = E_ξ[G(x; ξ)];   (2)
• (Bounded Variance): The variance of the stochastic gradient is bounded:
  E_ξ(‖G(x; ξ) − ∇f(x)‖²) ≤ σ², ∀x;   (3)
• (Lipschitzian Gradient): The gradient function ∇f(·) is Lipschitzian, that is,
  ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖, ∀x, ∀y.   (4)

Under the Lipschitzian gradient assumption, we can define two further constants, L_s and L_max. Let s be any positive integer. Define L_s to be the minimal constant satisfying
  ‖∇f(x) − ∇f(x + Σ_{i∈S} α_i e_i)‖ ≤ L_s ‖Σ_{i∈S} α_i e_i‖, ∀S ⊂ {1, 2, ..., n} with |S| ≤ s,   (5)
and define L_max as the minimal constant satisfying
  |∇_i f(x) − ∇_i f(x + αe_i)| ≤ L_max |α|, ∀i ∈ {1, 2, ..., n}.   (6)
It can be seen that L_max ≤ L_s ≤ L.

¹The speedup for T workers is defined as the ratio between the total work load using one worker and the average work load using T workers to obtain a solution of the same precision. "Linear speedup is achieved" means that the speedup with T workers is greater than cT for all values of T, where c ∈ (0, 1] is a constant independent of T.

2 Related Work

This section mainly reviews asynchronous parallel gradient algorithms and asynchronous parallel stochastic gradient algorithms; we refer readers to the long version of this paper² for a review of stochastic gradient algorithms and synchronous parallel stochastic gradient algorithms. Asynchronous parallel algorithms have recently received broad attention in optimization, although pioneering studies date back to the 1980s [Bertsekas and Tsitsiklis, 1989].
Due to the rapid development of hardware resources, asynchronous parallelism has recently seen many successes when applied to parallel stochastic gradient [Niu et al., 2011, Agarwal and Duchi, 2011, Zhang et al., 2014, Feyzmahdavian et al., 2015, Paine et al., 2013], stochastic coordinate descent [Avron et al., 2014, Liu et al., 2014a], dual stochastic coordinate ascent [Tran et al., 2015], the randomized Kaczmarz algorithm [Liu et al., 2014b], and ADMM [Zhang and Kwok, 2014]. Liu et al. [2014a] and Liu and Wright [2014] studied the asynchronous parallel stochastic coordinate descent algorithm with consistent read and inconsistent read, respectively, and proved that linear speedup is achievable if T ≤ O(n^{1/2}) for smooth convex functions and T ≤ O(n^{1/4}) for functions of the form "smooth convex loss + nonsmooth convex separable regularization". Avron et al. [2014] studied this asynchronous parallel stochastic coordinate descent algorithm for solving Ax = b with A a symmetric positive definite matrix, and showed that linear speedup is achievable if T ≤ O(n) for consistent read and T ≤ O(n^{1/2}) for inconsistent read. Tran et al. [2015] studied a semi-asynchronous parallel version of the stochastic dual coordinate ascent algorithm which periodically enforces primal-dual synchronization in a separate thread.

Finally, we review asynchronous parallel stochastic gradient algorithms. Agarwal and Duchi [2011] analyzed the ASYSG-CON algorithm (on a computer cluster) for convex smooth optimization and proved a convergence rate of O(1/√(MK) + MT²/K), which implies that linear speedup is achieved when T is bounded by O(K^{1/4}/M^{3/4}). In comparison, our analysis for the more general nonconvex smooth optimization improves this upper bound by a factor K^{1/4}M^{1/4}. A very recent work [Feyzmahdavian et al., 2015] extended the analysis of Agarwal and Duchi [2011] to minimize functions of the form "smooth convex loss + nonsmooth convex regularization" and obtained similar results. Niu et al.
[2011] proposed a lock-free asynchronous parallel implementation of SG on a shared memory system and described this implementation as the HOGWILD! algorithm. They proved a sublinear convergence rate O(1/K) for strongly convex smooth objectives. Another recent work, Mania et al. [2015], analyzed asynchronous stochastic optimization algorithms for convex functions by viewing them as serial algorithms with input perturbed by bounded noise, and proved convergence rates no worse than those obtained from the traditional point of view for several algorithms.

3 Asynchronous parallel stochastic gradient for computer network

This section considers the asynchronous parallel implementation of SG on a computer network proposed by Agarwal and Duchi [2011]. It has been successfully applied to the distributed neural network [Dean et al., 2012] and the parameter server [Li et al., 2014a] to solve deep neural networks.

²http://arxiv.org/abs/1506.08272

3.1 Algorithm Description: ASYSG-CON

Algorithm 1 ASYSG-CON
Require: x_0, K, {γ_k}_{k=0,···,K−1}
Ensure: x_K
1: for k = 0, · · · , K − 1 do
2:   Randomly select M training samples indexed by ξ_{k,1}, ξ_{k,2}, ..., ξ_{k,M};
3:   x_{k+1} = x_k − γ_k Σ_{m=1}^{M} G(x_{k−τ_{k,m}}; ξ_{k,m});
4: end for

The "star" in the star-shaped network is a master machine³ which maintains the parameter x. The other machines in the computer network serve as workers, which communicate only with the master. All workers exchange information with the master independently and simultaneously, basically repeating the following steps:
• (Select): randomly select a subset of training samples S ⊂ Ξ;
• (Pull): pull the parameter x from the master;
• (Compute): compute the stochastic gradient g ← Σ_{ξ∈S} G(x; ξ);
• (Push): push g to the master.
The master basically repeats the following steps:
• (Aggregate): aggregate a certain amount of stochastic gradients "g" from the workers;
• (Sum): sum all the "g"s into a vector ∆;
• (Update): update the parameter x by x ← x − γ∆.
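Algorithm 1 can be mimicked in a single process by keeping the iterate history and evaluating each of the M stochastic gradients at a stale iterate x_{k−τ} with bounded delay. The toy objective and noise model below are our own assumptions, not the paper's:

```python
import numpy as np

def grad_f(x):
    """Gradient of a smooth nonconvex toy objective f(x) = sum(x^2 + sin(x))."""
    return 2.0 * x + np.cos(x)

def asysg_con(x0, K=500, M=4, T=5, gamma=0.01, sigma=0.1, seed=0):
    """Single-process simulation of Algorithm 1: each of the M stochastic
    gradients at iteration k is evaluated at a stale iterate x_{k - tau},
    with tau drawn uniformly from {0, ..., min(T, k)}."""
    rng = np.random.default_rng(seed)
    history = [np.asarray(x0, dtype=float)]
    for k in range(K):
        g = np.zeros_like(history[0])
        for _ in range(M):
            tau = int(rng.integers(0, min(T, k) + 1))   # bounded delay
            g += grad_f(history[k - tau]) + sigma * rng.normal(size=g.shape)
        history.append(history[k] - gamma * g)
    return history[-1]
```

With τ ≡ 0 and σ = 0 this reduces to plain minibatch gradient descent; the bound T on the delay plays the role of the Bounded Age condition in Assumption 2 below.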
While the master is aggregating stochastic gradients from the workers, it does not care about their sources. As soon as the total amount reaches the predefined quantity, the master computes ∆ and performs the update on x. The "update" step is performed as an atomic operation (workers cannot read the value of x during this step), which can be implemented efficiently in the network (especially in the parameter server [Li et al., 2014a]). The key difference between this asynchronous parallel implementation of SG and the serial (or synchronous parallel) SG algorithm lies in the "update" step: some stochastic gradients "g" in "∆" might be computed from an early value of x instead of the current one, while in serial SG all g's are guaranteed to use the current value of x. The asynchronous parallel implementation substantially reduces system overhead and overcomes possibly large network delays, but at the cost of using old values of x in the stochastic gradient evaluation. We will show in Section 3.2 that the negative effect of this cost vanishes asymptotically.

To mathematically characterize this asynchronous parallel implementation, we monitor the parameter x on the master. We use the subscript k to indicate the kth iteration on the master; for example, x_k denotes the value of the parameter x after k updates, and so forth. We introduce a variable τ_{k,m} to denote the delay of the value of x used in evaluating the mth stochastic gradient at the kth iteration. This asynchronous parallel implementation of SG on the star-shaped network is summarized as the ASYSG-CON algorithm; see Algorithm 1. The suffix "CON" is short for "consistent read". "Consistent read" means that the value of x used to compute the stochastic gradient is a real state of x at some time point, which is ensured by the atomicity of the "update" step.
When atomicity fails, we get "inconsistent read", which will be discussed in Section 4. It is worth noting that on some non-star structures the asynchronous implementation can also be described as ASYSG-CON in Algorithm 1, for example, the cyclic delayed architecture and the locally averaged delayed architecture [Agarwal and Duchi, 2011, Figure 2].

3.2 Analysis for ASYSG-CON

To analyze Algorithm 1, besides Assumption 1 we make the following additional assumptions.

Assumption 2. We assume that the following holds:
• (Independence): All random variables in {ξ_{k,m}}_{k=0,1,···,K; m=1,···,M} in Algorithm 1 are independent of each other;
• (Bounded Age): All delay variables τ_{k,m} are bounded: max_{k,m} τ_{k,m} ≤ T.

The independence assumption holds strictly if all workers select samples with replacement. Although it might not be satisfied strictly in practice, it is a common assumption made for the purpose of analysis. The bounded delay assumption is much more important: as pointed out before, the asynchronous implementation may use an old value of the parameter x to evaluate a stochastic gradient, and intuitively the age (or "staleness") should not be too large if convergence is to be ensured. It is therefore natural and reasonable to assume an upper bound on the ages. This assumption is commonly used in the analysis of asynchronous algorithms, for example, [Niu et al., 2011, Avron et al., 2014, Liu and Wright, 2014, Liu et al., 2014a, Feyzmahdavian et al., 2015, Liu et al., 2014b]. It is worth noting that the upper bound T is roughly proportional to the number of workers.

³There could be more than one machine in some networks, but all of them serve the same purpose and can be treated as a single machine.

Under Assumptions 1 and 2, we have the following convergence rate for nonconvex optimization.

Theorem 1. Assume that Assumptions 1 and 2 hold and that the steplength sequence {γ_k}_{k=1,···,K} in Algorithm 1 satisfies
  LMγ_k + 2L²M²Tγ_k Σ_{κ=1}^{T} γ_{k+κ} ≤ 1 for all k = 1, 2, ....
(7)

Then we have the following ergodic convergence rate for the iterates of Algorithm 1:

  (1 / Σ_{k=1}^{K} γ_k) Σ_{k=1}^{K} γ_k E(‖∇f(x_k)‖²) ≤ [2(f(x_1) − f(x*)) + Σ_{k=1}^{K} (γ_k² ML + 2L²M²γ_k Σ_{j=k−T}^{k−1} γ_j²) σ²] / (M Σ_{k=1}^{K} γ_k),   (8)

where E(·) denotes the expectation over all random variables in Algorithm 1.

To evaluate the convergence rate, the metrics commonly used in convex optimization, for example f(x_k) − f* and ‖x_k − x*‖², are not eligible. For nonconvex optimization we use the ergodic convergence as the metric, that is, the weighted average of the ℓ₂ norms of all gradients ‖∇f(x_k)‖², which is used in the analysis of nonconvex optimization [Ghadimi and Lan, 2013]. Although this metric is not exactly comparable to f(x_k) − f* or ‖x_k − x*‖² used in the analysis of convex optimization, it is not unreasonable to regard them as being roughly of the same order. The ergodic convergence directly implies the following convergence: if we randomly select an index K̃ from {1, 2, · · · , K} with probabilities {γ_k / Σ_{k=1}^{K} γ_k}, then E(‖∇f(x_{K̃})‖²) is bounded by the right-hand side of (8) and by all the bounds we show in the following.

Taking a closer look at Theorem 1, we can choose the steplength γ_k to be a constant and obtain the following convergence rate.

Corollary 2. Assume that Assumptions 1 and 2 hold. Set the steplength γ_k to be the constant
  γ := √( (f(x_1) − f(x*)) / (MLKσ²) ).   (9)
If the delay parameter T is bounded such that
  K ≥ 4ML(f(x_1) − f(x*))(T + 1)²/σ²,   (10)
then the output of Algorithm 1 satisfies the following ergodic convergence rate:
  min_{k∈{1,···,K}} E‖∇f(x_k)‖² ≤ (1/K) Σ_{k=1}^{K} E‖∇f(x_k)‖² ≤ 4√( (f(x_1) − f(x*)) L / (MK) ) σ.   (11)

This corollary basically claims that when the total number of iterations K is greater than O(T²), the convergence rate achieves O(1/√(MK)). Since this rate does not depend on the delay parameter T after a sufficient number of iterations, the negative effect of using old values of x for stochastic gradient evaluation vanishes asymptotically.
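Plugging toy numbers into Corollary 2 makes the rate concrete (our sketch of the displayed formulas (9) and (11); the constants are made up):

```python
import math

def corollary2(f_gap, L, M, K, sigma):
    """Constant stepsize from (9) and the ergodic bound from (11);
    f_gap stands for f(x_1) - f(x*)."""
    gamma = math.sqrt(f_gap / (M * L * K * sigma ** 2))
    bound = 4.0 * math.sqrt(f_gap * L / (M * K)) * sigma
    return gamma, bound

gamma, bound = corollary2(f_gap=10.0, L=1.0, M=8, K=10000, sigma=1.0)
```

Doubling K shrinks the bound by a factor of √2, reflecting the O(1/√(MK)) rate, while condition (10) here reads K ≥ 40(T + 1)², so T may grow on the order of √K.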
In other words, if the total number of workers is bounded by O(√(K/M)), linear speedup is achieved. Note that our convergence rate O(1/√(MK)) is consistent with that of serial SG (with M = 1) for convex optimization [Nemirovski et al., 2009], of synchronous parallel (minibatch) SG for convex optimization [Dekel et al., 2012], and of nonconvex smooth optimization [Ghadimi and Lan, 2013]. Therefore, an important observation is that as long as the number of workers (which is proportional to T) is bounded by O(√(K/M)), the iteration complexity to achieve the same accuracy level is roughly the same. In other words, the average work load of each worker is reduced by a factor of T compared to serial SG. Therefore, linear speedup is achievable if T ≤ O(√(K/M)). Since our convergence rate matches several special cases, it is tight.

Next we compare with the analysis of ASYSG-CON for convex smooth optimization in Agarwal and Duchi [2011, Corollary 2]. They proved an asymptotic convergence rate of O(1/√(MK)), which is consistent with ours, but their result requires T ≤ O(K^{1/4}M^{−3/4}) to guarantee linear speedup. Our result improves this by a factor O(K^{1/4}M^{1/4}).

4 Asynchronous parallel stochastic gradient for shared memory architecture

This section considers a widely used lock-free asynchronous implementation of SG on shared memory systems proposed in Niu et al. [2011]. Its advantages have been witnessed in solving SVMs and graph cuts [Niu et al., 2011], linear equations [Liu et al., 2014b], and matrix completion [Petroni and Querzoni, 2014]. While a computer network always involves multiple machines, the shared memory platform usually involves a single machine with multiple cores/GPUs sharing the same memory.
4.1 Algorithm Description: ASYSG-INCON

Algorithm 2 ASYSG-INCON
Require: $x_0$, $K$, $\gamma$
Ensure: $x_K$
1: for $k = 0, \dots, K-1$ do
2:   Randomly select $M$ training samples indexed by $\xi_{k,1}, \xi_{k,2}, \dots, \xi_{k,M}$;
3:   Randomly select $i_k \in \{1, 2, \dots, n\}$ with uniform distribution;
4:   $(x_{k+1})_{i_k} = (x_k)_{i_k} - \gamma \sum_{m=1}^{M} \big(G(\hat x_{k,m}; \xi_{k,m})\big)_{i_k}$;
5: end for

On the shared memory platform, one could exactly follow ASYSG-CON for the computer network by using software locks, but this is expensive⁴. In practice, therefore, the lock-free asynchronous parallel implementation of SG is preferred. This section considers the same implementation as Niu et al. [2011], but provides a more precise algorithm description, ASYSG-INCON, than the HOGWILD! description in Niu et al. [2011]. In this lock-free implementation, the shared memory stores the parameter $x$ and allows all workers to read and modify $x$ simultaneously without using locks. All workers repeat the following steps independently, concurrently, and simultaneously:
• (Read): read the parameter from the shared memory into local memory without software locks (we use $\hat x$ to denote its value);
• (Compute): sample a training datum $\xi$ and use $\hat x$ to compute the stochastic gradient $G(\hat x; \xi)$ locally;
• (Update): update the parameter $x$ in the shared memory without software locks: $x \leftarrow x - \gamma G(\hat x; \xi)$.
Since locks are used in neither the "read" nor the "update" step, multiple workers may manipulate the shared memory simultaneously. This causes an "inconsistent read" at the "read" step: the value of $\hat x$ read from the shared memory might not equal any state that $x$ actually attained in the shared memory at any time point.
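The inconsistent read is easy to reproduce deterministically. The sketch below (Python; the concrete values and the interleaving are illustrative only) replays a schedule in which a worker's snapshot never matches any state the shared memory actually passed through:

```python
# Deterministic replay of a lock-free interleaving: worker W reads component 0
# before worker W' writes, and component 1 after W' writes, so W's local
# snapshot [a, b'] never existed in shared memory.
shared = {"x": [1.0, 2.0]}        # time 0: x = [a, b]
history = [tuple(shared["x"])]    # every state shared memory passes through

snapshot = [None, None]
snapshot[0] = shared["x"][0]      # time 1: W reads the first component (a)
shared["x"][0] = 10.0             # time 2: W' writes the first component (a -> a')
history.append(tuple(shared["x"]))
shared["x"][1] = 20.0             # time 3: W' writes the second component (b -> b')
history.append(tuple(shared["x"]))
snapshot[1] = shared["x"][1]      # time 4: W reads the second component (b')

print(snapshot, history)
```

The final check `tuple(snapshot) not in history` is exactly the defining property of an inconsistent read.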
For example, at time 0, the value of $x$ in the shared memory is a two-dimensional vector $[a, b]$; at time 1, worker $W$ starts its "read" step and reads the first component, $a$, from the shared memory; at time 2, worker $W'$ updates the first component of $x$ in the shared memory from $a$ to $a'$; at time 3, worker $W'$ updates the second component of $x$ in the shared memory from $b$ to $b'$; at time 4, worker $W$ reads the second component of $x$ in the shared memory as $b'$. Worker $W$ thus obtains $\hat x = [a, b']$, which is not an actual state of $x$ in the shared memory at any time point. Recall that in ASYSG-CON the parameter value obtained by any worker is guaranteed to be a real value of $x$ at some time point.

To precisely characterize this implementation, and in particular to represent $\hat x$, we monitor the value of the parameter $x$ in the shared memory. We define one iteration as a modification of any single component of $x$ in the shared memory, since the update of a single component can be considered atomic on GPUs and DSPs [Niu et al., 2011]. We use $x_k$ to denote the value of $x$ in the shared memory after $k$ iterations, and $\hat x_k$ to denote the value read from the shared memory and used to compute the stochastic gradient at the $k$-th iteration. $\hat x_k$ can be represented by $x_k$ with a few earlier updates missing:
$$\hat x_k = x_k - \sum_{j\in J(k)}(x_{j+1} - x_j), \qquad (12)$$
where $J(k) \subset \{k-1, k-2, \dots, 0\}$ is a subset of index numbers of previous iterations. The same device is used in the analysis of asynchronous parallel coordinate descent algorithms in [Avron et al., 2014, Liu and Wright, 2014]. The $k$-th update to the shared memory can be described as
$$(x_{k+1})_{i_k} = (x_k)_{i_k} - \gamma\big(G(\hat x_k; \xi_k)\big)_{i_k},$$
where $\xi_k$ denotes the index of the selected datum and $i_k$ the index of the component updated at the $k$-th iteration. In the original analysis of the HOGWILD!
implementation [Niu et al., 2011], $\hat x_k$ is assumed to be some earlier state of $x$ in the shared memory (that is, a consistent read) for simplicity of analysis, although this is not true in practice.

⁴The time consumed by locks is roughly equal to the time of $10^4$ floating-point computations; the additional cost of using locks is the waiting time during which multiple workers access the same memory address.

One more complication is applying the mini-batch strategy as before. Since the "update" step requires a physical modification of the shared memory, it is usually much more time-consuming than the "read" and "compute" steps. If many workers run the "update" step simultaneously, memory contention will seriously harm performance. To reduce the risk of memory contention, a common trick is to have each worker gather multiple (say $M$) stochastic gradients and write to the shared memory only once. That is, in each cycle, a worker runs the "read" and "compute" steps $M$ times before running the "update" step once. The mini-batch updates in the shared memory can thus be written as
$$(x_{k+1})_{i_k} = (x_k)_{i_k} - \gamma\sum_{m=1}^{M}\big(G(\hat x_{k,m}; \xi_{k,m})\big)_{i_k}, \qquad (13)$$
where $i_k$ denotes the coordinate index updated at the $k$-th iteration, and $G(\hat x_{k,m}; \xi_{k,m})$ is the $m$-th stochastic gradient, computed from the data sample indexed by $\xi_{k,m}$ and the parameter value $\hat x_{k,m}$ at the $k$-th iteration. $\hat x_{k,m}$ can be expressed as
$$\hat x_{k,m} = x_k - \sum_{j\in J(k,m)}(x_{j+1} - x_j), \qquad (14)$$
where $J(k,m) \subset \{k-1, k-2, \dots, 0\}$ is a subset of index numbers of previous iterations. The algorithm is summarized in Algorithm 2 from the viewpoint of the shared memory.

4.2 Analysis for ASYSG-INCON

To analyze ASYSG-INCON, we need a few assumptions similar to those of Niu et al. [2011], Liu et al. [2014b], Avron et al. [2014], Liu and Wright [2014].

Assumption 3. We assume that the following holds for Algorithm 2:
• (Independence): All groups of variables $\{i_k, \{\xi_{k,m}\}_{m=1}^{M}\}$ at different iterations from $k = 1$ to $K$ are independent of each other.
• (Bounded Age): Let $T$ be the global bound for the delay: $J(k,m) \subset \{k-1, \dots, k-T\}$ for all $k$ and $m$, so $|J(k,m)| \le T$.

The independence assumption might not hold in practice, but it is probably the best assumption one can make in order to analyze the asynchronous parallel SG algorithm. It was also used in the analyses of HOGWILD! [Niu et al., 2011] and of the asynchronous randomized Kaczmarz algorithm [Liu et al., 2014b]. The bounded age assumption restricts the age of all missing components in $\hat x_{k,m}$ (for all $m$ and $k$). The upper bound $T$ here serves a purpose similar to that in Assumption 2, so we abuse the notation in this section. The value of $T$ is proportional to the number of workers and does not depend on the mini-batch size $M$. The bounded age assumption is also used in the analysis of asynchronous stochastic coordinate descent with inconsistent reads [Avron et al., 2014, Liu and Wright, 2014]. Under Assumptions 1 and 3, we have the following results.

Theorem 3. Assume that Assumptions 1 and 3 hold and the constant steplength $\gamma$ satisfies
$$2M^2 T L_T^2(\sqrt{n} + T - 1)\gamma^2/n^{3/2} + 2ML_{\max}\gamma \le 1. \qquad (15)$$
We have the following ergodic convergence rate for Algorithm 2:
$$\frac{1}{K}\sum_{t=1}^{K}\mathbb{E}\,\|\nabla f(x_t)\|^2 \;\le\; \frac{2n}{KM\gamma}\big(f(x_1)-f(x^*)\big) + \frac{L_T^2 T M\gamma^2}{2n}\sigma^2 + L_{\max}\gamma\sigma^2. \qquad (16)$$

Taking a close look at Theorem 3, we can choose the steplength $\gamma$ properly and obtain the following error bound:

Corollary 4. Assume that Assumptions 1 and 3 hold. Set the steplength to a constant $\gamma$,
$$\gamma := \sqrt{2\big(f(x_1)-f(x^*)\big)n}\Big/\big(\sqrt{KL_T M}\,\sigma\big). \qquad (17)$$
If the total number of iterations $K$ satisfies
$$K \ge 16\big(f(x_1)-f(x^*)\big)L_T M\big(n^{3/2} + 4T^2\big)/(\sqrt{n}\,\sigma^2), \qquad (18)$$
then the output of Algorithm 2 satisfies the following ergodic convergence rate:
$$\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\big(\|\nabla f(x_k)\|^2\big) \;\le\; \sqrt{72\big(f(x_1)-f(x^*)\big)L_T\,n/(KM)}\;\sigma. \qquad (19)$$

This corollary indicates that the asymptotic convergence rate achieves $O(1/\sqrt{MK})$ when the total iteration number $K$ exceeds a threshold of order $O(T^2)$ (treating $n$ as a constant).
We can see that this rate and this threshold are consistent with the result in Corollary 2 for ASYSG-CON. One may ask why there is an additional factor $\sqrt{n}$ in the numerator of (19). This is due to the way we count iterations: one iteration is defined as the update of a single component of $x$. If this factor is taken into account in the comparison with ASYSG-CON, the convergence rates for ASYSG-CON and ASYSG-INCON are essentially consistent. This comparison implies that the "inconsistent read" does not make a big difference from the "consistent read".

Next we compare our result with the analysis of HOGWILD! by Niu et al. [2011]. In principle, their analysis and ours consider the same implementation of asynchronous parallel SG, but differ in the following aspects: 1) our analysis considers smooth nonconvex optimization, which includes the smooth strongly convex optimization considered in their analysis; 2) our analysis considers the "inconsistent read" model, which matches practice, while their analysis assumes the impractical "consistent read" model. Although the two results are not directly comparable, it is still interesting to see the difference. Niu et al. [2011] proved that linear speedup is achievable if the maximal number of nonzeros in the stochastic gradients is bounded by $O(1)$ and the number of workers is bounded by $O(n^{1/4})$. Our analysis does not need this prerequisite and guarantees linear speedup as long as the number of workers is bounded by $O(\sqrt{K})$. Although it is hard to say that our result strictly dominates that of Niu et al. [2011] for HOGWILD!, our asymptotic result applies to more scenarios.
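For concreteness, a single shared-memory iteration of Algorithm 2, the per-coordinate mini-batch update (13), can be sketched as below (Python; the quadratic objective, the staleness pattern, and all constants are hypothetical):

```python
import random

def asysg_incon_step(x, stale_reads, i, gamma, grad):
    """One iteration of update (13): only coordinate i changes, by the sum of
    M stochastic gradient components, each evaluated at its own stale read."""
    x = list(x)  # work on a copy; the caller's vector is left untouched
    x[i] -= gamma * sum(grad(xh)[i] for xh in stale_reads)
    return x

# Toy objective f(x) = 0.5 * ||x||^2, so a noisy gradient at ^x is ^x + noise.
random.seed(1)
noisy_grad = lambda xh: [v + random.gauss(0.0, 0.01) for v in xh]

x = [4.0, -2.0, 1.0]
stale = [[4.0, -2.0, 1.0], [4.1, -2.0, 0.9]]   # ^x_{k,1}, ^x_{k,2} (M = 2)
x_new = asysg_incon_step(x, stale, i=0, gamma=0.05, grad=noisy_grad)
print(x_new)
```

Note that only coordinate $i_k = 0$ moves; the other components pass through unchanged, exactly as in line 4 of Algorithm 2.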
5 Experiments

The successes of ASYSG-CON and ASYSG-INCON and their advantages over synchronous parallel algorithms have been widely witnessed in many applications, such as deep neural networks [Dean et al., 2012, Paine et al., 2013, Zhang et al., 2014, Li et al., 2014a], matrix completion [Niu et al., 2011, Petroni and Querzoni, 2014, Yun et al., 2013], SVMs [Niu et al., 2011], and linear equations [Liu et al., 2014b]. We refer readers to this literature for more comprehensive comparisons and empirical studies. This section mainly provides an empirical study to validate the speedup properties for completeness; due to the space limit, it can be found in the Supplemental Materials.

6 Conclusion

This paper studied two popular asynchronous parallel implementations of SG, on a computer cluster and on a shared memory system respectively. Two algorithms (ASYSG-CON and ASYSG-INCON) are used to describe the two implementations. An asymptotic sublinear convergence rate is proven for both algorithms on nonconvex smooth optimization; this rate is consistent with the result for SG in convex optimization. Linear speedup is proven to be achievable when the number of workers is bounded by $\sqrt{K}$, which improves the earlier analysis of ASYSG-CON for convex optimization in [Agarwal and Duchi, 2011]. The proposed ASYSG-INCON algorithm provides a more precise description of the lock-free implementation on shared memory systems than HOGWILD! [Niu et al., 2011], and our result for ASYSG-INCON applies to more scenarios.

Acknowledgements

This project is supported by the NSF grant CNS-1548078, the NEC fellowship, and the startup funding at University of Rochester. We thank Professor Daniel Gildea and Professor Sandhya Dwarkadas at University of Rochester, Professor Stephen J. Wright at University of Wisconsin-Madison, and the anonymous (meta-)reviewers for their constructive comments and helpful advice.

References

A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. NIPS, 2011.
H. Avron, A.
Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate through randomization. IPDPS, 2014.
Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and distributed computation: numerical methods, volume 23. Prentice Hall, Englewood Cliffs, NJ, 1989.
J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. NIPS, 2012.
O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1):165–202, 2012.
O. Fercoq and P. Richtárik. Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799, 2013.
H. R. Feyzmahdavian, A. Aytekin, and M. Johansson. An asynchronous mini-batch algorithm for regularized stochastic optimization. arXiv e-prints, May 2015.
S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
M. Hong. A distributed, asynchronous and incremental algorithm for nonconvex optimization: An ADMM based approach. arXiv preprint arXiv:1412.6058, 2014.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 1(4):7, 2009.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. NIPS, pages 1097–1105, 2012.
M. Li, L. Zhou, Z. Yang, A. Li, F. Xia, D. G. Andersen, and A. Smola. Parameter server for distributed machine learning. Big Learning NIPS Workshop, 2013.
M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. OSDI, 2014a.
M. Li, D. G. Andersen, A. J. Smola, and K. Yu. Communication efficient distributed machine learning with the parameter server. NIPS, 2014b.
J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. arXiv preprint arXiv:1403.3862, 2014.
J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. ICML, 2014a.
J. Liu, S. J. Wright, and S. Sridhar. An asynchronous parallel randomized Kaczmarz algorithm. arXiv preprint arXiv:1401.4780, 2014b.
H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. arXiv preprint arXiv:1507.06970, 2015.
J. Marecek, P. Richtárik, and M. Takáč. Distributed block coordinate descent for minimizing partially separable functions. arXiv preprint arXiv:1406.0238, 2014.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. NIPS, 2011.
T. Paine, H. Jin, J. Yang, Z. Lin, and T. Huang. GPU asynchronous stochastic gradient descent to speed up neural network training. NIPS, 2013.
F. Petroni and L. Querzoni. GASGD: stochastic gradient descent for distributed asynchronous matrix completion via graph partitioning. ACM Conference on Recommender Systems, 2014.
S. Sridhar, S. Wright, C. Ré, J. Liu, V. Bittorf, and C. Zhang. An approximate, efficient LP solver for LP rounding. NIPS, 2013.
R. Tappenden, M. Takáč, and P. Richtárik. On the complexity of parallel coordinate descent. arXiv preprint arXiv:1503.03033, 2015.
K. Tran, S.
Hosseini, L. Xiao, T. Finley, and M. Bilenko. Scaling up stochastic dual coordinate ascent. ICML, 2015.
H. Yun, H.-F. Yu, C.-J. Hsieh, S. Vishwanathan, and I. Dhillon. NOMAD: Non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. arXiv preprint arXiv:1312.0193, 2013.
R. Zhang and J. Kwok. Asynchronous distributed ADMM for consensus optimization. ICML, 2014.
S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. CoRR, abs/1412.6651, 2014.
Evaluating the statistical significance of biclusters

Jason D. Lee, Yuekai Sun, and Jonathan Taylor
Institute of Computational and Mathematical Engineering
Stanford University, Stanford, CA 94305
{jdl17,yuekai,jonathan.taylor}@stanford.edu

Abstract

Biclustering (also known as submatrix localization) is a problem of high practical relevance in exploratory analysis of high-dimensional data. We develop a framework for performing statistical inference on biclusters found by score-based algorithms. Since the bicluster is selected in a data-dependent manner by a biclustering or localization algorithm, this is a form of selective inference. Our framework gives exact (non-asymptotic) confidence intervals and p-values for the significance of the selected biclusters.

1 Introduction

Given a matrix $X \in \mathbb{R}^{m\times n}$, biclustering or submatrix localization is the problem of identifying a subset of the rows and columns of $X$ such that the bicluster or submatrix consisting of the selected rows and columns is "significant" compared to the rest of $X$. An important application of biclustering is the identification of significant genotype-phenotype associations in the (unsupervised) analysis of gene expression data. The data are usually represented by an expression matrix $X$ whose rows correspond to genes and whose columns correspond to samples; genotype-phenotype associations then correspond to salient submatrices of $X$. The location and significance of such biclusters, in conjunction with relevant clinical information, give preliminary results on the genetic underpinnings of the phenotypes being studied. More generally, given a matrix $X \in \mathbb{R}^{m\times n}$ whose rows correspond to variables and whose columns correspond to samples, biclustering seeks sample-variable associations in the form of salient submatrices. Without loss of generality, we consider square matrices $X \in \mathbb{R}^{n\times n}$ of the form
$$X = M + Z, \qquad Z_{ij} \sim N(0,\sigma^2), \qquad M = \mu\, e_{I_0} e_{J_0}^T, \quad \mu \ge 0,\; I_0, J_0 \subset [n], \qquad (1.1)$$
where the components of $e_I$, $I \subset [n]$, are given by $(e_I)_i = 1$ if $i \in I$ and $(e_I)_i = 0$ otherwise.
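The model (1.1) is straightforward to simulate. The sketch below (pure Python; the matrix size, block size, signal strength, and seed are arbitrary choices for illustration) plants a $k \times k$ block with elevated mean $\mu$ in Gaussian noise:

```python
import random

def planted_submatrix(n, k, mu, sigma, seed=0):
    """Draw X = M + Z as in (1.1): M equals mu on a hidden k x k block
    I0 x J0 and zero elsewhere; Z has i.i.d. N(0, sigma^2) entries."""
    rng = random.Random(seed)
    I0 = set(rng.sample(range(n), k))
    J0 = set(rng.sample(range(n), k))
    X = [[mu * (i in I0 and j in J0) + rng.gauss(0.0, sigma)
          for j in range(n)] for i in range(n)]
    return X, I0, J0

X, I0, J0 = planted_submatrix(n=12, k=4, mu=3.0, sigma=1.0)
block_mean = sum(X[i][j] for i in I0 for j in J0) / 16
rest_mean = sum(X[i][j] for i in range(12) for j in range(12)
                if not (i in I0 and j in J0)) / (144 - 16)
print(round(block_mean, 2), round(rest_mean, 2))
```

The planted block's sample mean hovers near $\mu$ while the rest of the matrix averages near zero, which is what the localization algorithms below try to exploit.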
For our theoretical results, we assume the size of the embedded submatrix, $|I_0| = |J_0| = k$, and the noise variance $\sigma^2$ are known. The biclustering problem, due to its practical relevance, has attracted considerable attention, and most previous work focuses on finding significant submatrices. A large class of biclustering algorithms are score-based, i.e. they search for submatrices that maximize some score function measuring the "significance" of a submatrix. In this paper, we focus on evaluating the significance of submatrices found by score-based biclustering algorithms. More precisely, let $I(X), J(X) \subset [n]$ be a (random) pair output by a biclustering algorithm. We seek to test whether the localized submatrix $X_{I(X),J(X)}$ contains any signal, i.e. to test the hypothesis
$$H_0:\; \sum_{i\in I(X),\, j\in J(X)} M_{ij} = 0. \qquad (1.2)$$
Since the hypothesis depends on the (random) output of the biclustering algorithm, this is a form of selective inference. The distribution of the test statistic $\sum_{i\in I(X),\, j\in J(X)} X_{ij}$ depends on the specific algorithm, and is extremely difficult to derive for many heuristic biclustering algorithms. Our main contribution is a test of whether a biclustering algorithm has found a statistically significant bicluster. The tests and confidence intervals we construct are exact, meaning that in finite samples the type 1 error is exactly $\alpha$.

This paper is organized as follows. First, we review recent work on biclustering and related problems. Then, in section 2, we describe our framework for performing inference in the context of a simple biclustering algorithm based on a scan statistic. We show:
1. the framework gives exact (non-asymptotic) $\mathrm{Unif}(0,1)$ p-values under $H_0$, and the p-values can be "inverted" to form confidence intervals for the amount of signal in $X_{I(X),J(X)}$;
2. under the minimax signal-to-noise ratio (SNR) regime $\mu \gtrsim \sqrt{\log n / k}$, the test has full asymptotic power.
In section 4, we show the framework handles more computationally tractable biclustering algorithms, including a greedy algorithm originally proposed by Shabalin et al. [12]. In the supplementary materials, we discuss the problem in the more general setting where there are multiple embedded submatrices. Finally, we present experimental validation of the various tests and biclustering algorithms.

1.1 Related work

A slightly easier problem is submatrix detection: testing whether a matrix has an embedded submatrix with nonzero mean [1, 4]. This problem was recently studied by Ma and Wu [11], who characterized the minimum signal strength $\mu$ required for any test, and for any computationally tractable test, to reliably detect an embedded submatrix. We emphasize that the problem we consider is not the submatrix detection problem, but a complementary one. Submatrix detection asks whether there are any hidden row-column associations in a matrix; we ask whether a submatrix selected by a biclustering algorithm captures the hidden association(s). In practice, given a matrix, a practitioner might perform (in order):
1. submatrix detection: check for a hidden submatrix with elevated mean;
2. submatrix localization: attempt to find the hidden submatrix;
3. selective inference: check whether the selected submatrix captures any signal.
We focus on the third step of this pipeline. Results on evaluating the significance of selected submatrices are scarce. The only result we know of is by Bhamidi, Dey and Nobel, who characterized the asymptotic distribution of the largest $k\times k$ average submatrix in Gaussian random matrices [6]; their result may be used to form an asymptotic test of (1.2). The submatrix localization problem, due to its practical relevance, has attracted considerable attention [5, 2, 3]. Most prior work focuses on finding significant submatrices. Broadly speaking, submatrix localization procedures fall into one of two types: score-based search procedures and spectral algorithms.
The main idea behind the score-based approach to submatrix localization is that significant submatrices should maximize some score measuring the "significance" of a submatrix, e.g. the average of its entries [12] or the goodness-of-fit of a two-way ANOVA model [8, 9]. Since there are exponentially many submatrices, many score-based search procedures use heuristics to reduce the search space. Such heuristics are not guaranteed to succeed, but often perform well in practice. One of the purposes of our work is to test whether a heuristic algorithm has identified a significant submatrix.

The submatrix localization problem exhibits a statistical and computational trade-off that was first studied by Balakrishnan et al. [5], who compare the SNR required by several computationally efficient algorithms to the minimax SNR. Recently, Chen and Xu [7] studied the trade-off when there are several embedded submatrices. In this more general setting, they show the SNR required by convex relaxation is smaller than the SNR required by entry-wise thresholding; the power of convex relaxation thus lies in separating clusters/submatrices, not in identifying a single cluster/submatrix.

2 A framework for evaluating the significance of a submatrix

Our main contribution is a framework for evaluating the significance of a submatrix selected by a biclustering algorithm. The framework allows us to perform exact (non-asymptotic) inference on the selected submatrix. In this section, we develop the framework on a (very) simple score-based algorithm that outputs the largest average submatrix. At a high level, our framework consists of characterizing the selection event $\{(I(X), J(X)) = (I,J)\}$ and applying the key distributional result of [10] to obtain a pivotal quantity.

2.1 The significance of the largest average submatrix

To begin, we consider performing inference on the output of the simple algorithm that returns the $k\times k$ submatrix with the largest sum.
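The exhaustive search can be written in a few lines; the sketch below (Python) is only feasible for tiny $n$ since it enumerates all $\binom{n}{k}^2$ submatrices, and the toy matrix is a hypothetical example:

```python
from itertools import combinations

def las(X, k):
    """Exhaustive Largest Average Submatrix: return the (I, J) maximizing the
    sum of the k x k submatrix X[I, J]. Cost is O(C(n,k)^2), so tiny n only."""
    n = len(X)
    best, best_IJ = float("-inf"), None
    for I in combinations(range(n), k):
        for J in combinations(range(n), k):
            s = sum(X[i][j] for i in I for j in J)
            if s > best:
                best, best_IJ = s, (I, J)
    return best_IJ, best

# Toy matrix with an obvious 2 x 2 block of large entries at rows/cols {0, 1}.
X = [[5.0, 5.0, 0.1, 0.2],
     [5.0, 5.0, 0.0, -0.1],
     [0.3, 0.1, 0.2, 0.0],
     [0.0, -0.2, 0.1, 0.3]]
(I, J), score = las(X, 2)
print(I, J, score)  # → (0, 1) (0, 1) 20.0
```

Since the $k\times k$ sum and the $k\times k$ average differ only by the constant factor $k^2$, maximizing the sum is equivalent to maximizing the average.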
Let $\mathcal{S}$ be the set of indices of all $k\times k$ submatrices of $X$, i.e. $\mathcal{S} = \{(I,J) \mid I, J \subset [n],\, |I| = |J| = k\}$. The Largest Average Submatrix (LAS) algorithm returns the pair
$$(I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)) = \arg\max_{(I,J)\in\mathcal{S}} e_I^T X e_J.$$
The optimal value $S_{(1)} = \mathrm{tr}\big(e_{J_{\mathrm{LAS}}(X)} e_{I_{\mathrm{LAS}}(X)}^T X\big)$ is distributed like the maximum of $\binom{n}{k}^2$ (correlated) normal random variables. Although results on the asymptotic distribution ($k$ fixed, $n$ growing) of $S_{(1)}$ under $H_0: \mu = 0$ are known (e.g. Theorem 2.1 in [6]), we are not aware of any result that characterizes its finite-sample distribution. To avoid this pickle, we condition on the selection event
$$E_{\mathrm{LAS}}(I,J) = \{(I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)) = (I,J)\} \qquad (2.1)$$
and work with the distribution of $X \mid \{(I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)) = (I,J)\}$.

We begin with a key observation: the selection event (2.1) is equivalent to $X$ satisfying the set of linear inequalities
$$\mathrm{tr}\big(e_J e_I^T X\big) \ge \mathrm{tr}\big(e_{J'} e_{I'}^T X\big) \quad \text{for any } (I',J') \in \mathcal{S}\setminus(I,J). \qquad (2.2)$$
Thus the selection event is equivalent to $X$ falling in the polyhedral set
$$C_{\mathrm{LAS}}(I,J) = \big\{X \in \mathbb{R}^{n\times n} \mid \mathrm{tr}(e_J e_I^T X) \ge \mathrm{tr}(e_{J'} e_{I'}^T X) \text{ for any } (I',J') \in \mathcal{S}\setminus(I,J)\big\}, \qquad (2.3)$$
and $X \mid \{(I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)) = (I,J)\} = X \mid \{X \in C_{\mathrm{LAS}}(I,J)\}$ is a constrained Gaussian random variable. Recall that our goal is to perform inference on the amount of signal in the selected submatrix $X_{I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)}$. This task is akin to performing inference on the mean parameter¹ of a constrained Gaussian random variable, namely $X \mid \{X \in C_{\mathrm{LAS}}(I,J)\}$, and we apply the selective inference framework of Lee et al. [10] to accomplish it. Before delving into the details, we review the key distributional result in [10] concerning constrained Gaussian random variables.

Theorem 2.1. Consider a Gaussian random variable $y \in \mathbb{R}^n$ with mean $\nu \in \mathbb{R}^n$ and covariance $\Sigma \in \mathbb{S}^{n\times n}_{++}$, constrained to a polyhedral set $C = \{y \in \mathbb{R}^n \mid Ay \le b\}$ for some $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$.
¹The mean parameter is the mean of the Gaussian prior to truncation.

Let $\eta \in \mathbb{R}^n$ represent a linear function of $y$. Define $\alpha = \frac{A\Sigma\eta}{\eta^T\Sigma\eta}$ and
$$\mathcal{V}^+(y) = \min_{j:\,\alpha_j>0} \frac{1}{\alpha_j}\big(b_j - (Ay)_j + \alpha_j\,\eta^T y\big), \qquad (2.4)$$
$$\mathcal{V}^-(y) = \max_{j:\,\alpha_j<0} \frac{1}{\alpha_j}\big(b_j - (Ay)_j + \alpha_j\,\eta^T y\big), \qquad (2.5)$$
$$\mathcal{V}^0(y) = \min_{j:\,\alpha_j=0} \big(b_j - (Ay)_j\big), \qquad (2.6)$$
$$F(x, \nu, \sigma^2, a, b) = \frac{\Phi\big(\frac{x-\nu}{\sigma}\big) - \Phi\big(\frac{a-\nu}{\sigma}\big)}{\Phi\big(\frac{b-\nu}{\sigma}\big) - \Phi\big(\frac{a-\nu}{\sigma}\big)}. \qquad (2.7)$$
Then $F\big(\eta^T y, \eta^T\nu, \eta^T\Sigma\eta, \mathcal{V}^-(y), \mathcal{V}^+(y)\big)$ is a pivotal quantity with a $\mathrm{Unif}(0,1)$ distribution, i.e.
$$F\big(\eta^T y, \eta^T\nu, \eta^T\Sigma\eta, \mathcal{V}^-(y), \mathcal{V}^+(y)\big) \mid \{Ay \le b\} \sim \mathrm{Unif}(0,1). \qquad (2.8)$$

Remark 2.2. The truncation limits $\mathcal{V}^+(y)$ and $\mathcal{V}^-(y)$ (and $\mathcal{V}^0(y)$) depend on $\eta$ and the polyhedral set $C$; we omit the dependence to keep the notation manageable.

Recall $X \mid \{E_{\mathrm{LAS}}(I,J)\}$ is a Gaussian random variable constrained to the polyhedral set $C_{\mathrm{LAS}}(I,J)$ given by (2.3). By Theorem 2.1 and the characterization of the selection event $E_{\mathrm{LAS}}(I,J)$, the random variable
$$F\big(S_{(1)}, \mathrm{tr}(e_J e_I^T M), \sigma^2 k^2, \mathcal{V}^-(X), \mathcal{V}^+(X)\big) \mid \{E_{\mathrm{LAS}}(I,J)\},$$
where $\mathcal{V}^+(X)$ and $\mathcal{V}^-(X)$ (and $\mathcal{V}^0(X)$) are evaluated on the polyhedral set $C_{\mathrm{LAS}}(I,J)$, is uniformly distributed on the unit interval. The mean parameter $\mathrm{tr}(e_J e_I^T M)$ is the amount of signal captured by $X_{I,J}$:
$$\mathrm{tr}\big(e_J e_I^T M\big) = |I\cap I_0|\,|J\cap J_0|\,\mu.$$

What are $\mathcal{V}^+(X)$ and $\mathcal{V}^-(X)$? Let $E_{I',J'} = e_{I'} e_{J'}^T$ for any $I', J' \subset [n]$. For convenience, we index the constraints (2.2) by the pairs $(I',J')$. The term $\alpha_{I',J'}$ is given by
$$\alpha_{I',J'} = \frac{|I\cap I'|\,|J\cap J'| - k^2}{k^2}.$$
Since $|I\cap I'|\,|J\cap J'| < k^2$, $\alpha_{I',J'}$ is negative for any $(I',J') \in \mathcal{S}\setminus(I,J)$, and the upper truncation limit $\mathcal{V}^+(X)$ is $\infty$. The lower truncation limit simplifies to
$$\mathcal{V}^-(X) = \max_{(I',J'):\,\alpha_{I',J'}<0}\; \mathrm{tr}\big(E_{I,J}^T X\big) - \frac{k^2\,\mathrm{tr}\big((E_{I,J} - E_{I',J'})^T X\big)}{k^2 - |I\cap I'|\,|J\cap J'|}. \qquad (2.9)$$
We summarize the developments thus far in a corollary.

Corollary 2.3.
We have
$$F\big(S_{(1)}, \mathrm{tr}(e_J e_I^T M), k^2\sigma^2, \mathcal{V}^-(X), \infty\big) \mid \{E_{\mathrm{LAS}}(I,J)\} \sim \mathrm{Unif}(0,1), \qquad (2.10)$$
$$\mathcal{V}^-(X) = \max_{(I',J'):\,\alpha_{I',J'}<0}\; \mathrm{tr}\big(E_{I,J}^T X\big) - \frac{k^2\,\mathrm{tr}\big((E_{I,J} - E_{I',J'})^T X\big)}{k^2 - |I\cap I'|\,|J\cap J'|}. \qquad (2.11)$$

Under the hypothesis
$$H_0:\; \mathrm{tr}\big(e_{J_{\mathrm{LAS}}(X)} e_{I_{\mathrm{LAS}}(X)}^T M\big) = 0, \qquad (2.12)$$
we expect $F\big(S_{(1)}, 0, k^2\sigma^2, \mathcal{V}^-(X), \infty\big) \mid \{E_{\mathrm{LAS}}(I,J)\} \sim \mathrm{Unif}(0,1)$. Thus $1 - F\big(S_{(1)}, 0, k^2\sigma^2, \mathcal{V}^-(X), \infty\big)$ is a p-value for the hypothesis (2.12). Under the alternative, we expect the selected submatrix to be (stochastically) larger than under the null. Thus rejecting $H_0$ when the p-value is smaller than $\alpha$ is an exact $\alpha$-level test of $H_0$; i.e. $\Pr_0(\text{reject } H_0 \mid \{E_{\mathrm{LAS}}(I,J)\}) = \alpha$. Since the test controls Type I error at $\alpha$ for all possible selection events (i.e. all possible outcomes of the LAS algorithm), it also controls Type I error unconditionally:
$$\Pr_0(\text{reject } H_0) = \sum_{I,J\subset[n]} \Pr_0\big(\text{reject } H_0 \mid \{E_{\mathrm{LAS}}(I,J)\}\big)\Pr_0\big(\{E_{\mathrm{LAS}}(I,J)\}\big) \le \alpha \sum_{I,J\subset[n]} \Pr_0\big(\{E_{\mathrm{LAS}}(I,J)\}\big) = \alpha.$$
Thus the test is an exact $\alpha$-level test of $H_0$. We summarize the result in a theorem.

Theorem 2.4. The test that rejects when
$$F\big(S_{(1)}, 0, k^2\sigma^2, \mathcal{V}^-(X), \infty\big) \ge 1 - \alpha,$$
$$\mathcal{V}^-(X) = \max_{(I',J'):\,\alpha_{I',J'}<0}\; \mathrm{tr}\big(E_{I_{\mathrm{LAS}}(X),J_{\mathrm{LAS}}(X)}^T X\big) - \frac{k^2\,\mathrm{tr}\big((E_{I_{\mathrm{LAS}}(X),J_{\mathrm{LAS}}(X)} - E_{I',J'})^T X\big)}{k^2 - |I_{\mathrm{LAS}}(X)\cap I'|\,|J_{\mathrm{LAS}}(X)\cap J'|},$$
is a valid $\alpha$-level test of $H_0: \sum_{i\in I(X),\, j\in J(X)} M_{ij} = 0$.

To obtain confidence intervals for the amount of signal in the selected submatrix, we "invert" the pivotal quantity (2.10). By Corollary 2.3, the interval
$$\Big\{\nu \in \mathbb{R} : \tfrac{\alpha}{2} \le F\big(S_{(1)}, \nu, k^2\sigma^2, \mathcal{V}^-(X), \infty\big) \le 1 - \tfrac{\alpha}{2}\Big\} \qquad (2.13)$$
is an exact $1-\alpha$ confidence interval for $\sum_{i\in I(X),\, j\in J(X)} M_{ij}$. When $(I_{\mathrm{LAS}}(X), J_{\mathrm{LAS}}(X)) = (I_0, J_0)$, (2.13) is a confidence interval for $\mu$. Like the test of Theorem 2.4, the confidence intervals (2.13) are also valid unconditionally.

2.2 Power under the minimax signal-to-noise ratio

In section 2, we derived an exact (non-asymptotically valid) test of the hypothesis (2.12). In this section, we study the power of the test.
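Before turning to power, note that the pivot (2.7) with a one-sided truncation, as used in Theorem 2.4, takes only a few lines to implement. A minimal sketch (Python, computing the standard normal CDF via `math.erf`; the sanity-check values are illustrative):

```python
import math

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def F(x, nu, sigma2, a, b):
    """Truncated-normal pivot (2.7); handles b = +inf, where Phi(inf) = 1."""
    s = math.sqrt(sigma2)
    hi = 1.0 if b == float("inf") else Phi((b - nu) / s)
    lo = Phi((a - nu) / s)
    return (Phi((x - nu) / s) - lo) / (hi - lo)

# Sanity checks: the pivot is 0 at the lower truncation limit, 1 at the upper
# one, and monotone in x; under H0 (nu = 0) a large statistic pushes F near 1.
print(F(0.0, 0.0, 1.0, 0.0, float("inf")))   # → 0.0
print(F(3.0, 0.0, 1.0, 0.0, float("inf")))
```

With a scan statistic $x = S_{(1)}$, lower limit $a = \mathcal{V}^-(X)$, and $b = \infty$, rejecting when $F \ge 1-\alpha$ reproduces the test of Theorem 2.4.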
Before delving into the details, we review some relevant results to place ours in context. Balakrishnan et al. [5] show that $\mu$ must be at least $\Theta\big(\sigma\sqrt{\log(n-k)/k}\big)$ for any algorithm to succeed (find the embedded submatrix) with high probability. They also show the LAS algorithm is minimax rate optimal: the LAS algorithm finds the embedded submatrix with probability $1 - \frac{4}{n-k}$ when $\mu \ge 4\sigma\sqrt{2\log(n-k)/k}$. We show that the test given by Theorem 2.4 has full asymptotic power under the same signal strength. The proof is given in the appendix.

Theorem 2.5. Let $\mu = C\sqrt{2\log(n-k)/k}$. When
$$C > \max\bigg(\frac{1}{\sqrt{\alpha}\,\log(n-k)\,(\sqrt{k}-5/4)},\; 4 + 4\sqrt{\frac{\log\frac{2}{\alpha}}{\log(n-k)}}\bigg)$$
and $k \le \frac{n}{2}$, the $\alpha$-level test given by Corollary 2.3 has power at least $1 - \frac{5}{n-k}$; i.e. $\Pr(\text{reject } H_0) \ge 1 - \frac{5}{n-k}$. Further, for any sequence $(n,k)$ with $n \to \infty$, when $C > 4$ and $k \le \frac{n}{2}$, $\Pr(\text{reject } H_0) \to 1$.

3 General scan statistics

Although we have presented our framework in the context of biclustering, it readily extends to scan statistics. Let $z \sim N(\mu, \Sigma)$, where $\mathbb{E}[z]$ has the form $\mathbb{E}[z_i] = \mu$ for $i \in S$ and $0$ otherwise, for some $\mu > 0$ and $S \subset [n]$. The set $S$ belongs to a collection $\mathcal{C} = \{S_1, \dots, S_N\}$. We decide which index set in $\mathcal{C}$ generated the data by
$$\hat S = \arg\max_{S\in\mathcal{C}} \textstyle\sum_{i\in S} z_i. \qquad (3.1)$$
Given $\hat S$, we are interested in testing the null hypothesis
$$H_0:\; \mathbb{E}[z_{\hat S}] = 0. \qquad (3.2)$$
To perform exact inference on the selected effect $\mu_{\hat S}$, we must first characterize the selection event. We observe that the selection event $\{\hat S = S\}$ is equivalent to $z$ satisfying the set of linear inequalities
$$e_S^T z \ge e_{S'}^T z \quad \text{for any } S' \in \mathcal{C}\setminus S. \qquad (3.3)$$
Given the form of the constraints (3.3),
$$a_{S'} = \frac{(e_{S'} - e_S)^T e_S}{e_S^T e_S} = \frac{1}{|S|}\big(|S\cap S'| - |S|\big) \quad \text{for any } S' \in \mathcal{C}\setminus S.$$
Since $|S\cap S'| \le |S|$, we have $a_{S'} \in [-1, 0]$, which implies $\mathcal{V}^+(z) = \infty$. The term $\mathcal{V}^-(z)$ also simplifies:
$$\mathcal{V}^-(z) = \sup_{S'} \frac{1}{a_{S'}}\big((e_S - e_{S'})^T z + a_{S'}\, e_S^T z\big) = e_S^T z + \sup_{S'} \frac{1}{a_{S'}}(e_S - e_{S'})^T z.$$
Let $z_{(1)}, z_{(2)}$ be the largest and second largest scan statistics.
We have
$$\mathcal{V}^-(z) \le z_{(1)} + \sup_{S'}\big((e_{S'} - e_S)^T z\big) = z_{(1)} + z_{(2)} - z_{(1)} = z_{(2)}.$$
Intuitively, the pivot will be large (the p-value small) when $e_S^T z$ exceeds the lower truncation limit $\mathcal{V}^-$ by a large margin. Since the second largest scan statistic is an upper bound on the lower truncation limit, the test will reject when $z_{(1)}$ exceeds $z_{(2)}$ by a large margin.

Theorem 3.1. The test that rejects when
$$F\big(z_{(1)}, 0, k^2\sigma^2, \mathcal{V}^-(z), \infty\big) \ge 1 - \alpha,$$
where $\mathcal{V}^-(z) = e_{\hat S}^T z + \sup_{S'} \frac{1}{a_{S'}}(e_{\hat S} - e_{S'})^T z$, is a valid $\alpha$-level test of $H_0: e_{\hat S}^T \mu = 0$.

To our knowledge, most procedures for obtaining valid inference on scan statistics require a careful characterization of the asymptotic distribution of $e_{\hat S}^T z$. Such results are usually valid only when the components of $z$ are independent with identical variances (e.g. [6]), and can only be used to test the global null $H_0: \mathbb{E}[z] = 0$. Our framework not only relaxes the independence and homoskedasticity assumptions, but also yields confidence intervals for the selected effect size.

4 Extensions to other score-based approaches

Returning to the submatrix localization problem, we note that the framework described in section 2 also readily handles other score-based approaches, as long as the scores are affine functions of the entries. The main idea is to partition $\mathbb{R}^{n\times n}$ into non-overlapping regions corresponding to the possible outcomes of the algorithm; i.e. the event that the algorithm outputs a particular submatrix is equivalent to $X$ falling in the corresponding region of $\mathbb{R}^{n\times n}$. In this section, we show how to perform exact inference on biclusters found by more computationally tractable algorithms.

4.1 Greedy search

Searching over all $\binom{n}{k}^2$ submatrices to find the largest average submatrix is computationally intractable for all but the smallest matrices. Here we consider a family of heuristics based on a greedy search algorithm proposed by Shabalin et al. [12] that looks for "local" largest average submatrices.
Their approach is widely used to discover genotype-phenotype associations in high-dimensional gene expression data. Here the score is simply the sum of the entries in a submatrix.

Algorithm 1 Greedy search algorithm
1: Initialize: select J_0 ⊂ [n].
2: repeat
3:   I_{l+1} ← the indices of the k rows with the largest sums over the columns in J_l
4:   J_{l+1} ← the indices of the k columns with the largest sums over the rows in I_{l+1}
5: until convergence

To adapt the framework laid out in Section 2 to the greedy search algorithm, we must characterize the selection event. Here the selection event is the "path" of the greedy search:

E_GrS = E_GrS((I^1, J^1), (I^2, J^2), ...)

is the event that the greedy search selected (I^1, J^1) at the first step, (I^2, J^2) at the second step, etc. In practice, to ensure stable performance of the greedy algorithm, Shabalin et al. propose to run the greedy search with random initialization 1000 times and select the largest local maximum. Suppose the m*-th greedy search outputs the largest local maximum. The selection event is

E_GrS,1 ∩ ... ∩ E_GrS,1000 ∩ { m* = argmax_{m=1,...,1000} e_{I_GrS,m(X)}^T X e_{J_GrS,m(X)} },

where E_GrS,m = E_GrS((I^1_m, J^1_m), (I^2_m, J^2_m), ...), m = 1, ..., 1000, is the event that the m-th greedy search selected (I^1_m, J^1_m) at the first step, (I^2_m, J^2_m) at the second step, etc.

An alternative to running the greedy search with random initialization many times and picking the largest local maximum is to initialize the greedy search intelligently. Let J_greedy(X) be the output of the intelligent initialization. The selection event is given by

E_GrS ∩ { J_greedy(X) = J_0 },  (4.1)

where E_GrS is the event that the greedy search selected (I^1, J^1) at the first step, (I^2, J^2) at the second step, etc. The intelligent initialization selects J_0 when

e_{[n]}^T X e_j ≥ e_{[n]}^T X e_{j'} for any j ∈ J_0, j' ∈ [n] \ J_0,  (4.2)

which corresponds to selecting the k columns with the largest sums.
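The alternating update in Algorithm 1 and the initialization rule (4.2) are easy to state concretely. The following is a minimal, dependency-free sketch with our own variable names; Shabalin et al.'s actual implementation differs in details such as tie-breaking and restart handling.

```python
import random

def greedy_bicluster(X, k, J0=None, max_iter=100, seed=0):
    """Alternating greedy search for a 'local' largest-average k x k submatrix:
    given columns J, take the k rows with the largest sums over J, then the k
    columns with the largest sums over those rows; repeat until J stabilizes."""
    n = len(X)
    J = sorted(J0) if J0 is not None else sorted(random.Random(seed).sample(range(n), k))
    I = []
    for _ in range(max_iter):
        I = sorted(sorted(range(n), key=lambda i: sum(X[i][j] for j in J))[-k:])
        J_new = sorted(sorted(range(n), key=lambda j: sum(X[i][j] for i in I))[-k:])
        if J_new == J:
            break
        J = J_new
    return I, J, sum(X[i][j] for i in I for j in J)

def intelligent_init(X, k):
    """Eq. (4.2): start from the k columns with the largest column sums."""
    n = len(X)
    col_sum = [sum(X[i][j] for i in range(n)) for j in range(n)]
    return sorted(sorted(range(n), key=col_sum.__getitem__)[-k:])
```

Each iteration strictly increases the submatrix sum (or leaves it unchanged), so the search converges to a local maximum; which local maximum depends entirely on J_0, which is why the choice between random restarts and the intelligent initialization matters.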
Thus the selection event is equivalent to X falling in the polyhedral set

C_GrS ∩ { X ∈ R^{n×n} | tr(e_j e_{[n]}^T X) ≥ tr(e_{j'} e_{[n]}^T X) for any j ∈ J_0, j' ∈ [n] \ J_0 },

where C_GrS is the constraint set corresponding to the selection event E_GrS (see the Appendix for an explicit characterization).

4.2 Largest row/column sum test

An alternative to running the greedy search is to use a test statistic based on choosing the k rows and columns with the largest sums. The largest row/column sum test selects a subset of columns J_0 when

e_{[n]}^T X e_j ≥ e_{[n]}^T X e_{j'} for any j ∈ J_0, j' ∈ [n] \ J_0,  (4.3)

which corresponds to selecting the k columns with the largest sums. Similarly, it selects the rows I_0 with the largest sums. Thus the selection event for initialization at (I_0, J_0) is equivalent to X falling in the polyhedral set

{ X ∈ R^{n×n} | tr(e_j e_{[n]}^T X) ≥ tr(e_{j'} e_{[n]}^T X) for any j ∈ J_0, j' ∈ [n] \ J_0 } ∩ { X ∈ R^{n×n} | tr(e_i e_{[n]}^T X) ≥ tr(e_{i'} e_{[n]}^T X) for any i ∈ I_0, i' ∈ [n] \ I_0 }.  (4.4)

The procedure of selecting the k rows/columns with the largest sums was analyzed in [5]. They proved that when µ ≥ (4/k)√(n log(n−k)) the procedure recovers the planted submatrix. We show a similar result for the test statistic based on the intelligent initialization,

F( tr(e_{J_0(X)} e_{I_0(X)}^T X), 0, σ²k², V^−(X), V^+(X) ).  (4.5)

Under the null µ = 0, the statistic (4.5) is uniformly distributed, so type 1 error is controlled at level α. The theorem below shows that this computationally tractable test has power tending to 1 for µ > (4/k)√(n log(n−k)).

Theorem 4.1. Let µ = (C/k)√(n log(n−k)). Assume that n ≥ 2 exp(1) and k ≤ n/2. When C > max{ 4√(1 + 1/(4n²)) + 2/n, 2 log(2/α)/√(log(n−k)) + √2/n }, the α-level test given by Corollary 2.3 has power at least 1 − 9/(n−k); i.e., Pr(reject H0) ≥ 1 − 9/(n−k). Further, for any sequence (n, k) such that n → ∞, when C > 4 and k ≤ n/2, Pr(reject H0) → 1.
Figure 1: Power vs. rescaled signal strength under random initialization with 10 restarts, for k = log n, k = √n, and k = 0.2n (n = 50, 100, 500, 1000).

Figure 2: Power vs. rescaled signal strength under the intelligent initialization, for the same settings.

In practice, we have found that initializing the greedy algorithm with the rows and columns identified by the largest row/column sum test stabilizes the performance of the greedy algorithm and preserves power. By intersecting the selection events from the largest row/column sum test and the greedy algorithm, the test also controls type 1 error. Let (I_loc(X), J_loc(X)) be the pair of indices returned by the greedy algorithm initialized with the (I_0, J_0) from the largest row/column sum test. The test statistic is given by

F( tr(e_{J_loc(X)} e_{I_loc(X)}^T X), 0, σ²k², V^−(X), V^+(X) ),  (4.6)

where V^+(X), V^−(X) are now computed using the intersection of the greedy and the largest row/column sum selection events. This statistic is also uniformly distributed under the null.

We test the performance of the biclustering algorithms: Algorithm 1 with the intelligent initialization in (4.4), and Algorithm 1 with 10 random restarts. We generate data from the model (1.1) for various values of n and k. We only test the power of each procedure, since all of the algorithms discussed provably control type 1 error. The results are shown in Figures 1 and 2. The y-axis shows power (the probability of rejecting) and the x-axis is the rescaled signal strength µ / √(2 log(n−k)/k). The tests were calibrated to control type 1 error at α = .1, so any power over .1 is nontrivial.
From the k = log n plot, we see that the intelligently initialized greedy procedure outperforms the greedy algorithm with a single random initialization and the greedy algorithm with 10 random initializations.

5 Conclusion

In this paper, we considered the problem of evaluating the statistical significance of the output of several biclustering algorithms. By casting the problem as a selective inference problem, we are able to devise exact significance tests and confidence intervals for the selected bicluster. We also show how the framework generalizes to the more practical problem of evaluating the significance of multiple biclusters. In this setting, our approach gives sequential tests that control the family-wise error rate in the strong sense.

References

[1] Louigi Addario-Berry, Nicolas Broutin, Luc Devroye, Gábor Lugosi, et al. On combinatorial testing problems. The Annals of Statistics, 38(5):3063–3092, 2010.
[2] Brendan PW Ames. Guaranteed clustering and biclustering via semidefinite programming. Mathematical Programming, pages 1–37, 2012.
[3] Brendan PW Ames and Stephen A Vavasis. Convex optimization for the planted k-disjoint-clique problem. Mathematical Programming, 143(1-2):299–337, 2014.
[4] Ery Arias-Castro, Emmanuel J Candes, Arnaud Durand, et al. Detection of an anomalous cluster in a network. The Annals of Statistics, 39(1):278–304, 2011.
[5] Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman. Statistical and computational tradeoffs in biclustering. In NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning, 2011.
[6] Shankar Bhamidi, Partha S Dey, and Andrew B Nobel. Energy landscape for large average submatrix detection problems in Gaussian random matrices. arXiv preprint arXiv:1211.2284, 2012.
[7] Yudong Chen and Jiaming Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices.
arXiv preprint arXiv:1402.1267, 2014.
[8] Yizong Cheng and George M Church. Biclustering of expression data. In ISMB, volume 8, pages 93–103, 2000.
[9] Laura Lazzeroni and Art Owen. Plaid models for gene expression data. Statistica Sinica, 12(1):61–86, 2002.
[10] Jason D Lee, Dennis L Sun, Yuekai Sun, and Jonathan E Taylor. Exact post-selection inference with the lasso. arXiv preprint arXiv:1311.6238, 2013.
[11] Zongming Ma and Yihong Wu. Computational barriers in minimax submatrix detection. arXiv preprint arXiv:1309.5914, 2013.
[12] Andrey A Shabalin, Victor J Weigman, Charles M Perou, and Andrew B Nobel. Finding large average submatrices in high dimensional data. The Annals of Applied Statistics, pages 985–1012, 2009.
Fast and Guaranteed Tensor Decomposition via Sketching Yining Wang, Hsiao-Yu Tung, Alex Smola Machine Learning Department Carnegie Mellon University, Pittsburgh, PA 15213 {yiningwa,htung}@cs.cmu.edu alex@smola.org Anima Anandkumar Department of EECS University of California Irvine Irvine, CA 92697 a.anandkumar@uci.edu Abstract Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results. Keywords: Tensor CP decomposition, count sketch, randomized methods, spectral methods, topic modeling 1 Introduction In many data-rich domains such as computer vision, neuroscience and social networks consisting of multi-modal and multi-relational data, tensors have emerged as a powerful paradigm for handling the data deluge. An important operation with tensor data is its decomposition, where the input tensor is decomposed into a succinct form. 
One of the popular decomposition methods is the CANDECOMP/PARAFAC (CP) decomposition, also known as canonical polyadic decomposition [12, 5], where the input tensor is decomposed into a succinct sum of rank-1 components. The CP decomposition has found numerous applications in data mining [4, 18, 20], computational neuroscience [10, 21], and recently, in statistical learning for latent variable models [1, 30, 28, 6]. For latent variable modeling, these methods yield consistent estimates under mild conditions such as non-degeneracy and require only polynomial sample and computational complexity [1, 30, 28, 6]. Given the importance of tensor methods for large-scale machine learning, there has been an increasing interest in scaling up tensor decomposition algorithms to handle gigantic real-world data tensors [27, 24, 8, 16, 14, 2, 29]. However, the previous works fall short in many ways, as described subsequently. In this paper, we design and analyze efficient randomized tensor methods using ideas from sketching [23]. The idea is to maintain a low-dimensional sketch of an input tensor and then perform implicit tensor decomposition using existing methods such as tensor power updates, alternating least squares or online tensor updates. We obtain the fastest decomposition methods for both sparse and dense tensors. Our framework can easily handle modern machine learning applications with billions of training instances, and at the same time, comes with attractive theoretical guarantees. 1 Our main contributions are as follows: Efficient tensor sketch construction: We propose efficient construction of tensor sketches when the input tensor is available in factored forms such as in the case of empirical moment tensors, where the factor components correspond to rank-1 tensors over individual data samples. We construct the tensor sketch via efficient FFT operations on the component vectors. 
Sketching each rank-1 component takes O(n + b log b) operations, where n is the tensor dimension and b is the sketch length. This is much faster than the O(n^p) complexity of brute force computation for a pth-order tensor. Since empirical moment tensors are available in factored form with N components, where N is the number of samples, it takes O((n + b log b)N) operations to compute the sketch. Implicit tensor contraction computations: Almost all tensor manipulations can be expressed in terms of tensor contractions, which involve multilinear combinations of different tensor fibres [19]. For example, tensor decomposition methods such as tensor power iterations, alternating least squares (ALS), whitening and online tensor methods all involve tensor contractions. We propose a highly efficient method to directly compute the tensor contractions without forming the input tensor explicitly. In particular, given the sketch of a tensor, each tensor contraction can be computed in O(n + b log b) operations, regardless of the orders of the source and destination tensors. This significantly accelerates the brute-force implementation, which requires O(n^p) operations for a pth-order tensor contraction. In addition, in many applications the input tensor is not directly available and needs to be computed from samples, as in the case of empirical moment tensors for spectral learning of latent variable models. In such cases, our method results in huge savings by combining implicit tensor contraction computation with efficient tensor sketch construction. Novel colliding hashes for symmetric tensors: When the input tensor is symmetric, which is the case for empirical moment tensors that arise in spectral learning applications, we propose a novel colliding hash design that replaces the Boolean ring with the complex ring C to handle multiplicities. As a result, the sketch building process becomes much faster and repetitive FFT operations are avoided.
Though the computational complexity remains the same, the proposed colliding hash design results in significant speed-up in practice by reducing the actual number of computations. Theoretical and empirical guarantees: We show that the quality of the tensor sketch does not depend on sparseness, uniform entry distribution, or any other properties of the input tensor. On the other hand, previous works assume specific settings such as sparse tensors [24, 8, 16], or tensors having entries with similar magnitude [27]. Such assumptions are unrealistic, and in practice, we may have both dense and spiky tensors, for example, unordered word trigrams in natural language processing. We prove that our proposed randomized method for tensor decomposition does not lead to any significant degradation of accuracy. Experiments on synthetic and real-world datasets show highly competitive results. We demonstrate a 10x to 100x speed-up over exact methods for decomposing dense, high-dimensional tensors. For topic modeling, we show a significant reduction in computational time over existing spectral LDA implementations with small performance loss. In addition, our proposed algorithm outperforms collapsed Gibbs sampling when running time is constrained. We also show that if a Gibbs sampler is initialized with our output topics, it converges within several iterations and outperforms a randomly initialized Gibbs sampler run for much more iterations. Since our proposed method is efficient and avoids local optima, it can be used to accelerate the slow burn-in phase in Gibbs sampling. Related Works: There have been many works on deploying efficient tensor decomposition methods [27, 24, 8, 16, 14, 2, 29]. Most of these works except [27, 2] implement the alternating least squares (ALS) algorithm [12, 5]. However, this is extremely expensive since the ALS method is run in the input space, which requires O(n3) operations to execute one least squares step on an n-dimensional (dense) tensor. 
Thus, they are only suited for extremely sparse tensors. An alternative method is to first reduce the dimension of the input tensor through procedures such as whitening to O(k) dimension, where k is the tensor rank, and then carry out ALS in the dimension-reduced space on a k × k × k tensor [13]. This results in a significant reduction of computational complexity when the rank is small (k ≪ n). Nonetheless, in practice such complexity is still prohibitively high, as k could be several thousand in many settings. To make matters even worse, when the tensor corresponds to empirical moments computed from samples, such as in spectral learning of latent variable models, it is actually much slower to construct the reduced-dimension k × k × k tensor from training data than to decompose it, since the number of training samples is typically very large.

Table 1: Summary of notations (see also Appendix F).
- a, b ∈ C^n: a ◦ b ∈ C^n, element-wise product
- a, b ∈ C^n: a ∗ b ∈ C^n, convolution
- a, b ∈ C^n: a ⊗ b ∈ C^{n×n}, tensor product
- a ∈ C^n: a^{⊗3} ∈ C^{n×n×n}, a ⊗ a ⊗ a
- A, B ∈ C^{n×m}: A ⊙ B ∈ C^{n²×m}, Khatri-Rao product
- T ∈ C^{n×n×n}: T_(1) ∈ C^{n×n²}, mode expansion

Another alternative is to carry out online tensor decomposition, as opposed to the batch operations in the above works. Such methods are extremely fast [14], but can suffer from high variance. The sketching ideas developed in this paper will improve our ability to handle larger mini-batch sizes and therefore result in reduced variance in online tensor methods. Another alternative method is to consider randomized sampling of the input tensor in each iteration of tensor decomposition [27, 2]. However, such methods can be expensive due to I/O calls and are sensitive to the sampling distribution. In particular, [27] employs uniform sampling, which is incapable of handling tensors with spiky elements.
Though non-uniform sampling is adopted in [2], it requires an additional pass over the training data to compute the sampling distribution. In contrast, our sketch-based method takes only one pass over the data.

2 Preliminaries

Tensor, tensor product and tensor decomposition. A 3rd order tensor¹ T of dimension n has n³ entries. Each entry can be represented as T_{ijk} for i, j, k ∈ {1, ..., n}. For an n × n × n tensor T and a vector u ∈ R^n, we define two forms of tensor products (contractions) as follows:

T(u, u, u) = Σ_{i,j,k=1}^n T_{i,j,k} u_i u_j u_k;  T(I, u, u) = ( Σ_{j,k=1}^n T_{1,j,k} u_j u_k, ..., Σ_{j,k=1}^n T_{n,j,k} u_j u_k ).

Note that T(u, u, u) ∈ R and T(I, u, u) ∈ R^n. For two complex tensors A, B of the same order and dimension, the inner product is defined as ⟨A, B⟩ := Σ_l A_l B̄_l, where l ranges over all tuples that index the tensors. The Frobenius norm of a tensor is simply ∥A∥_F = √⟨A, A⟩. The rank-k CP decomposition of a 3rd-order n-dimensional tensor T ∈ R^{n×n×n} involves scalars {λ_i}_{i=1}^k and n-dimensional vectors {a_i, b_i, c_i}_{i=1}^k such that the residual ∥T − Σ_{i=1}^k λ_i a_i ⊗ b_i ⊗ c_i∥_F² is minimized. Here R = a ⊗ b ⊗ c is the 3rd order tensor defined by R_{ijk} = a_i b_j c_k. Additional notations are defined in Table 1 and Appendix F.

Robust tensor power method. The method was proposed in [1] and was shown to provably succeed if the input tensor is a noisy perturbation of a sum of k rank-1 tensors whose base vectors are orthogonal. Fix an input tensor T ∈ R^{n×n×n}. The basic idea is to randomly generate L initial vectors and perform T power update steps

û = T(I, u, u) / ∥T(I, u, u)∥₂.

The vector that results in the largest eigenvalue T(u, u, u) is then kept, and subsequent eigenvectors can be obtained via deflation. If implemented naively, the algorithm takes O(kn³LT) time to run² and requires O(n³) storage.
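The two contraction forms defined above can be written down directly; the naive pure-Python versions below (our own function names) cost O(n³) per call, which is exactly the bottleneck the sketched method is designed to remove.

```python
def t_uuu(T, u):
    """T(u, u, u) = sum_{i,j,k} T[i][j][k] u_i u_j u_k (a scalar)."""
    n = len(u)
    return sum(T[i][j][k] * u[i] * u[j] * u[k]
               for i in range(n) for j in range(n) for k in range(n))

def t_Iuu(T, u):
    """T(I, u, u): the vector whose i-th entry is sum_{j,k} T[i][j][k] u_j u_k."""
    n = len(u)
    return [sum(T[i][j][k] * u[j] * u[k] for j in range(n) for k in range(n))
            for i in range(n)]

def power_update(T, u):
    """One robust-tensor-power-method step: u <- T(I,u,u) / ||T(I,u,u)||_2."""
    v = t_Iuu(T, u)
    nrm = sum(x * x for x in v) ** 0.5
    return [x / nrm for x in v]
```

For an orthogonally decomposable tensor, iterating `power_update` from a generic starting vector converges to the eigenvector with the largest λ_i, mirroring the matrix power method.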
In addition, in certain cases when a second-order moment matrix is available, the tensor power method can be carried out on a k × k × k whitened tensor [1], thus improving the time complexity by avoiding dependence on the ambient dimension n. Apart from the tensor power method, other algorithms such as Alternating Least Squares (ALS, [12, 5]) and Stochastic Gradient Descent (SGD, [14]) have also been applied to tensor CP decomposition.

Tensor sketch. Tensor sketch was proposed in [23] as a generalization of count sketch [7]. For a tensor T of dimension n_1 × ... × n_p, random hash functions h_1, ..., h_p : [n] → [b] with Pr_{h_j}[h_j(i) = t] = 1/b for every i ∈ [n], j ∈ [p], t ∈ [b], and binary Rademacher variables ξ_1, ..., ξ_p : [n] → {±1}, the sketch s_T : [b] → R of the tensor T is defined as

s_T(t) = Σ_{H(i_1,...,i_p)=t} ξ_1(i_1) ··· ξ_p(i_p) T_{i_1,...,i_p},  (1)

where H(i_1, ..., i_p) = (h_1(i_1) + ... + h_p(i_p)) mod b. The corresponding recovery rule is T̂_{i_1,...,i_p} = ξ_1(i_1) ··· ξ_p(i_p) s_T(H(i_1, ..., i_p)). For accurate recovery, H needs to be 2-wise independent, which is achieved by independently selecting h_1, ..., h_p from a 2-wise independent hash family [26]. Finally, the estimation can be made more robust by the standard approach of taking B independent sketches of the same tensor and then reporting the median of the B estimates [7].

¹ Though we mainly focus on 3rd order tensors in this work, the extension to higher order tensors is easy.
² L is usually set to be a linear function of k and T is logarithmic in n; see Theorem 5.1 in [1].

3 Fast tensor decomposition via sketching

In this section we first introduce an efficient procedure for computing sketches of factored or empirical moment tensors, which appear in a wide variety of applications such as parameter estimation of latent variable models. We then show how to run the tensor power method directly on the sketch with reduced computational complexity.
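Eq. (1) and its recovery rule are concise enough to implement directly. The toy, dependency-free version below uses our own names and fully random hash tables; real implementations draw h_1, ..., h_p from a 2-wise independent family and take the median over B independent sketches, as described above.

```python
import random

def make_hashes(n, b, p=3, seed=0):
    """Random hash tables h_1..h_p : [n] -> [b] and sign tables xi_1..xi_p."""
    rng = random.Random(seed)
    hs = [[rng.randrange(b) for _ in range(n)] for _ in range(p)]
    xis = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(p)]
    return hs, xis

def sketch_tensor3(T, hs, xis, b):
    """Eq. (1) for a 3rd-order tensor: bucket t accumulates
    xi1(i) xi2(j) xi3(k) T[i][j][k] over all (i,j,k) with
    (h1(i) + h2(j) + h3(k)) mod b == t."""
    n = len(T)
    s = [0.0] * b
    for i in range(n):
        for j in range(n):
            for k in range(n):
                t = (hs[0][i] + hs[1][j] + hs[2][k]) % b
                s[t] += xis[0][i] * xis[1][j] * xis[2][k] * T[i][j][k]
    return s

def recover_entry(s, hs, xis, b, i, j, k):
    """Recovery rule: T-hat[i][j][k] = xi1(i) xi2(j) xi3(k) s[H(i,j,k)]."""
    t = (hs[0][i] + hs[1][j] + hs[2][k]) % b
    return xis[0][i] * xis[1][j] * xis[2][k] * s[t]
```

Entries that collide in the same bucket enter with independent random signs, so they cancel in expectation and the recovery is unbiased; the median over B sketches then controls the variance of that cancellation.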
In addition, when an input tensor is symmetric (i.e., Tijk the same for all permutations of i, j, k) we propose a novel “colliding hash” design, which speeds up the sketch building process. Due to space limits we only consider the robust tensor power method in the main text. Methods and experiments for sketching based ALS are presented in Appendix C. To avoid confusions, we emphasize that n is used to denote the dimension of the tensor to be decomposed, which is not necessarily the same as the dimension of the original data tensor. Indeed, once whitening is applied n could be as small as the intrinsic dimension k of the original data tensor. 3.1 Efficient sketching of empirical moment tensors Sketching a 3rd-order dense n-dimensional tensor via Eq. (1) takes O(n3) operations, which in general cannot be improved because the input size is Ω(n3). However, in practice data tensors are usually structured. One notable example is empirical moment tensors, which arises naturally in parameter estimation problems of latent variable models. More specifically, an empirical moment tensor can be expressed as T = ˆE[x⊗3] = 1 N PN i=1 x⊗3 i , where N is the total number of training data points and xi is the ith data point. In this section we show that computing sketches of such tensors can be made significantly more efficient than the brute-force implementations via Eq. (1). The main idea is to sketch low-rank components of T efficiently via FFT, a trick inspired by previous efforts on sketching based matrix multiplication and kernel learning [22, 23]. We consider the more generalized case when an input tensor T can be written as a weighted sum of known rank-1 components: T = PN i=1 aiui ⊗vi ⊗wi, where ai are scalars and ui, vi, wi are known n-dimensional vectors. The key observation is that the sketch of each rank-1 component Ti = ui ⊗vi ⊗wi can be efficiently computed by FFT. 
In particular, s_{T_i} can be computed as

s_{T_i} = s_{1,u_i} ∗ s_{2,v_i} ∗ s_{3,w_i} = F^{−1}( F(s_{1,u_i}) ◦ F(s_{2,v_i}) ◦ F(s_{3,w_i}) ),  (2)

where ∗ denotes convolution and ◦ stands for the element-wise vector product; s_{1,u}(t) = Σ_{h_1(i)=t} ξ_1(i) u_i is the count sketch of u, and s_{2,v}, s_{3,w} are defined similarly. F and F^{−1} denote the Fast Fourier Transform (FFT) and its inverse. By applying the FFT, we reduce the convolution computation to an element-wise product evaluation in the Fourier domain. Therefore, s_T can be computed using O(n + b log b) operations, where the O(b log b) term arises from the FFT evaluations. Finally, because the sketching operator is linear (i.e., s(Σ_i a_i T_i) = Σ_i a_i s(T_i)), s_T can be computed in O(N(n + b log b)) operations, which is much cheaper than the brute force approach that takes O(Nn³) time.

3.2 Fast robust tensor power method

We are now ready to present the fast robust tensor power method, the main algorithm of this paper. The computational bottleneck of the original robust tensor power method is the computation of two tensor products, T(I, u, u) and T(u, u, u); a naive implementation requires O(n³) operations. In this section, we show how to speed up the computation of these products: given the sketch of an input tensor T, one can approximately compute both T(I, u, u) and T(u, u, u) in O(n + b log b) steps, where b is the hash length. Before going into details, we explain the key idea behind our fast tensor product computation. For any two tensors A, B, the inner product ⟨A, B⟩ can be approximated by⁴

⟨A, B⟩ ≈ ⟨s_A, s_B⟩.  (3)

³ ℜ(·) denotes the real part of a complex number; med(·) denotes the median.
⁴ All approximations will be theoretically justified in Section 4 and Appendix E.2.

Algorithm 1 Fast robust tensor power method
1: Input: noisy symmetric tensor T̄ = T + E ∈ R^{n×n×n}; target rank k; number of initializations L; number of iterations T; hash length b; number of independent sketches B.
2: Initialization: draw h_j^(m), ξ_j^(m) for j ∈ {1, 2, 3} and m ∈ [B]; compute sketches s_T̄^(m) ∈ C^b.
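The identity behind Eq. (2) — the sketch of a rank-1 tensor equals the circular convolution of the component count sketches — can be checked numerically. Below we use a direct O(b²) circular convolution in place of the FFT pair F^{−1}(F(·) ◦ F(·)), which computes exactly the same thing in O(b log b); all names are ours, and the hash tables are fully random for simplicity.

```python
import random

def count_sketch_vec(u, h, xi, b):
    """Count sketch of a vector: s[t] = sum_{h(i)=t} xi(i) u_i."""
    s = [0.0] * b
    for i, ui in enumerate(u):
        s[h[i] % b] += xi[i] * ui
    return s

def circ_conv(a, c):
    """Circular convolution of two length-b sequences (what the FFT speeds up)."""
    b = len(a)
    return [sum(a[i] * c[(t - i) % b] for i in range(b)) for t in range(b)]

def sketch_rank1_direct(u, v, w, hs, xis, b):
    """Sketch of the rank-1 tensor u (x) v (x) w via Eq. (1) directly."""
    s = [0.0] * b
    n = len(u)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                t = (hs[0][i] + hs[1][j] + hs[2][k]) % b
                s[t] += xis[0][i] * xis[1][j] * xis[2][k] * u[i] * v[j] * w[k]
    return s

rng = random.Random(1)
n, b = 5, 8
u = [rng.uniform(-1, 1) for _ in range(n)]
v = [rng.uniform(-1, 1) for _ in range(n)]
w = [rng.uniform(-1, 1) for _ in range(n)]
hs = [[rng.randrange(b) for _ in range(n)] for _ in range(3)]
xis = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(3)]

direct = sketch_rank1_direct(u, v, w, hs, xis, b)
via_conv = circ_conv(circ_conv(count_sketch_vec(u, hs[0], xis[0], b),
                               count_sketch_vec(v, hs[1], xis[1], b)),
                     count_sketch_vec(w, hs[2], xis[2], b))
```

The agreement is exact (up to floating point), because the convolution sums products over all index triples whose hash values add to t mod b — the same condition H(i, j, k) = t that defines the direct sketch.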
3: for τ = 1 to L do
4:   Draw u_0^(τ) uniformly at random from the unit sphere.
5:   for t = 1 to T do
6:     For each m ∈ [B] and j ∈ {2, 3}, compute the sketch of u_{t−1}^(τ) using h_j^(m), ξ_j^(m) via Eq. (1).
7:     Compute v^(m) ≈ T̄(I, u_{t−1}^(τ), u_{t−1}^(τ)) as follows: first evaluate s̄^(m) = F^{−1}(F(s_T̄^(m)) ◦ F(s_{2,u}^(m)) ◦ F(s_{3,u}^(m))); then set [v^(m)]_i ← ξ_1(i) [s̄^(m)]_{h_1(i)} for every i ∈ [n].
8:     Set v̄_i ← med(ℜ(v_i^(1)), ..., ℜ(v_i^(B)))³. Update: u_t^(τ) = v̄ / ∥v̄∥.
9: Selection: compute λ_τ^(m) ≈ T̄(u_T^(τ), u_T^(τ), u_T^(τ)) using s_T̄^(m) for τ ∈ [L] and m ∈ [B]. Evaluate λ_τ = med(λ_τ^(1), ..., λ_τ^(B)) and τ* = argmax_τ λ_τ. Set λ̂ = λ_{τ*} and û = u_T^(τ*).
10: Deflation: for each m ∈ [B], compute the sketch s̃_{∆T}^(m) of the rank-1 tensor ∆T = λ̂ û^{⊗3}.
11: Output: the eigenvalue/eigenvector pair (λ̂, û) and the sketches of the deflated tensor T̄ − ∆T.

Table 2: Computational complexity of the sketched and plain tensor power methods. n is the tensor dimension; k is the intrinsic tensor rank; b is the sketch length. Per-sketch time complexity is shown.
- Preprocessing, general tensors: plain —; sketch O(n³); plain + whitening O(kn³); sketch + whitening O(n³).
- Preprocessing, factored tensors with N components: plain O(Nn³); sketch O(N(n + b log b)); plain + whitening O(N(nk + k³)); sketch + whitening O(N(nk + b log b)).
- Per tensor contraction: plain O(n³); sketch O(n + b log b); plain + whitening O(k³); sketch + whitening O(k + b log b).

Eq. (3) immediately yields a fast approximation procedure for T(u, u, u), because T(u, u, u) = ⟨T, X⟩ where X = u ⊗ u ⊗ u is a rank-one tensor whose sketch can be built in O(n + b log b) time by Eq. (2). Consequently, the product can be approximately computed using O(n + b log b) operations if the tensor sketch of T is available. For tensor products of the form T(I, u, u), the ith coordinate of the result can be expressed as ⟨T, Y_i⟩, where Y_i = e_i ⊗ u ⊗ u and e_i = (0, ..., 0, 1, 0, ..., 0) is the ith indicator vector. We can then apply Eq. (3) to approximately compute ⟨T, Y_i⟩ efficiently.
However, this method is not completely satisfactory because it requires sketching n rank-1 tensors (Y1 through Yn), which results in O(n) FFT evaluations by Eq. (2). Below we present a proposition that allows us to use only O(1) FFTs to approximate T(I, u, u). Proposition 1. ⟨sT, s1,ei ∗s2,u ∗s3,u⟩= ⟨F−1(F(sT) ◦F(s2,u) ◦F(s3,u)), s1,ei⟩. Proposition 1 is proved in Appendix E.1. The main idea is to “shift” all terms not depending on i to the left side of the inner product and eliminate the inverse FFT operation on the right side so that sei contains only one nonzero entry. As a result, we can compute F−1(F(sT) ◦F(s2,u) ◦F(s3,u)) once and read off each entry of T(I, u, u) in constant time. In addition, the technique can be further extended to symmetric tensor sketches, with details deferred to Appendix B due to space limits. When operating on an n-dimensional tensor, The algorithm requires O(kLT(n + Bb log b)) running time (excluding the time for building ˜s ¯T) and O(Bb) memory, which significantly improves the O(kn3LT) time and O(n3) space complexity over the brute force tensor power method. Here L, T are algorithm parameters for robust tensor power method. Previous analysis shows that T = O(log k) and L = poly(k), where poly(·) is some low order polynomial function. [1] Finally, Table 2 summarizes computational complexity of sketched and plain tensor power method. 3.3 Colliding hash and symmetric tensor sketch For symmetric input tensors, it is possible to design a new style of tensor sketch that can be built more efficiently. The idea is to design hash functions that deliberately collide symmetric entries, i.e., (i, j, k), (j, i, k), etc. Consequently, we only need to consider entries Tijk with i ≤j ≤k when building tensor sketches. An intuitive idea is to use the same hash function and Rademacher random variable for each order, that is, h1(i) = h2(i) = h3(i) =: h(i) and ξ1(i) = ξ2(i) = ξ3(i) =: ξ(i). 
In this way, all permutations of (i, j, k) will collide with each other. However, such a design has an issue with repeated entries, because ξ(i) can only take ±1 values. Consider (i, i, k) and (j, j, k) as an example: ξ(i)²ξ(k) = ξ(j)²ξ(k) with probability 1 even if i ≠ j. On the other hand, we need E[ξ(a)ξ(b)] = 0 for any pair of distinct 3-tuples a and b. To address this issue, we extend the Rademacher random variables to the complex domain and consider all roots of z^m = 1, that is, Ω = {ω_j}_{j=0}^{m−1} where ω_j = e^{i 2πj/m}. Suppose σ(i) is a complex Rademacher random variable, uniform on Ω with Pr[σ(i) = ω_j] = 1/m. By elementary algebra, E[σ(i)^p] = 0 whenever p is not a multiple of m. Therefore, by setting m = 4 we avoid collisions of repeated entries in a 3rd order tensor. More specifically, the symmetric tensor sketch of a symmetric tensor T ∈ R^{n×n×n} can be defined as

s̃_T(t) := Σ_{H̃(i,j,k)=t} T_{i,j,k} σ(i) σ(j) σ(k),  (4)

where H̃(i, j, k) = (h(i) + h(j) + h(k)) mod b. To recover an entry, we use

T̂_{i,j,k} = (1/κ) · σ̄(i) · σ̄(j) · σ̄(k) · s̃_T(H̃(i, j, k)),  (5)

where κ = 1 if i = j = k; κ = 3 if exactly two of i, j, k coincide; and κ = 6 otherwise. For higher order tensors, the coefficients can be computed via the Young tableaux, which characterize symmetries under the permutation group. Compared to asymmetric tensor sketches, the hash function h needs to satisfy stronger independence conditions because we use the same hash function for each order. In our case, h needs to be 6-wise independent to make H̃ 2-wise independent. This fact is due to the following proposition, which is proved in Appendix E.1.

Proposition 2. Fix p and q. For h : [n] → [b], define the symmetric mapping H̃ : [n]^p → [b] as H̃(i_1, ..., i_p) = h(i_1) + ... + h(i_p). If h is (pq)-wise independent, then H̃ is q-wise independent.

The symmetric tensor sketch described above can significantly speed up the sketch building process.
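The moment property that makes the complex design work is easy to verify for m = 4: the average of the p-th powers of the 4th roots of unity vanishes for p = 1, 2, 3 (any p that is not a multiple of 4) and equals 1 for p = 4. A short check:

```python
import cmath

# sigma is drawn uniformly from the 4th roots of unity {1, i, -1, -i}
roots = [cmath.exp(2j * cmath.pi * j / 4) for j in range(4)]

def moment(p):
    """E[sigma^p] under the uniform distribution on the 4th roots of unity."""
    return sum(r ** p for r in roots) / 4

# E[sigma^2] = E[sigma^3] = 0, so distinct index tuples such as (i,i,k) and
# (j,j,k) remain uncorrelated even with repeated indices; E[sigma^4] = 1.
```

Since a 3rd-order tensor entry multiplies at most three σ factors, no product of the form σ(i)^p with p ≤ 3 can have nonzero mean, which is exactly what the ±1 Rademacher design fails to guarantee for repeated indices.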
For a general tensor with M nonzero entries, to build ˜sT one only needs to consider roughly M/6 entries (those Tijk ̸= 0 with i ≤j ≤k). For a rank-1 tensor u⊗3, only one FFT is needed to build F(˜s); in contrast, to compute Eq. (2) one needs at least 3 FFT evaluations. Finally, in Appendix B we give details on how to seamlessly combine symmetric hashing and techniques in previous sections to efficiently construct and decompose a tensor. 4 Error analysis In this section we provide theoretical analysis on approximation error of both tensor sketch and the fast sketched robust tensor power method. We mainly focus on symmetric tensor sketches, while extension to asymmetric settings is trivial. Due to space limits, all proofs are placed in the appendix. 4.1 Tensor sketch concentration bounds Theorem 1 bounds the approximation error of symmetric tensor sketches when computing T(u, u, u) and T(I, u, u). Its proof is deferred to Appendix E.2. Theorem 1. Fix a symmetric real tensor T ∈Rn×n×n and a real vector u ∈Rn with ∥u∥2 = 1. Suppose ε1,T (u) ∈R and ε2,T (u) ∈Rn are estimation errors of T(u, u, u) and T(I, u, u) using B independent symmetric tensor sketches; that is, ε1,T (u) = bT(u, u, u) −T(u, u, u) and ε2,T (u) = bT(I, u, u)−T(I, u, u). If B = Ω(log(1/δ)) then with probability ≥1−δ the following error bounds hold: ε1,T (u) = O(∥T∥F / √ b); [ε2,T (u)]i = O(∥T∥F / √ b), ∀i ∈{1, · · · , n}. (6) In addition, for any fixed w ∈Rn, ∥w∥2 = 1 with probability ≥1 −δ we have ⟨w, ε2,T (u)⟩2 = O(∥T∥2 F /b). (7) 4.2 Analysis of the fast tensor power method We present a theorem analyzing robust tensor power method with tensor sketch approximations. A more detailed theorem statement along with its proof can be found in Appendix E.3. Theorem 2. Suppose ¯T = T + E ∈Rn×n×n where T = Pk i=1 λiv⊗3 i with an orthonormal basis {vi}k i=1, λ1 > · · · > λk > 0 and ∥E∥= ϵ. 
Table 3: Squared residual norm on the top 10 recovered eigenvectors of 1000-dimensional tensors and running time (excluding I/O and sketch building time) for plain (exact) and sketched robust tensor power methods. Two vectors are considered a mismatch (wrong) if $\|v - \hat{v}\|_2^2 > 0.1$. An extended version is shown as Table 5 in Appendix A.

σ = .01     Residual norm              No. of wrong vectors    Running time (min.)
log2(b):    12   13   14   15   16     12  13  14  15  16      12    13   14   15    16
B = 20     .40  .19  .10  .09  .08      8   6   3   0   0     .85   1.6  3.5  7.4  16.6
B = 30     .26  .10  .09  .08  .07      7   5   2   0   0     1.3   2.4  5.3  11.3 24.6
B = 40     .17  .10  .08  .08  .07      7   4   0   0   0     1.8   3.3  7.3  15.2 33.0
Exact      .07                          0                     293.5

Table 4: Negative log-likelihood and running time (min) on the large Wikipedia dataset for 200 and 300 topics.

           k = 200                            k = 300
           like.  time  log2 b  iters         like.  time  log2 b  iters
Spectral    7.49    34      12      -          7.39    56      13      -
Gibbs       6.85   561       -     30          6.38   818       -     30
Hybrid      6.77   144      12      5          6.31   352      13     10

Let $\{(\hat\lambda_i, \hat{v}_i)\}_{i=1}^k$ be the eigenvalue/eigenvector pairs obtained by Algorithm 1. Suppose $\epsilon = O(1/(\lambda_1 n))$, $T = \Omega(\log(n/\delta) + \log(1/\epsilon)\max_i \lambda_i/(\lambda_i - \lambda_{i-1}))$, and $L$ grows linearly with $k$. Assume the randomness of the tensor sketch is independent among tensor product evaluations. If $B = \Omega(\log(n/\delta))$ and $b$ satisfies
$$b = \Omega\left(\max\left(\frac{\epsilon^{-2}\|T\|_F^2}{\Delta(\lambda)^2},\; \frac{\delta^{-4} n^2 \|T\|_F^2\, r(\lambda)^2}{\lambda_1^2}\right)\right) \qquad (8)$$
where $\Delta(\lambda) = \min_i(\lambda_i - \lambda_{i-1})$ and $r(\lambda) = \max_{i,j>i}(\lambda_i/\lambda_j)$, then with probability $\ge 1 - \delta$ there exists a permutation $\pi$ over $[k]$ such that
$$\|v_{\pi(i)} - \hat{v}_i\|_2 \le \epsilon, \quad |\lambda_{\pi(i)} - \hat\lambda_i| \le \lambda_i\epsilon/2, \quad \forall i \in \{1, \cdots, k\} \qquad (9)$$
and $\|T - \sum_{i=1}^k \hat\lambda_i \hat{v}_i^{\otimes 3}\| \le c\epsilon$ for some constant $c$.
Theorem 2 shows that the sketch length $b$ can be set as $o(n^3)$ to provably approximately decompose a 3rd-order tensor of dimension $n$. Theorem 2, together with the time complexity comparison in Table 2, shows that the sketching-based fast tensor decomposition algorithm has better computational complexity than a brute-force implementation. One potential drawback of our analysis is the assumption that sketches are independently built for each tensor product (contraction) evaluation.
This is an artifact of our analysis, and we conjecture that it can be removed by incorporating recent developments in the differentially private adaptive query framework [9].

5 Experiments

We demonstrate the effectiveness and efficiency of our proposed sketch-based tensor power method on both synthetic tensors and real-world topic modeling problems. Experimental results involving the fast ALS method are presented in Appendix C.3. All methods are implemented in C++ and tested on a single machine with 8 Intel X5550@2.67GHz CPUs and 32GB memory. For synthetic tensor decomposition we use only a single thread; for fast spectral LDA 8 to 16 threads are used.

5.1 Synthetic tensors

In Table 3 we compare our proposed algorithms with exact decomposition methods on synthetic tensors. Let $n = 1000$ be the dimension of the input tensor. We first generate a random orthonormal basis $\{v_i\}_{i=1}^n$ and then set the input tensor $T$ as $T = \mathrm{normalize}(\sum_{i=1}^n \lambda_i v_i^{\otimes 3}) + E$, where the eigenvalues satisfy $\lambda_i = 1/i$. The normalization step makes $\|T\|_F^2 = 1$ before imposing noise. The Gaussian noise tensor $E$ is symmetric with $E_{ijk} \sim \mathcal{N}(0, \sigma/n^{1.5})$ for $i \le j \le k$ and noise-to-signal level $\sigma$. Due to time constraints, we only compare the recovery error and running time on the top 10 recovered eigenvectors of the full-rank input tensor $T$. Both $L$ and $T$ are set to 30. Table 3 shows that our proposed algorithms achieve reasonable approximation error within a few minutes, which is much faster than exact methods. A complete version (Table 5) is deferred to Appendix A.

5.2 Topic modeling

We implement a fast spectral inference algorithm for Latent Dirichlet Allocation (LDA [3]) by combining tensor sketching with the existing whitening technique for dimensionality reduction.

Figure 1: Left: negative log-likelihood for fast and exact tensor power methods ($k = 50, 100, 200$) on the Wikipedia dataset as a function of the log hash length, with Gibbs sampling (100 iterations, 145 mins) as a reference. Right: negative log-likelihood for collapsed Gibbs sampling, fast LDA, and Gibbs sampling using fast LDA as initialization.

Implementation details are provided in Appendix D. We compare our proposed fast spectral LDA algorithm with baseline spectral methods and collapsed Gibbs sampling (using the GibbsLDA++ [25] implementation) on two real-world datasets: Wikipedia and Enron. Dataset details are presented in Appendix A. Only the most frequent $V$ words are kept, and the vocabulary size $V$ is set to 10000. For the robust tensor power method the parameters are set to $L = 50$ and $T = 30$. For ALS we iterate until convergence, or until a maximum number of 1000 iterations is reached. $\alpha_0$ is set to 1.0 and $B$ is set to 30. Obtained topic models $\Phi \in \mathbb{R}^{V\times K}$ are evaluated on a held-out dataset consisting of 1000 documents randomly picked out from the training datasets. For each testing document $d$, we fit a topic mixing vector $\hat\pi_d \in \mathbb{R}^K$ by solving the following optimization problem: $\hat\pi_d = \mathrm{argmin}_{\|\pi\|_1=1,\, \pi \ge 0} \|w_d - \Phi\pi\|_2$, where $w_d$ is the empirical word distribution of document $d$. The per-document log-likelihood is then defined as $\mathcal{L}_d = \frac{1}{n_d}\sum_{i=1}^{n_d} \ln p(w_{di})$, where $p(w_{di}) = \sum_{k=1}^K \hat\pi_k \Phi_{w_{di},k}$. Finally, the average $\mathcal{L}_d$ over all testing documents is reported. Figure 1 (left) shows the held-out negative log-likelihood for fast spectral LDA under different hash lengths $b$. We can see that as $b$ increases, the performance approaches that of the exact tensor power method because the sketching approximation becomes more accurate. On the other hand, Table 6 shows that fast spectral LDA runs much faster than exact tensor decomposition methods while achieving comparable performance on both datasets.
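The simplex-constrained least-squares fit for $\hat\pi_d$ can be solved, for instance, by projected gradient descent; the following is a sketch under our own conventions (function names, step-size choice, and iteration count are illustrative assumptions, not the paper's code), assuming $\Phi$ has full column rank:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {pi : pi >= 0, sum(pi) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def fit_topic_mixture(w_d, Phi, iters=500):
    """pi_hat = argmin_{pi in simplex} ||w_d - Phi pi||_2 via projected gradient."""
    lr = 1.0 / (np.linalg.norm(Phi, 2) ** 2)   # step = 1/L, L = Lipschitz const.
    pi = np.full(Phi.shape[1], 1.0 / Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ pi - w_d)        # gradient of 0.5*||w - Phi pi||^2
        pi = project_simplex(pi - lr * grad)
    return pi
```

Since the objective is convex and the simplex is closed and convex, projected gradient with step size $1/\|\Phi\|_2^2$ converges to the constrained minimizer.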
Figure 1 (right) compares the convergence of collapsed Gibbs sampling with different numbers of iterations and fast spectral LDA with different hash lengths on the Wikipedia dataset. For collapsed Gibbs sampling, we set α = 50/K and β = 0.1, following [11]. As shown in the figure, fast spectral LDA achieves comparable held-out likelihood while running faster than collapsed Gibbs sampling. We further take the dictionary Φ output by fast spectral LDA and use it as initialization for collapsed Gibbs sampling (the word topic assignments z are obtained by 5-iteration Gibbs sampling with the dictionary Φ fixed). The resulting Gibbs sampler converges much faster: with only 3 iterations it already performs much better than a randomly initialized Gibbs sampler run for 100 iterations, which takes 10x more running time. We also report the performance of fast spectral LDA and collapsed Gibbs sampling on a larger dataset in Table 4. The dataset was built by crawling 1,085,768 random Wikipedia pages, and a held-out evaluation set was built by randomly picking out 1000 documents from the dataset. The number of topics k is set to 200 or 300, and after getting the topic dictionary Φ from fast spectral LDA we use 2-iteration Gibbs sampling to obtain the word topic assignments z. Table 4 shows that the hybrid method (i.e., collapsed Gibbs sampling initialized by spectral LDA) achieves the best likelihood performance in a much shorter time, compared to a randomly initialized Gibbs sampler.

6 Conclusion

In this work we proposed a sketching-based approach to efficiently compute tensor CP decompositions with provable guarantees. We apply our proposed algorithm to learning latent topics of unlabeled document collections and achieve significant speed-ups compared to vanilla spectral and collapsed Gibbs sampling methods. Some interesting future directions include further improving the sample complexity analysis and applying the framework to a broader class of graphical models.
Acknowledgement: Anima Anandkumar is supported in part by the Microsoft Faculty Fellowship and the Sloan Foundation. Alex Smola is supported in part by a Google Faculty Research Grant.

References
[1] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773–2832, 2014.
[2] S. Bhojanapalli and S. Sanghavi. A new sampling technique for tensors. arXiv:1502.05023, 2015.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[4] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr, and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
[5] J. D. Carroll and J.-J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition. Psychometrika, 35(3):283–319, 1970.
[6] A. Chaganty and P. Liang. Estimating latent-variable graphical models using moments and likelihoods. In ICML, 2014.
[7] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theoretical Computer Science, 312(1):3–15, 2004.
[8] J. H. Choi and S. Vishwanathan. DFacTo: Distributed factorization of tensors. In NIPS, 2014.
[9] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth. Preserving statistical validity in adaptive data analysis. In STOC, 2015.
[10] A. S. Field and D. Graupe. Topographic component (parallel factor) analysis of multichannel evoked potentials: practical issues in trilinear spatiotemporal decomposition. Brain Topography, 3(4):407–423, 1991.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235, 2004.
[12] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an explanatory multi-modal factor analysis.
UCLA Working Papers in Phonetics, 16:1–84, 1970.
[13] F. Huang, S. Matusevych, A. Anandkumar, N. Karampatziakis, and P. Mineiro. Distributed latent Dirichlet allocation via tensor factorization. In NIPS Optimization Workshop, 2014.
[14] F. Huang, U. N. Niranjan, M. U. Hakeem, and A. Anandkumar. Fast detection of overlapping communities via online tensor methods. arXiv:1309.0787, 2013.
[15] A. Jain. Fundamentals of Digital Image Processing, 1989.
[16] U. Kang, E. Papalexakis, A. Harpale, and C. Faloutsos. GigaTensor: Scaling tensor analysis up by 100 times - algorithms and discoveries. In KDD, 2012.
[17] B. Klimt and Y. Yang. Introducing the Enron corpus. In CEAS, 2004.
[18] T. Kolda and B. Bader. The TOPHITS model for higher-order web link analysis. In Workshop on Link Analysis, Counterterrorism and Security, 2006.
[19] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[20] T. G. Kolda and J. Sun. Scalable tensor decompositions for multi-aspect data mining. In ICDM, 2008.
[21] M. Mørup, L. K. Hansen, C. S. Herrmann, J. Parnas, and S. M. Arnfred. Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG. NeuroImage, 29(3):938–947, 2006.
[22] R. Pagh. Compressed matrix multiplication. In ITCS, 2012.
[23] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD, 2013.
[24] A.-H. Phan, P. Tichavsky, and A. Cichocki. Fast alternating LS algorithms for high order CANDECOMP/PARAFAC tensor factorizations. IEEE Transactions on Signal Processing, 61(19):4834–4846, 2013.
[25] X.-H. Phan and C.-T. Nguyen. GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA), 2007.
[26] M. Pătrașcu and M. Thorup. The power of simple tabulation hashing. Journal of the ACM, 59(3):14, 2012.
[27] C. Tsourakakis. MACH: Fast randomized tensor decompositions. In SDM, 2010.
[28] H.-Y. Tung and A. Smola. Spectral methods for Indian buffet process inference. In NIPS, 2014.
[29] C. Wang, X. Liu, Y. Song, and J. Han. Scalable moment-based inference for latent Dirichlet allocation. In ECML/PKDD, 2014.
[30] Y. Wang and J. Zhu. Spectral methods for supervised topic models. In NIPS, 2014.
Inverse Reinforcement Learning with Locally Consistent Reward Functions

Quoc Phong Nguyen†, Kian Hsiang Low†, and Patrick Jaillet§
Dept. of Computer Science, National University of Singapore, Republic of Singapore†
Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA§
{qphong,lowkh}@comp.nus.edu.sg†, jaillet@mit.edu§

Abstract

Existing inverse reinforcement learning (IRL) algorithms have assumed each expert’s demonstrated trajectory to be produced by only a single reward function. This paper presents a novel generalization of the IRL problem that allows each trajectory to be generated by multiple locally consistent reward functions, hence catering to more realistic and complex experts’ behaviors. Solving our generalized IRL problem thus involves not only learning these reward functions but also the stochastic transitions between them at any state (including unvisited states). By representing our IRL problem with a probabilistic graphical model, an expectation-maximization (EM) algorithm can be devised to iteratively learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories. As a result, the most likely partition of a trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived. Empirical evaluation on synthetic and real-world datasets shows that our IRL algorithm outperforms the state-of-the-art EM clustering with maximum likelihood IRL, which is, interestingly, a reduced variant of our approach.

1 Introduction

The reinforcement learning problem in Markov decision processes (MDPs) involves an agent using its observed rewards to learn an optimal policy that maximizes its expected total reward for a given task. However, such observed rewards or the reward function defining them are often neither available nor known in many real-world tasks.
The agent can therefore learn its reward function from an expert associated with the given task by observing the expert’s behavior or demonstration; this approach constitutes the inverse reinforcement learning (IRL) problem. Unfortunately, the IRL problem is ill-posed because infinitely many reward functions are consistent with the expert’s observed behavior. To resolve this issue, existing IRL algorithms have proposed alternative choices of the agent’s reward function that minimize different dissimilarity measures defined using various forms of abstractions of the agent’s generated optimal behavior vs. the expert’s observed behavior, as briefly discussed below (see [17] for a detailed review): (a) The projection algorithm [1] selects a reward function that minimizes the squared Euclidean distance between the feature expectations obtained by following the agent’s generated optimal policy and the empirical feature expectations observed from the expert’s demonstrated state-action trajectories; (b) the multiplicative weights algorithm for apprenticeship learning [24] adopts a robust minimax approach to deriving the agent’s behavior, which is guaranteed to perform no worse than the expert and is equivalent to choosing a reward function that minimizes the difference between the expected average reward under the agent’s generated optimal policy and the expert’s empirical average reward approximated using the agent’s reward weights; (c) the linear programming apprenticeship learning algorithm [23] picks its reward function by minimizing the same dissimilarity measure but empirically incurs much less computation time; (d) the policy matching algorithm [16] aims to match the agent’s generated optimal behavior to the expert’s observed behavior by choosing a reward function that minimizes the sum of squared Euclidean distances between the agent’s generated optimal policy and the expert’s estimated policy (i.e., from its demonstrated trajectories) over every possible state weighted by its
empirical state visitation frequency; (e) the maximum entropy IRL [27] and maximum likelihood IRL (MLIRL) [2] algorithms select reward functions that minimize an empirical approximation of the Kullback-Leibler divergence between the distributions of the agent’s and expert’s generated state-action trajectories, which is equivalent to maximizing the average log-likelihood of the expert’s demonstrated trajectories. The log-likelihood formulations of the maximum entropy IRL and MLIRL algorithms differ in their use of smoothing at the trajectory and action levels, respectively. As a result, the former’s log-likelihood or dissimilarity measure does not utilize the agent’s generated optimal policy, which leads [17] to question whether it should be considered an IRL algorithm. Bayesian IRL [21] extends IRL to the Bayesian setting by maintaining a distribution over all possible reward functions and updating it using Bayes’ rule given the expert’s demonstrated trajectories. The work of [5] extends the projection algorithm [1] to handle partially observable environments given the expert’s policy (i.e., represented as a finite state controller) or observation-action trajectories. All the IRL algorithms described above have assumed that the expert’s demonstrated trajectories are generated by only a single reward function. To relax this restrictive assumption, the recent works of [2, 6] have, respectively, generalized MLIRL (combining it with expectation-maximization (EM) clustering) and Bayesian IRL (integrating it with a Dirichlet process mixture model) to handle trajectories generated by multiple reward functions (e.g., due to many intentions) in observable environments. However, each trajectory is still assumed to be produced by a single reward function.
In this paper, we propose a new generalization of the IRL problem in observable environments, which is inspired by an open question posed in the seminal works on IRL [19, 22]: If behavior is strongly inconsistent with optimality, can we identify “locally consistent” reward functions for specific regions in state space? Such a question implies that no single reward function is globally consistent with the expert’s behavior, hence invalidating the use of all the above-mentioned IRL algorithms. More importantly, multiple reward functions may be locally consistent with the expert’s behavior in different segments along its state-action trajectory, and the expert has to switch/transition between these locally consistent reward functions during its demonstration. This can be observed in the following real-world example [26] where every possible intention of the expert is uniquely represented by a different reward function: A driver intends to take the highway to a food center for lunch. An electronic toll coming into effect on the highway may change his intention, prompting him to switch to another route. Learning the driver’s intentions to use different routes and his transitions between them allows the transport authority to analyze, understand, and predict traffic route patterns and behavior for regulating toll collection. This example, among others (e.g., commuters’ intentions to use different transport modes, tourists’ intentions to visit different attractions; Section 4), motivates the practical need to formalize and solve our proposed generalized IRL problem. This paper presents a novel generalization of the IRL problem that, in particular, allows each expert’s state-action trajectory to be generated by multiple locally consistent reward functions, hence catering to more realistic and complex experts’ behaviors than those afforded by the existing variants of the IRL problem (which all assume that each trajectory is produced by a single reward function) discussed earlier.
At first glance, one may straightaway perceive our generalization as an IRL problem in a partially observable environment by representing the choice of locally consistent reward function in a segment as a latent state component. However, the observation model cannot be easily specified nor learned from the expert’s state-action trajectories, which invalidates the use of IRL for POMDPs [5]. Instead, we develop a probabilistic graphical model for representing our generalized IRL problem (Section 2), from which an EM algorithm can be devised to iteratively select the locally consistent reward functions as well as learn the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories (Section 3). As a result, the most likely partition of an expert’s demonstrated trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived (Section 3), thus enabling practitioners to identify states in which the expert transitions between locally consistent reward functions and to investigate the resulting causes. To extend such a partitioning to work for trajectories traversing through any (possibly unvisited) region of the state space, we propose using a generalized linear model to represent and predict the stochastic transitions between reward functions at any state (i.e., including states not visited in the expert’s demonstrated trajectories) by exploiting features that influence these transitions (Section 2). Finally, our proposed IRL algorithm is empirically evaluated using both synthetic and real-world datasets (Section 4).

2 Problem Formulation

A Markov decision process (MDP) for an agent is defined as a tuple $(S, A, t, r_\theta, \gamma)$ consisting of a finite set $S$ of its possible states such that each state $s \in S$ is associated with a column vector $\phi_s$ of realized feature measurements, a finite set $A$ of its possible actions, a state transition function $t : S \times A \times S \to$
$[0, 1]$ denoting the probability $t(s, a, s') \triangleq P(s' \mid s, a)$ of moving to state $s'$ by performing action $a$ in state $s$, a reward function $r_\theta : S \to \mathbb{R}$ mapping each state $s \in S$ to its reward $r_\theta(s) \triangleq \theta^\top \phi_s$ where $\theta$ is a column vector of reward weights, and a constant factor $\gamma \in (0, 1)$ discounting its future rewards. When $\theta$ is known, the agent can compute its policy $\pi_\theta : S \times A \to [0, 1]$ specifying the probability $\pi_\theta(s, a) \triangleq P(a \mid s, r_\theta)$ of performing action $a$ in state $s$. However, $\theta$ is not known in IRL and is to be learned from an expert (Section 3). Let $\mathcal{R}$ denote a finite set of locally consistent reward functions of the agent and $r_{\tilde\theta}$ be a reward function chosen arbitrarily from $\mathcal{R}$ prior to learning. Define a transition function $\tau_\omega : \mathcal{R} \times S \times \mathcal{R} \to [0, 1]$ for switching between these reward functions as the probability $\tau_\omega(r_\theta, s, r_{\theta'}) \triangleq P(r_{\theta'} \mid s, r_\theta, \omega)$ of switching from reward function $r_\theta$ to reward function $r_{\theta'}$ in state $s$, where the set $\omega \triangleq \{\omega_{r_\theta r_{\theta'}}\}_{r_\theta \in \mathcal{R},\, r_{\theta'} \in \mathcal{R}\setminus\{r_{\tilde\theta}\}}$ contains column vectors of transition weights $\omega_{r_\theta r_{\theta'}}$ for all $r_\theta \in \mathcal{R}$ and $r_{\theta'} \in \mathcal{R}\setminus\{r_{\tilde\theta}\}$ if the features influencing the stochastic transitions between reward functions can be additionally observed by the agent during the expert’s demonstration, and $\omega \triangleq \emptyset$ otherwise. In our generalized IRL problem, $\tau_\omega$ is not known and is to be learned from the expert (Section 3). Specifically, in the former case, we propose using a generalized linear model to represent $\tau_\omega$:
$$\tau_\omega(r_\theta, s, r_{\theta'}) \triangleq \begin{cases} \exp(\omega_{r_\theta r_{\theta'}}^\top \varphi_s)\Big/\Big(1 + \sum_{r_{\bar\theta} \in \mathcal{R}\setminus\{r_{\tilde\theta}\}} \exp(\omega_{r_\theta r_{\bar\theta}}^\top \varphi_s)\Big) & \text{if } r_{\theta'} \neq r_{\tilde\theta}, \\ 1\Big/\Big(1 + \sum_{r_{\bar\theta} \in \mathcal{R}\setminus\{r_{\tilde\theta}\}} \exp(\omega_{r_\theta r_{\bar\theta}}^\top \varphi_s)\Big) & \text{otherwise}; \end{cases} \qquad (1)$$
where $\varphi_s$ is a column vector of random feature measurements influencing the stochastic transitions between reward functions (i.e., $\tau_\omega$) in state $s$.
Remark 1.
Different from $\phi_s$, whose feature measurements are typically assumed in IRL algorithms to be realized/known to the agent for all $s \in S$ and to remain static over time, the feature measurements of $\varphi_s$ are, in practice, often not known to the agent a priori and can only be observed when the expert (agent) visits the corresponding state $s \in S$ during its demonstration (execution), and may vary over time according to some unknown distribution, as motivated by the real-world examples given in Section 1. Without prior observation of the feature measurements of $\varphi_s$ for all $s \in S$ (or knowledge of their distributions) necessary for computing $\tau_\omega$ (1), the agent cannot consider exploiting $\tau_\omega$ for switching between reward functions within MDP or POMDP planning, even after learning its weights $\omega$; this eliminates the possibility of reducing our generalized IRL problem to an equivalent conventional IRL problem (Section 1) with only a single reward function (i.e., comprising a mixture of locally consistent reward functions). Furthermore, the observation model cannot be easily specified nor learned from the expert’s trajectories of states, actions, and $\varphi_s$, which invalidates the use of IRL for POMDPs [5]. Instead of exploiting $\tau_\omega$ within planning, during the agent’s execution, when it visits some state $s$ and observes the feature measurements of $\varphi_s$, it can then compute and use $\tau_\omega$ for state $s$ to switch between reward functions, each of which has generated a separate MDP policy prior to execution, as illustrated in a simple example in Fig. 1 below.

Figure 1: Transition function $\tau_\omega$ of an agent in state $s$ for switching between two reward functions $r_\theta$ and $r_{\theta'}$ with their respective policies $\pi_\theta$ and $\pi_{\theta'}$ generated prior to execution.

Remark 2. Using a generalized linear model to represent $\tau_\omega$ (1) allows learning of the stochastic transitions between reward functions (specifically, by learning $\omega$
(Section 3)) to be generalized across different states. After learning, (1) can then be exploited for predicting the stochastic transitions between reward functions at any state (i.e., including states not visited in the expert’s demonstrated state-action trajectories). Consequently, the agent can choose to traverse a trajectory through any region (i.e., possibly not visited by the expert) of the state space during its execution, and the most likely partition of its trajectory into segments that are generated from different locally consistent reward functions selected by EM can still be derived (Section 3). In contrast, if the feature measurements of $\varphi_s$ cannot be observed by the agent during the expert’s demonstration (i.e., $\omega = \emptyset$, as defined above), then such a generalization is not possible; only the transition probabilities of switching between reward functions at states visited in the expert’s demonstrated trajectories can be estimated (Section 3). In practice, since the number $|S|$ of visited states is expected to be much larger than the length $L$ of any feature vector $\varphi_s$, the number $O(|S||\mathcal{R}|^2)$ of transition probabilities to be estimated is bigger than $|\omega| = O(L|\mathcal{R}|^2)$ in (1). So, observing $\varphi_s$ offers the further advantage of reducing the number of parameters to be learned.

Figure 2: Probabilistic graphical model of the expert’s $n$-th demonstrated trajectory encoding its stochastic transitions between reward functions with solid edges (i.e., $\tau_\omega(r_{\theta^n_{t-1}}, s^n_t, r_{\theta^n_t}) = P(r_{\theta^n_t} \mid s^n_t, r_{\theta^n_{t-1}}, \omega)$ for $t = 1, \ldots, T_n$), state transitions with dashed edges (i.e., $t(s^n_t, a^n_t, s^n_{t+1}) = P(s^n_{t+1} \mid s^n_t, a^n_t)$ for $t = 1, \ldots, T_n - 1$), and its policy with dotted edges (i.e., $\pi_{\theta^n_t}(s^n_t, a^n_t) = P(a^n_t \mid s^n_t, r_{\theta^n_t})$ for $t = 1, \ldots, T_n$).

Fig. 2 shows the probabilistic graphical model for representing our generalized IRL problem.
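To illustrate the two regimes contrasted in Remark 2, a minimal numpy sketch (array layouts and function names are our own hypothetical conventions, not the authors' code): `tau_glm` evaluates the multinomial-logit model of Eq. (1) at one state, and `tau_counts` shows the per-state fallback of normalizing posterior transition counts when $\varphi_s$ is unobserved.

```python
import numpy as np

def tau_glm(omega, phi_s, ref=0):
    """Eq. (1): switching probabilities at a state with observed features phi_s.
    omega has shape (|R|, |R|, L); omega[r, ref] is unused because the reference
    reward function's logit is fixed to 0. Returns a row-stochastic (|R|, |R|)
    matrix: row r gives P(r' | s, r)."""
    logits = omega @ phi_s                 # (|R|, |R|): logits[r, r']
    logits[:, ref] = 0.0                   # reference target has logit 0
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def tau_counts(gamma_counts):
    """Per-state fallback when phi_s is unobserved: normalize posterior
    transition counts (rows: source reward function); uniform if no counts."""
    totals = gamma_counts.sum(axis=1, keepdims=True)
    safe = np.where(totals > 0, totals, 1.0)
    return np.where(totals > 0, gamma_counts / safe, 1.0 / gamma_counts.shape[1])
```

Note the parameter-count trade-off from Remark 2: `tau_glm` needs $O(L|\mathcal{R}|^2)$ weights total, while `tau_counts` needs a separate $|\mathcal{R}| \times |\mathcal{R}|$ table per visited state.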
To describe our model, some notation is necessary: Let $N$ be the number of the expert’s demonstrated trajectories and $T_n$ be the length (i.e., number of time steps) of its $n$-th trajectory for $n = 1, \ldots, N$. Let $r_{\theta^n_t} \in \mathcal{R}$, $a^n_t \in A$, and $s^n_t \in S$ denote its reward function, action, and state at time step $t$ in its $n$-th trajectory, respectively. Let $R_{\theta^n_t}$, $A^n_t$, and $S^n_t$ be random variables corresponding to their respective realizations $r_{\theta^n_t}$, $a^n_t$, and $s^n_t$, where $R_{\theta^n_t}$ is a latent variable, and $A^n_t$ and $S^n_t$ are observable variables. Define $r_{\theta^n} \triangleq (r_{\theta^n_t})_{t=0}^{T_n}$, $a^n \triangleq (a^n_t)_{t=1}^{T_n}$, and $s^n \triangleq (s^n_t)_{t=1}^{T_n}$ as the sequences of all its reward functions, actions, and states in its $n$-th trajectory, respectively. Finally, define $r_{\theta^{1:N}} \triangleq (r_{\theta^n})_{n=1}^N$, $a^{1:N} \triangleq (a^n)_{n=1}^N$, and $s^{1:N} \triangleq (s^n)_{n=1}^N$ as the tuples of all its reward function sequences, action sequences, and state sequences in its $N$ trajectories, respectively. It can be observed from Fig. 2 that our probabilistic graphical model of the expert’s $n$-th demonstrated trajectory encodes its stochastic transitions between reward functions, state transitions, and policy. Through our model, the Viterbi algorithm [20] can be applied to derive the most likely partition of the expert’s trajectory into segments that are generated from different locally consistent reward functions selected by EM, as shown in Section 3. Given the state transition function $t(\cdot, \cdot, \cdot)$ and the number $|\mathcal{R}|$ of reward functions, our model allows tractable learning of the unknown parameters using EM (Section 3), which include the reward weight vectors $\theta$ for all reward functions $r_\theta \in \mathcal{R}$, the transition function $\tau_\omega$ for switching between reward functions, the initial state probabilities $\nu(s) \triangleq P(S^n_1 = s)$ for all $s \in S$, and the initial reward function probabilities $\sigma(r_\theta) \triangleq P(R_{\theta^n_0} = r_\theta)$ for all $r_\theta \in \mathcal{R}$.

3 EM Algorithm for Parameter Learning

A straightforward approach to learning the unknown parameters $\Lambda \triangleq (\nu, \sigma, \{\theta \mid r_\theta \in \mathcal{R}\}, \tau_\omega)$
is to select the value of $\Lambda$ that directly maximizes the log-likelihood of the expert’s demonstrated trajectories. Computationally, such an approach is prohibitively expensive due to the large joint parameter space to be searched for the optimal value of $\Lambda$. To ease this computational burden, our key idea is to devise an EM algorithm that iteratively refines the estimate for $\Lambda$ to improve the expected log-likelihood instead, which is guaranteed to improve the original log-likelihood by at least as much:

Expectation (E) step. $Q(\Lambda, \Lambda^i) \triangleq \sum_{r_{\theta^{1:N}}} P(r_{\theta^{1:N}} \mid s^{1:N}, a^{1:N}, \Lambda^i) \log P(r_{\theta^{1:N}}, s^{1:N}, a^{1:N} \mid \Lambda)$.

Maximization (M) step. $\Lambda^{i+1} = \mathrm{argmax}_\Lambda\, Q(\Lambda, \Lambda^i)$,

where $\Lambda^i$ denotes the estimate for $\Lambda$ at iteration $i$. The $Q$ function of EM can be reduced to the following sum of five terms, as shown in Appendix A:
$$Q(\Lambda, \Lambda^i) = \sum_{n=1}^N \log \nu(s^n_1) + \sum_{n=1}^N \sum_{r_\theta \in \mathcal{R}} P(R_{\theta^n_0} = r_\theta \mid s^n, a^n, \Lambda^i) \log \sigma(r_\theta) \qquad (2)$$
$$+ \sum_{n=1}^N \sum_{t=1}^{T_n} \sum_{r_\theta \in \mathcal{R}} P(R_{\theta^n_t} = r_\theta \mid s^n, a^n, \Lambda^i) \log \pi_\theta(s^n_t, a^n_t) \qquad (3)$$
$$+ \sum_{n=1}^N \sum_{t=1}^{T_n} \sum_{r_\theta, r_{\theta'} \in \mathcal{R}} P(R_{\theta^n_{t-1}} = r_\theta, s^n_t, R_{\theta^n_t} = r_{\theta'} \mid s^n, a^n, \Lambda^i) \times \log \tau_\omega(r_\theta, s^n_t, r_{\theta'}) \qquad (4)$$
$$+ \sum_{n=1}^N \sum_{t=1}^{T_n-1} \log t(s^n_t, a^n_t, s^n_{t+1}). \qquad (5)$$
Interestingly, each of the first four terms in (2), (3), and (4) contains a unique unknown parameter type (respectively, $\nu$, $\sigma$, $\{\theta \mid r_\theta \in \mathcal{R}\}$, and $\tau_\omega$) and can therefore be maximized separately in the M step, as discussed below. As a result, the parameter space to be searched can be greatly reduced. Note that the third term (3) generalizes the log-likelihood in MLIRL [2] (i.e., assuming all trajectories to be produced by a single reward function) to that allowing each expert’s trajectory to be generated by multiple locally consistent reward functions. The last term (5), which contains the known state transition function $t$, is independent of the unknown parameters $\Lambda$.¹

¹If the state transition function is unknown, then it can be learned by optimizing the last term (5).

Learning initial state probabilities.
To maximize the first term in the $Q$ function (2) of EM, we use the method of Lagrange multipliers with the constraint $\sum_{s \in S} \nu(s) = 1$ to obtain the estimate $\hat\nu(s) = (1/N)\sum_{n=1}^N I^n_1$ for all $s \in S$, where $I^n_1$ is an indicator variable of value 1 if $s^n_1 = s$, and 0 otherwise. Since $\hat\nu$ can be computed directly from the expert’s demonstrated trajectories in $O(N)$ time, it does not have to be refined.

Learning initial reward function probabilities. To maximize the second term in the $Q$ function (2) of EM, we utilize the method of Lagrange multipliers with the constraint $\sum_{r_\theta \in \mathcal{R}} \sigma(r_\theta) = 1$ to derive
$$\sigma^{i+1}(r_{\theta^i}) = (1/N)\sum_{n=1}^N P(R_{\theta^n_0} = r_\theta \mid s^n, a^n, \Lambda^i) \qquad (6)$$
for all $r_{\theta^i} \in \mathcal{R}$, where $\sigma^{i+1}$ denotes the estimate for $\sigma$ at iteration $i+1$, $\theta^i$ denotes the estimate for $\theta$ at iteration $i$, and $P(R_{\theta^n_t} = r_\theta \mid s^n, a^n, \Lambda^i)$ (in this case, $t = 0$) can be computed in $O(\sum_{n=1}^N |\mathcal{R}|^2 T_n)$ time using a procedure inspired by the Baum-Welch algorithm [3], as shown in Appendix B.

Learning reward functions. The third term in the $Q$ function (3) of EM is maximized using gradient ascent, and its gradient $g_1(\theta)$ with respect to $\theta$ is derived to be
$$g_1(\theta) \triangleq \sum_{n=1}^N \sum_{t=1}^{T_n} \frac{P(R_{\theta^n_t} = r_\theta \mid s^n, a^n, \Lambda^i)}{\pi_\theta(s^n_t, a^n_t)} \frac{d\pi_\theta(s^n_t, a^n_t)}{d\theta} \qquad (7)$$
for all $\theta \in \{\theta' \mid r_{\theta'} \in \mathcal{R}\}$. For $\pi_\theta(s^n_t, a^n_t)$ to be differentiable in $\theta$, we define the $Q_\theta$ function of the MDP using an operator that blends the $Q_\theta$ values via Boltzmann exploration [2]: $Q_\theta(s, a) \triangleq \theta^\top \phi_s + \gamma \sum_{s' \in S} t(s, a, s')\, \otimes_{a'} Q_\theta(s', a')$ where $\otimes_a Q_\theta(s, a) \triangleq \sum_{a \in A} Q_\theta(s, a)\, \pi_\theta(s, a)$ such that $\pi_\theta(s, a) \triangleq \exp(\beta Q_\theta(s, a))/\sum_{a' \in A} \exp(\beta Q_\theta(s, a'))$ is defined as a Boltzmann exploration policy, and $\beta > 0$ is a temperature parameter. Then, we update $\theta^{i+1} \leftarrow \theta^i + \delta g_1(\theta^i)$ where $\delta$ is the learning step size. We use a backtracking line search to improve the performance of gradient ascent. Similar to MLIRL, the time incurred in each iteration of gradient ascent depends mostly on that of value iteration, which increases with the size of the MDP’s state and action space.
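The Boltzmann-blended $Q_\theta$ recursion above can be computed by a soft variant of value iteration; the following is a small sketch under simplifying assumptions (tabular MDP, our own array conventions, fixed iteration count rather than a convergence test):

```python
import numpy as np

def boltzmann_q(theta, feat, trans, beta=5.0, gamma=0.9, iters=200):
    """Soft value iteration:
      Q(s,a) = theta^T phi_s + gamma * sum_s' t(s,a,s') * sum_a' Q(s',a') pi(s',a'),
      pi(s,a) = softmax_a(beta * Q(s,a)).
    feat:  (S, L) feature matrix, row s is phi_s
    trans: (S, A, S) transition tensor t(s, a, s')"""
    S, A, _ = trans.shape
    r = feat @ theta                        # state rewards r(s) = theta^T phi_s
    Q = np.zeros((S, A))
    for _ in range(iters):
        logits = beta * (Q - Q.max(axis=1, keepdims=True))   # stable softmax
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
        V = (pi * Q).sum(axis=1)            # Boltzmann-blended value per state
        Q = r[:, None] + gamma * trans @ V  # Bellman-style backup
    pi = np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))
    pi /= pi.sum(axis=1, keepdims=True)
    return Q, pi
```

Because every term is smooth in `theta`, the resulting `pi` is differentiable, which is exactly what the gradient in (7) requires.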
Learning transition function for switching between reward functions. To maximize the fourth term in the $Q$ function (4) of EM, if the feature measurements $\varphi_s$ cannot be observed by the agent during the expert's demonstration (i.e., $\omega = \emptyset$), then we utilize the method of Lagrange multipliers with the constraints $\sum_{r_{\theta'} \in R} \tau_\omega(r_\theta, s, r_{\theta'}) = 1$ for all $r_\theta \in R$ and $s \in S$ to obtain

$\tau_{\omega^{i+1}}(r_{\theta^i}, s, r_{\theta'^i}) = \Big( \sum_{n=1}^{N} \sum_{t=1}^{T_n} \gamma_{n,t,r_{\theta^i},s,r_{\theta'^i}} \Big) \Big/ \Big( \sum_{r_{\bar\theta^i} \in R} \sum_{n=1}^{N} \sum_{t=1}^{T_n} \gamma_{n,t,r_{\theta^i},s,r_{\bar\theta^i}} \Big)$ (8)

for $r_{\theta^i}, r_{\theta'^i} \in R$ and $s \in S$, where $S$ is the set of states visited by the expert, $\tau_{\omega^{i+1}}$ is an estimate for $\tau_\omega$ at iteration $i+1$, and $\gamma_{n,t,r_{\theta^i},s,r_{\bar\theta^i}} \triangleq P(R^n_{t-1} = r_\theta, S^n_t = s, R^n_t = r_{\bar\theta} \mid s^n, a^n, \Lambda^i)$ can be computed efficiently by exploiting the intermediate results from evaluating $P(R^n_t = r_\theta \mid s^n, a^n, \Lambda^i)$ described previously, as detailed in Appendix B. On the other hand, if the feature measurements $\varphi_s$ can be observed by the agent during the expert's demonstration, then recall that we use a generalized linear model to represent $\tau_\omega$ (1) (Section 2) and $\omega$ is the unknown parameter to be estimated. Similar to learning the reward weight vector $\theta$ for reward function $r_\theta$, we maximize the fourth term (4) in the $Q$ function of EM by using gradient ascent, and its gradient $g_2(\omega_{r_\theta r_{\theta'}})$ with respect to $\omega_{r_\theta r_{\theta'}}$ is derived to be

$g_2(\omega_{r_\theta r_{\theta'}}) \triangleq \sum_{n=1}^{N} \sum_{t=1}^{T_n} \sum_{r_{\bar\theta} \in R} \frac{\gamma_{n,t,r_\theta,s^n_t,r_{\bar\theta}}}{\tau_\omega(r_\theta, s^n_t, r_{\bar\theta})} \frac{d\tau_\omega(r_\theta, s^n_t, r_{\bar\theta})}{d\omega_{r_\theta r_{\theta'}}}$ (9)

for all $\omega_{r_\theta r_{\theta'}} \in \omega$. Let $\omega^i_{r_\theta r_{\theta'}}$ denote an estimate for $\omega_{r_\theta r_{\theta'}}$ at iteration $i$. Then, it is updated using $\omega^{i+1}_{r_\theta r_{\theta'}} \leftarrow \omega^i_{r_\theta r_{\theta'}} + \delta g_2(\omega^i_{r_\theta r_{\theta'}})$, where $\delta$ is the learning step size. The backtracking line search method is also used to improve the performance of gradient ascent here. In both cases, the time incurred in each iteration $i$ is proportional to the number of $\gamma_{n,t,r_{\theta^i},s,r_{\bar\theta^i}}$ terms to be computed, which is $O(\sum_{n=1}^{N} |R|^2 |S| T_n)$ time.

Viterbi algorithm for partitioning a trajectory into segments with different locally consistent reward functions.
Given the final estimate $\hat\Lambda = (\hat\nu, \hat\sigma, \{\hat\theta \mid r_{\hat\theta} \in R\}, \tau_{\hat\omega})$ for the unknown parameters $\Lambda$ produced by EM, the most likely partition of the expert's $n$-th demonstrated trajectory into segments generated by different locally consistent reward functions is $r^*_{\theta^n} = (r^*_{\theta^n_t})_{t=0}^{T_n} \triangleq \operatorname{argmax}_{r_{\theta^n}} P(r_{\theta^n} \mid s^n, a^n, \hat\Lambda) = \operatorname{argmax}_{r_{\theta^n}} P(r_{\theta^n}, s^n, a^n \mid \hat\Lambda)$, which can be derived using the Viterbi algorithm [20]. Specifically, define $v_{r_{\hat\theta},T}$ for $T = 1, \ldots, T_n$ as the probability of the most likely reward function sequence $(r_{\theta^n_t})_{t=0}^{T-1}$ from time steps 0 to $T-1$, ending with reward function $r_{\hat\theta}$ at time step $T$, that produces the state and action sequences $(s^n_t)_{t=1}^{T}$ and $(a^n_t)_{t=1}^{T}$:

$v_{r_{\hat\theta},T} \triangleq \max_{(r_{\theta^n_t})_{t=0}^{T-1}} P\big((r_{\theta^n_t})_{t=0}^{T-1}, R^n_T = r_\theta, (s^n_t)_{t=1}^{T}, (a^n_t)_{t=1}^{T} \mid \hat\Lambda\big) = t(s^n_{T-1}, a^n_{T-1}, s^n_T)\, \pi_{\hat\theta}(s^n_T, a^n_T) \max_{r_{\hat\theta'}} v_{r_{\hat\theta'},T-1}\, \tau_{\hat\omega}(r_{\hat\theta'}, s^n_T, r_{\hat\theta})$,

$v_{r_{\hat\theta},1} \triangleq \max_{r_{\theta^n_0}} P\big(r_{\theta^n_0}, R^n_1 = r_\theta, s^n_1, a^n_1 \mid \hat\Lambda\big) = \hat\nu(s^n_1)\, \pi_{\hat\theta}(s^n_1, a^n_1) \max_{r_{\hat\theta'}} \hat\sigma(r_{\hat\theta'})\, \tau_{\hat\omega}(r_{\hat\theta'}, s^n_1, r_{\hat\theta})$.

Then, $r^*_{\theta^n_0} = \operatorname{argmax}_{r_{\hat\theta'}} \hat\sigma(r_{\hat\theta'})\, \tau_{\hat\omega}(r_{\hat\theta'}, s^n_1, r^*_{\theta^n_1})$, $r^*_{\theta^n_T} = \operatorname{argmax}_{r_{\hat\theta'}} v_{r_{\hat\theta'},T}\, \tau_{\hat\omega}(r_{\hat\theta'}, s^n_{T+1}, r^*_{\theta^n_{T+1}})$ for $T = 1, \ldots, T_n - 1$, and $r^*_{\theta^n_{T_n}} = \operatorname{argmax}_{r_{\hat\theta}} v_{r_{\hat\theta},T_n}$. The above Viterbi algorithm can be applied in the same way to partition an agent's trajectory traversing through any region (i.e., possibly not visited by the expert) of the state space during its execution in $O(|R|^2 T)$ time.

4 Experiments and Discussion

This section evaluates the empirical performance of our IRL algorithm using 3 datasets featuring experts' demonstrated trajectories in two simulated grid worlds and real-world taxi trajectories.
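The Viterbi recursion described above can be sketched as follows. This is a minimal log-space sketch, not the paper's implementation: it assumes the per-step emission terms (the $t(\cdot)\,\pi_{\hat\theta}(\cdot)$ factors) and the switching probabilities have been precomputed into tables, collapsing the state-dependence of $\tau_{\hat\omega}$; the array names are illustrative.

```python
import numpy as np

def viterbi_partition(log_init, log_trans, log_emit):
    """Most likely sequence of reward-function indices.
    log_init:  (R,)   log initial reward-function probabilities
    log_trans: (R, R) log switching probabilities (state-dependence collapsed)
    log_emit:  (T, R) log-likelihood of step t's (state, action) under each reward fn
    """
    T, R = log_emit.shape
    v = log_init + log_emit[0]            # v[r]: best log-prob ending in r at t = 0
    back = np.zeros((T, R), dtype=int)
    for t in range(1, T):
        scores = v[:, None] + log_trans   # scores[r', r]: arrive in r from r'
        back[t] = scores.argmax(axis=0)
        v = scores.max(axis=0) + log_emit[t]
    path = [int(v.argmax())]              # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

On a toy sequence whose emissions strongly favor reward function 0 for two steps and then reward function 1, with sticky transitions, the recovered partition switches exactly once.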
The average log-likelihood of the expert's demonstrated trajectories is used as the performance metric because it inherently accounts for the fidelity of our IRL algorithm in learning the locally consistent reward functions (i.e., $R$) and the stochastic transitions between them (i.e., $\tau_\omega$):

$L(\Lambda) \triangleq (1/N_{tot}) \sum_{n=1}^{N_{tot}} \log P(s^n, a^n \mid \Lambda)$ (10)

where $N_{tot}$ is the total number of the expert's demonstrated trajectories available in the dataset. As proven in [17], maximizing $L(\Lambda)$ with respect to $\Lambda$ is equivalent to minimizing an empirical approximation of the Kullback-Leibler divergence between the distributions of the agent's and the expert's generated state-action trajectories. Note that when the final estimate $\hat\Lambda$ produced by EM (Section 3) is plugged into (10), the resulting $P(s^n, a^n \mid \hat\Lambda)$ in (10) can be computed efficiently using a procedure similar to that in Section 3, as detailed in Appendix C. To avoid local maxima in gradient ascent, we initialize our EM algorithm with 20 random $\Lambda^0$ values and report the best result based on the $Q$ value of EM (Section 3).

Figure 3: Grid worlds A (states (0, 0), (1, 1), and (2, 2) are, respectively, examples of water, land, and obstacle) and B (state (2, 2) is an example of barrier). 'O' and 'D' denote origin and destination.

To demonstrate the importance of modeling and learning stochastic transitions between locally consistent reward functions, the performance of our IRL algorithm is compared with that of its reduced variant assuming no change/switching of reward function within each trajectory, which is implemented by initializing $\tau_\omega(r_\theta, s, r_\theta) = 1$ for all $r_\theta \in R$ and $s \in S$ and deactivating the learning of $\tau_\omega$. In fact, it can be shown (Appendix D) that such a reduction, interestingly, is equivalent to EM clustering with MLIRL [2].
So, our IRL algorithm generalizes EM clustering with MLIRL, the latter of which has been empirically demonstrated in [2] to outperform many existing IRL algorithms, as discussed in Section 1.

Simulated grid world A. The environment (Fig. 3) is modeled as a $5 \times 5$ grid of states, each of which is either land, water, water and destination, or obstacle, associated with the respective feature vectors (i.e., $\phi_s$) $(0, 1, 0)^\top$, $(1, 0, 0)^\top$, $(1, 0, 1)^\top$, and $(0, 0, 0)^\top$. The expert starts at origin (0, 2) and any of its actions can achieve the desired state with 0.85 probability. It has two possible reward functions, one of which prefers land to water and going to the destination (i.e., $\theta = (0, 20, 30)^\top$), and the other of which prefers water to land and going to the destination (i.e., $\theta' = (20, 0, 30)^\top$). The expert will only consider switching its reward function at states (2, 0) and (2, 4), from $r_{\theta'}$ to $r_\theta$ with 0.5 probability and from $r_\theta$ to $r_{\theta'}$ with 0.7 probability; its reward function remains unchanged at all other states. The feature measurements $\varphi_s$ cannot be observed by the agent during the expert's demonstration. So, $\omega = \emptyset$ and $\tau_\omega$ is estimated using (8). We set $\gamma$ to 0.95 and the number $|R|$ of reward functions of the agent to 2. Fig. 4a shows results of the average log-likelihood $L$ (10) achieved by our IRL algorithm, EM clustering with MLIRL, and the expert, averaged over 4 random instances with varying number $N$ of the expert's demonstrated trajectories. It can be observed that our IRL algorithm significantly outperforms EM clustering with MLIRL and achieves an $L$ performance close to that of the expert, especially when $N$ increases. This can be explained by its modeling of $\tau_\omega$ and its high fidelity in learning and predicting $\tau_\omega$: while our IRL algorithm allows switching of reward function within each trajectory, EM clustering with MLIRL does not.
Figure 4: Graphs of average log-likelihood $L$ achieved by our IRL algorithm, EM clustering with MLIRL, and the expert vs. number $N$ of the expert's demonstrated trajectories in simulated grid worlds (a) A ($N_{tot} = 1500$) and (b) B ($N_{tot} = 500$).

We also observe that the accuracy of estimating the transition probabilities $\tau_\omega(r_\theta, s, \cdot)$ ($\tau_\omega(r_{\theta'}, s, \cdot)$) using (8) depends on the frequency and distribution of trajectories demonstrated by the expert with its reward function $R^n_{t-1} = r_\theta$ ($R^n_{t-1} = r_{\theta'}$) at time step $t-1$ and its state $s^n_t = s$ at time step $t$, which is expected. Those transition probabilities that are poorly estimated due to few relevant expert demonstrations, however, do not hurt the $L$ performance of our IRL algorithm by much, because such trajectories tend to have a very low probability of being demonstrated by the expert. In any case, this issue can be mitigated by using the generalized linear model (1) to represent $\tau_\omega$ and observing the feature measurements $\varphi_s$ necessary for learning and computing $\tau_\omega$, as shown next.

Simulated grid world B. The environment (Fig. 3) is also modeled as a $5 \times 5$ grid of states, each of which is either the origin, destination, or land, associated with the respective feature vectors (i.e., $\phi_s$) $(0, 1)^\top$, $(1, 0)^\top$, and $(0, 0)^\top$. The expert starts at origin (4, 0) and any of its actions can achieve the desired state with 0.85 probability. It has two possible reward functions, one of which prefers going to the destination (i.e., $\theta = (30, 0)^\top$), and the other of which prefers returning to the origin (i.e., $\theta' = (0, 30)^\top$).
While moving to the destination, the expert will encounter barriers at some states with corresponding feature vectors $\varphi_s = (1, 1)^\top$ and no barriers at all other states with $\varphi_s = (0, 1)^\top$; the second component of $\varphi_s$ is used as an offset value in the generalized linear model (1). The expert's behavior of switching between reward functions is governed by a generalized linear model $\tau_\omega$ (1) with $\tilde r_\theta = r_{\theta'}$ and transition weights $\omega_{r_\theta r_\theta} = (-11, 12)^\top$ and $\omega_{r_{\theta'} r_\theta} = (13, -12)^\top$. As a result, it will, for example, consider switching its reward function at states with barriers from $r_\theta$ to $r_{\theta'}$ with 0.269 probability. We estimate $\tau_\omega$ using (9) and set $\gamma$ to 0.95 and the number $|R|$ of reward functions of the agent to 2. To assess the fidelity of learning and predicting the stochastic transitions between reward functions at unvisited states, we intentionally remove all demonstrated trajectories that visit state (2, 0) with a barrier. Fig. 4b shows results of the $L$ (10) performance achieved by our IRL algorithm, EM clustering with MLIRL, and the expert, averaged over 4 random instances with varying $N$. It can again be observed that our IRL algorithm outperforms EM clustering with MLIRL and achieves an $L$ performance comparable to that of the expert due to its modeling of $\tau_\omega$ and its high fidelity in learning and predicting $\tau_\omega$: while our IRL algorithm allows switching of reward function within each trajectory, EM clustering with MLIRL does not. Besides, the estimated transition function $\tau_{\hat\omega}$ using (9) is very close to that of the expert, even at the unvisited state (2, 0). So, unlike using (8), the learning of $\tau_\omega$ with (9) generalizes well across different states, thus allowing $\tau_\omega$ to be predicted accurately at any state. Hence, we will model $\tau_\omega$ with (1) and learn it using (9) in the next experiment.

Real-world taxi trajectories.
The Comfort taxi company in Singapore has provided GPS traces of 59 taxis with the same origin and destination that are map-matched [18] onto a network (i.e., comprising a highway, arterials, slip roads, etc.) of 193 road segments (i.e., states). Each road segment/state is specified by a 7-dimensional feature vector $\phi_s$: each of the first six components of $\phi_s$ is an indicator describing whether it belongs to Alexandra Road (AR), Ayer Rajah Expressway (AYE), Depot Road (DR), Henderson Road (HR), Jalan Bukit Merah (JBM), or Lower Delta Road (LDR), while the last component of $\phi_s$ is the normalized shortest path distance from the road segment to the destination. We assume that the 59 map-matched trajectories are demonstrated by taxi drivers with a common set $R$ of 2 reward functions and the same transition function $\tau_\omega$ (1) for switching between reward functions, the latter of which is influenced by the normalized taxi speed constituting the first component of the 2-dimensional feature vector $\varphi_s$; the second component of $\varphi_s$ is used as an offset of value 1 in the generalized linear model (1). The number $|R|$ of reward functions is set to 2 because when we experiment with $|R| = 3$, two of the learned reward functions are similar. Every driver can deterministically move its taxi from its current road segment to the desired adjacent road segment.

Figure 5: Graphs of (a) average log-likelihood $L$ achieved by our IRL algorithm and EM clustering with MLIRL vs. number $N$ of taxi trajectories ($N_{tot} = 59$) and (b) transition probabilities $\tau_{\hat\omega}(r_{\hat\theta}, s, r_{\hat\theta'})$ and $\tau_{\hat\omega}(r_{\hat\theta'}, s, r_{\hat\theta'})$ of switching between reward functions vs. taxi speed.

Fig.
5a shows results of the $L$ (10) performance achieved by our IRL algorithm and EM clustering with MLIRL averaged over 3 random instances with varying $N$. Our IRL algorithm outperforms EM clustering with MLIRL due to its modeling of $\tau_\omega$ and its high fidelity in learning and predicting $\tau_\omega$. To see this, our IRL algorithm is able to learn that a taxi driver is likely to switch between reward functions representing different intentions within his demonstrated trajectory: reward function $r_{\hat\theta}$ denotes his intention of driving directly to the destination (Fig. 6a) due to a huge penalty (i.e., reward weight $-49$) on being far from the destination and a large reward (i.e., reward weight 35.7) for taking the shortest path from origin to destination, which is via JBM, while $r_{\hat\theta'}$ denotes his intention of detouring to DR or JBM (Fig. 6b) due to large rewards for traveling on them (respectively, reward weights 30.5 and 23.7). As an example, Fig. 6c shows the most likely partition of a demonstrated trajectory into segments generated from the locally consistent reward functions $r_{\hat\theta}$ and $r_{\hat\theta'}$, which is derived using our Viterbi algorithm (Section 3). It can be observed that the driver is initially in $r_{\hat\theta'}$ on the slip road exiting AYE, switches from $r_{\hat\theta'}$ to $r_{\hat\theta}$ upon turning into AR to detour to DR, and remains in $r_{\hat\theta}$ while driving along DR, HR, and JBM to the destination. On the other hand, the reward functions learned by EM clustering with MLIRL are both associated with his intention of driving directly to the destination (i.e., similar to $r_{\hat\theta}$); it is not able to learn his intention of detouring to DR or JBM.

Figure 6: Reward (a) $r_{\hat\theta}(s)$ and (b) $r_{\hat\theta'}(s)$ for each road segment $s$ with $\hat\theta = (7.4, 3.9, 16.3, 20.3, 35.7, 21.5, -49.0)^\top$ and $\hat\theta' = (5.2, 9.2, 30.5, 15.0, 23.7, 21.5, -9.2)^\top$ such that more red road segments give higher rewards.
(c) Most likely partition of a demonstrated trajectory from origin 'O' to destination 'D' into red and green segments generated by $r_{\hat\theta}$ and $r_{\hat\theta'}$, respectively.

Fig. 5b shows the influence of the normalized taxi speed (i.e., the first component of $\varphi_s$) on the estimated transition function $\tau_{\hat\omega}$ using (9). It can be observed that when the driver is in $r_{\hat\theta}$ (i.e., driving directly to the destination), he is very unlikely to change his intention regardless of taxi speed. But, when he is in $r_{\hat\theta'}$ (i.e., detouring to DR or JBM), he is likely (unlikely) to remain in this intention if the taxi speed is low (high). The demonstrated trajectory in Fig. 6c in fact supports this observation: the driver initially remains in $r_{\hat\theta'}$ on the upslope slip road exiting AYE, which causes the low taxi speed. Upon turning into AR to detour to DR, he switches from $r_{\hat\theta'}$ to $r_{\hat\theta}$ because he can drive at relatively high speed on flat terrain.

5 Conclusion

This paper describes an EM-based IRL algorithm that can learn the multiple reward functions being locally consistent in different segments along a trajectory, as well as the stochastic transitions between them. It generalizes EM clustering with MLIRL and has been empirically demonstrated to outperform it on both synthetic and real-world datasets. For our future work, we plan to extend our IRL algorithm to cater to an unknown number of reward functions [6], nonlinear reward functions [12] modeled by Gaussian processes [4, 8, 13, 14, 15, 25], other dissimilarity measures described in Section 1, linearly-solvable MDPs [7], active learning with Gaussian processes [11], and interactions with self-interested agents [9, 10].

Acknowledgments. This work was partially supported by Singapore-MIT Alliance for Research and Technology Subaward Agreement No. 52 R-252-000-550-592.

References

[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.

[2] M. Babeş-Vroman, V. Marivate, K. Subramanian, and M. Littman.
Apprenticeship learning about multiple intentions. In Proc. ICML, pages 897–904, 2011.

[3] J. Bilmes. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Technical Report ICSI-TR-97-02, University of California, Berkeley, 1998.

[4] J. Chen, N. Cao, K. H. Low, R. Ouyang, C. K.-Y. Tan, and P. Jaillet. Parallel Gaussian process regression with low-rank covariance matrix approximations. In Proc. UAI, pages 152–161, 2013.

[5] J. Choi and K. Kim. Inverse reinforcement learning in partially observable environments. JMLR, 12:691–730, 2011.

[6] J. Choi and K. Kim. Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. In Proc. NIPS, pages 314–322, 2012.

[7] K. Dvijotham and E. Todorov. Inverse optimal control with linearly-solvable MDPs. In Proc. ICML, pages 335–342, 2010.

[8] T. N. Hoang, Q. M. Hoang, and K. H. Low. A unifying framework of anytime sparse Gaussian process regression models with stochastic variational inference for big data. In Proc. ICML, pages 569–578, 2015.

[9] T. N. Hoang and K. H. Low. A general framework for interacting Bayes-optimally with self-interested agents using arbitrary parametric model and model prior. In Proc. IJCAI, pages 1394–1400, 2013.

[10] T. N. Hoang and K. H. Low. Interactive POMDP Lite: Towards practical planning to predict and exploit intentions for interacting with self-interested agents. In Proc. IJCAI, pages 2298–2305, 2013.

[11] T. N. Hoang, K. H. Low, P. Jaillet, and M. Kankanhalli. Nonmyopic ε-Bayes-optimal active learning of Gaussian processes. In Proc. ICML, pages 739–747, 2014.

[12] S. Levine, Z. Popović, and V. Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Proc. NIPS, pages 19–27, 2011.

[13] K. H. Low, J. Chen, T. N. Hoang, N. Xu, and P. Jaillet. Recent advances in scaling up Gaussian process predictive models for large spatiotemporal data. In S. Ravela and A. Sandu, editors, Proc.
Dynamic Data-driven Environmental Systems Science Conference (DyDESS'14). LNCS 8964, Springer, 2015.

[14] K. H. Low, N. Xu, J. Chen, K. K. Lim, and E. B. Özgül. Generalized online sparse Gaussian processes with application to persistent mobile robot localization. In Proc. ECML/PKDD Nectar Track, 2014.

[15] K. H. Low, J. Yu, J. Chen, and P. Jaillet. Parallel Gaussian process regression for big data: Low-rank representation meets Markov approximation. In Proc. AAAI, pages 2821–2827, 2015.

[16] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proc. UAI, pages 295–302, 2007.

[17] G. Neu and C. Szepesvári. Training parsers by inverse reinforcement learning. Machine Learning, 77(2–3):303–337, 2009.

[18] P. Newson and J. Krumm. Hidden Markov map matching through noise and sparseness. In Proc. 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 336–343, 2009.

[19] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. ICML, 2000.

[20] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77(2):257–286, 1989.

[21] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In Proc. IJCAI, pages 2586–2591, 2007.

[22] S. Russell. Learning agents for uncertain environments. In Proc. COLT, pages 101–103, 1998.

[23] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In Proc. ICML, pages 1032–1039, 2008.

[24] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Proc. NIPS, pages 1449–1456, 2007.

[25] N. Xu, K. H. Low, J. Chen, K. K. Lim, and E. B. Özgül. GP-Localize: Persistent mobile robot localization using online sparse Gaussian process observation model. In Proc. AAAI, pages 2585–2592, 2014.

[26] J. Yu, K. H. Low, A. Oran, and P. Jaillet.
Hierarchical Bayesian nonparametric approach to modeling and learning the wisdom of crowds of urban traffic route planning agents. In Proc. IAT, pages 478–485, 2012.

[27] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. AAAI, pages 1433–1438, 2008.
A hybrid sampler for Poisson-Kingman mixture models

María Lomelí
Gatsby Unit, University College London
mlomeli@gatsby.ucl.ac.uk

Stefano Favaro
Department of Economics and Statistics, University of Torino and Collegio Carlo Alberto
stefano.favaro@unito.it

Yee Whye Teh
Department of Statistics, University of Oxford
y.w.teh@stats.ox.ac.uk

Abstract

This paper concerns the introduction of a new Markov chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite-dimensional component of the model such that, while explicitly representing this infinite component, it has smaller memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the efficacy of the proposed MCMC algorithm against existing marginal and conditional MCMC samplers.

1 Introduction

According to Ghahramani [9], models that have a nonparametric component give us more flexibility, which could lead to better predictive performance. This is because their capacity to learn does not saturate, hence their predictions should continue to improve as we get more and more data. Furthermore, we are able to fully account for our uncertainty about predictions thanks to the Bayesian paradigm. However, a major impediment to the widespread use of Bayesian nonparametric models is the problem of inference. Over the years, many MCMC methods have been proposed to perform inference, which usually rely on a tailored representation of the underlying process [5, 4, 18, 20, 28, 6]. This is an active research area since dealing with this infinite-dimensional component forbids the direct use of standard simulation-based methods for posterior inference. These methods usually require a finite-dimensional representation.
There are two main sampling approaches to facilitate simulation in the case of Bayesian nonparametric models: random truncation and marginalization. These two schemes are known in the literature as conditional and marginal samplers. In conditional samplers, the infinite-dimensional prior is replaced by a finite-dimensional representation chosen according to a truncation level. In marginal samplers, the need to represent the infinite-dimensional component can be bypassed by marginalizing it out. Marginal samplers have smaller storage requirements than conditional samplers but could potentially have worse mixing properties. However, not integrating out the infinite-dimensional component leads to a more comprehensive representation of the random probability measure, useful for computing expectations of interest with respect to the posterior. In this paper, we propose a novel MCMC sampler for Poisson-Kingman mixture models, a very large class of Bayesian nonparametric mixture models that encompasses all those previously explored in the literature. Our approach is based on a hybrid scheme that combines the main strengths of both conditional and marginal samplers. In the flavour of probabilistic programming, we view our contribution as a step towards wider usage of flexible Bayesian nonparametric models, as it allows automated inference in probabilistic programs built out of a wide variety of Bayesian nonparametric building blocks.

2 Poisson-Kingman processes

Poisson-Kingman random probability measures (RPMs) were introduced in Pitman [23] as a generalization of homogeneous normalized random measures (NRMs) [25, 13]. Let $\mathbb{X}$ be a complete and separable metric space endowed with the Borel $\sigma$-field $\mathcal{B}(\mathbb{X})$, and let $\mu \sim \mathrm{CRM}(\rho, H_0)$ be a homogeneous completely random measure (CRM) with Lévy measure $\rho$ and base distribution $H_0$; see Kingman [15] for a good overview of CRMs and references therein.
Then, the corresponding total mass of $\mu$ is $T = \mu(\mathbb{X})$; let it be finite, positive almost surely, and absolutely continuous with respect to Lebesgue measure. For any $t \in \mathbb{R}^+$, let us consider the conditional distribution of $\mu/t$ given that the total mass $T \in dt$. This distribution is denoted by $\mathrm{PK}(\rho, \delta_t, H_0)$; it is the distribution of a RPM, where $\delta_t$ denotes the usual Dirac delta function. Poisson-Kingman RPMs form a class of RPMs whose distributions are obtained by mixing $\mathrm{PK}(\rho, \delta_t, H_0)$, over $t$, with respect to some distribution $\gamma$ on the positive real line. Specifically, a Poisson-Kingman RPM has the following hierarchical representation:

$T \sim \gamma, \qquad P \mid T = t \sim \mathrm{PK}(\rho, \delta_t, H_0)$. (1)

The RPM $P$ is referred to as the Poisson-Kingman RPM with Lévy measure $\rho$, base distribution $H_0$ and mixing distribution $\gamma$. Throughout the paper we denote by $\mathrm{PK}(\rho, \gamma, H_0)$ the distribution of $P$ and, without loss of generality, we will assume that $\gamma(dt) \propto h(t) f_\rho(t)\,dt$, where $f_\rho$ is the density of the total mass $T$ under the CRM and $h$ is a non-negative function. Note that when $\gamma(dt) = f_\rho(t)\,dt$, the distribution $\mathrm{PK}(\rho, f_\rho, H_0)$ coincides with $\mathrm{NRM}(\rho, H_0)$. The resulting $P = \sum_{k \geq 1} p_k \delta_{\phi_k}$ is almost surely discrete and, since $\mu$ is homogeneous, the atoms $(\phi_k)_{k \geq 1}$ of $P$ are independent of their masses $(p_k)_{k \geq 1}$ and form a sequence of independent random variables identically distributed according to $H_0$. Finally, the masses of $P$ have a distribution governed by the Lévy measure $\rho$ and the distribution $\gamma$. One nice consequence of $P$ being almost surely discrete is that, if we obtain a sample $\{Y_i\}_{i=1}^n$ from it, there is a positive probability of $Y_i = Y_j$ for each pair of indexes $i \neq j$. Hence, it induces a random partition $\Pi$ on $\mathbb{N}$, where $i$ and $j$ are in the same block in $\Pi$ if and only if $Y_i = Y_j$. Kingman [16] showed that $\Pi$ is exchangeable; this property will be one of the main tools for the derivation of our hybrid sampler.
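The partition of $[n]$ induced by ties, with blocks ordered by their least element, can be computed directly from a sample; a minimal sketch (the function name is an illustrative choice, not from the paper):

```python
def induced_partition(draws):
    """Group indices 1..n into blocks of equal values; blocks are ordered
    by their least element, the ordering used for size-biased sampling."""
    blocks = {}
    for i, y in enumerate(draws, start=1):
        blocks.setdefault(y, []).append(i)
    return sorted(blocks.values(), key=lambda b: b[0])
```

For example, the draws (a, b, a, c, b) induce the partition {1, 3}, {2, 5}, {4}.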
2.1 Size-biased sampling Poisson-Kingman processes

A second object induced by a Poisson-Kingman RPM is a size-biased permutation of its atoms. Specifically, order the blocks in $\Pi$ by increasing order of the least element in each block, and for each $k \in \mathbb{N}$ let $Z_k$ be the least element of the $k$th block. $Z_k$ is the index among $(Y_i)_{i \geq 1}$ of the first appearance of the $k$th unique value in the sequence. Let $\tilde J_k = \mu(\{Y_{Z_k}\})$ be the mass of the corresponding atom in $\mu$. Then $(\tilde J_k)_{k \geq 1}$ is a size-biased permutation of the masses of the atoms in $\mu$, with larger masses tending to appear earlier in the sequence. It is easy to see that $\sum_{k \geq 1} \tilde J_k = T$, and that the sequence can be understood as a stick-breaking construction: starting with a stick of length $T_0 = T$, break off the first piece of length $\tilde J_1$; the surplus length of stick is $T_1 = T_0 - \tilde J_1$; then the second piece with length $\tilde J_2$ is broken off, etc. Theorem 2.1 of Perman et al. [21] states that the sequence of surplus masses $(T_k)_{k \geq 0}$ forms a Markov chain, and gives the corresponding initial distribution and transition kernels. The corresponding generative process for the sequence $(Y_i)_{i \geq 1}$ is as follows:

i) Start by drawing the total mass from its distribution $\mathbb{P}_{\rho,h,H_0}(T \in dt) \propto h(t) f_\rho(t)\,dt$.

ii) The first draw $Y_1$ from $P$ is a size-biased pick from the masses of $\mu$. The actual value of $Y_1$ is simply $Y^*_1 \sim H_0$, while the mass of the corresponding atom in $\mu$ is $\tilde J_1$, with conditional distribution

$\mathbb{P}_{\rho,h,H_0}(\tilde J_1 \in ds_1 \mid T \in dt) = \frac{s_1}{t}\, \rho(ds_1)\, \frac{f_\rho(t - s_1)}{f_\rho(t)}$,

with surplus mass $T_1 = T - \tilde J_1$.

iii) For subsequent draws $i \geq 2$:

- Let $K$ be the current number of distinct values among $Y_1, \ldots, Y_{i-1}$, and $Y^*_1, \ldots, Y^*_K$ the unique values, i.e., atoms in $\mu$. The masses of these first $K$ atoms are denoted by $\tilde J_1, \ldots, \tilde J_K$, and the surplus mass is $T_K = T - \sum_{k=1}^K \tilde J_k$.

- For each $k \leq K$, with probability $\tilde J_k / T$, we set $Y_i = Y^*_k$.

- With probability $T_K / T$, $Y_i$ takes on the value of an atom in $\mu$ besides the first $K$ atoms.
The actual value $Y^*_{K+1}$ is drawn from $H_0$, while its mass is drawn from

$\mathbb{P}_{\rho,h,H_0}(\tilde J_{K+1} \in ds_{K+1} \mid T_K \in dt_K) = \frac{s_{K+1}}{t_K}\, \rho(ds_{K+1})\, \frac{f_\rho(t_K - s_{K+1})}{f_\rho(t_K)}, \qquad T_{K+1} = T_K - \tilde J_{K+1}$.

By multiplying the above infinitesimal probabilities, one obtains the joint distribution of the random elements $T$, $\Pi$, $(\tilde J_i)_{i \geq 1}$ and $(Y^*_i)_{i \geq 1}$:

$\mathbb{P}_{\rho,h,H_0}\big(\Pi_n = (c_k)_{k \in [K]},\ Y^*_k \in dy^*_k,\ \tilde J_k \in ds_k \text{ for } k \in [K],\ T \in dt\big) = t^{-n} f_\rho\big(t - \sum_{k=1}^K s_k\big) h(t)\,dt \prod_{k=1}^K s_k^{|c_k|}\, \rho(ds_k)\, H_0(dy^*_k)$, (2)

where $(c_k)_{k \in [K]}$ denotes a particular partition of $[n]$ with $K$ blocks, $c_1, \ldots, c_K$, ordered by increasing least element, and $|c_k|$ is the cardinality of block $c_k$. The distribution (2) is invariant to the size-biased order. Such a joint distribution was first obtained in Pitman [23]; see also Pitman [24] for further details.

2.2 Relationship to the usual stick-breaking construction

The generative process above is reminiscent of the well-known stick-breaking construction of Ishwaran & James [12], where one breaks a stick of length one, but it is not the same. However, we can effectively reparameterize the model, starting with Equation (2), due to two useful identities in distribution: $P_j \stackrel{d}{=} \tilde J_j \big/ \big(T - \sum_{\ell < j} \tilde J_\ell\big)$ and $V_j \stackrel{d}{=} P_j \big/ \big(1 - \sum_{\ell < j} P_\ell\big)$ for $j = 1, \ldots, K$. Indeed, using this reparameterization, we obtain the corresponding joint distribution in terms of $K$ $(0,1)$-valued stick-breaking weights $\{V_j\}_{j=1}^K$, which correspond to a stick-breaking representation. Note that this joint distribution is for a general Lévy measure $\rho$ and density $f_\rho$, and it is conditioned on the value of the random variable $T$. We can recover the well-known stick-breaking representations for the Dirichlet and Pitman-Yor processes for a specific choice of $\rho$ and if we integrate out $T$; see the supplementary material for further details about the latter.
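For intuition, the size-biased generative process of Section 2.1 specializes, in the Dirichlet process case, to drawing each new size-biased weight as an independent Beta(1, α) fraction of the current surplus mass. The following is a minimal sketch under that assumption only (it is not the general sampler of this paper; `alpha`, `n`, and the normalization to total mass T = 1 are illustrative choices):

```python
import random

def size_biased_sample(n, alpha, rng=random):
    """Sequential draws from a DP-distributed RPM: reuse an occupied cluster
    with probability equal to its size-biased weight, or land in the surplus
    mass and lazily break off a new Beta(1, alpha) fraction of it."""
    weights = []          # size-biased weights of occupied clusters
    surplus = 1.0         # mass of the remaining, unrepresented sticks
    labels = []
    for _ in range(n):
        u = rng.uniform(0.0, 1.0)
        acc, k = 0.0, None
        for j, w in enumerate(weights):
            acc += w
            if u < acc:
                k = j
                break
        if k is None:                        # landed in the surplus: new cluster
            frac = rng.betavariate(1.0, alpha)
            new_w = frac * surplus
            surplus -= new_w
            weights.append(new_w)
            k = len(weights) - 1
        labels.append(k)
    return labels, weights, surplus
```

This mirrors the lazy representation used later in the paper: only the weights of occupied clusters plus one surplus mass are ever stored, and the represented weights plus the surplus always account for the full total mass.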
However, in general, these stick-breaking random variables form a sequence of dependent random variables with a complicated distribution, except for the two previously mentioned processes; see Pitman [22] for details.

2.3 Poisson-Kingman mixture model

We are mainly interested in using Poisson-Kingman RPMs as a building block for an infinite mixture model. Indeed, we can use Equation (1) as the top level of the following hierarchical specification:

$T \sim \gamma, \qquad P \mid T \sim \mathrm{PK}(\rho_\sigma, \delta_T, H_0), \qquad Y_i \mid P \stackrel{iid}{\sim} P, \qquad X_i \mid Y_i \stackrel{ind}{\sim} F(\cdot \mid Y_i)$ (3)

Figure 1: Varying table size Chinese restaurant representation for observations $\{X_i\}_{i=1}^{9}$.

where $F(\cdot \mid Y)$ is the likelihood term for each mixture component, and our dataset consists of $n$ observations $(x_i)_{i \in [n]}$ of the corresponding variables $(X_i)_{i \in [n]}$. We will assume that $F(\cdot \mid Y)$ is smooth. After specifying the model, we would like to carry out inference for clustering and/or density estimation tasks. With our novel approach, we can do so exactly and more efficiently than with known MCMC samplers. In the next section, we present our main contribution, and in the following one we show how it outperforms other samplers.

3 Hybrid Sampler

The joint distribution in Equation (2) is written in terms of the first $K$ size-biased weights. In order to obtain a complete representation of the RPM, we would need to size-biased sample from it a countably infinite number of times. Subsequently, some way of representing this object exactly in a computer with finite memory and storage would be needed. We introduce the following novel strategy: starting from Equation (2), we exploit the generative process of Section 2.1 when reassigning observations to clusters.
In addition, we reparameterize the model in terms of a surplus mass random variable V = T − ∑_{k=1}^K J̃_k, and end up with the following joint distribution:

P_{ρ,h,H0}(Π_n = (c_k)_{k∈[K]}, Y*_k ∈ dy*_k, J̃_k ∈ ds_k for k ∈ [K], T − ∑_{k=1}^K J̃_k ∈ dv, X_i ∈ dx_i for i ∈ [n])    (4)
  = (v + ∑_{k=1}^K s_k)^{−n} h(v + ∑_{k=1}^K s_k) f_ρ(v) ∏_{k=1}^K s_k^{|c_k|} ρ(ds_k) H0(dy*_k) ∏_{i∈c_k} F(dx_i | y*_k).

In this way, while retaining a complete representation of the infinite-dimensional part of the model, we only need to explicitly represent the size-biased weights associated to occupied clusters, plus a surplus mass term associated to the rest of the (empty) clusters, as Figure 1 shows. The cluster reassignment step can be seen as a lazy sampling scheme: we explicitly represent and update the weights associated to occupied clusters, and create a size-biased weight only when a new cluster appears. To make this possible we use the induced partition, and we call Equation (4) the varying table size Chinese restaurant representation, because the size-biased weights can be thought of as the sizes of the tables in our restaurant. In the next subsection, we compute the complete conditionals of each random variable of interest in order to implement an overall Gibbs sampling MCMC scheme.

3.1 Complete conditionals

Starting from Equation (4), we obtain the following complete conditionals for the Gibbs sampler:

P(V ∈ dv | Rest) ∝ (v + ∑_{k=1}^K s_k)^{−n} f_ρ(v) h(v + ∑_{k=1}^K s_k) dv    (5)
P(J̃_i ∈ ds_i | Rest) ∝ (v + s_i + ∑_{k≠i} s_k)^{−n} h(v + s_i + ∑_{k≠i} s_k) s_i^{|c_i|} ρ(ds_i) I_{(0, Surpmass_i)}(s_i),

where Surpmass_i = V + ∑_{j=1}^K J̃_j − ∑_{j<i} J̃_j, and

P(c_i = c | c_{−i}, Rest) ∝ s_c F(dx_i | {X_j}_{j∈c}, Y*_c)   if i is assigned to existing cluster c,
P(c_i = c | c_{−i}, Rest) ∝ (v/M) F(dx_i | Y*_c)             if i is assigned to a new cluster c.

According to the rule above, the i-th observation will be reassigned either to an existing cluster or to one of the M new clusters of the ReUse algorithm, as in Favaro & Teh [6].
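As a concrete illustration of the reassignment rule, the sketch below (our own toy code, with a Gaussian likelihood and made-up values for the weights and surplus mass; not the paper's implementation) normalizes the scores s_c·F(x_i | ·) for occupied clusters and (v/M)·F(x_i | ·) for each of M candidate new clusters:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def reassignment_probs(x, sizes, mus, v, new_mus, var=1.0):
    """Unnormalized scores: s_c * F(x | mu_c) for occupied clusters,
    (v / M) * F(x | mu) for each of the M candidate new clusters."""
    M = len(new_mus)
    scores = [s * normal_pdf(x, mu, var) for s, mu in zip(sizes, mus)]
    scores += [(v / M) * normal_pdf(x, mu, var) for mu in new_mus]
    z = sum(scores)
    return [w / z for w in scores]

# Two occupied clusters (size-biased weights 2.0 and 1.0), surplus mass 0.5,
# M = 2 candidate parameters drawn from the base distribution.
probs = reassignment_probs(x=0.1, sizes=[2.0, 1.0], mus=[0.0, 3.0],
                           v=0.5, new_mus=[-1.0, 1.0])
assert abs(sum(probs) - 1.0) < 1e-12
```

The lazy aspect is that only when one of the last M outcomes is drawn does a new size-biased weight get sampled.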
If it is assigned to a new cluster, then we need to sample a new size-biased weight from

P(J̃_{K+1} ∈ ds_{K+1} | Rest) ∝ f_ρ(v − s_{K+1}) ρ(s_{K+1}) s_{K+1} I_{(0,v)}(s_{K+1}) ds_{K+1}.    (6)

Every time a new cluster is created we need to obtain its corresponding size-biased weight, which can happen 1 ≤ R ≤ n times per iteration; hence, this step makes a significant contribution to the overall computational cost. For this reason, an independent and identically distributed (i.i.d.) draw from the complete conditional (6) is highly desirable, and in the next subsection we present a way to achieve this. Finally, to update the cluster parameters {Y*_k}_{k∈[K]} in the case where H0 is non-conjugate to the likelihood, we use an extension of Favaro & Teh [6]'s ReUse algorithm; see Algorithm 3 in the supplementary material for details.

The complete conditionals in Equation (5) do not have a standard form, but a generic MCMC method can be applied to sample from each of them within the Gibbs sampler. We use slice sampling, as in Neal [19], to update the size-biased weights and the surplus mass. However, there is a class of priors for which the total mass's density is intractable, so an additional step needs to be introduced to sample the surplus mass. In the next subsection we present two alternative ways to overcome this issue.

3.2 Examples of classes of Poisson-Kingman priors

a) σ-Stable Poisson-Kingman processes [23]. For any σ ∈ (0, 1), let

f_σ(t) = (1/π) ∑_{j=0}^∞ ((−1)^{j+1} / j!) sin(πσj) Γ(σj + 1) / t^{σj+1}

be the density function of a positive σ-Stable random variable, and let ρ(dx) = ρ_σ(dx) := (σ / Γ(1 − σ)) x^{−σ−1} dx. This class of RPMs is denoted by PK(ρ_σ, hT, H0), where h is a function that indexes each member of the class. For example, in the experimental section we picked three choices of the function h, indexing the following processes: Pitman-Yor, Normalized Stable, and Normalized Generalized Gamma.
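Slice sampling needs only an unnormalised (log-)density, which is all the complete conditionals in Equation (5) provide. A generic single-variable stepping-out/shrinkage sketch (our own simplified version of Neal [19], not the authors' code), demonstrated here on a standard normal target:

```python
import math, random

def slice_sample(x0, log_f, w=1.0, max_steps=50):
    """One slice-sampling update for an unnormalised log-density log_f,
    using stepping-out and shrinkage (Neal, 2003)."""
    log_y = log_f(x0) + math.log(random.random())   # auxiliary slice level
    # Stepping out: grow [left, right] until it brackets the slice.
    left = x0 - w * random.random()
    right = left + w
    for _ in range(max_steps):
        if log_f(left) < log_y:
            break
        left -= w
    for _ in range(max_steps):
        if log_f(right) < log_y:
            break
        right += w
    # Shrinkage: propose uniformly, shrink the bracket on rejection.
    while True:
        x1 = left + (right - left) * random.random()
        if log_f(x1) >= log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

random.seed(0)
log_f = lambda x: -0.5 * x * x        # unnormalised standard normal
xs, x = [], 0.0
for _ in range(2000):
    x = slice_sample(x, log_f)
    xs.append(x)
assert abs(sum(xs) / len(xs)) < 0.3   # sample mean near 0
```

In the sampler itself, `log_f` would be the log of the relevant unnormalised conditional restricted to its support.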
This class includes all Gibbs-type priors with parameter σ ∈ (0, 1), so other choices of h are possible; see Gnedin & Pitman [10] and De Blasi et al. [1] for a noteworthy account of this class of Bayesian nonparametric priors. In this case the total mass's density is intractable, and we propose two ways of dealing with this. Firstly, following Lomeli et al. [17], we use Kanter [14]'s integral representation for the σ-Stable density, introduce an auxiliary variable Z, and slice sample each variable:

P(V ∈ dv | Rest) ∝ (v + ∑_{i=1}^k s_i)^{−n} v^{−σ/(1−σ)} exp[−v^{−σ/(1−σ)} A(z)] h(v + ∑_{i=1}^k s_i) dv
P(Z ∈ dz | Rest) ∝ A(z) exp[−v^{−σ/(1−σ)} A(z)] dz;

see Algorithm 1 in the supplementary material for details. Alternatively, we can completely bypass the evaluation of the total mass's density by updating the surplus mass with a Metropolis-Hastings step, using an independent proposal drawn from a Stable or from an Exponentially Tilted Stable(λ) distribution. It is straightforward to obtain i.i.d. draws from these proposals; see Devroye [3], and Hofert [11] for an improved rejection sampling method in the exponentially tilted case. This leads to the acceptance ratio

P(V′ ∈ dv′ | Rest) f_σ(v) exp(−λv) / [P(V ∈ dv | Rest) f_σ(v′) exp(−λv′)]
  = (v′ + ∑_{i=1}^k s_i)^{−n} h(v′ + ∑_{i=1}^k s_i) dv′ exp(−λv) / [(v + ∑_{i=1}^k s_i)^{−n} h(v + ∑_{i=1}^k s_i) dv exp(−λv′)],

in which the intractable density f_σ cancels; see Algorithm 2 in the supplementary material for details. Finally, to sample a new size-biased weight we use

P(J̃_{k+1} ∈ ds_{k+1} | Rest) ∝ f_σ(v − s_{k+1}) s_{k+1}^{−σ} I_{(0,v)}(s_{k+1}) ds_{k+1}.

Fortunately, we can obtain an i.i.d. draw from the above thanks to an identity in distribution, given by Favaro et al. [8] for the usual stick-breaking weights, that holds for any prior in this class such that σ = u/v, where u < v are coprime integers. We then reparameterize back to obtain the new size-biased weight; see Algorithm 4 in the supplementary material for details.

b) −logBeta-Poisson-Kingman processes [25, 27].
Let

f_ρ(t) = (Γ(a + b) / (Γ(a)Γ(b))) exp(−at) (1 − exp(−t))^{b−1}

be the density of a positive random variable X =_d −log Y, where Y ∼ Beta(a, b), and let

ρ(x) = exp(−ax)(1 − exp(−bx)) / (x(1 − exp(−x))).

This class of RPMs generalises the Gamma process but has similar properties. Indeed, if we take b = 1 and the density function for T to be γ(t) = f_ρ(t), we recover the Lévy measure and total mass's density function of a Gamma process. Finally, to sample a new size-biased weight we use

P(J̃_{k+1} ∈ ds_{k+1} | Rest) ∝ (1 − exp(s_{k+1} − v))^{b−1} (1 − exp(−b s_{k+1})) / (1 − exp(−s_{k+1})) I_{(0,v)}(s_{k+1}) ds_{k+1}.

If b > 1, this complete conditional is a monotone decreasing unnormalised density with maximum at b. We can easily obtain an i.i.d. draw with a simple rejection sampler [2] in which the rejection constant is bv and the proposal is U(0, v). There is no other known sampler for this process.

3.3 Relationship to marginal and conditional MCMC samplers

Starting from Equation (2), another strategy would be to reparameterize the model in terms of the usual stick-breaking weights, choose a random truncation level, and represent finitely many sticks, as in Favaro & Walker [7]. Alternatively, we could integrate out the random probability measure and sample only the partition induced by it, as in Lomeli et al. [17]. Conditional samplers have large memory requirements, since the number of sticks needed can often be very large; furthermore, the conditional distributions of the stick lengths are quite involved, so these samplers tend to have slow running times. Marginal samplers have smaller storage requirements than conditional samplers, but can have worse mixing properties. For example, Lomeli et al. [17] had to introduce a number of auxiliary variables which worsen the mixing. Our novel hybrid sampler exploits the advantages of both marginal and conditional samplers.
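Returning to the −logBeta weight sampler of §3.2 b): for b > 1 the unnormalised density is bounded by b on (0, v), since the first factor is at most 1 and (1 − e^{−bs})/(1 − e^{−s}) ≤ b, which is what justifies the rejection constant bv with a U(0, v) proposal. An illustrative implementation of ours:

```python
import math, random

def logbeta_sizebiased_weight(v, b, rng=random.random):
    """Rejection sampler for the new size-biased weight of a -logBeta
    Poisson-Kingman process (b > 1): proposal U(0, v), envelope constant b*v."""
    def g(s):  # unnormalised target density on (0, v)
        ratio = b if s < 1e-12 else (1 - math.exp(-b * s)) / (1 - math.exp(-s))
        return (1 - math.exp(s - v)) ** (b - 1) * ratio
    while True:
        s = v * rng()               # draw from the U(0, v) proposal
        if rng() * b <= g(s):       # accept with probability g(s) / b
            return s

random.seed(1)
draws = [logbeta_sizebiased_weight(v=2.0, b=3.0) for _ in range(500)]
assert all(0.0 < s < 2.0 for s in draws)
```

Each accepted draw is an exact i.i.d. sample from the complete conditional, which is the property the section emphasises.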
It has smaller memory requirements, since it represents only the size-biased weights of the occupied clusters, as opposed to conditional samplers, which represent both empty and occupied clusters. Also, it does not integrate out the size-biased weights; thus, we obtain a more comprehensive representation of the RPM.

4 Performance assessment

We illustrate the performance of our hybrid sampler on a range of Bayesian nonparametric mixture models, obtained by different specifications of ρ and γ, as in Equation (3). At the top level of this hierarchical specification, different Bayesian nonparametric priors were chosen from both classes presented in the examples section. We chose the base distribution H0 and the likelihood term F for the k-th cluster to be

H0(dμ_k) = N(dμ_k | μ_0, σ_0²)  and  F(dx_1, …, dx_{n_k} | μ_k, τ_1) = ∏_{i=1}^{n_k} N(x_i | μ_k, σ_1²),

where {X_j}_{j=1}^{n_k} are the n_k observations assigned to the k-th cluster at some iteration. N denotes a Normal distribution with mean μ_k and variance σ_1², a parameter common to all clusters; the mean's prior distribution is Normal, centered at μ_0 and with variance σ_0². Although the base distribution is conjugate to the likelihood, we treated it as a non-conjugate case and sampled the parameters at each iteration rather than integrating them out. We used the dataset from Roeder [26] to test the algorithmic performance in terms of running time and effective sample size (ESS), as Table 1 shows. The dataset consists of measurements of velocities in km/sec of n = 82 galaxies from a survey of the Corona Borealis region. For the σ-Stable Poisson-Kingman class, we compared our hybrid sampler against our implementation of Favaro & Walker [7]'s conditional sampler and against the marginal sampler of Lomeli et al. [17]; we chose these existing approaches because they follow the same general-purpose paradigm.
Table 1: Running times in seconds and ESS, averaged over 10 chains of 30,000 iterations with 10,000 burn-in.

Pitman-Yor process (θ = 10)
  Algorithm           σ     Running time        ESS (± std)
  Hybrid              0.3   7135.1 (28.316)     2635.488 (187.335)
  Hybrid-MH (λ = 0)   0.3   5469.4 (186.066)    2015.625 (152.030)
  Conditional         0.3   NA                  NA
  Marginal            0.3   4685.7 (84.104)     2382.799 (169.359)
  Hybrid              0.5   3246.9 (24.894)     3595.508 (174.075)
  Hybrid-MH (λ = 50)  0.5   4902.3 (6.936)      3579.686 (135.726)
  Conditional         0.5   10141.6 (237.735)   905.444 (41.475)
  Marginal            0.5   4757.2 (37.077)     2944.065 (195.011)

Normalized Stable process
  Hybrid              0.3   5054.7 (70.675)     5324.146 (167.843)
  Hybrid-MH (λ = 0)   0.3   7866.4 (803.228)    5074.909 (100.300)
  Conditional         0.3   NA                  NA
  Marginal            0.3   7658.3 (193.773)    2630.264 (429.877)
  Hybrid              0.5   5382.9 (57.561)     4877.378 (469.794)
  Hybrid-MH (λ = 50)  0.5   4537.2 (37.292)     4454.999 (348.356)
  Conditional         0.5   10033.1 (22.647)    912.382 (167.089)
  Marginal            0.5   8203.1 (106.798)    3139.412 (351.788)

Normalized Generalized Gamma process (τ = 1)
  Hybrid              0.3   4157.8 (92.863)     5104.713 (200.949)
  Hybrid-MH (λ = 0)   0.3   4745.5 (187.506)    4848.560 (312.820)
  Conditional         0.3   NA                  NA
  Marginal            0.3   7685.8 (208.98)     3587.733 (569.984)
  Hybrid              0.5   6299.2 (102.853)    4646.987 (370.955)
  Hybrid-MH (λ = 50)  0.5   4686.4 (35.661)     4343.555 (173.113)
  Conditional         0.5   10046.9 (206.538)   1000.214 (70.148)
  Marginal            0.5   8055.6 (93.164)     4443.905 (367.297)

-logBeta (a = 1, b = 2)
  Hybrid              --    2520.6 (121.044)    3068.174 (540.111)
  Conditional         --    NA                  NA
  Marginal            --    NA                  NA

Table 1 shows that different choices of σ result in differences in the algorithms' running times and ESS. The reason is that in the σ = 0.5 case there are readily available random number generators which do not increase the computational cost, whereas in the σ = 0.3 case a rejection sampler is needed every time a new size-biased weight is sampled, which increases the computational cost; see Favaro et al. [8] for details.
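ESS figures such as those in Table 1 are of the standard autocorrelation-based kind. A minimal sketch of that estimator (our own simplified version, truncating at the first non-positive autocorrelation estimate; the paper does not specify its exact estimator):

```python
import random

def ess(chain):
    """Effective sample size: n / (1 + 2 * sum of autocorrelations),
    truncated at the first non-positive autocorrelation estimate."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    tau = 1.0
    for lag in range(1, n):
        acf = sum((chain[i] - mean) * (chain[i + lag] - mean)
                  for i in range(n - lag)) / (n * var)
        if acf <= 0.0:
            break
        tau += 2.0 * acf
    return n / tau

random.seed(0)
iid = [random.gauss(0, 1) for _ in range(2000)]
assert ess(iid) > 800        # i.i.d. draws: ESS close to n
```

A well-mixing sampler keeps `tau` close to 1, so ESS close to the chain length, which is the quantity the table compares across samplers.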
Even so, in most cases we outperform both marginal and conditional MCMC schemes in terms of running times, and in all cases in terms of ESS. In the Hybrid-MH case, even though the ESS and running times are competitive, we found that the acceptance rate is not optimal; we are currently exploring other choices of proposals. Finally, in Example b), our approach is the only one available, and it has good running times and ESS. This comparison confirms our previous statements about our novel approach.

5 Discussion

Our main contribution is the hybrid MCMC sampler, a general-purpose tool for inference with a very large class of infinite mixture models. We argue in favour of an approach in which a generic algorithm can be applied to a very large class of models, so that the modeller has a lot of flexibility in choosing specific models suitable for his/her problem of interest. Our method is a hybrid approach since it combines the perks of the conditional and marginal schemes; indeed, our experiments confirm that our hybrid sampler is more efficient, outperforming both marginal and conditional samplers in running times in most cases, and in ESS in all cases. We introduced a new, compact way of representing the infinite-dimensional component that makes inference feasible, and showed how to deal with the corresponding intractabilities. However, various challenges remain when dealing with these types of models. For instance, there are some values of σ for which we are unable to perform inference with our novel sampler. Secondly, when a Metropolis-Hastings step is used, there could be other ways to improve the mixing in terms of better proposals. Finally, all BNP MCMC methods can be affected by the dimensionality and size of the dataset when dealing with an infinite mixture model; indeed, all methods rely on the same way of dealing with the likelihood term.
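On the Metropolis-Hastings point above: the practical appeal of the §3.2 surplus-mass update is that the intractable f_σ cancels, so the acceptance ratio only needs h. A hypothetical sketch (function names and the choice h ≡ 1 are ours, purely for illustration):

```python
import math, random

def log_accept_ratio(v_prop, v_curr, s_sum, n, log_h, lam):
    """Log MH acceptance ratio for the surplus mass under an independent
    Exponentially Tilted Stable(lam) proposal; f_sigma has cancelled."""
    return (-n * math.log(v_prop + s_sum) + log_h(v_prop + s_sum) - lam * v_curr
            - (-n * math.log(v_curr + s_sum) + log_h(v_curr + s_sum) - lam * v_prop))

# With h = 1 and lam = 0 the ratio reduces to ((v' + S) / (v + S))^(-n):
lr = log_accept_ratio(v_prop=2.0, v_curr=1.0, s_sum=3.0, n=4,
                      log_h=lambda t: 0.0, lam=0.0)
assert abs(lr - (-4 * (math.log(5.0) - math.log(4.0)))) < 1e-12

accept = math.log(random.random()) < lr   # standard MH accept step
```

The tilt parameter λ only enters through the proposal correction terms, which is why tuning it changes the acceptance rate but not the target.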
When adding a new cluster, all methods sample its corresponding parameter from the prior distribution. In a high-dimensional scenario, it could be very difficult to sample parameter values close to the existing data points. We consider these points to be an interesting avenue for future research.

Acknowledgments

We thank Konstantina Palla for her insightful comments. María Lomelí is funded by the Gatsby Charitable Foundation, Stefano Favaro is supported by the European Research Council through StG N-BNP 306406, and Yee Whye Teh is supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC grant agreement no. 617071.

References

[1] De Blasi, P., Favaro, S., Lijoi, A., Mena, R. H., Prünster, I., & Ruggiero, M. 2015. Are Gibbs-type priors the most natural generalization of the Dirichlet process? IEEE Transactions on Pattern Analysis & Machine Intelligence, 37, 212–229.
[2] Devroye, L. 1986. Non-Uniform Random Variate Generation. Springer-Verlag.
[3] Devroye, L. 2009. Random variate generation for exponentially and polynomially tilted Stable distributions. ACM Transactions on Modelling and Computer Simulation, 19, 1–20.
[4] Escobar, M. D. 1994. Estimating normal means with a Dirichlet process prior. Journal of the American Statistical Association, 89, 268–277.
[5] Escobar, M. D., & West, M. 1995. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90, 577–588.
[6] Favaro, S., & Teh, Y. W. 2013. MCMC for normalized random measure mixture models. Statistical Science, 28(3), 335–359.
[7] Favaro, S., & Walker, S. G. 2012. Slice sampling σ-Stable Poisson-Kingman mixture models. Journal of Computational and Graphical Statistics, 22, 830–847.
[8] Favaro, S., Lomeli, M., Nipoti, B., & Teh, Y. W. 2014. On the stick-breaking representation of σ-Stable Poisson-Kingman models. Electronic Journal of Statistics, 8, 1063–1085.
[9] Ghahramani, Z. 2015.
Probabilistic machine learning and artificial intelligence. Nature, 521, 452–459.
[10] Gnedin, A., & Pitman, J. 2006. Exchangeable Gibbs partitions and Stirling triangles. Journal of Mathematical Sciences, 138, 5674–5684.
[11] Hofert, M. 2011. Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis, 55, 57–70.
[12] Ishwaran, H., & James, L. F. 2001. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453), 161–173.
[13] James, L. F. 2002. Poisson process partition calculus with applications to exchangeable models and Bayesian nonparametrics. ArXiv:math/0205093.
[14] Kanter, M. 1975. Stable densities under change of scale and total variation inequalities. Annals of Probability, 3, 697–707.
[15] Kingman, J. F. C. 1967. Completely random measures. Pacific Journal of Mathematics, 21, 59–78.
[16] Kingman, J. F. C. 1978. The representation of partition structures. Journal of the London Mathematical Society, 18, 374–380.
[17] Lomeli, M., Favaro, S., & Teh, Y. W. 2015. A marginal sampler for σ-Stable Poisson-Kingman mixture models. Journal of Computational and Graphical Statistics (to appear).
[18] Neal, R. M. 1998. Markov chain sampling methods for Dirichlet process mixture models. Tech. rept. 9815, Department of Statistics, University of Toronto.
[19] Neal, R. M. 2003. Slice sampling. Annals of Statistics, 31, 705–767.
[20] Papaspiliopoulos, O., & Roberts, G. O. 2008. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95, 169–186.
[21] Perman, M., Pitman, J., & Yor, M. 1992. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, 92, 21–39.
[22] Pitman, J. 1996. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, 28, 525–539.
[23] Pitman, J. 2003. Poisson-Kingman partitions. Pages 1–34 of: Goldstein, D. R.
(ed), Statistics and Science: a Festschrift for Terry Speed. Institute of Mathematical Statistics.
[24] Pitman, J. 2006. Combinatorial Stochastic Processes. Lecture Notes in Mathematics. Springer-Verlag, Berlin.
[25] Regazzini, E., Lijoi, A., & Prünster, I. 2003. Distributional results for means of normalized random measures with independent increments. Annals of Statistics, 31, 560–585.
[26] Roeder, K. 1990. Density estimation with confidence sets exemplified by super-clusters and voids in the galaxies. Journal of the American Statistical Association, 85, 617–624.
[27] von Renesse, M., Yor, M., & Zambotti, L. 2008. Quasi-invariance properties of a class of subordinators. Stochastic Processes and their Applications, 118, 2038–2057.
[28] Walker, S. G. 2007. Sampling the Dirichlet mixture model with slices. Communications in Statistics - Simulation and Computation, 36, 45.
Learning with Symmetric Label Noise: The Importance of Being Unhinged

Brendan van Rooyen∗,† Aditya Krishna Menon†,∗ Robert C. Williamson∗,†
∗The Australian National University †National ICT Australia
{ brendan.vanrooyen, aditya.menon, bob.williamson }@nicta.com.au

Abstract

Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2010] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2010] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong ℓ2 regularisation makes most standard learners SLN-robust. Experiments confirm that the unhinged loss's SLN-robustness is borne out in practice. So, with apologies to Wilde [1895], while the truth is rarely pure, it can be simple.

1 Learning with symmetric label noise

Binary classification is the canonical supervised learning problem. Given an instance space X and samples from some distribution D over X × {±1}, the goal is to learn a scorer s: X → R with low misclassification error on future samples drawn from D. Our interest is in the more realistic scenario where the learner observes samples from some corruption D̄ of D, in which each label has some constant probability of being flipped, and the goal is still to perform well with respect to D. This problem is known as learning from symmetric label noise (SLN learning) [Angluin and Laird, 1988].
Long and Servedio [2010] showed that there exist linearly separable D where, when the learner observes some corruption D̄ with symmetric label noise of any nonzero rate, minimisation of any convex potential over a linear function class results in classification performance on D that is equivalent to random guessing. Ostensibly, this establishes that convex losses are not "SLN-robust", and it motivates the use of non-convex losses [Stempfel and Ralaivola, 2009, Masnadi-Shirazi et al., 2010, Ding and Vishwanathan, 2010, Denchev et al., 2012, Manwani and Sastry, 2013]. In this paper, we propose a convex loss and prove that it is SLN-robust. The loss avoids the result of Long and Servedio [2010] by virtue of being negatively unbounded. The loss is a modification of the hinge loss where one does not clamp at zero; thus, we call it the unhinged loss. This loss has several appealing properties, such as being the unique convex loss satisfying a notion of "strong" SLN-robustness (Proposition 5), being classification-calibrated (Proposition 6), being consistent when minimised on D̄ (Proposition 7), and having a simple optimal solution that is the difference of two kernel means (Equation 8). Finally, we show that this optimal solution is equivalent to that of a strongly regularised SVM (Proposition 8), and of any twice-differentiable convex potential (Proposition 9), implying that strong ℓ2 regularisation endows most standard learners with SLN-robustness.

The classifier resulting from minimising the unhinged loss is not new [Devroye et al., 1996, Chapter 10], [Schölkopf and Smola, 2002, Section 1.2], [Shawe-Taylor and Cristianini, 2004, Section 5.1]. However, establishing this classifier's (strong) SLN-robustness, the uniqueness thereof, and its equivalence to a highly regularised SVM solution is, to our knowledge, novel.

2 Background and problem setup

Fix an instance space X. We denote by D a distribution over X × {±1}, with random variables (X, Y) ∼ D.
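The two parameterisations of D are related by M = π·P + (1 − π)·Q and η(x) = π·P(x)/M(x). A quick numeric check on a toy discrete instance space (the values are ours, for illustration):

```python
# Class-conditionals P, Q over a 3-point instance space, and base rate pi.
P = [0.6, 0.3, 0.1]            # P(X = x | Y = +1)
Q = [0.1, 0.3, 0.6]            # P(X = x | Y = -1)
pi = 0.4                       # P(Y = +1)

# Marginal and class-probability function of D_{M, eta}.
M = [pi * p + (1 - pi) * q for p, q in zip(P, Q)]
eta = [pi * p / m for p, m in zip(P, M)]

assert abs(sum(M) - 1.0) < 1e-12
assert all(0.0 <= e <= 1.0 for e in eta)

# Recover the (P, Q, pi) view back from (M, eta) via Bayes' rule:
P_back = [e * m / pi for e, m in zip(eta, M)]
assert all(abs(a - b) < 1e-12 for a, b in zip(P_back, P))
```

Either view determines the other, which is why the paper can switch between D_{P,Q,π} and D_{M,η} freely.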
Any D may be expressed via the class-conditionals (P, Q) = (P(X | Y = 1), P(X | Y = −1)) and base rate π = P(Y = 1), or via the marginal M = P(X) and class-probability function η: x ↦ P(Y = 1 | X = x). We interchangeably write D as D_{P,Q,π} or D_{M,η}.

2.1 Classifiers, scorers, and risks

A scorer is any function s: X → R. A loss is any function ℓ: {±1} × R → R. We use ℓ_{−1}, ℓ_1 to refer to ℓ(−1, ·) and ℓ(1, ·). The ℓ-conditional risk L_ℓ: [0, 1] × R → R is defined as

L_ℓ: (η, v) ↦ η · ℓ_1(v) + (1 − η) · ℓ_{−1}(v).

Given a distribution D, the ℓ-risk of a scorer s is defined as

L^D_ℓ(s) := E_{(X,Y)∼D} [ℓ(Y, s(X))],    (1)

so that L^D_ℓ(s) = E_{X∼M} [L_ℓ(η(X), s(X))]. For a set S, L^D_ℓ(S) is the set of ℓ-risks for all scorers in S. A function class is any F ⊆ R^X. Given some F, the set of restricted Bayes-optimal scorers for a loss ℓ comprises those scorers in F that minimise the ℓ-risk:

S^{D,F,*}_ℓ := Argmin_{s∈F} L^D_ℓ(s).

The set of (unrestricted) Bayes-optimal scorers is S^{D,*}_ℓ = S^{D,F,*}_ℓ for F = R^X. The restricted ℓ-regret of a scorer is its excess risk over that of any restricted Bayes-optimal scorer:

regret^{D,F}_ℓ(s) := L^D_ℓ(s) − inf_{t∈F} L^D_ℓ(t).

Binary classification is concerned with the zero-one loss, ℓ_{01}: (y, v) ↦ ⟦yv < 0⟧ + (1/2)·⟦v = 0⟧. A loss ℓ is classification-calibrated if all its Bayes-optimal scorers are also optimal for zero-one loss: (∀D) S^{D,*}_ℓ ⊆ S^{D,*}_{01}. A convex potential is any loss ℓ: (y, v) ↦ φ(yv), where φ: R → R_+ is convex, non-increasing, differentiable with φ′(0) < 0, and φ(+∞) = 0 [Long and Servedio, 2010, Definition 1]. All convex potentials are classification-calibrated [Bartlett et al., 2006, Theorem 2.1].

2.2 Learning with symmetric label noise (SLN learning)

The problem of learning with symmetric label noise (SLN learning) is the following [Angluin and Laird, 1988, Kearns, 1998, Blum and Mitchell, 1998, Natarajan et al., 2013].
For some notional "clean" distribution D, which we would like to observe, we instead observe samples from some corrupted distribution SLN(D, ρ) for some ρ ∈ [0, 1/2). The distribution SLN(D, ρ) is such that the marginal distribution of instances is unchanged, but each label is independently flipped with probability ρ. The goal is to learn a scorer from these corrupted samples such that L^D_{01}(s) is small. For any quantity in D, we denote its corrupted counterpart in SLN(D, ρ) with a bar, e.g. M̄ for the corrupted marginal distribution and η̄ for the corrupted class-probability function; additionally, when ρ is clear from context, we will occasionally refer to SLN(D, ρ) by D̄. It is easy to check that the corrupted marginal distribution M̄ = M, and [Natarajan et al., 2013, Lemma 7]

(∀x ∈ X) η̄(x) = (1 − 2ρ) · η(x) + ρ.    (2)

3 SLN-robustness: formalisation

We consider learners (ℓ, F) for a loss ℓ and a function class F, with learning being the search for some s ∈ F that minimises the ℓ-risk. Informally, (ℓ, F) is "robust" to symmetric label noise (SLN-robust) if minimising ℓ over F gives the same classifier on both the clean distribution D, which the learner would like to observe, and SLN(D, ρ) for any ρ ∈ [0, 1/2), which the learner actually observes. We now formalise this notion, and review what is known about SLN-robust learners.

3.1 SLN-robust learners: a formal definition

For some fixed instance space X, let ∆ denote the set of distributions on X × {±1}. Given a notional "clean" distribution D, N_sln: ∆ → 2^∆ returns the set of possible corrupted versions of D the learner may observe, where labels are flipped with unknown probability ρ:

N_sln: D ↦ { SLN(D, ρ) | ρ ∈ [0, 1/2) }.

Equipped with this, we define our notion of SLN-robustness.

Definition 1 (SLN-robustness). We say that a learner (ℓ, F) is SLN-robust if

(∀D ∈ ∆) (∀D̄ ∈ N_sln(D)) L^D_{01}(S^{D̄,F,*}_ℓ) = L^D_{01}(S^{D,F,*}_ℓ).
(3)

That is, SLN-robustness requires that, for any level of label noise in the observed distribution D̄, the classification performance (with respect to D) of the learner is the same as if the learner directly observed D. Unfortunately, a widely adopted class of learners is not SLN-robust, as we will now see.

3.2 Convex potentials with linear function classes are not SLN-robust

Fix X = R^d, and consider learners with a convex potential ℓ and the function class of linear scorers F_lin = {x ↦ ⟨w, x⟩ | w ∈ R^d}. This captures e.g. the linear SVM and logistic regression, which are widely studied in theory and applied in practice. Disappointingly, these learners are not SLN-robust: Long and Servedio [2010, Theorem 2] give an example where, when learning under symmetric label noise, for any convex potential ℓ, the corrupted ℓ-risk minimiser over F_lin has classification performance equivalent to random guessing on D. This implies that (ℓ, F_lin) is not SLN-robust¹ as per Definition 1.

Proposition 1 (Long and Servedio [2010, Theorem 2]). Let X = R^d for any d ≥ 2. Pick any convex potential ℓ. Then, (ℓ, F_lin) is not SLN-robust.

3.3 The fallout: what learners are SLN-robust?

In light of Proposition 1, there are two ways to proceed in order to obtain SLN-robust learners: either we change the class of losses ℓ, or we change the function class F. The first approach has been pursued in a large body of work that embraces non-convex losses [Stempfel and Ralaivola, 2009, Masnadi-Shirazi et al., 2010, Ding and Vishwanathan, 2010, Denchev et al., 2012, Manwani and Sastry, 2013]. While such losses avoid the conditions of Proposition 1, this does not automatically imply that they are SLN-robust when used with F_lin; in Appendix B, we present evidence that some of these losses are in fact not SLN-robust when used with F_lin. The second approach is to consider a suitably rich F that contains the Bayes-optimal scorer for D, e.g. by employing a universal kernel.
With this choice, one can still use a convex potential loss and, in fact, owing to Equation 2, any classification-calibrated loss.

Proposition 2. Pick any classification-calibrated ℓ. Then, (ℓ, R^X) is SLN-robust.

Both approaches have drawbacks. The first approach carries a computational penalty, as it requires optimising a non-convex loss. The second approach carries a statistical penalty, as achieving good estimation rates with a rich F will require a larger sample size. Thus, it appears that SLN-robustness involves a computational-statistical tradeoff. However, there is a variant of the first option: pick a loss that is convex, but not a convex potential. Such a loss would afford the computational and statistical advantages of minimising convex risks with linear scorers. Manwani and Sastry [2013] demonstrated that the square loss, ℓ(y, v) = (1 − yv)², is one such loss. We will show that there is a simpler loss that is convex and SLN-robust, but is not in the class of convex potentials by virtue of being negatively unbounded. To derive this loss, we first re-interpret robustness via a noise-correction procedure.

¹Even if we were content with a difference of ϵ ∈ [0, 1/2] between the clean and corrupted minimisers' performance, Long and Servedio [2010, Theorem 2] implies that in the worst case ϵ = 1/2.

4 A noise-corrected loss perspective on SLN-robustness

We now re-express SLN-robustness so as to reason about optimal scorers on the same distribution, but with two different losses. This will help characterise a set of "strongly SLN-robust" losses.

4.1 Reformulating SLN-robustness via noise-corrected losses

Given any ρ ∈ [0, 1/2), Natarajan et al. [2013, Lemma 1] showed how to associate with a loss ℓ a noise-corrected counterpart ℓ̄ such that L^D̄_ℓ̄(s) = L^D_ℓ(s). The loss ℓ̄ is defined as follows.

Definition 2 (Noise-corrected loss). Given any loss ℓ and ρ ∈ [0, 1/2), the noise-corrected loss ℓ̄ is

(∀y ∈ {±1}) (∀v ∈ R) ℓ̄(y, v) = [(1 − ρ) · ℓ(y, v) − ρ · ℓ(−y, v)] / (1 − 2ρ).
(4)

Since ℓ̄ depends on the unknown parameter ρ, it is not directly usable to design an SLN-robust learner. Nonetheless, it is a useful theoretical device, since, by construction, for any F, S^{D̄,F,*}_ℓ̄ = S^{D,F,*}_ℓ. This means that a sufficient condition for (ℓ, F) to be SLN-robust is for S^{D̄,F,*}_ℓ = S^{D̄,F,*}_ℓ̄. Ghosh et al. [2015, Theorem 1] proved a sufficient condition on ℓ such that this holds, namely,

(∃C ∈ R)(∀v ∈ R) ℓ_1(v) + ℓ_{−1}(v) = C.    (5)

Interestingly, Equation 5 is necessary for a stronger notion of robustness, which we now explore.

4.2 Characterising a stronger notion of SLN-robustness

As the first step towards a stronger notion of robustness, we rewrite (with a slight abuse of notation)

L^D_ℓ(s) = E_{(X,Y)∼D} [ℓ(Y, s(X))] = E_{(Y,S)∼R(D,s)} [ℓ(Y, S)] := L_ℓ(R(D, s)),

where R(D, s) is a distribution over labels and scores. Standard SLN-robustness requires that label noise does not change the ℓ-risk minimisers, i.e. that if s is such that L_ℓ(R(D̄, s)) ≤ L_ℓ(R(D̄, s′)) for all s′, the same relation holds with D in place of D̄. Strong SLN-robustness strengthens this notion by requiring that label noise does not affect the ordering of all pairs of joint distributions over labels and scores. (This of course trivially implies SLN-robustness.) As with the definition of D̄, given a distribution R over labels and scores, let R̄ be the corresponding distribution where labels are flipped with probability ρ. Strong SLN-robustness can then be made precise as follows.

Definition 3 (Strong SLN-robustness). Call a loss ℓ strongly SLN-robust if, for every ρ ∈ [0, 1/2),

(∀R, R′) L_ℓ(R) ≤ L_ℓ(R′) ⟺ L_ℓ(R̄) ≤ L_ℓ(R̄′).

We now re-express strong SLN-robustness using a notion of order equivalence of loss pairs, which simply requires that two losses order all distributions over labels and scores identically.

Definition 4 (Order equivalent loss pairs). Call a pair of losses (ℓ, ℓ̃) order equivalent if

(∀R, R′) L_ℓ(R) ≤ L_ℓ(R′) ⟺ L_ℓ̃(R) ≤ L_ℓ̃(R′).
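The key property of Definition 2, L^D̄_ℓ̄(s) = L^D_ℓ(s), in fact holds pointwise: averaging ℓ̄ over a label flipped with probability ρ recovers the clean loss on the unflipped label. A quick check, using the hinge loss as an arbitrary example of ours:

```python
def hinge(y, v):
    return max(0.0, 1.0 - y * v)

def noise_corrected(loss, rho):
    """Definition 2: l_bar(y, v) = ((1-rho)*l(y, v) - rho*l(-y, v)) / (1-2*rho)."""
    return lambda y, v: ((1 - rho) * loss(y, v) - rho * loss(-y, v)) / (1 - 2 * rho)

rho = 0.2
hinge_bar = noise_corrected(hinge, rho)
for y in (+1, -1):
    for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
        # Expectation of l_bar over the flipped label equals the clean loss.
        corrupted = (1 - rho) * hinge_bar(y, v) + rho * hinge_bar(-y, v)
        assert abs(corrupted - hinge(y, v)) < 1e-12
```

Taking expectations over (X, Y) ∼ D then gives the risk identity stated in the text.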
Clearly, order equivalence of (ℓ, ℓ̄) implies S^{D̄,F,*}_ℓ = S^{D̄,F,*}_ℓ̄, which in turn implies SLN-robustness. It is thus not surprising that we can relate order equivalence to strong SLN-robustness of ℓ.

Proposition 3. A loss ℓ is strongly SLN-robust iff, for every ρ ∈ [0, 1/2), (ℓ, ℓ̄) are order equivalent.

This connection now lets us exploit a classical result in decision theory about order equivalent losses being affine transformations of each other. Combined with the definition of ℓ̄, this lets us conclude that the sufficient condition of Equation 5 is also necessary for strong SLN-robustness of ℓ.

Proposition 4. A loss ℓ is strongly SLN-robust if and only if it satisfies Equation 5.

We now return to our original goal, which was to find a convex ℓ that is SLN-robust for F_lin (and ideally more general function classes). The above suggests that, to do so, it is reasonable to consider those losses that satisfy Equation 5. Unfortunately, it is evident that if ℓ is convex, non-constant, and bounded below by zero, then it cannot possibly be admissible in this sense. But we now show that removing the boundedness restriction allows for the existence of a convex admissible loss.

5 The unhinged loss: a convex, strongly SLN-robust loss

Consider the following simple, but non-standard, convex loss:

ℓ^unh_1(v) = 1 − v  and  ℓ^unh_{−1}(v) = 1 + v.

Compared to the hinge loss, this loss does not clamp at zero, i.e. it does not have a hinge. (Thus, peculiarly, it is negatively unbounded, an issue we discuss in §5.3.) We therefore call it the unhinged loss². The loss has a number of attractive properties, the most immediate being its SLN-robustness.

5.1 The unhinged loss is strongly SLN-robust

Since ℓ^unh_1(v) + ℓ^unh_{−1}(v) = 2, Proposition 4 implies that ℓ^unh is strongly SLN-robust, and thus that (ℓ^unh, F) is SLN-robust for any F. Further, the following uniqueness property is not hard to show.

Proposition 5. Pick any convex loss ℓ.
Then,

$$(\exists C \in \mathbb{R}) \ \ell_1(v) + \ell_{-1}(v) = C \iff (\exists A, B, D \in \mathbb{R}) \ \ell_1(v) = -A \cdot v + B, \ \ell_{-1}(v) = A \cdot v + D.$$

That is, up to scaling and translation, ℓunh is the only convex loss that is strongly SLN-robust. Returning to the case of linear scorers, the above implies that (ℓunh, Flin) is SLN-robust. This does not contradict Proposition 1, since ℓunh is not a convex potential, as it is negatively unbounded. Intuitively, this property allows the loss to offset the penalty incurred by instances that are misclassified with high margin by awarding a "gain" for instances that are correctly classified with high margin.

5.2 The unhinged loss is classification calibrated

SLN-robustness is by itself insufficient for a learner to be useful. For example, a loss that is uniformly zero is strongly SLN-robust, but is useless as it is not classification-calibrated. Fortunately, the unhinged loss is classification-calibrated, as we now establish. For technical reasons (see §5.3), we operate with $F_B = [-B, +B]^X$, the set of scorers with range bounded by B ∈ [0, ∞).

Proposition 6. Fix ℓ = ℓunh. For any $D_{M,\eta}$ and B ∈ [0, ∞), $S^{D,F_B,*}_{\ell} = \{x \mapsto B \cdot \mathrm{sign}(2\eta(x) - 1)\}$.

Thus, for every B ∈ [0, ∞), the restricted Bayes-optimal scorer over $F_B$ has the same sign as the Bayes-optimal classifier for 0-1 loss. In the limiting case where $F = \mathbb{R}^X$, the optimal scorer is attainable if we operate over the extended reals R ∪ {±∞}, so that ℓunh is classification-calibrated.

5.3 Enforcing boundedness of the loss

While the classification-calibration of ℓunh is encouraging, Proposition 6 implies that its (unrestricted) Bayes-risk is −∞. Thus, the regret of every non-optimal scorer s is identically +∞, which hampers analysis of consistency. In orthodox decision theory, analogous theoretical issues arise when attempting to establish basic theorems with unbounded losses [Ferguson, 1967, pg. 78]. We can side-step this issue by restricting attention to bounded scorers, so that ℓunh is effectively bounded.
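A quick numerical sketch (our own illustration) of the calculation behind Proposition 6: the conditional unhinged risk at a point with η = P(Y = 1 | X = x) is η(1 − v) + (1 − η)(1 + v) = 1 + v(1 − 2η), which is linear in the score v, so its minimiser over the bounded range [−B, B] saturates at v* = B·sign(2η − 1):

```python
import numpy as np

# Sketch: the conditional unhinged risk is linear in the score v, so its
# minimiser over [-B, B] sits at the boundary, with the Bayes-optimal sign.

def cond_risk(eta, v):
    return eta * (1.0 - v) + (1.0 - eta) * (1.0 + v)   # = 1 + v*(1 - 2*eta)

B = 2.0
vs = np.linspace(-B, B, 4001)
for eta in [0.1, 0.4, 0.6, 0.9]:
    v_star = vs[np.argmin(cond_risk(eta, vs))]
    assert v_star == B * np.sign(2 * eta - 1)
print("restricted minimiser saturates at +/-B with the Bayes-optimal sign")
```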
By Proposition 6, this does not affect the classification-calibration of the loss. In the context of linear scorers, boundedness of scorers can be achieved by regularisation: instead of working with Flin, one can instead use $F_{lin,\lambda} = \{x \mapsto \langle w, x\rangle \mid \|w\|_2 \le 1/\sqrt{\lambda}\}$, where λ > 0, so that $F_{lin,\lambda} \subseteq F_{R/\sqrt{\lambda}}$ for $R = \sup_{x \in X} \|x\|_2$. Observe that as (ℓunh, F) is SLN-robust for any F, (ℓunh, Flin,λ) is SLN-robust for any λ > 0. As we shall see in §6.3, working with Flin,λ also lets us establish SLN-robustness of the hinge loss when λ is large.

5.4 Unhinged loss minimisation on corrupted distribution is consistent

Using bounded scorers makes it possible to establish a surrogate regret bound for the unhinged loss. This shows classification consistency of unhinged loss minimisation on the corrupted distribution.

Proposition 7. Fix ℓ = ℓunh. Then, for any D, ρ ∈ [0, 1/2), B ∈ [1, ∞), and scorer s ∈ $F_B$,

$$\mathrm{regret}^D_{01}(s) \le \mathrm{regret}^{D,F_B}_{\ell}(s) = \frac{1}{1-2\rho} \cdot \mathrm{regret}^{\bar{D},F_B}_{\ell}(s).$$

Standard rates of convergence via generalisation bounds are also trivial to derive; see the Appendix.

6 Learning with the unhinged loss and kernels

We now show that the optimal solution for the unhinged loss when employing regularisation and kernelised scorers has a simple form. This sheds further light on SLN-robustness and regularisation.

6.1 The centroid classifier optimises the unhinged loss

Consider minimising the unhinged risk over the class of kernelised scorers $F_{H,\lambda} = \{s \colon x \mapsto \langle w, \Phi(x)\rangle_H \mid \|w\|_H \le 1/\sqrt{\lambda}\}$ for some λ > 0, where Φ: X → H is a feature mapping into a reproducing kernel Hilbert space H with kernel k. Equivalently, given a distribution³ D, we want

$$w^*_{unh,\lambda} = \operatorname*{argmin}_{w \in H} \; \mathbb{E}_{(X,Y)\sim D}\left[1 - Y \cdot \langle w, \Phi(X)\rangle\right] + \frac{\lambda}{2}\langle w, w\rangle_H.$$

² This loss has been considered in Sriperumbudur et al. [2009], Reid and Williamson [2011] in the context of maximum mean discrepancy; see the Appendix. The analysis of its SLN-robustness is to our knowledge novel.
(6)

The first-order optimality condition implies that

$$w^*_{unh,\lambda} = \frac{1}{\lambda} \cdot \mathbb{E}_{(X,Y)\sim D}\left[Y \cdot \Phi(X)\right], \tag{7}$$

which is the kernel mean map of D [Smola et al., 2007], and thus the optimal unhinged scorer is

$$s^*_{unh,\lambda} \colon x \mapsto \frac{1}{\lambda} \cdot \mathbb{E}_{(X,Y)\sim D}\left[Y \cdot k(X, x)\right] = \frac{1}{\lambda} \cdot \Big( \pi \cdot \mathbb{E}_{X\sim P}\left[k(X, x)\right] - (1-\pi) \cdot \mathbb{E}_{X\sim Q}\left[k(X, x)\right] \Big). \tag{8}$$

From Equation 8, the unhinged solution is equivalent to a nearest centroid classifier [Manning et al., 2008, pg. 181] [Tibshirani et al., 2002] [Shawe-Taylor and Cristianini, 2004, Section 5.1]. Equation 8 gives a simple way to understand the SLN-robustness of (ℓunh, FH,λ), as the optimal scorers on the clean and corrupted distributions only differ by a scaling (see the Appendix):

$$(\forall x \in X) \quad \mathbb{E}_{(X,Y)\sim D}\left[Y \cdot k(X, x)\right] = \frac{1}{1-2\rho} \cdot \mathbb{E}_{(X,Y)\sim \bar{D}}\left[Y \cdot k(X, x)\right]. \tag{9}$$

Interestingly, Servedio [1999, Theorem 4] established that a nearest centroid classifier (which they termed "AVERAGE") is robust to a general class of label noise, but required the assumption that M is uniform over the unit sphere. Our result establishes that SLN-robustness of the classifier holds without any assumptions on M. In fact, Ghosh et al. [2015, Theorem 1] lets one quantify the unhinged loss' performance under a more general noise model; see the Appendix for discussion.

6.2 Practical considerations

We note several points relating to practical usage of the unhinged loss with kernelised scorers. First, cross-validation is not required to select λ, since changing λ only changes the magnitude of scores, not their sign. Thus, for the purposes of classification, one can simply use λ = 1. Second, we can easily extend the scorers to use a bias regularised with strength 0 < λ_b ≠ λ. Tuning λ_b is equivalent to computing w∗unh,λ as per Equation 8, and tuning a threshold on a holdout set. Third, when H = R^d for d small, we can store w∗unh,λ explicitly, and use this to make predictions.
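The scaling relation in Equation 9 is easy to verify in simulation. The sketch below (our own, with a plain linear feature map standing in for Φ) estimates E[Y·X] on clean and symmetrically corrupted samples and checks that they differ only by the factor 1 − 2ρ, so the induced centroid classifier sign⟨w, x⟩ is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: with linear features, w* in Equation 7 is proportional to
# E[Y * X]. Flipping each label with probability rho only rescales this
# by (1 - 2*rho), as in Equation 9.

n, d, rho = 200_000, 5, 0.3
X = rng.normal(size=(n, d))
Y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))   # synthetic clean labels

w_clean = (Y[:, None] * X).mean(axis=0)            # estimate of E[Y * X]
flip = rng.random(n) < rho                          # symmetric label noise
Y_noisy = np.where(flip, -Y, Y)
w_noisy = (Y_noisy[:, None] * X).mean(axis=0)

# Up to sampling error, w_noisy = (1 - 2*rho) * w_clean: same direction,
# hence the same classifier sign(<w, x>).
print(np.allclose(w_noisy, (1 - 2 * rho) * w_clean, atol=0.02))
```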
For high (or infinite) dimensional H, we can either make predictions directly via Equation 8, or use random Fourier features [Rahimi and Recht, 2007] to (approximately) embed H into some low-dimensional R^d, and then store w∗unh,λ as usual. (The latter requires a translation-invariant kernel.) We now show that under some assumptions, w∗unh,λ coincides with the solution of two established methods; the Appendix discusses some further relationships, e.g. to the maximum mean discrepancy.

³ Given a training sample S ∼ D^n, we can use plugin estimates as appropriate.

6.3 Equivalence to a highly regularised SVM and other convex potentials

There is an interesting equivalence between the unhinged solution and that of a highly regularised SVM. This has been noted in e.g. Hastie et al. [2004, Section 6], which showed how SVMs approach a nearest centroid classifier, which is of course the optimal unhinged solution.

Proposition 8. Pick any D and Φ: X → H with $R = \sup_{x \in X} \|\Phi(x)\|_H < \infty$. For any λ > 0, let

$$w^*_{hinge,\lambda} = \operatorname*{argmin}_{w \in H} \; \mathbb{E}_{(X,Y)\sim D}\left[\max(0, 1 - Y \cdot \langle w, \Phi(X)\rangle_H)\right] + \frac{\lambda}{2}\langle w, w\rangle_H$$

be the soft-margin SVM solution. Then, if λ ≥ R², $w^*_{hinge,\lambda} = w^*_{unh,\lambda}$.

Since (ℓunh, FH,λ) is SLN-robust, it follows that for ℓhinge: (y, v) ↦ max(0, 1 − yv), (ℓhinge, FH,λ) is similarly SLN-robust provided λ is sufficiently large. That is, strong ℓ2 regularisation (and a bounded feature map) endows the hinge loss with SLN-robustness⁴. Proposition 8 can be generalised to show that w∗unh,λ is the limiting solution of any twice differentiable convex potential. This shows that strong ℓ2 regularisation endows most learners with SLN-robustness. Intuitively, with strong regularisation, one only considers the behaviour of a loss near zero; since a convex potential φ has φ′(0) < 0, it will behave similarly to its linear approximation around zero, viz. the unhinged loss.

Proposition 9. Pick any D, bounded feature mapping Φ: X → H, and twice differentiable convex potential φ with φ′′([−1, 1]) bounded.
Let w∗φ,λ be the minimiser of the regularised φ-risk. Then,

$$\lim_{\lambda \to \infty} \left\| \frac{w^*_{\varphi,\lambda}}{\|w^*_{\varphi,\lambda}\|_H} - \frac{w^*_{unh,\lambda}}{\|w^*_{unh,\lambda}\|_H} \right\|^2_H = 0.$$

6.4 Equivalence to Fisher Linear Discriminant with whitened data

For binary classification on $D_{M,\eta}$, the Fisher Linear Discriminant (FLD) finds a weight vector proportional to the minimiser of square loss ℓsq: (y, v) ↦ (1 − yv)² [Bishop, 2006, Section 4.1.5],

$$w^*_{sq,\lambda} = \left(\mathbb{E}_{X\sim M}[XX^T] + \lambda I\right)^{-1} \cdot \mathbb{E}_{(X,Y)\sim D}[Y \cdot X]. \tag{10}$$

By Equation 9, and the fact that the corrupted marginal $\bar{M} = M$, w∗sq,λ is only changed by a scaling factor under label noise. This provides an alternate proof of the fact that (ℓsq, Flin) is SLN-robust⁵ [Manwani and Sastry, 2013, Theorem 2]. Clearly, the unhinged loss solution w∗unh,λ is equivalent to the FLD and square loss solution w∗sq,λ when the input data is whitened, i.e. $\mathbb{E}_{X\sim M}[XX^T] = I$. With a well-specified F, e.g. with a universal kernel, both the unhinged and square loss asymptotically recover the optimal classifier, but the unhinged loss does not require a matrix inversion. With a misspecified F, one cannot in general argue for the superiority of the unhinged loss over square loss, or vice-versa, as there is no universally good surrogate to the 0-1 loss [Reid and Williamson, 2010, Appendix A]; the Appendix illustrates examples where both losses may underperform.

7 SLN-robustness of unhinged loss: empirical illustration

We now illustrate that the unhinged loss' SLN-robustness is empirically manifest. We reiterate that with high regularisation, the unhinged solution is equivalent to an SVM (and in the limit any classification-calibrated loss) solution. Thus, we do not aim to assert that the unhinged loss is "better" than other losses, but rather, to demonstrate that its SLN-robustness is not purely theoretical. We first show that the unhinged risk minimiser performs well on the example of Long and Servedio [2010] (henceforth LS10).
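A small simulation (ours) of the Equation 10 equivalence: with approximately whitened inputs, E[XXᵀ] ≈ I, the regularised square-loss / FLD direction and the unhinged direction E[Y·X] coincide up to scaling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: compare the FLD/square-loss solution (Equation 10) with the
# unhinged solution E[Y * X] / lambda on (approximately) whitened data.

n, d, lam = 100_000, 4, 0.5
X = rng.normal(size=(n, d))                  # isotropic, so E[X X^T] ~= I
Y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0]) + rng.normal(size=n))

m = (Y[:, None] * X).mean(axis=0)            # estimate of E[Y * X]
S = (X.T @ X) / n                            # empirical second moment
w_sq = np.linalg.solve(S + lam * np.eye(d), m)   # Equation 10
w_unh = m / lam                              # Equation 7, linear features

# Directions coincide up to the sampling noise in the whitening assumption:
cos = w_sq @ w_unh / (np.linalg.norm(w_sq) * np.linalg.norm(w_unh))
print("cosine similarity:", float(cos))
```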
Figure 1 shows the distribution D, where X = {(1, 0), (γ, 5γ), (γ, −γ)} ⊂ R², with marginal distribution M = {1/4, 1/4, 1/2} and all three instances are deterministically positive. We pick γ = 1/2. The unhinged minimiser perfectly classifies all three points, regardless of the level of label noise (Figure 1). The hinge minimiser is perfect when there is no noise, but with even a small amount of noise, achieves a 50% error rate.

⁴ Long and Servedio [2010, Section 6] show that ℓ1 regularisation does not endow SLN-robustness.
⁵ Square loss escapes the result of Long and Servedio [2010] since it is not monotone decreasing.

Figure 1: LS10 dataset (decision boundaries of the unhinged minimiser, and of the hinge minimiser at 0% and 1% noise).

Table 1: Mean and standard deviation of the 0-1 error over 125 trials on LS10. Grayed cells denote the best performer at that noise rate.

         | Hinge       | t-logistic  | Unhinged
ρ = 0    | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
ρ = 0.1  | 0.15 ± 0.27 | 0.00 ± 0.00 | 0.00 ± 0.00
ρ = 0.2  | 0.21 ± 0.30 | 0.00 ± 0.00 | 0.00 ± 0.00
ρ = 0.3  | 0.38 ± 0.37 | 0.22 ± 0.08 | 0.00 ± 0.00
ρ = 0.4  | 0.42 ± 0.36 | 0.22 ± 0.08 | 0.00 ± 0.00
ρ = 0.49 | 0.47 ± 0.38 | 0.39 ± 0.23 | 0.34 ± 0.48

We next consider empirical risk minimisers from a random training sample: we construct a training set of 800 instances, injected with varying levels of label noise, and evaluate classification performance on a test set of 1000 instances. We compare the hinge, t-logistic (for t = 2) [Ding and Vishwanathan, 2010] and unhinged minimisers using a linear scorer without a bias term, and regularisation strength λ = 10⁻¹⁶. From Table 1, even at 40% label noise, the unhinged classifier is able to find a perfect solution. By contrast, both other losses suffer at even moderate noise rates. We next report results on some UCI datasets, where we additionally tune a threshold so as to ensure the best training set 0-1 accuracy. Table 2 summarises results on a sample of four datasets. (The Appendix contains results with more datasets, performance metrics, and losses.)
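The LS10 computation can be reproduced in closed form (our sketch): the unhinged minimiser is proportional to E[Y·X], which for γ = 1/2 and marginal (1/4, 1/4, 1/2) separates all three points; symmetric noise only rescales it by 1 − 2ρ, so its 0-1 error stays zero at every noise rate below 1/2:

```python
import numpy as np

gamma = 0.5
X = np.array([[1.0, 0.0],               # the three LS10 instances
              [gamma, 5 * gamma],
              [gamma, -gamma]])
marginal = np.array([0.25, 0.25, 0.5])  # M = {1/4, 1/4, 1/2}
y = np.array([1.0, 1.0, 1.0])           # all deterministically positive

# Unhinged minimiser direction: E[Y * X].
w_unh = (marginal[:, None] * y[:, None] * X).sum(axis=0)
print(w_unh)               # [0.625 0.375]
print(np.sign(X @ w_unh))  # [1. 1. 1.] -> zero 0-1 error

# Symmetric label noise at rate rho < 1/2 rescales E[Y * X] by (1 - 2*rho),
# leaving the sign of every score, hence the classifier, unchanged.
```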
Even at noise close to 50%, the unhinged loss is often able to learn a classifier with some discriminative power.

Table 2: Mean and standard deviation of the 0-1 error over 125 trials on UCI datasets.

(a) iris
         | Hinge       | t-logistic  | Unhinged
ρ = 0    | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
ρ = 0.1  | 0.01 ± 0.03 | 0.01 ± 0.03 | 0.00 ± 0.00
ρ = 0.2  | 0.06 ± 0.12 | 0.04 ± 0.05 | 0.00 ± 0.01
ρ = 0.3  | 0.17 ± 0.20 | 0.09 ± 0.11 | 0.02 ± 0.07
ρ = 0.4  | 0.35 ± 0.24 | 0.24 ± 0.16 | 0.13 ± 0.22
ρ = 0.49 | 0.60 ± 0.20 | 0.49 ± 0.20 | 0.45 ± 0.33

(b) housing
         | Hinge       | t-logistic  | Unhinged
ρ = 0    | 0.05 ± 0.00 | 0.05 ± 0.00 | 0.05 ± 0.00
ρ = 0.1  | 0.06 ± 0.01 | 0.07 ± 0.02 | 0.05 ± 0.00
ρ = 0.2  | 0.06 ± 0.01 | 0.08 ± 0.03 | 0.05 ± 0.00
ρ = 0.3  | 0.08 ± 0.04 | 0.11 ± 0.05 | 0.05 ± 0.01
ρ = 0.4  | 0.14 ± 0.10 | 0.24 ± 0.13 | 0.09 ± 0.10
ρ = 0.49 | 0.45 ± 0.26 | 0.49 ± 0.16 | 0.46 ± 0.30

(c) usps0v7
         | Hinge       | t-logistic  | Unhinged
ρ = 0    | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
ρ = 0.1  | 0.10 ± 0.08 | 0.11 ± 0.02 | 0.00 ± 0.00
ρ = 0.2  | 0.19 ± 0.11 | 0.15 ± 0.02 | 0.00 ± 0.00
ρ = 0.3  | 0.31 ± 0.13 | 0.22 ± 0.03 | 0.01 ± 0.00
ρ = 0.4  | 0.39 ± 0.13 | 0.33 ± 0.04 | 0.02 ± 0.02
ρ = 0.49 | 0.50 ± 0.16 | 0.48 ± 0.04 | 0.34 ± 0.21

(d) splice
         | Hinge       | t-logistic  | Unhinged
ρ = 0    | 0.05 ± 0.00 | 0.04 ± 0.00 | 0.19 ± 0.00
ρ = 0.1  | 0.15 ± 0.03 | 0.24 ± 0.00 | 0.19 ± 0.01
ρ = 0.2  | 0.21 ± 0.03 | 0.24 ± 0.00 | 0.19 ± 0.01
ρ = 0.3  | 0.25 ± 0.03 | 0.24 ± 0.00 | 0.19 ± 0.03
ρ = 0.4  | 0.31 ± 0.05 | 0.24 ± 0.00 | 0.22 ± 0.05
ρ = 0.49 | 0.48 ± 0.09 | 0.40 ± 0.24 | 0.45 ± 0.08

8 Conclusion and future work

We proposed a convex, classification-calibrated loss, proved that it is robust to symmetric label noise (SLN-robust), showed it is the unique loss that satisfies a notion of strong SLN-robustness, established that it is optimised by the nearest centroid classifier, and showed that most convex potentials, such as the SVM, are also SLN-robust when highly regularised. So, with apologies to Wilde [1895]: While the truth is rarely pure, it can be simple.
Acknowledgments

NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. The authors thank Cheng Soon Ong for valuable comments on a draft of this paper.

References

Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., 2006.
Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Conference on Computational Learning Theory (COLT), pages 92–100, 1998.
Vasil Denchev, Nan Ding, Hartmut Neven, and S.V.N. Vishwanathan. Robust classification with adiabatic quantum optimization. In International Conference on Machine Learning (ICML), pages 863–870, 2012.
Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
Nan Ding and S.V.N. Vishwanathan. t-logistic regression. In Advances in Neural Information Processing Systems (NIPS), pages 514–522. Curran Associates, Inc., 2010.
Thomas S. Ferguson. Mathematical Statistics: A Decision Theoretic Approach. Academic Press, 1967.
Aritra Ghosh, Naresh Manwani, and P. S. Sastry. Making risk minimization tolerant to label noise. Neurocomputing, 160:93–107, 2015.
Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415, December 2004.
Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 5(6):392–401, November 1998.
Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boosters.
Machine Learning, 78(3):287–304, 2010.
Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA, 2008.
Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization. IEEE Transactions on Cybernetics, 43(3):1146–1151, June 2013.
Hamed Masnadi-Shirazi, Vijay Mahadevan, and Nuno Vasconcelos. On the design of robust classifiers for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep D. Ravikumar, and Ambuj Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems (NIPS), pages 1196–1204, 2013.
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), pages 1177–1184, 2007.
Mark D. Reid and Robert C. Williamson. Composite binary losses. Journal of Machine Learning Research, 11:2387–2422, December 2010.
Mark D. Reid and Robert C. Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731–817, March 2011.
Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels, volume 129. MIT Press, 2002.
Rocco A. Servedio. On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm. In Conference on Computational Learning Theory (COLT), 1999.
John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory (ALT), 2007.
Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Gert R. G. Lanckriet, and Bernhard Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems (NIPS), 2009.
Guillaume Stempfel and Liva Ralaivola. Learning SVMs from sloppily labeled data. In Artificial Neural Networks (ICANN), volume 5768, pages 884–893. Springer Berlin Heidelberg, 2009.
Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proceedings of the National Academy of Sciences, 99(10):6567–6572, 2002.
Oscar Wilde. The Importance of Being Earnest, 1895.
VISALOGY: Answering Visual Analogy Questions Fereshteh Sadeghi University of Washington fsadeghi@cs.washington.edu C. Lawrence Zitnick Microsoft Research larryz@microsoft.com Ali Farhadi University of Washington, The Allen Institute for AI ali@cs.washington.edu Abstract In this paper, we study the problem of answering visual analogy questions. These questions take the form of image A is to image B as image C is to what. Answering these questions entails discovering the mapping from image A to image B and then extending the mapping to image C and searching for the image D such that the relation from A to B holds for C to D. We pose this problem as learning an embedding that encourages pairs of analogous images with similar transformations to be close together using convolutional neural networks with a quadruple Siamese architecture. We introduce a dataset of visual analogy questions in natural images, and show first results of its kind on solving analogy questions on natural images. 1 Introduction Analogy is the task of mapping information from a source to a target. Analogical thinking is a crucial component in problem solving and has been regarded as a core component of cognition [1]. Analogies have been extensively explored in cognitive sciences and explained by several theories and models: shared structure [1], shared abstraction [2], identity of relation, hidden deduction [3], etc. The common two components among most theories are the discovery of a form of relation or mapping in the source and extension of the relation to the target. Such a process is very similar to the tasks in analogy questions in standardized tests such as the Scholastic Aptitude Test (SAT): A is to B as C is to what? In this paper, we introduce VISALOGY to address the problem of solving visual analogy questions. Three images Ia, Ib, and Ic are provided as input and a fourth image Id must be selected such that Ia is to Ib as Ic is to Id. 
This involves discovering an extendable mapping from Ia to Ib and then applying it to Ic to find Id. Estimating such a mapping for natural images using current feature spaces would require careful alignment, complex reasoning, and potentially expensive training data. Instead, we learn an embedding space where reasoning about analogies can be performed by simple vector transformations. This is in fact aligned with the traditional logical understanding of analogy as an arrow or homomorphism from source to target. Our goal is to learn a representation that, given a set of training analogies, can generalize to unseen analogies across various categories and attributes. Figure 1 shows an example visual analogy question. Answering this question entails discovering the mapping from the brown bear to the white bear (in this case a color change), applying the same mapping to the brown dog, and then searching among a set of images (the middle row in Figure 1) to find an example that respects the discovered mapping from the brown dog best. Such a mapping should ideally prefer white dogs. The bottom row shows a ranking imposed by VISALOGY.

Figure 1: A visual analogy question asks for a missing image Id given three images Ia, Ib, Ic in the analogy quadruple. Solving a visual analogy question entails discovering the mapping from Ia to Ib and applying it to Ic, then searching among a set of images (the middle row: correct answers mixed with distractor negative images) to find the best image for which the mapping holds.

We propose learning an embedding that encourages pairs of analogous images with similar mappings to be close together. Specifically, we learn a Convolutional Neural Network (CNN) with a quadruple Siamese architecture (Figure 2) to obtain an embedding space where analogical reasoning can be
The bottom row shows an ordering of the images imposed by VISALOGY based on how likely they can be the answer to the analogy question. done with simple vector transformations. Doing so involves fine tuning the last layers of our network so that the difference in the unit normalized activations between analogue images is similar for image pairs with similar mapping and dissimilar for those that are not. We also evaluate VISALOGY on generalization to unseen analogies. To show the benefits of the proposed method, we compare VISALOGY against competitive baselines that use standard CNNs trained for classification. Our experiments are conducted on datasets containing natural images as well as synthesized images and the results include quantitative evaluations of VISALOGY across different sizes of distractor sets. The performance in solving analogy questions is directly affected by the size of the set from which the candidate images are selected. In this paper we study the problem of visual analogies for natural images and show the first results of its kind on solving visual analogy questions for natural images. Our proposed method learns an embedding where similarities are transferable across pairs of analogous images using a Siamese network architecture. We introduce Visual Analogy Question Answering (VAQA), a dataset of natural images that can be used to generate analogies across different objects attributes and actions of animals. We also compile a large set of analogy questions using the 3D chair dataset [4] containing analogies across viewpoint and style. Our experimental evaluations show promising results on solving visual analogy questions. We explore different kinds of analogies with various numbers of distracters, and show generalization to unseen analogies. 
2 Related Work

The problem of solving analogy questions has been explored in NLP using word-pair connectives [5], supervised learning [6, 7, 8], distributional similarities [9], word vector representations and linguistic regularities [10], and learning by reading [11]. Solving analogy questions for diagrams and sketches has been extensively explored in AI [12]. These papers either assume simple forms of drawings [13], require an abstract representation of diagrams [14], or rely on spatial reasoning [15]. In [16] an analogy-based framework is proposed to learn 'image filters' between a pair of images to create an 'analogous' filtered result on a third image. Related to analogies is learning how to separate category and style properties in images, which has been studied using bilinear models [17]. In this paper, we study the problem of visual analogies for natural images possessing different semantic properties, where obtaining abstract representations is extremely challenging. Our work is also related to metric learning using deep neural networks. In [18] a convolutional network is learned in a Siamese architecture for the task of face verification. Attributes have been shown to be effective representations for semantic image understanding [19]. In [20], relative attributes are introduced to learn a ranking function per attribute. While these methods provide an efficient feature representation to group similar objects and map similar images near each other in an embedding space, they do not offer a semantic space that can capture object-to-object mappings and cannot be directly used for object-to-object analogical inference. In [21] the relationships between multiple pairs of classes are modeled via analogies, which is shown to improve recognition as well as GRE textual analogy tests. In our work we learn analogies without explicitly considering categories, and no textual data is provided in our analogy questions.
Learning representations using both textual and visual information has also been explored using deep architectures. These representations show promising results for learning a mapping between visual data [22], the same way that it was shown for text [23]. We differ from these methods as our objective is directly optimized for analogy questions and our method does not use textual information. Different forms of visual reasoning have been explored in the question-answering domain. Recently, the visual question answering problem has been studied in several papers [24, 25, 26, 27, 28, 29]. In [25] a method is introduced for answering several types of textual questions grounded with images, while [27] proposes the task of open-ended visual question answering. In another recent approach [26], knowledge extracted from web visual data is used to answer open-domain questions. While these works all use visual reasoning to answer questions, none have considered solving analogy questions.

3 Our Approach

We pose answering a visual analogy question I1 : I2 :: I3 : ? as the problem of discovering the mapping from image I1 to image I2 and searching for an image I4 that has the same relation to image I3 as I1 has to I2. Specifically, we find a function T (parametrized by θ) that maps each pair of images (I1, I2) to a vector x12 = T(X1, X2; θ). The goal is to solve for parameters θ such that x12 ≈ x34 for positive image analogies I1 : I2 :: I3 : I4. As we describe below, T is computed using the differences in ConvNet output features between images.

3.1 Quadruple Siamese Network

A positive training example for our network is an analogical quadruple of images [I1 : I2 :: I3 : I4] where the transformation from I3 to I4 is the same as that of I1 to I2. To be able to solve the visual analogy problem, our learned parameters θ should map these two transformations to a similar location.
To formalize this, we use a contrastive loss function L to measure how well T is capable of placing similar transformations nearby in the embedding space and pushing dissimilar transformations apart. Given a d-dimensional feature vector x for each pair of input images, the contrastive loss is defined as:

L_m(x12, x34) = y‖x12 − x34‖ + (1 − y) max(m − ‖x12 − x34‖, 0)   (1)

where x12 and x34 refer to the embedding feature vectors for (I1, I2) and (I3, I4) respectively. The label y is 1 if the input quadruple [I1 : I2 :: I3 : I4] is a correct analogy and 0 otherwise. Also, m > 0 is the margin parameter: it pushes x12 and x34 close to each other in the embedding space if y = 1, and forces the distance between x12 and x34 in wrong analogy pairs (y = 0) to be bigger than m. We train our network with both correct and wrong analogy quadruples, and the error is back-propagated through stochastic gradient descent to adjust the network weights θ. The overview of our network architecture is shown in Figure 2. To compute the embedding vectors x, we use the quadruple Siamese architecture shown in Figure 2. Using this architecture, each image in the analogy quadruple is fed through a ConvNet (AlexNet [30]) with shared parameters θ. The label y shows whether the input quadruple is a correct analogy (y = 1) or a false analogy (y = 0) example. To capture the transformation between image pairs (I1, I2) and (I3, I4), the outputs of the last fully connected layer are subtracted. We normalize our embedding vectors to have unit L2 length, which results in the Euclidean distance being the same as the cosine distance. If Xi is the output of the last fully connected layer in the ConvNet for image Ii, then xij = T(Xi, Xj; θ) is computed by:

T(Xi, Xj; θ) = (Xi − Xj) / ‖Xi − Xj‖   (2)

Using the loss function defined in Equation (1) may lead to the network overfitting: positive analogy pairs in the training set can get pushed too close together in the embedding space during training.
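The pair embedding of Equation 2 and the single-margin contrastive loss of Equation 1 can be sketched as follows (our own toy illustration, with plain vectors standing in for the ConvNet features Xi):

```python
import numpy as np

def pair_embedding(x_i, x_j):
    """T(X_i, X_j; theta) = (X_i - X_j) / ||X_i - X_j||  (Equation 2)."""
    d = x_i - x_j
    return d / np.linalg.norm(d)

def contrastive_loss(x12, x34, y, m=0.4):
    """Equation 1: pull positive analogy pairs together; push negative
    pairs apart until their distance exceeds the margin m."""
    dist = np.linalg.norm(x12 - x34)
    return y * dist + (1 - y) * max(m - dist, 0.0)

# Two pairs related by the same transformation embed identically:
x12 = pair_embedding(np.array([1.0, 0.0]), np.array([0.0, 0.0]))
x34 = pair_embedding(np.array([3.0, 0.0]), np.array([2.0, 0.0]))
print(contrastive_loss(x12, x34, y=1))  # 0.0
```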
To overcome this problem, we consider a margin mP > 0 for positive analogy quadruples. In this case, x12 and x34 in the positive analogy pairs will be pushed close to each other only if the distance between them is bigger than mP. It is clear that 0 ≤ mP ≤ mN should hold between the two margins.

L_{mP,mN}(x12, x34) = y max(‖x12 − x34‖ − mP, 0) + (1 − y) max(mN − ‖x12 − x34‖, 0)   (3)

Figure 2: The VISALOGY network has a quadruple Siamese architecture with shared θ parameters. The network is trained with correct analogy quadruples of images [I1, I2, I3, I4] along with wrong analogy quadruples as negative samples. The contrastive loss function pushes (I1, I2) and (I3, I4) of correct analogies close to each other in the embedding space while forcing the distance between (I1, I2) and (I3, I4) in negative samples to be more than margin m.

3.2 Building Analogy Questions

For creating a dataset of visual analogy questions we assume each training image has information (c, p), where c ∈ C denotes its category and p ∈ P denotes its property. Example properties include color, actions, and object orientation. A valid analogy quadruple should have the form [I1^(ci,p1) : I2^(ci,p2) :: I3^(co,p1) : I4^(co,p2)], where the two input images I1 and I2 have the same category ci, but their properties are different. That is, I1 has the property p1 while I2 has the property p2. Similarly, the output images I3 and I4 share the same category co where ci ≠ co. Also, I3 has the property p1 while I4 has the property p2, and p1 ≠ p2.
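The double-margin loss of Equation 3 can be sketched analogously (our toy illustration; mP = 0.2 and mN = 0.4 are the values used later in the implementation details, §4.1):

```python
import numpy as np

# Sketch of the double-margin loss (Equation 3): positive analogy pairs
# are pulled together only while their distance exceeds m_P, which keeps
# the embedding from collapsing to a point.

def double_margin_loss(x12, x34, y, m_p=0.2, m_n=0.4):
    dist = np.linalg.norm(x12 - x34)
    if y == 1:
        return max(dist - m_p, 0.0)   # no pull once inside the positive margin
    return max(m_n - dist, 0.0)       # push negatives past the negative margin

a = np.array([1.0, 0.0])
b = np.array([1.1, 0.0])              # distance 0.1 < m_p
print(double_margin_loss(a, b, y=1))  # 0.0
print(double_margin_loss(a, b, y=0))  # ~0.3, still inside the negative margin
```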
Generating Positive Quadruples: Given a set of labeled images, we construct our set of analogy types. We select two distinct categories c, c′ ∈ C and two distinct properties p, p′ ∈ P which are shared between c and c′. Using these selections, we can build 4 different analogy types (either c or c′ can be considered as ci and co, and similarly for p and p′). For each analogy type (e.g. [(ci, p1) : (ci, p2) :: (co, p1) : (co, p2)]), we can generate a set of positive analogy samples by combining corresponding images. This procedure provides a large number of positive analogy pairs.

Generating Negative Quadruples: Using only positive samples for training the network leads to degenerate models, since the loss can be made zero by simply mapping each input image to a constant vector. Therefore, we also generate quadruples that violate the analogy rules as negative samples during training. To generate negative quadruples, we take two approaches. In the first approach, we randomly select 4 images from the whole set of training images and each time check that the generated quadruple is not a valid analogy. In the second approach, we first generate a positive analogy quadruple, then we randomly replace either I3 or I4 with an improper image to break the analogy. Suppose we select I3 for replacement. Then we can either randomly select an image with category co and a property p∗ where p∗ ≠ p1 and p∗ ≠ p2, or we can randomly select an image with property p1 but with a category c∗ where c∗ ≠ co. The second approach generates a set of hard negatives to help improve training. During training, we randomly sample from the whole set of possible negatives.

4 Experiments

Testing Scenario and Evaluation Metric: To evaluate the performance of our method for solving visual analogy questions, we create a set of analogy questions [I1 : I2 :: I3 : ?] using the (c, p) labels of images.
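The positive- and hard-negative-quadruple construction described above can be sketched as follows (our own helper with toy image ids; all names are illustrative only):

```python
import random

def positive_quadruple(images, ci, co, p1, p2):
    """images maps (category, property) -> list of image ids; returns one
    quadruple of the form [(ci,p1), (ci,p2), (co,p1), (co,p2)]."""
    return [random.choice(images[(ci, p1)]), random.choice(images[(ci, p2)]),
            random.choice(images[(co, p1)]), random.choice(images[(co, p2)])]

images = {("dog", "brown"): ["d1"], ("dog", "white"): ["d2"],
          ("bear", "brown"): ["b1"], ("bear", "white"): ["b2"],
          ("bear", "black"): ["b3"]}

quad = positive_quadruple(images, "dog", "bear", "brown", "white")
print(quad)  # ['d1', 'd2', 'b1', 'b2']

# Hard negative: replace I4 with an image of the correct category co but a
# property p* not in {p1, p2}, which breaks the analogy.
hard_negative = quad[:3] + [random.choice(images[("bear", "black")])]
print(hard_negative)  # ['d1', 'd2', 'b1', 'b3']
```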
Given a set D of images which contains both positive and distractor images, we would like to rank each image Ii in D based on how well it completes the analogy. We compute the corresponding feature embeddings x1, x2, x3 for each of the input images, as well as xi for each image in D, and we rank based on:

rank_i = T(I1, I2) · T(I3, Ii) / (||T(I1, I2)|| · ||T(I3, Ii)||),   i ∈ {1, ..., n}   (4)

where T(·) is the embedding obtained from our network as explained in Section 3. We consider the images with the same category c as I3 and the same property p as I2 to be correct retrievals, and thus positive images, and the rest of the images in D as negative images. We compute the recall at top-k to measure whether or not an image with an appropriate label has appeared in the top k retrieved images.

Figure 3: Quantitative evaluation (log scale) on the 3D chairs dataset. Recall as a function of the number (k) of images returned (recall at top-k). For each question the recall at top-k is either 0 or 1 and is averaged over 10,000 questions. The size of the distractor set D is varied, D = [100, 500, 1000, 2000]. ‘AlexNet’: AlexNet; ‘AlexNet, ft’: AlexNet fine-tuned on the chairs dataset for categorizing view-points.

Baseline: It has been shown that the output of the 7th layer in AlexNet produces high-quality, state-of-the-art image descriptors [30]. In each of our experiments, we compare the performance of solving visual analogy problems using the image embedding obtained from our network with the image representation of AlexNet.
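Equation (4) can be sketched in a few lines. The paper does not spell out T(·) beyond calling it the embedding of an image pair; the sketch below assumes, hypothetically, that T(Ia, Ib) is the difference of the two image embeddings, so the score becomes the cosine similarity between the query transformation vector and each candidate's transformation vector.

```python
import math

def _cos(u, v):
    """Cosine similarity of two vectors given as float sequences."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(x1, x2, x3, candidates):
    """Rank candidate embeddings by Eq. (4), assuming T(Ia, Ib) = xb - xa.

    Returns candidate indices sorted best-first, plus the raw scores.
    """
    t12 = [b - a for a, b in zip(x1, x2)]
    scores = [_cos(t12, [c - a for a, c in zip(x3, xi)]) for xi in candidates]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores
```

Under this reading, the best completion is the candidate whose displacement from I3 is most parallel to the displacement from I1 to I2.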
In practice, we pass each test image through AlexNet and our network, and extract the output of the last fully connected layer of both networks. Note that for solving general analogy questions, the set of properties and categories is not known at test time. Accordingly, our proposed network does not use any labels during training and aims to generalize the transformations without explicitly using the labels of categories and properties. Dataset: To evaluate the capability of our trained network for solving analogy questions in the test scenarios explained above, we use a large dataset of 3D chairs [4] as well as a novel dataset of natural images (VAQA) that we collected for solving analogy questions on natural images.

4.1 Implementation Details

In all the experiments, we use stochastic gradient descent (SGD) to train our network. For initializing the weights of our network, we use the AlexNet network pre-trained for the task of large-scale object recognition (ILSVRC2012) provided by the BVLC Caffe website [31]. We fine-tune the last two fully connected layers (fc6, fc7) and the last convolutional layer (conv5) unless stated otherwise. We also use the double-margin loss function introduced in Equation 3 with mP = 0.2, mN = 0.4, which we empirically found to give the best results on a held-out validation set. The effect of using a single-margin vs. double-margin loss function is investigated in Section 4.4.

4.2 Analogy Question Answering Using 3D Chairs

We use a large collection of 1,393 models of chairs with different styles introduced in [4]. To make the dataset, the CAD models were downloaded from the Google/Trimble 3D Warehouse, and each chair style was rendered on a white background from different view points. For making analogy quadruples, we use 31 different view points of each chair style, which results in 1,393 × 31 = 43,183 synthesized images.
In this dataset, we treat different styles as different categories and different view points as different properties of the images, following the conventions of Section 3.2. We randomly select 1,000 styles and 16 view points for training and keep the rest for testing. We use the remaining 393 styles of chairs with 15 view points (which are completely unseen during training) to build unseen analogy questions that test the generalization capability of our network at test time. To construct an analogy question, we randomly select two different styles and two different view points. The first part of the analogy quadruple (I1, I2) contains two images with the same style and two different view points. The images from the second half of the analogy quadruple (I3, I4) have another style, and I3 has the same view point as I1 while I4 has the same view point as I2. Together, I1, I2, I3 and I4 build an analogy question (I1 : I2 :: I3 : ?) where I4 is the correct answer. Using this approach, the total number of positive analogies that could be used during training is C(1000, 2) × C(16, 2) × 4 = 999,240.

Figure 4: Left: Several examples of analogy questions from the 3D chairs dataset. In each question, the first and second chair have the same style while their view points change. The third image has the same view point as the first image but a different style. The correct answer to each question is retrieved from a set with 100 distractors and should have the same style as the third image while its view point should be similar to the second image. Middle: Top-4 retrievals using the features obtained from our method. Right: Top-4 retrievals using AlexNet features. All retrievals are sorted from left to right.

To train our network, we uniformly sampled 700,000 quadruples (of positive and negative analogies), initialized the weights with the AlexNet pre-trained network, and fine-tuned its parameters.
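Question construction for the chairs setting amounts to sampling two styles and two view points and reading off the four renders. A sketch, where the `renders` mapping from a (style, view) pair to its single rendered image and the function name are assumptions for illustration:

```python
import random

def chair_question(styles, viewpoints, renders):
    """Build one analogy question (I1 : I2 :: I3 : ?) from the 3D chairs
    labels: two random styles, two random viewpoints (sketch).

    `renders[(style, view)]` is assumed to map a (style, viewpoint) pair
    to its unique rendered image.
    """
    s1, s2 = random.sample(styles, 2)
    v1, v2 = random.sample(viewpoints, 2)
    # I1, I2 share style s1; I3 shares viewpoint v1 with I1.
    question = (renders[(s1, v1)], renders[(s1, v2)], renders[(s2, v1)])
    answer = renders[(s2, v2)]  # same style as I3, same viewpoint as I2
    return question, answer
```

Because each (style, view-point) pair has exactly one render, each question has exactly one correct answer, which is what makes the single-positive recall evaluation below well defined.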
Figure 4 shows several examples of the analogy questions (left column) used at test time and the top-4 images retrieved by our method (middle column) compared with the baseline (right column). We see that our proposed approach can retrieve images with a style similar to that of the third image and a view-point similar to that of the second image, while the baseline approach is biased towards retrieving chairs with a style similar to that of the first and second images. To quantitatively compare the performance of our method with the baseline, we randomly generated 10,000 analogy questions using the test images and report the average recall at top-k retrieval while varying the number of distractor images in D. Note that, since there is only one image corresponding to each (style, view-point) pair, there is only one positive answer image for each question. The performance of chance at top-k retrieval is k/n, where n is the size of D. The images of this dataset are synthesized and do not follow natural image statistics. Therefore, for a fair comparison of the results obtained from our network with those of the baseline (AlexNet), we fine-tune all layers of AlexNet via a soft-max loss for categorization of different view-points, using the set of images seen during training. We then use the features obtained from the last fully connected layer (fc7) of this network to solve analogy questions. As shown in Figure 3, fine-tuning all layers of AlexNet (the violet curve referred to as ‘AlexNet, ft’ in the diagram) helps improve the performance of the baseline. However, the recall of our network still outperforms it by a large margin.

4.3 Analogy Question Answering Using the VAQA Dataset

As explained in Section 3.2, to construct a natural image analogy dataset we need images of numerous object categories with distinguishable properties.
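The evaluation metric can be sketched directly: with a single positive per question, recall at top-k is 0 or 1 for each question and is averaged over all questions, as in Figure 3.

```python
def recall_at_k(scores, positive_idx, k):
    """Recall at top-k for one question with a single positive image:
    1 if the positive appears among the k highest-scoring candidates."""
    topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return int(positive_idx in topk)

def mean_recall(all_scores, all_positives, k):
    """Average recall at top-k over a set of questions."""
    hits = [recall_at_k(s, p, k) for s, p in zip(all_scores, all_positives)]
    return sum(hits) / len(hits)
```

With n distractors and one positive, a random ranking places the positive in the top k with probability k/n, matching the chance curve in Figure 3.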
We also need these properties to be shared amongst object categories so that we can make valid analogy quadruples using the (c, p) labels. In natural images, we consider the property of an object to be either the action it is performing (for animate objects) or its attribute (for both animate and inanimate objects). Unfortunately, we found that current datasets have a sparse number of object properties per class, which restricts the number of possible analogy questions. For instance, many action datasets are human-centric and do not have analogous actions for animals. As a result, we collected our own dataset, VAQA, for solving visual analogy questions. Data collection: We considered a list of ‘attributes’ and ‘actions’ along with a list of common objects and paired them to make a list of (c, p) labels for collecting images. From this list, we removed (c, p) combinations that are not common in the real world (e.g. (horse, blue) is not common in the real world, though there might be synthesized images of a ‘blue horse’ on the web). We used the remaining list of labels to query Google Image Search with phrases made from the concatenation of the words c and p, and downloaded 100 images for each phrase. The images were manually verified to contain the concept of interest. However, we did not pose any restriction on the view-point of the objects. After the pruning step, there remain around 70 images per phrase, with a total of 7,500 images. The VAQA dataset consists of images corresponding to 112 phrases, which are made out of 14 different categories and 22 properties. Using the shared properties amongst categories, we can build 756 types of analogies. In our experiments, we used over 700,000 analogy questions for training our network.
Figure 5: Quantitative evaluation (log scale) on the VAQA dataset using ‘attribute’ and ‘action’ analogy questions. Recall as a function of the number (k) of images returned (recall at top-k). For each question the recall at top-k is averaged over 10,000 questions. The size of the distractor set is fixed at 250 in all experiments. Results for analogy types seen in training are shown in the two left plots, and for analogy types not seen in training in the two right plots.

Attribute analogy: Following the procedure explained in Section 3.2, we build positive and negative quadruples to train our network. To test the generalization of the learned embeddings to analogy question types that are not seen during training, we randomly select 18 attribute analogy types and remove their samples from the training set of analogies. Using the remaining analogy types, we sampled a total of 700,000 quadruples (positive and negative) to train the network. Action analogy: Similarly, we trained our network to learn action analogies. For the generalization test, we removed 12 randomly selected analogy types and made the training quadruples using the remaining types. We sampled 700,000 quadruples (positive and negative) to train the network. Evaluation on VAQA: Using images unseen during training, we make analogy quadruples to test the trained networks for the ‘attribute’ and ‘action’ analogies.
To evaluate both the specialization and the generalization of our trained network, we generate analogy quadruples in two scenarios, ‘seen’ and ‘unseen’ analogies, using the analogy types seen during training and the ones in the withheld sets, respectively. In each of these scenarios, we generated 10,000 analogy questions and report the average recall at top-k. For each question [I1 : I2 :: I3 : ?], images that have property p equal to that of I2 and category c equal to that of I3 are considered correct answers. This results in around 4 positive images for each question, and we fix the distractor set to 250 negative images for each question. Given the small size of our distractor set, we report the average recall at top-10. The obtained results in the different scenarios are summarized in Figure 5. In all cases, our method outperforms the baseline. Besides training separate networks for ‘attribute’ and ‘action’ analogies, we also trained and tested our network with a combined set of analogy questions and obtained promising results, with a gap of 5% compared to our baseline on the top-5 retrievals of the seen analogy questions. Note that our current dataset only has one property label per image (either an ‘attribute’ or an ‘action’). Thus, a negative analogy for one property may be positive for the other. A more thorough analysis would require multi-property data, which we leave for future work. Qualitative Analysis: Figure 6 shows examples of attribute analogy questions that are used for evaluating our network, along with the top five retrieved images obtained from our method and the baseline method. As explained above, during data collection we only prune out images that do not contain the (c, p) of interest. Also, we do not pose any restrictions when generating positive quadruples, such as requiring the objects to have similar pose or requiring the same number of objects of interest in the quadruples.
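With several positives per question, one plausible reading of recall at top-k (the paper does not spell this out) is to count a hit if any of the roughly 4 positives appears among the top k retrievals:

```python
def recall_at_k_multi(scores, positive_ids, k):
    """Recall at top-k for a question with several correct answers
    (sketch): 1 if ANY positive index appears in the top-k ranking."""
    topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return int(any(i in positive_ids for i in topk))
```

This reduces to the single-positive metric of Section 4.2 when `positive_ids` contains one element.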
However, as can be seen in Figure 6, our network has been able to implicitly learn to generalize the count of objects. For example, in the first row of Figure 6, an image pair is [‘dog swimming’ : ‘dog standing’] and the second part of the analogy has an image of ‘multiple horses swimming’. Given this analogy question as input, our network retrieves images with multiple ‘standing horses’ in the top five retrievals.

4.4 Ablation Study

In this section, we investigate the effect of training the network with double margins (mP, mN) for positive and negative analogy quadruples, compared with using a single margin for negative quadruples only. We perform an ablation experiment where we compare the performance of the network at top-k retrieval while training it with either of the loss functions explained in Section 3. Also, in two different scenarios, we either fine-tune only the top fully connected layers fc6 and fc7 (referred to as ‘ft(fc6,fc7)’ in Figure 7) or the top fully connected layers plus the last convolutional layer conv5 (referred to as ‘ft(fc6,fc7,c5)’ in Figure 7).

Figure 6: Left: Samples of test analogy questions from the VAQA dataset. Middle: Top-4 retrievals using the features obtained from our method. Right: Top-4 retrievals using AlexNet features.

Figure 7: Quantitative comparison of the effect of using a double margin vs. a single margin for training the VISALOGY network.
We use a fixed training sample set consisting of 700,000 quadruples generated from the VAQA dataset in this experiment. In each case, we test the trained network using samples coming from the set of analogy questions whose types were seen/unseen during training. As can be seen from Figure 7, using double margins (mP, mN) in the loss function results in better performance in both testing scenarios. While using double margins yields a small increase in the ‘seen analogy types’ testing scenario, it considerably increases the recall when the network is tested with ‘unseen analogy types’. This demonstrates that the use of double margins helps generalization.

5 Conclusion

In this work, we introduce the new task of solving visual analogy questions. For exploring this task we provide a new dataset of natural images called VAQA. We answer the questions using a Siamese ConvNet architecture that provides an image embedding that maps together pairs of images that share similar property differences. We have demonstrated the performance of our proposed network on two datasets and have shown that our network can provide an effective feature representation for solving analogy problems compared to state-of-the-art image representations. Acknowledgments: This work was in part supported by ONR N00014-13-1-0720, NSF IIS-1218683, NSF IIS-1338054, and an Allen Distinguished Investigator Award.

References

[1] Gentner, D., Holyoak, K.J., Kokinov, B.N.: The analogical mind: Perspectives from cognitive science. MIT Press (2001) [2] Shelley, C.: Multiple analogies in science and philosophy. John Benjamins Publishing (2003) [3] Juthe, A.: Argument by analogy. Argumentation (2005) [4] Aubry, M., Maturana, D., Efros, A., Russell, B., Sivic, J.: Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models. In: CVPR. (2014) [5] Turney, P.D.: Similarity of semantic relations. Comput. Linguist.
(2006) [6] Turney, P.D., Littman, M.L.: Corpus-based learning of analogies and semantic relations. CoRR (2005) [7] Baroni, M., Lenci, A.: Distributional memory: A general framework for corpus-based semantics. Comput. Linguist. (2010) [8] Jurgens, D.A., Turney, P.D., Mohammad, S.M., Holyoak, K.J.: Semeval-2012 task 2: Measuring degrees of relational similarity, ACL (2012) [9] Turney, P.D., Pantel, P.: From frequency to meaning: Vector space models of semantics. J. Artif. Int. Res. (2010) [10] Levy, O., Goldberg, Y.: Linguistic regularities in sparse and explicit word representations. In: CoNLL, ACL (2014) [11] Barbella, D.M., Forbus, K.D.: Analogical dialogue acts: Supporting learning by reading analogies in instructional texts. In: AAAI. (2011) [12] Chang, M.D., Forbus, K.D.: Using analogy to cluster hand-drawn sketches for sketch-based educational software. AI Magazine (2014) [13] Forbus, K.D., Usher, J.M., Tomai, E.: Analogical learning of visual/conceptual relationships in sketches. In: AAAI. (2005) [14] Forbus, K., Usher, J., Lovett, A., Lockwood, K., Wetzel, J.: Cogsketch: Sketch understanding for cognitive science research and for education. Topics in Cognitive Science (2011) [15] Chang, M.D., Wetzel, J.W., Forbus, K.D.: Spatial reasoning in comparative analyses of physics diagrams. In: Spatial Cognition IX. (2014) [16] Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: SIGGRAPH, ACM (2001) [17] Tenenbaum, J.B., Freeman, W.T.: Separating style and content with bilinear models. Neural computation (2000) [18] Chopra, S., Hadsell, R., LeCun, Y.: Learning a similarity metric discriminatively, with application to face verification. In: CVPR. (2005) [19] Farhadi, A., Endres, I., Hoiem, D., Forsyth, D.: Describing objects by their attributes. In: CVPR. (2009) [20] Parikh, D., Grauman, K.: Relative attributes. In: ICCV. 
(2011) [21] Hwang, S.J., Grauman, K., Sha, F.: Analogy-preserving semantic embedding for visual object categorization. In: ICML. (2013) [22] Kiros, R., Salakhutdinov, R., Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539 (2014) [23] Mikolov, T., Yih, W.t., Zweig, G.: Linguistic regularities in continuous space word representations. In: HLT-NAACL. (2013) [24] Geman, D., Geman, S., Hallonquist, N., Younes, L.: Visual turing test for computer vision systems. PNAS (2015) [25] Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based on uncertain input. In: NIPS. (2014) [26] Sadeghi, F., Kumar Divvala, S., Farhadi, A.: VisKE: Visual Knowledge Extraction and Question Answering by Visual Verification of Relation Phrases. In: CVPR. (2015) [27] Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual question answering. In: ICCV. (2015) [28] Yu, L., Park, E., Berg, A.C., Berg, T.L.: Visual madlibs: Fill in the blank description generation and question answering. In: ICCV. (2015) [29] Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering questions about images. In: ICCV. (2015) [30] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS. (2012) [31] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
Cornering Stationary and Restless Mixing Bandits with Remix-UCB

Julien Audiffren, CMLA, ENS Cachan, Paris Saclay University, 94235 Cachan, France, audiffren@cmla.ens-cachan.fr
Liva Ralaivola, QARMA, LIF, CNRS, Aix Marseille University, F-13289 Marseille cedex 9, France, liva.ralaivola@lif.univ-mrs.fr

Abstract

We study the restless bandit problem where arms are associated with stationary ϕ-mixing processes and where rewards are therefore dependent: the question that arises from this setting is that of carefully recovering some independence by ‘ignoring’ the values of some rewards. As we shall see, the bandit problem we tackle requires us to address the exploration/exploitation/independence trade-off, which we do through the idea of a waiting arm in the new Remix-UCB algorithm that we introduce, a generalization of Improved-UCB for the problem at hand. We provide a regret analysis for this bandit strategy; two noticeable features of Remix-UCB are that i) it reduces to the regular Improved-UCB when the ϕ-mixing coefficients are all 0, i.e. when the i.i.d. scenario is recovered, and ii) when ϕ(n) = O(n^{−α}), it is able to ensure a controlled regret of order Θ̃(∆_*^{(α−2)/α} log^{1/α} T), where ∆_* encodes the distance between the best arm and the best suboptimal arm, even in the case α < 1, i.e. when the ϕ-mixing coefficients are not summable.

1 Introduction

Bandits with mixing arms. The bandit problem consists of an agent who has to choose at each step between K arms. A stochastic process is associated with each arm, and pulling an arm produces a reward which is a realization of the corresponding stochastic process. The objective of the agent is to maximize its long-term reward. In the abundant bandit literature, it is often assumed that the stochastic process associated with each arm is a sequence of independently and identically distributed (i.i.d.) random variables (see, e.g. [12]).
In that case, the challenge the agent has to address is the well-known exploration/exploitation problem: she has to simultaneously make sure that she collects information from all arms to try to identify the most rewarding ones (exploration) and maximize the rewards along the sequence of pulls she performs (exploitation). Many algorithms have been proposed to solve this trade-off between exploration and exploitation [2, 3, 6, 12]. We propose to go a step further than the i.i.d. setting and work in the situation where the process associated with each arm is a stationary ϕ-mixing process: the rewards are thus dependent on one another, with a strength of dependence that weakens over time. From an application point of view, this is a reasonable dependence structure: if a user clicks on some ad (a typical use of bandit algorithms) at some point in time, it is very likely that her choice will have an influence on what she will click in the near future, while it may have a much weaker impact on what ad she will choose to view in a more distant future. As will appear in the sequel, working with such dependent observations poses the question of how informative some of the rewards are with respect to the value of an arm: because of the dependencies and the strong correlation between close-by (in time) rewards, they might not reflect the true ‘value’ of the arms. However, as the dependencies weaken over time, some kind of independence might be recovered if some rewards are, in some sense, ignored. This actually requires us to deal with a new trade-off, the exploration/exploitation/independence trade-off, where the usual exploration/exploitation compromise has to be balanced with the need for some independence. Dealing with this new trade-off is the pivotal feature of our work. Non-i.i.d. bandits.
A closely related setup that addresses the bandit problem with dependent rewards is when rewards are distributed according to Markov processes, such as Markov chains and Markov decision processes (MDPs) [16, 22], where the dependences between rewards are of bounded range, which is what distinguishes those works from ours. Contributions in this area study two settings: the rested case, where the process attached to an arm evolves only when the arm is pulled, and the restless case, where all processes simultaneously evolve at each time step. In the present work, we focus on the restless setting. The adversarial bandit setup (see e.g. [1, 4, 19]) can be seen as a non-i.i.d. setup, as the rewards chosen by the adversary might depend on the agent's past actions. However, even if the algorithms developed for this framework can be used in our setting, they might perform very poorly, as they are not designed to take advantage of any mixing structure. Finally, we may also mention the bandit scenario where the dependencies are between the arms instead of being within-arm time-dependent (e.g., [17]); this is orthogonal to what we propose to study here. Mixing Processes. Mixing process theory is hardly new. One of the seminal works on the study of mixing processes was done by Bernstein [5], who introduced the well-known block method, central to proving results on mixing processes. In statistical machine learning, one of the first papers on estimators for mixing processes is [23].
More recent works include the contributions of Mohri and Rostamizadeh [14, 15], which address the problem of stability bounds and Rademacher complexity for ϕ- and β-mixing processes; Kulkarni et al. [11] establish the consistency of regularized boosting algorithms learning from β-mixing processes; Steinwart et al. [21] prove the consistency of support vector machines learning from α-mixing processes; and Steinwart and Christmann [20] establish a general oracle inequality for generic regularized learning algorithms and α-mixing observations. As far as we know, this is the first time that mixing processes are studied in a multi-armed bandit framework. Contribution. Our main result states that a strategy based on the improved Upper Confidence Bound algorithm (Improved-UCB, in the sequel) proposed by Auer and Ortner [2] allows us to achieve a controlled regret in the restless mixing scenario. Namely, our algorithm, Remix-UCB (which stands for Restless Mixing UCB), achieves a regret of the form Θ̃(∆_*^{(α−2)/α} log^{1/α} T), where ∆_* encodes the distance between the best arm and the best suboptimal arm, α encodes the rate of decrease of the ϕ coefficients, i.e. ϕ(n) = O(n^{−α}), and Θ̃ is an O-like notation (that neglects logarithmic dependencies, see Section 2.2). It is worth noticing that all the results we give hold for α < 1, i.e. when the dependencies are no longer summable. When the mixing coefficients at hand are all zero, i.e. in the i.i.d. case, the regret of our algorithm naturally reduces to that of the classical Improved-UCB. Remix-UCB relies on the assumption of known (convergence rates of) ϕ-mixing coefficients, which is a classical standpoint adopted by most of the papers studying the behavior of machine learning algorithms in the case of mixing processes (see e.g. [9, 14, 15, 18, 21, 23]). The estimation of the mixing coefficients poses a learning problem on its own (see e.g. [13] for the estimation of β-mixing coefficients) and is beyond the scope of this paper.
Structure of the paper. Section 2 defines our setup: ϕ-mixing processes are recalled, together with a relevant concentration inequality for such processes [10, 15]; the notion of regret we focus on is given. Section 3 is devoted to the presentation of our algorithm, Remix-UCB, and to the statement of our main result regarding its regret. Finally, Section 4 discusses the obtained results.

2 Overview of the Problem

2.1 Concentration of Stationary ϕ-mixing Processes

Let (Ω, F, P) be a probability space. We recall the notions of stationary and ϕ-mixing processes.

Definition 1 (Stationarity). A sequence of random variables X = {X_t}_{t∈Z} is stationary if, for any t, m ≥ 0, s ≥ 0, (X_t, ..., X_{t+m}) and (X_{t+s}, ..., X_{t+m+s}) are identically distributed.

Definition 2 (ϕ-mixing process). Let X = {X_t}_{t∈Z} be a stationary sequence of random variables. For any i, j ∈ Z ∪ {−∞, +∞}, let σ_i^j denote the σ-algebra generated by {X_t : i ≤ t ≤ j}. Then, for any positive n, the ϕ-mixing coefficient ϕ(n) of the stochastic process X is defined as

ϕ(n) = sup_{t, A ∈ σ_{t+n}^{+∞}, B ∈ σ_{−∞}^{t}, P(B) > 0} |P[A|B] − P[A]|.   (1)

X is ϕ-mixing if ϕ(n) → 0. X is algebraically mixing if there exist ϕ_0 > 0 and α > 0 such that ϕ(n) = ϕ_0 n^{−α}.

As we recall later, concentration inequalities are the pivotal tools for devising multi-armed bandit strategies. Hoeffding's inequality [7, 8], for instance, is at the root of a number of UCB-based methods. This inequality is however devoted to characterizing the deviation of a sum of independent variables from its expected value and cannot be used in the framework we are investigating. In the case of stationary ϕ-mixing distributions, there however exists the following concentration inequality, due to [10] and [15].

Theorem 1 ([10, 15]). Let ψ_m : U^m → R be a function defined over a countable space U, and X be a stationary ϕ-mixing process. If ψ_m is ℓ-Lipschitz wrt the Hamming metric for some ℓ > 0, then for all ε > 0,

P_X[|ψ_m(X) − E ψ_m(X)| > ε] ≤ 2 exp(−ε² / (2 m ℓ² Λ_m²)),   (2)

where Λ_m := 1 + 2 Σ_{τ=1}^{m} ϕ(τ) and ψ_m(X) = ψ_m(X_0, ..., X_m). Here, we do not have to use this concentration inequality in its full generality, as we restrict ourselves to the situation where ψ_m is the mean of its arguments, i.e. ψ_m(X_{t_1}, ..., X_{t_m}) := (1/m) Σ_{i=1}^{m} X_{t_i}, which is obviously 1/m-Lipschitz provided that the X_t's have range [0; 1], which will be one of our working assumptions. If, with a slight abuse of notation, Λ_m is now used to denote

Λ_m(t) := 1 + 2 Σ_{i=2}^{m} ϕ(t_i − t_1),   (3)

for an increasing sequence t = (t_i)_{i=1}^{m} of time steps, then the concentration inequality that serves our purpose is given in the next corollary.

Corollary 1 ([10, 15]). Let X be a stationary ϕ-mixing process. The following holds: for all ε > 0 and every m-sequence t = (t_i)_{i=1}^{m} with t_1 < ... < t_m,

P_{{X_t}_{t∈t}} [ |(1/m) Σ_{i=1}^{m} X_{t_i} − E X_1| > ε ] ≤ 2 exp(−m ε² / (2 Λ_m(t)²)).   (4)

(Thanks to the stationarity of {X_t}_{t∈Z} and the linearity of expectation, E Σ_{i=1}^{m} X_{t_i} = m E X_{t_1}.)

Remark 3. According to Kontorovich's paper [10], the function Λ_m should be max_j {1 + 2 Σ_{i=j+1}^{m} ϕ(t_i − t_j)}. However, when the time lag between two consecutive time steps t_i and t_{i+1} is non-decreasing, which will be imposed by the Remix-UCB algorithm (see below), and the mixing coefficients are decreasing, which is a natural assumption simply saying that the amount of dependence between X_t and X_{t′} reduces when |t − t′| increases, then Λ_m reduces to the more compact expression given by (3). Note that under independence, ϕ(τ) = 0 for all τ, Λ_m = 1 and, as a consequence, Equation (4) reduces to Hoeffding's inequality: the precise values of the time instants in t do not impact the value of the bound, and the length m of t is the central parameter that matters. This is in clear contrast with what happens in the dependent setting, where the bound on the deviation of Σ_{i=1}^{m} X_{t_i}/m from its expectation directly depends on the timepoints t_i through Λ_m.
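To make the role of Λ_m concrete, the following sketch (with illustrative values ϕ_0 = 1, α = 0.5 for the algebraic mixing coefficients) compares the bound of Corollary 1 for two spacings of m = 10 timepoints: the more spread-out sequence yields a smaller Λ_m and hence a sharper bound for the same number of pulls.

```python
import math

def phi(n, phi0=1.0, alpha=0.5):
    """Algebraic mixing coefficient phi(n) = phi0 * n**(-alpha);
    phi0 and alpha are illustrative, not values from the paper."""
    return phi0 * n ** (-alpha)

def lambda_m(timepoints, **kw):
    """Lambda_m(t) = 1 + 2 * sum_{i=2}^m phi(t_i - t_1), Eq. (3)."""
    t1 = timepoints[0]
    return 1.0 + 2.0 * sum(phi(ti - t1, **kw) for ti in timepoints[1:])

def deviation_bound(timepoints, eps, **kw):
    """Right-hand side of Corollary 1: 2 exp(-m eps^2 / (2 Lambda_m(t)^2))."""
    m = len(timepoints)
    return 2.0 * math.exp(-m * eps ** 2 / (2.0 * lambda_m(timepoints, **kw) ** 2))
```

This is exactly the effect the waiting arm exploits: spacing out the pulls of an arm trades pull frequency for a smaller Λ_m, i.e. for sharper concentration of the empirical mean.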
For two sequences t = (t_i)_{i=1}^{m} and t′ = (t′_i)_{i=1}^{m} of m timepoints, Σ_{i=1}^{m} X_{t_i}/m may be more sharply concentrated around E X_1 than Σ_{i=1}^{m} X_{t′_i}/m provided Λ_m(t) < Λ_m(t′), which can be a consequence of a more favorable spacing of the points in t than in t′.

2.2 Problem: Minimize the Expected Regret

We may now define the multi-armed bandit problem we consider and the regret we want to control. Restless ϕ-mixing Bandits. We study the problem of sampling from a K-armed ϕ-mixing bandit. In our setting, pulling arm k at time t provides the agent with a realization of the random variable X_t^k, where the family {X_t^k}_{t∈Z} satisfies the following assumptions: (A) for all k, (X_t^k)_{t∈Z} is a stationary ϕ-mixing process with decreasing mixing coefficients ϕ_k, and (B) for all k, X_1^k takes its values in a discrete finite set included in [0; 1] (by stationarity, the same holds for any X_t^k with t ≠ 1). Regret. The regret we want to bound is the classical pseudo-regret which, after T pulls, is given by

R(T) := T µ_* − E Σ_{t=1}^{T} µ_{I_t},   (5)

where µ_k := E X_1^k, µ_* := max_k µ_k, and I_t is the index of the arm selected at time t. We want to devise a strategy capable of selecting, at each time t, the arm I_t so that the obtained regret is minimal. Bottleneck. The setting we assume entails the possibility of long-term dependencies between the rewards output by the arms. Hence, as evoked earlier, in order to choose which arm to pull, the agent is forced to address the exploration/exploitation/independence trade-off, where independence may be partially recovered by taking advantage of the observation that some spacings of timepoints induce sharper concentration of the empirical rewards than others. As emphasized later, targeting good spacings in the bandit framework translates into the idea of ignoring the rewards provided by some pulls when computing the empirical averages: this idea is carried by the concept of a waiting arm, which is formally defined later on.
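The pseudo-regret of Equation (5) for a fixed pull sequence can be sketched in a couple of lines (the expectation over the agent's randomness is left out of this sketch):

```python
def pseudo_regret(mus, pulls):
    """Pseudo-regret of Eq. (5) for a fixed pull sequence (sketch).

    mus: list of arm means mu_k; pulls: sequence of arm indices I_t.
    Returns T * mu_star minus the sum of the means of the pulled arms.
    """
    mu_star = max(mus)
    return len(pulls) * mu_star - sum(mus[k] for k in pulls)
```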
The questions raised by the waiting arm that we address with the Remix-UCB algorithm are a) how often should the waiting arm be pulled so that the concentration of the empirical means is high enough to be relied on (so the usual exploration/exploitation tradeoff can be tackled), and b) from the regret standpoint, how hindering is it to pull the waiting arm?

$O$ and $\tilde{\Theta}$ analysis. In the analysis of Remix-UCB that we provide, just as is the case for most, if not all, analyses that exist for bandit algorithms, we will focus on the order of the regret and we will not be concerned about the precise constants involved in the derived results. We will therefore naturally heavily rely on the usual $O$ notation and on the $\tilde{\Theta}$ notation, which bears the following meaning.

Definition 4 ($\tilde{\Theta}$ notation). For any two functions $f, g$ from $\mathbb{R}$ to $\mathbb{R}$, we say that $f = \tilde{\Theta}(g)$ if there exist $\alpha, \beta > 0$ such that $|f|\log^\alpha|f| \le |g|$ and $|g|\log^\beta|g| \le |f|$.

3 Remix-UCB: a UCB Strategy for Restless Mixing Bandits

This section contains our main contribution: the Remix-UCB algorithm. From now on, we use $a \vee b$ (resp. $a \wedge b$) for the maximum (resp. minimum) of two elements $a$ and $b$. We consider that the processes attached to the arms are algebraically mixing and that for arm $k$ the exponent is $\alpha_k > 0$: there exists $\varphi_{k,0}$ such that $\varphi_k(t) = \varphi_{k,0}t^{-\alpha_k}$—this assumption is not very restrictive, as rates such as $t^{-\alpha_k}$ are appropriate/natural to capture and characterize the decreasing behavior of the convergent sequence $(\varphi_k(t))_t$. Also, we will sometimes say that arm $k$ is faster (resp. slower) than arm $k'$ for $k \neq k'$, to convey the fact that $\alpha_k > \alpha_{k'}$ (resp. $\alpha_k < \alpha_{k'}$). For any $k$ and any increasing sequence $\tau = (\tau(n))_{n=1}^{t}$ of $t$ timepoints, the empirical reward $\hat{\mu}^\tau_k$ of $k$ given $\tau$ is $\hat{\mu}^\tau_k \doteq \frac{1}{t}\sum_{n=1}^{t} X^k_{\tau(n)}$. The subscripted notation $\tau_k = (\tau_k(n))_{1\le n\le t}$ is used to denote the sequence of timepoints at which arm $k$ was selected.
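As a concrete reading of this definition, the empirical reward averages only the rewards observed at the significant pull times $\tau$ (toy reward stream and pull times of our own):

```python
def empirical_reward(rewards, tau):
    """hat{mu}^tau_k = (1/t) * sum_{n=1}^t X_{tau(n)}: average of the rewards
    observed at the (1-indexed) significant pull times in tau."""
    return sum(rewards[t - 1] for t in tau) / len(tau)

# hypothetical reward stream; only times 1, 4 and 9 are significant pulls
rewards = [0.2, 0.9, 0.9, 0.4, 0.9, 0.9, 0.9, 0.9, 0.6]
assert abs(empirical_reward(rewards, [1, 4, 9]) - 0.4) < 1e-12
```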
Finally, we define $\Lambda^{\tau_k}_k$ in a similar way as in (3), the difference with the former notation being the subscript $k$:
$$\Lambda^{\tau_k}_k \doteq 1 + 2\sum_{n=1}^{t} \varphi_k(\tau_k(n) - \tau_k(1)). \qquad (6)$$
We feel it is important to discuss when Improved-UCB may be robust to the mixing process scenario.

3.1 Robustness of Improved-UCB to Restless $\varphi$-Mixing Bandits

We will not recall the Improved-UCB algorithm [2] in its entirety, as it will turn out to be a special case of our Remix-UCB algorithm, but it is instructive to identify the distinctive features that make it a relevant base algorithm for the handling of mixing processes. First, it is essential to keep in mind that Improved-UCB is designed for the i.i.d. case and that it achieves an optimal $O(\log T)$ regret. Second, it is an algorithm that works in successive rounds/epochs, at the end of each of which a number of arms are eliminated because they are identified (with high probability) as being the least promising ones from a regret point of view. More precisely, at each round, the same number of consecutive pulls is planned for each arm: this number is induced by Hoeffding's inequality [8] and devised in such a way that all remaining arms share the same confidence interval for their respective expected gains, the $\mu_k = \mathbb{E}X^k_1$, for $k$ in the set of remaining arms at the current round. From a technical standpoint, this is what makes it possible to draw conclusions on whether an arm is useless (i.e. eliminated) or not. It is enlightening to understand what the favorable and unfavorable setups are for Improved-UCB to keep working when facing restless mixing bandits. The following proposition depicts the favorable case.

Proposition 5. If $\sum_t \varphi_k(t) < +\infty$, $\forall k$, then the classical Improved-UCB run on the restless $\varphi$-mixing bandit preserves its $O(\log T)$ regret.

Proof. Straightforward. Given the assumption on the mixing coefficients, there exists $M > 0$ such that $\max_{k\in\{1,\dots,K\}} \sum_{t\ge 0} \varphi_k(t) < M$.
Therefore, from Theorem 1, for any arm $k$ and any sequence $\tau$ of $|\tau|$ consecutive timepoints, $\mathbb{P}(|\mu_k - \hat{\mu}^\tau_k| > \varepsilon) \le 2\exp\left(-\frac{|\tau|\varepsilon^2}{2(1+2M)^2}\right)$, which is akin to Hoeffding's inequality up to the multiplicative $(1+2M)^2$ constant in the exponential. This, and the lines to prove the $O(\log T)$ regret of Improved-UCB [2], directly give the desired result.

In the general case where $\sum_t \varphi_k(t) < +\infty$ does not hold for every $k$, nothing ensures that Improved-UCB keeps working, the idea of consecutive pulls being the essential culprit. To illustrate the problem, suppose that $\forall k$, $\varphi_k(n) = n^{-1/4}$. Then, after a sequence $\tau = (t_1+1, t_1+2, \ldots, t_1+t)$ of $t$ consecutive time instants where $k$ was selected, simple calculations give that $\Lambda^\tau_k = O(t^{3/4})$ and the concentration inequality from Corollary 1 for $\hat{\mu}^\tau_k$ reads as
$$\mathbb{P}(|\mu_k - \hat{\mu}^\tau_k| > \varepsilon) \le 2\exp\left(-C\varepsilon^2 t^{-1/2}\right) \qquad (7)$$
where $C$ is some strictly positive constant. The quality of the confidence interval that can be derived from this concentration inequality degrades when additional pulls are performed, which counters the usual nature of concentration inequalities and prevents obtaining a reasonable regret for Improved-UCB. This is a direct consequence of the dependency of the $\varphi$-mixing variables. Indeed, if $\varphi(n)$ decreases slowly, taking the average over multiple consecutive pulls may move the estimator away from the mean value of the stationary process. Another way of understanding the difference between the i.i.d. case and the restless mixing case is to look at the sizes of the confidence intervals around the true value of an arm when the time $t$ to the next pull increases. Given Corollary 1, Improved-UCB run in the restless mixing scenario would advocate a pulling strategy based on the lengths $\kappa_k$ of the confidence intervals given by
$$\forall k, \quad \kappa_k(t) \doteq |\tau_k|^{-1/2}\sqrt{2\left(\Lambda^{\tau_k}_k + 2\varphi_k(t - \tau_k(1))\right)^2\log(t)} \qquad (8)$$
where $t$ is the overall time index. This shows that working in the i.i.d.
case or in the mixing case can imply two different behaviors for the lengths of the confidence interval: in the i.i.d. scenario, $\kappa_k$ has the same form as the classical UCB term (as $\varphi_k = 0$ and $\Lambda^{\tau_k}_k = 1$) and is an increasing function of $t$, while in the $\varphi$-mixing scenario the behavior may be non-monotonic, with a decreasing confidence interval up to some point after which the confidence interval becomes increasingly larger. As the purpose of exploration is to tighten the confidence interval as much as possible, the mixing framework points to carefully designed strategies. For instance, when an arm is slow, it is beneficial to wait between two successive pulls of this arm. By alternating the pulls of the different arms, it is possible to wait up to $K$ units of time between two consecutive pulls of the same arm. However, this is not sufficient to recover enough independence between the observed values. For instance, in the case described in (7), after a sequence $\tau = (t_1, t_1+K, \ldots, t_1+tK)$, simple calculations give that $\Lambda^\tau_k = O((Kt)^{3/4})$ and the concentration inequality from Corollary 1 for $\hat{\mu}^\tau_k$ reads as $\mathbb{P}(|\mu_k - \hat{\mu}^\tau_k| > \varepsilon) \le 2\exp\left(-CK^{-3/2}\varepsilon^2 t^{-1/2}\right)$, which entails the same problem. The problem exhibited above is that if the decrease of the $\varphi_k$ is too slow, pulling an arm in the traditional way, with consecutive pulls, and updating the value of the empirical estimator may lower the certainty with which the estimation of the expected gain is performed. To solve this problem and reduce the confidence intervals that are computed for each arm, better independence between

Algorithm 1 Remix-UCB, with parameters $K$, $(\alpha_i)_{i=1,\dots,K}$, $T$, and $G$ defined in (11)
$B_0 \leftarrow \{1,\dots,K\}$; $\alpha \leftarrow 1 \wedge \min_{i\in B_0}\alpha_i$; $\hat{\mu}_i \leftarrow 0$, $n_i \leftarrow 0$ for $i = 1,\dots,K$; $i^* \leftarrow 1$
for $s = 1, \ldots, \lfloor G^{-1}(T) \rfloor$ do
  Select arm: if $|B_s| > 1$, then until total time $T_s = \lceil G(s) \rceil$, pull each arm $i \in B_s$ at the times $\tau_i(\cdot)$ defined in (10). If no arm is ready to be pulled, pull the waiting arm $i^*$ instead.
  Update: 1.
Update the empirical mean $\hat{\mu}_i$ and the number of pulls $n_i$ for each arm $i \in B_s$.
  2. Obtain $B_{s+1}$ by eliminating from $B_s$ each arm $i$ such that
$$\hat{\mu}_i + \sqrt{\frac{2\left(1 + 2\sum_{j=1}^{n_i}\varphi_i(\tau_i(j))\right)^2\log(T2^{-2s})}{n_i}} < \max_{k\in B_s}\left\{\hat{\mu}_k - \sqrt{\frac{2\left(1 + 2\sum_{j=1}^{n_k}\varphi_k(\tau_k(j))\right)^2\log(T2^{-2s})}{n_k}}\right\}$$
  3. Update $\alpha \leftarrow 1 \wedge \min_{i\in B_{s+1}}\alpha_i$ and $i^* \leftarrow \arg\max_{i\in B_{s+1}}\left\{\hat{\mu}_i + \sqrt{\frac{2\left(1 + 2\sum_{j=1}^{n_i}\varphi_i(\tau_i(j))\right)^2\log(T2^{-2s})}{n_i}}\right\}$
end for

the values observed from a given arm is required. This can only be achieved by waiting for time to pass. Since an arm must be pulled at each time $t$, simulating the passing of time may be implemented by pulling an arm but not updating the empirical mean $\hat{\mu}_k$ of this arm with the observed reward. At the same time, it is important to note that even if we do not update the empirical mean of the arm, the resort to the waiting arm may impact the regret. It is therefore crucial to ensure that we pull the best possible arm to limit the resulting regret, whence the arm with the best optimistic value being used as the waiting arm. Note that this arm may change over time. For the rest of the paper, $\tau$ will only refer to significant pulls of an arm, that is, pulls that lead to an update of the empirical value of the arm.

3.2 Algorithm and Regret Bound

We may now introduce Remix-UCB, depicted in Algorithm 1. Like Improved-UCB, Remix-UCB works in epochs and eliminates, at each epoch, the significantly suboptimal arms.

High-Level View. Let $(\theta_s)_{s\in\mathbb{N}}$ be a decreasing sequence of $\mathbb{R}^*_+$ and $(\delta_s)_{s\in\mathbb{N}} \in \mathbb{R}^{\mathbb{N}}_+$. The main idea promoted by Remix-UCB is to divide the available time into epochs $1, \ldots, s_{\max}$ (the outer loop of the algorithm), such that at the end of each epoch $s$, for all the remaining arms $k$ the following holds:
$$\mathbb{P}(\hat{\mu}^{\tau_k}_k \ge \mu_k + \theta_s) \vee \mathbb{P}(\hat{\mu}^{\tau_k}_k \le \mu_k - \theta_s) \le \delta_s,$$
where $\tau_k$ identifies the time instants up to the current time $t$ when arm $k$ was selected. Using (4), this means that, for all $k$, with high probability:
$$|\hat{\mu}^{\tau_k}_k - \mu_k| \le n_k^{-1/2}\sqrt{2(\Lambda^{\tau_k})^2\log(1/\delta_s)}. \qquad (9)$$
Thus, at the end of epoch $s$ we have, with high probability, a uniform control of the uncertainty with which the empirical rewards $\hat{\mu}^{\tau_k}_k$ approximate their corresponding rewards $\mu_k$. Based on this, the algorithm eliminates the arms that appear significantly suboptimal (step 2 of the update of Remix-UCB). Just as in Improved-UCB, the process is re-iterated with parameters $\delta_s$ and $\theta_s$ adjusted as $\delta_s = 1/(T\theta_s^2)$ and $\theta_s = 1/2^s$, where $T$ is the time budget; the modification of the $\delta_s$ and $\theta_s$ values makes it possible to gain additional information, through new pulls, on the quality of the remaining arms, so that arms associated with close-by rewards can be distinguished by the algorithm.

Policy for pulling arms at epoch $s$. The objective of the policy is to obtain a uniform control of the uncertainty/confidence intervals (9) of all the remaining arms. For some arm $k$ and a fixed time budget $T$, such a policy could be obtained as the solution of
$$\min_{\eta_s,\,(t_i)_{i=1}^{\eta_s}} t_{\eta_s} \quad \text{such that} \quad \frac{(\Lambda^{\tau_s})^2}{n_{s-1}+\eta_s} < \varepsilon,$$
where the times of pulls $t_i$ must be increasing and greater than $t_0$, the last element of $\tau_{s-1}$, with $\tau_s = \tau_{s-1} \cup (t_1, \ldots, t_{\eta_s})$, and where $n_{s-1}$ (the number of times this arm has already been pulled), $\varepsilon$, and $\tau_{s-1}$ are given. This conveys our aim to reach the targeted confidence interval as quickly and efficiently as possible. However, this problem does not have a closed-form solution and, even if it could be solved efficiently, we are more interested in assessing whether it is possible to devise relevant sequences of timepoints that induce a controlled regret, even if they do not solve the optimization problem. To this end, we only focus on the best sampling rate of the arms, which is an approximation of the previous minimization problem: for each $k$, we search for sampling schemes of the form $\tau_k(n) = t_n = O(n^\beta)$ for $\beta \ge 1$. For the case where the $\varphi_k$ are not summable ($\alpha_k \le 1$), we have the following result.

Proposition 6. Let $\alpha_k \in (0;1]$ (recall that $\varphi_k(n) = n^{-\alpha_k}$). The optimal sampling rate $\tau_k$ for arm $k$ is $\tau_k(n) = \tilde{\Theta}(n^{1/\alpha_k})$.
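A numerical sketch of the trade-off behind Proposition 6 (our own illustration, not the paper's proof): with $\varphi_k(n) = n^{-\alpha}$ and sampling times $\tau(n) = n^\beta$, the mixing mass $\sum_n \varphi_k(\tau(n))$ that inflates $\Lambda$ grows polynomially when $\beta < 1/\alpha$ but only logarithmically at $\beta = 1/\alpha$:

```python
def mixing_mass(n_pulls, alpha, beta):
    """sum_{n=1}^{N} phi(tau(n)) with phi(n) = n^{-alpha} and tau(n) = n^{beta},
    i.e. sum_n n^{-alpha*beta}: the quantity driving the growth of Lambda."""
    return sum(n ** (-alpha * beta) for n in range(1, n_pulls + 1))

alpha = 0.5
too_frequent = mixing_mass(10_000, alpha, beta=1.0)       # ~ 2*sqrt(N): polynomial
critical = mixing_mass(10_000, alpha, beta=1.0 / alpha)   # ~ log(N): as slow as possible
assert critical < too_frequent / 10
```

Sampling even more scarcely ($\beta > 1/\alpha$) would not reduce this mass much further but would cut the number of significant pulls available before time $T$, which is the other side of the trade-off.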
Proof. The idea of the proof is that if the sampling is too frequent (i.e. $\beta$ close to 1), then the dependency between the values of the arm reduces the information obtained by taking the average. In other words, $\sum_n \varphi_k(\tau_k(n))$ increases too quickly. On the other hand, if the sampling is too scarce (i.e. $\beta$ is very large), the information obtained at each pull is important, but the total number of pulls in a given time $T$ is approximately $T^{1/\beta}$ and thus is too low. The optimal solution to this trade-off is to take $\beta = 1/\alpha_k$, which directly comes from the fact that this is the point where $\sum_n \varphi_k(\tau_k(n))$ becomes logarithmic. The complete proof is available in the supplementary material.

If $\alpha_k < 1$ for all $k$, this result means that the best policy (with a sampling scheme of the form $O(n^\beta)$) should update the empirical means associated with each arm $k$ at a rate $O(n^{1/\alpha_k})$; contrary to the i.i.d. case, it is therefore not relevant to try to update the empirical rewards at each time step. There must therefore be gaps between updates of the means: it is precisely the role of the waiting arm to make these gaps possible. As seen in the depiction of Remix-UCB, when pulled, the waiting arm provides a reward that counts towards the cumulative gains of the agent and helps her control her regret, but that is not used to update any empirical mean. As for a precise pulling strategy to implement given Proposition 6, it must be understood that it is the slowest arm that determines the best uniform control possible, since it is the one which will be selected the least number of times: it is unnecessary to pull the fastest arms more often than the slowest arm. Therefore, if $i_1, \ldots, i_{k_s}$ are the $k_s$ remaining arms at epoch $s$, and $\alpha \doteq 1 \wedge \min_{i\in\{i_1,\ldots,i_{k_s}\}} \alpha_i$, then an arm selection strategy based on the rate of the slowest arm suggests pulling arm $i_m$ and updating $\hat{\mu}^{\tau_{i_m}}_{i_m}$ for the $n$-th time at the time instants
$$\tau_{i_m}(n) = \begin{cases} \left(\tau_{i_1}(n-1) + k_s\right) \vee \lceil n^{1/\alpha} \rceil & \text{if } m = 1 \\ \tau_{i_1}(n) + m - 1 & \text{otherwise} \end{cases} \qquad (10)$$
(i.e.
all arms are pulled at the same $O(n^{1/\alpha})$ frequency) and to pull the waiting arm while waiting.

Time budget per epoch. In the Remix-UCB algorithm, the function $G$ defines the size of the rounds. The definition of $G$ is rather technical: we have $G(s) = \max_{k\in B_s} G_k(s)$ where
$$G_k(s) \doteq \inf\left\{t \in \mathbb{N}_+ : 2(\Lambda^{\tau_k})^2\log(1/\delta_s) \le t\theta_s^2\right\} \qquad (11)$$
where the $\tau_k(n)$ are defined above. In other words, $G_k$ encodes the minimum amount of time necessary to reach the aimed length of confidence interval by following the aforementioned policy. But the most interesting property of $G$ is that $G(s) = \tilde{\Theta}\left((\theta_s^{-2}\log(1/\delta_s))^{1/\alpha}\right)$. This is the key element which will be used in the proof of the regret bound, which can be found in Theorem 2 below.

Putting it all together. At epoch $s$, the Remix-UCB algorithm starts by selecting the best empirical arm and flags it as the waiting arm. It then determines the speed $\alpha$ of the slowest arm, after which it computes a time budget $T_s = G(s)$. Then, until this time horizon is reached, it pulls arms following the policy described above. Finally, after the time budget is reached, the algorithm eliminates the arms whose empirical mean is significantly lower than the best available empirical mean. Note that when all the $\varphi_k$ are summable, we have $\alpha = 1$, and thus the algorithm never pulls the waiting arm: Remix-UCB then mainly differs from Improved-UCB by its strategy of alternate pulls.

The result below provides an upper bound for the regret of the Remix-UCB algorithm:

Theorem 2. For every arm $k$, let $1 \ge \alpha_k > 0$ and $\varphi_k(n) = n^{-\alpha_k}$. Let $\alpha = \min_{k\in\{1,\dots,K\}} \alpha_k$ and $\Delta_* = \min_{k\in\{1,\dots,K\}}\{\Delta_k > 0\}$. If $\alpha \le 1$, the regret of Remix-UCB is bounded in order by
$$\tilde{\Theta}\left(\Delta_*^{(\alpha-2)/\alpha}\log(T)^{1/\alpha}\right). \qquad (12)$$
(Footnote: since $1/\alpha$ encodes the rate of sampling, $\alpha$ cannot be greater than 1.)

Proof. The proof follows the same line as the proof of the upper bound of the regret of the Improved-UCB algorithm.
The important modification lies in the sizes of the blocks, which depend, in the mixing case, on the $\varphi$-mixing coefficients and might grow arbitrarily large, and in the waiting arm, which does not exist in the i.i.d. setting. The dominant term in the regret mentioned in Theorem 2 is related to the pulls of the waiting arm. Indeed, the waiting arm is pulled with an ever increasing frequency, but the quality of the waiting arm tends to increase over time, as the arms with the smallest values are eliminated. The complete proof is available in the supplementary material.

4 Discussion and Particular Cases

We here discuss Theorem 2 and some of its variations for special cases of $\varphi$-mixing processes. First, in the i.i.d. case, the regret of Improved-UCB is upper bounded by $O(\Delta_*^{-1}\log(T))$ [2]. Observe that (12) comes down to this bound when $\alpha$ tends to 1. Also, note that it is an upper bound of the regret in the algebraically mixing case. It reflects the fact that in this particular case, it is possible to ignore the dependency of the mixing process. It also implies that, even if $\alpha < 1$, i.e. even if the dependency cannot be ignored, by properly using the $\varphi$-mixing property of the different stationary processes, it is possible to obtain an upper bound of poly-logarithmic order. Another question is to see what happens when $\alpha_k = 1$, which is an important threshold in our study. Indeed, if $\alpha_k = 1$ the $\varphi_k$ are not summable, but from Proposition 6 we have that $\tau_k(n) \approx O(n)$, i.e. the arms should be sampled as often as possible. Theorem 2 states that the regret is upper bounded in this case by $\tilde{\Theta}(\Delta_*^{-1}\log T)$. However, it is not possible to know whether this bound is comparable to that of the i.i.d. case due to the $\tilde{\Theta}$. Still, from the proof of Theorem 2 we get the following result:

Corollary 2. For every arm $k$, let $1 \ge \alpha_k > 0$ and $\varphi_k(n) = n^{-\alpha_k}$. Let $\alpha = \min_{k\in\{1,\dots,K\}} \alpha_k$.
Then, if $\alpha = 1$, the regret of Algorithm 1 is upper bounded in order by
$$O\left(\Delta_*^{-1}G_\alpha(\log(T))\right) \qquad (13)$$
where $\Delta_* = \min_{k\in\{1,\dots,K\}}\{\Delta_k > 0\}$ and $G_\alpha$ is the solution of $G_\alpha^{-1}(x) = x^\alpha/(\log(x))^2$.

Although we do not have an explicit formula for the regret in the case $\alpha = 1$, it is interesting to note that (13) is strictly negligible with respect to (12) for all $\alpha < 1$, but strictly dominates $O(\Delta_*^{-1}\log(T))$. This comes from the fact that, while in the case $\alpha = 1$ the waiting arm is no longer used, the time budget necessary to complete step $s$ is still higher than in the i.i.d. case. When $\varphi(n)$ decreases at a logarithmic speed ($\varphi(n) \approx 1/\log(n)^\alpha$ for some $\alpha > 0$), it is still possible to apply the same reasoning as the one developed in this paper. But in this case, Remix-UCB will only achieve a regret of $\tilde{\Theta}\left(\exp\left((T/\Delta_*)^{1/\alpha}\right)\right)$, which is no longer logarithmic in $T$. In other words, if the $\varphi$-mixing coefficients decrease too slowly, the information given by the concentration inequality in Theorem 1 is not sufficient to deduce interesting information about the mean value of the arms. In this case, the successive values of the $\varphi$-mixing processes are too dependent, and the randomness in the sequence of values is almost negligible; an adversarial bandit algorithm such as Exp4 [4] may give better results than Remix-UCB.

5 Conclusion

We have studied an extension of the multi-armed bandit problem to the stationary $\varphi$-mixing framework in the restless case, providing a functional algorithm and an upper bound on the regret in a general framework. Future work might include the study of a lower bound for the regret in the mixing process case: our first findings on the issue are that the analysis of the worst-case scenario in the mixing framework bears significant challenges. Another interesting point would be the study of the more difficult case of $\beta$-mixing processes.
A rather different, but very interesting question that we may address in the future is the possibility of exploiting a possible structure of the correlation between rewards over time. For instance, in the case where the correlation of an arm with the close past is much higher than the correlation with the distant past, it might be interesting to see whether the analysis done in [16] can be extended to exploit this correlation structure.

Acknowledgments. This work is partially supported by the ANR-funded project GRETA – Greediness: theory and algorithms (ANR-12-BS02-004-01) and the ND project.

References
[1] Audibert JY, Bubeck S (2009) Minimax policies for adversarial and stochastic bandits. In: Annual Conference on Learning Theory
[2] Auer P, Ortner R (2010) UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica 61:55–65
[3] Auer P, Cesa-Bianchi N, Fischer P (2002) Finite-time analysis of the multi-armed bandit problem. Machine Learning Journal 47(2–3):235–256
[4] Auer P, Cesa-Bianchi N, Freund Y, Schapire RE (2002) The nonstochastic multiarmed bandit problem. SIAM Journal on Computing 32(1):48–77
[5] Bernstein S (1927) Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Mathematische Annalen 97(1):1–59
[6] Bubeck S, Cesa-Bianchi N (2012) Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Foundations and Trends in Machine Learning, vol 5. NOW
[7] Hoeffding W (1948) A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics 19(3):293–325
[8] Hoeffding W (1963) Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 58(301):13–30
[9] Karandikar RL, Vidyasagar M (2002) Rates of uniform convergence of empirical means with mixing processes.
Statistics & Probability Letters 58(3):297–307
[10] Kontorovich L, Ramanan K (2008) Concentration inequalities for dependent random variables via the martingale method. The Annals of Probability 36(6):2126–2158
[11] Kulkarni S, Lozano A, Schapire RE (2005) Convergence and consistency of regularized boosting algorithms with stationary β-mixing observations. In: Advances in Neural Information Processing Systems, pp 819–826
[12] Lai TL, Robbins H (1985) Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6:4–22
[13] McDonald D, Shalizi C, Schervish M (2011) Estimating beta-mixing coefficients. arXiv preprint arXiv:1103.0941
[14] Mohri M, Rostamizadeh A (2009) Rademacher complexity bounds for non-i.i.d. processes. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in Neural Information Processing Systems 21, pp 1097–1104
[15] Mohri M, Rostamizadeh A (2010) Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research 11:789–814
[16] Ortner R, Ryabko D, Auer P, Munos R (2012) Regret bounds for restless Markov bandits. In: Proceedings of the International Conference on Algorithmic Learning Theory, pp 214–228
[17] Pandey S, Chakrabarti D, Agarwal D (2007) Multi-armed bandit problems with dependent arms. In: Proceedings of the 24th International Conference on Machine Learning, ACM, pp 721–728
[18] Ralaivola L, Szafranski M, Stempfel G (2010) Chromatic PAC-Bayes bounds for non-IID data: Applications to ranking and stationary β-mixing processes. The Journal of Machine Learning Research 11:1927–1956
[19] Seldin Y, Slivkins A (2014) One practical algorithm for both stochastic and adversarial bandits. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp 1287–1295
[20] Steinwart I, Christmann A (2009) Fast learning from non-iid observations.
In: Advances in Neural Information Processing Systems, pp 1768–1776
[21] Steinwart I, Hush D, Scovel C (2009) Learning from dependent observations. Journal of Multivariate Analysis 100(1):175–194
[22] Tekin C, Liu M (2012) Online learning of rested and restless bandits. IEEE Transactions on Information Theory 58(8):5588–5611
[23] Yu B (1994) Rates of convergence for empirical processes of stationary mixing sequences. Annals of Probability 22(1):94–116
2015
The Consistency of Common Neighbors for Link Prediction in Stochastic Blockmodels

Purnamrita Sarkar, Department of Statistics, University of Texas at Austin, purnamritas@austin.utexas.edu
Deepayan Chakrabarti, IROM, McCombs School of Business, University of Texas at Austin, deepay@utexas.edu
Peter Bickel, Department of Statistics, University of California, Berkeley, bickel@stat.berkeley.edu

Abstract

Link prediction and clustering are key problems for network-structured data. While spectral clustering has strong theoretical guarantees under the popular stochastic blockmodel formulation of networks, it can be expensive for large graphs. On the other hand, the heuristic of predicting links to nodes that share the most common neighbors with the query node is much faster, and works very well in practice. We show theoretically that the common neighbors heuristic can extract clusters with high probability when the graph is dense enough, and can do so even in sparser graphs with the addition of a "cleaning" step. Empirical results on simulated and real-world data support our conclusions.

1 Introduction

Networks are the simplest representation of relationships between entities, and as such have attracted significant attention recently. Their applicability ranges from social networks such as Facebook, to collaboration networks of researchers, citation networks of papers, trust networks such as Epinions, and so on. Common applications on such data include ranking, recommendation, and user segmentation, which have seen wide use in industry. Most of these applications can be framed in terms of two problems: (a) link prediction, where the goal is to find a few similar nodes to a given query node, and (b) clustering, where we want to find groups of similar individuals, either around a given seed node or a full partitioning of all nodes in the network.
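The common-neighbors ranking mentioned in the abstract can be sketched as follows (a toy graph of our own; the paper itself analyzes this heuristic under stochastic blockmodels):

```python
def common_neighbor_scores(adj, q):
    """Rank candidate nodes j (not yet linked to q) by |CN(q, j)|,
    where CN(q, j) = {u : q ~ u ~ j}. adj maps a node to its neighbor set."""
    scores = {}
    for u in adj[q]:                 # u is a neighbor of q
        for j in adj[u]:             # j shares the neighbor u with q
            if j != q and j not in adj[q]:
                scores[j] = scores.get(j, 0) + 1
    return scores

# hypothetical undirected toy graph
adj = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2, 4},
    4: {2, 3, 5},
    5: {4},
}
# nodes 2 and 3 are already linked to 1; node 4 shares two neighbors with it
assert common_neighbor_scores(adj, q=1) == {4: 2}
```

Note that this is essentially a self-join on the edge list, which is why it maps so naturally onto databases and Map-Reduce systems.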
An appealing model of networks is the stochastic blockmodel, which posits the existence of a latent cluster for each node, with link probabilities between nodes being simply functions of their clusters. Inference of the latent clusters allows one to solve both the link prediction problem and the clustering problem (predict all nodes in the query node's cluster). Strong theoretical and empirical results have been achieved by spectral clustering, which uses the singular value decomposition of the network followed by a clustering step on the eigenvectors to determine the latent clusters. However, singular value decomposition can be expensive, particularly for (a) large graphs, when (b) many eigenvectors are desired. Unfortunately, both of these are common requirements. Instead, many fast heuristic methods are often used, and are empirically observed to yield good results [8]. One particularly common and effective method is to predict links to nodes that share many "common neighbors" with the query node $q$, i.e., to rank nodes by $|CN(q,i)|$, where $CN(q,i) = \{u \mid q \sim u \sim i\}$ ($i \sim j$ represents an edge between $i$ and $j$). The intuition is that $q$ probably has many links with others in its cluster, and hence probably also shares many common friends with others in its cluster. Counting common neighbors is particularly fast (it is a Join operation supported by all databases and Map-Reduce systems). In this paper, we study the theoretical properties of the common neighbors heuristic. Our contributions are the following: (a) We present, to our knowledge, the first theoretical analysis of common neighbors for the stochastic blockmodel. (b) We demarcate two regimes, which we call semi-dense and semi-sparse, under which common neighbors can be successfully used for both link prediction and clustering.
(c) In particular, in the semi-dense regime, the number of common neighbors between the query node q and another node within its cluster is significantly higher than that with a node outside its cluster. Hence, a simple threshold on the number of common neighbors suffices for both link prediction and clustering. (d) However, in the semi-sparse regime, there are too few common neighbors with any node, and hence the heuristic does not work. However, we show that with a simple additional “cleaning” step, we regain the theoretical properties shown for the semi-dense case. (e) We empirically demonstrate the effectiveness of counting common neighbors followed by the “cleaning” post-process on a variety of simulated and real-world datasets. 2 Related Work Link prediction has recently attracted a lot of attention, because of its relevance to important practical problems like recommendation systems, predicting future connections in friendship networks, better understanding of evolution of complex networks, study of missing or partial information in networks, etc [9, 8]. Algorithms for link prediction fall into two main groups: similarity-based, and model-based. Similarity-based methods: These methods use similarity measures based on network topology for link prediction. Some methods look at nodes two hops away from the query node: counting common neighbors, the Jaccard index, the Adamic-Adar score [1] etc. More complex methods include nodes farther away, such as the Katz score [7], and methods based on random walks [16, 2]. These are often intuitive, easily implemented, and fast, but they typically lack theoretical guarantees. Model-based methods: The second approach estimates parametric models for predicting links. Many popular network models fall in the latent variable model category [12, 3]. These models assign n latent random variables Z := (Z1, Z2, . . . , Zn) to n nodes in a network. These variables take values in a general space Z. 
The probability of linkage between two nodes is specified via a symmetric map $h : \mathcal{Z} \times \mathcal{Z} \to [0,1]$. These $Z_i$'s can be i.i.d. Uniform(0,1) [3], or positions in some $d$-dimensional latent space [12]. In [5] a mixture of multivariate Gaussian distributions is used, each for a separate cluster. A Stochastic Blockmodel [6] is a special class of these models, where $Z_i$ is a binary length-$k$ vector encoding membership of a node in a cluster. In a well-known special case (the planted partition model), all nodes in the same cluster connect to each other with probability $\alpha$, whereas all pairs in different clusters connect with probability $\gamma$. In fact, under broad parameter regimes, the blockmodel approximation of networks has recently been shown to be analogous to the use of histograms as non-parametric summaries of an unknown probability distribution [11]. Varying the number of bins or the bandwidth corresponds to varying the number or size of communities. Thus blockmodels can be used to approximate more complex models (under broad smoothness conditions) if the number of blocks is allowed to increase with $n$. Empirical results: As the models become more complex, they also become computationally demanding. It has been commonly observed that simple and easily computable measures like common neighbors often have competitive performance with more complex methods. This behavior has been empirically established across a variety of networks, starting from co-authorship networks [8] to router-level internet connections, protein-protein interaction networks and the electrical power grid network [9]. Theoretical results: Spectral clustering has been shown to asymptotically recover cluster memberships for variations of Stochastic Blockmodels [10, 4, 13]. However, apart from [15], there is little understanding of why simple methods such as common neighbors perform so well empirically.
Given their empirical success and computational tractability, the common neighbors heuristic is widely applied for large networks. Understanding the reasons for the accuracy of common neighbors under the popular stochastic blockmodel setting is the goal of our work.

3 Proposed Work

Many link prediction methods ultimately make two assumptions: (a) each node belongs to a latent "cluster", where nodes in the same cluster have similar behavior; and (b) each node is very likely to connect to others in its cluster, so link prediction is equivalent to finding other nodes in the cluster. These assumptions can be relaxed: instead of belonging to the same cluster, nodes could have "topic distributions", with links being more likely between pairs of nodes with similar topical interests. However, we will focus on the assumptions stated above, since they are clean and the relaxations appear to be fundamentally similar.

Model. Specifically, consider a stochastic blockmodel where each node $i$ belongs to an unknown cluster $c_i \in \{C_1, \ldots, C_K\}$. We assume that the number of clusters $K$ is fixed as the number of nodes $n$ increases. We also assume that each cluster contains the same fraction $\pi = 1/K$ of the nodes, i.e. $\pi n$ members, though this can be relaxed easily. The probability $P(i \sim j)$ of a link between nodes $i$ and $j$ ($i \neq j$) depends only on the clusters of $i$ and $j$:
$$P(i \sim j) = B_{c_i,c_j} \triangleq \alpha\,\mathbb{1}\{c_i = c_j\} + \gamma\,\mathbb{1}\{c_i \neq c_j\}$$
for some $\alpha > \gamma > 0$; in other words, the probability of a link is $\alpha$ between nodes in the same cluster, and $\gamma$ otherwise. By definition, $P(i \sim i) = 0$. If the nodes were arranged so that all nodes in a cluster are contiguous, the corresponding adjacency matrix, when plotted, attains a block-like structure, with the diagonal blocks (corresponding to links within a cluster) being denser than the off-diagonal blocks (since $\alpha > \gamma$). Under these assumptions, we ask the following two questions:

Problem 1 (Link Prediction and Recommendation). Given node $i$, how can we identify at least a constant number of nodes from $c_i$?
Problem 2 (Local Cluster Detection). Given node i, how can we identify all nodes in ci? Problem 1 can be considered as the problem of finding good recommendations for a given node i. Here, the goal is to find a few good nodes that i could connect to (e.g., recommending a few possible friends on Facebook, or a few movies to watch next on Netflix). Since within-cluster links have higher probability than across-cluster links (α > γ), predicting nodes from ci gives the optimal answer. Crucially, it is unnecessary to find all good nodes. In contrast, Problem 2 requires us to find everyone in the given node’s cluster. This is the problem of detecting the entire cluster corresponding to a given node. Note that Problem 2 is clearly harder than Problem 1. We next present a summary of our results and the underlying intuition before delving into the details. 3.1 Intuition and Result Summary Current approaches. Standard approaches to inference for the stochastic blockmodel attempt to solve an even harder problem: Problem 3 (Full Cluster Detection). How can we identify the latent clusters ci for all i? A popular solution is via spectral clustering, involving two steps: (a) computing the top-K eigenvectors of the graph Laplacian, and (b) clustering the projections of each node on the corresponding eigenspace via an algorithm like k-means [13]. A slight variation of this has been shown to work as long as (α − γ)/√α = Ω(log n/√n) and the average degree grows faster than poly-logarithmic powers of n [10]. However, (a) spectral clustering solves a harder problem than Problems 1 and 2, and (b) eigen-decompositions can be expensive, particularly for very large graphs. Our claim is that a simpler operation, counting common neighbors between nodes, can yield results that are almost as good in a broad parameter regime. Common neighbors.
Given a node i, link prediction via common neighbors follows a simple prescription: predict a link to node j such that i and j have the maximum number |CN(i, j)| of shared friends CN(i, j) = {u | i ∼ u ∼ j}. The usefulness of common neighbors has been observed in practice [8] and justified theoretically for the latent distance model [15]. However, its properties under the stochastic blockmodel remained unknown. Intuitively, we would expect a pair of nodes i and j from the same cluster to have many common neighbors u from the same cluster, since both the links i ∼ u and u ∼ j occur with probability α, whereas for ci ≠ cj, at least one of the edges i ∼ u and u ∼ j must have the lower probability γ:

P(u ∈ CN(i, j) | ci = cj) = α²·P(cu = ci | ci = cj) + γ²·P(cu ≠ ci | ci = cj) = πα² + (1 − π)γ²

P(u ∈ CN(i, j) | ci ≠ cj) = αγ·P(cu = ci or cu = cj | ci ≠ cj) + γ²·P(cu ≠ ci, cu ≠ cj | ci ≠ cj) = 2παγ + (1 − 2π)γ² = P(u ∈ CN(i, j) | ci = cj) − π(α − γ)² ≤ P(u ∈ CN(i, j) | ci = cj)

Thus the expected number of common neighbors E[|CN(i, j)|] is higher when ci = cj. If we can show that the random variable |CN(i, j)| concentrates around its expectation, node pairs with the most common neighbors would belong to the same cluster. Thus, common neighbors would offer a good solution to Problem 1. We show conditions under which this is indeed the case. There are three key points regarding our method: (a) handling dependencies between common neighbor counts, (b) defining the graph density regime under which common neighbors is consistent, and (c) proposing a variant of common neighbors which significantly broadens this region of consistency. Dependence. CN(i, j) and CN(i, j′) are dependent; hence, distinguishing between within-group and outside-group nodes can be complicated even if each |CN(i, j)| concentrates around its expectation. We handle this via a careful conditioning step. Dense versus sparse graphs.
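Since |CN(i, j)| counts length-2 paths between i and j, it is exactly the (i, j) entry of the squared adjacency matrix. A minimal sketch (the helper name is ours):

```python
import numpy as np

def common_neighbor_counts(A):
    """All-pairs common neighbor counts for a 0/1 adjacency matrix A.

    Entry (i, j) of A @ A counts nodes u with i ~ u ~ j. The diagonal is
    zeroed because a node is never a link-prediction candidate for itself.
    """
    C = A @ A
    np.fill_diagonal(C, 0)
    return C
```

For a single query node q, only row q of this matrix is needed, which costs one matrix-vector product instead of a full eigendecomposition.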
In general, the parameters α and γ can be functions of n, and we can try to characterize parameter settings when common neighbors consistently returns nodes from the same cluster as the input node. We show that when the graph is sufficiently “dense” (average degree growing faster than √(n log n)), common neighbors is powerful enough to answer Problem 2. Also, (α − γ)/α can go to zero at a suitable rate. On the other hand, the expected number of common neighbors between nodes tends to zero for sparser graphs, irrespective of whether the nodes are in the same cluster or not. Further, the standard deviation is of a higher order than the expectation, so there is no concentration. In this case, counting common neighbors fails, even for Problem 1. A variant with better consistency properties. However, we show that the addition of an extra post-processing step (henceforth, the “cleaning” step) still enables common neighbors to identify nodes from the given node’s own cluster, while reducing the number of off-cluster nodes to zero with probability tending to one as n → ∞. This requires a stronger separation condition between α and γ. However, such “strong consistency” is only possible when the average degree grows faster than (n log n)^(1/3). Thus, the cleaning step extends the consistency of common neighbors beyond the O(1/√n) range. 4 Main Results We first split the edge set of the complete graph on n nodes into two sets: K1 and its complement K2 (independent of the given graph G). We compute common neighbors on G1 = G ∩ K1 and perform a “cleaning” process on G2 = G ∩ K2. The adjacency matrices of G1 and G2 are denoted by A1 and A2. We will fix a reference node q, which belongs to class C1 without loss of generality (recall that there are K clusters C1 . . . CK, each of size nπ). Let Xi (i ≠ q) denote the number of common neighbors between q and i.
Algorithm 1 computes the set S = {i : Xi ≥ tn} of nodes that have at least tn common neighbors with q on A1, whereas Algorithm 2 does a further degree thresholding on A2 to refine S into S1.

Algorithm 1 Common neighbors screening algorithm
1: procedure Scan(A1, q, tn)
2:   for 1 ≤ i ≤ n, Xi ← A1²(q, i)
3:   Xq ← 0
4:   S ← {i : Xi ≥ tn}
5:   return S

Algorithm 2 Post Selection Cleaning algorithm
1: procedure Clean(S, A2, q, sn)
2:   S1 ← {i : Σ_{j∈S} A2(i, j) ≥ sn}
3:   return S1

To analyze the algorithms, we must specify conditions on graph densities. Recall that α and γ represent within-cluster and across-cluster link probabilities. We assume that α/γ is constant while α → 0, γ → 0; equivalently, assume that both α and γ are constant multiples of ρ, where ρ → 0. The analysis of graphs has typically been divided into two regimes. The dense regime consists of graphs with ρ = Θ(1), so that the expected degree nρ is a fraction of n as n grows. In the sparse regime, nρ = O(1), so degree is roughly constant. Our work explores a finer gradation, which we call semi-dense and semi-sparse, defined next. Definition 4.1 (Semi-dense graph). A sequence of graphs is called semi-dense if nρ²/log n → ∞ as n → ∞. Definition 4.2 (Semi-sparse graph). A sequence of graphs is called semi-sparse if nρ² → 0 but n^(2/3)ρ/log n → ∞ as n → ∞. Our first result is that common neighbors is enough to solve not only the link-prediction problem (Problem 1) but also the local clustering problem (Problem 2) in the semi-dense case. This is because even though both nodes within and outside the query node’s cluster have a growing number of common neighbors with q, there is a clear distinction in the expected number of common neighbors between the two classes. Also, since the standard deviation is of a smaller order than the expectation, the random variables concentrate. Thus, we can pick a threshold tn such that SCAN(A1, q, tn) yields just the nodes in the same cluster as q with high probability.
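The two thresholding procedures can be sketched in a few lines of numpy (a rough rendering of Algorithms 1 and 2; the argument q of Clean is unused in its body and so is dropped here):

```python
import numpy as np

def scan(A1, q, t_n):
    """Algorithm 1: nodes with at least t_n common neighbors with q on A1."""
    X = (A1 @ A1)[q].copy()  # X_i = A1^2(q, i) = |CN(q, i)|
    X[q] = 0                 # the query node never selects itself
    return set(np.flatnonzero(X >= t_n))

def clean(S, A2, s_n):
    """Algorithm 2: keep nodes with at least s_n edges into S on A2."""
    cols = sorted(S)
    deg_into_S = A2[:, cols].sum(axis=1)
    return set(np.flatnonzero(deg_into_S >= s_n))
```

On a graph split as in the text, one would call `clean(scan(A1, q, t_n), A2, s_n)`; the experiments in Section 5 reuse the same adjacency matrix for both steps.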
Note that the cleaning step (Algorithm 2) is not necessary in this case. Theorem 4.1 (Algorithm 1 solves Problem 2 in semi-dense graphs). Let tn = n(π(α + γ)²/2 + (1 − 2π)γ²). Let S be the set of nodes returned by SCAN(A1, q, tn). Let nw and no denote the number of nodes in S ∩ C1 and S \ C1 respectively. If the graph is semi-dense, and if (α − γ)/α ≥ (2/√π)·(log n/(nα²))^(1/4), then P(nw = nπ) → 1 and P(no = 0) → 1. Proof Sketch. We only sketch the proof here, deferring details to the supplementary material. Let d_qa = Σ_{i∈Ca} A1(q, i) be the number of links from the query node q to nodes in cluster Ca. Let dq = (d_q1, . . . , d_qK) and d = Σ_a d_qa. We first show that

P(dq ∈ Good) ≜ P(d_q1 ∈ nπα(1 ± ψn) and d_qa ∈ nπγ(1 ± ψn) ∀a ≠ 1) ≥ 1 − K/n²,  (1)

ψn ≜ √((6 log n)/(nπγ)) = √(√(log n/n) · Θ(√(log n/(nρ²)))) → 0.  (2)

Conditioned on dq, Xi is the sum of K independent Binomial(d_qa, B_1a) random variables representing the number of common neighbors between q and i via nodes in each of the K clusters: E[Xi | dq, i ∈ Ca] = d_qa·α + (d − d_qa)·γ. We have:

η1 ≜ E[Xi | dq ∈ Good, i ∈ C1] ≥ n(πα² + (1 − π)γ²)(1 − ψn) ≜ ℓn(1 − ψn)
ηa ≜ E[Xi | dq ∈ Good, i ∈ Ca, a ≠ 1] ≤ n(2παγ + (1 − 2π)γ²)(1 + ψn) ≜ un(1 + ψn)

Note that tn = (ℓn + un)/2, un ≤ tn ≤ ℓn, and ℓn − un = nπ(α − γ)² ≥ 4 log n·√(nα²/log n) → ∞, where we applied the condition on (α − γ)/α noted in the theorem statement. We show:

P(Xi ≤ tn | dq ∈ Good, i ∈ C1) ≤ n^(−4/3+o(1))
P(Xi ≥ tn | dq ∈ Good, i ∈ Ca, a ≠ 1) ≤ n^(−4/3+o(1))

Conditioned on dq, both nw and no are sums of conditionally independent and identically distributed Bernoullis. Hence

P(nw = nπ) ≥ P(dq ∈ Good)·P(nw = nπ | dq ∈ Good) ≥ (1 − K/n²)·(1 − nπ·n^(−4/3)) → 1
P(no = 0) ≥ P(dq ∈ Good)·P(no = 0 | dq ∈ Good) ≥ 1 − Θ(n^(−1/3)) → 1

There are two major differences between the semi-sparse and semi-dense cases. First, in the semi-sparse case, both expectations η1 and ηa are of the order O(nρ²), which tends to zero. Second, standard deviations of the number of common neighbors are of a larger order than the expectations.
Together, this means that the numbers of common neighbors to within-cluster and outside-cluster nodes can no longer be separated; hence, Algorithm 1 by itself cannot work. However, after cleaning, the entire cluster of the query node q can still be recovered. Theorem 4.2 (Algorithm 1 followed by Algorithm 2 solves Problem 2 in semi-sparse graphs). Let tn = 1 and sn = n²(πα + (1 − π)γ)²(α + γ)/2. Let S = Scan(A1, q, tn) and S1 = Clean(S, A2, q, sn). Let n_w^(c) (resp. n_o^(c)) denote the number of nodes in S1 ∩ C1 (resp. S1 \ C1). If the graph is semi-sparse, and πα ≥ 3(1 − π)γ, then P(n_w^(c) = nπ) → 1 and P(n_o^(c) = 0) → 1. Proof Sketch. We only sketch the proof here, with details deferred to the supplementary material. The degree bounds of Eq. 1 and the expressions for E[Xi | dq ∈ Good] hold even in the semi-sparse case. We can also bound the variances of Xi (which are sums of conditionally independent Bernoullis): var[Xi | dq ∈ Good, i ∈ C1] ≤ E[Xi | dq ∈ Good, i ∈ C1] = η1. Since the expected number of common neighbors vanishes and the standard deviation is an order larger than the expectation, there is no hope for concentration; however, there are slight differences in the probability of having at least one common neighbor. First, by an application of the Paley-Zygmund inequality, we find:

p1 ≜ P(Xi ≥ 1 | dq ∈ Good, i ∈ C1) ≥ E[Xi | dq ∈ Good, i ∈ C1]² / (var(Xi | dq ∈ Good, i ∈ C1) + E[Xi | dq ∈ Good, i ∈ C1]²) ≥ η1²/(η1 + η1²) ≥ η1(1 − η1) ≥ ℓn(1 − ψn)(1 − η1), since η1 → 0.

For a > 1, Markov’s inequality gives:

pa ≜ P(Xi ≥ 1 | dq ∈ Good, i ∈ Ca, a ≠ 1) ≤ E[Xi | dq ∈ Good, i ∈ Ca, a ≠ 1] = ηa

Even though pa → 0, nπpa = Θ(n²ρ²) → ∞, so we can use concentration inequalities like the Chernoff bound again to bound nw and no:

P(nw ≥ nπp1(1 − √(6 log n/(nπp1)))) ≥ 1 − n^(−4/3)
P(no ≤ n(1 − π)pa(1 + √(6 log n/(n(1 − π)pa)))) ≥ 1 − n^(−4/3)

Unlike the denser regime, nw and no can be of the same order here. Hence, the candidate set S returned by thresholding the common neighbor counts has a non-vanishing fraction of nodes from outside q’s community.
However, this fraction is relatively small, which is what we exploit in the cleaning step. Let θw and θo denote the expected number of edges in A2 from a within-cluster node and an outside-cluster node, respectively, to S. The separation condition in the theorem statement gives θw − θo ≥ 4√(θw log n). Setting the degree threshold sn = (θw + θo)/2, we bound the probability of mistakes in the cleaning step:

P(∃ i ∈ C1 s.t. Σ_{j∈S} A2(i, j) ≤ sn | dq ∈ Good) ≤ n^(−1/3+o(1))
P(∃ i ∉ C1 s.t. Σ_{j∈S} A2(i, j) ≥ sn | dq ∈ Good) ≤ n^(−1/3+o(1))

Removing the conditioning on dq ∈ Good (as in Theorem 4.1) yields the desired result. 5 Experiments We present our experimental results in two parts. First, we use simulations to support our theoretical claims. Next we present link prediction accuracies on real-world collaborative networks to show that common neighbors indeed performs close to gold-standard algorithms like spectral clustering and the Katz score. Implementation details: Recall that our algorithms are based on thresholding. When there is a large gap between the common neighbor counts of node q with nodes inside versus outside its cluster (e.g., in the semi-dense regime), thresholding is equivalent to using the k-means algorithm with k = 2 to find S in Algorithm 1. The same holds for finding S1 in Algorithm 2. When the number of nodes with more than two common neighbors is less than ten, we define the set S by finding all neighbors with at least one common neighbor (as in the semi-sparse regime). On the other hand, since the cleaning step works only when S is sufficiently large (so that degrees concentrate), we do not perform any cleaning when |S| < 30. While we used the split sample graph A2 in the cleaning step for ease of analysis, we did the cleaning using the same network in the experiments. Experimental setup for simulations: We use a stochastic blockmodel of 2000 nodes split into 4 equal-sized clusters.
For each value of (α, γ) we pick 50 query nodes at random, and calculate the precision and recall of the result against nodes from the query node’s cluster (for any subset S and true cluster C, precision = |S ∩ C|/|S| and recall = |S ∩ C|/|C|). We report mean precision and recall over 50 randomly generated graph instances. Accuracy on simulated data: Figure 1 shows the precision and recall as degree grows, with the parameters (α, γ) satisfying the condition πα ≥ 3(1 − π)γ of Thm. 4.2. We see that cleaning helps both precision and recall, particularly in the medium-degree range (the semi-sparse regime). As a reference, we also plot the precision of spectral clustering, when it was given the correct number of clusters (K = 4). Above an average degree of 10, spectral clustering gives perfect precision, whereas common neighbors can identify a large fraction of the true cluster once the average degree is above 25. On the other hand, for average degree less than seven, spectral clustering performs poorly, whereas the precision of common neighbors is remarkably higher. Precision is relatively higher than recall for a broad degree regime, and this explains why common neighbors are a popular choice for link prediction.

Figure 1: Recall and precision versus average degree: When degree is very small, none of the methods work well. In the medium-degree range (semi-sparse regime), we see that common neighbors gets increasingly better precision and recall, and cleaning helps. With high enough degrees (semi-dense regime), just common neighbors is sufficient and gets excellent accuracy.

Table 1: AUC scores for co-authorship networks

Dataset  | n    | Mean degree | Time-steps | CN  | CN-clean | SPEC | Katz | Random
HepTH    | 5969 | 4           | 6          | .70 | .74      | .82  | .82  | .49
Citeseer | 4520 | 5           | 11         | .88 | .89      | .89  | .95  | .52
NIPS     | 1222 | 3.95        | 9          | .63 | .69      | .68  | .78  | .47

On a side note, it is not surprising that in a very sparse graph common neighbors cannot identify the whole cluster, since not everyone can be reached in two hops.
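The precision and recall computation used to score a returned set is simple enough to state directly (a sketch; the function name is ours):

```python
def precision_recall(S, C):
    """Score a returned node set S against the true cluster C.

    precision = |S ∩ C| / |S| and recall = |S ∩ C| / |C|, as defined in
    the simulation setup; precision is 0 by convention when S is empty.
    """
    S, C = set(S), set(C)
    hit = len(S & C)
    precision = hit / len(S) if S else 0.0
    recall = hit / len(C)
    return precision, recall
```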
Accuracy on real-world data: We used publicly available co-authorship datasets over time, where nodes represent authors and an edge represents a collaboration between two authors. In particular, we used subgraphs of the High Energy Physics (HepTH) co-authorship dataset (6 timesteps), the NIPS dataset (9 timesteps) and the Citeseer dataset (11 timesteps). We obtain the training graph by merging the first T−2 networks, use the (T−1)-th step for cross-validation, and use the last timestep as the test graph. The number of nodes and average degrees are reported in Table 1. We merged 1-2 years of papers to create one timestep (so that the median degree of the test graph is at least 1). We compare our algorithms (CN and CN-clean) with the Katz score, which is used widely in link prediction [8], and spectral clustering of the network. Spectral clustering is carried out on the giant component of the network. Furthermore, we cross-validate the number of clusters using the held-out graph. Our setup is very similar to link prediction experiments in the related literature [14]. Since these datasets are unlabeled, we cannot calculate precision or recall as before. Instead, for any score or affinity measure, we propose to perform link prediction experiments as follows. For a randomly picked node we calculate the score from the node to everyone else. We compute the AUC score of this vector against the edges in the test graph. We report the average AUC for 100 randomly picked nodes. Table 1 shows that even in sparse regimes common neighbors performs similarly to benchmark algorithms. 6 Conclusions Counting common neighbors is a particularly useful heuristic: it is fast and also works well empirically. We prove the effectiveness of common neighbors for link prediction as well as local clustering around a query node, under the stochastic blockmodel setting.
In particular, we show the existence of a semi-dense regime where common neighbors yields the right cluster w.h.p., and a semi-sparse regime where an additional “cleaning” step is required. Experiments with simulated as well as real-world datasets show the efficacy of our approach, including the importance of the cleaning step.

References
[1] L. Adamic and E. Adar. Friends and neighbors on the web. Social Networks, 25:211-230, 2003.
[2] L. Backstrom and J. Leskovec. Supervised random walks: Predicting and recommending links in social networks. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pages 635-644, New York, NY, USA, 2011. ACM.
[3] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences of the United States of America, 106(50):21068-21073, 2009.
[4] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. Journal of Machine Learning Research - Proceedings Track, 23:35.1-35.23, 2012.
[5] M. S. Handcock, A. E. Raftery, and J. M. Tantrum. Model-based clustering for social networks. Journal of the Royal Statistical Society: Series A (Statistics in Society), 170(2):301-354, 2007.
[6] P. W. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[7] L. Katz. A new status index derived from sociometric analysis. In Psychometrika, volume 18, pages 39-43, 1953.
[8] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In Conference on Information and Knowledge Management. ACM, 2003.
[9] L. Lü and T. Zhou. Link prediction in complex networks: A survey. Physica A, 390(6):1150-1170, 2011.
[10] F. McSherry. Spectral partitioning of random graphs. In FOCS, pages 529-537, 2001.
[11] S. C. Olhede and P. J. Wolfe.
Network histograms and universality of blockmodel approximation. Proceedings of the National Academy of Sciences of the United States of America, 111(41):14722-14727, 2014.
[12] A. E. Raftery, M. S. Handcock, and P. D. Hoff. Latent space approaches to social network analysis. Journal of the American Statistical Association, 15:460, 2002.
[13] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Annals of Statistics, 39:1878-1915, 2011.
[14] P. Sarkar and P. J. Bickel. Role of normalization in spectral clustering for stochastic blockmodels. To appear in the Annals of Statistics, 2014.
[15] P. Sarkar, D. Chakrabarti, and A. Moore. Theoretical justification of popular link prediction heuristics. In Conference on Learning Theory. ACM, 2010.
[16] P. Sarkar and A. Moore. A tractable approach to finding closest truncated-commute-time neighbors in large graphs. In Proc. UAI, 2007.
2015
On the Accuracy of Self-Normalized Log-Linear Models Jacob Andreas∗, Maxim Rabinovich∗, Michael I. Jordan, Dan Klein Computer Science Division, University of California, Berkeley {jda,rabinovich,jordan,klein}@cs.berkeley.edu Abstract Calculation of the log-normalizer is a major computational obstacle in applications of log-linear models with large output spaces. The problem of fast normalizer computation has therefore attracted significant attention in the theoretical and applied machine learning literature. In this paper, we analyze a recently proposed technique known as “self-normalization”, which introduces a regularization term in training to penalize log normalizers for deviating from zero. This makes it possible to use unnormalized model scores as approximate probabilities. Empirical evidence suggests that self-normalization is extremely effective, but a theoretical understanding of why it should work, and how generally it can be applied, is largely lacking. We prove upper bounds on the loss in accuracy due to self-normalization, describe classes of input distributions that self-normalize easily, and construct explicit examples of high-variance input distributions. Our theoretical results make predictions about the difficulty of fitting self-normalized models to several classes of distributions, and we conclude with empirical validation of these predictions. 1 Introduction Log-linear models, a general class that includes conditional random fields (CRFs) and generalized linear models (GLMs), offer a flexible yet tractable approach to modeling conditional probability distributions p(y|x) [1, 2]. When the set of possible y values is large, however, the computational cost of computing a normalizing constant for each x can be prohibitive, involving a summation with many terms, a high-dimensional integral or an expensive dynamic program. The machine translation community has recently described several procedures for training “self-normalized” log-linear models [3, 4].
The goal of self-normalization is to choose model parameters that simultaneously yield accurate predictions and produce normalizers clustered around unity. Model scores can then be used as approximate surrogates for probabilities, obviating normalizer computation. In particular, given a model of the form

pη(y | x) = e^(η⊤T(y, x) − A(η, x))  (1)

with

A(η, x) = log Σ_{y∈Y} e^(η⊤T(y, x)),  (2)

we seek a setting of η such that A(x, η) is close enough to zero (with high probability under p(x)) to be ignored. ∗Authors contributed equally. This paper aims to understand the theoretical properties of self-normalization. Empirical results have already demonstrated the efficacy of this approach: for discrete models with many output classes, it appears that normalizer values can be made nearly constant without sacrificing too much predictive accuracy, providing dramatic efficiency increases at minimal performance cost. The broad applicability of self-normalization makes it likely to spread to other large-scale applications of log-linear models, including structured prediction (with combinatorially many output classes) and regression (with continuous output spaces). But it is not obvious that we should expect such approaches to be successful: the number of inputs (if finite) can be on the order of millions, the geometry of the resulting input vectors x highly complex, and the class of functions A(η, x) associated with different inputs quite rich. To find a nontrivial parameter setting with A(η, x) roughly constant seems challenging enough; to require that the corresponding η also lead to good classification results seems too much. And yet for many input distributions that arise in practice, it appears possible to choose η to make A(η, x) nearly constant without having to sacrifice classification accuracy. Our goal is to bridge the gap between theoretical intuition and practical experience.
Previous work [5] bounds the sample complexity of self-normalizing training procedures for a restricted class of models, but leaves open the question of how self-normalization interacts with the predictive power of the learned model. This paper seeks to answer that question. We begin by generalizing the previously studied model to a much more general class of distributions, including distributions with continuous support (Section 3). Next, we provide what we believe to be the first characterization of the interaction between self-normalization and model accuracy (Section 4). This characterization is given from two perspectives: • a bound on the “likelihood gap” between self-normalized and unconstrained models • a conditional distribution provably hard to represent with a self-normalized model In Section 5, we present empirical evidence that these bounds correctly characterize the difficulty of self-normalization, and in the conclusion we survey a set of open problems that we believe merit further investigation. 2 Problem background The immediate motivation for this work is a procedure proposed to speed up decoding in a machine translation system with a neural-network language model [3]. The language model used is a standard feed-forward neural network, with a “softmax” output layer that turns the network’s predictions into a distribution over the vocabulary, where each probability is log-proportional to its output activation. It is observed that with a sufficiently large vocabulary, it becomes prohibitive to obtain probabilities from this model (which must be queried millions of times during decoding). To fix this, the language model is trained with the following objective:

max_W Σ_i [ N(yi|xi; W) − log Σ_{y′} e^(N(y′|xi;W)) − α (log Σ_{y′} e^(N(y′|xi;W)))² ]

where N(y|x; W) is the response of output y in the neural net with weights W given an input x.
From a Lagrangian perspective, the extra penalty term simply confines W to the set of “empirically normalizing” parameters, for which all log-normalizers are close (in squared error) to the origin. For a suitable choice of α, it is observed that the trained network is simultaneously accurate enough to produce good translations, and close enough to self-normalized that the raw scores N(yi|xi) can be used in place of log-probabilities without substantial further degradation in quality. We seek to understand the observed success of these models in finding accurate, normalizing parameter settings. While it is possible to derive bounds of the kind we are interested in for general neural networks [6], in this paper we work with a simpler linear parameterization that we believe captures the interesting aspects of this problem.¹

¹It is possible to view a log-linear model as a single-layer network with a softmax output. More usefully, all of the results presented here apply directly to trained neural nets in which the last layer only is retrained to self-normalize [7].

Related work The approach described at the beginning of this section is closely related to an alternative self-normalization trick based on noise-contrastive estimation (NCE) [8]. NCE is an alternative to direct optimization of likelihood, instead training a classifier to distinguish between true samples from the model, and “noise” samples from some other distribution. The structure of the training objective makes it possible to replace explicit computation of each log-normalizer with an estimate. In traditional NCE, these values are treated as part of the parameter space, and estimated simultaneously with the model parameters; there exist guarantees that the normalizer estimates will eventually converge to their true values. It is instead possible to fix all of these estimates to one.
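For the linear parameterization studied in the rest of the paper, the penalized training objective above can be sketched in numpy as follows (a toy implementation under our own naming, with per-class weight vectors standing in for the neural scorer N):

```python
import numpy as np

def logsumexp(v):
    """Numerically stable log of sum of exponentials."""
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def self_normalized_objective(eta, X, y, alpha):
    """Penalized log-likelihood to be maximized over eta:
        sum_i [ s_i(y_i) - A_i - alpha * A_i**2 ],
    where s_i(y) = eta[y] . x_i is the score of class y on input x_i and
    A_i = log sum_y exp(s_i(y)) is the log-normalizer driven toward zero.
    """
    total = 0.0
    for x_i, y_i in zip(X, y):
        scores = eta @ x_i          # one score per output class
        A = logsumexp(scores)
        total += scores[y_i] - A - alpha * A ** 2
    return total
```

With alpha = 0 this reduces to the ordinary conditional log-likelihood; increasing alpha trades likelihood for log-normalizers closer to zero, mirroring the Lagrangian reading of the penalty.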
In this case, empirical evidence suggests that the resulting model will also exhibit self-normalizing behavior [4]. A host of other techniques exist for solving the computational problem posed by the log-normalizer. Many of these involve approximating the associated sum or integral using quadrature [9], herding [10], or Monte Carlo methods [11]. For the special case of discrete, finite output spaces, an alternative approach, the hierarchical softmax, is to replace the large sum in the normalizer with a series of binary decisions [12]. The output classes are arranged in a binary tree, and the probability of generating a particular output is the product of probabilities along the edges leading to it. This reduces the cost of computing the normalizer from O(k) to O(log k). While this limits the set of distributions that can be learned, and still requires greater-than-constant time to compute normalizers, it appears to work well in practice. It cannot, however, be applied to problems with continuous output spaces. 3 Self-normalizable distributions We begin by providing a slightly more formal characterization of a general log-linear model: Definition 1 (Log-linear models). Given a space of inputs X, a space of outputs Y, a measure µ on Y, a nonnegative function h : Y → R, and a function T : X × Y → R^d that is µ-measurable with respect to its second argument, we can define a log-linear model indexed by parameters η ∈ R^d, with the form

pη(y|x) = h(y) e^(η⊤T(x,y) − A(x,η)),  (3)

where

A(x, η) ≜ log ∫_Y h(y) e^(η⊤T(x,y)) dµ(y).  (4)

If A(x, η) < ∞, then ∫_Y pη(y|x) dµ(y) = 1, and pη(y|x) is a probability density over Y.² We next formalize our notion of a self-normalized model. Definition 2 (Self-normalized models). The log-linear model pη(y|x) is self-normalized with respect to a set S ⊂ X if for all x ∈ S, A(x, η) = 0. In this case we say that S is self-normalizable, and η is self-normalizing w.r.t. S.
An example of a normalizable set is shown in Figure 1a, and we provide additional examples below:

²Some readers may be more familiar with generalized linear models, which also describe exponential family distributions with a linear dependence on input. The presentation here is strictly more general, and has a few notational advantages: it makes explicit the dependence of A on x and η but not y, and lets us avoid tedious bookkeeping involving natural and mean parameterizations. [13]

(a) A self-normalizable set S for fixed η: the solutions (x1, x2) to A(x, η) = 0 with η⊤T(x, y) = ηy⊤x and η = {(−1, 1), (−1, −2)}. The set forms a smooth one-dimensional manifold bounded on either side by hyperplanes normal to (−1, 1) and (−1, −2). (b) Sets of approximately normalizing parameters η for fixed p(x): solutions (η1, η2) to E[A(x, η)²] = δ² with T(x, y) = (x + y, −xy), y ∈ {−1, 1} and p(x) uniform on {1, 2}. For a given upper bound on normalizer variance, the feasible set of parameters is nonconvex, and grows as δ increases. Figure 1: Self-normalizable data distributions and parameter sets.

Example. Suppose S = {log 2, −log 2}, Y = {−1, 1}, T(x, y) = [xy, 1], and η = (1, log(2/5)). Then for either x ∈ S,

A(x, η) = log(e^(log 2 + log(2/5)) + e^(−log 2 + log(2/5))) = log((2/5)(2 + 1/2)) = 0,

and η is self-normalizing with respect to S. It is also easy to choose parameters that do not result in a self-normalized distribution, and in fact to construct a target distribution which cannot be self-normalized: Example. Suppose X = {(1, 0), (0, 1), (1, 1)}, Y = {−1, 1}, and T(x, y) = (x1y, x2y, 1). Then there is no η such that A(x, η) = 0 for all x, and A(x, η) is constant if and only if η = 0. As previously motivated, downstream uses of these models may be robust to small errors resulting from improper normalization, so it would be useful to generalize this definition of normalizable distributions to distributions that are only approximately normalizable.
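The first worked example can be checked numerically; a quick sketch (the helper name is ours):

```python
import math

def log_normalizer(x, eta):
    """A(x, eta) for Y = {-1, +1} with features T(x, y) = [x*y, 1]."""
    return math.log(sum(math.exp(eta[0] * x * y + eta[1]) for y in (-1, 1)))

# Parameters from the example: eta = (1, log(2/5)), S = {log 2, -log 2}.
eta = (1.0, math.log(2.0 / 5.0))
deviations = [abs(log_normalizer(x, eta)) for x in (math.log(2), -math.log(2))]
```

Both deviations are zero up to floating-point error, confirming that eta is self-normalizing with respect to S, while a generic x off the set S gives a nonzero log-normalizer.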
Exact normalizability of the conditional distribution is a deterministic statement: there either does or does not exist some x that violates the constraint. In Figure 1a, for example, it suffices to have a single x off of the indicated surface to make a set non-normalizable. Approximate normalizability, by contrast, is inherently a probabilistic statement, involving a distribution p(x) over inputs. Note carefully that we are attempting to represent p(y|x) but have no representation of (or control over) p(x), and that approximate normalizability depends on p(x) but not p(y|x). Informally, if some input violates the self-normalization constraint by a large margin, but occurs only very infrequently, there is no problem; instead we are concerned with expected deviation. It is also at this stage that the distinction between penalization of the normalizer vs. the log-normalizer becomes important. The normalizer is necessarily bounded below by zero (so overestimates might appear much worse than underestimates), while the log-normalizer is unbounded in both directions. For most applications we are concerned with log probabilities and log-odds ratios, for which an expected normalizer close to zero is just as bad as one close to infinity. Thus the log-normalizer is the natural choice of quantity to penalize. Definition 3 (Approximately self-normalized models). The log-linear distribution pη(y|x) is δ-approximately normalized with respect to a distribution p(x) over X if E[A(X, η)²] < δ². In this case we say that p(x) is δ-approximately self-normalizable, and η is δ-approximately self-normalizing. The sets of δ-approximately self-normalizing parameters for a fixed input distribution and feature function are depicted in Figure 1b. Unlike self-normalizable sets of inputs, self-normalizing and approximately self-normalizing sets of parameters may have complex geometry.
Throughout this paper, we will assume that vectors of sufficient statistics T(x, y) have bounded ℓ2 norm at most R, natural parameter vectors η have ℓ2 norm at most B (that is, they are Ivanov-regularized), and that vectors of both kinds lie in ℝ^d. Finally, we assume that all input vectors have a constant feature—in particular, that x0 = 1 for every x (with corresponding weight η0).³

The first question we must answer is whether the problem of training self-normalized models is feasible at all—that is, whether there exist any exactly self-normalizable data distributions p(x), or at least δ-approximately self-normalizable distributions for small δ. Section 3 already gave an example of an exactly normalizable distribution. In fact, there are large classes of both exactly and approximately normalizable distributions.

Observation. Given some fixed η, consider the set Sη = {x ∈ X : A(x, η) = 0}. Any distribution p(x) supported on Sη is normalizable. Additionally, every self-normalizable distribution is characterized by at least one such η.

This definition provides a simple geometric characterization of self-normalizable distributions. An example solution set is shown in Figure 1a. More generally, if y is discrete and T(x, y) consists of |Y| repetitions of a fixed feature function t(x) (as in Figure 1a), then we can write

A(x, η) = log Σ_{y∈Y} e^{η_y⊤ t(x)}. (5)

Provided η_y⊤ t(x) is convex in x for each η_y, the level sets of A as a function of x form the boundaries of convex sets. In particular, exactly normalizable sets are always the boundaries of convex regions, as in the simple example of Figure 1a. We do not, in general, expect real-world datasets to be supported on the precise class of self-normalizable surfaces. Nevertheless, it is very often observed that data of practical interest lie on other low-dimensional manifolds within their embedding feature spaces.
Thus we can ask whether it is sufficient for a target distribution to be well-approximated by a self-normalizing one. We begin by constructing an appropriate measurement of the quality of this approximation.

Definition 4 (Closeness). An input distribution p(x) is D-close to a set S if

E[ inf_{x*∈S} sup_{y∈Y} ∥T(X, y) − T(x*, y)∥₂ ] ≤ D. (6)

In other words, p(x) is D-close to S if a random sample from p is no more than a distance D from S in expectation. Now we can relate the quality of this approximation to the level of self-normalization achieved. Generalizing a result from [5], we have:

Proposition 1. Suppose p(x) is D-close to {x : A(x, η) = 0}. Then p(x) is BD-approximately self-normalizable (recalling that ∥η∥₂ ≤ B).

³It will occasionally be instructive to consider the special case where X is the Boolean hypercube, and we will explicitly note where this assumption is made. Otherwise all results apply to general distributions, both continuous and discrete.

(Proofs for this section may be found in Appendix A.) The intuition here is that data distributions that place most of their mass in feature space close to normalizable sets are approximately normalizable on the same scale.

4 Normalization and model accuracy

So far our discussion has concerned the problem of finding conditional distributions that self-normalize, without any concern for how well they actually perform at modeling the data. Here the relationship between the approximately self-normalized distribution and the true distribution p(y|x) (which we have so far ignored) is essential. Indeed, if we are not concerned with making a good model it is always trivial to make a normalized one—simply take η = 0 and then scale η0 appropriately! We ultimately desire both good self-normalization and good data likelihood, and in this section we characterize the tradeoff between maximizing data likelihood and satisfying a self-normalization constraint.
We achieve this characterization by measuring the likelihood gap between the classical maximum likelihood estimator and the MLE subject to a self-normalization constraint. Specifically, given pairs ((x1, y1), (x2, y2), . . . , (xn, yn)), let ℓ(η | x, y) = Σᵢ log p_η(yᵢ | xᵢ). Then define

ˆη = argmax_η ℓ(η | x, y), (7)
ˆη_δ = argmax_{η : V(η) ≤ δ} ℓ(η | x, y), (8)

where V(η) = (1/n) Σᵢ A(xᵢ, η)². We would like to obtain a bound on the likelihood gap, which we define as the quantity

Δℓ(ˆη, ˆη_δ) = (1/n)(ℓ(ˆη | x, y) − ℓ(ˆη_δ | x, y)). (9)

We claim:

Theorem 2. Suppose Y has finite measure. Then asymptotically as n → ∞,

Δℓ(ˆη, ˆη_δ) ≤ (1 − δ/(R∥ˆη∥₂)) E[KL(p_ˆη(·|X) ∥ Unif)]. (10)

(Proofs for this section may be found in Appendix B.) This result lower-bounds the likelihood at ˆη_δ by explicitly constructing a scaled version of ˆη that satisfies the self-normalization constraint. Specifically, if η is chosen so that normalizers are penalized for distance from log µ(Y) (e.g. the logarithm of the number of classes in the finite case), then any increase in η along the span of the data is guaranteed to increase the penalty. From here it is possible to choose an α ∈ (0, 1) such that αˆη satisfies the constraint. The likelihood at αˆη is necessarily less than ℓ(ˆη_δ | x, y), and can be used to obtain the desired lower bound. Thus at one extreme, distributions close to uniform can be self-normalized with little loss of likelihood. What about the other extreme—distributions "as far from uniform as possible"? With suitable assumptions about the form of p_ˆη(y|x), we can use the same construction of a self-normalizing parameter to achieve an alternative characterization for distributions that are close to deterministic:

Proposition 3. Suppose that X is a subset of the Boolean hypercube, Y is finite, and T(x, y) is the conjunction of each element of x with an indicator on the output class.
Suppose additionally that for every input x, p_ˆη(y|x) makes a unique best prediction—that is, for each x ∈ X, there exists a unique y* ∈ Y such that whenever y ≠ y*, η⊤T(x, y*) > η⊤T(x, y). Then

Δℓ(ˆη, ˆη_δ) ≤ b ((∥ˆη∥₂ − δ)/R)² e^{−cδ/R} (11)

for distribution-dependent constants b and c.

This result is obtained by representing the constrained likelihood with a second-order Taylor expansion about the true MLE. All terms in the likelihood gap vanish except for the remainder; this can be upper-bounded by ∥ˆη_δ∥₂² times the largest eigenvalue of the feature covariance matrix at ˆη_δ, which in turn is bounded by e^{−cδ/R}. The favorable rate we obtain for this case indicates that "all-nonuniform" distributions are also an easy class for self-normalization. Together with Theorem 2, this suggests that hard distributions must have some mixture of uniform and nonuniform predictions for different inputs. This is supported by the experimental results in Section 5. The next question is whether there is a corresponding lower bound; that is, whether there exist any conditional distributions for which all nearby distributions are provably hard to self-normalize. The existence of a direct analog of Theorem 2 remains an open problem, but we make progress by developing a general framework for analyzing normalizer variance. One key issue is that while likelihoods are invariant to certain changes in the natural parameters, the log normalizers (and therefore their variances) are far from invariant. We therefore focus on equivalence classes of natural parameters, as defined below. Throughout, we will assume a fixed distribution p(x) on the inputs x.

Definition 5 (Equivalence of parameterizations). Two natural parameter values η and η′ are said to be equivalent (with respect to an input distribution p(x)), denoted η ∼ η′, if p_η(y|X) = p_η′(y|X) a.s. p(x).

We can then define the optimal log normalizer variance for the distribution associated with a natural parameter value.

Definition 6 (Optimal variance).
We define the optimal log normalizer variance of the log-linear model associated with a natural parameter value η by

V*(η) = inf_{η′∼η} Var_{p(x)}[A(X, η′)].

We now specialize to the case where Y is finite with |Y| = K and where T : Y × X → ℝ^{Kd} satisfies T(k, x)_{k′j} = δ_{kk′} x_j. This is an important special case that arises, for example, in multi-way logistic regression. In this setting, we can show that despite the fundamental non-identifiability of the model, the variance can still be shown to be high under any parameterization of the distribution.

Theorem 4. Let X = {0, 1}^d and let the input distribution p(x) be uniform on X. There exists an η0 ∈ ℝ^{Kd} such that for η = αη0, α > 0,

V*(η) ≥ ∥η∥₂² / (32d(d − 1)) − 4K∥η∥₂ e^{−√(1 − 1/d) · ∥η∥₂ / (2(d − 1))}.

5 Experiments

The high-level intuition behind the results in the preceding section can be summarized as follows: 1) for predictive distributions that are in expectation high-entropy or low-entropy, self-normalization results in a relatively small likelihood gap; 2) for mixtures of high- and low-entropy distributions, self-normalization may result in a large likelihood gap. More generally, we expect that an increased tolerance for normalizer variance will be associated with a decreased likelihood gap. In this section we provide experimental confirmation of these predictions. We begin by generating a set of random sparse feature vectors and an initial weight vector η0. In order to produce a sequence of label distributions that smoothly interpolate between low-entropy and high-entropy, we introduce a temperature parameter τ, and for various settings of τ draw labels from p_{τη}. We then fit a self-normalized model to these training pairs. In addition to the synthetic data, we compare our results to empirical data [3] from a self-normalized language model. Figure 2a plots the tradeoff between the likelihood gap and the error in the normalizer, under various distributions (characterized by their KL from uniform).
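The fitting step can be sketched as follows (our illustration, not the paper's code: a tiny hand-made dataset, finite-difference gradients, and a penalized surrogate ℓ(η) − α·V(η) in place of the constrained MLE, with V(η) = (1/n) Σᵢ A(xᵢ, η)²):

```python
import math

# A minimal sketch of penalized self-normalized training on toy binary data.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
Y = [1.0, -1.0, 1.0]

def feats(x, y):                      # T(x, y) = (x1*y, x2*y, 1)
    return (x[0] * y, x[1] * y, 1.0)

def log_Z(x, eta):                    # A(x, eta)
    return math.log(sum(math.exp(sum(e * t for e, t in zip(eta, feats(x, y))))
                        for y in (-1.0, 1.0)))

def objective(eta, alpha):
    ll = sum(sum(e * t for e, t in zip(eta, feats(x, y))) - log_Z(x, eta)
             for x, y in zip(X, Y))                    # log-likelihood
    V = sum(log_Z(x, eta) ** 2 for x in X) / len(X)    # normalizer variance penalty
    return ll - alpha * V

def train(alpha, steps=2000, lr=0.1, h=1e-5):
    eta = [0.0, 0.0, 0.0]
    for _ in range(steps):            # gradient ascent with numeric gradients
        grad = []
        for j in range(len(eta)):
            up = eta[:]; up[j] += h
            dn = eta[:]; dn[j] -= h
            grad.append((objective(up, alpha) - objective(dn, alpha)) / (2 * h))
        eta = [e + lr * g for e, g in zip(eta, grad)]
    return eta

eta = train(alpha=1.0)
print(eta, sum(log_Z(x, eta) ** 2 for x in X) / len(X))  # penalty drives V(eta) down
```

Note that the bias weight η0 leaves the likelihood unchanged but shifts every log-normalizer, so the optimizer uses it purely to reduce V(η), mirroring the "scale η0 appropriately" remark above.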
Here the tradeoff between self-normalization and model accuracy can be seen—as the normalization constraint is relaxed, the likelihood gap decreases.

Figure 2: Experimental results. (a) Normalization / likelihood tradeoff: as the normalization constraint δ is relaxed, the likelihood gap Δℓ decreases. Lines marked "KL=" are from synthetic data; the line marked "LM" is from [3]. (b) Likelihood gap as a function of expected divergence from the uniform distribution: as predicted by theory, the likelihood gap increases, then decreases, as predictive distributions become more peaked.

Figure 2b shows how the likelihood gap varies as a function of the quantity E[KL(p_η(·|X) ∥ Unif)]. As predicted, it can be seen that both extremes of this quantity result in small likelihood gaps, while intermediate values result in large likelihood gaps.

6 Conclusions

Motivated by the empirical success of self-normalizing parameter estimation procedures, we have attempted to establish a theoretical basis for the understanding of such procedures. We have characterized both self-normalizable distributions, by constructing provably easy examples, and training procedures, by bounding the loss of likelihood associated with self-normalization. While we have addressed many of the important first-line theoretical questions around self-normalization, this study of the problem is by no means complete. We hope this family of problems will attract further study in the larger machine learning community; toward that end, we provide the following list of open questions:

1. How else can the approximately self-normalizable distributions be characterized? The class of approximately normalizable distributions we have described is unlikely to correspond perfectly to real-world data.
We expect that Proposition 1 can be generalized to other parametric classes, and relaxed to accommodate spectral or sparsity conditions.

2. Are the upper bounds in Theorem 2 or Proposition 3 tight? Our constructions involve relating the normalization constraint to the ℓ2 norm of η, but in general some parameters can have very large norm and still give rise to almost-normalized distributions.

3. Do corresponding lower bounds exist? While it is easy to construct exactly self-normalizable distributions (which suffer no loss of likelihood), we have empirical evidence that hard distributions also exist. It would be useful to lower-bound the loss of likelihood in terms of some simple property of the target distribution.

4. Is the hard distribution in Theorem 4 stable? This is related to the previous question. The existence of high-variance distributions is less worrisome if such distributions are fairly rare. If the lower bound falls off quickly as the given construction is perturbed, then the associated distribution may still be approximately self-normalizable with a good rate.

We have already seen that new theoretical insights in this domain can translate directly into practical applications. Thus, in addition to their inherent theoretical interest, answers to each of these questions might be applied directly to the training of approximately self-normalized models in practice. We expect that self-normalization will find increasingly many applications, and we hope the results in this paper provide a first step toward a complete theoretical and empirical understanding of self-normalization in log-linear models.

Acknowledgments

The authors would like to thank Robert Nishihara for useful discussions. JA and MR are supported by NSF Graduate Fellowships, and MR is additionally supported by the Fannie and John Hertz Foundation Fellowship.

References

[1] Lafferty, J. D.; McCallum, A.; Pereira, F. C. N.
Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proceedings of the International Conference on Machine Learning. 2001; pp 282–289.
[2] McCullagh, P.; Nelder, J. A. Generalized linear models; Chapman and Hall, 1989.
[3] Devlin, J.; Zbib, R.; Huang, Z.; Lamar, T.; Schwartz, R.; Makhoul, J. Fast and robust neural network joint models for statistical machine translation. Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2014.
[4] Vaswani, A.; Zhao, Y.; Fossum, V.; Chiang, D. Decoding with large-scale neural language models improves translation. Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2013.
[5] Andreas, J.; Klein, D. When and why are log-linear models self-normalizing? Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics. 2014.
[6] Bartlett, P. L. IEEE Transactions on Information Theory 1998, 44, 525–536.
[7] Anthony, M.; Bartlett, P. Neural network learning: theoretical foundations; Cambridge University Press, 2009.
[8] Gutmann, M.; Hyvärinen, A. Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. Proceedings of the International Conference on Artificial Intelligence and Statistics. 2010; pp 297–304.
[9] O'Hagan, A. Journal of Statistical Planning and Inference 1991, 29, 245–260.
[10] Chen, Y.; Welling, M.; Smola, A. Proceedings of the Conference on Uncertainty in Artificial Intelligence 2010, 109–116.
[11] Doucet, A.; De Freitas, N.; Gordon, N. An introduction to sequential Monte Carlo methods; Springer, 2001.
[12] Morin, F.; Bengio, Y. Proceedings of the International Conference on Artificial Intelligence and Statistics 2005, 246.
[13] Yang, E.; Allen, G.; Liu, Z.; Ravikumar, P. K. Graphical models via generalized linear models. Advances in Neural Information Processing Systems. 2012; pp 1358–1366.
Nonparametric von Mises Estimators for Entropies, Divergences and Mutual Informations

Kirthevasan Kandasamy (Carnegie Mellon University, kandasamy@cs.cmu.edu), Akshay Krishnamurthy (Microsoft Research, NY, akshaykr@cs.cmu.edu), Barnabás Póczos and Larry Wasserman (Carnegie Mellon University, bapoczos@cs.cmu.edu, larry@stat.cmu.edu), James M. Robins (Harvard University, robins@hsph.harvard.edu)

Abstract

We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence and other favorable theoretical properties. We apply this framework to derive estimators for several popular information theoretic quantities, and via empirical evaluation, show the advantage of this approach over existing estimators.

1 Introduction

Entropies, divergences, and mutual informations are classical information-theoretic quantities that play fundamental roles in statistics, machine learning, and across the mathematical sciences. In addition to their use as analytical tools, they arise in a variety of applications including hypothesis testing, parameter estimation, feature selection, and optimal experimental design. In many of these applications, it is important to estimate these functionals from data so that they can be used in downstream algorithmic or scientific tasks. In this paper, we develop a recipe for estimating statistical functionals of one or more nonparametric distributions based on the notion of influence functions. Entropy estimators are used in applications ranging from independent component analysis [15] and intrinsic dimension estimation [4] to several signal processing applications [9].
Divergence estimators are useful in statistical tasks such as two-sample testing. Recently they have also gained popularity as they are used to measure (dis)similarity between objects that are modeled as distributions, in what is known as the "machine learning on distributions" framework [5, 28]. Mutual information estimators have been used in learning tree-structured Markov random fields [19], feature selection [25], clustering [18] and neuron classification [31]. In the parametric setting, conditional divergence and conditional mutual information estimators are used for conditional two-sample testing or as building blocks for structure learning in graphical models. Nonparametric estimators for these quantities could potentially allow us to generalise several of these algorithms to the nonparametric domain. Our approach gives sample-efficient estimators for all these quantities (and many others), which often outperform the existing estimators both theoretically and empirically. Our approach to estimating these functionals is based on post-hoc correction of a preliminary estimator using the von Mises expansion [7, 36]. This idea has been used before in the semiparametric statistics literature [3, 30]. However, most studies are restricted to functionals of one distribution and have focused on a "data-split" approach which splits the samples for density estimation and functional estimation. While the data-split (DS) estimator is known to achieve the parametric convergence rate for sufficiently smooth densities [3, 14], in practical settings, as we show in our simulations, splitting the data results in poor empirical performance. In this paper we introduce the method of influence function based nonparametric estimators to the machine learning community and expand on this technique in several novel and important ways. The main contributions of this paper are:

1. We propose a "leave-one-out" (LOO) technique to estimate functionals of a single distribution.
We prove that it has the same convergence rates as the DS estimator. However, the LOO estimator has better empirical performance in our simulations since it makes efficient use of the data.

2. We extend both DS and LOO methods to functionals of multiple distributions and analyse their convergence. Under sufficient smoothness both estimators achieve the parametric rate and the DS estimator has a limiting normal distribution.

3. We prove a lower bound for estimating functionals of multiple distributions. We use this to establish minimax optimality of the DS and LOO estimators under sufficient smoothness.

4. We use the approach to construct and implement estimators for various entropy, divergence and mutual information quantities and their conditional versions. A subset of these functionals is listed in Table 1 in the Appendix. Our software is publicly available at github.com/kirthevasank/if-estimators.

5. We compare our estimators against several other approaches in simulation. Despite the generality of our approach, our estimators are competitive with, and in many cases superior to, existing specialised approaches for specific functionals. We also demonstrate how our estimators can be used in machine learning applications via an image clustering task.

Our focus on information theoretic quantities is due to their relevance in machine learning applications, rather than a limitation of our approach. Indeed our techniques apply to any smooth functional.

History: We provide a brief history of the post-hoc correction technique and influence functions. We defer a detailed discussion of other approaches to estimating functionals to Section 5. To our knowledge, the first paper using a post-hoc correction estimator was that of Bickel and Ritov [2]. The line of work following this paper analysed integral functionals of a single one-dimensional density of the form ∫ν(p) [2, 3, 11, 14]. A recent paper by Krishnamurthy et al.
[12] also extends this line to functionals of multiple densities, but only considers polynomial functionals of the form ∫p^α q^β for densities p and q. All of the approaches above use data splitting. Our work contributes to this line of research in two ways: we extend the technique to a more general class of functionals and study the empirically superior LOO estimator. A fundamental quantity in the design of our estimators is the influence function, which appears both in robust and semiparametric statistics. Indeed, our work is inspired by that of Robins et al. [30] and Emery et al. [6], who propose a (data-split) influence-function based estimator for functionals of a single distribution. Their analysis for nonparametric problems relies on ideas from semiparametric statistics: they define influence functions for parametric models and then analyse estimators by looking at all parametric submodels through the true parameter.

2 Preliminaries

Let X be a compact metric space equipped with a measure µ, e.g. the Lebesgue measure. Let F and G be measures over X that are absolutely continuous w.r.t µ. Let f, g ∈ L2(X) be the Radon-Nikodym derivatives with respect to µ. We focus on estimating functionals of the form

T(F) = T(f) = φ(∫ν(f) dµ)  or  T(F, G) = T(f, g) = φ(∫ν(f, g) dµ), (1)

where φ, ν are real-valued Lipschitz functions that are twice differentiable. Our framework permits more general functionals (e.g. functionals based on the conditional densities), but we will focus on this form for ease of exposition. To facilitate presentation of the main definitions, it is easiest to work with functionals of one distribution T(F). Define M to be the set of all measures that are absolutely continuous w.r.t µ, whose Radon-Nikodym derivatives belong to L2(X). Central to our development is the von Mises expansion (VME), which is the distributional analog of the Taylor expansion. For this we introduce the Gâteaux derivative, which imposes a notion of differentiability in topological spaces.
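As concrete instances of the form (1) (our examples, with φ taken to be the identity):

```latex
% Shannon entropy: \phi(t) = t,\ \nu(f) = -f\log f
H(f) \;=\; -\int f \log f \, d\mu ,
% Kullback--Leibler divergence: \phi(t) = t,\ \nu(f, g) = f \log(f/g)
\mathrm{KL}(f \,\|\, g) \;=\; \int f \log \frac{f}{g} \, d\mu .
```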
We then introduce the influence function.

Definition 1. Let P, H ∈ M and U : M → ℝ be any functional. The map U′ : M → ℝ where U′(H; P) = ∂U(P + tH)/∂t |_{t=0} is called the Gâteaux derivative at P if the derivative exists and is linear and continuous in H. U is Gâteaux differentiable at P if the Gâteaux derivative exists at P.

Definition 2. Let U be Gâteaux differentiable at P. A function ψ(·; P) : X → ℝ which satisfies U′(Q − P; P) = ∫ψ(x; P) dQ(x) is the influence function of U w.r.t the distribution P.

By the Riesz representation theorem, the influence function exists uniquely since the domain of U is a bijection of L2(X) and consequently a Hilbert space. The classical work of Fernholz [7] defines the influence function in terms of the Gâteaux derivative by

ψ(x; P) = U′(δ_x − P; P) = ∂U((1 − t)P + tδ_x)/∂t |_{t=0}, (2)

where δ_x is the Dirac delta function at x. While our functionals are defined only on non-atomic distributions, we can still use (2) to compute the influence function. The function computed this way can be shown to satisfy Definition 2. Based on the above, the first-order VME is

U(Q) = U(P) + U′(Q − P; P) + R₂(P, Q) = U(P) + ∫ψ(x; P) dQ(x) + R₂(P, Q), (3)

where R₂ is the second-order remainder. Gâteaux differentiability alone will not be sufficient for our purposes. In what follows, we will assign Q → F and P → F̂, where F, F̂ are the true and estimated distributions. We would like to bound the remainder in terms of a distance between F and F̂. For functionals T of the form (1), we restrict the domain to measures with continuous densities. Then we can control R₂ using the L2 metric of the densities. This essentially means that our functionals satisfy a stronger form of differentiability called Fréchet differentiability [7, 36] in the L2 metric.
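As a worked aside (ours, not from the paper), formula (2) gives the influence function of the Shannon entropy T(p) = −∫ p log p in closed form:

```latex
% For U(p) = \int \nu(p)\,d\mu with \nu(t) = -t\log t,\ \nu'(t) = -\log t - 1,
% formula (2) yields
\psi(x; p) = \nu'(p(x)) - \int \nu'(p)\, p \, d\mu
           = \big(-\log p(x) - 1\big) - \big(T(p) - 1\big)
           = -\log p(x) - T(p).
% Sanity check via (3): U(P) + \int \psi(x;P)\,dQ(x) = -\int q \log p \, d\mu,
% the cross-entropy, which agrees with T(q) to first order in q - p.
```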
Consequently, we can write all derivatives in terms of the densities, and the VME reduces to a functional Taylor expansion on the densities (Lemmas 9, 10 in Appendix A):

T(q) = T(p) + φ′(∫ν(p)) ∫(q − p)ν′(p) + R₂(p, q) = T(p) + ∫ψ(x; p) q(x) dµ(x) + O(∥p − q∥₂²). (4)

This expansion will be the basis for our estimators. These ideas generalise to functionals of multiple distributions and to settings where the functional involves quantities other than the density. A functional T(P, Q) of two distributions has two Gâteaux derivatives, T′ᵢ(·; P, Q) for i = 1, 2, formed by perturbing the i-th argument with the other fixed. The influence functions ψ₁, ψ₂ satisfy, for all P₁, P₂ ∈ M,

T′₁(Q₁ − P₁; P₁, P₂) = ∂T(P₁ + t(Q₁ − P₁), P₂)/∂t |_{t=0} = ∫ψ₁(u; P₁, P₂) dQ₁(u), (5)
T′₂(Q₂ − P₂; P₁, P₂) = ∂T(P₁, P₂ + t(Q₂ − P₂))/∂t |_{t=0} = ∫ψ₂(u; P₁, P₂) dQ₂(u).

The VME can be written as

T(q₁, q₂) = T(p₁, p₂) + ∫ψ₁(x; p₁, p₂) q₁(x) dx + ∫ψ₂(x; p₁, p₂) q₂(x) dx + O(∥p₁ − q₁∥₂²) + O(∥p₂ − q₂∥₂²). (6)

3 Estimating Functionals

First consider estimating a functional of a single distribution, T(f) = φ(∫ν(f) dµ), from samples X₁ⁿ ∼ f. We wish to find an estimator T̂ with low expected mean squared error (MSE) E[(T̂ − T)²]. Using the VME (4), Emery et al. [6] and Robins et al. [30] suggest a natural estimator. If we use half of the data X₁^{n/2} to construct an estimate f̂^(1) of the density f, then by (4):

T(f) − T(f̂^(1)) = ∫ψ(x; f̂^(1)) f(x) dµ + O(∥f − f̂^(1)∥₂²).

As the influence function does not depend on (the unknown) F, the first term on the right hand side is simply an expectation of ψ(X; f̂^(1)) w.r.t F. We can use the second half of the data X_{n/2+1}ⁿ to estimate this expectation with its sample mean. This leads to the following preliminary estimator:

T̂_DS^(1) = T(f̂^(1)) + (2/n) Σ_{i=n/2+1}^{n} ψ(Xᵢ; f̂^(1)). (7)

We can similarly construct an estimator T̂_DS^(2) by using X_{n/2+1}ⁿ for density estimation and X₁^{n/2} for averaging. Our final estimator is obtained via T̂_DS = (T̂_DS^(1) + T̂_DS^(2))/2.
In what follows, we shall refer to this estimator as the Data-Split (DS) estimator. The DS estimator for functionals of one distribution has appeared before in the statistics literature [2, 3, 30]. The rate of convergence of this estimator is determined by the O(∥f − f̂^(1)∥₂²) error in the VME and the n⁻¹ rate for estimating an expectation. Lower bounds from the literature [3, 14] confirm minimax optimality of the DS estimator when f is sufficiently smooth. The data-splitting trick is a common approach [3, 12, 14] as the analysis is straightforward. While in theory DS estimators enjoy good rates of convergence, data splitting is unsatisfying from a practical standpoint since using only half the data each for estimation and averaging invariably decreases the accuracy. To make more effective use of the sample, we propose a Leave-One-Out (LOO) version of the above estimator,

T̂_LOO = (1/n) Σ_{i=1}^{n} [ T(f̂₋ᵢ) + ψ(Xᵢ; f̂₋ᵢ) ], (8)

where f̂₋ᵢ is a density estimate using all the samples X₁ⁿ except for Xᵢ. We prove that the LOO estimator achieves the same rate of convergence as the DS estimator but empirically performs much better. Our analysis is specialised to the case where f̂₋ᵢ is a kernel density estimate (Section 4). We can extend this method to estimate functionals of two distributions. Say we have n i.i.d. samples X₁ⁿ from f and m samples Y₁ᵐ from g. Akin to the one-distribution case, we propose the following DS and LOO versions:

T̂_DS^(1) = T(f̂^(1), ĝ^(1)) + (2/n) Σ_{i=n/2+1}^{n} ψ_f(Xᵢ; f̂^(1), ĝ^(1)) + (2/m) Σ_{j=m/2+1}^{m} ψ_g(Y_j; f̂^(1), ĝ^(1)), (9)

T̂_LOO = (1/max(n, m)) Σ_{i=1}^{max(n,m)} [ T(f̂₋ᵢ, ĝ₋ᵢ) + ψ_f(Xᵢ; f̂₋ᵢ, ĝ₋ᵢ) + ψ_g(Yᵢ; f̂₋ᵢ, ĝ₋ᵢ) ]. (10)

Here, ĝ^(1), ĝ₋ᵢ are defined similarly to f̂^(1), f̂₋ᵢ. For the DS estimator, we swap the samples to compute T̂_DS^(2) and average. For the LOO estimator, if n > m we cycle through the points Y₁ᵐ until we have summed over all X₁ⁿ, or vice versa. T̂_LOO is asymmetric when n ≠ m.
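The DS estimator (7) is simple to implement. For the Shannon entropy the influence function works out to ψ(x; f) = −log f(x) − T(f), so the one-step correction T(f̂) + mean_i ψ(Xᵢ; f̂) collapses to a sample average of −log f̂(Xᵢ). A sketch (ours, assuming a 1-D Gaussian KDE with a fixed, hand-picked bandwidth rather than the paper's tuned estimators):

```python
import math, random

# Data-split entropy estimation: fit a KDE on one half of the sample,
# average -log f_hat on the other half, then swap the halves and average.
def kde(points, h):
    c = 1.0 / (len(points) * h * math.sqrt(2 * math.pi))
    return lambda t: c * sum(math.exp(-0.5 * ((t - x) / h) ** 2) for x in points)

def entropy_ds(sample, h=0.3):
    half = len(sample) // 2
    halves = (sample[:half], sample[half:])
    estimates = []
    for fit, avg in (halves, halves[::-1]):
        f_hat = kde(fit, h)                      # density estimate on one half
        estimates.append(sum(-math.log(f_hat(x)) for x in avg) / len(avg))
    return 0.5 * sum(estimates)                  # average the two split estimates

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(400)]
true_H = 0.5 * math.log(2 * math.pi * math.e)    # entropy of N(0, 1)
print(entropy_ds(sample), true_H)
```

The estimate should land close to the true N(0, 1) entropy of about 1.42 for this sample size; the LOO variant would instead refit the KDE once per held-out point.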
A seemingly natural alternative would be to sum over all nm pairings of the Xᵢ's and Y_j's. However, this is computationally more expensive. Moreover, a straightforward modification of our proof in Appendix D.2 shows that both approaches converge at the same rate if n and m are of the same order.

Examples: We demonstrate the generality of our framework by presenting estimators for several entropies, divergences, mutual informations and their conditional versions in Table 1 (Appendix H). For many functionals in the table, these are the first computationally efficient estimators proposed. We hope this table will serve as a good reference for practitioners. For several functionals (e.g. conditional and unconditional Rényi-α divergence, conditional Tsallis-α mutual information) the estimators are not listed only because the expressions are too long to fit into the table. Our software implements a total of 17 functionals, which include all the estimators in the table. In Appendix F we illustrate how to apply our framework to derive an estimator for any functional via an example. As will be discussed in Section 5, when compared to other alternatives, our technique has several favourable properties: the computational complexity of our method is O(n²) when compared to O(n³) for other methods; for several functionals we do not require numeric integration; and unlike most other methods [28, 32], we do not require any tuning of hyperparameters.

4 Analysis

Some smoothness assumptions on the densities are warranted to make estimation tractable. We use the Hölder class, which is now standard in the nonparametrics literature.

Definition 3. Let X ⊂ ℝ^d be a compact space. For any r = (r₁, . . . , r_d), rᵢ ∈ ℕ, define |r| = Σᵢ rᵢ and D^r = ∂^{|r|} / (∂x₁^{r₁} · · · ∂x_d^{r_d}). The Hölder class Σ(s, L) is the set of functions on L2(X) satisfying

|D^r f(x) − D^r f(y)| ≤ L∥x − y∥^{s−|r|},

for all r s.t. |r| ≤ ⌊s⌋ and for all x, y ∈ X. Moreover, define the bounded Hölder class Σ(s, L, B′, B) to be {f ∈ Σ(s, L) : B′ < f < B}.
Note that large s implies higher smoothness. Given n samples X₁ⁿ from a d-dimensional density f, the kernel density estimator (KDE) with bandwidth h is

f̂(t) = (1/(nh^d)) Σ_{i=1}^{n} K((t − Xᵢ)/h).

Here K : ℝ^d → ℝ is a smoothing kernel [35]. When f ∈ Σ(s, L), by selecting h ∈ Θ(n^{−1/(2s+d)}) the KDE achieves the minimax rate of O_P(n^{−2s/(2s+d)}) in mean squared error. Further, if f is in the bounded Hölder class Σ(s, L, B′, B), one can truncate the KDE from below at B′ and from above at B and achieve the same convergence rate [3]. In our analysis, the density estimators f̂^(1), f̂₋ᵢ, ĝ^(1), ĝ₋ᵢ are formed by either a KDE or a truncated KDE, and we will make use of these results. We will also need the following regularity condition on the influence function. This is satisfied for smooth functionals including those in Table 1. We demonstrate this in our example in Appendix F.

Assumption 4. For a functional T(f) of one distribution, the influence function ψ satisfies

E[(ψ(X; f′) − ψ(X; f))²] ∈ O(∥f − f′∥₂) as ∥f − f′∥₂ → 0.

For a functional T(f, g) of two distributions, the influence functions ψ_f, ψ_g satisfy

E_f[(ψ_f(X; f′, g′) − ψ_f(X; f, g))²] ∈ O(∥f − f′∥₂ + ∥g − g′∥₂) as ∥f − f′∥₂, ∥g − g′∥₂ → 0,
E_g[(ψ_g(Y; f′, g′) − ψ_g(Y; f, g))²] ∈ O(∥f − f′∥₂ + ∥g − g′∥₂) as ∥f − f′∥₂, ∥g − g′∥₂ → 0.

Under the above assumptions, Emery et al. [6] and Robins et al. [30] show that the DS estimator on a single distribution achieves MSE E[(T̂_DS − T(f))²] ∈ O(n^{−4s/(2s+d)} + n⁻¹) and further is asymptotically normal when s > d/2. Their analysis in the semiparametric setting contains the nonparametric setting as a special case. In Appendix B we review these results with a simpler, self-contained analysis that directly uses the VME and has more interpretable assumptions. An attractive property of our proof is that it is agnostic to the density estimator used, provided it achieves the correct rates. For the LOO estimator (Equation (8)), we establish the following result.
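The bandwidth and rate formulas above are easy to tabulate. A small helper (illustrative only; the constant c and the smoothness s would have to be chosen or estimated in practice):

```python
# Minimax KDE bandwidth h = Theta(n^(-1/(2s+d))) and the resulting
# mean-squared-error rate n^(-2s/(2s+d)) for a density in Sigma(s, L).
def kde_bandwidth(n, s, d, c=1.0):
    return c * n ** (-1.0 / (2 * s + d))

def kde_mse_rate(n, s, d):
    return n ** (-2.0 * s / (2 * s + d))

# e.g. for s = 2, d = 1 the bandwidth shrinks like n^(-1/5)
# and the MSE shrinks like n^(-4/5):
for n in (100, 10000):
    print(n, kde_bandwidth(n, s=2, d=1), kde_mse_rate(n, s=2, d=1))
```

Note how the exponent degrades as the dimension d grows relative to the smoothness s, which is why the s ≥ d/2 regime in the theorems below is where the parametric n⁻¹ rate becomes attainable.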
Theorem 5 (Convergence of LOO Estimator for T(f)). Let f ∈ Σ(s, L, B′, B) and let ψ satisfy Assumption 4. Then E[(T̂_LOO − T(f))^2] is O(n^{−4s/(2s+d)}) when s < d/2 and O(n^{−1}) when s ≥ d/2.

The key technical challenge in analysing the LOO estimator (when compared to the DS estimator) is in bounding the variance, as there are several correlated terms in the summation. The bounded difference inequality is a popular trick used in such settings, but it requires a supremum on the influence functions, which leads to significantly worse rates. Instead we use the Efron-Stein inequality, which provides an integrated version of bounded differences that can recover the correct rate when coupled with Assumption 4. Our proof is contingent on the use of the KDE as the density estimator. While our empirical studies indicate that T̂_LOO's limiting distribution is normal (Fig 2(c)), the proof seems challenging due to the correlation between terms in the summation. We conjecture that T̂_LOO is indeed asymptotically normal, but for now leave it to future work. We reiterate that while the convergence rates are the same for both DS and LOO estimators, data splitting degrades the empirical performance of T̂_DS, as we show in our simulations. Now we turn our attention to functionals of two distributions. When analysing asymptotics we will assume that as n, m → ∞, n/(n + m) → ζ ∈ (0, 1). Denote N = n + m. For the DS estimator (9) we generalise our analysis for one distribution to establish the theorem below.

Theorem 6 (Convergence/Asymptotic Normality of DS Estimator for T(f, g)). Let f, g ∈ Σ(s, L, B′, B) and let ψ_f, ψ_g satisfy Assumption 4. Then E[(T̂_DS − T(f, g))^2] is O(n^{−4s/(2s+d)} + m^{−4s/(2s+d)}) when s < d/2 and O(n^{−1} + m^{−1}) when s ≥ d/2. Further, when s > d/2 and when ψ_f, ψ_g ≠ 0, T̂_DS is asymptotically normal:

√N (T̂_DS − T(f, g)) →_D N( 0, (1/ζ) V_f[ψ_f(X; f, g)] + (1/(1 − ζ)) V_g[ψ_g(Y; f, g)] ).
(11)

The convergence rate is analogous to the one-distribution case, with the estimator achieving the parametric rate under similar smoothness conditions. The asymptotic normality result allows us to construct asymptotic confidence intervals for the functional. Even though the asymptotic variance of the influence function is not known, by Slutsky's theorem any consistent estimate of the variance gives a valid asymptotic confidence interval. In fact, we can use an influence-function-based estimator for the asymptotic variance, since it is also a differentiable functional of the densities. We demonstrate this in our example in Appendix F. The condition ψ_f, ψ_g ≠ 0 is somewhat technical. When both ψ_f and ψ_g are zero, the first-order term vanishes and the estimator converges very fast (at rate 1/n^2). However, the asymptotic behavior of the estimator is then unclear. While this degeneracy occurs only on a meagre set, it does arise for important choices, such as the null hypothesis f = g in two-sample testing problems. Finally, for the LOO estimator (10) on two distributions we have the following result. Convergence is analogous to the one-distribution setting, and the parametric rate is achieved when s ≥ d/2.

Theorem 7 (Convergence of LOO Estimator for T(f, g)). Let f, g ∈ Σ(s, L, B′, B) and let ψ_f, ψ_g satisfy Assumption 4. Then E[(T̂_LOO − T(f, g))^2] is O(n^{−4s/(2s+d)} + m^{−4s/(2s+d)}) when s < d/2 and O(n^{−1} + m^{−1}) when s ≥ d/2.

For many functionals, a Hölderian assumption (Σ(s, L)) alone is sufficient to guarantee the rates in Theorems 5, 6 and 7. However, for some functionals (such as the α-divergences) we require f̂, ĝ, f, g to be bounded above and below. Existing results [3, 12] demonstrate that estimating such quantities is difficult without this assumption. Now we turn our attention to the question of statistical difficulty.
Via lower bounds given by Birgé and Massart [3] and Laurent [14], we know that the DS and LOO estimators are minimax optimal when s > d/2 for functionals of one distribution. In the following theorem, we present a lower bound for estimating functionals of two distributions.

Theorem 8 (Lower Bound for T(f, g)). Let f, g ∈ Σ(s, L) and let T̂ be any estimator for T(f, g). Define τ = min{8s/(4s + d), 1}. Then there exists a strictly positive constant c such that

lim inf_{n→∞} inf_{T̂} sup_{f,g ∈ Σ(s,L)} E[(T̂ − T(f, g))^2] ≥ c (n^{−τ} + m^{−τ}).

Our proof, given in Appendix E, is based on Le Cam's method [35] and generalises the analysis of Birgé and Massart [3] for functionals of one distribution. This establishes minimax optimality of the DS/LOO estimators for functionals of two distributions when s ≥ d/2. However, when s < d/2 there is a gap between our upper and lower bounds. It is natural to ask if it is possible to improve on our rates in this regime. A series of works [3, 11, 14] shows that, for integral functionals of one distribution, one can achieve the n^{−1} rate when s > d/4 by estimating the second-order term in the functional Taylor expansion. This second-order correction has also been carried out for polynomial functionals of two distributions, with similar statistical gains [12]. While we believe this is possible here too, these estimators are conceptually complicated and computationally expensive, requiring O(n^3 + m^3) running time compared to the O(n^2 + m^2) running time of our estimator. The first-order estimator has a favorable balance between statistical and computational efficiency. Further, not much is known about the limiting distribution of second-order estimators.
[Figure 1 shows six log-log panels of error |T̂ − T| against sample size n: Shannon Entropy 1D, Shannon Entropy 2D, KL Divergence, Renyi-0.75 Divergence, Hellinger Divergence and Tsallis-0.75 Divergence, comparing Plug-in, DS, LOO and kNN estimators, plus KDP, Vasicek-KDE and Voronoi where applicable.]

Figure 1: Comparison of DS/LOO estimators against alternatives on different functionals. The y-axis is the error |T̂ − T(f, g)| and the x-axis is the number of samples. All curves were produced by averaging over 50 experiments. Discretisation in hyperparameter selection may explain some of the unsmooth curves.

5 Comparison with Other Approaches

Estimation of statistical functionals under nonparametric assumptions has received considerable attention over the last few decades. A large body of work has focused on estimating the Shannon entropy; Beirlant et al. [1] give a nice review of results and techniques. More recent work in the single-distribution setting includes estimation of Rényi and Tsallis entropies [17, 24]. There are also several papers extending some of these techniques to divergence estimation [10, 12, 26, 27, 37]. Many of the existing methods can be categorised as plug-in methods: they are based on estimating the densities, either via a KDE or using k-nearest neighbors (k-NN), and evaluating the functional on these estimates. Plug-in methods are conceptually simple but unfortunately suffer several drawbacks. First, they typically have a worse convergence rate than our approach, achieving the parametric rate only when s ≥ d as opposed to s ≥ d/2 [19, 32]. Secondly, using either the KDE or k-NN, obtaining the best rates for plug-in methods requires undersmoothing the density estimate, and we are not aware of principled approaches for selecting this smoothing parameter.
In contrast, the bandwidth used in our estimators is the optimal bandwidth for density estimation, so we can select it using a number of approaches, e.g. cross validation. This is convenient from a practitioner's perspective, as the bandwidth can be selected automatically, a convenience that other estimators do not enjoy. Secondly, plug-in methods based on the KDE always require computationally burdensome numeric integration. In our approach, numeric integration can be avoided for many functionals of interest (see Table 1). Another line of work focuses more specifically on estimating f-divergences. Nguyen et al. [22] estimate f-divergences by solving a convex program and analyse the method when the likelihood ratio of the densities belongs to an RKHS. Comparing the theoretical results is not straightforward, as it is not clear how to port the RKHS assumption to our setting. Further, the size of the convex program increases with the sample size, which is problematic for large samples. Moon and Hero [21] use a weighted ensemble estimator for f-divergences. They establish asymptotic normality and the parametric convergence rate only when s ≥ d, which is a stronger smoothness assumption than is required by our technique. Both of these works consider only f-divergences, whereas our method has wider applicability and includes f-divergences as a special case.

6 Experiments

We compare the estimators derived using our methods on a series of synthetic examples. We compare against the methods in [8, 20, 23, 26-29, 33]. Software for the estimators was obtained either

[Figure 2 shows three panels: (a) a log-log plot of error |T̂ − T| against n for the Conditional Tsallis-0.75 Divergence, comparing DS and LOO; (b) and (c) QQ plots of the quantiles of n^{−1/2}(T̂_DS − T)/σ̂ and n^{−1/2}(T̂_LOO − T)/σ̂ against the quantiles of N(0, 1).]

Figure 2: Fig (a): Comparison of the LOO vs DS estimator on estimating the Conditional Tsallis divergence in 4 dimensions.
Note that the plug-in estimator is intractable due to numerical integration. There are no other known estimators for the conditional Tsallis divergence. Figs (b), (c): QQ plots obtained using 4000 samples for Hellinger divergence estimation in 4 dimensions using the DS and LOO estimators respectively.

directly from the papers or from Szabó [34]. For the DS/LOO estimators, we estimate the density via a KDE with the smoothing kernels constructed using Legendre polynomials [35]. In both cases, and for the plug-in estimator, we choose the bandwidth by performing 5-fold cross validation. The integration for the plug-in estimator is approximated numerically. We test the estimators on a series of synthetic datasets in 1-4 dimensions. The specifics of the densities used in the examples, and the methods compared against, are given in Appendix G. The results are shown in Figures 1 and 2. We make the following observations. In most cases the LOO estimator performs best. The DS estimator approaches the LOO estimator when there are many samples, but is generally inferior to the LOO estimator with few samples. This, as we have explained before, is because data splitting does not make efficient use of the data. The k-NN estimator for divergences [28] requires choosing a k. For this estimator, we used the default setting for k given in the software. As performance is sensitive to the choice of k, it performs well in some cases but poorly in others. We reiterate that the hyperparameter of our estimator (the bandwidth of the kernel) can be selected automatically using cross validation. Next, we test the DS and LOO estimators for asymptotic normality on a 4-dimensional Hellinger divergence estimation problem. We use 4000 samples for estimation. We repeat this experiment 200 times and compare the empirical asymptotic distribution (i.e. the √4000 (T̂ − T(f, g))/Ŝ values, where Ŝ is the estimated asymptotic variance) to a N(0, 1) distribution on a QQ plot.
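The QQ comparison just described needs only the standardized replicates and the standard normal quantile function; a sketch using the standard library's NormalDist, where the simulated input stands in for the 200 standardized experiment values:

```python
import numpy as np
from statistics import NormalDist

def qq_points(standardized, n_q=9):
    """Pair empirical quantiles of the standardized estimates with N(0, 1)
    quantiles, as in Figure 2(b, c); under asymptotic normality the pairs
    should lie close to the diagonal."""
    probs = np.arange(1, n_q + 1) / (n_q + 1)
    emp = np.quantile(standardized, probs)
    theo = np.array([NormalDist().inv_cdf(float(p)) for p in probs])
    return theo, emp

rng = np.random.default_rng(1)
theo, emp = qq_points(rng.standard_normal(200))   # toy standardized values
```

Plotting `emp` against `theo` gives the QQ plot; strong correlation between the two indicates approximate normality.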
The results in Figure 2 suggest that both estimators are asymptotically normal. Image clustering: We demonstrate the use of our nonparametric divergence estimators in an image clustering task on the ETH-80 dataset [16]. Using our Hellinger divergence estimator we achieved an accuracy of 92.47%, whereas a naive spectral clustering approach achieved only 70.18%. When we used a k-NN estimator for the Hellinger divergence [28] we achieved 90.04%, which attests to the superiority of our method. Since this is not the main focus of this work, we defer the details to Appendix G.

7 Conclusion

We generalise existing results in Von Mises estimation by proposing an empirically superior LOO technique for estimating functionals and by extending the framework to functionals of two distributions. We also prove a lower bound for the latter setting. We demonstrate the practical utility of our technique via comparisons against other alternatives and an image clustering application. An open problem arising out of our work is to derive the limiting distribution of the LOO estimator.

Acknowledgements

This work is supported in part by NSF Big Data grant IIS-1247658 and DOE grant DESC0011114.

References

[1] Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. Van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 1997.
[2] Peter J. Bickel and Ya'acov Ritov. Estimating integrated squared density derivatives: sharp best order of convergence estimates. Sankhyā: The Indian Journal of Statistics, 1988.
[3] Lucien Birgé and Pascal Massart. Estimation of integral functionals of a density. Ann. of Stat., 1995.
[4] Kevin M. Carter, Raviv Raich, and Alfred O. Hero. On local intrinsic dimension estimation and its applications. IEEE Transactions on Signal Processing, 2010.
[5] Inderjit S. Dhillon, Subramanyam Mallela, and Rahul Kumar. A Divisive Information Theoretic Feature Clustering Algorithm for Text Classification. J.
Mach. Learn. Res., 2003.
[6] M. Emery, A. Nemirovski, and D. Voiculescu. Lectures on Prob. Theory and Stat. Springer, 1998.
[7] Luisa Fernholz. Von Mises calculus for statistical functionals. Lecture Notes in Statistics. Springer, 1983.
[8] Mohammed Nawaz Goria, Nikolai N. Leonenko, Victor V. Mergel, and Pier Luigi Novi Inverardi. A new class of random vector entropy estimators and its applications. Nonparametric Statistics, 2005.
[9] Alfred Hero, Bing Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19, 2002.
[10] David Källberg and Oleg Seleznjev. Estimation of entropy-type integral functionals. arXiv, 2012.
[11] Gérard Kerkyacharian and Dominique Picard. Estimating nonquadratic functionals of a density using Haar wavelets. Annals of Stat., 1996.
[12] Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, and Larry Wasserman. Nonparametric Estimation of Rényi Divergence and Friends. In ICML, 2014.
[13] Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, and Larry Wasserman. On Estimating L2^2 Divergence. In Artificial Intelligence and Statistics, 2015.
[14] Béatrice Laurent. Efficient estimation of integral functionals of a density. Ann. of Stat., 1996.
[15] Erik Learned-Miller and John Fisher III. ICA using spacings estimates of entropy. Mach. Learn. Res., 2003.
[16] Bastian Leibe and Bernt Schiele. Analyzing Appearance and Contour Based Methods for Object Categorization. In CVPR, 2003.
[17] Nikolai Leonenko and Oleg Seleznjev. Statistical inference for the epsilon-entropy and the quadratic Rényi entropy. Journal of Multivariate Analysis, 2010.
[18] Jeremy Lewi, Robert Butera, and Liam Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In NIPS, 2006.
[19] Han Liu, Larry Wasserman, and John D. Lafferty. Exponential concentration for mutual information estimation with application to forests. In NIPS, 2012.
[20] Erik G. Miller.
A new class of Entropy Estimators for Multi-dimensional Densities. In ICASSP, 2003.
[21] Kevin Moon and Alfred Hero. Multivariate f-divergence Estimation With Confidence. In NIPS, 2014.
[22] XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010.
[23] Havva Alizadeh Noughabi and Reza Alizadeh Noughabi. On the Entropy Estimators. Journal of Statistical Computation and Simulation, 2013.
[24] Dávid Pál, Barnabás Póczos, and Csaba Szepesvári. Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs. In NIPS, 2010.
[25] Hanchuan Peng, Fulmi Long, and Chris Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE PAMI, 2005.
[26] Fernando Pérez-Cruz. KL divergence estimation of continuous distributions. In IEEE ISIT, 2008.
[27] Barnabás Póczos and Jeff Schneider. On the estimation of alpha-divergences. In AISTATS, 2011.
[28] Barnabás Póczos, Liang Xiong, and Jeff G. Schneider. Nonparametric Divergence Estimation with Applications to Machine Learning on Distributions. In UAI, 2011.
[29] David Ramírez, Javier Vía, Ignacio Santamaría, and Pedro Crespo. Entropy and Kullback-Leibler Divergence Estimation based on Szegő's Theorem. In EUSIPCO, 2009.
[30] James Robins, Lingling Li, Eric Tchetgen, and Aad W. van der Vaart. Quadratic semiparametric Von Mises Calculus. Metrika, 2009.
[31] Elad Schneidman, William Bialek, and Michael J. Berry II. An Information Theoretic Approach to the Functional Classification of Neurons. In NIPS, 2002.
[32] Shashank Singh and Barnabas Poczos. Exponential Concentration of a Density Functional Estimator. In NIPS, 2014.
[33] Dan Stowell and Mark D. Plumbley. Fast Multidimensional Entropy Estimation by k-d Partitioning. IEEE Signal Process. Lett., 2009.
[34] Zoltán Szabó.
Information Theoretical Estimators Toolbox. J. Mach. Learn. Res., 2014.
[35] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2008.
[36] Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[37] Qing Wang, Sanjeev R. Kulkarni, and Sergio Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 2009.
Learnability of Influence in Networks

Harikrishna Narasimhan David C. Parkes Yaron Singer
Harvard University, Cambridge, MA 02138
hnarasimhan@seas.harvard.edu, {parkes, yaron}@seas.harvard.edu

Abstract

We show PAC learnability of influence functions for three common influence models, namely, the Linear Threshold (LT), Independent Cascade (IC) and Voter models, and present concrete sample complexity results in each case. Our results for the LT model are based on interesting connections with neural networks; those for the IC model are based on an interpretation of the influence function as an expectation over a random draw of a subgraph and use covering number arguments; and those for the Voter model are based on a reduction to linear regression. We show these results for the case in which the cascades are only partially observed and we do not see the time steps at which nodes are influenced. We also provide efficient polynomial time learning algorithms for a setting with full observation, i.e. where the cascades also contain the time steps at which nodes are influenced.

1 Introduction

For several decades there has been much interest in understanding the manner in which ideas, language, and information cascades spread through society. With the advent of social networking technologies in recent years, digital traces of human interactions are becoming available, and the problem of predicting information cascades from these traces has gained enormous practical value. For example, this is critical in applications like viral marketing, where one needs to maximize awareness about a product by selecting a small set of influential users [1]. To this end, the spread of information in networks is modeled as an influence function which maps a set of seed nodes that initiate the cascade to (a distribution on) the set of individuals who will be influenced as a result [2]. These models are parametrized by variables that are unknown and need to be estimated from data.
There has been much work on estimating the parameters of influence models (or the structure of the underlying social graph) from observed cascades of influence spread, and on using the estimated parameters to predict influence for a given seed set [3, 4, 5, 6, 7, 8]. These parameter estimation techniques make use of local influence information at each node, and there has been a recent line of work devoted to providing sample complexity guarantees for these local estimation techniques [9, 10, 11, 12, 13]. However, one cannot locally estimate the influence parameters when the cascades are not completely observed (e.g. when the cascades do not contain the times at which the nodes are influenced). Moreover, influence functions can be sensitive to errors in model parameters, and existing results do not tell us to what accuracy the individual parameters need to be estimated to obtain accurate influence predictions. If the primary goal in an application is to predict influence accurately, it is natural to ask for algorithms that have learnability guarantees on the influence function itself. A benchmark for studying such questions is the Probably Approximately Correct (PAC) learning framework [14]: are influence functions PAC learnable? While many influence models have been popularized due to their approximation guarantees for influence maximization [2, 15, 16], learnability of influence is an equally fundamental property.

[Footnote: Part of this work was done when HN was a PhD student at the Indian Institute of Science, Bangalore.]

In this paper, we show PAC learnability for three well-studied influence models: the Linear Threshold, the Independent Cascade, and the Voter models. We primarily consider a setting where the cascades are partially observed, i.e. where only the nodes influenced, and not the time steps at which they were influenced, are observed. This is a setting where existing local estimation techniques cannot be applied to obtain parameter estimates.
Additionally, for a fully observed setting where the time of influence is also observed, we show polynomial time learnability; our methods here are akin to using local estimation techniques, but come with guarantees on the global influence function.

Main results. Our learnability results are summarized below.

• Linear threshold (LT) model: Our result here is based on an interesting observation that LT influence functions can be seen as multi-layer neural network classifiers, and proceeds by bounding their VC-dimension. The method analyzed here picks a function with zero training error. While this can be computationally hard to implement under partial observation, we provide a polynomial time algorithm for the full observation case using local computations.

• Independent cascade (IC) model: Our result uses an interpretation of the influence function as an expectation over a random draw of a subgraph [2]; this allows us to show that the function is Lipschitz and to invoke covering number arguments. The algorithm analyzed for partial observation is based on global maximum likelihood estimation. Under full observation (and additional assumptions), we show polynomial time learnability using a local estimation technique.

• Voter model: Our result follows from a reduction of the learning problem to a linear regression problem; the resulting learning algorithm can be implemented in polynomial time for both the full and partial observation settings.

Related work. A related problem to ours is that of inferring the structure of the underlying social graph from cascades [6]. There has been a series of results on polynomial sample complexity guarantees for this problem under variants of the IC model [9, 12, 10, 11]. Most of these results make specific assumptions on the cascade/graph structure, and assume a full observation setting.
On the other hand, in our problem, the structure of the social graph is assumed to be known, and the goal is to provably learn the underlying influence function. Our results do not depend on assumptions on the network structure, and primarily apply to the more challenging partial observation setting. The work that is most related to ours is that of Du et al. [13], who show polynomial sample complexity results for learning influence in the LT and IC models (under partial observation). However, their approach uses approximations to influence functions and consequently requires a strong technical condition to hold, which is not necessarily satisfied in general. Our results for the LT and IC models are somewhat orthogonal. While the authors in [13] trade off assumptions on learnability and gain efficient algorithms that work well in practice, our goal is to show unconditional sample complexity for learning influence. We do this at the expense of the efficiency of the learning algorithms in the partial observation setting. Moreover, the technical approach we take is substantially different. There has also been work on learnability of families of discrete functions such as submodular [17] and coverage functions [18], under the PAC and the variant PMAC frameworks. These results assume availability of a training sample containing exact values of the target function on the given input sets. While IC influence functions can be seen as coverage functions [2], the previous results do not directly apply to the IC class since, in practice, the true (expected) value of an IC influence function on a seed set is never observed, and only a random realization is seen. In contrast, our learnability result for IC functions does not require the exact function values to be known. Moreover, the previous results require strict assumptions on the input distribution.
Since we focus on learnability of specific function classes rather than large families of discrete functions, we are able to handle general seed distributions for the most part. Other results relevant to our work include learnability of linear influence games [19], where the techniques used bear some similarity to our analysis for the LT model.

2 Preliminaries

Influence models. We represent a social network as a finite graph G = (V, E), where the nodes V = {1, ..., n} represent a set of n individuals and the edges E ⊆ V^2 represent their social links. Let |E| = r. The graph is assumed to be directed unless otherwise specified. Each edge (u, v) ∈ E is associated with a weight w_uv ∈ R+ that indicates the strength of influence of node v on node u. We consider a setting where each node in the network holds an opinion in {0, 1} and opinions disseminate in the network. This dissemination process begins with a small subset of nodes called the seed, which have opinion 1 while the rest have opinion 0, and continues in discrete time steps. In every time step, a node may change its opinion from 0 to 1 based on the opinions of its neighbors, according to some local model of influence; if this happens, we say that the node is influenced. We will use N(u) to denote the set of neighbors of node u, and A_t to denote the set of nodes that are influenced at time step t. We consider three well-studied models:

• Linear threshold (LT) model: Each node u holds a threshold k_u ∈ R+, and is influenced at time t if the total incoming weight from its neighbors that were influenced at the previous time step t − 1 exceeds the threshold: Σ_{v ∈ N(u) ∩ A_{t−1}} w_uv ≥ k_u. Once influenced, node u can then influence its neighbors for one time step, and never changes its opinion to 0.¹

• Independent cascade (IC) model: Restricting the edge weights w_uv to be in [0, 1], a node u is influenced at time t independently by each neighbor v who was influenced at time t − 1, with probability w_uv.
The node can then influence its neighbors for one time step, and never changes its opinion to 0.

• Voter model: The graph is assumed to be undirected (with self-loops); at time step t, a node u adopts the opinion of its neighbor v with probability w_uv / Σ_{v′ ∈ N(u) ∪ {u}} w_uv′. Unlike the LT and IC models, here a node may change its opinion from 1 to 0 or from 0 to 1 at every step.

We stress that a node is influenced at time t if it changes its opinion from 0 to 1 exactly at t. Also, in both the LT and IC models, no node gets influenced more than once, and hence an influence cascade can last for at most n time steps. For simplicity, we shall consider in all our definitions only cascades of length n. While revisiting the Voter model in Section 5, we will look at more general cascades.

Definition 1 (Influence function). Given an influence model, a (global) influence function F : 2^V → [0, 1]^n maps an initial set of nodes X ⊆ V seeded with opinion 1 to a vector of probabilities [F_1(X), ..., F_n(X)] ∈ [0, 1]^n, where the u-th coordinate indicates the probability of node u ∈ V being influenced during any time step of the corresponding influence cascades.

Note that for the LT model, the influence process is deterministic, and the influence function simply outputs a binary vector in {0, 1}^n. Let F_G denote the class of all influence functions under an influence model over G, obtained for different choices of parameters (edge weights/thresholds) in the model. We will be interested in learning the influence function for a given parametrization of this influence model. We shall assume that the initial set of nodes that are seeded with opinion 1 at the start of the influence process, or the seed set, is chosen i.i.d. according to a distribution µ over all subsets of nodes. We are given a training sample consisting of draws of initial seed sets from µ, along with observations of the nodes influenced in the corresponding influence process.
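The deterministic LT dynamics defined above can be simulated directly; a minimal sketch (the function name and the dict-of-dicts weight layout are illustrative choices, and only the weight arriving from nodes influenced at the previous step counts toward the threshold, as the model prescribes):

```python
def lt_cascade(weights, thresholds, seeds):
    """Run a Linear Threshold cascade to completion.

    weights[u] maps neighbour v -> w_uv; node u flips to opinion 1 at step t
    when the total weight from nodes influenced at step t - 1 reaches k_u.
    Returns every node holding opinion 1 when the cascade stops.
    """
    n = len(thresholds)
    on, newly = set(seeds), set(seeds)
    while newly:
        # candidates are nodes not yet influenced; only `newly` contributes weight
        newly = {u for u in range(n) if u not in on
                 and sum(weights[u].get(v, 0.0) for v in newly) >= thresholds[u]}
        on |= newly
    return on

# 3-node chain 0 -> 1 -> 2 with unit weights and thresholds 0.5: seeding
# node 0 influences the whole chain, one node per time step.
w = [{}, {0: 1.0}, {1: 1.0}]
result = lt_cascade(w, [0.5, 0.5, 0.5], {0})   # {0, 1, 2}
```

Because no node is ever influenced twice, the loop runs at most n times, matching the cascade-length bound above.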
Our goal is then to learn from F_G an influence function that best captures the observed influence process.

Measuring Loss. To measure the quality of the learned influence function, we define a loss function ℓ : 2^V × [0, 1]^n → R+ that, for any subset of influenced nodes Y ⊆ V and predicted influence probabilities p ∈ [0, 1]^n, assigns a value ℓ(Y, p) measuring the discrepancy between Y and p. We define the error of a learned function F ∈ F_G for a given seed distribution µ and model parametrization as the expected loss incurred by F: err_ℓ[F] = E_{X,Y}[ℓ(Y, F(X))], where the above expectation is over a random draw of the seed set X from the distribution µ and over the corresponding subsets of nodes Y influenced during the cascade. We will be particularly interested in the difference between the error of an influence function F_S ∈ F_G learned from a training sample S and the minimum possible error achievable over all influence functions in F_G: err_ℓ(F_S) − inf_{F ∈ F_G} err_ℓ(F), and would like to learn influence functions for which this difference is guaranteed to be small (using only polynomially many training examples).

Full and partial observation. We primarily work in a setting in which we observe the nodes influenced in a cascade, but not the time steps at which they were influenced. In other words, we assume availability of a partially observed training sample S = {(X^1, Y^1), ..., (X^m, Y^m)}, where X^i denotes the seed set of cascade i and Y^i is the set of nodes influenced in that cascade. We will also consider a refined notion of full observation in which we are provided a training sample S = {(X^1, Y^1_{1:n}), ..., (X^m, Y^m_{1:n})}, where Y^i_{1:n} = {Y^i_1, ..., Y^i_n} and Y^i_t is the set of nodes in cascade i who were influenced precisely at time step t.

[Footnote 1: In settings where the node thresholds are unknown, it is common to assume that they are chosen randomly by each node [2]. In our setup, the thresholds are parameters that need to be learned from cascades.]
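The two observation modes differ only in whether the per-step sets are retained; a toy sketch of the bookkeeping (node labels are arbitrary):

```python
# Fully observed cascade: seed set plus per-step influenced sets Y_1, Y_2, ...
x_seed = {0, 3}
y_steps = [{1, 4}, {2}, set()]       # nodes influenced at steps 1, 2, 3

# The partially observed view keeps only the union of the Y_t, discarding
# the timing information.
y_partial = set().union(*y_steps)    # {1, 2, 4}
```
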
Notice that here the complete set of nodes influenced in cascade i is given by ∪_{t=1}^n Y^i_t. This setting is particularly of interest when discussing learnability in polynomial time. The structure of the social graph is always assumed to be known.

PAC learnability of influence functions. Let F_G be the class of all influence functions under an influence model over an n-node social network G = (V, E). We say F_G is probably approximately correct (PAC) learnable w.r.t. a loss ℓ if there exists an algorithm such that the following holds for all ϵ, δ ∈ (0, 1), for all parametrizations of the model, and for all (or a subset of) distributions µ over seed sets: when the algorithm is given a partially observed training sample S = {(X^1, Y^1), ..., (X^m, Y^m)} with m ≥ poly(1/ϵ, 1/δ) examples, it outputs an influence function F_S ∈ F_G for which

P_S( err_ℓ(F_S) − inf_{F ∈ F_G} err_ℓ(F) ≥ ϵ ) ≤ δ,

where the above probability is over the randomness in S. Moreover, F_G is efficiently PAC learnable under this setting if the running time of the algorithm in the above definition is polynomial in m and in the size of G. We say F_G is (efficiently) PAC learnable under full observation if the above definition holds with a fully observed training sample S = {(X^1, Y^1_{1:n}), ..., (X^m, Y^m_{1:n})}.

Sensitivity of influence functions to parameter errors. A common approach to predicting influence under full observation is to estimate the model parameters using local influence information at each node. However, an influence function can be highly sensitive to errors in the estimated parameters. For example, consider an IC model on a chain of n nodes where all edge parameters are 1; if the parameters have all been underestimated with a constant error of ϵ, the estimated probability of the last node being influenced is (1 − ϵ)^n, which is exponentially smaller than the true value 1 for large n.
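The chain example is easy to check numerically; the sketch below tabulates the estimated end-to-end probability (1 − ϵ)^n for a couple of error levels, against the true value 1:

```python
# IC chain with n = 100 edges, all true weights 1.  If every weight is
# underestimated by eps, the predicted probability that the last node is
# influenced is (1 - eps)^n instead of 1.
n = 100
est_at = {eps: (1 - eps) ** n for eps in (0.05, 1 / n)}
# eps = 0.05 drives the prediction below 0.01, while eps = 1/n still leaves
# it near e^{-1} (about 0.37).
```
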
Our results for full observation provide concrete sample complexity guarantees for learning influence functions using local estimation, to any desired accuracy; in particular, for the above example, our results prescribe that ϵ be driven below 1/n for accurate predictions (see Section 4 on the IC model). Of course, under partial observation we do not see enough information to locally estimate the individual model parameters, and the influence function needs to be learned directly from cascades.

3 The Linear Threshold model

We start with learnability in the Linear Threshold (LT) model. Given that the influence process is deterministic and the influence function outputs binary values, we use the 0-1 loss for evaluation; for any subset of nodes Y ⊆ V and predicted boolean vector q ∈ {0, 1}^n, this is the fraction of nodes on which the prediction is wrong: ℓ_{0-1}(Y, q) = (1/n) Σ_{u=1}^n 1(χ_u(Y) ≠ q_u), where χ_u(Y) = 1(u ∈ Y).

Theorem 1 (PAC learnability under the LT model). The class of influence functions under the LT model is PAC learnable w.r.t. ℓ_{0-1}, and the corresponding sample complexity is Õ(ϵ^{-1}(r + n)). Furthermore, in the full observation setting the influence functions can be learned in polynomial time.

The proof is in Appendix A; we give an outline here. Let F^w denote an LT influence function with parameters w ∈ R^{r+n} (edge weights and thresholds), and let us focus on the partial observation setting (only a node, and not its time of influence, is observed). Consider a simple algorithm that outputs an influence function with zero error on the training sample S = {(X^1, Y^1), . . . , (X^m, Y^m)}:

(1/m) Σ_{i=1}^m ℓ_{0-1}(Y^i, F^w(X^i)) = (1/(mn)) Σ_{i=1}^m Σ_{u=1}^n 1(χ_u(Y^i) ≠ F^w_u(X^i)). (1)

Such a function always exists, as the training cascades are generated using the LT model. We will shortly look at computational issues in implementing this algorithm. We now explain our PAC learnability result for this algorithm.
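The deterministic LT process and the empirical 0-1 error of Eq. (1) can be sketched directly (a minimal version, following the convention above that local influence at step t depends on the nodes influenced at step t − 1; the graph, weights, and names are illustrative):

```python
def lt_influence(neighbors_w, thresholds, X):
    """Simulate the deterministic LT influence process.
    neighbors_w[u]: dict v -> w_uv of incoming edge weights;
    thresholds[u]: k_u.  A node becomes influenced when the total
    weight from nodes influenced at the previous step reaches its
    threshold, and is never influenced twice.  Returns the set of
    all influenced nodes (seed set included)."""
    influenced, frontier = set(X), set(X)
    while frontier:
        frontier = {
            u for u in neighbors_w if u not in influenced
            and sum(w for v, w in neighbors_w[u].items() if v in frontier)
                >= thresholds[u]
        }
        influenced |= frontier
    return influenced

def training_error(sample, nodes, F):
    """Empirical 0-1 error of Eq. (1): fraction of mispredicted nodes,
    averaged over the cascades in sample = [(X_i, Y_i), ...]."""
    return sum(
        sum((u in Y) != (u in F(X)) for u in nodes) / len(nodes)
        for X, Y in sample
    ) / len(sample)

# Toy chain 1 -> 2 -> 3 with unit weights and unit thresholds.
W = {1: {}, 2: {1: 1.0}, 3: {2: 1.0}}
k = {1: 1.0, 2: 1.0, 3: 1.0}
F = lambda X: lt_influence(W, k, X)
print(lt_influence(W, k, {1}))                           # {1, 2, 3}
print(training_error([({1}, {1, 2, 3})], [1, 2, 3], F))  # 0.0
```

A zero-training-error function is guaranteed to exist here because the toy cascade was itself generated by an LT process, mirroring the argument in the text.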
[Figure 1: Modeling a single time step t of the influence process F_{t,u}: 2^V → {0, 1} as a neural network (t ≥ 2): the portion in black computes whether or not node u is influenced in the current time step t, while that in red/blue enforces the constraint that u does not get influenced more than once during the influence process. Here ξ_{t,u} is 1 when a node has been influenced previously and 0 otherwise. The dotted red edges represent strong negative signals (a large negative weight) and the dotted blue edges represent strong positive signals. The initial input to each node u in the input layer is 1(u ∈ X), while that for the auxiliary nodes (in red) is 0.]

The main idea is in interpreting LT influence functions as neural networks with linear threshold activations. The proof follows by bounding the VC-dimension of the class of all functions F^w_u for node u, and using standard arguments in showing learnability under finite VC-dimension [20]. We sketch the neural network (NN) construction in two steps (local influence as a two-layer NN, and global influence as a multilayer network; see Figure 1), where a crucial part is in ensuring that no node gets influenced more than once during the influence process:

1. Local influence as a two-layer NN. Recall that the (local) influence at a node u for previously influenced nodes Z is given by 1(Σ_{v ∈ N(u) ∩ Z} w_uv ≥ k_u). This can be modeled as a linear (binary) classifier, or equivalently as a two-layer NN with linear threshold activations. Here the input layer contains a unit for each node in the network and takes a binary value indicating whether the node is present in Z; the output layer contains a binary unit indicating whether u is influenced after one time step; the connections between the two layers correspond to the edges between u and other nodes; and the threshold term on the output unit is the threshold parameter k_u.
Thus the first step of the influence process can be modeled using a NN with two n-node layers (the input layer takes information about the seed set, and the binary output indicates which nodes got influenced).

2. From local to global: the multilayer network. The two-layer NN can be extended to multiple time steps by replicating the output layer once for each step. However, the resulting NN would allow a node to get influenced more than once during the influence process. To avoid this, we introduce an additional binary unit u′ for each node u in a layer, which records whether node u was influenced in previous time steps. In particular, whenever node u is influenced in a layer, a strong positive signal is sent to activate u′ in the next layer, which in turn sends out strong negative signals to ensure u is never activated in subsequent layers²; we use additional connections to ensure that u′ remains active thereafter. Note that a node u in layer t + 1 is 1 whenever u is influenced at time step t; let F^w_{t,u}: 2^V → {0, 1} denote this function computed at u for a given seed set. The LT influence function F^w_u (which for seed set X is 1 whenever u is influenced in any one of the n time steps) is then given by F^w_u(X) = Σ_{t=1}^n F^w_{t,u}(X). Clearly, F^w_u can be modeled as a NN with n + 1 layers. A naive application of classic VC-dimension results for NNs [21] gives that the VC-dimension of the class of functions F_u is Õ(n(r + n)) (counting r + n parameters for each layer). Since the same parameters are repeated across layers, this can be tightened to Õ(r + n). The remaining proof involves standard uniform convergence arguments [20] and a union bound over all nodes.

3.1 Efficient computation

Having shown PAC learnability, we turn to efficient implementation of the prescribed algorithm.

Partial observation.
In the case where the training set does not specify the time at which each node was infected, finding an influence function with zero training error is computationally hard in general (as this is similar to learning a recurrent neural network). In practice, however, we can leverage the neural network construction and solve the problem approximately, by replacing linear threshold activations with sigmoidal activations and the 0-1 loss with a suitable continuous surrogate loss, and applying back-propagation based methods used for neural network learning.

Full observation. Here it turns out that the algorithm can be implemented in polynomial time using local computations. Given a fully observed sample S = {(X^1, Y^1_{1:n}), . . . , (X^m, Y^m_{1:n})}, the loss of an influence function F for any (X, Y_{1:n}) is given by ℓ_{0-1}(∪_{t=1}^n Y_t, F(X)) and, as before, measures the fraction of mispredicted nodes. The prescribed algorithm then seeks parameters w for which the corresponding training error is 0. Given that the time of influence is observed, this problem can be decoupled into a set of linear programs (LPs) at each node; this is akin to locally estimating the parameters at each node. In particular, let w_u denote the parameters local to node u (incoming weights and threshold), and let f_u(Z; w_u) = 1(Σ_{v ∈ N(u) ∩ Z} w_uv ≥ k_u) denote the local influence at u for a set Z of previously influenced nodes. Let α̂_{1,u}(w_u) = (1/m) Σ_{i=1}^m 1(χ_u(Y^i_1) ≠ f_u(X^i; w_u)) and, for t ≥ 2, α̂_{t,u}(w_u) = (1/m) Σ_{i=1}^m 1(χ_u(Y^i_t) ≠ f_u(Y^i_{t−1}; w_u)), which, given the set of nodes Y^i_{t−1} influenced at time t − 1, measures the local prediction error at time t. (By a strong signal above, we mean a large positive/negative connection weight which will outweigh the signals from other connections; such connections can indeed be created when the weights are all bounded.) Since the training sample was
generated by an LT model, there always exist parameters such that α̂_{t,u}(w_u) = 0 for each t and u, which also implies that the overall training error is 0. Such a set of parameters can be obtained by formulating a suitable LP that can be solved in polynomial time. The details are in Appendix A.2.

4 The Independent Cascade model

We now address the question of learnability in the Independent Cascade (IC) model. Since the influence functions here have probabilistic outputs, the proof techniques will differ from those of the previous section and will rely on arguments based on covering numbers. In this case we use the squared loss, which for any Y ⊆ V and q ∈ [0, 1]^n is given by ℓ_sq(Y, q) = (1/n) Σ_{u=1}^n [χ_u(Y)(1 − q_u)^2 + (1 − χ_u(Y)) q_u^2]. We make the mild assumption that the edge probabilities are bounded away from 0 and 1, i.e. w ∈ [λ, 1 − λ]^r for some λ ∈ (0, 0.5).

Theorem 2 (PAC learnability under the IC model). The class of influence functions under the IC model is PAC learnable w.r.t. ℓ_sq, and the sample complexity is m = Õ(ϵ^{-2} n^3 r). Furthermore, in the full observation setting, under additional assumptions (see Assumption 1), the influence functions can be learned in polynomial time with sample complexity Õ(ϵ^{-2} n r^3).

The proof is given in Appendix B. As noted earlier, an IC influence function can be sensitive to errors in estimated parameters. Hence, before discussing our algorithms and analysis, we seek to understand the extent to which changes in the IC parameters can produce changes in the influence function, and in particular check whether the function is Lipschitz. For this, we use the closed-form interpretation of the IC function as an expectation of an indicator term over a randomly drawn subset of edges from the network (see [2]).
More specifically, the IC cascade process can be seen as activating a subset of edges in the network; since each edge can be activated at most once, the active edges can be seen as having been chosen a priori using independent Bernoulli draws. Consider a random subgraph of active edges obtained by choosing each edge (u, v) ∈ E independently with probability w_uv. For a given subset of such edges A ⊆ E and seed set X ⊆ V, let σ_u(A, X) be an indicator function that evaluates to 1 if u is reachable from a node in X via edges in A, and 0 otherwise. Then the IC influence function can be written as an expectation of σ over a random draw of the subgraph:

F^w_u(X) = Σ_{A ⊆ E} [ ∏_{(a,b) ∈ A} w_ab ∏_{(a,b) ∉ A} (1 − w_ab) ] σ_u(A, X). (2)

While the above definition involves an exponential number of terms, it can be verified that the corresponding gradient is bounded, thus implying that the IC function is Lipschitz.³

Lemma 3. Fix X ⊆ V. For any w, w′ ∈ R^r with ∥w − w′∥_1 ≤ ϵ, |F^w_u(X) − F^{w′}_u(X)| ≤ ϵ.

This result tells us how small the parameter errors need to be to obtain accurate influence predictions, and will be used crucially in our learnability results. Note that for the chain example in Section 2, this tells us that the errors need to be less than 1/n for meaningful influence predictions. We are now ready to provide the PAC learning algorithm for the partial observation setting with sample S = {(X^1, Y^1), . . . , (X^m, Y^m)}; we sketch the proof here. The full observation case is outlined in Section 4.1, where we make use of a different approach based on local estimation. Let F^w denote the IC influence function with parameters w. The algorithm that we consider for partial observation resorts to maximum likelihood (ML) estimation of the (global) IC function. Let χ_u(Y) = 1(u ∈ Y).
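The live-edge representation in Eq. (2) suggests a simple Monte Carlo estimator, in line with footnote 3's remark that IC influence functions can be computed through sampling (a sketch; the graph, probabilities, and function names are illustrative):

```python
import random

def ic_influence_mc(edge_probs, X, u, trials=5000, seed=0):
    """Estimate F^w_u(X) under the IC model via Eq. (2): draw a random
    subgraph of 'live' edges, and count how often u is reachable from
    the seed set X.  edge_probs: dict (a, b) -> w_ab (directed edges)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        live = [e for e, p in edge_probs.items() if rng.random() < p]
        # BFS from X over live edges; sigma_u(A, X) = 1 iff u is reached.
        reached, frontier = set(X), list(X)
        while frontier:
            a = frontier.pop()
            for (s, t) in live:
                if s == a and t not in reached:
                    reached.add(t)
                    frontier.append(t)
        hits += u in reached
    return hits / trials

# Two-edge chain 1 -> 2 -> 3 with probability 0.5 on each edge:
# the last node is reached iff both edges are live, so F_3({1}) = 0.25.
probs = {(1, 2): 0.5, (2, 3): 0.5}
print(ic_influence_mc(probs, {1}, 3))   # close to 0.25
```

Averaging over random subgraphs is exactly the expectation in Eq. (2), truncated to a finite number of draws.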
Define the (global) log-likelihood for a cascade (X, Y ) as: L(X, Y ; w) = n X u=1 χu(Y ) ln F w u (X)  + (1 −χu(Y )) ln 1 −F w u (X)  , The prescribed algorithm then solves the following optimization problem, and outputs an IC influence function F w from the solution w obtained. max w ∈[λ,1−λ]r m X i=1 L(Xi, Y i; w). (3) 3In practice, IC influence functions can be computed through suitable sampling approaches. Also, note that a function class can be PAC learnable even if the individual functions cannot be computed efficiently. 6 To provide learnability guarantees for the above ML based procedure, we construct a finite ϵ-cover over the space of IC influence functions, i.e. show that the class can be approximated to a factor of ϵ (in the infinity norm sense) by a finite set of IC influence functions. We first construct an ϵ-cover of size O((r/ϵ)r) over the space of parameters [λ, 1 −λ]r, and use Lipschitzness to translate this to an ϵ-cover of same size over the IC class. Following this, standard uniform convergence arguments [20] can be used to derive a sample complexity guarantee on the expected likelihood with a logarithmic dependence on the cover size; this then implies the desired learnability result w.r.t. ℓsq: Lemma 4 (Sample complexity guarantee on the log-likelihood objective). Fix ϵ, δ ∈(0, 1) and m = eO ϵ−2n3r  . Let w be the parameters obtained from ML estimation. Then w.p. ≥1 −δ, sup w∈[λ,1−λ]r E  1 nL(X, Y ; w)  −E  1 nL(X, Y ; w)  ≤ϵ. Compared to results for the LT model, the sample complexity in Theorem 2 has a square dependence on 1/ϵ. This is not surprising, as unlike the LT model, where the optimal 0-1 error is zero, the optimal squared error here is non-zero in general; in fact, there are standard sample complexity lower bound results that show that for similar settings, one cannot obtain a tighter bound in terms of 1/ϵ [20]. We wish to also note that the approach of Du et al. 
(2014) for learning influence under partial observation [13] uses the same interpretation of the IC influence function as in Eq. (2), but rather than learning the parameters of the model, they seek to learn the weights on the individual indicator functions. Since there are exponentially many indicator terms, they resort to constructing approximations to the influence function, for which a strong technical condition needs to be satisfied; this condition need not however hold in most settings. In contrast, our result applies to general settings. 4.1 Efficient computation Partial observation. The optimization problem in Eq. (3) that we need to solve for the partial observation case is non-convex in general. Of course, in practice, this can be solved approximately using gradient-based techniques, using sample-based gradient computations to deal with the exponential number of terms in the definition of F w in the objective (see Appendix B.5). Full observation. On the other hand, when training sample S = {(X1, Y 1 1:n), . . . , (Xm, Y m 1:n)} contains fully observed cascades, we are able to show polynomial time learnability. For the LT model, we were assured of a set of parameters that would yield zero 0-1 error on the training sample, and hence the same procedure prescribed for partial information could be implemented under the full observation in polynomial time by reduction to local computations. This is not the case with the IC model, where we resort to the common approach of learning influence by estimating the model parameters through a local maximum likelihood (ML) estimation technique. This method is similar to the maximum likelihood procedure used in [9] for solving a different problem of recovering the structure of an unknown network from cascades. For the purpose of showing learnability, we find it sufficient to apply this procedure to only the first time step of the cascade. 
Our analysis first provides guarantees on the estimated parameters, and uses the Lipschitz property in Lemma 3 to translate them to guarantees on the influence function. Since we now wish to give guarantees in the parameter space, we will require that there exists unique set of parameters that explains the IC cascade process; for this, we will need stricter assumptions. We assume that all edges have a minimum influence strength, and that even when all neighbors of a node u are influenced in a time step, there is a small probability of u not being influenced in the next step; we consider a specific seed distribution, where each node has a non-zero probability of (not) being a seed node. Assumption 1. Let w∗denote the parameters of the underlying IC model. Then there exists λ ≥ γ ∈(0, 0.5) such that w∗ uv ≥λ for all (u, v) ∈E and Q v∈N(u)(1−wuv) ≥γ for all u ∈V . Also, each node in V is chosen independently in the initial seed set with probability κ ∈(0, 1). We first define the local log-likelihood for given seed set X and nodes Y1 influenced at t = 1: L(X, Y1; β) = X u/∈X  χu(Y1) ln  1 −exp  − X v∈N(u)∩X βuv  −(1 −χu(Y1)) X v∈N(u)∩X βuv  , where we have used log-transformed parameters βuv = −ln(1 −wuv), so that the objective is concave in β. The prescribed algorithm then solves the following maximization problem over all 7 parameters that satisfy Assumption 1 and constructs an IC influence function from the parameters. max β ∈Rr + m X i=1 L(Xi, Y i 1 ; β) s.t. ∀(u, v) ∈E, βuv ≥ln  1 1 −λ  , ∀u ∈V, X v∈N(u) βuv ≥ln  1 γ  . This problem breaks down into smaller convex problems and can be solved efficiently (see [9]). Proposition 5 (PAC learnability under IC model with full observation). Under full observation and Assumption 1, the class of IC influence functions is PAC learnable in polynomial time through local ML estimation. The corresponding sample complexity is eO nr3(κ2(1 −κ)4λ2γ2ϵ2)−1 . 
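The local log-likelihood above can be sketched for a single cascade's first time step (a minimal version in the log-transformed parameters β_uv = −ln(1 − w_uv); all names and the toy numbers are illustrative):

```python
import math

def local_neg_log_lik(beta, X, Y1, in_neighbors):
    """Negative local log-likelihood -L(X, Y1; beta) for the first IC
    time step, convex in the log-transformed parameters beta.
    in_neighbors[u]: list of in-neighbors v, with weights beta[(u, v)]."""
    total = 0.0
    for u in in_neighbors:
        if u in X:
            continue                      # seeds are not modeled
        s = sum(beta[(u, v)] for v in in_neighbors[u] if v in X)
        if s == 0.0:
            continue                      # no seeded in-neighbor
        if u in Y1:
            total -= math.log(1.0 - math.exp(-s))
        else:
            total += s
    return total

# Single edge v -> u with w_uv = 0.5, so beta_uv = ln 2.
beta = {('u', 'v'): math.log(2.0)}
nb = {'u': ['v'], 'v': []}
print(local_neg_log_lik(beta, {'v'}, {'u'}, nb))   # -ln(0.5) = ln 2
```

Summing this objective over the training cascades and minimizing it subject to the constraints of Assumption 1 gives the constrained ML problem stated above.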
The proof is provided in Appendix B.6 and proceeds through the following steps: (1) we use covering number arguments to show that the local log-likelihood for the estimated parameters is close to the optimal value; (2) we then show that under Assumption 1, the expected log-likelihood is strongly concave, which gives us that closeness to the true model parameters in terms of the likelihood also implies closeness to the true parameters in the parameter space; (3) we finally use the Lipschitz property in Lemma 3 to translate this to guarantees on the global influence function. Note that the sample complexity here has a worse dependence on the number of edges r compared to the partial observation case; this is due to the two-step approach of requiring guarantees on the individual parameters, and then transferring them to the influence function. The better dependence on the number of nodes n is a consequence of estimating parameters locally. It would be interesting to see if tighter results can be obtained by using influence information from all time steps, and making different assumptions on the model parameters (e.g. correlation decay assumption in [9]). 5 The Voter model Before closing, we sketch of our learnability results for the Voter model, where unlike previous models the graph is undirected (with self-loops). Here we shall be interested in learning influence for a fixed number of K time steps as the cascades can be longer than n. With the squared loss again as the loss function, this problem almost immediately reduces to linear least squares regression. Let W ∈[0, 1]n×n be a matrix of normalized edge weights with Wuv = wuv/ P v∈N(u)∪{u} wuv if (u, v) ∈E and 0 otherwise. Note that W can be seen as a one-step probability transition matrix. 
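Building the row-normalized matrix W can be sketched as follows (a toy undirected graph with self-loops; the weights are illustrative):

```python
import numpy as np

def transition_matrix(weights, n):
    """Row-normalized W with W[u, v] = w_uv / sum_{v' in N(u) + {u}} w_uv'.
    weights: dict (u, v) -> w_uv for an undirected graph with self-loops."""
    W = np.zeros((n, n))
    for (u, v), w in weights.items():
        W[u, v] = w
        W[v, u] = w      # undirected edges
    return W / W.sum(axis=1, keepdims=True)

# Toy 3-node path 0 - 1 - 2 with unit weights and unit self-loops.
w = {(0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0, (0, 1): 1.0, (1, 2): 1.0}
W = transition_matrix(w, 3)
print(W.sum(axis=1))   # each row sums to 1, as a transition matrix should
```

Powers of W (via np.linalg.matrix_power) then give the multi-step influence probabilities used next.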
Then for an initial seed set Z ⊆V , the probability of a node u being influenced under this model after one time step can be verified to be 1⊤ u W1X, where 1X ∈{0, 1}n is a column vector containing 1 in entries corresponding to nodes in X, and 0 everywhere else. Similarly, for calculating the probability of a node u being influenced after K time steps, one can use the K-step transition matrix: Fu(X) = 1⊤ u (WK)1X. Now setting b = (WK)⊤1u, we have Fu(X) = b⊤1X which is essentially a linear function parametrized by n weights. Thus learning influence in the Voter model (for fixed cascade length) can be posed as n independent linear regression (one per node) with n coefficients each. This can be solved in polynomial time even with partially observed data. We then have the following from standard results [20]. Theorem 6 (PAC learnability under Voter model). The class of influence functions under the Voter model is PAC learnable w.r.t. ℓsq in polynomial time and the sample complexity is eO ϵ−2n  . 6 Conclusion We have established PAC learnability of some of the most celebrated models of influence in social networks. Our results point towards interesting connections between learning theory and the literature on influence in networks. Beyond the practical implications of the ability to learn influence functions from cascades, the fact that the main models of influence are PAC learnable, serves as further evidence of their potent modeling capabilities. It would be interesting to see if our results extend to generalizations of the LT and IC models, and to investigate sample complexity lower bounds. Acknowledgements. Part of this work was carried out while HN was visiting Harvard as a part of a student visit under the Indo-US Joint Center for Advanced Research in Machine Learning, Game Theory & Optimization supported by the Indo-US Science & Technology Forum. HN thanks Kevin Murphy, Shivani Agarwal and Harish G. Ramaswamy for helpful discussions. 
YS and DP were supported by NSF grant CCF-1301976 and YS by CAREER CCF-1452961 and a Google Faculty Research Award.

References

[1] Pedro Domingos and Matthew Richardson. Mining the network value of customers. In KDD, 2001.
[2] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In KDD, 2003.
[3] Amit Goyal, Francesco Bonchi, and Laks V. S. Lakshmanan. Learning influence probabilities in social networks. In KDD, 2010.
[4] Manuel Gomez-Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, 2011.
[5] Nan Du, Le Song, Alexander J. Smola, and Ming Yuan. Learning networks of heterogeneous influence. In NIPS, 2012.
[6] Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and influence. ACM Transactions on Knowledge Discovery from Data, 5(4):21, 2012.
[7] Nan Du, Le Song, Manuel Gomez-Rodriguez, and Hongyuan Zha. Scalable influence estimation in continuous-time diffusion networks. In NIPS, 2013.
[8] Abir De, Sourangshu Bhattacharya, Parantapa Bhattacharya, Niloy Ganguly, and Soumen Chakrabarti. Learning a linear influence model from transient opinion dynamics. In CIKM, 2014.
[9] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In SIGMETRICS, 2012.
[10] Hadi Daneshmand, Manuel Gomez-Rodriguez, Le Song, and Bernhard Schölkopf. Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm. In ICML, 2014.
[11] Jean Pouget-Abadie and Thibaut Horel. Inferring graphs from cascades: A sparse recovery framework. In ICML, 2015.
[12] Bruno D. Abrahao, Flavio Chierichetti, Robert Kleinberg, and Alessandro Panconesi. Trace complexity of network inference. In KDD, 2013.
[13] Nan Du, Yingyu Liang, Maria-Florina Balcan, and Le Song. Influence function learning in information diffusion networks. In ICML, 2014.
[14] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[15] Elchanan Mossel and Sébastien Roch. On the submodularity of influence in social networks. In STOC, 2007.
[16] Eyal Even-Dar and Asaf Shapira. A note on maximizing the spread of influence in social networks. Information Processing Letters, 111(4):184–187, 2011.
[17] Maria-Florina Balcan and Nicholas J. A. Harvey. Learning submodular functions. In STOC, 2011.
[18] Vitaly Feldman and Pravesh Kothari. Learning coverage functions and private release of marginals. In COLT, 2014.
[19] Jean Honorio and Luis Ortiz. Learning the structure and parameters of large-population graphical games from behavioral data. Journal of Machine Learning Research, 16:1157–1210, 2015.
[20] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[21] Peter L. Bartlett and Wolfgang Maass. Vapnik-Chervonenkis dimension of neural nets. Handbook of Brain Theory and Neural Networks, pages 1188–1192, 1995.
[22] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32:56–134, 2004.
Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes Ryan Giordano UC Berkeley rgiordano@berkeley.edu Tamara Broderick MIT tbroderick@csail.mit.edu Michael Jordan UC Berkeley jordan@cs.berkeley.edu Abstract Mean field variational Bayes (MFVB) is a popular posterior approximation method due to its fast runtime on large-scale data sets. However, a well known major failing of MFVB is that it underestimates the uncertainty of model variables (sometimes severely) and provides no information about model variable covariance. We generalize linear response methods from statistical physics to deliver accurate uncertainty estimates for model variables—both for individual variables and coherently across variables. We call our method linear response variational Bayes (LRVB). When the MFVB posterior approximation is in the exponential family, LRVB has a simple, analytic form, even for non-conjugate models. Indeed, we make no assumptions about the form of the true posterior. We demonstrate the accuracy and scalability of our method on a range of models for both simulated and real data. 1 Introduction With increasingly efficient data collection methods, scientists are interested in quickly analyzing ever larger data sets. In particular, the promise of these large data sets is not simply to fit old models but instead to learn more nuanced patterns from data than has been possible in the past. In theory, the Bayesian paradigm yields exactly these desiderata. Hierarchical modeling allows practitioners to capture complex relationships between variables of interest. Moreover, Bayesian analysis allows practitioners to quantify the uncertainty in any model estimates—and to do so coherently across all of the model variables. Mean field variational Bayes (MFVB), a method for approximating a Bayesian posterior distribution, has grown in popularity due to its fast runtime on large-scale data sets [1–3]. 
But a well known major failing of MFVB is that it gives underestimates of the uncertainty of model variables that can be arbitrarily bad, even when approximating a simple multivariate Gaussian distribution [4– 6]. Also, MFVB provides no information about how the uncertainties in different model variables interact [5–8]. By generalizing linear response methods from statistical physics [9–12] to exponential family variational posteriors, we develop a methodology that augments MFVB to deliver accurate uncertainty estimates for model variables—both for individual variables and coherently across variables. In particular, as we elaborate in Section 2, when the approximating posterior in MFVB is in the exponential family, MFVB defines a fixed-point equation in the means of the approximating posterior, 1 and our approach yields a covariance estimate by perturbing this fixed point. We call our method linear response variational Bayes (LRVB). We provide a simple, intuitive formula for calculating the linear response correction by solving a linear system based on the MFVB solution (Section 2.2). We show how the sparsity of this system for many common statistical models may be exploited for scalable computation (Section 2.3). We demonstrate the wide applicability of LRVB by working through a diverse set of models to show that the LRVB covariance estimates are nearly identical to those produced by a Markov Chain Monte Carlo (MCMC) sampler, even when MFVB variance is dramatically underestimated (Section 3). Finally, we focus in more depth on models for finite mixtures of multivariate Gaussians (Section 3.3), which have historically been a sticking point for MFVB covariance estimates [5, 6]. We show that LRVB can give accurate covariance estimates orders of magnitude faster than MCMC (Section 3.3). 
We demonstrate both theoretically and empirically that, for this Gaussian mixture model, LRVB scales linearly in the number of data points and approximately cubically in the dimension of the parameter space (Section 3.4). Previous Work. Linear response methods originated in the statistical physics literature [10–13]. These methods have been applied to find new learning algorithms for Boltzmann machines [13], covariance estimates for discrete factor graphs [14], and independent component analysis [15]. [16] states that linear response methods could be applied to general exponential family models but works out details only for Boltzmann machines. [10], which is closest in spirit to the present work, derives general linear response corrections to variational approximations; indeed, the authors go further to formulate linear response as the first term in a functional Taylor expansion to calculate full pairwise joint marginals. However, it may not be obvious to the practitioner how to apply the general formulas of [10]. Our contributions in the present work are (1) the provision of concrete, straightforward formulas for covariance correction that are fast and easy to compute, (2) demonstrations of the success of our method on a wide range of new models, and (3) an accompanying suite of code. 2 Linear response covariance estimation 2.1 Variational Inference Suppose we observe N data points, denoted by the N-long column vector x, and denote our unobserved model parameters by θ. Here, θ is a column vector residing in some space Θ; it has J subgroups and total dimension D. Our model is specified by a distribution of the observed data given the model parameters—the likelihood p(x|θ)—and a prior distributional belief on the model parameters p(θ). Bayes’ Theorem yields the posterior p(θ|x). Mean-field variational Bayes (MFVB) approximates p(θ|x) by a factorized distribution of the form q(θ) = J j=1 q(θj). 
q is chosen so that the Kullback-Liebler divergence KL(q||p) between q and p is minimized. Equivalently, q is chosen so that E := L + S, for L := Eq[log p(θ|x)] (the expected log posterior) and S := −Eq[log q(θ)] (the entropy of the variational distribution), is maximized: q∗:= arg min q KL(q||p) = arg min q Eq [log q(θ) −log p(θ|x)] = arg max q E. (1) Up to a constant in θ, the objective E is sometimes called the “evidence lower bound”, or the ELBO [5]. In what follows, we further assume that our variational distribution, q (θ), is in the exponential family with natural parameter η and log partition function A: log q (θ|η) = ηT θ −A (η) (expressed with respect to some base measure in θ). We assume that p (θ|x) is expressed with respect to the same base measure in θ as for q. Below, we will make only mild regularity assumptions about the true posterior p(θ|x) and no assumptions about its form. If we assume additionally that the parameters η∗at the optimum q∗(θ) = q(θ|η∗) are in the interior of the feasible space, then q(θ|η) may instead be described by the mean parameterization: m := Eqθ 2 with m∗:= Eq∗θ. Thus, the objective E can be expressed as a function of m, and the first-order condition for the optimality of q∗becomes the fixed point equation ∂E ∂m  m=m∗= 0 ⇔  ∂E ∂m + m  m=m∗= m∗⇔M(m∗) = m∗for M(m) := ∂E ∂m + m. (2) 2.2 Linear Response Let V denote the covariance matrix of θ under the variational distribution q∗(θ), and let Σ denote the covariance matrix of θ under the true posterior, p(θ|x): V := Covq∗θ, Σ := Covpθ. In MFVB, V may be a poor estimator of Σ, even when m∗≈Epθ, i.e., when the marginal estimated means match well [5–7]. Our goal is to use the MFVB solution and linear response methods to construct an improved estimator for Σ. We will focus on the covariance of the natural sufficient statistic θ, though the covariance of functions of θ can be estimated similarly (see Appendix A). 
The essential idea of linear response is to perturb the first-order condition M(m∗) = m∗around its optimum. In particular, define the distribution pt (θ|x) as a log-linear perturbation of the posterior: log pt (θ|x) := log p (θ|x) + tT θ −C (t) , (3) where C (t) is a constant in θ. We assume that pt(θ|x) is a well-defined distribution for any t in an open ball around 0. Since C (t) normalizes pt(θ|x), it is in fact the cumulant-generating function of p(θ|x), so the derivatives of C (t) evaluated at t = 0 give the cumulants of θ. To see why this perturbation may be useful, recall that the second cumulant of a distribution is the covariance matrix, our desired estimand: Σ = Covp(θ) = d dtT dtC(t)  t=0 = d dtT Eptθ  t=0 . The practical success of MFVB relies on the fact that its estimates of the mean are often good in practice. So we assume that m∗ t ≈Eptθ, where m∗ t is the mean parameter characterizing q∗ t and q∗ t is the MFVB approximation to pt. (We examine this assumption further in Section 3.) Taking derivatives with respect to t on both sides of this mean approximation and setting t = 0 yields Σ = Covp(θ) ≈dm∗ t dtT  t=0 =: ˆΣ, (4) where we call ˆΣ the linear response variational Bayes (LRVB) estimate of the posterior covariance of θ. We next show that there exists a simple formula for ˆΣ. Recalling the form of the KL divergence (see Eq. (1)), we have that −KL(q||pt) = E+tT m =: Et. Then by Eq. (2), we have m∗ t = Mt(m∗ t ) for Mt(m) := M(m) + t. It follows from the chain rule that dm∗ t dt = ∂Mt ∂mT  m=m∗ t dm∗ t dt + ∂Mt ∂t = ∂Mt ∂mT  m=m∗ t dm∗ t dt + I, (5) where I is the identity matrix. If we assume that we are at a strict local optimum and so can invert the Hessian of E, then evaluating at t = 0 yields ˆΣ = dm∗ t dtT  t=0 = ∂M ∂m ˆΣ + I =  ∂2E ∂m∂mT + I  ˆΣ + I ⇒ ˆΣ = −  ∂2E ∂m∂mT −1 , (6) 3 where we have used the form for M in Eq. (2). 
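The definition in Eq. (4) can be checked numerically on a toy bivariate Gaussian posterior, where MFVB means are exact but the MFVB variances are not (a minimal sketch; the precision matrix, means, and step sizes are all illustrative, not from the paper):

```python
import numpy as np

# p(theta | x) = N(mu, Lambda^{-1}).  MFVB factorizes q = q1 q2, so its
# marginal variances 1/Lambda_ii ignore the correlation, but its means
# are exact; hence Sigma_hat = d m*_t / dt at t = 0 (Eq. (4)) recovers
# the full posterior covariance.
Lambda = np.array([[2.0, 1.6], [1.6, 2.0]])   # precision, strong correlation
mu = np.array([0.5, -0.5])

def mfvb_means(t, sweeps=200):
    """Coordinate-ascent MFVB means for the tilted posterior
    log p_t = log p + t^T theta - C(t) (Eq. (3))."""
    h = Lambda @ mu + t              # linear natural parameter of p_t
    m = np.zeros(2)
    for _ in range(sweeps):
        for i in range(2):
            j = 1 - i
            m[i] = (h[i] - Lambda[i, j] * m[j]) / Lambda[i, i]
    return m

# LRVB covariance via finite differences of the tilted MFVB means.
eps = 1e-5
Sigma_hat = np.column_stack([
    (mfvb_means(eps * np.eye(2)[k]) - mfvb_means(-eps * np.eye(2)[k])) / (2 * eps)
    for k in range(2)
])

Sigma_true = np.linalg.inv(Lambda)
mfvb_vars = 1.0 / np.diag(Lambda)    # plain MFVB marginal variances

print(Sigma_hat)                     # matches Sigma_true
print(mfvb_vars, np.diag(Sigma_true))  # MFVB underestimates the variances
```

In practice one would use the closed form of Eq. (6) rather than finite differences; the perturbation route is shown here only because it follows the definition directly.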
So the LRVB estimator Σ̂ is the negative inverse Hessian of the optimization objective, E, as a function of the mean parameters. It follows from Eq. (6) that Σ̂ is both symmetric and positive definite when the variational distribution q* is at least a local maximum of E. We can further simplify Eq. (6) by using the exponential family form of the variational approximating distribution q. For q in exponential family form as above, the negative entropy −S is dual to the log partition function A [17], so S = −ηᵀm + A(η); hence,

dS/dm = (∂S/∂ηᵀ)(dη/dm) + ∂S/∂m = (∂A/∂η − m)(dη/dm) − η(m) = −η(m).

Recall that for exponential families, ∂η(m)/∂m = V⁻¹. So Eq. (6) becomes¹

Σ̂ = −(∂²L/∂m∂mᵀ + ∂²S/∂m∂mᵀ)⁻¹ = −(H − V⁻¹)⁻¹,  for H := ∂²L/∂m∂mᵀ  ⇒  Σ̂ = (I − VH)⁻¹V.    (7)

When the true posterior p(θ|x) is in the exponential family and contains no products of the variational moment parameters, then H = 0 and Σ̂ = V. In this case, the mean field assumption is correct, and the LRVB and MFVB covariances coincide at the true posterior covariance. Furthermore, even when the variational assumptions fail, as long as certain mean parameters are estimated exactly, this formula is also exact for covariances. Notably, MFVB is well known to provide arbitrarily bad estimates of the covariance of a multivariate normal posterior [4–7], but since MFVB estimates the means exactly, LRVB estimates the covariance exactly (see Appendix B).

2.3 Scaling the matrix inverse

Eq. (7) requires the inverse of a matrix as large as the parameter dimension of the posterior p(θ|x), which may be computationally prohibitive. Suppose we are interested in the covariance of the parameter sub-vector α, and let z denote the remaining parameters: θ = (α, z)ᵀ. We can partition

Σ = (Σ_α, Σ_{αz}; Σ_{zα}, Σ_z).

Similar partitions exist for V and H. If we assume a mean-field factorization q(α, z) = q(α)q(z), then V_{αz} = 0. (The variational distributions may factor further as well.) We calculate the Schur complement of Σ̂ in Eq.
(7) with respect to its z-th component to find that

Σ̂_α = (I_α − V_αH_α − V_αH_{αz}(I_z − V_zH_z)⁻¹V_zH_{zα})⁻¹ V_α.    (8)

Here, I_α and I_z refer to α- and z-sized identity matrices, respectively. In cases where (I_z − V_zH_z)⁻¹ can be efficiently calculated (e.g., all the experiments in Section 3; see Fig. (5) in Appendix D), Eq. (8) requires only an α-sized inverse.

3 Experiments

We compare the covariance estimates from LRVB and MFVB in a range of models, including models both with and without conjugacy.² We demonstrate the superiority of the LRVB estimate to MFVB in all models before focusing on Gaussian mixture models for a more detailed scalability analysis. For each model, we simulate datasets with a range of parameters. In the graphs, each point represents the outcome from a single simulation.

¹ For a comparison of this formula with the frequentist "supplemented expectation-maximization" procedure, see Appendix C.
² All the code is available in our GitHub repository, rgiordan/LinearResponseVariationalBayesNIPS2015.

The horizontal axis is always the result from an MCMC
procedure, which we take as the ground truth. As discussed in Section 2.2, the accuracy of the LRVB covariance for a sufficient statistic depends on the approximation m*_t ≈ E_{p_t} θ. In the models to follow, we focus on regimes of moderate dependence where this is a reasonable assumption for most of the parameters (see Section 3.2 for an exception). Except where explicitly mentioned, the MFVB means of the parameters of interest coincided well with the MCMC means, so our key assumption in the LRVB derivations of Section 2 appears to hold.

3.1 Normal-Poisson model

Model. First consider a Poisson generalized linear mixed model, exhibiting non-conjugacy. We observe Poisson draws y_n and a design vector x_n, for n = 1, ..., N. Implicitly below, we will everywhere condition on the x_n, which we consider to be a fixed design matrix. The generative model is:

z_n | β, τ  ~indep  N(z_n | βx_n, τ⁻¹),   y_n | z_n  ~indep  Poisson(y_n | exp(z_n)),    (9)
β ~ N(β | 0, σ²_β),   τ ~ Γ(τ | α_τ, β_τ).

For MFVB, we factorize q(β, τ, z) = q(β) q(τ) ∏_{n=1}^N q(z_n). Inspection reveals that the optimal q(β) will be Gaussian, and the optimal q(τ) will be gamma (see Appendix D). Since the optimal q(z_n) does not take a standard exponential family form, we restrict it further to be Gaussian. There are product terms in L (for example, the term E_q[τ] E_q[β] E_q[z_n]), so H ≠ 0, and the mean field approximation does not hold; we expect LRVB to improve on the MFVB covariance estimate. A detailed description of how to calculate the LRVB estimate can be found in Appendix D.

Results. We simulated 100 datasets, each with 500 data points and a randomly chosen value for β and τ. We drew the design matrix x from a normal distribution and held it fixed throughout. We set prior hyperparameters σ²_β = 10, α_τ = 1, and β_τ = 1. To get the "ground truth" covariance matrix, we took 20000 draws from the posterior with the R MCMCglmm package [18], which used a combination of Gibbs and Metropolis–Hastings sampling. Our LRVB estimates used the autodifferentiation software JuMP [19]. Results are shown in Fig. (1). Since τ is high in many of the simulations, z and β are correlated, and MFVB underestimates the standard deviation of β and τ. LRVB matches the MCMC standard deviation for all β, and matches for τ in all but the most correlated simulations. When τ gets very high, the MFVB assumption starts to bias the point estimates of τ, and the LRVB standard deviations start to differ from MCMC. Even in that case, however, the LRVB standard deviations are much more accurate than the MFVB estimates, which underestimate the uncertainty dramatically. The final plot shows that LRVB estimates the covariances of z with β, τ, and log τ reasonably well, while MFVB considers them independent.
Figure 1: Posterior mean and covariance estimates on normal-Poisson simulation data.

3.2 Linear random effects

Model. Next, we consider a simple random slope linear model, with full details in Appendix E. We observe scalars y_n and r_n and a vector x_n, for n = 1, ..., N. Implicitly below, we will everywhere condition on all the x_n and r_n, which we consider to be fixed design matrices. In general, each random effect may appear in multiple observations, and the index k(n) indicates which random effect, z_k, affects which observation, y_n. The full generative model is:

y_n | β, z, τ  ~indep  N(y_n | βᵀx_n + r_n z_{k(n)}, τ⁻¹),   z_k | ν  ~iid  N(z_k | 0, ν⁻¹),
β ~ N(β | 0, Σ_β),   ν ~ Γ(ν | α_ν, β_ν),   τ ~ Γ(τ | α_τ, β_τ).

We assume the mean-field factorization q(β, ν, τ, z) = q(β) q(τ) q(ν) ∏_{k=1}^K q(z_k). Since this is a conjugate model, the optimal q will be in the exponential family with no additional assumptions.

Results. We simulated 100 datasets of 300 datapoints each and 30 distinct random effects. We set prior hyperparameters to α_ν = 2, β_ν = 2, α_τ = 2, β_τ = 2, and Σ_β = 0.1⁻¹I. Our x_n was 2-dimensional. As in Section 3.1, we implemented the variational solution using the autodifferentiation software JuMP [19]. The MCMC fit was performed using MCMCglmm [18]. Intuitively, when the random effect explanatory variables r_n are highly correlated with the fixed effects x_n, then the posteriors for z and β will also be correlated, leading to a violation of the mean field assumption and an underestimated MFVB covariance. In our simulation, we used r_n = x_{1n} + N(0, 0.4), so that r_n is correlated with x_{1n} but not x_{2n}. The result, as seen in Fig. (2), is that β₁ is underestimated by MFVB, but β₂ is not. The ν parameter, in contrast, is not well-estimated by the MFVB approximation in many of the simulations. Since LRVB depends on the approximation m*_t ≈ E_{p_t} θ, its covariance estimate is not accurate either (Fig. (2)). However, LRVB still improves on the MFVB standard deviation.
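The partitioned inverse of Eq. (8), which keeps models like these tractable, is an algebraic identity whenever the mean-field factorization gives V_{αz} = 0. A numerical check with arbitrary illustrative blocks (not tied to any model in this section):

```python
import numpy as np

rng = np.random.default_rng(0)
na, nz = 3, 5
n = na + nz

# Block-diagonal V (mean-field: V_az = 0) and an arbitrary symmetric H,
# scaled small enough that I - V H is comfortably invertible.
Va = np.eye(na) * 0.5
Vz = np.eye(nz) * 0.2
V = np.block([[Va, np.zeros((na, nz))],
              [np.zeros((nz, na)), Vz]])
H = rng.normal(size=(n, n)) * 0.1
H = (H + H.T) / 2
Ha, Haz = H[:na, :na], H[:na, na:]
Hza, Hz = H[na:, :na], H[na:, na:]

# Full LRVB covariance, Eq. (7), and its alpha block.
Sigma_full = np.linalg.solve(np.eye(n) - V @ H, V)
Sigma_a_full = Sigma_full[:na, :na]

# Partitioned computation, Eq. (8): only alpha- and z-sized solves.
inner = np.linalg.solve(np.eye(nz) - Vz @ Hz, Vz @ Hza)
Sigma_a = np.linalg.solve(np.eye(na) - Va @ Ha - Va @ Haz @ inner, Va)

assert np.allclose(Sigma_a, Sigma_a_full)
```

In the GMM of Section 3.4, z is the O(KN)-dimensional indicator block, so replacing the full inverse by the inner z-sized solve is what yields the linear-in-N scaling.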
Figure 2: Posterior mean and covariance estimates on linear random effects simulation data.

3.3 Mixture of normals

Model. Mixture models constitute some of the most popular models for MFVB application [1, 2] and are often used as an example of where MFVB covariance estimates may go awry [5, 6]. Thus, we will consider in detail a Gaussian mixture model (GMM) consisting of a K-component mixture of P-dimensional multivariate normals with unknown component means, covariances, and weights. In what follows, the weight π_k is the probability of the k-th component, μ_k is the P-dimensional mean of the k-th component, and Λ_k is the P × P precision matrix of the k-th component (so Λ_k⁻¹ is the covariance parameter). N is the number of data points, and x_n is the n-th observed P-dimensional data point. We employ the standard trick of augmenting the data generating process with the latent indicator variables z_{nk}, for n = 1, ..., N and k = 1, ..., K, such that z_{nk} = 1 implies x_n ~ N(μ_k, Λ_k⁻¹). So the generative model is:

P(z_{nk} = 1) = π_k,   p(x | π, μ, Λ, z) = ∏_{n=1}^N ∏_{k=1}^K N(x_n | μ_k, Λ_k⁻¹)^{z_{nk}}    (10)

We used diffuse conditionally conjugate priors (see Appendix F for details). We make the variational assumption q(μ, π, Λ, z) = ∏_{k=1}^K q(μ_k) q(Λ_k) q(π_k) ∏_{n=1}^N q(z_n). We compare the accuracy and speed of our estimates to Gibbs sampling on the augmented model (Eq. (10)) using the function rnmixGibbs from the R package bayesm. We implemented LRVB in C++, making extensive use of RcppEigen [20]. We evaluate our results both on simulated data and on the MNIST data set [21].

Results. For simulations, we generated N = 10000 data points from K = 2 multivariate normal components in P = 2 dimensions. MFVB is expected to underestimate the marginal variance of μ, Λ, and log(π) when the components overlap, since overlap induces correlation in the posteriors due to the uncertain classification of points between the clusters. We check the covariances estimated with Eq.
(7) against a Gibbs sampler, which we treat as the ground truth.³ We performed 198 simulations, each of which had at least 500 effective Gibbs samples in each variable, calculated with the R tool effectiveSize from the coda package [22]. The first three plots show the diagonal standard deviations, and the fourth plot shows the off-diagonal covariances. Note that the off-diagonal covariance plot excludes the MFVB estimates since most of the values are zero. Fig. (3) shows that the raw MFVB covariance estimates are often quite different from the Gibbs sampler results, while the LRVB estimates match the Gibbs sampler closely. For a real-world example, we fit a K = 2 GMM to the N = 12665 instances of handwritten 0s and 1s in the MNIST data set. We used PCA to reduce the pixel intensities to P = 25 dimensions. Full details are provided in Appendix G. In this MNIST analysis, the Λ standard deviations were underestimated by MFVB but correctly estimated by LRVB (Fig. (3)); the other parameter standard deviations were estimated correctly by both and are not shown.

Figure 3: Posterior mean and covariance estimates on GMM simulation and MNIST data.

3.4 Scaling experiments

We here explore the computational scaling of LRVB in more depth for the finite Gaussian mixture model (Section 3.3). In the terms of Section 2.3, α includes the sufficient statistics from μ, π, and Λ, and grows as O(KP²). The sufficient statistics for the variational posterior of μ contain the P-length vectors μ_k, for each k, and the (P + 1)P/2 second-order products in the covariance matrix μ_kμ_kᵀ. Similarly, for each k, the variational posterior of Λ involves the (P + 1)P/2 sufficient statistics in the symmetric matrix Λ_k as well as the term log |Λ_k|. The sufficient statistics for the posterior of π are the K terms log π_k.⁴ So, minimally, Eq. (7) will require the inverse of a matrix of size

³ The likelihood described in Section 3.3 is symmetric under relabeling.
When the component locations and shapes have a real-life interpretation, the researcher is generally interested in the uncertainty of μ, Λ, and π for a particular labeling, not the marginal uncertainty over all possible relabelings. This poses a problem for standard MCMC methods, and we restrict our simulations to regimes where label switching did not occur in our Gibbs sampler. The MFVB solution conveniently avoids this problem, since the mean field assumption prevents it from representing more than one mode of the joint posterior.

⁴ Since ∑_{k=1}^K π_k = 1, using K sufficient statistics involves one redundant parameter. However, this does not violate any of the necessary assumptions for Eq. (7), and it considerably simplifies the calculations. Note that though the perturbation argument of Section 2 requires the parameters of p(θ|x) to be in the interior of the feasible space, it does not require that the parameters of p(x|θ) be interior.

O(KP²). The sufficient statistics for z have dimension K × N. Though the number of parameters thus grows with the number of data points, H_z = 0 for the multivariate normal (see Appendix F), so we can apply Eq. (8) to replace the inverse of an O(KN)-sized matrix with multiplication by the same matrix. Since a matrix inverse is cubic in the size of the matrix, the worst-case scaling for LRVB is then O(K²) in K, O(P⁶) in P, and O(N) in N. In our simulations (Fig. (4)) we can see that, in practice, LRVB scales linearly⁵ in N and approximately cubically in P across the dimensions considered.⁶ The P scaling is presumably better than the theoretical worst case of O(P⁶) due to extra efficiency in the numerical linear algebra. Note that the vertical axis of the leftmost plot is on the log scale. At all the values of N, K and P considered here, LRVB was at least as fast as Gibbs sampling and often orders of magnitude faster.

Figure 4: Scaling of LRVB and Gibbs on simulation data in both log and linear scales.
Before taking logs, the line in the two left-hand (N) graphs is y ∝ x, and in the right-hand (P) graph it is y ∝ x³.

4 Conclusion

The lack of accurate covariance estimates from the widely used mean-field variational Bayes (MFVB) methodology has been a long-standing shortcoming of MFVB. We have demonstrated that in sparse models, our method, linear response variational Bayes (LRVB), can correct MFVB to deliver these covariance estimates in time that scales linearly with the number of data points. Furthermore, we provide an easy-to-use formula for applying LRVB to a wide range of inference problems. Our experiments on a diverse set of models have demonstrated the efficacy of LRVB, and our detailed study of scaling of mixtures of multivariate Gaussians shows that LRVB can be considerably faster than traditional MCMC methods. We hope that in future work our results can be extended to more complex models, including Bayesian nonparametric models, where MFVB has proven its practical success.

Acknowledgments. The authors thank Alex Blocker for helpful comments. R. Giordano and T. Broderick were funded by Berkeley Fellowships.

⁵ The Gibbs sampling time was linearly rescaled to the amount of time necessary to achieve 1000 effective samples in the slowest-mixing component of any parameter. Interestingly, this rescaling leads to increasing efficiency in the Gibbs sampling at low P due to improved mixing, though the benefits cease to accrue at moderate dimensions.
⁶ For numeric stability we started the optimization procedures for MFVB at the true values, so the time to compute the optimum in our simulations was very fast and not representative of practice. On real data, the optimization time will depend on the quality of the starting point. Consequently, the times shown for LRVB are only the times to compute the LRVB estimate. The optimization times were on the same order.

References

[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation.
Journal of Machine Learning Research, 3:993–1022, 2003. [2] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143, 2006. [3] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013. [4] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003. Chapter 33. [5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006. Chapter 10. [6] R. E. Turner and M. Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, A. T. Cemgil, and S. Chiappa, editors, Bayesian Time Series Models. 2011. [7] B. Wang and M. Titterington. Inadequacy of interval estimates corresponding to variational Bayesian approximations. In Workshop on Artificial Intelligence and Statistics, pages 373–380, 2004. [8] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society: Series B (statistical methodology), 71(2):319–392, 2009. [9] G. Parisi. Statistical Field Theory, volume 4. Addison-Wesley New York, 1988. [10] M. Opper and O. Winther. Variational linear response. In Advances in Neural Information Processing Systems, 2003. [11] M. Opper and D. Saad. Advanced mean field methods: Theory and practice. MIT press, 2001. [12] T. Tanaka. Information geometry of mean-field approximation. Neural Computation, 12(8):1951–1968, 2000. [13] H. J. Kappen and F. B. Rodriguez. Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10(5):1137–1156, 1998. [14] M. Welling and Y. W. Teh. Linear response algorithms for approximate inference in graphical models. Neural Computation, 16(1):197–221, 2004. [15] P. A. d. F. R. Højen-Sørensen, O. Winther, and L. K. Hansen. 
Mean-field approaches to independent component analysis. Neural Computation, 14(4):889–918, 2002. [16] T. Tanaka. Mean-field theory of Boltzmann machine learning. Physical Review E, 58(2):2302, 1998. [17] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008. [18] J. D. Hadfield. MCMC methods for multi-response generalized linear mixed models: The MCMCglmm R package. Journal of Statistical Software, 33(2):1–22, 2010. [19] M. Lubin and I. Dunning. Computing in operations research using Julia. INFORMS Journal on Computing, 27(2):238–248, 2015. [20] D. Bates and D. Eddelbuettel. Fast and elegant numerical linear algebra using the RcppEigen package. Journal of Statistical Software, 52(5):1–24, 2013. [21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [22] M. Plummer, N. Best, K. Cowles, and K. Vines. CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1):7–11, 2006. [23] X. L. Meng and D. B. Rubin. Using EM to obtain asymptotic variance-covariance matrices: The SEM algorithm. Journal of the American Statistical Association, 86(416):899–909, 1991. [24] A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.
Weighted Theta Functions and Embeddings with Applications to Max-Cut, Clustering and Summarization

Fredrik D. Johansson, Computer Science & Engineering, Chalmers University of Technology, Göteborg, SE-412 96, Sweden, frejohk@chalmers.se
Ankani Chattoraj∗, Brain & Cognitive Sciences, University of Rochester, Rochester, NY 14627-0268, USA, achattor@ur.rochester.edu
Chiranjib Bhattacharyya, Computer Science and Automation, Indian Institute of Science, Bangalore 560012, Karnataka, India, chiru@csa.iisc.ernet.in
Devdatt Dubhashi, Computer Science & Engineering, Chalmers University of Technology, Göteborg, SE-412 96, Sweden, dubhashi@chalmers.se

Abstract

We introduce a unifying generalization of the Lovász theta function, and the associated geometric embedding, for graphs with weights on both nodes and edges. We show how it can be computed exactly by semidefinite programming, and how to approximate it using SVM computations. We show how the theta function can be interpreted as a measure of diversity in graphs and use this idea, and the graph embedding, in algorithms for Max-Cut, correlation clustering and document summarization, all of which are well represented as problems on weighted graphs.

1 Introduction

Embedding structured data, such as graphs, in geometric spaces is a central problem in machine learning. In many applications, graphs are attributed with weights on the nodes and edges, information that needs to be well represented by the embedding. Lovász introduced a graph embedding together with the famous theta function in the seminal paper [19], giving his celebrated solution to the problem of computing the Shannon capacity of the pentagon. Indeed, Lovász's embedding is a very elegant and powerful representation of unweighted graphs that has come to play a central role in information theory, graph theory and combinatorial optimization [10, 8].
However, despite there being at least eight different formulations of ϑ(G) for unweighted graphs, see for example [20], there does not appear to be a version that applies to graphs with weights on the edges. This is surprising, as such weights have a natural interpretation in the information-theoretic problem of the original definition [19]. A version of the Lovász number for edge-weighted graphs, and a corresponding geometrical representation, could open the way to new approaches to learning problems on data represented as similarity matrices. Here we propose such a generalization for graphs with weights on both nodes and edges, by combining a few key observations. Recently, Jethava et al. [14] discovered an interesting connection between the original theta function and a central problem in machine learning, namely the one-class Support Vector Machine (SVM) formulation [14]. This kernel-based method gives yet another equivalent characterization of the Lovász number. Crucially, it is easily modified to yield an equivalent characterization of the closely related Delsarte version of the Lovász number introduced by Schrijver [24], which is more flexible and often more convenient to work with. Using this kernel characterization of the Delsarte version of the Lovász number, we define a theta function and embedding of weighted graphs, suitable for learning with data represented as similarity matrices. The original theta function is limited to applications on small graphs because of its formulation as a semidefinite program (SDP). In [14], Jethava et al. showed that their kernel characterization can be used to compute a number and an embedding of a graph that are often good approximations to the theta function and embedding, and that can be computed fast, scaling to very large graphs. Here we give the analogous approximate method for weighted graphs.

∗ This work was performed when the author was affiliated with CSE at Chalmers University of Technology.
We use this approximation to solve the weighted maximum cut problem faster than the classical SDP relaxation. Finally, we show that our edge-weighted theta function has a natural interpretation as a measure of diversity in graphs. We use this intuition to define a centroid-based correlation clustering algorithm that automatically chooses the number of clusters and initializes the centroids. We also show how to use the support vectors, computed in the kernel characterization with both node and edge weights, to perform extractive document summarization. To summarize our main contributions:

• We introduce a unifying generalization of the famous Lovász number applicable to graphs with weights on both nodes and edges.
• We show that via our characterization, we can compute a good approximation to our weighted theta function and the corresponding embeddings using SVM computations.
• We show that the weighted version of the Lovász number can be interpreted as a measure of diversity in graphs, and we use this to define a correlation clustering algorithm dubbed ϑ-means that automatically a) chooses the number of clusters, and b) initializes centroids.
• We apply the embeddings corresponding to the weighted Lovász numbers to solve weighted maximum cut problems faster than the classical SDP methods, with similar accuracy.
• We apply the weighted kernel characterization of the theta function to document summarization, exploiting both node and edge weights.

2 Extensions of Lovász and Delsarte numbers for weighted graphs

Background. Consider embeddings of undirected graphs G = (V, E). Lovász introduced an elegant embedding, implicit in the definition of his celebrated theta function ϑ(G) [19], famously an upper bound on the Shannon capacity and sandwiched between the independence number and the chromatic number of the complement graph:

ϑ(G) = min_{{u_i}, c} max_i 1/(cᵀu_i)²,   u_iᵀu_j = 0 ∀(i, j) ∉ E,   ‖u_i‖ = ‖c‖ = 1.
(1)

The vectors {u_i}, c are so-called orthonormal representations or labellings, the dimension of which is determined by the optimization. We refer to both {u_i} and the matrix U = [u_1, ..., u_n] as an embedding of G, and use the two notations interchangeably. Jethava et al. [14] introduced a characterization of the Lovász ϑ function that established a close connection with the one-class support vector machine [23]. They showed that, for an unweighted graph G = (V, E),

ϑ(G) = min_{K ∈ K(G)} ω(K), where    (2)
K(G) := {K ⪰ 0 | K_ii = 1, K_ij = 0, ∀(i, j) ∉ E},    (3)
ω(K) := max_{α_i ≥ 0} f(α; K),   f(α; K) := 2 ∑_i α_i − ∑_{i,j} K_ij α_i α_j    (4)

is the dual formulation of the one-class SVM problem; see [16]. Note that the conditions on K only refer to the non-edges of G. In the sequel, ω(K) and f(α; K) always refer to the definitions in (4).

2.1 New weighted versions of ϑ(G)

A key observation in proving (2) is that the set of valid orthonormal representations is equivalent to the set of kernels K. This equivalence can be preserved in a natural way when generalizing the definition to weighted graphs: any constraint on the inner product u_iᵀu_j may be represented as a constraint on the element K_ij of the kernel matrix. To define weighted extensions of the theta function, we need to first pass to the closely related Delsarte version of the Lovász number introduced by Schrijver [24]. In the Delsarte version, the orthogonality constraint for non-edges is relaxed to u_iᵀu_j ≤ 0, (i, j) ∉ E. With reference to the formulation (2), it is easy to observe that the Delsarte version is given by

ϑ₁(G) = min_{K ∈ K₁(G)} ω(K), where K₁(G) := {K ⪰ 0 | K_ii = 1, K_ij ≤ 0, ∀(i, j) ∉ E}    (5)

In other words, the Lovász number corresponds to orthogonal labellings of G, with orthogonal vectors on the unit sphere assigned to non-adjacent nodes, whereas the Delsarte version corresponds to obtuse labellings, i.e. the vectors corresponding to non-adjacent nodes are vectors on the unit sphere meeting at obtuse angles.
In both cases, the corresponding number is essentially the half-angle of the smallest spherical cap containing all the vectors assigned to the nodes. Comparing (2) and (5), it follows that ϑ₁(G) ≤ ϑ(G). In the sequel, we will use the Delsarte version and obtuse labellings to define weighted generalizations of the theta function. We observe in passing that, for any K ∈ K₁ and for any independent set I in the graph, taking α_i = 1 if i ∈ I and 0 otherwise,

ω(K) ≥ 2 ∑_i α_i − ∑_{i,j} α_i α_j K_ij = ∑_i α_i − ∑_{i≠j} α_i α_j K_ij ≥ ∑_i α_i = |I|    (6)

since for each term in the second sum, either (i, j) is an edge, in which case either α_i or α_j is zero, or (i, j) is a non-edge, in which case K_ij ≤ 0. Thus, like ϑ(G), the Delsarte version ϑ₁(G) is also an upper bound on the stability or independence number α(G).

Kernel characterization of theta functions on node-weighted graphs. The Lovász number has a classical extension to graphs with node weights σ = [σ₁, ..., σ_n]ᵀ; see for example [17]. The generalization, in the Delsarte version (note the inequality constraint), is the following:

ϑ(G, σ) = min_{{u_i}, c} max_i σ_i/(cᵀu_i)²,   u_iᵀu_j ≤ 0, ∀(i, j) ∉ E,   ‖u_i‖ = ‖c‖ = 1.    (7)

By passing to the dual of (7), see Section 2.1 and [16], we may, as for unweighted graphs, characterize ϑ(G, σ) by a minimization over the set of kernels

K(G, σ) := {K ⪰ 0 | K_ii = 1/σ_i, K_ij ≤ 0, ∀(i, j) ∉ E}    (8)

and, just as in the unweighted case, ϑ₁(G, σ) = min_{K ∈ K(G,σ)} ω(K). When σ_i = 1 ∀i ∈ V, this reduces to the unweighted case. We also note that for any K ∈ K(G, σ) and for any independent set I in the graph, taking α_i = σ_i if i ∈ I and 0 otherwise,

ω(K) ≥ 2 ∑_i α_i − ∑_{i,j} α_i α_j K_ij = 2 ∑_{i∈I} σ_i − ∑_{i∈I} σ_i²/σ_i − ∑_{i≠j} α_i α_j K_ij ≥ ∑_{i∈I} σ_i,    (9)

since K_ij ≤ 0 ∀(i, j) ∉ E. Thus, ϑ₁(G, σ) = min_{K∈K(G,σ)} ω(K) is an upper bound on the weight of the maximum-weight independent set.
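Since ω(K) in (4) is just a box-constrained concave quadratic program, these bounds are easy to evaluate numerically for small graphs. A minimal sketch with a generic solver (not the SVM machinery of [14]) on the 5-cycle, using the feasible kernel K = A/|λ_n(A)| + I (the LS-labelling discussed in Section 3), which here recovers Lovász's pentagon value √5:

```python
import numpy as np
from scipy.optimize import minimize

# 5-cycle C5: the Lovasz number of the pentagon is sqrt(5).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# A feasible kernel for K(G): K_ii = 1 and K_ij = 0 on non-edges.
lam_min = np.linalg.eigvalsh(A)[0]          # most negative eigenvalue
K = A / abs(lam_min) + np.eye(n)

# omega(K) = max_{alpha >= 0} 2*sum(alpha) - alpha' K alpha, Eq. (4):
# a box-constrained concave QP (K is PSD by construction).
neg_f = lambda a: -(2 * a.sum() - a @ K @ a)
res = minimize(neg_f, x0=np.full(n, 0.5), jac=lambda a: -(2 - 2 * K @ a),
               bounds=[(0, None)] * n)
omega = -res.fun

assert abs(omega - np.sqrt(5)) < 1e-3   # matches vartheta(C5) = sqrt(5)
```

For a general graph, ω(K) of any single feasible K only upper-bounds ϑ₁(G); it is the minimization over K in (2) and (5) that gives the theta function itself.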
Extension to edge-weighted graphs. The kernel characterization of ϑ₁(G) allows one to define a natural extension to data given as similarity matrices, represented in the form of a weighted graph G = (V, S). Here, S is a similarity function on (unordered) node pairs, and S(i, j) ∈ [0, 1], with 1 representing complete similarity and 0 complete dissimilarity. The obtuse labellings corresponding to the Delsarte version are somewhat more flexible even for unweighted graphs, but are particularly well suited for weighted graphs. We define

ϑ₁(G, S) := min_{K ∈ K(G,S)} ω(K), where K(G, S) := {K ⪰ 0 | K_ii = 1, K_ij ≤ S_ij}    (10)

In the case of an unweighted graph, where S_ij ∈ {0, 1}, this reduces exactly to (5).

Table 1: Characterizations of weighted theta functions. In the first row are characterizations following the original definition. In the second are kernel characterizations. The bottom row are versions of the LS-labelling [14]. In all cases, ‖u_i‖ = ‖c‖ = 1. A refers to the adjacency matrix of G.

Unweighted:
  min_{{u_i}, c} max_i 1/(cᵀu_i)²,  u_iᵀu_j ≤ 0, ∀(i, j) ∉ E
  K_G = {K ⪰ 0 | K_ii = 1, K_ij = 0, ∀(i, j) ∉ E}
  K_LS = A/|λ_n(A)| + I
Node-weighted:
  min_{{u_i}, c} max_i σ_i/(cᵀu_i)²,  u_iᵀu_j = 0, ∀(i, j) ∉ E
  K_{G,σ} = {K ⪰ 0 | K_ii = 1/σ_i, K_ij = 0, ∀(i, j) ∉ E}
  K^σ_LS = A/(σ_max|λ_n(A)|) + diag(σ)⁻¹
Edge-weighted:
  min_{{u_i}, c} max_i 1/(cᵀu_i)²,  u_iᵀu_j ≤ S_ij, i ≠ j
  K_{G,S} = {K ⪰ 0 | K_ii = 1, K_ij ≤ S_ij, i ≠ j}
  K^S_LS = S/|λ_n(S)| + I

Unifying weighted generalization. We may now combine both node and edge weights to form a fully general extension of the Delsarte version of the Lovász number:

ϑ₁(G, σ, S) = min_{K ∈ K(G,σ,S)} ω(K),   K(G, σ, S) := {K ⪰ 0 | K_ii = 1/σ_i, K_ij ≤ S_ij/√(σ_iσ_j)}    (11)

It is easy to see that for unweighted graphs, S_ij ∈ {0, 1}, σ_i = 1, the definition reduces to the Delsarte version of the theta function in (5). ϑ₁(G, σ, S) is hence a strict generalization of ϑ₁(G). All the proposed weighted extensions are defined by the same objective, ω(K).
The only difference is the set K, specialized in various ways, over which the minimum, min_{K∈K} ω(K), is computed. It is also important to note that with the generalization of the theta function comes an implicit generalization of the geometric representation of G. Specifically, for any feasible K in (11), there is an embedding U = [u_1, ..., u_n] such that K = UᵀU, with the properties u_iᵀu_j √(σ_iσ_j) ≤ S_ij and ‖u_i‖₂ = 1/√σ_i, which can be retrieved using matrix decomposition. Note that u_iᵀu_j √(σ_iσ_j) is exactly the cosine similarity between u_i and u_j, which is a very natural choice when S_ij ∈ [0, 1]. The original definition of the (Delsarte) theta function and its extensions, as well as their kernel characterizations, can be seen in Table 1. We can prove the equivalence of the embedding (top) and kernel characterizations (middle) using the following result.

Proposition 2.1. For any embedding U ∈ R^{d×n} with K = UᵀU, and f in (4), the following holds:

min_{c ∈ S^{d−1}} max_i 1/(cᵀu_i)² = max_{α_i ≥ 0} f(α; K).    (12)

Proof. The result is given as part of the proof of Theorem 3 in Jethava et al. [14]. See also [16].

As we have already established in Section 2 that any set of geometric embeddings has a characterization as a set of kernel matrices, it follows that minimizing the LHS in (12) over a (constrained) set of orthogonal representations {u_i} is equivalent to minimizing the RHS over a kernel set K.

3 Computation and fixed-kernel approximation

The weighted generalization of the theta function, ϑ₁(G, σ, S), defined in the previous section, may be computed as a semidefinite program. In fact, ϑ₁(G, σ, S) = 1/(t*)² for t* the solution to the following problem (for details, see the supplementary material [16]):

maximize_{X, t}  t
subject to  X ⪰ 0,  X ∈ R^{(n+1)×(n+1)},  X_{n+1,n+1} = 1,
            X_{i,n+1} ≥ t,  X_ii = 1/σ_i,  i ∈ [n],
            X_ij ≤ S_ij/√(σ_iσ_j),  i ≠ j,  i, j ∈ [n].    (13)

While polynomial in time complexity [13], solving the SDP is too slow in many cases. To address this, Jethava et al.
[14] introduced a fast approximation to (the unweighted) ϑ(G), dubbed SVM-theta. They showed that in some cases, the minimization over K in (2) can be replaced by a fixed choice of K, while causing only a constant-factor error. Specifically, for unweighted graphs with adjacency matrix A, Jethava et al. [14] defined the so-called LS-labelling, K_LS(G) = A/|λ_n(A)| + I, and showed that for large families of graphs ϑ(G) ≤ ω(K_LS(G)) ≤ γϑ(G) for a constant γ. We extend the LS-labelling to weighted graphs. For graphs with edge weights, represented by a similarity matrix S, the original definition may be used, with S substituted for A. For node-weighted graphs we must also satisfy the constraint K_ii = 1/σ_i, see (8). A natural choice, still ensuring positive semidefiniteness, is

K_LS(G, σ) = A/(σ_max|λ_n(A)|) + diag(σ)⁻¹   (14)

where diag(σ)⁻¹ is the diagonal matrix Σ with elements Σ_ii = 1/σ_i, and σ_max = max_{i=1}^n σ_i. Both weighted versions of the LS-labelling are presented in Table 1. The fully generalized labelling, for graphs with weights on both nodes and edges, K_LS(G, σ, S), can be obtained by substituting S for A in (14). As with the exact characterization, we note that K_LS(G, σ, S) reduces to K_LS(G) in the uniform case, S_ij ∈ {0, 1}, σ_i = 1. For all versions of the LS-labelling of G, as with the exact characterization, a geometric embedding U may be obtained from K_LS using matrix decomposition.

3.1 Computational complexity

Solving the full problem in the kernel characterization (11) is not faster than computing the SDP characterization (13). However, for a fixed K, the one-class SVM can be solved in O(n²) time [12]. Retrieving the embedding U : K = UᵀU may be done using Cholesky or singular value decomposition (SVD). In general, algorithms for these problems have complexity O(n³). However, in many cases a rank-d approximation to the decomposition is sufficient; see for example [9].
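The LS-labellings in (14) are one-liners to construct. A minimal NumPy sketch (the function name is ours; positive semidefiniteness follows because the first term has smallest eigenvalue −1/σ_max while the diagonal term contributes at least 1/σ_max):

```python
import numpy as np

def ls_labelling(A, sigma=None):
    """Weighted LS-labelling K_LS(G, σ) = A / (σ_max |λ_n(A)|) + diag(σ)^{-1},
    reducing to A/|λ_n(A)| + I when σ_i = 1 (eq. (14))."""
    n = A.shape[0]
    sigma = np.ones(n) if sigma is None else np.asarray(sigma, dtype=float)
    lam_min = np.linalg.eigvalsh(A)[0]   # most negative eigenvalue of A
    return A / (sigma.max() * abs(lam_min)) + np.diag(1.0 / sigma)

# 4-cycle, unweighted: λ_n(A) = -2, so K_LS = A/2 + I
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
K = ls_labelling(A)
print(np.linalg.eigvalsh(K)[0] >= -1e-9)  # → True (K is PSD by construction)
```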
A thin (or truncated) SVD corresponding to the top d singular values may be computed in O(n²d) time [5], for d = O(√n). The remaining issue is the computation of K. The complexity of computing the LS-labelling discussed in the previous section is dominated by the computation of the minimum eigenvalue λ_n(A). This can be done approximately in Õ(m) time, where m is the number of edges of the graph [1]. Overall, the complexity of computing both the embedding U and ω(K) is O(dn²).

4 The theta function as diversity in graphs: ϑ-means clustering

In Section 2, we defined extensions of the Delsarte version of the Lovász number, ϑ1(G), and the associated geometric embedding, for weighted graphs. Now we wish to show how both ϑ(G) and the geometric embedding are useful for solving common machine learning tasks. We build on an intuition of ϑ(G) as a measure of diversity in graphs, illustrated here by a few simple examples. For complete graphs K_n, it is well known that ϑ(K_n) = 1, and for empty graphs K̄_n, ϑ(K̄_n) = n. We may interpret these graphs as having 1 and n clusters, respectively. Graphs with several disjoint clusters make a natural middle ground: for a graph G that is a union of k disjoint cliques, ϑ(G) = k. Now, consider the analogue of (6) for graphs with edge weights S_ij. For any K ∈ K(G, S) and for any subset H of nodes, let α_i = 1 if i ∈ H and 0 otherwise. Then, since K_ij ≤ S_ij,

2 Σ_i α_i − Σ_{ij} α_i α_j K_ij = Σ_i α_i − Σ_{i≠j} α_i α_j K_ij ≥ |H| − Σ_{i≠j, i,j∈H} S_ij.

Maximizing this expression may be viewed as a trade-off: finding a subset of nodes that is both large and diverse; the objective function is the size of the set, subject to a penalty for non-diversity. In support vector machines, non-zero support values α_i correspond to support vectors, which define the decision boundary. As a result, nodes i ∈ V with high values α_i may be interpreted as an important and diverse set of nodes.
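The disjoint-cliques example can be reproduced numerically: for a union of k cliques, the LS-labelling is block-diagonal all-ones, and maximizing the one-class SVM objective recovers ω(K) = k. A sketch, with SciPy's L-BFGS-B standing in for a dedicated SVM solver (an assumption; any QP solver works, since K ⪰ 0 makes the objective concave in α):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import block_diag

def omega(K):
    """ω(K) = max_{α ≥ 0} 2·1ᵀα − αᵀKα, the one-class SVM objective f(α; K)."""
    n = K.shape[0]
    fun = lambda a: -(2 * a.sum() - a @ K @ a)
    jac = lambda a: -(2 * np.ones(n) - 2 * (K @ a))
    res = minimize(fun, np.full(n, 1.0 / n), jac=jac,
                   bounds=[(0, None)] * n, method='L-BFGS-B')
    return -res.fun

# G = union of 3 disjoint 4-cliques; its LS-labelling is block-diagonal all-ones
K = block_diag(*[np.ones((4, 4))] * 3)
print(round(omega(K), 3))  # → 3.0, the number of cliques
```

Likewise, `omega(np.eye(n))` recovers ϑ(K̄_n) = n for the empty graph, whose feasible kernel includes the identity.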
4.1 ϑ-means clustering

A common problem related to diversity in graphs is correlation clustering [3]. In correlation clustering, the task is to cluster a set of items V = {1, ..., n} based on their similarity, or correlation, S : V × V → R^{n×n}, without specifying the number of clusters beforehand. This is naturally posed as a problem of clustering the nodes of an edge-weighted graph. In a variant called overlapping correlation clustering [4], items may belong to several, overlapping, clusters. The usual formulation of correlation clustering is an integer linear program [3]. Making use of geometric embeddings, we may convert the graph clustering problem to the more standard problem of clustering a set of points {u_i}_{i=1}^n ⊂ R^d, allowing the use of an arsenal of established techniques, such as k-means clustering.

Algorithm 1: ϑ-means clustering
 1: Input: graph G, with weight matrix S and node weights σ.
 2: Compute kernel K ∈ K(G, σ, S)
 3: α*_i ← argmax_{α_i} f(α; K), as in (4)
 4: Sort the α's according to j_i such that α_{j1} ≥ α_{j2} ≥ ... ≥ α_{jn}
 5: Let k = ⌈ϑ̂⌉ where ϑ̂ ← ω(K) = f(α*; K)
 6: either a)
 7:   Initialize labels Z_i = argmax_{j ∈ {j1,...,jk}} K_ij
 8:   Output: result of kernel k-means with kernel K, k = ⌈ϑ̂⌉ and Z as initial labels
 9: or b)
10:   Compute U : K = UᵀU, with columns U_i, and let C ← {U_{ji} : i ≤ k}
11:   Output: result of k-means with k = ⌈ϑ̂⌉ and C as initial cluster centroids

However, we remind ourselves of two common problems with existing clustering algorithms.

Problem 1: number of clusters. Many clustering algorithms rely on the user making a good choice of k, the number of clusters. As this choice can have a dramatic effect on both the accuracy and speed of the algorithm, heuristics for choosing k, such as that of Pham et al. [22], have been proposed.

Problem 2: initialization. Popular clustering algorithms, such as Lloyd's k-means or expectation-maximization for Gaussian mixture models, require an initial guess of the parameters.
As a result, these algorithms are often run repeatedly with different random initializations. We propose solutions to both problems based on ϑ1(G). To solve Problem 1, we choose k = ⌈ϑ1(G)⌉. This is motivated by ϑ1(G) being a measure of diversity. For Problem 2, we propose initializing parameters based on the observation that the non-zero α_i are support vectors. Specifically, we let the initial clusters be represented by the set of k nodes, I ⊂ V, with the largest α_i. In k-means clustering, this corresponds to letting the initial centroids be {u_i}_{i∈I}. We summarize these ideas in Algorithm 1, comprising both ϑ-means and kernel ϑ-means clustering. In Section 3.1, we showed that computing the approximate weighted theta function and embedding can be done in O(dn²) time for a rank-d = O(√n) approximation to the SVD. As is well known, Lloyd's algorithm has a very high worst-case complexity and will dominate the overall complexity.

5 Experiments

5.1 Weighted Maximum Cut

The maximum cut problem (Max-Cut), a fundamental problem in graph algorithms with applications in machine learning [25], has famously been solved using geometric embeddings defined by semidefinite programs [9]. Here, given a graph G, we compute an embedding U ∈ R^{d×n}, the SVM-theta labelling of [15], using the LS-labelling K_LS. To reduce complexity while preserving accuracy [9], we use a rank-d = √(2n) truncated SVD; see Section 3.1. We apply the Goemans-Williamson random hyperplane rounding [9] to partition the embedding into two sets of points, representing the cut. The rounding was repeated 5000 times, and the maximum cut is reported. Helmberg & Rendl [11] constructed a set of 54 graphs, 24 of which are weighted, that has since often been used as a benchmark for Max-Cut. We use the six weighted graphs for which there are multiple published results [6, 21]. Our approach is closest to that of the SDP relaxation, which

Table 2: Weighted maximum cut. c is the weight of the produced cut.
  Graph |  SDP [6]: c, Time  |  SVM-ϑ: c, Time  |  Best known [21]: c, Time
  G11   |   528,  165s       |   522,  3.13s    |   564, 171.8s
  G12   |   522,  145s       |   518,  2.94s    |   556, 241.5s
  G13   |   542,  145s       |   540,  2.97s    |   580, 227.5s
  G32   |  1280, 1318s       |  1286, 35.5s     |  1398, 900.6s
  G33   |  1248, 1417s       |  1260, 36.4s     |  1376, 925.6s
  G34   |  1264, 1295s       |  1268, 37.9s     |  1372, 925.6s

Table 3: Clustering of the (mini) newsgroup dataset. Average (and std. deviation) over 5 splits. k̂ is the average number of clusters predicted. The true number is k = 16.

  Method        |      F1      |  k̂  | Time
  VOTE/BOEM     | 31.29 ± 4.0  | 124 | 8.7m
  PIVOT/BOEM    | 30.07 ± 3.4  | 120 | 14m
  BEST/BOEM     | 29.67 ± 3.4  | 112 | 13m
  FIRST/BOEM    | 26.76 ± 3.8  | 109 | 14m
  k-MEANS+RAND  | 17.31 ± 1.3  |   2 | 15m
  k-MEANS+INIT  | 20.06 ± 6.8  |   3 | 5.2m
  ϑ-MEANS+RAND  | 35.60 ± 4.3  |  25 | 45s
  ϑ-MEANS       | 36.20 ± 4.9  |  25 | 11s

has time complexity O(mn log² n/ε³) [2]. In comparison, our method takes O(n^2.5) time; see Section 3.1. The results are presented in Table 2. For all graphs, the SVM approximation is comparable to or better than the SDP solution, and considerably faster than the best known method [21].¹

5.2 Correlation clustering

We evaluate several different versions of Algorithm 1 on the task of correlation clustering; see Section 4.1. We consider a) the full version (ϑ-MEANS), b) one with k = ⌈ϑ̂⌉ but random initialization of centroids (ϑ-MEANS+RAND), c) one with α-based initialization but choosing k according to Pham et al. [22] (k-MEANS+INIT), and d) k according to [22] and random initialization (k-MEANS+RAND). For the randomly initialized versions, we use 5 restarts of k-means++. In all versions, we cluster the points of the embedding defined by the fixed kernel (LS-labelling) K = K_LS(G, S). Elsner & Schudy [7] constructed five affinity matrices for a subset of the classical 20-newsgroups dataset. Each matrix, corresponding to a different split of the data, represents the similarity between messages in 16 different newsgroups. The task is to cluster the messages by their respective newsgroup.
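For reference, the random hyperplane rounding used for Max-Cut in Section 5.1 can be sketched in a few lines (NumPy; the 5-cycle test graph, the seed, and 2000 rounds are illustrative choices, whereas the experiments above use 5000 rounds):

```python
import numpy as np

def embedding_from_kernel(K):
    """Factor K = UᵀU via eigendecomposition; column i of U embeds node i."""
    w, V = np.linalg.eigh(K)
    return np.sqrt(np.clip(w, 0, None))[:, None] * V.T

def gw_round(U, A, trials=2000, seed=0):
    """Goemans-Williamson rounding: the sign of a random hyperplane projection
    defines a cut; return the best cut weight found over `trials` draws."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        s = np.sign(rng.standard_normal(U.shape[0]) @ U)
        best = max(best, 0.25 * np.sum(A * (1.0 - np.outer(s, s))))
    return best

# 5-cycle: build the adjacency, the LS-labelling embedding, then round
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
K = A / abs(np.linalg.eigvalsh(A)[0]) + np.eye(n)
cut = gw_round(embedding_from_kernel(K), A)
print(cut)  # an even number; the maximum cut of a 5-cycle is 4
```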
We run Algorithm 1 on every split and compute the F1-score [7], reporting the average and standard deviation over all splits, as well as the predicted number of clusters, k̂. We compare our results to several greedy methods described by Elsner & Schudy [7]; see Table 3. We compare only to their logarithmic weighting scheme, as the difference from using additive weights was negligible [7]. The results are presented in Table 3. We observe that the full ϑ-means method achieves the highest F1-score, followed by the version with random initialization (instead of using embeddings of the nodes with the highest α_i; see Algorithm 1). We note also that choosing k by the method of Pham et al. [22] consistently results in too few clusters, and, with the greedy search methods, far too many.

5.3 Overlapping Correlation Clustering

Bonchi et al. [4] constructed a benchmark for overlapping correlation clustering based on two datasets for multi-label classification, Yeast and Emotion. The datasets consist of 2417 and 593 items belonging to one or more of 14 and 6 overlapping clusters, respectively. Each set can be represented as an n × k binary matrix L, where k is the number of clusters and n is the number of items,

¹ Note that the timing results for the SDP method are from the original paper, published in 2001.

Table 4: Clustering of the Yeast and Emotion datasets. † The total time for finding the best solution. The times for OCC-ISECT for a single k were 2.21s and 80.4s, respectively.

  Method                |        Emotion              |          Yeast
                        | Prec. | Rec. | F1   | Time   | Prec. | Rec. | F1   | Time
  OCC-ISECT [4]         | 0.98  | 1    | 0.99 | 12.1s† | 0.99  | 1.00 | 1.00 | 716s†
  ϑ-means (no k-means)  | 1     | 1    | 1    | 0.34s  | 0.94  | 1    | 0.97 | 6.67s

such that L_ic = 1 iff item i belongs to cluster c. From L, a weight matrix S is defined such that S_ij is the Jaccard coefficient between rows i and j of L. S is often sparse, as many of the pairs do not share a single cluster. The correlation clustering task is to reconstruct L from S.
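The weight matrix S here is just the pairwise Jaccard coefficient between the rows of L, which vectorizes nicely for a binary L (a sketch; the function name is ours):

```python
import numpy as np

def jaccard_similarity(L):
    """S_ij = |L_i ∩ L_j| / |L_i ∪ L_j| between rows of a binary label matrix."""
    L = np.asarray(L, dtype=float)
    inter = L @ L.T                                   # pairwise intersection sizes
    row = L.sum(axis=1)
    union = row[:, None] + row[None, :] - inter       # |A ∪ B| = |A| + |B| - |A ∩ B|
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(union > 0, inter / union, 0.0)

# items 0 and 1 share exactly the same clusters; items 0 and 2 share one of three
L = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1], [1, 0, 1]])
S = jaccard_similarity(L)
print(S[0, 1], S[0, 2])  # → 1.0 0.333...
```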
Here, we use only the centroids C = {u_{j1}, ..., u_{jk}} produced by Algorithm 1, without running k-means. We let each centroid c = 1, ..., k represent a cluster, and assign a node i ∈ V to that cluster, i.e., L̂_ic = 1, iff u_iᵀu_{jc} > 0. We compute the precision and recall following Bonchi et al. [4]. For comparison with Bonchi et al. [4], we run their algorithm, called OCC-ISECT, with the parameter k̄ bounding the number of clusters in the interval 1, ..., 16, and select the result with the lowest cost. The results are presented in Table 4. For Emotion and Yeast, ϑ-means estimated the number of clusters k to be 6 (the correct number) and 8, respectively. For OCC-ISECT, the k with the lowest cost were 10 and 13. We note that, while very similar in performance, the ϑ-means algorithm is considerably faster than OCC-ISECT, especially when k is unknown.

5.4 Document summarization

Finally, we briefly examine the idea of using α_i to select a set of items that is both relevant and diverse, in a very natural application of the weighted theta function: extractive summarization [18]. In extractive summarization, the goal is to automatically summarize a text by picking out a small set of sentences that best represents the whole text. We may view the sentences of a text as the nodes of a graph, with edge weights S_ij, the similarity between sentences, and node weights σ_i representing the relevance of the sentence to the text as a whole. The trade-off between brevity and relevance described above can then be viewed as finding a set of nodes with both high total weight and high diversity. This is naturally accomplished in our framework by computing [α*_1, ..., α*_n]ᵀ = argmax_{α_i ≥ 0} f(α; K) for fixed K = K_LS(G, σ, S) and picking the sentences with the highest α*_i. We apply this method to the multi-document summarization task of DUC-04.² We let S_ij be the TF-IDF sentence similarity described by Lin & Bilmes [18], and let σ_i = (Σ_j S_ij)².
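A toy sketch of this selection rule follows (the similarity matrix below is fabricated for illustration; SciPy's L-BFGS-B again stands in for a one-class SVM solver, and we assume S has a negative smallest eigenvalue, which holds for zero-diagonal similarity matrices):

```python
import numpy as np
from scipy.optimize import minimize

def select_sentences(S, m):
    """Pick the m sentences with the largest support values α*, where
    K = K_LS(G, σ, S) = S/(σ_max|λ_n(S)|) + diag(σ)^{-1} and σ_i = (Σ_j S_ij)²."""
    n = S.shape[0]
    sigma = S.sum(axis=1) ** 2                       # node relevance weights
    lam_min = np.linalg.eigvalsh(S)[0]               # assumed negative here
    K = S / (sigma.max() * abs(lam_min)) + np.diag(1.0 / sigma)
    res = minimize(lambda a: -(2 * a.sum() - a @ K @ a),
                   np.full(n, 1.0 / n),
                   jac=lambda a: -(2 * np.ones(n) - 2 * (K @ a)),
                   bounds=[(0, None)] * n, method='L-BFGS-B')
    return np.argsort(-res.x)[:m]                    # indices of top-α sentences

# 4 toy "sentences": two near-duplicate pairs with weak cross-similarity
S_toy = np.array([[0.0, 0.8, 0.1, 0.1],
                  [0.8, 0.0, 0.1, 0.1],
                  [0.1, 0.1, 0.0, 0.6],
                  [0.1, 0.1, 0.6, 0.0]])
print(sorted(int(i) for i in select_sentences(S_toy, 2)))
```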
State-of-the-art systems purpose-built for summarization achieve around 0.39 in recall and F1 score [18]. Our method achieves a score of 0.33 on both measures, which is about the same as the basic version of [18]. This could likely be improved by tuning the trade-off between relevance and diversity, for example by making a more sophisticated choice of S and σ. However, we leave this to future work.

6 Conclusions

We have introduced a unifying generalization of Lovász's theta function and the corresponding geometric embedding to graphs with node and edge weights, characterized as a minimization over a constrained set of kernel matrices. This allows an extension of a fast approximation of the Lovász number to weighted graphs, defined by an SVM problem for a fixed kernel matrix. We have shown that the theta function has a natural interpretation as a measure of diversity in graphs, a useful function in several machine learning problems. Exploiting these results, we have defined algorithms for weighted maximum cut, correlation clustering and document summarization.

Acknowledgments

This work is supported in part by the Swedish Foundation for Strategic Research (SSF).

² http://duc.nist.gov/duc2004/

References

[1] S. Arora, E. Hazan, and S. Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 339–348. IEEE, 2005.
[2] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[3] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine Learning, 56(1-3):89–113, 2004.
[4] F. Bonchi, A. Gionis, and A. Ukkonen. Overlapping correlation clustering. Knowledge and Information Systems, 35(1):1–32, 2013.
[5] M. Brand. Fast low-rank modifications of the thin singular value decomposition.
Linear Algebra and its Applications, 415(1):20–30, 2006.
[6] S. Burer and R. D. Monteiro. A projected gradient algorithm for solving the maxcut SDP relaxation. Optimization Methods and Software, 15(3-4):175–200, 2001.
[7] M. Elsner and W. Schudy. Bounding and comparing methods for correlation clustering beyond ILP. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 19–27. Association for Computational Linguistics, 2009.
[8] M. X. Goemans. Semidefinite programming in combinatorial optimization. Math. Program., 79:143–161, 1997.
[9] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
[10] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer, 1988.
[11] C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10(3):673–696, 2000.
[12] D. Hush, P. Kelly, C. Scovel, and I. Steinwart. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733–769, 2006.
[13] G. Iyengar, D. J. Phillips, and C. Stein. Approximating semidefinite packing programs. SIAM Journal on Optimization, 21(1):231–268, 2011.
[14] V. Jethava, A. Martinsson, C. Bhattacharyya, and D. Dubhashi. Lovász ϑ function, SVMs and finding dense subgraphs. The Journal of Machine Learning Research, 14(1):3495–3536, 2013.
[15] V. Jethava, J. Sznajdman, C. Bhattacharyya, and D. Dubhashi. Lovász ϑ, SVMs and applications. In Information Theory Workshop (ITW), 2013 IEEE, pages 1–5. IEEE, 2013.
[16] F. D. Johanson, A. Chattoraj, C. Bhattacharyya, and D. Dubhashi. Supplementary material, 2015.
[17] D. E. Knuth. The sandwich theorem. Electr. J. Comb., 1, 1994.
[18] H. Lin and J. Bilmes.
A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 510–520. Association for Computational Linguistics, 2011.
[19] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25(1):1–7, 1979.
[20] L. Lovász and K. Vesztergombi. Geometric representations of graphs. Paul Erdős and his Mathematics, 1999.
[21] R. Martí, A. Duarte, and M. Laguna. Advanced scatter search for the max-cut problem. INFORMS Journal on Computing, 21(1):26–38, 2009.
[22] D. T. Pham, S. S. Dimov, and C. Nguyen. Selection of k in k-means clustering. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 219(1):103–119, 2005.
[23] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[24] A. Schrijver. A comparison of the Delsarte and Lovász bounds. Information Theory, IEEE Transactions on, 25(4):425–429, 1979.
[25] J. Wang, T. Jebara, and S.-F. Chang. Semi-supervised learning using greedy max-cut. The Journal of Machine Learning Research, 14(1):771–800, 2013.
End-to-end Learning of LDA by Mirror-Descent Back Propagation over a Deep Architecture

Jianshu Chen*, Ji He†, Yelong Shen*, Lin Xiao*, Xiaodong He*, Jianfeng Gao*, Xinying Song* and Li Deng*
* Microsoft Research, Redmond, WA 98052, USA, {jianshuc,yeshen,lin.xiao,xiaohe,jfgao,xinson,deng}@microsoft.com
† Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA, jvking@uw.edu

Abstract

We develop a fully discriminative learning approach for the supervised Latent Dirichlet Allocation (LDA) model using Back Propagation (i.e., BP-sLDA), which maximizes the posterior probability of the prediction variable given the input document. Different from traditional variational learning or Gibbs sampling approaches, the proposed learning method applies (i) the mirror descent algorithm for maximum a posteriori inference and (ii) back propagation over a deep architecture together with stochastic gradient/mirror descent for model parameter estimation, leading to scalable and end-to-end discriminative learning of the model. As a byproduct, we also apply this technique to develop a new learning method for the traditional unsupervised LDA model (i.e., BP-LDA). Experimental results on three real-world regression and classification tasks show that the proposed methods significantly outperform previous supervised topic models and neural networks, and are on par with deep neural networks.

1 Introduction

Latent Dirichlet Allocation (LDA) [5], among various forms of topic models, is an important probabilistic generative model for analyzing large collections of text corpora. In LDA, each document is modeled as a collection of words, where each word is assumed to be generated from a certain topic drawn from a topic distribution. The topic distribution can be viewed as a latent representation of the document, which can be used as a feature for prediction purposes (e.g., sentiment analysis).
In particular, the inferred topic distribution is fed into a separate classifier or regression model (e.g., logistic regression or linear regression) to perform prediction. Such a separate learning structure usually significantly restricts the performance of the algorithm. To address this, various supervised topic models have been proposed to model the documents jointly with the label information. In [4], variational methods were applied to learn a supervised LDA (sLDA) model by maximizing the lower bound of the joint probability of the input data and the labels. The DiscLDA method developed in [15] learns the transformation matrix from the latent topic representation to the output in a discriminative manner, while learning the topic-to-word distribution in a generative manner similar to the standard LDA. In [26], max-margin supervised topic models are developed for classification and regression, which are trained by optimizing the sum of the variational bound for the log marginal likelihood and an additional term that characterizes the prediction margin. These methods successfully incorporate the information from both the input data and the labels, and showed better prediction performance compared to the vanilla LDA model. One challenge in LDA is that exact inference is intractable, i.e., the posterior distribution of the topics given the input document cannot be evaluated explicitly. For this reason, various approximate inference methods have been proposed.

Figure 1: Graphical representation of the supervised LDA model (topic-word distributions φ_k with prior β over K topics; per-document topic proportions θ_d with prior α; topics z_{d,n} and words w_{d,n} over N words; responses y_d with parameters U, γ; plates over the D documents). Shaded nodes are observables.
In this paper, we will show that, although computing the full posterior probability of the topic distribution is difficult, its maximum a posteriori (MAP) inference, as a simplified problem, is a convex optimization problem when the Dirichlet parameter satisfies certain conditions, and can then be solved efficiently by the mirror descent algorithm (MDA) [2, 18, 21]. Indeed, Sontag and Roy [19] pointed out that the MAP inference problem of LDA in this situation is polynomial-time and can be solved by an exponentiated gradient method, which takes the same form as our mirror-descent algorithm with constant step-size. Nevertheless, different from [19], which studied the inference problem alone, our focus in this paper is to integrate back propagation with the mirror-descent algorithm to perform fully discriminative training of supervised topic models, as we proceed to explain below. Among the aforementioned methods, one training objective of the supervised LDA model is to maximize the joint likelihood of the input and output variables [4]. Another variant is to maximize the sum of the log likelihood (or its variational bound) and a prediction margin [26, 27]. Moreover, DiscLDA optimizes part of the model parameters by maximizing the marginal likelihood of the input variables, and optimizes the other part of the model parameters by maximizing the conditional likelihood. For this reason, DiscLDA is not a fully discriminative training of all the model parameters. In this paper, we propose fully discriminative training of all the model parameters by maximizing the posterior probability of the output given the input document. We will show that the discriminative training can be performed in a principled manner by naturally integrating back propagation with the MDA-based exact MAP inference. To the best of our knowledge, this paper is the first work to perform fully end-to-end discriminative training of supervised topic models.
Discriminative training of generative models is widely used and usually outperforms standard generative training in prediction tasks [3, 7, 12, 14, 25]. As pointed out in [3], discriminative training increases robustness against mismatch between the generative model and the real data. Experimental results on three real-world tasks also show the superior performance of discriminative training. In addition to the aforementioned studies on topic models [4, 15, 26, 27], there has been another stream of work that applied empirical risk minimization to graphical models such as Markov random fields and nonnegative matrix factorization [10, 20]. Specifically, in [20], an approximate inference algorithm, belief propagation, is used to compute the belief of the output variables, which is further fed into a decoder to produce the prediction. The approximate inference and the decoder are treated as an entire black-box decision rule, which is tuned jointly via back propagation. Our work is different from the above studies in that we use MAP inference based on optimization theory to motivate the discriminative training from a principled probabilistic framework.

2 Smoothed Supervised LDA Model

We consider the smoothed supervised LDA model in Figure 1. Let K be the number of topics, N the number of words in each document, V the vocabulary size, and D the number of documents in the corpus. The generative process of the model in Figure 1 can be described as follows:

1. For each document d, choose the topic proportions according to a Dirichlet distribution: θ_d ∼ p(θ_d|α) = Dir(α), where α is a K × 1 vector consisting of nonnegative components.

2. Draw each column φ_k of a V × K matrix Φ independently from an exchangeable Dirichlet distribution: φ_k ∼ Dir(β) (i.e., Φ ∼ p(Φ|β)), where β > 0 is the smoothing parameter.

3. To generate each word w_{d,n}:
   (a) Choose a topic z_{d,n} ∼ p(z_{d,n}|θ_d) = Multinomial(θ_d).¹
   (b) Choose a word w_{d,n} ∼ p(w_{d,n}|z_{d,n}, Φ) = Multinomial(φ_{z_{d,n}}).

4.
Choose the C × 1 response vector: y_d ∼ p(y_d|θ_d, U, γ).
   (a) In regression, p(y_d|θ_d, U, γ) = N(Uθ_d, γ⁻¹), where U is a C × K matrix consisting of regression coefficients.
   (b) In multi-class classification, p(y_d|θ_d, U, γ) = Multinomial(Softmax(γUθ_d)), where the softmax function is defined as Softmax(x)_c = e^{x_c} / Σ_{c'=1}^{C} e^{x_{c'}}, c = 1, ..., C.

Therefore, the entire model can be described by the following joint probability:

p(Φ|β) Π_{d=1}^{D} [ p(y_d|θ_d, U, γ) · p(θ_d|α) · p(w_{d,1:N}|z_{d,1:N}, Φ) · p(z_{d,1:N}|θ_d) ]   (1)

where the factor inside the brackets is p(y_d, θ_d, w_{d,1:N}, z_{d,1:N}|Φ, U, α, γ), and w_{d,1:N} and z_{d,1:N} denote all the words and the associated topics, respectively, in the d-th document. Note that the model in Figure 1 is slightly different from the one proposed in [4]: the response variable y_d in Figure 1 is coupled with θ_d instead of z_{d,1:N} as in [4]. Blei and McAuliffe also pointed out this choice as an alternative in [4]. This modification leads to a differentiable end-to-end cost trainable by back propagation, with superior prediction performance. To develop a fully discriminative training method for the model parameters Φ and U, we follow the argument in [3], which states that discriminative training is also equivalent to maximizing the joint likelihood of a new model family with an additional set of parameters:

argmax_{Φ,U,Φ̃}  p(Φ|β) p(Φ̃|β) Π_{d=1}^{D} p(y_d|w_{d,1:N}, Φ, U, α, γ) Π_{d=1}^{D} p(w_{d,1:N}|Φ̃, α)   (2)

where p(w_{d,1:N}|Φ̃, α) is obtained by marginalizing p(y_d, θ_d, w_{d,1:N}, z_{d,1:N}|Φ, U, α, γ) in (1) and replacing Φ with Φ̃. The above problem (2) decouples into

argmax_{Φ,U} [ ln p(Φ|β) + Σ_{d=1}^{D} ln p(y_d|w_{d,1:N}, Φ, U, α, γ) ]   (3)

argmax_{Φ̃} [ ln p(Φ̃|β) + Σ_{d=1}^{D} ln p(w_{d,1:N}|Φ̃, α) ]   (4)

which are the discriminative learning problem of supervised LDA (Eq. (3)) and the unsupervised learning problem of LDA (Eq. (4)), respectively. We will show that both problems can be solved in a unified manner using a new MAP inference and back propagation.
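The generative process of Section 2 is easy to simulate, which is useful for sanity-checking inference code. A NumPy sketch of steps 1-4 (regression variant of step 4; all parameter values in the demo are arbitrary):

```python
import numpy as np

def generate_corpus(D, N, K, V, alpha, beta, U, gamma, rng):
    """Sample documents and responses from the smoothed supervised LDA
    model of Section 2 (regression response in step 4)."""
    Phi = rng.dirichlet(np.full(V, beta), size=K).T      # V×K topic-word matrix
    docs, thetas, y = [], [], []
    for _ in range(D):
        theta = rng.dirichlet(alpha)                     # step 1: θ_d ~ Dir(α)
        z = rng.choice(K, size=N, p=theta)               # step 3a: topics z_{d,n}
        w = np.array([rng.choice(V, p=Phi[:, k]) for k in z])  # step 3b: words
        y.append(rng.normal(U @ theta, 1.0 / np.sqrt(gamma)))  # step 4a: y_d
        docs.append(w)
        thetas.append(theta)
    return Phi, np.array(thetas), docs, np.array(y)

rng = np.random.default_rng(0)
Phi, theta, docs, y = generate_corpus(D=5, N=20, K=3, V=50,
                                      alpha=np.ones(3), beta=0.1,
                                      U=np.ones((1, 3)), gamma=10.0, rng=rng)
print(Phi.shape, len(docs), y.shape)  # → (50, 3) 5 (5, 1)
```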
3 Maximum A Posteriori (MAP) Inference

We first consider the inference problem in the smoothed LDA model. For the supervised case, the main objective is to infer y_d given the words w_{d,1:N} in each document d, i.e., computing

p(y_d|w_{d,1:N}, Φ, U, α, γ) = ∫ p(y_d|θ_d, U, γ) p(θ_d|w_{d,1:N}, Φ, α) dθ_d   (5)

where the probability p(y_d|θ_d, U, γ) is known (e.g., multinomial or Gaussian for classification and regression problems; see Section 2). The main challenge is to evaluate p(θ_d|w_{d,1:N}, Φ, α), i.e., to infer the topic proportions given each document, which is also the key inference problem in the unsupervised LDA model. However, it is well known that the exact evaluation of the posterior probability p(θ_d|w_{d,1:N}, Φ, α) is intractable [4, 5, 9, 15, 26, 27]. For this reason, various approximate inference methods, such as variational inference [4, 5, 15, 26] and Gibbs sampling [9, 27], have been proposed to compute the approximate posterior probability. In this paper, we take an alternative approach to inference: given each document d, we seek only a point (MAP) estimate of θ_d, instead of its full (approximate) posterior probability. The major motivation is that, although the full posterior probability of θ_d is difficult, its MAP estimate, as a simplified problem, is more tractable (and it is a convex problem under certain conditions).

¹ We will represent all the multinomial variables by a one-hot vector that has a single component equal to one at the position determined by the multinomial variable and all other components equal to zero.
Furthermore, with the MAP estimate of θ_d, we can infer the prediction variable y_d according to the following approximation of (5):

p(y_d|w_{d,1:N}, Φ, U, α, γ) = E_{θ_d|w_{d,1:N}}[p(y_d|θ_d, U, γ)] ≈ p(y_d|θ̂_{d|w_{d,1:N}}, U, γ)   (6)

where E_{θ_d|w_{d,1:N}} denotes the conditional expectation with respect to θ_d given w_{d,1:N}, and the expectation is sampled by the MAP estimate θ̂_{d|w_{d,1:N}} of θ_d given w_{d,1:N}, defined as

θ̂_{d|w_{d,1:N}} = argmax_{θ_d} p(θ_d|w_{d,1:N}, Φ, α, β)   (7)

The approximation becomes more precise as p(θ_d|w_{d,1:N}, Φ, α, β) becomes more concentrated around θ̂_{d|w_{d,1:N}}. Experimental results on several real datasets (Section 5) show that the approximation (6) provides excellent prediction performance. Using the Bayes rule p(θ_d|w_{d,1:N}, Φ, α) = p(θ_d|α) p(w_{d,1:N}|θ_d, Φ)/p(w_{d,1:N}|Φ, α) and the fact that p(w_{d,1:N}|Φ, α) is independent of θ_d, we obtain the equivalent form of (7) as

θ̂_{d|w_{d,1:N}} = argmax_{θ_d ∈ P_K} [ ln p(θ_d|α) + ln p(w_{d,1:N}|θ_d, Φ) ]   (8)

where P_K = {θ ∈ R^K : θ_j ≥ 0, Σ_{j=1}^K θ_j = 1} denotes the (K−1)-dimensional probability simplex, p(θ_d|α) is the Dirichlet distribution, and p(w_{d,1:N}|θ_d, Φ) can be computed by integrating p(w_{d,1:N}, z_{d,1:N}|θ_d, Φ) = Π_{n=1}^N p(w_{d,n}|z_{d,n}, Φ) p(z_{d,n}|θ_d) over z_{d,1:N}, which leads to (derived in Section A of the supplementary material)

p(w_{d,1:N}|θ_d, Φ) = Π_{v=1}^V ( Σ_{j=1}^K θ_{d,j} Φ_{vj} )^{x_{d,v}} = p(x_d|θ_d, Φ)   (9)

where x_{d,v} denotes the term frequency of the v-th word (in the vocabulary) inside the d-th document, and x_d denotes the V-dimensional bag-of-words (BoW) vector of the d-th document. Note that p(w_{d,1:N}|θ_d, Φ) depends on w_{d,1:N} only via the BoW vector x_d, which is the sufficient statistic. Therefore, we use p(x_d|θ_d, Φ) and p(w_{d,1:N}|θ_d, Φ) interchangeably from now on. Substituting the expression of the Dirichlet distribution and (9) into (8), we get

θ̂_{d|w_{d,1:N}} = argmax_{θ_d ∈ P_K} [ x_dᵀ ln(Φθ_d) + (α − 1)ᵀ ln θ_d ] = argmin_{θ_d ∈ P_K} [ −x_dᵀ ln(Φθ_d) − (α − 1)ᵀ ln θ_d ]   (10)

where we dropped the terms independent of θ_d, and 1 denotes the all-one vector.
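The objective in (10) is easy to check numerically: for α ≥ 1 it is convex, so its value at the midpoint of any two simplex points is at most the average of the endpoint values. A sketch (NumPy; the toy Φ and x are arbitrary):

```python
import numpy as np

def map_objective(theta, x, Phi, alpha):
    """f(θ) = -xᵀ ln(Φθ) - (α-1)ᵀ ln θ, the MAP objective in (10)."""
    return -(x @ np.log(Phi @ theta)) - (alpha - 1) @ np.log(theta)

rng = np.random.default_rng(1)
V, K = 20, 4
Phi = rng.dirichlet(np.ones(V), size=K).T      # V×K; columns on the simplex
x = rng.integers(0, 5, size=V).astype(float)   # bag-of-words counts
alpha = np.full(K, 1.5)                        # α > 1: strictly convex case

# convexity check along a random segment inside the simplex
t1, t2 = rng.dirichlet(np.ones(K)), rng.dirichlet(np.ones(K))
mid = 0.5 * (t1 + t2)
lhs = map_objective(mid, x, Phi, alpha)
rhs = 0.5 * (map_objective(t1, x, Phi, alpha) + map_objective(t2, x, Phi, alpha))
print(lhs <= rhs + 1e-9)  # → True for α ≥ 1
```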
Note that when α ≥ 1 (α > 1), the optimization problem (10) is (strictly) convex, and it is non-convex otherwise.

3.1 Mirror Descent Algorithm for MAP Inference

An efficient approach to solving the constrained optimization problem (10) is the mirror descent algorithm (MDA), with the Bregman divergence chosen to be the generalized Kullback-Leibler divergence [2, 18, 21]. Specifically, let f(θ_d) denote the cost function in (10). Then MDA updates the MAP estimate of θ_d iteratively according to

θ_{d,ℓ} = argmin_{θ_d ∈ P_K} { f(θ_{d,ℓ−1}) + [∇_{θ_d} f(θ_{d,ℓ−1})]ᵀ(θ_d − θ_{d,ℓ−1}) + (1/T_{d,ℓ}) Ψ(θ_d, θ_{d,ℓ−1}) }   (11)

where θ_{d,ℓ} denotes the estimate of θ_d at the ℓ-th iteration, T_{d,ℓ} denotes the step-size of MDA, and Ψ(x, y) is the Bregman divergence, chosen to be Ψ(x, y) = xᵀ ln(x/y) − 1ᵀx + 1ᵀy. The argmin in (11) can be solved in closed form (see Section B of the supplementary material) as

θ_{d,ℓ} = (1/C_θ) · θ_{d,ℓ−1} ⊙ exp( T_{d,ℓ} [ Φᵀ (x_d/(Φθ_{d,ℓ−1})) + (α − 1)/θ_{d,ℓ−1} ] ),  ℓ = 1, ..., L,  θ_{d,0} = (1/K)·1   (12)

where C_θ is a normalization factor such that θ_{d,ℓ} adds up to one, ⊙ denotes the Hadamard product, L is the number of MDA iterations, and the divisions in (12) are element-wise operations. Note that the recursion (12) naturally keeps each θ_{d,ℓ} on the probability simplex.

Figure 2: Layered deep architecture for computing p(y_d|w_{d,1:N}, Φ, U, α, γ): a stack of mirror-descent cells followed by a normalization step, where ()/() denotes element-wise division, ⊙ denotes the Hadamard product, and exp() denotes the element-wise exponential.

The MDA step-size T_{d,ℓ} can be either constant, i.e., T_{d,ℓ} = T, or adaptive over iterations and samples, determined by line search (see Section C of the supplementary material). The computational complexity of (12) is low, since most computations are sparse matrix operations.
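The closed-form recursion (12) is only a few lines of code. A sketch with a constant step-size (the toy topic matrix and word counts are our own; the paper also allows line-search step-sizes):

```python
import numpy as np

def mda_inference(x, Phi, alpha, L=50, T=0.01):
    """MAP inference of θ_d via the closed-form mirror-descent step (12),
    with a constant step-size T."""
    K = Phi.shape[1]
    theta = np.full(K, 1.0 / K)            # θ_{d,0} = (1/K)·1
    for _ in range(L):
        grad_term = Phi.T @ (x / (Phi @ theta)) + (alpha - 1.0) / theta
        theta = theta * np.exp(T * grad_term)
        theta /= theta.sum()               # the normalization factor C_θ
    return theta

# toy document: V = 3 words, K = 2 topics
Phi = np.array([[0.7, 0.1], [0.2, 0.1], [0.1, 0.8]])   # columns sum to one
x = np.array([5.0, 2.0, 3.0])                          # bag-of-words counts
theta = mda_inference(x, Phi, alpha=np.array([1.2, 1.2]))
print(theta.round(3), theta.sum())
```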
For example, although Φ θ_{d,ℓ−1} in (12) is by itself a dense matrix-vector product, we only need to evaluate its elements at the positions where the corresponding elements of x_d are nonzero, because all other elements of x_d / (Φ θ_{d,ℓ−1}) are known to be zero. Overall, the computational complexity of each iteration of (12) is O(nTok · K), where nTok denotes the number of unique tokens in the document. In practice, we use only a small number of iterations L in (12) and use θ_{d,L} to approximate θ̂_{d|w_{d,1:N}}, so that (6) becomes

p(y_d | w_{d,1:N}, Φ, U, α, γ) ≈ p(y_d | θ_{d,L}, U, γ)    (13)

In summary, the inference of θ_d and y_d can be implemented by the layered architecture in Figure 2, where the top layer infers y_d using (13) and the MDA layers infer θ_d iteratively using (12). Figure 2 also implies that the MDA layers act as a feature extractor, generating the MAP estimate θ_{d,L} for the output layer. Our end-to-end learning strategy developed in the next section jointly learns the model parameter U at the output layer and the model parameter Φ at the feature-extractor layers to maximize the posterior of the prediction variable given the input document.

4 Learning by Mirror-Descent Back Propagation

We now consider the supervised learning problem (3) and the unsupervised learning problem (4), respectively, using the developed MDA-based MAP inference. We first consider the supervised learning problem. With (13), the discriminative learning problem (3) can be approximated by

argmin_{Φ,U} [ −ln p(Φ | β) − Σ_{d=1}^D ln p(y_d | θ_{d,L}, U, γ) ]    (14)

which can be solved by stochastic mirror descent (SMD). Note that the cost function in (14) depends on U explicitly through p(y_d | θ_{d,L}, U, γ), which can be computed directly from its definition in Section 2. On the other hand, the cost function in (14) depends on Φ implicitly through θ_{d,L}.
From Figure 2, we observe that θ_{d,L} not only depends on Φ explicitly (as indicated in the MDA block on the right-hand side of Figure 2) but also depends on Φ implicitly via θ_{d,L−1}, which in turn depends on Φ both explicitly and implicitly (through θ_{d,L−2}), and so on. That is, the cost function depends on Φ in a layered manner. Therefore, we devise a back-propagation procedure that efficiently computes its gradient with respect to Φ according to the mirror-descent graph in Figure 2, back-propagating the error signal through the MDA blocks at the different layers. The gradient formula and the implementation details of the learning algorithm can be found in Sections C–D of the supplementary material. For the unsupervised learning problem (4), the gradient of ln p(Φ̃ | β) with respect to Φ̃ assumes the same form as that of ln p(Φ | β). Moreover, it can be shown that the gradient of ln p(w_{d,1:N} | Φ̃, α, γ) with respect to Φ̃ can be expressed as (see Section E of the supplementary material)

∂ ln p(w_{d,1:N} | Φ̃, α) / ∂Φ̃ = E_{θ_d | x_d}[ ∂/∂Φ̃ ln p(x_d | θ_d, Φ̃) ] ≈_{(a)} ∂/∂Φ̃ ln p(x_d | θ_{d,L}, Φ̃)    (15)

where p(x_d | θ_d, Φ̃) assumes the same form as (9) except that Φ is replaced by Φ̃. The expectation is evaluated with respect to the posterior probability p(θ_d | w_{d,1:N}, Φ̃, α), and is sampled by the MAP estimate of θ_d in step (a). Here θ_{d,L} is an approximation of θ̂_{d|w_{d,1:N}} computed via (12) and Figure 2.

5 Experiments

5.1 Description of Datasets and Baselines

We evaluated our proposed supervised learning (denoted as BP-sLDA) and unsupervised learning (denoted as BP-LDA) methods on three real-world datasets. The first dataset is a large-scale dataset built on Amazon movie reviews (AMR) [16], consisting of 7.9 million movie reviews (1.48 billion words) from Amazon, written by 889,176 users, on a total of 253,059 movies. For text preprocessing we removed punctuation and lowercased capital letters. A vocabulary of size 5,000 was built by selecting the most frequent words.
(In another setup, we keep the full vocabulary of 701K words.) As in [24], we shifted the review scores so that they have zero mean. The task is formulated as a regression problem, where we seek to predict the rating score using the text of the review. Second, we consider a multi-domain sentiment (MultiSent) classification task [6], which contains a total of 342,104 reviews on 25 types of products, such as apparel, electronics, and kitchen and housewares. The task is formulated as a binary classification problem: predict the polarity (positive or negative) of each review. Likewise, we preprocessed the text by removing punctuation and lowercasing capital letters, and built a vocabulary of size 1,000 from the most frequent words. In addition, we conducted a second binary text classification experiment on a large-scale proprietary dataset for business-centric applications (1.2M documents and a vocabulary size of 128K). The baseline algorithms we considered include Gibbs sampling (Gibbs-LDA) [17], logistic/linear regression on bag-of-words features, supervised LDA (sLDA) [4], and MedLDA [26], which are implemented either in C++ or Java; our proposed algorithms are implemented in C#.² For BP-LDA and Gibbs-LDA, we first train the models in an unsupervised manner, then generate the per-document topic proportion θ_d as features in the inference step, on top of which we train a linear (logistic) regression model for the regression (classification) tasks.

5.2 Prediction Performance

We first evaluate the prediction performance of our models and compare them with traditional (supervised) topic models. Since training the baseline topic models takes much longer than BP-sLDA and BP-LDA (see Figure 5), we compare their performance on two smaller datasets, namely a subset (79K documents) of AMR (randomly sampled from the 7.9 million reviews) and the MultiSent dataset (342K documents), all evaluated with 5-fold cross validation.
For AMR regression, we use the predictive R² to measure the prediction performance, defined as

pR² = 1 − ( Σ_d (y_d^o − y_d)² ) / ( Σ_d (y_d^o − ȳ^o)² ),

where y_d^o denotes the label of the d-th document in the held-out (out-of-fold) set during the 5-fold cross validation, ȳ^o is the mean of all y_d^o in the held-out set, and y_d is the predicted value. The pR² scores of different models with a varying number of topics are shown in Figure 3(a). Note that the BP-sLDA model outperforms the other baselines by a large margin. Moreover, the unsupervised BP-LDA model outperforms the unsupervised LDA model trained by Gibbs sampling (Gibbs-LDA). Second, on the MultiSent binary classification task, we use the area under the curve (AUC) of the operating curve of probability of correct positive versus probability of false positive as our performance metric; the results are shown in Figure 3(b). They likewise show that BP-sLDA outperforms the other methods and that BP-LDA outperforms the Gibbs-LDA model. Next, we compare our BP-sLDA model with other strong discriminative models (such as neural networks) in two large-scale experiments: (i) a regression task on the full AMR dataset (7.9M documents) and (ii) a binary classification task on the proprietary business-centric dataset (1.2M documents). For the large-scale AMR regression, we can see that pR² improves significantly compared

² A third-party code is available online at https://github.com/jvking/bp-lda.

Figure 3: Prediction performance on the AMR regression task (measured in pR²) and the MultiSent classification task (measured in AUC). Panels: (a) AMR regression task (79K); (b) MultiSent classification task; (c) MultiSent task (zoomed in). Curves compare BP-sLDA, linear/logistic regression, MedLDA, sLDA, BP-LDA, and Gibbs-LDA over the number of topics.
Higher scores are better for both metrics, with a perfect value of one.

Table 1: pR² (in percentage) on the full AMR data (7.9M documents). The standard deviations in parentheses are obtained from 5-fold cross validation.

Number of topics             | 5          | 10         | 20         | 50         | 100        | 200
Linear Regression (voc5K)    | 38.4 (0.1) |            |            |            |            |
Neural Network (voc5K)       | 59.0 (0.1) | 61.0 (0.1) | 62.3 (0.4) | 63.5 (0.7) | 63.1 (0.8) | 63.5 (0.4)
BP-sLDA (α=1.001, voc5K)     | 61.4 (0.1) | 65.3 (0.3) | 69.1 (0.2) | 74.7 (0.3) | 74.3 (2.4) | 78.3 (1.1)
BP-sLDA (α=0.5, voc5K)       | 54.7 (0.1) | 54.5 (1.2) | 57.0 (0.2) | 61.3 (0.3) | 67.1 (0.1) | 74.5 (0.2)
BP-sLDA (α=0.1, voc5K)       | 53.3 (2.8) | 56.1 (0.1) | 58.4 (0.1) | 64.1 (0.1) | 70.6 (0.3) | 75.7 (0.2)
Linear Regression (voc701K)  | 41.5 (0.2) |            |            |            |            |
BP-sLDA (α=1.001, voc701K)   | 69.8 (0.2) | 74.3 (0.3) | 78.5 (0.2) | 83.6 (0.6) | 80.1 (0.9) | 84.7 (2.8)

to the best results on the 79K dataset shown in Figure 3(a), and also significantly outperforms the neural network models with the same number of model parameters. Moreover, the best deep neural network (200×200 hidden layers) gives a pR² of 76.2% (±0.6%), which is worse than the 78.3% of BP-sLDA. In addition, BP-sLDA also significantly outperforms Gibbs-sLDA [27], Spectral-sLDA [24], and the Hybrid method (Gibbs-sLDA initialized with Spectral-sLDA) [24], whose pR² scores (reported in [24]) are between 10% and 20% for 5–10 topics (and deteriorate when the topic number is further increased). The results therein were obtained under the same setting as this paper. To further demonstrate the superior performance of BP-sLDA in the large-vocabulary scenario, we trained BP-sLDA on the full-vocabulary (701K) AMR data and show the results in Table 1, which are even better than in the 5K-vocabulary case. Finally, for the binary text classification task on the proprietary dataset, the AUCs are given in Table 2, where BP-sLDA (200 topics) achieves 31% and 18% relative improvements over logistic regression and the neural network, respectively.
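The predictive R² reported above is straightforward to compute on held-out labels; a minimal sketch, assuming NumPy, with toy arrays for illustration:

```python
import numpy as np

def predictive_r2(y_true, y_pred):
    """pR^2 = 1 - sum((y - yhat)^2) / sum((y - mean(y))^2), on held-out labels."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
perfect = predictive_r2(y_true, y_true)                       # exact predictions
baseline = predictive_r2(y_true, np.full(4, y_true.mean()))   # predict the mean
```

Exact predictions give pR² = 1, while always predicting the held-out mean gives pR² = 0; models can score below zero if they do worse than the mean predictor.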
Moreover, on this task, BP-sLDA is also on par with the best DNN (a larger model consisting of 200×200 hidden units with dropout), which achieves an AUC of 93.60.

5.3 Analysis and Discussion

We now analyze the influence of different hyperparameters on the prediction performance. Note from Figure 3(a) that, as we increase the number of topics, the pR² score of BP-sLDA first improves and then slightly deteriorates beyond 20 topics. This is most likely caused by overfitting on the small dataset (79K documents), because the BP-sLDA models trained on the full 7.9M dataset produce much higher pR² scores (Table 1) than on the 79K dataset and keep improving as the model size (number of topics) increases. To understand the influence of the mirror descent steps on the prediction performance, we plot in Figure 4(a) the pR² scores of BP-sLDA on the 7.9M AMR dataset for different numbers of mirror-descent steps L. As L increases, for small models (K = 5 and K = 20) the pR² score remains the same, and for a larger model (K = 100) the pR² score first improves and then remains the same. One explanation for this phenomenon is that a larger K turns the inference problem (10) into an optimization problem of higher dimension, which requires more mirror descent iterations. Moreover, the mirror-descent back propagation, as an end-to-end training of the prediction output, compensates for the imperfection caused by the limited number of inference steps, which makes the performance insensitive to L once L is large enough. In Figure 4(b), we plot the percentage of the dominant

Table 2: AUC (in percentage) on the business-centric proprietary data (1.2M documents, 128K vocabulary). The standard deviations in parentheses are obtained from five random initializations.
Number of topics    | 5            | 10           | 20           | 50           | 100          | 200
Logistic Regression | 90.56 (0.00) |              |              |              |              |
Neural Network      | 90.95 (0.07) | 91.25 (0.05) | 91.32 (0.23) | 91.54 (0.11) | 91.90 (0.05) | 91.98 (0.05)
BP-sLDA             | 92.02 (0.02) | 92.21 (0.03) | 92.35 (0.07) | 92.58 (0.03) | 92.82 (0.07) | 93.50 (0.06)

Figure 4: Analysis of the behaviors of the BP-sLDA and BP-LDA models. Panels: (a) influence of the number of MDA iterations (layers) L on pR² for 5, 20, and 100 topics; (b) sparsity of the topic distribution (percentage of dominant topics) for BP-sLDA, BP-LDA, and Gibbs-LDA with different α; (c) negative per-word log-likelihoods of the unsupervised models.

topics (which add up to 90% probability) on AMR, which shows that BP-sLDA learns a sparse topic distribution even when α = 1.001 and obtains sparser topic distributions with smaller α (i.e., 0.5 and 0.1). In Figure 4(c), we evaluate the per-word log-likelihoods of the unsupervised models on the AMR dataset using the method in [23]. The per-word log-likelihood of BP-LDA with α = 1.001 is worse than the cases of α = 0.5 and α = 0.1 for Gibbs-LDA, although its prediction performance is better. This suggests the importance of the Dirichlet prior in text modeling [1, 22] and a potential tradeoff between the text modeling performance and the prediction performance.

5.4 Efficiency in Computation Time

Figure 5: Training time on the AMR dataset, comparing sLDA (79K), BP-sLDA (79K), MedLDA (79K), and BP-sLDA (7.9M). (Tested on an Intel Xeon E5-2680 2.80GHz.)
To compare the efficiency of the algorithms, we show the training times of the different models on the AMR dataset (79K and 7.9M) in Figure 5, which shows that our algorithm scales well with respect to increasing model size (number of topics) and increasing number of data samples.

6 Conclusion

We have developed novel learning approaches for supervised LDA models, using MAP inference and mirror-descent back propagation, which leads to end-to-end discriminative training. We evaluated the prediction performance of the model on three real-world regression and classification tasks. The results show that discriminative training significantly improves the performance of the supervised LDA model relative to previous learning methods. Future work includes (i) exploring faster algorithms for the MAP inference (e.g., accelerated mirror descent), (ii) developing semi-supervised learning of LDA using the framework of [3], and (iii) learning α from data. Finally, note that the layered architecture in Figure 2 can be viewed as a deep feedforward neural network [11] with structure designed from the topic model in Figure 1. This opens up a new direction of combining the strengths of generative models and neural networks to develop new deep learning models that are scalable, interpretable, and achieve high prediction performance for text understanding and information retrieval [13].

References

[1] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. In Proc. UAI, pages 27–34, 2009.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[3] C. M. Bishop and J. Lasserre. Generative or discriminative? Getting the best of both worlds. Bayesian Statistics, 8:3–24, 2007.
[4] D. M. Blei and J. D. McAuliffe. Supervised topic models. In Proc. NIPS, pages 121–128, 2007.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan.
Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[6] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. ACL, volume 7, pages 440–447, 2007.
[7] G. Bouchard and B. Triggs. The tradeoff between generative and discriminative classifiers. In Proc. COMPSTAT, pages 721–728, 2004.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, Jul. 2011.
[9] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. of the National Academy of Sciences, pages 5228–5235, 2004.
[10] J. R. Hershey, J. L. Roux, and F. Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv:1409.2574, 2014.
[11] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97, 2012.
[12] A. Holub and P. Perona. A discriminative framework for modelling object classes. In Proc. IEEE CVPR, volume 1, pages 664–671, 2005.
[13] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333–2338, 2013.
[14] S. Kapadia. Discriminative Training of Hidden Markov Models. PhD thesis, University of Cambridge, 1998.
[15] S. Lacoste-Julien, F. Sha, and M. I. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In Proc. NIPS, pages 897–904, 2008.
[16] J. J. McAuley and J. Leskovec. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In Proc. WWW, pages 897–908, 2013.
[17] A. K. McCallum. MALLET: A Machine Learning for Language Toolkit. http://mallet.cs.umass.edu, 2002.
[18] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[19] D. Sontag and D. Roy. Complexity of inference in latent Dirichlet allocation. In Proc. NIPS, pages 1008–1016, 2011.
[20] V. Stoyanov, A. Ropson, and J. Eisner. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In Proc. AISTATS, pages 725–733, 2011.
[21] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. SIAM Journal on Optimization, 2008.
[22] H. M. Wallach, D. M. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In Proc. NIPS, pages 1973–1981, 2009.
[23] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proc. ICML, pages 1105–1112, 2009.
[24] Y. Wang and J. Zhu. Spectral methods for supervised topic models. In Proc. NIPS, pages 1511–1519, 2014.
[25] O. Yakhnenko, A. Silvescu, and V. Honavar. Discriminatively trained Markov model for sequence classification. In Proc. IEEE ICDM, 2005.
[26] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum margin supervised topic models. JMLR, 13(1):2237–2278, 2012.
[27] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. JMLR, 15(1):1073–1110, 2014.
Robust Spectral Inference for Joint Stochastic Matrix Factorization Moontae Lee, David Bindel Dept. of Computer Science Cornell University Ithaca, NY 14850 {moontae,bindel}@cs.cornell.edu David Mimno Dept. of Information Science Cornell University Ithaca, NY 14850 mimno@cornell.edu Abstract Spectral inference provides fast algorithms and provable optimality for latent topic analysis. But for real data these algorithms require additional ad-hoc heuristics, and even then often produce unusable results. We explain this poor performance by casting the problem of topic inference in the framework of Joint Stochastic Matrix Factorization (JSMF) and showing that previous methods violate the theoretical conditions necessary for a good solution to exist. We then propose a novel rectification method that learns high quality topics and their interactions even on small, noisy data. This method achieves results comparable to probabilistic techniques in several domains while maintaining scalability and provable optimality. 1 Introduction Summarizing large data sets using pairwise co-occurrence frequencies is a powerful tool for data mining. Objects can often be better described by their relationships than their inherent characteristics. Communities can be discovered from friendships [1], song genres can be identified from co-occurrence in playlists [2], and neural word embeddings are factorizations of pairwise cooccurrence information [3, 4]. Recent Anchor Word algorithms [5, 6] perform spectral inference on co-occurrence statistics for inferring topic models [7, 8]. Co-occurrence statistics can be calculated using a single parallel pass through a training corpus. While these algorithms are fast, deterministic, and provably guaranteed, they are sensitive to observation noise and small samples, often producing effectively useless results on real documents that present no problems for probabilistic algorithms. 
Figure 1: 2D visualizations (with convex-hull areas 0.000313, 0.002602, and 0.000660 for the three panels) show the low-quality convex hull found by Anchor Words [6] (left) and a better convex hull (middle) found by discovering anchor words on a rectified space (right).

We cast this general problem of learning overlapping latent clusters as Joint-Stochastic Matrix Factorization (JSMF), a subset of non-negative matrix factorization that contains topic modeling as a special case. We explore the conditions necessary for inference from co-occurrence statistics and show that the Anchor Words algorithms necessarily violate such conditions. Then we propose a rectified algorithm that matches the performance of probabilistic inference, even on small and noisy datasets, without losing efficiency and provable guarantees. Validating on both real and synthetic data, we demonstrate that our rectification not only produces better clusters, but also, unlike previous work, learns meaningful cluster interactions.

Let the matrix C represent the co-occurrence of pairs drawn from N objects: C_ij is the joint probability p(X1 = i, X2 = j) for a pair of objects i and j. Our goal is to discover K latent clusters by approximately decomposing C ≈ B A Bᵀ. B is the object-cluster matrix, in which each column corresponds to a cluster and B_ik = p(X = i | Z = k) is the probability of drawing an object i conditioned on the object belonging to cluster k; A is the cluster-cluster matrix, in which A_kl = p(Z1 = k, Z2 = l) represents the joint probability of pairs of clusters. We call the matrices C and A joint-stochastic (i.e., C ∈ JS_N, A ∈ JS_K) due to their correspondence to joint distributions; B is column-stochastic. Example applications are shown in Table 1.
Table 1: JSMF applications, with anchor-word equivalents.

Domain      | Object | Cluster     | Basis
Document    | Word   | Topic       | Anchor Word
Image       | Pixel  | Segment     | Pure Pixel
Network     | User   | Community   | Representative
Legislature | Member | Party/Group | Partisan
Playlist    | Song   | Genre       | Signature Song

Anchor Word algorithms [5, 6] solve JSMF problems using a separability assumption: each topic contains at least one "anchor" word that has non-negligible probability exclusively in that topic. The algorithm uses the co-occurrence patterns of the anchor words as a summary basis for the co-occurrence patterns of all other words. The initial algorithm [5] is theoretically sound but unable to produce a column-stochastic word-topic matrix B due to unstable matrix inversions. A subsequent algorithm [6] fixes negative entries in B, but still produces large negative entries in the estimated topic-topic matrix A. As shown in Figure 3, the proposed algorithm infers valid topic-topic interactions.

2 Requirements for Factorization

In this section we review the probabilistic and statistical structure of JSMF and then define the geometric structure of co-occurrence matrices required for successful factorization. C ∈ R^{N×N} is a joint-stochastic matrix constructed from M training examples, each of which contains some subset of the N objects. We wish to find K ≪ N latent clusters by factorizing C into a column-stochastic matrix B ∈ R^{N×K} and a joint-stochastic matrix A ∈ R^{K×K}, satisfying C ≈ B A Bᵀ.

Figure 2: The JSMF event space (variables A, α, Z1, Z2, X1, X2, B_k; n_m(n_m − 1) pairs per example, 1 ≤ m ≤ M, 1 ≤ k ≤ K) differs from LDA's. JSMF deals only with pairwise co-occurrence events and does not generate observations/documents.

Probabilistic structure. Figure 2 shows the event space of our model. The distribution A over pairs of clusters is generated first from a stochastic process with a hyperparameter α.
If the m-th training example contains a total of n_m objects, our model views the example as consisting of all possible n_m(n_m − 1) pairs of objects.¹ For each of these pairs, cluster assignments are sampled from the selected distribution ((z1, z2) ∼ A). Then an actual object pair is drawn with respect to the corresponding cluster assignments (x1 ∼ B_{z1}, x2 ∼ B_{z2}). Note that this process does not explain how each training example is generated from a model, but shows how our model understands the objects in the training examples. Following [5, 6], our model views B as a set of parameters rather than random variables.² The primary learning task is to estimate B; we then estimate A to recover the hyperparameter α. Due to the conditional independence X1 ⊥ X2 | (Z1 or Z2), the factorization C ≈ B A Bᵀ is equivalent to

p(X1, X2 | A; B) = Σ_{z1} Σ_{z2} p(X1 | Z1; B) p(Z1, Z2 | A) p(X2 | Z2; B).

Under the separability assumption, each cluster k has a basis object s_k such that p(X = s_k | Z = k) > 0 and p(X = s_k | Z ≠ k) = 0. In matrix terms, we assume the submatrix of B comprised of the rows with indices S = {s_1, …, s_K} is diagonal. As these rows form a non-negative basis for the row space of B, the assumption implies rank₊(B) = K = rank(B).³ Providing identifiability to the factorization, this assumption becomes crucial for inference of both B and A. Note that the JSMF factorization is unique up to column permutation, meaning that no specific ordering exists among the discovered clusters, equivalent to probabilistic topic models (see the Appendix).

¹ Due to the bag-of-words assumption, every object can pair with any other object in that example, except itself. One implication of our work is a better understanding of self-co-occurrences, the diagonal entries of the co-occurrence matrix.
² In LDA, each column of B is generated from a known distribution B_k ∼ Dir(β).

Statistical structure.
Let f(α) be a (known) distribution of distributions from which a cluster distribution is sampled for each training example. Saying Wm ∼f(α), we have M i.i.d samples {W1, . . . , WM} which are not directly observable. Defining the posterior cluster-cluster matrix A∗ M = 1 M PM m=1 WmW T m and the expectation A∗= E[WmW T m], Lemma 2.2 in [5] showed that4 A∗ M −→A∗ as M −→∞. (1) Denote the posterior co-occurrence for the m-th training example by C∗ m and all examples by C∗. Then C∗ m = BWmW T mBT , and C∗= 1 M PM m=1 C∗ m. Thus C∗= B 1 M M X m=1 WmW T m ! BT = BA∗ MBT . (2) Denote the noisy observation for the m-th training example by Cm, and all examples by C. Let W = [W1|...|WM] be a matrix of topics. We will construct Cm so that E[C|W] is an unbiased estimator of C∗. Thus as M →∞ C −→E[C] = C∗= BA∗ MBT −→BA∗BT . (3) Geometric structure. Though the separability assumption allows us to identify B even from the noisy observation C, we need to throughly investigate the structure of cluster interactions. This is because it will eventually be related to how much useful information the co-occurrence between corresponding anchor bases contains, enabling us to best use our training data. Say DNN n is the set of n × n doubly non-negative matrices: entrywise non-negative and positive semidefinite (PSD). Claim A∗ M, A∗∈DNN K and C∗∈DNN N Proof Take any vector y ∈RK. As A∗ M is defined as a sum of outer-products, yT A∗ My = 1 M M X m=1 yT WmW T my = 1 M X (W T my)T (W T my) = X (non-negative) ≥0. (4) Thus A∗ M ∈PSDK. In addition, (A∗ M)kl = p(Z1 = k, Z2 = l) ≥0 for all k, l. Proving A∗∈DNN K is analogous by the linearity of expectation. Relying on double non-negativity of A∗ M, Equation (3) implies not only the low-rank structure of C∗, but also double non-negativity of C∗by a similar proof (see the Appendix). The Anchor Word algorithms in [5, 6] consider neither double non-negativity of cluster interactions nor its implication on co-occurrence statistics. 
Indeed, the empirical co-occurrence matrices collected from limited data are generally indefinite and full-rank, whereas the posterior co-occurrences must be positive semidefinite and low-rank. Our new approach will efficiently enforce double non-negativity and low-rankness of the co-occurrence matrix C based on the geometric properties of its posterior behavior. We will later clarify how this process substantially improves the quality of the clusters and their interactions by eliminating noise and restoring missing information.

3 Rectified Anchor Words Algorithm

In this section, we describe how to estimate the co-occurrence matrix C from the training data, and how to rectify C so that it is low-rank and doubly non-negative. We then decompose the rectified C′ in a way that preserves the doubly non-negative structure in the cluster-interaction matrix.

³ rank₊(B) means the non-negative rank of the matrix B, whereas rank(B) means the usual rank.
⁴ This convergence is not trivial, whereas (1/M) Σ_{m=1}^M W_m → E[W_m] as M → ∞ by the Central Limit Theorem.

Generating co-occurrence C. Let H_m be the vector of object counts for the m-th training example, and let p_m = B W_m, where W_m is the document's latent topic distribution. Then H_m is assumed to be a sample from a multinomial distribution H_m ∼ Multi(n_m, p_m), where n_m = Σ_{i=1}^N H_m^{(i)}, and recall E[H_m] = n_m p_m = n_m B W_m and Cov(H_m) = n_m (diag(p_m) − p_m p_mᵀ). As in [6], we generate the co-occurrence for the m-th example by

C_m = (H_m H_mᵀ − diag(H_m)) / (n_m (n_m − 1)).    (5)

The diagonal penalty in Eq. (5) cancels the diagonal matrix term in the variance-covariance matrix, making the estimator unbiased. Putting d_m = n_m(n_m − 1), that is,

E[C_m | W_m] = (1/d_m) E[H_m H_mᵀ] − (1/d_m) diag(E[H_m]) = (1/d_m) ( E[H_m] E[H_m]ᵀ + Cov(H_m) − diag(E[H_m]) ) = B (W_m W_mᵀ) Bᵀ ≡ C*_m.

Thus E[C | W] = C* by the linearity of expectation.

Rectifying co-occurrence C.
While C is an unbiased estimator for C* in our model, in reality the two matrices often differ, due to a mismatch between our model assumptions and the data⁵ or due to error in estimation from limited data. The computed C is generally full-rank with many negative eigenvalues, causing a large approximation error. As the posterior co-occurrence C* must be low-rank, doubly non-negative, and joint-stochastic, we propose two rectification methods: Diagonal Completion (DC) and Alternating Projection (AP). DC modifies only diagonal entries so that C becomes low-rank, non-negative, and joint-stochastic, while AP modifies every entry and enforces the same properties as well as positive semi-definiteness. As our empirical results strongly favor alternating projection, we defer the details of diagonal completion to the Appendix. Based on the desired properties of the posterior co-occurrence C*, we seek to project our estimator C onto the set of joint-stochastic, doubly non-negative, low-rank matrices. Alternating projection methods like Dykstra's algorithm [9] allow us to project onto an intersection of finitely many convex sets by projecting onto each individual set in turn. In our setting, we consider the intersection of three sets of symmetric N × N matrices: the elementwise non-negative matrices NN_N, the normalized matrices NOR_N whose entries sum to 1, and the positive semi-definite matrices of rank K, PSD_N^K. We project onto these three sets as follows:

Π_{PSD_N^K}(C) = U Λ⁺_K Uᵀ,    Π_{NOR_N}(C) = C + ((1 − Σ_{i,j} C_ij) / N²) 11ᵀ,    Π_{NN_N}(C) = max{C, 0},

where C = U Λ Uᵀ is an eigendecomposition and Λ⁺_K is the matrix Λ modified so that all negative eigenvalues and all but the K largest positive eigenvalues are set to zero. Truncated eigendecompositions can be computed efficiently, and the other projections are likewise efficient. While NN_N and NOR_N are convex, PSD_N^K is not.
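The three projections above can be sketched in a few lines. This is a minimal NumPy sketch using plain cyclic projections (without Dykstra's correction terms); the toy matrices B and w are illustrative, not from the paper's data:

```python
import numpy as np

def proj_psd_rank_k(C, K):
    """Project onto PSD matrices of rank <= K: keep the K largest
    positive eigenvalues, zero out the rest."""
    vals, vecs = np.linalg.eigh((C + C.T) / 2.0)
    vals = np.clip(vals, 0.0, None)
    vals[np.argsort(vals)[:-K]] = 0.0        # drop all but the K largest
    return (vecs * vals) @ vecs.T

def proj_normalized(C):
    """Shift all entries uniformly so the matrix sums to one (NOR_N)."""
    N = C.shape[0]
    return C + (1.0 - C.sum()) / (N * N)

def proj_nonnegative(C):
    """Clip negative entries to zero (NN_N)."""
    return np.maximum(C, 0.0)

def rectify(C, K, iters=150):
    """Cycle through the three projections, as in the AP rectifier."""
    for _ in range(iters):
        C = proj_nonnegative(proj_normalized(proj_psd_rank_k(C, K)))
    return C

# A rank-1 joint-stochastic PSD matrix is a fixed point of the rectifier.
B = np.array([[0.5, 0.1],
              [0.3, 0.2],
              [0.1, 0.3],
              [0.1, 0.4]])                   # column-stochastic toy B
w = np.array([0.6, 0.4])
C_star = np.outer(B @ w, B @ w)              # entries sum to (1^T B w)^2 = 1
C_rect = rectify(C_star, K=2)
```

In the full algorithm, C would be the pooled empirical co-occurrence estimate rather than a synthetic fixed point; the fixed-point check merely illustrates that matrices already in the intersection are left (numerically) unchanged.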
However, [10] show that alternating projection with a non-convex set still works under certain conditions, guaranteeing local convergence. Thus, iterating the three projections in turn until convergence rectifies C into the desired space. We show how to satisfy these conditions, and the convergence behavior, in Section 5.

Selecting basis S. The first step of the factorization is to select the subset S of objects that satisfy the separability assumption. We want the K best rows of the row-normalized co-occurrence matrix C such that all other rows lie nearly in the convex hull of the selected rows. [6] use the Gram-Schmidt process to select anchors, which computes a pivoted QR decomposition, but did not exploit the sparsity of C. To scale beyond small vocabularies, they use random projections that approximately preserve ℓ2 distances between the rows of C. For all experiments we use a new pivoted QR algorithm (see the Appendix) that exploits sparsity instead of using random projections, and thus preserves deterministic inference.⁶

Recovering object-cluster B. After finding the set of basis objects S, we can infer each entry of B by Bayes' rule as in [6]. Let {p(Z1 = k | X1 = i)}_{k=1}^K be the coefficients that reconstruct the i-th row of C in terms of the basis rows corresponding to S.

⁵ There is no reason to expect real data to be generated from topics, much less from exactly K latent topics.
⁶ To effectively use random projections, it is necessary either to find proper dimensions based on multiple trials or to perform low-dimensional random projection multiple times [25] and merge the resulting anchors.

Since B_ik = p(X1 = i | Z1 = k),
Note that this step can be done efficiently in parallel over objects.

Figure 3: The algorithm of [6] (first panel) produces negative cluster co-occurrence probabilities. A probabilistic reconstruction alone (this paper & [5], second panel) removes negative entries but has no off-diagonals and does not sum to one. Inference after rectification (this paper, third panel) produces a valid joint-stochastic matrix.

Recovering the cluster–cluster matrix A. [6] recovered A by minimizing $\|C - BAB^\top\|_F$, but the inferred A generally has many negative entries, failing to model the probabilistic interaction between topics. While we could further project A onto the joint-stochastic matrices, this produces a large approximation error. We consider an alternate recovery method that again leverages the separability assumption. Let $C_{SS}$ be the submatrix whose rows and columns correspond to the selected objects S, and let D be the diagonal submatrix $B_{S*}$ of the rows of B corresponding to S. Then

$$C_{SS} = D A D^\top = D A D \implies A = D^{-1} C_{SS} D^{-1}. \qquad (6)$$

This approach efficiently recovers a cluster–cluster matrix A based mostly on the co-occurrence information among the corresponding anchor basis objects, and produces no negative entries, due to the stability of diagonal matrix inversion. Note that the principal submatrices of a PSD matrix are also PSD; hence, if $C \in \mathrm{PSD}_N$ then $C_{SS}, A \in \mathrm{PSD}_K$.
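Equation (6) is a one-liner in practice. The sketch below is our own construction (the synthetic B, A, and anchor set in the usage check are illustrative); it recovers A by rescaling the anchor submatrix of C:

```python
import numpy as np

def recover_A(C, B, S):
    """Recover the cluster-cluster matrix via A = D^{-1} C_SS D^{-1}.
    Assumes anchor row S[k] of B is nonzero only in column k
    (the separability assumption), so D = B_S is diagonal."""
    C_SS = C[np.ix_(S, S)]
    d = B[S, np.arange(len(S))]       # diagonal entries of D
    return C_SS / np.outer(d, d)      # elementwise D^{-1} ... D^{-1}
```

On exact data C = B A Bᵀ with true anchors, the recovery is exact, since C_SS = D A D:

```python
B = np.array([[0.5, 0.0],
              [0.0, 0.4],
              [0.3, 0.3],
              [0.2, 0.3]])
A = np.array([[0.35, 0.15],
              [0.15, 0.35]])
C = B @ A @ B.T
A_hat = recover_A(C, B, [0, 1])
```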
Thus, not only is the recovered A an unbiased estimator for A∗_M, but after rectification it is also doubly non-negative, as A∗_M ∈ DNN_K.7

4 Experimental Results

Our Rectified Anchor Words algorithm with alternating projection fixes many problems in the baseline Anchor Words algorithm [6] while matching the performance of Gibbs sampling [11] and retaining spectral inference's determinism and independence from corpus size. We evaluate direct measurements of matrix quality as well as indicators of topic utility.

We use two text datasets: NIPS full papers and New York Times news articles.8 We eliminate a minimal list of 347 English stop words, prune rare words based on tf-idf scores, and remove documents with fewer than five tokens after vocabulary curation. We also prepare two non-textual item-selection datasets: users' movie reviews from the Movielens 10M dataset,9 and music playlists from the complete Yes.com dataset.10 We perform similar vocabulary curation and document tailoring, except for frequent stop-object elimination. Playlists often contain the same song multiple times, but users are unlikely to review the same movie more than once, so we augment the movie dataset so that each review contains 2 × (stars) copies of each movie, based on the half-scaled rating information ranging from 0.5 to 5 stars. Statistics of our datasets are shown in Table 2.

Table 2: Statistics of four datasets.

Dataset   M        N    Avg. Len
NIPS      1,348    5k   380.5
NYTimes   269,325  15k  204.9
Movies    63,041   10k  142.8
Songs     14,653   10k  119.2

We run DC 30 times for each experiment, randomly permuting the order of objects and using the median results, to minimize the effect of different orderings. We run 150 iterations of AP, alternating PSDN_K, NOR_N, and NN_N in turn. For probabilistic Gibbs sampling, we use the Mallet implementation with its standard options, running 1,000 iterations.
All metrics are evaluated against the original C, not against the rectified C′, although we use the B and A inferred from the rectified C′.

7We later realized that essentially the same approach was previously tried in [5], but it was not able to generate a valid topic-topic matrix, as shown in the middle panel of Figure 3.
8https://archive.ics.uci.edu/ml/datasets/Bag+of+Words
9http://grouplens.org/datasets/movielens
10http://www.cs.cornell.edu/~shuochen/lme

Qualitative results. Although [6] report results comparable to probabilistic algorithms for LDA, the algorithm fails under many circumstances. It prefers rare and unusual anchor words that form a poor basis, so topic clusters consist of the same high-frequency terms repeated, as shown in the upper third of Table 3. In contrast, our algorithm with AP rectification successfully learns themes similar to those of the probabilistic algorithm. One can also verify that the cluster interactions given in the third panel of Figure 3 explain how the five topics correlate with each other.

Table 3: Each line is a topic from NIPS (K = 5). Previous work simply repeats the most frequent words in the corpus five times. Arora et al.
2013 (Baseline)
neuron layer hidden recognition signal cell noise
neuron layer hidden cell signal representation noise
neuron layer cell hidden signal noise dynamic
neuron layer cell hidden control signal noise
neuron layer hidden cell signal recognition noise

This paper (AP)
neuron circuit cell synaptic signal layer activity
control action dynamic optimal policy controller reinforcement
recognition layer hidden word speech image net
cell field visual direction image motion object orientation
gaussian noise hidden approximation matrix bound examples

Probabilistic LDA (Gibbs)
neuron cell visual signal response field activity
control action policy optimal reinforcement dynamic robot
recognition image object feature word speech features
hidden net layer dynamic neuron recurrent noise
gaussian approximation matrix bound component variables

Similar to [12], we visualize the five anchor words in the co-occurrence space after 2D PCA of C. Each panel in Figure 1 shows a 2D embedding of the NIPS vocabulary as blue dots, with the five selected anchor words in red. The first plot shows standard anchor words in the original co-occurrence space. The second plot shows anchor words selected from the rectified space, overlaid on the original co-occurrence space. The third plot shows the same anchor words as the second plot, overlaid on the AP-rectified space. The rectified anchor words provide better coverage in both spaces, explaining why we are able to achieve reasonable topics even with K = 5.

Rectification also produces better clusters in the non-textual movie dataset. Each cluster is notably more genre- and year-coherent than the clusters from the original algorithm. When K = 15, for example, we find a cluster of Walt Disney 2D animations mostly from the 1990s and a cluster of fantasy movies represented by the Lord of the Rings films, similar to clusters found by probabilistic Gibbs sampling. The Baseline algorithm [6] repeats Pulp Fiction and The Silence of the Lambs 15 times.
Quantitative results. We measure the intrinsic quality of inference and summarization with respect to the JSMF objectives, as well as the extrinsic quality of the resulting topics. Lines correspond to four methods: ◦ Baseline for the algorithm of previous work [6] without any rectification, △ DC for Diagonal Completion, □ AP for Alternating Projection, and ⋄ Gibbs for Gibbs sampling.

Anchor objects should form a good basis for the remaining objects. We measure Recovery error, $\frac{1}{N}\sum_{i=1}^{N} \big\| C_i - \sum_{k=1}^{K} p(Z_1{=}k \mid X_1{=}i)\, C_{S_k} \big\|$, with respect to the original C matrix, not the rectified matrix. AP reduces error in almost all cases and is more effective than DC. Although we expect error to decrease as we increase the number of clusters K, reducing recovery error for a fixed K by choosing better anchors is extremely difficult: no other subset selection algorithm [13] decreased error by more than 0.001.

A good matrix factorization should have small elementwise Approximation error, $\|C - BAB^\top\|_F$. DC and AP preserve more of the information in the original matrix C than the Baseline method, especially when K is small.11 We expect nontrivial interactions between clusters, even when we do not explicitly model them as in [14]. Greater diagonal Dominancy, $\frac{1}{K}\sum_{k=1}^{K} p(Z_2{=}k \mid Z_1{=}k)$, indicates lower correlation between clusters.12 AP and Gibbs results are similar. We do not report held-out probability because we find that relative results are determined by user-defined smoothing parameters [12, 24].

Specificity, $\frac{1}{K}\sum_{k=1}^{K} \mathrm{KL}\big(p(X \mid Z{=}k) \,\|\, p(X)\big)$, measures how much each cluster is distinct from the corpus distribution. When anchors produce a poor basis, the conditional distribution of clusters given objects becomes uniform, making p(X|Z) similar to p(X). Inter-topic Dissimilarity counts the average number of objects in each cluster that do not occur in any other cluster's top 20 objects. Our experiments validate that AP and Gibbs yield comparably specific and distinct topics, while Baseline and DC simply repeat the corpus distribution, as in Table 3.

Coherence, $\frac{1}{K}\sum_{k=1}^{K} \sum_{x_1 \ne x_2 \in \mathrm{Top}_k} \log \frac{D_2(x_1, x_2) + \epsilon}{D_1(x_2)}$, penalizes topics that assign high probability (within the top 20) to words that do not occur together frequently. AP produces results close to Gibbs sampling, and far from the Baseline and DC. While this metric correlates with human evaluation of clusters [15], "worse" coherence can actually be better, because the metric does not penalize repetition [12]. In semi-synthetic experiments [6], AP matches Gibbs sampling and outperforms the Baseline, but the discrepancies in topic quality metrics are smaller than in the real experiments (see Appendix). We speculate that semi-synthetic data is more "well-behaved" than real data, explaining why these issues were not recognized previously.

11In the NYTimes corpus, 10^−2 is a large error: each element is around 10^−9 due to the number of normalized entries.
12Dominancy in the Songs corpus lacks any Baseline results for K > 10 because dominancy is undefined if an algorithm picks, as a basis object, a song that occurs at most once in each playlist. In this case, the original construction of C_SS, and hence of A, has a zero diagonal element, making dominancy NaN.

Figure 4: Experimental results on the real datasets (one row per dataset: NIPS, NYTimes, Movies, Songs; one column per metric: Recovery, Approximation, Dominancy, Specificity, Dissimilarity, Coherence). The x-axis indicates log K, where K varies in steps of 5 up to 25 topics and in steps of 25 up to 100 or 150 topics. Whereas the Baseline algorithm largely fails with small K and does not infer quality B and A even with large K, Alternating Projection (AP) not only finds better basis vectors (Recovery) but also behaves stably and comparably to probabilistic inference (Gibbs) in every metric.

5 Analysis of the Algorithm

Why does AP work? Before rectification, the diagonal of the empirical C matrix may be far from correct. Bursty objects yield diagonal entries that are too large; extremely rare objects that occur at most once per document yield zero diagonals. Rare objects are problematic in general: the corresponding rows of C are sparse and noisy, and these rows are likely to be selected by the pivoted QR. Because rare objects are likely to be anchors, the matrix C_SS is likely to be highly diagonally dominant, providing an uninformative picture of topic correlations.
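The coherence metric can be sketched directly from its definition. This is a minimal sketch: the dictionary-based interface, the handling of ordered pairs, and the default ε are our own assumptions, not the paper's implementation:

```python
import math

def coherence(top_objects, D1, D2, eps=0.01):
    """Co-occurrence coherence of one topic: sum over ordered pairs
    (x1, x2), x1 != x2, of log((D2(x1, x2) + eps) / D1(x2)).
    D1[x]  = number of documents containing x;
    D2[(x1, x2)] = number of documents containing both (stored symmetrically)."""
    score = 0.0
    for x1 in top_objects:
        for x2 in top_objects:
            if x1 == x2:
                continue
            score += math.log((D2.get((x1, x2), 0) + eps) / D1[x2])
    return score
```

Pairs of top objects that frequently co-occur contribute terms near zero, while pairs that never co-occur contribute large negative terms, so topics built from unrelated words score much lower.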
These problems are exacerbated when K is small relative to the effective rank of C, so that an early choice of a poor anchor precludes a better choice later; and when the number of documents M is small, in which case the empirical C is relatively sparse and strongly affected by noise. To mitigate this issue, [24] run an exhaustive grid search over document-frequency cutoffs to obtain informative anchors. As model performance is inconsistent across cutoffs, and the search requires cross-validation in each case, it is nearly impossible to find good heuristics for every dataset and number of topics.

Fortunately, a low-rank PSD matrix cannot have too many diagonally dominant rows, since this would violate the low-rank property. Nor can it have diagonal entries that are small relative to the off-diagonals, since this would violate positive semi-definiteness. Because the anchor-word assumption implies that non-negative rank and ordinary rank coincide, the AP algorithm ideally does not remove the information we wish to learn; rather, 1) the low-rank projection in AP suppresses the influence of the small number of noisy rows associated with rare words, which may not be well correlated with the others, and 2) the PSD projection in AP recovers missing information in the diagonal. (As illustrated in the Dominancy panel of the Songs corpus in Figure 4, AP yields valid dominancies even for K > 10, in contrast to the Baseline algorithm.)

Why does AP converge? AP enjoys local linear convergence [10] if 1) the initial C is near the convergence point C′, 2) PSDN_K is super-regular at C′, and 3) strong regularity holds at C′. For the first condition, recall that we obtain C′ by pushing C toward C∗, the ideal convergence point inside the intersection. Since C → C∗ as shown in (5), C is close to C′ as desired. The prox-regular sets13 are a subset of the super-regular sets, so prox-regularity of PSDN_K at C′ suffices for the second condition.
For a permutation-invariant set M ⊂ ℝ^N, the spectral set of symmetric matrices is defined as $\lambda^{-1}(M) = \{X \in S^N : (\lambda_1(X), \dots, \lambda_N(X)) \in M\}$, and λ⁻¹(M) is prox-regular if and only if M is prox-regular [16, Th. 2.4]. Let M be $\{x \in \mathbb{R}_+^N : |\mathrm{supp}(x)| = K\}$. Since each element of M has exactly K positive components and all others zero, λ⁻¹(M) = PSDN_K. By the definition of M and because K < N, the projection P_M is locally unique almost everywhere, satisfying the second condition almost surely. (As the intersection of the convex set PSD_N and the smooth manifold of rank-K matrices, PSDN_K is a smooth manifold almost everywhere.) Checking the third condition a priori is challenging, but we expect noise in the empirical C to prevent an irregular solution, following the argument of Numerical Example 9 in [10]. We therefore expect AP to converge locally linearly, and we can verify local convergence of AP in practice: empirically, the ratio of average distances between consecutive iterations is always ≤ 0.9794 on the NYTimes dataset (see the Appendix), and the other datasets behave similarly.

Note again that our rectified C′ is the result of pushing the empirical C toward the ideal C∗. Because the approximation factors of [6] are all computed from how far C and its co-occurrence shape can be from those of C∗, all provable guarantees of [6] hold at least as well with our rectified C′.

6 Related and Future Work

JSMF is a specific structure-preserving Non-negative Matrix Factorization (NMF) performing spectral inference. [17, 18] exploit a similar separable structure for NMF problems. To tackle hyperspectral unmixing problems, [19, 20] assume pure pixels, a separability equivalent in computer vision. For more general NMF without such structure, RESCAL [21] studies a tensorial extension of a similar factorization, and SymNMF [22] infers BBᵀ rather than BABᵀ. For topic modeling, [23] performs spectral inference on the third-moment tensor, assuming topics are uncorrelated.
As the core of our algorithm is the rectification of the input co-occurrence matrix, it can be combined with several recent developments. [24] propose two regularization methods for recovering a better B. [12] nonlinearly projects the co-occurrence matrix to a low-dimensional space via t-SNE and achieves better anchors by finding exact anchors in that space. [25] performs multiple random projections to low-dimensional spaces and recovers approximate anchors efficiently by a divide-and-conquer strategy. Our work also opens several promising research directions. How exactly do anchors found in the rectified C′ form better bases than ones found in the original space C? Since the topic-topic matrix A is again doubly non-negative and joint-stochastic, can we learn super-topics in a multi-layered hierarchical model by recursively applying JSMF to the topic-topic co-occurrence A?

Acknowledgments

This research is supported by NSF grant HCC:Large-0910664. We thank Adrian Lewis for valuable discussions on AP convergence.

13A set M is prox-regular if P_M is locally unique.

References

[1] Alan Mislove, Bimal Viswanath, Krishna P. Gummadi, and Peter Druschel. You are who you know: Inferring user profiles in online social networks. In Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM'10), New York, NY, February 2010.
[2] Shuo Chen, J. Moore, D. Turnbull, and T. Joachims. Playlist prediction via metric embedding. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 714–722, 2012.
[3] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[4] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In NIPS, 2014.
[5] S. Arora, R. Ge, and A. Moitra. Learning topic models – going beyond SVD. In FOCS, 2012.
[6] Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu.
A practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
[7] T. Hofmann. Probabilistic latent semantic analysis. In UAI, pages 289–296, 1999.
[8] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, pages 993–1022, 2003. Preliminary version in NIPS 2001.
[9] James P. Boyle and Richard L. Dykstra. A method for finding projections onto the intersection of convex sets in Hilbert spaces. In Advances in Order Restricted Statistical Inference, volume 37 of Lecture Notes in Statistics, pages 28–47. Springer New York, 1986.
[10] Adrian S. Lewis, D. R. Luke, and Jérôme Malick. Local linear convergence for alternating and averaged nonconvex projections. Foundations of Computational Mathematics, 9:485–513, 2009.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228–5235, 2004.
[12] Moontae Lee and David Mimno. Low-dimensional embeddings for interpretable anchor-based topic inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1319–1328. Association for Computational Linguistics, 2014.
[13] Mary E. Broadbent, Martin Brown, Kevin Penner, I. Ipsen, and R. Rehman. Subset selection algorithms: Randomized vs. deterministic. SIAM Undergraduate Research Online, 3:50–71, 2010.
[14] D. Blei and J. Lafferty. A correlated topic model of science. Annals of Applied Statistics, pages 17–35, 2007.
[15] David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. Optimizing semantic coherence in topic models. In EMNLP, 2011.
[16] A. Daniilidis, A. S. Lewis, J. Malick, and H. Sendov. Prox-regularity of spectral functions and spectral sets. Journal of Convex Analysis, 15(3):547–560, 2008.
[17] Christian Thurau, Kristian Kersting, and Christian Bauckhage. Yes we can: Simplex volume maximization for descriptive web-scale matrix factorization. In CIKM'10, pages 1785–1788, 2010.
[18] Abhishek Kumar, Vikas Sindhwani, and Prabhanjan Kambadur. Fast conical hull algorithms for near-separable non-negative matrix factorization. CoRR, 2012.
[19] José M. P. Nascimento and José M. Bioucas Dias. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, pages 898–910, 2005.
[20] Cécile Gomez, H. Le Borgne, Pascal Allemand, Christophe Delacourt, and Patrick Ledru. N-FindR method versus independent component analysis for lithological identification in hyperspectral imagery. International Journal of Remote Sensing, 28(23):5315–5338, 2007.
[21] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 809–816. ACM, 2011.
[22] Da Kuang, Haesun Park, and Chris H. Q. Ding. Symmetric nonnegative matrix factorization for graph clustering. In SDM. SIAM / Omnipress, 2012.
[23] Anima Anandkumar, Dean P. Foster, Daniel Hsu, Sham Kakade, and Yi-Kai Liu. A spectral algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 25, pages 926–934, 2012.
[24] Thang Nguyen, Yuening Hu, and Jordan Boyd-Graber. Anchors regularized: Adding robustness and extensibility to scalable topic-modeling algorithms. In Association for Computational Linguistics, 2014.
[25] Tianyi Zhou, Jeff A. Bilmes, and Carlos Guestrin. Divide-and-conquer learning by anchoring a conical hull. In Advances in Neural Information Processing Systems 27, pages 1242–1250, 2014.
Minimax Time Series Prediction Wouter M. Koolen Centrum Wiskunde & Informatica wmkoolen@cwi.nl Alan Malek UC Berkeley malek@berkeley.edu Peter L. Bartlett UC Berkeley & QUT bartlett@cs.berkeley.edu Yasin Abbasi-Yadkori Queensland University of Technology yasin.abbasiyadkori@qut.edu.au Abstract We consider an adversarial formulation of the problem of predicting a time series with square loss. The aim is to predict an arbitrary sequence of vectors almost as well as the best smooth comparator sequence in retrospect. Our approach allows natural measures of smoothness such as the squared norm of increments. More generally, we consider a linear time series model and penalize the comparator sequence through the energy of the implied driving noise terms. We derive the minimax strategy for all problems of this type and show that it can be implemented efficiently. The optimal predictions are linear in the previous observations. We obtain an explicit expression for the regret in terms of the parameters defining the problem. For typical, simple definitions of smoothness, the computation of the optimal predictions involves only sparse matrices. In the case of norm-constrained data, where the smoothness is defined in terms of the squared norm of the comparator’s increments, we show that the regret grows as T/√λT , where T is the length of the game and λT is an increasing limit on comparator smoothness. 1 Introduction In time series prediction, tracking, and filtering problems, a learner sees a stream of (possibly noisy, vector-valued) data and needs to predict the future path. One may think of robot poses, meteorological measurements, stock prices, etc. Popular stochastic models for such tasks include the auto-regressive moving average (ARMA) model in time series analysis, Brownian motion models in finance, and state space models in signal processing. 
In this paper, we study the time series prediction problem in the regret framework; instead of making assumptions on the data-generating process, we ask: can we predict the data sequence online almost as well as the best offline prediction method in some comparison class (here, offline means the comparator only needs to model the data sequence after seeing all of it)? Our main contribution is computing the exact minimax strategy for a range of time series prediction problems. As a concrete motivating example, let us pose the simplest nontrivial such minimax problem:

$$\min_{a_1}\max_{x_1\in B}\cdots\min_{a_T}\max_{x_T\in B}\;\underbrace{\sum_{t=1}^{T}\|a_t - x_t\|^2}_{\text{Loss of Learner}}\;-\;\min_{\hat a_1,\dots,\hat a_T}\Bigg\{\underbrace{\sum_{t=1}^{T}\|\hat a_t - x_t\|^2}_{\text{Loss of Comparator}}\;+\;\lambda_T\underbrace{\sum_{t=1}^{T+1}\|\hat a_t - \hat a_{t-1}\|^2}_{\text{Comparator Complexity}}\Bigg\}. \qquad (1)$$

This notion of regret is standard in online learning, going back at least to [1] in 2001, which views it as the natural generalization of L2 regularization to deal with non-stationary comparators. We offer two motivations for this regularization. First, one can interpret the complexity term as the magnitude of the noise required to generate the comparator using a multivariate Gaussian random walk and, generalizing slightly, as the energy of the innovations required to model the comparator using a single, fixed linear time series model (e.g. specific ARMA coefficients). Second, we can view the comparator term in Equation (1) as akin to the Lagrangian of a constrained optimization problem. Rather than competing with the comparator sequence â₁, . . . , â_T that minimizes the cumulative loss subject to a hard constraint on the complexity term, the learner must compete with the comparator sequence that best trades off the cumulative loss and the smoothness. The Lagrange multiplier λ_T controls the trade-off. Notice that it is natural to allow λ_T to grow with T, since that penalizes the comparator's change per round more than its loss per round.
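To make the objective in (1) concrete, the bracketed quantities can be evaluated directly for fixed sequences. The sketch below is our own (it assumes the boundary convention â₀ = â_{T+1} = 0 for the complexity term, which matches a sum with T + 1 increment terms; names are illustrative):

```python
import numpy as np

def regret_objective(a, x, a_hat, lam):
    """Evaluate (1) for fixed sequences: learner loss minus the
    complexity-penalized comparator loss. All arrays are d x T;
    the complexity term uses a_hat_0 = a_hat_{T+1} = 0."""
    d = a_hat.shape[0]
    loss_learner = np.sum((a - x) ** 2)
    loss_comparator = np.sum((a_hat - x) ** 2)
    padded = np.hstack([np.zeros((d, 1)), a_hat, np.zeros((d, 1))])
    complexity = np.sum(np.diff(padded, axis=1) ** 2)   # sum of ||a_t - a_{t-1}||^2
    return loss_learner - (loss_comparator + lam * complexity)
```

For instance, predicting the data exactly while comparing against the data itself leaves only the negated smoothness penalty, so the "regret" is negative whenever the data is not perfectly smooth.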
For the particular problem (1) we obtain an efficient algorithm using amortized O(d) time per round, where d is the dimension of the data; there is no nasty dependence on T as often happens with minimax algorithms. Our general minimax analysis extends to more advanced complexity terms. For example, we may regularize instead by higher-order smoothness (magnitude of increments of increments, etc.), or more generally, we may consider a fixed linear process and regularize the comparator by the energy of its implied driving noise terms (innovations). We also deal with arbitrary sequences of rank-one quadratic constraints on the data. We show that the minimax algorithm is of a familiar nature; it is a linear filter, with a twist. Its coefficients are not time-invariant but instead arise from the intricate interplay between the regularization and the range of the data, combined with shrinkage. Fortunately, they may be computed in a pre-processing step by a simple recurrence. An unexpected detail of the analysis is the following. As we will show, the regret objective in (1) is a convex quadratic function of all data, and the sub-problem objectives that arise from the backward induction steps in the minimax analysis remain quadratic functions of the past. However, they may be either concave or convex. Changing direction of curvature is typically a source of technical difficulty: the minimax solution is different in either case. Quite remarkably, we show that one can determine a priori which rounds are convex and which are concave and apply the appropriate solution method in each. We also consider what happens when the assumptions we need to make for the minimax analysis to go through are violated. We will show that the obtained minimax algorithm is in fact highly robust. Simply applying it unlicensed anyway results in adaptive regret bounds that scale naturally with the realized data magnitude (or, more generally, its energy). 
1.1 Related Work

There is a rich history of tracking problems in the expert setting, in which the learner has a finite number of actions and must select a distribution over actions each round so as to guarantee loss almost as small as that of the best single action in hindsight. The problem of tracking the best expert forces the learner to compete with sequences of experts (usually with some fixed number of switches). The fixed-share algorithm [2] was an early solution, but there has been more recent work [3, 4, 5, 6]. Tracking experts has been applied to other areas; see e.g. [7] for an application to sequential allocation. An extension to linear combinations of experts, where the expert class is penalized by the p-norm of the sequence, was considered in [1].

Minimax algorithms for squared Euclidean loss have been studied in several contexts, such as Gaussian density estimation [8] and linear regression [9]. In [10], the authors showed that the minimax algorithm for quadratic loss is Follow the Leader (i.e. predicting the previous data mean) when the player is constrained to play in a ball around the previous data mean. Additionally, Moroshko and Crammer [11, 12] propose a weak notion of non-stationarity that allows them to apply the last-step minimax approach in a regression-like framework.

The tracking problem in the regret setting has been considered previously, e.g. in [1], where the authors studied the best linear predictor with a comparison class of all sequences with bounded smoothness $\sum_t \|a_t - a_{t-1}\|^2$, and proposed a general method for converting regret bounds in the static setting to ones in the shifting setting (where the best expert is allowed to change).

Outline. We start by presenting the formal setup in Section 2 and derive the optimal offline predictions. In Section 3 we zoom in on single-shot quadratic games and solve them in both the convex and concave cases.
With this in hand, we derive the minimax solution to the time series prediction problem by backward induction in Section 4. In Section 5 we focus on the motivating problem (1), for which we give a faster implementation and tightly sandwich the minimax regret. Section 6 concludes with discussion, conjectures, and open problems.

2 Protocol and Offline Problem

The game protocol is described in Figure 1; it is the usual online prediction game with squared Euclidean loss.

Figure 1: Protocol. For t = 1, 2, . . . , T:
• Learner predicts a_t ∈ ℝ^d
• Environment reveals x_t ∈ ℝ^d
• Learner suffers loss ‖a_t − x_t‖².

The goal of the learner is to incur small regret, that is, to predict the data almost as well as the best complexity-penalized sequence â₁ · · · â_T chosen in hindsight. Our motivating problem (1) gauged complexity by the sum of squared norms of the increments, thus encouraging smoothness. Here we generalize to complexity terms defined by a complexity matrix K ⪰ 0, and charge the comparator â₁ · · · â_T by $\sum_{s,t} K_{s,t}\, \hat a_s^\top \hat a_t$. We recover the smoothness penalty of (1) by taking K to be the T × T tridiagonal matrix

$$K = \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix}, \qquad (2)$$

but we may also regularize by e.g. the sum of squared norms (K = I) or the sum of norms of higher-order increments; more generally, we may consider a fixed linear process and take K^{1/2} to be the matrix that recovers the driving noise terms from the signal, so that our penalty is exactly the energy of the implied noise for that linear process.

We now turn to computing the identity and quality of the best competitor sequence in hindsight.

Theorem 1. For any complexity matrix K ⪰ 0, regularization scalar λ_T ≥ 0, and d × T data matrix X_T = [x₁ · · · x_T], the problem

$$L^* := \min_{\hat a_1,\dots,\hat a_T} \sum_{t=1}^{T} \|\hat a_t - x_t\|^2 + \lambda_T \sum_{s,t} K_{s,t}\, \hat a_s^\top \hat a_t$$

has linear minimizer and quadratic value given by

$$[\hat a_1 \cdots \hat a_T] = X_T (I + \lambda_T K)^{-1} \quad\text{and}\quad L^* = \operatorname{tr}\big(X_T (I - (I + \lambda_T K)^{-1}) X_T^\top\big).$$

Proof.
Writing Â = [â_1 · · · â_T], we can compactly express the offline problem as

L* = min_Â tr((Â − X_T)^⊺(Â − X_T) + λ_T K Â^⊺ Â).

The derivative of the objective with respect to Â is 2(Â − X_T) + 2λ_T Â K. Setting this to zero yields the minimizer Â = X_T (I + λ_T K)^{−1}. Back-substitution and simplification result in value tr(X_T (I − (I + λ_T K)^{−1}) X_T^⊺).

Note that for the choice of K in (2), computing the optimal Â can be performed in O(dT) time by solving the linear system Â(I + λ_T K_T) = X_T directly. This system decomposes into d independent tridiagonal systems (one per dimension), each in T variables (one per time step), which can each be solved in linear time using Gaussian elimination.

This theorem shows that the objective of our minimax problem is a quadratic function of the data. In order to solve a T round minimax problem with a quadratic regret objective, we first solve simple single round quadratic games.

3 Minimax Single-shot Squared Loss Games

One crucial tool in the minimax analysis of our tracking problem will be solving particular single-shot min-max games. In such games, the player and adversary play prediction a and data x, resulting in a payoff given by the following square loss plus a quadratic in x:

V(a, x) := ∥a − x∥² + (α − 1)∥x∥² + 2b^⊺x. (3)

The quadratic and linear terms in x have coefficients α ∈ ℝ and b ∈ ℝ^d. Note that V(a, x) is convex in a and either convex or concave in x, as decided by the sign of α. The following result, proved in Appendix B.1 and illustrated for ∥b∥ = 1 by the accompanying figure, gives the minimax analysis for both cases.

[Figure: the minimax value V* of (3) as a function of α, for ∥b∥ = 1.]

Theorem 2. Let V(a, x) be as in (3). If ∥b∥ ≤ 1, then the minimax problem

V* := min_{a ∈ ℝ^d} max_{x ∈ ℝ^d : ∥x∥ ≤ 1} V(a, x)

has value and minimizer

V* = ∥b∥²/(1 − α) and a = b/(1 − α) if α ≤ 0,
V* = ∥b∥² + α and a = b if α ≥ 0. (4)

We also want to look at the performance of this strategy when we neither impose the norm bound ∥x∥ ≤ 1 nor make the assumption ∥b∥ ≤ 1.
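The closed form in Theorem 1 is easy to check numerically. The sketch below is our illustration, not the authors' code: it builds the tridiagonal K of (2), computes the offline minimizer Â = X_T(I + λ_T K)^{−1}, and confirms that the stated value L* matches direct evaluation of the objective and is not beaten by nearby perturbations (the dimensions, λ, and random data are arbitrary choices).

```python
import numpy as np

def tridiag_K(T):
    # T x T second-difference matrix from (2): 2 on the diagonal, -1 off it
    return 2 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)

def offline_opt(X, lam):
    # Theorem 1: A_hat = X (I + lam K)^{-1}, L* = tr(X (I - (I + lam K)^{-1}) X^T)
    T = X.shape[1]
    M = np.linalg.inv(np.eye(T) + lam * tridiag_K(T))
    A_hat = X @ M
    L_star = np.trace(X @ (np.eye(T) - M) @ X.T)
    return A_hat, L_star

def objective(A, X, lam):
    # sum_t ||a_t - x_t||^2 + lam * sum_{s,t} K_{s,t} a_s^T a_t
    K = tridiag_K(X.shape[1])
    return np.sum((A - X) ** 2) + lam * np.trace(K @ A.T @ A)

rng = np.random.default_rng(0)
d, T, lam = 3, 50, 2.0
X = rng.standard_normal((d, T))
A_hat, L_star = offline_opt(X, lam)

# the closed-form value matches the objective at the minimizer ...
assert np.isclose(objective(A_hat, X, lam), L_star)
# ... and no random perturbation of A_hat does better
for _ in range(100):
    A = A_hat + 0.1 * rng.standard_normal((d, T))
    assert objective(A, X, lam) >= L_star - 1e-9
```

For the O(dT) claim, the explicit inverse above would in practice be replaced by a banded solver applied row-wise.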
By evaluating (3) we obtain an adaptive expression that scales with the actual norm ∥x∥2 of the data. Theorem 3. Let a be the strategy from (4). Then, for any data x ∈Rd and any b ∈Rd, V (a, x) = ∥b∥2 1 −α + α b 1 −α −x 2 ≤ ∥b∥2 1 −α if α ≤0, and V (a, x) = ∥b∥2 + α∥x∥2 if α ≥0. These two theorems point out that the strategy in (4) is amazingly versatile. The former theorem establishes minimax optimality under data constraint ∥x∥≤1 assuming that ∥b∥≤1. Yet the latter theorem tells us that, even without constraints and assumptions, this strategy is still an extremely useful heuristic. For its actual regret is bounded by the minimax regret we would have incurred if we would have known the scale of the data ∥x∥(and ∥b∥) in advance. The norm bound we imposed in the derivation induces the complexity measure for the data to which the strategy adapts. This robustness property will extend to the minimax strategy for time series prediction. Finally, it remains to note that we present the theorems in the canonical case. Problems with a constraint of the form ∥x −c∥≤β may be canonized by re-parameterizing by x′ = x−c β and a′ = a−c β and scaling the objective by β−2. We find Corollary 4. Fix β ≥0 and c ∈Rd. Let V ∗(α, b) denote the minimax value from (4) with parameters α, b. If ∥(α −1)c + b∥≤β then min a max x:∥x−c∥≤β V (a, x) = β2V ∗  α, (α −1)c + b β  + 2b⊺c + (α −1)∥c∥2. With this machinery in place, we continue the minimax analysis of time series prediction problems. 4 Minimax Time Series Prediction In this section, we give the minimax solution to the online prediction problem. Recall that the evaluation criterion, the regret, is defined by R := T X t=1 ∥at −xt∥2 − min ˆa1,...,ˆaT T X t=1 ∥ˆat −xt∥2 + λT tr  K ˆ A⊺ˆ A  (5) where K ⪰0 is a fixed T × T matrix measuring the complexity of the comparator sequence. Since all the derivations ahead will be for a fixed T, we drop the T subscript on the λ. 
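Theorem 3's adaptive guarantee can be checked directly. The snippet below (illustrative only; the particular α, b, and x values are made up) plays the strategy a from (4) and verifies that V(a, x) equals the stated expression for arbitrary unbounded x and b, in both the α ≤ 0 and α ≥ 0 regimes.

```python
import numpy as np

def V(a, x, alpha, b):
    # single-shot payoff (3): squared loss plus a quadratic in x
    return np.sum((a - x) ** 2) + (alpha - 1) * np.sum(x ** 2) + 2 * b @ x

rng = np.random.default_rng(1)
d = 4
for alpha in (-1.5, -0.2, 0.0, 0.3, 2.0):
    b = rng.standard_normal(d)
    x = 3 * rng.standard_normal(d)   # no norm bound on the data
    if alpha <= 0:
        a = b / (1 - alpha)          # strategy (4), concave case
        claimed = b @ b / (1 - alpha) + alpha * np.sum((b / (1 - alpha) - x) ** 2)
        assert claimed <= b @ b / (1 - alpha) + 1e-12   # Theorem 3 bound
    else:
        a = b                        # strategy (4), convex case
        claimed = b @ b + alpha * np.sum(x ** 2)
    assert np.isclose(V(a, x, alpha, b), claimed)
```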
We study the minimax problem R∗:= min a1 max x1 · · · min aT max xT R (6) under the constraint on the data that ∥Xtvt∥≤1 in each round t for some fixed sequence v1, . . . vT such that vt ∈Rt. This constraint generalizes the norm bound constraint from the motivating problem (1), which is recovered by taking vt = et. This natural generalization allows us to also consider bounded norms of increments, bounded higher order discrete derivative norms etc. 4 We compute the minimax regret and get an expression for the minimax algorithm. We show that, at any point in the game, the value is a quadratic function of the past samples and the minimax algorithm is linear: it always predicts with a weighted sum of all past samples. Most intriguingly, the value function can either be a convex or concave quadratic in the last data point, depending on the regularization. We saw in the previous section that these two cases require a different minimax solution. It is therefore an extremely fortunate fact that the particular case we find ourselves in at each round is not a function of the past data, but just a property of the problem parameters K and vt. We are going to solve the sequential minimax problem (6) one round at a time. To do so, it is convenient to define the value-to-go of the game from any state Xt = [x1 · · · xt] recursively by V (XT ) := −L∗ and V (Xt−1) := min at max xt:∥Xtvt∥≤1 ∥at −xt∥2 + V (Xt). We are interested in the minimax algorithm and minimax regret R∗= V (X0). We will show that the minimax value and strategy are a quadratic and linear function of the observations. To express the value and strategy and state the necessary condition on the problem, we will need a series of scalars dt and matrices Rt ∈Rt×t for t = 1, . . . , T, which, as we will explain below, arises naturally from the minimax analysis. The matrices, which depend on the regularization parameter λ, comparator complexity matrix K and data constraints vt, are defined recursively back-to-front. 
The base case is RT := (I + λT K)−1. Using the convenient abbreviations vt = wt  ut 1  and Rt =  At bt b⊺ t ct  we then recursively define Rt−1 and set dt by Rt−1 := At + (bt −ctut) (bt −ctut)⊺−ctutu⊺ t , dt := ct w2 t if ct ≥0, (7a) Rt−1 := At + btb⊺ t 1 −ct , dt := 0 if ct ≤0. (7b) Using this recursion for dt and Rt, we can perform the exact minimax analysis under a certain condition on the interplay between the data constraint and the regularization. We then show below that the obtained algorithm has a condition-free data-dependent regret bound. Theorem 5. Assume that K and vt are such that any data sequence XT satisfying the constraint ∥Xtvt∥≤1 for all rounds t ≤T also satisfies Xt−1 (ct −1)ut −bt  ≤1/wt for all rounds t ≤T. Then the minimax value of and strategy for problem (6) are given by V (Xt) = tr (Xt (Rt −I) X⊺ t ) + T X s=t+1 ds and at = Xt−1 ( bt 1−ct if ct ≤0, bt −ctut if ct ≥0, In particular, this shows that the minimax regret (6) is given by R∗= PT t=1 dt. Proof. By induction. The base case V (XT ) is Theorem 1. For any t < T we apply the definition of V (Xt−1) and the induction hypothesis to get V (Xt−1) = min at max xt:∥Xtvt∥≤1 ∥at −xt∥2 + tr (Xt(Rt −I)X⊺ t ) + T X s=t+1 ds = tr(Xt−1(At −I)X⊺ t−1) + T X s=t+1 dt + C where we abbreviated C := min at max xt:∥Xtvt∥≤1 ∥at −xt∥2 + (ct −1)x⊺ t xt + 2x⊺ t Xt−1bt. Without loss of generality, assume wt > 0. Now, as ∥Xtvt∥≤1 iff ∥Xt−1ut + xt∥≤1/wt, application of Corollary 4 with α = ct, b = Xt−1bt, β = 1/wt and c = −Xt−1ut followed by Theorem 2 results in optimal strategy at = ( Xt−1bt 1−ct if ct ≤0, −ctXt−1ut + Xt−1bt if ct ≥0. 5 and value C = (ct−1)∥Xt−1ut∥2−2b⊺ t X⊺ t−1Xt−1ut+ ( Xt−1 (ct −1)ut −bt  2 /(1 −ct) if ct ≤0, Xt−1 (ct −1)ut −bt  2 + ct/w2 t if ct ≥0, Expanding all squares and rearranging (cycling under the trace) completes the proof. On the one hand, from a technical perspective the condition of Theorem 5 is rather natural. 
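For the concrete case v_t = e_t (so w_t = 1 and u_t = 0), the backward recursion (7) is only a few lines of code. The sketch below is our illustration (λ and T are arbitrary choices): it starts from R_T = (I + λK)^{−1} with the tridiagonal K of (2) and peels off one row and column per step; by Theorem 5 the accumulated d_t sum to the minimax regret R*. For T = 2 and λ = 1 one can check by hand that R* = 3/8 + 25/64 = 49/64.

```python
import numpy as np

def tridiag_K(T):
    # T x T second-difference matrix from (2)
    return 2 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)

def minimax_regret(T, lam):
    # Backward recursion (7) with v_t = e_t, i.e. w_t = 1 and u_t = 0.
    R = np.linalg.inv(np.eye(T) + lam * tridiag_K(T))
    ds = []
    for t in range(T, 0, -1):
        A, b, c = R[:-1, :-1], R[:-1, -1], R[-1, -1]
        if c >= 0:                       # case (7a): d_t = c_t / w_t^2
            ds.append(c)
            R = A + np.outer(b, b)       # (b - c u)(b - c u)^T - c u u^T with u = 0
        else:                            # case (7b): d_t = 0
            ds.append(0.0)
            R = A + np.outer(b, b) / (1 - c)
    return sum(ds)

# hand-computable sanity check: T = 2, lam = 1 gives 3/8 + 25/64 = 49/64
assert np.isclose(minimax_regret(2, 1.0), 49 / 64)
```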
It guarantees that the prediction of the algorithm will fall within the constraint imposed on the data. (If it did not, we could benefit by clipping the prediction: this would be guaranteed to reduce the loss, and it would wreck the backwards induction.) Similar clipping conditions arise in the minimax analyses for linear regression [9] and square loss prediction with Mahalanobis losses [13]. In practice we typically do not have a hard bound on the data. Still, by running the above minimax algorithm obtained for data complexity bounds ∥X_t v_t∥ ≤ 1, we get an adaptive regret bound that scales with the actual data complexity ∥X_t v_t∥², as can be derived by replacing the application of Theorem 2 in the proof of Theorem 5 by an invocation of Theorem 3.

Theorem 6. Let K ⪰ 0 and v_t be arbitrary. The minimax algorithm obtained in Theorem 5 keeps the regret (5) bounded by R ≤ Σ_{t=1}^T d_t ∥X_t v_t∥² for any data sequence X_T.

4.1 Computation, sparsity

In the important special case (the typical application) where the regularization K and data constraint v_t encode some order of smoothness, we find that K is banded and v_t only has a few non-zero tail entries. Hence R_T^{−1} = I + λK is sparse. We now argue that the recursive updates (7) preserve sparsity of the inverse R_t^{−1}. In Appendix C we derive an update for R_{t−1}^{−1} in terms of R_t^{−1}; for computation it hence makes sense to tabulate R_t^{−1} directly. We now argue (proof in Appendix B.2) that all R_t^{−1} are sparse.

Theorem 7. Say the v_t are V-sparse (all but their tail V entries are zero), and say that K is D-banded (all but the main and the D − 1 adjacent diagonals to either side are zero). Then each R_t^{−1} is the sum of the D-banded matrix I + λK_{1:t,1:t} and a (D + V − 2)-blocked matrix (i.e. all but the lower-right block of size D + V − 2 is zero).

So what does this sparsity argument buy us? We only need to maintain the original D-banded matrix K and the (D + V − 2)² entries of the block perturbation.
These entries can be updated backwards from t = T, . . . , 1 in O((D + V −2)3) time per round using block matrix inverses. This means that the run-time of the entire pre-processing step is linear in T. For updates and prediction we need ct and bt, which we can compute using Gaussian elimination from R−1 t in O(t(D + V )) time. In the next section we will see a special case in which we can update and predict in constant time. 5 Norm-bounded Data with Increment Squared Regularization We return to our motivating problem (1) with complexity matrix K = KT given by (2) and norm constrained data, i.e. vt = et. We show that the Rt matrices are very simple: their inverse is I + λKt with its lower-right entry perturbed. Using this, we show that the prediction is a linear combination of the past observations with weights decaying exponentially backward in time. We derive a constant-time update equation for the minimax prediction and tightly sandwich the regret. Here, we will calculate a few quantities that will be useful throughout this section. The inverse (I + λKT )−1 can be computed in closed form as a direct application of the results in [14]: Lemma 8. Recall that sinh(x) = ex−e−x 2 and cosh(x) = ex+e−x 2 . For any λ ≥0: (I + λKT )−1 i,j = cosh (T + 1 −|i −j|)ν  −cosh (T + 1 −i −j)ν  2λ sinh(ν) sinh (T + 1)ν  , where ν = cosh−1 1 + 1 2λ  . 6 We need some control on this inverse. We will use the abbreviations zt := (I + λKt)−1et, (8) ht := e⊺ t (I + λKt)−1et = e⊺ t zt, and (9) h := 2 1 + 2λ + √ 1 + 4λ. (10) We now show that these quantities are easily computable (see Appendix B for proofs). Lemma 9. Let ν be as in Lemma 8. Then, we can write ht = 1 −(λh)2t 1 −(λh)2t+2 h, and limt→∞ht = h from below, exponentially fast. A direct application of block matrix inversion (Lemma 12) results in Lemma 10. We have ht = 1 1 + 2λ −λ2ht−1 and zt = ht  λzt−1 1  . Intriguingly, following the optimal algorithm for all T rounds can be done in O(Td) computation and O(d) memory. 
These resource requirements are surprising, as playing weighted averages typically requires O(T²d). We found that the weighted averages are similar between rounds and can be updated cheaply. We are now ready to state the main result of this section, proved in Appendix B.3.

Theorem 11. Let z_t and h_t be as in (8) and K_t as in (2). For the minimax problem (1) we have R_t^{−1} = I + λK_t + γ_t e_t e_t^⊺, and the minimax prediction in round t is given by a_t = λc_t X_{t−1} z_{t−1}, where γ_t = 1/c_t − 1/h_t and the c_t satisfy the recurrence c_T = h_T and c_{t−1} = h_{t−1} + λ²h_{t−1}² c_t (1 + c_t).

5.1 Implementation

Theorem 11 states that the minimax prediction is a_t = λc_t X_{t−1} z_{t−1}. Using Lemma 10, we can derive an incremental update for a_t by defining a_1 = 0 and

a_{t+1} = λc_{t+1} X_t z_t = λc_{t+1} [X_{t−1} x_t] h_t [λz_{t−1}; 1] = λc_{t+1} h_t (λX_{t−1} z_{t−1} + x_t) = λc_{t+1} h_t (a_t/c_t + x_t).

This means we can predict in constant time O(d) per round.

5.2 Lower Bound

By Theorem 5, using that w_t = 1 so that d_t = c_t, the minimax regret equals Σ_{t=1}^T c_t. For convenience, we define r_t := 1 − (λ_T h)^{2t} (and r_{T+1} = 1) so that h_t = h r_t/r_{t+1}. We can obtain a lower bound on c_t from the expression given in Theorem 11 by ignoring the (positive) c_t² term to obtain c_{t−1} ≥ h_{t−1} + λ_T² h_{t−1}² c_t. By unpacking this lower bound recursively, we arrive at

c_t ≥ h Σ_{k=t}^T (λ_T h)^{2(k−t)} r_t² / (r_k r_{k+1}).

Since r_t²/(r_i r_{i+1}) is a decreasing function in i for every t, we have r_t²/(r_i r_{i+1}) ≥ r_t/r_{t+1}, which leads to

Σ_{t=1}^T c_t ≥ h Σ_{t=1}^T Σ_{k=t}^T (λ_T h)^{2(k−t)} r_t/r_{t+1} ≥ h ∫_0^{T−1} ∫_{t+1}^T (λ_T h)^{2(k−t)} r_t/r_{t+1} dk dt = Ω(−hT / (2 log(λ_T h))),

where we have exploited the fact that the integrand is monotonic and concave in k and monotonic and convex in t to lower bound the sums with an integral. See Claim 14 in the appendix for more details. Since −log(λ_T h) = O(1/√λ_T) and h = Ω(1/λ_T), we have that Σ_{t=1}^T c_t = Ω(T/√λ_T), matching the upper bound below.

5.3 Upper Bound

As h ≥ h_t, the alternative recursion c′_{T+1} = 0 and c′_{t−1} = h + λ²h² c′_t (1 + c′_t) satisfies c′_t ≥ c_t.
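The O(d)-per-round update of Section 5.1 can be cross-checked against the direct formula a_t = λc_t X_{t−1}z_{t−1} of Theorem 11. In the sketch below (our illustration; d, T, λ, and the random data are arbitrary), h_t comes from the forward recursion of Lemma 10, c_t from the backward recurrence of Theorem 11, and the streaming update a_{t+1} = λc_{t+1}h_t(a_t/c_t + x_t) reproduces the direct prediction at every round.

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, lam = 3, 40, 1.5
X = rng.standard_normal((d, T))

# forward pass: h_t from Lemma 10 (with h_0 = 0)
h = np.zeros(T + 1)
for t in range(1, T + 1):
    h[t] = 1 / (1 + 2 * lam - lam ** 2 * h[t - 1])

# backward pass: c_t from Theorem 11
c = np.zeros(T + 2)
c[T] = h[T]
for t in range(T, 1, -1):
    c[t - 1] = h[t - 1] + lam ** 2 * h[t - 1] ** 2 * c[t] * (1 + c[t])

# streaming minimax predictions: a_1 = 0, a_{t+1} = lam c_{t+1} h_t (a_t / c_t + x_t)
a = np.zeros(d)
z = np.zeros(0)                      # z_{t-1}, built up only for the direct check
for t in range(1, T):
    a_next = lam * c[t + 1] * h[t] * (a / c[t] + X[:, t - 1])
    # direct formula: a_{t+1} = lam c_{t+1} X_t z_t with z_t = h_t [lam z_{t-1}; 1]
    z = h[t] * np.concatenate([lam * z, [1.0]])
    assert np.allclose(a_next, lam * c[t + 1] * X[:, :t] @ z)
    a = a_next
```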
A simple induction 1 shows that c′ t is increasing with decreasing t, and it must hence have a limit. This limit is a fixed-point of c 7→h + λ2h2c(1 + c). This results in a quadratic equation, which has two solutions. Our starting point c′ T +1 = 0 lies below the half-way point 1−λ2h2 2λ2h2 > 0, so the sought limit is the smaller solution: c = −λ2h2 + 1 − p (λ2h2 −1)2 −4λ2h3 2λ2h2 . This is monotonic in h. Plugging in the definition of h, we find c = √ 4λ + 1(2λ + 1) + 4λ + 1 − √ 2 q 2λ λ 2 √ 4λ + 1 + 7  + 3 √ 4λ + 1 + 4  + √ 4λ + 1 + 1 4λ2 . Series expansion around λ →∞results in c ≤(1 + λ)−1/2. So all in all, the bound is R∗= O  T √1 + λT  , where we have written the explicit T dependence of λ. As discussed in the introduction, allowing λT to grow with T is natural and necessary for sub-linear regret. If λT were constant, the regret term and complexity term would grow with T at the same rate, effectively forcing the learner to compete with sequences that could track the xt sequence arbitrarily well. 6 Discussion We looked at obtaining the minimax solution to simple tracking/filtering/time series prediction problems with square loss, square norm regularization and square norm data constraints. We obtained a computational method to get the minimax result. Surprisingly, the problem turns out to be a mixture of per-step quadratic minimax problems that can be either concave or convex. These two problems have different solutions. Since the type of problem that is faced in each round is not a function of the past data, but only of the regularization, the coefficients of the value-to-go function can still be computed recursively. However, extending the analysis beyond quadratic loss and constraints is difficult; the self-dual property of the 2-norm is central to the calculations. Several open problems arise. The stability of the coefficient recursion is so far elusive. For the case of norm bounded data, we found that the ct are positive and essentially constant. 
However, for higher order smoothness constraints on the data (norm bounded increments, increments of increments, . . . ) the situation is more intricate. We find negative ct and oscillating ct, both diminishing and increasing. Understanding the behavior of the minimax regret and algorithm as a function of the regularization K (so that we can tune λ appropriately) is an intriguing and elusive open problem. Acknowledgments We gratefully acknowledge the support of the NSF through grant CCF-1115788, and of the Australian Research Council through an Australian Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Thanks also to the Simons Institute for the Theory of Computing Spring 2015 Information Theory Program. 1For the base case, cT +1 = 0 ≤cT = h. Then c′ t−1 = h+λ2h2c′ t(1+c′ t) ≥h+λ2h2c′ t+1(1+c′ t+1) = c′ t. 8 References [1] Mark Herbster and Manfred K Warmuth. Tracking the best linear predictor. The Journal of Machine Learning Research, 1:281–309, 2001. [2] Mark Herbster and Manfred K. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998. [3] Claire Monteleoni. Online learning of non-stationary sequences. Master’s thesis, MIT, May 2003. Artificial Intelligence Report 2003-11. [4] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. An online learning-based framework for tracking. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI), pages 101–108, 2010. [5] Olivier Bousquet and Manfred K Warmuth. Tracking a small set of experts by mixing past posteriors. The Journal of Machine Learning Research, 3:363–396, 2003. [6] Nicol`o Cesa-bianchi, Pierre Gaillard, Gabor Lugosi, and Gilles Stoltz. Mirror Descent meets Fixed Share (and feels no regret). In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 980–988. Curran Associates, Inc., 2012. [7] Avrim Blum and Carl Burch. 
On-line learning and the metrical task system problem. Machine Learning, 39(1):35–58, 2000. [8] Eiji Takimoto and Manfred K. Warmuth. The minimax strategy for Gaussian density estimation. In 13th COLT, pages 100–106, 2000. [9] Peter L. Bartlett, Wouter M. Koolen, Alan Malek, Manfred K. Warmuth, and Eiji Takimoto. Minimax fixed-design linear regression. In P. Gr¨unwald, E. Hazan, and S. Kale, editors, Proceedings of The 28th Annual Conference on Learning Theory (COLT), pages 226–239, 2015. [10] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008), pages 415–423, December 2008. [11] Edward Moroshko and Koby Crammer. Weighted last-step min-max algorithm with improved sub-logarithmic regret. In N. H. Bshouty, G. Stoltz, N. Vayatis, and T. Zeugmann, editors, Algorithmic Learning Theory - 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings, volume 7568 of Lecture Notes in Computer Science, pages 245–259. Springer, 2012. [12] Edward Moroshko and Koby Crammer. A last-step regression algorithm for non-stationary online learning. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2013, Scottsdale, AZ, USA, April 29 - May 1, 2013, volume 31 of JMLR Proceedings, pages 451–462. JMLR.org, 2013. [13] Wouter M. Koolen, Alan Malek, and Peter L. Bartlett. Efficient minimax strategies for square loss games. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems (NIPS) 27, pages 3230–3238, December 2014. [14] G. Y. Hu and Robert F. O’Connell. Analytical inversion of symmetric tridiagonal matrices. Journal of Physics A: Mathematical and General, 29(7):1511, 1996. 9
Learning to Segment Object Candidates Pedro O. Pinheiro∗ Ronan Collobert Piotr Dollár pedro@opinheiro.com locronan@fb.com pdollar@fb.com Facebook AI Research Abstract Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown to be fast while achieving state-of-the-art detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each assigned a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to object categories not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation. 1 Introduction Object detection is one of the most fundamental tasks in computer vision [21]. Until recently, the dominant paradigm in object detection was the sliding window framework: a classifier is applied at every object location and scale [4, 8, 32]. More recently, Girshick et al. [10] proposed a two-phase approach.
First, a rich set of object proposals (i.e., a set of image regions which are likely to contain an object) is generated using a fast (but possibly imprecise) algorithm. Second, a convolutional neural network classifier is applied on each of the proposals. This approach provides a notable gain in object detection accuracy compared to classic sliding window approaches. Since then, most stateof-the-art object detectors for both the PASCAL VOC [7] and ImageNet [5] datasets rely on object proposals as a first preprocessing step [10, 15, 33]. Object proposal algorithms aim to find diverse regions in an image which are likely to contain objects. For efficiency and detection performance reasons, an ideal proposal method should possess three key characteristics: (i) high recall (i.e., the proposed regions should contain the maximum number of possible objects), (ii) the high recall should be achieved with the minimum number of regions possible, and (iii) the proposed regions should match the objects as accurately as possible. In this paper, we present an object proposal algorithm based on Convolutional Networks (ConvNets) [20] that satisfies these constraints better than existing approaches. ConvNets are an important class of algorithms which have been shown to be state of the art in many large scale object recognition tasks. They can be seen as a hierarchy of trainable filters, interleaved with non-linearities ∗Pedro O. Pinheiro is with the Idiap Research Institute in Martigny, Switzerland and Ecole Polytechnique F´ed´erale de Lausanne (EPFL) in Lausanne, Switzerland. This work was done during an internship at FAIR. 1 and pooling. ConvNets saw a resurgence after Krizhevsky et al. [18] demonstrated that they perform very well on the ImageNet classification benchmark. Moreover, these models learn sufficiently general image features, which can be transferred to many different tasks [10, 11, 3, 22, 23]. 
Given an input image patch, our algorithm generates a class-agnostic mask and an associated score which estimates the likelihood of the patch fully containing a centered object (without any notion of an object category). The core of our model is a ConvNet which jointly predicts the mask and the object score. A large part of the network is shared between those two tasks: only the last few network layers are specialized for separately outputting a mask and score prediction. The model is trained by optimizing a cost function that targets both tasks simultaneously. We train on MS COCO [21] and evaluate the model on two object detection datasets, PASCAL VOC [7] and MS COCO. By leveraging powerful ConvNet feature representations trained on ImageNet and adapted on the large amount of segmented training data available in COCO, we are able to beat the state of the art in object proposals generation under multiple scenarios. Our most notable achievement is that our approach beats other methods by a large margin while considering a smaller number of proposals. Moreover, we demonstrate the generalization capabilities of our model by testing it on object categories not seen during training. Finally, unlike all previous approaches for generating segmentation proposals, we do not rely on edges, superpixels, or any other form of low-level segmentation. Our approach is the first to learn to generate segmentation proposals directly from raw image data. The paper is organized as follows: §2 presents related work, §3 describes our architecture choices, and §4 describes our experiments in different datasets. We conclude in §5. 2 Related Work In recent years, ConvNets have been widely used in the context of object recognition. Notable systems are AlexNet [18] and more recently GoogLeNet [29] and VGG [27], which perform exceptionally well on ImageNet. In the setting of object detection, Girshick et al. 
[10] proposed R-CNN, a ConvNet-based model that beats by a large margin models relying on hand-designed features. Their approach can be divided into two steps: selection of a set of salient object proposals [31], followed by a ConvNet classifier [18, 27]. Currently, most state-of-the-art object detection approaches [30, 12, 9, 25] rely on this pipeline. Although they are slightly different in the classification step, they all share the first step, which consist of choosing a rich set of object proposals. Most object proposal approaches leverage low-level grouping and saliency cues. These approaches usually fall into three categories: (1) objectness scoring [1, 34], in which proposals are extracted by measuring the objectness score of bounding boxes, (2) seed segmentation [14, 16, 17], where models start with multiple seed regions and generate separate foreground-background segmentation for each seed, and (3) superpixel merging [31, 24], where multiple over-segmentations are merged according to various heuristics. These models vary in terms of the type of proposal generated (bounding boxes or segmentation masks) and if the proposals are ranked or not. For a more complete survey of object proposal methods, we recommend the recent survey from Hosang et al. [13]. Although our model shares high level similarities with these approaches (we generate a set of ranked segmentation proposals), these results are achieved quite differently. All previous approaches for generating segmentation masks, including [17] which has a learning component, rely on low-level segmentations such as superpixels or edges. Instead, we propose a data-driven discriminative approach based on a deep-network architecture to obtain our segmentation proposals. Most closely related to our approach, Multibox [6, 30] proposed to train a ConvNet model to generate bounding box object proposals. Their approach, similar to ours, generates a set of ranked class-agnostic proposals. 
However, our model generates segmentation proposals instead of the less informative bounding box proposals. Moreover, the model architectures, training scheme, etc., are quite different between our approach and [30]. More recently, DeepBox [19] proposed a ConvNet model that learns to rerank proposals generated by EdgeBox, a bottom-up method for bounding box proposals. This system shares some similarities with our scoring network. Our model, however, is able to generate the proposals and rank them in one shot from the test image, directly from the pixel space. Finally, concurrently with this work, Ren et al. [25] proposed 'region proposal networks' for generating box proposals, which share similarities with our work. We emphasize, however, that unlike all these approaches, our method generates segmentation masks instead of bounding boxes.

[Figure 1 diagram: input x (3×224×224) → shared VGG features (512×14×14); segmentation branch: 1×1 conv → 512×14×14 → 512×1×1 → 56×56, upsampled to f_segm(x): 224×224; scoring branch: 2×2 pool → 512×7×7 → 512×1×1 → 1024×1×1 → f_score(x): 1×1.]

Figure 1: (Top) Model architecture: the network is split into two branches after the shared feature extraction layers. The top branch predicts a segmentation mask for the object located at the center while the bottom branch predicts an object score for the input patch. (Bottom) Examples of training triplets: input patch x, mask m and label y. Green patches contain objects that satisfy the specified constraints and are therefore assigned the label y = 1. Note that masks for negative examples (shown in red) are not used and are shown for illustrative purposes only.

3 DeepMask Proposals

Our object proposal method predicts a segmentation mask given an input patch, and assigns a score corresponding to how likely the patch is to contain an object. Both mask and score predictions are achieved with a single convolutional network. ConvNets are flexible models which can be applied to various computer vision tasks, and they alleviate the need for manually designed features.
Their flexible nature allows us to design a model in which the two tasks (mask and score predictions) can share most of the layers of the network. Only the last layers are task-specific (see Figure 1). During training, the two tasks are learned jointly. Compared to a model which would have two distinct networks for the two tasks, this architecture choice reduces the capacity of the model and increases the speed of full scene inference at test time. Each sample k in the training set is a triplet containing (1) the RGB input patch xk, (2) the binary mask corresponding to the input patch mk (with mij k ∈{±1}, where (i, j) corresponds to a pixel location on the input patch) and (3) a label yk ∈{±1} which specifies whether the patch contains an object. Specifically, a patch xk is given label yk = 1 if it satisfies the following constraints: (i) the patch contains an object roughly centered in the input patch (ii) the object is fully contained in the patch and in a given scale range Otherwise, yk = −1, even if an object is partially present. The positional and scale tolerance used in our experiments are given shortly. Assuming yk = 1, the ground truth mask mk has positive values only for the pixels that are part of the single object located in the center of the patch. If yk = −1 the mask is not used. Figure 1, bottom, shows examples of training triplets. Figure 1, top, illustrates an overall view of our model, which we call DeepMask. The top branch is responsible for predicting a high quality object segmentation mask and the bottom branch predicts the likelihood that an object is present and satisfies the above two constraints. We next describe in detail each part of the architecture, the training procedure, and the fast inference procedure. 3.1 Network Architecture The parameters for the layers shared between the mask prediction and the object score prediction are initialized with a network that was pre-trained to perform classification on the ImageNet dataset [5]. 
This model is then fine-tuned for generating object proposals during training. We choose the VGGA architecture [27] which consists of eight 3 × 3 convolutional layers (followed by ReLU nonlinearities) and five 2 × 2 max-pooling layers and has shown excellent performance. 3 As we are interested in inferring segmentation masks, the spatial information provided in the convolutional feature maps is important. We therefore remove all the final fully connected layers of the VGG-A model. Additionally we also discard the last max-pooling layer. The output of the shared layers has a downsampling factor of 16 due to the remaining four 2 × 2 max-pooling layers; given an input image of dimension 3 × h × w, the output is a feature map of dimensions 512 × h 16 × w 16. Segmentation: The branch of the network dedicated to segmentation is composed of a single 1 × 1 convolution layer (and ReLU non-linearity) followed by a classification layer. The classification layer consists of h×w pixel classifiers, each responsible for indicating whether a given pixel belongs to the object in the center of the patch. Note that each pixel classifier in the output plane must be able to utilize information contained in the entire feature map, and thus have a complete view of the object. This is critical because unlike in semantic segmentation, our network must output a mask for a single object even when multiple objects are present (e.g., see the elephants in Fig. 1). For the classification layer one could use either locally or fully connected pixel classifiers. Both options have drawbacks: in the former each classifier has only a partial view of the object while in the latter the classifiers have a massive number of redundant parameters. Instead, we opt to decompose the classification layer into two linear layers with no non-linearity in between. This can be viewed as a ‘low-rank’ variant of using fully connected linear classifiers. 
Such an approach massively reduces the number of network parameters while allowing each pixel classifier to leverage information from the entire feature map. Its effectiveness is shown in the experiments. Finally, to further reduce model capacity, we set the output of the classification layer to be h^o × w^o with h^o < h and w^o < w and upsample the output to h × w to match the input dimensions.

Scoring: The second branch of the network is dedicated to predicting if an image patch satisfies constraints (i) and (ii): that is, if an object is centered in the patch and at the appropriate scale. It is composed of a 2 × 2 max-pooling layer, followed by two fully connected (plus ReLU non-linearity) layers. The final output is a single 'objectness' score indicating the presence of an object in the center of the input patch (and at the appropriate scale).

3.2 Joint Learning

Given an input patch x_k ∈ I, the model is trained to jointly infer a pixel-wise segmentation mask and an object score. The loss function is a sum of binary logistic regression losses, one for each location of the segmentation network and one for the object score, over all training triplets (x_k, m_k, y_k):

L(θ) = Σ_k [ (1 + y_k)/(2 w^o h^o) Σ_{ij} log(1 + e^{−m_k^{ij} f_segm^{ij}(x_k)}) + λ log(1 + e^{−y_k f_score(x_k)}) ]   (1)

Here θ is the set of parameters, f_segm^{ij}(x_k) is the prediction of the segmentation network at location (i, j), and f_score(x_k) is the predicted object score. We alternate between backpropagating through the segmentation branch and the scoring branch (and set λ = 1/32). For the scoring branch, the data is sampled such that the model is trained with an equal number of positive and negative samples. Note that the factor multiplying the first term of Equation 1 implies that we only backpropagate the error over the segmentation branch if y_k = 1. An alternative would be to train the segmentation branch using negatives as well (setting m_k^{ij} = 0 for all pixels if y_k = −1).
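A minimal numpy sketch of the loss in Eq. (1) for a single triplet follows; the function name `deepmask_loss` and the argument layout are our own illustration, not the paper's code. The (1 + y_k)/2 factor gates the mask term, so a negative patch contributes only the score term and the segmentation branch receives no gradient from it.

```python
import numpy as np

def deepmask_loss(f_segm, f_score, m, y, lam=1.0 / 32):
    """Joint loss of Eq. (1) for one training triplet (sketch).

    f_segm : (ho, wo) array of segmentation logits f_segm^ij(x_k)
    f_score: scalar objectness logit f_score(x_k)
    m      : (ho, wo) ground-truth mask with entries in {-1, +1}
    y      : +1 for a canonical positive patch, -1 otherwise
    """
    ho, wo = f_segm.shape
    # (1 + y)/2 is 1 for positives and 0 for negatives, so the mask term
    # is switched off entirely on negative patches.
    mask_term = (1 + y) / (2.0 * wo * ho) * np.sum(np.log1p(np.exp(-m * f_segm)))
    score_term = lam * np.log1p(np.exp(-y * f_score))
    return mask_term + score_term
```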
However, we found that training with positives only was critical for generalizing beyond the object categories seen during training and for achieving high object recall. This way, during inference the network attempts to generate a segmentation mask at every patch, even if no known object is present.

3.3 Full Scene Inference

During full image inference, we apply the model densely at multiple locations and scales. This is necessary so that for each object in the image we test at least one patch that fully contains the object (roughly centered and at the appropriate scale), satisfying the two assumptions made during training. This procedure gives a segmentation mask and object score at each image location. Figure 2 illustrates the segmentation output when the model is applied densely to an image at a single scale. The full image inference procedure is efficient since all computations can be performed convolutionally. The VGG features can be computed densely in a fraction of a second given a typical input image. For the segmentation branch, the last fully connected layer can be computed via convolutions applied to the VGG features. The scores are likewise computed by convolutions on the VGG features followed by two 1 × 1 convolutional layers. Exact runtimes are given in §4.

Figure 2: Output of segmentation model applied densely to a full image with a 16 pixel stride (at a single scale at the central horizontal image region). Multiple locations give rise to good masks for each of the three monkeys (scores not shown). Note that no monkeys appeared in our training set.

Finally, note that the scoring branch of the network has a downsampling factor 2× larger than the segmentation branch due to the additional max-pooling layer. Given an input test image of size h_t × w_t, the segmentation and object networks generate outputs of dimension h_t/16 × w_t/16 and h_t/32 × w_t/32, respectively.
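The equivalence that makes this convolutional inference possible is that a fully connected layer over a fixed-size feature window is just a convolution applied densely; overlapping windows then share computation instead of being recomputed per patch. A small sketch with illustrative sizes (not the network's) checks this:

```python
import numpy as np

rng = np.random.default_rng(1)

c, k = 4, 3                       # channels; FC receptive field is k x k
H, W = 8, 8                       # dense feature map from a larger image
feats = rng.standard_normal((c, H, W))
w_fc = rng.standard_normal(c * k * k)   # one FC unit over a c*k*k window

# Dense application: the FC layer becomes a k x k convolution (stride 1),
# written here as an explicit sliding window for clarity.
dense = np.empty((H - k + 1, W - k + 1))
for i in range(H - k + 1):
    for j in range(W - k + 1):
        dense[i, j] = w_fc @ feats[:, i:i + k, j:j + k].ravel()

# Patch-by-patch application of the same FC layer gives identical scores,
# but would recompute the overlapping features at every location.
i, j = 2, 5
patch_score = w_fc @ feats[:, i:i + k, j:j + k].ravel()
assert np.isclose(dense[i, j], patch_score)
```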
In order to achieve a one-to-one mapping between the mask prediction and object score, we apply the interleaving trick right before the last max-pooling layer for the scoring branch to double its output resolution (we use exactly the implementation described in [26]).

3.4 Implementation Details

During training, an input patch x_k is considered to contain a 'canonical' positive example if an object is precisely centered in the patch and has maximal dimension equal to exactly 128 pixels. However, having some tolerance in the position of an object within a patch is critical, as during full image inference most objects will be observed slightly offset from their canonical position. Therefore, during training, we randomly jitter each 'canonical' positive example to increase the robustness of our model. Specifically, we consider translation shift (of ±16 pixels), scale deformation (of 2^{±1/4}), and also horizontal flip. In all cases we apply the same transformation to both the image patch x_k and the ground truth mask m_k and assign the example a positive label y_k = 1. Negative examples (y_k = −1) are any patches at least ±32 pixels or 2^{±1} in scale from any canonical positive example. During full image inference we apply the model densely at multiple locations (with a stride of 16 pixels) and scales (scales 2^{−2} to 2^{1} with a step of 2^{1/2}). This ensures that there is at least one tested image patch that fully contains each object in the image (within the tolerances used during training). As in the original VGG-A network [27], our model is fed with RGB input patches of dimension 3 × 224 × 224. Since we removed the fifth pooling layer, the common branch outputs a feature map of dimensions 512 × 14 × 14. The score branch of our network is composed of 2 × 2 max pooling followed by two fully connected layers (with 512 and 1024 hidden units, respectively). Both of these layers are followed by a ReLU non-linearity and dropout [28] with a rate of 0.5.
A final linear layer then generates the object score. The segmentation branch begins with a single 1 × 1 convolutional layer with 512 units. This feature map is then fully connected to a low dimensional output of size 512, which is further fully connected to each pixel classifier to generate an output of dimension 56 × 56. As discussed, there is no non-linearity between these two layers. In total, our model contains around 75M parameters. A final bilinear upsampling layer is added to transform the 56 × 56 output prediction to the full 224 × 224 resolution of the ground-truth (directly predicting the full resolution output would have been much slower). We opted for a non-trainable layer as we observed that a trainable one simply learned to bilinearly upsample. Alternatively, we tried downsampling the ground-truth instead of upsampling the network output; however, we found that doing so slightly reduced accuracy. The architecture design and hyper-parameters were chosen using a subset of the MS COCO validation data [21] (non-overlapping with the data we used for evaluation). We used a learning rate of .001. We trained our model using stochastic gradient descent with a batch size of 32 examples, momentum of .9, and weight decay of .00005. Aside from the pre-trained VGG features, weights are initialized randomly from a uniform distribution. Our model takes around 5 days to train on an Nvidia Tesla K40m. To binarize predicted masks we simply threshold the continuous output (using a threshold of .1 for PASCAL and .2 for COCO). All the experiments were conducted using Torch7 (http://torch.ch).

Figure 3: DeepMask proposals with highest IoU to the ground truth on selected images from COCO. Missed objects (no matching proposals with IoU > 0.5) are marked with a red outline.

4 Experimental Results

In this section, we evaluate the performance of our approach on the PASCAL VOC 2007 test set [7] and on the first 5000 images of the MS COCO 2014 validation set [21].
Our model is trained on the COCO training set, which contains about 80,000 images and a total of nearly 500,000 segmented objects. Although our model is trained to generate segmentation proposals, it can also be used to provide box proposals by taking the bounding boxes enclosing the segmentation masks. Figure 3 shows examples of generated proposals with highest IoU to the ground truth on COCO.

Metrics: We measure accuracy using the common Intersection over Union (IoU) metric: the area of the intersection of a candidate proposal and the ground-truth annotation divided by the area of their union. This metric can be applied to both segmentation and box proposals. Following Hosang et al. [13], we evaluate the performance of the proposal methods using the average recall (AR) between IoU 0.5 and 1.0 for a fixed number of proposals. AR has been shown to correlate extremely well with detector performance (recall at a single IoU threshold is far less predictive) [13].

Methods: We compare to the current top five publicly available proposal methods: EdgeBoxes [34], SelectiveSearch [31], Geodesic [16], Rigor [14], and MCG [24]. These methods achieve top results on object detection (when coupled with R-CNNs [10]) and also obtain the best AR [13].

Results: Figure 4 (a-c) compares the performance of our approach, DeepMask, to existing proposal methods on PASCAL (using boxes) and COCO (using both boxes and segmentations). Shown is the AR of each method as a function of the number of generated proposals. Under all scenarios DeepMask (and its variants) achieves substantially better AR for all numbers of proposals considered. AR at selected proposal counts and averaged across all counts (AUC) is reported in Tables 1 and 2 for COCO and PASCAL, respectively. Notably, DeepMask achieves an order of magnitude reduction in the number of proposals necessary to reach a given AR under most scenarios.
For example, with 100 segmentation proposals DeepMask achieves an AR of .245 on COCO, while competing methods require nearly 1000 segmentation proposals to achieve similar AR.

Figure 4: (a) Box proposals on PASCAL. (b) Box proposals on COCO. (c) Segmentation proposals on COCO. (d) Small objects (area < 32^2). (e) Medium objects. (f) Large objects (area > 96^2). (g) Recall with 10 proposals. (h) Recall with 100 proposals. (i) Recall with 1000 proposals. Methods compared: DeepMask, DeepMaskZoom, MCG, SelectiveSearch, Rigor, Geodesic, and EdgeBoxes. (a-c) Average recall versus number of box and segmentation proposals on various datasets. (d-f) AR versus number of proposals for different object scales on segmentation proposals in COCO. (g-i) Recall versus IoU threshold for different numbers of segmentation proposals in COCO.
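The IoU and AR metrics behind these plots can be sketched in a few lines; this is a simplified illustration (our naming, axis-aligned boxes, no one-to-one matching constraint as used in the full evaluation code):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_recall(gt_boxes, proposals, thresholds=np.arange(0.5, 1.0, 0.05)):
    """Mean recall over IoU thresholds in [0.5, 1.0) (sketch of AR)."""
    # Best-matching proposal IoU for each ground-truth object.
    best = [max(box_iou(g, p) for p in proposals) for g in gt_boxes]
    recalls = [np.mean([iou >= t for iou in best]) for t in thresholds]
    return float(np.mean(recalls))

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(0, 0, 10, 10), (21, 21, 30, 30)]
ar = average_recall(gt, props)   # second object matched at IoU 0.81
```

Averaging recall over the threshold range is what rewards well-localized proposals: a proposal at IoU 0.81 counts toward recall at thresholds up to 0.8 but not beyond.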
| Method | Box AR@10 | Box AR@100 | Box AR@1000 | Box AUC | Seg AR@10 | Seg AR@100 | Seg AR@1000 | AUC^S | AUC^M | AUC^L | Seg AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EdgeBoxes [34] | .074 | .178 | .338 | .139 | – | – | – | – | – | – | – |
| Geodesic [16] | .040 | .180 | .359 | .126 | .023 | .123 | .253 | .013 | .086 | .205 | .085 |
| Rigor [14] | – | .133 | .337 | .101 | – | .094 | .253 | .022 | .060 | .178 | .074 |
| SelectiveSearch [31] | .052 | .163 | .357 | .126 | .025 | .095 | .230 | .006 | .055 | .214 | .074 |
| MCG [24] | .101 | .246 | .398 | .180 | .077 | .186 | .299 | .031 | .129 | .324 | .137 |
| DeepMask20 | .139 | .286 | .431 | .217 | .109 | .215 | .314 | .020 | .227 | .317 | .164 |
| DeepMask20* | .152 | .306 | .432 | .228 | .123 | .233 | .314 | .020 | .257 | .321 | .175 |
| DeepMaskZoom | .150 | .326 | .482 | .242 | .127 | .261 | .366 | .068 | .263 | .308 | .194 |
| DeepMaskFull | .149 | .310 | .442 | .231 | .118 | .235 | .323 | .020 | .244 | .342 | .176 |
| DeepMask | .153 | .313 | .446 | .233 | .126 | .245 | .331 | .023 | .266 | .336 | .183 |

Table 1: Results on the MS COCO dataset for both bounding box and segmentation proposals. We report AR at different numbers of proposals (10, 100 and 1000) and also AUC (AR averaged across all proposal counts). For segmentation proposals we report overall AUC and also AUC at different scales (small/medium/large objects indicated by superscripts S/M/L). See text for details.

Scale: The COCO dataset contains objects in a wide range of scales. In order to analyze performance in more detail, we divided the objects in the validation set into roughly equally sized sets according to object pixel area a: small (a < 32^2), medium (32^2 ≤ a ≤ 96^2), and large (a > 96^2) objects. Figure 4 (d-f) shows performance at each scale; all models perform poorly on small objects. To improve accuracy of DeepMask we apply it at an additional smaller scale (DeepMaskZoom). This boosts performance (especially for small objects) but at a cost of increased inference time.

| PASCAL VOC07 | AR@10 | AR@100 | AR@1000 | AUC |
|---|---|---|---|---|
| EdgeBoxes [34] | .203 | .407 | .601 | .309 |
| Geodesic [16] | .121 | .364 | .596 | .230 |
| Rigor [14] | .164 | .321 | .589 | .239 |
| SelectiveSearch [31] | .085 | .347 | .618 | .241 |
| MCG [24] | .232 | .462 | .634 | .344 |
| DeepMask | .337 | .561 | .690 | .433 |

Table 2: Results on PASCAL VOC 2007 test.
Figure 5: Fast R-CNN results on PASCAL.

Localization: Figure 4 (g-i) shows the recall each model achieves as the IoU threshold varies, for different numbers of proposals per image. DeepMask achieves a higher recall in virtually every scenario, except at very high IoU, where it falls slightly below other models. This is likely due to the fact that our method outputs a downsampled version of the mask at each location and scale; a multiscale approach or skip connections could improve localization at very high IoU.

Generalization: To see if our approach can generalize to unseen classes [2, 19], we train two additional versions of our model, DeepMask20 and DeepMask20*. DeepMask20 is trained only with objects belonging to one of the 20 PASCAL categories (a subset of the full 80 COCO categories). DeepMask20* is similar, except we use the scoring network from the original DeepMask. Results for the two models when evaluated on all 80 COCO categories (as in all other experiments) are shown in Table 1. Compared to DeepMask, DeepMask20 exhibits a drop in AR (but still outperforms all previous methods). DeepMask20*, however, matches the performance of DeepMask. This surprising result demonstrates that the drop in accuracy is due to the discriminatively trained scoring branch (DeepMask20 is inadvertently trained to assign low scores to the other 60 categories); the segmentation branch generalizes extremely well even when trained on a reduced set of categories.

Architecture: In the segmentation branch, the convolutional features are fully connected to a 512-dimensional 'low-rank' layer which is in turn connected to the 56 × 56 output (with no intermediate non-linearity), see §3. We also experimented with a 'full-rank' architecture (DeepMaskFull) with over 300M parameters, where each of the 56 × 56 outputs was directly connected to the convolutional features. As can be seen in Table 1, DeepMaskFull is slightly inferior to our final model (and much slower).
Detection: As a final validation, we evaluate how DeepMask performs when coupled with an object detector on PASCAL VOC 2007 test. We re-train and evaluate the state-of-the-art Fast R-CNN [9] using proposals generated by SelectiveSearch [31] and our method. Figure 5 shows the mean average precision (mAP) for Fast R-CNN with varying number of proposals. Most notably, with just 100 DeepMask proposals Fast R-CNN achieves mAP of 68.2% and outperforms the best results obtained with 2000 SelectiveSearch proposals (mAP of 66.9%). We emphasize that with 20× fewer proposals DeepMask outperforms SelectiveSearch (this is consistent with the AR numbers in Table 1). With 500 DeepMask proposals, Fast R-CNN improves to 69.9% mAP, after which performance begins to degrade (a similar effect was observed in [9]). Speed: Inference takes an average of 1.6s per image in the COCO dataset (1.2s on the smaller PASCAL images). Our runtime is competitive with the fastest segmentation proposal methods (Geodesic [16] runs at ∼1s per PASCAL image) and substantially faster than most (e.g., MCG [24] takes ∼30s). Inference time can further be dropped by ∼30% by parallelizing all scales in a single batch (eliminating GPU overhead). We do, however, require use of a GPU for efficient inference. 5 Conclusion In this paper, we propose an innovative framework to generate segmentation object proposals directly from image pixels. At test time, the model is applied densely over the entire image at multiple scales and generates a set of ranked segmentation proposals. We show that learning features for object proposal generation is not only feasible but effective. Our approach surpasses the previous state of the art by a large margin in both box and segmentation proposal generation. In future work, we plan on coupling our proposal method more closely with state-of-the-art detection approaches. 
Acknowledgements: We would like to thank Ahmad Humayun and Tsung-Yi Lin for help with generating experimental results, Andrew Tulloch, Omry Yadan and Alexey Spiridonov for help with computational infrastructure, and Rob Fergus, Yuandong Tian and Soumith Chintala for valuable discussions.

References

[1] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. PAMI, 2012.
[2] N. Chavali, H. Agrawal, A. Mahendru, and D. Batra. Object-proposal evaluation protocol is 'gameable'. arXiv:1505.05836, 2015.
[3] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[6] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In CVPR, 2014.
[7] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 2010.
[8] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 2010.
[9] R. Girshick. Fast R-CNN. arXiv:1504.08083, 2015.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[11] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[13] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? arXiv:1502.05082, 2015.
[14] A. Humayun, F. Li, and J. M. Rehg. RIGOR: Reusing Inference in Graph Cuts for generating Object Regions. In CVPR, 2014.
[15] H. Kaiming, Z. Xiangyu, R. Shaoqing, and S. Jian. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[16] P. Krähenbühl and V. Koltun. Geodesic object proposals. In ECCV, 2014.
[17] P. Krähenbühl and V. Koltun. Learning to propose objects. In CVPR, 2015.
[18] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[19] W. Kuo, B. Hariharan, and J. Malik. DeepBox: Learning objectness with convolutional networks. arXiv:1505.02146, 2015.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[21] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft COCO: Common objects in context. arXiv:1405.0312, 2015.
[22] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free? – Weakly-supervised learning with convolutional neural networks. In CVPR, 2015.
[23] P. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.
[24] J. Pont-Tuset, P. Arbeláez, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping for image segmentation and object proposal generation. arXiv:1503.00848, 2015.
[25] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv:1506.01497, 2015.
[26] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
[29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[30] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov. Scalable, high-quality object detection. arXiv:1412.1441, 2014.
[31] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013.
[32] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 2004.
[33] Z. Y. Zhu, R. Urtasun, R. Salakhutdinov, and S. Fidler. segDeepM: Exploiting segmentation and context in deep neural networks for object detection. In CVPR, 2015.
[34] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
A Theory of Decision Making Under Dynamic Context

Michael Shvartsman, Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, ms44@princeton.edu
Vaibhav Srivastava, Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ, 08544, vaibhavs@princeton.edu
Jonathan D. Cohen, Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, jdc@princeton.edu

Abstract

The dynamics of simple decisions are well understood and modeled as a class of random walk models [e.g. 1–4]. However, most real-life decisions include a dynamically-changing influence of additional information we call context. In this work, we describe a computational theory of decision making under dynamically shifting context. We show how the model generalizes the dominant existing model of fixed-context decision making [2] and can be built up from a weighted combination of fixed-context decisions evolving simultaneously. We also show how the model generalizes recent work on the control of attention in the Flanker task [5]. Finally, we show how the model recovers qualitative data patterns in another task of longstanding psychological interest, the AX Continuous Performance Test [6], using the same model parameters.

1 Introduction

In the late 1940s, Wald and colleagues developed a sequential test called the sequential probability ratio test (SPRT; [7]). This test accumulates evidence in favor of one of two simple hypotheses until a log likelihood threshold is crossed and one hypothesis is selected, forming a random walk to a decision bound. This test was quickly applied as a model of human decision making behavior, both in its discrete form [e.g. 1] and in a continuous realization as a biased Wiener process (the Diffusion Decision Model or DDM; [2]). This work has seen a recent revival due to evidence of neurons that appear to reflect ramping behavior consistent with evidence accumulation [e.g.
8], cortical circuits implementing a decision process similar to the SPRT in the basal ganglia in rats [9], and findings of correlations between DDM parameters and activity in EEG [10] and fMRI [11]. Bolstered by this revival, a number of groups have investigated extensions of these models. Some of them tackle complex hypothesis spaces [e.g. 12] or greater biological realism [e.g. 13]. Others focus on relaxing stationarity assumptions about the task setting, whether by investigating multi-stimulus integration [5], deadlines [14], or different evidence distributions by trial [15]. We engage with the latter literature by providing a theory of multi-alternative decision making under dynamically changing context. We define context simply as additional information that may bear upon a decision, whether from perception or memory. Such a theory is important because even simple tasks that use apparently-fixed contexts such as prior biases may require inference on the context itself before it can bear on the decision. The focus on dynamics is what distinguishes our work from efforts on context-dependent changes in preferences [e.g. 16] and internal context updating [e.g. 17]. The admission of evidence from memory distinguishes it from work on multisensory integration [e.g. 18]. We illustrate such decisions with an example: consider seeing someone that looks like a friend (a target stimulus), and a decision: to greet or not greet this person. A context can be external (e.g. a concert hall) or internal (e.g. the memory that the friend went on vacation, and therefore this person is likely a lookalike). The context can strictly constrain the decision (e.g. greeting a friend in the street vs. the middle of a film), or only bias it (guessing whether this is a friend or lookalike after retrieving the memory of them on vacation). Regardless, context affects the decision, and we assume it needs to be inferred, either before or alongside the greeting decision itself.
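The context-free SPRT that anchors this literature is easy to state in code. A minimal sketch follows (our naming; Gaussian hypotheses and the specific parameter values are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def sprt(mu0, mu1, sigma, true_mu, threshold=3.0, max_t=10_000):
    """Wald's SPRT between H0: N(mu0, sigma^2) and H1: N(mu1, sigma^2).

    Accumulates the log likelihood ratio of i.i.d. samples until it
    crosses +threshold (accept H0) or -threshold (accept H1), forming
    a random walk to a decision bound.
    """
    z = 0.0
    for t in range(1, max_t + 1):
        e = rng.normal(true_mu, sigma)
        # log N(e; mu0, sigma) - log N(e; mu1, sigma), with the shared
        # normalizing constants cancelled.
        z += (e - mu1) ** 2 / (2 * sigma ** 2) - (e - mu0) ** 2 / (2 * sigma ** 2)
        if abs(z) >= threshold:
            return (0 if z > 0 else 1), t
    return (0 if z > 0 else 1), max_t

choice, rt = sprt(mu0=1.0, mu1=-1.0, sigma=2.0, true_mu=1.0)
```

Raising the threshold trades speed for accuracy, which is the speed-accuracy trade-off these random walk models are built to capture.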
We aim to build a normative theory of this context processing component of decision making. We show that our theory generalizes the discrete-time context-free SPRT (and therefore a Wiener process DDM in continuous time) and how context-dependent decisions can be optimally built up from a dynamically weighted combination of context-free decisions. Our theory is general enough to consider a range of existing empirical paradigms in the literature, including the Stroop, Flanker, Simon, and the AX-CPT [6, 19–21]. We choose to mention these in particular because they reside on the bounds of the task space our theory considers on two different dimensions, and we describe a discretization of task space on those dimensions that accommodates those existing paradigms. We show that in spite of the framework's generality, it can provide well-behaved zero-parameter predictions across qualitatively different tasks. We do this by using our framework to derive a notational variant of an existing Flanker model [5], and using parameter values from this previous model to simultaneously generate qualitatively accurate predictions in both the Flanker and AX-CPT paradigms. That is, our theory generates plausible behavior in qualitatively different tasks, using the same parameters.

2 The theoretical framework

We assume that dynamic context decision making, like fixed context decision making, can be understood as a sequential Bayesian inference process. Our theory therefore uses sequentially drawn samples from external input and/or internal memory to compute the joint posterior probability over the identity of the true context and decision target over time. It maps from this joint probability to a response probability using a fixed response mapping, and uses a fixed threshold rule defined over the response probability to stop sampling and respond.
We make a distinction between our theory of decision making and individual task models that can be derived from the theory by picking points in the task space that the theory accommodates. Formally, we assume the decider conditions a decision on its best estimate of two pieces of information: some unknown true context taking on one of the values {c_i}_{i=0}^{n}, and some unknown true target taking on one of the values {g_j}_{j=0}^{m}. This intentionally abstracts from richer views of context (e.g. ones which assume that the context is qualitatively different from the target, or that the relevant context to sample from is unknown). We denote by C, G random variables representing the possible draws of context and target, and by r(·) a deterministic function from the distribution P(C, G) to a distribution over responses. We define an abstract context sensor and target sensor selectively tuned to context or target information, such that e_C is a discrete piece of evidence drawn from the context sensor, and e_G one drawn from the target sensor. The goal of the decider is to average over the noise in the sensors to estimate the pair (C, G) sufficiently to determine the correct response, and we assume that this inference is done optimally using Bayes' rule. We denote by t_c^on the time at which the context appears and by t_c^off ≥ t_c^on the time at which it disappears, and likewise by t_g^on ≤ t_g^off the times at which the target appears and disappears. We also restrict these times such that t_c^on ≤ t_g^on; this is the primary distinction between context and target, which can otherwise be two arbitrary stimuli. The onsets and offsets define one dimension in a continuous space of tasks over which our theory can make predictions. The form of r(·) defines a second dimension in the space of possible tasks where our theory makes predictions.
We use a suboptimal but simple threshold heuristic for the decision rule: when the a posteriori probability of any response crosses some adaptively set threshold, sampling ends and that response is made. For the purposes of this paper, we restrict ourselves to two extremes on both of these dimensions. For stimulus onset and offset times, we consider one setting where the context and target appear and disappear together (perfect overlap, i.e. t_c^on = t_g^on and t_c^off = t_g^off), and one where the target appears some time after the context disappears (no overlap, i.e. t_c^off ≤ t_g^on). We label the former the external context model, because the contextual information is immediately available, and the latter the internal context model, because the information must be previously encoded and maintained. The external context model is like the ongoing film context from the introduction, and the internal context is like knowing that the friend is on vacation. For the response mapping function r(·) we consider one setting where the response is solely conditioned on the perceptual target (context-independent response) and one where the response is conditioned jointly on the context-target pair (context-dependent response). The context-dependent response is like choosing to greet or not greet the friend at the movie theater, and the context-independent one is like choosing to greet or not greet the friend on the street. In the lab, classic tasks like the Stroop, Flanker, and Simon [19–21] fall into the taxonomy as external-context tasks with a context-independent response, because the response is solely conditioned on the perceptual target. On the other side of both dimensions are tasks like the N-back task and the AX Continuous Performance Test [6]. In our consideration of these tasks, we restrict our attention to the case where there are only two possible context and target hypotheses.
The sequential inference procedure we use can be performed for other numbers of potentially-dependent hypotheses and responses, though the analysis we show later in the paper relies on the n = m = 2 assumption and on independence between the two sensors.

3 External context update

First we describe the inference procedure in the case of perfect overlap of context and target. At the current timestep τ, the decider has available evidence samples from both the context and the target (e_C and e_G) and uses Bayes' rule to compute the posterior probability P(C, G | e_C, e_G):

P_τ(C = c, G = g | e_C, e_G) ∝ P(e_C, e_G | C = c, G = g) P_{τ−1}(C = c, G = g)   (1)

The first term is the likelihood of the evidence given the joint context-target hypothesis, and the second term is the prior, which we take to be the posterior from the previous time step. We use the Flanker task as a concrete example. In this task, participants are shown a central target (e.g. an S or an H) surrounded on both sides by distractors ('flankers', more S or H stimuli) that are either congruent or incongruent with it. Participants are told to respond to the target only but show a number of indications of influence of the distractor, most notably an early period of below-chance performance and a slowdown or reduced accuracy with incongruent relative to congruent flankers [20]. We label the two possible target identities {g_0 = S, g_1 = H} and the possible flanker identities {c_0 = S_S, c_1 = H_H}, with the underscore representing the position of the target. This gives us the two congruent possibilities {[C = c_0, G = g_0], [C = c_1, G = g_1]} or [SSS, HHH] and the two incongruent possibilities {[C = c_0, G = g_1], [C = c_1, G = g_0]} or [SHS, HSH].
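A minimal numpy sketch of one step of the update in Eq. (1) for this 2 × 2 Flanker hypothesis space, assuming conditionally independent sensors (the numeric values below are illustrative only, not fitted parameters):

```python
import numpy as np

def posterior_update(prior, like_c, like_g):
    """One step of Eq. (1): P_t(C, G) ∝ P(e_C|C) P(e_G|G) P_{t-1}(C, G).

    prior  : (2, 2) joint P_{t-1}(C=c_i, G=g_j); rows index C, columns G
    like_c : (2,) likelihoods P(e_C | C=c_i) of the context sample
    like_g : (2,) likelihoods P(e_G | G=g_j) of the target sample
    Assumes the two sensors are conditionally independent.
    """
    post = prior * np.outer(like_c, like_g)
    return post / post.sum()

# Flanker-style example: congruent pairs more probable a priori.
prior = np.array([[0.4, 0.1],
                  [0.1, 0.4]])    # rows: C in {S_S, H_H}; cols: G in {S, H}
like_c = np.array([0.8, 0.2])     # context sample favours S_S
like_g = np.array([0.6, 0.4])     # target sample weakly favours S
post = posterior_update(prior, like_c, like_g)
resp = post.sum(axis=0)           # marginalising out C gives the response
                                  # probabilities used by the threshold rule
```

Because the prior couples context and target, a context sample favouring S_S pulls probability toward the S response even before the target evidence is decisive, which is how the model produces flanker interference.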
The response mapping function marginalizes over context identities at each timestep:

r(P(C, G)) = r0 with probability Σ_c P(C = c, G = g0); r1 with probability Σ_c P(C = c, G = g1)    (2)

The higher of the two response probabilities is compared to a threshold θ, and when this threshold is crossed, the model responds. What remains is to define the prior P0(C, G) and the likelihood function P(eC, eG | C, G) or its inverse, the sample generation function. For sample generation, we assume that the context and target are represented as two Gaussian distributions:

eC ∼ N(µ_c + α_µ µ_g, σ_c² + α_σ σ_g²)    (3)
eG ∼ N(µ_g + α_µ µ_c, σ_g² + α_σ σ_c²)    (4)

Here µ_c and µ_g are baseline means for the distributions of context and target, σ_c² and σ_g² are their variances, and the α scaling factors mix them, potentially reflecting perceptual overlap in the sensors. This formulation is a notational variant of an earlier flanker model [5], but we are able to derive it by describing the task in our formalism (we describe the exact mapping in the supplementary material). Moreover, we later show how this notational equivalence lets us reproduce both Yu and colleagues’ results and data patterns in another task, using the same parameter settings.

4 Comparison to a constant-drift model

We now write the model in terms of a likelihood ratio test to facilitate comparison to Wald’s SPRT and Wiener diffusion models. This is complementary to an earlier approach performing dynamical analysis on the problem in probability space [22]. First we write the likelihood ratio Z of the full response posteriors for the two responses. Since the likelihood ratio and the maximum a posteriori probability are monotonically related, thresholding on Z maps onto the threshold over the probability of the most probable response we described above.
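Eqs. (3)–(4) translate directly into a sampler (a sketch with our own names; the α terms are the perceptual-overlap mixing coefficients):

```python
import numpy as np

# Sample generation per eqs. (3)-(4): the alpha terms mix target statistics
# into the context sensor and vice versa (perceptual overlap).
def draw_samples(rng, mu_c, mu_g, var_c, var_g, a_mu=0.0, a_var=0.0):
    e_c = rng.normal(mu_c + a_mu * mu_g, np.sqrt(var_c + a_var * var_g))
    e_g = rng.normal(mu_g + a_mu * mu_c, np.sqrt(var_g + a_var * var_c))
    return e_c, e_g
```

Setting a_mu = a_var = 0 recovers the independent-sensor case used in the analysis below.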
Z = p(r(P(C, G)) = r0 | eC, eG) / p(r(P(C, G)) = r1 | eC, eG)    (5)

= [P(eC, eG | C = c0, G = g0) P_{τ−1}(C = c0, G = g0) + P(eC, eG | C = c1, G = g0) P_{τ−1}(C = c1, G = g0)] / [P(eC, eG | C = c0, G = g1) P_{τ−1}(C = c0, G = g1) + P(eC, eG | C = c1, G = g1) P_{τ−1}(C = c1, G = g1)]    (6)

For this analysis we assume that context and target samples are drawn independently from each other, i.e. that α_µ = α_σ = 0 and therefore that P(eC, eG | C, G) = P(eC | C) P(eG | G). We also index the evidence samples by time to remove the prior terms P_{τ−1}(·), and introduce the notation l_t(g_x) = P(eG_t | G = g_x) and l_t(c_x) = P(eC_t | C = c_x) for the likelihoods, with x ∈ {0, 1} indexing stimuli and t ∈ {t_c^on = t_g^on, . . . , τ} indexing evidence samples over time. Now we can rewrite:

Z_τ = [P0(C = c0, G = g0) Π_t l_t(c0) l_t(g0) + P0(C = c1, G = g0) Π_t l_t(c1) l_t(g0)] / [P0(C = c0, G = g1) Π_t l_t(c0) l_t(g1) + P0(C = c1, G = g1) Π_t l_t(c1) l_t(g1)]    (7)

= [P0(C = c0) P(G = g0 | C = c0) Π_t l_t(c0) l_t(g0) + P0(C = c1) P(G = g0 | C = c1) Π_t l_t(c1) l_t(g0)] / [P0(C = c0) P(G = g1 | C = c0) Π_t l_t(c0) l_t(g1) + P0(C = c1) P(G = g1 | C = c1) Π_t l_t(c1) l_t(g1)]    (8)

Divide both the numerator and the denominator by Π_t l_t(c1):

Z_τ = [P0(C = c0) P(G = g0 | C = c0) Π_t (l_t(c0)/l_t(c1)) l_t(g0) + P0(C = c1) P(G = g0 | C = c1) Π_t l_t(g0)] / [P0(C = c0) P(G = g1 | C = c0) Π_t (l_t(c0)/l_t(c1)) l_t(g1) + P0(C = c1) P(G = g1 | C = c1) Π_t l_t(g1)]    (9)

Separate out the target likelihood product and take logs:

log Z_τ = Σ_{t=1}^{τ} log(l_t(g0)/l_t(g1)) + log { [P(G = g0 | C = c0) (P0(C = c0)/P0(C = c1)) Π_t (l_t(c0)/l_t(c1)) + P(G = g0 | C = c1)] / [P(G = g1 | C = c0) (P0(C = c0)/P0(C = c1)) Π_t (l_t(c0)/l_t(c1)) + P(G = g1 | C = c1)] }    (10)

Now, the first term is Wald’s sequential probability ratio test [7] with z_τ^g = Σ_t log(l_t(g0)/l_t(g1)). In the continuum limit, it is equal to a Wiener diffusion process dz_g = a_g dt + b_g dW with a_g = E[log(l(g0)/l(g1))] and b_g² = Var[log(l(g0)/l(g1))] [1, 4].
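The algebra above is easy to verify numerically: the direct form (7) must equal the ratio of response posteriors obtained by sequential updating, since the normalization cancels in the ratio. A sketch with hypothetical parameter values (our own code):

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 5, 1.0
mu_c, mu_g = [0.0, 1.0], [0.0, 1.0]             # hypothesis means (assumed)
prior = np.array([[0.45, 0.05], [0.05, 0.45]])  # congruence-biased P0(C, G)
e_c = rng.normal(mu_c[0], sigma, T)             # evidence from a true (c0, g0) trial
e_g = rng.normal(mu_g[0], sigma, T)

def lik(e, m):                                  # Gaussian likelihood (unnormalized)
    return np.exp(-0.5 * ((e - m) / sigma) ** 2)

# Eq. (7): sums over contexts, with the target likelihood product factored out.
num = (prior[0, 0] * np.prod(lik(e_c, mu_c[0])) +
       prior[1, 0] * np.prod(lik(e_c, mu_c[1]))) * np.prod(lik(e_g, mu_g[0]))
den = (prior[0, 1] * np.prod(lik(e_c, mu_c[0])) +
       prior[1, 1] * np.prod(lik(e_c, mu_c[1]))) * np.prod(lik(e_g, mu_g[1]))

# Sequential Bayesian updating of the joint posterior gives the same ratio.
post = prior.copy()
for ec, eg in zip(e_c, e_g):
    post = np.outer(lik(ec, np.asarray(mu_c)), lik(eg, np.asarray(mu_g))) * post
    post /= post.sum()
assert np.isclose(num / den, post[:, 0].sum() / post[:, 1].sum())
```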
We can relabel the SPRT for the target as z_τ^g = Σ_t log(l_t(g0)/l_t(g1)) and do the same for the context drift that appears in both the numerator and denominator of the final term: z_τ^c = Σ_t log(l_t(c0)/l_t(c1)) and z_0^c = log(P0(C = c0)/P0(C = c1)). Then the expression is as follows:

log Z_τ = z_τ^g + log { [P(G = g0 | C = c0) e^{z_0^c + z_τ^c} + P(G = g0 | C = c1)] / [P(G = g1 | C = c0) e^{z_0^c + z_τ^c} + P(G = g1 | C = c1)] }    (11)

log Z_τ in equation (11) comprises two terms. The first is the unbiased SPRT statistic, while the second is a nonlinear function of the SPRT statistic for the decision on the context. The nonlinear term plays the role of a bias in the SPRT for the decision on the target. This rational dynamic prior bias is an advance over previous heuristic approaches to dynamic biases [e.g. 23]. Several limits of (11) are of interest: if the context and the target are independent, then the second term reduces to log(P(G = g0)/P(G = g1)), and (11) reduces to the biased SPRT for the target. If each target is equally likely given a context, then the nonlinear term in (11) reduces to zero and (11) reduces to the SPRT for the target. If each context deterministically determines a different target, then any piece of evidence on the context is equally informative about the target. Accordingly, (11) reduces to the sum of the statistics for context and target, i.e., z_τ^g ± (z_τ^c + z_0^c). If the magnitude of the drift rate for the context is much higher than the magnitude of the drift rate for the target, or the magnitude of the bias z_0^c is high, then the nonlinear term saturates on a faster timescale than the decision time. In this limit, the approximate contribution of the nonlinear term is either log(P(G = g0 | C = c0)/P(G = g1 | C = c0)) or log(P(G = g0 | C = c1)/P(G = g1 | C = c1)).
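The limiting cases of the nonlinear bias term in (11) can be confirmed numerically (illustrative code; the function and argument names are ours, and z stands for z_0^c + z_τ^c):

```python
import math

# Nonlinear bias term of eq. (11); z folds in the context prior log-odds
# z_0^c plus the accumulated context statistic z_tau^c.
def bias_term(z, p_g0_c0, p_g0_c1):
    num = p_g0_c0 * math.exp(z) + p_g0_c1
    den = (1 - p_g0_c0) * math.exp(z) + (1 - p_g0_c1)
    return math.log(num / den)

# Context independent of target: collapses to the static prior log-odds.
assert math.isclose(bias_term(2.0, 0.7, 0.7), math.log(0.7 / 0.3))
# Targets equally likely under each context: the bias vanishes.
assert abs(bias_term(2.0, 0.5, 0.5)) < 1e-12
# Deterministic mapping c0 -> g0, c1 -> g1: the bias equals z itself.
assert math.isclose(bias_term(2.0, 1.0, 0.0), 2.0)
```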
Finally, in the limit of large thresholds, or equivalently large decision times, |z_τ^c + z_0^c| will be large, e^{−|z_τ^c + z_0^c|} will be small, and the nonlinear term in (11) can be approximated by a linear function of z_τ^c + z_0^c obtained using a first-order Taylor series expansion. In all these cases, (11) can be approximated by a sum of two SPRTs. However, this approximation may not hold in general, and we suspect many interesting cases will require us to consider the nonlinear model in (11). In those cases, the signal and noise characteristics of context and target will have different – and we think distinguishable – effects on the RT distributions we measure.

5 The internal-context update and application to a new task

Recall our promise to explore two extremes on the dimension of context and target onset timing, and two extremes on the dimension of context-response dependence. The flanker task is an external context task with a context-independent response, so we now turn to an internal context task with a context-dependent response. This task is the AX Continuous Performance Test (AX-CPT), a task with origins in the psychiatry literature now applied to cognitive control [6]. In this task, subjects are asked to make a response to a probe (target) stimulus, by convention labeled ‘X’ or ‘Y’, where the response mapping is determined by a previously seen cue (context) stimulus, ‘A’ or ‘B’. In our notation: g0 = X, g1 = Y, c0 = A, c1 = B. Unlike the flanker, where all stimulus pairs are equally likely, in the AX-CPT AX trials are usually the most common (appearing 50% of the time or more), and BY trials the least common. AY and BX trials appear with equal frequency, but have dramatically different conditional probabilities due to the preponderance of AX trials.
Two response mappings are used in the literature: an asymmetric one where one response is made on AX trials and the other response otherwise; and a symmetric variant where one response is made to AX and BY trials, and the other to AY and BX trials. We focus on the symmetric variant, since in this case the response is always context-dependent (in the asymmetric variant the response is context-independent on Y trials). We can use the definition of the task to write a new form for r(·):

r(P(C, G)) = r0 = ‘left’ with probability P(G = g0, C = c0) + P(G = g1, C = c1); r1 = ‘right’ with probability P(G = g0, C = c1) + P(G = g1, C = c0)    (12)

We assume for simplicity that the inference process on the context models the maintenance of context information and retrieval of the response rule (though the model could be extended to perceptual encoding of the context as well). That is, we start the inference machine at t_c^off, using the following update when t_c^off ≤ τ ≤ t_g^on:

P_τ(C, G | eC) ∝ P(eC | C, G) P_{τ−1}(C, G)    (13)

Then, once the target appears the update becomes:

P_τ(C, G | eC, eG) ∝ P(eC, eG | C, G) P_{τ−1}(C, G)    (14)

For samples after the context disappears, we introduce a simple decay mechanism wherein the probability with which the context sensor provides a sample from the true context decays exponentially. A sample is drawn from the true context with probability e^{−d(τ − t_c^off)}, and drawn uniformly otherwise. The update takes this into account, such that as τ grows the ratio P(eC | C = c0)/P(eC | C = c1) approaches 1 and the context sensor stops being informative (notation omitted for space). This means that the unconditional posterior of the context can saturate at values other than 1. The remainder of the model is exactly as described above. This provides an opportunity to generate predictions for both tasks in a shared model, something we take up in the final portion of the paper.
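The decay mechanism can be sketched as follows (our own illustrative code; the true-context identity and sensor parameters are hypothetical names):

```python
import numpy as np

# Illustrative sketch of the memory-decay mechanism: after the context offset
# t_off, the sensor samples from the true context with probability
# exp(-d * (tau - t_off)) and from a uniformly chosen context otherwise,
# so context evidence gradually stops being informative.
def sample_context_evidence(rng, true_c, tau, t_off, d, mu_c, sigma):
    p_true = np.exp(-d * (tau - t_off))
    c = true_c if rng.random() < p_true else int(rng.integers(2))
    return rng.normal(mu_c[c], sigma)
```

With d = 0 the sensor never degrades and all samples come from the true context; larger d (or a longer delay) pushes the sample distribution toward an uninformative mixture.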
But first, as in the flanker model, we reduce this model to a combination of multiple instances of the well-understood DDM.

6 Relating the internal context model to the fixed-context drift model

We sketch an intuition for how our internal context model can be built up from a combination of fixed-context drift models (again assuming sensor independence). The derivation uses the same trick of dividing the numerator and denominator by the likelihood as the flanker expressions, and is included in the supplementary material, as is the asymmetric variant. We state the final expression for the symmetric case here:

log Z = log { [P0(C = c0, G = g0) e^{z_τ^c} e^{z_τ^g} + P0(C = c1, G = g1)] / [P0(C = c0, G = g1) e^{z_τ^c} + P0(C = c1, G = g0) e^{z_τ^g}] }    (15)

Equation (15) combines the SPRT statistics associated with the context and the target in a nonlinear fashion that is more complicated than in (11), further complicated by the fact that the memory decay turns the context random walk into an Ornstein-Uhlenbeck process in expectation (notation omitted for space, but this follows from the relationship between continuous O-U and discrete AR(1) processes). The reduction of these equations to an SPRT or the sum of two SPRTs is subtle, and is valid only in rather contrived settings. For example, if the drift rate for the target is much higher than the drift rate for the context, then in the limit of large thresholds (15) can be approximated by either log(P0(C = c0, G = g0)/P0(C = c1, G = g0)) + z_τ^c, or log(P0(C = c1, G = g1)/P0(C = c0, G = g1)) − z_τ^c. As with (11), we think it will be highly instructive to further investigate the cases where these reductions do not apply.

7 Simulation results for both tasks using the same model and parameters

With the relationship between both tasks established via our theory, we can now simulate behavior in both tasks under nearly the same model parameters. The one difference is in the memory component, governed by the memory decay parameter d and the target onset time t_g^on.
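Equation (15) can be checked numerically against the joint posterior it was derived from, since dividing the numerator and denominator of the response-posterior ratio by the (c1, g1) likelihood product yields exactly that form. A sketch with hypothetical prior values (our own code):

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma = 4, 1.0
mu = [0.0, 1.0]                                  # hypothesis means (assumed)
P0 = np.array([[0.5, 0.2], [0.2, 0.1]])          # AX-CPT-style prior over (C, G)
e_c, e_g = rng.normal(0, sigma, T), rng.normal(1, sigma, T)

def logl(e, m):                                  # Gaussian log-likelihood (unnormalized)
    return -0.5 * ((e - m) / sigma) ** 2

z_c = np.sum(logl(e_c, mu[0]) - logl(e_c, mu[1]))
z_g = np.sum(logl(e_g, mu[0]) - logl(e_g, mu[1]))

# Eq. (15), symmetric response mapping:
logZ = np.log((P0[0, 0] * np.exp(z_c + z_g) + P0[1, 1]) /
              (P0[0, 1] * np.exp(z_c) + P0[1, 0] * np.exp(z_g)))

# Direct computation from the joint posterior over (C, G):
post = P0 * np.exp(np.add.outer([np.sum(logl(e_c, m)) for m in mu],
                                [np.sum(logl(e_g, m)) for m in mu]))
logZ_direct = np.log((post[0, 0] + post[1, 1]) / (post[0, 1] + post[1, 0]))
assert np.isclose(logZ, logZ_direct)
```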
Longer intervals between context disappearance and target appearance have the same effect as higher values of d: both degrade the retrieval of the context. We use d = 0.0001 for the decay and a 2000-timestep interval, which results in approximately an 82% probability of drawing a correct sample by the time the target comes on. The effect of the two parameters is equivalent in the results we show, since we do not explore variable context-target delays, but it could be explored by varying this duration. For simplicity we assume the sampling distribution for eC and eG is identical for both tasks, though this need not hold except for identical stimuli sampled from perception. For the flanker simulations we use the model with no spatial uncertainty, i.e. α_µ = α_σ = 0, to best match the AX-CPT model and our analytical connections to the SPRT. We assume the model has a high congruence prior for the flanker model, and the correct prior for the AX-CPT, as detailed in Table 1.

Context (Flanker / AX-CPT)   Target (Flanker / AX-CPT)   Prior (Flanker / AX-CPT)
S_S / A                      S / X                       0.45 / 0.5
S_S / A                      H / Y                       0.05 / 0.2
H_H / B                      S / X                       0.05 / 0.2
H_H / B                      H / Y                       0.45 / 0.1

Table 1: Priors for the inference process for the Flanker and AX-CPT instantiations of our theory.

The remainder of the parameters are identical across both task simulations: σ_c = σ_g = 9, θ = 0.9, µ_c = µ_g = 0 for c0 and g0, and µ_c = µ_g = 1 for c1 and g1. To replicate the flanker results, we followed [5] by introducing a non-decision error parameter γ = 0.03: this is the probability of making a random response immediately at the first timestep. We simulated 100,000 trials for each model. Figure 1 shows results from the simulation of the flanker task, recovering the characteristic early below-chance performance on incongruent trials. This simulation supports the assertion that our theory generalizes the flanker model of [5], though we are not sure why our scale on timesteps appears different by about 5x in spite of using what we think are equivalent parameters.
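Returning to the memory decay above: the quoted ≈82% figure follows directly from the stated parameters:

```python
import math

# Probability that the context sensor still samples from the true context at
# target onset, given decay d = 1e-4 over the 2000-timestep delay.
p_true = math.exp(-1e-4 * 2000)
print(round(p_true, 3))  # → 0.819
```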
A library for simulating tasks that fit in our framework and code for generating all simulation figures in this paper can be found at https://github.com/mshvartsman/cddm. For the AX-CPT behavior, we compare qualitative patterns from our model to a heterogeneous dataset of humans performing this task (n=59) across 4 different manipulations with 200 trials per subject [24]. The manipulations were different variants of “proactive”-behavior inducing manipulations in the sense of [25]. This is the most apt comparison to our model: proactive strategies are argued to involve response preparation of the sort that our model reflects in its accumulation over the context before the target appears. Figure 2 shows mean RTs and accuracies produced by our model for the AX-CPT, under the same parameters that we used for the flanker model. The model recovers the qualitative pattern of behavior seen in human subjects in this task, with both RT and error proportion by condition showing the same pattern. Moreover, if we examine the conditional RT plot (Figure 3) we see that the model predicts a region of below-chance performance early in AY trials but not in other trials. This effect appears isomorphic to the early congruence effect in the flanker task, in the sense that both are caused by a strong prior biased away from the correct response: on incongruent trials given a high congruence prior in the flanker, and on AY trials given a high AX prior in the AX-CPT. More generally, the model recovers conditional accuracy curves that look very similar to those in the human data.

[Figure 1: two panels — accuracy over timesteps and RT density, each for congruent vs. incongruent trials.]

Figure 1: Model recovers characteristic flanker pattern.
Left: accuracy computed by 50-timestep RT bin for congruent and incongruent trials, showing early below-chance performance. Right: response time distributions for congruent and incongruent trials, showing the same mode but a fatter tail for incongruent relative to congruent trials. Both are signature phenomena in the flanker task previously recovered by the model of Yu and colleagues, consistent with our theory being a generalization of their model.

[Figure 2: four panels — RT by trial type (model), errors by trial type (model), RT by trial type (humans), errors by trial type (humans), over trial types AX, AY, BX, BY.]

Figure 2: Model recovers gross RT patterns in human behavior. Left: RT and error rates by trial type in the model, using the same parameters as the flanker model. Right: RT and error rates by trial type in 59 human participants. Error bars are standard errors (where not visible, they are smaller than the dots). Both RT and error patterns are quite similar (note that the timestep-to-ms mapping need not be one-to-one).

8 Discussion

In this paper, we have provided a theoretical framework for understanding decision making under dynamically shifting context. We used this framework to derive models of two distinct tasks from the cognitive control literature, one a notational equivalent of a previous model and the other a novel model of a well-established task. We showed how we can write these models in terms of combinations of constant-drift random walks. Most importantly, we showed how two models derived from our theoretical framing can recover RT, error, and RT-conditional accuracy patterns seen in human data without a change of parameters between tasks and task models.
[Figure 3: conditional accuracy over timesteps for trial types AX, AY, BX, BY; model (left) and humans (right).]

Figure 3: Model recovers conditional accuracy pattern in human behavior. Left: accuracy computed by 50-timestep RT bin for the four trial types, using the same parameters as the flanker model. Right: the same plot from 59 human participants (see text for details). Bins with fewer than 50 observations are omitted. Error bars are standard errors (where not visible, they are smaller than the dots). Both plots show qualitatively similar patterns. Two discrepancies are of note: first, the model predicts very early AY responses to be more accurate than slightly later responses, and early B responses to be close to chance. We think at least part of this is due to the non-decision error γ, but we retained it for consistency with the flanker model. Second, the humans show slightly better BY than BX performance early on, something the model does not recover. We think this may have to do with a global left-response bias that the model is somehow not capturing. Note: the abscissae are in different units (though they correspond surprisingly well).

Our results are quantitatively robust to small changes in the prior because equations (12) and (15) are smooth functions of the prior. The early incongruent errors in the flanker are also robust to larger changes, as long as the congruence prior is above 0.5. The ordering of RTs and error rates for the AX-CPT relies on assuming that participants at least learn the correct ordering of trial frequencies – we think an uncontroversial assumption. One natural next step would be to generate direct quantitative predictions of behavior in one task based on a model trained on another task – ideally on an individual subject level, and in a task
that fits in our framework and has not been extensively explored (for example, an internal-context Flanker variant, or a context-dependent response congruence judgment task). The main challenge in pursuing this kind of analysis is our ability to efficiently estimate and explore these models, which, unlike the fixed-context models, have no closed-form analytic expressions or fast approximations. We believe that approximations such as those provided for the flanker model [22] can and should be applied within our framework, both as a way to generate more efficient data fits, and as a way to apply the tools of dynamical systems analysis to the overall behavior of the system. Of particular interest is whether some points in the task space defined in our framework map onto existing descriptive decision models [e.g. 3]. Another natural next step is to seek evidence of our proposed form of integrator in neural data, or to investigate plausible neural implementations of or approximations to it. One way of doing so is computing time-varying tuning curves of neural populations in different regions to the individual components of the accumulators we propose in equations (11) and (15). Another is to find connectivity patterns that perform the log-sum computation we hypothesize happens in the integrator. A third is to look for components correlated with them in EEG data. All of these methods have some promise, as they have been successfully applied to the fixed context model [9, 10, 26]. Such neural data would not only test a prediction of our theory, but also – via the brain locations found to be correlated – address questions we presently do not address, such as whether the dynamic weighting happens at the sampler or further upstream (i.e. whether unreliable evidence is gated at the sampler or discounted at the integrator).
A second key challenge, given our focus on optimal inference, is the fact that the fixed threshold decision rule we use is suboptimal for the case of non-identically distributed observations. While the likelihoods of context and target are independent in our simulations, the likelihoods of the two responses are not identically distributed. The optimal threshold is generally time-varying for this case [27], though the specific form is not known. Finally, while our model recovers RT-conditional accuracies and stimulus-conditional RT and accuracy patterns, it fails to recover the correct pattern of accuracy-conditional RTs. That is, it predicts much faster errors than correct responses on average. Future work will need to investigate whether this is caused by qualitative or quantitative aspects of the theoretical framework and model.

References

[1] D. R. J. Laming, Information theory of choice-reaction times. London: Academic Press, 1968.
[2] R. Ratcliff, “A theory of memory retrieval,” Psychological Review, vol. 85, no. 2, pp. 59–108, 1978.
[3] M. Usher and J. L. McClelland, “The time course of perceptual choice: The leaky, competing accumulator model,” Psychological Review, vol. 108, no. 3, pp. 550–592, 2001.
[4] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen, “The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks,” Psychological Review, vol. 113, pp. 700–65, Oct. 2006.
[5] A. J. Yu, P. Dayan, and J. D. Cohen, “Dynamics of attentional selection under conflict: toward a rational Bayesian account,” Journal of Experimental Psychology: Human Perception and Performance, vol. 35, pp. 700–17, June 2009.
[6] D. Servan-Schreiber, J. D. Cohen, and S. Steingard, “Schizophrenic Deficits in the Processing of Context,” Archives of General Psychiatry, vol. 53, p. 1105, Dec. 1996.
[7] A. Wald and J.
Wolfowitz, “Optimum Character of the Sequential Probability Ratio Test,” The Annals of Mathematical Statistics, vol. 19, pp. 326–339, Sept. 1948.
[8] S. Kira, T. Yang, and M. N. Shadlen, “A Neural Implementation of Wald’s Sequential Probability Ratio Test,” Neuron, vol. 85, pp. 861–873, Feb. 2015.
[9] R. Bogacz and K. N. Gurney, “The basal ganglia and cortex implement optimal decision making between alternative actions,” Neural Computation, vol. 19, pp. 442–77, Feb. 2007.
[10] M. K. van Vugt, P. Simen, L. E. Nystrom, P. Holmes, and J. D. Cohen, “EEG oscillations reveal neural correlates of evidence accumulation,” Frontiers in Neuroscience, vol. 6, p. 106, Jan. 2012.
[11] B. M. Turner, L. van Maanen, and B. U. Forstmann, “Informing cognitive abstractions through neuroimaging: The neural drift diffusion model,” Psychological Review, vol. 122, no. 2, pp. 312–336, 2015.
[12] D. Norris, “The Bayesian reader: explaining word recognition as an optimal Bayesian decision process,” Psychological Review, vol. 113, pp. 327–357, Apr. 2006.
[13] K.-F. Wong and X.-J. Wang, “A recurrent network mechanism of time integration in perceptual decisions,” The Journal of Neuroscience, vol. 26, no. 4, pp. 1314–1328, 2006.
[14] P. I. Frazier and A. J. Yu, “Sequential hypothesis testing under stochastic deadlines,” Advances in Neural Information Processing Systems, pp. 1–8, 2008.
[15] J. Drugowitsch, R. Moreno-Bote, A. K. Churchland, M. N. Shadlen, and A. Pouget, “The cost of accumulating evidence in perceptual decision making,” The Journal of Neuroscience, vol. 32, pp. 3612–28, Mar. 2012.
[16] N. Srivastava and P. Schrater, “Rational inference of relative preferences,” in Advances in Neural Information Processing Systems 25, pp. 2312–2320, 2012.
[17] R. C. O’Reilly and M. J. Frank, “Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia,” Neural Computation, vol. 18, pp.
283–328, Feb. 2006.
[18] J. P. Sheppard, D. Raposo, and A. K. Churchland, “Dynamic weighting of multisensory stimuli shapes decision-making in rats and humans,” Journal of Vision, vol. 13, no. 6, pp. 1–19, 2013.
[19] J. R. Stroop, “Studies of interference in serial verbal reactions,” Journal of Experimental Psychology, vol. 18, no. 6, pp. 643–662, 1935.
[20] G. Gratton, M. G. Coles, E. J. Sirevaag, C. W. Eriksen, and E. Donchin, “Pre- and poststimulus activation of response channels: a psychophysiological analysis,” Journal of Experimental Psychology: Human Perception and Performance, vol. 14, no. 3, pp. 331–344, 1988.
[21] J. R. Simon and J. D. Wolf, “Choice reaction time as a function of angular stimulus-response correspondence and age,” Ergonomics, vol. 6, pp. 99–105, Jan. 1963.
[22] Y. S. Liu, A. Yu, and P. Holmes, “Dynamical analysis of Bayesian inference models for the Eriksen task,” Neural Computation, vol. 21, pp. 1520–53, June 2009.
[23] T. D. Hanks, M. E. Mazurek, R. Kiani, E. Hopp, and M. N. Shadlen, “Elapsed decision time affects the weighting of prior probability in a perceptual decision task,” The Journal of Neuroscience, vol. 31, pp. 6339–52, Apr. 2011.
[24] O. Lositsky, R. C. Wilson, M. Shvartsman, and J. D. Cohen, “A Drift Diffusion Model of Proactive and Reactive Control in a Context-Dependent Two-Alternative Forced Choice Task,” in The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, pp. 103–107, 2015.
[25] T. S. Braver, “The variable nature of cognitive control: a dual mechanisms framework,” Trends in Cognitive Sciences, vol. 16, pp. 106–13, Feb. 2012.
[26] T. D. Hanks, C. D. Kopec, B. W. Brunton, C. A. Duan, J. C. Erlich, and C. D. Brody, “Distinct relationships of parietal and prefrontal cortices to evidence accumulation,” Nature, 2015.
[27] Y. Liu and S. Blostein, “Optimality of the sequential probability ratio test for nonstationary observations,” IEEE Transactions on Information Theory, vol. 38, no. 1, pp.
177–182, 1992.
Particle Gibbs for Infinite Hidden Markov Models

Nilesh Tripuraneni* (University of Cambridge, nt357@cam.ac.uk)
Shixiang Gu* (University of Cambridge and MPI for Intelligent Systems, sg717@cam.ac.uk)
Hong Ge (University of Cambridge, hg344@cam.ac.uk)
Zoubin Ghahramani (University of Cambridge, zoubin@eng.cam.ac.uk)

Abstract

Infinite Hidden Markov Models (iHMM’s) are an attractive, nonparametric generalization of the classical Hidden Markov Model which can automatically infer the number of hidden states in the system. However, due to the infinite-dimensional nature of the transition dynamics, performing inference in the iHMM is difficult. In this paper, we present an infinite-state Particle Gibbs (PG) algorithm to resample state trajectories for the iHMM. The proposed algorithm uses an efficient proposal optimized for iHMMs and leverages ancestor sampling to improve the mixing of the standard PG algorithm. Our algorithm demonstrates significant convergence improvements on synthetic and real world data sets.

1 Introduction

Hidden Markov Models (HMM’s) are among the most widely adopted latent-variable models used to model time-series datasets in the statistics and machine learning communities. They have also been successfully applied in a variety of domains including genomics, language, and finance where sequential data naturally arises [Rabiner, 1989; Bishop, 2006]. One possible disadvantage of the finite-state space HMM framework is that one must a priori specify the number of latent states K. Standard model selection techniques can be applied to the finite state-space HMM but bear a high computational overhead since they require the repetitive training and exploration of many HMM’s of different sizes. Bayesian nonparametric methods offer an attractive alternative to this problem by adapting their effective model complexity to fit the data. In particular, Beal et al.
[2001] constructed an HMM over a countably infinite state-space using a Hierarchical Dirichlet Process (HDP) prior over the rows of the transition matrix. Various approaches have been taken to perform full posterior inference over the latent states, transition and emission distributions, and hyperparameters, since it is impossible to directly apply the forward-backward algorithm due to the infinite-dimensional size of the state space. The original Gibbs sampling approach proposed in Teh et al. [2006] suffered from slow mixing due to the strong correlations between nearby time steps often present in time-series data [Scott, 2002]. However, Van Gael et al. [2008] introduced a set of auxiliary slice variables to dynamically “truncate” the state space to be finite (referred to as beam sampling), allowing them to use dynamic programming to jointly resample the latent states, thus circumventing the problem. Despite the power of the beam-sampling scheme, Fox et al. [2008] found that application of the beam sampler to the (sticky) iHMM resulted in slow mixing relative to an inexact, blocked sampler due to the introduction of auxiliary slice variables in the sampler.

*Equal contribution.

The main contributions of this paper are to derive an infinite-state PG algorithm for the iHMM using the stick-breaking construction for the HDP, and to construct an optimal importance proposal to efficiently resample its latent state trajectories. The proposed algorithm is compared to existing state-of-the-art inference algorithms for iHMMs, and empirical evidence suggests that the infinite-state PG algorithm consistently outperforms its alternatives. Furthermore, by construction the time complexity of the proposed algorithm is O(TNK). Here T denotes the length of the sequence, N denotes the number of particles in the PG sampler, and K denotes the number of “active” states in the model.
Despite the simplicity of the sampler, we find in a variety of synthetic and real-world experiments that these particle methods dramatically improve the convergence of the sampler, while being more scalable. We will first define the iHMM/sticky iHMM in Section 2, and review the Dirichlet Process (DP) and Hierarchical Dirichlet Process (HDP) in our appendix. Then we move on to the description of our MCMC sampling scheme in Section 3. In Section 4 we present our results on a variety of synthetic and real-world datasets.

2 Model and Notation

2.1 Infinite Hidden Markov Models

We can formally define the iHMM (we review the theory of the HDP in our appendix) as follows:

β ∼ GEM(γ),  π_j | β ∼iid DP(α, β),  φ_j ∼iid H,  j = 1, . . . , ∞
s_t | s_{t−1} ∼ Cat(· | π_{s_{t−1}}),  y_t | s_t ∼ f(· | φ_{s_t}),  t = 1, . . . , T.    (1)

Here β is the shared DP measure defined on the integers Z. Here s_{1:T} = (s_1, ..., s_T) are the latent states of the iHMM, y_{1:T} = (y_1, ..., y_T) are the observed data, and φ_j parametrizes the emission distribution f. Usually H and f are chosen to be conjugate to simplify the inference. β_{k′} can be interpreted as the prior mean for transition probabilities into state k′, with α governing the variability of the prior mean across the rows of the transition matrix. The hyper-parameter γ controls how concentrated or diffuse the probability mass of β will be over the states of the transition matrix. To connect the HDP with the iHMM, note that given a draw from the HDP, G_k = Σ_{k′=1}^{∞} π_{kk′} δ_{φ_{k′}}, we identify π_{kk′} with the transition probability from state k to state k′, where the φ_{k′} parametrize the emission distributions. Note that fixing β = (1/K, ..., 1/K, 0, 0, ...) implies that only transitions between the first K states of the transition matrix are ever possible, leaving us with the finite Bayesian HMM.
If we define a finite, hierarchical Bayesian HMM by drawing

    β ∼ Dir(γ/K, ..., γ/K),   π_k ∼ Dir(αβ),    (2)

with joint density over the latent and observed states

    p_φ(s_{1:T}, y_{1:T}) = Π_{t=1}^T π(s_t | s_{t−1}) f_φ(y_t | s_t),

then after taking K → ∞, the hierarchical prior in Equation (2) approaches the HDP.

Figure 1: Graphical model for the sticky HDP-HMM (setting κ = 0 recovers the HDP-HMM).

2.2 Prior and Emission Distribution Specification

The hyperparameter α governs the variability of the prior mean across the rows of the transition matrix, and γ controls how concentrated or diffuse the probability mass of β will be over the states of the transition matrix. However, in the HDP-HMM each row of the transition matrix is drawn as π_j ∼ DP(α, β). Thus the HDP prior doesn’t differentiate self-transitions from jumps between different states. This can be especially problematic in the non-parametric setting, since non-Markovian state persistence in data can lead to the creation of unnecessary extra states and unrealistically rapid switching dynamics in our model. In Fox et al. [2008], this problem is addressed by including a self-transition bias parameter in the distribution of the transition probability vector π_j:

    π_j ∼ DP(α + κ, (αβ + κδ_j) / (α + κ))    (3)

to incorporate prior beliefs that smooth, state-persistent dynamics are more probable. Such a construction only involves the introduction of one further hyperparameter κ, which controls the “stickiness” of the transition matrix (a similar self-transition bias was explored in Beal et al. [2001]). For the standard iHMM, most approaches to inference have placed vague gamma hyper-priors on the hyperparameters α and γ, which can be resampled efficiently as in Teh et al. [2006]. Similarly, in the sticky iHMM, in order to maintain tractable resampling of hyperparameters, Fox et al. [2008] chose to place vague gamma priors on γ, α + κ, and a beta prior on κ/(α + κ). In this work we follow Teh et al. [2006]; Fox et al.
[2008] and place priors γ ∼ Gamma(a_γ, b_γ), α + κ ∼ Gamma(a_s, b_s), and κ/(α + κ) ∼ Beta(a_κ, b_κ) on the hyperparameters. We consider two conjugate emission models for the output states of the iHMM: a multinomial emission distribution for discrete data, and a normal emission distribution for continuous data. For discrete data we choose φ_k ∼ Dir(α_φ) with f(· | φ_{s_t}) = Cat(· | φ_k). For continuous data we choose φ_k = (µ, σ²) ∼ NIG(µ, λ, α_φ, β_φ) with f(· | φ_{s_t}) = N(· | φ_k = (µ, σ²)).

3 Posterior Inference for the iHMM

Let us first recall the collection of variables we need to sample: β is the shared DP base measure, (π_k) is the transition matrix acting on the latent states, and φ_k parametrizes the emission distribution f, for k = 1, . . . , K. We can then resample the variables of the iHMM in a series of Gibbs steps:

Step 1: Sample s_{1:T} | y_{1:T}, φ_{1:K}, β, π_{1:K}.
Step 2: Sample β | s_{1:T}, γ.
Step 3: Sample π_{1:K} | β, α, κ, s_{1:T}.
Step 4: Sample φ_{1:K} | y_{1:T}, s_{1:T}, H.
Step 5: Sample (α, γ, κ) | s_{1:T}, β, π_{1:K}.

Due to the strongly correlated nature of time-series data, resampling the latent hidden states in Step 1 is often the most difficult, since the other variables can be sampled via the Gibbs sampler once a sample of s_{1:T} has been obtained. In the following section, we describe a novel efficient sampler for the latent states s_{1:T} of the iHMM, and refer the reader to our appendix and Teh et al. [2006]; Fox et al. [2008] for a detailed discussion of the steps for sampling the variables α, γ, κ, β, π_{1:K}, φ_{1:K}.

3.1 Infinite State Particle Gibbs Sampler

Within the Particle MCMC framework of Andrieu et al. [2010], Sequential Monte Carlo (or particle filtering) is used as a complex, high-dimensional proposal for the Metropolis–Hastings algorithm. The Particle Gibbs sampler is a conditional SMC algorithm resulting from clamping one particle to an a priori fixed trajectory. In particular, it is a transition kernel that has p(s_{1:T} | y_{1:T}) as its stationary distribution.
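Schematically, one full sweep over Steps 1-5 is just a loop over conditional sampling updates. The skeleton below is hypothetical: the `steps` dictionary and its keys are illustrative, with the `"s"` entry standing in for the particle Gibbs kernel of Section 3.1.

```python
def gibbs_sweep(y, state, steps):
    """One sweep of the sampler of Steps 1-5: each entry of `steps` maps a
    variable name to a callable (y, state) -> new value implementing the
    corresponding conditional Gibbs update."""
    for key in ("s", "beta", "pi", "phi", "hyper"):  # Steps 1..5, in order
        state[key] = steps[key](y, state)
    return state
```

Step 1 is the expensive one; the remaining conditionals are standard and reviewed in the appendix.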
The key to constructing a generic, truncation-free sampler for the iHMM to resample the latent states s_{1:T} is to note that the finite number of particles in the sampler are “localized” in the latent space to a finite subset of the infinite set of possible states. Moreover, they can only transition to finitely many new states as they are propagated through the forward pass. Thus the “infinite” measure β and the “infinite” transition matrix π only need to be instantiated to support the number of “active” states (defined as being {1, ..., K}) in the state space. In the particle Gibbs algorithm, if a particle transitions to a state outside the “active” set, the objects β and π can be lazily expanded via the stick-breaking constructions derived for both objects in Teh et al. [2006] and stated in equations (2), (4) and (5). Thus, due to the properties of both the stick-breaking construction and the PGAS kernel, this resampling procedure will leave the target distribution p(s_{1:T} | y_{1:T}) invariant. Below we first describe our infinite-state particle Gibbs algorithm for the iHMM and then detail our notation (we provide further background on SMC in our supplement):

Step 1: For iteration t = 1, initialize as:
  (a) sample s^i_1 ∼ q_1(·), for i = 1, ..., N.
  (b) initialize weights w^i_1 = p(s^i_1) f(y_1 | s^i_1) / q_1(s^i_1), for i = 1, ..., N.

Step 2: For iterations t > 1, use the trajectory s′_{1:T} from iteration t − 1, β, π, φ, and K:
  (a) sample the index a^i_{t−1} ∼ Cat(· | W^{1:N}_{t−1}) of the ancestor of particle i, for i = 1, ..., N − 1.
  (b) sample s^i_t ∼ q_t(· | s^{a^i_{t−1}}_{t−1}) for i = 1, ..., N − 1. If s^i_t = K + 1, create a new state using the stick-breaking construction for the HDP:
    (i) Sample a new transition probability vector π_{K+1} ∼ Dir(αβ).
    (ii) Use the stick-breaking construction to iteratively expand β ← [β, β_{K+1}] as:
        β′_{K+1} ∼ Beta(1, γ),   β_{K+1} = β′_{K+1} Π_{ℓ=1}^K (1 − β′_ℓ).
    (iii) Expand the transition probability vectors (π_k), k = 1, . . .
, K + 1, to include transitions to the (K + 1)st state via the HDP stick-breaking construction as:
        π_j ← [π_{j1}, π_{j2}, . . . , π_{j,K+1}],   ∀ j = 1, . . . , K + 1,
      where
        π′_{j,K+1} ∼ Beta(αβ_{K+1}, α(1 − Σ_{ℓ=1}^{K+1} β_ℓ)),   π_{j,K+1} = π′_{j,K+1} Π_{ℓ=1}^K (1 − π′_{jℓ}).
    (iv) Sample a new emission parameter φ_{K+1} ∼ H.
  (c) compute the ancestor weights w̃^i_{t−1|T} = w^i_{t−1} π(s′_t | s^i_{t−1}) and resample a^N_t as P(a^N_t = i) ∝ w̃^i_{t−1|T}.
  (d) recompute and normalize the particle weights using:
        w_t(s^i_t) = π(s^i_t | s^{a^i_{t−1}}_{t−1}) f(y_t | s^i_t) / q_t(s^i_t | s^{a^i_{t−1}}_{t−1}),
        W_t(s^i_t) = w_t(s^i_t) / Σ_{i=1}^N w_t(s^i_t).

Step 3: Sample k with P(k = i) ∝ w^i_T and return s*_{1:T} = s^k_{1:T}.

In the particle Gibbs sampler, at each step t a weighted particle system {s^i_t, w^i_t}_{i=1}^N serves as an empirical point-mass approximation to the distribution p(s_{1:T}), with the variables a^i_t denoting the ‘ancestor’ particles of s^i_t. Here we have used π(s_t | s_{t−1}) to denote the latent transition distribution, f(y_t | s_t) the emission distribution, and p(s_1) the prior over the initial state s_1.

3.2 More Efficient Importance Proposal q_t(·)

In the PG algorithm described above, we have a choice of the importance sampling density q_t(·) to use at every time step. The simplest choice is to sample from the “prior” – q_t(· | s^{a^i_{t−1}}_{t−1}) = π(s^i_t | s^{a^i_{t−1}}_{t−1}) – which can lead to satisfactory performance when the observations are not too informative and the dimension of the latent variables is not too large. However, using the prior as importance proposal in particle MCMC is known to be suboptimal. In order to improve the mixing rate of the sampler, it is desirable to sample from the partial “posterior” – q_t(· | s^{a^i_{t−1}}_{t−1}) ∝ π(s^i_t | s^{a^i_{t−1}}_{t−1}) f(y_t | s^i_t) – whenever possible. In general, sampling from the “posterior” may be impossible, but in the iHMM we can show that it is analytically tractable.
To see this, note that we have lazily represented π(· | s^n_{t−1}) as a finite vector [π_{s^n_{t−1}, 1:K}, π_{s^n_{t−1}, K+1}]. Moreover, we can easily evaluate the likelihood f(y_t | s^n_t, φ_{1:K}) for all s^n_t ∈ {1, ..., K}. However, if s^n_t = K + 1, we need to compute f(y_t | s^n_t = K + 1) = ∫ f(y_t | s^n_t = K + 1, φ) H(φ) dφ. If f and H are conjugate, we can analytically compute the marginal likelihood of the (K + 1)st state, but this can also be approximated by Monte Carlo sampling for non-conjugate likelihoods – see Neal [2000] for a more detailed discussion of this argument. Thus, we can compute p(y_t | s^n_{t−1}) = Σ_{k=1}^{K+1} π(k | s^n_{t−1}) f(y_t | φ_k) for each particle s^n_t, where n = 1, ..., N − 1. We investigate the impact of “posterior” vs. “prior” proposals in Figure 5. Based on the convergence of the number of states and the joint log-likelihood, we can see that sampling from the “posterior” improves the mixing of the sampler. Indeed, we see from the “prior” sampling experiments that increasing the number of particles from N = 10 to N = 50 does seem to marginally improve the mixing of the sampler, but we have found N = 10 particles sufficient to obtain good results. However, we found no appreciable gain when increasing the number of particles from N = 10 to N = 50 when sampling from the “posterior”, and omitted those curves for clarity. It is worth noting that the PG sampler (with ancestor resampling) still performs reasonably even when sampling from the “prior”.

3.3 Improving Mixing via Ancestor Resampling

It has been recognized that the mixing properties of the PG kernel can be poor due to path degeneracy [Lindsten et al., 2014]. A variant of PG presented in Lindsten et al. [2014] attempts to address this problem for any non-Markovian state-space model with a modification: resample a new value for the variable a^N_t in an “ancestor sampling” step at every time step, which can significantly improve the mixing of the PG kernel with little extra computation in the case of Markovian systems.
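Returning briefly to the proposal of Section 3.2: for a Gaussian emission model, the “posterior” proposal reduces to a few vectorized operations. This is a sketch under illustrative assumptions — a known emission standard deviation `sigma`, and `marg_lik_new` standing for the precomputed marginal likelihood of the (K + 1)st state.

```python
import numpy as np

def posterior_proposal(pi_row, phi, y_t, sigma=0.5, marg_lik_new=None):
    """q_t(s) proportional to pi(s | s_{t-1}) * f(y_t | s) over the K active
    states, optionally including the lazily represented (K+1)st state via its
    prior marginal likelihood integral f(y | phi) H(dphi)."""
    lik = np.exp(-0.5 * ((y_t - phi) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    if marg_lik_new is not None:
        lik = np.append(lik, marg_lik_new)
    w = pi_row * lik          # unnormalized q_t; its sum is p(y_t | s_{t-1})
    return w / w.sum()
```

The normalizing constant `w.sum()` is exactly the quantity p(y_t | s^n_{t−1}) used above.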
To understand ancestor sampling, for t ≥ 2 consider the reference trajectory s′_{t:T} ranging from the current time step t to the final time T. We can artificially assign a candidate history to this partial path by connecting s′_{t:T} to one of the other particles’ histories up until that point, {s^i_{1:t−1}}_{i=1}^N, which is achieved by simply assigning a new value to the variable a^N_t ∈ {1, ..., N}. To do this, we first compute the weights

    w̃^i_{t−1|T} ≡ w^i_{t−1} · p_T(s^i_{1:t−1}, s′_{t:T} | y_{1:T}) / p_{t−1}(s^i_{1:t−1} | y_{1:T}),   i = 1, ..., N.    (4)

Then a^N_t is sampled according to P(a^N_t = i) ∝ w̃^i_{t−1|T}. Remarkably, this ancestor sampling step leaves the density p(s_{1:T} | y_{1:T}) invariant, as shown in Lindsten et al. [2014] for arbitrary, non-Markovian state-space models. However, since the infinite HMM is Markovian, we can show that the computation of the ancestor sampling weights simplifies to

    w̃^i_{t−1|T} = w^i_{t−1} π(s′_t | s^i_{t−1}).    (5)

Note that the ancestor sampling step does not change the O(TNK) time complexity of the infinite-state PG sampler.

3.4 Resampling π, φ, β, α, γ, and κ

Our resampling scheme for π, β, φ, α, γ, and κ follows straightforwardly from the schemes in Fox et al. [2008]; Teh et al. [2006]. We present a review of their methods and related work in our appendix for completeness.

4 Empirical Study

In the following experiments we explore the performance of the PG sampler on both the iHMM and the sticky iHMM. Note that throughout this section we have only taken N = 10 and N = 50 particles for the PG sampler, which has time complexity O(TNK) when sampling from the “posterior”, compared to the time complexity O(TK²) of the beam sampler. For completeness, we also compare to the Gibbs sampler, which has been shown to perform worse than the beam sampler [Van Gael et al., 2008] due to strong correlations in the latent states.
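Two ingredients of the sampler — the lazy stick-breaking expansion of step 2(b) and the ancestor-sampling weight of Eq. (5) — can be sketched as follows. This is a hypothetical sketch: each transition row is stored with an explicit remainder mass `pi_rest`, so rows can be grown one state at a time; the tiny `1e-12` offsets are numerical guards, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(1)

def expand_state(beta, beta_rest, pi, pi_rest, alpha, gamma):
    """Lazily create state K+1: split the shared remainder stick of beta,
    split each row's remainder mass, and draw a fresh outgoing row."""
    b = rng.beta(1.0, gamma)                      # beta'_{K+1} ~ Beta(1, gamma)
    beta = np.append(beta, b * beta_rest)
    beta_rest = (1.0 - b) * beta_rest
    # pi'_{j,K+1} ~ Beta(alpha*beta_{K+1}, alpha*(1 - sum_{l<=K+1} beta_l))
    p = rng.beta(alpha * beta[-1] + 1e-12, alpha * beta_rest + 1e-12,
                 size=pi.shape[0])
    pi = np.hstack([pi, (p * pi_rest)[:, None]])
    pi_rest = (1.0 - p) * pi_rest
    # fresh outgoing row for the new state: pi_{K+1} ~ Dir(alpha * beta)
    row = rng.dirichlet(alpha * np.append(beta, beta_rest) + 1e-12)
    pi = np.vstack([pi, row[None, :-1]])
    pi_rest = np.append(pi_rest, row[-1])
    return beta, beta_rest, pi, pi_rest

def ancestor_resample(w_prev, pi, s_prev, s_ref):
    """Eq. (5): resample a^N_t with P(a^N_t = i) ~ w^i_{t-1} * pi[s^i_{t-1}, s'_t]."""
    w_tilde = w_prev * pi[s_prev, s_ref]
    return rng.choice(len(w_prev), p=w_tilde / w_tilde.sum())
```

Each row of `pi` together with its `pi_rest` entry always sums to one, so the lazily represented transition matrix stays a valid conditional distribution after every expansion.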
4.1 Convergence on Synthetic Data

To study the mixing properties of the PG sampler on the iHMM and sticky iHMM, we consider two synthetic examples with strongly positively correlated latent states.

Figure 2: Comparing the performance of the PG sampler, the PG sampler on the sticky iHMM (PG-S), the beam sampler, and the Gibbs sampler on inferring data from a 4-state strongly correlated HMM. Left: number of “active” states K vs. iterations. Right: joint log-likelihood vs. iterations. (Best viewed in color.)

Figure 3: Learned latent transition matrices for the PG sampler and beam sampler vs. ground truth (transition matrix for the Gibbs sampler omitted for clarity). PG correctly recovers the strongly correlated self-transition matrix, while the beam sampler supports extra “spurious” states in the latent space.

First, as in Van Gael et al. [2008], we generate sequences of length 4000 from a 4-state HMM with self-transition probability 0.75 and the residual probability mass distributed uniformly over the remaining states, where the emission distributions are taken to be normal with fixed standard deviation 0.5 and emission means −2.0, −0.5, 1.0, 4.0 for the 4 states. The base distribution H for the iHMM is taken to be normal with mean 0 and standard deviation 2, and we initialized the sampler with K = 10 “active” states. In the 4-state case, we see in Figure 2 that the PG sampler applied to both the iHMM and the sticky iHMM converges to the “true” value of K = 4 much more quickly than both the beam sampler and the Gibbs sampler – uncovering the model dimensionality and the structure of the transition matrix by more rapidly eliminating spurious “active” states from the space, as evidenced in the learned transition matrix plots in Figure 3.
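The 4-state synthetic dataset described above is easy to reproduce; the sketch below follows the stated specification (the random seed is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_hmm(T=4000, means=(-2.0, -0.5, 1.0, 4.0), self_p=0.75, sd=0.5):
    """Strongly self-correlated HMM: self-transition probability self_p, residual
    mass uniform over the other states, Gaussian emissions with std dev sd."""
    K = len(means)
    P = np.full((K, K), (1.0 - self_p) / (K - 1))
    np.fill_diagonal(P, self_p)
    s = np.empty(T, dtype=int)
    s[0] = rng.integers(K)
    for t in range(1, T):
        s[t] = rng.choice(K, p=P[s[t - 1]])
    y = rng.normal(np.asarray(means)[s], sd)
    return s, y
```

The 10-state dataset used below is obtained the same way, with emission means spaced 2.0 apart between −10 and 10.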
Moreover, as evidenced by the joint log-likelihood in Figure 2, we see that the PG sampler applied to both the iHMM and the sticky iHMM quickly converges to a good mode, while the beam sampler has not fully converged within 1000 iterations and the Gibbs sampler performs poorly. To further explore the mixing of the PG sampler vs. the beam sampler, we consider a similar inference problem on synthetic data over a larger state space. We generate sequences of length 4000 from a 10-state HMM with self-transition probability 0.75 and the residual probability mass distributed uniformly over the remaining states, and take the emission distributions to be normal with fixed standard deviation 0.5 and means equally spaced 2.0 apart between −10 and 10. The base distribution H for the iHMM is again taken to be normal with mean 0 and standard deviation 2. The samplers were initialized with K = 3 and K = 30 states to explore the convergence and robustness of the infinite-state PG sampler vs. the beam sampler.

Figure 4: Comparing the performance of the PG sampler vs. the beam sampler on inferring data from a 10-state strongly correlated HMM with different initializations. Left: number of “active” states K from different initial K vs. iterations. Right: joint log-likelihood from different initial K vs.
iterations.

Figure 5: Influence of “posterior” vs. “prior” proposal and the number of particles in the PG sampler on the iHMM. Left: number of “active” states K from different initial K, numbers of particles, and “prior”/“posterior” proposal vs. iterations. Right: joint log-likelihood from different initial K, numbers of particles, and “prior”/“posterior” proposal vs. iterations.

As observed in Figure 4, we see that the PG sampler applied to the iHMM and sticky iHMM converges far more quickly from both the “small” and “large” initializations of K = 3 and K = 30 “active” states to the true value of K = 10 hidden states, as well as converging more quickly in joint log-likelihood. Indeed, as noted in Fox et al. [2008], the introduction of the extra slice variables in the beam sampler can inhibit the mixing of the sampler, since for the beam sampler to consider transitions with low prior probability one must also have sampled an unlikely corresponding slice variable so as not to have truncated that state out of the space. This can become particularly problematic if one needs to consider several of these transitions in succession. We believe this provides evidence that the infinite-state Particle Gibbs sampler presented here, which does not introduce extra slice variables, mixes better than beam sampling in the iHMM.

4.2 Ion Channel Recordings

For our first real dataset, we investigate the behavior of the PG sampler and beam sampler on an ion channel recording. In particular, we consider a 1 MHz recording from Rosenstein et al. [2013] of a single alamethicin channel, previously investigated in Palla et al. [2014].
We subsample the time series by a factor of 100, truncate it to length 2000, and further log-transform and normalize it. We ran both the beam and PG samplers on the iHMM for 1000 iterations (until we observed convergence in the joint log-likelihood). Due to the large fluctuations in the observed time series, the beam sampler infers the number of “active” hidden states to be K = 5, while the PG sampler infers K = 4. However, in Figure 6 we see that the beam sampler infers a solution for the latent states which rapidly oscillates between a subset of likely states during temporal regions that intuitively seem better explained by a single state. The PG sampler, by contrast, has converged to a mode which seems to better represent the latent transition dynamics, and only seems to infer “extra” states in the regions of large fluctuation. Indeed, this suggests that the beam sampler is mixing worse than the PG sampler.

Figure 6: Left: observations colored by an inferred latent trajectory using beam sampling inference. Right: observations colored by an inferred latent state trajectory using PG inference.

4.3 Alice in Wonderland Data

For our next example we consider the task of predicting sequences of letters taken from Alice’s Adventures in Wonderland. We trained an iHMM on the 1000 characters from the first chapter of the book, and tested on 4000 subsequent characters from the same chapter, using a multinomial emission model for the iHMM. Once again, we see that the PG sampler applied to the iHMM/sticky iHMM converges quickly in joint log-likelihood to a mode where it stably learns a value of K ≈ 10, as evidenced in Figure 7. Though the performance of the PG and beam samplers appears to be roughly comparable here, we would like to highlight two observations.
Firstly, the inferred value of K obtained by the PG sampler quickly converges independently of the initialization of K, as shown in the rightmost panel of Figure 7. However, the beam sampler’s prediction for the number of active states K still appears to be decreasing and fluctuates more rapidly than for both the iHMM and sticky iHMM, as evidenced by the error bars in the middle plot, in addition to being quite sensitive to the initialization of K, as shown in the rightmost plot. Based on the previous synthetic experiment (Section 4.1) and this result, we suspect that although both the beam sampler and PG sampler quickly converge to good solutions as evidenced by the training joint log-likelihood, the beam sampler is learning a transition matrix with unnecessary, spurious “active” states.

Figure 7: Left: comparing the joint log-likelihood vs. iterations for the PG sampler and beam sampler. Middle: comparing the convergence of the “active” number of states for the iHMM and sticky iHMM for the PG sampler and beam sampler. Right: trace plots of the number of states for different initializations of K.

Next we calculate the predictive log-likelihood on the Alice in Wonderland test data, averaged over 2500 different realizations, and find that the infinite-state PG sampler with N = 10 particles achieves a predictive log-likelihood of −5918.4 ± 123.8, while the beam sampler achieves a predictive log-likelihood of −6099.0 ± 106.0, showing that the PG sampler applied to the iHMM and sticky iHMM learns hyperparameter and latent variable values that obtain better predictive performance on the held-out dataset. We note that in this experiment as well, we found it necessary to take only N = 10 particles in the PG sampler to achieve good mixing and empirical performance, although increasing the number of particles to N = 50 does improve the convergence of the sampler in this instance.
Given that the PG sampler has a time complexity of O(TNK) for a single pass, while the beam sampler (and truncated methods) have a time complexity of O(TK²) for a single pass, we believe that the PG sampler is a competitive alternative to the beam sampler for the iHMM.

5 Discussions and Conclusions

In this work we derive a new inference algorithm for the iHMM within the particle MCMC framework, based on the stick-breaking construction for the HDP. We also develop an efficient proposal inside PG, optimized for iHMMs, to efficiently resample the latent state trajectories. The proposed algorithm is empirically compared to existing state-of-the-art inference algorithms for iHMMs and shown to be promising: it converges more quickly and robustly to the true number of states, in addition to obtaining better predictive performance on several synthetic and real-world datasets. Moreover, we argued that the PG sampler proposed here is a competitive alternative to the beam sampler, since the time complexity of the particle samplers presented is O(TNK) versus the O(TK²) of the beam sampler. Another advantage of the proposed method is the simplicity of the PG algorithm, which doesn’t require truncation or the introduction of auxiliary variables, also making the algorithm easily adaptable to challenging inference tasks. In particular, the PG sampler can be directly applied to the sticky HDP-HMM with DP emission model considered in Fox et al. [2008], for which no truncation-free sampler exists. We leave this development and application as an avenue for future work.

References

Andrieu, C., Doucet, A., and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342.

Beal, M. J., Ghahramani, Z., and Rasmussen, C. E. (2001). The infinite hidden Markov model. In Advances in Neural Information Processing Systems, pages 577–584.

Bishop, C. M. (2006).
Pattern recognition and machine learning, volume 4. Springer, New York.

Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning, pages 312–319. ACM.

Lindsten, F., Jordan, M. I., and Schön, T. B. (2014). Particle Gibbs with ancestor sampling. The Journal of Machine Learning Research, 15(1):2145–2184.

Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265.

Palla, K., Knowles, D. A., and Ghahramani, Z. (2014). A reversible infinite HMM using normalised random measures. arXiv preprint arXiv:1403.4206.

Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.

Rosenstein, J. K., Ramakrishnan, S., Roseman, J., and Shepard, K. L. (2013). Single ion channel recordings with CMOS-anchored lipid membranes. Nano Letters, 13(6):2682–2686.

Scott, S. L. (2002). Bayesian methods for hidden Markov models. Journal of the American Statistical Association, 97(457).

Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.

Van Gael, J., Saatci, Y., Teh, Y. W., and Ghahramani, Z. (2008). Beam sampling for the infinite hidden Markov model. In Proceedings of the International Conference on Machine Learning, volume 25.
Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff

Ofer Dekel, Microsoft Research, Redmond, WA, oferd@microsoft.com
Ronen Eldan, Weizmann Institute, Rehovot, Israel, roneneldan@gmail.com
Tomer Koren, Technion, Haifa, Israel, tomerk@technion.ac.il

Abstract

Bandit convex optimization is one of the fundamental problems in the field of online learning. The best algorithm for the general bandit convex optimization problem guarantees a regret of Õ(T^{5/6}), while the best known lower bound is Ω(T^{1/2}). Many attempts have been made to bridge the huge gap between these bounds. A particularly interesting special case of this problem assumes that the loss functions are smooth. In this case, the best known algorithm guarantees a regret of Õ(T^{2/3}). We present an efficient algorithm for the bandit smooth convex optimization problem that guarantees a regret of Õ(T^{5/8}). Our result rules out an Ω(T^{2/3}) lower bound and takes a significant step towards the resolution of this open problem.

1 Introduction

Bandit convex optimization [11, 5] is the following online learning problem. First, an adversary privately chooses a sequence of bounded and convex loss functions f_1, . . . , f_T defined over a convex domain K in d-dimensional Euclidean space. Then, a randomized decision maker iteratively chooses a sequence of points x_1, . . . , x_T, where each x_t ∈ K. On iteration t, after choosing the point x_t, the decision maker incurs a loss of f_t(x_t) and receives bandit feedback: he observes the value of his loss but does not receive any other information about the function f_t. The decision maker uses the feedback to make better choices on subsequent rounds. His goal is to minimize regret, which is the difference between his loss and the loss incurred by the best fixed point in K. If the regret grows sublinearly with T, it indicates that the decision maker’s performance improves as the length of the sequence increases, and therefore we say that he is learning.
Finding an optimal algorithm for bandit convex optimization is an elusive open problem. The first algorithm for this problem was presented in Flaxman et al. [11] and guarantees a regret of R(T) = Õ(T^{5/6}) for any sequence of loss functions (here and throughout, the asymptotic Õ notation hides a polynomial dependence on the dimension d as well as logarithmic factors). Despite the ongoing effort to improve on this rate, it remains the state of the art. On the other hand, Dani et al. [9] proves that for any algorithm there exists a worst-case sequence of loss functions for which R(T) = Ω(T^{1/2}), and the gap between the upper and lower bounds is huge. While no progress has been made on the general form of the problem, some progress has been made in interesting special cases. Specifically, if the bounded convex loss functions are also assumed to be Lipschitz, Flaxman et al. [11] improves their regret guarantee to R(T) = Õ(T^{3/4}). If the loss functions are smooth (namely, their gradients are Lipschitz), Saha and Tewari [15] present an algorithm with a guaranteed regret of Õ(T^{2/3}). Similarly, if the loss functions are bounded, Lipschitz, and strongly convex, the guaranteed regret is Õ(T^{2/3}) [3]. If even stronger assumptions are made, an optimal regret rate of Θ̃(T^{1/2}) can be guaranteed; namely, when the loss functions are both smooth and strongly convex [12], when they are Lipschitz and linear [2], and when Lipschitz loss functions are not generated adversarially but drawn i.i.d. from a fixed and unknown distribution [4]. Recently, Bubeck et al. [8] made progress that did not rely on additional assumptions, such as Lipschitzness, smoothness, or strong convexity, but instead considered the general problem in the one-dimensional case. That result proves that there exists an algorithm with optimal Θ̃(T^{1/2}) regret for arbitrary univariate convex functions f_t : [0, 1] → [0, 1].
Subsequently, and after the current paper was written, Bubeck and Eldan [7] generalized this result to bandit convex optimization in general Euclidean spaces (albeit requiring a Lipschitz assumption). However, the proofs in both papers are non-constructive and do not give any hint on how to construct a concrete algorithm, nor any indication that an efficient algorithm exists. The current state of the bandit convex optimization problem has given rise to two competing conjectures. Some believe that there exists an efficient algorithm that matches the current lower bound. Meanwhile, others are trying to prove larger lower bounds, in the spirit of [10], even under the assumption that the loss functions are smooth; if the Ω(T^{1/2}) lower bound is loose, a natural guess of the true regret rate would be Θ̃(T^{2/3}).¹ In this paper, we take an important step towards the resolution of this problem by presenting an algorithm that guarantees a regret of Õ(T^{5/8}) against any sequence of bounded, convex, smooth loss functions. Compare this result to the previous state-of-the-art result of Õ(T^{2/3}) (noting that 2/3 = 0.666... and 5/8 = 0.625). This result rules out the possibility of proving a lower bound of Ω(T^{2/3}) with smooth functions. While there remains a sizable gap with the T^{1/2} lower bound, our result brings us closer to finding the elusive optimal algorithm for bandit convex optimization, at least in the case of smooth functions. Our algorithm is a variation on the algorithms presented in [11, 1, 15], with one new idea. These algorithms all follow the same template: on each round, the algorithm computes an estimate of ∇f_t(x_t), the gradient of the current loss function at the current point, by applying a random perturbation to x_t. The sequence of gradient estimates is then plugged into a first-order online optimization technique. The technical challenge in the analysis of these algorithms is to bound the bias and the variance of these gradient estimates.
Our idea is to take a window of consecutive gradient estimates and average them, producing a new gradient estimate with lower variance and higher bias. Overall, the new bias-variance tradeoff works in our favor and allows us to improve the regret upper bound. Averaging uncorrelated random vectors to reduce variance is a well-known technique, but applying it in the context of a bandit convex optimization algorithm is easier said than done and requires us to overcome a number of technical difficulties. For example, the gradient estimates in our window are taken at different points, which introduces a new type of bias. Another example is the difficulty that arises when the sequence x_s, . . . , x_t travels adjacent to the boundary of the convex set K (imagine transitioning from one face of a hypercube to another); the random perturbations applied to x_s and x_t could be supported on orthogonal directions, yet we average the resulting gradient estimates and expect to get a meaningful low-variance gradient estimate. While the basic idea is simple, our non-trivial technical analysis is not, and may be of independent interest.

2 Preliminaries

We begin by defining smooth bandit convex optimization more formally, and recalling several basic results from previous work on the problem (Flaxman et al. [11], Abernethy et al. [2], Saha and Tewari [15]) that we use in our analysis. We also review the necessary background on self-concordant barrier functions.

2.1 Smooth Bandit Convex Optimization

In the bandit convex optimization problem, an adversary first chooses a sequence of convex functions f_1, . . . , f_T : K → [0, 1], where K is a closed and convex domain in R^d. Then, on each round t = 1, . . . , T, a randomized decision maker has to choose a point x_t ∈ K; after committing to his decision he incurs a loss of f_t(x_t), and observes this loss as feedback. The decision maker’s expected loss (where expectation is taken with respect to his random choices) is
The decision maker's expected loss (where the expectation is taken with respect to his random choices) is E[Σ_{t=1}^T f_t(x_t)], and his regret is

R(T) = E[Σ_{t=1}^T f_t(x_t)] − min_{x∈K} Σ_{t=1}^T f_t(x).

¹In fact, we are aware of at least two separate research groups that invested time trying to prove such an Ω(T^{2/3}) lower bound.

Throughout, we use the notation E_t[·] to indicate expectations conditioned on all randomness up to and including round t − 1. We make the following assumptions. First, we assume that each of the functions f_1, …, f_T is L-Lipschitz with respect to the Euclidean norm ‖·‖_2, namely that |f_t(x) − f_t(y)| ≤ L‖x − y‖_2 for all x, y ∈ K. We further assume that f_t is H-smooth with respect to ‖·‖_2, which is to say that

∀ x, y ∈ K,  ‖∇f_t(x) − ∇f_t(y)‖_2 ≤ H‖x − y‖_2.

In particular, this implies that f_t is continuously differentiable over K. Finally, we assume that the Euclidean diameter of the decision domain K is bounded by D > 0.

2.2 First-Order Algorithms with Estimated Gradients

The online convex optimization problem becomes much easier in the full-information setting, where the decision maker's feedback includes the vector g_t = ∇f_t(x_t), the gradient (or a subgradient) of f_t at the point x_t. In this setting, the decision maker can use a first-order online algorithm, such as projected online gradient descent [17] or dual averaging [13] (sometimes known as follow the regularized leader [16]), and guarantee a regret of O(T^{1/2}). The dual averaging approach sets x_t to be the solution to the following optimization problem:

x_t = argmin_{x∈K} { x · Σ_{s=1}^{t−1} α_{s,t} g_s + R(x) },   (1)

where R is a suitably chosen regularizer and, for all t = 1, …, T and s = 1, …, t, α_{s,t} is a non-negative weight. Typically, all of the weights (α_{s,t}) are set to a constant value η, called the learning-rate parameter.
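As a quick illustration of Eq. (1) in the full-information setting, here is a minimal NumPy sketch of constant-weight dual averaging. For simplicity it uses the quadratic regularizer R(x) = ‖x‖²/(2η) over a Euclidean ball (our own assumption for this toy, not the barrier regularizer used later in the paper):

```python
import numpy as np

def dual_averaging_step(cum_grad, eta, radius=1.0):
    """Solve x = argmin_{||x|| <= radius} x.G + ||x||^2 / (2*eta):
    the closed form is the Euclidean projection of -eta*G onto the ball."""
    x = -eta * cum_grad
    nrm = np.linalg.norm(x)
    if nrm > radius:
        x *= radius / nrm
    return x

# full-information run on the fixed quadratic loss f(x) = ||x - c||^2
c = np.array([0.3, -0.2])
G = np.zeros(2)
x = np.zeros(2)
for _ in range(500):
    G += 2 * (x - c)               # exact gradient g_t at the current point
    x = dual_averaging_step(G, eta=0.05)
```

With this regularizer the argmin has a closed form, so no inner solver is needed; with a barrier regularizer, the step in Eq. (1) would require an iterative inner optimization.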
However, since we are not in the full-information setting and the decision maker does not observe g_t, the algorithms mentioned above cannot be used directly. The key observation of Flaxman et al. [11], which is reused in all of the follow-up work, is that g_t can be estimated by randomly perturbing the point x_t. Specifically, on round t, the algorithm chooses the point

y_t = x_t + δ A_t u_t,   (2)

instead of the original point x_t, where δ > 0 is a parameter that controls the magnitude of the perturbation, A_t is a positive definite d × d matrix, and u_t is drawn from the uniform distribution on the unit sphere. In Flaxman et al. [11], A_t is simply set to the identity matrix, whereas in Saha and Tewari [15], A_t is more carefully tailored to the point x_t (see details below). In any case, care should be taken to ensure that the perturbed point y_t remains in the convex set K. The observed value f_t(y_t) is then used to compute the gradient estimate

ĝ_t = (d/δ) f_t(y_t) A_t^{−1} u_t,   (3)

and this estimate is fed to the first-order optimization algorithm. While ĝ_t is not an unbiased estimator of ∇f_t(x_t), it is an unbiased estimator of the gradient of a different function, f̂_t, defined by

f̂_t(x) = E_t[f_t(x + δ A_t v)],   (4)

where v ∈ ℝ^d is drawn uniformly from the unit ball. The function f̂_t is a smoothed version of f_t, which plays a key role in our analysis and in many of the previous results on this topic. The main property of f̂_t is summarized in the following lemma.

Lemma 1 (Flaxman et al. [11], Saha and Tewari [15, Lemma 5]). For any differentiable function f : ℝ^d → ℝ, positive definite matrix A, x ∈ ℝ^d, and δ ∈ (0, 1], define ĝ = (d/δ) f(x + δAu) · A^{−1}u, where u is uniform on the unit sphere. Also, let f̂(x) = E[f(x + δAv)], where v is uniform on the unit ball. Then E[ĝ] = ∇f̂(x).

The difference between ∇f_t(x_t) and ∇f̂_t(x_t) is the bias of the gradient estimator ĝ_t. The analysis in Flaxman et al. [11], Abernethy et al.
[2], Saha and Tewari [15] focuses on bounding the bias and the variance of ĝ_t and their effect on the first-order optimization algorithm.

2.3 Self-Concordant Barriers

Following [2, 1, 15], our algorithm and analysis rely on the properties of self-concordant barrier functions. Intuitively, a barrier is a function defined on the interior of the convex body K that is rather flat in most of the interior of K and explodes to ∞ as we approach the boundary. Additionally, a self-concordant barrier has some technical properties that are useful in our setting. Before giving the formal definition of a self-concordant barrier, we define the local norm it induces.

Definition 2 (Local Norm Induced by a Self-Concordant Barrier [14]). Let R : int(K) → ℝ be a self-concordant barrier. The local norm induced by R at the point x ∈ int(K) is denoted by ‖z‖_x and defined as ‖z‖_x = √(z^T ∇²R(x) z). Its dual norm is ‖z‖_{x,*} = √(z^T (∇²R(x))^{−1} z).

In words, the local norm at x is the Mahalanobis norm defined by the Hessian of R at the point x, namely ∇²R(x). We now give a formal definition of a self-concordant barrier.

Definition 3 (Self-Concordant Barrier [14]). Let K ⊆ ℝ^d be a convex body. A function R : int(K) → ℝ is a ϑ-self-concordant barrier for K if (i) R is three times continuously differentiable, (ii) R(x) → ∞ as x → ∂K, and (iii) for all x ∈ int(K) and y ∈ ℝ^d, R satisfies

|∇³R(x)[y, y, y]| ≤ 2‖y‖_x³  and  |∇R(x) · y| ≤ √ϑ ‖y‖_x.

This definition is given for completeness and is not directly used in our analysis. Instead, we rely on some useful properties of self-concordant barriers. First and foremost, there exists an O(d)-self-concordant barrier for any convex body [14, 6]. Efficiently computable self-concordant barriers are only known for specific classes of convex bodies, such as polytopes, yet we make the standard assumption that we have an efficiently computable ϑ-self-concordant barrier for the set K.
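To make Definition 2 concrete, the following small numerical check (our own illustration, not from the paper; the box K = [−1, 1]^d and its standard log-barrier are illustrative choices) verifies that the unit ball of the local norm ‖·‖_x at any interior point x stays inside K, which is the Dikin-ellipsoid containment property used throughout the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def hessian_diag(x):
    """Diagonal of the Hessian of R(x) = -sum_i [log(1 - x_i) + log(1 + x_i)],
    a self-concordant barrier for the box K = [-1, 1]^d."""
    return 1.0 / (1.0 - x) ** 2 + 1.0 / (1.0 + x) ** 2

inside = True
for _ in range(1000):
    x = rng.uniform(-0.999, 0.999, size=d)      # interior point of K
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                       # random unit direction
    y = x + u / np.sqrt(hessian_diag(x))         # boundary point of the Dikin ellipsoid at x
    inside = inside and bool(np.all(np.abs(y) <= 1.0))
```

The Hessian is diagonal here because the barrier separates over coordinates; for a general body one would use the full matrix (∇²R(x))^{−1/2} to map the unit sphere onto the ellipsoid.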
Another key feature of a self-concordant barrier is the set of Dikin ellipsoids that it defines. The Dikin ellipsoid at x ∈ int(K) is simply the unit ball with respect to the local norm at x. A key property of the Dikin ellipsoid is that it is entirely contained in the convex body K, for any x [see 14, Theorem 2.1.1]. Another technical property of a self-concordant barrier is that its Hessian changes slowly with respect to its local norm.

Theorem 4 (Nesterov and Nemirovskii [14, Theorem 2.1.1]). Let K be a convex body with self-concordant barrier R. For any x ∈ int(K) and z ∈ ℝ^d such that ‖z‖_x < 1, it holds that

(1 − ‖z‖_x)² ∇²R(x) ⪯ ∇²R(x + z) ⪯ (1 − ‖z‖_x)^{−2} ∇²R(x).

While the self-concordant barrier explodes to infinity at the boundary of K, it is quite flat at points that are far from the boundary. To make this statement formal, we define an operation that multiplicatively shrinks the set K toward the minimizer of R (called the analytic center of K). Let y = argmin_{x∈K} R(x) and assume without loss of generality that R(y) = 0. For any ε ∈ (0, 1), let K_{y,ε} denote the set {y + (1 − ε)(x − y) : x ∈ K}. The next theorem states that the barrier is flat in K_{y,ε} and explodes to ∞ in the thin shell between K_{y,ε} and K.

Theorem 5 (Nesterov and Nemirovskii [14, Propositions 2.3.2-3]). Let K be a convex body with ϑ-self-concordant barrier R, let y = argmin_{x∈K} R(x), and assume that R(y) = 0. For any ε ∈ (0, 1], it holds that

∀ x ∈ K_{y,ε},  R(x) ≤ ϑ log(1/ε).

Our assumptions on the loss functions, such as the Lipschitz assumption and the smoothness assumption, are stated in terms of the standard Euclidean norm (denoted by ‖·‖_2). Therefore, we need to relate the Euclidean norm to the local norms defined by the self-concordant barrier. This is accomplished by the following lemma (whose proof appears in the supplementary material).

Lemma 6. Let K be a convex body with self-concordant barrier R and let D be the (Euclidean) diameter of K.
For any x ∈ K, it holds that D^{−1}‖z‖_{x,*} ≤ ‖z‖_2 ≤ D‖z‖_x for all z ∈ ℝ^d.

2.4 Self-Concordant Barrier as a Regularizer

Looking back at the dual averaging strategy defined in Eq. (1), we can now fill in some of the details that were left unspecified: [1, 15] set the regularizer R in Eq. (1) to be a ϑ-self-concordant barrier for the set K. We use the following lemma from Abernethy and Rakhlin [1] in our analysis.

Algorithm 1: Bandit Smooth Convex Optimization
Parameters: perturbation parameter δ ∈ (0, 1], dual averaging weights (α_{s,t}), self-concordant barrier R : int(K) → ℝ
Initialize: y_1 ∈ K arbitrarily
for t = 1, …, T:
    A_t ← (∇²R(x_t))^{−1/2}
    draw u_t uniformly from the unit sphere
    y_t ← x_t + δ A_t u_t
    choose y_t, receive feedback f_t(y_t)
    ĝ_t ← (d/δ) f_t(y_t) · A_t^{−1} u_t
    x_{t+1} ← argmin_{x∈K} { x · Σ_{s=1}^t α_{s,t} ĝ_s + R(x) }

Lemma 7 (Abernethy and Rakhlin [1]). Let K be a convex body with ϑ-self-concordant barrier R, let g_1, …, g_T be vectors in ℝ^d, and let η > 0 be such that η‖g_t‖_{x_t,*} ≤ 1/4 for all t. Define x_t = argmin_{x∈K} { x · Σ_{s=1}^{t−1} η g_s + R(x) }. Then, (i) for all t it holds that ‖x_t − x_{t+1}‖_{x_t} ≤ 2η‖g_t‖_{x_t,*}; (ii) for any x* ∈ K it holds that Σ_{t=1}^T g_t · (x_t − x*) ≤ (1/η) R(x*) + 2η Σ_{t=1}^T ‖g_t‖²_{x_t,*}.

Algorithms for bandit convex optimization that use a self-concordant regularizer also use the same self-concordant barrier to obtain gradient estimates. Namely, these algorithms perturb the dual averaging solution x_t as in Eq. (2), with the perturbation matrix A_t set to (∇²R(x_t))^{−1/2}, the inverse square root of the Hessian of R at the point x_t. In other words, the distribution of y_t is supported on the Dikin ellipsoid centered at x_t, scaled by δ. Since δ ∈ (0, 1], this form of perturbation guarantees that y_t ∈ K. Moreover, if y_t is generated in this way and used to construct the gradient estimator ĝ_t, then the local norm of ĝ_t is bounded, as specified in the following lemma.

Lemma 8 (Saha and Tewari [15, Lemma 5]). Let K ⊆ ℝ^d be a convex body with self-concordant barrier R.
For any differentiable function f : K → [0, 1], δ ∈ (0, 1], and x ∈ int(K), define ĝ = (d/δ) f(y) · A^{−1}u, where A = (∇²R(x))^{−1/2}, y = x + δAu, and u is drawn uniformly from the unit sphere. Then ‖ĝ‖_{x,*} ≤ d/δ.

3 Main Result

Our algorithm for the bandit smooth convex optimization problem is a variant of the algorithm in Saha and Tewari [15], and appears in Algorithm 1. Following Abernethy and Rakhlin [1] and Saha and Tewari [15], we use a self-concordant function as the dual averaging regularizer and we use its Dikin ellipsoids to perturb the points x_t. The difference between our algorithm and previous ones is the introduction of dual averaging weights (α_{s,t}), for t = 1, …, T and s = 1, …, t, which allow us to vary the weight of each gradient in the dual averaging objective. In addition to the parameters δ, η, and ε, we introduce a new buffering parameter k, which takes non-negative integer values. We set the dual averaging weights in Algorithm 1 to be

α_{s,t} = η                          if s ≤ t − k,
α_{s,t} = ((t − s + 1)/(k + 1)) η    if s > t − k,   (5)

where η > 0 is a global learning-rate parameter. This choice of (α_{s,t}) effectively decreases the influence of the feedback received on the most recent k rounds. If k = 0, all of the (α_{s,t}) equal η and Algorithm 1 reduces to the algorithm in Saha and Tewari [15]. The surprising result is that there is a different setting, k > 0, that gives a better regret bound.

We introduce a slight abuse of notation, which helps us simplify the presentation of our regret bound. We will eventually achieve the desired regret bound by setting the parameters η, δ, and k to be functions of T. Therefore, from now on, we treat the notations η, δ, and k as abbreviations for the functional forms η(T), δ(T), and k(T), respectively. The benefit is that we can now use asymptotic notation (e.g., O(ηk)) to sweep meaningless low-order terms under the rug. We prove the following regret bound for this algorithm.

Theorem 9. Let f_1, …
, f_T be a sequence of loss functions where each f_t : K → [0, 1] is differentiable, convex, H-smooth and L-Lipschitz, and where K ⊆ ℝ^d is a convex body of diameter D > 0 with ϑ-self-concordant barrier R. For any δ, η ∈ (0, 1] and k ∈ {0, 1, …, T}, assume that Algorithm 1 is run with these parameters and with the weights defined in Eq. (5) (using k and η) to generate the sequences x_1, …, x_T and y_1, …, y_T. If 12kηd ≤ δ, then for any ε ∈ (0, 1) it holds that

R(T) ≤ HD²δ²T + ϑ log(1/ε)/η + 64d²ηT/(δ²(k + 1)) + 12(HD² + DL)dηT√k/δ + O(Tε/δ + Tηk).

Specifically, if we set δ = d^{1/4}T^{−3/16}, η = d^{−1/2}T^{−5/8}, k = d^{1/2}T^{1/8}, and ε = T^{−100}, we get that R(T) = O(√d T^{5/8} log T).

Note that if we set k = 0 in our theorem, we recover the Õ(T^{2/3}) bound of Saha and Tewari [15] up to a small numerical constant (namely, the dependence on L, H, D, ϑ, d, and T is the same).

4 Analysis

Using the notation x* = argmin_{x∈K} Σ_{t=1}^T f_t(x), the decision maker's regret becomes R(T) = E[Σ_{t=1}^T f_t(y_t) − f_t(x*)]. Following Flaxman et al. [11] and Saha and Tewari [15], we rewrite the regret as

R(T) = E[Σ_{t=1}^T f_t(y_t) − f̂_t(x_t)]    (6a)
     + E[Σ_{t=1}^T f̂_t(x_t) − f̂_t(x*)]    (6b)
     + E[Σ_{t=1}^T f̂_t(x*) − f_t(x*)].     (6c)

This decomposition essentially adds a layer of hallucination to the analysis: we pretend that the loss functions are f̂_1, …, f̂_T instead of f_1, …, f_T, and we also pretend that we chose the points x_1, …, x_T rather than y_1, …, y_T. We then analyze the regret in this pretend world (this regret is the expression in Eq. (6b)). Finally, we tie our analysis back to the real world by bounding the difference between the quantity we analyzed and the regret of the actual problem (this difference is the sum of Eq. (6a) and Eq. (6c)). The advantage of our pretend world over the real world is that we have unbiased gradient estimates ĝ_1, …, ĝ_T that can be plugged into the dual averaging algorithm.
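To see the pieces of Algorithm 1 working together, here is a toy end-to-end run. It is a simplified sketch under our own assumptions, not the paper's exact method: K is the Euclidean unit ball, A_t = I instead of the barrier Hessian, the loss is fixed across rounds, and the dual averaging step uses a quadratic regularizer with a closed-form solution; the Eq. (5) weighting is realized by accumulating trailing-window averages of the raw estimates:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy instance: K = Euclidean unit ball, fixed loss f(x) = min(1, ||x - c||^2)
c = np.array([0.5, 0.0])
f = lambda x: min(1.0, float(np.sum((x - c) ** 2)))

d, T = 2, 3000
delta, eta, k = 0.25, 0.005, 3

def project(x, r):
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

G = np.zeros(d)      # running sum of windowed averages of the raw estimates
window = []          # last k+1 raw estimates
x = np.zeros(d)
xs = []
for t in range(T):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    y = x + delta * u                    # spherical perturbation, Eq. (2) with A_t = I
    ghat = (d / delta) * f(y) * u        # one-point estimate, Eq. (3)
    window.append(ghat)
    if len(window) > k + 1:
        window.pop(0)
    G += np.sum(window, axis=0) / (k + 1)  # trailing average (zero-padded early rounds)
    x = project(-eta * G, 0.7)             # quadratic-regularizer dual averaging step
    xs.append(x)

x_avg = np.mean(xs[-500:], axis=0)
```

The iterates are kept at radius 0.7 so that the perturbed queries y_t (radius at most 0.95) never leave the unit ball, playing the role that the Dikin ellipsoid plays in the real algorithm.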
The algorithm in Saha and Tewari [15] sets all of the dual averaging weights (α_{s,t}) equal to the constant learning rate η > 0. It decomposes the regret as in Eq. (6), and their main technical result is the following bound on the individual terms.

Theorem 10 (Saha and Tewari [15]). Let f_1, …, f_T be a sequence of loss functions where each f_t : K → [0, 1] is differentiable, convex, H-smooth and L-Lipschitz, and where K ⊆ ℝ^d is a convex body of diameter D > 0 with ϑ-self-concordant barrier R. Assume that Algorithm 1 is run with perturbation parameter δ ∈ (0, 1] and generates the sequences x_1, …, x_T and y_1, …, y_T. Then for any ε ∈ (0, 1) it holds that (6a) + (6c) ≤ (HD²δ² + εL)T. If, additionally, the dual averaging weights (α_{s,t}) are all set to the constant learning rate η, then (6b) ≤ ϑ log(1/ε) η^{−1} + d²δ^{−2}ηT.

The analysis in Saha and Tewari [15] goes on to obtain a regret bound of Õ(T^{2/3}) by choosing optimal values for the parameters η, δ, and ε and plugging them into Theorem 10. Our analysis uses the first part of Theorem 10 to bound (6a) + (6c), and shows that our careful choice of the dual averaging weights (α_{s,t}) results in the following improved bound on (6b).

We begin our analysis by defining a moving average of the functions f̂_1, …, f̂_T:

∀ t = 1, …, T,  f̄_t(x) = (1/(k + 1)) Σ_{i=0}^k f̂_{t−i}(x),   (7)

where, for soundness, we let f̂_s ≡ 0 for s ≤ 0. Also, define a moving average of the gradient estimates:

∀ t = 1, …, T,  ḡ_t = (1/(k + 1)) Σ_{i=0}^k ĝ_{t−i},

again with ĝ_s = 0 for s ≤ 0. In Section 4 below, we show how each ḡ_t can be used as a biased estimate of ∇f̄_t(x_t). Also note that the choice of the dual averaging weights (α_{s,t}) in Eq. (5) is such that Σ_{s=1}^t α_{s,t} ĝ_s = η Σ_{s=1}^t ḡ_s for all t. Therefore, the last step in Algorithm 1 effectively performs dual averaging with the gradient estimates ḡ_1, …, ḡ_T, uniformly weighted by η. We use the functions f̄_t to rewrite Eq.
(6b) as

E[Σ_{t=1}^T f̂_t(x_t) − f̄_t(x_t)]    (8a)
+ E[Σ_{t=1}^T f̄_t(x_t) − f̄_t(x*)]   (8b)
+ E[Σ_{t=1}^T f̄_t(x*) − f̂_t(x*)].   (8c)

This decomposition essentially adds yet another layer of hallucination to the analysis: we pretend that the loss functions are f̄_1, …, f̄_T instead of f̂_1, …, f̂_T (which are themselves pretend loss functions, as described above). Eq. (8b) is the regret in our new pretend scenario, while Eq. (8a) + Eq. (8c) is the difference between this regret and the regret in Eq. (6b). The following lemma bounds each of the terms in Eq. (8) separately, and summarizes the main technical contribution of our paper.

Lemma 11. Under the conditions of Theorem 9, for any ε ∈ (0, 1) it holds that

(8c) ≤ 0,
(8a) ≤ 12DLdηT√k/δ + O(1/((1 + η)k²)),
(8b) ≤ ϑ log(1/ε)/η + 64d²ηT/(δ²(k + 1)) + 12HD²dηT√k/δ + O(Tε/δ + Tηk).

4.1 Proof Sketch of Lemma 11

As mentioned above, the basic intuition behind our technique is quite simple: average the gradients to decrease their variance. Yet, applying this idea in the analysis is tricky. We begin by describing the main source of difficulty in proving Lemma 11. Recall that our strategy is to pretend that the loss functions are f̄_1, …, f̄_T and to use the random vector ḡ_t as a biased estimator of ∇f̄_t(x_t). Naturally, one of our goals is to show that this bias is small. Recall that each ĝ_s is an unbiased estimator of ∇f̂_s(x_s) (conditioned on the history up to round s). Specifically, note that each vector in the sequence ĝ_{t−k}, …, ĝ_t is a gradient estimate at a different point. Yet, we average these vectors and claim that they accurately estimate ∇f̄_t at the current point, x_t. Luckily, f̂_t is H-smooth, so ∇f̂_{t−i}(x_{t−i}) should not be much different from ∇f̂_{t−i}(x_t), provided that we show that x_{t−i} and x_t are close to each other in Euclidean distance. To show that x_{t−i} and x_t are close, we exploit the stability of the dual averaging algorithm.
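The variance-reduction effect of the window average can be seen numerically. In this toy check (our own illustration, not the paper's setting: a single fixed point, A = I, and a linear loss, so that the k + 1 estimates in a window are i.i.d.), averaging reduces the variance by essentially the full factor of k + 1:

```python
import numpy as np

rng = np.random.default_rng(3)
d, delta, k, N = 4, 0.1, 7, 4000

a = np.array([0.5, -1.0, 2.0, 0.3])            # linear loss f(x) = a.x, so grad f = a

us = rng.standard_normal((N, d))
us /= np.linalg.norm(us, axis=1, keepdims=True)
# one-point estimates ghat = (d/delta) f(delta*u) u, all taken at x = 0
single = (d / delta) * (delta * us @ a)[:, None] * us
# average non-overlapping windows of k+1 estimates, mimicking the averaged estimator
windows = single.reshape(N // (k + 1), k + 1, d).mean(axis=1)

var_single = single.var(axis=0).sum()
var_avg = windows.var(axis=0).sum()
```

In the actual algorithm the k + 1 estimates are taken at different points and are only uncorrelated rather than independent, which is exactly the gap the analysis below has to close.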
In particular, the first claim in Lemma 7 states that ‖x_s − x_{s+1}‖_{x_s} is controlled by ‖ḡ_s‖_{x_s,*} for all s, so now we need to show that ‖ḡ_s‖_{x_s,*} is small. However, ḡ_t is the average of k + 1 gradient estimates taken at different points; each ĝ_{t−i} is designed to have a small norm with respect to its own local norm ‖·‖_{x_{t−i},*}, but for all we know it may be very large with respect to the current local norm ‖·‖_{x_t,*}. So now we need to show that the local norms at x_{t−i} and x_t are similar. We could prove this if we knew that x_{t−i} and x_t are close to each other, which is exactly what we set out to prove in the beginning. This chicken-and-egg situation complicates our analysis considerably.

Another non-trivial component of our proof is the variance reduction analysis. The motivation for averaging ĝ_{t−k}, …, ĝ_t is to generate new gradient estimates with a smaller variance. While the random vectors ĝ_1, …, ĝ_T are not independent, we show that their randomness is uncorrelated. Therefore, the variance of ḡ_t is k + 1 times smaller than the variance of each ĝ_t. However, to make this argument formal, we again require the local norms at x_{t−i} and x_t to be similar. To make things more complicated, there is the recurring need to move back and forth between local norms and the Euclidean norm, since the latter is used in the definitions of Lipschitz continuity and smoothness. All of this has to do with bounding Eq. (8b), the regret with respect to the pretend loss functions f̄_1, …, f̄_T; an additional bias term appears in the analysis of Eq. (8a).

We conclude the paper by stating our main lemmas and sketching the proof of Lemma 11. The full technical proofs are deferred to the supplementary material; here we replace them with high-level commentary.

To break the chicken-and-egg situation described above, we begin with a crude bound on ‖ḡ_t‖_{x_t,*}, which does not benefit at all from the averaging operation. We simultaneously prove that the local norms at x_{t−i} and x_t are similar.

Lemma 12.
If the parameters k, η, and δ are chosen such that 12kηd ≤ δ, then for all t, (i) ‖ḡ_t‖_{x_t,*} ≤ 2d/δ; (ii) for any 0 ≤ i ≤ k such that t − i ≥ 1, it holds that (1/2)‖z‖_{x_{t−i},*} ≤ ‖z‖_{x_t,*} ≤ 2‖z‖_{x_{t−i},*}.

Lemma 12 itself has a chicken-and-egg aspect, which we resolve using an inductive proof technique. Armed with the knowledge that the local norms at x_{t−i} and x_t are similar, we go on to prove a more refined bound on E_t[‖ḡ_t‖²_{x_t,*}], which does benefit from averaging.

Lemma 13. If the parameters k, η, and δ are chosen such that 12kηd ≤ δ, then

E_t[‖ḡ_t‖²_{x_t,*}] ≤ 2D²L² + 32d²/(δ²(k + 1)).

The proof constructs a martingale difference sequence and uses the fact that its increments are uncorrelated. Compare the above to Lemma 8, which proves that ‖ĝ_{t−i}‖²_{x_{t−i},*} ≤ d²/δ², and note the extra k + 1 in our denominator; all of our hard work was aimed at getting this factor.

Next, we set out to bound the expected Euclidean distance between x_{t−i} and x_t. This bound is later needed to exploit the L-Lipschitz and H-smoothness assumptions. The crude bound on ‖ḡ_s‖_{x_s,*} from Lemma 12 is enough to satisfy the conditions of Lemma 7, which then tells us that E[‖x_s − x_{s+1}‖_{x_s}] is controlled by E[‖ḡ_s‖_{x_s,*}]. The latter enjoys the improved bound of Lemma 13. Integrating the resulting bound over time, we obtain the following lemma.

Lemma 14. If the parameters k, η, and δ are chosen such that 12kηd ≤ δ, then for all t and any 0 ≤ i ≤ k such that t − i ≥ 1, we have

E[‖x_{t−i} − x_t‖_2] ≤ 12Ddη√k/δ + O(ηk).

Notice that x_{t−i} and x_t may be k rounds apart, but the bound scales only with √k. Again, this is the work of the averaging technique. Finally, we have all the tools in place to prove our main result, Lemma 11.

Proof sketch. The first term, Eq. (8a), is bounded by rewriting f̄_t(x_t) = (1/(k+1)) Σ_{i=0}^k f̂_{t−i}(x_t) and then proving that f̂_{t−i}(x_t) is not very far from f̂_t(x_t). This follows from the fact that f̂_t is L-Lipschitz and from Lemma 14. To bound the second term, Eq.
(8b), we use the convexity of each f̄_t to write

E[Σ_{t=1}^T f̄_t(x_t) − f̄_t(x*)] ≤ E[Σ_{t=1}^T ∇f̄_t(x_t) · (x_t − x*)].

We relate the right-hand side above to E[Σ_{t=1}^T ḡ_t · (x_t − x*)], using the fact that f̂_t is H-smooth and again using Lemma 14. Then, we upper bound the latter using Lemma 7, Theorem 5, and Lemma 13. □

Acknowledgments

We thank Jian Ding for several critical contributions during the early stages of this research. Parts of this work were done while the second and third authors were at Microsoft Research, the support of which is gratefully acknowledged.

References

[1] J. Abernethy and A. Rakhlin. Beating the adaptive bandit with high probability. In Information Theory and Applications Workshop, pages 280–289. IEEE, 2009.
[2] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008.
[3] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[4] A. Agarwal, D. P. Foster, D. Hsu, S. M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. In Advances in Neural Information Processing Systems (NIPS), 2011.
[5] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[6] S. Bubeck and R. Eldan. The entropic barrier: a simple and optimal universal self-concordant barrier. arXiv preprint arXiv:1412.1587, 2015.
[7] S. Bubeck and R. Eldan. Multi-scale exploration of convex functions and bandit convex optimization. arXiv preprint arXiv:1507.06580, 2015.
[8] S. Bubeck, O. Dekel, T. Koren, and Y. Peres. Bandit convex optimization: √T regret in one dimension. In Proceedings of the 28th Annual Conference on Learning Theory (COLT), 2015.
[9] V. Dani, T. Hayes, and S. M. Kakade. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems (NIPS), 2008.
[10] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. In Proceedings of the 46th Annual Symposium on the Theory of Computing, 2014.
[11] A. D. Flaxman, A. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 385–394. Society for Industrial and Applied Mathematics, 2005.
[12] E. Hazan and K. Levy. Bandit convex optimization: Towards tight bounds. In Advances in Neural Information Processing Systems (NIPS), 2014.
[13] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[14] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming, volume 13. SIAM, 1994.
[15] A. Saha and A. Tewari. Improved regret guarantees for online smooth convex optimization with bandit feedback. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 636–642, 2011.
[16] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[17] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML), pages 928–936, 2003.
Semi-Supervised Factored Logistic Regression for High-Dimensional Neuroimaging Data

Danilo Bzdok, Michael Eickenberg, Olivier Grisel, Bertrand Thirion, Gaël Varoquaux
INRIA, Parietal team, Saclay, France
CEA, Neurospin, Gif-sur-Yvette, France
firstname.lastname@inria.fr

Abstract

Imaging neuroscience links human behavior to aspects of brain biology in ever-increasing datasets. Existing neuroimaging methods typically perform either discovery of unknown neural structure or testing of neural structure associated with mental tasks. However, testing hypotheses on the neural correlates underlying larger sets of mental tasks necessitates adequate representations for the observations. We therefore propose to blend representation modelling and task classification into a unified statistical learning problem. A multinomial logistic regression is introduced that is constrained by factored coefficients and coupled with an autoencoder. We show that this approach yields more accurate and interpretable neural models of psychological tasks in a reference dataset, as well as better generalization to other datasets.

Keywords: Brain Imaging, Cognitive Science, Semi-Supervised Learning, Systems Biology

1 Introduction

Methods for neuroimaging research can be grouped into those that discover neurobiological structure and those that assess the neural correlates of mental tasks. To discover spatial distributions of neural activity across time, on the one hand, independent component analysis (ICA) is often used [6]. It decomposes the BOLD (blood-oxygen-level-dependent) signal into its primary modes of variation. The ensuing spatial activity patterns are believed to represent brain networks of functionally interacting regions [26]. Similarly, sparse principal component analysis (SPCA) has been used to separate BOLD signals into parsimonious network components [28]. The extracted brain networks are probably manifestations of electrophysiological oscillation frequencies [17].
Their fundamental organizational role is further attested by continued covariation during sleep and anesthesia [10]. Network discovery by ICA or SPCA is typically performed on task-unrelated (i.e., unlabeled) "resting-state" data, which capture brain dynamics during ongoing random thought without controlled environmental stimulation. In fact, a large portion of the BOLD signal variation is known not to correlate with a particular behavior, stimulus, or experimental task [10]. To test, on the other hand, the neural correlates underlying mental tasks, the general linear model (GLM) is the dominant approach [13]. The contribution of individual brain voxels is estimated according to a design matrix of experimental tasks. Alternatively, psychophysiological interactions (PPI) elucidate the influence of one brain region on another conditioned on experimental tasks [12]. As a last example, an increasing number of neuroimaging studies model experimental tasks by training classification algorithms on brain signals [23]. All these methods are applied to task-associated (i.e., labeled) data that capture brain dynamics during stimulus-guided behavior. Two important conclusions can be drawn. First, the mentioned supervised neuroimaging analyses typically yield results in voxel space. This ignores the fact that the BOLD signal exhibits spatially distributed patterns of coherent neural activity. Second, existing supervised neuroimaging analyses cannot exploit the abundance of easily acquired resting-state data [8]. These may allow better discovery of the manifold of brain states, given the high task-rest similarity of neural activity patterns observed using ICA [26] and linear correlation [9]. Both of these neurobiological properties can be conjointly exploited in an approach that is mixed (i.e., uses rest and task data), factored (i.e., performs network decomposition), and multi-task (i.e., capitalizes on neural representations shared across mental operations).
The integration of brain-network discovery into supervised classification can yield a semi-supervised learning framework, in which the most relevant neurobiological structure is identified for the prediction problem at hand. Autoencoders suggest themselves because they can emulate variants of most unsupervised learning algorithms, including PCA, SPCA, and ICA [15, 16].

Figure 1: Model architecture. Linear autoencoders find an optimized compression of 79,941 brain voxels into n unknown activity patterns by improving reconstruction from them. The decomposition matrix equates with the bottleneck of a factored logistic regression. Supervised multi-class learning on task data (X_task) can thus be guided by unsupervised decomposition of rest data (X_rest).

Autoencoders (AE) are layered learning models that condense the input data into local and global representations via reconstruction under a compression prior. They behave like a (truncated) PCA in the case of one linear hidden layer and a squared-error loss [3]. Autoencoders behave like SPCA if shrinkage terms are added to the model weights in the optimization objective. Moreover, they have the characteristics of an ICA in the case of tied weights with a nonlinear convex function added at the first layer [18]. These authors further demonstrated that ICA, sparse autoencoders, and sparse coding are mathematically equivalent under mild conditions. Thus, autoencoders can flexibly project neuroimaging data onto its main directions of variation. In the present investigation, a linear autoencoder is fit to (unlabeled) rest data and integrated as a rank-reducing bottleneck into a multinomial logistic regression fit to (labeled) task data. We can then solve the compound statistical problem of unsupervised data representation and supervised classification, previously studied in isolation.
From the perspective of dictionary learning, the first layer represents projectors onto the discovered set of basis functions, which are linearly combined by the second layer to perform predictions [20]. Neurobiologically, this allows delineating a low-dimensional manifold of brain-network patterns and then distinguishing mental tasks by their most discriminative linear combinations. Theoretically, a reduction in model variance should be achieved by resting-state autoencoders that privilege the most neurobiologically valid models in the hypothesis set. Practically, neuroimaging research frequently suffers from data scarcity, which limits the set of representations that can be extracted from GLM analyses based on few participants. We therefore contribute a computational framework that 1) analyzes many problems simultaneously (and thus finds shared representations by "multi-task learning") and 2) exploits unlabeled data (since they span a space of meaningful configurations).

2 Methods

Data. As the currently largest openly accessible reference dataset, we chose resources from the Human Connectome Project (HCP) [4]. Neuroimaging task data with labels of ongoing cognitive processes were drawn from 500 healthy HCP participants (cf. Appendix for details on the datasets). 18 HCP tasks were selected that are known to elicit reliable neural activity across participants (Table 1). In sum, the HCP task data comprised 8650 first-level activity maps from 18 diverse paradigms administered to 498 participants (2 were removed due to incomplete data). All maps were resampled to a common 60 × 72 × 60 space of 3 mm isotropic voxels and gray-matter masked (at least 10% tissue probability). The supervised analyses were thus based on labeled HCP task maps with 79,941 voxels of interest representing z-values in gray matter.
No. | Cognitive task | Stimuli | Instruction for participants
1 | Reward | Card game | Guess the number of a mystery card for gain/loss of money
2 | Punish | Card game | Guess the number of a mystery card for gain/loss of money
3 | Shapes | Shape pictures | Decide which of two shapes matches another shape geometrically
4 | Faces | Face pictures | Decide which of two faces matches another face emotionally
5 | Random | Videos with objects | Decide whether the objects act randomly or intentionally
6 | Theory of mind | Videos with objects | Decide whether the objects act randomly or intentionally
7 | Mathematics | Spoken numbers | Complete addition and subtraction problems
8 | Language | Auditory stories | Choose answer about the topic of the story
9 | Tongue movement | Visual cues | Move tongue
10 | Foot movement | Visual cues | Squeezing of the left or right toe
11 | Hand movement | Visual cues | Tapping of the left or right finger
12 | Matching | Shapes with textures | Decide whether two objects match in shape or texture
13 | Relations | Shapes with textures | Decide whether two object pairs differ along the same dimension (shape or texture)
14 | View Bodies | Pictures | Passive watching
15 | View Faces | Pictures | Passive watching
16 | View Places | Pictures | Passive watching
17 | View Tools | Pictures | Passive watching
18 | Two-Back | Various pictures | Indicate whether the current stimulus is the same as two items earlier

Table 1: Description of psychological tasks to predict.

These labeled data were complemented by unlabeled activity maps from HCP acquisitions of unconstrained resting-state activity [25], which reflect brain activity in the absence of controlled thought. In sum, the HCP rest data concatenated 8000 unlabeled, noise-cleaned rest maps, 40 brain maps from each of 200 randomly selected participants. We were further interested in the utility of the optimized low-rank projection in one task dataset for dimensionality reduction in another task dataset. To this end, the HCP-derived network decompositions were used as a preliminary step in the classification problem of another large sample.
The ARCHI dataset [21] provides activity maps from diverse experimental tasks, including auditory and visual perception, motor action, reading, language comprehension, and mental calculation. Analogous to the HCP data, this second task dataset incorporated 1404 labeled, grey-matter-masked, and z-scored activity maps from 18 diverse tasks acquired in 78 participants.

Linear autoencoder. The labeled and unlabeled data were fed into a linear statistical model composed of an autoencoder and a dimensionality-reducing logistic regression. The affine autoencoder takes the input x, projects it into a coordinate system of latent representations z, and reconstructs it back to x' by

z = W0 x + b0,    x' = W1 z + b1,    (1)

where x ∈ R^d denotes the vector of d = 79,941 voxel values from each rest map, z ∈ R^n is the n-dimensional hidden state (i.e., distributed neural activity patterns), and x' ∈ R^d is the reconstruction of the original activity map from the hidden variables. Further, W0 denotes the weight matrix that transforms from the input space into the hidden space (encoder), and W1 is the weight matrix for back-projection from the hidden variables to the output space (decoder). b0 and b1 are the corresponding bias vectors. The model parameters W0, b0, b1 are found by minimizing the expected squared reconstruction error

E[L_AE(x)] = E[ ||x − (W1(W0 x + b0) + b1)||^2 ].    (2)

Here we choose W0 and W1 to be tied, i.e., W0 = W1^T. Consequently, the learned weights are forced to take on a two-fold function: that of signal analysis and that of signal synthesis. The first layer analyzes the data to obtain the cleanest latent representation, while the second layer represents building blocks from which to synthesize the data from the latent activations. Tying these processes together makes the analysis layer interpretable and pulls all non-zero singular values towards 1. Nonlinearities were not applied to the activations of the first layer.

Factored logistic regression.
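The tied-weight autoencoder of Eqs. (1)-(2) can be sketched in a few lines of NumPy. This is an illustrative toy (the sizes, variable names, and the closed-form PCA comparison are ours, not the paper's); it only checks that the PCA directions, which are near-optimal for a tied linear AE with squared error loss, attain a lower reconstruction error than a random initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, N = 30, 5, 200            # toy sizes: voxels, hidden components, rest maps
X = rng.standard_normal((N, d))

def reconstruct(X, W0, b0, b1):
    """Eq. (1) with tied weights: z = W0 x + b0, x' = W0^T z + b1."""
    Z = X @ W0.T + b0
    return Z @ W0 + b1

def ae_loss(X, W0, b0, b1):
    """Eq. (2): mean squared reconstruction error over the sample."""
    R = X - reconstruct(X, W0, b0, b1)
    return float(np.mean(np.sum(R * R, axis=1)))

# random initialization (Gaussian values times a small gain, as in the paper)
W0_init = 0.004 * rng.standard_normal((n, d))
l_init = ae_loss(X, W0_init, np.zeros(n), np.zeros(d))

# the top-n principal directions are a near-optimal tied linear AE
W0_pca = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:n]
l_pca = ae_loss(X, W0_pca, np.zeros(n), X.mean(0))
```

In the paper the parameters are instead learned by gradient descent, jointly with the supervised term described next.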
Our factored logistic regression model is best described as a variant of multinomial logistic regression in which the weight matrix is replaced by the product of two weight matrices with a common latent dimension. The latter is typically much lower than the dimension of the data. Alternatively, this model can be viewed as a single-hidden-layer feedforward neural network with a linear activation function in the hidden layer and a softmax function on the output layer. As the dimension of the hidden layer is much lower than that of the input layer, this architecture is sometimes referred to as a "linear bottleneck" in the literature. The probability of an input x belonging to a class i ∈ {1, . . . , l} is given by

P(Y = i | x; V0, V1, c0, c1) = softmax_i(f_LR(x)),    (3)

where f_LR(x) = V1(V0 x + c0) + c1 computes the multinomial logits and softmax_i(x) = exp(x_i) / Σ_j exp(x_j). The matrix V0 ∈ R^{n×d} transforms the input x ∈ R^d into n latent components, and the matrix V1 ∈ R^{l×n} projects the latent components onto hyperplanes that reflect the l label probabilities. c0 and c1 are bias vectors. The loss function is given by

E[L_LR(x, y)] ≈ −(1 / N_Xtask) Σ_{k=1}^{N_Xtask} log P(Y = y^(k) | x^(k); V0, V1, c0, c1).    (4)

Layer combination. The optimization problems of the linear autoencoder and the factored logistic regression are linked in two ways. First, their transformation matrices mapping from the input to the latent space are tied,

V0 = W0.    (5)

We hence search for a compression of the 79,941 voxel values into n unknown components that represent a latent code optimized for both rest and task activity data. Second, the objectives of the autoencoder and the factored logistic regression are interpolated in the common loss function

L(θ, λ) = λ L_LR + (1 − λ) (1 / N_Xrest) L_AE + Ω.    (6)

In so doing, we search for the combined model parameters θ = {V0, V1, c0, c1, b0, b1} with respect to both the (unsupervised) reconstruction error and the (supervised) task detection.
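Equations (3)-(6) can be sketched directly in NumPy. Toy dimensions stand in for the d = 79,941 voxels, all names are illustrative, and the ElasticNet penalty Ω is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, l = 30, 5, 4              # toy sizes: voxels, latent components, classes
Xt = rng.standard_normal((50, d)); yt = rng.integers(0, l, 50)   # "task" data
Xr = rng.standard_normal((80, d))                                 # "rest" data

V0 = 0.004 * rng.standard_normal((n, d))   # shared bottleneck (V0 = W0, eq. 5)
V1 = 0.004 * rng.standard_normal((l, n))
c0, c1, b1 = np.zeros(n), np.zeros(l), np.zeros(d)

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def lr_loss(X, y):
    """Eqs. (3)-(4): factored multinomial negative log-likelihood."""
    P = softmax((X @ V0.T + c0) @ V1.T + c1)
    return -float(np.mean(np.log(P[np.arange(len(y)), y])))

def ae_loss(X):
    """Eq. (2), reusing V0 as the tied autoencoder weights W0."""
    R = X - ((X @ V0.T + c0) @ V0 + b1)
    return float(np.sum(R * R))

def combined_loss(lmbda):
    """Eq. (6), without the ElasticNet penalty term Omega."""
    return lmbda * lr_loss(Xt, yt) + (1 - lmbda) * ae_loss(Xr) / len(Xr)
```

Setting `lmbda = 1` recovers a purely supervised factored logistic regression, `lmbda = 0` a purely unsupervised autoencoder, mirroring the interpolation of Eq. (6).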
L_AE is divided by N_Xrest to equilibrate both loss terms to the same order of magnitude. Ω represents an ElasticNet-type regularization that combines ℓ1 and ℓ2 penalty terms.

Optimization. The common objective was optimized by gradient descent in the SSFLogReg (semi-supervised factored logistic regression) parameters. The required gradients were obtained by using the chain rule to backpropagate error derivatives. We chose the rmsprop solver [27], a refinement of stochastic gradient descent. Rmsprop dictates an adaptive learning rate for each model parameter by scaling gradients with a running average of their magnitudes. The batch size was set to 100 (given much expected redundancy in Xrest and Xtask), matrix parameters were initialized by Gaussian random values multiplied by 0.004 (i.e., gain), and bias parameters were initialized to 0. The normalization factor and the update rule for θ are given by

v^(t+1) = ρ v^(t) + (1 − ρ) (∇_θ f(x^(t), y^(t), θ^(t)))^2
θ^(t+1) = θ^(t) − α ∇_θ f(x^(t), y^(t), θ^(t)) / (√(v^(t+1)) + ε),    (7)

where f is the loss function computed on a minibatch sample at timestep t, α is the learning rate (0.00001), ε a global damping factor (10^−6), and ρ the decay rate (0.9, to de-emphasize the magnitude of the gradient). Note that we also experimented with other solvers (stochastic gradient descent, adadelta, and adagrad) but found that rmsprop converged faster and with similar or higher generalization performance.

Implementation. The analyses were performed in Python. We used nilearn to handle the large quantities of neuroimaging data [1] and Theano for automatic, numerically stable differentiation of symbolic computation graphs [5, 7]. All Python scripts that generated the results are accessible online for reproducibility and reuse (http://github.com/banilo/nips2015).

3 Experimental Results

Serial versus parallel structure discovery and classification. We first tested whether there is a substantial advantage in combining unsupervised decomposition and supervised classification learning.
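Eq. (7) amounts to a small per-parameter update rule. The sketch below applies it to a toy quadratic objective; the objective and learning rate are ours (the paper uses α = 1e−5 on the full model):

```python
import numpy as np

def rmsprop_step(theta, v, grad, alpha=1e-2, rho=0.9, eps=1e-6):
    """One update of Eq. (7): a running average of squared gradients
    rescales the (descending) gradient step for each parameter."""
    v = rho * v + (1 - rho) * grad ** 2
    theta = theta - alpha * grad / (np.sqrt(v) + eps)
    return theta, v

# toy objective f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself
theta, v = np.array([2.0, -3.0]), np.zeros(2)
for _ in range(500):
    theta, v = rmsprop_step(theta, v, grad=theta)
```

Because the step size is normalized by √v, parameters with persistently large gradients take proportionally smaller steps, which is the adaptive behavior the text describes.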
We benchmarked our approach against performing data reduction on the (unlabeled) first half of the HCP task data by PCA, SPCA, ICA, and AE (n = 5, 20, 50, 100 components) and learning classification models on the (labeled) second half by ordinary logistic regression. PCA reduced the dimensionality of the task data by finding orthogonal network components (with whitening of the data). SPCA separated the task-related BOLD signals into network components with few regions by a regression-type optimization problem constrained by an ℓ1 penalty (no orthogonality assumptions, 1000 maximum iterations, per-iteration tolerance of 10^−8, α = 1). ICA performed iterative blind source separation by a parallel FastICA implementation (200 maximum iterations, per-iteration tolerance of 0.0001, initialized by a random mixing matrix, whitening of the data). AE found a code of latent representations by optimizing the projection into a bottleneck (500 iterations, same implementation as below for rest data). The second half of the task data was projected onto the latent components discovered in its first half. Only the ensuing component loadings were submitted to ordinary logistic regression (no hidden layer, ℓ1 = 0.1, ℓ2 = 0.1, 500 iterations). These serial two-step approaches were compared against parallel decomposition and classification by SSFLogReg (one hidden layer, λ = 1, ℓ1 = 0.1, ℓ2 = 0.1, 500 iterations). Importantly, all trained classification models were tested on a large, unseen test set (20% of the data) in the present analyses. Across choices of n, SSFLogReg achieved more than 95% out-of-sample accuracy, whereas supervised learning based on PCA, SPCA, ICA, and AE loadings ranged from 32% to 87% (Table 2). This experiment establishes the advantage of directly searching for classification-relevant structure in the fMRI data, rather than solving the supervised and unsupervised problems independently. This effect was particularly pronounced when assuming few hidden dimensions.
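A NumPy-only sketch of such a serial two-step baseline (PCA on an "unlabeled" half, then a plain softmax regression on the loadings of the "labeled" half); the synthetic data, sizes, and training details are stand-ins, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, C = 50, 5, 3              # toy voxels, components, classes
means = 3 * rng.standard_normal((C, d))          # synthetic class "maps"
y = rng.integers(0, C, 400)
X = means[y] + rng.standard_normal((400, d))
X_unlab, X_lab, y_lab = X[:200], X[200:], y[200:]

# step 1: unsupervised decomposition (PCA) on the "unlabeled" half
mu = X_unlab.mean(0)
components = np.linalg.svd(X_unlab - mu, full_matrices=False)[2][:n]

# step 2: ordinary softmax regression on the n-dim component loadings only
Z = (X_lab - mu) @ components.T
W, Y = np.zeros((C, n)), np.eye(C)[y_lab]
for _ in range(300):                             # plain gradient steps on the NLL
    L = Z @ W.T
    P = np.exp(L - L.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
    W += 0.1 * (Y - P).T @ Z / len(Z)
acc = float(np.mean((Z @ W.T).argmax(1) == y_lab))
```

The point of the comparison in the text is that the classifier here never influences the decomposition, whereas SSFLogReg optimizes both jointly through the shared bottleneck.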
n | PCA + LogReg | SPCA + LogReg | ICA + LogReg | AE + LogReg | SSFLogReg
5 | 45.1% | 32.2% | 37.5% | 44.2% | 95.7%
20 | 78.1% | 78.2% | 81.0% | 63.2% | 97.3%
50 | 81.7% | 84.0% | 84.2% | 77.0% | 97.6%
100 | 81.3% | 82.2% | 87.3% | 76.6% | 97.4%

Table 2: Serial versus parallel dimensionality reduction and classification. Chance is at 5.6%.

Model performance. SSFLogReg was subsequently trained (500 epochs) across parameter choices for the hidden components (n = 5, 20, 100) and the balance between autoencoder and logistic regression (λ = 0, 0.25, 0.5, 0.75, 1). Assuming 5 latent directions of variation should yield models with higher bias and smaller variance than SSFLogReg with 100 latent directions. Given the 18-class problem of HCP, setting λ to 0 consistently yields generalization performance at chance level (5.6%) because only the unsupervised layer of the estimator is optimized. At each epoch (i.e., iteration over the data), the out-of-sample performance of the trained classifier was assessed on 20% of unseen HCP data. Additionally, the "out-of-study" performance of the learned decomposition (W0) was assessed by using it as dimensionality reduction of an independent labeled dataset (i.e., ARCHI) and conducting ordinary logistic regression on the ensuing component loadings.

Columns: λ = 0 / 0.25 / 0.5 / 0.75 / 1.

n = 5:
Out-of-sample accuracy | 6.0% | 88.9% | 95.1% | 96.5% | 95.7%
Precision (mean) | 5.9% | 87.0% | 94.9% | 96.3% | 95.4%
Recall (mean) | 5.6% | 88.3% | 95.2% | 96.6% | 95.7%
F1 score (mean) | 4.1% | 86.6% | 94.9% | 96.4% | 95.4%
Reconstr. error (norm.) | 0.76 | 0.85 | 0.87 | 1.01 | 1.79
Out-of-study accuracy | 39.4% | 60.8% | 54.3% | 60.7% | 62.9%

n = 20:
Out-of-sample accuracy | 5.5% | 97.4% | 97.8% | 97.3% | 97.3%
Precision (mean) | 5.1% | 97.4% | 97.1% | 97.0% | 97.0%
Recall (mean) | 4.6% | 97.5% | 97.5% | 97.4% | 97.4%
F1 score (mean) | 3.8% | 97.4% | 97.2% | 97.1% | 97.1%
Reconstr. error (norm.) | 0.64 | 0.67 | 0.69 | 0.77 | 1.22
Out-of-study accuracy | 77.0% | 79.7% | 81.9% | 79.7% | 79.4%

n = 100:
Out-of-sample accuracy | 6.1% | 97.2% | 97.0% | 97.8% | 97.4%
Precision (mean) | 5.9% | 96.9% | 96.5% | 97.5% | 96.9%
Recall (mean) | 7.2% | 97.2% | 97.2% | 97.9% | 97.4%
F1 score (mean) | 5.3% | 97.0% | 96.7% | 97.7% | 97.2%
Reconstr. error (norm.) | 0.60 | 0.65 | 0.68 | 0.73 | 1.08
Out-of-study accuracy | 79.2% | 82.2% | 81.7% | 81.3% | 75.8%

Table 3: Performance of SSFLogReg across model parameter choices. Chance is at 5.6%.

We made three noteworthy observations (Table 3). First, the most supervised estimator (λ = 1) in no instance achieved the best accuracy, precision, recall, or F1 score on HCP data. Classification by SSFLogReg is therefore facilitated by imposing structure from the unlabeled rest data. As confirmed by the normalized reconstruction error (E = ||x − x̂|| / ||x||), little weight on the supervised term is sufficient for good model performance while keeping E low and the task-map decomposition rest-like.

Figure 2: Effect of bottleneck in a 38-task classification problem. Depicts the F1 prediction scores for each of 38 psychological tasks. Multinomial logistic regression operating in voxel space (blue bars) was compared to SSFLogReg operating in 20 (left plot) and 100 (right plot) latent modes (grey bars). Autoencoder and rest data were not used for these analyses (λ = 1). Ordinary logistic regression yielded 77.7% out-of-sample accuracy, while SSFLogReg scored 94.4% (n = 20) and 94.2% (n = 100). Hence, compressing the voxel data into a component space for classification achieves higher task separability. Chance is at 2.6%.

Second, the higher the number of latent components n, the higher the out-of-study performance at small values of λ. This suggests that the presence of more rest-data-inspired hidden components results in more effective feature representations in unrelated task data. Third, for n = 20 and 100 (but not 5), the purely rest-data-trained decomposition matrix (λ = 0) resulted in non-inferior out-of-study performance of 77.0% and 79.2%, respectively (Table 3).
This confirms that guiding model learning by task-unrelated structure extracts features of general relevance beyond the supervised problem at hand.

Individual effects of dimensionality reduction and rest data. We first quantified the impact of introducing a bottleneck layer while disregarding the autoencoder. To this end, ordinary logistic regression was juxtaposed with SSFLogReg at λ = 1. For this experiment, we increased the difficulty of the classification problem by including data from all 38 HCP tasks. Indeed, the increased class separability in component space, as compared to voxel space, entails differences in generalization performance of ≈17% (Figure 2). Notably, the cognitive tasks on reward and punishment processing are among the least well predicted with ordinary but well predicted with low-rank logistic regression (tasks 1 and 2 in Figure 2). These experimental conditions have been reported to exhibit highly similar neural activity patterns in GLM analyses of that dataset [4]. Consequently, even local activity differences (in the striatum and visual cortex in this case) can be successfully captured by brain-network modelling. We then contemplated the impact of rest structure (Figure 3) by modulating its influence (λ = 0.25, 0.5, 0.75) in data-scarce and data-rich settings (n = 20, ℓ1 = 0.1, ℓ2 = 0.1). At the beginning of every epoch, 2000 task and 2000 rest maps were drawn with replacement from the respective pools of task and rest maps. In data-scarce scenarios (frequently encountered by neuroimaging practitioners), the out-of-sample scores improve as we depart from the most supervised model (λ = 1). In data-rich scenarios, the same trend was apparent.

Feature identification. We finally examined whether the models were fit for purpose (Figure 4). To this end, we computed Pearson's correlation between the classifier weights and the averaged neural activity map for each of the 18 tasks.
Ordinary logistic regression thus yielded a mean correlation of ρ = 0.28 across tasks. For SSFLogReg (λ = 0.25, 0.5, 0.75, 1), a per-class weight map was computed by matrix multiplication of the two inner layers. Feature identification performance thus ranged between ρ = 0.35 and ρ = 0.55 for n = 5, between ρ = 0.59 and ρ = 0.69 for n = 20, and between ρ = 0.58 and ρ = 0.69 for n = 100. Consequently, SSFLogReg puts higher absolute weights on relevant structure. This reflects an increased signal-to-noise ratio, in part explained by the more BOLD-typical local contiguity. Conversely, SSFLogReg puts lower probability mass on irrelevant structure. Despite the lower interpretability of its results, the salt-and-pepper-like weight maps of ordinary logistic regression were sufficient for good classification performance.

Figure 3: Effect of rest structure. Model performance of SSFLogReg (n = 20, ℓ1 = 0.1, ℓ2 = 0.1) for different choices of λ in data-scarce (100 task and 100 rest maps, hot colors) and data-rich (1000 task and 1000 rest maps, cold colors) scenarios. Gradient descent was performed on 2000 task and 2000 rest maps. At the beginning of each epoch, these were drawn with replacement from a pool of 100 or 1000 different task and rest maps, respectively. Chance is at 5.6%.

Figure 4: Classification weight maps. The voxel predictors corresponding to 5 exemplary (of 18 total) psychological tasks (rows) from the HCP dataset [4]. Left column: multinomial logistic regression (same implementation but without bottleneck or autoencoder); middle column: SSFLogReg (n = 20 latent components, λ = 0.5, ℓ1 = 0.1, ℓ2 = 0.1); right column: voxel-wise average across all samples of whole-brain activity maps from each task. SSFLogReg a) puts higher absolute weights on relevant structure, b) puts lower ones on irrelevant structure, and c) yields BOLD-typical local contiguity (without enforcing an explicit spatial prior). All values are z-scored and thresholded at the 75th percentile.
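The per-class weight maps and their Pearson-correlation comparison against average activity maps can be sketched as follows; the data are synthetic stand-ins and the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, l = 40, 6, 3              # toy voxels, latent components, classes
V0 = rng.standard_normal((n, d))    # bottleneck layer (latent x voxels)
V1 = rng.standard_normal((l, n))    # class layer (classes x latent)

# per-class voxel weight maps: matrix product of the two inner layers
class_maps = V1 @ V0                # one d-dimensional map per class

def pearson(a, b):
    """Pearson correlation between two flat maps."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# synthetic stand-in for the per-task average activity maps
mean_maps = class_maps + 0.1 * rng.standard_normal((l, d))
rhos = [pearson(class_maps[i], mean_maps[i]) for i in range(l)]
```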
Hence, SSFLogReg yielded class weights that were much more similar to the features of the respective training samples for all choices of n and λ. SSFLogReg therefore captures genuine properties of task activity patterns, rather than participant- or study-specific artefacts.

Miscellaneous observations. For the sake of completeness, we informally report modifications of the statistical model that did not improve generalization performance. a) Introducing stochasticity into model learning by input corruption of Xtask deteriorated model performance in all scenarios. Adding b) rectified linear units (ReLU) to W0 or other commonly used nonlinearities (c) sigmoid, d) softplus, e) hyperbolic tangent) all led to decreased classification accuracies, probably due to sample size limits. Further, f) "pretraining" of the bottleneck W0 (i.e., non-random initialization) by corresponding PCA, SPCA, or ICA loadings did not improve accuracies, and neither did g) autoencoder pretraining. Moreover, introducing an additional h) overcomplete layer (100 units) after the bottleneck was not advantageous. Finally, imposing either i) only ℓ1 or j) only ℓ2 penalty terms was disadvantageous in all tested cases. This favored the ElasticNet regularization chosen in the above analyses.

4 Discussion and Conclusion

Using the flexibility of factored models, we learn the low-dimensional representation of the high-dimensional voxel brain space that is most important for the prediction of cognitive task sets. From a machine-learning perspective, factorization of the logistic regression weights can be viewed as transforming a "multi-class classification problem" into a "multi-task learning problem". The higher generalization accuracy and support recovery, compared to ordinary logistic regression, hold potential for adoption in various neuroimaging analyses. Besides the increased performance, these models are more interpretable because they automatically learn a mapping to and from a brain-network space.
This domain-specific learning algorithm encourages a departure from the artificial and statistically less attractive voxel space. Neurobiologically, brain activity underlying defined mental operations can be explained by linear combinations of the main activity patterns. That is, fMRI data probably concentrate near a low-dimensional manifold of characteristic brain network combinations. Extracting fundamental building blocks of brain organization might facilitate the quest for the cognitive primitives of human thought. We hope that these first steps stimulate development towards powerful semi-supervised representation extraction in systems neuroscience. In the future, automatic reduction of brain maps to their neurobiological essence may enable data-intensive neuroimaging investigations. Initiatives for data collection are rapidly increasing in neuroscience [22] and promise structured integration of the neuroscientific knowledge accumulating in databases. Tractability through condensed feature representations can avoid the ill-posed problem of learning the full distribution of activity patterns. This is relevant not only to the multi-class challenges spanning the human cognitive space [24] but also to the multi-modal combination with high-resolution 3D models of brain anatomy [2] and high-throughput genomics [19]. The biggest socioeconomic potential may lie in across-hospital clinical studies that predict disease trajectories and drug responses in psychiatric and neurological populations [11].

Acknowledgment. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project). Data were provided by the Human Connectome Project. Further support was received from the German National Academic Foundation (D.B.) and the MetaMRI associated team (B.T., G.V.).
References

[1] Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., Varoquaux, G.: Machine learning for neuroimaging with scikit-learn. Front Neuroinform 8, 14 (2014)
[2] Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H., Dickscheid, T., Rousseau, M.E., Bludau, S., Bazin, P.L., Lewis, L.B., Oros-Peusquens, A.M., et al.: BigBrain: an ultrahigh-resolution 3D human brain model. Science 340(6139), 1472–1475 (2013)
[3] Baldi, P., Hornik, K.: Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks 2(1), 53–58 (1989)
[4] Barch, D.M., Burgess, G.C., Harms, M.P., Petersen, S.E., Schlaggar, B.L., Corbetta, M., Glasser, M.F., Curtiss, S., Dixit, S., Feldt, C.: Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage 80, 169–189 (2013)
[5] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I., Bergeron, A., Bouchard, N., Warde-Farley, D., Bengio, Y.: Theano: new features and speed improvements. arXiv preprint arXiv:1211.5590 (2012)
[6] Beckmann, C.F., DeLuca, M., Devlin, J.T., Smith, S.M.: Investigations into resting-state connectivity using independent component analysis. Philos Trans R Soc Lond B Biol Sci 360(1457), 1001–13 (2005)
[7] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., Bengio, Y.: Theano: a CPU and GPU math expression compiler. Proceedings of the Python for Scientific Computing Conference (SciPy) 4, 3 (2010)
[8] Biswal, B.B., Mennes, M., Zuo, X.N., Gohel, S., Kelly, C., et al.: Toward discovery science of human brain function. Proc Natl Acad Sci U S A 107(10), 4734–9 (2010)
[9] Cole, M.W., Bassett, D.S., Power, J.D., Braver, T.S., Petersen, S.E.: Intrinsic and task-evoked network architectures of the human brain.
Neuron 83, 238–251 (2014)
[10] Fox, M.D., Raichle, M.E.: Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat Rev Neurosci 8, 700–711 (2007)
[11] Frackowiak, R., Markram, H.: The future of human cerebral cartography: a novel approach. Philos Trans R Soc Lond B Biol Sci 370(1668), 20140171 (2015)
[12] Friston, K.J., Buechel, C., Fink, G.R., Morris, J., Rolls, E., Dolan, R.J.: Psychophysiological and modulatory interactions in neuroimaging. Neuroimage 6(3), 218–29 (1997)
[13] Friston, K.J., Holmes, A.P., Worsley, K.J., Poline, J.P., Frith, C.D., Frackowiak, R.S.: Statistical parametric maps in functional imaging: a general linear approach. Hum Brain Mapp 2(4), 189–210 (1994)
[14] Gorgolewski, K., Burns, C.D., Madison, C., Clark, D., Halchenko, Y.O., Waskom, M.L., Ghosh, S.S.: Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in Python. Front Neuroinform 5, 13 (2011)
[15] Hertz, J., Krogh, A., Palmer, R.G.: Introduction to the Theory of Neural Computation, vol. 1. Basic Books (1991)
[16] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
[17] Hipp, J.F., Siegel, M.: BOLD fMRI correlation reflects frequency-specific neuronal correlation. Curr Biol (2015)
[18] Le, Q.V., Karpenko, A., Ngiam, J., Ng, A.: ICA with reconstruction cost for efficient overcomplete feature learning. pp. 1017–1025 (2011)
[19] Need, A.C., Goldstein, D.B.: Whole genome association studies in complex diseases: where do we stand? Dialogues in Clinical Neuroscience 12(1), 37 (2010)
[20] Olshausen, B., et al.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images.
Nature 381(6583), 607–609 (1996)
[21] Pinel, P., Thirion, B., Meriaux, S., Jobert, A., Serres, J., Le Bihan, D., Poline, J.B., Dehaene, S.: Fast reproducible identification and large-scale databasing of individual functional cognitive networks. BMC Neurosci 8, 91 (2007)
[22] Poldrack, R.A., Gorgolewski, K.J.: Making big data open: data sharing in neuroimaging. Nature Neuroscience 17(11), 1510–1517 (2014)
[23] Poldrack, R.A., Halchenko, Y.O., Hanson, S.J.: Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychol Sci 20(11), 1364–72 (2009)
[24] Schwartz, Y., Thirion, B., Varoquaux, G.: Mapping cognitive ontologies to and from the brain. Advances in Neural Information Processing Systems (2013)
[25] Smith, S.M., Beckmann, C.F., Andersson, J., Auerbach, E.J., Bijsterbosch, J., Douaud, G., Duff, E., Feinberg, D.A., Griffanti, L., Harms, M.P., et al.: Resting-state fMRI in the human connectome project. NeuroImage 80, 144–168 (2013)
[26] Smith, S.M., Fox, P.T., Miller, K.L., Glahn, D.C., Fox, P.M., Mackay, C.E., Filippini, N., Watkins, K.E., Toro, R., Laird, A.R., Beckmann, C.F.: Correspondence of the brain's functional architecture during activation and rest. Proc Natl Acad Sci U S A 106(31), 13040–5 (2009)
[27] Tieleman, T., Hinton, G.: Lecture 6.5: rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning (2012)
[28] Varoquaux, G., Gramfort, A., Pedregosa, F., Michel, V., Thirion, B.: Multi-subject dictionary learning to segment an atlas of brain spontaneous activity. Information Processing in Medical Imaging, pp. 562–573 (2011)
Compressive spectral embedding: sidestepping the SVD

Dinesh Ramasamy dineshr@ece.ucsb.edu ECE Department, UC Santa Barbara
Upamanyu Madhow madhow@ece.ucsb.edu ECE Department, UC Santa Barbara

Abstract

Spectral embedding based on the Singular Value Decomposition (SVD) is a widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors and rescaling the coordinate axes (by a predefined function of the singular value). However, the number of such vectors required to capture problem structure grows with problem size, and even partial SVD computation becomes a bottleneck. In this paper, we propose a low-complexity compressive spectral embedding algorithm, which employs random projections and finite-order polynomial expansions to compute approximations to SVD-based embeddings. For an m × n matrix with T non-zeros, its time complexity is O((T + m + n) log(m + n)), and the embedding dimension is O(log(m + n)), both of which are independent of the number of singular vectors whose effect we wish to capture. To the best of our knowledge, this is the first work to circumvent this dependence on the number of singular vectors for general SVD-based embeddings. The key to sidestepping the SVD is the observation that, for downstream inference tasks such as clustering and classification, we are only interested in using the resulting embedding to evaluate pairwise similarity metrics derived from the ℓ2-norm, rather than capturing the effect of the underlying matrix on arbitrary vectors as a partial SVD tries to do. Our numerical results on network datasets demonstrate the efficacy of the proposed method, and motivate further exploration of its application to large-scale inference tasks.
1 Introduction

Inference tasks encountered in natural language processing, graph inference, and manifold learning employ the singular value decomposition (SVD) as a first step to reduce dimensionality while retaining useful structure in the input. Such spectral embeddings go under various guises: Principal Component Analysis (PCA), Latent Semantic Indexing (natural language processing), Kernel Principal Component Analysis, and commute-time and diffusion embeddings of graphs, to name a few. In this paper, we present a compressive approach for accomplishing SVD-based dimensionality reduction, or embedding, without actually performing the computationally expensive SVD step.

The setting is as follows. The input is represented in matrix form. This matrix could represent the adjacency matrix or the Laplacian of a graph, the probability transition matrix of a random walker on the graph, a bag-of-words representation of documents, the action of a kernel on a set of l points {x(p) ∈ R^d : p = 1, . . . , l} (kernel PCA) [1][2], such as

A(p, q) = exp(−||x(p) − x(q)||^2 / (2α^2))  or  A(p, q) = I(||x(p) − x(q)|| < α),  1 ≤ p, q ≤ l,  (1)

where I(·) denotes the indicator function, or matrices derived from K-nearest-neighbor graphs constructed from {x(p)}. We wish to compute a transformation of the rows of this m × n matrix A which succinctly captures the global structure of A via euclidean distances (or similarity metrics derived from the ℓ2-norm, such as normalized correlations). A common approach is to compute a partial SVD of A, Σ_{l=1}^{k} σ_l u_l v_l^T, k ≪ n, and to use it to embed the rows of A into a k-dimensional space using the rows of E = [f(σ1)u1 f(σ2)u2 · · · f(σk)uk], for some function f(·). The embedding of the variable corresponding to the l-th row of the matrix A is the l-th row of E. For example, f(x) = x corresponds to Principal Component Analysis (PCA): the k-dimensional rows of E are projections of the n-dimensional rows of A along the first k principal components, {v_l, l = 1, . . . , k}.
Other important choices include f(x) = constant, used to cut graphs [3], and f(x) = 1/√(1 − x) for the commute-time embedding of graphs [4]. Inference tasks such as (unsupervised) clustering and (supervised) classification are performed using ℓ2-based pairwise similarity metrics on the embedded coordinates (rows of E) instead of the ambient data (rows of A). Beyond the obvious benefit of dimensionality reduction from n to k, embeddings derived from the leading partial SVD can often be interpreted as denoising, since the "noise" in matrices arising from real-world data manifests itself via the smaller singular vectors of A (e.g., see [5], which analyzes graph adjacency matrices). This is often cited as a motivation for choosing PCA over "isotropic" dimensionality reduction techniques such as random embeddings, which, under the setting of the Johnson-Lindenstrauss (JL) lemma, can also preserve structure. The number of singular vectors k needed to capture the structure of an m × n matrix grows with its size, and two bottlenecks emerge as we scale: (a) the computational effort required to extract a large number of singular vectors using conventional iterative methods such as Lanczos or simultaneous iteration, or approximate algorithms like Nystrom [6], [7] and Randomized SVD [8], becomes prohibitive (scaling as Ω(kT), where T is the number of non-zeros in A); (b) the resulting k-dimensional embedding becomes unwieldy for use in subsequent inference steps.

Approach and Contributions: In this paper, we tackle these scalability bottlenecks by focusing on what embeddings are actually used for: computing ℓ2-based pairwise similarity metrics typically used for supervised or unsupervised learning. For example, K-means clustering uses pairwise Euclidean distances, and SVM-based classification uses pairwise inner products.
We therefore ask the following question: "Is it possible to compute an embedding which captures the pairwise Euclidean distances between the rows of the spectral embedding $E = [f(\sigma_1)u_1 \cdots f(\sigma_k)u_k]$, while sidestepping the computationally expensive partial SVD?" We answer this question in the affirmative by presenting a compressive algorithm which directly computes a low-dimensional embedding. There are two key insights that drive our algorithm:

• By approximating $f(\sigma)$ by a low-order ($L \ll \min\{m, n\}$) polynomial, we can compute the embedding iteratively using matrix-vector products of the form $Aq$ or $A^T q$.

• The iterations can be computed compressively: by virtue of the celebrated JL lemma, the embedding geometry is approximately captured by a small number $d = O(\log(m + n))$ of randomly picked starting vectors.

The number of passes over $A$ and $A^T$ and the time complexity of the algorithm are $L$, $L$ and $O(L(T + m + n)\log(m + n))$, respectively. These are all independent of the number of singular vectors $k$ whose effect we wish to capture via the embedding. This is in stark contrast to embedding directly based on the partial SVD. Our algorithm lends itself to parallel implementation as a sequence of $2L$ matrix-vector products interlaced with vector additions, run in parallel across $d = O(\log(m + n))$ randomly chosen starting vectors. This approach significantly reduces both computational complexity and embedding dimensionality relative to partial SVD. A freely downloadable Python implementation of the proposed algorithm that exploits this inherent parallelism can be found in [9].

2 Related work

As discussed in Section 3.1, the concept of compressive measurements forms a key ingredient in our algorithm, and is based on the JL lemma [10]. The latter, which provides probabilistic guarantees on approximate preservation of the Euclidean geometry for a finite collection of points under random projections, forms the basis for many other applications, such as compressive sensing [11].
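As a toy illustration (ours, not from the paper) of this JL-style distance preservation, projecting high-dimensional points through a random $\{\pm 1/\sqrt{d}\}$ matrix keeps pairwise distances within a small multiplicative distortion; all sizes below are arbitrary:

```python
import numpy as np

# Toy JL illustration: project n-dimensional points onto d dimensions with a
# random {+-1/sqrt(d)} matrix and compare pairwise distances before and after.
rng = np.random.default_rng(0)
n_points, n, d = 50, 2000, 300
X = rng.standard_normal((n_points, n))
Omega = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)
Y = X @ Omega                                  # compressive projection

ratios = []
for i in range(n_points):
    for j in range(i + 1, n_points):
        ratios.append(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))
ratios = np.array(ratios)

# All pairwise distances survive projection up to a modest distortion.
assert ratios.min() > 0.7 and ratios.max() < 1.3
```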
We now mention a few techniques for exact and approximate SVD computation, before discussing algorithms that sidestep the SVD as we do. The time complexity of the full SVD of an $m \times n$ matrix is $O(mn^2)$ (for $m > n$). Partial SVDs are computed using iterative methods for eigendecompositions of symmetric matrices derived from $A$, such as $AA^T$ and $\begin{bmatrix} 0 & A^T \\ A & 0 \end{bmatrix}$ [12]. The complexity of standard iterative eigensolvers such as simultaneous iteration [13] and the Lanczos method scales as $\Omega(kT)$ [12], where $T$ denotes the number of non-zeros of $A$. The leading $k$ (singular value, singular vector) triplets $\{(\sigma_l, u_l, v_l),\ l = 1, \ldots, k\}$ minimize the matrix reconstruction error under a rank-$k$ constraint: they are a solution to the optimization problem $\arg\min \|A - \sum_{l=1}^{k} \sigma_l u_l v_l^T\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm. Approximate SVD algorithms strive to reduce this error while also placing constraints on the computational budget and/or the number of passes over $A$. A commonly employed approximate eigendecomposition algorithm is the Nystrom method [6], [7], based on random sampling of $s$ columns of $A$, which has time complexity $O(ksn + s^3)$. A number of variants of the Nystrom method for kernel matrices like (1) have been proposed in the literature. These aim to improve accuracy using preprocessing steps such as K-means clustering [14] or random projection trees [15]. Methods to reduce the complexity of the Nystrom algorithm to $O(ksn + k^3)$ [16], [17] enable Nystrom sketches that see more columns of $A$. The complexity of all of these grows as $\Omega(ksn)$. Other randomized algorithms, involving iterative computations, include Randomized SVD [8]. Since all of these algorithms set out to recover the $k$ leading eigenvectors (exact or otherwise), their complexity scales as $\Omega(kT)$. We now turn to algorithms that sidestep SVD computation.
In [18], [19], vertices of a graph are embedded based on diffusion of probability mass in random walks on the graph, using the power iteration run independently on random starting vectors, and stopping "prior to convergence." While this approach is specialized to probability transition matrices (unlike our general framework) and does not provide explicit control on the nature of the embedding as we do, a feature in common with the present paper is that the time complexity of the algorithm and the dimensionality of the resulting embedding are independent of the number of eigenvectors $k$ captured by it. A parallel implementation of this algorithm was considered in [20]; similar parallelization directly applies to our algorithm. Another specific application that falls within our general framework is the commute time embedding on a graph, based on the normalized adjacency matrix and the weighing function $f(x) = 1/\sqrt{1 - x}$ [4], [21]. Approximate commute time embeddings have been computed using Spielman-Teng solvers [22], [23] and the JL lemma in [24]. The complexity of the latter algorithm and the dimensionality of the resulting embedding are comparable to ours, but the method is specially designed for the normalized adjacency matrix and the weighing function $f(x) = 1/\sqrt{1 - x}$. Our more general framework would, for example, provide the flexibility of suppressing small eigenvectors from contributing to the embedding (e.g., by setting $f(x) = I(x > \epsilon)/\sqrt{1 - x}$). Thus, while randomized projections are extensively used in the embedding literature, to the best of our knowledge, the present paper is the first to develop a general compressive framework for spectral embeddings derived from the SVD. It is interesting to note that methods similar to ours have been used in a different context, to estimate the empirical distribution of eigenvalues of a large Hermitian matrix [25], [26].
These methods use a polynomial approximation of indicator functions $f(\lambda) = I(a \le \lambda \le b)$ and random projections to compute an approximate histogram of the number of eigenvectors across different bands of the spectrum $[a, b] \subseteq [\lambda_{\min}, \lambda_{\max}]$.

3 Algorithm

We first present the algorithm for a symmetric $n \times n$ matrix $S$. Later, in Section 3.5, we show how to handle a general $m \times n$ matrix by considering a related $(m + n) \times (m + n)$ symmetric matrix. Let $\lambda_l$ denote the eigenvalues of $S$ sorted in descending order and $v_l$ their corresponding unit-norm eigenvectors (chosen to be orthogonal in case of repeated eigenvalues). For any function $g(x): \mathbb{R} \mapsto \mathbb{R}$, we denote by $g(S)$ the $n \times n$ symmetric matrix $g(S) = \sum_{l=1}^{n} g(\lambda_l) v_l v_l^T$. We now develop an $O(n \log n)$ algorithm to compute a $d = O(\log n)$ dimensional embedding which approximately captures pairwise Euclidean distances between the rows of the embedding $E = [f(\lambda_1) v_1 \; f(\lambda_2) v_2 \; \cdots \; f(\lambda_n) v_n]$.

Rotations are inconsequential: We first observe that a rotation of basis does not alter $\ell_2$-based similarity metrics. Since $V = [v_1 \cdots v_n]$ satisfies $V V^T = V^T V = I_n$, pairwise distances between the rows of $E$ are equal to the corresponding pairwise distances between the rows of $E V^T = \sum_{l=1}^{n} f(\lambda_l) v_l v_l^T = f(S)$. We use this observation to compute embeddings of the rows of $f(S)$ rather than those of $E$.

3.1 Compressive embedding

Suppose now that we know $f(S)$. This constitutes an $n$-dimensional embedding, and a similarity query between two "vertices" (we refer to the variables corresponding to rows of $S$ as vertices, as we would for matrices derived from graphs) requires $O(n)$ operations. However, we can reduce this time to $O(\log n)$ by using the JL lemma, which informs us that pairwise distances can be approximately captured by compressive projection onto $d = O(\log n)$ dimensions. Specifically, for $d > (4 + 2\beta)\log n \,/\, (\epsilon^2/2 - \epsilon^3/3)$, let $\Omega$ denote an $n \times d$ matrix with i.i.d. entries drawn uniformly at random from $\{\pm 1/\sqrt{d}\}$.
According to the JL lemma, pairwise distances between the rows of $f(S)\Omega$ approximate pairwise distances between the rows of $f(S)$ with high probability. In particular, the following statement holds with probability at least $1 - n^{-\beta}$: $(1 - \epsilon)\|u - v\|^2 \le \|(u - v)\Omega\|^2 \le (1 + \epsilon)\|u - v\|^2$ for any two rows $u, v$ of $f(S)$. The key take-aways are that (a) we can reduce the embedding dimension to $d = O(\log n)$, since we are only interested in pairwise similarity measures, and (b) we do not need to compute $f(S)$; we only need to compute $f(S)\Omega$. We now discuss how to accomplish the latter efficiently.

3.2 Polynomial approximation of embedding

Direct computation of $E' = f(S)\Omega$ from the eigenvectors and eigenvalues of $S$, as $f(S) = \sum_l f(\lambda_l) v_l v_l^T$ would suggest, is expensive ($O(n^3)$). However, we now observe that computation of $\psi(S)\Omega$ is easy when $\psi(\cdot)$ is a polynomial. In this case, $\psi(S) = \sum_{p=0}^{L} b_p S^p$ for some $b_p \in \mathbb{R}$, so that $\psi(S)\Omega$ can be computed as a sequence of $L$ matrix-vector products interlaced with vector additions, run in parallel for each of the $d$ columns of $\Omega$. Therefore, they only require $LdT + O(Ldn)$ flops. Our strategy is to approximate $E' = f(S)\Omega$ by $\tilde{E} = \tilde{f}_L(S)\Omega$, where $\tilde{f}_L(x)$ is an $L$-th order polynomial approximation of $f(x)$. We defer the details of computing a "good" polynomial approximation to Section 3.4. For now, we assume that one such approximation $\tilde{f}_L(\cdot)$ is available and give bounds on the loss in fidelity as a result of this approximation.

3.3 Performance guarantees

The spectral norm of the "error matrix" $Z = f(S) - \tilde{f}_L(S) = \sum_{r=1}^{n} \big(f(\lambda_r) - \tilde{f}_L(\lambda_r)\big) v_r v_r^T$ satisfies $\|Z\| = \delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)| \le \max_x |f(x) - \tilde{f}_L(x)|$, where the spectral norm of a matrix $B$, denoted by $\|B\|$, refers to the induced $\ell_2$-norm. For symmetric matrices, $\|B\| \le \alpha \iff |\lambda_l| \le \alpha\ \forall l$, where $\lambda_l$ are the eigenvalues of $B$. Letting $i_p$ denote the unit vector along the $p$-th coordinate of $\mathbb{R}^n$, the distance between the $p$-th and $q$-th rows of $\tilde{f}_L(S)$ can be written as
$$\|\tilde{f}_L(S)(i_p - i_q)\| = \|f(S)(i_p - i_q) - Z(i_p - i_q)\| \le \|E^T(i_p - i_q)\| + \delta\sqrt{2}. \tag{2}$$
Similarly, we have that $\|\tilde{f}_L(S)(i_p - i_q)\| \ge \|E^T(i_p - i_q)\| - \delta\sqrt{2}$. Thus pairwise distances between the rows of $\tilde{f}_L(S)$ approximate those between the rows of $E$. However, the distortion term $\delta\sqrt{2}$ is additive and must be controlled by carefully choosing $\tilde{f}_L(\cdot)$, as discussed in Section 4. Applying the JL lemma [10] to the rows of $\tilde{f}_L(S)$, we have that when $d > O(\epsilon^{-2}\log n)$ and $\Omega$ has i.i.d. entries drawn uniformly at random from $\{\pm 1/\sqrt{d}\}$, the embedding $\tilde{E} = \tilde{f}_L(S)\Omega$ captures pairwise distances between the rows of $\tilde{f}_L(S)$ up to a multiplicative distortion of $1 \pm \epsilon$ with high probability: $\|\tilde{E}^T(i_p - i_q)\| = \|\Omega^T \tilde{f}_L(S)(i_p - i_q)\| \le \sqrt{1 + \epsilon}\,\|\tilde{f}_L(S)(i_p - i_q)\|$. Using (2), we can show that $\|\tilde{E}^T(i_p - i_q)\| \le \sqrt{1 + \epsilon}\,\big(\|E^T(i_p - i_q)\| + \delta\sqrt{2}\big)$. Similarly, $\|\tilde{E}^T(i_p - i_q)\| \ge \sqrt{1 - \epsilon}\,\big(\|E^T(i_p - i_q)\| - \delta\sqrt{2}\big)$. We state this result in Theorem 1.

Theorem 1. Let $\tilde{f}_L(x)$ denote an $L$-th order polynomial such that $\delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)| \le \max_x |f(x) - \tilde{f}_L(x)|$, and let $\Omega$ be an $n \times d$ matrix with entries drawn independently and uniformly at random from $\{\pm 1/\sqrt{d}\}$, where $d$ is an integer satisfying $d > (4 + 2\beta)\log n \,/\, (\epsilon^2/2 - \epsilon^3/3)$. Let $g: \mathbb{R}^n \to \mathbb{R}^d$ denote the mapping from the $i$-th row of $E = [f(\lambda_1) v_1 \cdots f(\lambda_n) v_n]$ to the $i$-th row of $\tilde{E} = \tilde{f}_L(S)\Omega$. The following statement is true with probability at least $1 - n^{-\beta}$:
$$\sqrt{1 - \epsilon}\,\big(\|u - v\| - \delta\sqrt{2}\big) \le \|g(u) - g(v)\| \le \sqrt{1 + \epsilon}\,\big(\|u - v\| + \delta\sqrt{2}\big)$$
for any two rows $u, v$ of $E$. Furthermore, there exists an algorithm to compute each of the $d = O(\log n)$ columns of $\tilde{E}$ in $O(L(T + n))$ flops independent of its other columns, which makes $L$ passes over $S$ ($T$ is the number of non-zeros in $S$).

3.4 Choosing the polynomial approximation

We restrict attention to matrices which satisfy $\|S\| \le 1$, which implies that $|\lambda_l| \le 1$.
We observe that we can trivially center and scale the spectrum of any matrix to satisfy this assumption when we have bounds $\lambda_l \le \sigma_{\max}$ and $\lambda_l \ge \sigma_{\min}$, via the rescaling and centering operation $S' = 2S/(\sigma_{\max} - \sigma_{\min}) - (\sigma_{\max} + \sigma_{\min}) I_n/(\sigma_{\max} - \sigma_{\min})$, and by modifying $f(x)$ to $f'(x) = f\big(x(\sigma_{\max} - \sigma_{\min})/2 + (\sigma_{\max} + \sigma_{\min})/2\big)$.

In order to compute a polynomial approximation of $f(x)$, we need to define the notion of a "good" approximation. We showed in Section 3.3 that the errors introduced by the polynomial approximation can be summarized by furnishing a bound on the spectral norm of the error matrix $Z = f(S) - \tilde{f}_L(S)$: since $\|Z\| = \delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)|$, what matters is how well we approximate the function $f(\cdot)$ at the eigenvalues $\{\lambda_l\}$ of $S$. Indeed, if we knew the eigenvalues, we could minimize $\|Z\|$ by minimizing $\max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)|$. This is not a particularly useful approach, since computing the eigenvalues is expensive. However, we can use prior knowledge of the domain from which the matrix $S$ comes to penalize deviations from $f(\lambda)$ differently for different values of $\lambda$. For example, if we know the distribution $p(\lambda)$ of the eigenvalues of $S$, we can minimize the average error $\Delta_L = \int_{-1}^{1} p(\lambda)\,|f(\lambda) - \tilde{f}_L(\lambda)|^2\,d\lambda$. In our examples, for the sake of concreteness, we assume that the eigenvalues are uniformly distributed over $[-1, 1]$ and give a procedure to compute an $L$-th order polynomial approximation of $f(x)$ that minimizes $\Delta_L = (1/2)\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\,dx$.

A numerically stable procedure to generate finite-order polynomial approximations of a function over $[-1, 1]$ with the objective of minimizing $\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\,dx$ is via the Legendre polynomials $p(r, x)$, $r = 0, 1, \ldots, L$. They satisfy the recursion $p(r, x) = (2 - 1/r)\,x\,p(r-1, x) - (1 - 1/r)\,p(r-2, x)$ and are orthogonal: $\int_{-1}^{1} p(k, x)\,p(l, x)\,dx = 2I(k = l)/(2k + 1)$. Therefore we set $\tilde{f}_L(x) = \sum_{r=0}^{L} a(r)\,p(r, x)$, where $a(r) = (r + 1/2)\int_{-1}^{1} p(r, x)\,f(x)\,dx$.
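Concretely, the coefficients $a(r)$ can be evaluated by Gauss-Legendre quadrature; the following sketch (ours, with a smooth test function) illustrates how quickly a smooth $f$ is captured by a low-order expansion:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Sketch (ours): compute a(r) = (r + 1/2) * int_{-1}^{1} P_r(x) f(x) dx by
# Gauss-Legendre quadrature, giving the L2-optimal degree-L approximation
# f_L(x) = sum_r a(r) P_r(x) on [-1, 1].
def legendre_coeffs(f, L, quad_order=200):
    x, w = leggauss(quad_order)            # quadrature nodes and weights
    fx = f(x)
    a = np.empty(L + 1)
    for r in range(L + 1):
        basis = np.zeros(r + 1)
        basis[r] = 1.0                     # coefficient vector selecting P_r
        a[r] = (r + 0.5) * np.sum(w * legval(x, basis) * fx)
    return a

# A smooth f admits a low-order approximation: degree 10 already matches
# exp(x) very closely over [-1, 1].
a = legendre_coeffs(np.exp, L=10)
xs = np.linspace(-1.0, 1.0, 1001)
assert np.max(np.abs(legval(xs, a) - np.exp(xs))) < 1e-8
assert abs(a[0] - np.sinh(1.0)) < 1e-10    # a(0) = (1/2) * int e^x dx = sinh(1)
```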
We give a method in Algorithm 1 that uses the Legendre recursion to compute $p(r, S)\Omega$, $r = 0, 1, \ldots, L$, using $Ld$ matrix-vector products and vector additions. The coefficients $a(r)$ are used to compute $\tilde{f}_L(S)\Omega$ by adding weighted versions of $p(r, S)\Omega$.

Algorithm 1 Proposed algorithm to compute an approximate $d$-dimensional eigenvector embedding of an $n \times n$ symmetric matrix $S$ (such that $\|S\| \le 1$) using the $n \times d$ random projection matrix $\Omega$.
1: Procedure FASTEMBEDEIG($S$, $f(x)$, $L$, $\Omega$):
2: //* Compute polynomial approximation $\tilde{f}_L(x)$ which minimizes $\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2 dx$ *//
3: for $r = 0, \ldots, L$ do
4:   $a(r) \leftarrow (r + 1/2)\int_{-1}^{1} f(x)\,p(r, x)\,dx$  //* $p(r, x)$: order-$r$ Legendre polynomial *//
5: $Q(0) \leftarrow \Omega$, $Q(-1) \leftarrow 0$, $\tilde{E} \leftarrow a(0)\,Q(0)$
6: for $r = 1, 2, \ldots, L$ do
7:   $Q(r) \leftarrow (2 - 1/r)\,S\,Q(r-1) - (1 - 1/r)\,Q(r-2)$  //* $Q(r) = p(r, S)\Omega$ *//
8:   $\tilde{E} \leftarrow \tilde{E} + a(r)\,Q(r)$  //* $\tilde{E}$ now holds $\tilde{f}_r(S)\Omega$ *//
9: return $\tilde{E}$  //* $\tilde{E} = \tilde{f}_L(S)\Omega$ *//

As described in Section 4, if we have prior knowledge of the distribution of eigenvalues (as we do for many commonly encountered large matrices), then we can "boost" the performance of the generic Algorithm 1, which is based on the assumption of eigenvalues uniformly distributed over $[-1, 1]$.

3.5 Embedding general matrices

We complete the algorithm description by generalizing to any $m \times n$ matrix $A$ (not necessarily symmetric) such that $\|A\| \le 1$. The approach is to utilize Algorithm 1 to compute an approximate $d$-dimensional embedding of the symmetric matrix $S = \begin{bmatrix} 0 & A^T \\ A & 0 \end{bmatrix}$. Let $\{(\sigma_l, u_l, v_l): l = 1, \ldots, \min\{m, n\}\}$ be an SVD of $A = \sum_l \sigma_l u_l v_l^T$ (note that $\|A\| \le 1 \iff \sigma_l \le 1$). Consider the following spectral mapping of the rows of $A$ to the rows of $E_{\text{row}} = [f(\sigma_1)u_1 \cdots f(\sigma_m)u_m]$ and the columns of $A$ to the rows of $E_{\text{col}} = [f(\sigma_1)v_1 \cdots f(\sigma_n)v_n]$. It can be shown that the unit-norm orthogonal eigenvectors of $S$ take the form $[v_l;\, u_l]/\sqrt{2}$ and $[v_l;\, -u_l]/\sqrt{2}$, $l = 1, \ldots, \min\{m, n\}$, with corresponding eigenvalues $\sigma_l$ and $-\sigma_l$, respectively. The remaining $|m - n|$ eigenvalues of $S$ are equal to 0.
Therefore, we call $\tilde{E}_{\text{all}} \leftarrow$ FASTEMBEDEIG($S$, $f'(x)$, $L$, $\Omega$) with $f'(x) = f(x)\,I(x \ge 0) - f(-x)\,I(x < 0)$, where $\Omega$ is an $(m + n) \times d$ matrix, $d = O(\log(m + n))$, with entries drawn independently and uniformly at random from $\{\pm 1/\sqrt{d}\}$. Let $\tilde{E}_{\text{col}}$ and $\tilde{E}_{\text{row}}$ denote the first $n$ and last $m$ rows of $\tilde{E}_{\text{all}}$, respectively. From Theorem 1, we know that, with overwhelming probability, pairwise distances between any two rows of $\tilde{E}_{\text{row}}$ approximate those between the corresponding rows of $E_{\text{row}}$. Similarly, pairwise distances between any two rows of $\tilde{E}_{\text{col}}$ approximate those between the corresponding rows of $E_{\text{col}}$.

4 Implementation considerations

We now briefly go over implementation considerations before presenting numerical results in Section 5.

Spectral norm estimates: In order to ensure that the eigenvalues of $S$ are within $[-1, 1]$ as we have assumed, we scale the matrix by its spectral norm ($\|S\| = \max_l |\lambda_l|$). To this end, we obtain a tight lower bound (and a good approximation) on the spectral norm using power iteration (20 iterates on $6\log n$ randomly chosen starting vectors), and then scale this up by a small factor (1.01) to obtain our estimate (typically an upper bound) of $\|S\|$.

Polynomial approximation order $L$: The error in approximating $f(\lambda)$ by $\tilde{f}_L(\lambda)$, as measured by $\Delta_L = \int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\,dx$, is a non-increasing function of the polynomial order $L$. A reduction in $\Delta_L$ often corresponds to a reduction in $\delta$, which appears as a bound on the distortion in Theorem 1. "Smooth" functions generally admit a lower-order approximation for the same target error $\Delta_L$, and hence yield considerable savings in algorithm complexity, which scales linearly with $L$.

Polynomial approximation method: The rate at which $\delta$ decreases as we increase $L$ depends on the weight function $p(\lambda)$ used to compute $\tilde{f}_L(\lambda)$ (by minimizing $\Delta_L = \int p(\lambda)\,|f(\lambda) - \tilde{f}_L(\lambda)|^2\,d\lambda$). The choice $p(\lambda) \propto 1$ yields the Legendre recursion used in Algorithm 1, whereas $p(\lambda) \propto 1/\sqrt{1 - \lambda^2}$ corresponds to the Chebyshev recursion, which is known to result in fast convergence.
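The power-iteration spectral norm estimate described above can be sketched as follows (our illustration; the test matrix and the iteration budget are hypothetical choices, not the paper's code):

```python
import numpy as np

# Sketch of the Section 4 spectral-norm estimate: power iteration on a few
# random starting vectors yields a tight lower bound on ||S||, which is then
# scaled up by 1.01 to obtain a (typically) safe estimate.
def estimate_spectral_norm(S, n_iter=20, n_vec=10, seed=0):
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((S.shape[0], n_vec))
    for _ in range(n_iter):
        Q = S @ Q
        Q /= np.linalg.norm(Q, axis=0)       # keep the iterates normalized
    return 1.01 * np.max(np.linalg.norm(S @ Q, axis=0))

# Symmetric test matrix with a clear spectral gap (hypothetical data).
rng = np.random.default_rng(1)
G = rng.standard_normal((300, 300))
u = rng.standard_normal(300); u /= np.linalg.norm(u)
S = 0.1 * (G + G.T) / 2 + 3.0 * np.outer(u, u)

est, true = estimate_spectral_norm(S), np.linalg.norm(S, 2)
assert true <= est <= 1.02 * true            # tight estimate, inflated by 1.01
assert np.linalg.norm(S / est, 2) <= 1.0     # rescaled eigenvalues lie in [-1, 1]
```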
We defer to future work a detailed study of the impact of alternative choices for $p(\lambda)$ on $\delta$.

Denoising by cascading: In large-scale problems, it may be necessary to drive the contribution from certain singular vectors to zero. In many settings, singular vectors with smaller singular values correspond to noise. The number of such singular values can scale as fast as $O(\min\{m, n\})$. Therefore, when we place nulls (zeros) in $f(\lambda)$, it is desirable to ensure that these nulls remain pronounced after we approximate $f(\lambda)$ by $\tilde{f}_L(\lambda)$. We do this by computing $\big(\tilde{g}_{L/b}(S)\big)^b\,\Omega$, where $\tilde{g}_{L/b}(\lambda)$ is an $(L/b)$-th order approximation of $g(\lambda) = f^{1/b}(\lambda)$. The small values in the polynomial approximation of $f^{1/b}(\lambda)$ at the points where we have set $f(\lambda) = 0$ are driven further toward zero by the $x^b$ non-linearity, making the nulls more pronounced.

5 Numerical results

While the proposed approach is particularly useful for large problems in which exact eigendecomposition is computationally infeasible, for the purpose of comparison, our results are restricted to smaller settings where the exact solution can be computed. We compute the exact partial eigendecomposition using the ARPACK library (called from MATLAB).
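Algorithm 1 admits a compact implementation; the following is our minimal sketch (our reading of the pseudocode, not the authors' released FastEmbed code), with a sanity check on a toy matrix where the polynomial approximation is exact:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Minimal sketch of Algorithm 1: compute E_tilde = f_L(S) @ Omega using the
# Legendre three-term recursion, touching S only through matrix products.
def fastembed_eig(S, f, L, Omega, quad_order=200):
    x, w = leggauss(quad_order)
    fx = f(x)
    a = [(r + 0.5) * np.sum(w * legval(x, np.eye(r + 1)[r]) * fx)
         for r in range(L + 1)]                     # Legendre coefficients a(r)
    Q_prev, Q = np.zeros_like(Omega), Omega.copy()  # p(-1,S)Omega, p(0,S)Omega
    E = a[0] * Q
    for r in range(1, L + 1):
        Q, Q_prev = (2 - 1 / r) * (S @ Q) - (1 - 1 / r) * Q_prev, Q
        E += a[r] * Q                               # E now holds f_r(S) Omega
    return E

# Sanity check: for f(x) = x^2 the degree-2 Legendre expansion is exact,
# so the output must equal S @ S @ Omega.
rng = np.random.default_rng(0)
n, d = 100, 12
S = rng.standard_normal((n, n)); S = (S + S.T) / 2
S /= np.linalg.norm(S, 2)                           # enforce ||S|| <= 1
Omega = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)
E = fastembed_eig(S, lambda x: x**2, L=2, Omega=Omega)
assert np.allclose(E, S @ S @ Omega)
```

In a large-scale setting, `S` would be a sparse matrix and `S @ Q` a sparse-dense product; each of the `d` columns can be processed independently in parallel, as the paper notes.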
Figure 1: DBLP collaboration network normalized correlations. (a) Effect of dimensionality $d$ of the embedding; (b) effect of cascading: $b = 1$ (left) and $b = 2$ (right). Percentile curves (1st, 5th, 25th, 50th, 75th, 95th, 99th) are shown.

For a given choice of weighing function $f(\lambda)$, the associated embedding $E = [f(\lambda_1)v_1 \cdots f(\lambda_n)v_n]$ is compared with the compressive embedding $\tilde{E}$ returned by Algorithm 1. The latter was implemented in Python using Scipy's sparse matrix-multiplication routines and is available for download from [9]. We consider two real-world undirected graphs in [27] for our evaluation, and compute embeddings for the normalized adjacency matrix $\tilde{A} = D^{-1/2} A D^{-1/2}$ (where $D$ is a diagonal matrix with the row sums of the adjacency matrix $A$; the eigenvalues of $\tilde{A}$ lie in $[-1, 1]$). We study the accuracy of embeddings by comparing pairwise normalized correlations between the $i$-th and $j$-th rows of $E$, given by $\langle E(i,:), E(j,:)\rangle \,/\, \|E(i,:)\|\,\|E(j,:)\|$, with those predicted by the approximate embedding, $\langle \tilde{E}(i,:), \tilde{E}(j,:)\rangle \,/\, \|\tilde{E}(i,:)\|\,\|\tilde{E}(j,:)\|$ ($E(i,:)$ is shorthand for the $i$-th row of $E$).

DBLP collaboration network [27] is an undirected graph on $n = 317080$ vertices with 1049866 edges. We compute the leading 500 eigenvectors of the normalized adjacency matrix $\tilde{A}$.
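As a toy sketch (ours; the real datasets are far larger and sparse), forming the normalized adjacency matrix and checking that its spectrum lies in $[-1, 1]$:

```python
import numpy as np

# Sketch (our toy illustration) of the normalized adjacency matrix
# A_tilde = D^{-1/2} A D^{-1/2}; its eigenvalues are guaranteed to lie in [-1, 1].
rng = np.random.default_rng(0)
n = 50
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                            # random undirected graph
idx = np.arange(n)
A[idx, (idx + 1) % n] = A[(idx + 1) % n, idx] = 1.0    # add a ring: no isolated vertices

d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}

eigvals = np.linalg.eigvalsh(A_norm)                    # ascending order
assert np.all(eigvals >= -1 - 1e-9) and np.all(eigvals <= 1 + 1e-9)
assert np.isclose(eigvals[-1], 1.0)  # top eigenvalue of a connected graph is 1
```

For graphs of the scale used in the paper, the same computation would be done with `scipy.sparse` matrices rather than dense arrays.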
The smallest of the five hundred eigenvalues is 0.98, so we set $f(\lambda) = I(\lambda \ge 0.98)$ and $S = \tilde{A}$ in Algorithm 1 and compare the resulting embedding $\tilde{E}$ with $E = [v_1 \cdots v_{500}]$. We use this dataset to demonstrate the dependence of the quality of the embedding $\tilde{E}$ returned by the proposed algorithm on two parameters: (i) the number of random starting vectors $d$, which gives the dimensionality of the embedding, and (ii) the boosting/cascading parameter $b$.

Dependence on the number of random projections $d$: In Figure (1a), $d$ ranges from 1 to $120 \approx 9\log n$, and we plot the 1st, 5th, 25th, 50th, 75th, 95th and 99th percentile values of the deviation between the compressive normalized correlation (from the rows of $\tilde{E}$) and the corresponding exact normalized correlation (rows of $E$). The deviation decreases with increasing $d$, corresponding to $\ell_2$-norm concentration (JL lemma), but this payoff saturates for large values of $d$ as polynomial approximation errors start to dominate. From the 5th and 95th percentile curves, we see that a significant fraction (90%) of pairwise normalized correlations in $\tilde{E}$ lie within $\pm 0.2$ of their corresponding values in $E$ when $d = 80 \approx 6\log n$. For Figure (1a), we use $L = 180$ matrix-vector products for each randomly picked starting vector and set the cascading parameter to $b = 2$ in the algorithm of Section 4.

Dependence on the cascading parameter $b$: In Section 4 we described how cascading can help suppress the contribution to the embedding $\tilde{E}$ of the eigenvectors whose eigenvalues lie in regions where we have set $f(\lambda) = 0$. We illustrate the importance of this boosting procedure by comparing the quality of the embedding $\tilde{E}$ for $b = 1$ and $b = 2$ (keeping the other parameters of the algorithm in Section 4 fixed: $L = 180$ matrix-vector products for each of $d = 80$ randomly picked starting vectors).
We report the results in Figure (1b), where we plot percentile values of the compressive normalized correlation (from the rows of $\tilde{E}$) for different values of the exact normalized correlation (rows of $E$). For $b = 1$, the polynomial approximation of $f(\lambda)$ does not suppress small eigenvectors. As a result, we notice a deviation (bias) of the 50th-percentile curve (green) from the ideal $y = x$ dotted line (Figure 1b, left). This disappears for $b = 2$ (Figure 1b, right).

The running time for our algorithm on a standard workstation was about two orders of magnitude smaller than partial SVD using off-the-shelf sparse eigensolvers (e.g., the 80-dimensional embedding of the leading 500 eigenvectors of the DBLP graph took 1 minute, whereas their exact computation took 105 minutes). A more detailed comparison of running times is beyond the scope of this paper, but it is clear that the promised gains in computational complexity are realized in practice.

Application to graph clustering for the Amazon co-purchasing network [27]: This is an undirected graph on $n = 334863$ vertices with 925872 edges. We illustrate the potential downstream benefits of our algorithm by applying K-means clustering on embeddings (exact and compressive) of this network. For the purpose of our comparisons, we compute the first 500 eigenvectors of $\tilde{A}$ explicitly using an exact eigensolver, and use an 80-dimensional compressive embedding $\tilde{E}$ which captures their effect, with $f(\lambda) = I(\lambda \ge \lambda_{500})$, where $\lambda_{500}$ is the 500th eigenvalue. We compare this against the usual spectral embedding using the first 80 eigenvectors of $\tilde{A}$: $E = [v_1 \cdots v_{80}]$. We keep the dimension fixed at 80 in the comparison because K-means complexity scales linearly with it, and it quickly becomes the bottleneck.
Indeed, our ability to embed a large number of eigenvectors directly into a low-dimensional space ($d \approx 6\log n$) has the added benefit of dimensionality reduction within the subspace of interest (in this case, the span of the largest 500 eigenvectors). We consider 25 instances of K-means clustering with $K = 200$ throughout, reporting the median of a commonly used graph clustering score, modularity [28] (larger values translate to better clustering solutions). The median modularity for clustering based on our embedding $\tilde{E}$ is 0.87. This is significantly better than that for $E$, which yields a median modularity of 0.835. In addition, the computational cost for $\tilde{E}$ is one-fifth of that for $E$ (1.5 minutes versus 10 minutes). When we replace the exact eigenvector embedding $E$ with an approximate eigendecomposition using Randomized SVD [8] (parameters: power iterates $q = 5$ and excess dimensionality $l = 10$), the time taken reduces from 10 minutes to 17 seconds, but this comes at the expense of inference quality: median modularity drops to 0.748. On the other hand, the median modularity increases to 0.845 when we consider an exact partial SVD embedding with 120 eigenvectors. This indicates that our compressive embedding yields better clustering quality because it is able to concisely capture more eigenvectors (500 in this example, compared to 80 and 120 with conventional partial SVD). It is worth pointing out that, even for known eigenvectors, the number of dominant eigenvectors $k$ that yields the best inference performance is often unknown a priori, and is treated as a hyper-parameter. For the compressive spectral embedding $\tilde{E}$, an elegant approach for implicitly optimizing over $k$ is to use the embedding function $f(\lambda) = I(\lambda \ge c)$, with $c$ as a hyper-parameter.
6 Conclusion

We have shown that random projections and polynomial expansions provide a powerful approach for spectral embedding of large matrices: for an $m \times n$ matrix $A$, our $O((T + m + n)\log(m + n))$ algorithm computes an $O(\log(m + n))$-dimensional compressive embedding that provably approximates pairwise distances between points in the desired spectral embedding. Numerical results for several real-world data sets show that our method provides good approximations for embeddings based on partial SVD, while incurring much lower complexity. Moreover, our method can also approximate spectral embeddings which depend on the entire SVD, since its complexity does not depend on the number of dominant vectors whose effect we wish to model. A glimpse of this potential is provided by the example of K-means based clustering for estimating sparse cuts of the Amazon graph, where our method yields much better performance (in terms of graph metrics) than a partial SVD with significantly higher complexity. This motivates further investigation into applications of this approach for improving downstream inference tasks in a variety of large-scale problems.

Acknowledgments

This work is supported in part by DARPA GRAPHS (BAA-12-01) and by Systems on Nanoscale Information fabriCs (SONIC), one of the six SRC STARnet Centers, sponsored by MARCO and DARPA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.

References

[1] B. Schölkopf, A. Smola, and K.-R. Müller, "Kernel principal component analysis," in Artificial Neural Networks (ICANN'97), ser. Lecture Notes in Computer Science, W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, Eds. Springer Berlin Heidelberg, 1997, pp. 583-588.
[2] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch, "Kernel PCA and de-noising in feature spaces," in Advances in Neural Information Processing Systems, 1999.
[3] S. White and P. Smyth, "A spectral clustering approach to finding communities in graphs," in SDM, vol. 5. SIAM, 2005.
[4] F. Göbel and A. A. Jagers, "Random walks on graphs," Stochastic Processes and their Applications, 1974.
[5] R. R. Nadakuditi and M. E. J. Newman, "Graph spectra and the detectability of community structure in networks," Physical Review Letters, 2012.
[6] C. Fowlkes, S. Belongie, F. Chung, and J. Malik, "Spectral grouping using the Nyström method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, 2004.
[7] P. Drineas and M. W. Mahoney, "On the Nyström method for approximating a Gram matrix for improved kernel-based learning," Journal of Machine Learning Research, 2005.
[8] N. Halko, P. G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, 2011.
[9] "Python implementation of FastEmbed." [Online]. Available: https://bitbucket.org/dineshkr/fastembed/src/NIPS2015
[10] D. Achlioptas, "Database-friendly random projections," in Proceedings of the Twentieth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, ser. PODS '01, 2001.
[11] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, March 2008.
[12] L. N. Trefethen and D. Bau, Numerical Linear Algebra. SIAM, 1997.
[13] S. F. McCormick and T. Noe, "Simultaneous iteration for the matrix eigenvalue problem," Linear Algebra and its Applications, vol. 16, no. 1, pp. 43-56, 1977.
[14] K. Zhang, I. W. Tsang, and J. T. Kwok, "Improved Nyström low-rank approximation and error analysis," in Proceedings of the 25th International Conference on Machine Learning, ser. ICML '08. ACM, 2008.
[15] D. Yan, L. Huang, and M. I. Jordan, "Fast approximate spectral clustering," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '09. ACM, 2009.
[16] M. Li, J. T. Kwok, and B.-L. Lu, "Making large-scale Nyström approximation possible," in ICML, 2010.
[17] S. Kumar, M. Mohri, and A. Talwalkar, "Ensemble Nyström method," in Advances in Neural Information Processing Systems, 2009.
[18] F. Lin and W. W. Cohen, "Power iteration clustering," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010.
[19] F. Lin, "Scalable methods for graph-based unsupervised and semi-supervised learning," Ph.D. dissertation, Carnegie Mellon University, 2012.
[20] W. Yan, U. Brahmakshatriya, Y. Xue, M. Gilder, and B. Wise, "PIC: Parallel power iteration clustering for big data," Journal of Parallel and Distributed Computing, 2013.
[21] L. Lovász, "Random walks on graphs: A survey," Combinatorics, Paul Erdős is Eighty, vol. 2, no. 1, pp. 1-46, 1993.
[22] D. A. Spielman and S.-H. Teng, "Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems," in Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, ser. STOC '04. New York, NY, USA: ACM, 2004.
[23] D. Spielman and S. Teng, "Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 35, Jan. 2014.
[24] D. Spielman and N. Srivastava, "Graph sparsification by effective resistances," SIAM Journal on Computing, 2011.
[25] R. N. Silver, H. Roeder, A. F. Voter, and J. D. Kress, "Kernel polynomial approximations for densities of states and spectral functions," Journal of Computational Physics, vol. 124, no. 1, pp. 115-130, Mar. 1996.
[26] E. Di Napoli, E. Polizzi, and Y. Saad, "Efficient estimation of eigenvalue counts in an interval," arXiv:1308.4275 [cs], Aug. 2013.
[27] J. Yang and J. Leskovec, "Defining and evaluating network communities based on ground-truth," in 2012 IEEE 12th International Conference on Data Mining (ICDM), Dec. 2012.
[28] S. Fortunato, "Community detection in graphs," Physics Reports, vol. 486, no. 3-5, Feb. 2010.
Winner-Take-All Autoencoders

Alireza Makhzani, Brendan Frey
University of Toronto
makhzani, frey@psi.toronto.edu

Abstract

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders, which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder, which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where, in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.

1 Introduction

Recently, supervised learning has been developed and used successfully to produce representations that have enabled leaps forward in classification accuracy for several tasks [1]. However, the question that has remained unanswered is whether it is possible to learn equally "powerful" representations from unlabeled data without any supervision. It is still widely recognized that unsupervised learning algorithms that can extract useful features are needed for solving problems with limited label information. In this work, we exploit sparsity as a generic prior on the representations for unsupervised feature learning. We first introduce fully-connected winner-take-all autoencoders, which learn to do sparse coding by directly enforcing a winner-take-all lifetime sparsity constraint.
We then introduce convolutional winner-take-all autoencoders that learn to do shift-invariant/convolutional sparse coding by directly enforcing winner-take-all spatial and lifetime sparsity constraints. 2 Fully-Connected Winner-Take-All Autoencoders Training sparse autoencoders has been well studied in the literature. For example, in [2], a "lifetime sparsity" penalty function proportional to the KL divergence between the hidden unit marginals (ρ̂) and the target sparsity probability (ρ) is added to the cost function: λ KL(ρ ∥ ρ̂). A major drawback of this approach is that it only works for certain target sparsities, and it is often very difficult to find the right λ parameter that results in a properly trained sparse autoencoder. Also, the KL divergence was originally proposed for sigmoidal autoencoders, and it is not clear how it can be applied to ReLU autoencoders, where ρ̂ could be larger than one (in which case the KL divergence cannot be evaluated). In this paper, we propose Fully-Connected Winner-Take-All (FC-WTA) autoencoders to address these concerns. FC-WTA autoencoders can aim for any target sparsity rate, train very fast (marginally slower than a standard autoencoder), have no hyper-parameter to be tuned (except the target sparsity rate) and efficiently train all the dictionary atoms even when very aggressive sparsity rates (e.g., 1%) are enforced. Figure 1: Learnt dictionary (decoder) of FC-WTA with 1000 hidden units trained on MNIST: (a) MNIST, 10%; (b) MNIST, 5%; (c) MNIST, 2%. Sparse coding algorithms typically comprise two steps: a highly non-linear sparse encoding operation that finds the "right" atoms in the dictionary, and a linear decoding stage that reconstructs the input with the selected atoms and updates the dictionary. The FC-WTA autoencoder is a non-symmetric autoencoder where the encoding stage is typically a stack of several ReLU layers and the decoder is just a linear layer.
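For reference, the KL lifetime-sparsity penalty of [2] discussed above can be written in a few lines. The following is a NumPy sketch; the function name and the λ default are ours, and sigmoid activations bounded in (0, 1) are assumed:

```python
import numpy as np

def kl_lifetime_penalty(h, rho=0.05, lam=1e-3):
    """lam * sum_j KL(rho || rho_hat_j), where rho_hat_j is the mean
    activation of hidden unit j over the mini-batch h (batch, n_hidden)."""
    rho_hat = np.clip(h.mean(axis=0), 1e-8, 1 - 1e-8)   # avoid log(0)
    kl = rho * np.log(rho / rho_hat) \
         + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return lam * kl.sum()
```

Note that this only makes sense for activations bounded in (0, 1); for ReLU units ρ̂ can exceed one and the expression is undefined, which is exactly the limitation noted above.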
In the feedforward phase, after computing the hidden codes of the last layer of the encoder, rather than reconstructing the input from all of the hidden units, for each hidden unit we impose a lifetime sparsity by keeping the k percent largest activations of that hidden unit across the mini-batch samples and setting the rest of the activations of that hidden unit to zero. In the backpropagation phase, we only backpropagate the error through the k percent non-zero activations. In other words, we are using the mini-batch statistics to approximate the statistics of the activation of a particular hidden unit across all the samples, and finding a hard threshold value for which we can achieve a k% lifetime sparsity rate. In this setting, the highly nonlinear encoder of the network (ReLUs followed by top-k sparsity) learns to do sparse encoding, and the decoder of the network reconstructs the input linearly. At test time, we turn off the sparsity constraint and the output of the deep ReLU network will be the final representation of the input. In order to train a stacked FC-WTA autoencoder, we fix the weights and train another FC-WTA autoencoder on top of the fixed representation of the previous network. The learnt dictionaries of FC-WTA autoencoders trained on the MNIST, CIFAR-10 and Toronto Face datasets are visualized in Fig. 1 and Fig. 2. For large sparsity levels, the algorithm tends to learn very local features that are too primitive to be used for classification (Fig. 1a). As we decrease the sparsity level, the network learns more useful features (longer digit strokes) and achieves better classification (Fig. 1b). Nevertheless, forcing too much sparsity results in features that are too global and do not factor the input into parts (Fig. 1c). Section 4.1 reports the classification results. Winner-Take-All RBMs. Besides autoencoders, WTA activations can also be used in Restricted Boltzmann Machines (RBMs) to learn sparse representations.
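Before turning to RBMs, the FC-WTA feedforward sparsification step described above can be sketched concretely. This is a NumPy illustration of the top-k%-per-hidden-unit rule; the function name and shapes are ours, not from the paper:

```python
import numpy as np

def lifetime_sparsity(h, k_percent):
    """Keep, for each hidden unit (column), only the k% largest
    activations across the mini-batch (rows); zero out the rest."""
    batch_size, _ = h.shape
    k = max(1, int(np.ceil(batch_size * k_percent / 100.0)))
    idx = np.argsort(h, axis=0)[-k:, :]      # top-k rows per column
    mask = np.zeros_like(h)
    np.put_along_axis(mask, idx, 1.0, axis=0)
    return h * mask

rng = np.random.default_rng(0)
h = rng.random((100, 8))                     # 100 samples, 8 hidden units
sparse_h = lifetime_sparsity(h, k_percent=5)
# each column now has exactly 5 non-zero entries (5% of 100)
```

In backpropagation only the surviving (non-zero) entries carry gradient, which is exactly what multiplying by the 0/1 mask achieves.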
Suppose h and v denote the hidden and visible units of the RBM. For training WTA-RBMs, in the positive phase of contrastive divergence, instead of sampling from P(h_i|v), we first keep the k% largest P(h_i|v) for each h_i across the mini-batch dimension and set the rest of the P(h_i|v) values to zero, and then sample h_i according to the sparsified P(h_i|v). Filters of a WTA-RBM trained on MNIST are visualized in Fig. 3. We can see that WTA-RBMs learn longer digit strokes on MNIST, which, as will be shown in Section 4.1, improves the classification rate. Note that the sparsity rate of WTA-RBMs (e.g., 30%) should not be as aggressive as that of WTA autoencoders (e.g., 5%), since RBMs are already being regularized by having binary hidden states. Figure 2: Dictionaries (decoder) of FC-WTA autoencoder with 256 hidden units and sparsity of 5%: (a) Toronto Face Dataset (48 × 48); (b) CIFAR-10 patches (11 × 11). Figure 3: Features learned on MNIST by 256 hidden unit RBMs: (a) standard RBM; (b) WTA-RBM (sparsity of 30%). 3 Convolutional Winner-Take-All Autoencoders There are several problems with applying conventional sparse coding methods on large images. First, it is not practical to directly apply a fully-connected sparse coding algorithm on high-resolution (e.g., 256 × 256) images. Second, even if we could do that, we would learn a very redundant dictionary whose atoms are just shifted copies of each other. For example, in Fig. 2a, the FC-WTA autoencoder has allocated different filters for the same patterns (i.e., mouths/noses/glasses/face borders) occurring at different locations. One way to address this problem is to extract random image patches from input images and then train an unsupervised learning algorithm on these patches in isolation [3]. Once training is complete, the filters can be used in a convolutional fashion to obtain representations of images.
As discussed in [3, 4], the main problem with this approach is that if the receptive field is small, this method will not capture relevant features (imagine the extreme of 1 × 1 patches). Increasing the receptive field size is problematic, because then a very large number of features are needed to account for all the position-specific variations within the receptive field. For example, we see that in Fig. 2b, the FC-WTA autoencoder allocates different filters to represent the same horizontal edge appearing at different locations within the receptive field. As a result, the learnt features are essentially shifted versions of each other, which results in redundancy between filters. Unsupervised methods that make use of convolutional architectures can be used to address this problem, including convolutional RBMs [5], convolutional DBNs [6, 5], deconvolutional networks [7] and convolutional predictive sparse decomposition (PSD) [4, 8]. These methods learn features from the entire image in a convolutional fashion. In this setting, the filters can focus on learning the shapes (i.e., "what"), because the location information (i.e., "where") is encoded into the feature maps, and thus the redundancy among the filters is reduced. In this section, we propose Convolutional Winner-Take-All (CONV-WTA) autoencoders that learn to do shift-invariant/convolutional sparse coding by directly enforcing winner-take-all spatial and lifetime sparsity constraints. Our work is similar in spirit to deconvolutional networks [7] and convolutional PSD [4, 8]; whereas the approach in that work is to break apart the recognition pathway and the data generation pathway, but learn them so that they are consistent, we describe a technique for directly learning a sparse convolutional autoencoder. A shallow convolutional autoencoder maps an input vector to a set of feature maps in a convolutional fashion.
We assume that the boundaries of the input image are zero-padded, so that each feature map has the same size as the input. The hidden representation is then mapped linearly to the output using a deconvolution operation (Appendix A.1). The parameters are optimized to minimize the mean squared error. A non-regularized convolutional autoencoder learns useless delta function filters that copy the input image to the feature maps and copy back the feature maps to the output. Interestingly, we have observed that even in the presence of denoising [9]/dropout [10] regularizations, convolutional autoencoders still learn useless delta functions. Fig. 4a depicts the filters of a convolutional autoencoder with 16 maps, 20% input and 50% hidden unit dropout trained on the Street View House Numbers dataset [11]. We see that the 16 learnt delta functions make 16 copies of the input pixels, so even if half of the hidden units get dropped during training, the network can still rely on the non-dropped copies to reconstruct the input. This highlights the need for new and more aggressive regularization techniques for convolutional autoencoders. The proposed architecture for the CONV-WTA autoencoder is depicted in Fig. 4b. Figure 4: (a) Filters and feature maps of a denoising/dropout convolutional autoencoder, which learns useless delta functions. (b) Proposed architecture for CONV-WTA autoencoder with spatial sparsity (128conv5-128conv5-128deconv11). The CONV-WTA autoencoder is a non-symmetric autoencoder where the encoder typically consists of a stack of several ReLU convolutional layers (e.g., 5 × 5 filters) and the decoder is a linear deconvolutional layer of larger size (e.g., 11 × 11 filters). We chose to use a deep encoder with smaller filters (e.g., 5 × 5) instead of a shallow one with larger filters (e.g., 11 × 11), because the former introduces more
non-linearity and regularizes the network by forcing it to have a decomposition over large receptive fields through smaller filters. The CONV-WTA autoencoder is trained under two winner-take-all sparsity constraints: spatial sparsity and lifetime sparsity. 3.1 Spatial Sparsity In the feedforward phase, after computing the last feature maps of the encoder, rather than reconstructing the input from all of the hidden units of the feature maps, we identify the single largest hidden activity within each feature map, and set the rest of the activities as well as their derivatives to zero. This results in a sparse representation whose sparsity level is the number of feature maps. The decoder then reconstructs the output using only the active hidden units in the feature maps, and the reconstruction error is only backpropagated through these hidden units as well. Consistent with other representation learning approaches such as triangle k-means [3] and deconvolutional networks [7, 12], we observed that using a softer sparsity constraint at test time results in better classification performance. So, in the CONV-WTA autoencoder, in order to find the final representation of the input image, we simply turn off the sparsity regularizer and use ReLU convolutions to compute the last-layer feature maps of the encoder. After that, we apply max-pooling (e.g., over 4 × 4 regions) on these feature maps and use this representation for classification tasks or in training stacked CONV-WTA autoencoders, as will be discussed in Section 3.3. Fig. 5 shows a CONV-WTA autoencoder that was trained on MNIST. Figure 5: The CONV-WTA autoencoder with 16 first layer filters and 128 second layer filters trained on MNIST: (a) Input image. (b) Learnt dictionary (deconvolution filters).
(c) 16 feature maps while training (spatial sparsity applied). (d) 16 feature maps after training (spatial sparsity turned off). (e) 16 feature maps of the first layer after applying local max-pooling. (f) 48 out of 128 feature maps of the second layer after turning off the sparsity and applying local max-pooling (final representation). Figure 6: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on MNIST (64conv5-64conv5-64conv5-64deconv11): (a) spatial sparsity only; (b) spatial & lifetime sparsity 20%; (c) spatial & lifetime sparsity 5%. 3.2 Lifetime Sparsity Although spatial sparsity is very effective in regularizing the autoencoder, it requires all the dictionary atoms to contribute to the reconstruction of every image. We can further increase the sparsity by exploiting the winner-take-all lifetime sparsity as follows. Suppose we have 128 feature maps and the mini-batch size is 100. After applying spatial sparsity, for each filter we will have 100 "winner" hidden units corresponding to the 100 mini-batch images. During the feedforward phase, for each filter, we only keep the k% largest of these 100 values and set the rest of the activations to zero. Note that despite this aggressive sparsity, every filter is forced to get updated upon visiting every mini-batch, which is crucial for avoiding the dead filter problem that often occurs in sparse coding. Fig. 6 and Fig. 7 show the effect of the lifetime sparsity on the dictionaries trained on MNIST and the Toronto Face dataset. We see that, similar to the FC-WTA autoencoders, by tuning the lifetime sparsity of CONV-WTA autoencoders, we can aim for different sparsity rates. If no lifetime sparsity is enforced, we learn local filters that contribute to every training point (Fig. 6a and 7a). As we increase the lifetime sparsity, we can learn rare but useful features that result in better classification (Fig. 6b).
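The two winner-take-all constraints of Sections 3.1 and 3.2 can be sketched together in NumPy. Function names are ours; the sketch assumes continuous activations with no ties:

```python
import numpy as np

def spatial_sparsity(fmaps):
    """Section 3.1 (sketch): keep only the single largest activation
    within each feature map; fmaps has shape (batch, n_maps, H, W)."""
    b, m, h, w = fmaps.shape
    flat = fmaps.reshape(b, m, h * w)
    winners = flat.max(axis=2, keepdims=True)          # (b, m, 1)
    mask = (flat == winners).astype(flat.dtype)        # assumes no ties
    return (flat * mask).reshape(b, m, h, w), winners[:, :, 0]

def conv_lifetime_sparsity(winners, k_percent):
    """Section 3.2 (sketch): for each map, keep only the k% largest
    winner activations across the mini-batch; returns a (batch, n_maps)
    0/1 mask to apply to the spatially sparse feature maps."""
    b, m = winners.shape
    k = max(1, int(np.ceil(b * k_percent / 100.0)))
    idx = np.argsort(winners, axis=0)[-k:, :]
    mask = np.zeros_like(winners)
    np.put_along_axis(mask, idx, 1.0, axis=0)
    return mask

rng = np.random.default_rng(0)
fmaps = rng.random((100, 8, 6, 6))                 # mini-batch of 100
sparse_maps, winners = spatial_sparsity(fmaps)     # 1 winner per map
batch_mask = conv_lifetime_sparsity(winners, 20)   # keep 20% of winners
```

As in the fully-connected case, the reconstruction error is only backpropagated through the surviving winner units.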
Nevertheless, forcing too much lifetime sparsity will result in features that are too diverse and rare and do not properly factor the input into parts (Fig. 6c and 7b). Figure 7: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on the Toronto Face dataset (64conv7-64conv7-64conv7-64deconv15): (a) spatial sparsity only; (b) spatial and lifetime sparsity of 10%. 3.3 Stacked CONV-WTA Autoencoders The CONV-WTA autoencoder can be used as a building block to form a hierarchy. In order to train the hierarchical model, we first train a CONV-WTA autoencoder on the input images. Then we pass all the training examples through the network and obtain their representations (last layer of the encoder after turning off sparsity and applying local max-pooling). Now we treat these representations as a new dataset and train another CONV-WTA autoencoder to obtain the stacked representations. Fig. 5(f) shows the deep feature maps of a stacked CONV-WTA that was trained on MNIST. 3.4 Scaling CONV-WTA Autoencoders to Large Images The goal of convolutional sparse coding is to learn shift-invariant dictionary atoms and encoding filters. Once the filters are learnt, they can be applied convolutionally to any image of any size, and produce a spatial map corresponding to different locations of the input. We can use this idea to efficiently train CONV-WTA autoencoders on datasets containing large images. Suppose we want to train an AlexNet [1] architecture in an unsupervised fashion on ImageNet, ILSVRC-2012 (224 × 224). Figure 8: Learnt dictionary (deconvolution filters) of CONV-WTA autoencoder trained on ImageNet 48 × 48 whitened patches (64conv5-64conv5-64conv5-64deconv11): (a) spatial sparsity only; (b) spatial and lifetime sparsity of 10%. In order to learn the first layer 11 × 11 shift-invariant filters, we can extract medium-size image patches of size 48 × 48 and train a CONV-WTA autoencoder with 64 dictionary atoms of size 11 on these patches.
This will result in 64 shift-invariant filters of size 11 × 11 that can efficiently capture the statistics of 48 × 48 patches. Once the filters are learnt, we can apply them in a convolutional fashion with a stride of 4 to the entire images, and after max-pooling we will have a 64 × 27 × 27 representation of the images. Now we can train another CONV-WTA autoencoder on top of these feature maps to capture the statistics of a larger receptive field at different locations of the input image. This process could be repeated for multiple layers. Fig. 8 shows the dictionary learnt on ImageNet using this approach. We can see that by imposing lifetime sparsity, we could learn very diverse filters such as corner, circular and blob detectors. 4 Experiments In all the experiments of this section, we evaluate the quality of unsupervised features of WTA autoencoders by training a naive linear classifier (i.e., SVM) on top of them. We did not fine-tune the filters in any of the experiments. The implementation details of all the experiments are provided in Appendix A (in the supplementary materials). An IPython demo for reproducing important results of this paper is publicly available at http://www.comm.utoronto.ca/~makhzani/. 4.1 Winner-Take-All Autoencoders on MNIST The MNIST dataset has 60K training points and 10K test points. Table 1 compares the performance of FC-WTA autoencoders and WTA-RBMs with other permutation-invariant architectures. Table 2a compares the performance of the CONV-WTA autoencoder with other convolutional architectures. In these experiments, we have used all the available training labels (N = 60000 points) to train a linear SVM on top of the unsupervised features. An advantage of unsupervised learning algorithms is the ability to use them in semi-supervised scenarios where labeled data is limited. Table 2b shows the semi-supervised performance of a CONV-WTA where we have assumed only N labels are available.
In this case, the unsupervised features are still trained on the whole dataset (60K points), but the SVM is trained only on the N labeled points, where N varies from 300 to 60K. We compare this with the performance of a supervised deep convnet (CNN) [17] trained only on the N labeled training points. We can see supervised deep learning techniques fail to learn good representations when labeled data is limited, whereas our WTA algorithm can extract useful features from the unlabeled data and achieve a better classification. We also compare our method with some of the best semi-supervised learning results recently obtained by convolutional kernel networks (CKN) [16] and convolutional scattering networks (SC) [15].

Table 1: Classification performance of FC-WTA autoencoder features + SVM on MNIST.
  Method                                                                          Error Rate
  Shallow Denoising/Dropout Autoencoder (20% input and 50% hidden units dropout)  1.60%
  Stacked Denoising Autoencoder (3 layers) [9]                                    1.28%
  Deep Boltzmann Machines [13]                                                    0.95%
  k-Sparse Autoencoder [14]                                                       1.35%
  Shallow FC-WTA Autoencoder, 2000 units, 5% sparsity                             1.20%
  Stacked FC-WTA Autoencoder, 5% and 2% sparsity                                  1.11%
  Restricted Boltzmann Machines                                                   1.60%
  Winner-Take-All Restricted Boltzmann Machines (30% sparsity)                    1.38%

Table 2: Classification performance of CONV-WTA autoencoder trained on MNIST.
(a) Unsupervised features + SVM trained on N = 60000 labels (no fine-tuning):
  Deep Deconvolutional Network [7, 12]    0.84%
  Convolutional Deep Belief Network [5]   0.82%
  Scattering Convolution Network [15]     0.43%
  Convolutional Kernel Network [16]       0.39%
  CONV-WTA Autoencoder, 16 maps           1.02%
  CONV-WTA Autoencoder, 128 maps          0.64%
  Stacked CONV-WTA, 128 & 2048 maps       0.48%
(b) Unsupervised features + SVM trained on few labels N (semi-supervised):
  N      CNN [17]   CKN [16]   SC [15]   CONV-WTA
  300    7.18%      4.15%      4.70%     3.47%
  600    5.28%      -          -         2.37%
  1K     3.21%      2.05%      2.30%     1.92%
  2K     2.53%      1.51%      1.30%     1.45%
  5K     1.52%      1.21%      1.03%     1.07%
  10K    0.85%      0.88%      0.88%     0.91%
  60K    0.53%      0.39%      0.43%     0.48%
We see CONV-WTA outperforms both these methods when very few labels are available (N < 1K). 4.2 CONV-WTA Autoencoder on Street View House Numbers The SVHN dataset has about 600K training points and 26K test points. Table 3 reports the classification results of the CONV-WTA autoencoder on this dataset. We first trained a shallow and a stacked CONV-WTA on all 600K training cases to learn the unsupervised features, and then performed two sets of experiments. In the first experiment, we used all the N=600K available labels to train an SVM on top of the CONV-WTA features, and compared the result with convolutional k-means [11]. We see that the stacked CONV-WTA achieves a dramatic improvement over the shallow CONV-WTA as well as k-means. In the second experiment, we trained an SVM by using only N = 1000 labeled data points and compared the result with deep variational autoencoders [18] trained in the same semi-supervised fashion. Fig. 9 shows the learnt dictionary of CONV-WTA on this dataset.

Table 3: CONV-WTA unsupervised features + SVM trained on N labeled points of the SVHN dataset.
  Method                                                           Accuracy
  Convolutional Triangle k-means [11]                              90.6%
  CONV-WTA Autoencoder, 256 maps (N=600K)                          88.5%
  Stacked CONV-WTA Autoencoder, 256 and 1024 maps (N=600K)         93.1%
  Deep Variational Autoencoders (non-convolutional) [18] (N=1000)  63.9%
  Stacked CONV-WTA Autoencoder, 256 and 1024 maps (N=1000)         76.2%
  Supervised Maxout Network [19] (N=600K)                          97.5%

Figure 9: CONV-WTA autoencoder trained on the Street View House Numbers (SVHN) dataset: (a) contrast-normalized SVHN; (b) learnt dictionary (64conv5-64conv5-64conv5-64deconv11). 4.3 CONV-WTA Autoencoder on CIFAR-10 Fig. 10a reports the classification results of CONV-WTA on CIFAR-10. We see that when a small number of feature maps (< 256) is used, considerable improvements over k-means can be achieved. This is because our method can learn a shift-invariant dictionary as opposed to the redundant dictionaries learnt by patch-based methods such as k-means.
In the largest deep network that we trained, we used 256, 1024 and 4096 maps and achieved a classification rate of 80.1% without using fine-tuning, model averaging or data augmentation. Fig. 10b shows the learnt dictionary on the CIFAR-10 dataset. We can see that the network has learnt diverse shift-invariant filters such as point/corner detectors, as opposed to Fig. 2b, which shows the position-specific filters of patch-based methods.

Figure 10: CONV-WTA autoencoder trained on the CIFAR-10 dataset: (a) unsupervised features + SVM (without fine-tuning); (b) learnt dictionary (deconv filters, 64conv5-64conv5-64conv5-64deconv7).
  Method                                                 Accuracy
  Shallow Convolutional Triangle k-means (64 maps) [3]   62.3%
  Shallow CONV-WTA Autoencoder (64 maps)                 68.9%
  Shallow Convolutional Triangle k-means (256 maps) [3]  70.2%
  Shallow CONV-WTA Autoencoder (256 maps)                72.3%
  Shallow Convolutional Triangle k-means (4000 maps) [3] 79.6%
  Deep Triangle k-means (1600, 3200, 3200 maps) [20]     82.0%
  Convolutional Deep Belief Net (2 layers) [6]           78.9%
  Exemplar CNN (300x Data Augmentation) [21]             82.0%
  NOMP (3200, 6400, 6400 maps + Averaging 7 Models) [22] 82.9%
  Stacked CONV-WTA (256, 1024 maps)                      77.9%
  Stacked CONV-WTA (256, 1024, 4096 maps)                80.1%
  Supervised Maxout Network [19]                         88.3%

5 Discussion Relationship of FC-WTA to k-sparse autoencoders. k-sparse autoencoders impose sparsity across different channels (population sparsity), whereas the FC-WTA autoencoder imposes sparsity across training examples (lifetime sparsity). When aiming for low sparsity levels, k-sparse autoencoders use a scheduling technique to avoid the dead dictionary atom problem. WTA autoencoders, however, do not have this problem, since all the hidden units get updated upon visiting every mini-batch no matter how aggressive the sparsity rate is (no scheduling required). As a result, we can train larger networks and achieve better classification rates. Relationship of CONV-WTA to deconvolutional networks and convolutional PSD.
Deconvolutional networks [7, 12] are top-down models with no direct link from the image to the feature maps. The inference of the sparse maps requires solving the iterative ISTA algorithm, which is costly. Convolutional PSD [4] addresses this problem by training a parameterized encoder separately to explicitly predict the sparse codes using a soft thresholding operator. Deconvolutional networks and convolutional PSD can be viewed as the generative decoder and encoder paths of a convolutional autoencoder. Our contribution is to propose a specific winner-take-all approach for training a convolutional autoencoder, in which both paths are trained jointly using direct backpropagation, yielding an algorithm that is much faster and easier to implement, and that can train much larger networks. Relationship to maxout networks. Maxout networks [19] take the max across different channels, whereas our method takes the max across the space and mini-batch dimensions. Also, the winner-take-all feature maps retain the location information of the "winners" within each feature map, and different locations have different connectivity to the subsequent layers, whereas the maxout activity is passed to the next layer using weights that are the same regardless of which unit gave the maximum. 6 Conclusion We proposed the winner-take-all spatial and lifetime sparsity methods to train autoencoders that learn to do fully-connected and convolutional sparse coding. We observed that CONV-WTA autoencoders learn shift-invariant and diverse dictionary atoms, as opposed to the position-specific Gabor-like atoms that are typically learnt by conventional sparse coding methods. Unlike related approaches, such as deconvolutional networks and convolutional PSD, our method jointly trains the encoder and decoder paths by direct back-propagation, and does not require an iterative EM-like optimization technique during training.
We described how our method can be scaled to large datasets such as ImageNet and showed the necessity of the deep architecture to achieve better results. We performed experiments on the MNIST, SVHN and CIFAR-10 datasets and showed that the classification rates of winner-take-all autoencoders are competitive with the state-of-the-art. Acknowledgments We would like to thank Ruslan Salakhutdinov and Andrew Delong for the valuable comments. We also acknowledge the support of NVIDIA with the donation of the GPUs used for this research. References [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in NIPS, vol. 1, p. 4, 2012. [2] A. Ng, "Sparse autoencoder," CS294A Lecture notes, vol. 72, 2011. [3] A. Coates, A. Y. Ng, and H. Lee, "An analysis of single-layer networks in unsupervised feature learning," in International Conference on Artificial Intelligence and Statistics, 2011. [4] K. Kavukcuoglu, P. Sermanet, Y.-L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun, "Learning convolutional feature hierarchies for visual recognition," in NIPS, vol. 1, p. 5, 2010. [5] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609–616, ACM, 2009. [6] A. Krizhevsky, "Convolutional deep belief networks on cifar-10," Unpublished, 2010. [7] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535, IEEE, 2010. [8] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun, "Pedestrian detection with unsupervised multi-stage feature learning," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 3626–3633, IEEE, 2013. [9] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A.
Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” The Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010. [10] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv preprint arXiv:1207.0580, 2012. [11] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, p. 5, Granada, Spain, 2011. [12] M. D. Zeiler and R. Fergus, “Differentiable pooling for hierarchical feature learning,” arXiv preprint arXiv:1207.0151, 2012. [13] R. Salakhutdinov and G. E. Hinton, “Deep boltzmann machines,” in International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009. [14] A. Makhzani and B. Frey, “k-sparse autoencoders,” International Conference on Learning Representations, ICLR, 2014. [15] J. Bruna and S. Mallat, “Invariant scattering convolution networks,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 8, pp. 1872–1886, 2013. [16] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid, “Convolutional kernel networks,” in Advances in Neural Information Processing Systems, pp. 2627–2635, 2014. [17] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. Lecun, “Unsupervised learning of invariant feature hierarchies with applications to object recognition,” in Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pp. 1–8, IEEE, 2007. [18] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi-supervised learning with deep generative models,” in Advances in Neural Information Processing Systems, pp. 3581–3589, 2014. [19] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, “Maxout networks,” ICML, 2013. [20] A. Coates and A. Y. 
Ng, "Selecting receptive fields in deep networks," in NIPS, 2011. [21] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox, "Discriminative unsupervised feature learning with convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 766–774, 2014. [22] T.-H. Lin and H. Kung, "Stable and efficient representation learning with nonnegativity constraints," in Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1323–1331, 2014.
Robust Feature-Sample Linear Discriminant Analysis for Brain Disorders Diagnosis Ehsan Adeli-Mosabbeb, Kim-Han Thung, Le An, Feng Shi, Dinggang Shen, for the ADNI∗ Department of Radiology and BRIC University of North Carolina at Chapel Hill, NC, 27599, USA {eadeli,khthung,le_an,fengshi,dgshen}@med.unc.edu Abstract A wide spectrum of discriminative methods is increasingly used in diverse applications for classification or regression tasks. However, many existing discriminative methods assume that the input data is nearly noise-free, which limits their applicability to real-world problems. Particularly for disease diagnosis, the data acquired by neuroimaging devices are always prone to different sources of noise. Robust discriminative models are somewhat scarce, and only a few attempts have been made to make them robust against noise or outliers. These methods focus on detecting either the sample-outliers or the feature-noises. Moreover, they usually use unsupervised de-noising procedures, or separately de-noise the training and the testing data. All these factors may induce biases in the learning process, and thus limit its performance. In this paper, we propose a classification method based on the least-squares formulation of linear discriminant analysis, which simultaneously detects the sample-outliers and feature-noises. The proposed method operates under a semi-supervised setting, in which both labeled training and unlabeled testing data are incorporated to form the intrinsic geometry of the sample space. Therefore, the violating samples or feature values are identified as sample-outliers or feature-noises, respectively. We test our algorithm on one synthetic and two brain neurodegenerative databases (particularly for Parkinson's disease and Alzheimer's disease). The results demonstrate that our method outperforms all baseline and state-of-the-art methods, in terms of both accuracy and the area under the ROC curve.
1 Introduction Discriminative methods pursue a direct mapping from the input to the output space for a classification or a regression task. As an example, linear discriminant analysis (LDA) aims to find the mapping that reduces the input dimensionality, while preserving the most class discriminatory information. Discriminative methods usually achieve good classification results compared to generative models when there is a sufficient number of training samples. But they are limited when there is only a small amount of labeled data, as well as when the data is noisy. Various efforts have been made to add robustness to these methods. For instance, [17] and [9] proposed robust Fisher/linear discriminant analysis methods, and [19] introduced a worst-case LDA, by minimizing the upper bound of the LDA cost function. These methods are all robust to sample-outliers. On the other hand, some methods were proposed to deal with the intra-sample-outliers (or feature-noises), such as [12, 15]. ∗Parts of the data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.ucla.edu). The investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this paper. A complete listing of ADNI investigators can be found at: http://adni.loni.ucla.edu/wp-content/uploads/howtoapply/ADNIAcknowledgementList.pdf. In many previous works, de-noising of the training and the testing data is conducted separately. This might induce a bias or inconsistency in the whole learning process. Besides, for many real-world applications, it is a cumbersome task to acquire enough training samples to perform a proper discriminative analysis. Hence, we propose to take advantage of the available unlabeled testing data to build a more robust classifier.
To this end, we introduce a semi-supervised discriminative classification model which, unlike previous works, jointly estimates the noise model (both sample-outliers and feature-noises) on the whole labeled training and unlabeled testing data, and simultaneously builds a discriminative model upon the de-noised training data. In this paper, we introduce a novel classification model based on LDA, which is robust against both sample-outliers and feature-noises, and hence is called robust feature-sample linear discriminant analysis (RFS-LDA). LDA finds the mapping between the sample space and the label space through a linear transformation matrix, maximizing the so-called Fisher discriminant ratio [17]. In practice, the major drawback of the original LDA is the small-sample-size problem, which arises when the number of available training samples is less than the dimensionality of the feature space [18]. A reformulation of LDA based on a reduced-rank least-squares problem (known as LS-LDA) [10] tackles this problem. LS-LDA finds the mapping $\beta \in \mathbb{R}^{l \times d}$ by solving the following problem¹:

$$\min_{\beta} \|H(Y_{tr} - \beta X_{tr})\|_F^2, \qquad (1)$$

where $Y_{tr} \in \mathbb{R}^{l \times N_{tr}}$ is a binary class-label indicator matrix for $l$ different classes (or labels), and $X_{tr} \in \mathbb{R}^{d \times N_{tr}}$ is the matrix containing the $N_{tr}$ $d$-dimensional training samples. $H$ is a normalization factor, defined as $H = (Y_{tr} Y_{tr}^\top)^{-1/2}$, that compensates for the different number of samples in each class [10]. As a result, the mapping $\beta$ is a reduced-rank transformation matrix [10, 15], which can be used to project a test sample $x_{tst} \in \mathbb{R}^{d \times 1}$ onto an $l$-dimensional space. The class label can then be determined using a simple $k$-NN strategy. To make LDA robust against noisy data, Fidler et al. [12] proposed to construct a basis which contains complete discriminative information for classification. 
In the testing phase, the estimated basis identifies the outliers in samples (images in their case) and is then used to calculate the coefficients via a subsampling approach. On the other hand, Huang et al. [15] proposed a general formulation for robust regression (RR) and classification (robust LDA, or RLDA). In the training stage, they de-noise the feature values using a strategy similar to robust principal component analysis (RPCA) [7] and build the above LS-LDA model on the de-noised data. In the testing stage, they de-noise the data by computing a locally compact representation of the testing samples from the de-noised training data. This separate de-noising procedure cannot effectively capture the underlying geometry of the sample space for de-noising the data. Huang et al. [15] only account for feature-noise, by imposing a sparse noise-model constraint on the feature matrix. On the other hand, the data-fitting term in (1) is vulnerable to large sample-outliers. Recently, it has been found in robust statistics that $\ell_1$ loss functions produce more reliable estimates [2] than $\ell_2$ least-squares fitting functions. This has been adopted in many applications, including robust face recognition [28] and robust dictionary learning [22]. Reformulating the objective in (1) using this idea yields the problem:

$$\min_{\beta} \|H(Y_{tr} - \beta X_{tr})\|_1. \qquad (2)$$

We incorporate this fitting function in our formulation to deal with sample-outliers by iteratively re-weighting each individual sample, while simultaneously de-noising the data from feature-noises. This is done in a semi-supervised setting, to take advantage of all labeled and unlabeled data and build the structure of the sample space more robustly. Semi-supervised learning [8, 34] has long been of great interest in different fields, because it can make use of unlabeled or poorly labeled data. 
For instance, Joulin and Bach [16] introduced a convex relaxation and used the model in different semi-supervised learning scenarios. In another work, Cai et al. [5] proposed a semi-supervised discriminant analysis, where the separation between different classes is maximized using the labeled data points, while the unlabeled data points estimate the structure of the data. In contrast, we incorporate the unlabeled testing data to form the intrinsic geometry of the sample space and de-noise the data, whilst building the discriminative model. ¹Bold capital letters denote matrices (e.g., D). All non-bold letters denote scalar variables. $d_{ij}$ is the scalar in row $i$ and column $j$ of $D$. $\langle d_1, d_2 \rangle$ denotes the inner product between $d_1$ and $d_2$. $\|d\|_2^2$ and $\|d\|_1$ represent the squared Euclidean norm and the $\ell_1$ norm of $d$, respectively. $\|D\|_F^2 = \mathrm{tr}(D^\top D) = \sum_{ij} d_{ij}^2$ and $\|D\|_*$ designate the squared Frobenius norm and the nuclear norm (sum of singular values) of $D$, respectively.
[Figure 1 depicts the block structure of the matrices $X = [X_{tr}\ X_{tst}] \in \mathbb{R}^{d \times N}$, $D = [D_{tr}\ D_{tst}] \in \mathbb{R}^{d \times N}$, $E \in \mathbb{R}^{d \times N}$, $Y_{tr} \in \mathbb{R}^{l \times N_{tr}}$, and the mapping $\beta$.]
Figure 1: Outline of the proposed method: The original data matrix, X, is composed of both labeled training and unlabeled testing data. Our method decomposes this matrix into a de-noised data matrix, D, and an error matrix, E, to account for feature-noises. 
Simultaneously, we learn a mapping from the de-noised training samples in D ($D_{tr}$) through a robust $\ell_1$ fitting function, dealing with the sample-outliers. The same learned mapping, applied to the testing data $D_{tst}$, yields the test labels. We apply our method to the diagnosis of neurodegenerative brain disorders. The term neurodegenerative disease is an umbrella term for debilitating and incurable conditions related to progressive degeneration or death of the cells of the brain’s nervous system. Although neurodegenerative diseases manifest with diverse pathological features, the underlying cellular-level processes are similar. For instance, Parkinson’s disease (PD) mainly affects the basal ganglia region and the substantia nigra sub-region of the brain, leading to a decline in the generation of a chemical messenger, dopamine. Lack of dopamine yields a loss of the ability to control body movements, along with some non-motor problems (e.g., depression, anxiety) [35]. In Alzheimer’s disease (AD), deposits of tiny protein plaques lead to brain damage and progressive loss of memory [26]. These diseases are often incurable, and thus early diagnosis and treatment are crucial to slow down the progression of the disease in its initial stages. In this study, we use two popular databases: PPMI and ADNI. The former aims at investigating PD and its related disorders, while the latter is designed for diagnosing AD and its prodromal stage, known as mild cognitive impairment (MCI). Contributions: The contributions of this paper are multi-fold: (1) We propose an approach that deals with sample-outliers and feature-noises simultaneously, and builds a robust discriminative classification model. The sample-outliers are penalized through an $\ell_1$ fitting function, by re-weighting the samples based on their prediction power, while discarding the feature-noises. 
(2) Our proposed model operates in a semi-supervised setting, where the whole data (labeled training and unlabeled testing samples) is incorporated to build the intrinsic geometry of the sample space, which leads to better de-noising of the data. (3) We further select the most discriminative features for the learning process by regularizing the weight matrix with an $\ell_1$ norm. This is of particular interest for neurodegenerative disease diagnosis, where features are extracted from many different regions of the brain, but not all regions are associated with a certain disease. The most discriminative brain regions, i.e., those that most affect the disease, are thus identified, leading to a more reliable diagnosis model. 2 Robust Feature-Sample Linear Discriminant Analysis (RFS-LDA) Let us assume we have $N_{tr}$ training and $N_{tst}$ testing samples, each with a $d$-dimensional feature vector, which gives a set of $N = N_{tr} + N_{tst}$ samples in total. Let $X \in \mathbb{R}^{d \times N}$ denote the set of all samples (both training and testing), in which each column is a single sample, and let $Y \in \mathbb{R}^{l \times N}$ collect their corresponding labels for $l$ different classes. Thus, X and Y are composed by stacking the training and testing data: $X = [X_{tr}\ X_{tst}]$ and $Y = [Y_{tr}\ Y_{tst}]$. Our goal is to determine the labels of the test samples, $Y_{tst} \in \mathbb{R}^{l \times N_{tst}}$. Formulation: An illustration of the proposed method is depicted in Fig. 1. First, all the samples (labeled or unlabeled) are arranged into a matrix, X. We are interested in de-noising this matrix. Following [14, 21], this can be done by assuming that X is spanned by a low-rank subspace and should therefore be rank-deficient. This assumption reflects the fact that samples from the same class should be highly correlated [14, 15]. Therefore, the original matrix X is decomposed into two counterparts, D and E, which represent the de-noised data matrix and the error matrix, respectively, similar to RPCA [7]. 
The de-noised data matrix should satisfy the low-rank assumption, and the error matrix is assumed to be sparse. However, this de-noising process does not incorporate the label information and is therefore unsupervised. Note that we also seek a mapping between the de-noised training samples and their respective labels. So, matrix D should be spanned by a low-rank subspace that also yields a good classification model for its sub-matrix, $D_{tr}$. To ensure the rank-deficiency of the matrix D, as in many previous works [7, 14, 21], we approximate the rank function by the nuclear norm (the sum of the singular values of the matrix). The noise is modeled by the $\ell_1$ norm of the matrix, which enforces a sparse noise model on the feature values. Accordingly, the objective function for RFS-LDA in the semi-supervised setting is:

$$\min_{\beta, D, \hat{D}, E} \ \frac{\eta}{2}\|H(Y_{tr} - \beta \hat{D})\|_1 + \|D\|_* + \lambda_1\|E\|_1 + \lambda_2 R(\beta), \quad \text{s.t.} \ X = D + E, \ \hat{D} = [D_{tr}; \mathbf{1}^\top], \qquad (3)$$

where the first term is the $\ell_1$ regression model introduced in (2). This term operates only on the de-noised training samples from matrix D, with a row of all 1s appended to ensure an appropriate linear classification model. The second and third terms, together with the first constraint, are similar to the RPCA formulation [7]. They de-noise the labeled training and unlabeled testing data together. In combination with the first term, they ensure that the de-noised data also yields a favorable regression/classification model. The last term is a regularization on the learned mapping coefficients, which prevents them from taking trivial or unexpectedly large values. The parameters $\eta$, $\lambda_1$ and $\lambda_2$ are constant regularization parameters, which are discussed in more detail later. The regularization on the coefficients could be posed as a simple norm of the $\beta$ matrix. However, in many applications like ours (disease diagnosis), many of the features in the feature vectors are redundant. 
In practice, features are often extracted from many different brain regions, but not all regions contribute to a certain disease. Therefore, it is desirable to determine which features (regions) are the most relevant and discriminative to use. Following [11, 26, 28], we look for a sparse set of weights that incorporates the fewest, most discriminative features. We propose a regularization on the weight matrix as a combination of the $\ell_1$ and Frobenius norms:

$$R(\beta) = \|\beta\|_1 + \gamma\|\beta\|_F. \qquad (4)$$

Evidently, the objective function in (3) is not easy to minimize, since the first term couples a quadratic term with an $\ell_1$ fitting function whose minimization is not straightforward (because of its non-differentiability). To this end, we formulate the solution with a strategy similar to iteratively re-weighted least squares (IRLS) [2]. The $\ell_1$ minimization problem is approximated by a conventional $\ell_2$ least-squares problem, in which each sample in the $\hat{D}$ matrix is weighted by the inverse of its regression residual. The new problem is then:

$$\min_{\beta, D, \hat{D}, E} \ \frac{\eta}{2}\|H(Y_{tr} - \beta \hat{D})\hat{\alpha}\|_F^2 + \|D\|_* + \lambda_1\|E\|_1 + \lambda_2 R(\beta), \quad \text{s.t.} \ X = D + E, \ \hat{D} = [D_{tr}; \mathbf{1}^\top], \qquad (5)$$

where $\hat{\alpha}$ is a diagonal matrix whose $i$-th diagonal element is the $i$-th sample's weight:

$$\hat{\alpha}_{ii} = 1 / \sqrt{(y_i - \beta \hat{d}_i)^2 + \delta}, \qquad \hat{\alpha}_{ij} = 0, \quad \forall i, j \in \{0, \ldots, N_{tr}\}, \ i \neq j, \qquad (6)$$

where $\delta$ is a very small positive number (equal to 0.0001 in our experiments). In the next subsection, we introduce an algorithm to solve this optimization problem. Our work is closely related to the RR and RLDA formulations in [15], where the authors impose a low-rank assumption on the training-data feature values and an $\ell_1$ assumption on the noise model. The discriminant model is learned similarly to LS-LDA, as in (1), while a sample-weighting strategy is employed to achieve a more robust model. 
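In isolation, the IRLS loop defined by (5)-(6) can be sketched for the regression term alone, ignoring the de-noising variables. The `gamma` ridge below stands in for the Frobenius part of R(β), and the function name and defaults are our own simplifications.

```python
import numpy as np

def irls_l1_regression(D_hat, Y, delta=1e-4, gamma=1e-3,
                       max_iter=100, tol=1e-3):
    """IRLS sketch for the l1 fit min_beta ||Y - beta D_hat||_1:
    each sample i gets weight alpha_ii = 1/sqrt(res_i^2 + delta)
    as in eq. (6), then a weighted ridge least-squares is solved,
    mirroring Steps 4-7 of Algorithm 1 (without the B/L_3 terms)."""
    d = D_hat.shape[0]
    I = np.eye(d)
    beta = Y @ D_hat.T @ np.linalg.inv(D_hat @ D_hat.T + gamma * I)
    for _ in range(max_iter):
        res = np.linalg.norm(Y - beta @ D_hat, axis=0)  # per-sample residual
        w = 1.0 / (res**2 + delta)                      # squared IRLS weights
        G = (D_hat * w) @ D_hat.T + gamma * I
        beta_new = (Y * w) @ D_hat.T @ np.linalg.inv(G)
        if np.linalg.norm(beta_new - beta) < tol * np.linalg.norm(beta):
            beta = beta_new
            break
        beta = beta_new
    return beta
```

A single gross sample-outlier receives a tiny weight after the first pass and stops distorting the fit, which is exactly the robustness argument made above.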
On the other hand, our model operates in a semi-supervised learning setting, where both the labeled training and the unlabeled testing samples are de-noised simultaneously. Therefore, the geometry of the sample space is better modeled on the low-dimensional subspace, by interweaving both labeled training and unlabeled testing data. In addition, our model further selects the most discriminative features for learning the regression/classification model, by regularizing the mapping weight matrix and enforcing a sparsity condition on it.

Algorithm 1 RFS-LDA optimization algorithm.
Input: $X = [X_{tr}\ X_{tst}]$, $Y_{tr}$, parameters $\eta$, $\lambda_1$, $\lambda_2$, $\rho$ and $\gamma$.
Initialization: $D^0 = [X_{tr}\ X_{tst}]$, $\hat{D}^0 = [X_{tr}; \mathbf{1}^\top]$, $\beta^0 = Y_{tr}(\hat{D}^0)^\top(\hat{D}^0(\hat{D}^0)^\top + \gamma I)^{-1}$, $E^0 = 0$, $L_1^0 = X/\|X\|_2$, $L_2^0 = X_{tr}/\|X_{tr}\|_2$, $L_3^0 = \beta^0/\|\beta^0\|_2$, $\mu_1 = \frac{dN}{4\|X\|_1}$, $\mu_2 = \frac{dN_{tr}}{4\|X_{tr}\|_1}$, $\mu_3 = \frac{dc}{4\|\beta^0\|_1}$.
1: $k \leftarrow 0$
2: repeat ▷ Main optimization loop
3:   $t \leftarrow 0$, $\hat{\beta}^0 = \beta^k$ ▷ Update $\beta$
4:   repeat
5:     $\forall i, j \in \{0, \ldots, N_{tr} - 1\}, i \neq j$: $\hat{\alpha}_{ij} \leftarrow 0$ and $\hat{\alpha}_{ii} \leftarrow 1/\sqrt{(y_i^k - \hat{\beta}^t \hat{d}_i^k)^2 + 0.0001}$
6:     $\hat{\beta}^{t+1} \leftarrow \big(Y_{tr}\hat{\alpha}\hat{\alpha}^\top(\hat{D}^k)^\top + \mu_3(B^k - L_3^k)\big)\big(\hat{D}^k\hat{\alpha}\hat{\alpha}^\top(\hat{D}^k)^\top + \gamma I\big)^{-1}$, $t \leftarrow t + 1$
7:   until $\|\hat{\beta}^{t-1} - \hat{\beta}^t\|_F/(\|\hat{\beta}^{t-1}\|_F \times \|\hat{\beta}^t\|_F) < 0.001$ or $t > 100$
8:   $\beta^{k+1} \leftarrow \hat{\beta}^t$
9:   $\hat{D}^{k+1} \leftarrow \big(\eta\hat{\alpha}^\top(\beta^{k+1})^\top\beta^{k+1}\hat{\alpha} + \mu_2^k I\big)^{-1}\big(\eta\hat{\alpha}^\top(\beta^{k+1})^\top Y_{tr} - L_2^k + \mu_2^k [D_{tr}^k; \mathbf{1}^\top]\big)$ ▷ Update $\hat{D}$
10:  $D^{k+1} \leftarrow \mathcal{D}_{1/(\mu_1^k + \mu_2^k)}\Big(L_1^k + \mu_1^k(X - E^k) + \big[[L_2^k + \mu_2^k \hat{D}^{k+1}]_{(1:N_{tr},:)}\ \ 0\big]\Big)$ ▷ Update $D$
11:  $E^{k+1} \leftarrow \mathcal{S}_{\lambda_1/\mu_1^k}(X - D^{k+1} + L_1^k/\mu_1^k)$ ▷ Update $E$
12:  $B^{k+1} \leftarrow \mathcal{S}_{\lambda_2/\mu_3^k}(\beta^{k+1} + L_3^k)$ ▷ Update $B$
13:  $L_1^{k+1} \leftarrow L_1^k + \mu_1^k(X - D^{k+1} - E^{k+1})$ ▷ Update multipliers and parameters
14:  $L_2^{k+1} \leftarrow L_2^k + \mu_2^k(\hat{D}^{k+1} - [D_{tr}^{k+1}; \mathbf{1}^\top])$, $L_3^{k+1} \leftarrow L_3^k + \mu_3^k(\beta^{k+1} - B^{k+1})$
15:  $\mu_1^{k+1} \leftarrow \min(\rho\mu_1^k, 10^9)$, $\mu_2^{k+1} \leftarrow \min(\rho\mu_2^k, 10^9)$, $\mu_3^{k+1} \leftarrow \min(\rho\mu_3^k, 10^9)$
16:  $k \leftarrow k + 1$
17: until $\|X - D^k - E^k\|_F/\|X\|_F < 10^{-8}$ and $\|\hat{D}^k - [D_{tr}^k; \mathbf{1}^\top]\|_F/\|\hat{D}^k\|_F < 10^{-8}$ and $\|\beta^k - B^k\|_F/\|\beta^k\|_F < 10^{-8}$
Output: $\beta$, D, E and $Y_{tst} = \beta X_{tst}$. 
Optimization: Problem (5) can be efficiently solved using the augmented Lagrangian multipliers (ALM) approach. We introduce the Lagrangian multipliers $L_1 \in \mathbb{R}^{d \times N}$, $L_2 \in \mathbb{R}^{(d+1) \times N_{tr}}$ and $L_3 \in \mathbb{R}^{l \times (d+1)}$, an auxiliary variable $B \in \mathbb{R}^{l \times (d+1)}$, and write the Lagrangian function as:

$$\begin{aligned} \mathcal{L}(\beta, B, D, \hat{D}, E) = {} & \frac{\eta}{2}\|H(Y_{tr} - \beta \hat{D})\hat{\alpha}\|_F^2 + \|D\|_* + \lambda_1\|E\|_1 + \lambda_2(\|B\|_1 + \gamma\|\beta\|_F) \\ & + \langle L_1, X - D - E \rangle + \frac{\mu_1}{2}\|X - D - E\|_F^2 + \langle L_2, \hat{D} - [D_{tr}; \mathbf{1}^\top] \rangle \\ & + \frac{\mu_2}{2}\|\hat{D} - [D_{tr}; \mathbf{1}^\top]\|_F^2 + \langle L_3, \beta - B \rangle + \frac{\mu_3}{2}\|\beta - B\|_F^2, \end{aligned} \qquad (7)$$

where $\mu_1$, $\mu_2$ and $\mu_3$ are penalty parameters. Five variables ($\beta$, B, D, $\hat{D}$ and E) contribute to the problem. We alternately optimize over each variable while fixing the others. Except for the matrix $\beta$, all variables have straightforward or closed-form solutions. $\beta$ is calculated through IRLS [2], by iteratively computing the weights in $\hat{\alpha}$ and solving the resulting conventional least-squares problem until convergence. The detailed optimization steps are given in Algorithm 1. The normalization factor H is omitted in the algorithm for readability. In the algorithm, I is the identity matrix, and the operators $\mathcal{D}_\tau(\cdot)$ and $\mathcal{S}_\kappa(\cdot)$ are defined as follows. $\mathcal{D}_\tau(A) = U\mathcal{D}_\tau(\Sigma)V^*$ applies the singular value thresholding algorithm [6] to the intermediate matrix $\Sigma$, as $\mathcal{D}_\tau(\Sigma) = \mathrm{diag}(\{(\sigma_i - \tau)_+\})$, where $U\Sigma V^*$ is the singular value decomposition (SVD) of A and the $\sigma_i$ are the singular values. Additionally, $\mathcal{S}_\kappa(a) = (a - \kappa)_+ - (-a - \kappa)_+$ is the soft-thresholding operator, i.e., the proximal operator of the $\ell_1$ norm [3]. Note that $s_+$ is the positive part of $s$, defined as $s_+ = \max(0, s)$. Algorithm analysis: The solution for each of the matrices B, D, $\hat{D}$, E is the minimizer of a convex function when all other variables are fixed. For $\beta$, the solution is achieved iteratively via the IRLS approach. Both the $\ell_1$ fitting function and the approximating re-weighted least-squares functions are convex. 
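The two proximal operators used in Algorithm 1 are small enough to state directly; a sketch in NumPy (the function names are ours):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding D_tau(A) = U diag((sigma_i - tau)_+) V^T,
    the proximal operator of the nuclear norm [6]."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(a, kappa):
    """Soft thresholding S_kappa(a) = (a - kappa)_+ - (-a - kappa)_+,
    the proximal operator of the l1 norm [3]; works elementwise."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)
```

For example, `soft(np.array([3.0, -2.0, 0.5]), 1.0)` returns `[2.0, -1.0, 0.0]`, and `svt` shrinks each singular value by `tau`, zeroing out any that fall below it.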
We only need to ensure that minimizing the re-weighted least-squares surrogate is numerically more tractable than minimizing the original $\ell_1$ objective. This is discussed in depth, and convergence is proved, in [2]. To estimate the computational complexity of the algorithm, we need to investigate the complexity of its sub-procedures. The two most computationally expensive steps in the loop are the iterative update of $\beta$ (Algorithm 1, Steps 4-7) and the SVT operation (Algorithm 1, Step 10). The former includes iteratively solving a least-squares problem, which costs $O(d^2N)$ per iteration, and the latter has the SVD as its most computationally intensive operation, at $O(d^2N + N^3)$. Taking the maximum number of iterations of the first sub-procedure to be $t_{max} = 100$, the overall computational complexity of the algorithm per iteration is $O(100\,d^2N + N^3)$. The number of iterations of the whole algorithm until convergence depends on the choice of the $\mu$ parameters. If the $\mu$ penalty parameters increase smoothly in each iteration (as in Step 15, Algorithm 1), the overall algorithm is Q-linearly convergent. A reasonable choice of the sequence of $\mu$s yields a decrease in the number of required SVD operations [1, 21]. 3 Experiments We compare our method with several baseline and state-of-the-art methods in three different scenarios. The first experiment is on synthetic data, and highlights how robust the proposed method is against sample-outliers and feature-noises, separately or when they occur at the same time. The next two experiments are conducted for neurodegenerative brain disorder diagnosis. We use two popular databases, one for Parkinson’s disease (PD) and the other for Alzheimer’s disease (AD). 
We compare our results with different baseline methods, including: conventional LS-LDA [10], RLDA [15], RPCA applied to the X matrix separately for de-noising followed by LS-LDA for classification (denoted RPCA+LS-LDA) [15], linear support vector machines (SVM), and sparse feature selection with SVM (SFS+SVM) or with RLDA (SFS+RLDA). Except for RPCA+LS-LDA, the compared methods do not incorporate the testing data. In order to have a fair set of comparisons, we also compare against the transductive matrix completion (MC) approach [14]. Additionally, to evaluate the effect of the regularization on matrix $\beta$, we report results for RFS-LDA regularized by only $\gamma\|\beta\|_F$ (denoted RFS-LDA∗), instead of the term introduced in (4). Moreover, we also train our proposed RFS-LDA in a fully supervised setting, i.e., not involving any testing data in the training process, to show the effect of the semi-supervised learning framework established in our proposed method. This is simply done by replacing the variable X in (3) with $X_{tr}$ and solving the problem correspondingly. This method, referred to as S-RFS-LDA, uses only the training data to form the geometry of the sample space and, therefore, cleans only the training feature-noises. Regarding the choice of parameters, the best parameters are selected through an inner 10-fold cross-validation on the training data, for all the competing methods. For the proposed method, the parameters are set with the same strategy as in [15]: $\lambda_1 = \Lambda_1/\sqrt{\min(d, N)}$, $\lambda_2 = \Lambda_2/\sqrt{d}$, $\eta^k = \Lambda_3\|X\|_*/\|Y_{tr} - \beta^k \hat{D}^k\|_F^2$, and $\rho$ (controlling the $\mu$s in the algorithm) is set to 1.01. We set $\Lambda_1$, $\Lambda_2$, $\Lambda_3$ and $\gamma$ through inner cross-validation, and found that setting them all to 1 yields reasonable results across all datasets. Synthetic Data: We construct two independent 100-dimensional subspaces, with bases $U_1$ and $U_2$ (as described in [21]). $U_1 \in \mathbb{R}^{100 \times 100}$ is a random orthogonal matrix and $U_2 = TU_1$, where T is a random rotation matrix. 
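The two-subspace construction, together with the sampling and noise-injection steps described next, can be sketched as follows. Using QR factorizations of Gaussian matrices to obtain the random orthogonal factors, and giving injected noisy samples random labels, are our own assumptions.

```python
import numpy as np

def make_two_subspace_data(dim=100, n_per_class=500, seed=0):
    """Sketch of the synthetic setup: U1 is a random orthogonal basis,
    U2 = T @ U1 for a random rotation T, and X_i = U_i @ Q_i with Q_i
    drawn i.i.d. from N(0, 1)."""
    rng = np.random.default_rng(seed)
    U1, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    T, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    U2 = T @ U1
    X1 = U1 @ rng.normal(size=(dim, n_per_class))
    X2 = U2 @ rng.normal(size=(dim, n_per_class))
    X = np.hstack([X1, X2])
    y = np.repeat([0, 1], n_per_class)
    return X, y

def add_noise(X, y, n_noisy_samples=0, n_noisy_features=0, seed=1):
    """Append i.i.d. N(0, 1) noisy samples (with random labels) and
    noisy feature rows, as in the three experimental settings."""
    rng = np.random.default_rng(seed)
    if n_noisy_samples:
        X = np.hstack([X, rng.normal(size=(X.shape[0], n_noisy_samples))])
        y = np.concatenate([y, rng.integers(0, 2, n_noisy_samples)])
    if n_noisy_features:
        X = np.vstack([X, rng.normal(size=(n_noisy_features, X.shape[1]))])
    return X, y
```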
Then, 500 vectors are sampled from each subspace through $X_i = U_i Q_i$, $i \in \{1, 2\}$, with $Q_i$ a 100 × 500 matrix drawn independent and identically distributed (i.i.d.) from N(0, 1). This leads to a binary classification problem. We gradually add noisy samples and features to the data, drawn i.i.d. from N(0, 1), and evaluate our proposed method. The accuracy means and standard deviations over three different runs are illustrated in Fig. 2. This experiment is conducted under three settings: (1) First, we analyze the behavior of the method against noise gradually added to some of the features (feature-noises), illustrated in Fig. 2a. (2) Second, we randomly add noisy samples to the aforementioned noise-free samples and evaluate the methods in the sole presence of sample-outliers. Results are depicted in Fig. 2b. (3) Finally, we simultaneously add noisy features and samples. Fig. 2c shows the mean±std accuracy as a function of the number of added noisy features and samples. Note that all the reported results are obtained through 10-fold cross-validation. As can be seen, our method is able to select a better subset of features and samples, and achieves superior results compared to the RLDA and conventional LS-LDA approaches. Furthermore, our method is more robust against increasing noise. Brain neurodegenerative disease diagnosis databases: The first set of data used in this paper is obtained from the Parkinson’s progression markers initiative (PPMI) database² [23]. PPMI is the first substantial study for identifying PD progression biomarkers to advance the understanding of the disease. In this research, we use the MRI data acquired in the PPMI study, in which a T1-weighted, 3D sequence (e.g., MPRAGE or SPGR) is acquired for each subject using 3T SIEMENS MAGNETOM TrioTim syngo scanners. We use subjects scanned using the MPRAGE sequence to 
We use subjects scanned using MPRAGE sequence to 2http://www.ppmi-info.org/data 6 0 100 200 70 80 90 100 # of added noisy features Accuracy (%) RFS-LDA RLDA LS-LDA (a) Only added noisy features 0 100 200 60 80 100 # of added noisy samples (b) Only added noisy samples 0 100 200 60 80 100 # of added noisy samples and features (c) Added noisy samples & features Figure 2: Results comparisons on synthetic data, for three different runs (mean±std). Table 1: The accuracy (ACC) and area under ROC curve (AUC) of the PD/NC classification on PPMI database, compared to the baseline methods. Method RFS-LDA RFS-LDA∗ S-RFS-LDA RLDA SFS+RLDA RPCA+LS-LDA LS-LDA SVM SFS+SVM MC ACC 84.1 78.3 75.8 71.0 73.4 59.4 56.6 55.2 61.5 61.5 AUC 0.87 0.81 0.80 0.79 0.80 0.64 0.59 0.56 0.59 68.8 minimize the effect of different scanning protocols. The T1-weighted images were acquired for 176 sagittal slices with the following parameters: repetition time = 2300 ms, echo time = 2.98 ms, flip angle = 9◦, and voxel size = 1 × 1 × 1 mm3. All the MR images were preprocessed by skull stripping [29], cerebellum removal, and then segmented into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues [20]. The anatomical automatic labeling atlas [27], parcellated with 90 predefined regions of interest (ROI), was registered using HAMMER3 [25, 30] to each subject’s native space. We further added 8 more ROIs in basal ganglia and brainstem regions, which are clinically important ROIs for PD. We then computed WM, GM and CSF tissue volumes in each of the 98 ROIs as features. 56 PD and 56 normal control (NC) subjects are used in our experiments. The second dataset is from Alzheimer’s disease neuroimaging initiative (ADNI) study4, including MRI and FDG-PET data. For this experiment, we used 93 AD patients, 202 MCI patients and 101 NC subjects. To process the data, same tools employed in [29] and [32] are used, including spatial distortion, skull-stripping, and cerebellum removal. 
The FSL package [33] was used to segment each MR image into three tissues, i.e., GM, WM, and CSF. Then, 93 ROIs were parcellated for each subject [25] via atlas warping. The volume of GM tissue in each ROI was calculated as an image feature. For the FDG-PET images, a rigid transformation was employed to align each one to its corresponding MR image, and the mean intensity of each ROI was calculated as a feature. All these features were further normalized in a similar way as in [32]. Results: The first experiment is set up on the PPMI database. Table 1 shows the diagnosis accuracy of the proposed technique (RFS-LDA) in comparison with different baseline and state-of-the-art methods, using a 10-fold cross-validation strategy. As can be seen, the proposed method outperforms all the others. This could be because our method deals with both feature-noises and sample-outliers. Note that subjects and their corresponding feature vectors extracted from MRI data are quite prone to noise, because of the many possible sources of noise (e.g., the patient’s body movements, RF emission due to thermal motion, the overall MR scanner measurement chain, or preprocessing artifacts). Therefore, some samples might not be useful (sample-outliers) and some might be contaminated by certain amounts of noise (feature-noises). Our method deals with both types and achieves good results. The goal of the experiments on the ADNI database is to discriminate MCI and AD patients from NC subjects, separately. Therefore, NC subjects form our negative class, while the positive class is defined as AD in one experiment and MCI in the other. The diagnosis results of the AD vs. NC and MCI vs. NC experiments are reported in Table 2. As can be seen, in comparison with the state-of-the-art, our method achieves good results in terms of both accuracy and the area under the ROC curve. This is because we successfully discard the sample-outliers and detect the feature-noises. 
³Available for download at http://www.nitrc.org/projects/hammerwml
⁴http://www.loni.ucla.edu/ADNI
Table 2: The accuracy (ACC) and the area under ROC curve (AUC) of the Alzheimer’s disease classification on the ADNI database, compared to the baseline methods.
Method      | RFS-LDA | RFS-LDA∗ | S-RFS-LDA | RLDA | SFS+RLDA | RPCA+LS-LDA | LS-LDA | SVM  | SFS+SVM | MC
AD/NC ACC   | 91.8    | 89.1     | 86.3      | 88.7 | 90.1     | 87.6        | 70.9   | 72.1 | 76.3    | 78.2
AD/NC AUC   | 0.98    | 0.96     | 0.95      | 0.96 | 0.98     | 0.93        | 0.81   | 0.80 | 0.83    | 0.82
MCI/NC ACC  | 89.8    | 85.6     | 84.5      | 85.0 | 88.1     | 84.5        | 68.9   | 70.1 | 76.1    | 74.3
MCI/NC AUC  | 0.93    | 0.90     | 0.90      | 0.87 | 0.92     | 0.87        | 0.75   | 0.79 | 0.80    | 0.78
Figure 3: The top selected ROIs for the AD vs. NC (left) and MCI vs. NC (right) classification problems.
Discussions: In medical imaging applications, many sources of noise (e.g., patient movement, radiation, limitations of the imaging devices, preprocessing artifacts) contribute to the acquired data [13], and therefore methods that deal with noise and outliers are of great interest. Our method benefits from a single optimization objective that can simultaneously suppress sample-outliers and feature-noises, and it exhibits good performance compared to the competing methods. One of the interesting features of the proposed method is the $\ell_1$-norm regularization on the mapping coefficients, which selects a compact set of features to contribute to the learned mapping. The magnitude of a coefficient indicates the level of contribution of the corresponding feature to the learned model. In our application, features are extracted from the whole brain, but only a small number of regions are associated with each disease (e.g., AD, MCI or PD). Using this strategy, we can determine which brain regions are highly associated with a certain disease. Fig. 3 shows the top regions selected by our algorithm in the AD vs. NC and MCI vs. NC classification scenarios. 
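The region-selection readout described above reduces to ranking features by coefficient magnitude. A sketch, where the helper name and the column-norm aggregation over classes are our own choices:

```python
import numpy as np

def top_regions(beta, feature_names, k=5):
    """Rank features (ROIs) by the magnitude of their learned
    coefficients, aggregating over classes via the column norms of
    beta (l x d); features zeroed out by the l1 penalty rank last."""
    scores = np.linalg.norm(np.atleast_2d(beta), axis=0)
    order = np.argsort(scores)[::-1][:k]
    return [(feature_names[i], float(scores[i])) for i in order]
```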
These regions, including the middle temporal gyrus, medial front-orbital gyrus, postcentral gyrus, caudate nucleus, cuneus, and amygdala, have been reported to be associated with AD and MCI in the literature [24, 26]. The figures show the union of the regions selected for the MRI and FDG-PET features. The most frequently selected regions for the PD/NC experiment are the substantia nigra (left and right), putamen (right), middle frontal gyrus (right), and superior temporal gyrus (left), which are also consistent with the literature [4, 31]. This selection of brain regions could be further used in future clinical analyses. The semi-supervised setting of the proposed method is also of great interest for patient diagnosis. When new patients arrive and are to be diagnosed, previous patients with no definite diagnosis so far (i.e., not yet labeled) can still be used to build a more reliable classifier. In other words, the current testing samples can contribute, as unlabeled samples, to the diagnosis of future subjects. 4 Conclusion In this paper, we proposed an approach for discriminative classification that is robust against both sample-outliers and feature-noises. Our method enjoys a semi-supervised setting, where all the labeled training and unlabeled testing data are used to detect outliers and are de-noised simultaneously. We applied our method to the problem of neurodegenerative brain disease diagnosis, in particular the diagnosis of Parkinson’s and Alzheimer’s diseases. The results show that our method outperforms all competing methods. As directions for future work, one could develop a multi-task learning reformulation of the proposed method to incorporate multiple modalities for the subjects, or extend the method to the incomplete-data case. References [1] E. Adeli-Mosabbeb and M. Fathy. Non-negative matrix completion for action detection. Image Vision Comput., 39:38–51, 2015. [2] N. Bissantz, L. 
Dümbgen, A. Munk, and B. Stratmann. Convergence analysis of generalized iteratively reweighted least squares algorithms on convex function spaces. SIAM Optimiz., 19(4):1828–1845, 2009. [3] S. Boyd et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, 2011. [4] Heiko Braak, Kelly Tredici, Udo Rüb, Rob de Vos, Ernst Jansen Steur, and Eva Braak. Staging of brain pathology related to sporadic Parkinson’s disease. Neurobio. of Aging, 24(2):197–211, 2003. [5] D. Cai, X. He, and J. Han. Semi-supervised discriminant analysis. In CVPR, 2007. [6] J.-F. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Optimiz., 20(4):1956–1982, 2010. [7] E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3), 2011. [8] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006. [9] C. Croux and C. Dehon. Robust linear discriminant analysis using S-estimators. Canadian J. of Statistics, 29(3):473–493, 2001. [10] F. De la Torre. A least-squares framework for component analysis. IEEE TPAMI, 34(6):1041–1055, 2012. [11] E. Elhamifar and R. Vidal. Robust classification using structured sparse representation. In CVPR, 2011. [12] S. Fidler, D. Skocaj, and A. Leonardis. Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE TPAMI, 28(3):337–350, 2006. [13] V. Fritsch, G. Varoquaux, B. Thyreau, J.-B. Poline, and B. Thirion. Detecting outliers in high-dimensional neuroimaging datasets with robust covariance estimators. Med. Image Anal., 16(7):1359–1370, 2012. [14] A. Goldberg, X. Zhu, B. Recht, J.-M. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In NIPS, pages 757–765, 2010. [15] D. Huang, R. Cabral, and F. De la Torre. Robust regression. In ECCV, pages 616–630, 2012. [16] A. 
Joulin and F. Bach. A convex relaxation for weakly supervised classifiers. In ICML, 2012. [17] S. Kim, A. Magnani, and S. Boyd. Robust Fisher discriminant analysis. In NIPS, pages 659–666, 2005. [18] H. Li, T. Jiang, and K. Zhang. Efficient and robust feature extraction by maximum margin criterion. In NIPS, pages 97–104, 2003. [19] H. Li, C. Shen, A. van den Hengel, and Q. Shi. Worst-case linear discriminant analysis as scalable semidefinite feasibility problems. IEEE TIP, 24(8), 2015. [20] K.O. Lim and A. Pfefferbaum. Segmentation of MR brain images into cerebrospinal fluid spaces, white and gray matter. J. of Computer Assisted Tomography, 13:588–593, 1989. [21] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. IEEE TPAMI, 35(1):171–184, 2013. [22] C. Lu, J. Shi, and J. Jia. Online robust dictionary learning. In CVPR, pages 415–422, June 2013. [23] K. Marek and et al.. The parkinson progression marker initiative (PPMI). Prog. Neurobiol., 95(4):629 – 635, 2011. [24] B. Pearce, A. Palmer, D. Bowen, G. Wilcock, M. Esiri, and A. Davison. Neurotransmitter dysfunction and atrophy of the caudate nucleus in alzheimer’s disease. Neurochem Pathol., 2(4):221–32, 1985. [25] D. Shen and C. Davatzikos. HAMMER: Hierarchical attribute matching mechanism for elastic registration. IEEE TMI, 21:1421–1439, 2002. [26] K.-H. Thung, C.-Y. Wee, P.-T. Yap, and D. Shen. Neurodegenerative disease diagnosis using incomplete multi-modality data via matrix shrinkage and completion. NeuroImage, 91:386–400, 2014. [27] N. Tzourio-Mazoyer and et al.. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1):273–289, 2002. [28] A. Wagner, J. Wright, A. Ganesh, Zihan Zhou, and Yi Ma. Towards a practical face recognition system: Robust registration and illumination by sparse representation. In CVPR, pages 597–604, 2009. [29] Y. Wang, J. 
Nie, P.-T. Yap, G. Li, F. Shi, X. Geng, L. Guo, D. Shen, ADNI, et al. Knowledge-guided robust MRI brain extraction for diverse large-scale neuroimaging studies on humans and non-human primates. PLOS ONE, 9(1):e77810, 2014. [30] Y. Wang, J. Nie, P.-T. Yap, F. Shi, L. Guo, and D. Shen. Robust deformable-surface-based skull-stripping for large-scale studies. In MICCAI, volume 6893, pages 635–642, 2011. [31] A. Worker and et al.. Cortical thickness, surface area and volume measures in parkinson’s disease, multiple system atrophy and progressive supranuclear palsy. PLOS ONE, 9(12), 2014. [32] D. Zhang, Y. Wang, L. Zhou, H. Yuan, D. Shen, ADNI, et al. Multimodal classification of Alzheimer’s disease and mild cognitive impairment. NeuroImage, 55(3):856–867, 2011. [33] Y. Zhang, M. Brady, and S. Smith. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE TMI, 20(1):45–57, 2001. [34] Xiaojin Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005. [35] D. Ziegler and J. Augustinack. Harnessing advances in structural MRI to enhance research on Parkinson’s disease. Imaging Med., 5(2):91–94, 2013. 9
COEVOLVE: A Joint Point Process Model for Information Diffusion and Network Co-evolution

Mehrdad Farajtabar∗ Yichen Wang∗ Manuel Gomez-Rodriguez† Shuang Li∗ Hongyuan Zha∗ Le Song∗
Georgia Institute of Technology∗ MPI for Software Systems†
{mehrdad,yichen.wang,sli370}@gatech.edu manuelgr@mpi-sws.org {zha,lsong}@cc.gatech.edu

Abstract

Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when exposed to new information sources, and in turn these links are altering the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have been predominantly studied separately, ignoring their co-evolutionary dynamics. We propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. This model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as well as more accurate predictions than alternatives.

1 Introduction

Online social networks, such as Twitter or Weibo, have become large information networks where people share, discuss and search for information of personal interest as well as breaking news [1].
In this context, users often forward to their followers information they are exposed to via their followees, triggering the emergence of information cascades that travel through the network [2], and constantly create new links to information sources, triggering changes in the network itself over time. Importantly, recent empirical studies with Twitter data have shown that both information diffusion and network evolution are coupled and network changes are often triggered by information diffusion [3, 4, 5]. While there have been many recent works on modeling information diffusion [2, 6, 7, 8] and network evolution [9, 10, 11], most of them treat these two stochastic processes independently and separately, ignoring the influence one may have on the other over time. Thus, to better understand information diffusion and network evolution, there is an urgent need for joint probabilistic models of the two processes, which are largely nonexistent to date. In this paper, we propose a probabilistic generative model, COEVOLVE, for the joint dynamics of information diffusion and network evolution. Our model is based on the framework of temporal point processes, which explicitly characterize the continuous time interval between events, and it consists of two interwoven and interdependent components (refer to Appendix B for an illustration):

I. Information diffusion process. We design an "identity revealing" multivariate Hawkes process [12] to capture the mutual excitation behavior of retweeting events, where the intensity of such events in a user is boosted by previous events from her time-varying set of followees. Although Hawkes processes have been used for information diffusion before [13, 14, 15, 16, 17, 18, 19], the key innovation of our approach is to explicitly model the excitation due to a particular source node, hence revealing the identity of the source.
Such design reflects the reality that information sources are explicitly acknowledged, and it also allows a particular information source to acquire new links at a rate according to her "informativeness".

II. Network evolution process. We model link creation as an "information driven" survival process, and couple the intensity of this process with retweeting events. Although survival processes have been used for link creation before [20, 21], the key innovation in our model is to incorporate retweeting events as the driving force for such processes. Since our model has captured the source identity of each retweeting event, new links will be targeted toward the information sources, with an intensity proportional to their degree of excitation and each source's influence.

Our model is designed in such a way that it allows the two processes, information diffusion and network evolution, to unfold simultaneously on the same time scale and to exercise bidirectional influence on each other, allowing sophisticated coevolutionary dynamics to be generated (e.g., see Figure 5). Importantly, the flexibility of our model does not prevent us from efficiently simulating diffusion and link events from the model and learning its parameters from real-world data:

• Efficient simulation. We design a scalable sampling procedure that exploits the sparsity of the generated networks. Its complexity is O(nd log m), where n is the number of samples, m is the number of nodes and d is the maximum number of followees per user.

• Convex parameter learning. We show that the model parameters that maximize the joint likelihood of observed diffusion and link creation events can be found via convex optimization.
Finally, we experimentally verify that our model can produce coevolutionary dynamics of information diffusion and network evolution, and generate retweet and link events that obey common information diffusion patterns (e.g., cascade structure, size and depth), static network patterns (e.g., node degree) and temporal network patterns (e.g., shrinking diameter) described in the related literature [22, 10, 23]. Furthermore, we show that, by modeling the coevolutionary dynamics, our model provides significantly more accurate link and diffusion event predictions than alternatives on a large-scale Twitter dataset [3].

2 Background on Temporal Point Processes

A temporal point process is a random process whose realization consists of a list of discrete events localized in time, {t_i} with t_i ∈ R+ and i ∈ Z+. Many different types of data produced in online social networks can be represented as temporal point processes, such as the times of retweets and link creations. A temporal point process can be equivalently represented as a counting process, N(t), which records the number of events before time t. Let the history H(t) be the list of times of events {t_1, t_2, . . . , t_n} up to but not including time t. Then, the number of observed events in a small time window dt between [t, t + dt) is dN(t) = ∑_{t_i∈H(t)} δ(t − t_i) dt, and hence N(t) = ∫_0^t dN(s), where δ(t) is a Dirac delta function. More generally, given a function f(t), we can define the convolution with respect to dN(t) as

f(t) ⋆ dN(t) := ∫_0^t f(t − τ) dN(τ) = ∑_{t_i∈H(t)} f(t − t_i). (1)

The point process representation of temporal data is fundamentally different from the discrete-time representation typically used in social network analysis. It directly models the time interval between events as random variables, and avoids the need to pick a time window to aggregate events. It allows temporal events to be modeled in a more fine-grained fashion, and has remarkably rich theoretical support [24].
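As a concrete illustration of Eq. (1), the convolution with the counting measure reduces to a finite sum over the events in H(t). The function and variable names below are ours, not the paper's, and the event times are toy values:

```python
import math

def convolve_counting(f, history, t):
    """Eq. (1): f(t) * dN(t) = sum of f(t - t_i) over events t_i strictly before t."""
    return sum(f(t - ti) for ti in history if ti < t)

# Exponential triggering kernel kappa_omega(t) = exp(-omega*t) for t >= 0,
# the kernel used throughout the paper.
omega = 1.0
kappa = lambda s: math.exp(-omega * s) if s >= 0 else 0.0

events = [0.5, 1.0, 2.0]                       # event times t_1, t_2, t_3
val = convolve_counting(kappa, events, 2.5)    # exp(-2.0) + exp(-1.5) + exp(-0.5)
```

With the exponential kernel, this sum is exactly the excitation term that appears inside the Hawkes intensities later in the paper.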
An important way to characterize temporal point processes is via the conditional intensity function, a stochastic model for the time of the next event given all the times of previous events. Formally, the conditional intensity function λ∗(t) (intensity, for short) is the conditional probability of observing an event in a small window [t, t + dt) given the history H(t), i.e.,

λ∗(t) dt := P{event in [t, t + dt) | H(t)} = E[dN(t) | H(t)], (2)

where one typically assumes that only one event can happen in a small window of size dt, i.e., dN(t) ∈ {0, 1}. Then, given a time t′ ⩾ t, we can also characterize the conditional probability that no event happens during [t, t′) and the conditional density that an event occurs at time t′ as S∗(t′) = exp(−∫_t^{t′} λ∗(τ) dτ) and f∗(t′) = λ∗(t′) S∗(t′), respectively [24]. Furthermore, we can express the log-likelihood of a list of events {t_1, t_2, . . . , t_n} in an observation window [0, T) as

L = ∑_{i=1}^{n} log λ∗(t_i) − ∫_0^T λ∗(τ) dτ, T ⩾ t_n. (3)

This simple log-likelihood will later enable us to learn the parameters of our model from observed data. Finally, the functional form of the intensity λ∗(t) is often designed to capture the phenomena of interest. Some useful functional forms we will use later are [24]:

(i) Poisson process. The intensity is assumed to be independent of the history H(t), but it can be a time-varying function, i.e., λ∗(t) = g(t) ⩾ 0;

(ii) Hawkes process. The intensity models a mutual excitation between events, i.e.,

λ∗(t) = µ + α κ_ω(t) ⋆ dN(t) = µ + α ∑_{t_i∈H(t)} κ_ω(t − t_i), (4)

where κ_ω(t) := exp(−ωt) I[t ⩾ 0] is an exponential triggering kernel and µ ⩾ 0 is a baseline intensity independent of the history. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel and the weight α ⩾ 0, making the intensity history-dependent and a stochastic process by itself. We will focus on the exponential kernel in this paper.
However, other functional forms for the triggering kernel, such as the log-logistic function, are possible, and our model does not depend on this particular choice; and,

(iii) Survival process. There is only one event for an instantiation of the process, i.e.,

λ∗(t) = g∗(t) (1 − N(t)), (5)

where λ∗(t) becomes 0 if an event already happened before t.

3 Generative Model of Information Diffusion and Network Co-evolution

In this section, we use the above background on temporal point processes to formulate our probabilistic generative model for the joint dynamics of information diffusion and network evolution.

3.1 Event Representation

We model the generation of two types of events: tweet/retweet events, e^r, and link creation events, e^l. Instead of just the time t, we record each event as a triplet

e^r or e^l := (u, s, t), (6)

where u is the destination node, s the source node, and t the event time. For a retweet event, the triplet means that the destination node u retweets at time t a tweet originally posted by source node s. Recording the source node s reflects the real-world scenario that information sources are explicitly acknowledged. Note that the occurrence of event e^r does not mean that u is directly retweeting from or is connected to s. This event can happen when u is retweeting a message by another node u′ where the original information source s is acknowledged. Node u will pass on the same source acknowledgement to its followers (e.g., "I agree @a @b @c @s"). Original tweets posted by node u are allowed in this notation; in this case, the event will simply be e^r = (u, u, t). Given a list of retweet events up to but not including time t, the history H^r_us(t) of retweets by u due to source s is H^r_us(t) = {e^r_i = (u_i, s_i, t_i) | u_i = u and s_i = s}. The entire history of retweet events is denoted as H^r(t) := ∪_{u,s∈[m]} H^r_us(t). For a link creation event, the triplet means that destination node u creates at time t a link to source node s, i.e., from time t on, node u starts following node s.
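To make the bookkeeping of Eq. (6) concrete, here is a minimal sketch; the `Event` container and the `history_r` helper are our own names, and the toy event list is illustrative:

```python
from collections import namedtuple

# Triplet of Eq. (6): destination node u, source node s, event time t.
Event = namedtuple("Event", ["u", "s", "t"])

# Original tweets are encoded as (u, u, t); retweets acknowledge the source s.
retweets = [Event(1, 1, 0.4), Event(2, 1, 1.3), Event(2, 1, 2.7), Event(0, 0, 2.0)]

def history_r(events, u, s, t):
    """H^r_us(t): retweet events by destination u due to source s, strictly before t."""
    return [e for e in events if e.u == u and e.s == s and e.t < t]
```

For example, `history_r(retweets, 2, 1, 2.0)` keeps only the event at time 1.3: the retweet at 2.7 lies outside the history H(2.0).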
To ease the exposition, we restrict ourselves to the case where links cannot be deleted and thus each (directed) link is created only once. However, our model can be easily augmented to consider multiple link creations and deletions per node pair, as discussed in Section 8. We denote the link creation history as H^l(t).

3.2 Joint Model with Two Interwoven Components

Given m users, we use two sets of counting processes to record the generated events, one for information diffusion and the other for network evolution. More specifically:

I. Retweet events are recorded using a matrix N(t) of size m × m for each fixed time point t. The (u, s)-th entry in the matrix, N_us(t) ∈ {0} ∪ Z+, counts the number of retweets of u due to source s up to time t. These counting processes are "identity revealing", since they keep track of the source node that triggers each retweet. This matrix N(t) can be dense, since N_us(t) can be nonzero even when node u does not directly follow s. We also let dN(t) := (dN_us(t))_{u,s∈[m]}.

II. Link events are recorded using an adjacency matrix A(t) of size m × m for each fixed time point t. The (u, s)-th entry in the matrix, A_us(t) ∈ {0, 1}, indicates whether u is directly following s. That is, A_us(t) = 1 means that the directed link has been created before t. For simplicity of exposition, we do not allow self-links. The matrix A(t) is typically sparse, but the number of nonzero entries can change over time. We also define dA(t) := (dA_us(t))_{u,s∈[m]}.

Then the interwoven information diffusion and network evolution processes can be characterized using their respective intensities E[dN(t) | H^r(t) ∪ H^l(t)] = Γ∗(t) dt and E[dA(t) | H^r(t) ∪ H^l(t)] = Λ∗(t) dt, where Γ∗(t) = (γ∗_us(t))_{u,s∈[m]} and Λ∗(t) = (λ∗_us(t))_{u,s∈[m]}. The sign ∗ means that the intensity matrices will depend on the joint history, H^r(t) ∪ H^l(t), and hence their evolution will be coupled.
By this coupling, we make (i) the counting processes for link creation "information driven" and (ii) the evolution of the linking structure change the information diffusion process. Refer to Appendix B for an illustration of our joint model. In the next two sections, we specify the details of these two intensity matrices.

3.3 Information Diffusion Process

We model the intensity, Γ∗(t), for retweeting events using a multivariate Hawkes process [12]:

γ∗_us(t) = I[u = s] η_u + I[u ≠ s] β_s ∑_{v∈F_u(t)} κ_{ω1}(t) ⋆ (A_uv(t) dN_vs(t)), (7)

where I[·] is the indicator function and F_u(t) := {v ∈ [m] : A_uv(t) = 1} is the current set of followees of u. The term η_u ⩾ 0 is the intensity of original tweets by a user u on her own initiative, becoming the source of a cascade, and the term β_s ∑_{v∈F_u(t)} κ_{ω1}(t) ⋆ (A_uv(t) dN_vs(t)) models the propagation of peer influence over the network, where the triggering kernel κ_{ω1}(t) models the decay of peer influence over time. Note that the retweet intensity matrix Γ∗(t) is by itself a stochastic process that depends on the time-varying network topology, i.e., the nonzero entries in A(t), whose growth is controlled by the network evolution process in Section 3.4. Hence the model design captures the influence of the network topology and of each source's influence, β_s, on the information diffusion process. More specifically, to compute γ∗_us(t), one first finds the current set F_u(t) of followees of u, and then aggregates the retweets of these followees that are due to source s. Note that these followees may or may not directly follow source s. Then, the more frequently node u is exposed to retweets of tweets originated from source s via her followees, the more likely she will also retweet a tweet originated from source s. Once node u retweets due to source s, the corresponding N_us(t) will be incremented, and this in turn will increase the likelihood of triggering retweets due to source s among the followers of u.
Thus, the source does not simply broadcast the message to nodes directly following her; rather, her influence propagates through the network even to those nodes that do not directly follow her. Finally, this information diffusion model allows a node to repeatedly generate events in a cascade, and is very different from the independent cascade or linear threshold models [25], which allow at most one event per node per cascade.

3.4 Network Evolution Process

We model the intensity, Λ∗(t), for link creation using a combination of survival and Hawkes processes:

λ∗_us(t) = (1 − A_us(t)) (µ_u + α_u κ_{ω2}(t) ⋆ dN_us(t)), (8)

where the term 1 − A_us(t) effectively ensures a link is created only once, and after that, the corresponding intensity is set to zero. The term µ_u ⩾ 0 denotes a baseline intensity, which models when a node u decides to follow a source s spontaneously at her own initiative. The term α_u κ_{ω2}(t) ⋆ dN_us(t) corresponds to the retweets of node u due to tweets originally published by source s, where the triggering kernel κ_{ω2}(t) models the decay of interest over time. Here, the higher the corresponding retweet intensity, the more likely u will find information by source s useful and will create a direct link to s. The link creation intensity Λ∗(t) is also a stochastic process by itself, which depends on the retweet events and is driven by the retweet count increments dN_us(t). It captures the influence of retweets on link creation, and closes the loop of mutual influence between information diffusion and network topology. Note that creating a link is more than just adding a path or allowing information sources to take shortcuts during diffusion. The network evolution makes fundamental changes to the diffusion dynamics and the stationary distribution of the diffusion process in Section 3.3.
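The two coupled intensities of Eqs. (7) and (8) can be sketched directly in code. This is a simplified illustration under our own naming: a static adjacency snapshot `A` stands in for A_uv(t) at each event time, and all parameter values are toy numbers:

```python
import math
from collections import namedtuple

Event = namedtuple("Event", ["u", "s", "t"])   # destination, source, time (Eq. (6))

def retweet_intensity(u, s, t, A, events, eta, beta, omega1):
    """Eq. (7): gamma*_us(t). Original tweets at rate eta_u when u == s; otherwise
    beta_s times the exponentially discounted retweets of source s by u's followees."""
    if u == s:
        return eta[u]
    acc = sum(math.exp(-omega1 * (t - e.t)) for e in events
              if e.t < t and e.s == s and A[u][e.u] == 1)
    return beta[s] * acc

def link_intensity(u, s, t, A, events, mu, alpha, omega2):
    """Eq. (8): lambda*_us(t) = (1 - A_us)(mu_u + alpha_u * discounted retweets of u due to s)."""
    if A[u][s] == 1:
        return 0.0                     # survival term: each link is created at most once
    acc = sum(math.exp(-omega2 * (t - e.t)) for e in events
              if e.t < t and e.u == u and e.s == s)
    return mu[u] + alpha[u] * acc

# Toy snapshot: node 0 follows node 1; node 1 tweeted (originally) at t = 0.5.
A = [[0, 1], [0, 0]]
events = [Event(1, 1, 0.5)]
eta, beta = [0.1, 0.2], [0.3, 0.4]
mu, alpha = [0.05, 0.06], [0.7, 0.8]
g01 = retweet_intensity(0, 1, 1.5, A, events, eta, beta, omega1=1.0)  # 0.4 * exp(-1.0)
l10 = link_intensity(1, 0, 1.0, A, events, mu, alpha, omega2=2.0)     # baseline mu_1 only
```

The survival factor in `link_intensity` returns zero once the link exists, matching the (1 − A_us(t)) term, and the retweet term in `retweet_intensity` aggregates only events from u's current followees, matching F_u(t).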
As shown in [14], given a fixed network structure A, the expected retweet intensity µ_s(t) at time t due to source s will depend on the network structure in a highly nonlinear fashion, i.e., µ_s(t) := E[Γ∗_{·s}(t)] = (e^{(A−ω1 I)t} + ω1 (A − ω1 I)^{−1} (e^{(A−ω1 I)t} − I)) η_s, where η_s ∈ R^m has a single nonzero entry with value η_s and e^{(A−ω1 I)t} is the matrix exponential. When t → ∞, the stationary intensity µ̄_s = (I − A/ω1)^{−1} η_s is also nonlinearly related to the network structure. Thus, given two network structures A(t) and A(t′) at two points in time, which differ by a few edges, the effect of these edges on the information diffusion is not simply an additive relation. Depending on how the newly created edges modify the eigen-structure of the sparse matrix A(t), their effect on the information diffusion can be drastic.

Remark 1. In our model, each user is exposed to information through a time-varying set of neighbors. By doing so, we couple information diffusion with the network evolution, increasing the practical applicability of our model to real-network datasets. The particular definition of exposure (e.g., a neighbor's retweet) will depend on the type of historical information that is available. Remarkably, the flexibility of our model allows for different types of diffusion events, which we can broadly classify into two categories. In the first category, events correspond to the times when an information cascade hits a person, for example through a retweet from one of her neighbors, but she does not explicitly like or forward the associated post. In the second category, the person decides to explicitly like or forward the associated post, and events correspond to the times when she does so. Intuitively, events in the latter category are more prone to trigger new connections but are also less frequent. Therefore, the latter is mostly suitable for large event datasets, for example those generated synthetically.
In contrast, the events in the former category are less likely to inspire new links but are found in abundance. Therefore, the former is very suitable for real-world sparse data. Consequently, in the synthetic experiments we used the latter and in the real-data experiments we used the former. Note that Eq. (8) is written for the latter category, whereas Fig. 7 in the appendix is drawn for the former.

4 Efficient Simulation of Coevolutionary Dynamics

We can simulate samples (link creations, tweets and retweets) from our model by adapting Ogata's thinning algorithm [26], originally designed for multidimensional Hawkes processes. However, a naive implementation of Ogata's algorithm would scale poorly: for each sample, we would need to re-evaluate Γ∗(t) and Λ∗(t); thus, to draw n samples, we would need to perform O(m²n²) operations, where m is the number of nodes. We instead designed a sampling procedure that is especially well fitted to the structure of our model. The algorithm is based on the following key idea: if we consider each intensity function in Γ∗(t) and Λ∗(t) as a separate Hawkes process and draw a sample from each, it is easy to show that the minimum among all these samples is a valid sample from the model [12]. By itself, drawing samples from all intensities would not improve the computational complexity. However, when the network is sparse, whenever we sample a new node (or link) event from the model, only a small number of intensity functions, in the local neighborhood of the node (or the link), will change. As a consequence, we can reuse most of the samples from the intensity functions for the next new sample and find which intensity functions we need to change in O(log m) operations, using a heap.
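The thinning idea can be sketched for a single one-dimensional Hawkes intensity; this is a drastic simplification of the full Algorithm 2, which juggles all m² coupled intensities, and the function name and parameters are ours. Because an exponential-kernel intensity only decays between events, its current value is a valid upper bound for the proposal, and it can be rolled forward in closed form:

```python
import math
import random

def sample_hawkes(mu, alpha, omega, T, seed=0):
    """Ogata-style thinning for lambda*(t) = mu + alpha * sum_i exp(-omega (t - t_i))."""
    rng = random.Random(seed)
    events = []
    t, lam = 0.0, mu                     # lam tracks lambda*(t) just after time t
    while True:
        lam_bar = lam                    # upper bound: the intensity can only decay from here
        w = rng.expovariate(lam_bar)     # propose a waiting time at rate lam_bar
        lam = (lam - mu) * math.exp(-omega * w) + mu   # decay over w in closed form, O(1)
        t += w
        if t >= T:
            return events
        if rng.random() <= lam / lam_bar:  # accept candidate with prob lambda*(t)/lam_bar
            events.append(t)
            lam += alpha                   # self-excitation jump of Eq. (4)
```

The accepted event times are strictly increasing and confined to [0, T); rejected candidates still advance the clock, which is what makes the thinning correct.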
Finally, we exploit the properties of the exponential function to update individual intensities for each new sample in O(1): let t_i and t_{i+1} be two consecutive events; then we can compute λ∗(t_{i+1}) as (λ∗(t_i) − µ) exp(−ω(t_{i+1} − t_i)) + µ, without the need to sum over all previous events. The complete simulation algorithm is summarized in Algorithm 2 in Appendix C. By using Algorithm 2, we reduce the complexity from O(n²m²) to O(nd log m), where d is the maximum number of followees per node. That means our algorithm scales logarithmically with the number of nodes and linearly with the number of edges at any point in time during the simulation. We also note that the events for link creations, tweets and retweets are generated in a temporally intertwined and interleaving fashion by Algorithm 2. This is because every new retweet event will modify the intensity for link creation, and after each link creation we also need to update the retweet intensities.

[Figure 1: Coevolutionary dynamics for synthetic data. a) Spike trains of link and retweet events. b) Link and retweet intensities. c) Cross covariance of link and retweet intensities.]
[Figure 2: Degree distributions when network sparsity level reaches 0.001 for fixed α = 0.1.]

5 Efficient Parameter Estimation from Coevolutionary Events

Given a collection of retweet events E = {e^r_i} and link creation events A = {e^l_i} recorded within a time window [0, T), we can easily estimate the parameters needed in our model using maximum likelihood estimation.
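For a single exponential-kernel Hawkes intensity, the two terms of the generic log-likelihood of Eq. (3) can be written down directly, since the compensator integral has a closed form. This is our own illustrative helper, not the paper's estimation code:

```python
import math

def hawkes_loglik(events, T, mu, alpha, omega):
    """Eq. (3): sum_i log lambda*(t_i) - integral_0^T lambda*(tau) dtau, for
    lambda*(t) = mu + alpha * sum_{t_i < t} exp(-omega (t - t_i))."""
    ll = 0.0
    for i, t in enumerate(events):
        lam = mu + alpha * sum(math.exp(-omega * (t - tj)) for tj in events[:i])
        ll += math.log(lam)
    # Closed-form compensator: mu*T + (alpha/omega) * sum_i (1 - exp(-omega (T - t_i))).
    compensator = mu * T + (alpha / omega) * sum(1.0 - math.exp(-omega * (T - t))
                                                 for t in events)
    return ll - compensator
```

With α = 0 this reduces to the Poisson log-likelihood n log µ − µT, and the objective is concave in (µ, α): a log of a linear function minus a linear function, which is the same structure exploited for the joint objective in Eq. (9).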
Here, we compute the joint log-likelihood L({µ_u}, {α_u}, {η_u}, {β_s}) of these events using Eq. (3), i.e.,

L = ∑_{e^r_i∈E} log γ∗_{u_i s_i}(t_i) − ∑_{u,s∈[m]} ∫_0^T γ∗_us(τ) dτ (tweet/retweet terms) + ∑_{e^l_i∈A} log λ∗_{u_i s_i}(t_i) − ∑_{u,s∈[m]} ∫_0^T λ∗_us(τ) dτ (link terms). (9)

For the terms corresponding to retweets, the log term only sums over the actual observed events, but the integral term sums over all possible combinations of destination and source pairs, even if there is no event between a particular pair of destination and source. For such pairs with no observed events, the corresponding counting processes have essentially survived the observation window [0, T), and the term −∫_0^T γ∗_us(τ) dτ simply corresponds to the log survival probability. The terms corresponding to links have a similar structure to those for retweets. Since γ∗_us(t) and λ∗_us(t) are linear in the parameters (η_u, β_s) and (µ_u, α_u) respectively, log(γ∗_us(t)) and log(λ∗_us(t)) are concave functions in these parameters. Integration of γ∗_us(t) and λ∗_us(t) still results in linear functions of the parameters. Thus the overall objective in Eq. (9) is concave, and the global optimum can be found by many algorithms. In our experiments, we adapt the efficient algorithm developed in previous work [18, 19]. Furthermore, the optimization problem decomposes into m independent problems, one per node u, and can be readily parallelized.

6 Properties of Simulated Co-evolution, Networks and Cascades∗

In this section, we perform an empirical investigation of the properties of the networks and information cascades generated by our model. In particular, we show that our model can generate coevolutionary retweet and link dynamics and a wide spectrum of static and temporal network patterns and information cascades. Appendix D contains additional simulation results and visualizations. Appendix E contains an evaluation of our model estimation method on synthetic data.

Retweet and link coevolution.
Figures 1(a,b) visualize the retweet and link events, aggregated across different sources, and the corresponding intensities for one node and one realization, picked at random. Here, it is already apparent that retweets and link creations are clustered in time and often follow each other. Further, Figure 1(c) shows the cross-covariance of the retweet and link creation intensity, computed across multiple realizations, for the same node, i.e., if f(t) and g(t) are two intensities, the cross-covariance is a function of the time lag τ defined as h(τ) = ∫ f(t + τ) g(t) dt. It can be seen that the cross-covariance has its peak around 0, i.e., retweets and link creations are highly correlated and co-evolve over time. For ease of exposition, we illustrated co-evolution using one node; however, we found consistent results across nodes.

[Figure 3: Diameter for network sparsity 0.001. Panels (a) and (b) show the diameter against sparsity over time for fixed α = 0.1 and for fixed β = 0.1, respectively.]
[Figure 4: Distribution of cascade structure, size and depth for different α values and fixed β = 0.2.]

Degree distribution. Empirical studies have shown that the degree distribution of online social networks and microblogging sites follows a power law [9, 1], and argued that it is a consequence of the rich-get-richer phenomenon. The degree distribution of a network is a power law if the expected number of nodes m_d with degree d is given by m_d ∝ d^{−γ}, where γ > 0.
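The quantity m_d can be read off an adjacency snapshot directly; a small sketch with a toy directed adjacency matrix of our own (A[u][s] = 1 meaning u follows s, as in Section 3.2):

```python
from collections import Counter

# Toy directed adjacency snapshot A(t).
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 1, 0]]

# In-degree of s = number of followers of s, i.e. the column sums of A.
in_degree = [sum(row[s] for row in A) for s in range(len(A))]
m_d = Counter(in_degree)   # m_d[d] = number of nodes with in-degree d
```

On a log-log plot of d against m_d, a power law m_d ∝ d^{−γ} appears as a straight line with slope −γ.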
Intuitively, the higher the values of the parameters α and β, the closer the resulting degree distribution follows a power law; the lower their values, the closer the distribution is to that of an Erdős-Rényi random graph [27]. Figure 2 confirms this intuition by showing the degree distribution for different values of β.

Small (shrinking) diameter. There is empirical evidence that online social networks and microblogging sites exhibit a relatively small diameter, which shrinks (or flattens) as the network grows [28, 9, 22]. Figures 3(a-b) show the diameter of the largest connected component (LCC) against the sparsity of the network over time for different values of α and β. Although at the beginning there is a short increase in the diameter due to the merging of small connected components, the diameter decreases as the network evolves. Here, nodes arrive to the network when they follow (or are followed by) a node in the largest connected component.

Cascade patterns. Our model can produce the most commonly occurring cascade structures as well as heavy-tailed cascade size and depth distributions, as observed in historical Twitter data [23]. Figure 4 summarizes the results. The higher the α value, the shallower and wider the cascades.

7 Experiments on a Real Dataset

In this section, we validate our model using a large Twitter dataset containing nearly 550,000 tweet, retweet and link events from more than 280,000 users [3]. We will show that our model can capture the co-evolutionary dynamics and, by doing so, predicts retweet and link creation events more accurately than several alternatives. Appendix F contains detailed information about the dataset and additional experiments.

Retweet and link coevolution. Figures 5(a, b) visualize the retweet and link events, aggregated across different sources, and the corresponding intensities given by our trained model for one node, picked at random.
Here, it is already apparent that retweets and link creations are clustered in time and often follow each other, and our fitted model intensities successfully track such behavior. Further, Figure 5(c) compares the cross-covariance between the empirical retweet and link creation intensities with the cross-covariance between the retweet and link creation intensities given by our trained model, computed across multiple realizations, for the same node. The similarity between both cross-covariances is striking, and both have their peak around 0, i.e., retweets and link creations are highly correlated and co-evolve over time. For ease of exposition, as in Section 6, we illustrated co-evolution using one node; however, we found consistent results across nodes (see Appendix F).

∗Implementation codes are available at https://github.com/farajtabar/Coevolution

[Figure 5: Coevolutionary dynamics for real data. a) Spike trains of link and retweet events. b) Estimated link and retweet intensities. c) Empirical and estimated cross covariance of link and retweet intensities.]
[Figure 6: Prediction performance in the Twitter dataset by means of average rank (AR) and success probability that the true (test) events rank among the top-1 events (Top-1).]

Link prediction. We use our model to predict the identity of the source for each test link event, given the historical (link and retweet) events before the time of the prediction, and compare its performance with two state-of-the-art methods, denoted as TRF [3] and WENG [5]. TRF measures
the probability of creating a link from a source at a given time by simply computing the proportion of new links created from the source with respect to the total number of links created up to that time. WENG considers different link creation strategies and makes a prediction by combining them. We evaluate performance by computing the probability of all potential links under each method and then report (i) the average rank of all true (test) events (AvgRank) and (ii) the success probability (SP) that the true (test) events rank among the top-1 potential events at each test time (Top-1). We summarize the results in Fig. 6(a-b), where we consider an increasing number of training retweet/tweet events. Our model outperforms TRF and WENG consistently. For example, for 8 × 10^4 training events, our model achieves an SP 2.5× larger than TRF and WENG. Activity prediction. We use our model to predict the identity of the node that is going to generate each test diffusion event, given the historical events before the time of the prediction, and compare its performance with a baseline consisting of a Hawkes process without network evolution. For the Hawkes baseline, we take a snapshot of the network right before the prediction time, and use all historical retweeting events to fit the model. Here, we evaluate the performance via the same two measures as in the link prediction task and summarize the results in Figure 6(c-d) against an increasing number of training events. The results show that, by modeling the co-evolutionary dynamics, our model performs significantly better than the baseline.
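The two evaluation measures can be sketched as follows; this is an illustrative reimplementation of average rank and top-1 success probability over candidate scores, not the authors' evaluation code, and the score matrix is made up for the example.

```python
import numpy as np

def avg_rank_and_top1(scores, true_idx):
    """scores: (n_events, n_candidates) array of model scores per test event;
    true_idx: index of the true source/node for each event.
    Returns (AvgRank, Top-1 success probability)."""
    scores = np.asarray(scores, dtype=float)
    ranks = []
    for event_scores, truth in zip(scores, true_idx):
        order = np.argsort(-event_scores)              # best candidate first
        ranks.append(int(np.where(order == truth)[0][0]) + 1)
    ranks = np.array(ranks)
    return ranks.mean(), np.mean(ranks == 1)

scores = np.array([[0.9, 0.05, 0.05],   # true source 0 ranked 1st
                   [0.2, 0.7, 0.1],     # true source 1 ranked 1st
                   [0.1, 0.3, 0.6]])    # true source 0 ranked 3rd
print(avg_rank_and_top1(scores, [0, 1, 0]))  # AvgRank 5/3, Top-1 SP 2/3
```

Lower AvgRank and higher Top-1 are better, which matches the direction of the comparisons in Figure 6.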
8 Discussion We proposed a joint continuous-time model of information diffusion and network evolution, which can capture the coevolutionary dynamics, mimics the most common static and temporal network patterns observed in real-world networks and information diffusion data, and predicts the network evolution and information diffusion more accurately than previous state-of-the-art methods. Using point processes to model intertwined events in information networks opens up many interesting avenues for future modeling work. Our current model is just a showcase of a rich set of possibilities offered by a point process framework, which has rarely been explored before in large-scale social network modeling. For example, we can generalize our model to support link deletion by introducing an intensity matrix $\Xi^*(t)$ modeling link deletions as survival processes, i.e., $\Xi^*(t) = (g^*_{us}(t)A_{us}(t))_{u,s\in[m]}$, and then consider the counting process $A(t)$ associated with the adjacency matrix to evolve as $\mathbb{E}[dA(t)\,|\,\mathcal{H}_r(t)\cup\mathcal{H}_l(t)] = \Lambda^*(t)\,dt - \Xi^*(t)\,dt$. We can also consider letting the number of nodes vary over time. Furthermore, a large and diverse range of point processes can be used in the framework without changing the efficiency of the simulation and the convexity of the parameter estimation, e.g., by conditioning the intensity on additional external features, such as node attributes. Acknowledgments The authors would like to thank Demetris Antoniades and Constantine Dovrolis for providing them with the dataset. The research was supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR N00014-15-1-2340, NSF IIS-1218749, NSF CAREER IIS-1350983. References [1] H. Kwak, C. Lee, H. Park, et al. What is Twitter, a social network or a news media? WWW, 2010. [2] J. Cheng, L. Adamic, P. A. Dow, et al. Can cascades be predicted? WWW, 2014. [3] D. Antoniades and C. Dovrolis. Co-evolutionary dynamics in social networks: A case study of Twitter. arXiv:1309.6001, 2013. [4] S. Myers and J. Leskovec.
The bursty dynamics of the Twitter information network. WWW, 2014. [5] L. Weng, J. Ratkiewicz, N. Perra, B. Goncalves, C. Castillo, F. Bonchi, R. Schifanella, F. Menczer, and A. Flammini. The role of information diffusion in the evolution of social networks. KDD, 2013. [6] N. Du, L. Song, M. Gomez-Rodriguez, and H. Zha. Scalable influence estimation in continuous-time diffusion networks. NIPS, 2013. [7] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. ICML, 2011. [8] M. Gomez-Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. KDD, 2010. [9] D. Chakrabarti, Y. Zhan, and C. Faloutsos. R-MAT: A recursive model for graph mining. Computer Science Department, page 541, 2004. [10] J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Ghahramani. Kronecker graphs: An approach to modeling networks. JMLR, 2010. [11] J. Leskovec, L. Backstrom, R. Kumar, et al. Microscopic evolution of social networks. KDD, 2008. [12] T. J. Liniger. Multivariate Hawkes Processes. PhD thesis, ETHZ, 2009. [13] C. Blundell, J. Beck, and K. Heller. Modelling reciprocating relationships with Hawkes processes. NIPS, 2012. [14] M. Farajtabar, N. Du, M. Gomez-Rodriguez, I. Valera, H. Zha, and L. Song. Shaping social activity by incentivizing users. NIPS, 2014. [15] T. Iwata, A. Shah, and Z. Ghahramani. Discovering latent influence in online social activities via shared cascade Poisson processes. KDD, 2013. [16] S. Linderman and R. Adams. Discovering latent network structure in point process data. ICML, 2014. [17] I. Valera and M. Gomez-Rodriguez. Modeling adoption of competing products and conventions in social media. ICDM, 2015. [18] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. AISTATS, 2013. [19] K. Zhou, H. Zha, and L. Song. Learning triggering kernels for multi-dimensional Hawkes processes. ICML, 2013. [20] D. Hunter, P.
Smyth, D. Q. Vu, et al. Dynamic egocentric models for citation networks. ICML, 2011. [21] D. Q. Vu, D. Hunter, P. Smyth, and A. Asuncion. Continuous-time regression models for longitudinal networks. NIPS, 2011. [22] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. KDD, 2005. [23] S. Goel, D. J. Watts, and D. G. Goldstein. The structure of online diffusion networks. EC, 2012. [24] O. Aalen, O. Borgan, and H. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer, 2008. [25] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. KDD, 2003. [26] Y. Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23–31, 1981. [27] P. Erdős and A. Rényi. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci., 5:17–61, 1960. [28] L. Backstrom, P. Boldi, M. Rosa, J. Ugander, and S. Vigna. Four degrees of separation. WebSci, 2012. [29] M. Granovetter. The strength of weak ties. American Journal of Sociology, pages 1360–1380, 1973. [30] D. Romero and J. Kleinberg. The directed closure process in hybrid social-information networks, with an analysis of link formation on Twitter. ICWSM, 2010. [31] J. Ugander, L. Backstrom, and J. Kleinberg. Subgraph frequencies: Mapping the empirical and extremal geography of large graph collections. WWW, 2013. [32] D. J. Watts and S. H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 1998. [33] T. Gross and B. Blasius. Adaptive coevolutionary networks: a review. Journal of the Royal Society Interface, 2008. [34] P. Singer, C. Wagner, and M. Strohmaier. Factors influencing the co-evolution of social and content networks in online social media. In Modeling and Mining Ubiquitous Social Media, pages 40–59. Springer, 2012.
2015
Nearly-Optimal Private LASSO∗ Kunal Talwar Google Research kunal@google.com Abhradeep Thakurta (Previously) Yahoo! Labs guhathakurta.abhradeep@gmail.com Li Zhang Google Research liqzhang@google.com Abstract We present a nearly optimal differentially private version of the well-known LASSO estimator. Our algorithm provides privacy protection with respect to each training example. The excess risk of our algorithm, compared to the non-private version, is Õ(1/n^{2/3}), assuming all the input data has bounded ℓ∞ norm. This is the first differentially private algorithm that achieves such a bound without polynomial dependence on p and without additional assumptions on the design matrix. In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms. 1 Introduction A common task in supervised learning is to select the model that best fits the data. This is frequently achieved by selecting a loss function that associates a real-valued loss with each datapoint d and model θ, and then selecting, from a class of admissible models, the model θ that minimizes the average loss over all data points in the training set. This procedure is commonly referred to as Empirical Risk Minimization (ERM). The availability of large datasets containing sensitive information from individuals has motivated the study of learning algorithms that guarantee the privacy of individuals contributing to the database. A rigorous and by-now standard privacy guarantee is via the notion of differential privacy. In this work, we study the design of differentially private algorithms for Empirical Risk Minimization, continuing a long line of work. (See [2] for a survey.) In particular, we study adding privacy protection to the classical LASSO estimator, which has been widely used and analyzed. We first present a differentially private optimization algorithm for the LASSO estimator.
The algorithm is a combination of the classical Frank-Wolfe algorithm [15] and the exponential mechanism [21] for guaranteeing privacy. We then show that our algorithm achieves nearly optimal risk among all differentially private algorithms. The lower bound proof relies on recently developed techniques with roots in cryptography [4, 14]. Consider the training dataset D consisting of n pairs of data $d_i = (x_i, y_i)$, where $x_i \in \mathbb{R}^p$ is usually called the feature vector and $y_i \in \mathbb{R}$ the prediction. The LASSO estimator, or sparse linear regression, solves for $\theta^* = \operatorname{argmin}_{\theta} \frac{1}{n}\sum_i (x_i \cdot \theta - y_i)^2$ subject to $\|\theta\|_1 \le c$. To simplify presentation, we assume c = 1, but our results directly extend to general c. The ℓ1 constraint tends to induce a sparse θ∗, so it is widely used in the high-dimensional setting when p ≫ n. Here, we will study approximating the LASSO estimation with the minimum possible error while protecting the privacy of each individual di. Below we define the setting more formally. ∗Part of this work was done at Microsoft Research Silicon Valley Campus. Problem definition: Given a data set D = {d1, · · · , dn} of n samples from a domain D, a constraint set C ⊆ R^p, and a loss function L : C × D → R, for any model θ, define its excess empirical risk as $$R(\theta; D) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^{n} L(\theta; d_i) - \min_{\theta \in C} \frac{1}{n}\sum_{i=1}^{n} L(\theta; d_i). \quad (1)$$ For LASSO, the constraint set is the ℓ1 ball, and the loss is the quadratic loss function. We define the risk of a mechanism A on a data set D as R(A; D) = E[R(A(D); D)], where the expectation is over the internal randomness of A, and the risk $R(A) = \max_{D \in \mathcal{D}^n} R(A; D)$ is the maximum risk over all possible data sets. Our objective is then to design a mechanism A which preserves (ϵ, δ)-differential privacy (Definition 1.3) and achieves as low a risk as possible. We call the minimum achievable risk the privacy risk, defined as $\min_A R(A)$, where the min is over all (ϵ, δ)-differentially private mechanisms A.
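For concreteness, the excess empirical risk of Eq. (1) for the quadratic LASSO loss can be sketched as follows. The synthetic data and the grid approximation of the minimum over C are assumptions made purely for illustration (a real solver would optimize over C directly):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2
X = rng.uniform(-1, 1, size=(n, p))          # each ||x_i||_inf <= 1
theta_ref = np.array([0.5, -0.3])            # ||theta_ref||_1 = 0.8 <= c = 1
y = np.clip(X @ theta_ref + 0.1 * rng.standard_normal(n), -1, 1)  # |y_i| <= 1

def loss(theta):                             # L(theta; D): mean squared loss
    return np.mean((X @ theta - y) ** 2)

# Approximate min_{theta in C} L(theta; D) over the l1 ball by a fine grid
# (feasible here only because p = 2).
grid = np.linspace(-1, 1, 201)
candidates = [np.array([a, b]) for a in grid for b in grid if abs(a) + abs(b) <= 1]
min_loss = min(loss(t) for t in candidates)

def excess_risk(theta):                      # R(theta; D) from Eq. (1)
    return loss(theta) - min_loss

print(excess_risk(theta_ref))                # small: theta_ref is near-optimal
```

A model far from the data-generating parameter, such as θ = 0, has a much larger excess empirical risk than θ_ref under this loss.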
There has been much work on studying the privacy risk for the LASSO estimator. However, all the previous results either make strong assumptions about the input data or have polynomial dependence on the dimension p. First [20] and then [24] studied the LASSO estimator with a differential privacy guarantee. They showed that one can avoid the polynomial dependence on p in the excess empirical risk if the data matrix X satisfies the restricted strong convexity and mutual incoherence properties. While such assumptions seem necessary to prove that LASSO recovers the exact support in the worst case, they are often violated in practice, where LASSO still leads to useful models. It is therefore desirable to design and analyze private versions of LASSO in the absence of such assumptions. In this work, we do so by analyzing the loss achieved by the private optimizer, compared to the true optimizer. We make primarily two contributions in this paper. First we present an algorithm that achieves a privacy risk of Õ(1/n^{2/3}) for the LASSO problem.¹ Compared to the previous work, we only assume that the input data has bounded ℓ∞ norm. In addition, the above risk bound has only logarithmic dependence on p, which fits particularly well for LASSO, as we usually assume n ≪ p when applying LASSO. This bound is achieved by a private version of the Frank-Wolfe algorithm. Assuming that each data point di satisfies ∥di∥∞ ≤ 1, we have Theorem 1.1. There exists an (ϵ, δ)-differentially private algorithm A for LASSO such that $$R(A) = O\!\left(\frac{\log(np)\sqrt{\log(1/\delta)}}{(n\epsilon)^{2/3}}\right).$$ Our second contribution is to show that, surprisingly, this simple algorithm gives a nearly tight bound. We show that this rather unusual $n^{-2/3}$ dependence is not an artifact of the algorithm or the analysis, but is in fact the right dependence for the LASSO problem: no differentially private algorithm can do better! We prove the lower bound by employing fingerprinting-codes-based techniques developed in [4, 14]. Theorem 1.2.
For the sparse linear regression problem where ∥xi∥∞ ≤ 1, for ϵ = 0.1 and δ = o(1/n²), any (ϵ, δ)-differentially private algorithm A must have $R(A) = \Omega(1/(n \log n)^{2/3})$. Our improved privacy risk crucially depends on the fact that the constraint set is a polytope with few (polynomial in the dimension) vertices. This allows us to use a private version of the Frank-Wolfe algorithm, where at each step, we use the exponential mechanism to select one of the vertices of the polytope. We also present a variant of Frank-Wolfe that uses objective perturbation instead of the exponential mechanism. We show (Theorem 2.6) that we can obtain a risk bound dependent on the Gaussian width of the constraint set, which often results in tighter bounds compared to bounds based, e.g., on the diameter. While more general, this variant adds much more noise than the Frank-Wolfe-based algorithm, as it effectively publishes the whole gradient at each step. When C is not a polytope with a small number of vertices, one can still use the exponential mechanism as long as one has a small list of candidate points which contains an approximate optimizer for every direction. For many simple cases, for example the ℓq ball with 1 < q < 2, the bounds attained in this way have an additional polynomial dependence on the dimension p, instead of the logarithmic dependence in the above result. (¹Throughout the paper, we use Õ to hide logarithmic factors.) For example, when q = 1, the upper bound from this variant has an extra factor of $p^{1/3}$. Whereas such a dependence is provably needed for q = 2, the upper bound jumps rather abruptly from the logarithmic dependence for q = 1 to a polynomial dependence on p for q > 1. We leave open the question of resolving this discontinuity and interpolating more smoothly between the ℓ1 case and the ℓ2 case. Our results enlarge the set of problems for which privacy comes "for free".
Given n samples from a distribution, suppose that θ∗ is the empirical risk minimizer and θpriv is the differentially private approximate minimizer. Then the non-private ERM algorithm outputs θ∗ and incurs expected (on the distribution) loss equal to loss(θ∗, training set) + generalization error, where the generalization error term depends on the loss function, on C, and on the number of samples n. The differentially private algorithm incurs an additional loss of the privacy risk. If the privacy risk is asymptotically no larger than the generalization error, we can think of privacy as coming for free, since under the assumption of n being large enough to make the generalization error small, we are also making n large enough to make the privacy risk small. In the case when C is the ℓ1-ball, and the loss function is the squared loss with ∥x∥∞ ≤ 1 and |y| ≤ 1, the best known generalization error bounds dominate the privacy risk when n = ω(log³ p) [1, Theorem 18]. 1.1 Related work There has been much work on private LASSO or, more generally, private ERM algorithms. The error bounds mainly depend on the shape of the constraint set and the Lipschitz condition of the loss function. Here we summarize these related results. Related to our results, we distinguish two settings: i) the constraint set is bounded in the ℓ1-norm and the loss function is 1-Lipschitz in the ℓ1-norm (call it the (ℓ1/ℓ∞)-setting); this is directly related to our bounds on LASSO; and ii) the constraint set has bounded ℓ2 norm and the loss function is 1-Lipschitz in the ℓ2 norm (the (ℓ2/ℓ2)-setting), which is related to our bounds using the Gaussian width. The (ℓ1/ℓ∞)-setting: The results in this setting include [20, 24, 19, 25]. The first two works make certain assumptions about the instance (restricted strong convexity (RSC) and mutual incoherence).
Under these assumptions, they obtain privacy risk guarantees that depend logarithmically on the dimension p, thus allowing the guarantees to be meaningful even when p ≫ n. In fact their bound of O(polylog p/n) can be better than our tight bound of O(polylog p/n^{2/3}). However, these assumptions on the data are strong and may not hold in practice. Our guarantees do not require any such data-dependent assumptions. The result of [19] captures the scenario where the constraint set C is the probability simplex and the loss function is a generalized linear model, but provides a worse bound of O(polylog p/n^{1/3}). For the special case of linear loss functions, which are interesting primarily in the online prediction setting, the techniques of [19, 25] provide a bound of O(polylog p/n). The (ℓ2/ℓ2)-setting: In all the works on private convex optimization that we are aware of, either the excess risk guarantees depend polynomially on the dimensionality of the problem (p), or special structure is assumed of the loss (e.g., generalized linear models [19] or linear losses [25]). A similar dependence is also present in the online version of the problem [18, 26]. [2] recently showed that in the private ERM setting, this polynomial dependence on p is in general unavoidable. In our work we show that one can replace this dependence on p with the Gaussian width of the constraint set C, which can be much smaller. Effect of Gaussian width in risk minimization: Our result on general C has a dependence on the Gaussian width of C. This geometric concept has previously appeared in other contexts. For example, [1] bounds the excess generalization error by the Gaussian width of the constraint set C. Recently, [5] showed that the Gaussian width of a constraint set C is very closely related to the number of generic linear measurements one needs to perform to recover an underlying model θ∗ ∈ C.
The notion of Gaussian width has also been used by [22, 11], but in the very different context of differentially private query release mechanisms for answering multiple linear queries over a database. 1.2 Background Differential Privacy: The notion of differential privacy (Definition 1.3) is by now a de facto standard for statistical data privacy [10, 12]. One of the reasons why differential privacy has become so popular is that it provides meaningful guarantees even in the presence of arbitrary auxiliary information. At a semantic level, the privacy guarantee ensures that an adversary learns almost the same thing about an individual independent of his presence or absence in the data set. The parameters (ϵ, δ) quantify the amount of information leakage. For reasons beyond the scope of this work, ϵ ≈ 0.1 and $\delta = 1/n^{\omega(1)}$ are a good choice of parameters. Here n refers to the number of samples in the data set. Definition 1.3. A randomized algorithm A is (ϵ, δ)-differentially private if, for all neighboring data sets D and D′ (i.e., they differ in one record, or equivalently, $d_H(D, D') = 1$) and for all events S in the output space of A, we have $\Pr(A(D) \in S) \le e^{\epsilon}\Pr(A(D') \in S) + \delta$. Here $d_H(D, D')$ refers to the Hamming distance. ℓq-norm, q ≥ 1: For q ≥ 1, the ℓq-norm of any vector v ∈ R^p is defined as $\left(\sum_{i=1}^{p} |v(i)|^q\right)^{1/q}$, where v(i) is the i-th coordinate of the vector v. L-Lipschitz continuity w.r.t. a norm ∥· ∥: A function Ψ : C → R is L-Lipschitz within a set C w.r.t. a norm ∥· ∥ if the following holds: $\forall \theta_1, \theta_2 \in C,\ |\Psi(\theta_1) - \Psi(\theta_2)| \le L \cdot \|\theta_1 - \theta_2\|$. Gaussian width of a set C: Let b ∼ N(0, I_p) be a Gaussian random vector in R^p. The Gaussian width of a set C is defined as $G_C \stackrel{\text{def}}{=} \mathbb{E}_b\left[\sup_{w \in C} |\langle b, w \rangle|\right]$. 2 Private Convex Optimization by the Frank-Wolfe Algorithm In this section we analyze a differentially private variant of the classical Frank-Wolfe algorithm [15].
We show that for the setting where the constraint set C is a polytope with k vertices, and the loss function L(θ; d) is Lipschitz w.r.t. the ℓ1-norm, one can obtain an excess privacy risk of roughly O(log k/n^{2/3}). This in particular captures the high-dimensional linear regression setting. One such example is the classical LASSO algorithm [27], which computes $\operatorname{argmin}_{\theta : \|\theta\|_1 \le 1} \frac{1}{n}\|X\theta - y\|_2^2$. In the usual case of $|x_{ij}|, |y_j| = O(1)$, the loss $L(\theta) = \frac{1}{n}\|X\theta - y\|_2^2$ is O(1)-Lipschitz with respect to the ℓ1-norm, and we show that one can achieve the nearly optimal privacy risk of Õ(1/n^{2/3}). The Frank-Wolfe algorithm [15] can be regarded as a "greedy" algorithm which moves towards the optimum solution in the first-order approximation (see Algorithm 1 for the description). How fast the Frank-Wolfe algorithm converges depends on L's "curvature", defined as follows according to [8, 17]. We remark that a β-smooth function on C has curvature constant bounded by $\beta\|C\|^2$. Definition 2.1 (Curvature constant). For L : C → R, define $$\Gamma_L := \sup_{\theta_1, \theta_2 \in C,\ \gamma \in (0,1],\ \theta_3 = \theta_1 + \gamma(\theta_2 - \theta_1)} \frac{2}{\gamma^2}\left(L(\theta_3) - L(\theta_1) - \langle \theta_3 - \theta_1, \nabla L(\theta_1) \rangle\right).$$ Remark 1. A useful bound can be derived for a quadratic loss $L(\theta) = \theta^{\top}A^{\top}A\theta + \langle b, \theta \rangle$. In this case, by [8], $\Gamma_L \le \max_{a,b \in A \cdot C} \|a - b\|_2^2$. When C is centrally symmetric, we have the bound $\Gamma_L \le 4\max_{\theta \in C} \|A\theta\|_2^2$. For LASSO, $A = \frac{1}{\sqrt{n}}X$. Define $\theta^* = \operatorname{argmin}_{\theta \in C} L(\theta)$. The following theorem bounds the convergence rate of the Frank-Wolfe algorithm.
Algorithm 1 Frank-Wolfe algorithm
Input: C ⊆ R^p, L : C → R, step sizes µt
1: Choose an arbitrary θ1 from C
2: for t = 1 to T − 1 do
3: Compute $\tilde{\theta}_t = \operatorname{argmin}_{\theta \in C} \langle \nabla L(\theta_t), \theta - \theta_t \rangle$
4: Set $\theta_{t+1} = \theta_t + \mu_t(\tilde{\theta}_t - \theta_t)$
5: return θT.
Theorem 2.2 ([8, 17]). If we set µt = 2/(t + 2), then $L(\theta_T) - L(\theta^*) = O(\Gamma_L/T)$. While the Frank-Wolfe algorithm does not necessarily provide faster convergence compared to gradient-descent-based methods, it has two major advantages. First, in Line 3, it reduces the problem to the minimization of a linear function.
When C is defined by a small number of vertices, e.g., when C is an ℓ1 ball, the minimization can be done by checking $\langle \nabla L(\theta_t), x \rangle$ for each vertex x of C. This can be done efficiently. Secondly, each step of Frank-Wolfe takes a convex combination of θt and $\tilde{\theta}_t$, which is on the boundary of C. Hence each intermediate solution is always inside C (the method is sometimes called projection-free), and the final outcome θT is a convex combination of up to T points on the boundary of C (or vertices of C when C is a polytope). Such an outcome may be desirable, for example when C is a polytope, as it corresponds to a sparse solution. For these reasons the Frank-Wolfe algorithm has found many applications in machine learning [23, 16, 8]. As we shall see below, these properties are also useful for obtaining low risk bounds for its private version. 2.1 Private Frank-Wolfe Algorithm We now present a private version of the Frank-Wolfe algorithm. The algorithm accesses the private data only through the loss function in step 3 of the algorithm. Thus to achieve privacy, it suffices to replace this step by a private version. To do so, we apply the exponential mechanism [21] to select an approximate optimizer. In the case when the set C is a polytope, it suffices to optimize over the vertices of C due to the following basic fact: Fact 2.3. Let C ⊆ R^p be the convex hull of a compact set S ⊆ R^p. For any vector v ∈ R^p, $\arg\min_{\theta \in C} \langle \theta, v \rangle \cap S \neq \emptyset$. Thus it suffices to run the exponential mechanism to select θt+1 from amongst the vertices of C. This leads to a differentially private algorithm with risk logarithmically dependent on |S|. When |S| is polynomial in p, it leads to an error bound with log p dependence. We can bound the error in terms of the ℓ1-Lipschitz constant, which can be much smaller than the ℓ2-Lipschitz constant. In particular, as we show in the next section, the private Frank-Wolfe algorithm is nearly optimal for the important high-dimensional sparse linear regression problem.
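To make the vertex-checking step concrete, here is a minimal non-private sketch of Algorithm 1 specialized to C being the unit ℓ1 ball, whose vertices are ±e_j; the linear subproblem in step 3 then reduces to picking the signed basis vector at the largest-magnitude gradient coordinate. The LASSO instance below is synthetic and only illustrative.

```python
import numpy as np

def frank_wolfe_l1(grad, p, T=200):
    """Non-private Frank-Wolfe on C = unit l1 ball: the linear subproblem
    argmin_{theta in C} <grad, theta> is attained at a vertex +/- e_j."""
    theta = np.zeros(p)
    for t in range(1, T):
        g = grad(theta)
        j = int(np.argmax(np.abs(g)))
        vertex = np.zeros(p)
        vertex[j] = -np.sign(g[j])       # vertex minimizing <g, .> over C
        mu = 2.0 / (t + 2)
        theta = (1 - mu) * theta + mu * vertex
    return theta

# Synthetic LASSO instance: L(theta) = (1/n) ||X theta - y||_2^2.
rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.uniform(-1, 1, size=(n, p))
theta_star = np.zeros(p)
theta_star[:2] = [0.6, -0.4]            # sparse, ||theta_star||_1 = 1
y = X @ theta_star
grad = lambda th: (2.0 / n) * X.T @ (X @ th - y)

theta_hat = frank_wolfe_l1(grad, p)
print(np.mean((X @ theta_hat - y) ** 2))   # small residual loss
```

Note that every iterate is a convex combination of at most T vertices, so the output is both feasible (ℓ1 norm at most 1) and sparse, matching the projection-free property discussed above.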
Algorithm 2 ANoise−FW(polytope): Differentially Private Frank-Wolfe Algorithm (Polytope Case)
Input: Data set D = {d1, · · · , dn}, loss function $L(\theta; D) = \frac{1}{n}\sum_{i=1}^{n} L(\theta; d_i)$ (with ℓ1-Lipschitz constant L1 for L), privacy parameters (ϵ, δ), convex set C = conv(S) with ∥C∥1 denoting $\max_{s \in S} \|s\|_1$.
1: Choose an arbitrary θ1 from C
2: for t = 1 to T − 1 do
3: $\forall s \in S$, $\alpha_s \leftarrow \langle s, \nabla L(\theta_t; D) \rangle + \mathrm{Lap}\!\left(\frac{L_1\|C\|_1\sqrt{8T\log(1/\delta)}}{n\epsilon}\right)$, where $\mathrm{Lap}(\lambda) \sim \frac{1}{2\lambda}e^{-|x|/\lambda}$.
4: $\tilde{\theta}_t \leftarrow \arg\min_{s \in S} \alpha_s$.
5: $\theta_{t+1} \leftarrow (1 - \mu_t)\theta_t + \mu_t \tilde{\theta}_t$, where $\mu_t = \frac{2}{t+2}$.
6: Output $\theta^{priv} = \theta_T$.
Theorem 2.4 (Privacy guarantee). Algorithm 2 is (ϵ, δ)-differentially private. Since each data item is assumed to have bounded ℓ∞ norm, for two neighboring databases D and D′ and any θ ∈ C, s ∈ S, we have that $|\langle s, \nabla L(\theta; D) \rangle - \langle s, \nabla L(\theta; D') \rangle| = O(L_1\|C\|_1/n)$. The proof of privacy then follows from a straightforward application of the exponential mechanism [21] (or its noisy-maximum version [3, Theorem 5]) and the strong composition theorem [13]. In Theorem 2.5 we prove the utility guarantee for the private Frank-Wolfe algorithm in the convex polytope case. Define $\Gamma_L = \max_{D \in \mathcal{D}} C_L$ over all possible data sets in $\mathcal{D}$. Theorem 2.5 (Utility guarantee). Let L1, S and ∥C∥1 be defined as in Algorithm 2 (Algorithm ANoise−FW(polytope)). Let ΓL be an upper bound on the curvature constant (defined in Definition 2.1) for the loss function L(·; d) that holds for all d ∈ D. In Algorithm ANoise−FW(polytope), if we set $T = \frac{\Gamma_L^{2/3}(n\epsilon)^{2/3}}{(L_1\|C\|_1)^{2/3}}$, then $$\mathbb{E}\left[L(\theta^{priv}; D)\right] - \min_{\theta \in C} L(\theta; D) = O\!\left(\frac{\Gamma_L^{1/3}(L_1\|C\|_1)^{2/3}\log(n|S|)\sqrt{\log(1/\delta)}}{(n\epsilon)^{2/3}}\right).$$ Here the expectation is over the randomness of the algorithm. The proof of utility uses known bounds on noisy Frank-Wolfe [17], along with error bounds for the exponential mechanism. The details can be found in the full version.
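Here is a minimal sketch of Algorithm 2 for C the unit ℓ1 ball (so S = {±e_j}). For simplicity the Laplace scale is passed in as a parameter lam rather than computed from (ϵ, δ) via the calibrated value L1∥C∥1√(8T log(1/δ))/(nϵ), so the snippet shows the mechanics rather than a specific privacy budget; the data is synthetic.

```python
import numpy as np

def private_frank_wolfe_l1(grad, p, T, lam, rng):
    """Sketch of A_{Noise-FW}(polytope) on the unit l1 ball: score every
    vertex s in {+/- e_j} by <s, grad L>, add Laplace(lam) noise, and take
    the noisy argmin (steps 3-5 of Algorithm 2)."""
    theta = np.zeros(p)
    for t in range(1, T):
        g = grad(theta)
        alphas = np.concatenate([g, -g])           # <+e_j, g> and <-e_j, g>
        alphas += rng.laplace(scale=lam, size=2 * p)
        k = int(np.argmin(alphas))
        vertex = np.zeros(p)
        vertex[k % p] = 1.0 if k < p else -1.0
        mu = 2.0 / (t + 2)
        theta = (1 - mu) * theta + mu * vertex
    return theta

rng = np.random.default_rng(2)
n, p = 500, 20
X = rng.uniform(-1, 1, size=(n, p))
theta_star = np.zeros(p)
theta_star[:2] = [0.6, -0.4]
y = X @ theta_star
grad = lambda th: (2.0 / n) * X.T @ (X @ th - y)

theta_priv = private_frank_wolfe_l1(grad, p, T=300, lam=0.002, rng=rng)
print(np.mean((X @ theta_priv - y) ** 2))
```

Because the only data-dependent quantity released at each step is the noisy argmin over 2p vertex scores, the per-step sensitivity is the O(L1∥C∥1/n) bound above, which is what makes the report-noisy-max/exponential-mechanism argument go through.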
General C. While a variant of this mechanism can be applied to the case when C is not a polytope, its error would depend on the size of a cover of the boundary of C, which can be exponential in p, leading to an error bound with polynomial dependence on p. In the full version, we analyze another variant of private Frank-Wolfe that uses objective perturbation to ensure privacy. This variant is well-suited to a general convex set C, and the following result, proven in the Appendix, bounds its excess risk in terms of the Gaussian width of C. For this mechanism, we only need C to be bounded in ℓ2 diameter, but our error now depends on the ℓ2-Lipschitz constant of the loss functions. Theorem 2.6. Suppose that each loss function is L2-Lipschitz with respect to the ℓ2 norm, and that C has ℓ2 diameter at most ∥C∥2. Let GC be the Gaussian width of the convex set C ⊆ R^p, and let ΓL be the curvature constant (defined in Definition 2.1) for the loss function ℓ(θ; d) for all θ ∈ C and d ∈ D. Then there is an (ϵ, δ)-differentially private algorithm ANoise−FW with excess empirical risk $$\mathbb{E}\left[L(\theta^{priv}; D)\right] - \min_{\theta \in C} L(\theta; D) = O\!\left(\frac{\Gamma_L^{1/3}(L_2 G_C)^{2/3}\log^2(n/\delta)}{(n\epsilon)^{2/3}}\right).$$ Here the expectation is over the randomness of the algorithm. 2.2 Private LASSO algorithm We now apply the private Frank-Wolfe algorithm ANoise−FW(polytope) to the important case of the sparse linear regression (or LASSO) problem. Problem definition: Given a data set D = {(x1, y1), · · · , (xn, yn)} of n samples from the domain D = {(x, y) : x ∈ R^p, y ∈ [−1, 1], ∥x∥∞ ≤ 1}, and the convex set C equal to the unit ℓ1 ball in R^p, define the mean squared loss $$L(\theta; D) = \frac{1}{n}\sum_{i \in [n]} (\langle x_i, \theta \rangle - y_i)^2. \quad (2)$$ The objective is to compute θpriv ∈ C minimizing L(θ; D) while preserving privacy with respect to any change of an individual (xi, yi) pair. The non-private setting of the above problem is a variant of the least squares problem with ℓ1 regularization, initiated by the work on LASSO [27, 28] and intensively studied in recent years.
Since the ℓ1 ball is the convex hull of 2p vertices, we can apply the private Frank-Wolfe algorithm ANoise−FW(polytope). For the above setting, it is easy to check that the ℓ1-Lipschitz constant is bounded by O(1). Further, by applying the quadratic-loss bound from Remark 1, we have that $C_L \le 4\max_{\theta \in C}\frac{1}{n}\|X\theta\|_2^2 = O(1)$, since C is the unit ℓ1 ball and |xij| ≤ 1. Hence ΓL = O(1). Now applying Theorem 2.5, we have Corollary 2.7. Let D = {(x1, y1), · · · , (xn, yn)} consist of n samples from the domain D = {(x, y) : ∥x∥∞ ≤ 1, |y| ≤ 1}, and let the convex set C equal the ℓ1-ball. The output θpriv of Algorithm ANoise−FW(polytope) ensures the following: $$\mathbb{E}[L(\theta^{priv}; D) - \min_{\theta \in C} L(\theta; D)] = O\!\left(\frac{\log(np/\delta)}{(n\epsilon)^{2/3}}\right).$$ Remark 2. Compared to the previous work [20, 24], the above upper bound makes no assumption of restricted strong convexity or mutual incoherence, which might be too strong for realistic settings. Also, our results significantly improve the bounds of [19], from Õ(1/n^{1/3}) to Õ(1/n^{2/3}), which considered the case of the set C being the probability simplex and the loss being a generalized linear model. 3 Optimality of Private LASSO In the following, we show that, to ensure privacy, the error bound in Corollary 2.7 is nearly optimal in terms of the dominant factor of 1/n^{2/3}. Theorem 3.1 (Optimality of private Frank-Wolfe). Let C be the ℓ1-ball and L be the mean squared loss in equation (2). For every sufficiently large n, for every (ϵ, δ)-differentially private algorithm A, with ϵ ≤ 0.1 and δ = o(1/n²), there exists a data set D = {(x1, y1), · · · , (xn, yn)} of n samples from the domain D = {(x, y) : ∥x∥∞ ≤ 1, |y| ≤ 1} such that $$\mathbb{E}[L(A(D); D) - \min_{\theta \in C} L(\theta; D)] = \tilde{\Omega}\!\left(\frac{1}{n^{2/3}}\right).$$ We prove the lower bound by following the fingerprinting codes argument of [4] for lower-bounding the error of (ϵ, δ)-differentially private algorithms.
Similar to [4] and [14], we start with the following lemma, which is implicit in [4]. The matrix X in Theorem 3.2 is the padded Tardos code used in [14, Section 5]. For any matrix X, denote by X(i) the matrix obtained by removing the i-th row of X. Call a column of a matrix a consensus column if the entries in the column are either all 1 or all −1. The sign of a consensus column is simply the consensus value of the column. Write w = m/log m and p = 1000m². The following theorem follows immediately from the proof of Corollary 16 in [14]. Theorem 3.2 (Corollary 16 from [14], restated). Let m be a sufficiently large positive integer. There exists a matrix $X \in \{-1, 1\}^{(w+1) \times p}$ with the following property. For each i ∈ [1, w + 1], there are at least 0.999p consensus columns Wi in each X(i). In addition, for an algorithm A on input matrix X(i) where i ∈ [1, w + 1], if with probability at least 2/3, A(X(i)) produces a p-dimensional sign vector which agrees with at least $\frac{3}{4}p$ columns in Wi, then A is not (ϵ, δ)-differentially private with respect to a single row change (to some other row in X). Write τ = 0.001. Let k = τwp. We first form a k × p matrix Y where the column vectors of Y are mutually orthogonal {1, −1} vectors. This is possible as k ≫ p. Now we construct w + 1 databases Di for 1 ≤ i ≤ w + 1 as follows. All the databases contain the common set of examples (zj, 0) (i.e., vector zj with label 0) for 1 ≤ j ≤ k, where zj = (Yj1, . . . , Yjp) is the j-th row vector of Y. In addition, each Di contains the w examples (xj, 1) for xj = (Xj1, . . . , Xjp) for j ̸= i. Then L(θ; Di) is defined as follows (for ease of notation in this proof, we work with the un-normalized loss; this does not affect the generality of the arguments in any way): $$L(\theta; D_i) = \sum_{j \neq i} (x_j \cdot \theta - 1)^2 + \sum_{j=1}^{k} (z_j \cdot \theta)^2 = \sum_{j \neq i} (x_j \cdot \theta - 1)^2 + k\|\theta\|_2^2.$$ The last equality holds because the columns of Y are mutually orthogonal {−1, 1} vectors.
For each D_i, consider θ* ∈ {−1/p, 1/p}^p whose coordinate signs match the signs of the consensus columns of X(i). Plugging θ* into L(·; D_i), we have

L(θ*; D_i) ≤ Σ_{j≠i} (2τ)² + k/p   [since the number of consensus columns is at least (1 − τ)p, each row satisfies x_j · θ* ≥ 1 − 2τ, and k∥θ*∥₂² = k/p = τw]
           = (τ + 4τ²)w .   (3)

We now prove the crucial lemma, which states that if θ satisfies ∥θ∥₁ ≤ 1 and L(θ; D_i) is small, then θ has to agree with the sign of most of the consensus columns of X(i).

Lemma 3.3. Suppose that ∥θ∥₁ ≤ 1 and L(θ; D_i) < 1.1τw. For j ∈ W_i, denote by s_j the sign of consensus column j. Then |{j ∈ W_i : sign(θ_j) = s_j}| ≥ (3/4)p.

Proof. For any S ⊆ {1, ..., p}, denote by θ|_S the projection of θ onto the coordinate subset S. Consider the three subsets

S₁ = {j ∈ W_i : sign(θ_j) = s_j} ,  S₂ = {j ∈ W_i : sign(θ_j) ≠ s_j} ,  S₃ = {1, ..., p} \ W_i .

The proof is by contradiction: assume |S₁| < (3/4)p. Denote θ₁ = θ|_{S₁}, θ₂ = θ|_{S₂}, θ₃ = θ|_{S₃}. We bound ∥θ₁∥₁ and ∥θ₃∥₁ using the inequality ∥x∥₂ ≥ ∥x∥₁/√d, valid for any d-dimensional vector x. First,

∥θ₃∥₂² ≥ ∥θ₃∥₁²/|S₃| ≥ ∥θ₃∥₁²/(τp) .

Hence k∥θ₃∥₂² ≥ w∥θ₃∥₁² (recall k = τwp). But k∥θ₃∥₂² ≤ k∥θ∥₂² ≤ 1.1τw, so ∥θ₃∥₁ ≤ √(1.1τ) ≤ 0.04. Similarly, by the assumption |S₁| < (3/4)p,

∥θ₁∥₂² ≥ ∥θ₁∥₁²/|S₁| ≥ 4∥θ₁∥₁²/(3p) .

Again using k∥θ∥₂² < 1.1τw, we get ∥θ₁∥₁ ≤ √(1.1 · 3/4) ≤ 0.91. Now, every row x_j with j ≠ i agrees with the consensus values on W_i, so

x_j · θ − 1 = ∥θ₁∥₁ − ∥θ₂∥₁ + β_j − 1 ,  where |β_j| ≤ ∥θ₃∥₁ ≤ 0.04 .

Since ∥θ₁∥₁ + ∥θ₂∥₁ + ∥θ₃∥₁ ≤ 1, we have

|x_j · θ − 1| ≥ 1 − ∥θ₁∥₁ − |β_j| ≥ 1 − 0.91 − 0.04 = 0.05 .

Hence L(θ; D_i) ≥ (0.05)²w ≥ 1.1τw, which contradicts the assumption. Therefore |S₁| ≥ (3/4)p.

With Theorem 3.2 and Lemma 3.3 in hand, we can now prove Theorem 3.1.

Proof. Suppose, for the sake of contradiction, that A is (ε, δ)-differentially private and that for every dataset D_i constructed above,

E[L(A(D_i); D_i) − min_θ L(θ; D_i)] ≤ cw

for a sufficiently small constant c. By Markov's inequality, with probability at least 2/3, L(A(D_i); D_i) − min_θ L(θ; D_i) ≤ 3cw. By (3), min_θ L(θ; D_i) ≤ (τ + 4τ²)w.
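The lemma's only analytic tool is the norm comparison ∥x∥₂ ≥ ∥x∥₁/√d, plus a few numeric constants; both are easy to sanity-check. The script below is our own illustration:

```python
import math

def l1(x): return sum(abs(v) for v in x)
def l2(x): return math.sqrt(sum(v * v for v in x))

# ||x||_2 >= ||x||_1 / sqrt(d), with equality when all |x_j| are equal.
for x in ([0.3, -0.1, 0.2, 0.0], [1.0, 1.0, 1.0, 1.0], [0.5, -0.5]):
    d = len(x)
    assert l2(x) >= l1(x) / math.sqrt(d) - 1e-12

# The numeric constants used in Lemma 3.3, with tau = 0.001:
tau = 0.001
assert math.sqrt(1.1 * tau) <= 0.04       # bound on ||theta_3||_1
assert math.sqrt(1.1 * 3 / 4) <= 0.91     # bound on ||theta_1||_1
assert (0.05) ** 2 >= 1.1 * tau           # the contradiction step
```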
Hence, choosing the constant c small enough, with probability 2/3,

L(A(D_i); D_i) < (τ + 4τ² + 3c)w ≤ 1.1τw .   (4)

By Lemma 3.3, (4) implies that A(D_i) agrees with at least (3/4)p consensus columns of X(i). By Theorem 3.2, this violates the privacy of A. Hence there exists some i such that

E[L(A(D_i); D_i) − min_θ L(θ; D_i)] > cw .

Recall that w = m/log m and n = w + τwp = O(m³/log m). Hence

E[L(A(D_i); D_i) − min_θ L(θ; D_i)] = Ω( n^{1/3} / log^{2/3} n ) .

The proof is completed by converting this bound on the un-normalized loss to the normalized version Ω(1/(n log n)^{2/3}).

References

[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[2] R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization, revisited. In FOCS, 2014.
[3] R. Bhaskar, S. Laxman, A. Smith, and A. Thakurta. Discovering frequent patterns in sensitive data. In KDD, 2010.
[4] M. Bun, J. Ullman, and S. Vadhan. Fingerprinting codes and the price of approximate differential privacy. In STOC, 2014.
[5] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[6] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In NIPS, 2008.
[7] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. JMLR, 12:1069–1109, 2011.
[8] K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms, 2010.
[9] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In FOCS, 2013.
[10] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[11] C. Dwork, A. Nikolov, and K. Talwar.
Efficient algorithms for privately releasing marginals via convex relaxations. arXiv preprint arXiv:1308.1385, 2013.
[12] C. Dwork and A. Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. NOW Publishers, 2014.
[13] C. Dwork, G. N. Rothblum, and S. P. Vadhan. Boosting and differential privacy. In FOCS, 2010.
[14] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze Gauss: Optimal bounds for privacy-preserving principal component analysis. In STOC, 2014.
[15] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[16] E. Hazan and S. Kale. Projection-free online learning. In ICML, 2012.
[17] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
[18] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, pages 24.1–24.34, 2012.
[19] P. Jain and A. Thakurta. (Near) dimension independent risk bounds for differentially private learning. In ICML, 2014.
[20] D. Kifer, A. Smith, and A. Thakurta. Private convex empirical risk minimization and high-dimensional regression. In COLT, pages 25.1–25.40, 2012.
[21] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103. IEEE, 2007.
[22] A. Nikolov, K. Talwar, and L. Zhang. The geometry of differential privacy: The sparse and approximate cases. In STOC, 2013.
[23] S. Shalev-Shwartz, N. Srebro, and T. Zhang. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 2010.
[24] A. Smith and A. Thakurta. Differentially private feature selection via stability arguments, and the robustness of the Lasso. In COLT, 2013.
[25] A. Smith and A. Thakurta. Follow the perturbed leader is differentially private with optimal regret guarantees. Manuscript in preparation, 2013.
[26] A. Smith and A. Thakurta.
Nearly optimal algorithms for private online learning in full-information and bandit settings. In NIPS, 2013.
[27] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 1996.
[28] R. Tibshirani et al. The Lasso method for variable selection in the Cox model. Statistics in Medicine, 16(4):385–395, 1997.
[29] J. Ullman. Private multiplicative weights beyond linear queries. CoRR, abs/1407.1571, 2014.
Calibrated Structured Prediction

Volodymyr Kuleshov
Department of Computer Science
Stanford University
Stanford, CA 94305

Percy Liang
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

In user-facing applications, displaying calibrated confidence measures—probabilities that correspond to true frequency—can be as important as obtaining high accuracy. We are interested in calibration for structured prediction problems such as speech recognition, optical character recognition, and medical diagnosis. Structured prediction presents new challenges for calibration: the output space is large, and users may issue many types of probability queries (e.g., marginals) on the structured output. We extend the notion of calibration so as to handle various subtleties pertaining to the structured setting, and then provide a simple recalibration method that trains a binary classifier to predict probabilities of interest. We explore a range of features appropriate for structured recalibration, and demonstrate their efficacy on three real-world datasets.

1 Introduction

Applications such as speech recognition [1], medical diagnosis [2], optical character recognition [3], machine translation [4], and scene labeling [5] have two properties: (i) they are instances of structured prediction, where the predicted output is a complex structured object; and (ii) they are user-facing applications for which it is important to provide accurate estimates of confidence. This paper explores confidence estimation for structured prediction.

Central to this paper is the idea of probability calibration [6, 7, 8, 9], which is prominent in the meteorology [10] and econometrics [9] literature. Calibration requires that the probability a system assigns to an event reflect the true frequency of that event: of all the times the system says it will rain with probability 0.3, it should actually rain about 30% of the time.
In the context of structured prediction, we do not have a single event or a fixed set of events, but rather a multitude of events that depend on the input, corresponding to different conditional and marginal probabilities that one could ask of a structured prediction model. We must therefore extend the definition of calibration in a way that deals with the complexities that arise in the structured setting.

We also consider the practical question of building a system that outputs calibrated probabilities. We introduce a new framework for calibration in structured prediction, which involves defining probabilities of interest, and then training binary classifiers to predict these probabilities based on a set of features. Our framework generalizes current methods for binary and multiclass classification [11, 12, 13], which predict class probabilities based on a single feature, the uncalibrated prediction score. In structured prediction, the space of interesting probabilities and useful features is considerably richer. This motivates us to introduce a new concept of events as well as a range of new features—margin, pseudomargin—which have varying computational demands. We perform a thorough study of which features yield good calibration, and find that domain-general features are quite good for calibrating MAP and marginal estimates over three tasks—object recognition, optical character recognition, and scene understanding. Interestingly, features based on MAP inference alone can achieve good calibration on marginal probabilities (which can be more difficult to compute).

Figure 1: In the context of an OCR system, our framework augments the structured predictor with calibrated confidence measures for a set of events, e.g., whether the first letter is “l”.
[Figure 1 contents — (a) Structured prediction model: a chain y1 y2 y3 y4 over input x, decoded as “l” “a” “n” “d”. (b) Forecaster output — event: probability; [y = “land”]: 0.8; [y1 = “l”]: 0.8; [y2 = “a”]: 0.9; [y3 = “n”]: 0.9; [y4 = “d”]: 0.8.]

2 Background

2.1 Structured Prediction

In structured prediction, we want to assign a structured label y = (y1, ..., yL) ∈ Y to an input x ∈ X. For example, in optical character recognition (OCR), x is a sequence of images and y is the sequence of associated characters (see Figure 1(a)); note that the number of possible outputs y for a given x may be exponentially large.

A common approach to structured prediction is conditional random fields (CRFs), where we posit a probabilistic model pθ(y | x). We train pθ by optimizing a maximum-likelihood or a max-margin objective over a training set {(x⁽ⁱ⁾, y⁽ⁱ⁾)}_{i=1}^{n}, assumed to be drawn i.i.d. from an unknown data-generating distribution P(x, y). The promise of a probabilistic model is that in addition to computing the most likely output ŷ = argmax_y pθ(y | x), we can also get its probability pθ(y = ŷ | x) ∈ [0, 1], or even marginal probabilities pθ(y1 = ŷ1 | x) ∈ [0, 1].

2.2 Probabilistic Forecasting

Probabilities from a CRF pθ are just numbers that sum to 1. In order for these probabilities to be useful as confidence measures, we would ideally like them to be calibrated. Calibration intuitively means that whenever a forecaster assigns 0.7 probability to an event, the event should actually hold about 70% of the time. In the case of binary classification (Y = {0, 1}), we say that a forecaster F : X → [0, 1] is perfectly calibrated if for all possible probabilities p ∈ [0, 1]:

P[y = 1 | F(x) = p] = p.   (1)

Calibration by itself does not guarantee a useful confidence measure. A forecaster that always outputs the marginal class probability F(x) = P(y = 1) is calibrated but useless for accurate prediction. Good forecasts must also be sharp, i.e., their probabilities should be close to 0 or 1.

Calibration and sharpness.
Given a forecaster F : X → [0, 1], define T(x) = E[y | F(x)] to be the true probability of y = 1 given that x received the forecast F(x). We can use T to decompose the ℓ2 prediction loss as follows:

E[(y − F(x))²] = E[(y − T(x))²] + E[(T(x) − F(x))²]   (2)
             = Var[y] − Var[T(x)] + E[(T(x) − F(x))²] ,   (3)

where the three terms in (3) are, respectively, the uncertainty, the (subtracted) sharpness, and the calibration error. The first equality follows because y − T(x) has expectation 0 conditioned on F(x), and the second follows from the variance decomposition of y onto F(x).

The three terms in (3) formalize our intuitions about calibration and sharpness [7]. The calibration term measures how close the predicted probability is to the true probability over each forecast level set, and is a natural generalization of perfect calibration (1) (which corresponds to zero calibration error). The sharpness term measures how much variation there is in the true probability across forecasts. It does not depend on the numerical value of the forecaster F(x), but only on the induced grouping of points; it is maximized by making F(x) closer to 0 and 1. Uncertainty does not depend on the forecaster and can be mostly ignored; note that it is always greater than sharpness and thus ensures that the loss stays positive.

                                 input x:   0      1      2     calib.  sharp.
true P(y | x)                               0      1      0.5   0       0.167
calibrated, unsharp pθ(y | x)               0.5    0.5    0.5   0       0
uncalibrated, sharp pθ(y | x)               0.2    0.8    0.4   0.03    0.167
balanced pθ(y | x)                          0      0.75   0.75  0       0.125

Examples. To illustrate the difference between calibration error (lower is better) and sharpness (higher is better), consider the following binary classification example (see the table above): we have a uniform distribution (P(x) = 1/3) over inputs X = {0, 1, 2}. For x ∈ {0, 1}, y = x with probability 1, and for x = 2, y is either 0 or 1, each with probability 1/2. Setting pθ(y | x) ≡ 0.5 would achieve perfect calibration (0) but no sharpness (0). We can get excellent sharpness (0.167) but suffer in calibration (0.03) by predicting probabilities 0.2, 0.8, 0.4.
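The calibration and sharpness values in the table follow mechanically from the decomposition (3); a small script (our own illustration) can verify them for each forecaster in the running example:

```python
def decompose(px_probs, forecast):
    """Exact calibration/sharpness decomposition for the toy example:
    x is uniform on {0, 1, 2}; px_probs[i] is the true P(y=1 | x=i).
    Returns (l2_loss, uncertainty, sharpness, calibration_error)."""
    n = len(px_probs)
    p_y1 = sum(px_probs) / n                 # marginal P(y = 1)
    uncertainty = p_y1 * (1 - p_y1)          # Var[y]
    # T(x) = E[y | F(x)]: average true probability within each forecast level set
    levels = {}
    for p_true, f in zip(px_probs, forecast):
        levels.setdefault(f, []).append(p_true)
    T = [sum(levels[f]) / len(levels[f]) for f in forecast]
    mean_T = sum(T) / n
    sharpness = sum((t - mean_T) ** 2 for t in T) / n           # Var[T(x)]
    calibration = sum((t - f) ** 2 for t, f in zip(T, forecast)) / n
    loss = uncertainty - sharpness + calibration                # eq. (3)
    return loss, uncertainty, sharpness, calibration

true_p = [0.0, 1.0, 0.5]
for f, (cal, sha) in [([0.5, 0.5, 0.5], (0.0, 0.0)),
                      ([0.2, 0.8, 0.4], (0.03, 1 / 6)),
                      ([0.0, 0.75, 0.75], (0.0, 0.125))]:
    loss, unc, sharp, calib = decompose(true_p, f)
    assert abs(calib - cal) < 1e-9 and abs(sharp - sha) < 1e-9
```

Each assertion reproduces a (calib., sharp.) row of the table.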
We can trade off some sharpness (0.125) for perfect calibration (0) by predicting 0 for x = 0 and 0.75 for x ∈ {1, 2}.

Discretized probabilities. We have assumed so far that the forecaster may return arbitrary probabilities in [0, 1], in which case we might need an infinite amount of data to estimate T(x) = E[y | F(x)] accurately for each value of F(x). In order to estimate calibration and sharpness from finite data, we use a discretized version of both quantities. Let B be a partitioning of the interval [0, 1]; for example, B = {[0, 0.1), [0.1, 0.2), ...}. Let B : [0, 1] → B map a probability p to the interval B(p) containing p, e.g., B(0.15) = [0.1, 0.2). We then simply redefine T(x) to be the true probability of y = 1 given that F(x) lies in a given bucket: T(x) = E[y | B(F(x))]. It is not hard to see that discretized calibration estimates form an upper bound on the calibration error (3) [14].

3 Calibration in the Context of Structured Prediction

We have so far presented calibration in the context of binary classification. In this section, we extend these definitions to structured prediction. Our ultimate motivation is to construct forecasters that augment pre-trained structured models pθ(y | x) with confidence estimates. Unlike in the multiclass setting [12], we cannot learn a forecaster F_y : X → [0, 1] that targets P(y | x) for each y ∈ Y, because the cardinality of Y is too large; in fact, the user will probably not be interested in every y.

Events of interest. Instead, we assume that for a given x and associated prediction y, the user is interested in a set I(x) of events concerning x and y. An event E ∈ I(x) is a subset E ⊆ Y; we would like to determine the probability P(y ∈ E | x) for each E ∈ I(x). Here are two useful types of events that will serve as running examples:

1. {MAP(x)}, which encodes whether MAP(x) = argmax_y pθ(y | x) is correct.
2. {y : y_j = MAP(x)_j}, which encodes whether the label at position j in MAP(x) is correct.
In the OCR example (Figure 1), suppose we predict MAP(x) = “land”. Define the events of interest to be the MAP and the marginals: I(x) = {{MAP(x)}, {y : y1 = MAP(x)1}, ..., {y : yL = MAP(x)L}}. Then we have I(x) = {{“land”}, {y : y1 = “l”}, {y : y2 = “a”}, {y : y3 = “n”}, {y : y4 = “d”}}. Note that the events of interest I(x) depend on x through MAP(x).

Event pooling. We now define calibration in analogy with (1). We will construct a forecaster F(x, E) that tries to predict P(y ∈ E | x). As we remarked earlier, we cannot make a statement that holds uniformly over all events E; we can only make a guarantee in expectation. Thus, let E be drawn uniformly from I(x), so that P is extended to a joint distribution over (x, y, E). We say that a forecaster F : X × 2^Y → [0, 1] is perfectly calibrated if

P(y ∈ E | F(x, E) = p) = p.   (4)

In other words, averaged over all x, y, and events of interest E ∈ I(x), whenever the forecaster outputs probability p, the event E actually holds with probability p. Note that this definition corresponds to perfect binary calibration (1) for the transformed pair of variables y′ = I[y ∈ E], x′ = (x, E).

As an example, if I(x) = {{MAP(x)}}, then (4) says that of all the MAP predictions with confidence p, a fraction p will be correct. If I(x) = {{y : y_j = MAP(x)_j}}_{j=1}^{L}, then (4) states that of all the marginals (pooled together across all samples x and all positions j) with confidence p, a fraction p will be correct.

Algorithm 1: Recalibration procedure for calibrated structured prediction.
  Input: Features φ(x, E) from trained model pθ, event set I(x), recalibration set S = {(x_i, y_i)}_{i=1}^{n}.
  Output: Forecaster F(x, E).
  1. Construct the events dataset: S_binary = {(φ(x, E), I[y ∈ E]) : (x, y) ∈ S, E ∈ I(x)}.
  2. Train the forecaster F (e.g., k-NN or decision trees) on S_binary.

The second example hints at an important subtlety inherent to having multiple events in structured prediction.
The confidence scores for marginals are only calibrated when averaged over all positions; if a user only looked at the marginals for the first position, she might be sorely disappointed. As an extreme example, suppose y = (y1, y2), where y1 is 0 or 1 with probability 1/2 each and y2 ≡ 1. Then a forecaster that outputs a confidence of 0.75 for both events {y : y1 = 1} and {y : y2 = 1} is perfectly calibrated under (4), yet neither event is calibrated in isolation (P(y1 = 1 | x) = 1/2 and P(y2 = 1 | x) = 1).

Finally, perfect calibration can be relaxed: following (3), we may define the calibration error to be E[(T(x, E) − F(x, E))²], where T(x, E) := P(y ∈ E | F(x, E)).

4 Constructing Calibrated Forecasters

Having discussed the aspects of calibration specific to structured prediction, let us now turn to the problem of constructing calibrated (and sharp) forecasters from finite data.

Recalibration framework. We propose a framework that generalizes existing recalibration strategies to structured prediction models pθ. First, the user specifies a set of events of interest I(x) as well as features φ(x, E), which will in general depend on the trained model pθ. We then train a forecaster F to predict whether the event E holds (i.e., I[y ∈ E]) given features φ(x, E). We train F by minimizing the empirical ℓ2 loss over a recalibration set S (disjoint from the training examples):

min_F Σ_{(x,y)∈S} Σ_{E∈I(x)} (F(x, E) − I[y ∈ E])².

Algorithm 1 outlines our procedure. As an example, consider again the OCR setting in Figure 1. The margin feature φ(x, E) = log pθ(MAP⁽¹⁾(x)) − log pθ(MAP⁽²⁾(x)), where MAP⁽¹⁾(x) and MAP⁽²⁾(x) are the first- and second-highest scoring labels for x according to pθ, will typically correlate with the event that the MAP prediction is correct. We can perform isotonic regression using this feature on the recalibration set S to produce well-calibrated probabilities.
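Algorithm 1 is straightforward to prototype. The sketch below is our own toy instantiation, not the authors' code: it uses a synthetic one-dimensional feature standing in for a CRF margin, events represented as predicates, and a k-NN forecaster.

```python
import math
import random

def recalibrate_knn(recal_set, phi, events, k=100):
    """Algorithm 1 with a k-NN forecaster on a scalar feature phi(x, E).
    recal_set: list of (x, y) pairs; events(x): list of events, each a
    predicate E(y) -> bool.  Returns a forecaster F(x, E) -> [0, 1]."""
    # Step 1: the events dataset S_binary = {(phi(x, E), I[y in E])}
    s_binary = [(phi(x, E), 1.0 if E(y) else 0.0)
                for x, y in recal_set for E in events(x)]
    # Step 2: k-NN forecaster = mean outcome among the k nearest features
    def forecaster(x, E):
        f = phi(x, E)
        nearest = sorted(s_binary, key=lambda fe: abs(fe[0] - f))[:k]
        return sum(lbl for _, lbl in nearest) / len(nearest)
    return forecaster

# Toy recalibration set: x is a score in [-2, 2]; the event "y = 1" is
# more likely for larger x (logistic relationship, unknown to the forecaster).
rng = random.Random(0)
data = []
for _ in range(2000):
    x = rng.uniform(-2, 2)
    y = 1 if rng.random() < 1 / (1 + math.exp(-3 * x)) else 0
    data.append((x, y))

F = recalibrate_knn(data,
                    phi=lambda x, E: x,               # raw score as the feature
                    events=lambda x: [lambda y: y == 1],
                    k=100)
```

For instance, F applied at a large positive score returns a probability near 1, and near 0.5 at score 0, even though the forecaster never saw the logistic model.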
In the limit of infinite data, Algorithm 1 minimizes the expected loss E[(F(x, E) −I[y ∈E])2], where the expectation is over (x, y, E). By (3), the calibration error E[(T(x, E) −F(x, E))2] will also be small. If there are not too many features φ, we can drive the ℓ2 loss close to zero with a nonparametric method such as k-NN. This is also why isotonic regression is sensible for binary recalibration: we first project the data into a highly informative one-dimensional feature space; then we predict labels from that space to obtain small ℓ2 loss. Note also that standard multiclass recalibration is a special case of this framework, where we use the raw uncalibrated score from pθ as a single feature. In the structured setting, one must invest careful thought in the choice of classifier and features; we discuss these choices below. Features. Calibration is possible even with a single constant feature (e.g. φ(x, E) ≡1), but sharpness depends strongly on the features’ quality. If φ collapses points of opposite labels, no forecaster will be able to separate them and be sharp. While we want informative features, we can only afford to have a few, since our recalibration set is typically small. Compared to calibration for binary classification, our choice of features must also be informed by their computational requirements: the most informative features might require performing full inference in an intractable model. It is therefore useful to think of features as belonging to one of three types, depending on whether they are derived from unstructured classifiers (e.g. an SVM trained individually on each label), MAP inference, or marginal inference. In Section 5, we will show that marginal inference produces the sharpest features, but clever MAP-based features can do almost as well. In Table 1, we propose several features that follow our guiding principles and that illustrate the computational tradeoffs inherent to structured prediction. 
Table 1: Features for MAP recalibration (I(x) = {{y_MAP(x)}}) and marginal recalibration (I(x) = {{y : y_j = y_MAP(x)_j}}_{j=1}^{L}). We consider three types of features, requiring either unstructured, MAP, or marginal inference.

  MAP recalibration on y:
    none  — φ^no_1 (SVM margin): min_j mrg_{y_j}[s^SVM_j(y_j)]
    MAP   — φ^mp_1 (Label length): |y_MAP|
          — φ^mp_2 (Admissibility): I[y_MAP ∈ G(x)]
          — φ^mp_3 (Margin): mrg_y[pθ(y | x)]
    Marg. — φ^mg_1 (Margin): min_j mrg_{y_j}[pθ(y_j | x)]

  Marginal recalibration on y_j:
    none  — φ^no_2 (SVM margin): mrg_{y_j}[s^SVM_j(y_j)]
    MAP   — φ^mp_4 (Label freq.): % of positions j′ labeled y_MAP_j
          — φ^mp_5 (Neighbors): % of neighbors j′ labeled y_MAP_j
          — φ^mp_6 (Label type): I[y_MAP_j ∈ L(x)]
          — φ^mp_7 (Pseudomargin): mrg_{y_j}[pθ(y_j | y_MAP_{−j}, x)]
    Marg. — φ^mg_2 (Margin): mrg_{y_j}[pθ(y_j | x)]
          — φ^mg_3 (Concordance): I[y_MG_j = y_MAP_j]

For a generic function f, define mrg_a f(a) := f(a⁽¹⁾) − f(a⁽²⁾), where a⁽¹⁾ and a⁽²⁾ are the top two inputs to f, ordered by f(a). Let y_MG_j := argmax_{y_j} pθ(y_j | x), and let s^SVM_j(y_j) be the score of an SVM classifier predicting label y_j. Features φ^mp_2 and φ^mp_6 require domain-specific knowledge: defining the admissible sets G(x) and L(x). In OCR, G(x) is the set of English words and L(x) is a set of similar-looking letters. Percentages in φ^mp_4 and φ^mp_5 are relative to all the labels in y_MAP.

Region-based forecasters. Recall from (4) that calibration examines the true probability of an event (y ∈ E) conditioned on the forecaster's prediction F(x, E) = p. By limiting the number of distinct probabilities p that F can output, we can more accurately estimate the true probability for each p. To this end, let us partition the feature space (the range of φ) into regions R, and output a probability F_R ∈ [0, 1] for each region R ∈ R. Formally, we consider region-based forecasters of the form

F(x, E) = Σ_{R∈R} F_R · I[φ(x, E) ∈ R],

where F_R is the fraction of points in region R (that is, points (x, E) with φ(x, E) ∈ R) for which the event holds (y ∈ E). Note that the partitioning R could itself depend on the recalibration set.
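The simplest region-based forecaster uses fixed equal-width regions (histogram binning); it fits in a few lines. This sketch is our own illustration of the idea, not code from the paper:

```python
def histogram_forecaster(s_binary, num_bins=10, lo=0.0, hi=1.0):
    """Fit F_R = fraction of positive outcomes per region, for fixed
    equal-width regions over a scalar feature in [lo, hi].  s_binary is
    a list of (feature, 0/1 outcome) pairs, as built by Algorithm 1."""
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    pos = [0] * num_bins
    for f, label in s_binary:
        b = min(int((f - lo) / width), num_bins - 1)  # clamp f == hi
        counts[b] += 1
        pos[b] += label
    f_r = [pos[b] / counts[b] if counts[b] else None for b in range(num_bins)]
    return lambda f: f_r[min(int((f - lo) / width), num_bins - 1)]

# Example: recalibrating raw scores that are overconfident near 1 and
# underconfident near 0 (synthetic numbers for illustration).
s_binary = [(0.95, 1), (0.95, 0), (0.92, 1), (0.97, 0),  # raw ~0.95, true 0.5
            (0.15, 0), (0.12, 0), (0.18, 1), (0.11, 0)]  # raw ~0.15, true 0.25
F = histogram_forecaster(s_binary)
```

Because the regions are fixed in advance, F_R is an unbiased estimate of T_R, which connects directly to the bias-variance discussion that follows.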
Two examples of region-based forecasters are k-nearest neighbors (k-NN) and decision trees. Let us obtain additional insight into the performance of region-based forecasters as a function of recalibration set size. Let S denote a recalibration set of size n, used to derive a partitioning R and probability estimates F_R for each region R ∈ R. Let T_R := P(y ∈ E | φ(x, E) ∈ R) be the true event probability for region R, and w_R := P(φ(x, E) ∈ R) the probability mass of region R. We may rewrite the expected calibration error (3) of the forecaster trained on a random S of size n (drawn i.i.d. from P) as

CalibrationError_n = E_R[ Σ_{R∈R} w_R · E_S[(F_R − T_R)² | R] ] .   (5)

We see a classic bias-variance tradeoff between having smaller regions (lower bias, increased sharpness) and having more data points per region (lower variance, better calibration):

E[(F_R − T_R)² | R] = (E[F_R | R] − T_R)² + E[(F_R − E[F_R | R])² | R] ,

where the first term is the squared bias and the second is the variance. If R is a fixed partitioning independent of S, then the bias is zero, and the variance, being that of an empirical average, falls off as 1/n. However, both k-NN and decision trees produce biased estimates F_R of T_R because the regions are chosen adaptively, which is important for achieving sharpness. In this case, we can still ensure that the calibration error vanishes if we let the regions grow uniformly larger:

min_{R∈R} |{(x, y) ∈ S : φ(x, E) ∈ R, E ∈ I(x)}| →_P ∞.

5 Experiments

We test our proposed recalibrators and features on three real-world tasks.

Multiclass image classification. The task is to predict an image label given an image. This setting is a special case of structured prediction in which we show that our framework improves over existing multiclass recalibration strategies.
Figure 2: MAP recalibration in the multiclass and chain CRF settings (left and middle) and marginal recalibration of the graph CRF (right); each panel plots the fraction of positives against the mean predicted value. Panels and ℓ2 losses: image classification (multiclass MAP recalibration; 75% accuracy of the raw uncalibrated SVM; raw 23.0, calibrated 19.6, one-vs-all 20.1), OCR (chain CRF MAP recalibration; 45% per-word accuracy using Viterbi decoding; raw 29.5, calibrated 13.6), and scene understanding (graph CRF marginal recalibration; 78% accuracy using mean-field marginal decoding; raw 65.9, calibrated 18.6). The legend includes the ℓ2 loss before and after calibration. The radius of the black balls reflects the number of points having the given forecasted and true probabilities.

We perform our experiments on the CIFAR-10 dataset [15], which consists of 60,000 32x32 color images of different types of animals and vehicles (ten classes in total). We train a linear SVM on features derived from k-means clustering that produce high accuracies (79%) on this dataset [16]. We use 800 out of the 1600 features having the highest mutual information with the label (the drop in performance is negligible); 38,000 images were used for training, 2,000 for calibration, and 20,000 for testing.

Optical character recognition. The task is to predict the word (sequence of characters) given a sequence of images (Figure 1). Calibrated OCR systems can be useful for automatic sorting of mail. This setting demonstrates calibration on a tractable linear-chain CRF. We used a dataset consisting of ∼8-character-long words from 150 human subjects [3]. Each character is rasterized into a 16×8 binary image. We chose 2,000 words for training and another 2,000 for testing; the remaining words are subsampled in various ways to produce recalibration sets.

Scene understanding.
Given an image divided into a set of regions, the task is to label each region with its type (e.g., person, tree). Calibrated scene understanding is important for building autonomous agents that try to take optimal actions in the environment, integrating over uncertainty. This is a structured prediction setting in which inference is intractable. We conduct experiments on a post-processed Pascal VOC dataset [5]. In brief, we train a graph CRF to predict the joint labeling y_i of superpixels y_ij in an image (∼100 superpixels per image; 21 possible labels). The input x_i consists of 21 node features; CRF edges connect adjacent superpixels. We use 600 examples for training, 500 for testing, and subsample the remaining ∼800 examples to produce calibration sets. We perform MAP inference using AD3, a dual decomposition algorithm; we use a mean-field approximation to compute marginals.

Experimental setup. We perform both MAP and marginal calibration as described in Section 3. We use decision trees and k-NN as our recalibration algorithms and examine the quality of our forecasts in terms of calibration and sharpness (Section 2). We discretize probabilities into buckets of size 0.1: B = {[(i−1)/10, i/10) : i = 1, ..., 10}. We report results using calibration curves: for each test point (x_i, E_i, y_i), let f_i = F(x_i, E_i) ∈ [0, 1] be the forecasted probability and t_i = I[y_i ∈ E_i] ∈ {0, 1} the true outcome. For each bucket B ∈ B, we compute the averages f_B = N_B^{−1} Σ_{i: f_i∈B} f_i and t_B = N_B^{−1} Σ_{i: f_i∈B} t_i, where N_B = |{i : f_i ∈ B}| is the number of points in bucket B. A calibration curve plots t_B as a function of f_B; perfect calibration corresponds to a straight line (see Figure 2 for an example).

5.1 “Out-of-the-Box” Recalibration

We would first like to demonstrate that our approach works well “out of the box” with very simple parameters: a single feature, k-NN with k = 100, and a reasonably-sized calibration set.
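The bucketed calibration-curve computation described in the experimental setup above can be sketched in a few lines (our own illustration):

```python
def calibration_curve(forecasts, outcomes, num_buckets=10):
    """Compute (mean forecast f_B, empirical frequency t_B, count N_B)
    for each non-empty bucket B = [(i-1)/10, i/10).  A perfectly
    calibrated forecaster puts every point on the diagonal f_B == t_B."""
    sums = [[0.0, 0.0, 0] for _ in range(num_buckets)]
    for f, t in zip(forecasts, outcomes):
        b = min(int(f * num_buckets), num_buckets - 1)  # f == 1.0 -> last bucket
        sums[b][0] += f
        sums[b][1] += t
        sums[b][2] += 1
    return [(s[0] / s[2], s[1] / s[2], s[2]) for s in sums if s[2] > 0]

curve = calibration_curve([0.1, 0.15, 0.8, 0.85, 0.9], [0, 1, 1, 1, 0])
# bucket [0.1, 0.2): f_B ~ 0.125, t_B = 0.5, N_B = 2
# bucket [0.8, 0.9): f_B ~ 0.825, t_B = 1.0, N_B = 2
# bucket [0.9, 1.0): f_B = 0.9,   t_B = 0.0, N_B = 1
```

Plotting t_B against f_B (with point sizes proportional to N_B) reproduces the style of Figures 2 and 3.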
We report results in three settings: (i) multiclass and (ii) chain CRF MAP recalibration with the margin feature φ^mg_1 (Figure 2, left and middle), as well as (iii) graph CRF marginal recalibration with the margin feature φ^mg_2 (Figure 2, right). We use calibration sets of 2,000, 1,000, and 300 examples (respectively) and compare to the raw CRF probabilities pθ(y ∈ E | x).

[Figure 3 panels, with ℓ2 losses per feature group. Per-letter OCR (chain CRF marginal recalibration; 84% per-letter accuracy using Viterbi decoding): uncalibrated 30.2; unstructured SVM scores φ^no_2 15.8; 26 character indicators φ^mp_6 16.1; marginal probabilities φ^mg_2 12.0; marginals + marginal/MAP agreement φ^mg_2, φ^mg_3 10.9; all features 10.8. Per-word OCR (chain CRF MAP recalibration; 45% per-word accuracy using Viterbi decoding): uncalibrated 21.0; unstructured SVM scores φ^no_1 20.5; length + presence in dictionary φ^mp_1, φ^mp_2 4.2; margin between 1st and 2nd best φ^mp_3 13.1; lowest marginal probability φ^mg_1 20.6; all features 4.0. Scene understanding (graph CRF marginal recalibration; 78% accuracy using mean-field marginal decoding): uncalibrated 67.0; unstructured SVM scores φ^no_2 14.7; pseudomargins φ^mp_7 17.0; pseudomargins + other MAP features φ^mp_4, φ^mp_5, φ^mp_7 15.4; marginals, MAP/marginal concordance φ^mg_2 15.9; all features 14.0.]

Figure 3: Feature analysis for MAP and marginal recalibration of the chain CRF (left and middle, resp.)
and marginal recalibration of the graph CRF (right). Subplots show calibration curves for various groups of features from Table 1, as well as ℓ2 losses; dot sizes indicate relative bucket size.

Figure 2 shows that our predictions (green line) are well-calibrated in every setting. In the multiclass setting, we outperform an existing approach which individually recalibrates one-vs-all classifiers and normalizes their probability estimates [12]. This suggests that recalibrating for a specific event (e.g., the highest scoring class) is better than first estimating all the multiclass probabilities.

5.2 Feature Analysis

Next, we investigate the role of features. In Figure 3, we consider three structured settings, and in each setting evaluate performance using different sets of features from Table 1. From top to bottom, the subplots describe progressively more computationally demanding features. Our main takeaways are that clever inexpensive features do as well as naive expensive ones, that features may be complementary and help each other, and that recalibration allows us to add “global” features to a chain CRF. We also see that features affect only sharpness.

In the intractable graph CRF setting (Figure 3, right), we observe that pseudomarginals φ^mp_7 (which require only MAP inference) fare almost as well as true marginals φ^mg_2, although they lack resolution. Augmenting with additional MAP-based features (φ^mp_4, φ^mp_5) that capture whether a label is similar to its neighbors and whether it occurs elsewhere in the image resolves this. This synergistic interaction of features appears elsewhere: on marginal chain CRF recalibration (Figure 3, left), the margin φ^mg_2 between the two best classes yields calibrated forecasts that slightly lack sharpness near zero (points with, e.g., 50% and 10% confidence will have similarly small margins).
Adding the MAP-marginal concordance feature φ^mg_3 improves calibration, since we can further differentiate between low and very low confidence estimates. Similarly, individual SVM and MAP-based features φ^no_2, φ^mp_6 (the φ^mp_6 are 26 binary indicators, one per character) are calibrated, but not very sharp. They accurately identify 70%, 80% and 90% confidence sets, which may be sufficient in practice, given that they take no additional time to compute. Adding features based on marginals φ^mg_2, φ^mg_3 improves sharpness. On MAP CRF recalibration (Figure 3, middle), we see that simple features (φ^mp_1, φ^mp_2) can fare better than more sophisticated ones like the margin φ^mp_3 (recall that φ^mp_1 is the length of a word; G in φ^mp_2 encodes whether the word y^MAP is in the dictionary). This demonstrates that recalibration lets us introduce new global features beyond what’s in the original CRF, which can dramatically improve calibration at no additional inferential cost.
Figure 4: Calibration error (blue) and sharpness (green) of k-NN (left) and decision trees (right) as a function of calibration set size (chain CRF; marginal recalibration). 5.3 Effects of Recalibration Set Size and Recalibration Technique Lastly, in Figure 4, we compare k-NN and decision trees on chain CRF marginal prediction using feature φ^mg_2. We subsample calibration sets S of various sizes N. For each N and each algorithm we choose a hyperparameter (minimum leaf size for decision trees, k in k-NN) by 10-fold cross-validation on S. We tried values between 5 and 500 in increments of 5. Figure 4 shows that for both methods, sharpness remains constant, while the calibration error decreases with N and quickly stabilizes below 10^-3; this confirms that we can always recalibrate with enough data.
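The two quantities plotted in Figure 4 can be estimated by bucketing forecasts, in the spirit of the classical decomposition of the Brier score [7]. A minimal sketch (the uniform bucketing scheme and exact estimator below are illustrative assumptions, not necessarily the paper's implementation):

```python
import numpy as np

def calibration_sharpness(probs, outcomes, n_buckets=10):
    """Bucket forecasts; return (calibration error, sharpness).

    Calibration error: bucket-size-weighted squared gap between each
    bucket's mean forecast and its empirical frequency. Sharpness:
    weighted variance of the per-bucket empirical frequencies."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_buckets - 1)
    cal, freqs, weights = 0.0, [], []
    for b in range(n_buckets):
        mask = idx == b
        if not mask.any():
            continue
        w = mask.mean()                 # fraction of points in this bucket
        f_hat = probs[mask].mean()      # mean forecast in the bucket
        y_bar = outcomes[mask].mean()   # empirical frequency in the bucket
        cal += w * (f_hat - y_bar) ** 2
        freqs.append(y_bar)
        weights.append(w)
    freqs, weights = np.array(freqs), np.array(weights)
    base_rate = float((weights * freqs).sum())
    sharpness = float((weights * (freqs - base_rate) ** 2).sum())
    return cal, sharpness

# A perfectly calibrated forecaster: outcomes drawn with the forecast
# probabilities. Calibration error should be near zero, while sharpness
# reflects how spread out the per-bucket frequencies are.
rng = np.random.default_rng(0)
p = rng.uniform(size=20000)
y = (rng.uniform(size=20000) < p).astype(float)
cal, sha = calibration_sharpness(p, y)
```

With enough data the calibration term shrinks toward zero while sharpness stays put, which mirrors the behavior of the curves in Figure 4.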
The decrease in calibration error also indicates that cross-validation successfully finds a good model for each N. Finally, we found that k-NN fared better when using continuous features (see also the right columns of Figures 2 and 3); decision trees performed much better on categorical features. 6 Previous Work and Discussion Calibration and sharpness provide the conceptual basis for this work. These ideas and their connection to ℓ2 losses have been explored extensively in the statistics literature [7, 9] in connection to forecast evaluation; there exist generalizations to other losses as well [17, 10]. Calibration in the online setting is a field in itself; see [8] for a starting point. Finally, calibration has been explored extensively from a Bayesian viewpoint, starting with the seminal work of Dawid [18]. Recalibration has been mostly studied in the binary classification setting, with Platt scaling [11] and isotonic regression [13] being two popular and effective methods. Non-binary methods typically involve training one-vs-all predictors [12] and include extensions to ranking losses [19] and combinations of estimators [20]. Our generalization to structured prediction required us to develop the notion of events of interest; even in the multiclass setting, recalibrating to an event of interest works better than estimating every class probability, and the idea may be useful beyond typical structured prediction problems. Confidence estimation methods play a key role in speech recognition [21], but they require domain-specific acoustic features [1]. Our approach is more general, as it applies in any graphical model (including ones where inference is intractable), uses domain-independent features, and guarantees calibrated probabilities, rather than producing simple scores that merely correlate with accuracy. The issue of calibration arises any time one needs to assess the confidence of a prediction.
Its importance has been discussed and emphasized in medicine [22], natural language processing [23], speech recognition [21], meteorology [10], econometrics [9], and psychology [24]. Unlike uncalibrated confidence measures, calibrated probabilities are formally tied to objective frequencies. They are easy to understand by users, e.g., patients undergoing diagnosis or researchers querying a probabilistic database. Moreover, modern AI systems typically consist of a pipeline of modules [23]. In this setting, calibrated probabilities are important to express uncertainty meaningfully across different (potentially third-party) modules. We hope our extension to the structured prediction setting can help make calibration more accessible and easier to apply to more complex and diverse settings. Acknowledgements. This research is supported by an NSERC Canada Graduate Scholarship to the first author and a Sloan Research Fellowship to the second author. Reproducibility. All code, data, and experiments for this paper are available on CodaLab at https://www.codalab.org/worksheets/0xecc9a01cfcbc4cd6b0444a92d259a87c/. References [1] M. Seigel. Confidence Estimation for Automatic Speech Recognition Hypotheses. PhD thesis, University of Cambridge, 2013. [2] D. E. Heckerman and B. N. Nathwani. Towards normative expert systems: Probability-based representations for efficient knowledge acquisition and inference. Methods Archive, 31(2):106–116, 1992. [3] R. H. Kassel. A comparison of approaches to on-line handwritten character recognition. PhD thesis, Massachusetts Institute of Technology, 1995. [4] P. Liang, A. Bouchard-Côté, D. Klein, and B. Taskar. An end-to-end discriminative approach to machine translation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), 2006. [5] A. Mueller. Methods for Learning Structured Prediction in Semantic Segmentation of Natural Images. PhD thesis, University of Bonn, 2013. [6] G. W.
Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950. [7] A. H. Murphy. A new vector partition of the probability score. Journal of Applied Meteorology, 12(4):595–600, 1973. [8] D. P. Foster and R. V. Vohra. Asymptotic calibration, 1998. [9] T. Gneiting, F. Balabdaoui, and A. E. Raftery. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2):243–268, 2007. [10] J. Brocker. Reliability, sufficiency, and the decomposition of proper scores. Quarterly Journal of the Royal Meteorological Society, 135(643):1512–1519, 2009. [11] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999. [12] B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 694–699, 2002. [13] A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning (ICML), pages 625–632, 2005. [14] D. B. Stephenson, C. A. S. Coelho, and I. T. Jolliffe. Two extra components in the Brier score decomposition. Weather and Forecasting, 23:752–757, 2008. [15] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. [16] A. Coates and A. Y. Ng. Learning feature representations with K-means. Neural Networks: Tricks of the Trade - Second Edition, 2(1):561–580, 2012. [17] A. Buja, W. Stuetzle, and Y. Shen. Loss functions for binary class probability estimation and classification: Structure and applications, 2005. [18] A. P. Dawid. The well-calibrated Bayesian. Journal of the American Statistical Association (JASA), 77(379):605–610, 1982. [19] A. K. Menon, X. Jiang, S. Vembu, C. Elkan, and L.
Ohno-Machado. Predicting accurate probabilities with a ranking loss. In International Conference on Machine Learning (ICML), 2012. [20] L. W. Zhong and J. Kwok. Accurate probability calibration for multiple classifiers. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1939–1945, 2013. [21] D. Yu, J. Li, and L. Deng. Calibration of confidence measures in speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 19(8):2461–2473, 2011. [22] X. Jiang, M. Osl, J. Kim, and L. Ohno-Machado. Calibrating predictive model estimates to support personalized medicine. Journal of the American Medical Informatics Association, 19(2):263–274, 2012. [23] K. Nguyen and B. O’Connor. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Language Processing (EMNLP), pages 1587–1598, 2015. [24] S. Lichtenstein, B. Fischhoff, and L. D. Phillips. Judgement under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982.
Spectral Representations for Convolutional Neural Networks Oren Rippel Department of Mathematics Massachusetts Institute of Technology rippel@math.mit.edu Jasper Snoek Twitter and Harvard SEAS jsnoek@seas.harvard.edu Ryan P. Adams Twitter and Harvard SEAS rpa@seas.harvard.edu Abstract Discrete Fourier transforms provide a significant speedup in the computation of convolutions in deep learning. In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs). We employ spectral representations to introduce a number of innovations to CNN design. First, we propose spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain. This approach preserves considerably more information per parameter than other pooling strategies and enables flexibility in the choice of pooling output dimensionality. This representation also enables a new form of stochastic regularization by randomized modification of resolution. We show that these methods achieve competitive results on classification and approximation tasks, without using any dropout or max-pooling. Finally, we demonstrate the effectiveness of complex-coefficient spectral parameterization of convolutional filters. While this leaves the underlying model unchanged, it results in a representation that greatly facilitates optimization. We observe on a variety of popular CNN configurations that this leads to significantly faster convergence during training. 1 Introduction Convolutional neural networks (CNNs) (LeCun et al., 1989) have been used to achieve unparalleled results across a variety of benchmark machine learning problems, and have been applied successfully throughout science and industry for tasks such as large scale image and video classification (Krizhevsky et al., 2012; Karpathy et al., 2014). 
One of the primary challenges of CNNs, however, is the computational expense necessary to train them. In particular, the efficient implementation of convolutional kernels has been a key ingredient of any successful use of CNNs at scale. Due to its efficiency and the potential for amortization of cost, the discrete Fourier transform has long been considered by the deep learning community to be a natural approach to fast convolution (Bengio & LeCun, 2007). More recently, Mathieu et al. (2013); Vasilache et al. (2014) have demonstrated that convolution can be computed significantly faster using discrete Fourier transforms than directly in the spatial domain, even for tiny filters. This computational gain arises from the convenient property of operator duality between convolution in the spatial domain and element-wise multiplication in the frequency domain. In this work, we argue that the frequency domain offers more than a computational trick for convolution: it also provides a powerful representation for modeling and training CNNs. Frequency decomposition allows studying an input across its various length-scales of variation, and as such provides a natural framework for the analysis of data with spatial coherence. We introduce two applications of spectral representations. These contributions can be applied independently of each other. Spectral parametrization We propose the idea of learning the filters of CNNs directly in the frequency domain. Namely, we parametrize them as maps of complex numbers, whose discrete Fourier transforms correspond to the usual filter representations in the spatial domain. Because this mapping corresponds to unitary transformations of the filters, this reparametrization does not alter the underlying model. However, we argue that the spectral representation provides an appropriate domain for parameter optimization, as the frequency basis captures typical filter structure well.
More specifically, we show that filters tend to be considerably sparser in their spectral representations, thereby reducing the redundancy that appears in spatial domain representations. This provides the optimizer with more meaningful axis-aligned directions that can be taken advantage of with standard element-wise preconditioning. We demonstrate the effectiveness of this reparametrization on a number of CNN optimization tasks, converging 2-5 times faster than the standard spatial representation. Spectral pooling Pooling refers to dimensionality reduction used in CNNs to impose a capacity bottleneck and facilitate computation. We introduce a new approach to pooling we refer to as spectral pooling. It performs dimensionality reduction by projecting onto the frequency basis set and then truncating the representation. This approach alleviates a number of issues present in existing pooling strategies. For example, while max pooling is featured in almost every CNN and has had great empirical success, one major criticism has been its poor preservation of information (Hinton, 2014b,a). This weakness is exhibited in two ways. First, along with other stride-based pooling approaches, it implies a very sharp dimensionality reduction by at least a factor of 4 every time it is applied on two-dimensional inputs. Moreover, while it encourages translational invariance, it does not utilize its capacity well to reduce approximation loss: the maximum value in each window only reflects very local information, and often does not represent well the contents of the window. In contrast, we show that spectral pooling preserves considerably more information for the same number of parameters. It achieves this by exploiting the non-uniformity of typical inputs in their signal-to-noise ratio as a function of frequency. 
For example, natural images are known to have an expected power spectrum that follows an inverse power law: power is heavily concentrated in the lower frequencies, while higher frequencies tend to encode noise (Torralba & Oliva, 2003). As such, the elimination of higher frequencies in spectral pooling not only does minimal damage to the information in the input, but can even be viewed as a type of denoising. In addition, spectral pooling allows us to specify any arbitrary output map dimensionality. This permits reduction of the map dimensionality in a slow and controlled manner as a function of network depth. Also, since truncation of the frequency representation exactly corresponds to reduction in resolution, we can supplement spectral pooling with stochastic regularization in the form of randomized resolution. Spectral pooling can be implemented at a negligible additional computational cost in convolutional neural networks that employ FFT for convolution kernels, as it only requires matrix truncation. We also note that these two ideas are both compatible with the recently-introduced method of batch normalization (Ioffe & Szegedy, 2015), permitting even better training efficiency. 2 The Discrete Fourier Transform The discrete Fourier transform (DFT) is a powerful way to decompose a spatiotemporal signal. In this section, we provide an introduction to a number of components of the DFT drawn upon in this work. We confine ourselves to the two-dimensional DFT, although all properties and results presented can be easily extended to other input dimensions. Given an input x ∈ C^{M×N} (we address the constraint of real inputs in Subsection 2.1), its 2D DFT F(x) ∈ C^{M×N} is given by
F(x)_{hw} = (1/√(MN)) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} x_{mn} e^{−2πi(mh/M + nw/N)}, ∀h ∈ {0, …, M−1}, ∀w ∈ {0, …, N−1}.
The DFT is linear and unitary, and so its inverse transform is given by F^{−1}(·) = F(·)*, namely the conjugate of the transform itself. (a) DFT basis functions.
(b) Examples of input-transform pairs. (c) Conjugate symmetry. Figure 1: Properties of discrete Fourier transforms. (a) All discrete Fourier basis functions of map size 8 × 8. Note the equivalence of some of these due to conjugate symmetry. (b) Examples of input images and their frequency representations, presented as log-amplitudes. The frequency maps have been shifted to center the DC component. Rays in the frequency domain correspond to spatial domain edges aligned perpendicular to these. (c) Conjugate symmetry patterns for inputs with odd (top) and even (bottom) dimensionalities. Orange: real-valuedness constraint. Blue: no constraint. Gray: value fixed by conjugate symmetry. Intuitively, the DFT coefficients resulting from projections onto the different frequencies can be thought of as measures of correlation of the input with basis functions of various length-scales. See Figure 1(a) for a visualization of the DFT basis functions, and Figure 1(b) for examples of input-frequency map pairs. The widespread deployment of the DFT can be partially attributed to the development of the Fast Fourier Transform (FFT), a mainstay of signal processing and a standard component of most math libraries. The FFT is an efficient implementation of the DFT with time complexity O(MN log(MN)). Convolution using DFT One powerful property of frequency analysis is the operator duality between convolution in the spatial domain and element-wise multiplication in the spectral domain. Namely, given two inputs x, f ∈ R^{M×N}, we may write
F(x ∗ f) = F(x) ⊙ F(f) (1)
where by ∗ we denote a convolution and by ⊙ an element-wise product. Approximation error The unitarity of the Fourier basis makes it convenient for the analysis of approximation loss. More specifically, Parseval’s Theorem links the ℓ2 loss between any input x and its approximation x̂ to the corresponding loss in the frequency domain:
∥x − x̂∥_2^2 = ∥F(x) − F(x̂)∥_2^2. (2)
An equivalent statement also holds for the inverse DFT operator.
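These identities are easy to sanity-check with NumPy (a quick sketch; note that np.fft.fft2 is unnormalized by default, so the unitary 1/√(MN) convention of the text corresponds to norm="ortho", and the duality in Eq. (1) then holds up to a constant factor):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 8
x = rng.standard_normal((M, N))
f = rng.standard_normal((M, N))

# Operator duality (Eq. 1): circular convolution in space equals
# element-wise multiplication in frequency. With NumPy's default
# (unnormalized) transforms the identity is exact.
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(f)))
conv_direct = np.zeros((M, N))
for p in range(M):
    for q in range(N):
        # (x * f)[m, n] = sum_{p,q} x[p, q] f[(m - p) % M, (n - q) % N]
        conv_direct += x[p, q] * np.roll(np.roll(f, p, axis=0), q, axis=1)
dual_err = np.abs(conv_fft - conv_direct).max()

# Parseval (Eq. 2): the unitary DFT (norm="ortho") preserves l2 norms.
parseval_gap = abs(np.linalg.norm(x) - np.linalg.norm(np.fft.fft2(x, norm="ortho")))

# Conjugate symmetry of a real input's DFT (Figure 1(c)):
# y[m, n] == conj(y[(M - m) % M, (N - n) % N]).
y = np.fft.fft2(x)
sym_err = np.abs(y - np.conj(np.roll(np.flip(y), (1, 1), axis=(0, 1)))).max()
```

All three residuals come out at floating-point noise level.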
This allows us to quickly assess how an input is affected by any distortion we might make to its frequency representation. 2.1 Conjugate symmetry constraints In the following sections of the paper, we will propagate signals and their gradients through DFT and inverse DFT layers. In these layers, we will represent the frequency domain in the complex field. However, for all layers apart from these, we would like to ensure that both the signal and its gradient are constrained to the reals. A necessary and sufficient condition to achieve this is conjugate symmetry in the frequency domain. Namely, for any transform y = F(x) of some input x, it must hold that
y_{mn} = y*_{(M−m) mod M, (N−n) mod N}, ∀m ∈ {0, …, M−1}, ∀n ∈ {0, …, N−1}. (3)
Thus, intuitively, given the left half of our frequency map, the diminished number of degrees of freedom allows us to reconstruct the right. In effect, this allows us to store approximately half the parameters that would otherwise be necessary. Note, however, that this does not reduce the effective dimensionality, since each element consists of real and imaginary components. The conjugate symmetry constraints are visualized in Figure 1(c). Given a real input, its DFT will necessarily meet these. This symmetry can be observed in the frequency representations of the examples in Figure 1(b). However, since we seek to optimize over parameters embedded directly in the frequency domain, we need to pay close attention to ensure the conjugate symmetry constraints are enforced upon inversion back to the spatial domain (see Subsection 2.2). 2.2 Differentiation Here we discuss how to propagate the gradient through a Fourier transform layer. This analysis can be similarly applied to the inverse DFT layer. Define x ∈ R^{M×N} and y = F(x) to be the input and output of a DFT layer respectively, and R : R^{M×N} → R a real-valued loss function applied to y which can be considered as the remainder of the forward pass.
Since the DFT is a linear operator, its gradient is simply the transformation matrix itself. During back-propagation, then, this gradient is conjugated, and this, by DFT unitarity, corresponds to the application of the inverse transform:
∂R/∂x = F^{−1}(∂R/∂y). (4)
There is an intricacy that makes matters a bit more complicated. Namely, the conjugate symmetry condition discussed in Subsection 2.1 introduces redundancy. Inspecting the conjugate symmetry constraints in Equation (3), we note their enforcement of the special case y_{00} ∈ R for N odd, and y_{00}, y_{N/2,0}, y_{0,N/2}, y_{N/2,N/2} ∈ R for N even. For all other indices they enforce conjugate equality of pairs of distinct elements. These conditions imply that the number of unconstrained parameters is about half the map in its entirety. 3 Spectral Pooling The choice of a pooling technique boils down to the selection of an appropriate set of basis functions to project onto, and some truncation of this representation to establish a lower-dimensionality approximation to the original input. The idea behind spectral pooling stems from the observation that the frequency domain provides an ideal basis for inputs with spatial structure. We first discuss the technical details of this approach, and then its advantages. Spectral pooling is straightforward to understand and to implement. We assume we are given an input x ∈ R^{M×N}, and some desired output map dimensionality H × W. First, we compute the discrete Fourier transform of the input into the frequency domain as y = F(x) ∈ C^{M×N}, and assume that the DC component has been shifted to the center of the domain as is standard practice. We then crop the frequency representation by maintaining only the central H × W submatrix of frequencies, which we denote as ŷ ∈ C^{H×W}. Finally, we map this approximation back into the spatial domain by taking its inverse DFT as x̂ = F^{−1}(ŷ) ∈ R^{H×W}. These steps are listed in Algorithm 1.
Note that some of the conjugate symmetry special cases described in Subsection 2.2 might be broken by this truncation. As such, to ensure that x̂ is real-valued, we must treat these individually with TREATCORNERCASES, which can be found in the supplementary material. Figure 2 demonstrates the effect of this pooling for various choices of H × W. The backpropagation procedure is quite intuitive, and can be found in Algorithm 2 (REMOVEREDUNDANCY and RECOVERMAP can be found in the supplementary material). In Subsection 2.2, we addressed the nuances of differentiating through DFT and inverse DFT layers. Apart from these, the last component left undiscussed is differentiation through the truncation of the frequency matrix, but this corresponds to a simple zero-padding of the gradient maps to the appropriate dimensions. In practice, the DFTs are the computational bottlenecks of spectral pooling. However, we note that in convolutional neural networks that employ FFTs for convolution computation, spectral pooling can be implemented at a negligible additional computational cost, since the DFT is performed regardless. We proceed to discuss a number of properties of spectral pooling, which we then test comprehensively in Section 5.
Algorithm 1: Spectral pooling
Input: Map x ∈ R^{M×N}, output size H × W
Output: Pooled map x̂ ∈ R^{H×W}
1: y ← F(x)
2: ŷ ← CROPSPECTRUM(y, H × W)
3: ŷ ← TREATCORNERCASES(ŷ)
4: x̂ ← F^{−1}(ŷ)
Algorithm 2: Spectral pooling back-propagation
Input: Gradient w.r.t. output ∂R/∂x̂
Output: Gradient w.r.t. input ∂R/∂x
1: ẑ ← F(∂R/∂x̂)
2: ẑ ← REMOVEREDUNDANCY(ẑ)
3: z ← PADSPECTRUM(ẑ, M × N)
4: z ← RECOVERMAP(z)
5: ∂R/∂x ← F^{−1}(z)
Figure 2: Approximations for different pooling schemes, for different factors of dimensionality reduction. Spectral pooling projects onto the Fourier basis and truncates it as desired. This retains significantly more information and permits the selection of any arbitrary output map dimensionality.
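The forward pass of Algorithm 1 is a few lines of NumPy. This is a sketch under simplifying assumptions: even map sizes, and TREATCORNERCASES replaced by simply taking the real part after inversion; the norm="forward" convention keeps the DC coefficient equal to the image mean, so intensities survive the change of map size:

```python
import numpy as np

def spectral_pool(x, H, W):
    """Sketch of Algorithm 1: DFT, centered crop to H x W, inverse DFT.

    The paper's TREATCORNERCASES is approximated by taking the real part
    of the inverse transform; even input/output sizes are assumed."""
    M, N = x.shape
    # Shift the DC component to the center, as in the text.
    y = np.fft.fftshift(np.fft.fft2(x, norm="forward"))
    h0, w0 = (M - H) // 2, (N - W) // 2
    y_crop = y[h0:h0 + H, w0:w0 + W]          # keep the central H x W block
    return np.real(np.fft.ifft2(np.fft.ifftshift(y_crop), norm="forward"))

# A constant map survives pooling exactly ...
const = spectral_pool(np.full((16, 16), 3.0), 8, 8)
# ... and a pure low-frequency harmonic is resampled rather than distorted:
m = np.arange(16)
x = np.tile(np.cos(2 * np.pi * m / 16), (16, 1)).T
pooled = spectral_pool(x, 8, 8)
expected = np.tile(np.cos(2 * np.pi * np.arange(8) / 8), (8, 1)).T
```

The back-propagation of Algorithm 2 is the mirror image: zero-pad the gradient's spectrum back to M × N and invert.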
3.1 Information preservation Spectral pooling can significantly increase the amount of retained information relative to max pooling in two distinct ways. First, its representation maintains more information for the same number of degrees of freedom. Spectral pooling reduces the information capacity by tuning the resolution of the input precisely to match the desired output dimensionality. This operation can also be viewed as linear low-pass filtering, and it exploits the non-uniformity of the spectral density of the data with respect to frequency. That is, the power spectra of inputs with spatial structure, such as natural images, carry most of their mass on lower frequencies. As such, since the amplitudes of the higher frequencies tend to be small, Parseval’s theorem from Section 2 informs us that their elimination will result in a representation that minimizes the ℓ2 distortion after reconstruction. Second, spectral pooling does not suffer from the sharp reduction in output dimensionality exhibited by other pooling techniques. More specifically, for stride-based pooling strategies such as max pooling, the number of degrees of freedom of two-dimensional inputs is reduced by at least 75% as a function of stride. In contrast, spectral pooling allows us to specify any arbitrary output dimensionality, and thus allows us to reduce the map size gradually as a function of layer. 3.2 Regularization via resolution corruption We note that the low-pass filtering radii, say R_H and R_W, can be chosen to be smaller than the output map dimensionalities H, W. Namely, while we truncate our input frequency map to size H × W, we can further zero out all frequencies outside the central R_H × R_W square. While this maintains the output dimensionality H × W of the input domain after applying the inverse DFT, it effectively reduces the resolution of the output. This can be seen in Figure 2. This allows us to introduce regularization in the form of random resolution reduction.
We apply this stochastically by assigning a distribution p_R(·) on the frequency truncation radius (for simplicity we apply the same truncation on both axes), sampling from this a random radius at each iteration, and wiping out all frequencies outside the square of that size. Note that this can be regarded as an application of nested dropout (Rippel et al., 2014) on both dimensions of the frequency decomposition of our input. In practice, we have had success choosing p_R(·) = U[H_min, H](·), i.e., a uniform distribution stretching from some minimum value all the way up to the highest possible resolution. 4 Spectral Parametrization of CNNs Here we demonstrate how to learn the filters of CNNs directly in their frequency domain representations. This offers significant advantages over the traditional spatial representation, which we show empirically in Section 5. Let us assume that for some layer of our convolutional neural network we seek to learn filters of size H × W. To do this, we parametrize each filter f ∈ C^{H×W} in our network directly in the frequency domain. To attain its spatial representation, we simply compute its inverse DFT (a) Filters over time. (b) Sparsity patterns. (c) Momenta distributions. Figure 3: Learning dynamics of CNNs with spectral parametrization. The histograms have been produced after 10 epochs of training on CIFAR-10 by each method, but are similar throughout. (a) Progression over several epochs of filters parametrized in the frequency domain. Each pair of columns corresponds to the spectral parametrization of a filter and its inverse transform to the spatial domain. Filter representations tend to be more local in the Fourier basis. (b) Sparsity patterns for the different parametrizations. Spectral representations tend to be considerably sparser.
(c) Distributions of momenta across parameters for CNNs trained with and without spectral parametrization. In the spectral parametrization considerably fewer parameters are updated. as F^{−1}(f) ∈ R^{H×W}. From this point on, we proceed as we would for any standard CNN by computing the convolution of the filter with inputs in our mini-batch, and so on. The back-propagation through the inverse DFT is virtually identical to the one of spectral pooling described in Section 3. We compute the gradient as outlined in Subsection 2.2, being careful to obey the conjugate symmetry constraints discussed in Subsection 2.1. We emphasize that this approach does not change the underlying CNN model in any way, only the way in which it is parametrized. Hence, this only affects the way the solution space is explored by the optimization procedure. 4.1 Leveraging filter structure This idea exploits the observation that CNN filters have a very characteristic structure that reappears across data sets and problem domains. That is, CNN weights can typically be captured with a small number of degrees of freedom. Represented in the spatial domain, however, this results in significant redundancy. The frequency domain, on the other hand, provides an appealing basis for filter representation: characteristic filters (e.g., Gabor filters) are often very localized in their spectral representations. This follows from the observation that filters tend to feature very specific length-scales and orientations. Hence, they tend to have nonzero support in a narrow set of frequency components. This hypothesis can be observed qualitatively in Figure 3(a) and quantitatively in Figure 3(b). Empirically, in Section 5 we observe that spectral representations of filters lead to a convergence speedup by 2-5 times.
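A sketch of the reparametrization using NumPy's half-spectrum transforms; the Gabor-like example and the sparsity measure below are illustrative choices, not the paper's exact protocol:

```python
import numpy as np

H = W = 8
rng = np.random.default_rng(0)

# Parametrize a filter by its half-spectrum: conjugate symmetry makes the
# other half redundant, and np.fft.irfft2 enforces that symmetry, returning
# the real-valued spatial filter actually used in the convolution. Since the
# mapping is linear and invertible, the underlying model is unchanged.
shape = (H, W // 2 + 1)
f_freq = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
f_spatial = np.fft.irfft2(f_freq, s=(H, W))

# Characteristic filters tend to be far more local in the frequency basis.
# Compare the fraction of "significant" coefficients of a Gabor-like filter
# in the spatial vs. the spectral representation (the threshold is arbitrary):
yy, xx = np.meshgrid(np.arange(H) - H / 2, np.arange(W) - W / 2, indexing="ij")
gabor = np.exp(-(yy**2 + xx**2) / 8.0) * np.cos(2 * np.pi * (yy + xx) / 4.0)

def significant_fraction(a, thresh=0.05):
    mag = np.abs(a)
    return float(np.mean(mag > thresh * mag.max()))

frac_spatial = significant_fraction(gabor)
frac_spectral = significant_fraction(np.fft.fft2(gabor, norm="ortho"))
```

The oriented, band-limited filter occupies far fewer significant coefficients in frequency than in space, which is the axis alignment an element-wise adaptive optimizer can exploit.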
We remark that, had we trained our network with standard stochastic gradient descent, the linearity of differentiation and parameter update would have resulted in exactly the same filters regardless of whether they were represented in the spatial or frequency domain during training (this is true for any invertible linear transformation of the parameter space). However, as discussed, this parametrization corresponds to a rotation to a more meaningful axis alignment, where the number of relevant elements has been significantly reduced. Since modern optimizers implement update rules that consist of adaptive element-wise rescaling, they are able to leverage this axis alignment by making large updates to a small number of elements. This can be seen quantitatively in Figure 3(c), where the optimizer — Adam (Kingma & Ba, 2015), in this case — only touches a small number of elements in its updates. There exist a number of extensions of the above approach we believe would be quite promising in future work; we elaborate on these in the discussion. 5 Experiments We demonstrate the effectiveness of spectral representations in a number of different experiments. We ran all experiments on code optimized for the Xeon Phi coprocessor. We used Spearmint (Snoek et al., 2015) for Bayesian optimization of hyperparameters with 5-20 concurrent evaluations.
[Figure 4(a): approximation loss ∥f − f̂∥/∥f∥ vs. fraction of parameters kept, for max pooling and spectral pooling.]
(a) Approximation loss for the ImageNet validation set.
Method               CIFAR-10   CIFAR-100
Stochastic pooling   15.13%     41.51%
Maxout               11.68%     38.57%
Network-in-network   10.41%     35.68%
Deeply supervised    9.78%      34.57%
Spectral pooling     8.6%       31.6%
(b) Classification rates.
Figure 4: (a) Average information dissipation for the ImageNet validation set as a function of the fraction of parameters kept. This is measured in ℓ2 error normalized by the input norm. The red horizontal line indicates the best error rate achievable by max pooling.
(b) Test errors on CIFAR-10/100 without data augmentation of the optimal spectral pooling architecture, as compared to current state-of-the-art approaches: stochastic pooling (Zeiler & Fergus, 2013), Maxout (Goodfellow et al., 2013), network-in-network (Lin et al., 2013), and deeply-supervised nets (Lee et al., 2014).

5.1 Spectral pooling

Information preservation. We test the information retention properties of spectral pooling on the validation set of ImageNet (Russakovsky et al., 2015). For the different pooling strategies we plot the average approximation loss resulting from pooling to different dimensionalities. This can be seen in Figure 4. We observe the two aspects discussed in Subsection 3.1: first, spectral pooling permits significantly better reconstruction for the same number of parameters. Second, for max pooling, the only knob controlling the coarseness of approximation is the stride, which results in severe quantization and a constraining lower bound on preserved information (marked in the figure as a horizontal red line). In contrast, spectral pooling permits the selection of any output dimensionality, thereby producing a smooth curve over all frequency truncation choices.

Classification with convolutional neural networks. We test spectral pooling on different classification tasks. We hyperparametrize and optimize the following CNN architecture:

(C^{96+32m}_{3×3} → SP_{↓⌊γH_m⌋×⌊γH_m⌋})_{m=1}^{M} → C^{96+32M}_{1×1} → C^{10/100}_{1×1} → GA → Softmax    (5)

Here, C^F_S denotes a convolutional layer with F filters, each of size S; SP_{↓S} denotes a spectral pooling layer with output dimensionality S; and GA is the global averaging layer described in Lin et al. (2013). We upper-bound the number of filters per layer at 288. Every convolution and pooling layer is followed by a ReLU nonlinearity. Letting H_m be the height of the map of layer m, each spectral pooling layer reduces each output map dimension by a factor γ ∈ (0, 1).
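The map sizes implied by Equation (5) are easy to trace; a small sketch, assuming each spectral pooling stage maps a height H_m to ⌊γH_m⌋ and that stage m has 96 + 32m filters (the CIFAR-style 32×32 input and the helper name are illustrative):

```python
import math

def sp_architecture_dims(H0, gamma, M):
    """Filter counts (96 + 32m) and spectral-pooled map heights floor(gamma*H)
    for the M convolution/pooling stages of Eq. (5)."""
    H, dims = H0, []
    for m in range(1, M + 1):
        H = math.floor(gamma * H)
        dims.append((96 + 32 * m, H))
    return dims

# e.g. 32x32 inputs with the slowest decay rate gamma = 0.85 and M = 5 stages:
dims = sp_architecture_dims(32, 0.85, 5)
# [(128, 27), (160, 22), (192, 18), (224, 15), (256, 12)]
```

Because γ can be any value in (0, 1), the resolution shrinks smoothly rather than in the powers-of-two steps forced by strided max pooling.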
We assign the frequency dropout distribution p_R(·; m, α, β) = U_{[⌊c_m H_m⌋, H_m]}(·) for layer m out of M total layers, with c_m(α, β) = α + (m/M)(β − α) for some constants α, β ∈ R. This parametrization can be thought of as a linear schedule of the dropout cutoff as a function of the layer. We perform hyperparameter optimization on the dimensionality decay rate γ ∈ [0.25, 0.85], the number of layers M ∈ {1, ..., 15}, the resolution randomization hyperparameters α, β ∈ [0, 0.8], the weight decay rate in [10^{-5}, 10^{-2}], the momentum in [1 − 0.1^{0.5}, 1 − 0.1^{2}] and the initial learning rate in [10^{-4}, 10^{-1}]. We train each model for 150 epochs and anneal the learning rate by a factor of 10 at epochs 100 and 140. We intentionally use neither dropout nor data augmentation, as these introduce a number of additional hyperparameters which we want to disambiguate as alternative factors for success.

Perhaps unsurprisingly, the optimal hyperparameter configuration assigns the slowest possible layer map decay rate γ = 0.85. It selects randomized resolution reduction constants of about α ≈ 0.30, β ≈ 0.15, momentum of about 0.95 and an initial learning rate of 0.0088. These settings allow us to attain classification rates of 8.6% on CIFAR-10 and 31.6% on CIFAR-100. These are competitive results among approaches that do not employ data augmentation: a comparison to state-of-the-art approaches from the literature can be found in Table 4(b).

5.2 Spectral parametrization of CNNs

We demonstrate the effectiveness of spectral parametrization on a number of CNN optimization tasks, for different architectures and for different filter sizes. We use the notation MP^T_S to denote a max pooling layer with size S and stride T, and FC^F for a fully-connected layer with F filters.

[Figure 5(a): training curves for spatial versus spectral filter parametrization, for the three architectures (Deep, Generic, Spectral Pooling) and filter sizes 3 × 3 and 5 × 5.]
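The linear cutoff schedule c_m(α, β) = α + (m/M)(β − α) used for frequency dropout in Subsection 5.1 can be sketched directly (the depth M = 10 here is illustrative; α, β are set near the reported optimum):

```python
def cutoff_fraction(m, M, alpha, beta):
    """Linear schedule c_m = alpha + (m/M)(beta - alpha) for the
    frequency-dropout cutoff of layer m out of M."""
    return alpha + (m / M) * (beta - alpha)

# With alpha ~ 0.30, beta ~ 0.15 the cutoff fraction interpolates from
# near alpha at the first layer down to beta at the last:
fracs = [cutoff_fraction(m, 10, 0.30, 0.15) for m in range(1, 11)]
```

Layer m's dropout threshold is then drawn uniformly from [⌊c_m H_m⌋, H_m], so later layers are allowed more aggressive truncation here.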
    Architecture       Filter size   Speedup factor
    Deep (7)           3 × 3         2.2
    Deep (7)           5 × 5         4.8
    Generic (6)        3 × 3         2.2
    Generic (6)        5 × 5         5.1
    Sp. Pooling (5)    3 × 3         2.4
    Sp. Pooling (5)    5 × 5         4.8

(b) Speedup factors.

Figure 5: Optimization of CNNs via spectral parametrization. All experiments include data augmentation. (a) Training curves for the various experiments. The remainder of the optimization past the matching point is marked in light blue. The red diamonds indicate the relative epochs at which the asymptotic error rate of the spatial approach is achieved. (b) Speedup factors for different architectures and filter sizes. A non-negligible speedup is observed even for tiny 3 × 3 filters.

The first architecture is the generic one used in a variety of deep learning papers, such as Krizhevsky et al. (2012); Snoek et al. (2012); Krizhevsky (2009); Kingma & Ba (2015):

C^{96}_{3×3} → MP^2_{3×3} → C^{192}_{3×3} → MP^2_{3×3} → FC^{1024} → FC^{512} → Softmax    (6)

The second architecture we consider is the one employed in Snoek et al. (2015), which was shown to attain competitive classification rates. It is deeper and more complex:

C^{96}_{3×3} → C^{96}_{3×3} → MP^2_{3×3} → C^{192}_{3×3} → C^{192}_{3×3} → C^{192}_{3×3} → MP^2_{3×3} → C^{192}_{1×1} → C^{10/100}_{1×1} → GA → Softmax    (7)

The third architecture considered is the spectral pooling network from Equation 5. To increase the difficulty of optimization and reflect real training conditions, we supplemented all networks with data augmentation in the form of translations, horizontal reflections, HSV perturbations and dropout. We initialized both spatial and spectral filters to the same values in the spatial domain; for the spectral parametrization experiments we then computed the Fourier transform of these to obtain their frequency representations. We optimized all networks using the Adam (Kingma & Ba, 2015) update rule, a variant of RMSprop that we find to be a fast and robust optimizer. The training curves can be found in Figure 5(a) and the respective factors of convergence speedup in Table 5.
Surprisingly, we observe non-negligible speedup even for tiny filters of size 3 × 3, where we did not expect the frequency representation to have much room to exploit spatial structure. 6 Discussion and remaining open problems In this work, we demonstrated that spectral representations provide a rich spectrum of applications. We introduced spectral pooling, which allows pooling to any desired output dimensionality while retaining significantly more information than other pooling approaches. In addition, we showed that the Fourier functions provide a suitable basis for filter parametrization, as demonstrated by faster convergence of the optimization procedure. One possible future line of work is to embed the network in its entirety in the frequency domain. In models that employ Fourier transforms to compute convolutions, at every convolutional layer the input is FFT-ed and the elementwise multiplication output is then inverse FFT-ed. These back-andforth transformations are very computationally intensive, and as such it would be desirable to strictly remain in the frequency domain. However, the reason for these repeated transformations is the application of nonlinearities in the forward domain: if one were to propose a sensible nonlinearity in the frequency domain, this would spare us from the incessant domain switching. Acknowledgements We would like to thank Prabhat, Michael Gelbart and Matthew Johnson for useful discussions and assistance throughout this project. Jasper Snoek was a fellow in the Harvard Center for Research on Computation and Society. This work is supported by the Applied Mathematics Program within the Office of Science Advanced Scientific Computing Research of the U.S. Department of Energy under contract No. DE-AC02-05CH11231. This work used resources of the National Energy Research Scientific Computing Center (NERSC). We thank Helen He and Doug Jacobsen for providing us with access to the Babbage Xeon-Phi testbed at NERSC. 
References

Bengio, Yoshua and LeCun, Yann. Scaling learning algorithms towards AI. In Bottou, Léon, Chapelle, Olivier, DeCoste, D., and Weston, J. (eds.), Large Scale Kernel Machines. MIT Press, 2007.

Goodfellow, Ian J., Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C., and Bengio, Yoshua. Maxout networks. CoRR, abs/1302.4389, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1302.html#abs-1302-4389.

Hinton, Geoffrey. What's wrong with convolutional nets? MIT Brain and Cognitive Sciences - Fall Colloquium Series, Dec 2014a. URL http://techtv.mit.edu/collections/bcs/videos/30698-what-s-wrong-with-convolutional-nets.

Hinton, Geoffrey. Ask me anything: Geoffrey Hinton. Reddit Machine Learning, 2014b. URL https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.

Karpathy, Andrej, Toderici, George, Shetty, Sanketh, Leung, Thomas, Sukthankar, Rahul, and Fei-Fei, Li. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition, 2014.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2015. URL http://arxiv.org/abs/1412.6980.

Krizhevsky, Alex. Learning multiple layers of features from tiny images. Technical report, 2009.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

LeCun, Yann, Boser, Bernhard, Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, 1989.

Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets.
CoRR, abs/1409.5185, 2014. URL http://arxiv.org/abs/1409.5185.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. CoRR, abs/1312.4400, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1312.html#LinCY13.

Mathieu, Michaël, Henaff, Mikael, and LeCun, Yann. Fast training of convolutional networks through FFTs. CoRR, abs/1312.5851, 2013. URL http://arxiv.org/abs/1312.5851.

Rippel, Oren, Gelbart, Michael A., and Adams, Ryan P. Learning ordered representations with nested dropout. In International Conference on Machine Learning, 2014.

Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Li, Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 2015. doi: 10.1007/s11263-015-0816-y.

Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan Prescott. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems, 2012.

Snoek, Jasper, Rippel, Oren, Swersky, Kevin, Kiros, Ryan, Satish, Nadathur, Sundaram, Narayanan, Patwary, Md. Mostofa Ali, Prabhat, and Adams, Ryan P. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, 2015.

Torralba, Antonio and Oliva, Aude. Statistics of natural image categories. Network, 14(3):391–412, August 2003. ISSN 0954-898X.

Vasilache, Nicolas, Johnson, Jeff, Mathieu, Michaël, Chintala, Soumith, Piantino, Serkan, and LeCun, Yann. Fast convolutional nets with fbfft: A GPU performance evaluation. CoRR, abs/1412.7580, 2014. URL http://arxiv.org/abs/1412.7580.

Zeiler, Matthew D. and Fergus, Rob. Stochastic pooling for regularization of deep convolutional neural networks. CoRR, abs/1301.3557, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1301.html#abs-1301-3557.
On the consistency theory of high dimensional variable screening

Xiangyu Wang, Dept. of Statistical Science, Duke University, USA. xw56@stat.duke.edu
Chenlei Leng, Dept. of Statistics, University of Warwick, UK. C.Leng@warwick.ac.uk
David B. Dunson, Dept. of Statistical Science, Duke University, USA. dunson@stat.duke.edu

Abstract

Variable screening is a fast dimension reduction technique for assisting high dimensional feature selection. As a preselection method, it selects a moderate size subset of candidate variables for further refining via feature selection to produce the final model. The performance of variable screening depends on both computational efficiency and the ability to dramatically reduce the number of variables without discarding the important ones. When the data dimension p is substantially larger than the sample size n, variable screening becomes crucial as 1) faster feature selection algorithms are needed; 2) conditions guaranteeing selection consistency might fail to hold. This article studies a class of linear screening methods and establishes consistency theory for this special class. In particular, we prove that the restricted diagonally dominant (RDD) condition is a necessary and sufficient condition for strong screening consistency. As concrete examples, we show that the two screening methods SIS and HOLP are both strong screening consistent (subject to additional constraints) with large probability if n > O((ρs + σ/τ)² log p) under random designs. In addition, we relate the RDD condition to the irrepresentable condition, and highlight limitations of SIS.

1 Introduction

The rapidly growing data dimension has brought new challenges to statistical variable selection, a crucial technique for identifying important variables to facilitate interpretation and improve prediction accuracy. Recent decades have witnessed an explosion of research in variable selection and related fields such as compressed sensing [1, 2], with a core focus on regularized methods [3–7].
Regularized methods can consistently recover the support of coefficients, i.e., the non-zero signals, via optimizing regularized loss functions under certain conditions [8–10]. However, in the big data era when p far exceeds n, such regularized methods might fail for two reasons. First, the conditions that guarantee variable selection consistency for convex regularized methods such as lasso might fail to hold when p >> n; second, the computational expense of both convex and non-convex regularized methods increases dramatically with large p. Bearing these concerns in mind, [11] propose the concept of “variable screening”, a fast technique that reduces data dimensionality from p to a size comparable to n, with all predictors having nonzero coefficients preserved. They propose a marginal correlation based fast screening technique, “Sure Independence Screening” (SIS), that can preserve signals with large probability. However, this method relies on a strong assumption that the marginal correlations between the response and the important predictors are high [11], which is easily violated in practice. [12] extends the marginal correlation to Spearman's rank correlation, which is shown to gain certain robustness but is still limited by the same strong assumption. [13] and [14] take a different approach to attack the screening problem. They both adopt variants of a forward selection type algorithm that includes one variable at a time for constructing a candidate variable set for further refining. These methods eliminate the strong marginal assumption in [11] and have been shown to achieve better empirical performance. However, such improvement is limited by the extra computational burden caused by their iterative framework, which is reported to be high when p is large [15].
To ameliorate concerns in both screening performance and computational efficiency, [15] develop a new type of screening method termed “High-dimensional ordinary least-square projection” (HOLP). This new screener relaxes the strong marginal assumption required by SIS and can be computed efficiently (its complexity is O(n²p)), thus scaling to ultra-high dimensionality. This article focuses on linear models for tractability. As computation is one vital concern for designing a good screening method, we primarily focus on a class of linear screeners that can be efficiently computed, and study their theoretical properties. The main contributions of this article lie in three aspects.

1. We define the notion of strong screening consistency to provide a unified framework for analyzing screening methods. In particular, we show that a necessary and sufficient condition for a screening method to be strong screening consistent is that the screening matrix is restricted diagonally dominant (RDD). This condition gives insights into the design of screening matrices, while providing a framework to assess the effectiveness of screening methods.

2. We relate RDD to other existing conditions. The irrepresentable condition (IC) [8] is necessary and sufficient for sign consistency of lasso [3]. In contrast to IC, which is specific to the design matrix, RDD involves another ancillary matrix that can be chosen arbitrarily. Such flexibility allows RDD to hold even when IC fails, if the ancillary matrix is carefully chosen (as in HOLP). When the ancillary matrix is chosen as the design matrix, a certain equivalence is shown between RDD and IC, revealing the difficulty for SIS to achieve screening consistency. We also comment on the relationship between RDD and the restricted eigenvalue condition (REC) [6], which is commonly seen in the high dimensional literature. We illustrate via a simple example that RDD is not necessarily stronger than REC.

3.
We study the behavior of SIS and HOLP under random designs, and prove that a sample size of n = O((ρs + σ/τ)² log p) is sufficient for SIS and HOLP to be screening consistent, where s is the sparsity, ρ measures the diversity of signals and τ/σ evaluates the signal-to-noise ratio. This is to be compared to the sign consistency results in [9], where the design matrix is fixed and assumed to follow the IC.

The article is organized as follows. In Section 1, we set up the basic problem and describe the framework of variable screening. In Section 2, we provide a deterministic necessary and sufficient condition for consistent screening. Its relationship with the irrepresentable condition is discussed in Section 3. In Section 4, we prove the consistency of SIS and HOLP under random designs by showing that the RDD condition is satisfied with large probability, although the requirement on SIS is much more restrictive.

2 Linear screening

Consider the usual linear regression Y = Xβ + ϵ, where Y is the n×1 response vector, X is the n×p design matrix and ϵ is the noise. The regression task is to learn the coefficient vector β. In the high dimensional setting where p >> n, a sparsity assumption is often imposed on β so that only a small portion of the coordinates are non-zero. Such an assumption splits the task of learning β into two phases. The first is to recover the support of β, i.e., the location of the non-zero coefficients; the second is to estimate the value of these non-zero signals. This article mainly focuses on the first phase. As pointed out in the introduction, when the dimensionality is too high, using regularization methods raises concerns both computationally and theoretically. To reduce the dimensionality, [11] suggest a variable screening framework by finding a submodel

M_d = {i : |β̂_i| is among the largest d coordinates of |β̂|}  or  M_γ = {i : |β̂_i| > γ}.

Let Q = {1, 2, ..., p} and define S as the true model with s = |S| being its cardinality.
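The two submodels M_d and M_γ can be computed directly from any estimate β̂; a minimal sketch (the helper names and the toy vector are hypothetical):

```python
import numpy as np

def screen_topd(beta_hat, d):
    """M_d: indices of the d largest coordinates of |beta_hat|."""
    return set(np.argsort(-np.abs(beta_hat))[:d].tolist())

def screen_threshold(beta_hat, gamma):
    """M_gamma: indices i with |beta_hat_i| > gamma."""
    return set(np.flatnonzero(np.abs(beta_hat) > gamma).tolist())

beta_hat = np.array([0.05, -2.0, 0.1, 1.5, -0.02])
top = screen_topd(beta_hat, 2)            # both rules pick out the two
thr = screen_threshold(beta_hat, 0.5)     # strong coordinates {1, 3} here
```

Screening succeeds when such a submodel of size comparable to n still contains the whole true support S.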
The hope is that the submodel size |M_d| or |M_γ| will be smaller than or comparable to n, while S ⊆ M_d or S ⊆ M_γ. To achieve this goal, two steps are usually involved in the screening analysis. The first is to show that there exists some γ such that min_{i∈S} |β̂_i| > γ, and the second is to bound the size of |M_γ| such that |M_γ| = O(n). To unify these steps into a more comprehensive theoretical framework, we put forward a slightly stronger definition of screening consistency in this article.

Definition 2.1. (Strong screening consistency) An estimator β̂ (of β) is strong screening consistent if it satisfies

min_{i∈S} |β̂_i| > max_{i∉S} |β̂_i|    (1)

and

sign(β̂_i) = sign(β_i), ∀i ∈ S.    (2)

Remark 2.1. This definition does not differ much from the usual screening property studied in the literature, which requires min_{i∈S} |β̂_i| > max^{(n−s)}_{i∉S} |β̂_i|, where max^{(k)} denotes the k-th largest item. The key of strong screening consistency is property (1), which requires the estimator to preserve a consistent ordering of the zero and non-zero coefficients. It is weaker than the variable selection consistency in [8]. The requirement in (2) can be seen as a relaxation of the sign consistency defined in [8], as no requirement on β̂_i, i ∉ S is needed. As shown later, such relaxation tremendously reduces the restriction on the design matrix, and allows screening methods to work for a broader choice of X.

The focus of this article is to study the theoretical properties of a special class of screeners that take the linear form β̂ = AY for some p×n ancillary matrix A. Examples include sure independence screening (SIS), where A = X^T/n, and high-dimensional ordinary least-square projection (HOLP), where A = X^T(XX^T)^{-1}. We choose to study the class of linear estimators because linear screening is computationally efficient and theoretically tractable. We note that the usual ordinary least-squares estimator is also a special case of linear estimators, although it is not well defined for p > n.
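Definition 2.1 is easy to check numerically; a sketch (the toy vectors are hypothetical):

```python
import numpy as np

def is_strong_screening_consistent(beta_hat, beta):
    """Check Definition 2.1: (1) min_{i in S} |beta_hat_i| exceeds
    max_{i not in S} |beta_hat_i|; (2) signs agree on the support S."""
    S = np.flatnonzero(beta)
    Sc = np.setdiff1d(np.arange(len(beta)), S)
    off = np.max(np.abs(beta_hat[Sc])) if len(Sc) else -np.inf
    cond1 = np.min(np.abs(beta_hat[S])) > off
    cond2 = np.all(np.sign(beta_hat[S]) == np.sign(beta[S]))
    return bool(cond1 and cond2)

beta = np.array([2.0, 0.0, -1.0, 0.0])
ok = is_strong_screening_consistent(np.array([1.8, 0.3, -0.9, -0.2]), beta)
bad = is_strong_screening_consistent(np.array([1.8, 1.0, -0.9, 0.0]), beta)
# ok: signals dominate and signs match; bad: a non-signal exceeds a signal
```

Note that no condition at all is imposed on the values (or signs) of β̂ off the support, which is what makes the notion weaker than sign consistency.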
3 Deterministic guarantees

In this section, we derive the necessary and sufficient condition that guarantees β̂ = AY to be strong screening consistent. The design matrix X and the error ϵ are treated as fixed in this section; we investigate random designs later. We consider the set of sparse coefficient vectors defined by

B(s, ρ) = { β ∈ R^p : |supp(β)| ≤ s, max_{i∈supp(β)} |β_i| / min_{i∈supp(β)} |β_i| ≤ ρ }.

The set B(s, ρ) contains vectors having at most s non-zero coordinates, with the ratio of the largest and smallest coordinate bounded by ρ. Before proceeding to the main result of this section, we introduce some terminology that helps to establish the theory.

Definition 3.1. (restricted diagonally dominant matrix) A p × p symmetric matrix Φ is restricted diagonally dominant with sparsity s if for any I ⊆ Q with |I| ≤ s − 1 and any i ∈ Q \ I,

Φ_ii > C0 · max( Σ_{j∈I} |Φ_ij + Φ_kj|, Σ_{j∈I} |Φ_ij − Φ_kj| ) + |Φ_ik|,  ∀k ≠ i, k ∈ Q \ I,

where C0 ≥ 1 is a constant. Notice this definition implies that for i ∈ Q \ I,

Φ_ii ≥ C0 ( Σ_{j∈I} |Φ_ij + Φ_kj| + Σ_{j∈I} |Φ_ij − Φ_kj| ) / 2 ≥ C0 Σ_{j∈I} |Φ_ij|,    (3)

which is related to the usual diagonally dominant matrix. The restricted diagonally dominant matrix provides a necessary and sufficient condition for any linear estimator β̂ = AY to be strong screening consistent. More precisely, we have the following result.

Theorem 1. For the noiseless case where ϵ = 0, a linear estimator β̂ = AY is strong screening consistent for every β ∈ B(s, ρ) if and only if the screening matrix Φ = AX is restricted diagonally dominant with sparsity s and C0 ≥ ρ.

Proof. Assume Φ is restricted diagonally dominant with sparsity s and C0 ≥ ρ. Recall β̂ = Φβ. Suppose S is the index set of non-zero predictors.
For any i ∈ S, k ∉ S, if we let I = S \ {i}, then we have

|β̂_i| = |β_i| ( Φ_ii + Σ_{j∈I} (β_j/β_i) Φ_ij )
      = |β_i| ( Φ_ii + Σ_{j∈I} (β_j/β_i)(Φ_ij + Φ_kj) + Φ_ki − Σ_{j∈I} (β_j/β_i) Φ_kj − Φ_ki )
      > −|β_i| ( Σ_{j∈I} (β_j/β_i) Φ_kj + Φ_ki )
      = −(|β_i|/β_i) ( Σ_{j∈I} β_j Φ_kj + β_i Φ_ki )
      = −sign(β_i) · β̂_k,

and

|β̂_i| = |β_i| ( Φ_ii + Σ_{j∈I} (β_j/β_i) Φ_ij )
      = |β_i| ( Φ_ii + Σ_{j∈I} (β_j/β_i)(Φ_ij − Φ_kj) − Φ_ki + Σ_{j∈I} (β_j/β_i) Φ_kj + Φ_ki )
      > |β_i| ( Σ_{j∈I} (β_j/β_i) Φ_kj + Φ_ki )
      = sign(β_i) · β̂_k.

Therefore, whatever the value of sign(β_i), it always holds that |β̂_i| > |β̂_k|, and thus min_{i∈S} |β̂_i| > max_{k∉S} |β̂_k|. To prove the sign consistency for the non-zero coefficients, we notice that for i ∈ S,

β̂_i β_i = Φ_ii β_i² + Σ_{j∈I} Φ_ij β_j β_i = β_i² ( Φ_ii + Σ_{j∈I} (β_j/β_i) Φ_ij ) > 0.

The proof of necessity is left to the supplementary materials.

The noiseless case is a good starting point to analyze β̂. Intuitively, in order to preserve the correct order of the coefficients in β̂ = AXβ, one needs AX to be close to a diagonally dominant matrix, so that β̂_i, i ∈ S, can take advantage of the large diagonal terms of AX to dominate β̂_i, i ∉ S, which are just linear combinations of off-diagonal terms. When noise is considered, the condition in Theorem 1 needs to be changed slightly to accommodate extra discrepancies. In addition, the smallest non-zero coefficient has to be lower bounded to ensure a certain level of signal-to-noise ratio. Thus, we augment our previous definition of B(s, ρ) with a signal strength control:

B_τ(s, ρ) = { β ∈ B(s, ρ) : min_{i∈supp(β)} |β_i| ≥ τ }.

Then we can obtain the following modified theorem.

Theorem 2. With noise, the linear estimator β̂ = AY is strong screening consistent for every β ∈ B_τ(s, ρ) if Φ = AX − 2τ^{-1}‖Aϵ‖_∞ I_p is restricted diagonally dominant with sparsity s and C0 ≥ ρ.

The proof of Theorem 2 is essentially the same as that of Theorem 1 and is thus left to the supplementary materials. The condition in Theorem 2 can be further tailored to a necessary and sufficient version with extra manipulation of the noise term.
Nevertheless, this might not be useful in practice due to the randomness in the noise. In addition, the current version of Theorem 2 is already tight in the sense that there exists some noise vector ϵ such that the condition in Theorem 2 is also necessary for strong screening consistency. Theorems 1 and 2 establish ground rules for verifying the consistency of a given screener and provide practical guidance for screening design. In Section 4, we consider some concrete examples of the ancillary matrix A and prove that the conditions in Theorems 1 and 2 are satisfied by the corresponding screeners with large probability under random designs.

4 Relationship with other conditions

For some special cases, such as sure independence screening (SIS), the restricted diagonally dominant (RDD) condition is related to the strong irrepresentable condition (IC) proposed in [8]. Assume each column of X is standardized to have mean zero. Letting C = X^T X/n and β be a given coefficient vector, the IC is expressed as

‖C_{S^c,S} C_{S,S}^{-1} · sign(β_S)‖_∞ ≤ 1 − θ    (4)

for some θ > 0, where C_{A,B} represents the sub-matrix of C with row indices in A and column indices in B. The authors enumerate several scenarios of C such that IC is satisfied. We verify some of these scenarios for the screening matrix Φ.

Corollary 1. If Φ_ii = 1, ∀i and |Φ_ij| < c/(2s), ∀i ≠ j for some 0 ≤ c < 1, as defined in Corollaries 1 and 2 in [8], then Φ is a restricted diagonally dominant matrix with sparsity s and C0 ≥ 1/c. If |Φ_ij| < r^{|i−j|}, ∀i, j for some 0 < r < 1, as defined in Corollary 3 in [8], then Φ is a restricted diagonally dominant matrix with sparsity s and C0 ≥ (1 − r)²/(4r).

A more explicit but nontrivial relationship between IC and RDD is illustrated below when |S| = 2.

Theorem 3. Assume Φ_ii = 1, ∀i and |Φ_ij| < r, ∀i ≠ j. If Φ is restricted diagonally dominant with sparsity 2 and C0 ≥ ρ, then Φ satisfies

‖Φ_{S^c,S} Φ_{S,S}^{-1} · sign(β_S)‖_∞ ≤ (ρ − 1)/(1 − r)

for all β ∈ B(2, ρ).
On the other hand, if Φ satisfies the IC for all β ∈ B(2, ρ) for some θ, then Φ is a restricted diagonally dominant matrix with sparsity 2 and

C0 ≥ (1/(1 − θ)) · (1 − r)/(1 + r).

Theorem 3 demonstrates a certain equivalence between IC and RDD. However, this does not mean that RDD is also a strong requirement. Notice that IC is directly imposed on the covariance matrix X^T X/n. This makes IC a strong assumption that is easily violated, for example, when the predictors are highly correlated. In contrast to IC, RDD is imposed on the matrix AX, where there is flexibility in choosing A. Only when A is chosen to be X^T/n is RDD as strong as IC, as shown in the next theorem. For other choices of A, such as HOLP defined in the next section, the estimator satisfies RDD even when predictors are highly correlated. Therefore, RDD is considered a weak requirement. For SIS, the screening matrix Φ = X^T X/n coincides with the covariance matrix, making RDD and IC effectively equivalent. The following theorem formalizes this.

Theorem 4. Let A = X^T/n and standardize the columns of X to have sample variance one. Assume X satisfies the sparse Riesz condition [16], i.e.,

min_{π⊆Q, |π|≤s} λ_min(X_π^T X_π / n) ≥ μ,

for some μ > 0. Now if AX is restricted diagonally dominant with sparsity s + 1 and C0 ≥ ρ with ρ > √s/μ, then X satisfies the IC for any β ∈ B(s, ρ). In other words, under the condition ρ > √s/μ, the strong screening consistency of SIS for B(s + 1, ρ) implies the model selection consistency of lasso for B(s, ρ).

Theorem 4 illustrates the difficulty of SIS: the necessary condition that guarantees good screening performance of SIS also guarantees the model selection consistency of lasso. However, such a strong necessary condition does not mean that SIS should be avoided in practice, given its substantial advantages in terms of simplicity and computational efficiency. The strong screening consistency defined in this article is stronger than the conditions commonly used in justifying screening procedures, as in [11].
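For small p, Definition 3.1 can be verified by brute force over all index sets I, which is useful for building intuition about which screening matrices qualify (an illustrative sketch, not part of the paper; the test matrices are hypothetical):

```python
import itertools
import numpy as np

def is_rdd(Phi, s, C0=1.0):
    """Brute-force check of Definition 3.1 (feasible only for small p):
    for every I with |I| <= s-1 and every i, k outside I (k != i), require
    Phi[i,i] > C0 * max(sum_j |Phi[i,j]+Phi[k,j]|,
                        sum_j |Phi[i,j]-Phi[k,j]|) + |Phi[i,k]|."""
    p = Phi.shape[0]
    for r in range(s):                                  # |I| = 0, ..., s-1
        for I in itertools.combinations(range(p), r):
            rest = [i for i in range(p) if i not in I]
            for i in rest:
                for k in rest:
                    if k == i:
                        continue
                    plus = sum(abs(Phi[i, j] + Phi[k, j]) for j in I)
                    minus = sum(abs(Phi[i, j] - Phi[k, j]) for j in I)
                    if not Phi[i, i] > C0 * max(plus, minus) + abs(Phi[i, k]):
                        return False
    return True

near_identity = np.eye(4) + 0.05 * (np.ones((4, 4)) - np.eye(4))
flat = np.ones((4, 4))
# the strongly diagonal matrix passes; the flat one fails
```

The combinatorial cost grows as p^{s+1}, which is exactly why the deterministic condition is verified probabilistically for random designs in the next section rather than checked directly.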
Another common assumption in the high dimensional literature is the restricted eigenvalue condition (REC). Compared to REC, RDD is not necessarily stronger, due to its flexibility in choosing the ancillary matrix A. [17, 18] prove that the REC is satisfied when the design matrix is sub-Gaussian. However, REC might not be guaranteed when the rows of X follow a heavy-tailed distribution. In contrast, as the example in the next section and in [15] shows, by choosing A = X^T(XX^T)^{-1} the resulting estimator satisfies RDD even when the rows of X follow heavy-tailed distributions.

5 Screening under random designs

In this section, we consider linear screening under random designs when X and ϵ are Gaussian. The theory developed in this section can be easily extended to a broader family of distributions, for example, where ϵ follows a sub-Gaussian distribution [19] and X follows an elliptical distribution [11, 15]. We focus on the Gaussian case for conciseness. Let ϵ ∼ N(0, σ²) and X ∼ N(0, Σ). We prove the screening consistency of SIS and HOLP by verifying the condition in Theorem 2. Recall that the ancillary matrices for SIS and HOLP are defined respectively as

A_SIS = X^T/n,  A_HOLP = X^T(XX^T)^{-1}.

For simplicity, we assume Σ_ii = 1 for i = 1, 2, ..., p. To verify the RDD condition, it is essential to quantify the magnitude of the entries of AX and Aϵ.

Lemma 1. Let Φ = A_SIS X. Then for any t > 0 and i ≠ j ∈ Q, we have

P( |Φ_ii − Σ_ii| ≥ t ) ≤ 2 exp( −min{ t²n/(8e²K), tn/(2eK) } ),

and

P( |Φ_ij − Σ_ij| ≥ t ) ≤ 6 exp( −min{ t²n/(72e²K), tn/(6eK) } ),

where K = ‖χ²(1) − 1‖_{ψ1} is a constant, χ²(1) is a chi-square random variable with one degree of freedom, and the norm ‖·‖_{ψ1} is defined in [19].

Lemma 1 states that the screening matrix Φ = A_SIS X for SIS will eventually converge to the covariance matrix Σ in ℓ_∞ as n tends to infinity and log p = o(n). Thus, the screening performance of SIS strongly relies on the structure of Σ.
In particular, the (asymptotically) necessary and sufficient condition for SIS to be strong screening consistent is that Σ satisfies the RDD condition. For the noise term, we have the following lemma.

Lemma 2. Let η = A_SIS ϵ. For any t > 0 and i ∈ Q, we have

P( |η_i| ≥ σt ) ≤ 6 exp( −min{ t²n/(72e²K), tn/(6eK) } ),

where K is defined as in Lemma 1.

The proof of Lemma 2 is essentially the same as the proof for the off-diagonal terms in Lemma 1 and is thus omitted. As indicated before, the necessary and sufficient condition for SIS to be strong screening consistent is that Σ follows RDD. As RDD is usually hard to verify, we consider a stronger sufficient condition inspired by Corollary 1.

Theorem 5. Let r = max_{i≠j} |Σ_ij|. If r < 1/(2ρs), then for any δ > 0, if the sample size satisfies

n > 144K ( (1 + 2ρs + 2σ/τ) / (1 − 2ρsr) )² log(3p/δ),    (5)

where K is defined in Lemma 1, then with probability at least 1 − δ, Φ = A_SIS X − 2τ^{-1}‖A_SIS ϵ‖_∞ I_p is restricted diagonally dominant with sparsity s and C0 ≥ ρ. In other words, SIS is screening consistent for any β ∈ B_τ(s, ρ).

Proof. Taking a union bound over the results from Lemmas 1 and 2, we have for any t > 0 and p > 2,

P( min_{i∈Q} Φ_ii ≤ 1 − t  or  max_{i≠j} |Φ_ij| ≥ r + t  or  ‖η‖_∞ ≥ σt ) ≤ 7p² exp( −(n/K) min{ t²/(72e²), t/(6e) } ).

In other words, for any δ > 0, when n ≥ K log(7p²/δ), with probability at least 1 − δ we have

min_{i∈Q} Φ_ii ≥ 1 − 6√2 e √(K log(7p²/δ)/n),
max_{i≠j} |Φ_ij| ≤ r + 6√2 e √(K log(7p²/δ)/n),
max_{i∈Q} |η_i| ≤ 6√2 e σ √(K log(7p²/δ)/n).

A sufficient condition for Φ to be restricted diagonally dominant is

min_i Φ_ii > 2ρs max_{i≠j} |Φ_ij| + 2τ^{-1} max_i |η_i|.

Plugging in the values above, we require

1 − 6√2 e √(K log(7p²/δ)/n) > 2ρs ( r + 6√2 e √(K log(7p²/δ)/n) ) + 12√2 e τ^{-1} σ √(K log(7p²/δ)/n).

Solving the above inequality (noticing that 7p²/δ < 9p²/δ² and ρ > 1) completes the proof.
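The two ancillary matrices translate directly into code; a quick simulation on an independent Gaussian design (a sketch under assumed dimensions and signal values, chosen to be well separated so both screeners succeed):

```python
import numpy as np

def sis_screener(X, Y):
    """SIS: beta_hat = (X^T / n) Y, the vector of scaled marginal correlations."""
    return (X.T @ Y) / X.shape[0]

def holp_screener(X, Y):
    """HOLP: beta_hat = X^T (X X^T)^{-1} Y (requires p > n)."""
    return X.T @ np.linalg.solve(X @ X.T, Y)

rng = np.random.default_rng(1)
n, p = 400, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [10.0, -8.0, 9.0]
Y = X @ beta + 0.1 * rng.standard_normal(n)
sis_top = set(np.argsort(-np.abs(sis_screener(X, Y)))[:3].tolist())
holp_top = set(np.argsort(-np.abs(holp_screener(X, Y)))[:3].tolist())
# with this well-separated signal, both rank {0, 1, 2} on top
```

For correlated designs (r = max_{i≠j}|Σ_ij| large) the SIS ranking degrades much faster than HOLP, in line with the constraint r < 1/(2ρs) in Theorem 5.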
The requirement that $\max_{i\ne j}|\Sigma_{ij}| < \frac{1}{2\rho s}$, or the necessary and sufficient condition that Σ is RDD, strictly constrains the correlation structure of X; this is what makes it difficult for SIS to be strong screening consistent. For HOLP we instead have the following result.

Lemma 3. Let $\Phi = A_{HOLP}X$. Assume $p > c_0 n$ for some $c_0 > 1$. Then for any $C > 0$ there exist constants $0 < c_1 < 1 < c_2$ and $c_3 > 0$ such that for any $t > 0$ and any $i \in Q$, $j \ne i$, we have

$$P\Big(|\Phi_{ii}| < \frac{c_1}{\kappa}\frac{n}{p}\Big) \le 2e^{-Cn}, \qquad P\Big(|\Phi_{ii}| > c_2\kappa\frac{n}{p}\Big) \le 2e^{-Cn},$$

and

$$P\Big(|\Phi_{ij}| > c_4\kappa t\frac{\sqrt{n}}{p}\Big) \le 5e^{-Cn} + 2e^{-t^2/2},$$

where $c_4 = \sqrt{c_2(c_0-c_1)}\big/\sqrt{c_3(c_0-1)}$.

Proof. The proof of Lemma 3 relies heavily on results for the Stiefel manifold provided in the supplementary materials; we only sketch the basic idea here and leave the complete proof to the supplementary materials. Define $H = X^T(XX^T)^{-1/2}$. Then $\Phi = HH^T$, and H follows the matrix angular central Gaussian (MACG) distribution with covariance Σ. The diagonal terms of $HH^T$ can be bounded via the Johnson–Lindenstrauss lemma, using the fact that $HH^T = \Sigma^{1/2}U(U^T\Sigma U)^{-1}U^T\Sigma^{1/2}$, where U is a $p \times n$ random projection matrix. For the off-diagonal terms, we decompose the Stiefel manifold as $H = (G(H_2)H_1 \;\; H_2)$, where $H_1$ is a $(p-n+1)\times 1$ vector, $H_2$ is a $p\times(n-1)$ matrix, and $G(H_2)$ is chosen so that $(G(H_2)\;\; H_2) \in O(p)$; we then show that, conditional on $H_2$, $H_1$ follows the angular central Gaussian (ACG) distribution with covariance $G(H_2)^T\Sigma G(H_2)$. It can be shown that

$$e_2^T HH^T e_1 \overset{d}{=} e_2^T G(H_2)H_1 \,\big|\, e_1^T H_2 = 0.$$

Let $t_1^2 = e_1^T HH^T e_1$; then $e_1^T H_2 = 0$ is equivalent to $e_1^T G(H_2)H_1 = t_1$, and we obtain the desired coupling distribution

$$e_2^T HH^T e_1 \overset{d}{=} e_2^T G(H_2)H_1 \,\big|\, e_1^T G(H_2)H_1 = t_1.$$

Using the normal representation of $ACG(\Sigma)$, i.e., if $x = (x_1, \ldots, x_p) \sim N(0, \Sigma)$ then $x/\|x\| \sim ACG(\Sigma)$, we can write $G(H_2)H_1$ in terms of normal variables and then bound all terms using concentration inequalities.

Lemma 3 quantifies the entries of the screening matrix for HOLP.
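A quick simulation makes the scaling in Lemma 3 concrete (the sizes are assumed for illustration, and Σ = I so κ = 1). Since Φ = A_HOLP X is the rank-n projection onto the row space of X, its diagonal averages exactly n/p, while the off-diagonal entries are an order √n/p smaller:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 1000                               # assumed illustrative sizes, p > n
X = rng.standard_normal((n, p))                # Sigma = I, i.e. kappa = 1
Phi = X.T @ np.linalg.solve(X @ X.T, X)        # Phi = A_HOLP X = H H^T

diag = np.diag(Phi)
off_max = np.abs(Phi - np.diag(diag)).max()

# Phi is the rank-n projection onto the row space of X, so trace(Phi) = n
# and the diagonal entries average exactly n/p.
print(diag.mean(), diag.min(), off_max)
```

Every diagonal entry dominates every off-diagonal entry here, which is the raw material of the RDD property for HOLP.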
As illustrated in the lemma, regardless of the covariance Σ, the diagonal terms of Φ are always of order $n/p$ and the off-diagonal terms of order $\sqrt{n}/p$. Thus, once $n \ge O(s^2)$, Φ satisfies the RDD condition with large probability. For the noise vector we have the following result.

Lemma 4. Let $\eta = A_{HOLP}\epsilon$. Assume $p > c_0 n$ for some $c_0 > 1$. Then for any $C > 0$ there exist the same $c_1, c_2, c_3$ as in Lemma 3 such that for any $t > 0$ and $i \in Q$,

$$P\Big(|\eta_i| \ge \frac{2\sigma\sqrt{c_2}\,\kappa t}{1 - c_0^{-1}}\frac{\sqrt{n}}{p}\Big) < 4e^{-Cn} + 2e^{-t^2/2},$$

provided $n \ge 8C/(c_0-1)^2$. The proof is almost identical to that of Lemma 2 and is provided in the supplementary materials. Combining Lemmas 3 and 4 yields the following theorem.

Theorem 6. Assume $p > c_0 n$ for some $c_0 > 1$. For any $\delta > 0$, if the sample size satisfies

$$n > \max\Big\{2C'\kappa^4(\rho s + \sigma/\tau)^2\log(3p/\delta),\ \frac{8C}{(c_0-1)^2}\Big\}, \qquad (6)$$

where $C' = \max\big\{\frac{4c_4^2}{c_1^2}, \frac{4c_2}{c_1^2(1-c_0^{-1})^2}\big\}$ and $c_1, c_2, c_3, c_4, C$ are the constants defined in Lemma 3, then with probability at least $1-\delta$, $\Phi = A_{HOLP}X - 2\tau^{-1}\|A_{HOLP}\epsilon\|_\infty I_p$ is restricted diagonally dominant with sparsity s and $C_0 \ge \rho$. This implies that HOLP is screening consistent for any $\beta \in B_\tau(s, \rho)$.

Proof. Notice that if

$$\min_i |\Phi_{ii}| > 2s\rho\max_{i\ne j}|\Phi_{ij}| + 2\tau^{-1}\|X^T(XX^T)^{-1}\epsilon\|_\infty, \qquad (7)$$

then the proof is complete, because $\Phi - 2\tau^{-1}\|X^T(XX^T)^{-1}\epsilon\|_\infty I_p$ is then a restricted diagonally dominant matrix. Let $t = \sqrt{Cn}/\nu$. The above requirement then becomes

$$\frac{c_1}{\kappa}\frac{n}{p} - \frac{2c_4\sqrt{C}\kappa s\rho}{\nu}\frac{n}{p} - \frac{2\sigma\sqrt{c_2 C}\kappa}{(1-c_0^{-1})\tau\nu}\frac{n}{p} = \Big(\frac{c_1}{\kappa} - \frac{2c_4\sqrt{C}\kappa s\rho}{\nu} - \frac{2\sigma\sqrt{c_2 C}\kappa}{(1-c_0^{-1})\tau\nu}\Big)\frac{n}{p} > 0,$$

which implies that

$$\nu > \frac{2c_4\sqrt{C}\kappa^2\rho s}{c_1} + \frac{2\sigma\sqrt{c_2 C}\kappa^2}{c_1(1-c_0^{-1})\tau} = C_1\kappa^2\rho s + C_2\kappa^2\tau^{-1}\sigma > 1,$$

where $C_1 = \frac{2c_4\sqrt{C}}{c_1}$ and $C_2 = \frac{2\sqrt{c_2 C}}{c_1(1-c_0^{-1})}$. Therefore, taking a union bound over all matrix entries, we have

$$P\big((7)\ \text{does not hold}\big) < (p + 5p^2)e^{-Cn} + 2p^2 e^{-Cn/\nu^2} < \Big(7 + \frac{1}{n}\Big)p^2 e^{-Cn/\nu^2},$$

where the second inequality is due to the fact that $p > n$ and $\nu > 1$. Now for any $\delta > 0$, (7) holds with probability at least $1-\delta$ provided

$$n \ge \frac{\nu^2}{C}\big(\log(7 + 1/n) + 2\log p - \log\delta\big),$$

which is satisfied (noticing $\sqrt{8} < 3$) whenever

$$n \ge \frac{2\nu^2}{C}\log\frac{3p}{\delta}.$$
Now setting ν to its lower bound above gives (6), the precise condition we need.

There are several interesting observations on conditions (5) and (6). First, $(\rho s + \sigma/\tau)^2$ appears in both expressions. We note that ρs measures the sparsity and the diversity of the signal β, while σ/τ is closely related to the signal-to-noise ratio. Furthermore, HOLP replaces the correlation constraint $r < 1/(2\rho s)$, or the covariance constraint that Σ is RDD, with a condition-number constraint. Thus, for any Σ, as long as the sample size is large enough, strong screening consistency is assured. Finally, HOLP provides an example satisfying the RDD condition, in answer to the question raised in Section 4.

6 Concluding remarks

This article studies and establishes a necessary and sufficient condition, in the form of restricted diagonally dominant screening matrices, for the strong screening consistency of a linear screener. We verify the condition for both SIS and HOLP under random designs. In addition, we show a close relationship between RDD and the IC, highlighting the difficulty of using SIS for screening with arbitrarily correlated predictors. For future work, it is of interest to see how linear screening can be adapted to compressed sensing [20] and how techniques such as preconditioning [21] can improve the performance of marginal screening and variable selection.

Acknowledgments

This research was partly supported by grant NIH R01-ES017436 from the National Institute of Environmental Health Sciences.

References

[1] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[2] Richard Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24(4), 2007.
[3] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 58(1):267–288, 1996.
[4] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties.
Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[5] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 35(6):2313–2351, 2007.
[6] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[7] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[8] Peng Zhao and Bin Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
[9] Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using l1-constrained quadratic programming. IEEE Transactions on Information Theory, 2009.
[10] Jason D. Lee, Yuekai Sun, and Jonathan E. Taylor. On model selection consistency of M-estimators with geometrically decomposable penalties. Advances in Neural Information Processing Systems, 2013.
[11] Jianqing Fan and Jinchi Lv. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849–911, 2008.
[12] Gaorong Li, Heng Peng, Jun Zhang, Lixing Zhu, et al. Robust rank correlation based screening. The Annals of Statistics, 40(3):1846–1877, 2012.
[13] Hansheng Wang. Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association, 104(488):1512–1524, 2009.
[14] Haeran Cho and Piotr Fryzlewicz. High dimensional variable selection via tilting. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):593–622, 2012.
[15] Xiangyu Wang and Chenlei Leng. High-dimensional ordinary least-squares projection for screening variables. https://stat.duke.edu/~xw56/holp-paper.pdf, 2015.
[16] Cun-Hui Zhang and Jian Huang.
The sparsity and bias of the lasso selection in high-dimensional linear regression. The Annals of Statistics, 36(4):1567–1594, 2008.
[17] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Restricted eigenvalue properties for correlated Gaussian designs. The Journal of Machine Learning Research, 11:2241–2259, 2010.
[18] Shuheng Zhou. Restricted eigenvalue conditions on subgaussian random matrices. arXiv preprint arXiv:0912.4045, 2009.
[19] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[20] Lingzhou Xue and Hui Zou. Sure independence screening and compressed random sensing. Biometrika, 98(2):371–380, 2011.
[21] Jinzhu Jia and Karl Rohe. Preconditioning to comply with the irrepresentable condition. arXiv preprint arXiv:1208.5584, 2012.
Revenue Optimization against Strategic Buyers

Mehryar Mohri
Courant Institute of Mathematical Sciences
251 Mercer Street, New York, NY 10012

Andrés Muñoz Medina*
Google Research
111 8th Avenue, New York, NY 10011

Abstract

We present a revenue optimization algorithm for posted-price auctions when facing a buyer with random valuations who seeks to optimize his γ-discounted surplus. In order to analyze this problem we introduce the notion of an ϵ-strategic buyer, a more natural notion of strategic behavior than what has been considered in the past. We improve upon the previous state of the art and achieve an optimal regret bound in $O(\log T + 1/\log(1/\gamma))$ when the seller selects prices from a finite set, and provide a regret bound in $\tilde{O}(\sqrt{T} + T^{1/4}/\log(1/\gamma))$ when the prices offered are selected from the interval [0, 1].

1 Introduction

Online advertisement is currently the fastest-growing form of advertising. This growth has been motivated, among other reasons, by the existence of well-defined metrics of effectiveness such as click-through and conversion rates. Moreover, online advertisement enables the design of better-targeted campaigns by allowing advertisers to decide which type of consumers should see their advertisement. These advantages have promoted the fast-paced development of a large number of advertising platforms. Among them, AdExchanges have increased in popularity in recent years. In contrast to traditional advertising, AdExchanges do not involve contracts between publishers and advertisers. Instead, advertisers are allowed to bid in real time for the right to display their ad. An AdExchange works as follows: when a user visits a publisher's website, the publisher sends this information to the AdExchange, which runs a second-price auction with reserve (Vickrey, 1961; Milgrom, 2004) among all interested advertisers.
Finally, the winner of the auction obtains the right to display his ad on the publisher's website and pays the maximum of the second-highest bid and the reserve price. In practice, this process is performed in milliseconds, resulting in millions of transactions recorded daily by the AdExchange. Thus, one might expect the AdExchange to benefit from this information by learning how much an advertiser values the right to display his ad, and by setting an optimal reserve price. This idea has recently motivated research in the learning community on revenue optimization in second-price auctions with reserve (Mohri and Medina, 2014a; Cui et al., 2011; Cesa-Bianchi et al., 2015). The algorithms proposed by these authors rely heavily on the assumption that the advertisers' bids are drawn i.i.d. from some underlying distribution. However, if an advertiser is aware of the fact that the AdExchange or publisher is using a revenue optimization algorithm, then, most likely, he would adjust his behavior to trick the publisher into offering a more beneficial price in the future. Under this scenario, the assumptions of (Mohri and Medina, 2014a) and (Cesa-Bianchi et al., 2015) would be violated. In fact, empirical evidence of strategic behavior by advertisers has been documented by Edelman and Ostrovsky (2007). It is therefore critical to analyze the interactions between publishers and strategic advertisers.

*This work was partially done at the Courant Institute of Mathematical Sciences.

In this paper, we consider the simpler scenario of revenue optimization in posted-price auctions with strategic buyers, first analyzed by Amin et al. (2013). As pointed out by Amin et al. (2013), the study of this simplified problem is truly relevant, since a large number of auctions run by AdExchanges consist of only one buyer (or one buyer with a large bid and several buyers with negligible bids).
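As a toy sketch (not code from the paper), the mechanism just described takes only a few lines: the highest bidder wins and pays the larger of the second-highest bid and the reserve, and with a single significant bidder it degenerates into a posted price.

```python
def second_price_with_reserve(bids, reserve):
    """Return (winner_index, payment), or None if no bid clears the reserve.
    The highest bidder wins and pays max(second-highest bid, reserve)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    if bids[order[0]] < reserve:
        return None
    second = bids[order[1]] if len(bids) > 1 else 0.0
    return order[0], max(second, reserve)

# With a single significant buyer the auction reduces to a posted-price sale:
# the buyer either accepts the reserve price or there is no sale.
print(second_price_with_reserve([0.7], reserve=0.5))        # accepted posted price
print(second_price_with_reserve([0.4], reserve=0.5))        # rejected posted price
print(second_price_with_reserve([0.9, 0.6], reserve=0.5))   # ordinary second-price case
```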
In this scenario, a second-price auction in fact reduces to a posted-price auction where the seller sets a reserve price and the buyer decides to accept it (bid above it) or reject it (bid below). To analyze the sequential nature of this problem, we can cast it as a repeated game between a buyer and a seller, where a strategic buyer seeks to optimize his surplus while the seller seeks to collect the largest possible revenue from the buyer. This can be viewed as an instance of a repeated nonzero-sum game with incomplete information, a problem that has been well studied in the Economics and Game Theory community (Nachbar, 1997, 2001). However, such previous work has mostly concentrated on the characterization of different types of achievable equilibria, as opposed to the design of an algorithm for the seller. Furthermore, the problem we consider admits a particular structure that can be exploited to derive learning algorithms with more favorable guarantees for the specific task of revenue optimization. The problem can also be viewed as an instance of a multi-armed bandit problem (Auer et al., 2002; Lai and Robbins, 1985), more specifically a particular type of continuous bandit problem previously studied by Kleinberg and Leighton (2003). Indeed, at every time t the seller can only observe the revenue of the price he offered, and his goal is to find, as fast as possible, the price that would yield the largest expected revenue. Unlike in a bandit problem, however, here the performance of an algorithm cannot be measured in terms of the external regret. Indeed, as observed by Bubeck and Cesa-Bianchi (2012) and Arora et al. (2012), the notion of external regret becomes meaningless when facing an adversary that reacts to the learner's actions.
In short, instead of comparing to the best revenue achievable by a fixed price over the sequence of rewards actually seen, one should compare against the simulated sequence of rewards that would have been seen had the seller played a fixed price. This notion of regret is known as strategic regret, and regret minimization algorithms for it have been proposed before under different scenarios (Amin et al., 2013, 2014; Mohri and Medina, 2014a). In this paper we provide a regret minimization algorithm for the stochastic scenario, where, at each round, the buyer receives an i.i.d. valuation from an underlying distribution. While this random-valuation assumption might seem surprising, it is in fact standard in the study of auctions (Milgrom and Weber, 1982; Milgrom, 2004; Cole and Roughgarden, 2014). Moreover, in practice, advertisers rarely interact directly with an AdExchange. Instead, several advertisers are part of an ad network, and it is that ad network that bids on their behalf. Therefore, the valuation of the ad network is not likely to remain fixed. Our model is also motivated by the fact that the valuation of an advertiser depends on the user visiting the publisher's website. Since these visits can be considered random, it follows that the buyer's valuation is in fact a random variable. A crucial component of our analysis is the definition of a strategic buyer. We consider a buyer who seeks to optimize his cumulative discounted surplus. However, we show that a buyer who exactly maximizes his surplus must have unlimited computational power, which is not a realistic assumption in practice. Instead, we define the notion of an ϵ-strategic buyer, who seeks only to approximately optimize his surplus. Our main contribution is to show that, when facing an ϵ-strategic buyer, a seller can achieve O(log T) regret when the set of possible prices to offer is finite, and an O(√T) regret bound when the set of prices is [0, 1].
Remarkably, these regret bounds match those given by Kleinberg and Leighton (2003) in a truthful scenario where the buyer does not behave strategically. The rest of this paper is organized as follows. In Section 2, we discuss related previous work in more detail. Next, we define the problem setup more formally (Section 3); in particular, we give a precise definition of the notion of an ϵ-strategic buyer (Section 3.2). Our main algorithm for a finite set of prices is described in Section 4, where we also provide a regret analysis. In Section 5, we extend our algorithm to the continuous case, where we show that a regret in O(√T) can be achieved.

2 Previous work

The problem of revenue optimization in auctions goes back to the seminal work of Myerson (1981), who showed that, under some regularity assumptions on the distribution D, the revenue-optimal, incentive-compatible mechanism is a second-price auction with reserve. This result applies to single-shot auctions where buyers and the seller interact only once and the underlying value distribution is known to the seller. In practice, however, it is not realistic to assume that the seller has access to this distribution. Instead, in cases such as online advertisement, the seller interacts with the buyer a large number of times and can therefore infer his behavior from historical data. This fact has motivated the design of several learning algorithms, such as that of (Cesa-Bianchi et al., 2015), who proposed a bandit algorithm for revenue optimization in second-price auctions, and the work of (Mohri and Medina, 2014a), who provided learning guarantees and an algorithm for revenue optimization where each auction is associated with a feature vector. The aforementioned algorithms are formulated under the assumption of buyers bidding in an i.i.d. fashion, and do not take into account the fact that buyers can in fact react to the use of revenue optimization algorithms by the seller.
This has motivated a series of publications focusing on this particular problem. Bikhchandani and McCardle (2012) analyzed the same problem proposed here when the buyer and seller interact for only two rounds. Kanoria and Nazerzadeh (2014) considered a repeated game of second-price auctions where the seller knows that the value distribution can be either high, meaning it is concentrated around high values, or low, and his goal is to find out from which distribution the valuations are drawn, under the assumption that buyers can behave strategically. Finally, the scenario considered here was first introduced by Amin et al. (2013), where the authors solve the problem of optimizing revenue against a strategic buyer with a fixed valuation and show that a seller can achieve regret in $O\big(\frac{\sqrt{T}}{1-\gamma}\big)$. Mohri and Medina (2014b) later showed that one can in fact achieve a regret in $O\big(\frac{\log T}{1-\gamma}\big)$, closing the gap with the lower bound up to a factor of log T. The scenario of random valuations we consider here was also analyzed by Amin et al. (2013), where an algorithm achieving regret in $O\big(|P|T^{\alpha} + \frac{1}{(1-\gamma)^{1/\alpha}} + \frac{1}{\Delta^{1/\alpha}}\big)$ was proposed when prices are offered from a finite set P, with $\Delta = \min_{p\in P} p^* D(v > p^*) - pD(v > p)$ and α a free parameter. Finally, an extension of this algorithm to the contextual setting was presented by the same authors in (Amin et al., 2014), where they provide an algorithm achieving $O\big(\frac{T^{2/3}}{1-\gamma}\big)$ regret. The algorithms proposed by Amin et al. (2013, 2014) alternate between exploration and exploitation: there are rounds where the seller only tries to estimate the value of the buyer, and other rounds where he uses this information to try to extract the largest possible revenue. It is well known in the bandit literature (Dani and Hayes, 2006; Abernethy et al., 2008) that algorithms that ignore information obtained in exploitation rounds tend to be sub-optimal.
Indeed, even in a truthful scenario where the UCB algorithm (Auer et al., 2002) achieves regret in $O(\frac{\log T}{\Delta})$, the algorithm proposed by Amin et al. (2013) achieves sub-optimal regret in $O\big(e^{\sqrt{\log T \log\frac{1}{\Delta}}}\big)$ for the optimal choice of α, which, incidentally, also requires access to the unknown value Δ. We propose instead an algorithm inspired by the UCB strategy, using exploration and exploitation simultaneously. We show that our algorithm admits a regret in $O\big(\frac{\log T}{\Delta} + \frac{|P|}{\log(1/\gamma)}\big)$, which matches the UCB bound in the truthful scenario and which depends on γ only through the additive term $\frac{1}{\log(1/\gamma)} \approx \frac{1}{1-\gamma}$, known to be unavoidable (Amin et al., 2013). Our results cannot be directly compared with those of Amin et al. (2013), since they consider a fully strategic adversary whereas we consider an ϵ-strategic adversary. As we will see in the next section, however, the notion of an ϵ-strategic adversary is in fact more natural than that of a buyer who exactly optimizes his discounted surplus. Moreover, it is not hard to show that, when applied to our scenario, perhaps modulo a constant, the algorithm of Amin et al. (2013) cannot achieve a better regret than in the fully strategic case.

3 Setup

We consider the following scenario, similar to the one introduced by Amin et al. (2013).

3.1 Scenario

A buyer and a seller interact for T rounds. At each round $t \in \{1, \ldots, T\}$, the seller attempts to sell some good to the buyer, such as the right to display an ad. The buyer receives a valuation $v_t \in [0, 1]$, which is unknown to the seller and is sampled from a distribution D. The seller offers a price $p_t$, in response to which the buyer selects an action $a_t \in \{0, 1\}$, with $a_t = 1$ indicating that he accepts the price and $a_t = 0$ otherwise. We will say the buyer lies if he accepts the price at time t ($a_t = 1$) while the price offered is above his valuation ($v_t \le p_t$), or when he rejects the price ($a_t = 0$) while his valuation is above the price offered ($v_t > p_t$).
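A toy example illustrates why lying can pay for a patient buyer. Assume a hypothetical seller rule (purely for illustration, not the paper's algorithm): post 0.9 until the first rejection, then 0.1 forever. A buyer with fixed valuation 1 can solve this tiny decision problem by backward induction on his γ-discounted surplus:

```python
T, V = 20, 1.0                     # horizon and the buyer's fixed valuation
P_HIGH, P_LOW = 0.9, 0.1           # hypothetical two-level seller rule

def best_surplus(gamma):
    """Buyer's optimal discounted surplus, by backward induction over (round, price level)."""
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def S(t, low):
        if t > T:
            return 0.0
        price = P_LOW if low else P_HIGH
        accept = gamma ** (t - 1) * (V - price) + S(t + 1, low)
        reject = S(t + 1, True)    # any rejection drops the price for good
        return max(accept, reject)

    return S(1, False)

def truthful_surplus(gamma):       # always accept (V > price), so the price stays high
    return sum(gamma ** (t - 1) * (V - P_HIGH) for t in range(1, T + 1))

print(best_surplus(0.8), truthful_surplus(0.8))   # patient buyer: lying wins
print(best_surplus(0.1), truthful_surplus(0.1))   # impatient buyer: truthfulness is optimal
```

For γ = 0.8 the optimal policy rejects the first price even though it lies below the valuation, i.e. it lies; for γ = 0.1 the truthful policy is already optimal. This is the intuition behind the discount-dependent terms in the regret bounds below.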
The seller seeks to optimize his expected revenue over the T rounds of interaction, that is,

$$\text{Rev} = E\Big[\sum_{t=1}^{T} a_t p_t\Big].$$

Notice that, when facing a truthful buyer, for any price p the expected revenue of the seller is given by $pD(v > p)$. Therefore, with knowledge of D, the seller could set all prices $p_t$ to $p^*$, where $p^* \in \operatorname{argmax}_{p\in[0,1]} pD(v > p)$. Since the actions of the buyer do not affect the choice of future prices by the seller, the buyer has no incentive to lie, and the seller will obtain an expected revenue of $Tp^*D(v > p^*)$. It is therefore natural to measure the performance of any revenue optimization algorithm in terms of the following notion of strategic regret:

$$\text{Reg}_T = Tp^*D(v > p^*) - \text{Rev} = \max_{p\in[0,1]} TpD(v > p) - E\Big[\sum_{t=1}^{T} a_t p_t\Big].$$

The objective of the seller coincides with the one assumed by Kleinberg and Leighton (2003) in the study of repeated interactions with buyers with a random valuation. However, here we will allow the buyer to behave strategically, which results in a harder problem. Nevertheless, the buyer is not assumed to be fully adversarial as in (Kleinberg and Leighton, 2003). Instead, we will assume, as discussed in detail in the next section, that the buyer seeks to approximately optimize his surplus, which can be viewed as a more natural assumption.

3.2 ϵ-strategic Buyers

Here, we define the family of buyers considered throughout this paper. We denote by $x_{1:t} \in \mathbb{R}^t$ the vector $(x_1, \ldots, x_t)$ and define the history of the game up to time t by $H_t := \{p_{1:t}, v_{1:t}, a_{1:t}\}$. Before the first round, the seller decides on an algorithm A for setting prices, and this algorithm is announced to the buyer. The buyer then selects a strategy $B\colon (H_{t-1}, v_t, p_t) \mapsto a_t$. For any value $\gamma \in (0, 1)$ and strategy B, we define the buyer's discounted expected surplus by

$$\text{Sur}_\gamma(B) = E\Big[\sum_{t=1}^{T} \gamma^{t-1} a_t (v_t - p_t)\Big].$$
A buyer maximizing this discounted surplus wishes to acquire the item as inexpensively as possible, but does not wish to wait too long to obtain a favorable price. In order to optimize his surplus, a buyer must then solve a non-homogeneous Markov decision process (MDP). Indeed, consider the scenario where at time t the seller offers prices from a distribution $D_t \in \mathcal{D}$, where $\mathcal{D}$ is a family of probability distributions over the interval [0, 1]. The seller updates his beliefs as follows: the current distribution $D_t$ is selected as a function of the distribution at the previous round as well as the history $H_{t-1}$ (which is all the information available to the seller). More formally, we let $f_t\colon (D_t, H_t) \mapsto D_{t+1}$ be a transition function for the seller. Let $s_t = (D_t, H_{t-1}, v_t, p_t)$ denote the state of the environment at time t, that is, all the information available to the buyer at time t. Finally, let $S_t(s_t)$ denote the maximum attainable expected surplus of a buyer who is in state $s_t$ at time t. It is clear that $S_t$ satisfies the following Bellman equations:

$$S_t(s_t) = \max_{a_t\in\{0,1\}} \gamma^{t-1} a_t (v_t - p_t) + E_{(v_{t+1}, p_{t+1}) \sim D \times f_t(D_t, H_t)}\big[S_{t+1}\big(f_t(D_t, H_t), H_t, v_{t+1}, p_{t+1}\big)\big], \qquad (1)$$

with the boundary condition $S_T(s_T) = \gamma^{T-1}(v_T - p_T)1_{p_T \le v_T}$.

Definition 1. A buyer is said to be strategic if his action at time t is a solution of the Bellman equation (1).

Notice that, depending on the choice of the family $\mathcal{D}$, the number of states of the MDP solved by a strategic buyer may be infinite. Even for a deterministic algorithm that offers prices from a finite set P, the number of states of this MDP would be in $\Omega(T^{|P|})$, which quickly becomes intractable. Thus, in view of the prohibitive cost of computing his actions, the model of a fully strategic buyer does not seem realistic. We introduce instead the concept of ϵ-strategic buyers.

Definition 2.
A buyer is said to be ϵ-strategic if he behaves strategically, except when no sequence of actions can improve upon the future surplus of the truthful sequence by more than $\gamma^{t_0}\epsilon$, or except for the first $0 < t < t_0$ rounds, for some $t_0 \ge 0$ depending only on the seller's algorithm, in which cases he acts truthfully.

We show in Section 4 that this definition implies the existence of $t_1 > t_0$ such that an ϵ-strategic buyer only solves an MDP over the interval $[t_0, t_1]$, which becomes a tractable problem for $t_1 \ll T$. The parameter $t_0$ used in the definition is introduced to cover the unlikely scenario where the seller's algorithm deliberately ignores all information observed during the rounds $0 < t < t_0$, in which case it is optimal for the buyer to behave truthfully. Our definition is motivated by the fact that, for a buyer with bounded computational power, there is no incentive to act non-truthfully if the gain in surplus over a truthful behavior is negligible.

4 Regret Analysis

We now turn our attention to the problem faced by the seller, whose goal is to maximize his revenue. When the buyer is truthful, Kleinberg and Leighton (2003) have shown that this problem can be cast as a continuous bandit problem. In that scenario, the strategic regret in fact coincides with the pseudo-regret, which is the quantity commonly minimized in a stochastic bandit setting (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012). Thus, if the set of possible prices P is finite, the seller can use the UCB algorithm (Auer et al., 2002) to minimize his pseudo-regret. In the presence of an ϵ-strategic buyer, however, the rewards are no longer stochastic. Therefore, we need to analyze the regret of a seller in the presence of lies. Let P denote a finite set of prices offered by the seller. Define $\mu_p = pD(v > p)$ and $\Delta_p = \mu_{p^*} - \mu_p$. For every price $p \in P$, define also $T_p(t)$ to be the number of times price p has been offered up to time t.
We will denote by $T^*$ and $\mu^*$ the corresponding quantities associated with the optimal price $p^*$.

Lemma 1. Let L denote the number of times the buyer lies. For any $\delta > 0$, the strategic regret of a seller can be bounded as follows:

$$\text{Reg}_T \le E[L] + \sum_{p:\,\Delta_p > \delta} E[T_p(T)]\,\Delta_p + T\delta.$$

Proof. Let $L_t$ denote the event that the buyer lies at round t. The expected revenue of a seller is given by

$$E\Big[\sum_{t=1}^{T}\sum_{p\in P} a_t p_t 1_{p_t=p}(1_{L_t} + 1_{L_t^c})\Big] \ge E\Big[\sum_{t=1}^{T}\sum_{p\in P} a_t p_t 1_{p_t=p} 1_{L_t^c}\Big] = E\Big[\sum_{p\in P}\sum_{t=1}^{T} 1_{v_t>p}\,p\,1_{p_t=p} 1_{L_t^c}\Big],$$

where the last equality follows from the fact that $a_t = 1_{v_t>p}$ when the buyer is truthful. Moreover, using the fact that $\sum_{t=1}^{T} 1_{L_t} = L$, we have

$$E\Big[\sum_{p\in P}\sum_{t=1}^{T} 1_{v_t>p}\,p\,1_{p_t=p} 1_{L_t^c}\Big] = E\Big[\sum_{p\in P}\sum_{t=1}^{T} 1_{v_t>p}\,p\,1_{p_t=p}\Big] - E\Big[\sum_{p\in P}\sum_{t=1}^{T} 1_{v_t>p}\,p\,1_{p_t=p} 1_{L_t}\Big] = \sum_{p\in P}\mu_p E[T_p(T)] - E\Big[\sum_{t=1}^{T} 1_{v_t>p_t}\,p_t 1_{L_t}\Big] \ge \sum_{p\in P}\mu_p E[T_p(T)] - E[L].$$

Since the regret of offering prices for which $\Delta_p \le \delta$ is bounded by $T\delta$, it follows that the regret of the seller is bounded by $E[L] + \sum_{p:\,\Delta_p>\delta}\Delta_p E[T_p(T)] + T\delta$.

We now define a robust UCB (R-UCBL) algorithm for which we can bound the expectations $E[T_p(T)]$. For every price $p \in P$, define

$$\hat{\mu}_p(t) = \frac{1}{T_p(t)}\sum_{i=1}^{t} p_i\,1_{p_i=p} 1_{v_i>p_i}$$

to be the true empirical mean of the reward that a seller would obtain when facing a truthful buyer. Let $L_t(p) = \sum_{i=1}^{t}\big(a_i - 1_{v_i>p}\big)1_{p_i=p}\,p$ denote the revenue obtained by the seller in rounds where the buyer lied; notice that $L_t(p)$ can be positive or negative. Finally, let

$$\mu_p(t) = \hat{\mu}_p(t) + \frac{L_t(p)}{T_p(t)}$$

be the empirical mean, observed by the seller, obtained when offering price p. For the definition of our algorithm, we will make use of the following upper confidence bound:

$$B_p(t, L) = \frac{Lp}{T_p(t)} + \sqrt{\frac{2\log t}{T_p(t)}}.$$

We will use $B^*$ as a shorthand for $B_{p^*}$. Our R-UCBL algorithm selects the price $p_t$ that maximizes $\mu_p(t) + B_p(t, L)$ over $p \in P$. We proceed to bound the expected number of times a sub-optimal price p is offered.

Proposition 1. Let

$$P_t(p, L) := P\Big(\Big|\frac{L_t(p)}{T_p(t)}\Big| + \Big|\frac{L_t(p^*)}{T^*(t)}\Big| \ge L\Big(\frac{p}{T_p(t)} + \frac{p^*}{T^*(t)}\Big)\Big).$$
Then, the following inequality holds:

$$E[T_p(T)] \le \frac{4Lp}{\Delta_p} + \frac{32\log T}{\Delta_p^2} + 2 + \sum_{t=1}^{T} P_t(p, L).$$

Proof. For any p and t, define $\eta_p(t) = \sqrt{\frac{2\log t}{T_p(t)}}$ and let $\eta^* = \eta_{p^*}$. If at time t a price $p \ne p^*$ is offered, then

$$\mu_p(t) + B_p(t, L) - \mu^*(t) - B^*(t, L) \ge 0$$
$$\Leftrightarrow\ \hat{\mu}_p(t) + B_p(t, L) + \frac{L_t(p)}{T_p(t)} - \hat{\mu}^*(t) - B^*(t, L) - \frac{L_t(p^*)}{T^*(t)} \ge 0$$
$$\Leftrightarrow\ \big[\hat{\mu}_p(t) - \mu_p - \eta_p(t)\big] + \big[2B_p(t, L) - \Delta_p\big] + \Big[\frac{L_t(p)}{T_p(t)} - \frac{L_t(p^*)}{T^*(t)} - \frac{Lp}{T_p(t)} - \frac{Lp^*}{T^*(t)}\Big] + \big[\mu^* - \hat{\mu}^*(t) - \eta^*(t)\big] \ge 0. \qquad (2)$$

Therefore, if price p is selected, then at least one of the four bracketed terms in (2) must be positive. Let $u = \frac{4Lp}{\Delta_p} + \frac{32\log T}{\Delta_p^2}$. Notice that if $T_p(t) > u$, then $2B_p(t, L) - \Delta_p < 0$. Thus, we can write

$$E[T_p(T)] = E\Big[\sum_{t=1}^{T} 1_{p_t=p}\big(1_{T_p(t)\le u} + 1_{T_p(t)>u}\big)\Big] \le u + \sum_{t=u}^{T}\Pr\big(p_t = p,\ T_p(t) > u\big).$$

This, combined with the positivity of at least one of the four terms in (2), yields

$$E[T_p(T)] \le u + \sum_{t=u}^{T}\Pr\big(\hat{\mu}_p(t) - \mu_p \ge \eta_p(t)\big) + \Pr\Big(\frac{L_t(p^*)}{T^*(t)} - \frac{L_t(p)}{T_p(t)} \ge \frac{Lp}{T_p(t)} + \frac{Lp^*}{T^*(t)}\Big) + \Pr\big(\mu^* - \hat{\mu}^*(t) > \eta^*(t)\big) \le u + \sum_{t=u}^{T}\Big[\Pr\big(\hat{\mu}_p(t) - \mu_p \ge \eta_p(t)\big) + \Pr\big(\mu^* - \hat{\mu}^*(t) > \eta^*(t)\big)\Big] + \sum_{t=1}^{T} P_t(p, L). \qquad (3)$$

We can now bound the probabilities appearing in (3) as follows:

$$\Pr\Big(\hat{\mu}_p(t) - \mu_p \ge \sqrt{\frac{2\log t}{T_p(t)}}\Big) \le \Pr\Big(\exists s \in [0, t]\colon \frac{1}{s}\sum_{i=1}^{s} p\,1_{v_i>p} - \mu_p \ge \sqrt{\frac{2\log t}{s}}\Big) \le \sum_{s=1}^{t} t^{-4} = t^{-3},$$

where the last inequality follows from an application of Hoeffding's inequality together with a union bound. A similar argument bounds the other term in (3). Using the definition of u, we then have

$$E[T_p(T)] \le \frac{4Lp}{\Delta_p} + \frac{32\log T}{\Delta_p^2} + \sum_{t=u}^{T} 2t^{-3} + \sum_{t=1}^{T} P_t(p, L) \le \frac{4Lp}{\Delta_p} + \frac{32\log T}{\Delta_p^2} + 2 + \sum_{t=1}^{T} P_t(p, L),$$

which completes the proof.

Corollary 1. Let L denote the number of times the buyer lies. Then, the strategic regret of R-UCBL can be bounded as follows:

$$\text{Reg}_T \le L\Big(4\sum_{p\in P} p\Big) + E[L] + \sum_{p:\,\Delta_p>\delta}\Big(\frac{32\log T}{\Delta_p} + 2\Delta_p + \sum_{t=1}^{T} P_t(p, L)\Big) + T\delta.$$
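The R-UCBL selection rule is straightforward to implement. The following sketch is an illustrative reading of the algorithm above (the price grid, horizon, and choice L = 5 are assumptions), run against a truthful buyer with uniform valuations, so that the expected per-round revenue of price p is p(1 − p):

```python
import math
import random

def r_ucbl(prices, T, L, buyer):
    """Run R-UCBL for T rounds; buyer(t, p) returns the accept decision a_t in {0, 1}."""
    counts = {p: 0 for p in prices}        # T_p(t): number of times p was offered
    revenue = {p: 0.0 for p in prices}     # observed revenue at price p
    total = 0.0
    for t in range(1, T + 1):
        if t <= len(prices):
            p = prices[t - 1]              # initialization: offer every price once
        else:
            # observed mean + lie-robustness term L*p/T_p(t) + UCB exploration term
            p = max(prices, key=lambda q: revenue[q] / counts[q]
                    + L * q / counts[q]
                    + math.sqrt(2 * math.log(t) / counts[q]))
        a = buyer(t, p)
        counts[p] += 1
        revenue[p] += a * p
        total += a * p
    return total, counts

random.seed(0)
prices = [0.1, 0.5, 0.9]
truthful = lambda t, p: 1 if random.random() > p else 0   # v ~ U[0,1]: accept iff v > p
T = 20000
total, counts = r_ucbl(prices, T, L=5, buyer=truthful)
# On this grid the best price is 0.5, with expected revenue 0.25 per round.
print(total / T, counts)
```

In this truthful environment the algorithm concentrates its offers on the revenue-maximizing grid price; the lie-robustness term only matters when the buyer manipulates early rounds.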
Notice that the choice of the parameter L of R-UCBL is subject to a trade-off: on the one hand, L should be small to minimize the first term of this regret bound; on the other hand, $P_t(p, L)$ is decreasing in L, so larger values of L are beneficial for the term $\sum_{t=1}^{T} P_t(p, L)$. We now show that an ϵ-strategic buyer can only lie a finite number of times, which implies the existence of an appropriate choice of L for which we can ensure that $P_t(p, L) = 0$, thereby recovering the standard logarithmic regret of UCB.

Proposition 2. If the discounting factor satisfies $\gamma \le \gamma_0 < 1$, an ϵ-strategic buyer stops lying after $S = \Big\lceil\frac{\log(1/\epsilon(1-\gamma_0))}{\log(1/\gamma_0)}\Big\rceil$ rounds.

Proof. After S rounds, the discounted surplus achievable by the buyer in the remaining rounds is bounded, for any sequence of actions, by

$$\sum_{t=t_0+S}^{T} \gamma^{t-1} E[a_t(v_t - p_t)] \le \frac{\gamma^{S+t_0} - \gamma^{T}}{1-\gamma} \le \frac{\gamma^{S+t_0}}{1-\gamma} \le \gamma^{t_0}\epsilon.$$

Thus, by definition, an ϵ-strategic buyer does not lie after S rounds.

Corollary 2. If the discounting factor satisfies $\gamma \le \gamma_0 < 1$ and the seller uses the R-UCBL algorithm with $L = \big\lceil\frac{\log(1/\epsilon(1-\gamma_0))}{\log(1/\gamma_0)}\big\rceil$, then the strategic regret of the seller is bounded by

$$\Big\lceil\frac{\log\frac{1}{\epsilon(1-\gamma_0)}}{\log\frac{1}{\gamma_0}}\Big\rceil\Big(4\sum_{p\in P} p + 1\Big) + \sum_{p:\,\Delta_p>\delta}\Big(\frac{32\log T}{\Delta_p} + 2\Delta_p\Big) + T\delta. \qquad (4)$$

Proof. This follows from Corollary 1 and the previous proposition, which together imply that $P_t(p, L) \equiv 0$.

Let us compare our results with those of Amin et al. (2013). The regret bound given in (Amin et al., 2013) is in $O\Big(|P|T^{\alpha} + \frac{|P|^2}{\Delta^{2/\alpha}} + \frac{|P|^2}{(\Delta(1-\gamma_0))^{1/\alpha}}\Big)$, where α is a parameter controlling the fraction of rounds used for exploration and $\Delta = \min_{p\in P}\Delta_p$. In particular, notice that the dependency of this bound on the cardinality of P is quadratic instead of linear as in our case. Moreover, the dependency on $\gamma_0$ is in $O\big(\frac{1}{(1-\gamma_0)^{1/\alpha}}\big)$. Therefore, even in a truthful scenario where γ ≪ 1, the dependency on T remains polynomial, whereas we recover the standard logarithmic regret.
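Proposition 2's bound is easy to evaluate numerically (the values of ϵ and γ0 below are illustrative):

```python
import math

def lie_bound(eps, gamma0):
    """S = ceil( log(1/(eps*(1 - gamma0))) / log(1/gamma0) ), as in Proposition 2."""
    return math.ceil(math.log(1 / (eps * (1 - gamma0))) / math.log(1 / gamma0))

eps, gamma0 = 0.01, 0.9            # illustrative values
S = lie_bound(eps, gamma0)
print(S)                           # -> 66
```

For ϵ = 0.01 and γ0 = 0.9 this gives S = 66: after 66 rounds the entire remaining discounted surplus falls below ϵ, so lying can no longer pay. Corollary 2 then runs R-UCBL with L = S.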
Only when the seller has access to ∆, which is a strong requirement, can he set the optimal value of α to achieve a regret in O( e^{√(log T · log(1/∆))} ). Of course, the algorithm proposed by Amin et al. (2013) assumes that the buyer is fully strategic, whereas we only require the buyer to be ε-strategic. However, the authors assume that the distribution satisfies a Lipschitz condition, which technically allows them to bound the number of lies in the same way as in Proposition 2. Therefore, the regret bound achieved by their algorithm remains the same in our scenario.

5 Continuous pricing strategy

Thus far, we have assumed that the prices offered by the seller are selected out of a discrete set P. In practice, however, the optimal price may not be within P, and therefore the algorithm described in the previous section might accumulate a large regret when compared against the best price in [0, 1]. In order to solve this problem, we propose to discretize the interval [0, 1] and run our R-UCBL algorithm on the resulting discretization. This induces a trade-off, since a finer discretization implies a larger regret term in (4). To find the optimal size of the discretization we follow the ideas of Kleinberg and Leighton (2003) and consider distributions D that satisfy the condition that the function f: p ↦ p·D(v > p) admits a unique maximizer p* such that f''(p*) < 0. Throughout this section, we let K ∈ ℕ and we consider the following finite set of prices: P_K = { i/K | 1 ≤ i ≤ K } ⊂ [0, 1]. We also let p_K be an optimal price in P_K, that is, p_K ∈ argmax_{p∈P_K} f(p), and we let p* = argmax_{p∈[0,1]} f(p). Finally, we denote by ∆_p = f(p_K) − f(p) the sub-optimality gap with respect to the price p_K, and by ∆̄_p = f(p*) − f(p) the corresponding gap with respect to p*. The following theorem can be proven following similar ideas to those of Kleinberg and Leighton (2003). We defer its proof to the appendix.

Theorem 1.
Let K = ⌈(T/log T)^{1/4}⌉. If the discounting factor γ satisfies γ ≤ γ₀ < 1 and the seller uses the R-UCBL algorithm with the set of prices P_K and L = ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉, then the strategic regret of the seller can be bounded as follows:

T max_{p∈[0,1]} f(p) − E[ Σ_{t=1}^T a_t p_t ] ≤ C √(T log T) + ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉ ( (T/log T)^{1/4} + 1 ).

6 Conclusion

We introduced a revenue optimization algorithm for posted-price auctions that is robust against ε-strategic buyers. Moreover, we showed that our notion of strategic behavior is more natural than what has been previously studied. Our algorithm benefits from the optimal O( log T + 1/(1−γ) ) regret bound for a finite set of prices and admits regret in O( T^{1/2} + T^{1/4}/(1−γ) ) when the buyer is offered prices in [0, 1], a scenario that had not been considered previously in the literature on revenue optimization against strategic buyers. It is known that a regret in o(T^{1/2}) is unattainable even in a truthful setting, but it remains an open problem to verify that the dependency on γ cannot be improved. Our algorithm admits a simple analysis, and we believe that the idea of making truthful algorithms robust is general and can be extended to more complex auction mechanisms such as second-price auctions with reserve.

7 Acknowledgments

We thank Afshin Rostamizadeh and Umar Syed for useful discussions about the topic of this paper, and the NIPS reviewers for their insightful comments. This work was partly funded by NSF IIS-1117591 and NSF CCF-1535987.

References

Abernethy, J., E. Hazan, and A. Rakhlin (2008). Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of COLT 2008, pp. 263–274.
Amin, K., A. Rostamizadeh, and U. Syed (2013). Learning prices for repeated auctions with strategic buyers. In Proceedings of NIPS, pp. 1169–1177.
Amin, K., A. Rostamizadeh, and U. Syed (2014). Repeated contextual auctions with strategic buyers. In Proceedings of NIPS 2014, pp. 622–630.
Arora, R., O.
Dekel, and A. Tewari (2012). Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of ICML.
Auer, P., N. Cesa-Bianchi, and P. Fischer (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3), 235–256.
Bikhchandani, S. and K. McCardle (2012). Behaviour-based price discrimination by a patient seller. The B.E. Journal of Theoretical Economics 12(1), 1935–1704.
Bubeck, S. and N. Cesa-Bianchi (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning 5(1), 1–122.
Cesa-Bianchi, N., C. Gentile, and Y. Mansour (2015). Regret minimization for reserve prices in second-price auctions. IEEE Transactions on Information Theory 61(1), 549–564.
Cole, R. and T. Roughgarden (2014). The sample complexity of revenue maximization. In Proceedings of STOC 2014, pp. 243–252.
Cui, Y., R. Zhang, W. Li, and J. Mao (2011). Bid landscape forecasting in online ad exchange marketplace. In Proceedings of SIGKDD 2011, pp. 265–273.
Dani, V. and T. P. Hayes (2006). Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In Proceedings of SODA 2006, pp. 937–943.
Edelman, B. and M. Ostrovsky (2007). Strategic bidder behavior in sponsored search auctions. Decision Support Systems 43(1), 192–198.
Kanoria, Y. and H. Nazerzadeh (2014). Dynamic reserve prices for repeated auctions: Learning from bids. In Proceedings of WINE 2014, pp. 232.
Kleinberg, R. D. and F. T. Leighton (2003). The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS 2003, pp. 594–605.
Lai, T. and H. Robbins (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6(1), 4–22.
Milgrom, P. and R. Weber (1982). A theory of auctions and competitive bidding. Econometrica: Journal of the Econometric Society 50(5), 1089–1122.
Milgrom, P. R. (2004).
Putting auction theory to work. Cambridge University Press.
Mohri, M. and A. M. Medina (2014a). Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of ICML 2014, pp. 262–270.
Mohri, M. and A. M. Medina (2014b). Optimal regret minimization in posted-price auctions with strategic buyers. In Proceedings of NIPS 2014, pp. 1871–1879.
Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research 6(1), 58–73.
Nachbar, J. (2001). Bayesian learning in repeated games of incomplete information. Social Choice and Welfare 18(2), 303–326.
Nachbar, J. H. (1997). Prediction, optimization, and learning in repeated games. Econometrica: Journal of the Econometric Society 65(2), 275–309.
Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance 16(1), 8–37.
The Population Posterior and Bayesian Modeling on Streams James McInerney Columbia University james@cs.columbia.edu Rajesh Ranganath Princeton University rajeshr@cs.princeton.edu David Blei Columbia University david.blei@columbia.edu Abstract Many modern data analysis problems involve inferences from streaming data. However, streaming data is not easily amenable to the standard probabilistic modeling approaches, which require conditioning on finite data. We develop population variational Bayes, a new approach for using Bayesian modeling to analyze streams of data. It approximates a new type of distribution, the population posterior, which combines the notion of a population distribution of the data with Bayesian inference in a probabilistic model. We develop the population posterior for latent Dirichlet allocation and Dirichlet process mixtures. We study our method with several large-scale data sets. 1 Introduction Probabilistic modeling has emerged as a powerful tool for data analysis. It is an intuitive language for describing assumptions about data and provides efficient algorithms for analyzing real data under those assumptions. The main idea comes from Bayesian statistics. We encode our assumptions about the data in a structured probability model of hidden and observed variables; we condition on a data set to reveal the posterior distribution of the hidden variables; and we use the resulting posterior as needed, for example to form predictions through the posterior predictive distribution or to explore the data through the posterior expectations of the hidden variables. Many modern data analysis problems involve inferences from streaming data. Examples include exploring the content of massive social media streams (e.g., Twitter, Facebook), analyzing live video streams, estimating the preferences of users on an online platform for recommending new items, and predicting human mobility patterns for anticipatory computing. 
Such problems, however, cannot easily take advantage of the standard approach to probabilistic modeling, which requires that we condition on a finite data set. This might be surprising to some readers; after all, one of the tenets of the Bayesian paradigm is that we can update our posterior when given new information. (“Yesterday’s posterior is today’s prior.”) But there are two problems with using Bayesian updating on data streams. The first problem is that Bayesian inference computes posterior uncertainty under the assumption that the model is correct. In theory this is sensible, but only in the impossible scenario where the data truly came from the proposed model. In practice, all models provide approximations to the data-generating distribution, and when the model is incorrect, the uncertainty that maximizes predictive likelihood may be larger or smaller than the Bayesian posterior variance. This problem is exacerbated in potentially never-ending streams; after seeing only a few data points, uncertainty is high, but eventually the model becomes overconfident. The second problem is that the data stream might change over time. This is an issue because, frequently, our goal in applying probabilistic models to streams is not to characterize how they change, but rather to accommodate it. That is, we would like for our current estimate of the latent variables to be accurate to the current state of the stream and to adapt to how the stream might slowly change. (This is in contrast, for example, to time series modeling.) Traditional Bayesian updating cannot handle this. Either we explicitly model the time series, and pay a heavy inferential cost, or we tacitly assume that the data are exchangeable, i.e., that the underlying distribution does not change. In this paper we develop new ideas for analyzing data streams with probabilistic models. Our approach combines the frequentist notion of the population distribution with probabilistic models and Bayesian inference.
Main idea: The population posterior. Consider a latent variable model of α data points. (This is unconventional notation; we will describe why we use it below.) Following [14], we define the model to have two kinds of hidden variables: global hidden variables β contain latent structure that potentially governs any data point; local hidden variables z_i contain latent structure that only governs the i-th data point. Such models are defined by the joint,

p(β, z, x) = p(β) ∏_{i=1}^{α} p(x_i, z_i | β),   (1)

where x = x_{1:α} and z = z_{1:α}. Traditional Bayesian statistics conditions on a fixed data set x to obtain the posterior distribution of the hidden variables, p(β, z | x). As we discussed, this framework cannot accommodate data streams. We need a different way to use the model. We define a new distribution, the population posterior, which enables us to consider Bayesian modeling of streams. Suppose we observe α data points independently from the underlying population distribution, X ∼ Fα. This induces a posterior p(β, z | X), which is a function of the random data. The population posterior is the expected value of this distribution,

E_{Fα}[ p(z, β | X) ] = E_{Fα}[ p(β, z, X) / p(X) ].   (2)

Notice that this distribution is not a function of observed data; it is a function of the population distribution F and the data size α. The data size is a hyperparameter that can be set; it effectively controls the variance of the population posterior. How to best set it depends on how close the model is to the true data distribution. We have defined a new problem. Given an endless stream of data points coming from F and a value for α, our goal is to approximate the corresponding population posterior. In this paper, we will approximate it through an algorithm based on variational inference and stochastic optimization. As we will show, our algorithm justifies applying a variant of stochastic variational inference [14] to a data stream.
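The expectation in Eq. 2 can be made concrete with a toy model of our own choosing (a Beta(1,1)-Bernoulli model, not one used in the paper): draw many datasets of size α from F, compute each Bayesian posterior, and average.

```python
import random

random.seed(0)

def population_posterior_mean(p_true: float, alpha_size: int, n_draws: int) -> float:
    """Monte Carlo illustration of the population posterior (Eq. 2) for a
    toy Beta(1,1)-Bernoulli model: repeatedly draw a dataset X of size
    alpha from the population F = Bernoulli(p_true), form the Bayesian
    posterior mean for each draw, and average over the random data."""
    total = 0.0
    for _ in range(n_draws):
        heads = sum(random.random() < p_true for _ in range(alpha_size))
        total += (1 + heads) / (2 + alpha_size)  # Beta(1+h, 1+alpha-h) mean
    return total / n_draws

# For alpha = 20 the exact population-posterior mean is (1 + 20*p) / 22:
# the hyperparameter alpha, not the stream length, sets the concentration.
est = population_posterior_mean(0.3, 20, 5000)
assert abs(est - (1 + 20 * 0.3) / 22) < 0.02
```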
We used our method to analyze several data streams with two modern probabilistic models, latent Dirichlet allocation [5] and Dirichlet process mixtures [11]. With held-out likelihood as a measure of model fitness, we found our method to give better models of the data than approaches based on full Bayesian inference [14] or Bayesian updating [8].

Related work. Researchers have proposed several methods for inference on streams of data. Refs. [1, 9, 27] propose extending Markov chain Monte Carlo methods to streaming data. However, sampling-based approaches do not scale to massive datasets; the variational approximation enables more scalable inference. In variational inference, Ref. [15] proposes online variational inference by exponentially forgetting the variational parameters associated with old data. Stochastic variational inference (SVI) [14] also decays parameters derived from old data, but interprets this in the context of stochastic optimization. Neither of these methods applies to streaming data; both implicitly rely on the data being of known size (even when subsampling data to obtain noisy gradients). To apply the variational approximation to streaming data, Ref. [8] and Ref. [12] both propose Bayesian updating of the approximating family; Ref. [22] adapts this framework to nonparametric mixture models. Here we take a different approach, changing the variational objective to incorporate a population distribution and then following stochastic gradients of this new objective. In Section 3 we show that this generally performs better than Bayesian updating. Independently, Ref. [23] applied SVI to streaming data by accumulating new data points into a growing window and then uniformly sampling from this window to update the variational parameters. Our method justifies that approach. Further, they propose updating parameters along a trust region, instead of following (natural) gradients, as a way of mitigating local optima.
This innovation can be incorporated into our method.

2 Variational Inference for the Population Posterior

We develop population variational Bayes, a method for approximating the population posterior in Eq. 2. Our method is based on variational inference and stochastic optimization.

The F-ELBO. The idea behind variational inference is to approximate difficult-to-compute distributions through optimization [16, 25]. We introduce an approximating family of distributions over the latent variables, q(β, z), and try to find the member of q(·) that minimizes the Kullback-Leibler (KL) divergence to the target distribution. Population variational Bayes (VB) uses variational inference to approximate the population posterior in Eq. 2. It aims to minimize the KL divergence from an approximating family,

q*(β, z) = argmin_q KL( q(β, z) || E_{Fα}[p(β, z | X)] ).   (3)

As for the population posterior, this objective is a function of the population distribution of α data points, Fα. Notice the difference to classical VB. In classical VB, we optimize the KL divergence between q(·) and a posterior, KL( q(β, z) || p(β, z | x) ); its objective is a function of a fixed data set x. In contrast, the objective in Eq. 3 is a function of the population distribution Fα. We will use the mean-field variational family, where each latent variable is independent and governed by a free parameter,

q(β, z) = q(β | λ) ∏_{i=1}^{α} q(z_i | φ_i).   (4)

The free variational parameters are the global parameters λ and the local parameters φ_i. Though we focus on the mean-field family, extensions could consider structured families [13, 20], where there is dependence between variables. In classical VB, where we approximate the usual posterior, we cannot compute the KL. Thus, we optimize a proxy objective called the ELBO (evidence lower bound) that is equal to the negative KL up to an additive constant. Maximizing the ELBO is equivalent to minimizing the KL divergence to the posterior. In population VB we also optimize a proxy objective, the F-ELBO.
The F-ELBO is an expectation of the ELBO under the population distribution of the data,

L(λ, φ; Fα) = E_{Fα}[ E_q[ log p(β) − log q(β | λ) + ∑_{i=1}^{α} ( log p(X_i, Z_i | β) − log q(Z_i) ) ] ].   (5)

The F-ELBO is a lower bound on the population evidence log E_{Fα}[p(X)] and a lower bound on the negative KL to the population posterior. (See Appendix A.) The inner expectation is over the latent variables β and Z, and is a function of the variational distribution q(·). The outer expectation is over the α random data points X, and is a function of the population distribution Fα(·). The F-ELBO is thus a function of both the variational distribution and the population distribution. As we mentioned, classical VB maximizes the (classical) ELBO, which is equivalent to minimizing the KL. The F-ELBO, in contrast, is only a bound on the negative KL to the population posterior. Thus maximizing the F-ELBO is suggestive but is not guaranteed to minimize the KL. That said, our studies show that this is a good quantity to optimize, and in Appendix A we show that the F-ELBO does minimize E_{Fα}[ KL( q(z, β) || p(z, β | X) ) ], the population KL.

Conditionally conjugate models. In the next section we will develop a stochastic optimization algorithm to maximize Eq. 5. First, we describe the class of models that we will work with. Following [14], we focus on conditionally conjugate models. A conditionally conjugate model is one where each complete conditional—the conditional distribution of a latent variable given all the other latent variables and the observations—is in the exponential family. This class includes many models in modern machine learning, such as mixture models, topic models, many Bayesian nonparametric models, and some hierarchical regression models. Using conditionally conjugate models simplifies many calculations in variational inference. Under the joint in Eq.
1, we can write a conditionally conjugate model with two exponential families:

p(z_i, x_i | β) = h(z_i, x_i) exp( β⊤ t(z_i, x_i) − a(β) ),   (6)
p(β | ζ) = h(β) exp( ζ⊤ t(β) − a(ζ) ).   (7)

We overload notation for the base measures h(·), sufficient statistics t(·), and log normalizers a(·). Note that ζ is the hyperparameter and that t(β) = [β, −a(β)] [3]. In conditionally conjugate models each complete conditional is in an exponential family, and we use these families as the factors in the variational distribution in Eq. 4. Thus λ indexes the same family as p(β | z, x), and φ_i indexes the same family as p(z_i | x_i, β). For example, in latent Dirichlet allocation [5], the complete conditional of the topics is a Dirichlet; the complete conditional of the per-document topic mixture is a Dirichlet; and the complete conditional of the per-word topic assignment is a categorical. (See [14] for details.)

Population variational Bayes. We have described the ingredients of our problem. We are given a conditionally conjugate model, described in Eqs. 6 and 7, a parameterized variational family in Eq. 4, and a stream of data from an unknown population distribution F. Our goal is to optimize the F-ELBO in Eq. 5 with respect to the variational parameters. The F-ELBO is a function of the population distribution, which is an unknown quantity. To overcome this hurdle, we will use the stream of data from F to form noisy gradients of the F-ELBO; we then update the variational parameters with stochastic optimization (a technique to find a local optimum by following noisy unbiased gradients [7]). Before describing the algorithm, however, we acknowledge one technical detail. Mirroring [14], we optimize an F-ELBO that is only a function of the global variational parameters. The one-parameter population VI objective is L_{Fα}(λ) = max_φ L_{Fα}(λ, φ). This implicitly optimizes the local parameters as a function of the global parameters and allows us to convert the potentially infinite-dimensional optimization problem in Eq.
5 to a finite one. The resulting objective is identical to Eq. 5, but with φ replaced by φ(λ). (Details are in Appendix B.) The next step is to form a noisy gradient of the F-ELBO so that we can use stochastic optimization to maximize it. Stochastic optimization maximizes an objective by following noisy and unbiased gradients [7, 19]. We will write the gradient of the F-ELBO as an expectation with respect to Fα, and then use Monte Carlo estimates to form noisy gradients. We compute the gradient of the F-ELBO by bringing the gradient operator inside the expectations of Eq. 5.¹ This results in a population expectation of the classical VB gradient with α data points. We take the natural gradient [2], which has a simple form in conditionally conjugate models [14]. Specifically, the natural gradient of the F-ELBO is

∇̂_λ L(λ; Fα) = ζ − λ + E_{Fα}[ ∑_{i=1}^{α} E_{φ_i(λ)}[ t(x_i, Z_i) ] ].   (8)

We approximate this expression using Monte Carlo to compute noisy, unbiased natural gradients at λ. To form the Monte Carlo estimate, we collect α data points from F; for each we compute the optimal local parameters φ_i(λ), which are a function of the sampled data point and the variational parameters; we then compute the quantity inside the brackets in Eq. 8. Averaging these results gives the Monte Carlo estimate of the natural gradient. We follow the noisy natural gradient and repeat. The algorithm is summarized in Algorithm 1. Because Eq. 8 is a Monte Carlo estimate, we are free to draw B data points from Fα (where B ≪ α) and rescale the sufficient statistics by α/B. This makes the natural gradient estimate noisier, but faster to calculate. As highlighted in [14], this strategy is more computationally efficient because early iterations of the algorithm have inaccurate values of λ; it is wasteful to pass through a lot of data before making updates to λ.

Discussion. Thus far, we have defined the population posterior and showed how to approximate it with population variational inference.
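The noisy natural-gradient update of Eq. 8 can be sketched on the simplest conditionally conjugate pair we could pick; the model (a Beta global variable over a Bernoulli stream, with no local latent variables), the Robbins-Monro step-size schedule, and all constants below are our illustrative assumptions, not the paper's:

```python
import random

random.seed(1)

def population_vb_beta_bernoulli(stream, alpha_size, batch, steps=500,
                                 prior=(1.0, 1.0)):
    """Noisy natural-gradient ascent on the F-ELBO (Eq. 8) for a toy
    Beta-Bernoulli model: the natural gradient is
    zeta - lambda + (alpha/B) * (minibatch sufficient statistics)."""
    lam = list(prior)
    for t in range(steps):
        rho = 1.0 / (t + 2)                    # Robbins-Monro step size
        x = [stream() for _ in range(batch)]
        stats = (sum(x), batch - sum(x))       # minibatch sufficient stats
        for j in range(2):
            grad = prior[j] - lam[j] + alpha_size / batch * stats[j]
            lam[j] += rho * grad
    return lam

# Stream of Bernoulli(0.7) draws: the fitted Beta mean approaches 0.7, and
# the total concentration is set by alpha, not by how much data streamed by.
lam = population_vb_beta_bernoulli(lambda: float(random.random() < 0.7),
                                   alpha_size=1000, batch=10)
assert abs(lam[0] / (lam[0] + lam[1]) - 0.7) < 0.05
assert 900 < lam[0] + lam[1] < 1100
```

Note how α enters only through the α/B rescaling of the sufficient statistics, exactly the role it plays in Eq. 8.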
Our derivation justifies using an algorithm like stochastic variational inference (SVI) [14] on a stream of data. It is nearly identical to SVI, but includes an additional parameter: the number of data points in the population posterior, α.

[Footnote 1] For most models of interest, this is justified by the dominated convergence theorem.

Algorithm 1: Population Variational Bayes
  Randomly initialize the global variational parameter λ(0); set iteration t ← 0
  repeat
    Draw data minibatch x_{1:B} ∼ Fα
    Optimize local variational parameters φ_1(λ(t)), ..., φ_B(λ(t))
    Calculate the natural gradient ∇̂_λ L(λ(t); Fα) [see Eq. 8]
    Update the global variational parameter with learning rate ρ(t):
      λ(t+1) = λ(t) + ρ(t) (α/B) ∇̂_λ L(λ(t); Fα)
    Update iteration count t ← t + 1
  until forever

Note we can recover the original SVI algorithm as an instance of population VI, thus reinterpreting it as minimizing the KL divergence to the population posterior. We recover SVI by setting α equal to the number of data points in the data set and replacing the stream of data F with F̂_x, the empirical distribution of the observations. The “stream” in this case comes from sampling with replacement from F̂_x, which results in precisely the original SVI algorithm.² We focused on the conditionally conjugate family for convenience, i.e., the simple gradient in Eq. 8. We emphasize, however, that by using recent tools for nonconjugate inference [17, 18, 24], we can adapt the new ideas described above—the population posterior and the F-ELBO—outside of conditionally conjugate models. Finally, we analyze the population posterior distribution under the assumption that the only way the stream affects the model is through the data. Formally, this means the unobserved variables in the model and the stream Fα are independent given the data X. The population posterior without the local latent variables z (which can be marginalized out) is E_{Fα}[p(β | X)].
Expanding the expectation gives ∫ p(β | X) p(X | Fα) dX, showing that the population posterior distribution can be written as p(β | Fα). This can be depicted as a graphical model: Fα → X → β. This means, first, that the population posterior is well defined even when the model does not specify the marginal distribution of the data and, second, that rather than the classical Bayesian setting, where the posterior is conditioned on a finite fixed dataset, the population posterior is a distributional posterior conditioned on the stream Fα.

3 Empirical Evaluation

We study the performance of population variational Bayes (population VB) against SVI and streaming variational Bayes (SVB) [8]. With large real-world data we study two models, latent Dirichlet allocation [5] and Bayesian nonparametric mixture models, comparing the held-out predictive performance of the algorithms. All three methods share the same local variational update, which is the dominating computational cost. We study the data coming in a true ordered stream, and in a permuted stream (to better match the assumptions of SVI). Across data and models, population VB usually outperforms the existing approaches.

Models. We study two models. The first is latent Dirichlet allocation (LDA) [5]. LDA is a mixed-membership model of text collections and is frequently used to find their latent topics. LDA assumes that there are K topics β_k ∼ Dir(η), each of which is a multinomial distribution over a fixed vocabulary. Documents are drawn by first choosing a distribution over topics θ_d ∼ Dir(γ) and then

[Footnote 2] This derivation of SVI is an application of Efron's plug-in principle [10] applied to inference of the population posterior. The plug-in principle says that we can replace the population F with the empirical distribution of the data F̂ to make population inferences. In our empirical study, however, we found that population VI often outperforms stochastic VI.
[Footnote 2, continued] Treating the data in a true stream, and setting the number of data points different from the true number, can improve predictive accuracy.

Figure 1: Held-out predictive log likelihood for LDA on large-scale streamed text corpora (New York Times, Science, Twitter), under a time-ordered stream and a random time-permuted stream, comparing population-VB (α = 1M), streaming-VB [8], and SVI [15]. Population-VB outperforms existing methods for two out of the three settings. We use the best settings of α.

drawing each word by choosing a topic assignment z_dn ∼ Mult(θ_d) and finally choosing a word from the corresponding topic, w_dn ∼ β_{z_dn}. The joint distribution is

p(β, θ, z, w | η, γ) = p(β | η) ∏_{d=1}^{α} p(θ_d | γ) ∏_{i=1}^{N} p(z_di | θ_d) p(w_di | β, z_di).   (9)

Fixing hyperparameters, the inference problem is to estimate the conditional distribution of the topics given a large collection of documents. The second model is a Dirichlet process (DP) mixture [11]. Loosely, DP mixtures are mixture models with a potentially infinite number of components; thus choosing the number of components is part of the posterior inference problem. When using variational inference for DP mixtures [4], we take advantage of the stick-breaking representation to construct a truncated variational approximation [21]. The variables are mixture proportions π ∼ Stick(η), mixture components β_k ∼ H(γ) (for infinite k), mixture assignments z_i ∼ Mult(π), and observations x_i ∼ G(β_{z_i}). The joint is

p(β, π, z, x | η, γ) = p(π | η) p(β | γ) ∏_{i=1}^{α} p(z_i | π) p(x_i | β, z_i).
(10) The likelihood and prior on the components are general to the observations at hand. In our study of real-valued data we use normal priors and normal likelihoods; in our study of text data we use Dirichlet priors and multinomial likelihoods. For both models we vary α, usually fixed to the number of data points in traditional analysis. Datasets. With LDA we analyze three large-scale streamed corpora: 1.7M articles from the New York Times spanning 10 years, 130K Science articles written over 100 years, and 7.4M tweets collected from Twitter on Feb 2nd, 2014. We processed them all in a similar way, choosing a vocabulary based on the most frequent words in the corpus (with stop words removed): 8,000 for the New York Times, 5,855 for Science, and 13,996 for Twitter. On Twitter, each tweet is a document, and we removed duplicate tweets and tweets that did not contain at least 2 words in the vocabulary. For each data stream, all algorithms took a few hours to process all the examples we collected. With DP mixtures, we analyze human location behavior data. These data allow us to build periodic models of human population mobility, with applications to disaster response and urban planning. 
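The LDA generative process described above (Eq. 9) can be sketched by ancestral sampling; the tiny vocabulary, topic matrix, and hyperparameter values below are our toy choices, not settings from the experiments:

```python
import random

random.seed(2)

def sample_lda_document(topics, gamma, n_words):
    """Ancestral sampling from the LDA generative process (Eq. 9): draw
    theta_d ~ Dir(gamma), then for each word draw a topic assignment
    z ~ Mult(theta_d) and a word w ~ beta_z. `topics` is a list of
    per-topic word distributions (the beta_k)."""
    # Symmetric Dirichlet draw via normalized Gamma variates.
    theta = [random.gammavariate(gamma, 1.0) for _ in topics]
    total = sum(theta)
    theta = [t / total for t in theta]
    doc = []
    for _ in range(n_words):
        z = random.choices(range(len(topics)), weights=theta)[0]
        w = random.choices(range(len(topics[z])), weights=topics[z])[0]
        doc.append((z, w))
    return doc

# Two toy topics over a 4-word vocabulary.
topics = [[0.45, 0.45, 0.05, 0.05], [0.05, 0.05, 0.45, 0.45]]
doc = sample_lda_document(topics, gamma=0.5, n_words=20)
assert len(doc) == 20
assert all(0 <= z < 2 and 0 <= w < 4 for z, w in doc)
```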
Figure 2: Held-out predictive log likelihood for Dirichlet process mixture models on large-scale streamed location and text data sets (Ivory Coast locations, Geolife locations, New York Times), under a time-ordered stream and a random time-permuted stream. Note that we apply Gaussian likelihoods in the Geolife dataset, so the reported predictive performance is measured by probability density. We chose the best α for each population-VB curve.

Such models account for periodicity by including the hour of the week as one of the dimensions of the
data to be modeled. The Ivory Coast location data contains 18M discrete cell tower locations for 500K users recorded over 6 months [6]. The Microsoft Geolife dataset contains 35K latitude-longitude GPS locations for 182 users over 5 years. For both data sets, our observations reflect down-sampling the data to ensure that each individual is seen no more than once every 15 minutes.

Figure 3: We show the sensitivity of population-VB to the hyperparameter α (based on final log likelihoods in the time-ordered stream) and find that the best setting of α often differs from the true number of data points (which may not be known in any case in practice).

Results. We compare population VB with SVI [14] and SVB [8] for LDA [8] and DP mixtures [22]. SVB updates the variational approximation of the global parameter using density filtering with exponential families. The complexity of the approximation remains fixed as the expected sufficient statistics from minibatches observed in a stream are combined with those of the current approximation. (Here we give the final results; we include details of how we set and fit hyperparameters below.) We measure model fitness by evaluating the average predictive log likelihood on held-out data.
This involves splitting held-out observations (that were not involved in the posterior approximation of β) into two equal halves, inferring the local component distribution based on the first half, and testing with the second half [14, 26]. For DP-mixtures, we condition on the observed hour of the week and predict the geographic location of the held-out data point. In standard offline studies, the held-out set is randomly selected from the data. With streams, however, we test on the next 10K documents (for New York Times, Science), 500K tweets (for Twitter), or 25K locations (on Geo data). This is a valid held-out set because the data ahead of the current position in the stream have not yet been seen by the inference algorithms. Figure 1 shows the performance for LDA. We looked at two types of streams: one in which the data appear in order and the other in which they have been permuted (i.e., an exchangeable stream). The time-permuted stream reveals performance when each data minibatch is safely assumed to be an i.i.d. sample from F; this results in smoother improvements to predictive likelihood. On our data, we found that population VB outperformed SVI and SVB on two of the data sets and outperformed SVI on all of the data. SVB performed better than population VB on Twitter. Figure 2 shows a similar study for DP mixtures. We analyzed the human mobility data and the New York Times. (Ref. [22] also analyzed the New York Times.) On these data population VB outperformed SVB and SVI in all settings.3

Hyperparameters. Unlike traditional Bayesian methods, the data set size α is a hyperparameter to population VB. It helps control the posterior variance of the population posterior. Figure 3 reports sensitivity to α for all studies (for the time-ordered stream). These plots indicate that the optimal setting of α is often different from the true number of data points; the best performing population posterior variance is not necessarily the one implied by the data.
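The held-out evaluation described above (split each held-out document in half, infer local structure from one half, score the other) can be sketched as follows. This is our own illustration, not the authors' code; `predictive_prob` is a hypothetical callback standing in for the fitted model's posterior predictive distribution, and the half-split is randomized here.

```python
import numpy as np

def heldout_log_likelihood(doc_words, predictive_prob, rng):
    """Document-completion evaluation: infer local structure from one half
    of a held-out document and score the other half.
    `predictive_prob(observed_half, word)` is a hypothetical callback that
    returns p(word | observed half, fitted global parameters)."""
    words = list(doc_words)
    idx = rng.permutation(len(words))
    half = len(words) // 2
    observed = [words[i] for i in idx[:half]]
    scored = [words[i] for i in idx[half:]]
    # Average predictive log likelihood over the scored half
    return np.mean([np.log(predictive_prob(observed, w)) for w in scored])

# Toy usage: a uniform predictive distribution over a 10-word vocabulary
rng = np.random.default_rng(0)
ll = heldout_log_likelihood([1, 5, 2, 2, 7, 3], lambda obs, w: 0.1, rng)
print(round(ll, 4))  # -2.3026 = log(0.1)
```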
The other hyperparameters to our experiments are reported in Appendix C.

4 Conclusions and Future Work

We introduced the population posterior, a distribution over latent variables that combines traditional Bayesian inference with the frequentist idea of the population distribution. With this idea, we derived population variational Bayes, an efficient algorithm for probabilistic inference on streams. On two complex Bayesian models and several large data sets, we found that population variational Bayes usually performs better than existing approaches to streaming inference. In this paper, we made no assumptions about the structure of the population distribution. Making assumptions, such as the ability to obtain streams conditional on queries, can lead to variants of our algorithm that learn which data points to see next during inference. Finally, understanding the theoretical properties of the population posterior is also an avenue of interest.

Acknowledgments. We thank Allison Chaney, John Cunningham, Alp Kucukelbir, Stephan Mandt, Peter Orbanz, Theo Weber, Frank Wood, and the anonymous reviewers for their comments. This work is supported by NSF IIS-0745520, IIS-1247664, IIS-1009542, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, N66001-15-C-4032, NDSEG, Facebook, Adobe, Amazon, and the Siebel Scholar and John Templeton Foundations.

3Though our purpose is to compare algorithms, we make one note about a specific data set. The predictive accuracy for the Ivory Coast data set plummets after 14M data points. This is because of the data collection policy. For privacy reasons the data set provides the cell tower locations of a randomly selected cohort of 50K users every 2 weeks [6]. The new cohort at 14M data points behaves differently to previous cohorts in a way that affects predictive performance. However, both algorithms steadily improve after this shock.

References

[1] A. Ahmed, Q. Ho, C. H. Teo, J. Eisenstein, E. P. Xing, and A. J. Smola.
Online inference for the infinite topic-cluster model: Storylines from streaming text. In International Conference on Artificial Intelligence and Statistics, pages 101–109, 2011. [2] S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998. [3] J. M. Bernardo and A. F. Smith. Bayesian Theory, volume 405. John Wiley & Sons, 2009. [4] D. M. Blei, M. I. Jordan, et al. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143, 2006. [5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003. [6] V. D. Blondel, M. Esch, C. Chan, F. Clérot, P. Deville, E. Huens, F. Morlot, Z. Smoreda, and C. Ziemlicki. Data for development: the D4D challenge on mobile phone data. arXiv preprint arXiv:1210.0137, 2012. [7] L. Bottou. Online learning and stochastic approximations. Online learning in Neural Networks, 17:9, 1998. [8] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013. [9] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197–208, 2000. [10] B. Efron and R. J. Tibshirani. An introduction to the bootstrap. CRC press, 1994. [11] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430):577–588, 1995. [12] Z. Ghahramani and H. Attias. Online variational Bayesian learning. In Slides from talk presented at NIPS 2000 Workshop on Online learning, pages 101–109, 2000. [13] M. D. Hoffman and D. M. Blei. Structured stochastic variational inference. In International Conference on Artificial Intelligence and Statistics, pages 101–109, 2015. [14] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. 
The Journal of Machine Learning Research, 14(1):1303–1347, 2013. [15] A. Honkela and H. Valpola. On-line variational Bayesian learning. In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808, 2003. [16] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999. [17] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. [18] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, pages 805–813, 2014. [19] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951. [20] L. K. Saul and M. I. Jordan. Exploiting tractable substructures in intractable networks. Advances in Neural Information Processing Systems, pages 486–492, 1996. [21] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994. [22] A. Tank, N. Foti, and E. Fox. Streaming variational inference for Bayesian nonparametric mixture models. In International Conference on Artificial Intelligence and Statistics, 2015. [23] L. Theis and M. D. Hoffman. A trust-region method for stochastic variational inference with applications to streaming data. In International Conference on Machine Learning, 2015. [24] M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning, pages 1971–1979, 2014. [25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, Jan. 2008. [26] H. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. 
In International Conference on Machine Learning, 2009. [27] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming document collections. In Conference on Knowledge Discovery and Data Mining, pages 937–946. ACM, 2009.
Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting

Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung
Department of Computer Science and Engineering, Hong Kong University of Science and Technology
{xshiab,zchenbb,hwangaz,dyyeung}@cse.ust.hk

Wai-kin Wong, Wang-chun Woo
Hong Kong Observatory, Hong Kong, China
{wkwong,wcwoo}@hko.gov.hk

Abstract

The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.

1 Introduction

Nowcasting convective precipitation has long been an important problem in the field of weather forecasting. The goal of this task is to give precise and timely prediction of rainfall intensity in a local region over a relatively short period of time (e.g., 0–6 hours). It is essential for taking such timely actions as generating society-level emergency rainfall alerts, producing weather guidance for airports, and seamless integration with a longer-term numerical weather prediction (NWP) model.
Since the forecasting resolution and time accuracy required are much higher than other traditional forecasting tasks like weekly average temperature prediction, the precipitation nowcasting problem is quite challenging and has emerged as a hot research topic in the meteorology community [22]. Existing methods for precipitation nowcasting can roughly be categorized into two classes [22], namely, NWP based methods and radar echo1 extrapolation based methods. For the NWP approach, making predictions at the nowcasting timescale requires a complex and meticulous simulation of the physical equations in the atmosphere model. Thus the current state-of-the-art operational precipitation nowcasting systems [19, 6] often adopt the faster and more accurate extrapolation based methods. Specifically, some computer vision techniques, especially optical flow based methods, have proven useful for making accurate extrapolation of radar maps [10, 6, 20]. One recent progress along this path is the Real-time Optical flow by Variational methods for Echoes of Radar (ROVER) algorithm [25] proposed by the Hong Kong Observatory (HKO) for its Short-range Warning of Intense Rainstorms in Localized System (SWIRLS) [15]. ROVER calculates the optical flow of consecutive radar maps using the algorithm in [5] and performs semi-Lagrangian advection [4] on the flow field, which is assumed to be still, to accomplish the prediction. However, the success of these optical flow based methods is limited because the flow estimation step and the radar echo extrapolation step are separated and it is challenging to determine the model parameters to give good prediction performance. These technical issues may be addressed by viewing the problem from the machine learning perspective.

1In real-life systems, radar echo maps are often constant altitude plan position indicator (CAPPI) images [9].
In essence, precipitation nowcasting is a spatiotemporal sequence forecasting problem with the sequence of past radar maps as input and the sequence of a fixed number (usually larger than 1) of future radar maps as output.2 However, such learning problems, regardless of their exact applications, are nontrivial in the first place due to the high dimensionality of the spatiotemporal sequences especially when multi-step predictions have to be made, unless the spatiotemporal structure of the data is captured well by the prediction model. Moreover, building an effective prediction model for the radar echo data is even more challenging due to the chaotic nature of the atmosphere. Recent advances in deep learning, especially recurrent neural network (RNN) and long short-term memory (LSTM) models [12, 11, 7, 8, 23, 13, 18, 21, 26], provide some useful insights on how to tackle this problem. According to the philosophy underlying the deep learning approach, if we have a reasonable end-to-end model and sufficient data for training it, we are close to solving the problem. The precipitation nowcasting problem satisfies the data requirement because it is easy to collect a huge amount of radar echo data continuously. What is needed is a suitable model for end-to-end learning. The pioneering LSTM encoder-decoder framework proposed in [23] provides a general framework for sequence-to-sequence learning problems by training temporally concatenated LSTMs, one for the input sequence and another for the output sequence. In [18], it is shown that prediction of the next video frame and interpolation of intermediate frames can be done by building an RNN based language model on the visual words obtained by quantizing the image patches. They propose a recurrent convolutional neural network to model the spatial relationships but the model only predicts one frame ahead and the size of the convolutional kernel used for state-to-state transition is restricted to 1. 
Their work is followed up later in [21] which points out the importance of multi-step prediction in learning useful representations. They build an LSTM encoder-decoder-predictor model which reconstructs the input sequence and predicts the future sequence simultaneously. Although their method can also be used to solve our spatiotemporal sequence forecasting problem, the fully connected LSTM (FC-LSTM) layer adopted by their model does not take spatial correlation into consideration. In this paper, we propose a novel convolutional LSTM (ConvLSTM) network for precipitation nowcasting. We formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem that can be solved under the general sequence-to-sequence learning framework proposed in [23]. In order to model well the spatiotemporal relationships, we extend the idea of FC-LSTM to ConvLSTM which has convolutional structures in both the input-to-state and state-to-state transitions. By stacking multiple ConvLSTM layers and forming an encoding-forecasting structure, we can build an end-to-end trainable model for precipitation nowcasting. For evaluation, we have created a new real-life radar echo dataset which can facilitate further research especially on devising machine learning algorithms for the problem. When evaluated on a synthetic Moving-MNIST dataset [21] and the radar echo dataset, our ConvLSTM model consistently outperforms both the FC-LSTM and the state-of-the-art operational ROVER algorithm.

2 Preliminaries

2.1 Formulation of Precipitation Nowcasting Problem

The goal of precipitation nowcasting is to use the previously observed radar echo sequence to forecast a fixed length of the future radar maps in a local region (e.g., Hong Kong, New York, or Tokyo). In real applications, the radar maps are usually taken from the weather radar every 6–10 minutes and nowcasting is done for the following 1–6 hours, i.e., to predict the 6–60 frames ahead.
From the machine learning perspective, this problem can be regarded as a spatiotemporal sequence forecasting problem. Suppose we observe a dynamical system over a spatial region represented by an M × N grid which consists of M rows and N columns. Inside each cell in the grid, there are P measurements which vary over time. Thus, the observation at any time can be represented by a tensor $X \in \mathbb{R}^{P \times M \times N}$, where $\mathbb{R}$ denotes the domain of the observed features. If we record the observations periodically, we will get a sequence of tensors $\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_t$. The spatiotemporal sequence forecasting problem is to predict the most likely length-K sequence in the future given the previous J observations which include the current one:

$\tilde{X}_{t+1}, \ldots, \tilde{X}_{t+K} = \arg\max_{X_{t+1}, \ldots, X_{t+K}} p(X_{t+1}, \ldots, X_{t+K} \mid \hat{X}_{t-J+1}, \hat{X}_{t-J+2}, \ldots, \hat{X}_t)$  (1)

For precipitation nowcasting, the observation at every timestamp is a 2D radar echo map. If we divide the map into tiled non-overlapping patches and view the pixels inside a patch as its measurements (see Fig. 1), the nowcasting problem naturally becomes a spatiotemporal sequence forecasting problem. We note that our spatiotemporal sequence forecasting problem is different from the one-step time series forecasting problem because the prediction target of our problem is a sequence which contains both spatial and temporal structures. Although the number of free variables in a length-K sequence can be up to $O(M^K N^K P^K)$, in practice we may exploit the structure of the space of possible predictions to reduce the dimensionality and hence make the problem tractable.

2It is worth noting that our precipitation nowcasting problem is different from the one studied in [14], which aims at predicting only the central region of just the next frame.
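As a concrete illustration of the patch transform sketched in Fig. 1, the following snippet (our own illustration, not the authors' code) reshapes a 64 × 64 frame into a 16 × 16 × 16 tensor using non-overlapping 4 × 4 patches, matching the setting used later in Section 4.1:

```python
import numpy as np

def frame_to_patch_tensor(frame, patch):
    """Divide an H x W frame into non-overlapping patch x patch tiles and
    stack each tile's pixels along the measurement axis, producing a
    (patch*patch) x (H/patch) x (W/patch) tensor."""
    h, w = frame.shape
    assert h % patch == 0 and w % patch == 0
    # (row_block, row_in_patch, col_block, col_in_patch)
    t = frame.reshape(h // patch, patch, w // patch, patch)
    # move the within-patch pixels to the front, then flatten them
    t = t.transpose(1, 3, 0, 2).reshape(patch * patch, h // patch, w // patch)
    return t

frame = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
tensor = frame_to_patch_tensor(frame, 4)
print(tensor.shape)  # (16, 16, 16)
```

The channel fiber `tensor[:, 0, 0]` holds exactly the pixels of the top-left 4 × 4 patch, so the transform is lossless and invertible.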
2.2 Long Short-Term Memory for Sequence Modeling

For general-purpose sequence modeling, LSTM as a special RNN structure has proven stable and powerful for modeling long-range dependencies in various previous studies [12, 11, 17, 23]. The major innovation of LSTM is its memory cell $c_t$ which essentially acts as an accumulator of the state information. The cell is accessed, written and cleared by several self-parameterized controlling gates. Every time a new input comes, its information will be accumulated to the cell if the input gate $i_t$ is activated. Also, the past cell status $c_{t-1}$ could be "forgotten" in this process if the forget gate $f_t$ is on. Whether the latest cell output $c_t$ will be propagated to the final state $h_t$ is further controlled by the output gate $o_t$. One advantage of using the memory cell and gates to control information flow is that the gradient will be trapped in the cell (also known as constant error carousels [12]) and be prevented from vanishing too quickly, which is a critical problem for the vanilla RNN model [12, 17, 2]. FC-LSTM may be seen as a multivariate version of LSTM where the input, cell output and states are all 1D vectors. In this paper, we follow the formulation of FC-LSTM as in [11]. The key equations are shown in (2) below, where '◦' denotes the Hadamard product:

$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} \circ c_{t-1} + b_i)$
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} \circ c_{t-1} + b_f)$
$c_t = f_t \circ c_{t-1} + i_t \circ \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} \circ c_t + b_o)$
$h_t = o_t \circ \tanh(c_t)$  (2)

Multiple LSTMs can be stacked and temporally concatenated to form more complex structures. Such models have been applied to solve many real-life sequence modeling problems [23, 26].

3 The Model

We now present our ConvLSTM network. Although the FC-LSTM layer has proven powerful for handling temporal correlation, it contains too much redundancy for spatial data.
To address this problem, we propose an extension of FC-LSTM which has convolutional structures in both the input-to-state and state-to-state transitions. By stacking multiple ConvLSTM layers and forming an encoding-forecasting structure, we are able to build a network model not only for the precipitation nowcasting problem but also for more general spatiotemporal sequence forecasting problems.

Figure 1: Transforming a 2D image into a 3D tensor.

Figure 2: Inner structure of ConvLSTM.

3.1 Convolutional LSTM

The major drawback of FC-LSTM in handling spatiotemporal data is its usage of full connections in input-to-state and state-to-state transitions in which no spatial information is encoded. To overcome this problem, a distinguishing feature of our design is that all the inputs $X_1, \ldots, X_t$, cell outputs $C_1, \ldots, C_t$, hidden states $H_1, \ldots, H_t$, and gates $i_t, f_t, o_t$ of the ConvLSTM are 3D tensors whose last two dimensions are spatial dimensions (rows and columns). To get a better picture of the inputs and states, we may imagine them as vectors standing on a spatial grid. The ConvLSTM determines the future state of a certain cell in the grid by the inputs and past states of its local neighbors. This can easily be achieved by using a convolution operator in the state-to-state and input-to-state transitions (see Fig. 2). The key equations of ConvLSTM are shown in (3) below, where '∗' denotes the convolution operator and '◦', as before, denotes the Hadamard product:

$i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i)$
$f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f)$
$C_t = f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c)$
$o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o)$
$H_t = o_t \circ \tanh(C_t)$  (3)

If we view the states as the hidden representations of moving objects, a ConvLSTM with a larger transitional kernel should be able to capture faster motions while one with a smaller kernel can capture slower motions.
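A minimal single-channel sketch of one ConvLSTM update following Eq. (3) is shown below. This is our own illustration under simplifying assumptions (one input channel, one hidden channel, scalar biases), not the authors' Theano implementation; real layers use multi-channel tensors and learned kernels, and setting the kernels to 1 × 1 recovers the FC-LSTM special case.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv(k, a):
    # 'same' convolution with zero padding, matching the paper's
    # zero-padding of hidden states on the boundary
    return convolve2d(a, k, mode='same', boundary='fill', fillvalue=0.0)

def convlstm_step(X, H_prev, C_prev, W):
    """One ConvLSTM step per Eq. (3): '*' is convolution, '◦' is Hadamard."""
    i = sigmoid(conv(W['xi'], X) + conv(W['hi'], H_prev) + W['ci'] * C_prev + W['bi'])
    f = sigmoid(conv(W['xf'], X) + conv(W['hf'], H_prev) + W['cf'] * C_prev + W['bf'])
    C = f * C_prev + i * np.tanh(conv(W['xc'], X) + conv(W['hc'], H_prev) + W['bc'])
    o = sigmoid(conv(W['xo'], X) + conv(W['ho'], H_prev) + W['co'] * C + W['bo'])
    H = o * np.tanh(C)
    return H, C

# Random 5x5 kernels and scalar peephole/bias terms, for illustration only
rng = np.random.default_rng(1)
W = {}
for g in 'ifoc':
    W['x' + g] = rng.normal(scale=0.1, size=(5, 5))
    W['h' + g] = rng.normal(scale=0.1, size=(5, 5))
    W['b' + g] = 0.0
for g in 'ifo':
    W['c' + g] = rng.normal(scale=0.1)

H = C = np.zeros((16, 16))
for _ in range(3):
    H, C = convlstm_step(rng.normal(size=(16, 16)), H, C, W)
print(H.shape)  # (16, 16)
```

Because each update only convolves against a local neighborhood, the state at a grid cell depends on the inputs and past states of its neighbors, exactly the locality argument made above.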
Also, if we adopt a similar view as [16], the inputs, cell outputs and hidden states of the traditional FC-LSTM represented by (2) may also be seen as 3D tensors with the last two dimensions being 1. In this sense, FC-LSTM is actually a special case of ConvLSTM with all features standing on a single cell. To ensure that the states have the same number of rows and same number of columns as the inputs, padding is needed before applying the convolution operation. Here, padding of the hidden states on the boundary points can be viewed as using the state of the outside world for calculation. Usually, before the first input comes, we initialize all the states of the LSTM to zero which corresponds to "total ignorance" of the future. Similarly, if we perform zero-padding (which is used in this paper) on the hidden states, we are actually setting the state of the outside world to zero and assume no prior knowledge about the outside. By padding on the states, we can treat the boundary points differently, which is helpful in many cases. For example, imagine that the system we are observing is a moving ball surrounded by walls. Although we cannot see these walls, we can infer their existence by finding the ball bouncing over them again and again, which can hardly be done if the boundary points have the same state transition dynamics as the inner points.

3.2 Encoding-Forecasting Structure

Like FC-LSTM, ConvLSTM can also be adopted as a building block for more complex structures. For our spatiotemporal sequence forecasting problem, we use the structure shown in Fig. 3 which consists of two networks, an encoding network and a forecasting network. Like in [21], the initial states and cell outputs of the forecasting network are copied from the last state of the encoding network. Both networks are formed by stacking several ConvLSTM layers.
As our prediction target has the same dimensionality as the input, we concatenate all the states in the forecasting network and feed them into a 1 × 1 convolutional layer to generate the final prediction. We can interpret this structure using a similar viewpoint as [23]. The encoding LSTM compresses the whole input sequence into a hidden state tensor and the forecasting LSTM unfolds this hidden state to give the final prediction:

$\tilde{X}_{t+1}, \ldots, \tilde{X}_{t+K} = \arg\max_{X_{t+1}, \ldots, X_{t+K}} p(X_{t+1}, \ldots, X_{t+K} \mid \hat{X}_{t-J+1}, \hat{X}_{t-J+2}, \ldots, \hat{X}_t)$
$\approx \arg\max_{X_{t+1}, \ldots, X_{t+K}} p(X_{t+1}, \ldots, X_{t+K} \mid f_{\mathrm{encoding}}(\hat{X}_{t-J+1}, \hat{X}_{t-J+2}, \ldots, \hat{X}_t))$
$\approx g_{\mathrm{forecasting}}(f_{\mathrm{encoding}}(\hat{X}_{t-J+1}, \hat{X}_{t-J+2}, \ldots, \hat{X}_t))$  (4)

Figure 3: Encoding-forecasting ConvLSTM network for precipitation nowcasting.

This structure is also similar to the LSTM future predictor model in [21] except that our input and output elements are all 3D tensors which preserve all the spatial information. Since the network has multiple stacked ConvLSTM layers, it has strong representational power which makes it suitable for giving predictions in complex dynamical systems like the precipitation nowcasting problem we study here.

4 Experiments

We first compare our ConvLSTM network with the FC-LSTM network on a synthetic Moving-MNIST dataset to gain some basic understanding of the behavior of our model. We run our model with different number of layers and kernel sizes and also study some "out-of-domain" cases as in [21]. To verify the effectiveness of our model on the more challenging precipitation nowcasting problem, we build a new radar echo dataset and compare our model with the state-of-the-art ROVER algorithm based on several commonly used precipitation nowcasting metrics.
The results of the experiments conducted on these two datasets lead to the following findings:

• ConvLSTM is better than FC-LSTM in handling spatiotemporal correlations.
• Making the size of the state-to-state convolutional kernel bigger than 1 is essential for capturing the spatiotemporal motion patterns.
• Deeper models can produce better results with fewer parameters.
• ConvLSTM performs better than ROVER for precipitation nowcasting.

Our implementations of the models are in Python with the help of Theano [3, 1]. We run all the experiments on a computer with a single NVIDIA K20 GPU. Also, more illustrative "gif" examples are included in the appendix.

4.1 Moving-MNIST Dataset

For this synthetic dataset, we use a generation process similar to that described in [21]. All data instances in the dataset are 20 frames long (10 frames for the input and 10 frames for the prediction) and contain two handwritten digits bouncing inside a 64 × 64 patch. The moving digits are chosen randomly from a subset of 500 digits in the MNIST dataset.3 The starting position and velocity direction are chosen uniformly at random and the velocity amplitude is chosen randomly in [3, 5). This generation process is repeated 15000 times, resulting in a dataset with 10000 training sequences, 2000 validation sequences, and 3000 testing sequences. We train all the LSTM models by minimizing the cross-entropy loss4 using back-propagation through time (BPTT) [2] and RMSProp [24] with a learning rate of $10^{-3}$ and a decay rate of 0.9. Also, we perform early-stopping on the validation set. Despite the simple generation process, there exist strong nonlinearities in the resulting dataset because the moving digits can exhibit complicated appearance and will occlude and bounce during their movement. It is hard for a model to give accurate predictions on the test set without learning the inner dynamics of the system.

3MNIST dataset: http://yann.lecun.com/exdb/mnist/
4The cross-entropy loss of the predicted frame P and the ground-truth frame T is defined as $-\sum_{i,j,k} T_{i,j,k} \log P_{i,j,k} + (1 - T_{i,j,k}) \log(1 - P_{i,j,k})$.

Table 1: Comparison of ConvLSTM networks with the FC-LSTM network on the Moving-MNIST dataset. '-5x5' and '-1x1' represent the corresponding state-to-state kernel size, which is either 5×5 or 1×1. '256', '128', and '64' refer to the number of hidden states in the ConvLSTM layers. '(5x5)' and '(9x9)' represent the input-to-state kernel size.

Model | Number of parameters | Cross entropy
FC-LSTM-2048-2048 | 142,667,776 | 4832.49
ConvLSTM(5x5)-5x5-256 | 13,524,496 | 3887.94
ConvLSTM(5x5)-5x5-128-5x5-128 | 10,042,896 | 3733.56
ConvLSTM(5x5)-5x5-128-5x5-64-5x5-64 | 7,585,296 | 3670.85
ConvLSTM(9x9)-1x1-128-1x1-128 | 11,550,224 | 4782.84
ConvLSTM(9x9)-1x1-128-1x1-64-1x1-64 | 8,830,480 | 4231.50

Figure 4: An example showing an "out-of-domain" run. From left to right: input frames; ground truth; prediction by the 3-layer network.

For the FC-LSTM network, we use the same structure as the unconditional future predictor model in [21] with two 2048-node LSTM layers. For our ConvLSTM network, we set the patch size to 4 × 4 so that each 64 × 64 frame is represented by a 16 × 16 × 16 tensor. We test three variants of our model with different number of layers. The 1-layer network contains one ConvLSTM layer with 256 hidden states, the 2-layer network has two ConvLSTM layers with 128 hidden states each, and the 3-layer network has 128, 64, and 64 hidden states respectively in the three ConvLSTM layers. All the input-to-state and state-to-state kernels are of size 5 × 5. Our experiments show that the ConvLSTM networks perform consistently better than the FC-LSTM network. Also, deeper models can give better results although the improvement is not so significant between the 2-layer and 3-layer networks. Moreover, we also try other network configurations with the state-to-state and input-to-state kernels of the 2-layer and 3-layer networks changed to 1 × 1 and 9 × 9, respectively.
Although the number of parameters of the new 2-layer network is close to the original one, the result becomes much worse because it is hard to capture the spatiotemporal motion patterns with only 1×1 state-to-state transition. Meanwhile, the new 3-layer network performs better than the new 2-layer network since the higher layer can see a wider scope of the input. Nevertheless, its performance is inferior to networks with larger state-to-state kernel size. This provides evidence that larger state-to-state kernels are more suitable for capturing spatiotemporal correlations. In fact, for a 1 × 1 kernel, the receptive field of the states will not grow as time advances. But for larger kernels, later states have larger receptive fields and are related to a wider range of the input. The average cross-entropy loss (cross-entropy loss per sequence) of each algorithm on the test set is shown in Table 1. We need to point out that our experiment setting is different from [21] where an infinite number of training data is assumed to be available. The current offline setting is chosen in order to understand how different models perform in occasions where not so much data is available. Comparison of the 3-layer ConvLSTM and FC-LSTM in the online setting is included in the appendix.

Next, we test our model on some "out-of-domain" inputs. We generate another 3000 sequences of three moving digits, with the digits drawn randomly from a different subset of 500 MNIST digits that does not overlap with the training set. Since the model has never seen any system with three digits, such an "out-of-domain" run is a good test of the generalization ability of the model [21]. The average cross-entropy error of the 3-layer model on this dataset is 6379.42. By observing some of the prediction results, we find that the model can separate the overlapping digits successfully and predict the overall motion although the predicted digits are quite blurred.
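The per-frame cross-entropy used throughout these comparisons (footnote 4) can be sketched as follows; the clipping epsilon is our own addition to avoid log 0 and is not part of the paper's definition:

```python
import numpy as np

def frame_cross_entropy(P, T, eps=1e-12):
    """Cross-entropy of footnote 4 between predicted frame P and
    ground-truth frame T:  -sum [ T log P + (1 - T) log(1 - P) ]."""
    P = np.clip(P, eps, 1.0 - eps)  # numerical safety; our addition
    return -np.sum(T * np.log(P) + (1.0 - T) * np.log(1.0 - P))

T = np.array([[1.0, 0.0], [0.0, 1.0]])
P = np.array([[0.9, 0.1], [0.2, 0.8]])
ce = frame_cross_entropy(P, T)
print(round(ce, 3))  # 0.657
```

Summing this quantity over the 10 predicted frames of a sequence gives the per-sequence loss reported in Table 1.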
One "out-of-domain" prediction example is shown in Fig. 4.

4.2 Radar Echo Dataset

The radar echo dataset used in this paper is a subset of the three-year weather radar intensities collected in Hong Kong from 2011 to 2013. Since not every day is rainy and our nowcasting target is precipitation, we select the top 97 rainy days to form our dataset. For preprocessing, we first transform the intensity values Z to gray-level pixels P by setting $P = \frac{Z - \min\{Z\}}{\max\{Z\} - \min\{Z\}}$ and crop the radar maps in the central 330 × 330 region. After that, we apply the disk filter5 with radius 10 and resize the radar maps to 100 × 100. To reduce the noise caused by measuring instruments, we further remove the pixel values of some noisy regions which are determined by applying K-means clustering to the monthly pixel average. The weather radar data is recorded every 6 minutes, so there are 240 frames per day. To get disjoint subsets for training, testing and validation, we partition each daily sequence into 40 non-overlapping frame blocks and randomly assign 4 blocks for training, 1 block for testing and 1 block for validation. The data instances are sliced from these blocks using a 20-frame-wide sliding window. Thus our radar echo dataset contains 8148 training sequences, 2037 testing sequences and 2037 validation sequences and all the sequences are 20 frames long (5 for the input and 15 for the prediction). Although the training and testing instances sliced from the same day may have some dependencies, this splitting strategy is still reasonable because in real-life nowcasting, we do have access to all previous data, including data from the same day, which allows us to apply online fine-tuning of the model. Such data splitting may be viewed as an approximation of the real-life "fine-tuning-enabled" setting for this application. We set the patch size to 2 and train a 2-layer ConvLSTM network with each layer containing 64 hidden states and 3 × 3 kernels.
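The normalization and cropping steps of the preprocessing pipeline above can be sketched as follows (our own simplified illustration: the disk filter, resizing, and K-means noise removal are omitted, and the input size is assumed hypothetically):

```python
import numpy as np

def radar_to_gray(Z):
    """Map radar intensities Z to [0, 1] gray levels:
    P = (Z - min Z) / (max Z - min Z)."""
    Z = np.asarray(Z, dtype=np.float64)
    return (Z - Z.min()) / (Z.max() - Z.min())

def center_crop(img, size):
    """Crop the central size x size region (330 x 330 in the paper)."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

# Hypothetical 480x480 raw radar map, for illustration only
Z = np.random.default_rng(2).uniform(0.0, 70.0, size=(480, 480))
P = center_crop(radar_to_gray(Z), 330)
print(P.shape)  # (330, 330)
```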
For the ROVER algorithm, we tune the parameters of the optical flow estimator⁶ on the validation set and use the best parameters (shown in the appendix) to report the test results. Also, we try three different initialization schemes for ROVER: ROVER1 computes the optical flow of the last two observed frames and performs semi-Lagrangian advection afterwards; ROVER2 initializes the velocity by the mean of the last two flow fields; and ROVER3 gives the initialization by a weighted average (with weights 0.7, 0.2 and 0.1) of the last three flow fields. In addition, we train an FC-LSTM network with two 2000-node LSTM layers. Both the ConvLSTM network and the FC-LSTM network optimize the cross-entropy error of 15 predictions. We evaluate these methods using several commonly used precipitation nowcasting metrics, namely, rainfall mean squared error (Rainfall-MSE), critical success index (CSI), false alarm rate (FAR), probability of detection (POD), and correlation. The Rainfall-MSE metric is defined as the average squared error between the predicted rainfall and the ground truth. Since our predictions are done at the pixel level, we project them back to radar echo intensities and calculate the rainfall at every cell of the grid using the Z-R relationship [15]: Z = 10 log a + 10 b log R, where Z is the radar echo intensity in dB, R is the rainfall rate in mm/h, and a, b are two constants with a = 118.239, b = 1.5241. The CSI, FAR and POD are skill scores similar to the precision and recall commonly used by machine learning researchers. We convert the prediction and ground truth to a 0/1 matrix using a threshold of 0.5 mm/h rainfall rate (indicating raining or not) and calculate the hits (prediction = 1, truth = 1), misses (prediction = 0, truth = 1) and false alarms (prediction = 1, truth = 0). The three skill scores are defined as CSI = hits / (hits + misses + falsealarms), FAR = falsealarms / (hits + falsealarms), POD = hits / (hits + misses).
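These skill scores are simple to compute from binarized rain masks; a sketch (function name ours; degenerate cases with no rain anywhere would divide by zero and are not handled):

```python
import numpy as np

def skill_scores(pred_rain, true_rain, thresh=0.5):
    """CSI, FAR and POD from rainfall-rate maps (mm/h), binarized at `thresh`."""
    p, t = pred_rain >= thresh, true_rain >= thresh
    hits = float(np.sum(p & t))            # prediction = 1, truth = 1
    misses = float(np.sum(~p & t))         # prediction = 0, truth = 1
    false_alarms = float(np.sum(p & ~t))   # prediction = 1, truth = 0
    csi = hits / (hits + misses + false_alarms)
    far = false_alarms / (hits + false_alarms)
    pod = hits / (hits + misses)
    return csi, far, pod
```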
The correlation of a predicted frame P and a ground-truth frame T is defined as

corr(P, T) = Σ_{i,j} P_{i,j} T_{i,j} / ( sqrt( (Σ_{i,j} P_{i,j}²)(Σ_{i,j} T_{i,j}²) ) + ε ),

where ε = 10⁻⁹.

⁵The disk filter is applied using the MATLAB function fspecial('disk', 10).
⁶We use an open-source project to calculate the optical flow: http://sourceforge.net/projects/varflow/

Table 2: Comparison of the average scores of different models over 15 prediction steps.

Model | Rainfall-MSE | CSI | FAR | POD | Correlation
ConvLSTM(3x3)-3x3-64-3x3-64 | 1.420 | 0.577 | 0.195 | 0.660 | 0.908
ROVER1 | 1.712 | 0.516 | 0.308 | 0.636 | 0.843
ROVER2 | 1.684 | 0.522 | 0.301 | 0.642 | 0.850
ROVER3 | 1.685 | 0.522 | 0.301 | 0.642 | 0.849
FC-LSTM-2000-2000 | 1.865 | 0.286 | 0.335 | 0.351 | 0.774

Figure 5: Comparison of different models based on four precipitation nowcasting metrics (FAR, POD, correlation and CSI) over the 15 prediction steps.

Figure 6: Two prediction examples for the precipitation nowcasting problem. All the predictions and ground truths are sampled with an interval of 3. From top to bottom: input frames; ground truth frames; prediction by the ConvLSTM network; prediction by ROVER2.

All results are shown in Table 2 and Fig. 5. We find that the performance of the FC-LSTM network is not good for this task, which is mainly caused by the strong spatial correlation in the radar maps, i.e., the motion of clouds is highly consistent within a local region. The fully-connected structure has too many redundant connections, which makes the optimization unlikely to capture these local consistencies. Also, it can be seen that ConvLSTM outperforms the optical-flow-based ROVER algorithm, which is mainly due to two reasons. First, ConvLSTM is able to handle the boundary conditions well.
In real-life nowcasting, there are many cases when a sudden agglomeration of clouds appears at the boundary, which indicates that some clouds are coming from the outside. If the ConvLSTM network has seen similar patterns during training, it can discover this type of sudden change in the encoding network and give reasonable predictions in the forecasting network. This, however, can hardly be achieved by optical flow and semi-Lagrangian advection based methods. Another reason is that ConvLSTM is trained end-to-end for this task, so some complex spatiotemporal patterns in the dataset can be learned by the nonlinear and convolutional structure of the network. For the optical flow based approach, it is hard to find a reasonable way to update the future flow fields and train everything end-to-end. Some prediction results of ROVER2 and ConvLSTM are shown in Fig. 6. We find that ConvLSTM predicts the future rainfall contour more accurately, especially at the boundary. Although ROVER2 can give sharper predictions than ConvLSTM, it triggers more false alarms and is less precise than ConvLSTM in general. Also, the blurring effect of ConvLSTM may be caused by the inherent uncertainty of the task, i.e., it is almost impossible to give sharp and accurate predictions of the whole radar map in longer-term predictions. We can only blur the predictions to alleviate the error caused by this type of uncertainty.

5 Conclusion and Future Work

In this paper, we have successfully applied the machine learning approach, especially deep learning, to the challenging precipitation nowcasting problem, which so far has not benefited from sophisticated machine learning techniques. We formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem and propose a new extension of LSTM called ConvLSTM to tackle the problem.
The ConvLSTM layer not only preserves the advantages of FC-LSTM but is also suitable for spatiotemporal data due to its inherent convolutional structure. By incorporating ConvLSTM into the encoding-forecasting structure, we build an end-to-end trainable model for precipitation nowcasting. For future work, we will investigate how to apply ConvLSTM to video-based action recognition. One idea is to add ConvLSTM on top of the spatial feature maps generated by a convolutional neural network and use the hidden states of ConvLSTM for the final classification.

References
[1] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio. Theano: New features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[2] Y. Bengio, I. Goodfellow, and A. Courville. Deep Learning. Book in preparation for MIT Press, 2015.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In SciPy, volume 4, page 3. Austin, TX, 2010.
[4] R. Bridson. Fluid Simulation for Computer Graphics. Ak Peters Series. Taylor & Francis, 2008.
[5] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, pages 25–36, 2004.
[6] P. Cheung and H. Y. Yeung. Application of optical-flow technique to significant convection nowcast for terminal areas in Hong Kong. In the 3rd WMO International Symposium on Nowcasting and Very Short-Range Forecasting (WSN12), pages 6–10, 2012.
[7] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pages 1724–1734, 2014.
[8] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[9] R. H. Douglas. The stormy weather group (Canada). In Radar in Meteorology, pages 61–68, 1990.
[10] U. Germann and I. Zawadzki. Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Monthly Weather Review, 130(12):2859–2873, 2002.
[11] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
[14] B. Klein, L. Wolf, and Y. Afek. A dynamic convolutional layer for short range weather prediction. In CVPR, 2015.
[15] P. W. Li, W. K. Wong, K. Y. Chan, and E. S. T. Lai. SWIRLS - An Evolving Nowcasting System. Hong Kong Special Administrative Region Government, 2000.
[16] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[17] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, pages 1310–1318, 2013.
[18] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
[19] M. Reyniers. Quantitative Precipitation Forecasts Based on Radar Observations: Principles, Algorithms and Operational Systems. Institut Royal Météorologique de Belgique, 2008.
[20] H. Sakaino. Spatio-temporal image pattern prediction method based on a physical model with time-varying optical flow. IEEE Transactions on Geoscience and Remote Sensing, 51(5-2):3023–3036, 2013.
[21] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[22] J. Sun, M. Xue, J. W. Wilson, I. Zawadzki, S. P. Ballard, J. Onvlee-Hooimeyer, P. Joe, D. M. Barker, P. W. Li, B. Golding, M. Xu, and J. Pinto. Use of NWP for nowcasting convective precipitation: Recent progress and challenges. Bulletin of the American Meteorological Society, 95(3):409–426, 2014.
[23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014.
[24] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. Coursera Course: Neural Networks for Machine Learning, 4, 2012.
[25] W. C. Woo and W. K. Wong. Application of optical flow techniques to rainfall nowcasting. In the 27th Conference on Severe Local Storms, 2014.
[26] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions

Amar Shah, Department of Engineering, University of Cambridge, as793@cam.ac.uk
Zoubin Ghahramani, Department of Engineering, University of Cambridge, zoubin@eng.cam.ac.uk

Abstract

We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points which will maximize the information gain about the global maximizer of the objective. Well known strategies exist for suggesting a single evaluation point based on previous observations, while far fewer are known for selecting batches of points to evaluate in parallel. The few batch selection schemes that have been studied all resort to greedy methods to compute an optimal batch. To the best of our knowledge, PPES is the first non-greedy batch Bayesian optimization strategy. We demonstrate the benefit of this approach in optimization performance on both synthetic and real world applications, including problems in machine learning, rocket science and robotics.

1 Introduction

Finding the global maximizer of a non-concave objective function based on sequential, noisy observations is a fundamental problem in various real world domains, e.g. engineering design [1], finance [2] and algorithm optimization [3]. We are interested in objective functions which are unknown but may be evaluated pointwise at some expense, be it computational, economic or other. The challenge is to find the maximizer of the expensive objective function in as few sequential queries as possible, in order to minimize the total expense. A Bayesian approach to this problem would probabilistically model the unknown objective function, f. Based on posterior belief about f given evaluations of the objective function, you can decide where to evaluate f next in order to maximize a chosen utility function.
Bayesian optimization [4] has been successfully applied in a range of difficult, expensive global optimization tasks, including optimizing a robot controller to maximize gait speed [5] and discovering a chemical derivative of a particular molecule which best treats a particular disease [6]. Two key choices need to be made when implementing a Bayesian optimization algorithm: (i) a model choice for f, and (ii) a strategy for deciding where to evaluate f next. A common approach for modeling f is to use a Gaussian process prior [7], as it is highly flexible and amenable to analytic calculations. However, other models have been shown to be useful in some Bayesian optimization tasks, e.g. Student-t process priors [8] and deep neural networks [9]. Most research in the Bayesian optimization literature considers the problem of deciding how to choose a single location where f should be evaluated next. However, it is often possible to probe several points in parallel. For example, you may possess 2 identical robots on which you can test different gait parameters in parallel. Or your computer may have multiple cores on which you can run algorithms in parallel with different hyperparameter settings. Whilst there are many established strategies to select a single point to probe next, e.g. expected improvement, probability of improvement and upper confidence bound [10], there are few well-known strategies for selecting batches of points. To the best of our knowledge, every batch selection strategy proposed in the literature involves a greedy algorithm, which chooses individual points until the batch is filled. Greedy choice making can be severely detrimental; for example, a greedy approach to the travelling salesman problem could potentially lead to the uniquely worst global solution [11]. In this work, our key contribution is to provide what we believe is the first non-greedy algorithm to choose a batch of points to probe next in the task of parallel global optimization.
Our approach is to choose a set of points which, in expectation, maximally reduces our uncertainty about the location of the maximizer of the objective function. The algorithm we develop, parallel predictive entropy search, extends the methods of [12, 13] to multiple point batch selection. In Section 2, we formalize the problem and discuss previous approaches before developing parallel predictive entropy search in Section 3. Finally, we demonstrate the benefit of our non-greedy strategy on synthetic as well as real-world objective functions in Section 4.

2 Problem Statement and Background

Our aim is to maximize an objective function f : X → R, which is unknown but can be (noisily) evaluated pointwise at multiple locations in parallel. In this work, we assume X is a compact subset of R^D. At each decision, we must select a set of Q points S_t = {x_{t,1}, ..., x_{t,Q}} ⊂ X, where the objective function would next be evaluated in parallel. Each evaluation leads to a scalar observation y_{t,q} = f(x_{t,q}) + ϵ_{t,q}, where we assume ϵ_{t,q} ∼ N(0, σ²) i.i.d. We wish to minimize a future regret, r_T = f(x∗) − f(x̃_T), where x∗ ∈ argmax_{x∈X} f(x) is an optimal decision (assumed to exist) and x̃_T is our guess of where the maximizer of f is after evaluating T batches of input points. It is highly intractable to make decisions T steps ahead in the setting described, therefore it is common to consider the regret of the very next decision. In this work, we shall assume f is a draw from a Gaussian process with constant mean λ ∈ R and differentiable kernel function k : X² → R. Most Bayesian optimization research focuses on choosing a single point to query at each decision, i.e. Q = 1. A popular strategy in this setting is to choose the point with highest expected improvement over the current best evaluation, i.e.
the maximizer of

a_EI(x|D) = E[max(f(x) − f(x_best), 0) | D] = σ(x)[φ(τ(x)) + τ(x)Φ(τ(x))],

where D is the set of observations, x_best is the best evaluation point so far, σ(x) = sqrt(Var[f(x)|D]), µ(x) = E[f(x)|D], τ(x) = (µ(x) − f(x_best))/σ(x), and φ(.) and Φ(.) are the standard Gaussian p.d.f. and c.d.f. Aside from being an intuitive approach, a key advantage of using the expected improvement strategy is the fact that it is computable analytically and is infinitely differentiable, making the problem of finding argmax_{x∈X} a_EI(x|D) amenable to a plethora of gradient based optimization methods. Unfortunately, the corresponding strategy for selecting Q > 1 points to evaluate in parallel does not lead to an analytic expression. [14] considered an approach which sequentially used the EI criterion to greedily choose a batch of points to query next, which [3] formalized and utilized by defining

a_EI-MCMC(x | D, {x_q′}_{q′=1}^q) = ∫ a_EI(x | D ∪ {x_q′, y_q′}_{q′=1}^q) p({y_q′}_{q′=1}^q | D, {x_q′}_{q′=1}^q) dy_1 ⋯ dy_q,

the expected gain in evaluating x after evaluating {x_q′, y_q′}_{q′=1}^q, which can be approximated using Monte Carlo samples, hence the name EI-MCMC. Choosing a batch of points S_t using the EI-MCMC policy is doubly greedy: (i) the EI criterion is greedy as it inherently aims to minimize one-step regret, r_t, and (ii) the EI-MCMC approach starts with an empty set and populates it sequentially (and hence greedily), deciding the best single point to include until |S_t| = Q. A similar but different approach called simulated matching (SM) was introduced by [15]. Let π be a baseline policy which chooses a single point to evaluate next (e.g. EI). SM aims to select a batch S_t of size Q which includes a point ‘close to’ the best point which π would have chosen when applied sequentially Q times, with high probability.
Formally, SM aims to maximize

a_SM(S_t|D) = −E_{S_π^Q}[ E_f[ min_{x∈S_t} (x − argmax_{x′∈S_π^Q} f(x′))² | D, S_π^Q ] ],

where S_π^Q is the set of Q points which policy π would query if employed sequentially. A greedy k-medoids based algorithm is proposed to approximately maximize the objective, which the authors justify by the submodularity of the objective function. The upper confidence bound (UCB) strategy [16] is another method used by practitioners to decide where to evaluate an objective function next. The UCB approach is to maximize

a_UCB(x|D) = µ(x) + α_t^{1/2} σ(x),

where α_t is a domain-specific, time-varying positive parameter which trades off exploration and exploitation. In order to extend this approach to the parallel setting, [17] noted that the predictive variance of a Gaussian process depends only on where observations are made, and not on the observations themselves. Therefore, they suggested the GP-BUCB method, which greedily populates the set S_t by maximizing a UCB type equation Q times sequentially, updating σ at each step whilst maintaining the same µ for each batch. Finally, a variant of the GP-UCB was proposed by [18]. The first point of the set S_t is chosen by optimizing the UCB objective. Thereafter, a ‘relevant region’ R_t ⊂ X which contains the maximizer of f with high probability is defined. Points are greedily chosen from this region to maximize the information gain about f, measured by expected reduction in entropy, until |S_t| = Q. This method was named Gaussian process upper confidence bound with pure exploration (GP-UCB-PE). Each approach discussed resorts to a greedy batch selection process. To the best of our knowledge, no batch Bayesian optimization method to date has avoided a greedy algorithm. We avoid a greedy batch selection approach with PPES, which we develop in the next section.
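The single-point criteria of this section and their greedy batch extensions can be illustrated on a toy 1-D GP (our own sketch with an assumed squared-exponential kernel, not the cited authors' code): EI in closed form, and a GP-BUCB-style batch that freezes the posterior mean and recomputes only the variance after each "hallucinated" input.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI: sigma * (phi(tau) + tau * Phi(tau)), tau = (mu - f_best)/sigma."""
    tau = (mu - f_best) / sigma
    pdf = exp(-0.5 * tau * tau) / sqrt(2.0 * pi)
    cdf = 0.5 * (1.0 + erf(tau / sqrt(2.0)))
    return sigma * (pdf + tau * cdf)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_bucb_batch(X_obs, y_obs, grid, Q=3, alpha=4.0, noise=1e-4):
    """Greedy GP-BUCB batch: the mean is frozen, while the variance is
    recomputed after each pick -- it depends only on input locations, not on y."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    mu = rbf(grid, X_obs) @ np.linalg.solve(K, y_obs)   # frozen posterior mean
    batch, X_aug = [], X_obs.copy()
    for _ in range(Q):
        K_aug = rbf(X_aug, X_aug) + noise * np.eye(len(X_aug))
        k_aug = rbf(grid, X_aug)
        var = 1.0 - np.sum(k_aug * np.linalg.solve(K_aug, k_aug.T).T, axis=1)
        ucb = mu + np.sqrt(alpha * np.maximum(var, 0.0))
        x_next = grid[int(np.argmax(ucb))]
        batch.append(float(x_next))
        X_aug = np.append(X_aug, x_next)  # hallucinate the input; no observation needed
    return batch
```

After each hallucinated input, the UCB score collapses near that point, so successive picks spread out across the domain.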
3 Parallel Predictive Entropy Search

Our approach is to maximize information [19] about the location of the global maximizer x∗, which we measure in terms of the negative differential entropy of p(x∗|D). Analogous to [13], PPES aims to choose the set of Q points, S_t = {x_q}_{q=1}^Q, which maximizes

a_PPES(S_t|D) = H[p(x∗|D)] − E_{p({y_q}_{q=1}^Q | D, S_t)}[ H[p(x∗ | D ∪ {x_q, y_q}_{q=1}^Q)] ],   (1)

where H[p(x)] = −∫ p(x) log p(x) dx is the differential entropy of its argument, and the expectation above is taken with respect to the posterior joint predictive distribution of {y_q}_{q=1}^Q given the previous evaluations, D, and the set S_t. Evaluating (1) exactly is typically infeasible. The prohibitive aspects are that p(x∗ | D ∪ {x_q, y_q}_{q=1}^Q) would have to be evaluated for many different combinations of {x_q, y_q}_{q=1}^Q, and the entropy computations are not analytically tractable in themselves. Significant approximations need to be made to (1) before it becomes practically useful [12]. A convenient equivalent formulation of the quantity in (1) can be written as the mutual information between x∗ and {y_q}_{q=1}^Q given D [20]. By symmetry of the mutual information, we can rewrite a_PPES as

a_PPES(S_t|D) = H[p({y_q}_{q=1}^Q | D, S_t)] − E_{p(x∗|D)}[ H[p({y_q}_{q=1}^Q | D, S_t, x∗)] ],   (2)

where p({y_q}_{q=1}^Q | D, S_t, x∗) is the joint posterior predictive distribution of {y_q}_{q=1}^Q given the observed data, D, and the location of the global maximizer of f. The key advantage of the formulation in (2) is that the objective is based on entropies of predictive distributions of the observations, which are much more easily approximated than the entropies of distributions on x∗. In fact, the first term of (2) can be computed analytically. Suppose p({f_q}_{q=1}^Q | D, S_t) is multivariate Gaussian with covariance K; then

H[p({y_q}_{q=1}^Q | D, S_t)] = 0.5 log[det(2πe(K + σ²I))].

We develop an approach to approximate the expectation of the predictive entropy in (2) using an expectation propagation based method, which we discuss in the following section.
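The analytic first term above is a few lines of code given the predictive covariance (a sketch; `slogdet` is used instead of `det` for numerical stability):

```python
import numpy as np

def gaussian_predictive_entropy(K, sigma2):
    """First term of (2): H[p({y_q} | D, S_t)] = 0.5 * log det(2*pi*e*(K + sigma2*I))."""
    A = 2.0 * np.pi * np.e * (K + sigma2 * np.eye(K.shape[0]))
    sign, logdet = np.linalg.slogdet(A)
    return 0.5 * logdet
```

For Q = 1 this reduces to the familiar univariate Gaussian entropy 0.5 log(2πe(k(x, x) + σ²)), and the entropy grows with the observation noise, as expected.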
3.1 Approximating the Predictive Entropy

Assuming a sample of x∗, we discuss our approach to approximating H[p({y_q}_{q=1}^Q | D, S_t, x∗)] in (2) for a set of query points S_t. Note that we can write

p({y_q}_{q=1}^Q | D, S_t, x∗) = ∫ p({f_q}_{q=1}^Q | D, S_t, x∗) ∏_{q=1}^Q p(y_q|f_q) df_1 ⋯ df_Q,   (3)

where p({f_q}_{q=1}^Q | D, S_t, x∗) is the posterior distribution of the objective function at the locations x_q ∈ S_t, given previous evaluations D, and that x∗ is the global maximizer of f. Recall that p(y_q|f_q) is Gaussian for each q. Our approach will be to derive a Gaussian approximation to p({f_q}_{q=1}^Q | D, S_t, x∗), which would lead to an analytic approximation to the integral in (3). The posterior predictive distribution of the Gaussian process, p({f_q}_{q=1}^Q | D, S_t), is multivariate Gaussian distributed. However, by further conditioning on the location x∗, the global maximizer of f, we impose the condition that f(x) ≤ f(x∗) for any x ∈ X. Imposing this constraint for all x ∈ X is extremely difficult and makes the computation of p({f_q}_{q=1}^Q | D, S_t, x∗) highly intractable. We instead impose the following two conditions: (i) f(x) ≤ f(x∗) for each x ∈ S_t, and (ii) f(x∗) ≥ y_max + ϵ, where y_max is the largest observed noisy objective function value and ϵ ∼ N(0, σ²). Constraint (i) is equivalent to imposing that f(x∗) is larger than the objective function values at the current query locations, whilst condition (ii) makes f(x∗) larger than previous objective function evaluations, accounting for noise. Denoting the two conditions C, and the variables f = [f_1, ..., f_Q]^⊤ and f⁺ = [f; f∗], where f∗ = f(x∗), we incorporate the conditions as follows:

p(f | D, S_t, x∗) ≈ ∫ p(f⁺ | D, S_t, x∗) Φ((f∗ − y_max)/σ) ∏_{q=1}^Q I(f∗ ≥ f_q) df∗,   (4)

where I(.) is an indicator function. The integral in (4) can be approximated using expectation propagation [21]. The Gaussian process predictive p(f⁺|D, S_t, x∗) is N(f⁺; m⁺, K⁺).
We approximate the integrand of (4) with

w(f⁺) = N(f⁺; m⁺, K⁺) ∏_{q=1}^{Q+1} Z̃_q N(c_q^⊤ f⁺; µ̃_q, τ̃_q),

where each Z̃_q and τ̃_q is positive, µ̃_q ∈ R, and for q ≤ Q, c_q is a vector of length Q + 1 with qth entry −1, (Q+1)st entry 1 and remaining entries 0, whilst c_{Q+1} = [0, ..., 0, 1]^⊤. The approximation w(f⁺) approximates the Gaussian CDF, Φ(.), and each indicator function, I(.), with a univariate, scaled Gaussian PDF. The site parameters, {Z̃_q, µ̃_q, τ̃_q}_{q=1}^{Q+1}, are learned using a fast EP algorithm, for which details are given in the supplementary material, where we show that w(f⁺) = Z N(f⁺; µ⁺, Σ⁺), with

µ⁺ = Σ⁺ ( K⁺⁻¹ m⁺ + Σ_{q=1}^{Q+1} (µ̃_q/τ̃_q) c_q ),   Σ⁺ = ( K⁺⁻¹ + Σ_{q=1}^{Q+1} (1/τ̃_q) c_q c_q^⊤ )⁻¹,   (5)

and hence p(f⁺ | D, S_t, C) ≈ N(f⁺; µ⁺, Σ⁺). Since multivariate Gaussians are consistent under marginalization, a convenient corollary is that p(f | D, S_t, x∗) ≈ N(f; µ, Σ), where µ is the vector containing the first Q elements of µ⁺, and Σ is the matrix containing the first Q rows and columns of Σ⁺. Since sums of independent Gaussians are also Gaussian distributed, we see that p({y_q}_{q=1}^Q | D, S_t, x∗) ≈ N([y_1, ..., y_Q]^⊤; µ, Σ + σ²I). The final convenient attribute of our Gaussian approximation is that the differential entropy of a multivariate Gaussian can be computed analytically, such that H[p({y_q}_{q=1}^Q | D, S_t, x∗)] ≈ 0.5 log[det(2πe(Σ + σ²I))].

3.2 Sampling from the Posterior over the Global Maximizer

So far, we have considered how to approximate H[p({y_q}_{q=1}^Q | D, S_t, x∗)] given the global maximizer, x∗. We would in fact like the expected value of this quantity over the posterior distribution of the global maximizer, p(x∗|D). Literally, p(x∗|D) ≡ p(f(x∗) = max_{x∈X} f(x) | D), the posterior probability that x∗ is the global maximizer of f. Computing the distribution p(x∗|D) is intractable, but it is possible to approximately sample from it and compute a Monte Carlo based approximation of the desired expectation.
We consider two approaches to sampling from the posterior of the global maximizer: (i) a maximum a posteriori (MAP) method, and (ii) a random feature approach.

MAP sample from p(x∗|D). The MAP of p(x∗|D) is its posterior mode, given by x∗_MAP = argmax_{x∗∈X} p(x∗|D). We may approximate the expected value of the predictive entropy by replacing the posterior distribution of x∗ with a single point estimate at x∗_MAP. There are two key advantages to using the MAP estimate in this way. Firstly, it is simple to compute x∗_MAP, as it is the global maximizer of the posterior mean of f given the observations D. Secondly, choosing to use x∗_MAP assists the EP algorithm developed in Section 3.1 to converge as desired. This is because the condition f(x∗) ≥ f(x) for x ∈ X is easy to enforce when x∗ = x∗_MAP, the global maximizer of the posterior mean of f. When x∗ is sampled such that the posterior mean at x∗ is significantly suboptimal, the EP approximation may be poor. Whilst using the MAP estimate approximation is convenient, it is after all a point estimate and fails to characterize the full posterior distribution. We therefore consider a method to draw samples from p(x∗|D) using random features.

Random Feature Samples from p(x∗|D). A naive approach to sampling from p(x∗|D) would be to sample g ∼ p(f|D) and choose argmax_{x∈X} g. Unfortunately, this would require sampling g over an uncountably infinite space, which is infeasible. A slightly less naive method would be to sequentially construct g whilst optimizing it, instead of evaluating it everywhere in X. However, this approach would have cost O(m³), where m is the number of function evaluations of g necessary to find its optimum. We propose, as in [13], to sample and optimize an analytic approximation to g. By Bochner’s theorem [22], a stationary kernel function, k, has a Fourier dual s(w), which is equal to the spectral density of k.
Setting p(w) = s(w)/α, a normalized density, we can write

k(x, x′) = α E_{p(w)}[e^{−iw^⊤(x−x′)}] = 2α E_{p(w,b)}[cos(w^⊤x + b) cos(w^⊤x′ + b)],   (6)

where b ∼ U[0, 2π]. Let φ(x) = sqrt(2α/m) cos(Wx + b) denote an m-dimensional feature mapping, where W and b consist of m stacked samples from p(w, b); then the kernel k can be approximated by the inner product of these features, k(x, x′) ≈ φ(x)^⊤φ(x′) [23]. The linear model g(x) = φ(x)^⊤θ + λ, where θ|D ∼ N(A⁻¹Φ^⊤(y − λ1), σ²A⁻¹), is an approximate sample from p(f|D); here y is the vector of objective function evaluations, A = Φ^⊤Φ + σ²I and Φ^⊤ = [φ(x_1) ⋯ φ(x_n)]. In fact, lim_{m→∞} g is a true sample from p(f|D) [24]. The generative process above suggests the following approach to approximately sampling from p(x∗|D): (i) sample random features φ^(i) and corresponding posterior weights θ^(i) using the process above, (ii) construct g^(i)(x) = φ^(i)(x)^⊤θ^(i) + λ, and (iii) finally compute x∗^(i) = argmax_{x∈X} g^(i)(x) using gradient based methods.

3.3 Computing and Optimizing the PPES Approximation

Let ψ denote the set of kernel parameters and the observation noise variance, σ². Our posterior belief about ψ is summarized by the posterior distribution p(ψ|D) ∝ p(ψ)p(D|ψ), where p(ψ) is our prior belief about ψ and p(D|ψ) is the GP marginal likelihood given the parameters ψ. For a fully Bayesian treatment of ψ, we must marginalize a_PPES with respect to p(ψ|D). The expectation with respect to the posterior distribution of ψ is approximated with Monte Carlo samples. A similar approach is taken in [3, 13].
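The random feature construction of Section 3.2 can be sketched in a few lines (our own illustration for a 1-D squared-exponential kernel; for that kernel the spectral density gives w ∼ N(0, 1/ℓ² I), and the lengthscale, amplitude and seed below are assumed values):

```python
import numpy as np

def random_features(X, m=2000, lengthscale=0.5, alpha=1.0, seed=0):
    """phi(x) = sqrt(2*alpha/m) * cos(W x + b): m random Fourier features whose
    inner products approximate the squared-exponential kernel, as in (6)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(m, X.shape[1]))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=(m, 1))
    return np.sqrt(2.0 * alpha / m) * np.cos(W @ X.T + b)  # shape (m, n)

X = np.array([[0.0], [0.3]])
Phi = random_features(X)
k_approx = float(Phi[:, 0] @ Phi[:, 1])            # ~ exp(-0.5 * 0.3^2 / 0.5^2)
k_exact = float(np.exp(-0.5 * 0.3 ** 2 / 0.5 ** 2))
```

A sample path g is then a finite linear model in these features, so its maximizer can be found with standard gradient-based optimization.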
Combining the EP based method to approximate the predictive entropy with either of the two methods discussed in the previous section to approximately sample from p(x∗|D), we can construct â_PPES, an approximation to (2), defined by

â_PPES(S_t|D) = (1/2M) Σ_{i=1}^M [ log det(K^(i) + σ²^(i) I) − log det(Σ^(i) + σ²^(i) I) ],   (7)

where K^(i) is constructed using ψ^(i), the ith of M samples from p(ψ|D), and Σ^(i) is constructed as in Section 3.1, assuming the global maximizer is x∗^(i) ∼ p(x∗|D, ψ^(i)). The PPES approximation is simple and amenable to gradient based optimization. Our goal is to choose S_t = {x_1, ..., x_Q} which maximizes â_PPES in (7). Since our kernel function is differentiable, we may consider taking the derivative of â_PPES with respect to x_{q,d}, the dth component of x_q:

∂â_PPES/∂x_{q,d} = (1/2M) Σ_{i=1}^M [ trace((K^(i) + σ²^(i)I)⁻¹ ∂K^(i)/∂x_{q,d}) − trace((Σ^(i) + σ²^(i)I)⁻¹ ∂Σ^(i)/∂x_{q,d}) ].   (8)

Computing ∂K^(i)/∂x_{q,d} is simple directly from the definition of the chosen kernel function. Σ^(i) is a function of K^(i), {c_q}_{q=1}^{Q+1} and {τ̃_q^(i)}_{q=1}^{Q+1}; we know how to compute ∂K^(i)/∂x_{q,d}, and each c_q is a constant vector. Hence our only concern is how the EP site parameters, {τ̃_q^(i)}_{q=1}^{Q+1}, vary with x_{q,d}. Rather remarkably, we may invoke a result from Section 2.1 of [25], which says that converged site parameters, {Z̃_q, µ̃_q, τ̃_q}_{q=1}^{Q+1}, have zero derivative with respect to parameters of p(f⁺|D, S_t, x∗). There is a key distinction between explicit dependencies (where Σ actually depends on K) and implicit dependencies, where a site parameter, τ̃_q, might depend implicitly on K. A similar approach is taken in [26] and discussed in [7]. We therefore compute

∂Σ⁺^(i)/∂x_{q,d} = Σ⁺^(i) K⁺^(i)⁻¹ (∂K⁺^(i)/∂x_{q,d}) K⁺^(i)⁻¹ Σ⁺^(i).   (9)

On first inspection, it may seem computationally too expensive to compute derivatives with respect to each q and d.
However, note that we may compute and store the matrices K⁺^(i)⁻¹ Σ⁺^(i), (K^(i) + σ²^(i)I)⁻¹ and (Σ^(i) + σ²^(i)I)⁻¹ once, and that ∂K⁺^(i)/∂x_{q,d} is symmetric with exactly one non-zero row and non-zero column, which can be exploited for fast matrix multiplication and trace computations.

Figure 1: Assessing the quality of our approximations to the parallel predictive entropy search strategy. (a) Synthetic objective function (blue line) defined on [0, 1], with noisy observations (black squares). (b) Ground truth a_PPES defined on [0, 1]², obtained by rejection sampling. (c) Our approximation â_PPES using expectation propagation. Dark regions correspond to pairs (x, x′) with high utility, whilst faint regions correspond to pairs (x, x′) with low utility.

4 Empirical Study

In this section, we study the performance of PPES in comparison to the aforementioned methods. We model f as a Gaussian process with constant mean λ and covariance kernel k. Observations of the objective function are considered to be independently drawn from N(f(x), σ²). In our experiments, we choose to use a squared-exponential kernel of the form k(x, x′) = γ² exp(−0.5 Σ_d (x_d − x′_d)²/l_d²). Therefore the set of model hyperparameters is {λ, γ, l_1, ..., l_D, σ}; a broad Gaussian hyperprior is placed on λ and uninformative Gamma priors are used for the other hyperparameters. It is worth investigating how well â_PPES (7) is able to approximate a_PPES (2).
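Before turning to the experiments, note that the Monte Carlo estimator (7) itself is only a few lines given the covariance matrices (a sketch with hypothetical inputs: `Ks`, `Sigmas` and `sigma2s` stand for the per-sample K^(i), Σ^(i) and σ²^(i)):

```python
import numpy as np

def ppes_acquisition(Ks, Sigmas, sigma2s):
    """Monte Carlo estimate (7): average over hyperparameter/maximizer samples of
    0.5 * (log det(K^(i) + s2 I) - log det(Sigma^(i) + s2 I))."""
    total = 0.0
    for K, Sigma, s2 in zip(Ks, Sigmas, sigma2s):
        I = s2 * np.eye(K.shape[0])
        # slogdet avoids overflow/underflow of the determinant itself
        total += np.linalg.slogdet(K + I)[1] - np.linalg.slogdet(Sigma + I)[1]
    return total / (2.0 * len(Ks))
```

Since conditioning on the maximizer shrinks the predictive covariance (Σ ⪯ K), the estimate is non-negative, and it is zero when conditioning carries no information.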
In order to test the approximation in a manner amenable to visualization, we generate a sample f from a Gaussian process prior on X = [0, 1], with γ² = 1, σ² = 10⁻⁴ and l² = 0.025, and consider batches of size Q = 2. We set M = 200. A rejection sampling based approach is used to compute the ground truth aPPES, defined on X^Q = [0, 1]². We first discretize [0, 1]², and sample p(x∗|D) in (2) by evaluating samples from p(f|D) on the discrete points and choosing the input with the highest function value. Given x∗, we compute H[p(y1, y2|D, x1, x2, x∗)] using rejection sampling: samples from p(f|D) are evaluated on the discrete points in [0, 1]² and rejected if their highest function value does not occur at x∗. We add independent Gaussian noise with variance σ² to the non-rejected samples from the previous step and approximate H[p(y1, y2|D, x1, x2, x∗)] using kernel density estimation [27]. Figure 1 illustrates (a) the objective function to be maximized, f, with 5 noisy observations, (b) the aPPES ground truth obtained using the rejection sampling method, and (c) ˆaPPES using the EP method we developed in the previous section. The black squares on the axes of Figures 1(b) and 1(c) represent the locations in X = [0, 1] where f has been noisily sampled; the darker the shade, the larger the function value. The lightly shaded horizontal and vertical lines in these figures pass through the sampled locations, reflecting the low utility of re-evaluating near points which have already been observed. The figures representing aPPES and ˆaPPES appear to be symmetric, as is expected, since the set St = {x, x′} is unordered: all points in the set are probed in parallel, i.e. St = {x, x′} = {x′, x}. The surface of ˆaPPES is similar to that of aPPES. In particular, the ˆaPPES approximation often appeared to be an annealed version of the ground truth aPPES, in the sense that peaks were more pronounced and non-peak areas were flatter.
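The discretize-and-argmax construction of p(x∗|D) described above can be sketched in a toy 1-d Monte Carlo version (observation values, grid size and sample counts below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)[:, None]        # discretization of the input space

def se(A, B, gamma2=1.0, ell2=0.025):
    """Squared-exponential kernel between two sets of 1-d points."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return gamma2 * np.exp(-0.5 * d2 / ell2)

# A few noisy observations of some objective (values are invented)
Xobs = np.array([[0.1], [0.4], [0.8]])
yobs = np.array([0.2, 1.0, -0.5])
s2 = 1e-4

# GP posterior mean and covariance on the grid
Kxx = se(Xobs, Xobs) + s2 * np.eye(3)
Kgx = se(grid, Xobs)
mu = Kgx @ np.linalg.solve(Kxx, yobs)
cov = se(grid, grid) - Kgx @ np.linalg.solve(Kxx, Kgx.T)

# Monte Carlo estimate of p(x* | D): sample posterior functions, record the argmax
samples = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)), size=2000)
counts = np.bincount(samples.argmax(axis=1), minlength=len(grid))
p_xstar = counts / counts.sum()
```

The resulting histogram over grid points plays the role of the discretized p(x∗|D) fed into the rejection sampling step.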
Since we are interested in arg max over {x, x′} ∈ X² of aPPES({x, x′}), our key concern is that the peaks of ˆaPPES occur at the same input locations as those of aPPES. This appears to be the case in our experiment, suggesting that the arg max of ˆaPPES is a good approximation to the arg max of aPPES. We now test the performance of PPES in the task of finding the optimum of various objective functions. For each experiment, we compare PPES (M = 200) to EI-MCMC (with 100 MCMC samples), simulation matching with a UCB baseline policy, GP-BUCB and GP-UCB-PE. We use the random features method to sample from p(x∗|D), rejecting samples which lead to failed EP runs. An experiment on an objective function, f, consists of sampling 5 input points uniformly at random and running each algorithm starting with these samples and their corresponding (noisy) function values. We measure performance after t batch evaluations using the immediate regret, rt = |f(˜xt) − f(x∗)|, where x∗ is the known optimizer of f and ˜xt is the recommendation of an algorithm after t batch evaluations.
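Concretely, the regret bookkeeping is just the following (the objective and the recommendation sequence below are hypothetical, for illustration only):

```python
# Immediate regret r_t = |f(x~_t) - f(x*)| for a toy 1-d objective
def f(x):
    return -(x - 0.3) ** 2

x_star = 0.3                                 # known optimizer of f
recommendations = [0.9, 0.5, 0.35, 0.31]     # hypothetical x~_t after each batch
regret = [abs(f(x) - f(x_star)) for x in recommendations]
```

Plotting `regret` against t yields one trajectory of the curves reported below.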
We perform 100 experiments for each objective function, and report the median of the immediate regret obtained for each algorithm.

[Figure 2: Median of the immediate regret of PPES and 4 other algorithms (EI-MCMC, SM-UCB, GP-BUCB, GP-UCB-PE) over 100 experiments on benchmark synthetic objective functions, using batches of size Q = 3. Panels: (a) Branin, (b) Cosines, (c) Shekel, (d) Hartmann.]

The confidence bands represent one standard deviation obtained from bootstrapping. The empirical distribution of the immediate regret is heavy tailed, making the median more representative of where most data points lie than the mean. Our first set of experiments is on a set of synthetic benchmark objective functions including Branin-Hoo [28], a mixture of cosines [29], a Shekel function with 10 modes [30] (each defined on [0, 1]²) and the Hartmann-6 function [28] (defined on [0, 1]⁶). We choose batches of size Q = 3 at each decision time. The plots in Figure 2 illustrate the median immediate regrets found for each algorithm. The results suggest that the PPES algorithm performs close to the best, if not the best, on each problem considered. EI-MCMC does significantly better on the Hartmann function, which is a relatively smooth function with very few modes, where greedy search appears beneficial.
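The median curves with bootstrap bands can be produced along these lines (the regret trajectories below are synthetic placeholders standing in for the 100 runs per problem):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical heavy-tailed regret trajectories: 100 runs x 25 batch evaluations
runs = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 25)) * np.linspace(1.0, 0.1, 25)

# The reported statistic: the median over runs at each evaluation index
median_curve = np.median(runs, axis=0)

# One-standard-deviation bands for the median, estimated by bootstrapping runs
boot = np.array([np.median(runs[rng.integers(0, 100, 100)], axis=0)
                 for _ in range(500)])
band = boot.std(axis=0)
```

Using the median rather than the mean matches the heavy-tailed regret distribution noted above.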
Entropy-based strategies are more exploratory in higher dimensions. Nevertheless, PPES does significantly better than GP-UCB-PE on 3 of the 4 problems, suggesting that our non-greedy batch selection procedure enhances performance versus a greedy entropy based policy. We now consider maximization of real world objective functions. The first, boston, returns the negative of the prediction error of a neural network trained on a random train/test split of the Boston Housing dataset [31]. The weight-decay parameter and number of training iterations for the neural network are the parameters to be optimized over. The next function, hydrogen, returns the amount of hydrogen produced by particular bacteria as a function of pH and nitrogen levels of a growth medium [32]. Thirdly, we consider a function, rocket, which runs a simulation of a rocket [33] being launched from the Earth's surface and returns the time taken for the rocket to land on the Earth's surface. The variables to be optimized over are the launch height from the surface, the mass of fuel to use and the angle of launch with respect to the Earth's surface. If the rocket does not return, the function returns 0. Finally, we consider a function, robot, which returns the walking speed of a bipedal robot [34]. The function's input parameters, which live in [0, 1]⁸, parameterize the robot's controller. We add Gaussian noise with σ = 0.1 to the noiseless function values. Note that none of the functions we consider is available analytically: boston trains a neural network and returns test error, whilst rocket and robot run physical simulations involving differential equations before returning a desired quantity. Since the hydrogen dataset is available only for discrete points, we define hydrogen to return the predictive mean of a Gaussian process trained on the dataset. Figure 3 shows the median values of immediate regret by each method over 200 random initializations. We consider batches of size Q = 2 and Q = 4.
We find that PPES consistently outperforms competing methods on the functions considered. The greediness of the SM-UCB, GP-BUCB and GP-UCB-PE algorithms, and the fact that they do not require MCMC sampling, make them amenable to large batch experiments; for example, [17] consider optimization in R⁴⁵ with batches of size 10. However, these three algorithms all perform poorly when selecting batches of smaller size. The performance on the hydrogen function illustrates an interesting phenomenon: whilst the immediate regret of PPES is mediocre initially, it drops rapidly as more batches are evaluated. This behaviour is likely due to the non-greediness of the approach we have taken. EI-MCMC makes good initial progress, but then fails to explore the input space as well as PPES is able to. Recall that after each batch evaluation, an algorithm is required to output ˜xt, its best estimate for the maximizer of the objective function. We observed that whilst competing algorithms tended to evaluate points with higher objective function values than PPES did, when it came to recommending ˜xt,
[Figure 3: Median of the immediate regret of PPES and 4 other algorithms over 100 experiments on real world objective functions: (a) boston, (b) hydrogen, (c) rocket, (d) robot. Figures in the top row use batches of size Q = 2, whilst figures in the bottom row use batches of size Q = 4.]

PPES tended to do a better job. Our belief is that this occurred exactly because the PPES objective aims to maximize information gain rather than objective function value improvement. The rocket function has a strong discontinuity, making it difficult to maximize. If the fuel mass, launch height and/or angle are too high, the rocket does not return to the Earth's surface, resulting in a 0 function value. It can be argued that a stationary kernel Gaussian process is a poor model for this function, yet it is worth investigating the performance of GP based models since a practitioner may not know a priori whether or not their black-box function is smooth. PPES seemed to handle this function best: it had fewer samples which resulted in a 0 function value than each of the competing methods, and it made fewer recommendations which led to a 0 function value. The relative increase in PPES performance from increasing the batch size from Q = 2 to Q = 4 is small for the robot function compared to the other functions considered.
We believe this is a consequence of using a slightly naive optimization procedure to save computation time. Our optimization procedure first computes ˆaPPES at 1000 points selected uniformly at random, and then performs gradient ascent from the best point. Since ˆaPPES is defined on X^Q = [0, 1]³², this method may miss a global optimum. The other methods all select their batches greedily, and hence only need to optimize in X = [0, 1]⁸. However, this issue should easily be avoided by using a more exhaustive gradient based optimizer.

5 Conclusions

We have developed parallel predictive entropy search, an information theoretic approach to batch Bayesian optimization. Our method is greedy in the sense that it aims to maximize the one-step information gain about the location of x∗, but it is not greedy in how it selects a set of points to evaluate next. Previous methods are doubly greedy, in that they look one step ahead and also select a batch of points greedily. Competing methods are prone to under-exploring, which hurts their performance on multi-modal, noisy objective functions, as we demonstrate in our experiments.

References

[1] G. Wang and S. Shan. Review of Metamodeling Techniques in Support of Engineering Design Optimization. Journal of Mechanical Design, 129(4):370–380, 2007.
[2] W. Ziemba and R. Vickson. Stochastic Optimization Models in Finance. World Scientific Singapore, 2006.
[3] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. NIPS, 2012.
[4] J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Kluwer, 1989.
[5] D. Lizotte, T. Wang, M. Bowling, and D. Schuurmans. Automatic Gait Optimization with Gaussian Process Regression. IJCAI, pages 944–949, 2007.
[6] D. M. Negoescu, P. I. Frazier, and W. B. Powell. The Knowledge-Gradient Algorithm for Sequencing Experiments in Drug Discovery. INFORMS Journal on Computing, 23(3):346–363, 2011.
[7] Carl Rasmussen and Chris Williams.
Gaussian Processes for Machine Learning. MIT Press, 2006.
[8] A. Shah, A. G. Wilson, and Z. Ghahramani. Student-t Processes as Alternatives to Gaussian Processes. AISTATS, 2014.
[9] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, Prabhat, and R. P. Adams. Scalable Bayesian Optimization Using Deep Neural Networks. ICML, 2015.
[10] E. Brochu, M. Cora, and N. de Freitas. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Applications to Active User Modeling and Hierarchical Reinforcement Learning. Technical Report TR-2009-23, University of British Columbia, 2009.
[11] G. Gutin, A. Yeo, and A. Zverovich. Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP. Discrete Applied Mathematics, 117:81–86, 2002.
[12] P. Hennig and C. J. Schuler. Entropy Search for Information-Efficient Global Optimization. JMLR, 2012.
[13] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive Entropy Search for Efficient Global Optimization of Black-box Functions. NIPS, 2014.
[14] D. Ginsbourger, J. Janusevskis, and R. Le Riche. Dealing with Asynchronicity in Parallel Gaussian Process Based Optimization. 2011.
[15] J. Azimi, A. Fern, and X. Z. Fern. Batch Bayesian Optimization via Simulation Matching. NIPS, 2010.
[16] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. ICML, 2010.
[17] T. Desautels, A. Krause, and J. Burdick. Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process Bandit Optimization. ICML, 2012.
[18] E. Contal, D. Buffoni, D. Robicquet, and N. Vayatis. Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration. In Machine Learning and Knowledge Discovery in Databases, pages 225–240. Springer Berlin Heidelberg, 2013.
[19] D. J. MacKay. Information-Based Objective Functions for Active Data Selection. Neural Computation, 4(4):590–604, 1992.
[20] N. Houlsby, J. M. Hernández-Lobato, F. Huszar, and Z. Ghahramani. Collaborative Gaussian Processes for Preference Learning. NIPS, 2012.
[21] T. P. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[22] S. Bochner. Lectures on Fourier Integrals. Princeton University Press, 1959.
[23] A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. NIPS, 2007.
[24] R. M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
[25] M. Seeger. Expectation Propagation for Exponential Families. Technical Report, U.C. Berkeley, 2008.
[26] J. P. Cunningham, P. Hennig, and S. Lacoste-Julien. Gaussian Probabilities and Expectation Propagation. arXiv, 2013. http://arxiv.org/abs/1111.6832.
[27] I. Ahmad and P. E. Lin. A Nonparametric Estimation of the Entropy for Absolutely Continuous Distributions. IEEE Trans. on Information Theory, 22(3):372–375, 1976.
[28] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, 2008.
[29] B. S. Anderson, A. W. Moore, and D. Cohn. A Nonparametric Approach to Noisy and Costly Optimization. ICML, 2000.
[30] J. Shekel. Test Functions for Multimodal Search Techniques. Information Science and Systems, 1971.
[31] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
[32] E. H. Burrows, W. K. Wong, X. Fern, F. W. R. Chaplen, and R. L. Ely. Optimization of pH and Nitrogen for Enhanced Hydrogen Production by Synechocystis sp. PCC 6803 via Statistical and Machine Learning Methods. Biotechnology Progress, 25(4):1009–1017, 2009.
[33] J. E. Hasbun. Classical Mechanics with MATLAB Applications. Jones & Bartlett Learning, 2008.
[34] E. Westervelt and J. Grizzle. Feedback Control of Dynamic Bipedal Robot Locomotion. Control and Automation Series. CRC Press, 2007.
The Return of the Gating Network: Combining Generative Models and Discriminative Training in Natural Image Priors

Dan Rosenbaum, School of Computer Science and Engineering, Hebrew University of Jerusalem
Yair Weiss, School of Computer Science and Engineering, Hebrew University of Jerusalem

Abstract

In recent years, approaches based on machine learning have achieved state-of-the-art performance on image restoration problems. Successful approaches include both generative models of natural images and discriminative training of deep neural networks. Discriminative training of feed forward architectures allows explicit control over the computational cost of performing restoration and therefore often leads to better performance at the same cost at run time. In contrast, generative models have the advantage that they can be trained once and then adapted to any image restoration task by a simple use of Bayes' rule. In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images. We apply this idea to the very successful Gaussian Mixture Model (GMM) of natural images. We show that it is possible to achieve performance comparable to the original GMM, but with a two orders of magnitude improvement in run time, while maintaining the advantage of generative models.

1 Introduction

Figure 1 shows an example of an image restoration problem. We are given a degraded image (in this case degraded with Gaussian noise) and seek to estimate the clean image. Image restoration is an extremely well studied problem, and successful systems for specific scenarios have been built without any explicit use of machine learning. For example, approaches based on "coring" can be used to successfully remove noise from an image by transforming to a wavelet basis and zeroing out coefficients that are close to zero [7].
More recently, the very successful BM3D method removes noise from patches by finding similar patches in the noisy image and combining all similar patches in a nonlinear way [4]. In recent years, machine learning based approaches have started to outperform the hand engineered systems for image restoration. As in other areas of machine learning, these approaches can be divided into generative approaches, which seek to learn probabilistic models of clean images, versus discriminative approaches, which seek to learn models that map noisy images to clean images while minimizing the training loss between the predicted clean image and the true one. Two influential generative approaches are the fields of experts (FOE) approach [16] and KSVD [5], which assume that filter responses to natural images should be sparse and learn a set of filters under this assumption. While very good performance can be obtained using these methods, when they are trained generatively they do not give performance that is as good as BM3D. Perhaps the most successful generative approach to image restoration is based on Gaussian Mixture Models (GMMs) [22]. In this approach 8x8 image patches are modeled as 64 dimensional vectors, and a simple GMM with 200 components is used to model the density in this space.

[Figure 1: Image restoration with a Gaussian mixture model. Top: noisy image; full model gating (200 × 64 dot-products per patch) vs. fast gating (100 dot-products per patch). Middle: the most probable component of every patch calculated using a full posterior calculation vs. a fast gating network (color coded by embedding in a 2-dimensional space). Bottom: the restored images (29.16dB vs. 29.12dB): the gating network achieves almost identical results but is 2 orders of magnitude faster.]

Despite its simplicity, this model remains among the top performing models in terms of likelihood given to left out patches and also gives excellent performance in image restoration [23, 20].
In particular, it outperforms BM3D on image denoising and has been successfully used for other image restoration problems such as deblurring [19]. The performance of generative models in denoising can be much improved by using an "empirical Bayes" approach where the parameters are estimated from the noisy image [13, 21, 14, 5]. Discriminative approaches for image restoration typically assume a particular feed forward structure and use training to optimize the parameters of the structure. Hel-Or and Shaked used discriminative training to optimize the parameters of coring [7]. Chen et al. [3] discriminatively learn the parameters of a generative model to minimize its denoising error. They show that even though the model was trained for a specific noise level, it achieves results similar to the GMM for different noise levels. Jain and Seung trained a convolutional deep neural network to perform image denoising. Using the same training set as was used by the FOE and GMM papers, they obtained better results than FOE but not as good as BM3D or GMM [9]. Burger et al. [2] trained a deep (nonconvolutional) multi layer perceptron to perform denoising. By increasing the size of the training set by two orders of magnitude relative to previous approaches, they obtained what is perhaps the best stand-alone method for image denoising. Fanello et al. [6] trained a random forest architecture to optimize denoising performance. They obtained results similar to the GMM but at a much smaller computational cost. Which approach is better, discriminative or generative? First it should be said that the best performing methods in both categories give excellent performance. Indeed, even the BM3D approach (which can be outperformed by both types of methods) has been said to be close to optimal for image denoising [12]. The primary advantage of the discriminative approach is its efficiency at run-time.
By defining a particular feed-forward architecture we are effectively constraining the computational cost at run-time, and during learning we seek the best performing parameters for a fixed computational cost. The primary advantage of the generative approach, on the other hand, is its modularity. Learning only requires access to clean images, and after learning a density model for clean images, Bayes' rule can be used to perform restoration on any image degradation and can support different loss functions at test time. In contrast, discriminative training requires separate training (and usually separate architectures) for every possible image degradation. Given that there are literally an infinite number of ways to degrade images (not just Gaussian noise with different noise levels but also compression artifacts, blur etc.), one would like to have a method that maintains the modularity of generative models but with the computational cost of discriminative models. In this paper we propose such an approach. Our method is based on the observation that the most costly part of inference with many generative models for natural images is in estimating latent variables. These latent variables can be abstract representations of local image covariance (e.g. [10]) or simply a discrete variable that indicates which Gaussian most likely generated the data in a GMM. We therefore discriminatively train a feed-forward architecture, or a "gating network", to predict these latent variables using far less computation. The gating network need only be trained on "clean" images, and we show how to combine it during inference with Bayes' rule to perform image restoration for any type of image degradation. Our results show that we can maintain the accuracy and the modularity of generative models but with a speedup of two orders of magnitude in run time.
In the rest of the paper we focus on the Gaussian mixture model, although this approach can be used for other generative models with latent variables, like the one proposed by Karklin and Lewicki [10]. Code implementing our proposed algorithms for the GMM prior and Karklin and Lewicki's prior is available online at www.cs.huji.ac.il/~danrsm.

2 Image restoration with Gaussian mixture priors

Modeling image patches with Gaussian mixtures has proven to be very effective for image restoration [22]. In this model, the prior probability of an image patch x is modeled by $\Pr(x) = \sum_h \pi_h \mathcal{N}(x; \mu_h, \Sigma_h)$. During image restoration, this prior is combined with a likelihood function Pr(y|x), and restoration is based on the posterior probability Pr(x|y), which is computed using Bayes' rule. Typically, MAP estimators are used [22], although for some problems the more expensive BLS estimator has been shown to give an advantage [17]. In order to maximize the posterior probability, different numerical optimizations can be used. Typically they require computing the assignment probabilities:

$$\Pr(h|x) = \frac{\pi_h \mathcal{N}(x; \mu_h, \Sigma_h)}{\sum_k \pi_k \mathcal{N}(x; \mu_k, \Sigma_k)} \qquad (1)$$

These assignment probabilities play a central role in optimizing the posterior. For example, it is easy to see that the gradient of the log of the posterior involves a weighted sum of gradients where the assignment probabilities give the weights:

$$\frac{\partial \log \Pr(x|y)}{\partial x} = \frac{\partial \left[\log \Pr(x) + \log \Pr(y|x) - \log \Pr(y)\right]}{\partial x} = -\sum_h \Pr(h|x)\,(x - \mu_h)^\top \Sigma_h^{-1} + \frac{\partial \log \Pr(y|x)}{\partial x} \qquad (2)$$

Similarly, one can use a version of the EM algorithm to iteratively maximize the posterior probability by solving a sequence of reweighted least squares problems. Here the assignment probabilities define the weights for the least squares problems [11]. Finally, in auxiliary samplers for performing BLS estimation, each iteration requires sampling the hidden variables according to the current guess of the image [17].
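Equation (1) is a standard mixture responsibility computation; a minimal numerically stable sketch (random toy parameters, with the normalization done via log-sum-exp) could look like:

```python
import numpy as np

rng = np.random.default_rng(4)
d, K = 8, 3                         # toy sizes; the paper uses d = 64, K = 200

# A toy GMM (random parameters, for illustration only)
pis = np.array([0.5, 0.3, 0.2])
mus = rng.standard_normal((K, d))
A = rng.standard_normal((K, d, d))
Sigmas = A @ A.transpose(0, 2, 1) + d * np.eye(d)   # SPD covariances

def log_gauss(x, mu, Sigma):
    """Log density of N(x; mu, Sigma)."""
    diff = x - mu
    sign, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

def responsibilities(x):
    """Assignment probabilities Pr(h|x) of equation (1), via log-sum-exp."""
    logs = np.log(pis) + np.array([log_gauss(x, mus[h], Sigmas[h]) for h in range(K)])
    logs -= logs.max()              # stabilize before exponentiating
    p = np.exp(logs)
    return p / p.sum()

x = rng.standard_normal(d)
r = responsibilities(x)             # r.argmax() gives the hard gating of equation (3)
```

In the algorithms above, these responsibilities supply the weights of the gradient in (2) and of the reweighted least squares problems.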
For reasons of computational efficiency, the assignment probabilities are often used to calculate a hard assignment of a patch to a component:

$$\hat{h}(x) = \arg\max_h \Pr(h|x) \qquad (3)$$

Following the literature on "mixtures of experts" [8] we call this process gating. As we now show, this process is often the most expensive part of performing image restoration with a GMM prior.

2.1 Running time of inference

The successful EPLL algorithm [22] for image restoration with patch priors defines a cost function based on the simplifying assumption that the patches of an image are independent:

$$J(x) = -\sum_i \log \Pr(x_i) - \lambda \log \Pr(y|x) \qquad (4)$$

where {xi} are the image patches, x is the full image and λ is a parameter that compensates for the simplifying assumption. Minimizing this cost when the prior is a GMM is done by alternating between three steps. We give here only a short description of each step; the full algorithm is given in the supplementary material. The three steps are:

• Gating. For each patch, the current guess xi is assigned to one of the components ĥ(xi).
• Filtering. For each patch, depending on the assignment ĥ(xi), a least squares problem is solved.
• Mixing. Overlapping patches are averaged together with the noisy image y.

It can be shown that after each iteration of the three steps, the EPLL splitting cost function (a relaxation of equation 4) is decreased. In terms of computation time, the gating step is by far the most expensive one. The filtering step multiplies each d dimensional patch by a single d×d matrix, which is equivalent to d dot-products or d² flops per patch. Assuming a local noise model, the mixing step involves summing up all patches back to the image and solving a local cost on the image (equivalent to 1 dot-product or d flops per patch).¹ In the gating step, however, we compute the probability of all the Gaussian components for every patch.
Each computation performs d dot-products, and so for K components we get a total of d × K dot-products or d² × K flops per patch. For a GMM with 200 components like the one used in [22], this results in a gating step which is 200 times slower than the filtering and mixing steps.

3 The gating network

[Figure 2: Architecture of the gating step in GMM inference (left) vs. a more efficient gating network.]

¹For non-local noise models like in image deblurring there is an additional factor of the square of the kernel dimension. If the kernel dimension is of the order of d, the mixing step performs d dot-products or d² flops.

The left side of figure 2 shows the computation involved in a naive computation of the gating. In the GMM used in [22], the Gaussians are zero mean, so computing the most likely component involves multiplying each patch with all the eigenvectors of the covariance matrix and squaring the results:

$$\log \Pr(x|h) = -x^\top \Sigma_h^{-1} x + \mathrm{const}_h = -\sum_i \frac{1}{\sigma_i^h} \big(v_i^{h\top} x\big)^2 + \mathrm{const}_h \qquad (5)$$

where σᵢʰ and vᵢʰ are the eigenvalues and eigenvectors of the covariance matrix. The eigenvectors can be viewed as templates, and therefore the gating is performed according to weighted sums of dot-products with different templates. Every component has a different set of templates and a different weighting of their importance (the eigenvalues). Framing this process as a feed-forward network starting with a patch of dimension d and using K Gaussian components, the first layer computes d × K dot-products (followed by squaring), and the second layer performs K dot-products. Viewed this way, it is clear that the naive computation of the gating is inefficient. There is no "sharing" of dot-products between different components, and the number of dot-products required for deciding on the appropriate component may be much smaller than is used in this naive computation.
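A minimal sketch contrasting the naive gating of equation (5) with the kind of shared-projection network developed in the next subsection (toy sizes; the shared templates and weights below are random placeholders for trained parameters, and the constₕ terms are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
d, K, m = 16, 4, 8                  # toy sizes; [22] uses d = 64, K = 200 (m = 100 in Sec. 3.1)

A = rng.standard_normal((K, d, d))
Sigmas = A @ A.transpose(0, 2, 1) + d * np.eye(d)   # zero-mean component covariances
x = rng.standard_normal(d)

# Naive gating, equation (5): d eigenvector dot-products per component (d * K total)
scores = np.empty(K)
for h in range(K):
    sigma, V = np.linalg.eigh(Sigmas[h])            # columns of V are the eigenvectors v_i^h
    scores[h] = -np.sum((V.T @ x) ** 2 / sigma)     # -sum_i (v_i^h . x)^2 / sigma_i^h

# The same quantity computed directly as -x^T Sigma_h^{-1} x
direct = np.array([-x @ np.linalg.solve(Sigmas[h], x) for h in range(K)])

# A shared-projection gating network: m shared dot-products, then K weighted
# sums -- m + K dot-products per patch instead of d * K.
V_shared = rng.standard_normal((m, d))              # placeholder for trained templates
W = rng.random((K, m))                              # placeholder for trained weights w_i^h
logits = -W @ (V_shared @ x) ** 2
p = np.exp(logits - logits.max())
p /= p.sum()                                        # softmax over components
```

With the paper's sizes, the shared architecture replaces 64 × 200 dot-products per patch with roughly 100 + 200, which is the source of the speedup reported later.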
3.1 Discriminative training of the gating network

In order to obtain a more efficient gating network we use discriminative training. We rewrite equation 5 as:

$$\log \Pr(x|h) \approx -\sum_i w_i^h \big(v_i^\top x\big)^2 + \mathrm{const}_h \qquad (6)$$

Note that the vectors vᵢ are shared and do not depend on h; only the weights wᵢʰ depend on h. Given a set of vectors vᵢ and the weights w, the posterior probability of a patch assignment is approximated by:

$$\Pr(h|x) \approx \frac{\exp\big({-\sum_i w_i^h (v_i^\top x)^2 + \mathrm{const}_h}\big)}{\sum_k \exp\big({-\sum_i w_i^k (v_i^\top x)^2 + \mathrm{const}_k}\big)} \qquad (7)$$

We minimize the cross entropy between this approximate posterior probability and the exact posterior probability given by equation 1. The training is done on 500 mini-batches of 10K clean image patches each, taken randomly from the 200 images in the BSDS training set. We minimize the training loss for each mini-batch using 100 iterations of minimize.m [15] before moving to the next mini-batch. Results of the training are shown in figure 3. Unlike the eigenvectors of the GMM covariance matrices, which are often global Fourier patterns or edge filters, the learned vectors are more localized in space and resemble Gabor filters.

[Figure 3: Left: A subset of the 200 × 64 eigenvectors used for the full posterior calculation. Center: The first layer of the discriminatively trained gating network, which serves as a shared pool of 100 eigenvectors. Right: The number of dot-products versus the resulting PSNR for patch denoising using different models. Discriminatively training smaller gating networks is better than generatively training smaller GMMs (with fewer components).]

Figure 1 compares the gating performed by the full network and the discriminatively trained one. Each pixel shows the predicted component for a patch centered around that pixel.
Components are color coded so that dark pixels correspond to components with low variance and bright pixels to high variance, while the colors denote the preferred orientation of the covariance. Although the gating network requires far fewer dot-products, it gives similar (although not identical) gating. Figure 4 shows sample patches arranged according to the gating with either the full model (top) or the gating network (bottom). We classify a set of patches by their assignment probabilities. For 60 of the 200 components we display 10 patches that are classified to that component. It can be seen that whether the classification is done using the gating network or the full posterior, the results are visually similar. The right side of figure 3 compares two different ways to reduce computation time. The green curve shows gating networks of different sizes (containing 25 to 100 vectors) trained on top of the 200 component GMM. The blue curve shows GMMs with a different number of components (from 2 to 200). Each of the models is used to perform patch denoising (using MAP inference) with a noise level of 25. It is clearly shown that, in terms of the number of dot-products versus the resulting PSNR, discriminatively training a small gating network on top of a GMM with 200 components is much better than pure generative training of smaller GMMs.

[Figure 4: Gating with the full posterior computation vs. the learned gating network. Top: Patches from clean images arranged according to the component with maximum probability. Every column represents a different component (showing 60 out of 200). Bottom: Patches arranged according to the component with maximum gating score. Both gating methods have a very similar behavior.]

4 Results

We compare the image restoration performance of our proposed method to several other methods proposed in the literature.
The first class of methods used for denoising are "internal" methods that do not require any learning but are specific to image denoising. A prime example is BM3D. The second class of methods are generative models, which are trained only on clean images. The original EPLL algorithm is in this class. Finally, the third class of models are discriminative models, which are trained "end-to-end". These typically have the best performance but need to be trained in advance for any image restoration problem.

In the right-hand side of table 1 we show the denoising results of our implementation of EPLL with a GMM of 200 components. It can be seen that the difference between doing the full inference and using a learned gating network (with 100 vectors) is about 0.1 dB to 0.3 dB, which is comparable to the difference between different published values of performance for a single algorithm. Even with the learned gating network, EPLL's performance is among the top performing methods for all noise levels. The fully discriminative MLP method is the best performing method for each noise level, but it is trained explicitly and separately for each noise level. The right-hand side of table 1 also shows the run times of our Matlab implementation of EPLL on a standard CPU.

                        σ=20    σ=25    σ=30    σ=50    σ=75
  internal
    BM3D [22]            -      28.57    -      25.63    -
    BM3D [1]             -      28.35    -      25.45   23.96
    BM3D [6]           29.25     -      27.32   25.09    -
    LSSC [22]            -      28.70    -      25.73    -
    LSSC [6]           29.40     -      27.39   25.09    -
    KSVD [22]            -      28.20    -      25.15    -
  generative
    FoE [22]             -      27.77    -      23.29    -
    KSVDG [22]           -      28.28    -      25.18    -
    EPLL [22]            -      28.71    -      25.72    -
    EPLL [1]             -      28.47    -      25.50   24.16
    EPLL [6]           29.38     -      27.44   25.22    -
  discriminative
    CSF5 7×7 [18]        -      28.72    -       -       -
    MLP [1]              -      28.75    -      25.83   24.42
    FF [6]             29.65     -      27.48   25.25    -

  EPLL with different gating methods:
              σ=25    σ=50    σ=75    sec.
    full      28.52   25.53   24.02   91
    gating    28.40   25.37   23.79   5.6
    gating3   28.36   25.30   23.71   0.7
  full: naive posterior computation. gating: the learned gating network. gating3: the learned network calculated with a stride of 3.

Table 1: Average PSNR (dB) for image denoising. Left: Values for different denoising methods as reported by different papers. Right: Comparing different gating methods for our EPLL implementation, computed over 100 test images of BSDS. Using a fast gating method results in a PSNR difference comparable to the difference between different published values of the same algorithm.

Figure 5: Image denoising examples (PSNR, two images; noisy: 20.19/20.19, MLP: 27.31/30.37, full: 27.01/30.14, gating: 26.99/30.06). Using the fast gating network or the full inference computation is visually indistinguishable.

Although the number of dot-products in the gating has been decreased by a factor of 128, the effect on the actual run times is more complex. Still, by only switching to the new gating network, we obtain a speedup factor of more than 15 on small images. We also show that further speedup can be achieved by simply working with fewer overlapping patches ("stride"). The results show that using a stride of 3 (i.e., working on every 9th patch) leads to almost no loss in PSNR. Although the "stride" speedup can be achieved by any patch-based method, it emphasizes another important trade-off between accuracy and running time. In total, we see that a speedup factor of more than 100 leads to very similar results to the full inference. We expect even more dramatic speedups are possible with more optimized and parallel code.

Figure 5 gives a visual comparison of denoised images. As can be expected from the PSNR values, the results with full EPLL and gating-network EPLL are visually indistinguishable. To highlight the modularity advantage of generative models, figure 6 shows results of image deblurring using the same prior. Even though all the training of the EPLL and the gating was done on clean sharp images, the prior can be combined with a likelihood for deblurring to obtain state-of-the-art deblurring results. Again, the full and the gating results are visually indistinguishable.
Figure 6: Image deblurring examples (PSNR, two images; 9×9 blur: 19.12/22.50, Hyper-Laplacian: 24.69/25.03, full: 26.25/25.77, gating: 26.15/25.75). Using the learned gating network maintains the modularity property, allowing it to be used for different restoration tasks. Once again, results are very similar to the full inference computation.

Figure 7: Denoising of an 18-mega-pixel image (noisy PSNR: 20.17; CSF5 7×7: 30.49 at 230 sec.; EPLL-gating: 30.51 at 83 sec.). Using the learned gating network and a stride of 3, we get very fast inference with results comparable to discriminatively "end-to-end" trained models.

Finally, figure 7 shows the result of performing restoration on an 18-mega-pixel image. EPLL with a gating network achieves results comparable to a discriminatively trained method (CSF) [18] but is even more efficient, while maintaining the modularity of the generative approach.

5 Discussion Image restoration is a widely studied problem with immediate practical applications. In recent years, approaches based on machine learning have started to outperform handcrafted methods. This is true both for generative approaches and discriminative approaches. While discriminative approaches often give the best performance for a fixed computational budget, the generative approaches have the advantage of modularity. They are trained only on clean images and can be used to perform one of an infinite number of possible restoration tasks by using Bayes' rule. In this paper we have shown how to combine the best aspects of both approaches. We discriminatively train a feed-forward architecture to perform the most expensive part of inference using generative models. Our results indicate that we can still obtain state-of-the-art performance with two orders of magnitude improvement in run times, while maintaining the modularity advantage of generative models.
Acknowledgements Support by the ISF, Intel ICRI-CI and the Gatsby Foundation is gratefully acknowledged.

References
[1] Harold Christopher Burger, Christian Schuler, and Stefan Harmeling. Learning how to combine internal and external denoising methods. In Pattern Recognition, pages 121–130. Springer, 2013.
[2] Harold Christopher Burger, Christian J. Schuler, and Stefan Harmeling. Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds. arXiv preprint arXiv:1211.1544, 2012.
[3] Yunjin Chen, Thomas Pock, René Ranftl, and Horst Bischof. Revisiting loss-specific training of filter-based MRFs for image restoration. In Pattern Recognition, pages 271–281. Springer, 2013.
[4] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. Image Processing, IEEE Transactions on, 16(8):2080–2095, 2007.
[5] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. Image Processing, IEEE Transactions on, 15(12):3736–3745, 2006.
[6] Sean Ryan Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Shotton, Antonio Criminisi, Ugo Pattacini, and Tim Paek. Filter forests for learning data-dependent convolutional kernels. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1709–1716. IEEE, 2014.
[7] Yacov Hel-Or and Doron Shaked. A discriminative approach for wavelet denoising. Image Processing, IEEE Transactions on, 17(4):443–457, 2008.
[8] Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.
[9] Viren Jain and Sebastian Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems, pages 769–776, 2009.
[10] Yan Karklin and Michael S. Lewicki.
Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457(7225):83–86, 2009.
[11] Effi Levi. Using natural image priors - maximizing or sampling? PhD thesis, The Hebrew University of Jerusalem, 2009.
[12] Anat Levin and Boaz Nadler. Natural image denoising: Optimality and inherent bounds. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2833–2840. IEEE, 2011.
[13] Siwei Lyu and Eero P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In Advances in Neural Information Processing Systems, pages 945–952, 2006.
[14] Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse models for image restoration. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2272–2279. IEEE, 2009.
[15] Carl E. Rasmussen. minimize.m, 2006. http://learning.eng.cam.ac.uk/carl/code/minimize/.
[16] Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 860–867. IEEE, 2005.
[17] Uwe Schmidt, Qi Gao, and Stefan Roth. A generative perspective on MRFs in low-level vision. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 1751–1758. IEEE, 2010.
[18] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2774–2781. IEEE, 2014.
[19] Libin Sun, Sunghyun Cho, Jue Wang, and James Hays. Edge-based blur kernel estimation using patch priors. In Computational Photography (ICCP), 2013 IEEE International Conference on, pages 1–8. IEEE, 2013.
[20] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density estimator. In Advances in Neural Information Processing Systems, pages 2175–2183, 2013.
[21] Guoshen Yu, Guillermo Sapiro, and Stéphane Mallat. Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. Image Processing, IEEE Transactions on, 21(5):2481–2499, 2012.
[22] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 479–486. IEEE, 2011.
[23] Daniel Zoran and Yair Weiss. Natural images, Gaussian mixtures and dead leaves. In NIPS, pages 1745–1753, 2012.
Fighting Bandits with a New Kind of Smoothness

Jacob Abernethy, University of Michigan, jabernet@umich.edu
Chansoo Lee, University of Michigan, chansool@umich.edu
Ambuj Tewari, University of Michigan, tewaria@umich.edu

Abstract We provide a new analysis framework for the adversarial multi-armed bandit problem. Using the notion of convex smoothing, we define a novel family of algorithms with minimax optimal regret guarantees. First, we show that regularization via the Tsallis entropy, which includes EXP3 as a special case, matches the O(√(NT)) minimax regret with a smaller constant factor. Second, we show that a wide class of perturbation methods achieve a near-optimal regret as low as O(√(NT log N)), as long as the perturbation distribution has a bounded hazard function. For example, the Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key property and lead to near-optimal algorithms.

1 Introduction The classic multi-armed bandit (MAB) problem, generally attributed to the early work of Robbins (1952), poses a generic online decision scenario in which an agent must make a sequence of choices from a fixed set of options. After each decision is made, the agent receives some feedback in the form of a loss (or gain) associated with her choice, but no information is provided on the outcomes of alternative options. The agent's goal is to minimize the total loss over time, and the agent is thus faced with the balancing act of both experimenting with the menu of choices while also utilizing the data gathered in the process to improve her decisions. The MAB framework is not only mathematically elegant, but also useful for a wide range of applications including the design of medical experiments (Gittins, 1996), automated poker playing strategies (Van den Broeck et al., 2009), and hyperparameter tuning (Pacula et al., 2012).
Early MAB results relied on stochastic assumptions (e.g., IID) on the loss sequence (Auer et al., 2002; Gittins et al., 2011; Lai and Robbins, 1985). As researchers began to establish non-stochastic, worst-case guarantees for sequential decision problems such as prediction with expert advice (Littlestone and Warmuth, 1994), a natural question arose as to whether similar guarantees were possible for the bandit setting. The pioneering work of Auer, Cesa-Bianchi, Freund, and Schapire (2003) answered this in the affirmative by showing that their algorithm EXP3 possesses nearly-optimal regret bounds with matching lower bounds. Attention later turned to the bandit version of online linear optimization, and several associated guarantees were published over the following decade (Abernethy et al., 2012; Dani and Hayes, 2006; Dani et al., 2008; Flaxman et al., 2005; McMahan and Blum, 2004). Nearly all proposed methods have relied on a particular algorithmic blueprint: they reduce the bandit problem to the full-information setting, while using randomization to make decisions and to estimate the losses. A well-studied family of algorithms for the full-information setting is Follow the Regularized Leader (FTRL), which optimizes an objective function of the following form:

arg min_{x∈K} L^T x + λR(x)   (1)

where K is the decision set, L is (an estimate of) the cumulative loss vector, and R is a regularizer, a convex function with suitable curvature to stabilize the objective. The choice of regularizer R is
critical to the algorithm's performance. For example, the EXP3 algorithm (Auer, 2003) regularizes with the entropy function and achieves a nearly optimal regret bound when K is the probability simplex. For a general convex set, however, other regularizers such as self-concordant barrier functions (Abernethy et al., 2012) have tighter regret bounds.

Another class of algorithms for the full-information setting is Follow the Perturbed Leader (FTPL) (Kalai and Vempala, 2005), whose foundations date back to the earliest work in adversarial online learning (Hannan, 1957). Here we choose a distribution D on R^N, sample a random vector Z ∼ D, and solve the following linear optimization problem:

arg min_{x∈K} (L + Z)^T x.   (2)

FTPL is computationally simpler than FTRL due to the linearity of the objective, but it is analytically much more complex due to the randomness. For every different choice of D, an entirely new set of techniques had to be developed (Devroye et al., 2013; Van Erven et al., 2014). Rakhlin et al. (2012) and Abernethy et al. (2014) made some progress towards unifying the analysis framework. Their techniques, however, are limited to the full-information setting.

In this paper, we propose a new analysis framework for the multi-armed bandit problem that unifies the regularization and perturbation algorithms. The key element is a new kind of smoothness property, which we call differential consistency. It allows us to generate a wide class of both optimal and near-optimal algorithms for the adversarial multi-armed bandit problem. We summarize our main results:

1. We show that regularization via the Tsallis entropy leads to the state-of-the-art adversarial MAB algorithm, matching the minimax regret rate of Audibert and Bubeck (2009) with a tighter constant. Interestingly, our algorithm fully generalizes EXP3.

2. We show that a wide array of well-studied noise distributions lead to near-optimal regret bounds (matching those of EXP3). Furthermore, our analysis reveals a strikingly simple and appealing sufficient condition for achieving O(√T) regret: the hazard rate function of the noise distribution must be bounded by a constant. We conjecture that this requirement is in fact both necessary and sufficient.
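The two templates of equations (1) and (2) can be sketched concretely on the N-armed simplex. With the negative Shannon entropy as R, the FTRL minimizer has the closed form x_i ∝ exp(−L_i/λ) (exponential weights); the FTPL step is a single argmin over perturbed coordinates. This is an illustrative sketch only, and the Gumbel noise below is one admissible choice of D, not one prescribed here:

```python
import numpy as np

def ftrl_entropy(L, lam):
    """FTRL step (1) on the simplex with R(x) = sum_i x_i log x_i:
    argmin_x <L, x> + lam * R(x) has the closed form x_i ~ exp(-L_i / lam)."""
    z = -L / lam
    z = z - z.max()        # stabilize the exponentials
    x = np.exp(z)
    return x / x.sum()

def ftpl_choice(L, rng, scale=1.0):
    """FTPL step (2) on the N arms: the linear optimization reduces to an
    argmin over coordinates of the perturbed losses."""
    Z = rng.gumbel(0.0, scale, size=L.shape)
    return int(np.argmin(L + Z))
```

Note the contrast the text draws: the FTRL step is deterministic and analytically convenient, while the FTPL step is a cheap linear optimization whose analysis depends on the noise distribution.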
2 Gradient-Based Prediction Algorithms for the Multi-Armed Bandit

Let us now introduce the adversarial multi-armed bandit problem. On each round t = 1, …, T, a learner must choose a distribution p_t ∈ Δ_N over the set of N available actions. The adversary (Nature) chooses a vector g_t ∈ [−1, 0]^N of losses, the learner samples i_t ∼ p_t, and plays action i_t. After selecting this action, the learner observes only the value g_{t,i_t}, and receives no information as to the values g_{t,j} for j ≠ i_t. This limited-information feedback is what makes the bandit problem much more challenging than the full-information setting in which the entire g_t is observed. The learner's goal is to minimize the regret, defined to be the difference between the realized loss and the loss of the best fixed action in hindsight:

Regret_T := max_{i∈[N]} Σ_{t=1}^T (g_{t,i} − g_{t,i_t}).   (3)

To be precise, we consider the expected regret, where the expectation is taken with respect to the learner's randomization.

Loss vs. Gain Note: We use the term "loss" to refer to g, although the maximization in (3) would imply that g should be thought of as a "gain" instead. We use the former term, however, as we impose the assumption that g_t ∈ [−1, 0]^N throughout the paper.

2.1 The Gradient-Based Algorithmic Template

Our results focus on a particular algorithmic template described in Framework 1, which is a slight variation of the Gradient-Based Prediction Algorithm (GBPA) of Abernethy et al. (2014). Note that the algorithm (i) maintains an unbiased estimate Ĝ_t of the cumulative losses, (ii) updates Ĝ_t by adding a single-round estimate ĝ_t that has only one non-zero coordinate, and (iii) uses the gradient of a convex function Φ̃ as the sampling distribution p_t. The choice of Φ̃ is flexible, but Φ̃ must be a differentiable convex function whose gradient is always a probability distribution.
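The template just described can be sketched end-to-end. Here we instantiate Φ̃ with the Shannon-entropy conjugate discussed in Section 3, so sampling is a softmax of the estimated losses and the estimation step is the usual importance-weighted single-coordinate guess. A sketch under that choice of potential, not the authors' code:

```python
import numpy as np

def gbpa(loss_matrix, eta, rng):
    """One run of the GBPA template with the Shannon-conjugate potential,
    i.e. EXP3-style sampling p = softmax(eta * G_hat).
    loss_matrix: (T, N) array of losses in [-1, 0]. Returns realized loss."""
    T, N = loss_matrix.shape
    G_hat = np.zeros(N)                  # unbiased cumulative-loss estimate
    total = 0.0
    for t in range(T):
        z = eta * G_hat
        z = z - z.max()
        p = np.exp(z)
        p = p / p.sum()                  # sampling distribution = grad of potential
        i = rng.choice(N, p=p)           # play a single arm
        g = loss_matrix[t, i]            # only this coordinate is observed
        total += g
        G_hat[i] += g / p[i]             # importance-weighted one-coordinate estimate
    return total
```

The division by p[i] is exactly the O(1/p) inverse-probability scaling the text attributes to any unbiased estimation scheme.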
Framework 1 may appear restrictive, but it has served as the basis for much of the published work on adversarial MAB algorithms (Auer et al., 2003; Kujala and Elomaa, 2005; Neu and Bartók, 2013). First, the GBPA framework essentially encompasses all FTRL and FTPL algorithms (Abernethy et al., 2014), which are the core techniques not only for the full-information setting but also for the bandit setting. Second, the estimation scheme ensures that Ĝ_t remains an unbiased estimate of G_t. Although there is some flexibility, any unbiased estimation scheme would require some kind of inverse-probability scaling: information theory tells us that unbiased estimates of a quantity observed only with probability p must necessarily involve fluctuations that scale as O(1/p).

Framework 1: Gradient-Based Prediction Algorithm (GBPA) Template for the Multi-Armed Bandit
GBPA(Φ̃): Φ̃ is a differentiable convex function such that ∇Φ̃ ∈ Δ_N and ∇_i Φ̃ > 0 for all i.
Initialize Ĝ_0 = 0
for t = 1 to T do
  Nature: A loss vector g_t ∈ [−1, 0]^N is chosen by the Adversary
  Sampling: Learner chooses i_t according to the distribution p(Ĝ_{t−1}) = ∇Φ̃(Ĝ_{t−1})
  Cost: Learner "gains" loss g_{t,i_t}
  Estimation: Learner "guesses" ĝ_t := (g_{t,i_t} / p_{i_t}(Ĝ_{t−1})) e_{i_t}
  Update: Ĝ_t = Ĝ_{t−1} + ĝ_t

Lemma 2.1. Define Φ(G) ≡ max_i G_i, so that the expected regret of GBPA(Φ̃) can be written as E Regret_T = Φ(G_T) − Σ_{t=1}^T ⟨∇Φ̃(Ĝ_{t−1}), g_t⟩. Then the expected regret of GBPA(Φ̃) can be bounded as:

E Regret_T ≤ Φ̃(0) − Φ(0)   [overestimation penalty]
  + E_{i_1,…,i_T}[Φ(Ĝ_T) − Φ̃(Ĝ_T)]   [underestimation penalty]
  + Σ_{t=1}^T E_{i_t}[D_Φ̃(Ĝ_t, Ĝ_{t−1}) | Ĝ_{t−1}],   [divergence penalty]   (4)

where the expectations are over the sampling of i_t.

Proof. Let Φ̃ be a valid convex function for the GBPA. Consider GBPA(Φ̃) being run on the loss sequence g_1, …, g_T. The algorithm produces a sequence of estimated losses ĝ_1, …, ĝ_T.
Now consider GBPA-NE(Φ̃), which is GBPA(Φ̃) run with full information on the deterministic loss sequence ĝ_1, …, ĝ_T (there is no estimation step, and the learner updates Ĝ_t directly). The regret of this run can be written as Φ(Ĝ_T) − Σ_{t=1}^T ⟨∇Φ̃(Ĝ_{t−1}), ĝ_t⟩, and Φ(G_T) ≤ E[Φ(Ĝ_T)] by the convexity of Φ. Hence, it suffices to show that GBPA-NE(Φ̃) has regret at most the right-hand side of Equation 4, which is a fairly well-known result in the online learning literature; see, for example, (Cesa-Bianchi and Lugosi, 2006, Theorem 11.6) or (Abernethy et al., 2014, Section 2). For completeness, we include the full proof in Appendix A.

2.2 A New Kind of Smoothness

A guiding principle that has emerged throughout machine learning is that enforcing stability of an algorithm can often lead immediately to performance guarantees: small modifications of the input data should not dramatically alter the output. In the context of GBPA, algorithmic stability is guaranteed as long as the derivative ∇Φ̃(·) is Lipschitz. Abernethy et al. (2014) explored a set of conditions on ∇²Φ̃(·) that lead to optimal regret guarantees for the full-information setting. Indeed,
this work discussed different settings where the regret depends on an upper bound on either the nuclear norm or the operator norm of this Hessian. In short, regret in the full-information setting relies on the smoothness of the choice of Φ̃. In the bandit setting, however, merely a uniform bound on the magnitude of ∇²Φ̃ is insufficient to guarantee low regret; the regret (Lemma 2.1) involves terms of the form D_Φ̃(Ĝ_{t−1} + ĝ_t, Ĝ_{t−1}), where the incremental quantity ĝ_t can scale as large as the inverse of the smallest probability in p(Ĝ_{t−1}). What is needed is a stronger notion of smoothness that bounds ∇²Φ̃ in correspondence with ∇Φ̃, and we propose the following definition:

Definition 2.2 (Differential Consistency). For constants γ, C > 0, we say that a convex function Φ̃(·) is (γ, C)-differentially-consistent if for all G ∈ (−∞, 0]^N,

∇²_{ii} Φ̃(G) ≤ C (∇_i Φ̃(G))^γ.

We now prove a useful bound that emerges from differential consistency, and in the following two sections we shall show how this leads to regret guarantees.

Theorem 2.3. Suppose Φ̃ is (γ, C)-differentially-consistent for constants C, γ > 0. Then the divergence penalty at time t in Lemma 2.1 can be upper bounded as:

E_{i_t}[D_Φ̃(Ĝ_t, Ĝ_{t−1}) | Ĝ_{t−1}] ≤ C Σ_{i=1}^N (∇_i Φ̃(Ĝ_{t−1}))^{γ−1}.

Proof. For the sake of clarity, we drop the subscripts; we use Ĝ to denote the cumulative estimate Ĝ_{t−1}, ĝ to denote the marginal estimate ĝ_t = Ĝ_t − Ĝ_{t−1}, and g to denote the true loss g_t. Note that by the definition of Framework 1, ĝ is a sparse vector with one non-zero, non-positive coordinate ĝ_{i_t} = g_{t,i_t} / p_{i_t}(Ĝ), and it is conditionally independent given Ĝ. Since ĝ is non-positive, ĝ/‖ĝ‖ = −e_{i_t}. For a fixed i_t, let h(r) := D_Φ̃(Ĝ + r ĝ/‖ĝ‖, Ĝ) = D_Φ̃(Ĝ − r e_{i_t}, Ĝ), so that

h''(r) = (ĝ/‖ĝ‖)^T ∇²Φ̃(Ĝ + r ĝ/‖ĝ‖) (ĝ/‖ĝ‖) = e_{i_t}^T ∇²Φ̃(Ĝ − r e_{i_t}) e_{i_t}.

Now we can write

E_{i_t}[D_Φ̃(Ĝ + ĝ, Ĝ) | Ĝ]
 = Σ_{i=1}^N P[i_t = i] ∫₀^{‖ĝ‖} ∫₀^s h''(r) dr ds
 = Σ_{i=1}^N ∇_i Φ̃(Ĝ) ∫₀^{‖ĝ‖} ∫₀^s e_i^T ∇²Φ̃(Ĝ − r e_i) e_i dr ds
 ≤ Σ_{i=1}^N ∇_i Φ̃(Ĝ) ∫₀^{‖ĝ‖} ∫₀^s C (∇_i Φ̃(Ĝ − r e_i))^γ dr ds
 ≤ Σ_{i=1}^N ∇_i Φ̃(Ĝ) ∫₀^{‖ĝ‖} ∫₀^s C (∇_i Φ̃(Ĝ))^γ dr ds
 = C Σ_{i=1}^N (∇_i Φ̃(Ĝ))^{1+γ} ∫₀^{‖ĝ‖} ∫₀^s dr ds
 = (C/2) Σ_{i=1}^N (∇_i Φ̃(Ĝ))^{γ−1} g_i²
 ≤ C Σ_{i=1}^N (∇_i Φ̃(Ĝ))^{γ−1},

where the last step uses g_i² ≤ 1. The first inequality is by the supposition, and the second inequality holds because the convexity of Φ̃ guarantees that ∇_i Φ̃ is increasing in the i-th coordinate, and Ĝ − r e_i decreases that coordinate. Interestingly, this part of the proof critically depends on the fact that we are in the "loss" setting where g is always non-positive.

3 A Minimax Bandit Algorithm via Tsallis Smoothing

The design of a multi-armed bandit algorithm in the adversarial setting proved to be a challenging task.
Ignoring the dependence on N for the moment, we note that the initial published work on EXP3 provided only an O(T^{2/3}) guarantee (Auer et al., 1995), and it was not until the final version of this work (Auer et al., 2003) that the authors obtained the optimal O(√T) rate. For the more general setting of online linear optimization, several sub-optimal rates were achieved (Dani and Hayes, 2006; Flaxman et al., 2005; McMahan and Blum, 2004) before the desired √T was obtained (Abernethy et al., 2012; Dani et al., 2008).

We can view EXP3 as an instance of GBPA where the potential function Φ̃(·) is the Fenchel conjugate of the Shannon entropy. For any p ∈ Δ_N, the (negative) Shannon entropy is defined as H(p) := Σ_i p_i log p_i, and its smoothed Fenchel conjugate is H*(G) = sup_{p∈Δ_N} {⟨p, G⟩ − ηH(p)}. In fact, we have a closed-form expression for the supremum:

H*(G) = (1/η) log(Σ_i exp(ηG_i)).

By inspecting the gradient of the above expression, it is easy to see that EXP3 chooses the distribution p_t = ∇H*(G) every round. The tighter EXP3 bound given by Auer et al. (2003) scales as O(√(TN log N)), and the authors provided a matching lower bound of the form Ω(√(TN)). It remained an open question for some time whether there exists a minimax-optimal algorithm without the log term, until Audibert and Bubeck (2009) proposed the Implicitly Normalized Forecaster (INF). The INF is implicitly defined via a specially designed potential function with certain properties. It was not immediately clear from this result how to define a minimax-optimal algorithm using the now-standard tools of regularization and Bregman divergence. More recently, Audibert et al. (2011) improved upon Audibert and Bubeck (2009), extending the results to the combinatorial setting, and they also discovered that INF can be interpreted in terms of Bregman divergences. We give here a reformulation of INF that leads to a very simple analysis in terms of our notion of differential consistency.
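The identity ∇H*(G) = softmax(ηG), which makes EXP3 an instance of GBPA, can be checked numerically by finite differences; the same closed form also shows H* is (1, η)-differentially consistent, since ∇²_{ii}H* = η p_i(1 − p_i) ≤ η p_i. A small sketch of that check (not from the paper):

```python
import numpy as np

def shannon_conjugate(G, eta):
    """H*(G) = (1/eta) log sum_i exp(eta * G_i), computed stably (log-sum-exp)."""
    m = G.max()
    return m + np.log(np.exp(eta * (G - m)).sum()) / eta

def softmax(G, eta):
    """Gradient of H*: the EXP3 sampling distribution."""
    z = eta * G
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def conjugate_hessian_diag(G, eta):
    """Diagonal of the Hessian of H*: eta * p_i * (1 - p_i) <= eta * p_i,
    i.e. H* satisfies Definition 2.2 with gamma = 1 and C = eta."""
    p = softmax(G, eta)
    return eta * p * (1.0 - p)
```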
Our reformulation can be viewed as a variation of EXP3, where the key modification is to replace the Shannon entropy function with the Tsallis entropy¹ with parameter 0 < α < 1:

S_α(p) = (1/(1−α)) (1 − Σ_i p_i^α).

This particular function, proposed by Tsallis (1988), possesses a number of natural properties. The Tsallis entropy is in fact a generalization of the Shannon entropy: one obtains the latter from the former in the limit, and it is easy to prove the uniform convergence S_α(·) → H(·) as α → 1. We emphasize again that one can easily show that the Tsallis-smoothing bandit algorithm is identical to INF under the appropriate parameter mapping, although our analysis is simpler due to the notion of differential consistency (Definition 2.2).

Theorem 3.1. Let Φ̃(G) = max_{p∈Δ_N} {⟨p, G⟩ − ηS_α(p)}. Then GBPA(Φ̃) has regret at most

E Regret ≤ η (N^{1−α} − 1)/(1−α) + N^α T/(ηα).   (5)

Before proving the theorem, we note that it immediately recovers the EXP3 upper bound in the special case α → 1. An easy application of L'Hôpital's rule shows that as α → 1, (N^{1−α} − 1)/(1−α) → log N and N^α/α → N. Choosing η = √((N log N)/T), we see that the right-hand side of (5) tends to 2√(TN log N). However, the choice α → 1 is clearly not optimal, as we show in the following statement, which follows directly from the theorem once we note that N^{1−α} − 1 < N^{1−α}.

Corollary 3.2. For any α ∈ (0, 1), if we choose η = √((1−α) N^{2α−1} T / α), then we have

E Regret ≤ 2 √(NT / (α(1−α))).

In particular, the choice α = 1/2 gives a regret of no more than 4√(NT).

Proof of Theorem 3.1. We bound each penalty term in Lemma 2.1. Since S_α is non-positive, the underestimation penalty is upper bounded by 0 and the overestimation penalty is at most −η min S_α. The minimum of S_α occurs at (1/N, …, 1/N). Hence,

(overestimation penalty) ≤ −(η/(1−α)) (1 − Σ_{i=1}^N (1/N)^α) = η (N^{1−α} − 1)/(1−α).   (6)

¹ More precisely, the function we give here is the negative Tsallis entropy according to its original definition.
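Unlike the Shannon case, the Tsallis potential of Theorem 3.1 has no closed-form gradient, but the stationarity condition pins down the sampling distribution through a one-dimensional search over the Lagrange multiplier, exactly in the spirit of the INF construction: p_i = [(λ − G_i)(1−α)/(ηα)]^{−1/(1−α)} with λ > max_i G_i chosen so the p_i sum to one. A sketch using our own bisection, not the authors' implementation:

```python
import numpy as np

def tsallis_distribution(G, eta, alpha=0.5, iters=100):
    """Sampling distribution of the Tsallis-smoothed GBPA:
    argmax_{p in simplex} <p, G> - eta * S_alpha(p), found by bisecting on
    the Lagrange multiplier lam > max_i G_i."""
    c = (1.0 - alpha) / (eta * alpha)

    def mass(lam):
        # total probability mass implied by multiplier lam (decreasing in lam)
        return np.sum((c * (lam - G)) ** (-1.0 / (1.0 - alpha)))

    lo = G.max()                        # mass blows up as lam -> max(G) from above
    hi = G.max() + 1.0
    while mass(hi) > 1.0:               # grow the upper bracket until mass <= 1
        hi = G.max() + 2.0 * (hi - G.max())
    for _ in range(iters):              # bisect: find lam with mass(lam) = 1
        mid = 0.5 * (lo + hi)
        if mass(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    p = (c * (hi - G)) ** (-1.0 / (1.0 - alpha))
    return p / p.sum()
```

For α = 1/2 this gives p_i proportional to (λ − G_i)^{−2}, the familiar INF shape.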
Now it remains to upper bound the divergence penalty by (ηα)^{−1} N^α T. Straightforward calculus gives ∇²(ηS_α)(p) = ηα · diag(p_1^{α−2}, …, p_N^{α−2}). Let I_{Δ_N}(·) be the indicator function of Δ_N; that is, I_{Δ_N}(x) = 0 for x ∈ Δ_N and I_{Δ_N}(x) = ∞ for x ∉ Δ_N. It is clear that Φ̃(·) is the dual of the function ηS_α(·) + I_{Δ_N}(·), and moreover we observe that ∇²(ηS_α)(p) is a sub-Hessian of ηS_α(·) + I_{Δ_N}(·) at p(G), following the setup of Penot (1994). Taking advantage of Proposition 3.2 in the latter reference, we conclude that its inverse is a super-Hessian of Φ̃ at G. Hence,

∇²Φ̃(G) ⪯ (ηα)^{−1} diag(p_1^{2−α}(G), …, p_N^{2−α}(G)) for any G.

What we have shown, indeed, is that Φ̃ is (2−α, (ηα)^{−1})-differentially-consistent, and thus applying Theorem 2.3 gives

D_Φ̃(Ĝ_t, Ĝ_{t−1}) ≤ (ηα)^{−1} Σ_{i=1}^N (p_i(Ĝ_{t−1}))^{1−α}.

Noting that the (1/α)-norm and the (1/(1−α))-norm are dual to each other, we can apply Hölder's inequality to any probability distribution p_1, …, p_N to obtain

Σ_{i=1}^N p_i^{1−α} = Σ_{i=1}^N p_i^{1−α} · 1 ≤ (Σ_{i=1}^N (p_i^{1−α})^{1/(1−α)})^{1−α} (Σ_{i=1}^N 1^{1/α})^α = 1^{1−α} · N^α = N^α.

So the divergence penalty is at most (ηα)^{−1} N^α per round, which completes the proof.

4 Near-Optimal Bandit Algorithms via Stochastic Smoothing

Let D be a continuous distribution with unbounded support, probability density function f, and cumulative distribution function F. Consider GBPA(Φ̃(G; D)), where

Φ̃(G; D) = E_{Z_1,…,Z_N iid∼ D}[max_i {G_i + Z_i}],

which is a stochastic smoothing of the (max_i G_i) function. Since the max function is convex, Φ̃ is also convex. By Bertsekas (1973), we can swap the order of differentiation and expectation:

∇Φ̃(G; D) = E_{Z_1,…,Z_N iid∼ D}[e_{i*}], where i* = arg max_{i=1,…,N} {G_i + Z_i}.   (7)

Even where the function is not differentiable, the swap is still possible with any subgradient as long as the subgradients are bounded. Hence, ties between coordinates (which happen with probability zero anyway) can be resolved in an arbitrary manner.
It is clear that ∇Φ̃ lies in the probability simplex, and note that

∂Φ̃/∂G_i = E_{Z_1,…,Z_N}[1{G_i + Z_i > G_j + Z_j, ∀j ≠ i}] = E_{G̃_{j*}}[P_{Z_i}[Z_i > G̃_{j*} − G_i]] = E_{G̃_{j*}}[1 − F(G̃_{j*} − G_i)],   (8)

where G̃_{j*} = max_{j≠i} (G_j + Z_j). The unbounded-support condition guarantees that this partial derivative is non-zero for all i given any G. So Φ̃(G; D) satisfies the requirements of Framework 1.

4.1 Connection to Follow the Perturbed Leader

There is a straightforward way to efficiently implement the sampling step of the bandit GBPA (Framework 1) with a stochastically smoothed function: instead of evaluating the expectation in Equation 7, we simply take a random sample. In fact, this is equivalent to the Follow the Perturbed Leader algorithm (FTPL) (Kalai and Vempala, 2005) for bandit settings. On the other hand, implementing the estimation step is hard because generally there is no closed-form expression for ∇Φ̃. To address this issue, Neu and Bartók (2013) proposed Geometric Resampling (GR). GR uses an iterative resampling process to estimate ∇_i Φ̃. This process gives an unbiased estimate when allowed to run for an unbounded number of iterations. Even when we truncate the resampling process after M iterations, the extra regret due to the estimation bias is at most NT/(eM) (an additive term). Since the lower bound for the multi-armed bandit problem is Ω(√(NT)), any choice of M on the order of √(NT) does not affect the asymptotic regret of the algorithm. In summary, all our GBPA regret bounds in this section hold for the corresponding FTPL algorithm with an extra additive NT/(eM) term in the bound.

Despite the fact that perturbation-based algorithms provide a natural randomized decision strategy, they have seen few applications, mostly because they are hard to analyze. But one should expect general results to be within reach: the EXP3 algorithm, for example, can be viewed through the lens of perturbations, where the noise is distributed according to the Gumbel distribution.
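Geometric Resampling itself takes only a few lines: rerun the randomized arm-selection rule until it reproduces the played arm, and use the (truncated) trial count K as the estimate of 1/p_i, since K is a geometric random variable with mean 1/p_i. A sketch with hypothetical function names:

```python
def geometric_resampling(G_hat, i_played, sample_arm, M):
    """Estimate 1 / p_{i_played} without knowing p in closed form.

    sample_arm : callable that redraws the FTPL arm choice for G_hat
                 (fresh perturbations each call)
    M          : truncation level; bias costs at most NT/(eM) extra regret
    Returns the trial count K with E[K] = 1/p_i in the untruncated case.
    """
    for k in range(1, M + 1):
        if sample_arm(G_hat) == i_played:
            return k
    return M  # truncated: introduces the small bias discussed in the text
```

The estimated loss vector is then (g_{t,i_t} * K) e_{i_t}, mirroring the importance-weighted estimate of Framework 1.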
Indeed, an early result of Kujala and Elomaa (2005) showed that a near-optimal MAB strategy comes about through the use of exponentially distributed noise, and the same perturbation strategy has more recently been utilized in the work of Neu and Bartók (2013) and Kocák et al. (2014). However, a more general understanding of perturbation methods has remained elusive. For example, would Gaussian noise be sufficient for a guarantee? What about, say, the Weibull distribution?

4.2 Hazard Rate Analysis

In this section, we show that the performance of GBPA(Φ̃(G; D)) can be characterized by the hazard function of the smoothing distribution D. The hazard rate is a standard tool in survival analysis to describe failures due to aging; for example, an increasing hazard rate models units that deteriorate with age, while a decreasing hazard rate models units that improve with age (a counter-intuitive but not illogical possibility). To the best of our knowledge, the connection between hazard rates and the design of adversarial bandit algorithms has not been made before.

Definition 4.1 (Hazard rate function). The hazard rate function of a distribution D is

h_D(x) := f(x) / (1 − F(x)).

For the rest of the section, we assume that D is unbounded in the direction of +∞, so that the hazard function is well-defined everywhere. This assumption is for clarity of presentation and can easily be removed (Appendix B).

Theorem 4.2. The regret of the GBPA on Φ̃(G) = E_{Z_1,…,Z_N∼D} max_i {G_i + ηZ_i} is at most:

(N (sup h_D) / η) T + η E_{Z_1,…,Z_N∼D}[max_i Z_i].

Proof. We analyze each penalty term in Lemma 2.1. Due to the convexity of Φ, the underestimation penalty is non-positive. The overestimation penalty is clearly at most E_{Z_1,…,Z_N∼D}[max_i ηZ_i], and Lemma 4.3 proves the N (sup h_D) upper bound on the per-round divergence penalty. It remains to account for the tuning parameter η. Suppose we scale the perturbation Z by η > 0, i.e., we add ηZ_i to each coordinate. It is easy to see that E[max_{i=1,…,N} ηZ_i] = η E[max_{i=1,…,N} Z_i].
For the divergence penalty, let F_η be the CDF of the scaled random variable. Observe that F_η(t) = F(t/η) and thus f_η(t) = (1/η) f(t/η). Hence, the hazard rate scales by 1/η, which completes the proof.

Lemma 4.3. The divergence penalty of the GBPA with Φ̃(G) = E_{Z∼D} max_i {G_i + Z_i} is at most N(sup h_D) each round.

Proof. Recall the gradient expression in Equation 8. The i-th diagonal entry of the Hessian is:
$$\nabla^2_{ii}\tilde{\Phi}(G) = \frac{\partial}{\partial G_i}\, \mathbb{E}_{\tilde{G}_{j^*}}\big[1 - F(\tilde{G}_{j^*} - G_i)\big] = \mathbb{E}_{\tilde{G}_{j^*}}\Big[\frac{\partial}{\partial G_i}\big(1 - F(\tilde{G}_{j^*} - G_i)\big)\Big] = \mathbb{E}_{\tilde{G}_{j^*}}\big[f(\tilde{G}_{j^*} - G_i)\big] = \mathbb{E}_{\tilde{G}_{j^*}}\big[h(\tilde{G}_{j^*} - G_i)\,(1 - F(\tilde{G}_{j^*} - G_i))\big] \le (\sup h)\, \mathbb{E}_{\tilde{G}_{j^*}}\big[1 - F(\tilde{G}_{j^*} - G_i)\big] = (\sup h)\, \nabla_i\tilde{\Phi}(G) \qquad (9)$$
where G̃_{j*} = max_{j≠i} {G_j + Z_j}, which is a random variable independent of Z_i. We now apply Theorem 2.3 with γ = 1 and C = sup h to complete the proof.

Distribution                 sup_x h_D(x)        E[max_{i=1..N} Z_i]                                                  Parameters
Gumbel(mu = 1, beta = 1)     1 (as x -> 0)       log N + gamma_0                                                      N/A
Frechet (alpha > 1)          at most 2*alpha     N^{1/alpha} * Gamma(1 - 1/alpha)                                     alpha = log N
Weibull* (lambda = 1, k <= 1)  k (at x = 0)      O((log N)^{1/k})                                                     k = 1 (Exponential)
Pareto* (x_m = 1, alpha)     alpha (at x = 0)    alpha * N^{1/alpha} / (alpha - 1)                                    alpha = log N
Gamma (alpha >= 1, beta)     beta (as x -> inf)  log N + (alpha - 1) log log N - log Gamma(alpha) + beta^{-1} gamma_0  beta = alpha = 1 (Exponential)

Table 1: Distributions that give an O(√(TN log N))-regret FTPL algorithm. The parameterization follows the Wikipedia pages for easy lookup. We denote the Euler constant (≈ 0.58) by gamma_0. Distributions marked with (*) need to be slightly modified using the conditioning trick explained in Appendix B.2. The maximum of the Fréchet hazard function has to be computed numerically (Elsayed, 2012, p. 47), but elementary calculations show that it is bounded by 2α (Appendix D).

Corollary 4.4. The Follow the Perturbed Leader algorithm with distributions in Table 1 (restricted to a certain range of parameters), combined with Geometric Resampling (Section 4.1) with M = √(NT), has an expected regret of order O(√(TN log N)).

Table 1 provides the two terms we need to bound. We derive the third column of the table in Appendix C using Extreme Value Theory (Embrechts et al., 1997).
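The two quantities in Table 1 are easy to evaluate numerically for the exponential case (a small self-contained sketch, with helper names of our own choosing):

```python
import math

def hazard(pdf, cdf, x):
    """Hazard rate h_D(x) = f(x) / (1 - F(x)) from Definition 4.1."""
    return pdf(x) / (1.0 - cdf(x))

# Exponential(1): f(x) = e^{-x}, F(x) = 1 - e^{-x}.
# Its hazard rate is constant, so sup h_D = 1.
exp_pdf = lambda x: math.exp(-x)
exp_cdf = lambda x: 1.0 - math.exp(-x)

def expected_max_exponentials(n):
    """E[max of n i.i.d. Exponential(1) variables] equals the harmonic
    number H_n = log n + gamma_0 + o(1)."""
    return sum(1.0 / i for i in range(1, n + 1))
```

For the exponential row of Table 1 this gives sup h_D = 1 and E[max_i Z_i] ≈ log N + γ₀, so tuning η in Theorem 4.2 yields the O(√(TN log N)) bound.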
Note that our analysis in the proof of Lemma 4.3 is quite tight; the only place we have an inequality is when we upper bound the hazard rate. It is thus reasonable to pose the following conjecture:

Conjecture 4.5. If a distribution D has a monotonically increasing hazard rate h_D(x) that does not converge as x → +∞ (e.g., Gaussian), then there is a sequence of losses that will incur at least a linear regret.

The intuition is that if the adversary keeps incurring a high loss for the i-th arm, then with high probability G̃_{j*} − G_i will be large. So, the expectation in Equation 9 will be dominated by the hazard function evaluated at large values of G̃_{j*} − G_i.

Acknowledgments. J. Abernethy acknowledges the support of NSF under CAREER grant IIS-1453304. A. Tewari acknowledges the support of NSF under CAREER grant IIS-1452099.

References

J. Abernethy, E. Hazan, and A. Rakhlin. Interior-point methods for full-information and bandit online learning. IEEE Transactions on Information Theory, 58(7):4164–4175, 2012.
J. Abernethy, C. Lee, A. Sinha, and A. Tewari. Online linear optimization via smoothing. In COLT, pages 807–823, 2014.
J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, pages 217–226, 2009.
J.-Y. Audibert, S. Bubeck, and G. Lugosi. Minimax policies for combinatorial prediction games. In COLT, 2011.
P. Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397–422, 2003.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In FOCS, 1995.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2003.
D. P. Bertsekas.
Stochastic optimization problems with nondifferentiable cost functionals. Journal of Optimization Theory and Applications, 12(2):218–231, 1973.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In SODA, pages 937–943, 2006.
V. Dani, T. Hayes, and S. Kakade. The price of bandit information for online optimization. In NIPS, 2008.
L. Devroye, G. Lugosi, and G. Neu. Prediction by random-walk perturbation. In COLT, pages 460–473, 2013.
E. Elsayed. Reliability Engineering. Wiley Series in Systems Engineering and Management. Wiley, 2012. URL https://books.google.com/books?id=NdjF5G6tfLQC.
P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events: For Insurance and Finance. Applications of Mathematics. Springer, 1997. URL https://books.google.com/books?id=BXOI2pICfJUC.
A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In SODA, pages 385–394, 2005.
J. Gittins. Quantitative methods in the planning of pharmaceutical research. Drug Information Journal, 30(2):479–487, 1996.
J. Gittins, K. Glazebrook, and R. Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.
J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume III, pages 97–139, 1957.
A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
T. Kocák, G. Neu, M. Valko, and R. Munos. Efficient learning by implicit exploration in bandit problems with side observations. In NIPS, pages 613–621, 2014.
J. Kujala and T. Elomaa.
On following the perturbed leader in the bandit setting. In Algorithmic Learning Theory, pages 371–385. Springer, 2005.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
H. B. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In COLT, pages 109–123, 2004.
G. Neu and G. Bartók. An efficient algorithm for learning with semi-bandit feedback. In Algorithmic Learning Theory, pages 234–248. Springer, 2013.
M. Pacula, J. Ansel, S. Amarasinghe, and U.-M. O'Reilly. Hyperparameter tuning in bandit-based adaptive operator selection. In Applications of Evolutionary Computation, pages 73–82. Springer, 2012.
J.-P. Penot. Sub-hessians, super-hessians and conjugation. Nonlinear Analysis: Theory, Methods & Applications, 23(6):689–702, 1994. URL http://www.sciencedirect.com/science/article/pii/0362546X94902127.
S. Rakhlin, O. Shamir, and K. Sridharan. Relax and randomize: From value to algorithms. In NIPS, pages 2141–2149, 2012.
H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527–535, 1952.
C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1-2):479–487, 1988.
G. Van den Broeck, K. Driessens, and J. Ramon. Monte-Carlo tree search in poker using expected reward distributions. In Advances in Machine Learning, pages 367–381. Springer, 2009.
T. Van Erven, W. Kotlowski, and M. K. Warmuth. Follow the leader with dropout perturbations. In COLT, 2014.
Sparse and Low-Rank Tensor Decomposition

Parikshit Shah parikshit@yahoo-inc.com
Nikhil Rao nikhilr@cs.utexas.edu
Gongguo Tang gtang@mines.edu

Abstract

Motivated by the problem of robust factorization of a low-rank tensor, we study the question of sparse and low-rank tensor decomposition. We present an efficient computational algorithm that modifies Leurgans' algorithm for tensor factorization. Our method relies on a reduction of the problem to sparse and low-rank matrix decomposition via the notion of tensor contraction. We use well-understood convex techniques for solving the reduced matrix sub-problem, which then allows us to perform the full decomposition of the tensor. We delineate situations where the problem is recoverable and provide theoretical guarantees for our algorithm. We validate our algorithm with numerical experiments.

1 Introduction

Tensors are useful representational objects to model a variety of problems, such as graphical models with latent variables [1], audio classification [20], psychometrics [8], and neuroscience [3]. One concrete example proposed in [1] involves topic modeling in an exchangeable bag-of-words model, wherein given a corpus of documents one wishes to estimate parameters related to the different topics of the different documents (each document has a unique topic associated to it). By computing the empirical moments associated to (exchangeable) bi-grams and tri-grams of words in the documents, [1] shows that this problem reduces to that of a (low-rank) tensor decomposition. A number of other machine learning tasks, such as Independent Component Analysis [11] and learning Gaussian mixtures [2], are reducible to that of tensor decomposition. While most tensor problems are computationally intractable [12], there has been renewed interest in developing tractable and principled approaches for the same [4, 5, 12, 15, 19, 21, 24–27].
In this paper we consider the problem of performing tensor decompositions when a subset of the entries of a low-rank tensor X are corrupted adversarially, so that the observed tensor is Z = X + Y, where Y is the corruption. One may view this problem as the tensor version of the sparse and low-rank matrix decomposition problem studied in [6, 9, 10, 13]. We develop an algorithm for performing such a decomposition and provide theoretical guarantees as to when such decomposition is possible. Our work draws on two sets of tools: (a) the line of work addressing the Robust PCA problem in the matrix case [6, 9], and (b) the application of Leurgans' algorithm to tensor decomposition and tensor inverse problems [4, 17, 24]. Our algorithm is computationally efficient and scalable; it relies on the key notion of tensor contraction, which effectively reduces a tensor problem of dimension n × n × n to four decomposition problems for matrices of size n × n. One can then apply convex methods for sparse and low-rank matrix decomposition, followed by certain linear-algebraic operations, to recover the constituent tensors. Our algorithm not only produces the correct decomposition of Z into X and Y, but also produces the low-rank factorization of X. We are able to avoid tensor-unfolding-based approaches [14, 21, 26], which are expensive and would lead to solving convex problems that are larger by orders of magnitude; in the third-order case the unfolded matrix would be of size n² × n. Furthermore, our method is conceptually simple, both to implement and to analyze theoretically. Finally, our method is also modular: it can be extended to the higher-order case, as well as to settings where the corrupted tensor Z has missing entries, as described in Section 5.

1.1 Problem Setup

In this paper, vectors are denoted using lower-case characters (e.g. x, y, a, b), matrices by upper-case characters (e.g. X, Y), and tensors by upper-case bold characters (e.g. X, T, A).
We will work with tensors of third order (representationally to be thought of as three-way arrays), and the term mode refers to one of the axes of the tensor. A slice of a tensor refers to a two-dimensional matrix generated from the tensor by varying indices along two modes while keeping the third mode fixed. For a tensor X we will refer to the indices of the i-th mode-1 slice (i.e., the slice corresponding to the indices {i} × [n₂] × [n₃]) by S⁽¹⁾ᵢ, where [n₂] = {1, 2, ..., n₂} and [n₃] is defined similarly. We denote the matrix corresponding to S⁽¹⁾ᵢ by X¹ᵢ. Similarly, the indices of the k-th mode-3 slice will be denoted by S⁽³⁾ₖ and the matrix by X³ₖ. Given a tensor of interest X, consider its decomposition into rank-one tensors
$$\mathbf{X} = \sum_{i=1}^{r} \lambda_i\, u_i \otimes v_i \otimes w_i, \qquad (1)$$
where {uᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₁}, {vᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₂}, and {wᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₃} are unit vectors. Here ⊗ denotes the tensor product, so that X ∈ R^{n₁×n₂×n₃} is a tensor of order 3 and dimension n₁ × n₂ × n₃. Without loss of generality, throughout this paper we assume that n₁ ≤ n₂ ≤ n₃. We will present our results for third-order tensors; analogous results for higher orders follow in a transparent manner. We will be dealing with low-rank tensors, i.e. those tensors with r ≤ n₁. Tensors can have rank larger than the dimension; indeed r ≥ n₃ is an interesting regime, but it is far more challenging and is a topic left for future work. Kruskal's theorem [16] guarantees that tensors satisfying Assumption 1.1 below have a unique minimal decomposition into rank-one terms of the form (1). The number of terms is called the (Kruskal) rank.

Assumption 1.1. {uᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₁}, {vᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₂}, and {wᵢ}ᵢ₌₁,...,ᵣ ⊆ R^{n₃} are sets of linearly independent vectors.
While rank decomposition of tensors in the worst case is known to be computationally intractable [12], it is known that the (mild) condition stated in Assumption 1.1 above suffices for an algorithm known as Leurgans' algorithm [4, 18] to correctly identify the factors in this unique decomposition. In this paper, we will make this assumption about our tensor X throughout. This assumption may be viewed as a "genericity" or "smoothness" assumption [4]. In (1), r is the rank, λᵢ ∈ R are scalars, and uᵢ ∈ R^{n₁}, vᵢ ∈ R^{n₂}, wᵢ ∈ R^{n₃} are the tensor factors. Let U ∈ R^{n₁×r} denote the matrix whose columns are the uᵢ, and correspondingly define V ∈ R^{n₂×r} and W ∈ R^{n₃×r}. Let Y ∈ R^{n₁×n₂×n₃} be a sparse tensor, to be viewed as a "corruption" or adversarial noise added to X, so that one observes Z = X + Y. The problem of interest is that of decomposition, i.e. recovering X and Y from Z.

For a tensor X, we define its mode-3 contraction with respect to a contraction vector a ∈ R^{n₃}, denoted by X³ₐ ∈ R^{n₁×n₂}, as the matrix
$$\big(X^3_a\big)_{ij} = \sum_{k=1}^{n_3} \mathbf{X}_{ijk}\, a_k, \qquad (2)$$
so that the resulting n₁ × n₂ matrix is a weighted sum of the mode-3 slices of the tensor X. Under this notation, the k-th mode-3 slice matrix X³ₖ is a mode-3 contraction with respect to the k-th canonical basis vector. We similarly define the mode-1 contraction with respect to a vector c ∈ R^{n₁} as
$$\big(X^1_c\big)_{jk} = \sum_{i=1}^{n_1} \mathbf{X}_{ijk}\, c_i. \qquad (3)$$
In the subsequent discussion we will also use the following notation. For a matrix M, ‖M‖ refers to the spectral norm, ‖M‖∗ the nuclear norm, ‖M‖₁ := Σᵢ,ⱼ |Mᵢⱼ| the elementwise ℓ₁ norm, and ‖M‖∞ := maxᵢ,ⱼ |Mᵢⱼ| the elementwise ℓ∞ norm.

1.2 Incoherence

The problem of sparse and low-rank decomposition for matrices has been studied in [6, 9, 13, 22], and it is well understood that exact decomposition is not always possible. In order for the problem to be identifiable, two situations must be avoided: (a) the low-rank component X must not be sparse, and (b) the sparse component Y must not be low-rank.
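As a concrete illustration of the contraction in Eq. (2), the following pure-Python sketch (nested lists stand in for a tensor type; the function names are ours, not the paper's) builds a rank-one tensor and contracts it along mode 3:

```python
def rank_one_tensor(lam, u, v, w):
    """lam * (u (x) v (x) w) as nested lists, as in the decomposition (1)."""
    return [[[lam * ui * vj * wk for wk in w] for vj in v] for ui in u]

def mode3_contraction(X, a):
    """(X^3_a)_{ij} = sum_k X_{ijk} a_k (Eq. 2):
    a weighted sum of the mode-3 slices of X."""
    n1, n2, n3 = len(X), len(X[0]), len(X[0][0])
    return [[sum(X[i][j][k] * a[k] for k in range(n3)) for j in range(n2)]
            for i in range(n1)]
```

For a rank-one tensor the contraction equals lam * ⟨w, a⟩ * u vᵀ, so by linearity a rank-r tensor contracts to a matrix of rank at most r (Lemma 2.1 below).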
In fact, something stronger is both necessary and sufficient: the tangent spaces of the low-rank matrix (with respect to the rank variety) and of the sparse matrix (with respect to the variety of sparse matrices) must have a transverse intersection [9]. For the problem to be amenable to recovery using computationally tractable (convex) methods, somewhat stronger incoherence assumptions are standard in the matrix case [6, 7, 9]. We will make similar assumptions for the tensor case, which we now describe. Given the decomposition (1) of X, we define the following subspaces of matrices:
$$T_{U,V} = \big\{ U A^\top + B V^\top : A \in \mathbb{R}^{n_2\times r},\ B \in \mathbb{R}^{n_1\times r} \big\}, \qquad T_{V,W} = \big\{ V C^\top + D W^\top : C \in \mathbb{R}^{n_3\times r},\ D \in \mathbb{R}^{n_2\times r} \big\}. \qquad (4)$$
Thus T_{U,V} is the set of rank-r matrices whose column spaces are contained in span(U) or whose row spaces are contained in span(V), respectively, and a similar definition holds for T_{V,W} and the matrices V, W. If Q is a rank-r matrix with column space span(U) and row space span(V), then T_{U,V} is the tangent space at Q with respect to the variety of rank-r matrices. For a tensor Y, the support of Y refers to the indices corresponding to the non-zero entries of Y. Let Ω ⊆ [n₁] × [n₂] × [n₃] denote the support of Y. Further, for a slice Y³ᵢ, let Ω⁽³⁾ᵢ ⊆ [n₁] × [n₂] denote the corresponding sparsity pattern of the slice Y³ᵢ (more generally, Ω⁽ᵏ⁾ᵢ can be defined as the sparsity of the matrix resulting from the i-th mode-k slice). When a tensor contraction of Y is computed along mode k, the sparsity of the resulting matrix is the union of the sparsity patterns of the (matrix) slices, i.e. Ω⁽ᵏ⁾ = ⋃ᵢ₌₁^{nₖ} Ω⁽ᵏ⁾ᵢ. Let S(Ω⁽ᵏ⁾) denote the set of (sparse) matrices with support Ω⁽ᵏ⁾. We define the following incoherence parameters:
$$\zeta(U,V) := \max_{M \in T_{U,V}:\ \|M\|\le 1} \|M\|_\infty, \qquad \zeta(V,W) := \max_{M \in T_{V,W}:\ \|M\|\le 1} \|M\|_\infty, \qquad \mu\big(\Omega^{(k)}\big) := \max_{N \in S(\Omega^{(k)}):\ \|N\|_\infty \le 1} \|N\|.$$
The quantities ζ(U, V) and ζ(V, W) being small implies that, for contractions of the tensor Z, all matrices in the tangent space of those contractions with respect to the variety of rank-r matrices are "diffuse", i.e. do not have sparse elements [9]. Similarly, μ(Ω⁽ᵏ⁾) being small implies that all matrices with the contracted sparsity pattern Ω⁽ᵏ⁾ have a "diffuse" spectrum, i.e. they do not have low rank. We will see specific settings where these forms of incoherence hold for tensors in Section 3.

2 Algorithm for Sparse and Low-Rank Tensor Decomposition

We now introduce our algorithm to perform sparse and low-rank tensor decompositions. We begin with a lemma:

Lemma 2.1. Let X ∈ R^{n₁×n₂×n₃}, with n₁ ≤ n₂ ≤ n₃, be a tensor of rank r ≤ n₁. Then the rank of X³ₐ is at most r. Similarly, the rank of X¹_c is at most r.

Proof. Consider a tensor X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ. The reader may verify in a straightforward manner that X³ₐ enjoys the decomposition
$$X^3_a = \sum_{i=1}^{r} \lambda_i\, \langle w_i, a\rangle\, u_i v_i^\top. \qquad (5)$$
The proof for the rank of X¹_c is analogous.

Note that while (5) is a matrix decomposition of the contraction, it is not a singular value decomposition (the components need not be orthogonal, for instance). Recovering the factors needs an application of simultaneous diagonalization, which we describe next.

Lemma 2.2 ([4, 18]). Suppose we are given an order-3 tensor X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ of size n₁ × n₂ × n₃ satisfying the conditions of Assumption 1.1. Suppose the contractions X³ₐ and X³_b are computed with respect to unit vectors a, b ∈ R^{n₃} distributed independently and uniformly on the unit sphere S^{n₃−1}, and consider the matrices M₁ and M₂ formed as
$$M_1 = X^3_a (X^3_b)^\dagger, \qquad M_2 = (X^3_b)^\dagger X^3_a.$$
Then the eigenvectors of M₁ (corresponding to the non-zero eigenvalues) are {uᵢ}ᵢ₌₁,...,ᵣ, and the eigenvectors of M₂ᵀ are {vᵢ}ᵢ₌₁,...,ᵣ.

Remark. Note that while the eigenvectors {uᵢ}, {vⱼ} are thus determined, a source of ambiguity remains.
For a fixed ordering of {uᵢ} one needs to determine the order in which the {vⱼ} are to be arranged. This can be (generically) achieved by using the (common) eigenvalues of M₁ and M₂ for the pairing (if the contractions X³ₐ, X³_b are computed with respect to random vectors a, b, the eigenvalues are distinct almost surely). Since the eigenvalues of M₁, M₂ are distinct, they can be used to pair the columns of U and V.

Lemma 2.2 is essentially a simultaneous diagonalization result [17] that facilitates tensor decomposition [4]. Given a tensor T, one can compute two contractions for mode 1 and apply simultaneous diagonalization as described in Lemma 2.2; this would yield the factors vᵢ, wᵢ (up to sign and reordering). One can then repeat the same process with mode-3 contractions to obtain uᵢ, vᵢ. In the final step one can obtain the λᵢ by solving a system of linear equations. The full algorithm is described in Algorithm 2 in the supplementary material. For a contraction Zᵏ_v of a tensor Z with respect to a vector v along mode k, consider solving the convex problem:
$$\underset{X,\, Y}{\text{minimize}}\ \ \|X\|_* + \nu_k \|Y\|_1 \quad \text{subject to} \quad Z^k_v = X + Y. \qquad (6)$$

Our algorithm, stated in Algorithm 1, proceeds as follows. Given a tensor Z = X + Y, we perform two random contractions (w.r.t. vectors a, b) of the tensor along mode 3 to obtain matrices Z³ₐ, Z³_b. Since Z is a sum of sparse and low-rank components, by Lemma 2.1 so are the matrices Z³ₐ, Z³_b. We thus use (6) to decompose them into their constituent sparse and low-rank components, which are the contractions X³ₐ, X³_b, Y³ₐ, Y³_b. We then use X³ₐ, X³_b and Lemma 2.2 to obtain the factors U, V. We perform the same operations along mode 1 to obtain the factors V, W. In the last step, we solve for the scale factors λᵢ (a system of linear equations).
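Lemma 2.2 can be verified directly from the factored form of the contractions (a short derivation consistent with (5); the diagonal matrices D_a, D_b are our notation):

```latex
X^3_a = U D_a V^\top, \qquad X^3_b = U D_b V^\top, \qquad
D_a = \operatorname{diag}\big(\lambda_i \langle w_i, a\rangle\big),\quad
D_b = \operatorname{diag}\big(\lambda_i \langle w_i, b\rangle\big).
```

Since U and V have full column rank (Assumption 1.1) and D_b is generically invertible, the pseudoinverse of the full-rank factorization X³_b = (U D_b)(Vᵀ) gives (X³_b)† = (Vᵀ)† D_b⁻¹ U†, and therefore

```latex
M_1 = X^3_a (X^3_b)^\dagger
    = U D_a V^\top (V^\top)^\dagger D_b^{-1} U^\dagger
    = U \big(D_a D_b^{-1}\big) U^\dagger,
\qquad
M_1 u_i = \frac{\langle w_i, a\rangle}{\langle w_i, b\rangle}\, u_i ,
```

so the uᵢ are eigenvectors of M₁ with (almost surely distinct) eigenvalues ⟨wᵢ, a⟩/⟨wᵢ, b⟩; the argument for M₂ᵀ and the vᵢ is symmetric, and the shared eigenvalues enable the pairing described above.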
Algorithm 2 in the supplementary material, which we adopt for our decomposition problem in Algorithm 1, essentially relies on the idea of simultaneous diagonalization of matrices sharing common row and column spaces [17]. In this paper we do not analyze the situation where random noise is added to all the entries, but only the sparse adversarial noise setting. We note, however, that the key algorithmic insight of using contractions to perform tensor recovery is numerically stable and robust with respect to noise, as has been studied in [4, 11, 17]. The parameters that need to be picked to implement our algorithm are the regularization coefficients ν₁, ν₃. In the theoretical guarantees we will see that these can be picked in a stable manner, and that a range of values guarantees exact decomposition when suitable incoherence conditions hold. In practice these coefficients would need to be determined by a cross-validation method. Note also that under suitable random sparsity assumptions [6], the regularization coefficient may be picked to be the inverse of the square root of the dimension.

2.1 Computational Complexity

The computational complexity of our algorithm is dominated by the complexity of performing the sparse and low-rank matrix decomposition of the contractions via (6). For simplicity, let us consider

Algorithm 1: Algorithm for sparse and low-rank tensor decomposition
1: Input: Tensor Z, parameters ν₁, ν₃.
2: Generate contraction vectors a, b ∈ R^{n₃}, independently and uniformly distributed on the unit sphere.
3: Compute the mode-3 contractions Z³ₐ and Z³_b.
4: Solve the convex problem (6) with v = a, k = 3 and regularization parameter ν₃. Call the resulting solution matrices X³ₐ, Y³ₐ.
5: Solve the convex problem (6) with v = b, k = 3 and regularization parameter ν₃. Call the resulting solution matrices X³_b, Y³_b.
6: Compute the eigen-decompositions of M₁ := X³ₐ(X³_b)† and M₂ := (X³_b)†X³ₐ.
Let U and V denote the matrices whose columns are the eigenvectors of M₁ and M₂ᵀ, respectively, corresponding to the non-zero eigenvalues, in sorted order. (Let r be the (common) rank of M₁ and M₂.) The eigenvectors, thus arranged, are denoted {uᵢ}ᵢ₌₁,...,ᵣ and {vᵢ}ᵢ₌₁,...,ᵣ.
7: Generate contraction vectors c, d ∈ R^{n₁}, independently and uniformly distributed on the unit sphere.
8: Solve the convex problem (6) with v = c, k = 1 and regularization parameter ν₁. Call the resulting solution matrices X¹_c, Y¹_c.
9: Solve the convex problem (6) with v = d, k = 1 and regularization parameter ν₁. Call the resulting solution matrices X¹_d, Y¹_d.
10: Compute the eigen-decompositions of M₃ := X¹_c(X¹_d)† and M₄ := (X¹_c)†X¹_d. Let Ṽ and W̃ denote the matrices whose columns are the eigenvectors of M₃ and M₄ᵀ, respectively, corresponding to the non-zero eigenvalues, in sorted order. (Let r be the (common) rank of M₃ and M₄.)
11: Simultaneously reorder the columns of Ṽ, W̃ (performing simultaneous sign reversals as necessary) so that the columns of V and Ṽ are equal; call the resulting matrix W, with columns {wᵢ}ᵢ₌₁,...,ᵣ.
12: Solve for the λᵢ in the linear system
$$X^3_a = \sum_{i=1}^{r} \lambda_i\, u_i v_i^\top \langle w_i, a\rangle.$$
13: Output: Decomposition X̂ := Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ, Ŷ := Z − X̂.

the case where the target tensor Z ∈ R^{n×n×n} has equal dimensions in the different modes. Using a standard first-order method, the solution of (6) has a per-iteration complexity of O(n³), and to achieve an accuracy of ε, O(1/ε) iterations are required [22]. Since only four such steps need be performed, the complexity of the method is O(n³/ε), where ε is the accuracy to which (6) is solved. Another alternative is to reformulate (6) so that it is amenable to greedy atomic approaches [23], which yields an order-of-magnitude improvement. We note that, in contrast, a tensor unfolding for this problem [14, 21, 26] results in the need to solve much larger convex programs.
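First-order solvers for (6) alternate proximal steps on the two components. As a small illustrative building block (our own sketch, not the paper's code), the proximal operator of the ℓ₁ penalty on the sparse component is entrywise soft-thresholding; the low-rank component is handled analogously by soft-thresholding the singular values, which additionally requires an SVD routine and is omitted here:

```python
def soft_threshold(M, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.

    Each entry is shrunk toward zero by tau; entries with |x| <= tau vanish,
    which is what produces exactly sparse iterates for Y in problem (6).
    """
    def st(x):
        if x > tau:
            return x - tau
        if x < -tau:
            return x + tau
        return 0.0
    return [[st(x) for x in row] for row in M]
```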
For instance, for Z ∈ R^{n×n×n}, the resulting flattened matrix would be of size n² × n, and the resulting convex problem would then have a complexity of O(n⁴/ε). For higher-order tensors, the gap in computational complexity would increase by further orders of n.

2.2 Numerical Experiments

We now present numerical results to validate our approach. We perform experiments with tensors of size 50 × 50 × 50 (non-symmetric). A tensor Z is generated as the sum of a low-rank tensor X and a sparse tensor Y. The low-rank component is generated as follows: three sets of r unit vectors uᵢ, vᵢ, wᵢ ∈ R⁵⁰ are generated randomly, independently and uniformly distributed on the unit sphere. Random positive scale factors (uniformly distributed on [0, 1]) are chosen, and the tensor X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ. The tensor Y is generated by (Bernoulli) randomly sampling its entries with probability p. For each such p, we perform 10 trials and apply our algorithm. In all our experiments, the regularization parameter was picked to be ν = 1/√n. The optimization problem (6) is solved using CVX in MATLAB. We report success if the MSE is smaller than 10⁻⁵, separately for both the X and Y components. We plot the empirical probability of success as a function of p in Fig. 1 (a), (b), for multiple values of the true rank r. In Fig. 1 (c), (d) we test the scalability of our method.

[Figure 1: four panels. (a) Low Rank Component and (b) Sparse Component: P(recovery) versus sparsity × 100, for r = 1, 2, 3, 4. (c) Low Rank Component and (d) Sparse Component: number of inexact recoveries versus corruption sparsity.] Figure 1 caption: Recovery of the low rank and sparse components from our proposed methods. In figures (a) and (b) we see that the probability of recovery is high when both the rank and sparsity are low.
In figures (c) and (d) we study the recovery error for a tensor of dimensions 300 × 300 × 300 and rank 50.

We generate a random 300 × 300 × 300 tensor of rank 50 and corrupt it with a sparse tensor of varying sparsity level. We run 5 independent trials and see that, for low levels of corruption, both the low-rank and sparse components are accurately recovered by our method.

3 Main Results

We now present the main rigorous guarantees related to the performance of our algorithm. Due to space constraints, the proofs are deferred to the supplementary material.

Theorem 3.1. Suppose Z = X + Y, where X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ has rank r ≤ n₁ and factors satisfying Assumption 1.1. Suppose Y has support Ω and the following conditions are satisfied:
$$\mu\big(\Omega^{(3)}\big)\,\zeta(U,V) \le \tfrac{1}{6}, \qquad \mu\big(\Omega^{(1)}\big)\,\zeta(V,W) < \tfrac{1}{6}.$$
Then Algorithm 1 succeeds in exactly recovering the component tensors, i.e. (X, Y) = (X̂, Ŷ), whenever the νₖ are picked so that
$$\nu_3 \in \left[ \frac{\zeta(U,V)}{1 - 4\,\zeta(U,V)\,\mu(\Omega^{(3)})},\ \frac{1 - 3\,\zeta(U,V)\,\mu(\Omega^{(3)})}{\mu(\Omega^{(3)})} \right], \qquad \nu_1 \in \left[ \frac{\zeta(V,W)}{1 - 4\,\zeta(V,W)\,\mu(\Omega^{(1)})},\ \frac{1 - 3\,\zeta(V,W)\,\mu(\Omega^{(1)})}{\mu(\Omega^{(1)})} \right].$$
Specifically, the choice
$$\nu_3 = \frac{\big(3\,\zeta(U,V)\big)^p}{\big(\mu(\Omega^{(3)})\big)^{1-p}}, \qquad \nu_1 = \frac{\big(3\,\zeta(V,W)\big)^p}{\big(\mu(\Omega^{(1)})\big)^{1-p}}$$
for any p ∈ [0, 1] lies in these respective intervals and guarantees exact recovery.

For a matrix M, the degree of M, denoted deg(M), is the maximum number of non-zeros in any row or column of M. For a tensor Y, we define the degree along mode k, denoted degₖ(Y), to be the maximum number of non-zero entries in any row or column of a matrix supported on Ω⁽ᵏ⁾ (defined in Section 1.2). The degree of Y is deg(Y) := max_{k∈{1,2,3}} degₖ(Y).

Lemma 3.2. We have μ(Ω⁽ᵏ⁾) ≤ deg(Y) for all k.

For a subspace S ⊆ Rⁿ, let us define the incoherence of the subspace as
$$\beta(S) := \max_i \|P_S e_i\|_2,$$
where P_S denotes the projection operator onto S, eᵢ is a standard unit vector, and ‖·‖₂ is the Euclidean norm of a vector.
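The two quantities just defined, deg(M) and β(S), are directly computable; the following helpers are our own illustration (not the paper's code), and `subspace_incoherence` assumes the subspace is given by a matrix with orthonormal columns, in which case ‖P_S eᵢ‖₂ is simply the norm of the i-th row:

```python
import math

def degree(M):
    """deg(M): the maximum number of non-zeros in any row or column.
    Lemma 3.2 bounds mu(Omega^(k)) by this quantity."""
    row_nnz = [sum(1 for x in row if x != 0) for row in M]
    col_nnz = [sum(1 for row in M if row[j] != 0) for j in range(len(M[0]))]
    return max(row_nnz + col_nnz)

def subspace_incoherence(U):
    """beta(span(U)) = max_i ||P_S e_i||_2, assuming U has orthonormal
    columns, so that P_S = U U^T and ||P_S e_i||_2 = ||i-th row of U||_2."""
    return max(math.sqrt(sum(x * x for x in row)) for row in U)
```

A subspace aligned with a standard basis vector is maximally coherent (β = 1), while a "diffuse" direction such as (1, 1, 1)/√3 has β = 1/√3, illustrating why incoherence shrinks like √(max{r, log n}/n) for random subspaces.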
Let us define:
$$\mathrm{inc}(\mathbf{X}) := \max\{\beta(\mathrm{span}(U)),\ \beta(\mathrm{span}(V)),\ \beta(\mathrm{span}(W))\},$$
$$\mathrm{inc}_3(\mathbf{X}) := \max\{\beta(\mathrm{span}(U)),\ \beta(\mathrm{span}(V))\}, \qquad \mathrm{inc}_1(\mathbf{X}) := \max\{\beta(\mathrm{span}(V)),\ \beta(\mathrm{span}(W))\}.$$
Note that inc(X) < 1, always. For many random ensembles of interest, the incoherence scales gracefully with the dimension n, i.e. inc(X) ≤ K √(max{r, log n}/n).

Lemma 3.3. We have ζ(U, V) ≤ 2 inc(X) and ζ(V, W) ≤ 2 inc(X).

Corollary 3.4. Let Z = X + Y, with X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ of rank r ≤ n₁, factors satisfying Assumption 1.1, and incoherence inc(X). Suppose Y is sparse with degree deg(Y). If the condition inc(X) deg(Y) < 1/12 holds, then Algorithm 1 successfully recovers the true solution, i.e. (X, Y) = (X̂, Ŷ), when the parameters satisfy
$$\nu_3 \in \left[ \frac{2\,\mathrm{inc}_3(\mathbf{X})}{1 - 8\deg_3(\mathbf{Y})\,\mathrm{inc}_3(\mathbf{X})},\ \frac{1 - 6\deg_3(\mathbf{Y})\,\mathrm{inc}_3(\mathbf{X})}{\deg_3(\mathbf{Y})} \right], \qquad \nu_1 \in \left[ \frac{2\,\mathrm{inc}_1(\mathbf{X})}{1 - 8\deg_1(\mathbf{Y})\,\mathrm{inc}_1(\mathbf{X})},\ \frac{1 - 6\deg_1(\mathbf{Y})\,\mathrm{inc}_1(\mathbf{X})}{\deg_1(\mathbf{Y})} \right].$$
Specifically, the choice
$$\nu_3 = \frac{\big(6\,\mathrm{inc}_3(\mathbf{X})\big)^p}{\big(2\deg_3(\mathbf{Y})\big)^{1-p}}, \qquad \nu_1 = \frac{\big(6\,\mathrm{inc}_1(\mathbf{X})\big)^p}{\big(2\deg_1(\mathbf{Y})\big)^{1-p}}$$
for any p ∈ [0, 1] is a valid choice that guarantees exact recovery.

Remark. Note that Corollary 3.4 presents a deterministic guarantee on the recoverability of a sparse corruption of a low-rank tensor, and can be viewed as a tensor extension of [9, Corollary 3].

We now consider, for the sake of simplicity, tensors of uniform dimension, i.e. X, Y, Z ∈ R^{n×n×n}. We show that when the low-rank and sparse components are suitably random, the approach outlined in Algorithm 1 achieves exact recovery. We define the random sparsity model to be one where each entry of the tensor Y is non-zero independently and with identical probability ρ. We make no assumption about the magnitude of the entries of Y, only that its non-zero entries are thus sampled.

Lemma 3.5. Let X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ, where uᵢ, vᵢ, wᵢ ∈ Rⁿ are uniformly randomly distributed on the unit sphere S^{n−1}. Then the incoherence of the tensor X satisfies
$$\mathrm{inc}(\mathbf{X}) \le c_1 \sqrt{\frac{\max\{r, \log n\}}{n}}$$
with probability exceeding 1 − c₂ n⁻³ log n for some constants c₁, c₂.

Lemma 3.6.
Suppose the entries of Y are sampled according to the random sparsity model, with ρ = O((n^{3/2} max(log n, r))⁻¹). Then the tensor Y satisfies
$$\deg(\mathbf{Y}) \le \frac{\sqrt{n}}{12\, c_1 \max(\log n, r)}$$
with probability exceeding 1 − exp(−c₃ √n / max(log n, r)) for some constant c₃ > 0.

Corollary 3.7. Let Z = X + Y, where X is low-rank with random factors as per the conditions of Lemma 3.5, and Y is sparse with random support as per the conditions of Lemma 3.6. Provided r = o(n^{1/2}), Algorithm 1 successfully recovers the correct decomposition, i.e. (X̂, Ŷ) = (X, Y), with probability exceeding 1 − n^{−α} for some α > 0.

Remarks. 1) Under this sampling model, the cardinality of the support of Y is allowed to be as large as m = O(n^{3/2} log⁻¹ n) when the rank r is constant (independent of n). 2) We could equivalently have considered a uniformly random sampling model, i.e. one where a support set of size m is chosen uniformly at random from the set of all possible support sets of cardinality at most m, and our results for exact recovery would still hold. This follows from the equivalence principle for successful recovery between Bernoulli sampling and uniform sampling; see [6, Appendix 7.1]. 3) Note that for the random sparsity ensemble, [6] shows that the choice ν = 1/√n ensures exact recovery (an additional condition regarding the magnitudes of the factors is needed, however). By extension, the same choice can be shown to work for our setting.

4 Extensions

The approach described in Algorithm 1, along with its analysis, is quite modular and can be adapted to various settings to account for different forms of measurements and robustness models. We do not present an analysis of these situations due to space constraints, but outline how these extensions follow from the current development in a straightforward manner.

1) Higher-order tensors: Algorithm 1 can be extended naturally to the higher-order setting.
Recall that in the third-order case one needs to recover two contractions along the third mode to discover the factors U, V, and then two contractions along the first mode to discover the factors V, W. For an order-K tensor Z ∈ R^{n_1×···×n_K} that is the sum of a low-rank component X = Σ_{i=1}^r λ_i ⊗_{l=1}^K u_i^{(l)} and a sparse component Y, one needs to compute contractions of Z along K − 1 different modes. For each of these K − 1 modes the resulting contraction is the sum of a sparse and a low-rank matrix, and thus pairs of matrix problems of the form (6) reveal the sparse and low-rank components of the contractions. The low-rank factors can then be recovered via application of Lemma 2.2, and the full decomposition can thus be recovered. The same guarantees as in Theorem 3.1 and Corollary 3.4 hold verbatim (the notions of incoherence inc(X) and degree deg(Y) of tensors extend to the higher-order case in the natural way).

2) Block sparsity: Situations where entire slices of the tensor are corrupted may arise in recommender systems with adversarial ratings [10]. A natural approach in this case is to use a convex relaxation of the form
minimize_{M1, M2} ν_k ||M1||_* + ||M2||_{1,2}  subject to  Z_v^k = M1 + M2
in place of (6) in Algorithm 1. In the above, ||M||_{1,2} := Σ_i ||M_i||_2, where M_i is the i-th column of M. Since exact recovery of the block-sparse and low-rank components of the contractions is guaranteed via this relaxation under suitable assumptions [10], the algorithm would inherit the associated provable guarantees.

3) Tensor completion: In applications such as recommendation systems, it may be desirable to perform tensor completion in the presence of sparse corruptions. In [24], an adaptation of Leurgans' algorithm was presented for performing completion from measurements restricted to only four slices of the tensor, with near-optimal sample complexity (under suitable genericity assumptions about the tensor).
We note that it is straightforward to blend Algorithm 1 with this method to achieve completion with sparse corruptions. Recalling that Z = X + Y and therefore Z_k^3 = X_k^3 + Y_k^3 (i.e. the k-th mode-3 slice of Z is the sum of the corresponding slices of X and Y), if only a subset of the elements of Z_k^3, say P_Λ(Z_k^3) for some index set Λ, is observed, we can replace (6) in Algorithm 1 with
minimize_{M1, M2} ν_k ||M1||_* + ||M2||_1  subject to  P_Λ(Z_v^k) = P_Λ(M1 + M2).
Under suitable incoherence assumptions [6, Theorem 1.2], the above will achieve exact recovery of the slices. Once four slices are accurately recovered, one can then use Leurgans' algorithm to recover the full tensor [24, Theorem 3.6]. Indeed, the above idea can be extended more generally to the concept of deconvolving a sum of sparse and low-rank tensors from separable measurements [24].

4) Non-convex approaches: A basic primitive for sparse and low-rank tensor decomposition used in this paper is the matrix decomposition (6). More efficient non-convex approaches, such as the ones described in [22], may be used instead to speed up Algorithm 1. These alternative non-convex methods [22] require O(rn²) operations per iteration and O(log(1/ε)) iterations, resulting in a total complexity of O(rn² log(1/ε)) for solving the decomposition of the contractions to an accuracy of ε.

References

[1] A. ANANDKUMAR, R. GE, D. HSU, AND S. M. KAKADE, A tensor approach to learning mixed membership community models, The Journal of Machine Learning Research, 15 (2014), pp. 2239–2312.
[2] A. ANANDKUMAR, R. GE, D. HSU, S. M. KAKADE, AND M. TELGARSKY, Tensor decompositions for learning latent variable models, Tech. Rep. 1, 2014.
[3] C. BECKMANN AND S. SMITH, Tensorial extensions of independent component analysis for multisubject FMRI analysis, NeuroImage, 25 (2005), pp. 294–311.
[4] A. BHASKARA, M. CHARIKAR, A. MOITRA, AND A.
VIJAYARAGHAVAN, Smoothed analysis of tensor decompositions, in Proceedings of the 46th Annual ACM Symposium on Theory of Computing, ACM, 2014, pp. 594–603.
[5] S. BHOJANAPALLI AND S. SANGHAVI, A new sampling technique for tensors, arXiv preprint arXiv:1502.05023, (2015).
[6] E. J. CANDÈS, X. LI, Y. MA, AND J. WRIGHT, Robust principal component analysis?, Journal of the ACM, 58 (2011), pp. 11–37.
[7] E. J. CANDÈS AND B. RECHT, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), pp. 717–772.
[8] R. B. CATTELL, Parallel proportional profiles and other principles for determining the choice of factors by rotation, Psychometrika, 9 (1944), pp. 267–283.
[9] V. CHANDRASEKARAN, S. SANGHAVI, P. A. PARRILO, AND A. S. WILLSKY, Rank-sparsity incoherence for matrix decomposition, SIAM Journal on Optimization, 21 (2011), pp. 572–596.
[10] Y. CHEN, H. XU, C. CARAMANIS, AND S. SANGHAVI, Robust matrix completion and corrupted columns, in Proceedings of the 28th International Conference on Machine Learning (ICML-11), L. Getoor and T. Scheffer, eds., New York, NY, USA, 2011, ACM, pp. 873–880.
[11] N. GOYAL, S. VEMPALA, AND Y. XIAO, Fourier PCA and robust tensor decomposition, in Proceedings of the 46th Annual ACM Symposium on Theory of Computing, ACM, 2014, pp. 584–593.
[12] C. J. HILLAR AND L.-H. LIM, Most tensor problems are NP-hard, Journal of the ACM, 60 (2013), pp. 45:1–45:39.
[13] D. HSU, S. KAKADE, AND T. ZHANG, Robust matrix decomposition with sparse corruptions, Information Theory, IEEE Transactions on, 57 (2011), pp. 7221–7234.
[14] B. HUANG, C. MU, D. GOLDFARB, AND J. WRIGHT, Provable models for robust low-rank tensor completion, Pacific Journal of Optimization, 11 (2015), pp. 339–364.
[15] A. KRISHNAMURTHY AND A. SINGH, Low-rank matrix and tensor completion via adaptive sampling, in Advances in Neural Information Processing Systems, 2013.
[16] J. B.
KRUSKAL, Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics, Linear Algebra Applicat., 18 (1977).
[17] V. KULESHOV, A. CHAGANTY, AND P. LIANG, Tensor factorization via matrix factorization, arXiv.org, (2015).
[18] S. LEURGANS, R. ROSS, AND R. ABEL, A decomposition for three-way arrays, SIAM Journal on Matrix Analysis and Applications, 14 (1993), pp. 1064–1083.
[19] Q. LI, A. PRATER, L. SHEN, AND G. TANG, Overcomplete tensor decomposition via convex optimization, in IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, Dec. 2015.
[20] N. MESGARANI, M. SLANEY, AND S. A. SHAMMA, Discrimination of speech from non-speech based on multiscale spectro-temporal modulations, Audio, Speech and Language Processing, IEEE Transactions on, 14 (2006), pp. 920–930.
[21] C. MU, B. HUANG, J. WRIGHT, AND D. GOLDFARB, Square deal: Lower bounds and improved relaxations for tensor recovery, preprint arXiv:1307.5870, 2013.
[22] P. NETRAPALLI, U. NIRANJAN, S. SANGHAVI, A. ANANDKUMAR, AND P. JAIN, Non-convex robust PCA, in Advances in Neural Information Processing Systems, 2014.
[23] N. RAO, P. SHAH, AND S. WRIGHT, Forward-backward greedy algorithms for signal demixing, in Signals, Systems and Computers, 2013 Asilomar Conference on, IEEE, 2014.
[24] P. SHAH, N. RAO, AND G. TANG, Optimal low-rank tensor recovery from separable measurements: Four contractions suffice, arXiv.org, (2015).
[25] G. TANG AND P. SHAH, Guaranteed tensor decomposition: A moment approach, International Conference on Machine Learning (ICML 2015), (2015), pp. 1491–1500.
[26] R. TOMIOKA, K. HAYASHI, AND H. KASHIMA, Estimation of low-rank tensors via convex optimization, preprint arXiv:1010.0789, 2011.
[27] M. YUAN AND C.-H. ZHANG, On tensor completion via nuclear norm minimization, preprint arXiv:1405.1773, 2014.
Testing Closeness With Unequal Sized Samples

Bhaswar B. Bhattacharya
Department of Statistics
Stanford University
Stanford, CA 94305
bhaswar@stanford.edu

Gregory Valiant∗
Department of Computer Science
Stanford University
Stanford, CA 94305
valiant@stanford.edu

Abstract

We consider the problem of testing whether two unequal-sized samples were drawn from identical distributions, versus distributions that differ significantly. Specifically, given a target error parameter ε > 0, m1 independent draws from an unknown distribution p with discrete support, and m2 draws from an unknown distribution q with discrete support, we describe a test for distinguishing the case that p = q from the case that ||p − q||_1 ≥ ε. If p and q are supported on at most n elements, then our test is successful with high probability provided m1 ≥ n^{2/3}/ε^{4/3} and m2 = Ω( max{ n/(√m1 ε²), √n/ε² } ). We show that this tradeoff is information-theoretically optimal throughout this range, in the dependencies on all parameters n, m1, and ε, to constant factors for worst-case distributions. As a consequence, we obtain an algorithm for estimating the mixing time of a Markov chain on n states, up to a log n factor, that uses Õ(n^{3/2} τ_mix) queries to a "next node" oracle. The core of our testing algorithm is a relatively simple statistic that seems to perform well in practice, both on synthetic and on natural language data. We believe that this statistic might prove to be a useful primitive within larger machine learning and natural language processing systems.

1 Introduction

One of the most basic problems in statistical hypothesis testing is the question of distinguishing whether two unknown distributions are very similar, or significantly different. Classical tests, like the Chi-squared test or the Kolmogorov-Smirnov statistic, are optimal in the asymptotic regime, for fixed distributions as the sample sizes tend towards infinity.
Nevertheless, in many modern settings, such as the analysis of customer data, web logs, natural language, and genomics, the support sizes and complexity of the underlying distributions are far larger than the available datasets, despite the quantity of data. This is evidenced by the fact that many phenomena are observed only a single time in the datasets, and the empirical distributions of the samples are poor representations of the true underlying distributions.1 In such settings, we must understand these statistical tasks not only in the asymptotic regime (in which the amount of available data goes to infinity), but in the "undersampled" regime, in which the dataset is significantly smaller than the size or complexity of the distribution in question. Surprisingly, despite an intense history of study by the statistics, information theory, and computer science communities, aspects of basic hypothesis testing and estimation questions, especially in the undersampled regime, remain unresolved and require both new algorithms and new analysis techniques.

∗Supported in part by NSF CAREER Award CCF-1351108.
1To give some specific examples, two recent independent studies [19, 26] each considered the genetic sequences of over 14,000 individuals, and found that rare variants are extremely abundant, with over 80% of mutations observed just once in the sample. A separate recent paper [16] found that the discrepancy in rare-mutation abundance cited in different demographic modeling studies can largely be explained by discrepancies in the sample sizes of the respective studies, as opposed to differences in the actual distributions of rare mutations across demographics, highlighting the importance of improved statistical tests in this "undersampled" regime.
In this work, we examine the basic hypothesis testing question of deciding whether two unknown distributions over discrete supports are identical (or extremely similar), versus have total variation distance at least ε, for some specified parameter ε > 0. We consider (and largely resolve) this question in the extremely practically relevant setting of unequal sample sizes. Informally, taking ε to be a small constant, we show that provided p and q are supported on at most n elements, for any γ ∈ [0, 1/3] the hypothesis test can be performed successfully (with high probability over the random samples) given samples of size m1 = Θ(n^{2/3+γ}) from p and m2 = Θ(n^{2/3−γ/2}) from q, where n is the size of the supports of the distributions p and q. Furthermore, for every γ in this range, this tradeoff between m1 and m2 is necessary, up to constant factors. Thus, our results smoothly interpolate between the known bound of Θ(n^{2/3}) on the sample size necessary in the setting where one is given two equal-sized samples [6, 9], and the bound of Θ(√n) in the setting in which the sample is drawn from one distribution and the other distribution is known to the algorithm [22, 29]. Throughout most of the regime of parameters, when m1 ≪ m2², our algorithm is a natural extension of the algorithm proposed in [9], and is similar to the algorithm proposed in [3] except for the addition of a normalization term that seems crucial to obtaining our information-theoretic optimality. In the extreme regime, when m1 ≈ n and m2 ≈ √n, our algorithm introduces an additional statistic which (we believe) is new. Our algorithm is relatively simple and practically viable. In Section 4 we illustrate the efficacy of our approach on both synthetic data and the real-world problem of deducing whether two words are synonyms, based on a small sample of the bi-grams in which they occur.
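The exponent arithmetic behind this interpolation is easy to check numerically. The following sketch treats ε as a constant and drops it, using the m2 bound stated in the abstract:

```python
# With eps treated as a constant, the bound from the abstract reads
# m2 = max(n / sqrt(m1), sqrt(n)).  For m1 = n^(2/3 + gamma) this
# equals n^(2/3 - gamma/2) throughout gamma in [0, 1/3].
n = 10 ** 6
for gamma in (0.0, 0.1, 0.2, 1 / 3):
    m1 = n ** (2 / 3 + gamma)
    m2 = max(n / m1 ** 0.5, n ** 0.5)
    assert abs(m2 - n ** (2 / 3 - gamma / 2)) <= 1e-6 * m2

# Endpoints: gamma = 0 recovers the equal-sample bound n^(2/3), and
# gamma = 1/3 recovers the one-known-distribution bound sqrt(n).
assert abs(max(n / n ** 0.5, n ** 0.5) - n ** 0.5) < 1e-6
```

In particular, the first term n/√m1 dominates √n exactly while γ < 1/3, which is why the additional statistic is only needed at the m1 ≈ n extreme.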
We also note that, as pointed out in several related works [3, 12, 6], this hypothesis testing question has applications to other problems, such as estimating or testing the mixing time of Markov chains, and our results yield improved algorithms in these settings.

1.1 Related Work

The general question of how to estimate or test properties of distributions using fewer samples than would be necessary to actually learn the distribution has been studied extensively since the late '90s. Most of the work has focused on "symmetric" properties (properties whose value is invariant to relabeling domain elements) such as entropy, support size, and distance metrics between distributions (such as ℓ1 distance). This has included both algorithmic work (e.g. [4, 5, 7, 8, 10, 13, 20, 21, 27, 28, 29]) and results on developing techniques and tools for establishing lower bounds (e.g. [23, 30, 27]). See the recent survey by Rubinfeld for a more thorough summary of the developments in this area [24]. The specific problem of "closeness testing" or "identity testing", that is, deciding whether two distributions p and q are similar versus significantly different, has two main variants: the one-unknown-distribution setting, in which q is known and a sample is drawn from p, and the two-unknown-distributions setting, in which both p and q are unknown and samples are drawn from both. We briefly summarize the previous results for these two settings. In the one-unknown-distribution setting (which can be thought of as the limiting setting in the case that we have an arbitrarily large sample drawn from distribution q, and a relatively modest-sized sample from p), initial work of Goldreich and Ron [12] considered the problem of testing whether p is the uniform distribution over [n], versus has distance at least ε.
The tight bound of Θ(√n/ε²) was later shown by Paninski [22], essentially leveraging the birthday paradox and the intuition that, among distributions supported on n elements, the uniform distribution maximizes the number of domain elements that will be observed once. Batu et al. [8] showed that, up to polylogarithmic factors in n and polynomial factors in ε, this dependence is optimal for worst-case distributions over [n]. Recently, an "instance-optimal" algorithm and matching lower bound were shown: for any distribution q, up to constant factors, max{ 1/ε, ε^{−2} ||q^{−max}_{−Θ(ε)}||_{2/3} } samples from p are both necessary and sufficient to test p = q versus ||p − q||_1 ≥ ε, where ||q^{−max}_{−Θ(ε)}||_{2/3} ≤ ||q||_{2/3} is the 2/3-rd norm of the vector of probabilities of distribution q after the maximum element has been removed, and the smallest elements, up to Θ(ε) total mass, have been removed. (This immediately implies the tight bound that if q is any distribution supported on [n], O(√n/ε²) samples are sufficient to test its identity.) The two-unknown-distributions setting was introduced to this community by Batu et al. [6]. The optimal sample complexity of this problem was recently determined by Chan et al. [9]: they showed that m = Θ(n^{2/3}/ε^{4/3}) samples are necessary and sufficient. In a slightly different vein, Acharya et al. [1, 2] recently considered the question of closeness testing with two unknown distributions from the standpoint of competitive analysis. They proposed an algorithm that performs the desired task using O(s^{3/2} polylog s) samples, and established a lower bound of Ω(s^{7/6}), where s represents the number of samples required to determine whether a set of samples was drawn from p versus q in the setting where p and q are explicitly known.
A natural generalization of this hypothesis testing problem, which interpolates between the two-unknown-distributions setting and the one-unknown-distribution setting, is to consider unequal-sized samples from the two distributions. More formally, given m1 samples from the distribution p, the asymmetric closeness testing problem is to determine how many samples m2 are required from the distribution q so that the hypothesis p = q versus ||p − q||_1 > ε can be distinguished with large constant probability (say 2/3). Note that the results of Chan et al. [9] imply that it is sufficient to consider m1 ≥ Θ(n^{2/3}/ε^{4/3}). This problem was studied recently by Acharya et al. [3]: they gave an algorithm that, given m1 samples from p, uses m2 = O( max{ n log n/(ε³ √m1), √n log n/ε² } ) samples from q to distinguish the two distributions with high probability. They also proved a lower bound of m2 = Ω( max{ √n/ε², n²/(ε⁴ m1²) } ). There is a polynomial gap between these upper and lower bounds in the dependence on n, √m1, and ε. As a corollary to our main hypothesis testing result, we obtain an improved algorithm for testing the mixing time of a Markov chain. The idea of testing mixing properties of a Markov chain goes back to the work of Goldreich and Ron [12], which conjectured an algorithm for testing expansion of bounded-degree graphs. Their test is based on picking a random node and testing whether random walks from this node reach a distribution that is close to the uniform distribution on the nodes of the graph. They conjectured that their algorithm has O(√n) query complexity. Later, Czumaj and Sohler [11], Kale and Seshadhri [15], and Nachmias and Shapira [18] independently concluded that the algorithm of Goldreich and Ron is provably a test for the expansion property of graphs. Rapid mixing of a chain can also be tested using eigenvalue computations.
Mixing is related to the separation between the two largest eigenvalues [25, 17], and the eigenvalues of a dense n × n matrix can be approximated in O(n³) time and O(n²) space. However, for a sparse n × n symmetric matrix with m non-zero entries, the same task can be achieved in O(n(m + log n)) operations and O(n + m) space. Batu et al. [6] used their ℓ1 distance test on the t-step distributions to test mixing properties of Markov chains. Given a finite Markov chain with state space [n] and transition matrix P = ((P(x, y))), they essentially show that one can estimate the mixing time τ_mix up to a factor of log n using Õ(n^{5/3} τ_mix) queries to a next-node oracle, which takes a state x ∈ [n] and outputs a state y ∈ [n] drawn from the distribution P(x, ·). Such an oracle can often be simulated significantly more easily than actually computing the transition matrix P(x, y). We conclude this related-work section with a comment on "robust" hypothesis testing and distance estimation. A natural hope would be to simply estimate ||p − q||_1 to within some additive ε, which is a strictly more difficult task than distinguishing p = q from ||p − q||_1 ≥ ε. The results of Valiant and Valiant [27, 28, 29] show that this problem is significantly more difficult than hypothesis testing: the distance can be estimated to additive error ε, for distributions supported on at most n elements, using samples of size O(n/log n) (in both the setting where one distribution is unknown and the setting where both are). Moreover, Ω(n/log n) samples are information-theoretically necessary, even if q is the uniform distribution over [n] and one wants to distinguish ||p − q||_1 ≤ 1/10 from ||p − q||_1 ≥ 9/10. Recall that the non-robust test of distinguishing p = q versus ||p − q||_1 > 9/10 requires a sample of size only O(√n).
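Returning to the next-node oracle mentioned above: when the transition probabilities are available, such an oracle is simple to simulate, and t oracle queries yield one draw from the t-step distribution P^t_x. A sketch on a toy 3-state chain of our own choosing (the chain, its size, and the walk length are illustrative, not from the paper):

```python
import random

# Toy 3-state chain (illustrative); row x of P is the distribution P(x, .).
P = [[0.5, 0.5, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.5, 0.5]]

def next_node(x):
    """One query to the next-node oracle: returns y ~ P(x, .)."""
    return random.choices(range(len(P)), weights=P[x])[0]

def t_step_sample(x, t):
    """One draw from P^t_x, obtained with t oracle queries."""
    for _ in range(t):
        x = next_node(x)
    return x

random.seed(0)
draws = [t_step_sample(0, t=50) for _ in range(2000)]
freq = [draws.count(s) / len(draws) for s in range(3)]
# This chain's stationary distribution is (1/7, 5/7, 1/7), so after
# mixing, state 1 dominates the empirical frequencies.
assert freq[1] > freq[0] and freq[1] > freq[2]
```

Averaging such draws over a uniformly random starting state gives samples from the average t-step distribution used later in Section 3.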
The exact worst-case sample complexity of distinguishing ||p − q||_1 ≤ 1/n^c versus ||p − q||_1 ≥ ε is not well understood, though in the case of constant ε, up to logarithmic factors, the required sample size seems to scale linearly in the exponent, between n^{2/3} and n, as c goes from 1/3 to 0.

1.2 Our results

Our main result resolves, to constant factors, the minimax sample complexity of the closeness testing problem in the unequal-sample setting, in terms of n, the support size of the distributions in question:

Theorem 1. Given m1 ≥ n^{2/3}/ε^{4/3} and ε > n^{−1/12}, and sample access to distributions p and q over [n], there is an O(m1)-time algorithm which takes m1 independent draws from p and m2 = O( max{ n/(√m1 ε²), √n/ε² } ) independent draws from q, and with probability at least 2/3 distinguishes whether
||p − q||_1 ≤ O(1/m2) versus ||p − q||_1 ≥ ε. (1)
Moreover, given m1 samples from p, Ω( max{ n/(√m1 ε²), √n/ε² } ) samples from q are information-theoretically necessary to distinguish p = q from ||p − q||_1 ≥ ε with any constant probability bounded below by 1/2.

The lower bound in the above theorem is proved using the machinery developed in Valiant [30], and "interpolates" between the Θ(√n/ε²) lower bound in the one-unknown-distribution setting of testing uniformity [22] and the Θ(n^{2/3}/ε^{4/3}) lower bound in the setting of equal sample sizes from two unknown distributions [9]. The algorithm establishing the upper bound involves a re-weighted version of a statistic proposed in [9], and is similar to the algorithm proposed in [3] modulo the addition of a normalizing term, which seems crucial to obtaining our tight results. In the extreme regime, when m1 ≈ n and m2 ≈ √n/ε², we incorporate an additional statistic that has not appeared before in the literature. As an application of Theorem 1 in the extreme regime when m1 ≈ n, we obtain an improved algorithm for estimating the mixing time of a Markov chain:

Corollary 1.
Consider a finite Markov chain with state space [n] and a next-node oracle; there is an algorithm that estimates the mixing time τ_mix, up to a multiplicative factor of log n, using Õ(n^{3/2} τ_mix) time and queries to the next-node oracle.

Concurrently to our work, Hsu et al. [14] considered the question of estimating the mixing time based on a single sample path (as opposed to our model of a sampling oracle). In contrast to our approach via hypothesis testing, they considered the natural spectral approach, and showed that the mixing time can be approximated, up to logarithmic factors, given a path of length Õ(τ_mix³/π_min), where π_min is the minimum probability of a state under the stationary distribution. Hence, if the stationary distribution is uniform over n states, this becomes Õ(n τ_mix³). It remains an intriguing open question whether one can simultaneously achieve both the linear dependence on τ_mix of our results and the linear dependence on 1/π_min, or on the size of the state space n, as in their results.

1.3 Outline

We begin by stating our testing algorithm and describing the intuition behind it. The formal proofs of the performance guarantees of the algorithm require rather involved bounds on the moments of various parameters, and are provided in the supplementary material. We also defer the entirety of the matching information-theoretic lower bounds to the supplementary material, as the techniques may not appeal to as wide an audience as the algorithmic portion of our work. The application of our testing results to the problem of testing or estimating the mixing time of a Markov chain is discussed in Section 3. Finally, Section 4 contains some empirical results, suggesting that the statistic at the core of our testing algorithm performs very well in practice.
This section contains both results on synthetic data and an illustration of how to apply these ideas to the problem of estimating the semantic similarity of two words based on samples of the n-grams that contain the words in a corpus of text.

2 Algorithms for ℓ1 Testing

In this section we describe our algorithm for ℓ1 testing with unequal samples. This gives the upper bound of Theorem 1 on the sample sizes necessary to distinguish p = q from ||p − q||_1 ≥ ε. For clarity and ease of exposition, in this section we consider ε to be an absolute constant and suppress the dependence on ε. The slightly more involved algorithm that also obtains the optimal dependence on the parameter ε is given in the supplementary material. We begin by presenting the algorithm, and then discuss the intuition for the various steps.

Algorithm 1 The Closeness Testing Algorithm

Suppose ε = Ω(1) and m1 = O(n^{1−γ}) for some γ ≥ 0. Let S1, S2 denote two independent sets of m1 samples drawn from p, and let T1, T2 denote two independent sets of m2 samples drawn from q. We wish to test p = q versus ||p − q||_1 > ε.

• Let b = C0 log n / m2, for an absolute constant C0, and define the set
B = { i ∈ [n] : X_i^{S1}/m1 > b } ∪ { i ∈ [n] : Y_i^{T1}/m2 > b },
where X_i^{S1} denotes the number of occurrences of i in S1, and Y_i^{T1} denotes the number of occurrences of i in T1.
• Let Xi denote the number of occurrences of element i in S2, and Yi the number of occurrences of element i in T2:
1. Check whether
Σ_{i∈B} | Xi/m1 − Yi/m2 | ≤ ε/6. (2)
2. Check whether
Z := Σ_{i∈[n]\B} [ (m2 Xi − m1 Yi)² − (m2² Xi + m1² Yi) ] / (Xi + Yi) ≤ Cγ m1^{3/2} m2, (3)
for an appropriately chosen constant Cγ (depending on γ).
3. If γ ≥ 1/9: if (2) and (3) hold, then ACCEPT; otherwise, REJECT.
4. Otherwise, if γ < 1/9:
• Check whether
R := Σ_{i∈[n]\B} 1{Yi = 2} / (Xi + 1) ≤ C1 m2²/m1, (4)
where C1 is an appropriately chosen absolute constant.
• REJECT if there exists i ∈ [n] such that Yi ≥ 3 and Xi ≤ C2 m1/(m2 n^{1/3}), where C2 is an appropriately chosen absolute constant.
• If (2), (3), and (4) hold, then ACCEPT; otherwise, REJECT.

The intuition behind the above algorithm is as follows. With high probability, all elements in the set B satisfy pi > b/2 or qi > b/2 (or both). Given that these elements are "heavy", their contribution to the ℓ1 distance is accurately captured by the ℓ1 distance between their empirical frequencies (where these empirical frequencies are based on the second set of samples, S2 and T2). For the elements not in B, the "light" elements, the empirical frequencies will in general not accurately reflect the true probabilities, and hence the distance between the empirical distributions of the light elements is misleading. The Z statistic of Equation (3) is designed specifically for this regime. If the denominator of this statistic were omitted, it would give an estimator of the squared ℓ2 distance between the distributions (scaled by a factor of m1² m2²). To see this, note that if pi and qi are small, then Binomial(m1, pi) ≈ Poisson(m1 pi) and Binomial(m2, qi) ≈ Poisson(m2 qi); furthermore, a simple calculation yields that if Xi ← Poisson(m1 pi) and Yi ← Poisson(m2 qi), then
E[ (m2 Xi − m1 Yi)² − (m2² Xi + m1² Yi) ] = m1² m2² (pi − qi)².
The normalization by Xi + Yi "linearizes" the Z statistic, essentially turning the squared ℓ2 distance into an estimate of the ℓ1 distance between the light elements of the two distributions. Similar results can possibly be obtained using other linear functions of Xi and Yi in the denominator, though we note that the "obvious" normalizing factor Xi + (m1/m2) Yi does not seem to work theoretically, and has extremely poor performance in practice. For the extreme case (corresponding to γ < 1/9), where m1 ≈ n and m2 ≈ √n/ε², the statistic Z might have prohibitively large variance; this is essentially due to the "birthday paradox", which might cause a constant number of rare elements (having probability O(1/n)) to occur twice in a sample of size m2 ≈ √n/ε².
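As an aside, the Poisson moment identity above can be verified with exact rational arithmetic; the following check uses only E[Z] = λ and E[Z²] = λ + λ² for Z ~ Poisson(λ) and independence of Xi and Yi (the specific numeric values are arbitrary):

```python
from fractions import Fraction as F

def expected_term(m1, m2, p, q):
    """E[(m2*X - m1*Y)^2 - (m2^2*X + m1^2*Y)] for independent
    X ~ Poisson(m1*p) and Y ~ Poisson(m2*q), expanded using the
    Poisson moments E[Z] = lam and E[Z^2] = lam + lam^2."""
    l1, l2 = m1 * p, m2 * q
    second = (m2 ** 2 * (l1 + l1 ** 2)
              - 2 * m1 * m2 * l1 * l2
              + m1 ** 2 * (l2 + l2 ** 2))
    return second - (m2 ** 2 * l1 + m1 ** 2 * l2)

# The expectation collapses exactly to m1^2 * m2^2 * (p - q)^2.
m1, m2, p, q = 100, 30, F(1, 50), F(1, 20)
assert expected_term(m1, m2, p, q) == m1 ** 2 * m2 ** 2 * (p - q) ** 2
```

The linear correction term m2² Xi + m1² Yi is what cancels the Poisson variance, leaving only the squared mean difference.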
Each such element will contribute Ω(m1²) ≈ n² to the Z statistic, and hence the variance can be ≈ n⁴. The statistic R of Equation (4) is tailored to deal with these cases, and captures the intuition that we are more tolerant of indices i for which Yi = 2 if the corresponding Xi is larger. It is worth noting that one can also define a natural analog of the R statistic corresponding to the indices i for which Yi = 3, etc., with which the robustness parameter of the test can be improved. The final check, ensuring that in this regime with m1 ≫ m2 there are no elements for which Yi ≥ 3 but Xi is small, rules out the remaining sets of distributions p, q for which the variance of the Z statistic is intolerably large. Finally, we should emphasize that the crude step of using two independent batches of samples, the first to obtain the partition of the domain into "heavy" and "light" elements and the second to actually compute the statistics, is for ease of analysis. As our empirical results of Section 4 suggest, for practical applications one may want to use only the Z statistic of (3), and one certainly should not "waste" half the samples to perform the "heavy"/"light" partition.

3 Estimating Mixing Times in Markov Chains

The basic hypothesis-testing question of distinguishing identical distributions from those with significant ℓ1 distance can be employed for several other practically relevant tasks. One example is the problem of estimating the mixing time of Markov chains. Consider a finite Markov chain with state space [n], transition matrix P = ((P(x, y))), and stationary distribution π. The t-step distribution starting at the point x ∈ [n], P^t_x(·), is the probability distribution on [n] obtained by running the chain for t steps starting from x.

Definition 1. The ε-mixing time of a Markov chain with transition matrix P = ((P(x, y))) is defined as
t_mix(ε) := inf{ t ∈ [n] : sup_{x∈[n]} (1/2) Σ_{y∈[n]} |P^t_x(y) − π(y)| ≤ ε }.

Definition 2.
The average t-step distribution of a Markov chain P with n states is the distribution P^t = (1/n) Σ_{x∈[n]} P^t_x, that is, the distribution obtained by choosing x uniformly from [n] and walking t steps from the state x.

The connection between closeness testing and testing whether a Markov chain is close to mixing was first observed by Batu et al. [6], who proposed testing the ℓ1 difference between the distributions P^{t0}_x and P^{t0} for every x ∈ [n]. Their algorithm leveraged their equal-sample-size hypothesis testing results, drawing Õ(n^{2/3} log n) samples from each of P^{t0}_x and P^{t0}, yielding an overall running time of Õ(n^{5/3} t0). Here, we note that our unequal-sample-size hypothesis testing algorithm yields an improved runtime. Since the distribution P^{t0} is independent of the starting state x, it suffices to take Õ(n) samples from P^{t0} once, and Õ(√n) samples from P^{t0}_x for every x ∈ [n]. This results in a query and runtime complexity of Õ(n^{3/2} t0). We sketch this algorithm below.

Algorithm 2 Testing for Mixing Times in Markov Chains

Given t0 ∈ R and a finite Markov chain with state space [n] and transition matrix P = ((P(x, y))), we wish to test
H0 : t_mix(O(1/√n)) ≤ t0 versus H1 : t_mix(1/4) > t0. (5)
1. Draw O(log n) samples S1, ..., S_{O(log n)}, each of size Pois(C1 n), from the average t0-step distribution.
2. For each state x ∈ [n], distinguish ||P^{t0}_x − P^{t0}||_1 ≤ O(1/√n) from ||P^{t0}_x − P^{t0}||_1 > 1/4, with probability of error ≪ 1/n. We do this by running O(log n) runs of Algorithm 1, the i-th run using Si together with a fresh set of Pois(O(√n)) samples from P^{t0}_x.
3. If all n of the ℓ1 closeness-testing problems are accepted, then ACCEPT H0.

The above testing algorithm can be leveraged to estimate the mixing time of a Markov chain, via the basic observation that if t_mix(1/4) ≤ t0, then for any ε, t_mix(ε) ≤ (log ε / log(1/2)) t0, and thus t_mix(1/√n) ≤
Because $t_{\mathrm{mix}}(1/4)$ and $t_{\mathrm{mix}}(O(1/\sqrt{n}))$ differ by at most a factor of $\log n$, by applying Algorithm 2 for a geometrically increasing sequence of $t_0$'s, and repeating each test $O(\log t_0 + \log n)$ times, one obtains Corollary 1, restated below:

Corollary 1. For a finite Markov chain with state space $[n]$ and a next-node oracle, there is an algorithm that estimates the mixing time, $\tau_{\mathrm{mix}}$, up to a multiplicative factor of $\log n$, that uses $\tilde{O}(n^{3/2} \tau_{\mathrm{mix}})$ time and queries to the next-node oracle.

4 Empirical Results

Both our formal algorithms and the corresponding theorems involve some unwieldy constant factors (which can likely be reduced significantly). Nevertheless, in this section we provide some evidence that the statistic at the core of our algorithm can be fruitfully used in practice, even for surprisingly small sample sizes.

4.1 Testing similarity of words

An extremely important primitive in natural language processing is the ability to estimate the semantic similarity of two words. Here, we show that the $Z$ statistic,
$$Z = \sum_i \frac{(m_2 X_i - m_1 Y_i)^2 - (m_2^2 X_i + m_1^2 Y_i)}{m_1^{3/2}\, m_2\, (X_i + Y_i)},$$
which is the core of our testing algorithm, can accurately distinguish whether two words are very similar based on surprisingly small samples of the contexts in which they occur. Specifically, for each pair of words $a, b$ that we consider, we select $m_1$ random occurrences of $a$ and $m_2$ random occurrences of word $b$ from the Google books corpus, using the Google Books Ngram Dataset.² We then compare the sample of words that follow $a$ with the sample of words that follow $b$. Henceforth, we refer to these as samples of the set of bi-grams involving each word.
Figure 1(a) illustrates the $Z$ statistic for various pairs of words that range from rather similar words like "smart" and "intelligent", to essentially identical word pairs such as "grey" and "gray" (whose usage differs mainly as a result of historical variation in the preference for one spelling over the other); the sample size of bi-grams containing the first word is fixed at $m_1 = 1{,}000$, and the sample size corresponding to the second word varies from $m_2 = 50$ through $m_2 = 1{,}000$. To provide a frame of reference, we also compute the value of the statistic for independent samples corresponding to the same word (i.e., two different samples of words that follow "wolf"); these are depicted in red. For comparison, we also plot the total variation distance between the empirical distributions of the pair of samples, which does not clearly differentiate between pairs of identical words versus different words, particularly for the smaller sample sizes. One subtle point is that the issue with using the empirical distance between the distributions goes beyond simply not having a consistent reference point. For example, let $X$ denote a large sample of size $m_1$ from distribution $p$, $X'$ denote a small sample of size $m_2$ from $p$, and $Y$ denote a small sample of size $m_2$ from a different distribution $q$. It is tempting to hope that the empirical distance between $X$ and $X'$ will be smaller than the empirical distance between $X$ and $Y$. As Figure 1(b) illustrates, this is not always the case, even for natural distributions: for the specific example illustrated in the figure, over much of the range of $m_2$, the empirical distance between $X$ and $X'$ is indistinguishable from that of $X$ and $Y$, though the $Z$ statistic easily discerns that these distributions are very different.
This point is further emphasized in Figure 2, which depicts this phenomenon in the synthetic setting where $p = \mathrm{Unif}[n]$ is the uniform distribution over $n$ elements, and $q$ is the distribution whose elements have probabilities $(1 \pm \varepsilon)/n$, for $\varepsilon = 1/2$. The second and fourth plots represent the probability that the distance between two empirical distributions of samples from $p$ is smaller than the distance between the empirical distributions of the samples from $p$ and $q$; the first and third plots represent the analogous probability involving the $Z$ statistic. The first two plots correspond to $n = 1{,}000$ and the last two correspond to $n = 50{,}000$. In all plots, we consider a pair of samples of respective sizes $m_1$ and $m_2$, as $m_1$ and $m_2$ range between $\sqrt{n}$ and $n$.

²The Google Books Ngram Dataset is freely available here: http://storage.googleapis.com/books/ngrams/books/datasetsv2.html

Figure 1: (a) Two measures of the similarity between words, based on samples of the bi-grams containing each word. Each line represents a pair of words, and is obtained by taking a sample of $m_1 = 1{,}000$ bi-grams containing the first word, and $m_2 = 50, \ldots, 1{,}000$ bi-grams containing the second word, where $m_2$ is depicted along the x-axis in logarithmic scale.
In both plots, the red lines represent pairs of identical words (e.g. “wolf/wolf”,“almost/almost”,...). The blue lines represent pairs of similar words (e.g. “wolf/fox”, “almost/nearly”,...), and the black line represents the pair “grey/gray” whose distribution of bi-grams differ because of historical variations in preference for each spelling. Solid lines indicate the average over 200 trials for each word pair and choice of m2, with error bars of one standard deviation depicted. The left plot depicts our statistic, which clearly distinguishes identical words, and demonstrates some intuitive sense of semantic distance. The right plot depicts the total variation distance between the empirical distributions—which does not successfully distinguish the identical words, given the range of sample sizes considered. The plot would not be significantly different if other distance metrics between the empirical distributions, such as f-divergence, were used in place of total variation distance. Finally, note the extremely uniform magnitudes of the error bars in the left plot, as m2 increases, which is an added benefit of the Xi + Yi normalization term in the Z statistic. (b) Illustration of how the empirical distance can be misleading: here, the empirical distance between the distributions of samples of bi-grams for “wolf/wolf” is indistinguishable from that for the pair “wolf/fox*” over much of the range of m2; nevertheless, our statistic clearly discerns that these are significantly different distributions. Here, “fox*” denotes the distribution of bi-grams whose first word is “fox”, restricted to only the most common 100 bi-grams. 
Figure 2: The first and third plots depict the probability that the $Z$ statistic applied to samples of sizes $m_1, m_2$ drawn from $p = \mathrm{Unif}[n]$ is smaller than the $Z$ statistic applied to a sample of size $m_1$ drawn from $p$ and $m_2$ drawn from $q$, where $q$ is a perturbed version of $p$ in which all elements have probability $(1 \pm 1/2)/n$. The second and fourth plots depict the probability that the empirical distance between a pair of samples (of respective sizes $m_1, m_2$) drawn from $p$ is less than the empirical distance between a sample of size $m_1$ drawn from $p$ and $m_2$ drawn from $q$. The first two plots correspond to $n = 1{,}000$ and the last two correspond to $n = 50{,}000$. In all plots, $m_1$ and $m_2$ range between $\sqrt{n}$ and $n$ on a logarithmic scale. In all plots the colors depict the average probability based on 100 trials.

References
[1] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, and S. Pan, Competitive closeness testing, COLT, 2011.
[2] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, and S. Pan, Competitive classification and closeness testing, COLT, 2012.
[3] J. Acharya, A. Jafarpour, A. Orlitsky, and A. T. Suresh, Sublinear algorithms for outlier detection and generalized closeness testing, ISIT, 3200–3204, 2014.
[4] J. Acharya, C. Daskalakis, and G. Kamath, Optimal testing for properties of distributions, NIPS, 2015.
[5] Z. Bar-Yossef, R. Kumar, and D. Sivakumar, Sampling algorithms: lower bounds and applications, STOC, 2001.
[6] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White, Testing that distributions are close, FOCS, 2000.
[7] T. Batu, S. Dasgupta, R. Kumar, and R.
Rubinfeld, The complexity of approximating the entropy, SIAM Journal on Computing, 2005.
[8] T. Batu, E. Fischer, L. Fortnow, R. Kumar, R. Rubinfeld, and P. White, Testing random variables for independence and identity, FOCS, 2001.
[9] S.-O. Chan, I. Diakonikolas, P. Valiant, and G. Valiant, Optimal algorithms for testing closeness of discrete distributions, Symposium on Discrete Algorithms (SODA), 1193–1203, 2014.
[10] M. Charikar, S. Chaudhuri, R. Motwani, and V. R. Narasayya, Towards estimation error guarantees for distinct values, Symposium on Principles of Database Systems (PODS), 2000.
[11] A. Czumaj and C. Sohler, Testing expansion in bounded-degree graphs, FOCS, 2007.
[12] O. Goldreich and D. Ron, On testing expansion in bounded-degree graphs, ECCC, TR00-020, 2000.
[13] S. Guha, A. McGregor, and S. Venkatasubramanian, Streaming and sublinear approximation of entropy and information distances, Symposium on Discrete Algorithms (SODA), 2006.
[14] D. Hsu, A. Kontorovich, and C. Szepesvári, Mixing time estimation in reversible Markov chains from a single sample path, NIPS, 2015.
[15] S. Kale and C. Seshadhri, An expansion tester for bounded degree graphs, ICALP, LNCS, Vol. 5125, 527–538, 2008.
[16] A. Keinan and A. G. Clark, Recent explosive human population growth has resulted in an excess of rare genetic variants, Science, 336(6082):740–743, 2012.
[17] D. A. Levin, Y. Peres, and E. L. Wilmer, Markov Chains and Mixing Times, Amer. Math. Soc., 2009.
[18] A. Nachmias and A. Shapira, Testing the expansion of a graph, Electronic Colloquium on Computational Complexity (ECCC), Vol. 14 (118), 2007.
[19] M. R. Nelson, D. Wegmann, et al., An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people, Science, 337(6090):100–104, 2012.
[20] L. Paninski, Estimation of entropy and mutual information, Neural Comp., Vol. 15 (6), 1191–1253, 2003.
[21] L.
Paninski, Estimating entropy on m bins given fewer than m samples, IEEE Transactions on Information Theory, Vol. 50 (9), 2200–2203, 2004.
[22] L. Paninski, A coincidence-based test for uniformity given very sparsely-sampled discrete data, IEEE Transactions on Information Theory, Vol. 54, 4750–4755, 2008.
[23] S. Raskhodnikova, D. Ron, A. Shpilka, and A. Smith, Strong lower bounds for approximating distribution support size and the distinct elements problem, SIAM Journal on Computing, Vol. 39(3), 813–842, 2009.
[24] R. Rubinfeld, Taming big probability distributions, XRDS, Vol. 19(1), 24–28, 2012.
[25] A. Sinclair and M. Jerrum, Approximate counting, uniform generation and rapidly mixing Markov chains, Information and Computation, Vol. 82(1), 93–133, 1989.
[26] J. A. Tennessen, A. W. Bigham, T. D. O'Connor, et al., Evolution and functional impact of rare coding variation from deep sequencing of human exomes, Science, 337(6090):64–69, 2012.
[27] G. Valiant and P. Valiant, Estimating the unseen: an n/log n-sample estimator for entropy and support size, shown optimal via new CLTs, STOC, 2011.
[28] G. Valiant and P. Valiant, Estimating the unseen: improved estimators for entropy and other properties, NIPS, 2013.
[29] G. Valiant and P. Valiant, An automatic inequality prover and instance optimal identity testing, FOCS, 51–60, 2014.
[30] P. Valiant, Testing symmetric properties of distributions, STOC, 2008.
[31] P. Valiant, Testing Symmetric Properties of Distributions, PhD thesis, M.I.T., 2008.
Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach

Yinlam Chow, Stanford University, ychow@stanford.edu
Aviv Tamar, UC Berkeley, avivt@berkeley.edu
Shie Mannor, Technion, shie@ee.technion.ac.il
Marco Pavone, Stanford University, pavone@stanford.edu

Abstract

In this paper we address the problem of decision making within a Markov decision process (MDP) framework where risk and modeling errors are taken into account. Our approach is to minimize a risk-sensitive conditional-value-at-risk (CVaR) objective, as opposed to a standard risk-neutral expectation. We refer to such a problem as a CVaR MDP. Our first contribution is to show that a CVaR objective, besides capturing risk sensitivity, has an alternative interpretation as expected cost under worst-case modeling errors, for a given error budget. This result, which is of independent interest, motivates CVaR MDPs as a unifying framework for risk-sensitive and robust decision making. Our second contribution is to present an approximate value-iteration algorithm for CVaR MDPs and analyze its convergence rate. To our knowledge, this is the first solution algorithm for CVaR MDPs that enjoys error guarantees. Finally, we present results from numerical experiments that corroborate our theoretical findings and show the practicality of our approach.

1 Introduction

Decision making within the Markov decision process (MDP) framework typically involves the minimization of a risk-neutral performance objective, namely the expected total discounted cost [3]. This approach, while very popular, natural, and attractive from a computational standpoint, takes into account neither the variability of the cost (i.e., fluctuations around the mean) nor its sensitivity to modeling errors, which may significantly affect overall performance [12].
Risk-sensitive MDPs [9] address the first aspect by replacing the risk-neutral expectation with a risk measure of the total discounted cost, such as variance, Value-at-Risk (VaR), or Conditional-VaR (CVaR). Robust MDPs [15], on the other hand, address the second aspect by defining a set of plausible MDP parameters, and optimizing decisions with respect to the expected cost under worst-case parameters. In this work we consider risk-sensitive MDPs with a CVaR objective, referred to as CVaR MDPs. CVaR [1, 20] is a risk measure that is rapidly gaining popularity in various engineering applications, e.g., finance, due to its favorable computational properties [1] and superior ability to safeguard a decision maker from the "outcomes that hurt the most" [22]. In this paper, by relating risk to robustness, we derive a novel result that further motivates the usage of a CVaR objective in a decision-making context. Specifically, we show that the CVaR of a discounted cost in an MDP is equivalent to the expected value of the same discounted cost in the presence of worst-case perturbations of the MDP parameters (specifically, transition probabilities), provided that such perturbations are within a certain error budget. This result suggests CVaR MDPs as a method for decision making under both cost variability and model uncertainty, motivating them as a unified framework for planning under uncertainty.

Literature review: Risk-sensitive MDPs have been studied for over four decades, with earlier efforts focusing on exponential utility [9], mean-variance [24], and percentile risk criteria [7]. Recently, for the reasons explained above, several authors have investigated CVaR MDPs [20]. Specifically, in [4], the authors propose a dynamic programming algorithm for finite-horizon risk-constrained MDPs where risk is measured according to CVaR. The algorithm is proven to asymptotically converge to an optimal risk-constrained policy.
However, the algorithm involves computing integrals over continuous variables (Algorithm 1 in [4]) and, in general, its implementation appears particularly difficult. In [2], the authors investigate the structure of CVaR-optimal policies and show that a Markov policy is optimal on an augmented state space, where the additional (continuous) state variable is represented by the running cost. In [8], the authors leverage this result to design an algorithm for CVaR MDPs that relies on discretizing occupation measures in the augmented-state MDP. This approach, however, involves solving a non-convex program via a sequence of linear-programming approximations, which can only be shown to converge asymptotically. A different approach is taken by [5], [19] and [25], which consider a finite-dimensional parameterization of control policies, and show that a CVaR MDP can be optimized to a local optimum using stochastic gradient descent (policy gradient). A recent result by Pflug and Pichler [17] showed that CVaR MDPs admit a dynamic programming formulation by using a state-augmentation procedure different from the one in [2]. The augmented state is also continuous, making the design of a solution algorithm challenging.

Contributions: The contribution of this paper is twofold. First, as discussed above, we provide a novel interpretation of CVaR MDPs in terms of robustness to modeling errors. This result is of independent interest and further motivates the usage of CVaR MDPs for decision making under uncertainty. Second, we provide a new optimization algorithm for CVaR MDPs, which leverages the state-augmentation procedure introduced by Pflug and Pichler [17]. We overcome the aforementioned computational challenges (due to the continuous augmented state) by designing an algorithm that merges approximate value iteration [3] with linear interpolation. Remarkably, we are able to provide explicit error bounds and convergence rates based on contraction-style arguments.
In contrast to the algorithms in [4, 8, 5, 25], given the explicit MDP model, our approach leads to finite-time error guarantees with respect to the globally optimal policy. In addition, our algorithm is significantly simpler than previous methods, and calculates the optimal policy for all CVaR confidence intervals and initial states simultaneously. The practicality of our approach is demonstrated in numerical experiments involving planning a path on a grid with thousands of states. To the best of our knowledge, this is the first algorithm to approximate globally-optimal policies for non-trivial CVaR MDPs whose error depends on the resolution of interpolation.

Organization: This paper is structured as follows. In Section 2 we provide background on CVaR and MDPs, we state the problem we wish to solve (i.e., CVaR MDPs), and motivate the CVaR MDP formulation by establishing a novel relation between CVaR and model perturbations. Section 3 provides the basis for our solution algorithm, based on a Bellman-style equation for the CVaR. Then, in Section 4 we present our algorithm and correctness analysis. In Section 5 we evaluate our approach via numerical experiments. Finally, in Section 6, we draw some conclusions and discuss directions for future work.

2 Preliminaries, Problem Formulation, and Motivation

2.1 Conditional Value-at-Risk

Let $Z$ be a bounded-mean random variable, i.e., $\mathbb{E}[|Z|] < \infty$, on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with cumulative distribution function $F(z) = \mathbb{P}(Z \le z)$. In this paper we interpret $Z$ as a cost. The value-at-risk (VaR) at confidence level $\alpha \in (0, 1)$ is the $1-\alpha$ quantile of $Z$, i.e., $\mathrm{VaR}_\alpha(Z) = \min\{ z \mid F(z) \ge 1 - \alpha \}$. The conditional value-at-risk (CVaR) at confidence level $\alpha \in (0, 1)$ is defined as [20]:
$$\mathrm{CVaR}_\alpha(Z) = \min_{w \in \mathbb{R}} \Big\{ w + \frac{1}{\alpha}\, \mathbb{E}\big[(Z - w)^+\big] \Big\}, \qquad (1)$$
where $(x)^+ = \max(x, 0)$ represents the positive part of $x$. If there is no probability atom at $\mathrm{VaR}_\alpha(Z)$, it is well known from Theorem 6.2 in [23] that $\mathrm{CVaR}_\alpha(Z) = \mathbb{E}\big[ Z \mid Z \ge \mathrm{VaR}_\alpha(Z) \big]$.
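Definition (1) and the tail-average characterization can be cross-checked numerically by Monte Carlo; a minimal sketch (the cost distribution, confidence level, and sample size are illustrative):

```python
import numpy as np

# Empirical check of the two CVaR characterizations: the minimization
# form (1), CVaR_a(Z) = min_w { w + E[(Z-w)^+]/a }, and the tail
# average E[Z | Z >= VaR_a(Z)]; they agree when there is no
# probability atom at VaR_a(Z).
rng = np.random.default_rng(1)
z = rng.standard_normal(200_000)   # Z ~ N(0, 1), interpreted as a cost
alpha = 0.1

# minimization form: the optimum is attained at w = VaR_a(Z), so a
# scan over sample quantiles around 1 - alpha suffices empirically
ws = np.quantile(z, np.linspace(0.85, 0.95, 401))
cvar_min = min(w + np.maximum(z - w, 0).mean() / alpha for w in ws)

# tail-average form
var_a = np.quantile(z, 1 - alpha)
cvar_tail = z[z >= var_a].mean()

print(cvar_min, cvar_tail)   # both near 1.75 for N(0,1), alpha = 0.1
```

The two estimates coincide up to sampling noise, which is exactly the atomless case of Theorem 6.2 in [23].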
Therefore, $\mathrm{CVaR}_\alpha(Z)$ may be interpreted as the expected value of $Z$, conditioned on the $\alpha$-portion of the tail distribution. It is well known that $\mathrm{CVaR}_\alpha(Z)$ is decreasing in $\alpha$, $\mathrm{CVaR}_1(Z)$ equals $\mathbb{E}(Z)$, and $\mathrm{CVaR}_\alpha(Z)$ tends to $\max(Z)$ as $\alpha \downarrow 0$. During the last decade, the CVaR risk measure has gained popularity in financial applications, among others. It is especially useful for controlling rare, but potentially disastrous, events, which occur above the $1-\alpha$ quantile and are neglected by the VaR [22]. Furthermore, CVaR enjoys desirable axiomatic properties, such as coherence [1]. We refer to [26] for further motivation about CVaR and a comparison with other risk measures such as VaR. A useful property of CVaR, which we exploit in this paper, is its alternative dual representation [1]:
$$\mathrm{CVaR}_\alpha(Z) = \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(\alpha, \mathbb{P})} \mathbb{E}_\xi[Z], \qquad (2)$$
where $\mathbb{E}_\xi[Z]$ denotes the $\xi$-weighted expectation of $Z$, and the risk envelope $\mathcal{U}_{\mathrm{CVaR}}$ is given by $\mathcal{U}_{\mathrm{CVaR}}(\alpha, \mathbb{P}) = \big\{ \xi : \xi(\omega) \in \big[0, \frac{1}{\alpha}\big],\ \int_{\omega \in \Omega} \xi(\omega)\, \mathbb{P}(\omega)\, d\omega = 1 \big\}$. Thus, the CVaR of a random variable $Z$ may be interpreted as the worst-case expectation of $Z$ under a perturbed distribution $\xi \mathbb{P}$. In this paper, we are interested in the CVaR of the total discounted cost in a sequential decision-making setting, as discussed next.

2.2 Markov Decision Processes

An MDP is a tuple $\mathcal{M} = (X, A, C, P, x_0, \gamma)$, where $X$ and $A$ are finite state and action spaces; $C(x, a) \in [-C_{\max}, C_{\max}]$ is a bounded deterministic cost; $P(\cdot|x, a)$ is the transition probability distribution; $\gamma \in [0, 1)$ is the discounting factor; and $x_0$ is the initial state. (Our results easily generalize to random initial states and random costs.) Let the space of admissible histories up to time $t$ be $H_t = H_{t-1} \times A \times X$, for $t \ge 1$, and $H_0 = X$. A generic element $h_t \in H_t$ is of the form $h_t = (x_0, a_0, \ldots, x_{t-1}, a_{t-1}, x_t)$. Let $\Pi_{H,t}$ be the set of all history-dependent policies with the property that at each time $t$ the randomized control action is a function of $h_t$. In other words, $\Pi_{H,t} := \big\{ \{\mu_0 : H_0 \to \mathcal{P}(A),\ \mu_1 : H_1 \to \mathcal{P}(A),\ \ldots,\ \mu_t : H_t \to \mathcal{P}(A)\} \mid \mu_j(h_j) \in \mathcal{P}(A) \text{ for all } h_j \in H_j,\ 1 \le j \le t \big\}$. We also let $\Pi_H = \lim_{t \to \infty} \Pi_{H,t}$ be the set of all history-dependent policies.

2.3 Problem Formulation

Let $C(x_t, a_t)$ denote the stage-wise costs observed along a state/control trajectory in the MDP model, and let $C_{0,T} = \sum_{t=0}^{T} \gamma^t C(x_t, a_t)$ denote the total discounted cost up to time $T$. The risk-sensitive discounted-cost problem we wish to address is as follows:
$$\min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha \Big( \lim_{T \to \infty} C_{0,T} \,\Big|\, x_0, \mu \Big), \qquad (3)$$
where $\mu = \{\mu_0, \mu_1, \ldots\}$ is the policy sequence with actions $a_t = \mu_t(h_t)$ for $t \in \{0, 1, \ldots\}$. We refer to problem (3) as a CVaR MDP. (One may also consider a related formulation combining mean and CVaR, the details of which are presented in the supplementary material.) The problem formulation in (3) directly addresses the aspect of risk sensitivity, as demonstrated by the numerous applications of CVaR optimization in finance (see, e.g., [21, 11, 6]) and the recent approaches for CVaR optimization in MDPs [4, 8, 5, 25]. In the following, we show a new result providing additional motivation for CVaR MDPs, from the point of view of robustness to modeling errors.

2.4 Motivation: Robustness to Modeling Errors

We show a new result relating the CVaR objective in (3) to the expected discounted cost in the presence of worst-case perturbations of the MDP parameters, where the perturbations are budgeted according to the "number of things that can go wrong". Thus, by minimizing CVaR, the decision maker also guarantees robustness of the policy. Consider a trajectory $(x_0, a_0, \ldots, x_T)$ in a finite-horizon MDP problem with transitions $P_t(x_t | x_{t-1}, a_{t-1})$. We explicitly denote the time index of the transition matrices for reasons that will become clear shortly. The total probability of the trajectory is $P(x_0, a_0, \ldots, x_T) = P_0(x_0)\, P_1(x_1|x_0, a_0) \cdots P_T(x_T | x_{T-1}, a_{T-1})$, and we let $C_{0,T}(x_0, a_0, \ldots, x_T)$ denote its discounted cost, as defined above.
We consider an adversarial setting, where an adversary is allowed to change the transition probabilities at each stage, under some budget constraints. We will show that, for a specific budget and perturbation structure, the expected cost under the worst-case perturbation is equivalent to the CVaR of the cost. Thus, we shall establish that, in this perspective, being risk-sensitive is equivalent to being robust against model perturbations. For each stage $1 \le t \le T$, consider a perturbed transition matrix $\hat{P}_t = P_t \circ \delta_t$, where $\delta_t \in \mathbb{R}^{X \times A \times X}$ is a multiplicative probability perturbation and $\circ$ is the Hadamard product, under the condition that $\hat{P}_t$ is a stochastic matrix. Let $\Delta_t$ denote the set of perturbation matrices that satisfy this condition, and let $\Delta = \Delta_1 \times \cdots \times \Delta_T$ be the set of all possible perturbations to the trajectory distribution. We now impose a budget constraint on the perturbations as follows. For some budget $\eta \ge 1$, we consider the constraint
$$\delta_1(x_1|x_0, a_0)\, \delta_2(x_2|x_1, a_1) \cdots \delta_T(x_T | x_{T-1}, a_{T-1}) \le \eta, \quad \forall x_0, \ldots, x_T \in X,\ \forall a_0, \ldots, a_{T-1} \in A. \qquad (4)$$
Essentially, the product in Eq. (4) states that with a small budget the worst cannot happen at each time. Instead, the perturbation budget has to be split (multiplicatively) along the trajectory. We note that Eq. (4) is in fact a constraint on the perturbation matrices, and we denote by $\Delta_\eta \subset \Delta$ the set of perturbations that satisfy this constraint with budget $\eta$. The following result shows an equivalence between the CVaR and the worst-case expected loss.

Proposition 1 (Interpretation of CVaR as a Robustness Measure) It holds that
$$\mathrm{CVaR}_{\frac{1}{\eta}}\big( C_{0,T}(x_0, a_0, \ldots, x_T) \big) = \sup_{(\delta_1, \ldots, \delta_T) \in \Delta_\eta} \mathbb{E}_{\hat{P}}\big[ C_{0,T}(x_0, a_0, \ldots, x_T) \big], \qquad (5)$$
where $\mathbb{E}_{\hat{P}}[\cdot]$ denotes expectation with respect to a Markov chain with transitions $\hat{P}_t$. The proof of Proposition 1 is in the supplementary material. It is instructive to compare Proposition 1 with the dual representation of CVaR in (2), where both results convert the CVaR risk into a robustness measure.
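The dual representation (2) can be verified directly on a small discrete cost: the maximizing $\xi$ places the largest allowed weight $1/\alpha$ on the worst outcomes, splitting the boundary atom fractionally. A sketch (the cost values, probabilities, and $\alpha$ are illustrative):

```python
import numpy as np

# Discrete check of the dual representation (2):
# CVaR_a(Z) = max over xi with xi(w) in [0, 1/a] and E[xi] = 1
# of the xi-weighted expectation of Z.
z = np.array([0.0, 1.0, 2.0, 10.0])   # cost values (illustrative)
p = np.array([0.4, 0.3, 0.2, 0.1])    # their probabilities
alpha = 0.25

# dual: greedily assign xi = 1/alpha to the worst outcomes
xi = np.zeros_like(p)
budget = alpha                         # total probability to upweight
for i in np.argsort(-z):
    take = min(p[i], budget)
    xi[i] = (take / p[i]) / alpha      # fractional split on the boundary atom
    budget -= take
    if budget == 0:
        break
cvar_dual = (xi * p * z).sum()

# primal (1): min_w { w + E[(Z-w)^+]/alpha }; the objective is convex
# and piecewise linear in w, so the minimum is attained at an atom of Z
cvar_primal = min(w + (p * np.maximum(z - w, 0)).sum() / alpha for w in z)

print(cvar_dual, cvar_primal)   # both equal 5.2 for these numbers
```

Here the worst 25% of the probability mass consists of the outcome 10 (mass 0.1) plus three-fifths of the outcome-2 atom, and both characterizations return the same value.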
Note, in particular, that the perturbation budget in Proposition 1 has a temporal structure, which constrains the adversary from choosing the worst perturbation at each time step.

Remark 1 An equivalence between robustness and risk sensitivity was previously suggested by Osogami [16]. In that study, the iterated (dynamic) coherent risk was shown to be equivalent to a robust MDP [10] with a rectangular uncertainty set. The iterated risk (and, correspondingly, the rectangular uncertainty set) is very conservative [27], in the sense that the worst can happen at each time step. In contrast, the perturbations considered here are much less conservative. In general, solving robust MDPs without the rectangularity assumption is NP-hard. Nevertheless, Mannor et al. [13] showed that, for cases where the number of perturbations to the parameters along a trajectory is upper bounded (budget-constrained perturbation), the corresponding robust MDP problem is tractable. Analogous to the constraint set (1) in [13], the perturbation set in Proposition 1 limits the total number of log-perturbations along a trajectory. Accordingly, we shall later see that optimizing problem (3) with perturbation structure (4) is indeed also tractable. The next section provides the fundamental theoretical ideas behind our approach to the solution of (3).

3 Bellman Equation for CVaR

In this section, by leveraging a recent result from [17], we present a dynamic programming (DP) formulation for the CVaR MDP problem in (3). As we shall see, the value function in this formulation depends on both the state and the CVaR confidence level $\alpha$. We then establish important properties of this DP formulation, which will later enable us to derive an efficient DP-based approximate solution algorithm and provide correctness guarantees on the approximation error. All proofs are presented in the supplementary material. Our starting point is a recursive decomposition of CVaR, whose proof is detailed in Theorem 10 of [17].
Theorem 2 (CVaR Decomposition, Theorem 21 in [17]) For any $t \ge 0$, denote by $Z = (Z_{t+1}, Z_{t+2}, \ldots)$ the cost sequence from time $t+1$ onwards. The conditional CVaR under policy $\mu$, i.e., $\mathrm{CVaR}_\alpha(Z \mid h_t, \mu)$, obeys the following decomposition:
$$\mathrm{CVaR}_\alpha(Z \mid h_t, \mu) = \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(\alpha,\, P(\cdot|x_t, a_t))} \mathbb{E}\big[ \xi(x_{t+1}) \cdot \mathrm{CVaR}_{\alpha \xi(x_{t+1})}(Z \mid h_{t+1}, \mu) \,\big|\, h_t, \mu \big],$$
where $a_t$ is the action induced by policy $\mu_t(h_t)$, and the expectation is with respect to $x_{t+1}$.

Theorem 2 concerns a fixed policy $\mu$; we now extend it to a general DP formulation. Note that in the recursive decomposition in Theorem 2 the right-hand side involves CVaR terms with different confidence levels than that on the left-hand side. Accordingly, we augment the state space $X$ with an additional continuous state $Y = (0, 1]$, which corresponds to the confidence level. For any $x \in X$ and $y \in Y$, the value function $V(x, y)$ for the augmented state $(x, y)$ is defined as:
$$V(x, y) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_y \Big( \lim_{T \to \infty} C_{0,T} \,\Big|\, x_0 = x, \mu \Big).$$
Similar to standard DP, it is convenient to work with operators defined on the space of value functions [3]. In our case, Theorem 2 leads to the following definition of the CVaR Bellman operator $\mathbf{T} : X \times Y \to X \times Y$:
$$\mathbf{T}[V](x, y) = \min_{a \in A} \bigg[ C(x, a) + \gamma \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(y,\, P(\cdot|x, a))} \sum_{x' \in X} \xi(x')\, V\big(x', y\xi(x')\big)\, P(x'|x, a) \bigg]. \qquad (6)$$
We now establish several useful properties of the Bellman operator $\mathbf{T}[V]$.

Lemma 3 (Properties of the CVaR Bellman Operator) The Bellman operator $\mathbf{T}[V]$ has the following properties:
1. (Contraction.) $\|\mathbf{T}[V_1] - \mathbf{T}[V_2]\|_\infty \le \gamma \|V_1 - V_2\|_\infty$, where $\|f\|_\infty = \sup_{x \in X,\, y \in Y} |f(x, y)|$.
2. (Concavity preservation in $y$.) For any $x \in X$, suppose $y V(x, y)$ is concave in $y \in Y$. Then the maximization problem in (6) is concave. Furthermore, $y\, \mathbf{T}[V](x, y)$ is concave in $y$.

The first property in Lemma 3 is similar to standard DP [3], and is instrumental to the design of a converging value-iteration approach. The second property is nonstandard and specific to our approach.
It will be used to show that the computation of value-iteration updates involves concave, and therefore tractable, optimization problems. Furthermore, it will be used to show that a linear interpolation of $V(x, y)$ in the augmented state $y$ has a bounded error. Equipped with the results in Theorem 2 and Lemma 3, we can now show that the fixed-point solution of $\mathbf{T}[V](x, y) = V(x, y)$ is unique, and equals the solution of the CVaR MDP problem (3) with $x_0 = x$ and $\alpha = y$.

Theorem 4 (Optimality Condition) For any $x \in X$ and $y \in (0, 1]$, the solution to $\mathbf{T}[V](x, y) = V(x, y)$ is unique, and equals $V^*(x, y) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_y\big( \lim_{T \to \infty} C_{0,T} \mid x_0 = x, \mu \big)$.

Next, we show that the optimal value of the CVaR MDP problem (3) can be attained by a stationary Markov policy, defined as a greedy policy with respect to the value function $V^*(x, y)$. Thus, while the original problem is defined over the intractable space of history-dependent policies, a stationary Markov policy (over the augmented state space) is optimal, and can be readily derived from $V^*(x, y)$. Furthermore, an optimal history-dependent policy can be readily obtained from an (augmented) optimal Markov policy according to the following theorem.

Theorem 5 (Optimal Policies) Let $\pi^*_H = \{\mu_0, \mu_1, \ldots\} \in \Pi_H$ be a history-dependent policy recursively defined as:
$$\mu_k(h_k) = u^*(x_k, y_k), \quad \forall k \ge 0, \qquad (7)$$
with initial conditions $x_0$ and $y_0 = \alpha$, and state transitions
$$x_k \sim P\big(\cdot \mid x_{k-1}, u^*(x_{k-1}, y_{k-1})\big), \qquad y_k = y_{k-1}\, \xi^*_{x_{k-1}, y_{k-1}, u^*}(x_k), \quad \forall k \ge 1, \qquad (8)$$
where the stationary Markovian policy $u^*(x, y)$ and risk factor $\xi^*_{x, y, u^*}(\cdot)$ are the solution to the min-max optimization problem in the CVaR Bellman operator $\mathbf{T}[V^*](x, y)$. Then, $\pi^*_H$ is an optimal policy for problem (3) with initial state $x_0$ and CVaR confidence level $\alpha$.

Theorems 4 and 5 suggest that a value-iteration DP method [3] can be used to solve the CVaR MDP problem (3). Let an initial value-function guess $V_0 : X \times Y \to \mathbb{R}$ be chosen arbitrarily.
Value iteration proceeds recursively as follows:
$$V_{k+1}(x, y) = \mathbf{T}[V_k](x, y), \quad \forall (x, y) \in X \times Y,\ k \in \{0, 1, \ldots\}. \qquad (9)$$
Specifically, by combining the contraction property in Lemma 3 and the uniqueness of fixed-point solutions from Theorem 4, one concludes that $\lim_{k \to \infty} V_k(x, y) = V^*(x, y)$. By selecting $x = x_0$ and $y = \alpha$, one immediately obtains $V^*(x_0, \alpha) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\big( \lim_{T \to \infty} C_{0,T} \mid x_0, \mu \big)$. Furthermore, an optimal policy may be derived from $V^*(x, y)$ according to the policy-construction procedure in Theorem 5. Unfortunately, while value iteration is conceptually appealing, its direct implementation in our setting is generally impractical since, e.g., the state $y$ is continuous. In the following, we pursue an approximation to the value-iteration algorithm (9), based on a linear interpolation scheme for $y$.

Algorithm 1 CVaR Value Iteration with Linear Interpolation
1: Given:
   • $N(x)$ interpolation points $Y(x) = \{y_1, \ldots, y_{N(x)}\} \in [0, 1]^{N(x)}$ for every $x \in X$, with $y_i < y_{i+1}$, $y_1 = 0$, and $y_{N(x)} = 1$.
   • An initial value function $V_0(x, y)$ that satisfies Assumption 1.
2: For $t = 1, 2, \ldots$
   • For each $x \in X$ and each $y_i \in Y(x)$, update the value-function estimate as follows: $V_t(x, y_i) = \mathbf{T}_I[V_{t-1}](x, y_i)$.
3: Set the converged value-iteration estimate as $\widehat{V}^*(x, y_i)$, for any $x \in X$ and $y_i \in Y(x)$.

4 Value Iteration with Linear Interpolation

In this section we present an approximate DP algorithm for solving CVaR MDPs, based on the theoretical results of Section 3. The value-iteration algorithm in Eq. (9) presents two main implementation challenges. The first is due to the fact that the augmented state $y$ is continuous. We handle this challenge by using interpolation, and exploit the concavity of $y V(x, y)$ to bound the error introduced by this procedure. The second challenge stems from the fact that applying $\mathbf{T}$ involves maximizing over $\xi$. Our strategy is to exploit the concavity of the maximization problem to guarantee that such optimization can indeed be performed effectively.
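The interpolation step that Algorithm 1 relies on interpolates $y V(x, y)$ rather than $V(x, y)$, as detailed next; for a single state $x$ it can be sketched as follows (the grid and the value slice are illustrative):

```python
import numpy as np

def interp_yV(y_grid, V_vals, y):
    """I_x[V](y): linear interpolation of the function y -> y * V(x, y)
    on the grid points y_grid, evaluated at y, for one fixed state x.
    y_grid must be increasing with y_grid[0] = 0 and y_grid[-1] = 1."""
    g = np.asarray(y_grid, float)
    w = g * np.asarray(V_vals, float)          # interpolate y*V, not V
    i = np.searchsorted(g, y, side='right') - 1
    i = min(max(i, 0), len(g) - 2)             # clamp to the last segment
    return w[i] + (w[i + 1] - w[i]) / (g[i + 1] - g[i]) * (y - g[i])

# illustrative grid and value slice V(x, .) for one state x
y_grid = [0.0, 0.25, 0.5, 1.0]
V_vals = [5.0, 4.0, 3.0, 2.0]                  # y*V = 0, 1, 1.5, 2: concave
print(interp_yV(y_grid, V_vals, 0.4))          # lands between 1.0 and 1.5
```

Interpolating $y V$ keeps the piecewise-linear structure that $y\,\mathrm{CVaR}_y(Z)$ has for discrete $Z$, which is the motivation given below for this choice.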
As discussed, our approach relies on the fact that the Bellman operator T preserves concavity, as established in Lemma 3. Accordingly, we require the following assumption for the initial guess V_0(x, y).

Assumption 1 The guess for the initial value function V_0(x, y) satisfies the following properties: 1) $yV_0(x, y)$ is concave in y ∈ Y and 2) V_0(x, y) is continuous in y ∈ Y for any x ∈ X.

Assumption 1 may easily be satisfied, for example, by choosing $V_0(x, y) = \mathrm{CVaR}_y(Z \mid x_0 = x)$, where Z is any arbitrary bounded random variable. As stated earlier, a key difficulty in applying value iteration (9) is that, for each state x ∈ X, the Bellman operator has to be calculated for each y ∈ Y, and Y is continuous. As an approximation, we propose to calculate the Bellman operator only for a finite set of values y, and to interpolate the value function between such interpolation points.

Formally, let N(x) denote the number of interpolation points. For every x ∈ X, denote by $Y(x) = \{y_1, \ldots, y_{N(x)}\} \in [0, 1]^{N(x)}$ the set of interpolation points. We denote by $I_x[V](y)$ the linear interpolation of the function $yV(x, y)$ on these points, i.e.,
$$I_x[V](y) = y_i V(x, y_i) + \frac{y_{i+1} V(x, y_{i+1}) - y_i V(x, y_i)}{y_{i+1} - y_i}\,(y - y_i),$$
where $y_i = \max\{y' \in Y(x) : y' \le y\}$ and $y_{i+1}$ is the closest interpolation point such that $y \in [y_i, y_{i+1}]$, i.e., $y_{i+1} = \min\{y' \in Y(x) : y' \ge y\}$.

The interpolation of yV(x, y) instead of V(x, y) is key to our approach. The motivation is twofold: first, it can be shown [20] that for a discrete random variable Z, $y\,\mathrm{CVaR}_y(Z)$ is piecewise linear in y. Second, one can show that the Lipschitz continuity of $yV(x, y)$ is preserved during value iteration, and this fact can be exploited to bound the linear interpolation error. We now define the interpolated Bellman operator $T_I$ as follows:
$$T_I[V](x, y) = \min_{a \in A} \left[ C(x, a) + \gamma \max_{\xi \in U_{\mathrm{CVaR}}(y, P(\cdot \mid x, a))} \sum_{x' \in X} \frac{I_{x'}[V](y\xi(x'))}{y}\, P(x' \mid x, a) \right]. \qquad (10)$$

Remark 2 Notice that by L'Hôpital's rule one has $\lim_{y \to 0} I_x[V](y\xi(x))/y = V(x, 0)\,\xi(x)$.
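A minimal sketch of this linear interpolation of yV(x, y) (our own illustration; a plain Python list stands in for the grid Y(x)):

```python
import bisect

def interp_yV(ys, Vs, y):
    """Linearly interpolate g(y) = y * V(x, y) on the grid ys.

    ys : sorted interpolation points y_1 < ... < y_N with y_1 = 0, y_N = 1
    Vs : values V(x, y_i) at those points
    Returns g(y_i) + (g(y_{i+1}) - g(y_i)) / (y_{i+1} - y_i) * (y - y_i),
    i.e. I_x[V](y) with y_i the largest grid point not exceeding y.
    """
    g = [yi * vi for yi, vi in zip(ys, Vs)]
    i = bisect.bisect_right(ys, y) - 1      # index of largest y_i <= y
    if i >= len(ys) - 1:                    # y is the last grid point
        return g[-1]
    slope = (g[i + 1] - g[i]) / (ys[i + 1] - ys[i])
    return g[i] + slope * (y - ys[i])
```

Dividing the result by y recovers an approximation of V(x, y) itself, which is exactly the $I_{x'}[V](y\xi(x'))/y$ term inside the interpolated operator (10).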
This implies that at y = 0 the interpolated Bellman operator is equivalent to the original Bellman operator, i.e.,
$$T[V](x, 0) = \min_{a \in A}\left[ C(x, a) + \gamma \max_{x' \in X : P(x'|x,a) > 0} V(x', 0)\right] = T_I[V](x, 0).$$

Algorithm 1 presents CVaR value iteration with linear interpolation. The only difference between this algorithm and standard value iteration (9) is the linear interpolation procedure described above. In the following, we show that Algorithm 1 converges, and bound the error due to interpolation. We begin by showing that the useful properties established in Lemma 3 for the Bellman operator T extend to the interpolated Bellman operator $T_I$.

Lemma 6 (Properties of Interpolated Bellman Operator) $T_I[V]$ has the same properties as T[V] in Lemma 3, namely 1) contraction and 2) concavity preservation.

Lemma 6 implies several important consequences for Algorithm 1. The first one is that the maximization problem in (10) is concave, and thus may be solved efficiently at each step. This guarantees that the algorithm is tractable. Second, the contraction property in Lemma 6 guarantees that Algorithm 1 converges, i.e., there exists a value function $\hat{V}^* \in \mathbb{R}^{|X| \times |Y|}$ such that $\lim_{n \to \infty} T_I^n[V_0](x, y_i) = \hat{V}^*(x, y_i)$. In addition, the convergence rate is geometric and equals γ. The following theorem provides an error bound between approximate value iteration and exact value iteration (3) in terms of the interpolation resolution.

Theorem 7 (Convergence and Error Bound) Suppose the initial value function V_0(x, y) satisfies Assumption 1 and let ε > 0 be an error tolerance parameter. For any state x ∈ X and step t ≥ 0, choose $y_2 > 0$ such that $V_t(x, y_2) - V_t(x, 0) \ge -\epsilon$ and update the interpolation points according to the logarithmic rule $y_{i+1} = \theta y_i, \forall i \ge 2$, with uniform constant θ ≥ 1.
Then, Algorithm 1 has the following error bound:
$$0 \ge \hat{V}^*(x_0, \alpha) - \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\!\left(\lim_{T \to \infty} C_{0,T} \mid x_0, \mu\right) \ge -\frac{\gamma}{1 - \gamma}\, O\big((\theta - 1) + \epsilon\big),$$
and the following finite-time convergence error bound:
$$T_I^n[V_0](x_0, \alpha) - \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\!\left(\lim_{T \to \infty} C_{0,T} \mid x_0, \mu\right) \le O\big((\theta - 1) + \epsilon\big) + \frac{O(\gamma^n)}{1 - \gamma}.$$

Theorem 7 shows that 1) the interpolation-based value function is a conservative estimate of the optimal solution to problem (3); 2) the interpolation procedure is consistent, i.e., when the number of interpolation points is arbitrarily large (specifically, ε → 0 and $y_{i+1}/y_i \to 1$), the approximation error tends to zero; and 3) the approximation error bound is O((θ − 1) + ε), where log θ is the log-difference of the interpolation points, i.e., $\log\theta = \log y_{i+1} - \log y_i, \forall i$.

For a pre-specified ε, the condition $V_t(x, y_2) - V_t(x, 0) \ge -\epsilon$ may be satisfied by a simple adaptive procedure for selecting the interpolation points Y(x). At each iteration t > 0, after calculating $V_t(x, y_i)$ in Algorithm 1, at each state x in which the condition does not hold, add a new interpolation point $y_2' = \frac{\epsilon\, y_2}{|V_t(x, y_2) - V_t(x, 0)|}$, and additional points between $y_2'$ and $y_2$ such that the condition $\log\theta \ge \log y_{i+1} - \log y_i$ is maintained. Since all the additional points belong to the segment $[0, y_2]$, the linearly interpolated $V_t(x, y_i)$ remains unchanged, and Algorithm 1 proceeds as is. For bounded costs and ε > 0, the number of additional points required is bounded.

The full proof of Theorem 7 is detailed in the supplementary material; here we highlight the main ideas and challenges involved. In the first part of the proof we bound, for all t > 0, the Lipschitz constant of $yV_t(x, y)$ in y. The key to this result is to show that the Bellman operator T preserves the Lipschitz property of $yV_t(x, y)$. Using the Lipschitz bound and the concavity of $yV_t(x, y)$, we then bound the error $\frac{I_x[V_t](y)}{y} - V_t(x, y)$ for all y. The condition on $y_2$ is required for this bound to hold as y → 0.
Finally, we use this result to bound $\|T_I[V_t](x, y) - T[V_t](x, y)\|_\infty$. The results of Theorem 7 follow from contraction arguments, similar to approximate dynamic programming [3].

5 Experiments

We validate Algorithm 1 on a rectangular grid world, where states represent grid points on a 2D terrain map. An agent (e.g., a robotic vehicle) starts in a safe region and its objective is to travel to a given destination. At each time step the agent can move to any of its four neighboring states. Due to sensing and control noise, however, with probability δ a move to a random neighboring state occurs. The stage-wise cost of each move until reaching the destination is 1, to account for fuel usage. Between the starting point and the destination there are a number of obstacles that the agent should avoid. Hitting an obstacle costs M ≫ 1 and terminates the mission. The objective is to compute a safe (i.e., obstacle-free) path that is fuel efficient.

For our experiments, we choose a 64 × 53 grid-world (see Figure 1), for a total of 3,312 states. The destination is at position (60, 2), and there are 80 obstacles plotted in yellow. By leveraging Theorem 7, we use 21 log-spaced interpolation points for Algorithm 1 in order to achieve a small value function error. We choose δ = 0.05 and a discount factor γ = 0.95, for an effective horizon of 200 steps. Furthermore, we set the penalty cost equal to M = 2/(1 − γ); this choice trades off a high penalty for collisions against computational complexity (which increases as M increases).

Figure 1: Grid-world simulation. Left three plots show the value functions and corresponding paths for different CVaR confidence levels. The rightmost plot shows a cost histogram (for 400 Monte Carlo trials) for a risk-neutral policy and a CVaR policy with confidence level α = 0.11.

For the
interpolation parameters discussed in Theorem 7, we set ε = 0.1 and θ = 2.067 (in order to have 21 logarithmically distributed grid points for the CVaR confidence parameter in [0, 1]). In Figure 1 we plot the value function V(x, y) for three different values of the CVaR confidence parameter α, and the corresponding paths starting from the initial position (60, 50). The first three plots in Figure 1 show that, as the confidence parameter α decreases, the average travel distance (and hence fuel consumption) slightly increases while the collision probability decreases, as expected.

We next discuss robustness to modeling errors. We conducted simulations in which, with probability 0.5, each obstacle position is perturbed in a random direction to one of the neighboring grid cells. This emulates, for example, measurement errors in the terrain map. We then trained both the risk-averse (α = 0.11) and risk-neutral (α = 1) policies on the nominal (i.e., unperturbed) terrain map, and evaluated them on 400 perturbed scenarios (20 perturbed maps with 20 Monte Carlo evaluations each). While the risk-neutral policy finds a shorter route (with average cost equal to 18.137 on successful runs), it is vulnerable to perturbations and fails more often (with over 120 failed runs). In contrast, the risk-averse policy chooses slightly longer routes (with average cost equal to 18.878 on successful runs), but is much more robust to model perturbations (with only 5 failed runs).

For the computation of Algorithm 1 we represented the concave piecewise-linear maximization problem in (10) as a linear program, and concatenated several problems to reduce the repeated overhead stemming from the initialization of the CPLEX linear programming solver. This resulted in a computation time on the order of two hours. We believe there is ample room for improvement, for example by leveraging parallelization and sampling-based methods.
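The adaptive interpolation-point procedure described after Theorem 7 can be sketched as follows (our own illustration; `V_at` is a hypothetical callable evaluating the current V_t(x, ·) for a fixed state x):

```python
def refine_points(ys, V_at, eps, theta):
    """Add interpolation points so that V_t(x, y2) - V_t(x, 0) >= -eps holds.

    ys    : current points [0 = y1, y2, ..., 1], sorted ascending
    V_at  : hypothetical callable evaluating the current V_t(x, .)
    If the condition fails, insert y2' = eps * y2 / |V_t(x, y2) - V_t(x, 0)|
    and fill [y2', y2] keeping consecutive ratios at most theta.
    """
    y2 = ys[1]                      # smallest positive point (y1 = 0)
    gap = V_at(y2) - V_at(0.0)
    if gap >= -eps:
        return ys                   # condition already holds
    extra, y = [], eps * y2 / abs(gap)
    while y < y2:
        extra.append(y)
        y *= theta                  # maintain log-spacing log(theta)
    return [ys[0]] + extra + ys[1:]
```

Because every inserted point lies in [0, y2], the already-computed interpolated values are unaffected and the iteration proceeds unchanged.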
Overall, we believe our proposed approach is currently the most practical method available for solving CVaR MDPs (as a comparison, the recently proposed method in [8] involves infinite-dimensional optimization). The Matlab code used for the experiments is provided in the supplementary material.

6 Conclusion

In this paper we presented an algorithm for CVaR MDPs, based on approximate value iteration on an augmented state space. We established convergence of our algorithm, and derived finite-time error bounds. These bounds are useful for stopping the algorithm at a desired error threshold. In addition, we uncovered an interesting relationship between the CVaR of the total cost and the worst-case expected cost under adversarial model perturbations. In this formulation, the perturbations are correlated in time, and lead to a robustness framework significantly less conservative than the popular robust-MDP framework, where the uncertainty is temporally independent. Collectively, our work suggests CVaR MDPs as a unifying and practical framework for computing control policies that are robust with respect to both stochasticity and model perturbations. Future work should address extensions to large state spaces. We conjecture that a sampling-based approximate DP approach [3] should be feasible since, as proven in this paper, the CVaR Bellman equation is contracting (as required by approximate DP methods).

Acknowledgement

The authors would like to thank Mohammad Ghavamzadeh for helpful comments on the technical details, and Daniel Vainsencher for practical optimization advice. Y.-L. Chow and M. Pavone are partially supported by the Croucher Foundation doctoral scholarship and the Office of Naval Research, Science of Autonomy Program, under Contract N00014-15-1-2673. Funding for Shie Mannor and Aviv Tamar was partially provided by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL).

References

[1] P. Artzner, F. Delbaen, J.
Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999. [2] N. Bäuerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011. [3] D. Bertsekas. Dynamic programming and optimal control, Vol. II. Athena Scientific, 4th edition, 2012. [4] V. Borkar and R. Jain. Risk-constrained Markov decision processes. IEEE Transactions on Automatic Control, 59(9):2574–2579, 2014. [5] Y. Chow and M. Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In Advances in Neural Information Processing Systems 27, pages 3509–3517, 2014. [6] K. Dowd. Measuring market risk. John Wiley & Sons, 2007. [7] J. Filar, D. Krass, and K. Ross. Percentile performance criteria for limiting average Markov decision processes. Automatic Control, IEEE Transactions on, 40(1):2–10, 1995. [8] W. Haskell and R. Jain. A convex analytic approach to risk-aware Markov decision processes. SIAM Journal of Control and Optimization, 2014. [9] R. A. Howard and J. E. Matheson. Risk-sensitive Markov decision processes. Management Science, 18(7):356–369, 1972. [10] G. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005. [11] G. Iyengar and A. Ma. Fast gradient descent method for mean-CVaR optimization. Annals of Operations Research, 205(1):203–212, 2013. [12] S. Mannor, D. Simester, P. Sun, and J. Tsitsiklis. Bias and variance approximation in value function estimates. Management Science, 53(2):308–322, 2007. [13] S. Mannor, O. Mebel, and H. Xu. Lightning does not strike twice: Robust MDPs with coupled uncertainty. In International Conference on Machine Learning, pages 385–392, 2012. [14] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583–601, 2002. [15] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005. [16] T. Osogami.
Robustness and risk-sensitivity in Markov decision processes. In Advances in Neural Information Processing Systems, pages 233–241, 2012. [17] G. Pflug and A. Pichler. Time consistent decisions and temporal decomposition of coherent risk functionals. Optimization Online, 2015. [18] M. Phillips. Interpolation and approximation by polynomials, volume 14. Springer Science & Business Media, 2003. [19] L. Prashanth. Policy gradients for CVaR-constrained MDPs. In Algorithmic Learning Theory, pages 155–169. Springer, 2014. [20] R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000. [21] R. Rockafellar, S. Uryasev, and M. Zabarankin. Master funds in portfolio analysis with general deviation measures. Journal of Banking & Finance, 30(2):743–778, 2006. [22] G. Serraino and S. Uryasev. Conditional value-at-risk (CVaR). In Encyclopedia of Operations Research and Management Science, pages 258–266. Springer, 2013. [23] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on stochastic programming. SIAM, 2009. [24] M. Sobel. The variance of discounted Markov decision processes. Journal of Applied Probability, pages 794–802, 1982. [25] A. Tamar, Y. Glassner, and S. Mannor. Optimizing the CVaR via sampling. In AAAI, 2015. [26] S. Uryasev, S. Sarykalin, G. Serraino, and K. Kalinchenko. VaR vs CVaR in risk management and optimization. In CARISMA conference, 2010. [27] H. Xu and S. Mannor. The robustness-performance tradeoff in Markov decision processes. In Advances in Neural Information Processing Systems, pages 1537–1544, 2006.
Fast Lifted MAP Inference via Partitioning Somdeb Sarkhel The University of Texas at Dallas Parag Singla I.I.T. Delhi Vibhav Gogate The University of Texas at Dallas Abstract Recently, there has been growing interest in lifting MAP inference algorithms for Markov logic networks (MLNs). A key advantage of these lifted algorithms is that they have much smaller computational complexity than propositional algorithms when symmetries are present in the MLN and these symmetries can be detected using lifted inference rules. Unfortunately, lifted inference rules are sound but not complete and can often miss many symmetries. This is problematic because when symmetries cannot be exploited, lifted inference algorithms ground the MLN, and search for solutions in the much larger propositional space. In this paper, we present a novel approach, which cleverly introduces new symmetries at the time of grounding. Our main idea is to partition the ground atoms and force the inference algorithm to treat all atoms in each part as indistinguishable. We show that by systematically and carefully refining (and growing) the partitions, we can build advanced any-time and any-space MAP inference algorithms. Our experiments on several real-world datasets clearly show that our new algorithm is superior to previous approaches and often finds useful symmetries in the search space that existing lifted inference rules are unable to detect. Markov logic networks (MLNs) [5] allow application designers to compactly represent and reason about relational and probabilistic knowledge in a large number of application domains including computer vision and natural language understanding using a few weighted first-order logic formulas. These formulas act as templates for generating large Markov networks – the undirected probabilistic graphical model. 
A key reasoning task over MLNs is maximum a posteriori (MAP) inference, which is defined as the task of finding an assignment of values to all random variables in the Markov network that has the maximum probability. This task can be solved using propositional (graphical model) inference techniques. Unfortunately, these techniques are often impractical because the Markov networks can be quite large, having millions of variables and features. Recently, there has been growing interest in developing lifted inference algorithms [4, 6, 17, 22] for solving the MAP inference task [1, 2, 3, 7, 13, 14, 16, 18, 19]. These algorithms work, as much as possible, on the much smaller first-order specification, grounding or propositionalizing only as necessary and can yield significant complexity reductions in practice. At a high level, lifted algorithms can be understood as algorithms that identify symmetries in the first-order specification using lifted inference rules [9, 13, 19], and then use these symmetries to simultaneously infer over multiple symmetric objects. Unfortunately, in a vast majority of cases, the inference rules are unable to identify several useful symmetries (the rules are sound but not complete), either because the symmetries are approximate or because the symmetries are domain-specific and do not belong to a known type. In such cases, lifted inference algorithms partially ground some atoms in the MLN and search for a solution in this much larger partially propositionalized space. In this paper, we propose the following straight-forward yet principled approach for solving this partial grounding problem [21, 23]: partition the ground atoms into groups and force the inference algorithm to treat all atoms in each group as indistinguishable (symmetric). For example, consider a first-order atom R(x) and assume that x can be instantiated to the following set of constants: {1, 2, 3, 4, 5}. 
If the atom possesses the so-called non-shared or single-occurrence symmetry [13, 19], then the lifted inference algorithm will search over only two assignments (all five groundings of R(x) are either all true or all false) in order to find the MAP solution. When no identifiable symmetries exist, the lifted algorithm will inefficiently search over all possible 32 truth assignments to the 5 ground atoms and will be equivalent in terms of (worst-case) complexity to a propositional algorithm. In our approach, we would partition the domain, say as {{1, 3}, {2, 4, 5}}, and search over only the following 4 assignments: all groundings in each part can be either all true or all false. Thus, if we are lucky and the MAP solution is one of the 4 assignments, our approach will yield significant reductions in complexity even though no identifiable symmetries exist in the problem. Our approach is quite general and includes the fully lifted and fully propositional approaches as special cases. For instance, setting the partition size k to 1 and to n, where n is the number of constants, yields exactly the same solution as the fully lifted and the fully propositional approach, respectively. Setting k to values other than 1 and n yields a family of inference schemes that systematically explores the regime between these two extremes. Moreover, by controlling the size k of each partition we can control the size of the ground theory, and thus the space and time complexity of our algorithm. We prove properties of and improve upon our basic idea in several ways. First, we prove that our proposed approach yields a consistent assignment that is a lower bound on the MAP value. Second, we show how to improve the lower bound, and thus the quality of the MAP solution, by systematically refining the partitions. Third, we show how to further improve the complexity of our refinement procedure by exploiting the exchangeability property of successive refinements.
Specifically, we show that the exchangeable refinements can be arranged on a lattice, which can then be searched via a heuristic search procedure to yield an efficient any-time, any-space algorithm for MAP inference. Finally, we demonstrate experimentally that our method is highly scalable and yields close-to-optimal solutions in a fraction of the time taken by existing approaches. In particular, our results show that even for small values of k (k bounds the partition size), our algorithm yields close-to-optimal MAP solutions, clearly demonstrating the power of our approach.

1 Notation And Background

Partition of a Set. A collection of sets C is a partition of a set X if and only if each set in C is nonempty, the sets are pairwise disjoint and the union of all sets equals X. The sets in C are called the cells or parts of the partition. If two elements a, b of the set appear in the same cell of a partition ρ, we denote this by the operator '∼ρ', i.e., a ∼ρ b. A partition α of a set X is a refinement of a partition ρ of X if every element of α is a subset of some element of ρ. Informally, this means that α is a further fragmentation of ρ. We say that α is finer than ρ (or ρ is coarser than α) and denote it as α ≺ ρ. We will also use the notation α ⪯ ρ to denote that either α is finer than ρ or α is the same as ρ. For example, let ρ = {{1, 2}, {3}} be a partition of the set X = {1, 2, 3} containing the two cells {1, 2} and {3}, and let α = {{1}, {2}, {3}} be another partition of X; then α is a refinement of ρ, namely α ≺ ρ.

First-order logic. We will use a strict subset of first-order logic that has no function symbols, equality constraints or existential quantifiers. Our subset consists of (1) constants, denoted by upper case letters (e.g., X, Y, etc.), which model objects in the domain; (2) logical variables, denoted by lower case letters (e.g., x, y, etc.)
which can be substituted with objects, (3) logical operators such as ∨ (disjunction), ∧ (conjunction), ⇒ (implication) and ⇔ (equivalence), (4) universal (∀) and existential (∃) quantifiers and (5) predicates, which model properties of and relationships between objects. A predicate consists of a predicate symbol, denoted by typewriter fonts (e.g., Friends, R, etc.), followed by a parenthesized list of arguments. A term is a logical variable or a constant. A literal is a predicate or its negation. A formula in first-order logic is an atom (a predicate), or any complex sentence that can be constructed from atoms using logical operators and quantifiers. For example, ∀x Smokes(x) ⇒ Asthma(x) is a formula. A clause is a disjunction of literals. Throughout, we will assume that all formulas are clauses and that their variables are standardized apart. A ground atom is an atom containing only constants. A ground formula is a formula obtained by substituting all of its variables with constants, namely a formula containing only ground atoms. For example, the groundings of ¬Smokes(x) ∨ Asthma(x), where ∆x = {Ana, Bob}, are the two propositional formulas ¬Smokes(Ana) ∨ Asthma(Ana) and ¬Smokes(Bob) ∨ Asthma(Bob).

Markov logic. A Markov logic network (MLN) is a set of weighted clauses in first-order logic. We will assume that all logical variables in all formulas are universally quantified (and therefore we will drop the quantifiers from all formulas), are typed and can be instantiated to a finite set of constants (for a variable x, this set will be denoted by ∆x), and that there is a one-to-one mapping between the constants and objects in the domain (Herbrand interpretations). Note that the class of MLNs we are assuming is not restrictive at all, because almost all MLNs used in application domains such as
Given a finite set of constants, the MLN represents a (ground) Markov network that has one random variable for each ground atom in its Herbrand base and a weighted feature for each ground clause in the Herbrand base. The weight of each feature is the weight of the corresponding first-order clause. Given a world ω, which is a truth assignment to all the ground atoms, the Markov network represents the following probability distribution P(ω) = Z−1 exp(P i wiN(fi, ω)) where (fi, wi) is a weighted first-order formula, N(fi, ω) is the number of true groundings of fi in ω and Z is the partition function. For simplicity, we will assume that the MLN is in normal form, which is defined as an MLN that satisfies the following two properties: (i) there are no constants in any formula; and (ii) if two distinct atoms of predicate R have variables x and y as the same argument of R, then ∆x = ∆y. Because of the second condition, in normal MLNs, we can associate domains with each argument of a predicate. Let iR denote the i-th argument of predicate R and let D(iR) denote the number of elements in the domain of iR. We will also assume that all domains are of the form {1, ..., D(iR)}. Since domain size is finite, any domain can be converted to this form. A common optimization inference task over MLNs is finding the most probable state of the world ω, that is finding a complete assignment to all ground atoms which maximizes the probability. Formally, arg max ω PM(ω) = arg max ω 1 Z(M) exp X i wiN(fi, ω) ! = arg max ω X i wiN(fi, ω) (1) From Eq. (1), we can see that the MAP problem reduces to finding a truth assignment that maximizes the sum of weights of satisfied clauses. Therefore, any weighted satisfiability solver such as MaxWalkSAT [20] can used to solve it. However, MaxWalkSAT is a propositional solver and is unable to exploit symmetries in the first-order representation, and as a result can be quite inefficient. 
Alternatively, the MAP problem can be solved in a lifted manner by leveraging various lifted inference rules such as the decomposer, the binomial rule [6, 9, 22] and the recently proposed single occurrence rule [13, 19]. A schematic of such a procedure is given in Algorithm 1. Before presenting the algorithm, we will describe some required definitions. Let $i_R$ denote the i-th argument of predicate R. Given an MLN, two arguments $i_R$ and $j_S$ of its predicates R and S respectively are called unifiable if they share a logical variable in an MLN formula. Being symmetric and transitive, the unifiable relation splits the arguments of all the predicates into a set of domain equivalence classes.

Example 1. Consider a normal MLN M having two weighted formulas (R(x) ∨ S(x, y), w1) and (R(z) ∨ T(z), w2). Here, we have two domain equivalence classes: {1R, 1S, 1T} and {2S}.

Algorithm 1 LMAP(MLN M)
// base case
if M is empty return 0
Simplify(M)
// Propositional decomposition
if M has disjoint MLNs M1, . . . , Mk then return $\sum_{i=1}^{k}$ LMAP(Mi)
// Lifted decomposition
if M has a liftable domain equivalence class U then return LMAP(M|U)
// Lifted conditioning
if M has a singleton atom A then return $\max_{i=0}^{D(1_A)}$ [LMAP(M|(A, i)) + w(A, i)]
// Partial grounding
Heuristically select a domain equivalence class U and ground it, yielding a new MLN M′
return LMAP(M′)

Algorithm 1 has five recursive steps and returns the optimal MAP value. The first two lines are the base case and the simplification step, in which the MLN is simplified by deleting redundant formulas, rewriting predicates by removing constants (so that lifted conditioning can be applied) and assigning values to ground atoms whose values can be inferred from the assignments made so far. The second step is the propositional decomposition step, in which the algorithm recurses over disjoint MLNs (if any) and returns the sum of their MAP values.
In the lifted decomposition step, the algorithm finds a domain equivalence class U such that in the MAP solution all ground atoms of the predicates that have elements of U as arguments are either all true or all false. To find such a class, the rules given in [9, 13, 19] can be used. In the algorithm, M|U denotes the MLN obtained by setting the domain of all elements of U to 1 and updating the formula weights accordingly. In the lifted conditioning step, if there is an atom having just one argument (a singleton atom), then the algorithm partitions the possible truth assignments to the groundings of A such that in each part all truth assignments have the same number of true atoms. In the algorithm, M|(A, i) denotes the MLN obtained by setting i groundings of A to true and the remaining to false; w(A, i) is the total weight of ground formulas satisfied by the assignment. The final step in LMAP is the partial grounding step, which is executed only when the algorithm is unable to apply lifted inference rules. In this step, the algorithm heuristically selects a domain equivalence class U and grounds it completely. For example:

Example 2. Consider an MLN with two formulas: (R(x, y) ∨ S(y, z), w1) and (S(a, b) ∨ T(a, c), w2). Let D(2R) = 2. After grounding the equivalence class {2R, 1S, 1T}, we get an MLN having four formulas: (R(x1, 1) ∨ S(1, z1), w1), (R(x2, 2) ∨ S(1, z2), w1), (S(1, b1) ∨ T(1, c1), w2) and (S(2, b2) ∨ T(2, c2), w2).¹

2 Scaling up the Partial Grounding Step using Set Partitioning

Algorithm 2 Constrained-Ground (MLN M, Size k and domain equivalence class U)
M′ = M
Create a partition π of size k of ∆iR where iR ∈ U
foreach predicate R such that ∃iR ∈ U do
    foreach cell πj of π do
        Add all possible hard formulas of the form R(x1, . . . , xr) ⇔ R(y1, . . . , yr) such that xi = yi if iR ∉ U, and xi = Xa, yi = Xb if iR ∈ U, where Xa, Xb ∈ πj
return M′

Partial grounding often yields a much bigger MLN than the original MLN and is the chief reason for the inefficiency and poor scalability of Algorithm LMAP. To address this problem, we propose a novel approach to speed up inference by adding additional constraints to the existing lifted MAP formulation. Our idea is as follows: reduce the number of ground atoms by partitioning them and treating all atoms in each part as indistinguishable. Thus, instead of introducing O(tn) new ground atoms, where t is the cardinality of the domain equivalence class and n is the number of constants, our approach will only introduce O(tk) ground atoms, where k ≪ n. Our new, approximate partial grounding method (which will replace the partial grounding step in Algorithm 1) is formally described in Algorithm 2. The algorithm takes as input an MLN M, an integer k > 0 and a domain equivalence class U, and outputs a new MLN M′. The algorithm first partitions the domain of the class U into k cells, yielding a partition π. Then, for each cell πj of π and each predicate R such that one or more of its arguments is in U, the algorithm adds all possible constraints of the form R(x1, . . . , xr) ⇔ R(y1, . . . , yr) such that for each i: (1) we add the equality constraint between the logical variables xi and yi if the i-th argument of the predicate is not in U, and (2) we set xi = Xa and yi = Xb if the i-th argument of R is in U, where Xa, Xb ∈ πj. Since adding constraints restricts the feasible solutions of the optimization problem, it is easy to show that:

Proposition 1. Let M′ = Constrain-Ground(M, k), where M is an MLN and k > 0 is an integer, be the MLN used in the partial grounding step of Algorithm 1 (instead of the partial grounding step described in the algorithm). Then the MAP value returned by the modified algorithm will be smaller than or equal to the one returned by Algorithm 1.

The following example demonstrates how Algorithm 2 constructs a new MLN.

Example 3.
Consider the MLN in Example 2. Let π = {{1, 2}} be the 1-partition of the domain of U. Then, after applying Algorithm 2, the new MLN will have the following three hard formulas in addition to the formulas given in Example 2: (1) R(x3, 1) ⇔ R(x3, 2), (2) S(1, x4) ⇔ S(2, x4) and (3) T(1, x5) ⇔ T(2, x5).

1 The constants can be removed by renaming the predicates, yielding a normal MLN. For example, we can rename R(x1, 1) as R1(x1). This renaming occurs in the simplification step.

Although adding constraints reduces the search space of the MAP problem, Algorithm 2 still needs to ground the MLN, which can be time consuming. Alternatively, we can group indistinguishable atoms together without grounding the MLN, using the following definition:

Definition 1. Let U be a domain equivalence class and let π be its partition. Two ground atoms R(x1, . . . , xr) and R(y1, . . . , yr) of a predicate R such that ∃iR ∈ U are equivalent if, for each i, xi = yi whenever iR ∉ U, and xi = Xa, yi = Xb for some Xa, Xb ∈ πj whenever iR ∈ U. We denote this by R(x1, . . . , xr) ⊥π R(y1, . . . , yr).

Notice that the relation ⊥π is symmetric and reflexive. Thus, we can group all the ground atoms corresponding to the transitive closure of this relation, yielding a "meta ground atom" such that if the meta atom is assigned true (false), all the ground atoms in its transitive closure are assigned true (false). This yields the partition-ground algorithm described as Algorithm 3. The algorithm starts by creating a k-partition of the domain of U. It then updates the domain of U so that it contains only k values, grounds all arguments of predicates that are in the set U, and updates the formula weights appropriately. The weights must be updated because, when the domain is compressed, several ground formulas are replaced by a single ground formula. Intuitively, if t (partially) ground formulas having weight w are replaced by one (partially) ground formula (f, w′), then w′ should equal wt. The two for loops in Algorithm 3 accomplish this.
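To make the two ingredients of Partition-Ground concrete, here is a minimal Python sketch (not the paper's implementation; the atom encoding and all function names are hypothetical, and each atom is assumed to have a single argument in U): equivalent atoms are merged into meta atoms, and each formula's weight is multiplied by the size of the cell it is ground on.

```python
def partition_ground(atoms, formulas, partition):
    """Sketch of the two ingredients of Partition-Ground (Algorithm 3):
    (i) group ground atoms whose U-argument falls in the same cell into
    one meta atom, and (ii) multiply a formula's weight by the cell
    size, since |pi_j| original groundings collapse into one.
    `atoms` are (predicate, constant) pairs whose constant ranges over
    the domain of U; other arguments are ignored for simplicity."""
    cell_of = {c: j for j, cell in enumerate(partition) for c in cell}
    meta = {}
    for pred, const in atoms:
        meta.setdefault((pred, cell_of[const]), []).append((pred, const))
    ground_formulas = [(f"{name}|cell{j}", w * len(cell))
                       for name, w in formulas
                       for j, cell in enumerate(partition)]
    return meta, ground_formulas

# Example 4 setting: D(2R) = 3 and pi = {{1, 2}, {3}} turns a formula
# of weight w into copies of weight 2w and w.
meta, gf = partition_ground(
    [("S", 1), ("S", 2), ("S", 3)], [("f1", 1.0)], [[1, 2], [3]])
```

Note that the total weight is preserved: 2w + w equals the weight of the three original groundings, mirroring the bookkeeping in Example 4.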
We can show that:

Proposition 2. The MAP value output by replacing the partial grounding step in Algorithm 1 with Algorithm Partition-Ground is the same as the one output by replacing the partial grounding step in Algorithm 1 with Algorithm Constrained-Ground.

Algorithm 3 Partition-Ground (MLN M, size k, domain equivalence class U)
  M′ = M
  Create a partition π of size k of ∆iR, where iR ∈ U
  Update the domain ∆iR to {1, . . . , k} in M′
  Ground all predicates R such that iR ∈ U
  foreach formula (f′, w′) in M′ such that f′ contains an atom of R where iR ∈ U do
    Let f be the formula in M from which f′ was derived
    foreach logical variable in f that was substituted by the j-th value in ∆iR to yield f′ do
      w′ = w′ × |πj|, where πj is the j-th cell of π
  return M′

The key advantage of using Algorithm Partition-Ground is that the lifted algorithm (LMAP) will have much smaller space complexity than the one using Algorithm Constrained-Ground. Specifically, unlike the latter, which yields O(n^|U|) ground atoms (assuming each predicate has only one argument in U), where n is the number of constants in the domain of U, the former generates only O(k^|U|) ground atoms, where k ≪ n. The following example illustrates how Algorithm Partition-Ground constructs a new MLN.

Example 4. Consider an MLN, M, with two formulas: (R(x, y) ∨ S(y, z), w1) and (S(a, b) ∨ T(a, c), w2). Let D(2R) = 3 and π = {{1, 2}, {3}} = {ν1, ν2}. After grounding 2R with respect to π, we get an MLN, M′, having four formulas: (Rν1(x1) ∨ Sν1(z1), 2w1), (Rν2(x2) ∨ Sν2(z2), w1), (Sν1(b1) ∨ Tν1(c1), 2w2) and (Sν2(b2) ∨ Tν2(c2), w2). The total weight of the groundings in M is 3w1·D(1R)·D(2S) + 3w2·D(2S)·D(2T), which is the same as in M′.

The following example illustrates how the algorithm constructs a new MLN in the presence of self-joins.

Example 5. Consider an MLN, M, with the single formula (¬R(x, y) ∨ R(y, x), w). Let D(1R) = D(2R) = 3 and π = {{1, 2}, {3}} = {ν1, ν2}.
After grounding 1R (and also 2R, as they belong to the same domain equivalence class) with respect to π, we get an MLN, M′, having the following four formulas: (¬Rν1,ν1 ∨ Rν1,ν1, 4w), (¬Rν1,ν2 ∨ Rν2,ν1, 2w), (¬Rν2,ν1 ∨ Rν1,ν2, 2w) and (¬Rν2,ν2 ∨ Rν2,ν2, w).

2.1 Generalizing the Partition Grounding Approach

Algorithm Partition-Ground allows us to group the equivalent atoms with respect to a partition and has much smaller space and time complexity than the partial grounding strategy described in Algorithm 1. However, it yields only a lower bound on the MAP value. In this section, we show how to improve this lower bound using refinements of the partition. The basis of our generalization is the following theorem:

Theorem 1. Given two partitions π and φ of U such that φ ⪯ π (φ refines π), the MAP value of the partially ground MLN with respect to π is less than or equal to the MAP value of the partially ground MLN with respect to φ.

Proof (sketch). Since the partition φ is a finer refinement of π, the set of candidate MAP assignments of the MLN obtained via φ already includes all the candidate assignments of the MLN obtained via π, and since the MAP values of both of these MLNs are lower bounds on the original MAP value, the theorem follows.

We can use Theorem 1 to devise a new any-time MAP algorithm which refines the partitions to obtain better estimates of the MAP value. Our approach is presented in Algorithm 4. The algorithm begins by identifying all non-liftable domains, namely the domains Ui that will be partially grounded during the execution of Algorithm 1, and associating a 1-partition πi with each of them. Then, until a timeout occurs, it iterates through the following two steps. First, it runs the LMAP algorithm, which uses the pair (Ui, πi) in Algorithm Partition-Ground during the i-th partial grounding step, yielding a MAP solution µ. Second, it heuristically selects a partition πj and refines it.
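The refinement relation in Theorem 1 can be checked mechanically. The sketch below (a hypothetical helper, with partitions encoded as lists of Python sets) tests whether φ ⪯ π, i.e. whether every cell of φ is contained in some cell of π; this is the condition under which moving to the finer partition can only improve the lower bound.

```python
def refines(phi, pi):
    """Check the refinement relation phi <= pi of Theorem 1: every cell
    of the finer partition phi must be contained in some cell of pi."""
    return all(any(cell <= big for big in pi) for cell in phi)

# A chain from coarse to fine: each refinement step can only improve
# (or preserve) the MAP lower bound.
coarse = [{1, 2, 3, 4}]
middle = [{1, 2}, {3, 4}]
fine = [{1}, {2}, {3, 4}]
```

For instance, `refines(fine, middle)` holds, while `refines(middle, fine)` does not, since {1, 2} fits in no cell of the finer partition.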
From Theorem 1, it is clear that as the number of iterations increases, the MAP solution will either improve or remain the same. Thus, Algorithm Refine-MAP is an anytime algorithm.

Algorithm 4 Refine-MAP(MLN M)
  Let U = {Ui} be the non-liftable domains
  Set πi = {∆jR}, where jR ∈ Ui, for all Ui ∈ U
  µ = −∞
  while timeout has not occurred do
    µ = LMAP(M) /* LMAP uses the pair (Ui, πi) and Algorithm Partition-Ground for its i-th partial grounding step. */
    Heuristically select a partition πj and refine it
  return µ

Alternatively, we can also devise an any-space algorithm using the following idea. We first determine k, the maximum size of a partition that we can fit in memory. As different partitions of size k will give us different MAP values, we can search through them to find the best possible MAP solution. A drawback of the any-space approach is that it explores a prohibitively large search space. In particular, the number of possible partitions of size k of a set of size n, denoted S(n, k), is given by the so-called Stirling numbers of the second kind, which grow exponentially with n. (The total number of partitions of a set is given by the Bell number, Bn = Σ_{k=1}^{n} S(n, k).) Clearly, searching over all possible partitions of size k is not practical. Luckily, we can exploit symmetries in the MLN representation to substantially reduce the number of partitions we have to consider, since many of them will give us the same MAP value. Formally,

Theorem 2. Given two k-partitions π = {π1, . . . , πk} and φ = {φ1, . . . , φk} of U such that |πi| = |φi| for all i, the MAP value of the partially ground MLN with respect to π is equal to the MAP value of the partially ground MLN with respect to φ.

Proof (sketch). A formula f, when ground on an argument iR with respect to a partition π, creates |π| copies of the formula. Since |φ| = |π| = k, grounding on iR with respect to φ creates the same number of formulas, which are identical up to a renaming of constants.
Furthermore, since |πi| = |φi| (each pair of corresponding cells has identical cardinality), and as the weight of a ground formula is determined by the cell sizes (see Algorithm Partition-Ground), the ground formulas obtained using φ and π will have the same weights as well. As a result, the MLNs obtained by grounding on any argument iR with respect to φ and π are indistinguishable (up to a renaming of variables and constants), and the proof follows.

[Figure 1: Exchangeable Partition Lattice corresponding to the domain {1, 2, 3, 4}. Its nodes are {{1}, {2}, {3}, {4}}, {{1}, {2}, {3, 4}}, {{1}, {2, 3, 4}}, {{1, 2}, {3, 4}} and {{1, 2, 3, 4}}.]

From Theorem 2, it follows that the number of cells of a partition and the sizes of its cells are sufficient to define a partially ground MLN with respect to that partition. Consecutive refinements of such partitions thus yield a lattice, which we refer to as the Exchangeable Partition Lattice. The term 'exchangeable' refers to the fact that two partitions having the same number of cells and the same multiset of cell sizes are exchangeable with each other (in terms of MAP solution quality). Figure 1 shows the Exchangeable Partition Lattice corresponding to the domain {1, 2, 3, 4}. If we did not use exchangeability, the number of partitions in the lattice would have been B4 = S(4, 1) + S(4, 2) + S(4, 3) + S(4, 4) = 1 + 7 + 6 + 1 = 15. By contrast, the exchangeable lattice has only 5 elements. Different traversal strategies of this exchangeable partition lattice give rise to different lifted MAP algorithms. For example, a greedy depth-first traversal of the lattice yields Algorithm 4. We can also explore the lattice using systematic depth-limited search and return the maximum solution found for a particular depth limit d. This yields an improved version of our any-space approach described earlier. We can even combine the two strategies by traversing the lattice in some heuristic order.
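The counts quoted above are easy to reproduce. The following sketch (helper names are ours, not the paper's) computes Stirling numbers of the second kind, Bell numbers, and the nodes of the exchangeable partition lattice, which by Theorem 2 are exactly the integer partitions of the domain size.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: number of partitions of an
    n-element set into exactly k non-empty cells."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number: total number of partitions of an n-element set."""
    return sum(stirling2(n, k) for k in range(1, n + 1))

def lattice_nodes(n, largest=None):
    """Nodes of the exchangeable partition lattice: by Theorem 2 only
    the multiset of cell sizes matters, so the nodes are exactly the
    integer partitions of n (as non-increasing tuples)."""
    largest = n if largest is None else largest
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(n, largest), 0, -1)
            for rest in lattice_nodes(n - first, first)]
```

For n = 4 this recovers the numbers in the text: B4 = 1 + 7 + 6 + 1 = 15 ordinary partitions, but only 5 lattice nodes.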
For our experiments, we use greedy depth-limited search, because full depth-limited search was very expensive. Note that although our algorithm assumes normal MLNs, which are pre-shattered, we can easily extend it to use shattering as needed [10]. Moreover, by clustering evidence atoms together [21, 23], we can further reduce the size of the shattered theory [4].

3 Experiments

We implemented our algorithm on top of the lifted MAP algorithm of Sarkhel et al. [18], which reduces lifted MAP inference to an integer polynomial program (IPP). We call our algorithm P-IPP (which stands for partition-based IPP). We performed two sets of experiments. The first set measures the impact of increasing the partition size k on the quality of the MAP solution output by our algorithm. The second set compares the performance and scalability of our algorithm with several algorithms from the literature. All of our experiments were run on a third-generation i7 quad-core machine having 8GB of RAM. We used the following five MLNs in our experimental study: (1) an MLN which we call Equivalence, consisting of the three formulas Equals(x,x), Equals(x,y) → Equals(y,x), and Equals(x,y) ∧ Equals(y,z) → Equals(x,z); (2) the Student MLN from [18, 19], consisting of four formulas and three predicates; (3) the Relationship MLN from [18], consisting of four formulas and three predicates; (4) the WebKB MLN [11] from the Alchemy web page, consisting of three predicates and seven formulas; and (5) the Citation Information-Extraction (IE) MLN from the Alchemy web page [11], consisting of five predicates and fourteen formulas. We compared the solution quality and scalability of our approach with the following algorithms and systems: Alchemy (ALY) [11], Tuffy (TUFFY) [15], ground inference based on integer linear programming (ILP), and the IPP algorithm of Sarkhel et al. [18]. Alchemy and Tuffy are two state-of-the-art open-source software packages for learning and inference in MLNs.
Both of them ground the MLN and then use an approximate solver, MaxWalkSAT [20], to compute the MAP solution. Unlike Alchemy, Tuffy uses clever database tricks to speed up computation and in principle can be much more scalable than Alchemy. ILP is obtained by converting the MAP problem over the ground Markov network to an integer linear program. We ran each algorithm on the aforementioned MLNs for varying time bounds and recorded the solution quality, which is measured by the total weight of the false clauses in the (approximate) MAP solution, also referred to as the cost. The smaller the cost, the better the MAP solution. For a fair comparison, we used a parallelized integer linear programming solver, Gurobi [8], to solve the integer linear programs generated by our algorithm as well as by the other competing algorithms. Figure 2 shows our experimental results. Note that if the curve for an algorithm is not present in a plot, the corresponding algorithm ran out of either memory or time on that MLN and did not output any solution. We observe that Tuffy and Alchemy are the worst performing systems, both in terms of solution quality and scalability. ILP scales slightly better than Tuffy and Alchemy; however, it is unable to handle MLNs having more than 30K clauses. We can see that our new algorithm P-IPP, run as an anytime scheme that refines partitions, not only finds higher quality MAP solutions but also scales better in terms of time complexity than IPP. In particular, IPP could not scale to the Equivalence MLN having roughly 1 million ground clauses or to the Relationship MLN having roughly 125.8M ground clauses. The reason is that these MLNs have self-joins (the same predicate appearing multiple times in a formula), which IPP is unable to lift. On the other hand, our new approach is able to find useful approximate symmetries in these hard MLNs.
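For reference, the cost reported in the plots is the total weight of ground clauses falsified by the returned assignment. A minimal sketch, with a clause encoding of our own choosing (each clause is a weight plus a list of signed literals):

```python
def map_cost(clauses, assignment):
    """Cost used in the experiments: total weight of ground clauses
    left unsatisfied by `assignment` (smaller is better). A clause is
    a (weight, literals) pair; a literal (atom, sign) is satisfied
    when assignment[atom] == sign."""
    return sum(w for w, lits in clauses
               if not any(assignment[a] == s for a, s in lits))

# Two toy weighted ground clauses: (p OR not q) and (q).
clauses = [(1.5, [("p", True), ("q", False)]),
           (2.0, [("q", True)])]
cost = map_cost(clauses, {"p": False, "q": False})
```

Here the all-false assignment satisfies the first clause (via the negated literal) but falsifies the second, so the cost is 2.0.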
To measure the impact of varying the partition size on the MAP solution quality, we conducted the following experiment. We first ran the IPP algorithm until completion to compute the optimum MAP value. Then, we ran our algorithm multiple times, also until completion, and recorded the solution quality achieved in each run for different partition sizes. Figure 3 plots the average cost across the runs as a function of k (the error bars show the standard deviation). For brevity, we only show results for the IE and Equivalence MLNs. The optimum solutions for the three MLNs were found in (a) 20 minutes, (b) 6 hours and (c) 8 hours, respectively. On the other hand, our new approach P-IPP yields close-to-optimal solutions in a fraction of the time, and for relatively small values of k (≈ 5–10).

4 Summary and Future Work

Lifted inference techniques have gained popularity in recent years and have quickly become the approach of choice for scaling up inference in MLNs. A pressing issue with existing lifted inference technology is that most algorithms only exploit exact, identifiable symmetries and resort to grounding or propositional inference when such symmetries are not present. This is problematic because grounding can blow up the search space. In this paper, we proposed a principled, approximate approach to solve this grounding problem.
The main idea in our approach is to partition the ground atoms into a small number of groups and then treat all ground atoms in a group as indistinguishable

[Figure 2: Cost vs. Time — cost of unsatisfied clauses (smaller is better) vs. time for different domain sizes. Panels: (a) IE(3.2K, 1M), (b) IE(380K, 15.6B), (c) IE(3.02M, 302B), (d) Equivalence(100, 1.2K), (e) Equivalence(900, 28.8K), (f) Equivalence(10K, 1.02M), (g) WebKb(3.2K, 1M), (h) Student(3M, 1T), (i) Relation(750K, 125.8M). Each panel is labeled MLN(numvariables, numclauses); the quantities reported are for the ground Markov network associated with the MLN. Standard deviation is plotted as error bars.]
[Figure 3: Cost vs. Partition Size. Panels: (a) IE(3.2K, 1M), (b) IE(82.8K, 731.6M), (c) Equivalence(100, 1.2K). Each panel is labeled MLN(numvariables, numclauses).]

(from each other). This simple idea introduces new, approximate symmetries which can help speed up the inference process. Although our proposed approach is inherently approximate, we proved that it has nice theoretical properties: it is guaranteed to yield a consistent assignment whose value is a lower bound on the MAP value. We further described an any-time algorithm which can improve this lower bound through systematic refinement of the partitions. Finally, based on the exchangeability property of the refined partitions, we demonstrated a method for organizing the partitions in a lattice structure which can be traversed heuristically to yield efficient any-time as well as any-space lifted MAP inference algorithms. Our experiments on a wide variety of benchmark MLNs clearly demonstrate the power of our new approach. Future work includes connecting this work to the work on the Sherali-Adams hierarchy [2]; deriving a variational principle for our method [14]; and developing novel branch-and-bound [12] as well as weight learning algorithms based on our partitioning approach.

Acknowledgments: This work was supported in part by the DARPA Probabilistic Programming for Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005.

References
[1] U. Apsel and R. Brafman. Exploiting uniform assignments in first-order MPE. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pages 74–83, 2012.
[2] U. Apsel, K. Kersting, and M. Mladenov. Lifting Relational MAP-LPs Using Cluster Signatures.
In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[3] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
[4] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, 2009.
[6] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pages 256–265. AUAI Press, 2011.
[7] F. Hadiji and K. Kersting. Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.
[8] Gurobi Optimization Inc. Gurobi Optimizer Reference Manual, 2014.
[9] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted Inference from the Other Side: The Tractable Features. In Proceedings of the 24th Annual Conference on Neural Information Processing Systems, 2010.
[10] J. Kisynski and D. Poole. Constraint Processing in Lifted Probabilistic Inference. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 293–302, 2009.
[11] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
[12] R. Marinescu and R. Dechter. AND/OR Branch-and-Bound Search for Combinatorial Optimization in Graphical Models. Artificial Intelligence, 173(16-17):1457–1491, 2009.
[13] H. Mittal, P. Goyal, V. Gogate, and P. Singla. New Rules for Domain Independent Lifted MAP Inference.
In Advances in Neural Information Processing Systems, 2014.
[14] M. Mladenov, A. Globerson, and K. Kersting. Efficient Lifting of MAP LP Relaxations Using k-Locality. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
[15] F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up Statistical Inference in Markov Logic Networks Using an RDBMS. Proceedings of the VLDB Endowment, 2011.
[16] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting Parallelism and Symmetry for MAP Inference in Statistical Relational Models. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.
[17] D. Poole. First-Order Probabilistic Inference. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 985–991, Acapulco, Mexico, 2003. Morgan Kaufmann.
[18] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. An Integer Polynomial Programming Based Framework for Lifted MAP Inference. In Advances in Neural Information Processing Systems, 2014.
[19] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP Inference for Markov Logic Networks. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
[20] B. Selman, H. Kautz, and B. Cohen. Local Search Strategies for Satisfiability Testing. In Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, 1996.
[21] G. Van den Broeck and A. Darwiche. On the Complexity and Approximation of Binary Evidence in Lifted Inference. In Advances in Neural Information Processing Systems, 2013.
[22] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-Order Knowledge Compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2178–2185, 2011.
[23] D. Venugopal and V. Gogate. Evidence-based Clustering for Scalable Inference in Markov Logic.
In Machine Learning and Knowledge Discovery in Databases, 2014.
Algorithmic Stability and Uniform Generalization

Ibrahim Alabdulmohsin
King Abdullah University of Science and Technology
Thuwal 23955, Saudi Arabia
ibrahim.alabdulmohsin@kaust.edu.sa

Abstract

One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss functions. We provide various interpretations of this result. For instance, a relationship is proved between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning. In addition, we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connect algorithmic stability to the size of the hypothesis space, which recovers the classical PAC result that the size (complexity) of the hypothesis space should be controlled in order to improve algorithmic stability and improve generalization.

1 Introduction

One fundamental goal of any learning algorithm is to strike the right balance between underfitting and overfitting. In mathematical terms, this is often translated into two separate objectives. First, we would like the learning algorithm to produce a hypothesis that is reasonably consistent with the empirical evidence (i.e. to have a small empirical risk). Second, we would like to guarantee that the empirical risk (training error) is a valid estimate of the true unknown risk (test error). The former condition protects against underfitting while the latter condition protects against overfitting.
The rationale behind these two objectives can be understood if we define the generalization risk Rgen as the absolute difference between the empirical and true risks: Rgen := |Remp − Rtrue|. Then, it is elementary to observe that the true risk Rtrue is bounded from above by the sum Remp + Rgen. Hence, by minimizing both the empirical risk (underfitting) and the generalization risk (overfitting), one obtains an inference procedure whose true risk is minimal. Minimizing the empirical risk alone can be carried out using the empirical risk minimization (ERM) procedure [1] or some approximation to it. However, the generalization risk is often impossible to deal with directly. Instead, it is common practice to bound it analytically so that one can establish conditions under which it is guaranteed to be small. By establishing conditions for generalization, one hopes to design better learning algorithms that both perform well empirically and generalize well to novel observations in the future. A prominent example of such an approach is the Support Vector Machine (SVM) algorithm for binary classification [2]. However, bounding the generalization risk is quite intricate because it can be approached from various angles. In fact, several methods have been proposed in the past to prove generalization bounds, including uniform convergence, algorithmic stability, Rademacher and Gaussian complexities, generic chaining bounds, the PAC-Bayesian framework, and robustness-based analysis [1, 3, 4, 5, 6, 7, 8, 9]. Concentration-of-measure inequalities form the building blocks of these rich theories. The proliferation of generalization bounds can be understood if we look into the general setting of learning introduced by Vapnik [1]. In this setting, we have an observation space Z and a hypothesis space H. A learning algorithm, henceforth denoted L : ∪∞m=1 Zm → H, uses a finite set of observations to infer a hypothesis H ∈ H.
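The elementary bound mentioned above can be written out; since Rgen is the absolute difference between the two risks, the inequality is just:

```latex
R_{\mathrm{true}}
  = R_{\mathrm{emp}} + \left(R_{\mathrm{true}} - R_{\mathrm{emp}}\right)
  \le R_{\mathrm{emp}} + \left|R_{\mathrm{emp}} - R_{\mathrm{true}}\right|
  = R_{\mathrm{emp}} + R_{\mathrm{gen}},
```

so driving both terms on the right-hand side down controls the true risk.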
In the general setting, the inference process end-to-end is influenced by three key factors: (1) the nature of the observation space Z, (2) the nature of the hypothesis space H, and (3) the details of the learning algorithm L. By imposing constraints on any of these three components, one may be able to derive new generalization bounds. For example, the Vapnik-Chervonenkis (VC) theory derives generalization bounds by assuming constraints on H, while stability bounds, e.g. [6, 10, 11, 12], are derived by assuming constraints on L. Given that different generalization bounds can be established by imposing constraints on any of Z, H, or L, it is intriguing to ask whether there exists a single view of generalization that ties all of these different components together. In this paper, we answer this question in the affirmative by establishing that algorithmic stability alone is equivalent to uniform generalization. Informally speaking, an inference process is said to generalize uniformly if the generalization risk vanishes uniformly across all bounded parametric loss functions in the limit of large training sets. A more precise definition will be presented in the sequel. We will show why constraints that are imposed on either H, Z, or L to improve uniform generalization can be interpreted as methods of improving the stability of the learning algorithm L. This is similar in spirit to a result by Kearns and Ron, who showed that having a finite VC dimension in the hypothesis space H implies a certain notion of algorithmic stability in the inference process [13]. Our statement, however, is more general as it applies to all learning algorithms that fall under Vapnik's general setting of learning, well beyond uniform convergence. The rest of the paper is organized as follows. First, we review the current literature on algorithmic stability, generalization, and learnability. Then, we introduce key definitions that will be repeatedly used throughout the paper.
Next, we prove the central theorem, which reveals that algorithmic stability is equivalent to uniform generalization, and provide various interpretations of this result afterward.

2 Related Work

Perhaps the two most fundamental concepts in statistical learning theory are those of learnability and generalization [12, 14]. The two concepts are distinct from each other. As will be discussed in more detail next, whereas learnability is concerned with measuring the excess risk within a hypothesis space, generalization is concerned with estimating the true risk. In order to define learnability and generalization, suppose we have an observation space Z, a probability distribution of observations P(z), and a bounded stochastic loss function L(·; H) : Z → [0, 1], where H ∈ H is an inferred hypothesis. Note that L is implicitly a function of (parameterized by) H as well. We define the true risk of a hypothesis H ∈ H by the risk functional:

Rtrue(H) = E_{Z∼P(z)} [L(Z; H)]   (1)

Then, a learning algorithm is called consistent if the true risk of its inferred hypothesis H converges to the optimal true risk within the hypothesis space H in the limit of large training sets, m → ∞. A problem is called learnable if it admits a consistent learning algorithm [14]. It has been known that learnability for supervised classification and regression problems is equivalent to uniform convergence [3, 14]. However, Shalev-Shwartz et al. recently showed that uniform convergence is not necessary in Vapnik's general setting of learning and proposed algorithmic stability as an alternative key condition for learnability [14]. Unlike learnability, the question of generalization is concerned primarily with how representative the empirical risk Remp is of the true risk Rtrue. To elaborate, suppose we have a finite training set Sm = {Zi}i=1,..,m, which comprises m i.i.d. observations Zi ∼ P(z).
We define the empirical risk of a hypothesis H with respect to Sm by:

Remp(H; Sm) = (1/m) Σ_{Zi∈Sm} L(Zi; H)   (2)

We also let Rtrue(H) be the true risk as defined in Eq. (1). Then, a learning algorithm L is said to generalize if the empirical risk of its inferred hypothesis converges to its true risk as m → ∞. Similar to learnability, uniform convergence is, by definition, sufficient for generalization [1], but it is not necessary because the learning algorithm can always restrict its search space to a smaller subset of H (artificially, so to speak). By contrast, it is not known whether algorithmic stability is necessary for generalization. It has been shown that various notions of algorithmic stability can be defined that are sufficient for generalization [6, 10, 11, 12, 15, 16]. However, it is not known whether an appropriate notion of algorithmic stability can be defined that is both necessary and sufficient for generalization in Vapnik's general setting of learning. In this paper, we answer this question by showing that stability in the inference process is not only sufficient for generalization but is, in fact, equivalent to uniform generalization, a notion of generalization that is stronger than the one traditionally considered in the literature.

3 Preliminaries

To simplify the discussion, we will always assume that all sets are countable, including the observation space Z and the hypothesis space H. This is similar to the assumptions used in some previous works, such as [6]. However, the main results, which are presented in Section 4, can be readily generalized. In addition, we assume that all learning algorithms are invariant to permutations of the training set; hence, the order of training examples is irrelevant. Moreover, if X ∼ P(x) is a random variable drawn from the alphabet X and f(X) is a function of X, we write E_{X∼P(x)} f(X) to mean Σ_{x∈X} P(x) f(x).
Often, we will simply write E_X f(X) to mean E_{X∼P(x)} f(X) if the distribution of X is clear from the context. If X takes its values from a finite set S uniformly at random, we write X ∼ S to denote this distribution of X. If X is a Boolean random variable, then I{X} = 1 if and only if X is true; otherwise I{X} = 0. In general, random variables are denoted with capital letters, instances of random variables are denoted with lowercase letters, and alphabets are denoted with a calligraphic typeface. Also, given two probability mass functions P and Q defined on the same alphabet A, we will write ⟨P, Q⟩ to denote the overlapping coefficient, i.e. the intersection, between P and Q. That is, ⟨P, Q⟩ := Σ_{a∈A} min{P(a), Q(a)}. Note that ⟨P, Q⟩ = 1 − ||P, Q||_T, where ||P, Q||_T is the total variation distance. Last, we will write B(k; φ, n) = C(n, k) φ^k (1 − φ)^{n−k} to denote the binomial distribution.

In this paper, we consider the general setting of learning introduced by Vapnik [1]. To reiterate, we have an observation space Z and a hypothesis space H. Our learning algorithm L receives a set of m observations Sm = {Zi}_{i=1,..,m} ∈ Z^m generated i.i.d. from a fixed unknown distribution P(z), and picks a hypothesis H ∈ H with probability P_L(H = h | Sm). Formally, L : ∪_{m=1}^∞ Z^m → H is a stochastic map. In this paper, we allow the hypothesis H to be any summary statistic of the training set. It can be a measure of central tendency, as in unsupervised learning, or it can be a mapping from an input space to an output space, as in supervised learning. In fact, we even allow H to be a subset of the training set itself. In formal terms, L is a stochastic map between the two random variables H ∈ H and Sm ∈ Z^m, where the exact interpretation of those random variables is irrelevant. In any learning task, we assume a non-negative bounded loss function L(·; H) : Z → [0, 1] is used to measure the quality of the inferred hypothesis H ∈ H on the observation Z ∈ Z.
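As a quick numerical sanity check (our illustration, not part of the paper), the identity ⟨P, Q⟩ = 1 − ||P, Q||_T can be verified for any pair of probability mass functions on a common alphabet; the two distributions below are arbitrary examples:

```python
def overlap(P, Q):
    """Overlapping coefficient <P, Q> of two pmfs given as dicts on a common alphabet."""
    keys = set(P) | set(Q)
    return sum(min(P.get(a, 0.0), Q.get(a, 0.0)) for a in keys)

def total_variation(P, Q):
    """Total variation distance ||P, Q||_T = (1/2) * sum_a |P(a) - Q(a)|."""
    keys = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(a, 0.0) - Q.get(a, 0.0)) for a in keys)

# Arbitrary example pmfs (both must sum to one for the identity to hold).
P = {'a': 0.5, 'b': 0.3, 'c': 0.2}
Q = {'a': 0.1, 'b': 0.6, 'c': 0.3}
assert abs(overlap(P, Q) - (1.0 - total_variation(P, Q))) < 1e-12  # <P,Q> = 1 - ||P,Q||_T
```

Note that the identity relies on both P and Q summing to one; for sub-normalized measures the two quantities differ.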
Most importantly, we assume that L(·; H) : Z → [0, 1] is parametric:

Definition 1 (Parametric Loss Functions). A loss function L(·; H) : Z → [0, 1] is called parametric if it is independent of the training set Sm given the inferred hypothesis H. That is, a parametric loss function satisfies the Markov chain: Sm → H → L(·; H).

For any fixed hypothesis H ∈ H, we define its true risk Rtrue(H) by Eq. (1), and define its empirical risk on a training set Sm, denoted Remp(H; Sm), by Eq. (2). We also define the true and empirical risks of the learning algorithm L by the expected risk of its inferred hypothesis:

R̂true(L) = E_{Sm} E_{H∼P_L(h|Sm)} Rtrue(H) = E_{Sm} E_{H|Sm} Rtrue(H)    (3)
R̂emp(L) = E_{Sm} E_{H∼P_L(h|Sm)} Remp(H; Sm) = E_{Sm} E_{H|Sm} Remp(H; Sm)    (4)

To simplify notation, we will write R̂true and R̂emp instead of R̂true(L) and R̂emp(L). We will consider the following definition of generalization:

Definition 2 (Generalization). A learning algorithm L : ∪_{m=1}^∞ Z^m → H with a parametric loss function L(·; H) : Z → [0, 1] generalizes if for any distribution P(z) on Z, we have lim_{m→∞} |R̂emp − R̂true| = 0, where R̂true and R̂emp are given in Eq. (3) and Eq. (4) respectively.

In other words, a learning algorithm L generalizes according to Definition 2 if its empirical performance (training loss) becomes an unbiased estimator of the true risk as m → ∞. Next, we define uniform generalization:

Definition 3 (Uniform Generalization). A learning algorithm L : ∪_{m=1}^∞ Z^m → H generalizes uniformly if for any ε > 0, there exists m0(ε) > 0 such that for all distributions P(z) on Z, all parametric loss functions, and all sample sizes m > m0(ε), we have |R̂emp(L) − R̂true(L)| ≤ ε.

Uniform generalization is stronger than the original notion of generalization in Definition 2. In particular, if a learning algorithm generalizes uniformly, then it generalizes according to Definition 2 as well. The converse, however, is not true.
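To make Definition 2 concrete, the following sketch (ours; the learner, loss, and all names are illustrative assumptions, not from the paper) estimates the gap R̂true − R̂emp by Monte Carlo for a learner that outputs the empirical mean of m Bernoulli(φ) observations under the bounded parametric loss L(z; H) = (z − H)². For this learner the expected gap has the closed form 2φ(1 − φ)/m, which vanishes as m → ∞, i.e. the algorithm generalizes:

```python
import random

def risk_gap(phi, m, trials, rng):
    """Monte-Carlo estimate of R_true(L) - R_emp(L) (Eqs. (3), (4)) for the
    learner H = (1/m) * sum_i Z_i with the parametric loss L(z; H) = (z - H)^2."""
    gap = 0.0
    for _ in range(trials):
        sample = [1.0 if rng.random() < phi else 0.0 for _ in range(m)]
        H = sum(sample) / m
        r_emp = sum((z - H) ** 2 for z in sample) / m
        # True risk of the fixed hypothesis H under Bernoulli(phi):
        # E[(Z - H)^2] = phi * (1 - H)^2 + (1 - phi) * H^2
        r_true = phi * (1.0 - H) ** 2 + (1.0 - phi) * H ** 2
        gap += (r_true - r_emp) / trials
    return gap

rng = random.Random(0)
phi = 0.3
for m in (10, 100):
    est = risk_gap(phi, m, 20000, rng)
    exact = 2.0 * phi * (1.0 - phi) / m  # closed-form expected gap, shrinks as 1/m
    assert abs(est - exact) < 0.01
```

The empirical risk is biased downward because the hypothesis is fit to the very sample on which it is evaluated; the bias decays at rate 1/m here.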
Even though uniform generalization appears, at first sight, to be quite a strong condition, a key contribution of this paper is to show that it is not: it is equivalent to a simple condition, namely algorithmic stability.

4 Main Results

Before we prove that algorithmic stability is equivalent to uniform generalization, we introduce a probabilistic notion of mutual stability between two random variables. In order to abstract away any labeling information the random variables might possess, e.g. the observation space may or may not be a metric space, we define stability by the impact of observations on probability distributions:

Definition 4 (Mutual Stability). Let X ∈ X and Y ∈ Y be two random variables. Then, the mutual stability between X and Y is defined by:

S(X; Y) := ⟨P(X) P(Y), P(X, Y)⟩ = E_X ⟨P(Y), P(Y|X)⟩ = E_Y ⟨P(X), P(X|Y)⟩

If we recall that 0 ≤ ⟨P, Q⟩ ≤ 1 is the overlapping coefficient between the two probability distributions P and Q, we see that S(X; Y) given by Definition 4 is indeed a probabilistic measure of mutual stability. It measures how stable the distribution of Y is before and after observing an instance of X, and vice versa. A small value of S(X; Y) means that the probability distribution of X or Y is heavily perturbed by a single observation of the other random variable. Perfect mutual stability is achieved when the two random variables are independent of each other. With this probabilistic notion of mutual stability in mind, we define the stability of a learning algorithm L by the mutual stability between its inferred hypothesis and a random training example.

Definition 5 (Algorithmic Stability). Let L : ∪_{m=1}^∞ Z^m → H be a learning algorithm that receives a finite set of training examples Sm = {Zi}_{i=1,..,m} ∈ Z^m drawn i.i.d. from a fixed distribution P(z). Let H ∼ P_L(h|Sm) be the hypothesis inferred by L, and let Ztrn ∼ Sm be a single random training example.
We define the stability of L by S(L) = inf_{P(z)} S(H; Ztrn), where the infimum is taken over all possible distributions of observations P(z). A learning algorithm is called algorithmically stable if lim_{m→∞} S(L) = 1.

Note that the above definition of algorithmic stability is rather weak; it only requires that the contribution of any single training example to the overall inference process become increasingly negligible as the sample size grows. In addition, it is well-defined even if the learning algorithm is deterministic, because the hypothesis H, even if it is a deterministic function of an entire training set of m observations, remains a stochastic function of any individual observation. We illustrate this concept with the following example:

Example 1. Suppose that observations Zi ∈ {0, 1} are i.i.d. Bernoulli trials with P(Zi = 1) = φ, and that the hypothesis produced by L is the empirical average H = (1/m) Σ_{i=1}^m Zi. Because P(H = k/m | Ztrn = 1) = B(k − 1; φ, m − 1) and P(H = k/m | Ztrn = 0) = B(k; φ, m − 1), it can be shown using Stirling's approximation [17] that the algorithmic stability of this learning algorithm is asymptotically given by S(L) ∼ 1 − 1/√(2πm), which is achieved when φ = 1/2. A more general statement will be proved later in Section 5.

Next, we show that the notion of algorithmic stability in Definition 5 is equivalent to the notion of uniform generalization in Definition 3. Before we do that, we first state the following lemma.

Lemma 1 (Data Processing Inequality). Let A, B, and C be three random variables that satisfy the Markov chain A → B → C. Then: S(A; B) ≤ S(A; C).

Proof. The proof consists of two steps.¹ First, we note that because the Markov chain implies that P(C|B, A) = P(C|B), we have S(A; (B, C)) = S(A; B) by direct substitution into Definition 4. Second, similar to the information-cannot-hurt inequality in information theory [18], it can be shown that S(A; (B, C)) ≤ S(A; C) for any random variables A, B, and C.
This is proved using some algebraic manipulation and the fact that the minimum of sums is always at least the sum of minimums, i.e. min{Σ_i α_i, Σ_i β_i} ≥ Σ_i min{α_i, β_i}. Combining both results yields S(A; B) = S(A; (B, C)) ≤ S(A; C), which is the desired result.

Now, we are ready to state the main result of this paper.

Theorem 1. For any learning algorithm L : ∪_{m=1}^∞ Z^m → H, algorithmic stability as given in Definition 5 is both necessary and sufficient for uniform generalization (see Definition 3). In addition, |R̂true − R̂emp| ≤ 1 − S(H; Ztrn) ≤ 1 − S(L), where R̂true and R̂emp are the true and empirical risks of the learning algorithm defined in Eq. (3) and (4) respectively.

Proof. Here is an outline of the proof. First, because a parametric loss function L(·; H) : Z → [0, 1] is itself a random variable that satisfies the Markov chain Sm → H → L(·; H), it is not independent of Ztrn ∼ Sm. Hence, the empirical risk is given by R̂emp = E_{L(·;H)} E_{Ztrn | L(·;H)} L(Ztrn; H). By contrast, the true risk is given by R̂true = E_{L(·;H)} E_{Ztrn∼P(z)} L(Ztrn; H). The difference is:

R̂true − R̂emp = E_{L(·;H)} [ E_{Ztrn∼P(z)} L(Ztrn; H) − E_{Ztrn | L(·;H)} L(Ztrn; H) ]

To sandwich the right-hand side between an upper and a lower bound, we note that if P1(z) and P2(z) are two distributions defined on the same alphabet Z and F(·) : Z → [0, 1] is a bounded loss function, then |E_{Z∼P1(z)} F(Z) − E_{Z∼P2(z)} F(Z)| ≤ ||P1(z), P2(z)||_T, where ||P, Q||_T is the total variation distance. The proof of this result can be immediately deduced by considering the two regions {z ∈ Z : P1(z) > P2(z)} and {z ∈ Z : P1(z) < P2(z)} separately. This is then used to deduce the inequalities: |R̂true − R̂emp| ≤ 1 − S(L(·; H); Ztrn) ≤ 1 − S(H; Ztrn) ≤ 1 − S(L), where the second inequality follows by the data processing inequality in Lemma 1, whereas the last inequality follows by the definition of algorithmic stability (see Definition 5). This proves that if L is algorithmically stable, i.e.
S(L) → 1 as m → ∞, then |R̂true − R̂emp| converges to zero uniformly across all parametric loss functions. Therefore, algorithmic stability is sufficient for uniform generalization. The converse is proved by showing that for any δ > 0, there exist a bounded parametric loss and a distribution Pδ(z) such that 1 − S(L) − δ ≤ |R̂true − R̂emp| ≤ 1 − S(L). Therefore, algorithmic stability is also necessary for uniform generalization.

5 Interpreting Algorithmic Stability and Uniform Generalization

In this section, we provide several interpretations of algorithmic stability and uniform generalization. In addition, we show how Theorem 1 recovers some classical results in learning theory.

5.1 Algorithmic Stability and Data Processing

The relationship between algorithmic stability and data processing is presented in Lemma 1. Given the random variables A, B, and C and the Markov chain A → B → C, we always have S(A; B) ≤ S(A; C). This presents us with qualitative insights into the design of machine learning algorithms.

First, suppose we have two different hypotheses H1 and H2. We will say that H2 contains less information than H1 if the Markov chain Sm → H1 → H2 holds. For example, if observations Zi ∈ {0, 1} are Bernoulli trials, then H1 ∈ R can be the empirical average as given in Example 1, while H2 ∈ {0, 1} can be the label that occurs most often in the training set. Because H2 = I{H1 ≥ 1/2}, the hypothesis H2 contains strictly less information about the original training set than H1. Formally, we have Sm → H1 → H2. In this case, H2 enjoys a better uniform generalization bound than H1 because of data processing. Intuitively, we know that such a result should hold because H2 is less tied to the original training set than H1. This brings us to the following remark.

¹Detailed proofs are available in the supplementary file.

Remark 1.
We can improve the uniform generalization bound (or equivalently the algorithmic stability) of a learning algorithm by post-processing its inferred hypothesis H in a manner that is conditionally independent of the original training set given H.

Example 2. Post-processing hypotheses is a common technique used in machine learning. This includes sparsifying the coefficient vector w ∈ R^d in linear methods, where w_j is set to zero if it has a small absolute magnitude. It also includes methods that have been proposed to reduce the number of support vectors in SVM by exploiting linear dependence [19]. By the data processing inequality, such methods improve algorithmic stability and uniform generalization. Needless to mention, better generalization does not immediately translate into a smaller true risk. This is because the empirical risk itself may increase when the inferred hypothesis is post-processed independently of the original training set.

Second, if the Markov chain A → B → C holds, we also obtain S(A; C) ≥ S(B; C) by applying the data processing inequality to the reversed Markov chain C → B → A. As a result, we can improve algorithmic stability by contaminating training examples with artificial noise prior to learning. This is because if Ŝm is a perturbed version of a training set Sm, then Sm → Ŝm → H implies that S(Ztrn; H) ≥ S(Ẑtrn; H), where Ztrn ∼ Sm and Ẑtrn ∼ Ŝm are random training examples drawn uniformly at random from each training set respectively. This brings us to the following remark:

Remark 2. We can improve the algorithmic stability of a learning algorithm by introducing artificial noise to training examples, and applying the learning algorithm on the perturbed training set.

Example 3. Corrupting training examples with artificial noise, as in the recent dropout method, is a popular technique in neural networks for improving generalization [20]. By the data processing inequality, such methods indeed improve algorithmic stability and uniform generalization.
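The claims of Example 1 and Remark 1 can be checked exactly in the Bernoulli setting above (our numerical sketch; all function names are ours): S(H1; Ztrn) is an average of overlaps of binomial pmfs, the post-processed majority label H2 = I{H1 ≥ 1/2} is never less stable than H1, and at φ = 1/2 the stability of H1 matches the asymptotic rate 1 − 1/√(2πm):

```python
import math

def binom(k, n, p):
    """Binomial pmf B(k; p, n), zero outside 0 <= k <= n."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def stability_mean(phi, m):
    """Exact S(H1; Ztrn) for the empirical-mean hypothesis H1 = (1/m) sum_i Z_i."""
    # P(H1 = k/m | Ztrn = 1) = B(k-1; phi, m-1);  P(H1 = k/m | Ztrn = 0) = B(k; phi, m-1)
    s_one = sum(min(binom(k, m, phi), binom(k - 1, m - 1, phi)) for k in range(m + 1))
    s_zero = sum(min(binom(k, m, phi), binom(k, m - 1, phi)) for k in range(m + 1))
    return phi * s_one + (1 - phi) * s_zero

def stability_majority(phi, m):
    """Exact S(H2; Ztrn) for the post-processed majority label H2 = I{H1 >= 1/2}."""
    q = sum(binom(k, m, phi) for k in range(m + 1) if 2 * k >= m)          # P(H2 = 1)
    q1 = sum(binom(j, m - 1, phi) for j in range(m) if 2 * (j + 1) >= m)   # P(H2=1 | Ztrn=1)
    q0 = sum(binom(j, m - 1, phi) for j in range(m) if 2 * j >= m)         # P(H2=1 | Ztrn=0)
    overlap = lambda a, b: min(a, b) + min(1 - a, 1 - b)  # overlap of two Bernoulli pmfs
    return phi * overlap(q, q1) + (1 - phi) * overlap(q, q0)

phi, m = 0.5, 400
s1, s2 = stability_mean(phi, m), stability_majority(phi, m)
assert s2 >= s1 - 1e-12                                          # Remark 1 (data processing)
assert abs(s1 - (1 - 1 / math.sqrt(2 * math.pi * m))) < 1e-3     # Example 1's rate
```

At φ = 1/2 the two stabilities are nearly equal, so post-processing improves stability only marginally here; the data processing inequality guarantees only that it cannot hurt.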
5.2 Algorithmic Stability and the Size of the Observation Space

Next, we look into how the size of the observation space Z influences algorithmic stability. First, we start with the following definition:

Definition 6 (Lazy Learning). A learning algorithm L is called lazy if its hypothesis H ∈ H is mapped one-to-one with the training set Sm, i.e. the mapping between Sm and H is injective.

A lazy learner is so called because its hypothesis is equivalent to the original training set in its information content; hence, no learning actually takes place. One example is instance-based learning, where H = Sm. Despite their simple nature, lazy learners are useful in practice. They are useful theoretical tools as well. In particular, because of the equivalence H ≡ Sm and the data processing inequality, the algorithmic stability of a lazy learner provides a lower bound on the stability of any possible learning algorithm. Therefore, we can relate algorithmic stability (uniform generalization) to the size of the observation space by quantifying the algorithmic stability of lazy learners. Because the size of Z is usually infinite, however, we introduce the following definition of effective set size.

Definition 7. In a countable space Z endowed with a probability mass function P(z), the effective size of Z w.r.t. P(z) is defined by: Ess[Z; P(z)] := 1 + (Σ_{z∈Z} √(P(z)(1 − P(z))))².

At one extreme, if P(z) is uniform over a finite alphabet Z, then Ess[Z; P(z)] = |Z|. At the other extreme, if P(z) is a Kronecker delta distribution, then Ess[Z; P(z)] = 1. As proved next, this notion of effective set size determines the rate of convergence of an empirical probability mass function to its true distribution when the distance is measured in the total variation sense. As a result, it allows us to relate algorithmic stability to a property of the observation space Z.

Theorem 2. Let Z be a countable space endowed with a probability mass function P(z). Let Sm be a set of m i.i.d. samples Zi ∼ P(z).
Define P_Sm(z) to be the empirical probability mass function induced by drawing samples uniformly at random from Sm. Then:

E_{Sm} ||P(z), P_Sm(z)||_T = √((Ess[Z; P(z)] − 1)/(2πm)) + o(1/√m),

where 1 ≤ Ess[Z; P(z)] ≤ |Z| is the effective size of Z (see Definition 7). In addition, for any learning algorithm L : ∪_{m=1}^∞ Z^m → H, we have

S(H; Ztrn) ≥ 1 − √((Ess[Z; P(z)] − 1)/(2πm)) − o(1/√m),

where the bound is achieved by lazy learners (see Definition 6).²

Proof. Here is an outline of the proof. First, we know that P(Sm) = C(m; m1, m2, ...) p1^{m1} p2^{m2} ···, where C(m; m1, m2, ...) is the multinomial coefficient. Using the relation ||P, Q||_T = (1/2) ||P − Q||_1, the multinomial series, and De Moivre's formula for the mean deviation of the binomial random variable [22], it can be shown with some algebraic manipulation that:

E_{Sm} ||P(z), P_Sm(z)||_T = (1/m) Σ_{k=1,2,...} (1 − p_k)^{(1−p_k)m} p_k^{1+m p_k} · m! / ((p_k m)! ((1 − p_k)m − 1)!)

Using Stirling's approximation to the factorial [17], we obtain the simple asymptotic expression:

E_{Sm} ||P(z), P_Sm(z)||_T ∼ (1/2) Σ_{k=1,2,3,...} √(2 p_k (1 − p_k)/(πm)) = √((Ess[Z; P(z)] − 1)/(2πm)),

which is tight due to the tightness of the Stirling approximation. The rest of the theorem follows from the Markov chain Sm → Sm → H, the data processing inequality, and Definition 6.

Corollary 1. Given the conditions of Theorem 2, if Z is in addition finite (i.e. |Z| < ∞), then for any learning algorithm L, we have: S(L) ≥ 1 − √((|Z| − 1)/(2πm)) − o(1/√m).

Proof. Because in a finite observation space Z, the maximum effective set size (see Definition 7) is |Z|, which is attained at the uniform distribution P(z) = 1/|Z|.

Intuitively speaking, Theorem 2 and its corollary state that in order to guarantee good uniform generalization for all possible learning algorithms, the number of observations must be sufficiently large to cover the entire effective size of the observation space Z. Needless to mention, this is difficult to achieve in practice, so the algorithmic stability of machine learning algorithms must be controlled in order to guarantee good generalization from a few empirical observations. Similarly, the uniform generalization bound can be improved by reducing the effective size of the observation space, such as by using dimensionality reduction methods.

5.3 Algorithmic Stability and the Complexity of the Hypothesis Space

Finally, we look into the hypothesis space and how it influences algorithmic stability. First, we look into the role of the size of the hypothesis space. This is formalized in the following theorem.

Theorem 3. Denote by H ∈ H the hypothesis inferred by a learning algorithm L : ∪_{m=1}^∞ Z^m → H. Then, the following bound on algorithmic stability always holds:

S(L) ≥ 1 − √(H(H)/(2m)) ≥ 1 − √(log |H|/(2m)),

where H(H) is the Shannon entropy measured in nats (i.e. using natural logarithms).

Proof. The proof is information-theoretic. If we let I(X; Y) be the mutual information between the random variables X and Y and let Sm = {Z1, Z2, . . . , Zm} be a random choice of a training set, we have:

I(Sm; H) = H(Sm) − H(Sm | H) = [Σ_{i=1}^m H(Zi)] − [H(Z1|H) + H(Z2|Z1, H) + ···]

Because conditioning reduces entropy, i.e. H(A|B) ≤ H(A) for any random variables A and B, we have:

I(Sm; H) ≥ Σ_{i=1}^m [H(Zi) − H(Zi | H)] = m [H(Ztrn) − H(Ztrn | H)]

Therefore:

I(Ztrn; H) ≤ I(Sm; H)/m    (5)

²A special case of Theorem 2 was proved by de Moivre in the 1730s, who showed that the empirical mean of i.i.d. Bernoulli trials with a probability of success φ converges to the true mean at a rate of √(2φ(1 − φ)/(πm)) on average. This is believed to be the first appearance of the square-root law in statistical inference in the literature [21]. Because the effective set size of the Bernoulli distribution, according to Definition 7, is given by 1 + 4φ(1 − φ), Theorem 2 agrees with, and in fact generalizes, de Moivre's result.
Next, we use Pinsker's inequality [18], which states that for any probability distributions P and Q: ||P, Q||_T ≤ √(D(P || Q)/2), where ||P, Q||_T is the total variation distance and D(P || Q) is the Kullback-Leibler divergence measured in nats (i.e. using natural logarithms). If we recall that S(Ztrn; H) = 1 − ||P(Ztrn) P(H), P(Ztrn, H)||_T while the mutual information is I(Ztrn; H) = D(P(Ztrn, H) || P(Ztrn) P(H)), we deduce from Pinsker's inequality and Eq. (5):

S(Ztrn; H) = 1 − ||P(Ztrn) P(H), P(Ztrn, H)||_T ≥ 1 − √(I(Ztrn; H)/2) ≥ 1 − √(I(Sm; H)/(2m)) ≥ 1 − √(H(H)/(2m)) ≥ 1 − √(log |H|/(2m))

In the last line, we used the fact that I(X; Y) ≤ H(X) for any random variables X and Y.

Theorem 3 re-establishes the classical PAC result on the finite hypothesis space [23]. In terms of algorithmic stability, a learning algorithm will enjoy high stability if the size of the hypothesis space is small. In terms of uniform generalization, it states that the generalization risk of a learning algorithm is bounded from above, uniformly across all parametric loss functions, by √(H(H)/(2m)) ≤ √(log |H|/(2m)), where H(H) is the Shannon entropy of H.

Next, we relate algorithmic stability to the Vapnik-Chervonenkis (VC) dimension. Despite the fact that the VC dimension is defined on binary-valued functions whereas algorithmic stability is a functional of probability distributions, there exists a connection between the two concepts. To show this, we first introduce a notion of an induced concept class that exists for any learning algorithm L:

Definition 8. The concept class C induced by a learning algorithm L : ∪_{m=1}^∞ Z^m → H is defined to be the set of total Boolean functions c(z) = I{P(Ztrn = z | H) ≥ P(Ztrn = z)} for all H ∈ H.

Intuitively, every hypothesis H ∈ H induces a total partition on the observation space Z given by the Boolean function in Definition 8.
That is, H splits Z into two disjoint sets: the set of values in Z that are, a posteriori, less likely to have been present in the training set than before, given that the inferred hypothesis is H, and the set of all other values. The complexity (richness) of the induced concept class C is related to algorithmic stability via the VC dimension.

Theorem 4. Let L : ∪_{m=1}^∞ Z^m → H be a learning algorithm with an induced concept class C. Let d_VC(C) be the VC dimension of C. Then, the following bound holds if m > d_VC(C) + 1:

S(L) ≥ 1 − (4 + √(d_VC(C) (1 + log(2m)))) / √(2m)

In particular, L is algorithmically stable if its induced concept class C has a finite VC dimension.

Proof. The proof relies on the fact that the algorithmic stability S(L) is bounded from below by 1 − sup_{P(z)} { E_{Sm} sup_{h∈H} |E_{Z∼P(z)} c_h(Z) − E_{Z∼Sm} c_h(Z)| }, where c_h(z) = I{P(Ztrn = z | h) ≥ P(Ztrn = z)}. The final bound follows by applying uniform convergence results [23].

6 Conclusions

In this paper, we showed that a probabilistic notion of algorithmic stability is equivalent to uniform generalization. In informal terms, a learning algorithm is algorithmically stable if the impact of a single training example on the probability distribution of the final hypothesis always vanishes in the limit of large training sets. In other words, the inference process never depends heavily on any single training example. If algorithmic stability holds, then the learning algorithm generalizes well regardless of the choice of the parametric loss function. We also provided several interpretations of this result. For instance, the relationship between algorithmic stability and data processing reveals that algorithmic stability can be improved by either post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning.
In addition, we established a relationship between algorithmic stability and the effective size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connected algorithmic stability to the complexity (richness) of the hypothesis space, which re-establishes the classical PAC result that the complexity of the hypothesis space should be controlled in order to improve stability and, hence, generalization.

References

[1] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, September 1999.
[2] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, pp. 273–297, 1995.
[3] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth, "Learnability and the Vapnik-Chervonenkis dimension," Journal of the ACM (JACM), vol. 36, no. 4, pp. 929–965, 1989.
[4] M. Talagrand, "Majorizing measures: the generic chaining," The Annals of Probability, vol. 24, no. 3, pp. 1049–1103, 1996.
[5] D. A. McAllester, "PAC-Bayesian stochastic model selection," Machine Learning, vol. 51, pp. 5–21, 2003.
[6] O. Bousquet and A. Elisseeff, "Stability and generalization," The Journal of Machine Learning Research (JMLR), vol. 2, pp. 499–526, 2002.
[7] P. L. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," The Journal of Machine Learning Research (JMLR), vol. 3, pp. 463–482, 2002.
[8] J.-Y. Audibert and O. Bousquet, "Combining PAC-Bayesian and generic chaining bounds," The Journal of Machine Learning Research (JMLR), vol. 8, pp. 863–889, 2007.
[9] H. Xu and S. Mannor, "Robustness and generalization," Machine Learning, vol. 86, no. 3, pp. 391–423, 2012.
[10] A. Elisseeff, M. Pontil, et al., "Leave-one-out error and stability of learning algorithms with applications," NATO-ASI Series on Learning Theory and Practice, Science Series Sub Series III: Computer and Systems Sciences, 2002.
[11] S. Kutin and P.
Niyogi, "Almost-everywhere algorithmic stability and generalization error," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI), 2002.
[12] T. Poggio, R. Rifkin, S. Mukherjee, and P. Niyogi, "General conditions for predictivity in learning theory," Nature, vol. 428, pp. 419–422, 2004.
[13] M. Kearns and D. Ron, "Algorithmic stability and sanity-check bounds for leave-one-out cross-validation," Neural Computation, vol. 11, no. 6, pp. 1427–1453, 1999.
[14] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan, "Learnability, stability and uniform convergence," The Journal of Machine Learning Research (JMLR), vol. 11, pp. 2635–2670, 2010.
[15] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[16] V. Vapnik and O. Chapelle, "Bounds on error expectation for support vector machines," Neural Computation, vol. 12, no. 9, pp. 2013–2036, 2000.
[17] H. Robbins, "A remark on Stirling's formula," American Mathematical Monthly, pp. 26–29, 1955.
[18] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley & Sons, 1991.
[19] T. Downs, K. E. Gates, and A. Masters, "Exact simplification of support vector solutions," JMLR, vol. 2, pp. 293–297, 2002.
[20] S. Wager, S. Wang, and P. S. Liang, "Dropout training as adaptive regularization," in NIPS, pp. 351–359, 2013.
[21] S. M. Stigler, The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press, 1986.
[22] P. Diaconis and S. Zabell, "Closed form summation for classical distributions: Variations on a theme of de Moivre," Statistical Science, vol. 6, no. 3, pp. 284–302, 1991.
[23] S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
Learning with Group Invariant Features: A Kernel Perspective

Youssef Mroueh, IBM Watson Group, mroueh@us.ibm.com
Stephen Voinea*, CBMM, MIT, voinea@mit.edu
Tomaso Poggio, CBMM, MIT, tp@ai.mit.edu
*Co-first author

Abstract

We analyze in this paper a random feature map based on a theory of invariance (I-theory) introduced in [1]. More specifically, a group invariant signal signature is obtained through cumulative distributions of group-transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as we show that this feature map defines an expected Haar-integration kernel that is invariant to the specified group action. We show how this non-linear random feature map approximates this group invariant kernel uniformly on a set of N points. Moreover, we show that it defines a function space that is dense in the equivalent invariant Reproducing Kernel Hilbert Space. Finally, we quantify error rates of the convergence of the empirical risk minimization, as well as the reduction in the sample complexity of a learning algorithm using such an invariant representation for signal classification, in a classical supervised learning setting.

1 Introduction

Encoding signals or building similarity kernels that are invariant to the action of a group is a key problem in unsupervised learning, as it reduces the complexity of the learning task and mimics how our brain represents information invariantly to symmetries and various nuisance factors (changes in lighting in image classification and pitch variation in speech recognition) [1, 2, 3, 4]. Convolutional neural networks [5, 6] achieve state-of-the-art performance in many computer vision and speech recognition tasks, but require a large amount of labeled examples as well as augmented data, where we reflect symmetries of the world through virtual examples [7, 8] obtained by applying identity-preserving transformations such as shearing, rotation, translation, etc., to the training data.
In this work, we adopt the approach of [1], where the representation of the signal is designed to reflect its invariant properties and to model the world's symmetries with group actions. The ultimate aim is to bridge unsupervised learning of invariant representations with invariant kernel methods, where we can use tools from classical supervised learning to easily address the statistical consistency and sample complexity questions [9, 10]. Indeed, many invariant kernel methods and related invariant kernel networks have been proposed. We refer the reader to the related work section for a review (Section 5), and we start by showing how to accomplish this invariance through group-invariant Haar-integration kernels [11], and then show how random features derived from a memory-based theory of invariances introduced in [1] approximate such a kernel.

1.1 Group Invariant Kernels

We start by reviewing group-invariant Haar-integration kernels introduced in [11], and their use in a binary classification problem. This section highlights the conceptual advantages of such kernels as well as their practical inconvenience, putting into perspective the advantage of approximating them with explicit and invariant random feature maps.

Invariant Haar-Integration Kernels. We consider a subset X of the hypersphere in d dimensions, S^{d−1}. Let ρ_X be a measure on X. Consider a kernel k0 on X, such as a radial basis function kernel. Let G be a group acting on X, with a normalized Haar measure µ. G is assumed to be a compact and unitary group. Define an invariant kernel K between x, z ∈ X through Haar integration [11] as follows:

K(x, z) = ∫_G ∫_G k0(gx, g′z) dµ(g) dµ(g′).    (1)

As we are integrating over the entire group, it is easy to see that K(g′x, gz) = K(x, z), ∀g, g′ ∈ G, ∀x, z ∈ X. Hence the Haar-integration kernel is invariant to the group action. The symmetry of K is obvious. Moreover, if k0 is a positive definite kernel, it follows that K is positive definite as well [11].
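As an illustration of Eq. (1) (our sketch, not code from the paper), the following computes a Haar-integration kernel exactly for a small finite group, the cyclic shifts of a vector's coordinates, with a Gaussian RBF base kernel k0, and checks the invariance K(gx, g′z) = K(x, z). For a finite group the Haar integral reduces to a uniform average over group elements; we ignore the hypersphere normalization of X for simplicity:

```python
import math
from itertools import product

def rbf(x, z, gamma=1.0):
    """Base kernel k0: Gaussian RBF."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def shift(x, g):
    """Group action: circular shift of the coordinates by g positions."""
    return x[g:] + x[:g]

def haar_kernel(x, z, gamma=1.0):
    """K(x, z) = (1/|G|^2) * sum_{g, g'} k0(g x, g' z), i.e. Eq. (1) for the
    finite cyclic-shift group, where the Haar integral is a uniform average."""
    d = len(x)
    return sum(rbf(shift(x, g), shift(z, gp), gamma)
               for g, gp in product(range(d), repeat=2)) / d ** 2

x = (0.9, 0.1, 0.4, 0.0)  # arbitrary example vectors
z = (0.2, 0.7, 0.3, 0.5)
K0 = haar_kernel(x, z)
for g in range(len(x)):   # invariance: K(g x, g' z) = K(x, z) for all g, g'
    for gp in range(len(z)):
        assert abs(haar_kernel(shift(x, g), shift(z, gp)) - K0) < 1e-12
```

The invariance holds because composing the action with a fixed group element merely permutes the terms of the double sum; the same argument applies verbatim to the integral over a compact group.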
One can see the Haar-integration kernel framework as another form of data augmentation, since we have to produce group-transformed points in order to compute the kernel.

Invariant Decision Boundary. Turning now to a binary classification problem, we assume that we are given a labeled training set S = {(x_i, y_i) | x_i ∈ X, y_i ∈ Y = {±1}}_{i=1}^N. In order to learn a decision function f : X → Y, we minimize the following empirical risk induced by an L-Lipschitz, convex loss function V with V′(0) < 0 [12]:

min_{f∈H_K} Ê_V(f) := (1/N) Σ_{i=1}^N V(y_i f(x_i)),

where we restrict f to belong to the hypothesis class induced by the invariant kernel K, the so-called Reproducing Kernel Hilbert Space H_K. The representer theorem [13] shows that the solution of such a problem, i.e. the optimal decision boundary f*_N, has the following form: f*_N(x) = Σ_{i=1}^N α*_i K(x, x_i). Since the kernel K is group-invariant, it follows that f*_N(gx) = Σ_{i=1}^N α*_i K(gx, x_i) = Σ_{i=1}^N α*_i K(x, x_i) = f*_N(x), ∀g ∈ G. Hence the decision boundary f*_N is group-invariant as well, and we have f*_N(gx) = f*_N(x), ∀g ∈ G, ∀x ∈ X.

Reduced Sample Complexity. We have shown that a group-invariant kernel induces a group-invariant decision boundary, but how does this translate to the sample complexity of the learning algorithm? To answer this question, we will assume that the input set X has the following structure: X = X0 ∪ GX0, where GX0 = {z | z = gx, x ∈ X0, g ∈ G \ {e}} and e is the identity group element. This structure implies that for a function f in the invariant RKHS H_K, we have: ∀z ∈ GX0, ∃x ∈ X0, ∃g ∈ G such that z = gx and f(z) = f(x). Let ρ_y(x) = P(Y = y | x) be the label posteriors. We assume that ρ_y(gx) = ρ_y(x), ∀g ∈ G. This is a natural assumption, since the label is unchanged under the group action. Assume that the set X is endowed with a measure ρ_X that is also group-invariant.
Let $f$ be the group-invariant decision function and consider the expected risk induced by the loss $V$, defined as follows:
$$E_V(f) = \int_{\mathcal{X}} \sum_{y \in \mathcal{Y}} V(yf(x))\, \rho_y(x)\, \rho_{\mathcal{X}}(x)\, dx. \quad (2)$$
$E_V(f)$ is a proxy for the misclassification risk [12]. Using the invariance of the function class and of the data distribution, we have:
$$E_V(f) = \int_{\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(x))\, \rho_y(x)\, \rho_{\mathcal{X}}(x)\, dx + \int_{G\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(z))\, \rho_y(z)\, \rho_{\mathcal{X}}(z)\, dz$$
$$= \int_G d\mu(g) \int_{\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(gx))\, \rho_y(gx)\, \rho_{\mathcal{X}}(x)\, dx = \int_G d\mu(g) \int_{\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(x))\, \rho_y(x)\, \rho_{\mathcal{X}}(x)\, dx \quad \text{(by invariance of } f, \rho_y, \rho_{\mathcal{X}})$$
$$= \int_{\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(x))\, \rho_y(x)\, \rho_{\mathcal{X}}(x)\, dx.$$
Hence, given a kernel invariant to a group action that is identity-preserving, it is sufficient to minimize the empirical risk on the core set $\mathcal{X}_0$, and it generalizes to samples in $G\mathcal{X}_0$. Imagine that $\mathcal{X}$ is finite with cardinality $|\mathcal{X}|$, and that the cardinality of the core set $\mathcal{X}_0$ is a small fraction of it: $|\mathcal{X}_0| = \alpha|\mathcal{X}|$ with $0 < \alpha < 1$. Then, when we sample training points from $\mathcal{X}_0$, the maximum size of the training set is $N = \alpha|\mathcal{X}| \ll |\mathcal{X}|$, yielding a reduction in the sample complexity.

1.2 Contributions

We have just reviewed the group-invariant Haar-integration kernel. In summary, a group-invariant kernel implies the existence of a decision function that is invariant to the group action, as well as a reduction in sample complexity due to sampling training points from a reduced set, a.k.a. the core set $\mathcal{X}_0$. However, kernel methods with Haar-integration kernels come at a very expensive computational price at both training and test time: computing the kernel is cumbersome, as we have to integrate over the group and produce virtual examples by transforming points explicitly through the group action. Moreover, the training complexity of kernel methods scales cubically in the sample size. These practical considerations make the usefulness of such kernels very limited. The contributions of this paper are four-fold:

1.
We first show that a non-linear random feature map $\Phi : \mathcal{X} \to \mathbb{R}^D$ derived from a memory-based theory of invariances introduced in [1] induces an expected group-invariant Haar-integration kernel $K$: for fixed points $x, z \in \mathcal{X}$, we have $\mathbb{E}\,\langle\Phi(x), \Phi(z)\rangle = K(x, z)$, where $K$ satisfies $K(gx, g'z) = K(x, z)$ for all $g, g' \in G$ and $x, z \in \mathcal{X}$.

2. We show a Johnson-Lindenstrauss type result, holding uniformly on a set of $N$ points, that assesses the concentration of this random feature map around its expected induced kernel: for sufficiently large $D$, $\langle\Phi(x), \Phi(z)\rangle \approx K(x, z)$ uniformly on the $N$-point set.

3. We show that, with a linear model, an invariant decision function can be learned in this random feature space by sampling points from the core set $\mathcal{X}_0$, i.e. $f^*_N(x) \approx \langle w^*, \Phi(x)\rangle$, and that it generalizes to unseen points in $G\mathcal{X}_0$, reducing the sample complexity. Moreover, we show that those features define a function space that approximates a dense subset of the invariant RKHS, and assess the error rates of empirical risk minimization using such random features.

4. We demonstrate the validity of these claims on three datasets: text (artificial), vision (MNIST), and speech (TIDIGITS).

2 From Group Invariant Kernels to Feature Maps

In this paper we show that a random feature map based on I-theory [1], $\Phi : \mathcal{X} \to \mathbb{R}^D$, approximates a group-invariant Haar-integration kernel $K$ having the form given in Equation (1): $\langle\Phi(x), \Phi(z)\rangle \approx K(x, z)$. We start with some notation that will be useful for defining the feature map. Denote the cumulative distribution function of a random variable $X$ by $F_X(\tau) = P(X \leq \tau)$. Fix $x \in \mathcal{X}$, let $g \in G$ be a random variable drawn according to the normalized Haar measure $\mu$, and let $t$ be a random template whose distribution will be defined later. For $s > 0$, define the following truncated cumulative distribution function (CDF) of the dot product $\langle x, gt \rangle$:
$$\psi(x, t, \tau) = P_g(\langle x, gt \rangle \leq \tau) = F_{\langle x, gt \rangle}(\tau), \quad \tau \in [-s, s],\ x \in \mathcal{X}.$$
Let $\varepsilon \in (0, 1)$.
We consider the following Gaussian sampling with rejection for the templates $t$:
$$t = n \sim \mathcal{N}\!\left(0, \tfrac{1}{d} I_d\right) \text{ if } \|n\|_2^2 < 1 + \varepsilon, \qquad t = \bot \text{ otherwise}.$$
The reason behind this sampling is to keep the range of $\langle x, gt \rangle$ under control: the squared norm $\|n\|_2^2$ is bounded by $1 + \varepsilon$ with high probability by a classical concentration result (see the proof of Theorem 1 for more details). The group being unitary and $x \in \mathbb{S}^{d-1}$, we know that $|\langle x, gt \rangle| \leq \|n\|_2 < \sqrt{1 + \varepsilon} \leq 1 + \varepsilon$ for $\varepsilon \in (0, 1)$.

Remark 1. We can also consider templates $t$ drawn uniformly on the unit sphere $\mathbb{S}^{d-1}$; such templates can be drawn as $t = \nu / \|\nu\|_2$, $\nu \sim \mathcal{N}(0, I_d)$. Since the norm of a Gaussian vector is highly concentrated around its mean $\sqrt{d}$, we can use the Gaussian sampling with rejection. Results proved for Gaussian templates (with rejection) hold true for templates drawn uniformly on the sphere, with different constants.

Define the following kernel function:
$$K_s(x, z) = \mathbb{E}_t \int_{-s}^{s} \psi(x, t, \tau)\, \psi(z, t, \tau)\, d\tau,$$
where $s$ is fixed throughout the paper to $s = 1 + \varepsilon$, since the Gaussian sampling with rejection controls the dot product to be in that range. Let $\bar{g} \in G$. As the group is closed, we have $\psi(\bar{g}x, t, \tau) = \int_G \mathbb{1}_{\langle g\bar{g}x, t \rangle \leq \tau}\, d\mu(g) = \int_G \mathbb{1}_{\langle gx, t \rangle \leq \tau}\, d\mu(g) = \psi(x, t, \tau)$, and hence $K_s(gx, g'z) = K_s(x, z)$ for all $g, g' \in G$. It is clear now that $K_s$ is a group-invariant kernel. In order to approximate $K_s$, we sample $|G|$ elements uniformly and independently from the group $G$, i.e. $g_i$, $i = 1 \dots |G|$, and define the normalized empirical CDF:
$$\varphi(x, t, \tau) = \frac{1}{|G|\sqrt{m}} \sum_{i=1}^{|G|} \mathbb{1}_{\langle g_i t, x \rangle \leq \tau}, \quad -s \leq \tau \leq s.$$
We discretize the continuous threshold $\tau$ as follows:
$$\varphi\!\left(x, t, \frac{sk}{n}\right) = \frac{\sqrt{s}}{\sqrt{nm}\,|G|} \sum_{i=1}^{|G|} \mathbb{1}_{\langle g_i t, x \rangle \leq \frac{s}{n}k}, \quad -n \leq k \leq n.$$
We sample $m$ templates independently according to the Gaussian sampling with rejection, $t_j$, $j = 1 \dots m$. We are now ready to define the random feature map $\Phi$:
$$\Phi(x) = \left[\varphi\!\left(x, t_j, \frac{sk}{n}\right)\right]_{j=1\dots m,\ k=-n\dots n} \in \mathbb{R}^{(2n+1) \times m}.$$
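The construction above can be sketched for a small finite group. This toy (our own illustration; cyclic shifts as the group, plain Gaussian templates with no rejection step, and a simplified normalization) pools empirical CDFs of the projections onto group-transformed templates, and checks that the map is exactly shift-invariant when the full group is enumerated:

```python
import numpy as np

def phi(x, templates, n_bins=10, s=1.5):
    # CDF pooling: for each template t, the empirical CDF of <g_i t, x>
    # over all cyclic shifts g_i, evaluated at 2*n_bins+1 thresholds.
    d = len(x)
    taus = np.linspace(-s, s, 2 * n_bins + 1)
    feats = []
    for t in templates:
        proj = np.array([np.dot(np.roll(t, i), x) for i in range(d)])
        cdf = (proj[None, :] <= taus[:, None]).mean(axis=1)
        feats.append(cdf)
    return np.concatenate(feats) / np.sqrt(len(templates))

rng = np.random.default_rng(2)
d, m = 6, 20
templates = rng.normal(scale=1 / np.sqrt(d), size=(m, d))
x = rng.normal(size=d); x /= np.linalg.norm(x)
# Enumerating the full cyclic group makes the features exactly invariant:
# shifting x only permutes the set of projections, leaving each CDF unchanged.
assert np.allclose(phi(np.roll(x, 3), templates), phi(x, templates))
```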
It is easy to see that
$$\lim_{n \to \infty} \mathbb{E}_{t,g}\, \langle\Phi(x), \Phi(z)\rangle_{\mathbb{R}^{(2n+1) \times m}} = \lim_{n \to \infty} \mathbb{E}_{t,g} \sum_{j=1}^{m} \sum_{k=-n}^{n} \varphi\!\left(x, t_j, \frac{sk}{n}\right) \varphi\!\left(z, t_j, \frac{sk}{n}\right) = K_s(x, z).$$
In Section 3 we study the geometric information captured by this kernel by stating explicitly the similarity it computes.

Remark 2 (Efficiency of the representation). 1) The main advantage of such a feature map, as outlined in [1], is that we store transformed templates in order to compute $\Phi$, whereas if we wanted to compute an invariant kernel of type $K$ (Equation (1)), we would need to explicitly transform the points; the latter is computationally expensive. Storing transformed templates and computing the signature $\Phi$ is much more efficient. It falls in the category of memory-based learning, and is biologically plausible [1]. 2) As $|G|$, $m$, and $n$ get large enough, the feature map $\Phi$ approximates a group-invariant kernel, as we will see in the next section.

3 An Equivalent Expected Kernel and a Uniform Concentration Result

In this section we present our main results, with proofs given in the supplementary material. Theorem 1 shows that the random feature map $\Phi$, defined in the previous section, corresponds in expectation to a group-invariant Haar-integration kernel $K_s(x, z)$. Moreover, $s - K_s(x, z)$ computes the average pairwise distance between all points in the orbits of $x$ and $z$, where the orbit is the collection of all group-transformations of a given point $x$: $O_x = \{gx,\ g \in G\}$.

Theorem 1 (Expectation). Let $\varepsilon \in (0, 1)$ and $x, z \in \mathcal{X}$. Define the distance $d_G$ between the orbits $O_x$ and $O_z$:
$$d_G(x, z) = \frac{1}{\sqrt{2\pi d}} \int_G \int_G \|gx - g'z\|_2\, d\mu(g)\, d\mu(g'),$$
and the group-invariant expected kernel
$$K_s(x, z) = \lim_{n \to \infty} \mathbb{E}_{t,g}\, \langle\Phi(x), \Phi(z)\rangle_{\mathbb{R}^{(2n+1) \times m}} = \mathbb{E}_t \int_{-s}^{s} \psi(x, t, \tau)\, \psi(z, t, \tau)\, d\tau, \quad s = 1 + \varepsilon.$$
1. The following inequality holds with probability 1:
$$\varepsilon - \delta_2(d, \varepsilon) \leq K_s(x, z) - (1 - d_G(x, z)) \leq \varepsilon + \delta_1(d, \varepsilon), \quad (3)$$
where $\delta_1(\varepsilon, d) = \frac{e^{-d\varepsilon^2/16}}{\sqrt{d}} - \frac{1}{2}\, \frac{e^{-\varepsilon d/2} (1+\varepsilon)^{d/2}}{\sqrt{d}}$ and $\delta_2(\varepsilon, d) = \frac{e^{-d\varepsilon^2/16}}{\sqrt{d}} + (1 + \varepsilon)\, e^{-d\varepsilon^2/8}$.

2.
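The orbit distance $d_G$ in Theorem 1 is easy to compute exactly for a small finite group. This sketch (ours, for cyclic shifts; the constant $1/\sqrt{2\pi d}$ is omitted) evaluates the average pairwise distance between two orbits and confirms it depends only on the orbits, not on the representatives:

```python
import numpy as np

def orbit_distance(x, z):
    # d_G(x, z): average pairwise distance between the orbits of x and z
    # under cyclic shifts (normalizing constant omitted).
    d = len(x)
    ox = np.stack([np.roll(x, i) for i in range(d)])
    oz = np.stack([np.roll(z, j) for j in range(d)])
    return np.linalg.norm(ox[:, None, :] - oz[None, :, :], axis=-1).mean()

rng = np.random.default_rng(3)
x, z = rng.normal(size=5), rng.normal(size=5)
# d_G is a function of the orbits only: shifting either input changes nothing.
assert np.isclose(orbit_distance(np.roll(x, 2), np.roll(z, 4)),
                  orbit_distance(x, z))
```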
For any $\varepsilon \in (0, 1)$, as the dimension $d \to \infty$ we have $\delta_1(\varepsilon, d) \to 0$ and $\delta_2(\varepsilon, d) \to 0$, and asymptotically $K_s(x, z) \to 1 - d_G(x, z) + \varepsilon = s - d_G(x, z)$.
3. $K_s$ is symmetric and positive semi-definite.

Remark 3. 1) $\varepsilon$, $\delta_1(d, \varepsilon)$, and $\delta_2(d, \varepsilon)$ are not errors due to results holding with high probability; they are due to the truncation and are a technical artifact of the proof. 2) Local invariance can be defined by restricting the sampling of the group elements to a subset $\mathcal{G} \subset G$. Assuming that for each $g \in \mathcal{G}$ we also have $g^{-1} \in \mathcal{G}$, the equivalent kernel has asymptotically the following form:
$$K_s(x, z) \approx s - \frac{1}{\sqrt{2\pi d}} \int_{\mathcal{G}} \int_{\mathcal{G}} \|gx - g'z\|_2\, d\mu(g)\, d\mu(g').$$
3) The norm-one constraint can be relaxed: let $R = \sup_{x \in \mathcal{X}} \|x\|_2 < \infty$; then we can set $s = R(1 + \varepsilon)$, and
$$-\delta_2(d, \varepsilon) \leq K_s(x, z) - (R(1 + \varepsilon) - d_G(x, z)) \leq \delta_1(d, \varepsilon), \quad (4)$$
where $\delta_1(\varepsilon, d) = R\, \frac{e^{-d\varepsilon^2/16}}{\sqrt{d}} - \frac{R}{2}\, \frac{e^{-\varepsilon d/2} (1+\varepsilon)^{d/2}}{\sqrt{d}}$ and $\delta_2(\varepsilon, d) = R\, \frac{e^{-d\varepsilon^2/16}}{\sqrt{d}} + R(1 + \varepsilon)\, e^{-d\varepsilon^2/8}$.

Theorem 2 is, in a sense, an invariant Johnson-Lindenstrauss [14] type result, in which we show that the dot product defined by the random feature map $\Phi$, i.e. $\langle\Phi(x), \Phi(z)\rangle$, is concentrated around the invariant expected kernel uniformly on a data set of $N$ points, given a sufficiently large number of templates $m$, of sampled group elements $|G|$, and of bins $n$. The error naturally decomposes into a numerical error $\varepsilon_0$ and statistical errors $\varepsilon_1, \varepsilon_2$ due to the sampling of the templates and of the group elements, respectively.

Theorem 2 (Johnson-Lindenstrauss type theorem, $N$-point set). Let $D = \{x_i \mid x_i \in \mathcal{X}\}_{i=1}^N$ be a finite dataset. Fix $\varepsilon_0, \varepsilon_1, \varepsilon_2, \delta_1, \delta_2 \in (0, 1)$. For a number of bins $n \geq \frac{1}{\varepsilon_0}$, of templates $m \geq \frac{C_1}{\varepsilon_1^2} \log\frac{N}{\delta_1}$, and of group elements $|G| \geq \frac{C_2}{\varepsilon_2^2} \log\frac{Nm}{\delta_2}$, where $C_1, C_2$ are universal numeric constants, we have
$$|\langle\Phi(x_i), \Phi(x_j)\rangle - K_s(x_i, x_j)| \leq \varepsilon_0 + \varepsilon_1 + \varepsilon_2, \quad i, j = 1 \dots N, \quad (5)$$
with probability $1 - \delta_1 - \delta_2$.
Putting together Theorems 1 and 2, the following corollary shows how the group-invariant random feature map $\Phi$ captures the invariant distance between points uniformly on a dataset of $N$ points.

Corollary 1 (Invariant feature maps and distances between orbits). Let $D = \{x_i \mid x_i \in \mathcal{X}\}_{i=1}^N$ be a finite dataset. Fix $\varepsilon_0, \delta \in (0, 1)$. For a number of bins $n \geq \frac{3}{\varepsilon_0}$, of templates $m \geq \frac{9C_1}{\varepsilon_0^2} \log\frac{N}{\delta}$, and of group elements $|G| \geq \frac{9C_2}{\varepsilon_0^2} \log\frac{Nm}{\delta}$, where $C_1, C_2$ are universal numeric constants, we have
$$\varepsilon - \delta_2(d, \varepsilon) - \varepsilon_0 \leq \langle\Phi(x_i), \Phi(x_j)\rangle - (1 - d_G(x_i, x_j)) \leq \varepsilon_0 + \varepsilon + \delta_1(d, \varepsilon), \quad i, j = 1 \dots N, \quad (6)$$
with probability $1 - 2\delta$.

Remark 4. Assuming that the templates are unitary and drawn from a general distribution $p(t)$, the equivalent kernel has the following form:
$$K_s(x, z) = \int_G \int_G d\mu(g)\, d\mu(g') \left( \int \big(s - \max(\langle x, gt \rangle, \langle z, g't \rangle)\big)\, p(t)\, dt \right).$$
Indeed, when we use the Gaussian sampling with rejection for the templates, the integral $\int \max(\langle x, gt \rangle, \langle z, g't \rangle)\, p(t)\, dt$ is asymptotically proportional to $\|g^{-1}x - g'^{-1}z\|_2$. It would be interesting to consider domain-specific distributions for the templates and assess the number of templates needed to approximate such kernels, and to find, in a data-dependent way, the optimal templates that achieve the minimum distortion in Equation (6); we will address these points in future work.

4 Learning with Group Invariant Random Features

In this section, we show that learning a linear model in the invariant random feature space, on a training set sampled from the reduced core set $\mathcal{X}_0$, has a low expected risk and generalizes to unseen test points generated from the distribution on $\mathcal{X} = \mathcal{X}_0 \cup G\mathcal{X}_0$. The architecture of the proof follows ideas from [15] and [16]. Recall that given an $L$-Lipschitz convex loss function $V$, our aim is to minimize the expected risk given in Equation (2). Denote the CDF by $\psi(x, t, \tau) = P(\langle gt, x \rangle \leq \tau)$, and the empirical CDF by $\hat{\psi}(x, t, \tau) = \frac{1}{|G|} \sum_{i=1}^{|G|} \mathbb{1}_{\langle g_i t, x \rangle \leq \tau}$.
Let $p(t)$ be the distribution of the templates $t$. The RKHS defined by the invariant kernel $K_s(x, z) = \int \int_{-s}^{s} \psi(x, t, \tau)\, \psi(z, t, \tau)\, p(t)\, dt\, d\tau$, denoted $\mathcal{H}_{K_s}$, is the completion of the set of all finite linear combinations of the form
$$f(x) = \sum_i \alpha_i K_s(x, x_i), \quad x_i \in \mathcal{X},\ \alpha_i \in \mathbb{R}. \quad (7)$$
Similarly to [16], we define the following infinite-dimensional function space:
$$\mathcal{F}_p = \left\{ f(x) = \int \int_{-s}^{s} w(t, \tau)\, \psi(x, t, \tau)\, dt\, d\tau \ \middle|\ \sup_{t, \tau} \frac{|w(t, \tau)|}{p(t)} \leq C \right\}.$$
Lemma 1. $\mathcal{F}_p$ is dense in $\mathcal{H}_{K_s}$.

For $f \in \mathcal{F}_p$ we have $E_V(f) = \int_{\mathcal{X}_0} \sum_{y \in \mathcal{Y}} V(yf(x))\, \rho_y(x)\, d\rho_{\mathcal{X}}(x)$, where $\mathcal{X}_0$ is the reduced core set. Since $\mathcal{F}_p$ is dense in $\mathcal{H}_{K_s}$, we can learn an invariant decision function in the space $\mathcal{F}_p$ instead of learning in $\mathcal{H}_{K_s}$. Let $\Psi(x) = \left[\hat{\psi}\left(x, t_j, \frac{sk}{n}\right)\right]_{j=1\dots m,\ k=-n\dots n}$; $\Psi$ and $\Phi$ are equivalent up to constants. We approximate the set $\mathcal{F}_p$ as follows:
$$\tilde{\mathcal{F}} = \left\{ f(x) = \langle w, \Psi(x) \rangle = \frac{s}{n} \sum_{j=1}^{m} \sum_{k=-n}^{n} w_{j,k}\, \hat{\psi}\left(x, t_j, \frac{sk}{n}\right),\ t_j \sim p,\ j = 1 \dots m \ \middle|\ \|w\|_\infty \leq \frac{C}{m} \right\}.$$
Hence, we learn the invariant decision function via empirical risk minimization, where we restrict the function to belong to $\tilde{\mathcal{F}}$ and the sampling of the training set is restricted to the core set $\mathcal{X}_0$. Note that with this function space we regularize, for convenience, the infinity norm of the weights, but in practice this can be relaxed to a classical Tikhonov regularization.

Theorem 3 (Learning with group-invariant features). Let $S = \{(x_i, y_i) \mid x_i \in \mathcal{X}_0,\ y_i \in \mathcal{Y},\ i = 1 \dots N\}$ be a training set sampled from the core set $\mathcal{X}_0$. Let $f^*_N = \arg\min_{f \in \tilde{\mathcal{F}}} \hat{E}_V(f)$, where $\hat{E}_V(f) = \frac{1}{N} \sum_{i=1}^N V(y_i f(x_i))$. Fix $\delta > 0$; then
$$E_V(f^*_N) \leq \min_{f \in \mathcal{F}_p} E_V(f) + \frac{2}{\sqrt{N}} \left( 4LsC + 2V(0) + LC \sqrt{\tfrac{1}{2} \log \tfrac{1}{\delta}} \right) + \frac{2sLC}{\sqrt{m}} \left( 1 + \sqrt{2 \log \tfrac{1}{\delta}} \right) + L \left( \frac{2sC}{\sqrt{|G|}} \left( 1 + \sqrt{2 \log \tfrac{m}{\delta}} \right) + \frac{2sC}{n} \right),$$
with probability at least $1 - 3\delta$ over the training set and the choice of templates and group elements.

The proof of Theorem 3 is given in Appendix B. Theorem 3 shows that learning a linear model in the invariant random feature space defined by $\Phi$ (or, equivalently, $\Psi$) has a low expected risk.
More importantly, this risk is arbitrarily close to the optimal risk achieved in an infinite-dimensional class of functions, namely $\mathcal{F}_p$. The training set is sampled from the reduced core set $\mathcal{X}_0$, and invariant learning generalizes to unseen test points generated from the distribution on $\mathcal{X} = \mathcal{X}_0 \cup G\mathcal{X}_0$, hence the reduction in sample complexity. Recall that $\mathcal{F}_p$ is dense in the RKHS of the Haar-integration invariant kernel, so the expected risk achieved by a linear model in the invariant random feature space is not far from the one attainable in the invariant RKHS. Note that the error decomposes into two terms. The first, $O(1/\sqrt{N})$, is statistical and depends on the training sample size $N$. The other is governed by the approximation error of functions in $\mathcal{F}_p$ by functions in $\tilde{\mathcal{F}}$; it depends on the number of templates $m$, the number of sampled group elements $|G|$, and the number of bins $n$, and has the form $O(1/\sqrt{m}) + O\big(\sqrt{\log m / |G|}\big) + O(1/n)$.

5 Relation to Previous Work

We now put our contributions in perspective by outlining some of the previous work on invariant kernels and on approximating kernels with random features.

Approximating Kernels. Several schemes have been proposed for approximating a non-linear kernel with an explicit non-linear feature map in conjunction with linear methods, such as the Nyström method [17] or random sampling techniques in the Fourier domain for translation-invariant kernels [15]. Our features fall under the random sampling techniques, where, unlike previous work, we sample both projections and group elements to induce invariance with an integral representation. We note that the relation between random features and quadrature rules has been thoroughly studied in [18], where sharper bounds and error rates are derived; these can apply to our setting.

Invariant Kernels. We focused in this paper on Haar-integration kernels [11], since they have an integral representation and hence can be represented with random features [18].
Other invariant kernels have been proposed. In [19], the authors introduce transformation-invariant kernels, but unlike our general setting, the analysis is concerned with dilation invariance. In [20], multilayer arc-cosine kernels are built by composing kernels that have an integral representation, but invariance is not explicitly induced. More closely related to our work is [21], where kernel descriptors are built for visual recognition by introducing a kernel view of the histogram of gradients, which corresponds in our case to the cumulative distribution in the group variable; there, explicit feature maps are obtained via kernel PCA, while our features are obtained via random sampling. Finally, the convolutional kernel network of [22] builds a sequence of multilayer kernels that have an integral representation, by convolution, considering spatial neighborhoods in an image. Our future work will consider the composition of Haar-integration kernels, where the convolution is applied not only to the spatial variable but also to the group variable, akin to [2].

6 Numerical Evaluation

In this paper, and specifically in Theorems 2 and 3, we showed that the random, group-invariant feature map $\Phi$ captures the invariant distance between points, and that a linear model trained in the invariant random feature space will generalize well to unseen test points. In this section, we validate these claims through three experiments. For the claims of Theorem 2, we use a nearest neighbor classifier, while for Theorem 3, we rely on the regularized least squares (RLS) classifier, one of the simplest algorithms for supervised learning. While our proofs focus on norm-infinity regularization, RLS corresponds to Tikhonov regularization with the square loss.
Specifically, for performing $T$-way classification on a batch of $N$ training points in $\mathbb{R}^d$, summarized in the data matrix $X \in \mathbb{R}^{N \times d}$ and label matrix $Y \in \mathbb{R}^{N \times T}$, RLS performs the optimization
$$\min_{W \in \mathbb{R}^{m \times T}} \frac{1}{N} \|Y - \Phi(X)W\|_F^2 + \lambda \|W\|_F^2,$$
where $\|\cdot\|_F$ is the Frobenius norm, $\lambda$ is the regularization parameter, and $\Phi$ is the feature map, which for the representation described in this paper is a CDF pooling of the data projected onto group-transformed random templates. All RLS experiments in this paper were completed with the GURLS toolbox [23]. The three datasets we explore are:

Xperm (Figure 1): An artificial dataset consisting of all sequences of length 5 whose elements come from an alphabet of 8 characters. We want to learn a function that assigns a positive value to any sequence containing a target set of characters (in our case, two of them) regardless of their position. The function label is thus globally invariant to permutation, and so we project our data onto all permuted versions of our random template sequences.

MNIST (Figure 2): We seek local invariance to translation and rotation, and so all random templates are translated by up to 3 pixels in all directions and rotated between -20 and 20 degrees.

TIDIGITS (Figure 3): We use a subset of TIDIGITS consisting of 326 speakers (men, women, children) reading the digits 0-9 in isolation, so each datapoint is a waveform of a single word. We seek local invariance to pitch and speaking rate [25], and so all random templates are pitch-shifted up and down by 400 cents and warped to play at half and double speed. The task is 10-way classification with one class per digit. See [24] for more detail.

Acknowledgements: Stephen Voinea acknowledges the support of a Nuance Foundation Grant. This work was also supported in part by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
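The RLS problem above has the closed-form solution $W = (\Phi^\top\Phi + N\lambda I)^{-1}\Phi^\top Y$. A minimal sketch (ours, with a plain random matrix standing in for the CDF-pooled $\Phi(X)$; this is not the GURLS implementation):

```python
import numpy as np

def rls_fit(Phi, Y, lam):
    # Closed-form minimizer of (1/N)||Y - Phi W||_F^2 + lam ||W||_F^2.
    N, m = Phi.shape
    return np.linalg.solve(Phi.T @ Phi + N * lam * np.eye(m), Phi.T @ Y)

rng = np.random.default_rng(4)
N, m, T = 50, 10, 3
Phi = rng.normal(size=(N, m))     # stand-in for the invariant feature matrix
labels = rng.integers(0, T, size=N)
Y = np.eye(T)[labels]             # one-hot label matrix
W = rls_fit(Phi, Y, lam=0.1)

# Sanity check: the gradient of the objective vanishes at the solution.
grad = (2 / N) * Phi.T @ (Phi @ W - Y) + 2 * 0.1 * W
assert np.abs(grad).max() < 1e-8
```

At test time, a point is assigned the class whose column of `Phi_test @ W` is largest.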
[Figure 1 shows two panels: classification accuracy vs. number of training points per class (10 to 1000) on Xperm, for RLS and for 1-NN.]

Figure 1: Classification accuracy as a function of training set size, averaged over 100 random training samples at each size. $\Phi = \mathrm{CDF}(n, m)$ refers to a random feature map with $n$ bins and $m$ templates. With 25 templates, the random feature map outperforms the raw features and a bag-of-words representation (also invariant to permutation), and even approaches an RLS classifier with a Haar-integration kernel. Error bars were removed from the RLS plot for clarity; see the supplement.

[Figure 2 shows MNIST accuracy vs. number of bins and templates (RLS, 1000 points per class) and accuracy vs. training set size.]

Figure 2: Left: mean classification accuracy as a function of the number of bins and templates, averaged over 30 random sets of templates. Right: classification accuracy as a function of training set size, averaged over 100 random samples of the training set at each size. At 1000 examples per class, we achieve an accuracy of 98.97%.

[Figure 3 shows TIDIGITS accuracy vs. number of templates for the Speaker and Gender splits.]

Figure 3: Mean classification accuracy as a function of the number of bins and templates, averaged over 30 random sets of templates. In the "Speaker" dataset, we test on unseen speakers, and in the "Gender" dataset, we test on a new gender, giving us an extreme train/test mismatch [25].

References
[1] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T.
Poggio, "Unsupervised learning of invariant representations in hierarchical architectures," CoRR, vol. abs/1311.4158, 2013.
[2] J. Bruna and S. Mallat, "Invariant scattering convolution networks," CoRR, vol. abs/1203.1513, 2012.
[3] G. Hinton, A. Krizhevsky, and S. Wang, "Transforming auto-encoders," ICANN-11, 2011.
[4] Y. Bengio, A. C. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, 2013.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, vol. 86, pp. 2278–2324, 1998.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, pp. 1106–1114, 2012.
[7] P. Niyogi, F. Girosi, and T. Poggio, "Incorporating prior information in machine learning by creating virtual examples," in Proceedings of the IEEE, pp. 2196–2209, 1998.
[8] Y.-A. Mostafa, "Learning from hints in neural networks," Journal of Complexity, vol. 6, pp. 192–198, June 1990.
[9] V. N. Vapnik, Statistical Learning Theory. A Wiley-Interscience Publication, 1998.
[10] I. Steinwart and A. Christmann, Support Vector Machines. Information Science and Statistics, New York: Springer, 2008.
[11] B. Haasdonk, A. Vossen, and H. Burkhardt, "Invariance in kernel methods by Haar-integration kernels," in SCIA, Springer, 2005.
[12] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe, "Convexity, classification, and risk bounds," Journal of the American Statistical Association, vol. 101, no. 473, pp. 138–156, 2006.
[13] G. Wahba, Spline Models for Observational Data, vol. 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: SIAM, 1990.
[14] W. B. Johnson and J. Lindenstrauss, "Extensions of Lipschitz mappings into a Hilbert space," Conference in Modern Analysis and Probability, 1984.
[15] A. Rahimi and B.
Recht, "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning," in NIPS, 2008.
[16] A. Rahimi and B. Recht, "Uniform approximation of functions with random bases," in Proceedings of the 46th Annual Allerton Conference, 2008.
[17] C. Williams and M. Seeger, "Using the Nyström method to speed up kernel machines," in NIPS, 2001.
[18] F. R. Bach, "On the equivalence between quadrature rules and random features," CoRR, vol. abs/1502.06800, 2015.
[19] C. Walder and O. Chapelle, "Learning with transformation invariant kernels," in NIPS, 2007.
[20] Y. Cho and L. K. Saul, "Kernel methods for deep learning," in NIPS, pp. 342–350, 2009.
[21] L. Bo, X. Ren, and D. Fox, "Kernel descriptors for visual recognition," in NIPS, 2010.
[22] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid, "Convolutional kernel networks," in NIPS, 2014.
[23] A. Tacchetti, P. K. Mallapragada, M. Santoro, and L. Rosasco, "GURLS: a least squares library for supervised learning," CoRR, vol. abs/1303.0934, 2013.
[24] S. Voinea, C. Zhang, G. Evangelopoulos, L. Rosasco, and T. Poggio, "Word-level invariant representations from acoustic waveforms," vol. 14, pp. 3201–3205, September 2014.
[25] M. Benzeghiba, R. De Mori, O. Deroo, S. Dupont, T. Erbes, D. Jouvet, L. Fissore, P. Laface, A. Mertins, C. Ris, R. Rose, V. Tyagi, and C. Wellekens, "Automatic speech recognition and speech variability: A review," Speech Communication, vol. 49, pp. 763–786, 2007.
2015
Tractable Bayesian Network Structure Learning with Bounded Vertex Cover Number Janne H. Korhonen Helsinki Institute for Information Technology HIIT Department of Computer Science University of Helsinki janne.h.korhonen@helsinki.fi Pekka Parviainen Helsinki Institute for Information Technology HIIT Department of Computer Science Aalto University pekka.parviainen@aalto.fi Abstract Both learning and inference tasks on Bayesian networks are NP-hard in general. Bounded tree-width Bayesian networks have recently received a lot of attention as a way to circumvent this complexity issue; however, while inference on bounded tree-width networks is tractable, the learning problem remains NP-hard even for tree-width 2. In this paper, we propose bounded vertex cover number Bayesian networks as an alternative to bounded tree-width networks. In particular, we show that both inference and learning can be done in polynomial time for any fixed vertex cover number bound k, in contrast to the general and bounded tree-width cases; on the other hand, we also show that learning problem is W[1]-hard in parameter k. Furthermore, we give an alternative way to learn bounded vertex cover number Bayesian networks using integer linear programming (ILP), and show this is feasible in practice. 1 Introduction Bayesian networks are probabilistic graphical models representing joint probability distributions of random variables. They can be used as a model in a variety of prediction tasks, as they enable computing the conditional probabilities of a set of random variables given another set of random variables; this is called the inference task. However, to use a Bayesian network as a model for inference, one must first obtain the network. Typically, this is done by estimating the network based on observed data; this is called the learning task. Both the inference and learning tasks are NP-hard in general [3, 4, 6]. 
One approach to deal with this issue has been to investigate special cases where these problems are tractable. That is, the basic idea is to select models from a restricted class of Bayesian networks whose structural properties enable fast learning or inference; this way, computational complexity is not an issue, though possibly at the cost of accuracy if the true distribution is far from the model family. Most notably, it is known that the inference task can be solved in polynomial time if the network has bounded tree-width; more precisely, the inference task is fixed-parameter tractable in the tree-width of the network. Moreover, this is in a sense optimal, as bounded tree-width is necessary for polynomial-time inference unless the exponential time hypothesis (ETH) fails [17]. The possibility of tractable inference has motivated several recent studies also on learning bounded tree-width Bayesian networks [2, 12, 16, 19, 22]. However, unlike in the case of inference, learning a Bayesian network of bounded tree-width is NP-hard for any fixed tree-width bound of at least 2 [16]. Furthermore, it is known that learning many relatively simple classes, such as paths [18] and polytrees [9], is also NP-hard. Indeed, so far the only class of Bayesian networks for which a polynomial-time learning algorithm is known is trees, i.e., graphs with tree-width 1 [5]; it appears that our knowledge about structure classes allowing tractable learning is quite limited.

1.1 Structure Learning with Bounded Vertex Cover Number

In this work, we propose bounded vertex cover number Bayesian networks as an alternative to the tree-width paradigm. Roughly speaking, we consider Bayesian networks where all pairwise dependencies, i.e., edges in the moralised graph, are covered by having at least one node from the vertex cover incident to each of them; see Section 2 for technical details.
Like bounded tree-width Bayesian networks, this is a parameterised class, allowing a trade-off between the complexity of models and the size of the space of possible models by varying the parameter $k$.

Results: complexity of learning bounded vertex cover networks. Crucially, we show that learning an optimal Bayesian network structure with vertex cover number at most $k$ can be done in polynomial time for any fixed $k$. Moreover, the vertex cover number provides an upper bound for the tree-width, implying that inference is also tractable; thus, we identify a rare example of a class of Bayesian networks where both learning and inference are tractable. Specifically, our main theoretical result shows that an optimal Bayesian network structure with vertex cover number at most $k$ can be found in time $4^k n^{2k + O(1)}$ (Theorem 5). However, while the running time of our algorithm is polynomial with respect to the number of nodes, the degree of the polynomial depends on $k$. We show that this is in a sense the best we can hope for; that is, there is no fixed-parameter algorithm with running time $f(k)\, \mathrm{poly}(n)$ for any function $f$, even when the maximum allowed parent set size is restricted to 2, unless the commonly accepted complexity assumption $\mathrm{FPT} \neq \mathrm{W[1]}$ fails (Theorem 6).

Results: ILP formulation and learning in practice. While we prove that learning bounded vertex cover Bayesian network structures can be done in polynomial time, the unavoidable dependence on $k$ in the degree of the polynomial makes the algorithm of our main theorem infeasible for practical usage when the vertex cover number $k$ increases. Therefore, we investigate using an integer linear programming (ILP) formulation as an alternative way to find optimal bounded vertex cover Bayesian networks in practice (Section 4).
Although the running time of an ILP is exponential in the worst case, the actual running time in many practical scenarios is significantly lower; indeed, most of the state-of-the-art algorithms for exact learning of Bayesian networks in general [1, 8] and with bounded tree-width [19, 22] are based on ILPs. Our experiments show that bounded vertex cover number Bayesian networks can, indeed, be learned fast in practice using ILP (Section 5).

2 Preliminaries

Directed graphs. A directed graph $D = (N, A)$ consists of a node set $N$ and an arc set $A \subseteq N \times N$; for a fixed node set, we usually identify a directed graph with its arc set $A$. A directed graph is called a directed acyclic graph or DAG if it contains no directed cycles. We write $n = |N|$ and $uv$ for the arc $(u, v) \in A$. For $u, v \in N$ with $uv \in A$, we say that $u$ is a parent of $v$ and $v$ is a child of $u$. We write $A_v$ for the parent set of $v$, that is, $A_v = \{u \in N : uv \in A\}$.

Bayesian network structure learning. We consider Bayesian network structure learning using the score-based approach [7, 14], where the input consists of the node set $N$ and the local scores $f_v(S)$ for each node $v \in N$ and $S \subseteq N \setminus \{v\}$. The task is to find a DAG $A$ (the network structure) that maximises the score $f(A) = \sum_{v \in N} f_v(A_v)$. We assume that the scores $f_v$ are computed beforehand, and that we can access each entry $f_v(S)$ in constant time. We generally consider a setting where only parent sets belonging to specified sets $F_v \subseteq 2^N$ are permitted. Typically, $F_v$ consists of parent sets of size at most $k$, in which case we assume that the scores $f_v(S)$ are given only for $|S| \leq k$; that is, the size of the input is $O\!\left(n \binom{n}{k}\right)$.

Moralised graphs. For a DAG $A$, the moralised graph of $A$ is an undirected graph $M_A = (N, E_A)$, where $E_A$ is obtained by (1) adding an undirected edge $\{u, v\}$ for each arc $uv \in A$, and (2) adding an undirected edge $\{u, v\}$ if $u$ and $v$ have a common child, that is, $\{uw, vw\} \subseteq A$ for some $w \in N$.
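For intuition (this is not the authors' algorithm), score-based structure learning on a handful of nodes can be solved by brute force over all parent-set assignments, discarding the cyclic ones. A sketch with made-up local scores:

```python
from itertools import chain, combinations, product

def is_acyclic(parents):
    # Kahn-style peeling: the assignment is a DAG iff every node can
    # eventually be removed once all its parents have been removed.
    removed, changed = set(), True
    while changed:
        changed = False
        for v, ps in parents.items():
            if v not in removed and all(p in removed for p in ps):
                removed.add(v)
                changed = True
    return len(removed) == len(parents)

def learn_dag(nodes, f, max_parents=2):
    # Maximise sum_v f_v(A_v) over all DAGs by exhaustive enumeration
    # (exponential; for illustration on tiny node sets only).
    def subsets(s):
        return list(chain.from_iterable(combinations(s, r)
                                        for r in range(max_parents + 1)))
    choices = [subsets([u for u in nodes if u != v]) for v in nodes]
    best, best_val = None, float("-inf")
    for assign in product(*choices):
        A = dict(zip(nodes, assign))
        val = sum(f(v, A[v]) for v in nodes)
        if val > best_val and is_acyclic(A):
            best, best_val = A, val
    return best, best_val

# Hypothetical local scores: b "likes" parent a, c "likes" parent b,
# with a small penalty per parent.
def f(v, S):
    bonus = {("b", ("a",)): 2.0, ("c", ("b",)): 1.5}
    return bonus.get((v, S), 0.0) - 0.1 * len(S)

A, val = learn_dag(["a", "b", "c"], f)
assert A == {"a": (), "b": ("a",), "c": ("b",)}
```

The exhaustive search makes plain why structural restrictions such as bounded vertex cover number matter: they shrink the space of candidate DAGs enough for exact optimisation to scale.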
The edges added to EA due to rule (2) are called moral edges. Tree-width and vertex cover number. A tree-decomposition of a graph G = (V, E) is a pair (X, T), where T is a tree with node set {1, 2, . . . , m} and X = {X1, X2, . . . , Xm} is a collection of subsets of V with X1 ∪ · · · ∪ Xm = V, such that (a) for each {u, v} ∈ E there is an i with u, v ∈ Xi, and (b) for each v ∈ V the graph T[{i : v ∈ Xi}] is connected. The width of a tree-decomposition (X, T) is max_i |Xi| − 1. The tree-width tw(G) of a graph G is the minimum width of a tree-decomposition of G. For a DAG A, we define the tree-width tw(A) as the tree-width of the moralised graph MA [12]. For a graph G = (V, E), a set C ⊆ V is a vertex cover if each edge is incident to at least one vertex in C. The vertex cover number τ(G) of a graph G is the size of the smallest vertex cover in G. As with tree-width, we define the vertex cover number τ(A) of a DAG A as τ(MA). Lemma 1. For a DAG A, we have tw(A) ≤ τ(A). Proof. By definition, the moralised graph MA has a vertex cover C of size τ(A). We can construct a star-shaped tree-decomposition for MA with a central node i with Xi = C and a leaf j with Xj = C ∪ {v} for every v ∈ N \ C. Clearly, this tree-decomposition has width τ(A); thus, we have tw(A) = tw(MA) ≤ τ(A). Structure learning with parameters. Finally, we give a formal definition of the bounded tree-width and bounded vertex cover number Bayesian network structure learning problems. That is, let p ∈ {τ, tw}; in bounded-p Bayesian network structure learning, we are given a node set N, local scores fv(S) and an integer k, and the task is to find a DAG A maximising the score Σ_{v∈N} fv(Av) subject to p(A) ≤ k. For both tree-width and vertex cover number, the parameter k also bounds the maximum parent set size, so we will assume that the local scores fv(S) are given only if |S| ≤ k.
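To make the moralisation and vertex cover definitions above concrete, the following sketch builds the moral graph of a DAG and computes its vertex cover number by exhaustive search. The function names and the dictionary representation of parent sets are ours, purely for illustration; they are not from the paper's implementation.

```python
from itertools import combinations

def moralised_edges(parents):
    """Moral graph M_A of a DAG given as {child: set of parents}.
    Rule (1) turns every arc into an undirected edge; rule (2) marries co-parents."""
    edges = set()
    for v, pa in parents.items():
        for u in pa:
            edges.add(frozenset((u, v)))
        for u, w in combinations(sorted(pa), 2):
            edges.add(frozenset((u, w)))  # moral edge between co-parents of v
    return edges

def vertex_cover_number(nodes, edges):
    """tau(G): size of the smallest node set touching every edge (brute force)."""
    for k in range(len(nodes) + 1):
        for cand in combinations(nodes, k):
            if all(set(e) & set(cand) for e in edges):
                return k

# A v-structure a -> c <- b: moralisation adds {a, b}, giving a triangle with tau = 2.
parents = {"a": set(), "b": set(), "c": {"a", "b"}}
print(vertex_cover_number(["a", "b", "c"], moralised_edges(parents)))  # 2
```

The brute-force cover search is exponential in general and only meant to mirror the definition; it is unrelated to the polynomial-time algorithm of Section 3.1.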
3 Complexity Results 3.1 Polynomial-time Algorithm We start by making a few simple observations about the structure of bounded vertex cover number Bayesian networks. In the following, we slightly abuse the terminology and say that N1 ⊆ N is a vertex cover for a DAG A if N1 is a vertex cover of MA. Lemma 2. Let N1 ⊆ N be a set of size k, and let A be a DAG on N. The set N1 is a vertex cover for A if and only if (a) for each node v ∉ N1, we have Av ⊆ N1, and (b) each node v ∈ N1 has at most one parent outside N1. Proof. (⇒) For (a), if there were nodes u, v ∉ N1 such that u is a child of v, the moralised graph MA would have the edge {u, v}, which is not covered by N1. Likewise for (b), if a node u ∈ N1 had parents v, w ∉ N1, then MA would have the edge {v, w} not covered by N1. Thus, both (a) and (b) have to hold if A has vertex cover N1. (⇐) Since (a) holds, all directed edges in A have one endpoint in N1, and thus the corresponding undirected edges in MA are covered by N1. Moreover, by (a) and (b), no node has two parents outside N1, so all moral edges in MA also have at least one endpoint in N1. Lemma 2 allows us to partition a DAG with vertex cover number k into a core that covers at most 2k nodes that are either in a fixed vertex cover or are parents of those nodes (core nodes), and a periphery containing arcs going into nodes that have no children and all parents in the vertex cover (peripheral nodes). This is illustrated in Figure 1(a), and the following lemma formalises the observation.
Figure 1: (a) Example of a DAG with vertex cover number 4, with sets N1 and N2 as in Lemma 3. (b) Reduction used in Theorem 6; each edge in the original graph is replaced by a possible v-structure.
Lemma 3. Let A be a DAG on N with vertex cover N1 of size k.
Then there is a set N2 ⊆ N \ N1 of size at most k and arc sets B and C such that A = B ∪ C, and (a) B is a DAG on N1 ∪ N2 with vertex cover N1, and (b) C contains only arcs uv with u ∈ N1 and v ∉ N1 ∪ N2. Proof. First, let N2 = ∪_{v∈N1} (Av \ N1). By Lemma 2, each v ∈ N1 can have at most one parent outside N1, so we have |N2| ≤ |N1| ≤ k. Now let B = {uv ∈ A : u, v ∈ N1 ∪ N2} and C = A \ B. To see that (a) holds for this choice of B, we observe that the edge set of the moralised graph MB is a subset of the edges of MA, and thus N1 covers all edges of MB. For (b), the choice of N2 and Lemma 2 ensure that nodes in N \ (N1 ∪ N2) have no children and, again by Lemma 2, their parents are all in N1. Dually, if we fix the core and peripheral node sets, we can construct a DAG with bounded vertex cover number by selecting the core independently from the parents of the peripheral nodes. Formally: Lemma 4. Let N1, N2 ⊆ N be disjoint. Let B be a DAG on N1 ∪ N2 with vertex cover N1, and let C be a DAG on N such that C only contains arcs uv with u ∈ N1 and v ∉ N1 ∪ N2. Then (a) A = B ∪ C is a DAG on N with vertex cover N1, and (b) the score of A is f(A) = Σ_{v∈N1∪N2} fv(Bv) + Σ_{v∉N1∪N2} fv(Cv). Proof. To see that (a) holds, we observe that B is acyclic by assumption, and the addition of arcs from C cannot create cycles, as there are no outgoing arcs from nodes in N \ (N1 ∪ N2). Moreover, for v ∈ N1 ∪ N2, there are no arcs ending at v in C, and likewise for v ∉ N1 ∪ N2, there are no arcs ending at v in B. Thus, we have Av = Bv if v ∈ N1 ∪ N2 and Av = Cv otherwise. This implies that since the conditions of Lemma 2 hold for both B and C, they also hold for A, and thus N1 is a vertex cover for A. Finally, the preceding observation also implies that fv(Av) = fv(Bv) for v ∈ N1 ∪ N2 and fv(Av) = fv(Cv) otherwise, which implies (b). Lemmas 3 and 4 give the basis of our strategy for finding an optimal Bayesian network structure with vertex cover number at most k.
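The two conditions of Lemma 2 can be transcribed directly into a cheap test for whether a candidate set N1 is a vertex cover for a DAG, without ever building the moral graph. This is a hedged sketch of ours (the function name and the {child: parent set} representation are not from the paper):

```python
def is_vertex_cover_for_dag(parents, n1):
    """Lemma 2: n1 is a vertex cover of the moral graph of the DAG iff
    (a) every node outside n1 takes all its parents from n1, and
    (b) every node in n1 has at most one parent outside n1."""
    n1 = set(n1)
    for v, pa in parents.items():
        if v not in n1 and not pa <= n1:   # violates condition (a)
            return False
        if v in n1 and len(pa - n1) > 1:   # violates condition (b)
            return False
    return True

# v-structure a -> c <- b: {c} alone fails (c has two parents outside the cover),
# while {a, c} satisfies both conditions.
dag = {"a": set(), "b": set(), "c": {"a", "b"}}
print(is_vertex_cover_for_dag(dag, {"c"}), is_vertex_cover_for_dag(dag, {"a", "c"}))
```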
That is, we iterate over all possible (n choose k) · (n−k choose k) = O(n^{2k}) choices of the sets N1 and N2; for each choice, we construct the optimal core and periphery as follows, keeping track of the best found DAG A∗: Step 1. To find the optimal core B, we construct a Bayesian network structure learning instance on N1 ∪ N2 by removing the nodes outside N1 ∪ N2 and restricting the possible choices of parent sets so that Fv = 2^{N1} for all v ∈ N2, and Fv = {S ⊆ N1 ∪ N2 : |S ∩ N2| ≤ 1} for v ∈ N1. By Lemma 2, any solution for this instance is a DAG with vertex cover N1. Moreover, this instance has at most 2k nodes, so it can be solved in time O(k^2 · 2^{2k}) using the dynamic programming algorithm of Silander and Myllymäki [23]. Step 2. To construct the periphery C, we compute the value f̂v(N1) = max_{S⊆N1} fv(S) and select the corresponding best parent set choice Cv for each v ∉ N1 ∪ N2; this can be done in O(nk · 2^k) time using the dynamic programming algorithm of Ott and Miyano [21]. Step 3. We check whether f(B ∪ C) > f(A∗), and replace A∗ with B ∪ C if this holds. By Lemma 4(a), all DAGs considered by the algorithm are valid solutions for Bayesian network structure learning with bounded vertex cover number, and by Lemma 4(b), we can find the optimal solution for fixed N1 and N2 by optimising the choice of the core and the periphery separately. Moreover, by Lemma 3, each bounded vertex cover DAG is included in the search space, so we are guaranteed to find the optimal one. Thus, we have proven our main theorem: Theorem 5. Bounded vertex cover number Bayesian network structure learning can be solved in time 4^k · n^{2k+O(1)}. 3.2 Lower Bound Although the algorithm presented in the previous section runs in polynomial time in n, the degree of the polynomial depends on the size k of the vertex cover, which poses a serious barrier to practical use when k grows. Moreover, the algorithm is essentially optimal in the general case, as the input has size Ω(n · (n choose k)) when parent sets of size at most k are allowed.
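For intuition about Step 2 of the algorithm above: once N1 and N2 are fixed, each peripheral node independently picks its best-scoring parent set from within N1. The sketch below is a brute-force stand-in for the Ott–Miyano dynamic program; the `score(v, S)` interface, standing in for the local score f_v(S), is a hypothetical one of ours.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s as tuples, smallest first."""
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def best_periphery(nodes, n1, n2, score):
    """Step 2: for each node v outside N1 and N2, pick argmax over S in 2^{N1}
    of the local score f_v(S); these choices are independent across nodes."""
    periphery = {}
    for v in nodes:
        if v in n1 or v in n2:
            continue  # core nodes are handled in Step 1
        periphery[v] = max(subsets(n1), key=lambda s: score(v, s))
    return periphery

# Toy score that rewards larger parent sets: every peripheral node takes all of N1.
best = best_periphery([1, 2, 3, 4], n1={1}, n2={2}, score=lambda v, s: len(s))
print(best)  # {3: (1,), 4: (1,)}
```

The enumeration over 2^{N1} costs O(2^k) per node, which matches the exponential-in-k (but polynomial-in-n) flavour of the overall algorithm.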
However, in practice one often assumes that a node can have at most, say, 2 or 3 parents. Thus, it makes sense to consider settings where the input is restricted, e.g., by considering instances where the parent set size is bounded from above by some constant w while allowing the vertex cover number k to be higher. In this case, we might hope to do better, as the input size is not a restricting factor. Unfortunately, we show that it is not possible to obtain an algorithm where the degree of the polynomial does not depend on k, even when the maximum parent set size is limited to 2; that is, there is no algorithm with running time g(k) · poly(n) for any function g, unless the widely believed complexity assumption FPT ≠ W[1] fails. Specifically, we show that Bayesian network structure learning with bounded vertex cover number is W[1]-hard when restricted to instances with parent set size 2, implying the above claim. For full technical details on the complexity classes FPT and W[1] and the related theory, we refer the reader to standard texts on the topic [11, 13, 20]; for our result, it suffices to note that the assumption FPT ≠ W[1] implies that finding a k-clique in a graph cannot be done in time g(k) · poly(n) for any function g. Theorem 6. Bayesian network structure learning with bounded vertex cover number is W[1]-hard in the parameter k, even when restricted to instances with maximum parent set size 2. Proof. We prove the result by a parameter-preserving reduction from clique, which is known to be W[1]-hard [10]. We use the same reduction strategy as Korhonen and Parviainen [16] use in proving that the bounded tree-width version of the problem is NP-hard. That is, given an instance (G = (V, E), k) of clique, we construct a new instance of bounded vertex cover number Bayesian network structure learning as follows. The node set of the instance is N = V ∪ E.
The parent scores are defined by setting fe({u, v}) = 1 for each e = {u, v} ∈ E, and fv(S) = 0 for all other v and S; see Figure 1(b). Finally, the vertex cover size is required to be at most k. Clearly, the new instance can be constructed in polynomial time. It now suffices to show that the original graph G has a clique of size k if and only if the optimal DAG on N with vertex cover number at most k has score (k choose 2): (⇒) Assume G has a k-clique C ⊆ V. Let A be a DAG on N obtained by setting Ae = {u, v} for each e = {u, v} ⊆ C, and Av = ∅ for all other nodes v ∈ N. All edges in the moralised graph MA are now clearly covered by C. Furthermore, since C is a clique in G, there are (k choose 2) nodes with a non-empty parent set, giving f(A) = (k choose 2). (⇐) Assume now that there is a DAG A on N with vertex cover number k and score f(A) ≥ (k choose 2). There must be at least (k choose 2) nodes e = {u, v} ∈ E such that Ae = {u, v}, as these are the only nodes that can contribute a positive score. Each of the triangles Te = {e, u, v} for e = {u, v} must contain at least two nodes from a minimum vertex cover C; without loss of generality, we may assume that these nodes are u and v, as e cannot cover any other edges. However, this means that C ⊆ V and there are at least (k choose 2) edges e ⊆ C, implying that C must be a k-clique in G. 4 Integer Linear Programming To complement the combinatorial algorithm of Section 3.1, we formulate the bounded vertex cover number Bayesian network structure learning problem as an integer linear program (ILP). Without loss of generality, we may assume that the nodes are labeled with the integers [n]. As a basis for the formulation, let zSv be a binary variable that takes value 1 when S is the parent set of v and 0 otherwise. The objective function of the ILP is max Σ_{v∈N} Σ_{S∈Fv} fv(S) zSv. To ensure that the variables zSv encode a valid DAG, we use the standard constraints introduced by Jaakkola et al.
[15] and Cussens [8]:

Σ_{S∈Fv} zSv = 1, ∀v ∈ N (1)
Σ_{v∈W} Σ_{S∈Fv : S∩W=∅} zSv ≥ 1, ∀W ⊆ N : |W| ≥ 1 (2)
zSv ∈ {0, 1}, ∀v ∈ N, S ∈ Fv. (3)

Now it remains to bound the vertex cover number of the moralised graph. We introduce two sets of binary variables. The variable yuv takes value 1 if there is an edge between nodes u and v in the moralised graph and 0 otherwise. The variable cu takes value 1 if the node u is part of the vertex cover and 0 otherwise. By combining a construction of the moralised graph and a well-known formulation of vertex cover, we get the following:

Σ_{S∈Fv : u∈S} zSv + Σ_{T∈Fu : v∈T} zTu − yuv ≤ 0, ∀u, v ∈ N : u < v (4)
zSv − yuw ≤ 0, ∀v ∈ N, S ∈ Fv : u, w ∈ S, u < w (5)
yuv − cu − cv ≤ 0, ∀u, v ∈ N : u < v (6)
Σ_{u∈N} cu ≤ k (7)
yuv, cu ∈ {0, 1}, ∀u, v ∈ N. (8)

The constraints (4) and (5) guarantee that the y-variables encode the moral graph. The constraint (6) guarantees that if there is an edge between u and v in the moral graph, then either u or v is included in the vertex cover. Finally, the constraint (7) bounds the size of the vertex cover. 5 Experiments We implemented both the combinatorial algorithm of Section 3.1 and the ILP formulation of Section 4 to benchmark the practical performance of the algorithms and to test how good an approximation bounded vertex cover DAGs provide. The combinatorial algorithm was implemented in Matlab and is available online1. The ILPs were implemented using the CPLEX Python API and solved using CPLEX 12. The implementation is available as part of the TWILP software2. Combinatorial algorithm. As the worst- and best-case running times of the combinatorial algorithm are the same, we tested it with synthetic data sets, varying the number of nodes n and the vertex cover bound k and limiting each run to at most 24 hours. The results are shown in Figure 2.
With reasonable vertex cover number bounds, the polynomial-time algorithm scales only up to about 15 nodes; this is mainly due to the fact that, while the running time is polynomial in n, the degree of the polynomial depends on k, and when k grows, the algorithm quickly becomes infeasible.

1http://research.cs.aalto.fi/pml/software/VCDP/
2http://bitbucket.org/twilp/twilp

Figure 2: Running times of the polynomial-time algorithm. The number of nodes varies from 13 to 16 and the vertex cover number from 1 to 5. For n = 15 and n = 16 with k = 5, the algorithm did not finish in 24 hours.

Integer linear program. We ran our experiments using a union of the data sets used by Berg et al. [2] and those provided at the GOBNILP homepage3. We benchmarked the results against other ILP-based algorithms, namely GOBNILP [8] for learning Bayesian networks without any restrictions on the structure and TWILP [22] for learning bounded tree-width Bayesian networks. In our tests, each algorithm was given 4 hours of CPU time. Figure 3 shows results for selected data sets. For space reasons, full results are reported in the supplement. The results show that optimal DAGs with moderate vertex cover number (7 for flag, 6 for carpo10000) tend to have higher scores than optimal trees. This suggests that one can often trade speed for accuracy by moving from trees to bounded vertex cover number DAGs. We also note that bounded vertex cover number DAGs are usually learned quickly, typically at least two orders of magnitude faster than bounded tree-width DAGs. However, bounded tree-width DAGs are a less constrained class, and thus in multiple cases the best found bounded tree-width DAG has a better score than the corresponding bounded vertex cover number DAG, even when the bounded tree-width DAG is not proven to be optimal. This seems to be the case even with mismatching bounds, say, 5 for tree-width and 10 for vertex cover number.
Finally, we notice that the ILP easily solves problem instances with, say, 60 nodes and vertex cover bound 8; see the results for the carpo10000 data set. Thus, in practice the ILP scales up to significantly larger data sets and vertex cover number bounds than the combinatorial algorithm of Section 3.1. Presumably, this is because ILP solvers tend to use heuristics that can quickly prune out provably non-optimal choices for the vertex cover, while the combinatorial algorithm considers them all. 6 Discussion We have shown that bounded vertex cover number Bayesian networks both allow tractable inference and can be learned in polynomial time. The obvious point of comparison is the class of trees, which has the same properties. Structurally, these two classes are quite different. In particular, neither is a subclass of the other – DAGs with vertex cover number k > 1 can contain dense substructures, while a path of n nodes (which is also a tree) has vertex cover number ⌊n/2⌋ = Ω(n). In contrast with trees, bounded vertex cover number Bayesian networks have a densely connected “core”, and each node outside the core is either connected to the core or has no connections at all. Thus, we would expect them to perform better than trees when the “real” network has a few dense areas and only a few connections between nodes outside these areas. On the other hand, bounding the vertex cover number bounds the total size of the core area, which can be problematic especially in large networks where some parts of the network are not represented in the minimum vertex cover.
3http://www.cs.york.ac.uk/aig/sw/gobnilp/

Figure 3: Results for selected data sets (abalone, n = 9; flag, n = 29; carpo10000, n = 60): scores and running times for k = 1, . . . , 10, comparing no structure constraints, bounded tree-width, and bounded vertex cover. We report the score for the optimal DAG without structure constraints, and for the optimal DAGs with bounded tree-width and bounded vertex cover as the bound k changes, as well as the running time required for finding the optimal DAG in each case. If the computations were not finished at the time limit of 4 hours, we show the score of the best DAG found so far; the shaded area represents the unexplored part of the search space, that is, the upper bound of the shaded area is the best score upper bound proven by the ILP solver.

We also note that bounded vertex cover Bayesian networks have a close connection to naive Bayes classifiers. That is, the variables outside a vertex cover are conditionally independent of each other given the vertex cover. Thus, we can replace the vertex cover by a single variable whose states are the Cartesian product of the states of the vertex cover variables; this star-shaped network can then be viewed as a naive Bayes classifier. Finally, we note some open questions related to our current work. From a theoretical perspective, we would like to classify different graph parameters in terms of the complexity of learning.
Ideally, we would want to have a graph parameter that has a fixed-parameter learning algorithm when we bound the maximum parent set size, circumventing the barrier of Theorem 6. From a practical perspective, there is clearly room for improvement in the efficiency of our ILP-based learning algorithm; for instance, GOBNILP uses various optimisations beyond the basic ILP encoding to speed up the search. Acknowledgments We thank James Cussens for fruitful discussions. This research was partially funded by the Academy of Finland (Finnish Centre of Excellence in Computational Inference Research COIN, 251170). The experiments were performed using computing resources within the Aalto University School of Science “Science-IT” project. References [1] Mark Bartlett and James Cussens. Advances in Bayesian network learning using integer programming. In 29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013. [2] Jeremias Berg, Matti Järvisalo, and Brandon Malone. Learning Optimal Bounded Treewidth Bayesian Networks via Maximum Satisfiability. In 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014. [3] David M. Chickering. Learning Bayesian networks is NP-Complete. In Learning from Data: Artificial Intelligence and Statistics V, pages 121–130. Springer-Verlag, 1996. [4] David M. Chickering, David Heckerman, and Chris Meek. Large-sample learning of Bayesian networks is NP-Hard. Journal of Machine Learning Research, 5:1287–1330, 2004. [5] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968. [6] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42:393–405, 1990. [7] Gregory F. Cooper and Edward Herskovits. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309–347, 1992. [8] James Cussens.
Bayesian network learning with cutting planes. In 27th Conference on Uncertainty in Artificial Intelligence (UAI), 2011. [9] Sanjoy Dasgupta. Learning polytrees. In 15th Conference on Uncertainty in Artificial Intelligence (UAI), 1999. [10] Rodney G. Downey and Michael R. Fellows. Parameterized computational feasibility. In Feasible Mathematics II, pages 219–244. Birkhauser, 1994. [11] Rodney G. Downey and Michael R. Fellows. Parameterized complexity. Springer-Verlag, 1999. [12] Gal Elidan and Stephen Gould. Learning bounded treewidth Bayesian networks. Journal of Machine Learning Research, 9:2699–2731, 2008. [13] Jörg Flum and Martin Grohe. Parameterized complexity theory. Springer-Verlag, 2006. [14] David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995. [15] Tommi Jaakkola, David Sontag, Amir Globerson, and Marina Meila. Learning Bayesian network structure using LP relaxations. In 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 2010. [16] Janne H. Korhonen and Pekka Parviainen. Learning bounded tree-width Bayesian networks. In 16th International Conference on Artificial Intelligence and Statistics (AISTATS), 2013. [17] Johan H. P. Kwisthout, Hans L. Bodlaender, and L. C. van der Gaag. The necessity of bounded treewidth for efficient inference in Bayesian networks. In 19th European Conference on Artificial Intelligence (ECAI), 2010. [18] Chris Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research, 15:383–389, 2001. [19] Siqi Nie, Denis Deratani Maua, Cassio Polpo de Campos, and Qiang Ji. Advances in Learning Bayesian Networks of Bounded Treewidth. In Advances in Neural Information Processing Systems 27 (NIPS), 2014. [20] Rolf Niedermeier. Invitation to fixed-parameter algorithms. Oxford University Press, 2006. [21] Sascha Ott and Satoru Miyano.
Finding optimal gene networks using biological constraints. Genome Informatics, 14:124–133, 2003. [22] Pekka Parviainen, Hossein Shahrabi Farahani, and Jens Lagergren. Learning Bounded Tree-width Bayesian Networks using Integer Linear Programming. In 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014. [23] Tomi Silander and Petri Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
Infinite Factorial Dynamical Model Isabel Valera∗ Max Planck Institute for Software Systems ivalera@mpi-sws.org Francisco J. R. Ruiz∗ Department of Computer Science Columbia University f.ruiz@columbia.edu Lennart Svensson Department of Signals and Systems Chalmers University of Technology lennart.svensson@chalmers.se Fernando Perez-Cruz Universidad Carlos III de Madrid, and Bell Labs, Alcatel-Lucent fernandop@ieee.org Abstract We propose the infinite factorial dynamical model (iFDM), a general Bayesian nonparametric model for source separation. Our model builds on the Markov Indian buffet process to consider a potentially unbounded number of hidden Markov chains (sources) that evolve independently according to some dynamics, in which the state space can be either discrete or continuous. For posterior inference, we develop an algorithm based on particle Gibbs with ancestor sampling that can be efficiently applied to a wide range of source separation problems. We evaluate the performance of our iFDM on four well-known applications: multitarget tracking, cocktail party, power disaggregation, and multiuser detection. Our experimental results show that our approach to source separation not only outperforms previous approaches, but can also handle problems that were computationally intractable for existing approaches. 1 Introduction The central idea behind Bayesian nonparametrics (BNPs) is the replacement of classical finite-dimensional prior distributions with general stochastic processes, allowing for an open-ended number of degrees of freedom in a model [8]. They constitute an approach to model selection and adaptation in which the model complexity is allowed to grow with the data size [17]. In the literature, BNP priors have been applied to time series modeling.
For example, the infinite hidden Markov model [2, 20] considers a potentially infinite cardinality of the state space; and the BNP construction of switching linear dynamical systems (LDS) [4] considers an unbounded number of dynamical systems with transitions among them occurring at any time during the observation period. In the context of signal processing, the source separation problem has captured the attention of the research community for decades due to its wide range of applications [12, 23, 7, 24]. The BNP literature for source separation includes [10], in which the authors introduce the nonparametric counterpart of independent component analysis (ICA), referred to as infinite ICA (iICA); and [23], where the authors present the Markov Indian buffet process (mIBP), which places a prior over an infinite number of parallel Markov chains and is used to build the infinite factorial hidden Markov model (iFHMM) and the ICA iFHMM. These approaches can effectively adapt the number of hidden sources to fit the available data. However, they suffer from several limitations: i) the iFHMM is restricted to binary on/off hidden states, which may lead to hidden chains that do not match the actual hidden causes, and it is not able to deal with continuous-valued states; and ii) both the iICA and the ICA iFHMM make independence assumptions between consecutive values of active hidden states, which significantly restricts their ability to capture the underlying dynamical models. As a result, we find that existing approaches are not applicable to many well-known source separation problems, such as multitarget tracking [12], in which each target can be modeled as a Markov chain with continuous-valued states describing the target trajectory; or multiuser detection [24], in which the high cardinality of the hidden states makes the problem computationally intractable for the non-binary extension of the iFHMM. ∗Both authors contributed equally.
Hence, there is a lack of both a general BNP model for source separation and an efficient inference algorithm that address these limitations. In this paper, we provide a general BNP framework for source separation that can handle a wide range of dynamics and likelihood models. We assume a potentially infinite number of sources that are modeled as Markov chains evolving according to some dynamical system model. We assume that only the active sources contribute to the observations, and the states of the Markov chains are not restricted to be discrete: they can also be continuous-valued. Moreover, we let the observations depend both on the current state of the hidden sequences and on some previous states. This system memory is needed when dealing with applications in which the individual source signals propagate through the air and may thus suffer from phenomena such as reverberation, echo, or multipath propagation. Our approach results in a general and flexible dynamic model that we refer to as the infinite factorial dynamical model (iFDM), and that can be particularized to recover other models previously proposed in the literature, e.g., the binary iFHMM. As for most BNP models, one of the main challenges of our iFDM is posterior inference. In discrete time series models, including the iFHMM, an approximate inference algorithm based on forward-filtering backward-sampling (FFBS) sweeps is typically used [23, 5]. However, the exact FFBS algorithm has exponential computational complexity with respect to the memory length. The FFBS algorithm also becomes computationally intractable when dealing with on/off hidden states that are continuous-valued when active. In order to overcome these limitations, we develop a suitable inference algorithm for our iFDM by building a Markov chain Monte Carlo (MCMC) kernel using particle Gibbs with ancestor sampling (PGAS) [13].
This algorithm presents quadratic complexity with respect to the memory length and can easily handle a broad range of dynamical models. The versatility and efficiency of our approach is shown through a comprehensive experimental validation in which we tackle four well-known source separation problems: multitarget tracking [12], cocktail party [23], power disaggregation [7], and multiuser detection [24].1 Our results show that our iFDM provides meaningful estimations of the number of sources and their corresponding individual signal traces even in applications that previous approaches cannot handle. It also outperforms, in terms of accuracy, the iFHMM (extended to account for the actual state space cardinality) combined with FFBS-based inference in the cocktail party and power disaggregation problems. 2 Infinite Factorial Dynamical Model In this section, we detail our proposed iFDM. We assume that there is a potentially infinite number of sources contributing to the observed sequence {yt}T t=1, and each source is modeled by an underlying dynamic system model in which the state of the m-th source at time t, denoted by xtm ∈X, evolves over time as a first-order Markov chain. Here, the state space X can be either discrete or continuous. In addition, we introduce the auxiliary binary variables stm ∈{0, 1} to indicate whether the m-th source is active at time t, such that the observations only depend on the active sources. We assume that the variables stm follow a first-order Markov chain and let the states xtm evolve according to p(xtm|stm, x(t−1)m, s(t−1)m), i.e., the dynamic system model may depend on whether the source is active or inactive. We assume dummy states stm = 0 for t ≤0. As an example, in the cocktail party problem, yt denotes a sample of the recorded audio signal, which depends on the individual voice signals of the active speakers. 
The latent states xtm in this example are real-valued, and the transition model p(xtm | stm = 1, x(t−1)m, s(t−1)m) describes the dynamics of the voice signal. In many real applications, the individual signals propagate through the air until they are mixed and gathered by the receiver. During such propagation, different phenomena (e.g., refraction or reflection of the signal on the walls) may occur, leading to multipath propagation of the signals and, therefore, to different delayed copies of the individual signals at the receiver. In order to account for this “memory” effect, we consider that the state of the m-th source at time t, xtm, influences not only the observation yt, but also the future L − 1 observations, yt+1, . . . , yt+L−1. Therefore, the likelihood of yt depends on the last L states of all the Markov chains, yielding

p(yt | X, S) = p(yt | {xtm, stm, x(t−1)m, s(t−1)m, . . . , x(t−L+1)m, s(t−L+1)m}_{m=1}^{M}), (1)

where X and S are T × M matrices containing all the states xtm and stm, respectively. We remark that the likelihood of yt cannot depend on any hidden state xτm if sτm = 0. In order to be able to deal with an infinite number of sources, we place a BNP prior over the binary matrix S that contains all the variables stm. In particular, we assume that S ∼ mIBP(α, β0, β1), i.e., S is distributed as an mIBP [23] with parameters α, β0 and β1.

1Code for these applications can be found at https://github.com/franrruiz/iFDM

Figure 1: (a) Graphical representation of the iFDM with memory length L = 2. The dashed lines represent the memory. (b) Equivalent representation using extended states.
The mIBP places a prior distribution over binary matrices with a finite number of rows T and an infinite number of columns M, in which each row represents a time instant and each column represents a Markov chain. The mIBP ensures that, for any finite value of T, only a finite number of columns M+ in S are active almost surely, whereas the rest of them remain in the all-zero state and do not influence the observations. We make use of the stick-breaking construction of the mIBP, which is particularly useful for developing practical inference algorithms [19, 23]. Under the stick-breaking construction, two hidden variables are introduced for each Markov chain, representing the transition probabilities between the active and inactive states. In particular, we define a_m = p(s_tm = 1 | s_(t−1)m = 0) as the transition probability from inactive to active, and b_m = p(s_tm = 1 | s_(t−1)m = 1) as the self-transition probability of the active state of the m-th chain. In the stick-breaking representation, the columns of S are ordered according to their values of a_m, such that a_1 > a_2 > a_3 > ..., and the probability distribution over the variables a_m is given by a_1 ∼ Beta(α, 1) and p(a_m | a_(m−1)) ∝ (a_m)^(α−1) I(0 ≤ a_m ≤ a_(m−1)), where I(·) denotes the indicator function [19]. Finally, we place a beta distribution over the transition probabilities b_m of the form b_m ∼ Beta(β0, β1). The resulting iFDM model, particularized for L = 2, is shown in Figure 1a. Note that this model can be equivalently represented as shown in Figure 1b, using the extended states s^(e)_tm, with

$$s^{(e)}_{tm} = \left(x_{tm}, s_{tm}, x_{(t-1)m}, s_{(t-1)m}, \ldots, x_{(t-L+1)m}, s_{(t-L+1)m}\right). \tag{2}$$

This extended representation allows for an FFBS-based inference algorithm. However, the complexity of the FFBS, which grows exponentially with the memory parameter L and is intractable for continuous-valued hidden states x_tm, makes the algorithm impractical in many real scenarios. Hence, we maintain the representation in Figure 1a, because it allows us to derive an efficient inference algorithm.
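The stick-breaking construction above lends itself to a direct sampler. The following sketch (function names and default hyperparameters are ours, for illustration; only NumPy is assumed) draws the transition probabilities a_m and b_m under a truncation to M chains and simulates the resulting activity matrix S:

```python
import numpy as np

def sample_mibp_transitions(M, alpha=1.0, beta0=1.0, beta1=1.0, rng=None):
    """Stick-breaking draw of mIBP transition probabilities for M chains.

    a[m] = p(s_tm = 1 | s_(t-1)m = 0)  (activation), decreasing in m.
    b[m] = p(s_tm = 1 | s_(t-1)m = 1)  (self-transition of the active state).
    """
    rng = np.random.default_rng() if rng is None else rng
    # a_1 ~ Beta(alpha, 1); p(a_m | a_{m-1}) ∝ a_m^{alpha-1} on [0, a_{m-1}],
    # which is equivalent to a_m = a_{m-1} * v_m with v_m ~ Beta(alpha, 1).
    v = rng.beta(alpha, 1.0, size=M)
    a = np.cumprod(v)                   # a_1 > a_2 > ... almost surely
    b = rng.beta(beta0, beta1, size=M)  # b_m ~ Beta(beta0, beta1)
    return a, b

def sample_activity(a, b, T, rng=None):
    """Simulate the binary activity matrix S (T x M) given a and b."""
    rng = np.random.default_rng() if rng is None else rng
    M = len(a)
    S = np.zeros((T, M), dtype=int)
    s_prev = np.zeros(M, dtype=int)     # dummy states s_0m = 0
    for t in range(T):
        p_on = np.where(s_prev == 1, b, a)
        S[t] = (rng.random(M) < p_on).astype(int)
        s_prev = S[t]
    return S
```

Because a_m is a cumulative product of Beta(α, 1) draws, the activation probabilities decay geometrically in m, which is why only finitely many chains ever become active over a finite horizon.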
The proposed iFDM in Figure 1a can be particularized to resemble several other models that have been proposed in the literature. In particular, we recover: i) the iFHMM in [23] by choosing the state space X = {0, 1}, x_tm = s_tm and L = 1; ii) the ICA iFHMM in [23] if we set X = R, L = 1 and assume that p(x_tm | s_tm = 1, x_(t−1)m, s_(t−1)m) = p(x_tm | s_tm = 1) is a Gaussian distribution; and iii) a BNP counterpart of the LDS [9] with on/off states by assuming L = 1 and X = R, and letting the variables x_tm be Gaussian distributed with linear relationships among them.

3 Inference Algorithm

We develop an inference algorithm for the proposed iFDM that can handle different dynamic and likelihood models. Our approach relies on a blocked Gibbs sampling algorithm that alternates between sampling the number of considered chains and the global variables conditioned on the current value of the matrices S and X, and sampling the matrices S and X conditioned on the current value of the remaining variables. In particular, the algorithm proceeds iteratively as follows:

• Step 1: Add M_new new inactive chains using an auxiliary slice variable and a slice sampling method. In this step, the number of considered chains is increased from its initial value M+ to M‡ = M+ + M_new (M+ is not updated because s_tm = 0 for all t for the new chains).

[Figure 2a: Example of the connection of particles in PGAS, representing P = 3 particles x^i_τ for τ ∈ {t−1, t, t+1}. The index a^i_τ denotes the ancestor particle of x^i_τ. It can be seen that, e.g., the trajectories x^1_(1:t+1) and x^2_(1:t+1) only differ at time instant t+1.]
Algorithm 1: Particle Gibbs with ancestor sampling
Input: Reference particle x′_t for t = 1, ..., T, and global variables.
Output: Sample x^out_(1:T) from the PGAS Markov kernel.
 1: Draw x^i_1 ∼ r_1(x_1) for i = 1, ..., P − 1 (Eq. 3)
 2: Set x^P_1 = x′_1
 3: Compute the weights w^i_1 = W_1(x^i_1) for i = 1, ..., P (Eq. 4)
 4: for t = 2, ..., T do
 5:     // Resampling and ancestor sampling
        Draw a^i_t ∼ Categorical(w^1_(t−1), ..., w^P_(t−1)) for i = 1, ..., P − 1
 6:     Compute w̃^i_(t−1|T) for i = 1, ..., P (Eq. 5)
 7:     Draw a^P_t ∼ Categorical(w̃^1_(t−1|T), ..., w̃^P_(t−1|T))
 8:     // Particle propagation
        Draw x^i_t ∼ r_t(x_t | x^(a^i_t)_(1:t−1)) for i = 1, ..., P − 1 (Eq. 3)
 9:     Set x^P_t = x′_t
10:     Set x^i_(1:t) = (x^(a^i_t)_(1:t−1), x^i_t) for i = 1, ..., P
11:     // Weighting
        Compute the weights w^i_t = W_t(x^i_(1:t)) for i = 1, ..., P (Eq. 4)
12: Draw k ∼ Categorical(w^1_T, ..., w^P_T)
13: return x^out_(1:T) = x^k_(1:T)
[Figure 2: Particle Gibbs with ancestor sampling. (a) Connection of particles in PGAS. (b) The PGAS algorithm (Algorithm 1).]

• Step 2: Jointly sample the states x_tm and s_tm of all the considered chains. Compact the representation by removing those chains that remain inactive in the entire observation period, consequently updating M+.
• Step 3: Sample the global variables in the model, which include the transition probabilities and the emission parameters, from their posterior distribution.

In Step 1, we follow the slice sampling scheme for inference in BNP models based on the Indian buffet process (IBP) [19, 23], which effectively transforms the model into a finite factorial model with M‡ = M+ + M_new parallel chains. Step 2 consists of sampling the elements of the matrices S and X given the current value of the global variables. Here, we propose to use PGAS, an algorithm recently developed for inference in state-space models and non-Markovian latent variable models [13]. Each iteration of this algorithm presents quadratic complexity with respect to the memory length L, avoiding the exponential complexity of the standard FFBS algorithm when applied over the equivalent model with extended states in Figure 1b. Details on the PGAS approach are given in Section 3.1. After running PGAS, we remove those chains that remain inactive in the whole observation period. In Step 3, we sample the transition probabilities a_m and b_m, as well as other model-dependent variables, such as the observation variables needed to evaluate the likelihood p(y_t | X, S). Further details on the inference algorithm can be found in the Supplementary Material.

3.1 Particle Gibbs with ancestor sampling

PGAS [13] is a method within the framework of particle MCMC [1] that combines the main ideas, as well as the strengths, of sequential Monte Carlo and MCMC techniques. In contrast to other particle Gibbs with backward simulation methods [25, 14], this algorithm can also be conveniently applied to non-Markovian latent variable models, i.e., models that are not expressed in state-space form.
The PGAS algorithm is an MCMC kernel, and thus generates a new sample of the hidden state matrices (X, S) given an initial sample (X′, S′), which is the output of the previous iteration of the PGAS (extended to account for the M_new new inactive chains). The machinery inside the PGAS algorithm resembles an ordinary particle filter, with two main differences: one of the particles is deterministically set to the reference input sample, and the ancestor of each particle is randomly chosen and stored during the algorithm execution. We briefly describe the PGAS approach below, but we refer to [13] for a rigorous analysis of the algorithm properties.

In the proposed PGAS, we assume a set of P particles for each time instant, each representing the states {x_tm, s_tm}_(m=1)^(M‡). We denote by x^i_t the vector containing the state of the i-th particle at time t. We also introduce the ancestor indexes a^i_t ∈ {1, ..., P} to denote the particle that precedes the i-th particle at time t; that is, a^i_t corresponds to the index of the ancestor particle of x^i_t. Let also x^i_(1:t) be the ancestral path of particle x^i_t, i.e., the particle trajectory that is recursively defined as x^i_(1:t) = (x^(a^i_t)_(1:t−1), x^i_t). Figure 2a shows an example to clarify the notation.

The algorithm is summarized in Figure 2b. For each time instant t, we first generate the ancestor indexes for the first P − 1 particles according to the importance weights w^i_(t−1). Given these ancestors, the particles are then propagated across time according to a distribution r_t(x_t | x^(a_t)_(1:t−1)). For simplicity, and dropping the global variables from the notation for conciseness, we assume that

$$r_t(x_t \mid x^{a_t}_{1:t-1}) = p(x_t \mid x^{a_t}_{t-1}) = \prod_{m=1}^{M^\ddagger} p(x_{tm} \mid s_{tm}, x^{a_t}_{(t-1)m}, s^{a_t}_{(t-1)m})\, p(s_{tm} \mid s^{a_t}_{(t-1)m}), \tag{3}$$

i.e., particles are propagated as in Figure 1a using a simple bootstrap proposal kernel, p(x_tm, s_tm | s_(t−1)m, x_(t−1)m).
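To make Algorithm 1 concrete, here is a minimal, hypothetical PGAS sweep for the simplest setting: L = 1 and a single always-active chain with scalar linear-Gaussian dynamics x_t ∼ N(a x_(t−1), q) and likelihood y_t ∼ N(x_t, r). With the bootstrap proposal of Eq. (3), the importance weight of Eq. (4) reduces to the likelihood at time t, and the ancestor weight of Eq. (5) reduces to w̃^i ∝ w^i p(x′_t | x^i_(t−1)). The toy model and all constants are illustrative, not the paper's:

```python
import numpy as np

def pgas_kernel(x_ref, y, P=10, a_coef=0.9, q=0.5, r=1.0, rng=None):
    """One PGAS sweep (Algorithm 1) for a toy scalar linear-Gaussian chain."""
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    X = np.zeros((P, T))            # particles
    A = np.zeros((P, T), dtype=int) # ancestor indexes
    # t = 1: bootstrap initialization, last particle pinned to the reference
    X[:, 0] = rng.normal(0.0, np.sqrt(q), size=P)
    X[-1, 0] = x_ref[0]
    logw = -0.5 * (y[0] - X[:, 0]) ** 2 / r   # Eq. (4): likelihood only
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        # resample ancestors for particles 1..P-1
        A[:-1, t] = rng.choice(P, size=P - 1, p=w)
        # ancestor sampling for the reference particle (Eq. (5), L = 1):
        # w~ proportional to w * p(x'_t | x_{t-1})
        logw_anc = np.log(w + 1e-300) - 0.5 * (x_ref[t] - a_coef * X[:, t - 1]) ** 2 / q
        wa = np.exp(logw_anc - logw_anc.max()); wa /= wa.sum()
        A[-1, t] = rng.choice(P, p=wa)
        # propagate with the bootstrap proposal (Eq. (3))
        X[:, t] = rng.normal(a_coef * X[A[:, t], t - 1], np.sqrt(q))
        X[-1, t] = x_ref[t]
        # reweight (Eq. (4))
        logw = -0.5 * (y[t] - X[:, t]) ** 2 / r
    # draw an output trajectory and trace it back through the ancestors
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(P, p=w)
    out = np.empty(T)
    for t in range(T - 1, -1, -1):
        out[t] = X[idx, t]
        if t > 0:
            idx = A[idx, t]
    return out
```

Pinning the P-th particle to the reference trajectory while resampling its ancestor is what distinguishes PGAS from a plain particle Gibbs kernel and drives the improved mixing discussed in the text.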
The P-th particle is instead deterministically set to the reference particle, x^P_t = x′_t, whereas its ancestor index a^P_t is sampled according to the weights w̃^i_(t−1|T). Indeed, this is a crucial step that vastly improves the mixing properties of the MCMC kernel.

We now focus on the computation of the importance weights w^i_t and the ancestor weights w̃^i_(t−1|T). For the former, the particles are weighted according to w^i_t = W_t(x^i_(1:t)), where

$$W_t(x_{1:t}) = \frac{p(x_{1:t} \mid y_{1:t})}{p(x_{1:t-1} \mid y_{1:t-1})\, r_t(x_t \mid x_{1:t-1})} \propto p(y_t \mid x_{t-L+1:t}), \tag{4}$$

where y_(τ1:τ2) denotes the set of observations {y_t}_(t=τ1)^(τ2). Eq. 4 implies that, in order to obtain the importance weights, it suffices to evaluate the likelihood at time t. The weights w̃^i_(t−1|T) are given by

$$\widetilde{w}^i_{t-1|T} = w^i_{t-1}\, \frac{p(x^i_{1:t-1}, x'_{t:T} \mid y_{1:T})}{p(x^i_{1:t-1} \mid y_{1:t-1})} \propto w^i_{t-1}\, p(x'_t \mid x^i_{t-1}) \prod_{\tau=t}^{t+L-2} p(y_\tau \mid x^i_{1:t-1}, x'_{t:T}). \tag{5}$$

Note that, for memoryless models (i.e., L = 1), Eq. 5 can be simplified, since the product in the last term is not present and, therefore, w̃^i_(t−1|T) ∝ w^i_(t−1) p(x′_t | x^i_(t−1)). For L > 1, the computation of the weights w̃^i_(t−1|T) in (5) for i = 1, ..., P has computational time complexity scaling as O(PM‡L²). Since this computation needs to be performed for each time instant (and it is the most expensive calculation), the resulting algorithm complexity scales as O(PTM‡L²).

4 Experiments

We now evaluate the proposed model and inference algorithm on four different applications, which are detailed below and summarized in Table 1. For the PGAS kernel, we use P = 3,000 particles in all our experiments. Additional details on the experiments are given in the Supplementary Material.

Multitarget Tracking. In the multitarget tracking problem, we aim at locating the positions of several moving targets based on noisy observations. Under a general setup, a varying number of indistinguishable targets move around in a region, appearing at random in space and time.
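To fix ideas before the detailed setup, the tracking experiment described in the following paragraphs uses linear-Gaussian target dynamics (Eq. (6)) and received-signal-strength observations, which can be simulated with a short sketch (function names are ours; constants mirror the values reported below):

```python
import numpy as np

Ts = 0.5  # sampling period
Gx = np.array([[1, 0, Ts, 0],
               [0, 1, 0, Ts],
               [0, 0, 1, 0],
               [0, 0, 0, 1]], dtype=float)
Gu = np.array([[Ts**2 / 2, 0],
               [0, Ts**2 / 2],
               [Ts, 0],
               [0, Ts]], dtype=float)

def simulate_target(T, rng=None):
    """Simulate one target: x_t = Gx x_{t-1} + Gu u_t with u_t ~ N(0, I).

    State = [position_x, position_y, velocity_x, velocity_y]; the initial
    position is uniform in the 800 x 800 region, initial velocity ~ N(0, 0.01 I).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros((T, 4))
    x[0, :2] = rng.uniform(0, 800, size=2)
    x[0, 2:] = rng.normal(0, 0.1, size=2)
    for t in range(1, T):
        x[t] = Gx @ x[t - 1] + Gu @ rng.normal(size=2)
    return x

def rss_observation(positions, sensors, P0=10.0, d0=100.0, gamma=2.0,
                    noise_std=np.sqrt(2.0), rng=None):
    """RSS at each sensor: sum over active targets of P0 (d0/d)^gamma + noise."""
    rng = np.random.default_rng() if rng is None else rng
    # pairwise distances: positions (M, 2) vs. sensors (J, 2) -> (M, J)
    d = np.linalg.norm(sensors[None, :, :] - positions[:, None, :], axis=-1)
    y = (P0 * (d0 / d) ** gamma).sum(axis=0)
    return y + rng.normal(0, noise_std, size=len(sensors))
```

This is only a generative sketch of the experimental setup; in the paper the number of targets and their activity are of course unknown and inferred by the iFDM.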
Multitarget tracking plays an important role in many areas of engineering, such as surveillance, computer vision and signal processing [18, 16, 21, 6, 12]. Here, we focus on a simple synthetic example to show that our proposed iFDM can handle time-dependent continuous-valued hidden states. We place three moving targets within a region of 800 × 800 metres, where 25 sensors are located on a square grid. The state x_tm = [x^(1)_tm, x^(2)_tm, v^(1)_tm, v^(2)_tm]⊤ of each target consists of its position and velocity in a two-dimensional plane, and we assume a linear Gaussian dynamic model such that, while active, x_tm evolves according to

$$x_{tm} = G_x x_{(t-1)m} + G_u u_t, \qquad G_x = \begin{bmatrix} 1 & 0 & T_s & 0 \\ 0 & 1 & 0 & T_s \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad G_u = \begin{bmatrix} T_s^2/2 & 0 \\ 0 & T_s^2/2 \\ T_s & 0 \\ 0 & T_s \end{bmatrix}, \tag{6}$$

where T_s = 0.5 is the sampling period, and u_t ∼ N(0, I) is a vector that models the acceleration noise. For each considered target, we sample the initial position uniformly in the sensor network space, and assume that the initial velocity is Gaussian distributed with zero mean and covariance 0.01 I. Following [21, 12], we generate T = 300 observations based on the received signal strength (RSS), where the measurement of sensor j at time t is given by y_tj = Σ_(m: s_tm = 1) P0 (d0 / d_mjt)^γ + n_tj. Here, n_tj ∼ N(0, 2) is the noise term, d_mjt is the distance between target m and sensor j at time t, P0 = 10 is the transmitted power, and d0 = 100 metres and γ = 2 are, respectively, the reference distance and the path loss exponent, which account for the radio propagation model. In our inference algorithm, we sample the noise variance by placing an InvGamma(1, 1) distribution as its prior. Here, we compare

Table 1: Applications of the iFDM.
Application          | Model                  | X                | p(x_tm | s_tm = 1, x_(t−1)m, s_(t−1)m = 1) | L
Multitarget Tracking | Infinite factorial LDS | R^4              | N(x_tm | G_x x_(t−1)m, G_u G_u^⊤)          | 1
Cocktail Party       | ICA iFHMM              | R                | N(x_tm | 0, σ_x²)                          | 1
Power Disaggregation | Non-binary iFHMM       | {0, 1, ..., Q−1} | a^m_jk = p(x_tm = k | x_(t−1)m = j)        | 1
Multiuser Detection  | −                      | A ∪ {0}          | U(A)                                       | ∈ N
the performance of the iFDM with a 'genie-aided' finite factorial model with perfect knowledge of the number of targets and the noise variance.

[Figure 3: Results for the multitarget tracking problem. (a) True and inferred target trajectories and sensor locations. (b) Position error over time for the three targets. (c) Average position error (metres):]

(c)      | iFDM | Genie-aided model
Target 1 | 7.0  | 4.8
Target 2 | 5.9  | 6.0
Target 3 | 6.3  | 5.4
Average  | 6.4  | 5.9

In Figures 3a and 3b, we show the true and inferred trajectories of the targets, and the temporal evolution of the position error of the iFDM. Additionally, Figure 3c shows the average position error (in absolute value) for our iFDM and the genie-aided method. In these figures, we observe that the proposed model and algorithm are able to detect the three targets and their trajectories, providing performance similar to that of the genie-aided method. In particular, both approaches provide average position errors of around 6 metres, which is thrice the noise variance.

Cocktail Party. We now address a blind speech separation task, also known as the cocktail party problem. Given the recorded audio signals from a set of microphones, the goal is to separate out the individual speech signals of multiple people who are speaking simultaneously. Speakers may start speaking or become silent at any time. Similarly to [23], we collect data from several speakers from the PASCAL 'CHiME' Speech Separation and Recognition Challenge website.2 The voice signal for each speaker consists of 4 sentences, which we append with random pauses in between each sentence. We artificially mix the data 10 times (corresponding to 10 microphones) with mixing weights sampled from Uniform(0, 1), such that each microphone receives a linear combination of all the considered signals, corrupted by Gaussian noise with standard deviation 0.3.
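The mixing step just described can be sketched as follows (a hypothetical helper of our own; per-microphone weights drawn from Uniform(0, 1) and additive Gaussian noise with standard deviation 0.3, as above):

```python
import numpy as np

def mix_speakers(X, S, n_mics=10, noise_std=0.3, rng=None):
    """Linear mixing y_t = sum_m w_m x_tm + n_t for the cocktail party setup.

    X: (T, M) voice signals; S: (T, M) binary activity (x_tm = 0 when inactive).
    Returns the (T, n_mics) microphone recordings and the mixing weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, M = X.shape
    W = rng.uniform(0, 1, size=(M, n_mics))  # one weight vector per speaker
    Y = (X * S) @ W + rng.normal(0, noise_std, size=(T, n_mics))
    return Y, W
```

Note that for inference the paper instead places a N(0, I) prior on the weighting vectors w_m; Uniform(0, 1) weights are only used to generate the data.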
We consider two scenarios, with 5 and 15 speakers, and subsample the data so that we learn from T = 1,354 and T = 1,087 datapoints, respectively. Following [23], our model assumes p(x_tm | s_tm = 1, x_(t−1)m, s_(t−1)m) = N(x_tm | 0, 2), and x_tm = 0 whenever s_tm = 0. We also model y_t as a linear combination of all the voice signals under Gaussian noise, i.e., y_t = Σ_(m=1)^(M+) w_m x_tm + n_t, where n_t ∼ N(0, σ_y² I) is the noise term, w_m ∼ N(0, I) is the 10-dimensional weighting vector associated with the m-th speaker, and σ_y² ∼ InvGamma(1, 1). We compare our iFDM with the ICA iFHMM in [23] using FFBS sweeps for inference, with (i) p(x_tm | s_tm = 1) = N(x_tm | 0, 2) (denoted as FFBS-G), and (ii) p(x_tm | s_tm = 1) = Laplace(x_tm | 0, 2) (denoted as FFBS-L).

For the scenario with 5 speakers, we show the true and the inferred (after iteration 10,000) number of speakers in Figures 4a, 4b, 4c and 4d, along with their activities during the observation period. In order to quantitatively evaluate the performance of the different algorithms, we show in Figure 4e (top) the activity detection error rate (ADER), which is computed as the probability of detecting activity (inactivity) of a speaker while that speaker is actually inactive (active). As the algorithms are unsupervised, we sort the estimated chains so that the ADER is minimized. If the inferred number of speakers M+ is smaller (larger) than the true number of speakers, we consider some extra inferred inactive chains (additional speakers). The PGAS-based approach outperforms the two FFBS-based methods because it can jointly sample the states of all chains (speakers) for each time instant, whereas the FFBS requires sampling each chain conditioned on the current states of the other chains, leading to poor mixing, as discussed in [22]. As a consequence, the FFBS tends to overestimate the number of speakers, as shown in Figure 4e (bottom).

2 http://spandh.dcs.shef.ac.uk/projects/chime/PCC/datasets.html

[Figure 4 panels: (a) Ground truth. (b) PGAS. (c) FFBS-G. (d) FFBS-L.]
(e) ADER (top) and inferred M+ (bottom):
Method | 5 speakers | 15 speakers
ADER:
PGAS   | 0.08 | 0.08
FFBS-G | 0.25 | 0.14
FFBS-L | 0.14 | 0.12
Inferred M+:
PGAS   | 5 | 15
FFBS-G | 7 | 15
FFBS-L | 8 | 15
Figure 4: Results for the cocktail party problem.

Table 2: Accuracy for the power disaggregation problem.
(a) REDD ('H.' stands for 'House'):
Algorithm | H. 1 | H. 2 | H. 3 | H. 4 | H. 5
PGAS      | 0.68 | 0.79 | 0.60 | 0.58 | 0.55
FFBS      | 0.59 | 0.78 | 0.56 | 0.53 | 0.43
(b) AMP:
Algorithm | Day 1 | Day 2
PGAS      | 0.76  | 0.82
FFBS      | 0.67  | 0.72

Power Disaggregation. Given the aggregate whole-home power consumption signal, the power disaggregation problem consists of estimating both the number of active devices in the house and the power draw of each individual device [11, 7]. We validate the performance of the iFDM on two different real databases: the Reference Energy Disaggregation Data Set (REDD) [11], and the Almanac of Minutely Power Dataset (AMP) [15]. For the AMP database, we consider two 24-hour segments and 8 devices. For the REDD database, we consider a 24-hour segment across 5 houses and 6 devices. Our model assumes that each device can take Q = 4 different states (one inactive state and three active states with different power consumption), i.e., x_tm ∈ {0, 1, ..., Q−1}, with x_tm = 0 if s_tm = 0. We place a symmetric Dirichlet prior over the transition probability vectors of the form a^m_j ∼ Dirichlet(1), where each element a^m_jk = p(x_tm = k | s_tm = 1, x_(t−1)m = j, s_(t−1)m). When x_tm = 0, the power consumption of device m at time t is zero (P^m_0 = 0), and when x_tm ∈ {1, ..., Q−1} its average power consumption is given by P^m_(x_tm). Thus, the total power consumption is given by y_t = Σ_(m=1)^(M+) P^m_(x_tm) + n_t, where n_t ∼ N(0, 0.5) represents the additive Gaussian noise. For q ∈ {1, ..., Q−1}, we assume a prior power consumption P^m_q ∼ N(15, 10). In this case, the proposed model for the iFDM resembles a non-binary iFHMM and, therefore, we can also apply the FFBS algorithm to infer the power consumption draws of each device.
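As a sketch (with our own helper name), the aggregate observation model y_t = Σ_m P^m_(x_tm) + n_t with P^m_0 = 0 can be simulated as:

```python
import numpy as np

def aggregate_consumption(X, P, noise_std=np.sqrt(0.5), rng=None):
    """Total power y_t = sum_m P[m, x_tm] + n_t, assuming P[m, 0] = 0.

    X: (T, M) device states in {0, ..., Q-1}; P: (M, Q) per-state power draws.
    Noise is Gaussian with variance 0.5, as in the text (N(0, 0.5)).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, M = X.shape
    # fancy indexing: pick P[m, X[t, m]] for every (t, m), then sum over devices
    y = P[np.arange(M)[None, :], X].sum(axis=1)
    return y + rng.normal(0, noise_std, size=T)
```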
In order to evaluate the performance of the different algorithms, we compute the mean accuracy of the estimated consumption of each device (higher is better), i.e.,

$$\text{acc} = 1 - \frac{\sum_{t=1}^{T} \sum_{m=1}^{M} |x^{(m)}_t - \hat{x}^{(m)}_t|}{2 \sum_{t=1}^{T} \sum_{m=1}^{M} x^{(m)}_t},$$

where x^(m)_t and x̂^(m)_t = P^m_(x_tm) are, respectively, the true and the estimated power consumption by device m at time t. In order to compute the accuracy, we assign each estimated chain to a device so that the accuracy is maximized. If the inferred number of devices M+ is smaller than the true number of devices, we use x̂^(m)_t = 0 for the undetected devices. If M+ is larger than the true number of devices, we group all the extra chains as an "unknown" device and use x^(unk)_t = 0. In Table 2 we show the results provided by both algorithms. The PGAS approach outperforms the FFBS algorithm in the five houses of the REDD database and the two selected days of the AMP database. This occurs because the PGAS can simultaneously sample the hidden states of all devices for each time instant, whereas the FFBS requires conditioning on the current states of all but one device.

Multiuser Detection. We now consider a digital communication system in which users are allowed to enter or leave the system at any time, and several receivers cooperate to estimate the number of users, the (digital) symbols they transmit, and the propagation channels they face. Multipath propagation affects the radio signal, thus causing inter-symbol interference. To capture this phenomenon in our model, we use L ≥ 1 in this application. We consider a multiuser Wi-Fi communication system, and we use a ray tracing algorithm (the WISE software [3]) to design a realistic indoor wireless system in an office located at Bell Labs Crawford Hill. We place 12 receivers and 6 transmitters across the office, in the positions respectively marked with circles and crosses in Figure 5 (all transmitters and receivers are placed at a height of 2 metres).
Transmitted symbols belong to a quadrature phase-shift keying (QPSK) constellation, A = {(±1 ± √−1)/√2}, such that, while active, the transmitted symbols are independent and uniformly distributed in A, i.e., p(x_tm | s_tm = 1, x_(t−1)m, s_(t−1)m) = U(A).

[Figure 5: Plan of the office building (120 m × 14 m) at Bell Labs Crawford Hill, with the 12 receivers and 6 transmitters.]

Table 3: Results for the multiuser detection problem.
(a) Number of recovered transmitters / inferred M+:
Model | L = 1 | L = 2 | L = 3 | L = 4 | L = 5
iFDM  | 6/6   | 6/6   | 6/6   | 6/6   | 6/6
iFHMM | 3/11  | 3/11  | 3/8   | 1/10  | −
(b) MSE of the channel coefficients (×10⁻⁶):
Model | L = 1 | L = 2 | L = 3 | L = 4 | L = 5
iFDM  | 2.58  | 2.51  | 0.80  | 0.30  | 0.16
iFHMM | 2.79  | 1.38  | 5.53  | 1.90  | −

The observations of all the receivers are weighted replicas of the transmitted symbols under noise, y_t = Σ_(m=1)^(M+) Σ_(ℓ=1)^(L) h^m_ℓ x_(t−ℓ+1)m + n_t, where x_tm = 0 for the inactive states, and the channel coefficients h^m_ℓ and the noise variance σ_y² are provided by the WISE software. For inference, we assume Rayleigh-fading channels and, therefore, we place a circularly symmetric complex Gaussian prior distribution over the channel coefficients, h^m_ℓ | σ_ℓ² ∼ CN(0, σ_ℓ² I, 0), and over the noise term, n_t ∼ CN(0, σ_y² I, 0). We place an inverse gamma prior over σ_ℓ² with mean and standard deviation 0.01 e^(−0.5(ℓ−1)). The choice of this particular prior is based on the assumption that the channel coefficients h^m_ℓ are a priori expected to decay with the memory index ℓ, since the radio signal suffers more attenuation as it propagates through the walls or bounces off them. We use an observation period T = 2,000, and vary L from 1 to 5. Five channel taps correspond to the radio signal travelling a distance of 750 m, which should be enough given the dimensions of this office space. We compare our iFDM with a non-binary iFHMM model with state space cardinality |X| = 5^L, using FFBS sweeps for inference (we do not run the FFBS algorithm for L = 5 due to its computational complexity).
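A sketch of the memory-L observation model used here (the helper name is ours; in the actual experiment the channel coefficients come from the WISE software and the symbols from the QPSK prior):

```python
import numpy as np

def multiuser_observation(X, H, noise_std, rng=None):
    """Received signal y_t = sum_m sum_l h_{m,l} x_{(t-l+1),m} + n_t.

    X: (T, M) complex transmitted symbols (0 when a user is inactive);
    H: (M, L, J) complex channel coefficients for J receivers, memory L.
    Returns Y: (T, J) observations under circularly symmetric Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, M = X.shape
    _, L, J = H.shape
    Y = np.zeros((T, J), dtype=complex)
    for l in range(L):
        # delayed copy of the symbols x_{(t-l),m}, zero-padded at the start
        Xd = np.vstack([np.zeros((l, M)), X[:T - l]])
        Y += Xd @ H[:, l, :]
    noise = rng.normal(0, noise_std / np.sqrt(2), size=(T, J, 2))
    return Y + noise[..., 0] + 1j * noise[..., 1]
```

The loop over taps makes explicit why the likelihood of y_t couples the last L states of every chain, which is the non-Markovian structure that PGAS (unlike a plain extended-state FFBS) handles without exponential cost.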
We show in Table 3a the number of recovered transmitters (i.e., the number of transmitters for which we recover all the transmitted symbols with no error) found after running the inference algorithms, together with the inferred value of M+. We see that the iFHMM tends to overestimate the number of transmitters, which deteriorates the overall symbol estimates and, as a consequence, not all the transmitted symbols are recovered. We additionally report in Table 3b the MSE of the first channel tap, i.e., (1/(6 × 12)) Σ_m ||h^m_1 − ĥ^m_1||², where ĥ^m_ℓ denotes the inferred channel coefficients. We sort the transmitters so that the MSE is minimized, and ignore the extra inferred transmitters. In general, the iFDM outperforms the iFHMM approach, as discussed above. Under our iFDM, the MSE decreases as we consider a larger value of L, since the model better fits the actual radio propagation.

5 Conclusions

We have proposed a general BNP approach to solve source separation problems in which the number of sources is unknown. Our model builds on the mIBP to consider a potentially unbounded number of hidden Markov chains that evolve independently according to some dynamics, in which the state space can be either discrete or continuous. For posterior inference, we have developed an algorithm based on PGAS that avoids the intractable complexity that the FFBS presents in many scenarios, enabling the application of our iFDM to problems such as multitarget tracking or multiuser detection. In addition, we have shown empirically that our PGAS approach outperforms the FFBS-based algorithm (in terms of accuracy) in the cocktail party and power disaggregation problems, since the FFBS gets more easily trapped in local modes of the posterior in which several Markov chains correspond to a single hidden source.

Acknowledgments

I.
Valera is currently supported by the Humboldt research fellowship for postdoctoral researchers program and acknowledges the support of Plan Regional-Programas I+D of Comunidad de Madrid (AGES-CM S2010/BMD-2422). F. J. R. Ruiz is supported by an FPU fellowship from the Spanish Ministry of Education (AP2010-5333). This work is also partially supported by Ministerio de Economía of Spain (projects COMPREHENSION, id. TEC2012-38883-C02-01, and ALCIT, id. TEC2012-38800-C03-01), by Comunidad de Madrid (project CASI-CAM-CM, id. S2013/ICE2845), by the Office of Naval Research (ONR N00014-11-1-0651), and by the European Union 7th Framework Programme through the Marie Curie Initial Training Network 'Machine Learning for Personalized Medicine' (MLPM2012, Grant No. 316861).

References

[1] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society Series B, 72(3):269–342, 2010.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, 2002.
[3] S. J. Fortune, D. M. Gay, B. W. Kernighan, O. Landron, R. A. Valenzuela, and M. H. Wright. WISE design of indoor wireless systems: Practical computation and optimization. IEEE Computing in Science & Engineering, 2(1):58–68, March 1995.
[4] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. Bayesian nonparametric methods for learning Markov switching processes. IEEE Signal Processing Magazine, 27(6):43–54, 2010.
[5] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. A sticky HDP-HMM with application to speaker diarization. Annals of Applied Statistics, 5(2A):1020–1056, 2011.
[6] L. Jiang, S. S. Singh, and S. Yıldırım. Bayesian tracking and parameter learning for non-linear multiple target tracking models. arXiv preprint arXiv:1410.2046, 2014.
[7] M. J. Johnson and A. S. Willsky. Bayesian nonparametric hidden semi-Markov models.
Journal of Machine Learning Research, 14:673–701, February 2013.
[8] M. I. Jordan. Hierarchical models, nested models and completely random measures. Springer, New York, NY, 2010.
[9] R. E. Kalman. A new approach to linear filtering and prediction problems. ASME Journal of Basic Engineering, 82(Series D):35–45, 1960.
[10] D. Knowles and Z. Ghahramani. Nonparametric Bayesian sparse factor models with application to gene expression modeling. The Annals of Applied Statistics, 5(2B):1534–1552, June 2011.
[11] J. Z. Kolter and T. Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In International Conference on Artificial Intelligence and Statistics, pages 1472–1482, 2012.
[12] J. Lim and U. Chong. Multitarget tracking by particle filtering based on RSS measurement in wireless sensor networks. International Journal of Distributed Sensor Networks, March 2015.
[13] F. Lindsten, M. I. Jordan, and T. B. Schön. Particle Gibbs with ancestor sampling. Journal of Machine Learning Research, 15(1):2145–2184, 2014.
[14] F. Lindsten and T. B. Schön. Backward simulation methods for Monte Carlo statistical inference. Foundations and Trends in Machine Learning, 6(1):1–143, 2013.
[15] S. Makonin, F. Popowich, L. Bartram, B. Gill, and I. V. Bajic. AMPds: A public dataset for load disaggregation and eco-feedback research. In Proceedings of the 2013 IEEE Electrical Power and Energy Conference (EPEC), 2013.
[16] S. Oh, S. Russell, and S. Sastry. Markov chain Monte Carlo data association for general multiple-target tracking problems. In IEEE Conference on Decision and Control, volume 1, pages 735–742, December 2004.
[17] P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning. Springer, 2010.
[18] S. Särkkä, A. Vehtari, and J. Lampinen. Rao-Blackwellized particle filter for multiple target tracking. Information Fusion, 8(1):2–15, 2007.
[19] Y. W. Teh, D. Görür, and Z. Ghahramani.
Stick-breaking construction for the Indian buffet process. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
[20] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[21] F. Thouin, S. Nannuru, and M. Coates. Multi-target tracking for measurement models with additive contributions. In Proceedings of the 14th International Conference on Information Fusion (FUSION), pages 1–8, July 2011.
[22] M. K. Titsias and C. Yau. Hamming ball auxiliary sampling for factorial hidden Markov models. In Advances in Neural Information Processing Systems 27, 2014.
[23] J. Van Gael, Y. W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems, volume 21, 2009.
[24] M. A. Vázquez and J. Míguez. User activity tracking in DS-CDMA systems. IEEE Transactions on Vehicular Technology, 62(7):3188–3203, 2013.
[25] N. Whiteley, C. Andrieu, and A. Doucet. Efficient Bayesian inference for switching state-space models using particle Markov chain Monte Carlo methods. Technical report, Bristol Statistics Research Report 10:04, 2010.
Convergence Analysis of Prediction Markets via Randomized Subspace Descent Rafael Frongillo Department of Computer Science University of Colorado, Boulder raf@colorado.edu Mark D. Reid Research School of Computer Science The Australian National University & NICTA mark.reid@anu.edu.au Abstract Prediction markets are economic mechanisms for aggregating information about future events through sequential interactions with traders. The pricing mechanisms in these markets are known to be related to optimization algorithms in machine learning and through these connections we have some understanding of how equilibrium market prices relate to the beliefs of the traders in a market. However, little is known about rates and guarantees for the convergence of these sequential mechanisms, and two recent papers cite this as an important open question. In this paper we show how some previously studied prediction market trading models can be understood as a natural generalization of randomized coordinate descent which we call randomized subspace descent (RSD). We establish convergence rates for RSD and leverage them to prove rates for the two prediction market models above, answering the open questions. Our results extend beyond standard centralized markets to arbitrary trade networks. 1 Introduction In recent years, there has been an increasing appreciation of the shared mathematical foundations between prediction markets and a variety of techniques in machine learning. Prediction markets consist of agents who trade securities that pay out depending on the outcome of some uncertain, future event. As trading takes place, the prices of these securities reflect an aggregation of the beliefs the traders have about the future event. A popular class of mechanisms for updating these prices as trading occurs has been shown to be closely related to techniques from online learning [7, 1, 21], convex optimization [10, 19, 13], probabilistic aggregation [24, 14], and crowdsourcing [3]. 
Building these connections serves several purposes; one important line of research has been to use insights from machine learning to better understand how to interpret prices in a prediction market as aggregations of trader beliefs, and moreover, how the market together with the traders can be viewed as something akin to a distributed machine learning algorithm [24]. The analysis in this paper was motivated in part by two pieces of work that considered the equilibria of prediction markets with specific models of trader behavior: traders as risk minimizers [13], and traders who maximize expected exponential utility using beliefs from exponential families [2]. In both cases, the focus was on understanding the properties of the market at convergence, and questions concerning whether and how convergence happened were left as future work. In [2], the authors note that "we have not considered the dynamics by which such an equilibrium would be reached, nor the rate of convergence etc., yet we think such questions provide fruitful directions for future research." In [13], "One area of future work would be conducting a detailed analysis of this framework using the tools of convex optimisation. A particularly interesting topic is to find the conditions under which the market will converge."

The main contribution of this paper is to answer these questions of convergence. We do so by first proposing a new and very general model of trading networks and dynamics (§3) that subsumes the models used in [2] and [13], and by providing a key structural result for what we call efficient trades in these networks (Theorem 2). As an aside, this structural result provides an immediate generalization of an existing aggregation result in [2] to trade networks of "compatible" agents (Theorem 8). In §4, we argue that efficient trades in our network model can be viewed as steps of what we call the randomized subspace descent (RSD) algorithm (Algorithm 1).
This novel generalization of coordinate descent allows an objective to be minimized by taking steps along affinely constrained subspaces, and may be of independent interest beyond prediction market analysis. We provide a convergence analysis of RSD under two sets of regularity constraints (Theorems 3 & 9) and show how these can be used to derive (slow & fast) convergence rates in trade networks (Theorems 4 & 5). Before introducing our general trading networks and convergence rate results, we first introduce the now standard presentation of potential-based prediction markets [1] and the recent variant in which all agents determine their trades using risk measures [13]. We will then state informal versions of our main results so as to highlight how we address issues of convergence in existing frameworks.

2 Background and Informal Results

Prediction markets are mechanisms for eliciting and aggregating distributed information or beliefs about uncertain future events. The set of events or outcomes under consideration in the market will be denoted Ω and may be finite or infinite. For example, each outcome ω ∈ Ω might represent a certain presidential candidate winning an election, the location of a missing submarine, or an unknown label for an item in a data set. Following [1], the goods that are traded in a prediction market are k outcome-dependent securities {φ(·)_i}_{i=1}^k that pay φ(ω)_i dollars should the outcome ω ∈ Ω occur. We denote the set of distributions over Ω by ∆_Ω and note, for any p ∈ ∆_Ω, that the expected payoff for the securities under p is E_{ω∼p}[φ(ω)], and the set of all expected payoffs is just the convex hull, denoted Π := conv(φ(Ω)). A simple and commonly studied case is when Ω = [k] := {1, . . . , k} (i.e., when there are exactly k outcomes) and the securities are the Arrow-Debreu securities that pay out $1 should a specific outcome occur and nothing otherwise (i.e., φ(ω)_i = 1 if ω = i and φ(ω)_i = 0 for ω ≠ i).
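As a small concrete illustration of this setup (a minimal sketch; the variable names are ours), the expected payoff of the Arrow-Debreu securities under a belief p ∈ ∆_Ω is a point of Π, and for these particular securities it is simply p itself:

```python
# Arrow-Debreu payoff setup: k securities, phi(w)_i = 1 iff outcome w = i.
k = 3

def phi(w):
    """Payoff vector of the k Arrow-Debreu securities on outcome w."""
    return [1.0 if i == w else 0.0 for i in range(k)]

p = [0.2, 0.5, 0.3]  # a belief p over the k outcomes

# Expected payoff E_{w~p}[phi(w)]: a point of Pi = conv(phi(Omega)).
expected = [sum(p[w] * phi(w)[i] for w in range(k)) for i in range(k)]
print(expected)  # for Arrow-Debreu securities this is just p: [0.2, 0.5, 0.3]
```

For general securities φ the expected payoff would be a genuine mixture of the payoff vectors, which is why Π = conv(φ(Ω)) is the natural "price space" in what follows.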
Here, the securities are just basis vectors for R^k and Π = ∆_Ω. Traders in a prediction market hold portfolios of securities r ∈ R^k called positions that pay out a total of r · φ(ω) = Σ_{i=1}^k r_i φ(ω)_i dollars should outcome ω occur. We denote the set of positions by R = R^k. We will assume that R always contains a position r$ that returns a dollar regardless of which outcome occurs, meaning r$ · φ(ω) = 1 for all ω ∈ Ω. We therefore interpret r$ as "cash" within the market in the sense that buying or selling r$ guarantees a fixed change in wealth. In order to address the questions about convergence in [2, 13] we will consider a common form of prediction market that is run through a market maker. This is an automated agent that is willing to buy or sell securities in return for cash. The specific and well-studied prediction market mechanism we consider is the potential-based market maker [1]. Here, traders interact with the market maker sequentially, and the cost for each trade is determined by a convex potential function C : R → R applied to the market maker's state s ∈ R. Specifically, the cost for a trade dr when the market maker has state s is given by cost(dr; s) = C(s − dr) − C(s), i.e., the change in potential value of the market maker's position due to the market maker accepting the trade. After a trade, the market maker updates the state to s ← s − dr.¹ As noted in the next section, the usual axiomatic requirements for a cost function (e.g., in [1]) specify a function that is effectively a risk measure, commonly studied in mathematical finance (see, e.g., [9]).

2.1 Risk Measures

As in [13], agents in our framework will each quantify their uncertainty in positions using what is known as a risk measure. This is a function that assigns dollar values to positions. As Example 1 below shows, this assumption will also cover the case of agents maximizing exponential utility, as considered in [2].
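A minimal numeric sketch of a potential-based market maker may help here (function names and the uniform reference distribution are our own choices; the LMSR-style potential anticipates Example 1 below). The property exercised is that trading the cash position r$ changes the potential by exactly its face value:

```python
import math

def lmsr_potential(s, beta=1.0, q=None):
    """Convex potential C(s) = beta * log(sum_w q_w * exp(s_w / beta)).

    With Arrow-Debreu securities this is the LMSR-style potential
    (Example 1 in the text identifies C(r) = rho_beta(-r) with LMSR).
    """
    k = len(s)
    if q is None:
        q = [1.0 / k] * k  # uniform reference distribution (an assumption here)
    m = max(si / beta for si in s)  # log-sum-exp stabilization
    return beta * (m + math.log(sum(qi * math.exp(si / beta - m)
                                    for qi, si in zip(q, s))))

def cost(dr, s, beta=1.0):
    """Cost of a trade dr at market state s: C(s - dr) - C(s)."""
    s_new = [si - di for si, di in zip(s, dr)]
    return lmsr_potential(s_new, beta) - lmsr_potential(s, beta)

# The "cash" position r$ pays one dollar on every outcome; taking on c units
# of it changes the potential by exactly c (translation property of C).
s = [0.2, -0.5, 1.0]
c = 3.0
dc = cost([-c, -c, -c], s)   # market maker's state shifts by +c on every outcome
print(abs(dc - c) < 1e-9)    # True: C(s + c*1) = C(s) + c
```

This translation property is precisely the cash-invariance axiom of the risk measures introduced next, read through the identification C = ρ_C.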
¹It is more common in the prediction market literature for s to be a liability vector, tracking what the market maker stands to lose instead of gain. Here we adopt positive positions to match the convention for risk measures.

A (convex monetary) risk measure is a function ρ : R → R satisfying, for all r, r′ ∈ R:
• Monotonicity: ∀ω r · φ(ω) ≤ r′ · φ(ω) =⇒ ρ(r) ≥ ρ(r′).
• Cash invariance: ρ(r + c r$) = ρ(r) − c for all c ∈ R.
• Convexity: ρ(λr + (1 − λ)r′) ≤ λρ(r) + (1 − λ)ρ(r′) for all λ ∈ (0, 1).
• Normalization: ρ(0) = 0.
The reasonableness of these properties is usually argued as follows (see, e.g., [9]). Monotonicity ensures that positions that result in strictly smaller payoffs regardless of the outcome are considered more risky. Cash invariance captures the idea that if a guaranteed payment of $c is added to the payment on each outcome then the risk will decrease by $c. Convexity states that merging positions results in lower risk. Finally, normalization requires that holding no securities should carry no risk. This last condition is only for convenience, since any risk measure violating it can trivially have its argument translated so that it holds, without affecting the other three properties. A key result concerning convex risk measures is the following representation theorem (cf. [9, Theorem 4.15]).

Theorem 1 (Risk Representation). A functional ρ : R → R is a convex risk measure if and only if there is a closed convex function α : Π → R ∪ {∞} such that ρ(r) = sup_{π∈relint(Π)} ⟨π, −r⟩ − α(π).

Here relint(Π) denotes the relative interior of Π, the interior relative to the affine hull of Π. Notice that if f* denotes the convex conjugate f*(y) := sup_x ⟨y, x⟩ − f(x), then this theorem states that ρ(r) = α*(−r), that is, ρ and α are "dual" in the same way prices and positions are dual [5, §5.4.4]. This suggests that the function α can be interpreted as a penalty function, assigning a measure of "unlikeliness" α(π) to each expected value π of the securities defined above.
Equivalently, α(E_p[φ]) measures the unlikeliness of the distribution p over the outcomes. We can then see that the risk is the greatest expected loss under each distribution, taking into account the penalties assigned by α.

Example 1. A well-studied risk measure is the entropic risk relative to a reference distribution q ∈ ∆_Ω [9]. This is defined on positions r ∈ R by ρ_β(r) := β log E_{ω∼q}[exp(−r · φ(ω)/β)]. The cost function C(r) = ρ_β(−r) associated with this risk exactly corresponds to the logarithmic market scoring rule (LMSR). Its associated convex function α_β over distributions is the scaled relative entropy α_β(p) = β KL(p | q). As discussed in [2, 13], the entropic risk is closely related to exponential utility U_β(w) := −(1/β) exp(−βw). Indeed, ρ_β(r) = −U_β^{−1}(E_{ω∼q}[U_β(r · φ(ω))]), which is just the negative certainty equivalent of the position r, i.e., the amount of cash an agent with utility U_β and belief q would be willing to trade for the uncertain position r. Due to the monotonicity of U_β^{−1}, it follows that maximizing the expected utility E_{ω∼q}[U_β(r · φ(ω))] of holding position r is equivalent to minimizing the entropic risk ρ_β(r).

For technical reasons, in addition to the standard assumptions for convex risk measures, we will also make two weak regularity assumptions. These are similar to properties required of cost functions in the prediction market literature (cf. [1, Theorem 3.2]):
• Expressiveness: ρ is everywhere differentiable, and closure{∇ρ(r) : r ∈ R} = Π.
• Strict risk aversion: the Convexity inequality is strict unless r − r′ = c r$ for some c ∈ R.
As discussed in [1], expressiveness is related to the dual formulation given above; roughly, it says that the agent must take into account every possible expected value of the securities when calculating the risk. Strict risk aversion says that an agent should strictly prefer a mixture of positions, unless of course the difference is outcome-independent.
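Example 1 can be checked numerically. In the sketch below (function names ours) we write the exponential utility with risk tolerance β, i.e. U(w) = −β exp(−w/β), a reparameterization of the text's U_β that makes the certainty-equivalent identity hold with the same β on both sides; cash invariance is verified as well:

```python
import math

def entropic_risk(r, q, beta=1.0):
    """rho_beta(r) = beta * log E_{w~q}[exp(-r_w / beta)] (Arrow-Debreu payoffs)."""
    return beta * math.log(sum(qw * math.exp(-rw / beta) for qw, rw in zip(q, r)))

def U(w, beta=1.0):
    """Exponential utility, written with risk tolerance beta (our convention)."""
    return -beta * math.exp(-w / beta)

def U_inv(u, beta=1.0):
    return -beta * math.log(-u / beta)

q = [0.5, 0.3, 0.2]    # reference belief
r = [1.0, -2.0, 0.5]   # a position
beta = 2.0

rho = entropic_risk(r, q, beta)
# Negative certainty equivalent: -U^{-1}(E_q[U(r . phi(w))]).
ce = U_inv(sum(qw * U(rw, beta) for qw, rw in zip(q, r)), beta)
print(abs(rho - (-ce)) < 1e-9)   # True: rho equals the negative certainty equivalent

# Cash invariance: adding c dollars on every outcome lowers the risk by c.
c = 0.7
shifted = entropic_risk([rw + c for rw in r], q, beta)
print(abs(shifted - (rho - c)) < 1e-9)  # True
```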
Under these assumptions, the representation result of Theorem 1 and a similar result for cost functions [1, Theorem 3.2] coincide, and we are able to show that cost functions and risk measures are exactly the same object; we write ρ_C(r) = C(r) when we think of C as a risk measure. Unfolding the definition of cost and using cash invariance, we have ρ_C(s − dr + cost(dr; s) r$) = ρ_C(s − dr) − cost(dr; s) = C(s − dr) − C(s − dr) + C(s) = ρ_C(s). Thus, we may view a potential-based market maker as a constant-risk agent.

2.2 Trading Dynamics and Aggregation

As described above, we consider traders who approach the market maker sequentially and at random, and select the optimal trade based on their current position, the market state, and the cost function C. As we just observed, we may think of the market maker as a constant-risk agent with ρ_C = C. Let us examine the optimization problem faced by the trader with position r when the current market state is s. This trader will choose a portfolio dr* from the market maker so as to minimise her risk:

dr* ∈ argmin_{dr∈R^k} ρ(r + dr − cost(dr; s) r$) = argmin_{dr∈R^k} ρ(r + dr) + ρ_C(s − dr).  (1)

The equality holds since, by the cash invariance of ρ and the definition of cost, the first objective equals ρ(r + dr) + ρ_C(s − dr) − ρ_C(s), and ρ_C(s) does not depend on dr. Thus, if we think of F(r, s) = ρ(r) + ρ_C(s) as a kind of "social risk", we can define the surplus as simply the net risk taken away by an optimal trade, namely F(r, s) − F(r + dr*, s − dr*). We can now state our central question: if a set of N such traders arrive at random and execute optimal (or perhaps near-optimal) trades with the market maker, will the market state converge to the optimal risk, and if so how fast? As discussed in the introduction, this is precisely the question asked in [2, 13] that we set out to answer. To do so we will draw a close connection to the literature on distributed optimization algorithms for machine learning.
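The trader's problem in eq. (1) can be explored numerically. The sketch below (a toy instance with names of our own; both agents carry entropic risk with β = 1 and hypothetical beliefs) minimizes ρ(r + dr) + ρ_C(s − dr) by plain gradient descent, confirms that trading strictly reduces the social risk, and checks the price-agreement property that Theorem 2(iv) later establishes in general:

```python
import math

def rho(r, q):
    """Entropic risk (beta = 1) over Arrow-Debreu payoffs."""
    return math.log(sum(qw * math.exp(-rw) for qw, rw in zip(q, r)))

def grad_rho(r, q):
    ws = [qw * math.exp(-rw) for qw, rw in zip(q, r)]
    z = sum(ws)
    return [-w / z for w in ws]

q_trader, q_maker = [0.7, 0.3], [0.4, 0.6]   # beliefs (hypothetical values)
r, s = [0.0, 0.0], [0.0, 0.0]                # trader position, market state
dr = [0.0, 0.0]
eta = 0.5                                    # gradient step size
for _ in range(2000):                        # descend on rho(r+dr) + rho_C(s-dr)
    g1 = grad_rho([ri + di for ri, di in zip(r, dr)], q_trader)
    g0 = grad_rho([si - di for si, di in zip(s, dr)], q_maker)
    dr = [di - eta * (a - b) for di, a, b in zip(dr, g1, g0)]

F0 = rho(r, q_trader) + rho(s, q_maker)      # social risk before trading
F1 = (rho([ri + di for ri, di in zip(r, dr)], q_trader)
      + rho([si - di for si, di in zip(s, dr)], q_maker))
print(F1 < F0)                               # True: the trade extracts surplus

# At the optimum the gradients agree: trader and market maker "agree on prices".
p_trader = grad_rho([ri + di for ri, di in zip(r, dr)], q_trader)
p_maker = grad_rho([si - di for si, di in zip(s, dr)], q_maker)
print(max(abs(a - b) for a, b in zip(p_trader, p_maker)) < 1e-6)  # True
```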
Specifically, if we encode the entire state of our system in the positions (r_0 = s, r_1, . . . , r_n) of the market maker and each of the n traders, we may view the optimal trade in eq. (1) as performing a coordinate descent step, by optimizing only with respect to coordinates 0 and i. We build on this connection in Section 4 and leverage a generalization of coordinate descent methods to show the following in Theorem 4: If a set of risk-based traders is sampled at random to sequentially trade in the market, the market state and prices converge to within ϵ of the optimal total risk in O(1/ϵ) rounds. In fact, under mild smoothness assumptions on the cost potential function C, we can improve this rate to O(log(1/ϵ)). We can also relax the optimality of the trader behavior; as long as traders find a trade dr which extracts at least a constant fraction of the surplus, the rate remains intact. With convergence rates in hand, the next natural question might be: to what does the market converge? Abernethy et al. [2] show that when traders minimize expected exponential utility and have exponential family beliefs, the market equilibrium price can be thought of as a weighted average of the parameters of the traders, with the weights being a measure of their risk tolerance. Even though our setting is far more general than exponential utility and exponential families, the framework we develop can also be used to show that their results can be extended to interactions between traders who have what we call "compatible" risks and beliefs. Specifically, for any risk-based trader possessing a risk ρ with dual α, we can think of that trader's "belief" as the least surprising distribution p according to α. This view induces a family of distributions (which happen to be generalized exponential families [11]) that are parameterized by the initial positions of the traders.
Furthermore, the risk tolerance b is given by how sensitive this belief is to small changes of an agent's position. The results of [2] are then a special case of our Theorem 8 for agents whose ρ is the entropic risk (cf. Example 1): If each trader i has risk tolerance b_i and a belief parameterized by θ_i, and the initial market state is θ_0, then the equilibrium state of the market, to which the market converges, is given by θ* = (θ_0 + Σ_i b_i θ_i) / (1 + Σ_i b_i). As the focus of this paper is on convergence, the details for this result are given in Appendix C.

The main insight that drives the above analysis of the interaction between a risk-based trader and a market maker is that each trade minimizes a global objective for the market that is the infimal convolution [6] of the traders' and market maker's risks. In fact, this observation naturally generalizes to trades between three or more agents, and the same convergence analysis applies. In other words, our analysis also holds when bilateral trade with a fixed market maker is replaced by multilateral trade among arbitrarily overlapping subsets of agents. Viewed as a graph with agents as nodes, the standard prediction market framework is represented by the star graph, where the central market maker interacts with traders sequentially and individually. More generally we have what we call a trading network, in which the structure of trades can form arbitrary connected graphs or even hypergraphs. An obvious choice is the complete graph, which can model a decentralized market, and in fact we can even compare the convergence rate of our dynamics between the centralized and decentralized models; see Appendix D.2 and the discussion in §5.

3 General Trading Dynamics

The previous section described the two-agent case of what is more generally known as the optimal risk allocation problem [6], where two or more agents express their preferences for positions via risk measures.
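As a quick numeric illustration of this closed form (all values below are hypothetical, chosen only to exercise the formula):

```python
# Equilibrium-state formula for entropic-risk traders:
# theta* = (theta_0 + sum_i b_i * theta_i) / (1 + sum_i b_i).
theta0 = 0.5                 # initial market state (natural parameter)
b = [2.0, 1.0, 1.0]          # risk tolerances (hypothetical)
theta = [1.0, -1.0, 3.0]     # per-trader belief parameters (hypothetical)

theta_star = (theta0 + sum(bi * ti for bi, ti in zip(b, theta))) / (1 + sum(b))
print(theta_star)  # (0.5 + 2 - 1 + 3) / 5 = 0.9
```

A trader with a large b_i pulls the equilibrium toward its own θ_i, matching the reading of b as risk tolerance.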
This is formalized by considering N agents with risk measures ρ_i : R → R for i ∈ [N] := {1, . . . , N} who are asked to split a position r ∈ R into per-agent positions r_i ∈ R satisfying Σ_i r_i = r so as to minimise the total risk Σ_i ρ_i(r_i). The minimal total risk is given by the infimal convolution ∧_i ρ_i of the individual agent risks; that is,

(∧_i ρ_i)(r) := inf { Σ_i ρ_i(r_i) : Σ_i r_i = r, r_i ∈ R }.  (2)

A key property of the infimal convolution, which will underlie much of our analysis, is that its convex conjugate is the sum of the conjugates of its constituent functions (see e.g. [23] for a proof):

(∧_i ρ_i)* = Σ_{i∈[N]} ρ_i*.  (3)

One can think of ∧_i ρ_i as the "market risk", which captures the risk of the entire market (i.e., as if it were a single risk-based agent) as a function of the net position Σ_i r_i of its constituents. By definition, eq. (2) says that the market is trying to reallocate the risk so as to minimize this net risk. This interpretation is confirmed by eq. (3) when we interpret the duals as penalty functions as above: the penalty of π is the sum of the penalties of the market participants. As alluded to above, we allow our agents to interact round by round by conducting trades, which are simply the exchange of outcome-contingent securities. Since by assumption our position space R is closed under linear combinations, a trade between two agents is simply a position which is added to one agent and subtracted from another. Generalizing from this two-agent interaction, a trade among a set of agents S ⊆ [N] is just a collection of trade vectors, one for each agent, which sum to 0. Formally, let S ⊆ [N] be a subset of agents. A trade on S is then a vector of positions dr ∈ R^N (i.e., a matrix in R^{N×k}) such that Σ_{i∈S} dr_i = 0 ∈ R and dr_i = 0 for all i ∉ S. This last condition specifies that agents not in S do not change their position.
A key quantity in our analysis is a measure of how much the total risk of a collection of traders drops due to trading. Given some subset of traders S, the S-surplus is the function Φ_S : R^N → R defined by Φ_S(r) = Σ_{i∈S} ρ_i(r_i) − (∧_{i∈S} ρ_i)(Σ_{i∈S} r_i), which measures the maximum achievable drop in risk (since ∧_{i∈S} ρ_i is an infimum). In particular, Φ(r) := Φ_[N](r) is the surplus function. The trades that achieve this optimal drop in risk are called efficient: given current state r ∈ R^N, a trade dr ∈ R^N on S ⊆ [N] is efficient if Φ_S(r + dr) = 0. Our following key result shows that efficient trades have remarkable structure: once the state r and subset S are specified, there is a unique efficient trade, up to cash transfers. In other words, the surplus is removed from the position vectors and then redistributed as cash to the traders; the choice of trade is merely in how this redistribution takes place. The fact that the derivatives match has strong intuition from prediction markets: agents must agree on the price.² The proof is in Appendix A.1.

Theorem 2. Let r ∈ R^N and S ⊆ [N] be given.
i. The surplus is always finite: 0 ≤ Φ_S(r) < ∞.
ii. The set of efficient trades on S is nonempty.
iii. Efficient trades are unique up to zero-sum cash transfers: Given efficient trades dr*, dr ∈ R^N on S, we have dr = dr* + (z_1 r$, . . . , z_N r$) for some z ∈ R^N with Σ_i z_i = 0.
iv. Traders agree on "prices": A trade dr on S is efficient if and only if for all i, j ∈ S, ∇ρ_i(r_i + dr_i) = ∇ρ_j(r_j + dr_j).
v. There is a unique "efficient price": If dr is an efficient trade on S, for all i ∈ S we have ∇ρ_i(r_i + dr_i) = −π*_S, where π*_S = argmin_{π∈Π} Σ_{i∈S} α_i(π) − ⟨π, Σ_{i∈S} r_i⟩.

²As intuition for the term "price", consider that the highest price-per-unit agent i would be willing to pay for an infinitesimal quantity of a position dr_i is dr_i · (−∇ρ_i(r_i)), and likewise the lowest price-per-unit to sell. Thus, the entries of −∇ρ_i(r_i) act as the "fair" prices for their corresponding basis positions/securities.
The above properties of efficient trades drive the remainder of our convergence analysis of network dynamics. They also allow us to write a simple closed form for the market price when traders share a common risk profile (Theorem 8). Details are in Appendix C. Beyond our current focus on rates, Theorem 2 has implications for a variety of other economic properties of trade networks. For example, in Appendix B we show that efficient trades correspond to fixed points for more general dynamics, market clearing equilibria, and equilibria of natural bargaining games among the traders. Recall that in the prediction market framework of [13], each round has a single trader, say i > 1, interacting with the market maker, who we will assume has index 1. In the notation just defined this corresponds to choosing S = {1, i}. We now wish to consider richer dynamics where groups of two or more agents trade efficiently each round. To this end we will call a collection S = {S_j ⊆ [N]}_{j=1}^m of groups of traders a trading network and assume there is some fixed distribution D over S with full support. A trade dynamic over S is a process that begins at t = 0 with some initial positions r_0 ∈ R^N for the N traders, and at each round t, draws a random group of traders S_t ∈ S according to D, selects some efficient trade dr_t on S_t, then updates the trader positions using r_{t+1} = r_t + dr_t. For the purposes of proving the convergence of trade dynamics, a crucial property is whether all traders can directly or indirectly affect the others. To capture this we will say a trade network is connected if the hypergraph on [N] with edges given by S is connected; i.e., information can propagate throughout the entire network. Dynamics over classical prediction markets are always connected, since any pair of groups from the network will always contain the market maker.
4 Convergence Analysis of Randomized Subspace Descent

Before briefly reviewing the literature on coordinate descent, let us see why this might be a useful way to think of our dynamics. Recall that we have a set S of subsets of agents, and that in each step, an efficient trade dr is chosen which only modifies the positions of agents in the sampled S ∈ S. Thinking of (r_1, . . . , r_N) as a vector of dimension N · k (recall R = R^k), changing r_t to r_{t+1} = r_t + dr thus only modifies |S| blocks of k entries. Moreover, efficiency ensures that dr minimizes the sum of the risks of agents in S. Hence, ignoring for now the constraint that the sum of the positions must remain constant, the trade dynamic seems to be performing a kind of block coordinate descent on the surplus function Φ.

4.1 Randomized Subspace Descent

Several randomized coordinate descent methods have appeared in the literature recently, with increasing levels of sophistication. While earlier methods focused on updates which only modified disjoint blocks of coordinates [18, 22], more recent methods allow for more general configurations, such as overlapping blocks [17, 16, 20]. In fact, these last three methods are closest to what we study here; the authors consider an objective which decomposes as the sum of convex functions on each coordinate, and study coordinate updates which follow a graph structure, all under the constraint that coordinates sum to 0. Despite the similarity of these methods to our trade dynamics, we require even more general updates, as we allow coordinate i to correspond to arbitrary subsets S_i ∈ S. Instead, we establish a unification of these methods which we call randomized subspace descent (RSD), listed in Algorithm 1. Rather than blocks of coordinates or specific linear constraints, RSD abstracts away these constructs by simply specifying "coordinate subspaces" in which the optimization is to be performed.
Specifically, the algorithm takes a list of projection matrices {Π_i}_{i=1}^m which define the subspaces, and at each step t selects a Π_i at random and tries to optimize the objective under the constraint that it may only move within the image space of Π_i; that is, if the current point is x_t, then x_{t+1} − x_t ∈ im(Π_i).

ALGORITHM 1: Randomized Subspace Descent
Input: smooth convex function F : R^n → R, initial point x_0 ∈ R^n, matrices {Π_i ∈ R^{n×n}}_{i=1}^m, smoothness parameters {L_i}_{i=1}^m, distribution p ∈ ∆_m
for iteration t in {0, 1, 2, · · · } do
  sample i from p
  x_{t+1} ← x_t − (1/L_i) Π_i ∇F(x_t)
end

Before stating our convergence results for Algorithm 1, we will need a notion of smoothness relative to our subspaces. Specifically, we say F is L_i-Π_i-smooth if for each i there is a constant L_i > 0 such that for all y ∈ im(Π_i),

F(x + y) ≤ F(x) + ⟨∇F(x), y⟩ + (L_i/2) ∥y∥₂².  (4)

Finally, let F_min := min_{y∈span{im(Π_i)}_i} F(x_0 + y) be the minimum of F subject to the constraints from the Π_i. Then we have the following result for a constant R(x_0) which increases in: (1) the distance from the point x_0 to the furthest minimizer of F, (2) the Lipschitz constants of F w.r.t. the Π_i, and (3) the connectivity of the hypergraph induced by the projections.

Theorem 3. Let F, {Π_i}_i, {L_i}_i, x_0, and p be given as in Algorithm 1, with the condition that F is L_i-Π_i-smooth for all i. Then E[F(x_t) − F_min] ≤ 2R²(x_0) / t.

The proof is in Appendix D. Additionally, when F is strongly convex, meaning it has a uniform local quadratic lower bound, RSD enjoys faster, linear convergence. Formally, this condition requires F to be µ-strongly convex for some constant µ > 0, that is, for all x, y ∈ dom F we require

F(y) ≥ F(x) + ∇F(x) · (y − x) + (µ/2) ∥y − x∥².  (5)

The statement and details of this stronger result are given in Appendix D.1. Importantly for our setting, these results only track the progress per iteration.
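Algorithm 1 is straightforward to implement. The sketch below (the toy objective and all names are our own) runs RSD on a quadratic with pairwise sum-preserving subspaces spanned by e_i − e_j, mirroring the trade subspaces of §4.2; for these subspaces the update x ← x − (1/L) Π ∇F(x) reduces to adjusting coordinates i and j by opposite amounts:

```python
import random

c = [1.0, 2.0, 3.0]               # per-coordinate curvatures of F
t = [1.0, 2.0, 3.0]               # unconstrained minimizers
pairs = [(0, 1), (0, 2), (1, 2)]  # a connected network of pairwise subspaces

def F(x):
    return sum(ci * (xi - ti) ** 2 for ci, xi, ti in zip(c, x, t))

def grad(x):
    return [2 * ci * (xi - ti) for ci, xi, ti in zip(c, x, t)]

x = [0.0, 0.0, 0.0]               # initial point; sum(x) = 0 is preserved
random.seed(0)
for _ in range(2000):
    i, j = random.choice(pairs)   # sample a subspace (uniform p)
    L = c[i] + c[j]               # smoothness constant along (e_i - e_j)
    g = grad(x)
    # Pi_ij grad F(x) = (e_i - e_j) * (g_i - g_j) / 2
    d = (g[i] - g[j]) / 2
    x[i] -= d / L
    x[j] += d / L

# Constrained minimum of F subject to sum(x) = 0, via a Lagrange multiplier:
# 2*c_i*(x_i - t_i) = lam  =>  x_i = t_i + lam/(2*c_i),  sum(x) = 0.
lam = -2 * sum(t) / sum(1 / ci for ci in c)
Fmin = (lam ** 2 / 4) * sum(1 / ci for ci in c)

print(abs(sum(x)) < 1e-10)        # True: steps never leave the constraint set
print(F(x) - Fmin < 1e-9)         # True: converged to the constrained minimum
```

Because F here is strongly convex on the constraint set, the iterates exhibit the linear convergence promised by the strong-convexity variant of Theorem 3.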
Thus, they apply to more sophisticated update steps than a simple gradient step, as long as they improve the objective by at least as much. For example, if in each step the algorithm computed the exact minimizer x_{t+1} = argmin_{y∈im(Π_i)} F(x_t + y), both theorems would still hold.

4.2 Convergence Rates for Trade Dynamics

To apply Theorem 3 to the convergence of trading dynamics, we let F = Φ and x = (r_1, . . . , r_N) ∈ R^N ≅ R^{Nk} be the joint position of all agents. For each subset S ∈ S of agents, we have a subspace of R^N consisting of all possible trades on S, namely {dr ∈ R^N : dr_i = 0 for i ∉ S, Σ_{i∈S} dr_i = 0}, with corresponding projection matrix Π_S. For the special case of prediction markets with a centralized market maker, we have N − 1 subspaces S = {{1, i} : i ∈ {2, . . . , N}}, and Π_{1,i} projects onto {dr ∈ R^N : dr_i = −dr_1, dr_j = 0 for j ≠ 1, i}. The intuition of coordinate descent is clear now: the subset S of agents seek to minimize the total surplus within the subspace of trades on S, and thus the coordinate descent steps of Algorithm 1 will correspond to roughly efficient trades. We now apply Theorem 3 to show that trade dynamics achieve surplus ϵ > 0 in time O(1/ϵ). Note that we will have to assume the risk measure ρ_i of agent i is L_i-smooth for some L_i > 0. This is a very loose restriction, as our risk measures are all differentiable by the expressiveness condition.

Theorem 4. Let ρ_i be an L_i-smooth risk measure for all i. Then for any connected trade dynamic, we have E[Φ(r_t)] = O(1/t).

Proof. Taking L_S = max_{i∈S} L_i, one can check that F is L_S-Π_S-smooth for all S ∈ S by eq. (4). Since Algorithm 1 has no state aside from x_t, and the proof of Theorem 3 depends only on the drop in F per step, any algorithm selecting the sets S ∈ S with the same distribution and satisfying F(x_{t+1}) ≤ F(x_t − (1/L_S) Π_S ∇F(x_t)) will yield the same convergence rate. As trade dynamics satisfy F(x_{t+1}) = min_{y∈R^{Nk}} F(x_t − Π_S y), this property trivially holds, and so Theorem 3 applies.
If we assume slightly more, that our risk measures have local quadratic lower bounds, then we can obtain linear convergence. Note that this is also a relatively weak assumption, and holds whenever the risk measure has a Hessian with only one zero eigenvalue (for r$) at each point. This is satisfied, for example, by all the variants of entropic risk we discuss in the paper. The proof is in Appendix D.

Theorem 5. Suppose for each i we have a continuous function µ_i : R → R_+ such that for all r, the risk ρ_i is µ_i(r)-strongly convex with respect to r$⊥ in a neighborhood of r; in other words, eq. (5) holds for F = ρ_i, µ = µ_i(r), and all y in a neighborhood of r such that (r − y) · r$ = 0. Then for all connected trade dynamics, E[Φ(r_t)] = O(2^{−t}).

Table 1: Algebraic connectivities for common graphs.
Graph     |V(G)|   |E(G)|       λ₂(G)
K_n       n        n(n − 1)/2   n
P_n       n        n − 1        2(1 − cos(π/n))
C_n       n        n            2(1 − cos(2π/n))
K_{ℓ,k}   ℓ + k    ℓk           k
B_k       2^k      k·2^{k−1}    2

Figure 1: Average (in bold) of 30 market simulations for the complete and star graphs. The empirical gap in iteration complexity is just under 2 (cf. Fig. 3).

Amazingly, the convergence rates in Theorem 4 and Theorem 5 hold for all connected trade dynamics. The constant hidden in the O(·) does depend on the structure of the network, but can be explicitly determined in terms of its algebraic connectivity. This is discussed further in Appendix D.2. The intuition behind the convergence rates given here is that agents in whichever group S is chosen always trade to fully minimize their surplus. Because the proofs (in Appendix D) of these methods merely track the reduction in surplus per trading round, the bounds apply as long as the update is at least as good as a gradient step. In fact, we can say even more: if only an ϵ fraction of the surplus is taken at each round, the rates are still O(1/(ϵt)) and O((1 − ϵµ)^t), respectively.
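The factor of two between the centralized (star) and decentralized (complete-graph) markets can be reproduced from the Table 1 quantities. The sketch below (helper names ours) computes λ₂ by power iteration on cI − L after deflating the all-ones eigenvector, for n = 6 agents:

```python
import math
import random

def laplacian(edges, n):
    """Graph Laplacian L = D - A from an edge list, as nested lists."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def lambda2(edges, n, iters=1000):
    """Algebraic connectivity: smallest nonzero Laplacian eigenvalue,
    via power iteration on c*I - L restricted to the complement of 1."""
    L = laplacian(edges, n)
    c = 2.0 * max(L[i][i] for i in range(n)) + 1.0  # > max eigenvalue of L
    random.seed(1)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        m = sum(v) / n
        v = [vi - m for vi in v]                    # project out the 0-eigenvector
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * li for vi, li in zip(v, Lv))    # Rayleigh quotient

n = 6
star = [(0, i) for i in range(1, n)]                # centralized market maker
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]

l2_star, l2_complete = lambda2(star, n), lambda2(complete, n)
print(round(l2_star, 6), round(l2_complete, 6))     # 1.0 6.0, as in Table 1

# Rate proxy lambda2(G)/|E(G)|: the complete graph is twice the star.
ratio = (l2_complete / len(complete)) / (l2_star / len(star))
print(abs(ratio - 2.0) < 1e-6)                      # True
```

This matches the "convergence takes twice as long" comparison discussed in §5, independently of n: λ₂/|E| is 2/(n−1) for K_n and 1/(n−1) for the star.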
This suggests that our convergence results are robust with respect to the model of rationality one employs; if agents have bounded rationality and can only compute positions which approximately minimize their risk, the rates remain intact (up to constant factors) as long as the inefficiency is bounded.

5 Conclusions & Future Work

Using the tools of convex analysis to analyse the behavior of markets allows us to make precise, quantitative statements about their global behavior. In this paper we have seen that, with appropriate assumptions on trader behaviour, we can determine the rate at which the market will converge to equilibrium prices, thereby closing some open questions raised in [2] and [13]. In addition, our newly proposed trading network model allows us to consider a variety of prediction market structures. As discussed in §3, the usual prediction market setting is centralized, and corresponds to a star graph with the market maker at the center. A decentralized market where any trader can trade with any other corresponds to a complete graph over the traders. We can also model more exotic networks, such as two or more market maker-based prediction markets with a risk-minimizing arbitrageur, or small-world networks where agents only trade with a limited number of "neighbours". Furthermore, because these arrangements are all instances of trade networks, we can immediately compare the convergence rates across various constraints on how traders may interact. For example, in Appendix D.2, we show that a market that trades through a centralized market maker incurs a quantifiable efficiency overhead: convergence takes twice as long (see Figure 1). More generally, we show that the rates scale as λ₂(G)/|E(G)|, allowing us to make similar comparisons between arbitrary networks; see Table 1.
This raises an interesting question for future work: given some constraints, such as a bound on how many traders a single agent can trade with, the total number of edges, etc., which network optimizes the convergence rate of the market? These new models and the analysis of their convergence may provide new principles for building and analyzing distributed systems of heterogeneous and self-interested learning agents.

Acknowledgments

We would like to thank Matus Telgarsky for his generous help, as well as the lively discussions with, and helpful comments of, Sébastien Lahaie, Miro Dudík, Jenn Wortman Vaughan, Yiling Chen, David Parkes, and Nageeb Ali. MDR is supported by an ARC Discovery Early Career Research Award (DE130101605). Part of this work was developed while he was visiting Microsoft Research.

References

[1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2):12, 2013.
[2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 395–412. ACM, 2014.
[3] Jacob D. Abernethy and Rafael M. Frongillo. A collaborative mechanism for crowdsourcing prediction problems. In Advances in Neural Information Processing Systems, pages 2600–2608, 2011.
[4] Aharon Ben-Tal and Marc Teboulle. An old-new concept of convex risk measures: The optimized certainty equivalent. Mathematical Finance, 17(3):449–476, 2007.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] Christian Burgert and Ludger Rüschendorf. On the optimal risk allocation problem. Statistics & Decisions, 24(1):153–171, 2006.
[7] Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via no-regret learning.
In Proceedings of the 11th ACM conference on Electronic commerce, pages 189–198. ACM, 2010. [8] Nair Maria Maia de Abreu. Old and new results on algebraic connectivity of graphs. Linear algebra and its applications, 423(1):53–73, 2007. [9] Hans Föllmer and Alexander Schied. Stochastic Finance: An Introduction in Discrete Time, volume 27 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, 2nd edition, 2004. [10] Rafael M Frongillo, Nicolás Della Penna, and Mark D Reid. Interpreting prediction markets: a stochastic approach. In Proceedings of Neural Information Processing Systems, 2012. [11] P.D. Grünwald and A.P. Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367–1433, 2004. [12] JB Hiriart-Urruty and C Lemaréchal. Convex Analysis and Minimization Algorithms II, volume 306 of Grundlehren der mathematischen Wissenschaften. 1993. [13] Jinli Hu and Amos Storkey. Multi-period trading prediction markets with connections to machine learning. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014. [14] Jono Millin, Krzysztof Geras, and Amos J Storkey. Isoelastic agents and wealth updates in machine learning markets. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1815–1822, 2012. [15] Bojan Mohar. The Laplacian spectrum of graphs. In Graph Theory, Combinatorics, and Applications, 1991. [16] I Necoara, Y Nesterov, and F Glineur. A random coordinate descent method on large-scale optimization problems with linear constraints. Technical Report, 2014. [17] Ion Necoara. Random coordinate descent algorithms for multi-agent convex optimization over networks. Automatic Control, IEEE Transactions on, 58(8):2001–2012, 2013. [18] Yurii Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012. [19] Mindika Premachandra and Mark Reid.
Aggregating predictions via sequential mini-trading. In Asian Conference on Machine Learning, pages 373–387, 2013. [20] Sashank Reddi, Ahmed Hefny, Carlton Downey, Avinava Dubey, and Suvrit Sra. Large-scale randomized coordinate descent methods with non-separable linear constraints. arXiv preprint arXiv:1409.2617, 2014. [21] Mark D Reid, Rafael M Frongillo, Robert C Williamson, and Nishant Mehta. Generalized mixability via entropic duality. In Proc. of Conference on Learning Theory (COLT), 2015. [22] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014. [23] R.T. Rockafellar. Convex analysis. Princeton University Press, 1997. [24] Amos J Storkey. Machine learning markets. In International Conference on Artificial Intelligence and Statistics, pages 716–724, 2011.
2015
160
5,659
SGD Algorithms based on Incomplete U-statistics: Large-Scale Minimization of Empirical Risk Guillaume Papa, Stéphan Clémençon LTCI, CNRS, Télécom ParisTech Université Paris-Saclay, 75013 Paris, France first.last@telecom-paristech.fr Aurélien Bellet Magnet Team, INRIA Lille - Nord Europe 59650 Villeneuve d’Ascq, France aurelien.bellet@inria.fr Abstract In many learning problems, ranging from clustering to ranking through metric learning, empirical estimates of the risk functional consist of an average over tuples (e.g., pairs or triplets) of observations, rather than over individual observations. In this paper, we focus on how to best implement a stochastic approximation approach to solve such risk minimization problems. We argue that in the large-scale setting, gradient estimates should be obtained by sampling tuples of data points with replacement (incomplete U-statistics) instead of sampling data points without replacement (complete U-statistics based on subsamples). We develop a theoretical framework accounting for the substantial impact of this strategy on the generalization ability of the prediction model returned by the Stochastic Gradient Descent (SGD) algorithm. It reveals that the method we promote achieves a much better trade-off between statistical accuracy and computational cost. Beyond the rate bound analysis, experiments on AUC maximization and metric learning provide strong empirical evidence of the superiority of the proposed approach. 1 Introduction In many machine learning problems, the statistical risk functional is an expectation over d-tuples (d ≥ 2) of observations, rather than over individual points. This is the case in supervised metric learning [3], where one seeks to optimize a distance function such that it assigns smaller values to pairs of points with the same label than to those with different labels.
Other popular examples include bipartite ranking (see [27] for instance), where the goal is to maximize the number of concordant pairs (i.e. AUC maximization), and more generally multi-partite ranking (cf [12]), as well as pairwise clustering (see [7]). Given a data sample, the most natural empirical risk estimate (which is known to have minimal variance among all unbiased estimates) is obtained by averaging over all tuples of observations and thus takes the form of a U-statistic (an average of dependent variables generalizing the means, see [19]). The Empirical Risk Minimization (ERM) principle, one of the main paradigms of statistical learning theory, has been extended to the case where the empirical risk of a prediction rule is a U-statistic [5], using concentration properties of U-processes (i.e. collections of U-statistics). The computation of the empirical risk is however numerically infeasible in large and even moderate scale situations due to the exploding number of possible tuples. In practice, the minimization of such empirical risk functionals is generally performed by means of stochastic optimization techniques such as Stochastic Gradient Descent (SGD), where at each iteration only a small number of randomly selected terms are used to compute an estimate of the gradient (see [27, 24, 16, 26] for instance). A drawback of the original SGD learning method, introduced in the case where empirical risk functionals are computed by summing over independent observations (sample mean statistics), is its slow convergence due to the variance of the gradient estimates, see [15]. This has recently motivated the development of a wide variety of SGD variants implementing a variance reduction method in order to improve convergence. Variance reduction is achieved by occasionally computing the exact gradient (see SAG [18], SVRG [15], MISO [20] and SAGA [9] among others) or by means of nonuniform sampling schemes (see [21, 28] for instance).
However, such ideas can hardly be applied to the case under study here: due to the overwhelming number of possible tuples, computing even a single exact gradient or maintaining a probability distribution over the set of all tuples is computationally infeasible in general. In this paper, we leverage the specific structure and statistical properties of the empirical risk functional when it is of the form of a U-statistic to design an efficient implementation of the SGD learning method. We study the performance of the following sampling scheme for the gradient estimation step involved in the SGD algorithm: drawing with replacement a set of tuples directly (in order to build an incomplete U-statistic gradient estimate), rather than drawing a subset of observations without replacement and forming all possible tuples based on these (the corresponding gradient estimate is then a complete U-statistic based on a subsample). While [6] has investigated maximal deviations between U-processes and their incomplete approximations, the performance analysis carried out in the present paper is inspired by [4] and involves both the optimization error of the SGD algorithm and the estimation error induced by the statistical finite-sample setting. We first provide non-asymptotic rate bounds and asymptotic convergence rates for the SGD procedure applied to the empirical minimization of a U-statistic. These results shed light on the impact of the conditional variance of the gradient estimators on the speed of convergence of SGD. We then derive a novel generalization bound which depends on the variance of the sampling strategies. This bound establishes the indisputable superiority of the incomplete U-statistic estimation approach over the complete variant in terms of the trade-off between statistical accuracy and computational cost.
Our experimental results on AUC maximization and metric learning tasks on large-scale datasets are consistent with our theoretical findings and show that the use of the proposed sampling strategy can provide spectacular performance gains in practice. We conclude this paper with promising lines for future research, in particular regarding the trade-offs involved in a possible implementation of nonuniform sampling strategies to further improve convergence. The rest of this paper is organized as follows. In Section 2, we briefly review the theory of U-statistics and their approximations, together with elementary notions of gradient-based stochastic approximation. Section 3 provides a detailed description of the SGD implementation we propose, along with a performance analysis conditional upon the data sample. In Section 4, based on these results, we derive a generalization bound based on a decomposition into optimization and estimation errors. Section 5 presents our numerical experiments, and we conclude in Section 6. Technical proofs are sketched in the Appendix, and further details can be found in the Supplementary Material. 2 Background and Problem Setup Here and throughout, the indicator function of any event $\mathcal{E}$ is denoted by $\mathbb{I}\{\mathcal{E}\}$ and the variance of any square integrable r.v. $Z$ by $\sigma^2(Z)$. 2.1 U-statistics: Definition and Examples Generalized U-statistics are extensions of standard sample mean statistics, as defined below. Definition 1. Let $K \geq 1$ and $(d_1, \ldots, d_K) \in \mathbb{N}^{*K}$. Let $X^{(k)}_{\{1,\ldots,n_k\}} = (X^{(k)}_1, \ldots, X^{(k)}_{n_k})$, $1 \leq k \leq K$, be $K$ independent samples of sizes $n_k \geq d_k$ and composed of i.i.d. random variables taking their values in some measurable space $\mathcal{X}_k$ with distribution $F_k(dx)$ respectively. Let $H : \mathcal{X}_1^{d_1} \times \cdots \times \mathcal{X}_K^{d_K} \to \mathbb{R}$ be a measurable function, square integrable with respect to the probability distribution $\mu = F_1^{\otimes d_1} \otimes \cdots \otimes F_K^{\otimes d_K}$. Assume in addition (without loss of generality) that $H(x^{(1)}, \ldots, x^{(K)})$ is symmetric within each block of arguments $x^{(k)}$ (valued in $\mathcal{X}_k^{d_k}$), $1 \leq k \leq K$. The generalized (or $K$-sample) U-statistic of degrees $(d_1, \ldots, d_K)$ with kernel $H$ is then defined as
$$U_{\mathbf{n}}(H) = \frac{1}{\prod_{k=1}^K \binom{n_k}{d_k}} \sum_{I_1} \cdots \sum_{I_K} H\big(X^{(1)}_{I_1}; X^{(2)}_{I_2}; \ldots; X^{(K)}_{I_K}\big), \qquad (1)$$
where $\mathbf{n} = (n_1, \ldots, n_K)$ and the symbol $\sum_{I_1} \cdots \sum_{I_K}$ refers to summation over all elements of $\Lambda$, the set of the $\prod_{k=1}^K \binom{n_k}{d_k}$ index vectors $(I_1, \ldots, I_K)$, $I_k$ being a set of $d_k$ indexes $1 \leq i_1 < \ldots < i_{d_k} \leq n_k$ and $X^{(k)}_{I_k} = (X^{(k)}_{i_1}, \ldots, X^{(k)}_{i_{d_k}})$ for $1 \leq k \leq K$. In the above definition, standard mean statistics correspond to the case where $K = 1 = d_1$. More generally, when $K = 1$, $U_{\mathbf{n}}(H)$ is an average over all $d_1$-tuples of observations. Finally, $K \geq 2$ corresponds to the multi-sample situation where a $d_k$-tuple is used for each sample $k \in \{1, \ldots, K\}$. The key property of the statistic (1) is that it has minimum variance among all unbiased estimates of
$$\mu(H) = \mathbb{E}\big[H\big(X^{(1)}_1, \ldots, X^{(1)}_{d_1}, \ldots, X^{(K)}_1, \ldots, X^{(K)}_{d_K}\big)\big] = \mathbb{E}[U_{\mathbf{n}}(H)].$$
One may refer to [19] for further results on the theory of U-statistics. In machine learning, generalized U-statistics are used as performance criteria in various problems, such as those listed below. Clustering. Given a distance $D : \mathcal{X}_1 \times \mathcal{X}_1 \to \mathbb{R}_+$, the quality of a partition $\mathcal{P}$ of $\mathcal{X}_1$ with respect to the clustering of an i.i.d. sample $X_1, \ldots, X_n$ drawn from $F_1(dx)$ can be assessed through the within-cluster point scatter:
$$\widehat{W}_n(\mathcal{P}) = \frac{2}{n(n-1)} \sum_{i<j} D(X_i, X_j) \cdot \sum_{\mathcal{C} \in \mathcal{P}} \mathbb{I}\big\{(X_i, X_j) \in \mathcal{C}^2\big\}. \qquad (2)$$
It is a one-sample U-statistic of degree 2 with kernel $H_{\mathcal{P}}(x, x') = D(x, x') \cdot \sum_{\mathcal{C} \in \mathcal{P}} \mathbb{I}\{(x, x') \in \mathcal{C}^2\}$. Multi-partite ranking. Suppose that $K$ independent i.i.d. samples $X^{(k)}_1, \ldots, X^{(k)}_{n_k}$ with $n_k \geq 1$ and $1 \leq k \leq K$ on $\mathcal{X}_1 \subset \mathbb{R}^p$ have been observed.
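Returning briefly to the clustering example, the within-cluster point scatter (2) can be computed by direct pair enumeration. The sketch below is our own toy illustration (function name, data, and distance are assumptions, not from the paper):

```python
from itertools import combinations

def point_scatter(points, labels, dist):
    """Within-cluster point scatter: a one-sample U-statistic of degree 2.
    labels[i] gives the cluster of points[i]; dist is the distance D."""
    n = len(points)
    total = sum(dist(points[i], points[j])
                for i, j in combinations(range(n), 2)
                if labels[i] == labels[j])  # indicator: pair lies in one cluster
    return 2.0 * total / (n * (n - 1))

# Toy data: two well-separated clusters on the real line.
xs = [0.0, 1.0, 10.0, 11.0]
cl = [0, 0, 1, 1]
print(point_scatter(xs, cl, lambda a, b: abs(a - b)))  # 2 * (1 + 1) / (4 * 3) = 1/3
```

With n = 4 there are 6 pairs but only the two within-cluster pairs contribute, illustrating why the number of terms in such criteria grows combinatorially with n.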
The accuracy of a scoring function $s : \mathcal{X}_1 \to \mathbb{R}$ with respect to the $K$-partite ranking is empirically estimated by the rate of concordant $K$-tuples (sometimes referred to as the Volume Under the ROC Surface):
$$\widehat{\mathrm{VUS}}_{\mathbf{n}}(s) = \frac{1}{n_1 \times \cdots \times n_K} \sum_{i_1=1}^{n_1} \cdots \sum_{i_K=1}^{n_K} \mathbb{I}\big\{s(X^{(1)}_{i_1}) < \cdots < s(X^{(K)}_{i_K})\big\}.$$
The quantity above is a $K$-sample U-statistic with degrees $d_1 = \ldots = d_K = 1$ and kernel $\bar{H}_s(x_1, \ldots, x_K) = \mathbb{I}\{s(x_1) < \cdots < s(x_K)\}$. Metric learning. Based on an i.i.d. sample of labeled data $(X_1, Y_1), \ldots, (X_n, Y_n)$ on $\mathcal{X}_1 = \mathbb{R}^p \times \{1, \ldots, J\}$, the empirical pairwise classification performance of a distance $D : \mathcal{X}_1 \times \mathcal{X}_1 \to \mathbb{R}_+$ can be evaluated by:
$$\widehat{R}_n(D) = \frac{6}{n(n-1)(n-2)} \sum_{i<j<k} \mathbb{I}\big\{D(X_i, X_j) < D(X_i, X_k),\; Y_i = Y_j \neq Y_k\big\}, \qquad (3)$$
which is a one-sample U-statistic of degree three with kernel $\tilde{H}_D((x, y), (x', y'), (x'', y'')) = \mathbb{I}\{D(x, x') < D(x, x''),\; y = y' \neq y''\}$. 2.2 Gradient-based minimization of U-statistics Let $\Theta \subset \mathbb{R}^q$ with $q \geq 1$ be some parameter space and consider the risk minimization problem $\min_{\theta \in \Theta} L(\theta)$ with
$$L(\theta) = \mathbb{E}\big[H\big(X^{(1)}_1, \ldots, X^{(1)}_{d_1}, \ldots, X^{(K)}_1, \ldots, X^{(K)}_{d_K}; \theta\big)\big] = \mu(H(\cdot\,; \theta)),$$
where $H : \prod_{k=1}^K \mathcal{X}_k^{d_k} \times \Theta \to \mathbb{R}$ is a convex loss function and the $(X^{(k)}_1, \ldots, X^{(k)}_{d_k})$'s, $1 \leq k \leq K$, are $K$ independent random variables with distribution $F_k^{\otimes d_k}(dx)$ on $\mathcal{X}_k^{d_k}$ respectively, so that $H$ is square integrable for any $\theta \in \Theta$. Based on $K$ independent i.i.d. samples $X^{(k)}_1, \ldots, X^{(k)}_{n_k}$ with $1 \leq k \leq K$, the empirical version of the risk function is $\theta \in \Theta \mapsto \widehat{L}_{\mathbf{n}}(\theta) = U_{\mathbf{n}}(H(\cdot\,; \theta))$. We denote by $\nabla_\theta$ the gradient operator w.r.t. $\theta$. Many learning algorithms are based on gradient descent, following the iterations $\theta_{t+1} = \theta_t - \gamma_t \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta_t)$, with an arbitrary initial value $\theta_0 \in \Theta$ and a learning rate (step size) $\gamma_t \geq 0$ such that $\sum_{t=1}^{+\infty} \gamma_t = +\infty$ and $\sum_{t=1}^{+\infty} \gamma_t^2 < +\infty$. Here we place ourselves in a large-scale setting, where the sample sizes $n_1, \ldots, n_K$ of the training datasets are such that computing the empirical gradient
$$\widehat{g}_{\mathbf{n}}(\theta) \stackrel{\mathrm{def}}{=} \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta) = \frac{1}{\prod_{k=1}^K \binom{n_k}{d_k}} \sum_{I_1} \cdots \sum_{I_K} \nabla_\theta H\big(X^{(1)}_{I_1}; X^{(2)}_{I_2}; \ldots; X^{(K)}_{I_K}; \theta\big) \qquad (4)$$
at each iteration is intractable due to the number $\#\Lambda = \prod_{k=1}^K \binom{n_k}{d_k}$ of terms to be averaged. Instead, stochastic approximation suggests the use of an unbiased estimate of (4) that is cheap to compute. 3 SGD Implementation based on Incomplete U-Statistics A possible approach consists in replacing (4) by a (complete) U-statistic computed from subsamples of reduced sizes $n'_k \ll n_k$, say $\{(X'^{(k)}_1, \ldots, X'^{(k)}_{n'_k}) : k = 1, \ldots, K\}$, drawn uniformly at random without replacement among the original samples, leading to the following gradient estimator:
$$\tilde{g}_{\mathbf{n}'}(\theta) = \frac{1}{\prod_{k=1}^K \binom{n'_k}{d_k}} \sum_{I_1} \cdots \sum_{I_K} \nabla_\theta H\big(X'^{(1)}_{I_1}; X'^{(2)}_{I_2}; \ldots; X'^{(K)}_{I_K}; \theta\big), \qquad (5)$$
where $\sum_{I_k}$ refers to summation over all $\binom{n'_k}{d_k}$ subsets $X'^{(k)}_{I_k} = (X'^{(k)}_{i_1}, \ldots, X'^{(k)}_{i_{d_k}})$ related to a set $I_k$ of $d_k$ indexes $1 \leq i_1 < \ldots < i_{d_k} \leq n'_k$, and $\mathbf{n}' = (n'_1, \ldots, n'_K)$. Although this approach is very natural, one can obtain a better estimate for the same computational cost, as shall be seen below. 3.1 Monte-Carlo Estimation of the Empirical Gradient From a practical perspective, the alternative strategy we propose is of disarming simplicity. It is based on a Monte-Carlo sampling scheme that consists in drawing independently with replacement among the set of index vectors $\Lambda$, yielding a gradient estimator in the form of a so-called incomplete U-statistic (see [19]):
$$\bar{g}_B(\theta) = \frac{1}{B} \sum_{(I_1, \ldots, I_K) \in \mathcal{D}_B} \nabla_\theta H\big(X^{(1)}_{I_1}, \ldots, X^{(K)}_{I_K}; \theta\big), \qquad (6)$$
where $\mathcal{D}_B$ is built by sampling $B$ times with replacement in the set $\Lambda$. We point out that the conditional expectation of (6) given the $K$ observed data samples is equal to $\widehat{g}_{\mathbf{n}}(\theta)$. The parameter $B$, corresponding to the number of terms to be averaged, controls the computational complexity of the SGD implementation. Observe incidentally that an incomplete U-statistic is not a U-statistic in general.
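For the one-sample, degree-2 case, the two estimators (5) and (6) can be sketched as follows. This is our own illustration, not the authors' code: the toy kernel, problem sizes, and helper names are arbitrary assumptions. Passing the full index set Λ to the averaging helper recovers the complete gradient (4) exactly, while a small random batch D_B gives the cheap estimate used at each SGD step; note that with B = C(n',2) = 45 terms, both (5) and (6) below have the same per-iteration cost.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Toy pairwise kernel: H(x_i, x_j; theta) = (theta.(x_i - x_j))^2, so
# grad_theta H = 2 (theta.(x_i - x_j)) (x_i - x_j).  (An arbitrary choice.)
def grad_H(xi, xj, theta):
    d = xi - xj
    return 2.0 * np.dot(theta, d) * d

def avg_grad(pairs, X, theta):
    """Average of grad_H over a list of index pairs (shared by (4), (5), (6))."""
    return sum(grad_H(X[i], X[j], theta) for i, j in pairs) / len(pairs)

n, p = 30, 5
X = rng.standard_normal((n, p))
theta = rng.standard_normal(p)

Lambda = list(combinations(range(n), 2))   # all C(30, 2) = 435 pairs
g_full = avg_grad(Lambda, X, theta)        # complete empirical gradient (4)

# Incomplete U-statistic estimate (6): B pairs drawn with replacement from Lambda.
B = 45
DB = [Lambda[k] for k in rng.integers(len(Lambda), size=B)]
g_inc = avg_grad(DB, X, theta)

# Complete U-statistic on a subsample (5): n' = 10 points, all C(10, 2) = 45 pairs.
sub = rng.choice(n, size=10, replace=False)
g_sub = avg_grad(list(combinations(sub, 2)), X, theta)
```

An SGD step then simply uses `theta -= gamma_t * g_inc`, redrawing `DB` at every iteration.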
Hence, as an unbiased estimator of the gradient of the statistical risk $L(\theta)$, (6) is of course less accurate than the full empirical gradient (4) (i.e., it has larger variance), but this slight increase in variance leads to a large reduction in computational cost. In our subsequent analysis, we will show that for the same computational cost (i.e., taking $B = \prod_{k=1}^K \binom{n'_k}{d_k}$), implementing SGD with (6) rather than (5) leads to much more accurate results. We will rely on the fact that (6) has smaller variance w.r.t. $\nabla L(\theta)$ (except in the case where $K = 1 = d_1$), as shown in the proposition below. Proposition 1. Set $B = \prod_{k=1}^K \binom{n'_k}{d_k}$. There exists a universal constant $c > 0$ such that, for all $\mathbf{n} \in \mathbb{N}^{*K}$,
$$\sigma^2\big(\tilde{g}_{\mathbf{n}'}(\theta)\big) \leq c \cdot \frac{\sigma_\theta^2}{\sum_{k=1}^K n'_k} \qquad \text{and} \qquad \sigma^2\big(\bar{g}_B(\theta)\big) \leq c \cdot \frac{\sigma_\theta^2}{\prod_{k=1}^K \binom{n'_k}{d_k}},$$
with $\sigma_\theta^2 = \sigma^2\big(\nabla_\theta H(X^{(1)}_1, \ldots, X^{(K)}_{d_K}; \theta)\big)$. Explicit but lengthy expressions of the variances are given in [19]. Remark 1. The results of this paper can be extended to other sampling schemes to approximate (4), such as Bernoulli sampling or sampling without replacement in $\Lambda$, following the proposal of [14]. For clarity, we focus on sampling with replacement, which is computationally more efficient. 3.2 A Conditional Performance Analysis As a first go, we investigate and compare the performance of the SGD methods described above conditionally upon the observed data samples. For simplicity, we denote by $\mathbb{P}_{\mathbf{n}}(\cdot)$ the conditional probability measure given the data and by $\mathbb{E}_{\mathbf{n}}[\cdot]$ the $\mathbb{P}_{\mathbf{n}}$-expectation. Given a matrix $M$, we denote by $M^T$ the transpose of $M$ and by $\|M\|_{HS} := \sqrt{\mathrm{Tr}(M M^T)}$ its Hilbert-Schmidt norm. We assume that the loss function $H$ is $l$-smooth in $\theta$, i.e. its gradient is $l$-Lipschitz, with $l > 0$. We also restrict ourselves to the case where $\widehat{L}_{\mathbf{n}}$ is $\alpha$-strongly convex for some deterministic constant $\alpha$:
$$\widehat{L}_{\mathbf{n}}(\theta_1) - \widehat{L}_{\mathbf{n}}(\theta_2) \leq \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta_1)^T (\theta_1 - \theta_2) - \frac{\alpha}{2} \|\theta_1 - \theta_2\|^2, \qquad (7)$$
and we denote by $\theta^*_{\mathbf{n}}$ its unique minimizer.
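To see what Proposition 1 buys at equal cost, consider the one-sample pairwise case ($K = 1$, $d_1 = 2$) with a subsample of size $n' = 20$; the arithmetic below is our own worked illustration of the two bounds, not a computation from the paper:

$$\text{complete, on a subsample:}\quad \sigma^2\big(\tilde{g}_{n'}(\theta)\big) \leq c\,\frac{\sigma_\theta^2}{n'} = c\,\frac{\sigma_\theta^2}{20}, \qquad \text{incomplete, same cost } B = \binom{20}{2} = 190:\quad \sigma^2\big(\bar{g}_B(\theta)\big) \leq c\,\frac{\sigma_\theta^2}{190},$$

i.e. for the same number of kernel-gradient evaluations per iteration, the incomplete estimator's variance bound is nearly an order of magnitude smaller, and the gap widens further for higher degrees $d_k$.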
We point out that the present analysis can be extended to the smooth but non-strongly convex case, see [1]. A classical argument based on convex analysis and stochastic optimization (see [1, 22] for instance) shows precisely how the conditional variance of the gradient estimator impacts the empirical performance of the solution produced by the corresponding SGD method, and thus strongly advocates the use of the SGD variant proposed in Section 3.1. Proposition 2. Consider the recursion $\theta_{t+1} = \theta_t - \gamma_t g(\theta_t)$ where $\mathbb{E}_{\mathbf{n}}[g(\theta_t) \mid \theta_t] = \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta_t)$, and denote by $\sigma^2_{\mathbf{n}}(g(\theta))$ the conditional variance of $g(\theta)$. For step size $\gamma_t = \gamma_1 / t^\beta$, the following holds. 1. If $\frac{1}{2} < \beta < 1$, then:
$$\mathbb{E}_{\mathbf{n}}\big[\widehat{L}_{\mathbf{n}}(\theta_{t+1}) - \widehat{L}_{\mathbf{n}}(\theta^*_{\mathbf{n}})\big] \leq \frac{\sigma^2_{\mathbf{n}}(g(\theta^*_{\mathbf{n}}))}{t^\beta}\,\underbrace{\gamma_1 l\, 2^{\beta-1}\Big(\frac{1}{2\alpha} + \frac{l\gamma_1^2}{2\beta - 1}\Big)}_{C_1} + o\Big(\frac{1}{t^\beta}\Big).$$
2. If $\beta = 1$ and $\gamma_1 > \frac{1}{2\alpha}$, then:
$$\mathbb{E}_{\mathbf{n}}\big[\widehat{L}_{\mathbf{n}}(\theta_{t+1}) - \widehat{L}_{\mathbf{n}}(\theta^*_{\mathbf{n}})\big] \leq \frac{\sigma^2_{\mathbf{n}}(g(\theta^*_{\mathbf{n}}))}{t+1}\,\underbrace{2\alpha\gamma_1 l \exp(2\alpha l \gamma_1^2)\,\frac{\gamma_1^2}{2\alpha\gamma_1 - 1}}_{C_2} + o\Big(\frac{1}{t}\Big).$$
Proposition 2 illustrates the well-known fact that the convergence rate of SGD is dominated by the variance term, and thus one needs to focus on reducing this term to improve its performance. We are also interested in the asymptotic behavior of the algorithm (when $t \to +\infty$), under the following assumptions: A1. The function $\widehat{L}_{\mathbf{n}}(\theta)$ is twice differentiable on a neighborhood of $\theta^*_{\mathbf{n}}$. A2. The function $\nabla \widehat{L}_{\mathbf{n}}(\theta)$ is bounded. Let us set $\Gamma = \nabla^2 \widehat{L}_{\mathbf{n}}(\theta^*_{\mathbf{n}})$. We establish the following result (refer to the Supplementary Material for a detailed proof). Theorem 1. Let the covariance matrix $\Sigma^*_{\mathbf{n}}$ be the unique solution of the Lyapunov equation:
$$\Gamma \Sigma^*_{\mathbf{n}} + \Sigma^*_{\mathbf{n}} \Gamma - \eta \Sigma^*_{\mathbf{n}} = \Sigma_{\mathbf{n}}(\theta^*_{\mathbf{n}}), \qquad (8)$$
where $\Sigma_{\mathbf{n}}(\theta^*_{\mathbf{n}}) = \mathbb{E}_{\mathbf{n}}[g(\theta^*_{\mathbf{n}})\, g(\theta^*_{\mathbf{n}})^T]$ and $\eta = \gamma_1 > \frac{1}{2\alpha}$ if $\beta = 1$, $\eta = 0$ otherwise. Then, under Assumptions A1-A2, we have:
$$\frac{1}{\gamma_t}\big(\widehat{L}_{\mathbf{n}}(\theta_t) - \widehat{L}_{\mathbf{n}}(\theta^*)\big) \Rightarrow \frac{1}{2}\, U^T (\Sigma^*_{\mathbf{n}})^{1/2}\, \Gamma\, (\Sigma^*_{\mathbf{n}})^{1/2}\, U, \qquad U \sim \mathcal{N}(0, I_q).$$
In addition, in the case $\eta = 0$, we have:
$$\big\|(\Sigma^*_{\mathbf{n}} \Gamma)^{1/2}\big\|_{HS}^2 = \mathbb{E}\big[U^T (\Sigma^*_{\mathbf{n}})^{1/2}\, \Gamma\, (\Sigma^*_{\mathbf{n}})^{1/2}\, U\big] = \frac{1}{2}\, \sigma^2_{\mathbf{n}}(g(\theta^*_{\mathbf{n}})). \qquad (9)$$
Theorem 1 reveals that the conditional variance term again plays a key role in the asymptotic performance of the algorithm.
In particular, it is the dominating term in the precision of the solution. In the next section, we build on these results to derive a generalization bound in the spirit of [4] which explicitly depends on the true variance of the gradient estimator. 4 Generalization Bounds Let $\theta^* = \arg\min_{\theta \in \Theta} L(\theta)$ be the minimizer of the true risk. As proposed in [4], the mean excess risk can be decomposed as follows: for all $\mathbf{n} \in \mathbb{N}^{*K}$,
$$\mathbb{E}[L(\theta_t) - L(\theta^*)] \leq \underbrace{2\,\mathbb{E}\Big[\sup_{\theta \in \Theta} \big|\widehat{L}_{\mathbf{n}}(\theta) - L(\theta)\big|\Big]}_{E_1} + \underbrace{\mathbb{E}\big[\widehat{L}_{\mathbf{n}}(\theta_t) - \widehat{L}_{\mathbf{n}}(\theta^*_{\mathbf{n}})\big]}_{E_2}. \qquad (10)$$
Beyond the optimization error (the second term on the right hand side of (10)), the analysis of the generalization ability of the learning method previously described requires to control the estimation error (the first term). This can be achieved by means of the result stated below, which extends Corollary 3 in [5] to the $K$-sample situation. Proposition 3. Let $\mathcal{H}$ be a collection of bounded symmetric kernels on $\prod_{k=1}^K \mathcal{X}_k^{d_k}$ such that $M_{\mathcal{H}} = \sup_{(H, x) \in \mathcal{H} \times \mathcal{X}} |H(x)| < +\infty$. Suppose also that $\mathcal{H}$ is a VC major class of functions with finite Vapnik-Chervonenkis dimension $V < +\infty$. Let $\kappa = \min\{\lfloor n_1/d_1 \rfloor, \ldots, \lfloor n_K/d_K \rfloor\}$. Then, for any $\mathbf{n} \in \mathbb{N}^{*K}$,
$$\mathbb{E}\Big[\sup_{H \in \mathcal{H}} \big|U_{\mathbf{n}}(H) - \mu(H)\big|\Big] \leq M_{\mathcal{H}} \Big\{2\sqrt{\frac{2V \log(1+\kappa)}{\kappa}}\Big\}. \qquad (11)$$
We are now ready to derive our main result. Theorem 2. Let $\theta_t$ be the sequence generated by SGD using the incomplete U-statistic gradient estimator (6) with $B = \prod_{k=1}^K \binom{n'_k}{d_k}$ terms for some $n'_1, \ldots, n'_K$. Assume that $\{H(\cdot\,; \theta) : \theta \in \Theta\}$ is a VC major class of finite VC dimension $V$ s.t.
$$M_\Theta = \sup_{\theta \in \Theta,\ (x^{(1)}, \ldots, x^{(K)}) \in \prod_{k=1}^K \mathcal{X}_k^{d_k}} \big|H(x^{(1)}, \ldots, x^{(K)}; \theta)\big| < +\infty, \qquad (12)$$
and $N_\Theta = \sup_{\theta \in \Theta} \sigma_\theta^2 < +\infty$. If the step size satisfies the conditions of Proposition 2, we have, for all $\mathbf{n} \in \mathbb{N}^{*K}$:
$$\mathbb{E}\big[|L(\theta_t) - L(\theta^*)|\big] \leq \frac{C N_\Theta}{B t^\beta} + 2 M_\Theta \Big\{2\sqrt{\frac{2V \log(1+\kappa)}{\kappa}}\Big\}.$$
For any $\delta \in (0, 1)$, we also have, with probability at least $1 - \delta$, for all $\mathbf{n} \in \mathbb{N}^{*K}$:
$$|L(\theta_t) - L(\theta^*)| \leq \Big(\frac{C N_\Theta}{B t^\beta} + \sqrt{\frac{D_\beta \log(2/\delta)}{t^\beta}}\Big) + 2 M_\Theta \Big\{2\sqrt{\frac{2V \log(1+\kappa)}{\kappa}} + \sqrt{\frac{\log(4/\delta)}{\kappa}}\Big\}, \qquad (13)$$
for some constants $C$ and $D_\beta$ depending on the parameters $l, \alpha, \gamma_1, a_1$.
The generalization bound provided by Theorem 2 shows the advantage of using an incomplete U-statistic (6) as the gradient estimator. In particular, we can obtain results of the same form as Theorem 2 for the complete U-statistic estimator (5), but $B = \prod_{k=1}^K \binom{n'_k}{d_k}$ is then replaced by $\sum_{k=1}^K n'_k$ (following Proposition 1), leading to greatly damaged bounds. Using an incomplete U-statistic, we thus achieve better performance on the test set while reducing the number of iterations (and therefore the number of gradient computations) required to converge to an accurate solution. To the best of our knowledge, this is the first result of this type for empirical minimization of U-statistics. In the next section, we provide experiments showing that these gains are very significant in practice. 5 Numerical Experiments In this section, we provide numerical experiments to compare the incomplete and complete U-statistic gradient estimators (5) and (6) in SGD when they rely on the same number of terms $B$. The datasets we use are available online.1 In all experiments, we randomly split the data into 80% training set and 20% test set and sample 100K pairs from the test set to estimate the test performance. We used a step size of the form $\gamma_t = \gamma_1/t$, and the results below are with respect to the number of SGD iterations. Computational time comparisons can be found in the supplementary material. AUC Optimization We address the problem of learning a binary classifier by optimizing the Area Under the Curve, which corresponds to the VUS criterion of Section 2.1 when $K = 2$. Given a sequence of i.i.d. observations $Z_i = (X_i, Y_i)$ where $X_i \in \mathbb{R}^p$ and $Y_i \in \{-1, 1\}$, we denote by $\mathcal{X}^+ = \{X_i : Y_i = 1\}$, $\mathcal{X}^- = \{X_i : Y_i = -1\}$ and $N = |\mathcal{X}^+|\,|\mathcal{X}^-|$.
As done in [27, 13], we take a linear scoring rule $s_\theta(x) = \theta^T x$ where $\theta \in \mathbb{R}^p$ is the parameter to learn, and use the logistic loss as a smooth convex function upper bounding the Heaviside function, leading to the following ERM problem:
$$\min_{\theta \in \mathbb{R}^p} \frac{1}{N} \sum_{X_i^+ \in \mathcal{X}^+} \sum_{X_j^- \in \mathcal{X}^-} \log\Big(1 + \exp\big(s_\theta(X_j^-) - s_\theta(X_i^+)\big)\Big).$$
(Footnote 1: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/) [Figure 1: Average over 50 runs of the risk estimate vs. the number of iterations (solid lines) +/- their standard deviation (dashed lines). Panels: (a) Covtype, batch size 9, $\gamma_1 = 1$; (b) Covtype, batch size 400, $\gamma_1 = 1$; (c) Ijcnn1, batch size 25, $\gamma_1 = 2$; (d) Ijcnn1, batch size 100, $\gamma_1 = 5$.] We use two datasets: IJCNN1 (∼200K examples, 22 features) and covtype (∼600K examples, 54 features). We try different values for the initial step size $\gamma_1$ and the batch size $B$. Some results, averaged over 50 runs of SGD, are displayed in Figure 1. As predicted by our theoretical findings, we found that the incomplete U-statistic estimator always outperforms its complete variant. The performance gap between the two strategies can be small (for instance when $B$ is very large or $\gamma_1$ is unnecessarily small), but for values of the parameters that are relevant in practical scenarios (i.e., $B$ reasonably small and $\gamma_1$ ensuring a significant decrease in the objective function), the difference can be substantial. We also observe a smaller variance between SGD runs with the incomplete version. Metric Learning We now turn to a metric learning formulation, where we are given a sample of $N$ i.i.d. observations $Z_i = (X_i, Y_i)$ where $X_i \in \mathbb{R}^p$ and $Y_i \in \{1, \ldots, c\}$. Following the existing literature [2], we focus on (pseudo) distances of the form $D_M(x, x') = (x - x')^T M (x - x')$ where $M$ is a $p \times p$ symmetric positive semi-definite matrix. We again use the logistic loss to obtain a convex and smooth surrogate for (3). The ERM problem is as follows:
$$\min_M \frac{6}{N(N-1)(N-2)} \sum_{i<j<k} \mathbb{I}\{Y_i = Y_j \neq Y_k\} \log\Big(1 + \exp\big(D_M(X_i, X_j) - D_M(X_i, X_k)\big)\Big).$$
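The triplet objective just stated can be evaluated directly on a toy example. The sketch below is our own illustration (function name and data are assumptions, not the authors' code); at $M = 0$ every admissible triplet contributes exactly $\log 2$, which gives a handy sanity check.

```python
from itertools import combinations

import numpy as np

def triplet_logistic_risk(M, X, y):
    """Logistic surrogate of (3) for a Mahalanobis metric M.
    A triplet (i < j < k) contributes only when y_i == y_j != y_k."""
    n = len(X)
    total = 0.0
    for i, j, k in combinations(range(n), 3):
        if y[i] == y[j] != y[k]:
            dij = (X[i] - X[j]) @ M @ (X[i] - X[j])
            dik = (X[i] - X[k]) @ M @ (X[i] - X[k])
            total += np.log1p(np.exp(dij - dik))  # log(1 + exp(D_M(i,j) - D_M(i,k)))
    return 6.0 * total / (n * (n - 1) * (n - 2))

# Toy data: two tight same-label pairs, far apart; 2 admissible triplets.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = [0, 0, 1, 1]
print(triplet_logistic_risk(np.zeros((2, 2)), X, y))  # 0.5 * log(2) ≈ 0.3466
```

The incomplete-U-statistic SGD step would sample B triplets with replacement and average the corresponding gradients w.r.t. M instead of enumerating all of them.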
We use the binary classification dataset SUSY (5M examples, 18 features). Figure 2 shows that the performance gap between the two strategies is much larger on this problem. This is consistent with the theory: one can see from Proposition 1 that the variance gap between the incomplete and the complete approximations is much wider for a one-sample U-statistic of degree 3 (metric learning) than for a two-sample U-statistic of degree 1 (AUC optimization). [Figure 2: Average over 50 runs of the test error vs. the number of iterations (solid lines) +/- their standard deviation (dashed lines). Panels: (a) SUSY, batch size 120, $\gamma_1 = 0.5$; (b) SUSY, batch size 455, $\gamma_1 = 1$.] 6 Conclusion and Perspectives In this paper, we have studied a specific implementation of the SGD algorithm when the natural empirical estimates of the objective function are of the form of generalized U-statistics. This situation covers a wide variety of statistical learning problems such as multi-partite ranking, pairwise clustering and metric learning. The gradient estimator we propose in this context is based on an incomplete U-statistic obtained by sampling tuples with replacement. Our main result is a thorough analysis of the generalization ability of the predictive rules produced by this algorithm involving both the optimization and the estimation error in the spirit of [4]. This analysis shows that the SGD variant we propose far surpasses a more naive implementation (of the same computational cost) based on subsampling the data points without replacement. Furthermore, we have shown that these performance gains are very significant in practice when dealing with large-scale datasets. In future work, we plan to investigate how one may extend the nonuniform sampling strategies proposed in [8, 21, 28] to our setting in order to further improve convergence. This is a challenging goal since we cannot hope to maintain a distribution over the set of all possible tuples of data points.
A tractable solution could involve approximating the distribution in order to achieve a good trade-off between statistical performance and computational/memory costs. Appendix - Sketch of Technical Proofs Note that the detailed proofs can be found in the Supplementary Material. Sketch of Proof of Proposition 2 Set $a_t = \mathbb{E}_{\mathbf{n}}[\|\theta_{t+1} - \theta^*_{\mathbf{n}}\|^2]$ and, following [1], observe that the sequence $(a_t)$ satisfies the recursion $a_{t+1} \leq a_t\big(1 - 2\alpha\gamma_t(1 - \gamma_t l)\big) + 2\gamma_t^2\, \sigma^2_{\mathbf{n}}(\theta^*_{\mathbf{n}})$. A standard stochastic approximation argument yields an upper bound for $a_t$ (cf [17, 1]), which, combined with $\widehat{L}_{\mathbf{n}}(\theta) - \widehat{L}_{\mathbf{n}}(\theta^*_{\mathbf{n}}) \leq \frac{l}{2}\|\theta - \theta^*_{\mathbf{n}}\|^2$ (see [23] for instance), gives the desired result. Sketch of Proof of Theorem 1 The proof relies on stochastic approximation arguments (see [10, 25, 11]). We first show that $\sqrt{1/\gamma_t}\,(\theta_t - \theta^*_{\mathbf{n}}) \Rightarrow \mathcal{N}(0, \Sigma^*_{\mathbf{n}})$. Then, we apply the second-order delta method to derive the asymptotic behavior of the objective function. Eq. (9) is obtained by standard algebra. Sketch of Proof of Theorem 2 Combining (10), (12) and Proposition 2 leads to the first part of the result. To derive sharp probability bounds, we apply the union bound on $E_1 + E_2$. To deal with $E_1$, we use concentration results for U-processes, while we adapt the proof of Proposition 2 to control $E_2$: the r.v.'s are recentered to make martingale increments appear, and finally we apply the Azuma and Hoeffding inequalities. Acknowledgements This work was supported by the chair Machine Learning for Big Data of Télécom ParisTech, and was conducted when A. Bellet was affiliated with Télécom ParisTech. References [1] F. R. Bach and E. Moulines. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning. In NIPS, 2011. [2] A. Bellet, A. Habrard, and M. Sebban. A Survey on Metric Learning for Feature Vectors and Structured Data. Technical report, arXiv:1306.6709, June 2013. [3] A. Bellet, A. Habrard, and M. Sebban. Metric Learning. Morgan & Claypool Publishers, 2015. [4] L. Bottou and O. Bousquet.
The Tradeoffs of Large Scale Learning. In NIPS, 2007. [5] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. Ann. Statist., 36, 2008. [6] S. Clémençon, S. Robbiano, and J. Tressou. Maximal deviations of incomplete U-processes with applications to Empirical Risk Sampling. In SDM, 2013. [7] S. Clémençon. On U-processes and clustering performance. In NIPS, pages 37–45, 2011. [8] S. Clémençon, P. Bertail, and E. Chautru. Scaling up M-estimation via sampling designs: The Horvitz-Thompson stochastic gradient descent. In IEEE Big Data, 2014. [9] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. In NIPS, 2014. [10] B. Delyon. Stochastic Approximation with Decreasing Gain: Convergence and Asymptotic Theory, 2000. [11] G. Fort. Central limit theorems for stochastic approximation with controlled Markov Chain. EsaimPS, 2014. [12] J. Fürnkranz, E. Hüllermeier, and S. Vanderlooy. Binary Decomposition Methods for Multipartite Ranking. In ECML/PKDD, pages 359–374, 2009. [13] A. Herschtal and B. Raskutti. Optimising area under the ROC curve using gradient descent. In ICML, page 49, 2004. [14] S. Janson. The asymptotic distributions of Incomplete U-statistics. Z. Wahrsch. verw. Gebiete, 66:495–505, 1984. [15] R. Johnson and T. Zhang. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. In NIPS, pages 315–323, 2013. [16] P. Kar, B. Sriperumbudur, P. Jain, and H. Karnick. On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions. In ICML, 2013. [17] H. J. Kushner and G. Yin. Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media, 2003. [18] N. Le Roux, M. W. Schmidt, and F. Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. In NIPS, 2012. [19] A. J. Lee.
U-Statistics: Theory and Practice. 1990. [20] J. Mairal. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Technical report, arXiv:1402.4419, 2014. [21] D. Needell, R. Ward, and N. Srebro. Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm. In NIPS, pages 1017–1025, 2014. [22] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust Stochastic Approximation Approach to Stochastic Programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009. [23] Y. Nesterov. Introductory lectures on convex optimization, volume 87. Springer, 2004. [24] M. Norouzi, D. J. Fleet, and R. Salakhutdinov. Hamming Distance Metric Learning. In NIPS, pages 1070–1078, 2012. [25] M. Pelletier. Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing. Ann. Appl. Prob., 1998. [26] Q. Qian, R. Jin, J. Yi, L. Zhang, and S. Zhu. Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD). Machine Learning, 99(3):353–372, 2015. [27] P. Zhao, S. Hoi, R. Jin, and T. Yang. Online AUC Maximization. In ICML, pages 233–240, 2011. [28] P. Zhao and T. Zhang. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. In ICML, 2015.
Multi-Layer Feature Reduction for Tree Structured Group Lasso via Hierarchical Projection

Jie Wang1, Jieping Ye1,2
1Computational Medicine and Bioinformatics
2Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor, MI 48109
{jwangumi, jpye}@umich.edu

Abstract

Tree structured group Lasso (TGL) is a powerful technique for uncovering tree structured sparsity over the features, where each node encodes a group of features. It has been applied successfully in many real-world applications. However, with extremely large feature dimensions, solving TGL remains a significant challenge due to its highly complicated regularizer. In this paper, we propose a novel Multi-Layer Feature reduction method (MLFre) to quickly identify the inactive nodes (the groups of features with zero coefficients in the solution) hierarchically in a top-down fashion; these nodes are guaranteed to be irrelevant to the response. Thus, we can remove the detected nodes from the optimization without sacrificing accuracy. The major challenge in developing such testing rules is the overlap between parent and children nodes. By a novel hierarchical projection algorithm, MLFre is able to test the nodes independently of any of their ancestor nodes. Moreover, we can integrate MLFre, which has a low computational cost, with any existing solver. Experiments on both synthetic and real data sets demonstrate that the speedup gained by MLFre can be orders of magnitude.

1 Introduction

Tree structured group Lasso (TGL) [13, 30] is a powerful regression technique for uncovering hierarchical sparse patterns among the features. The key to TGL, i.e., the tree guided regularization, is based on a pre-defined tree structure and the group Lasso penalty [29], where each node represents a group of features.
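Concretely, the tree guided penalty sums a weighted $\ell_2$ norm over every node of the tree. The following minimal sketch is our own illustration (the tiny four-feature tree, its unit weights, and the function name are made up for exposition, not taken from the paper):

```python
import numpy as np

def tree_group_penalty(beta, nodes):
    """Sum of w_G * ||beta_G||_2 over all tree nodes, each node given
    as a (weight, index_set) pair."""
    return sum(w * np.linalg.norm(beta[list(g)]) for w, g in nodes)

# Four features; root {0,1,2,3} with children {0,1} and {2,3}, unit weights.
nodes = [(1.0, {0, 1, 2, 3}), (1.0, {0, 1}), (1.0, {2, 3})]
beta = np.array([3.0, 4.0, 0.0, 0.0])
penalty = tree_group_penalty(beta, nodes)  # 5 (root) + 5 ({0,1}) + 0 ({2,3}) = 10
```

Because the group {2, 3} is exactly zero, the whole subtree below it could be discarded up front; identifying such nodes before solving is precisely what the screening method developed in this paper does.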
In recent years, TGL has achieved great success in many real-world applications such as brain image analysis [10, 18], gene data analysis [14], natural language processing [27, 28], and face recognition [12]. Many algorithms have been proposed to improve the efficiency of TGL [1, 6, 11, 7, 16]. However, the application of TGL to large-scale problems remains a challenge due to its highly complicated regularizer. As an emerging and promising technique for scaling to large problems, screening has received much attention in the past few years. Screening aims to identify the zero coefficients in the sparse solution by simple testing rules so that the corresponding features can be removed from the optimization. Thus, the size of the data matrix can be significantly reduced, leading to substantial savings in computational cost and memory usage. Typical examples include TLFre [25], FLAMS [22], EDPP [24], Sasvi [17], DOME [26], SAFE [8], and strong rules [21]. We note that strong rules are inexact, in the sense that features with nonzero coefficients may be mistakenly discarded, while the others are exact. Another important direction of screening is to detect the non-support vectors for support vector machines (SVM) and least absolute deviation (LAD) [23, 19]. Empirical studies have shown that the speedup gained by screening methods can be several orders of magnitude. Moreover, the exact screening methods improve efficiency without sacrificing optimality. However, to the best of our knowledge, existing screening methods are only applicable to sparse models with simple structures such as Lasso, group Lasso, and sparse group Lasso. In this paper, we propose a novel Multi-Layer Feature reduction method, called MLFre, for TGL.
MLFre is exact, and it tests the nodes hierarchically from the top level to the bottom level to quickly identify the inactive nodes (the groups of features with zero coefficients in the solution vector), which are guaranteed to be absent from the sparse representation. To the best of our knowledge, MLFre is the first screening method applicable to TGL with its highly complicated tree guided regularization. The major technical challenges in developing MLFre for TGL are twofold. First, most existing exact screening methods are based on evaluating the norm of the subgradients of the sparsity-inducing regularizer with respect to the variables or groups of variables of interest. However, for TGL, we only have access to a mixture of the subgradients, due to the overlaps between parent and children nodes. Therefore, our first major technical contribution is a novel hierarchical projection algorithm that exactly and efficiently recovers the subgradient with respect to every node from this mixture (Sections 3 and 4). The second challenge is that most existing exact screening methods need to estimate an upper bound involving the dual optimum, which for TGL turns out to be a complicated nonconvex optimization problem. Our second major technical contribution is to show that this highly nontrivial nonconvex optimization problem admits closed form solutions (Section 5). Experiments on both synthetic and real data sets demonstrate that the speedup gained by MLFre can be orders of magnitude (Section 6). Please see the supplements for detailed proofs of the results in the main text.

Notation: Let $\|\cdot\|$ be the $\ell_2$ norm, $[p] = \{1, \ldots, p\}$ for a positive integer $p$, $G \subseteq [p]$, and $\bar{G} = [p] \setminus G$. For $u \in \mathbb{R}^p$, let $u_i$ be its $i$th component. For $G \subseteq [p]$, we denote by $u_G = [u]_G$ the vector $v$ with $v_i = u_i$ if $i \in G$ and $v_i = 0$ otherwise, and let $\mathcal{H}_G = \{u \in \mathbb{R}^p : u_{\bar{G}} = 0\}$. If $G_1, G_2 \subseteq [p]$ and we write $G_1 \subset G_2$, we emphasize that $G_2 \setminus G_1 \neq \emptyset$.
For a set $C$, let $\operatorname{int} C$, $\operatorname{ri} C$, $\operatorname{bd} C$, and $\operatorname{rbd} C$ be its interior, relative interior, boundary, and relative boundary, respectively [5]. If $C$ is closed and convex, the projection operator is $P_C(z) := \arg\min_{u \in C} \|z - u\|$, and its indicator function $I_C(\cdot)$ is $0$ on $C$ and $\infty$ elsewhere. Let $\Gamma_0(\mathbb{R}^p)$ be the class of proper closed convex functions on $\mathbb{R}^p$. For $f \in \Gamma_0(\mathbb{R}^p)$, let $\partial f$ be its subdifferential and $\operatorname{dom} f := \{z : f(z) < \infty\}$. We denote $\gamma_+ = \max(\gamma, 0)$.

2 Basics

We briefly review some basics of TGL. First, we introduce the so-called index tree.

Definition 1. [16] For an index tree $T$ of depth $d$, we denote the node(s) of depth $i$ by $T_i = \{G_1^i, \ldots, G_{n_i}^i\}$, where $n_0 = 1$, $G_1^0 = [p]$, $G_j^i \subset [p]$, and $n_i \geq 1$ for all $i \in [d]$. We assume that (i) $G_{j_1}^i \cap G_{j_2}^i = \emptyset$ for all $i \in [d]$ and $j_1 \neq j_2$ (different nodes of the same depth do not overlap), and (ii) if $G_j^i$ is a parent node of $G_\ell^{i+1}$, then $G_\ell^{i+1} \subset G_j^i$.

When the tree structure is available (see the supplement for an example), the TGL problem is

$\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|^2 + \lambda \sum_{i=0}^{d} \sum_{j=1}^{n_i} w_j^i \|\beta_{G_j^i}\|$,  (TGL)

where $y \in \mathbb{R}^N$ is the response vector, $X \in \mathbb{R}^{N \times p}$ is the data matrix, $\beta_{G_j^i}$ and $w_j^i$ are the coefficient vector and positive weight corresponding to node $G_j^i$, respectively, and $\lambda > 0$ is the regularization parameter. We derive the Lagrangian dual problem of TGL as follows.

Theorem 2. For the TGL problem, let $\varphi(\beta) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} w_j^i \|\beta_{G_j^i}\|$. The following hold:
(i) Let $\varphi_j^i(\beta) = \|\beta_{G_j^i}\|$ and $B_j^i = \{\zeta \in \mathcal{H}_{G_j^i} : \|\zeta\| \leq w_j^i\}$. We can write $\partial\varphi(0)$ as

$\partial\varphi(0) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} w_j^i\, \partial\varphi_j^i(0) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} B_j^i$.  (1)

(ii) Let $F = \{\theta : X^T\theta \in \partial\varphi(0)\}$. The Lagrangian dual of TGL is

$\sup_{\theta} \big\{ \tfrac{1}{2}\|y\|^2 - \tfrac{\lambda^2}{2}\big\|\tfrac{y}{\lambda} - \theta\big\|^2 : \theta \in F \big\}$.  (2)

(iii) Let $\beta^*(\lambda)$ and $\theta^*(\lambda)$ be the optimal solutions of problems (TGL) and (2), respectively. Then,

$y = X\beta^*(\lambda) + \lambda\theta^*(\lambda)$,  (3)
$X^T\theta^*(\lambda) \in \sum_{i=0}^{d} \sum_{j=1}^{n_i} w_j^i\, \partial\varphi_j^i(\beta^*(\lambda))$.  (4)

The dual problem of TGL in (2) is equivalent to a projection problem, i.e., $\theta^*(\lambda) = P_F(y/\lambda)$.
This geometric property plays a fundamentally important role in developing MLFre (see Section 5).

3 Testing Dual Feasibility via Hierarchical Projection

Although the dual problem in (2) has nice geometric properties, it is challenging to determine the feasibility of a given $\theta$ due to the complex dual feasible set $F$. An alternative approach is to test whether $X^T\theta = P_{\partial\varphi(0)}(X^T\theta)$. Although $\partial\varphi(0)$ is very complicated, we show that $P_{\partial\varphi(0)}(\cdot)$ admits a closed form solution, by hierarchically splitting $P_{\partial\varphi(0)}(\cdot)$ into a sum of projection operators onto a collection of simpler sets. We first introduce some notation. For an index tree $T$, let

$A_j^i = \sum \big\{ B_k^t : G_k^t \subseteq G_j^i \big\}$, for all $i \in 0 \cup [d]$, $j \in [n_i]$,  (5)
$C_j^i = \sum \big\{ B_k^t : G_k^t \subset G_j^i \big\}$, for all $i \in 0 \cup [d]$, $j \in [n_i]$.  (6)

For a node $G_j^i$, the set $A_j^i$ is the sum of the $B_k^t$ corresponding to all of its descendant nodes and itself, while $C_j^i$ is the same sum excluding $B_j^i$ itself. Therefore, by the definitions of $A_j^i$, $B_j^i$, and $C_j^i$, we have

$\partial\varphi(0) = A_1^0$; $\quad A_j^i = B_j^i + C_j^i$ for every non-leaf node $G_j^i$; $\quad A_j^i = B_j^i$ for every leaf node $G_j^i$,  (7)

which implies that $P_{\partial\varphi(0)}(\cdot) = P_{A_1^0}(\cdot) = P_{B_1^0 + C_1^0}(\cdot)$. This motivates the first pillar of this paper, Lemma 3, which splits $P_{B_1^0 + C_1^0}(\cdot)$ into the sum of two projections onto $B_1^0$ and $C_1^0$, respectively.

Lemma 3. Let $G \subseteq [p]$, let $B = \{u \in \mathcal{H}_G : \|u\| \leq \gamma\}$ with $\gamma > 0$, let $C \subseteq \mathcal{H}_G$ be a nonempty closed convex set, and let $z$ be an arbitrary point in $\mathcal{H}_G$. Then, the following hold:
(i) [2] $P_B(z) = \min\{1, \gamma/\|z\|\}\, z$ if $z \neq 0$; otherwise, $P_B(z) = 0$.
(ii) $I_{B+C}(z) = I_B(z - P_C(z))$, i.e., $P_C(z) \in \arg\min_{u \in C} I_B(z - u)$.
(iii) $P_{B+C}(z) = P_C(z) + P_B(z - P_C(z))$.

By part (iii) of Lemma 3, we can split $P_{A_1^0}(X^T\theta)$ as

$P_{A_1^0}(X^T\theta) = P_{C_1^0}(X^T\theta) + P_{B_1^0}\big(X^T\theta - P_{C_1^0}(X^T\theta)\big)$.  (8)

As $P_{B_1^0}(\cdot)$ admits a closed form solution by part (i) of Lemma 3, we can compute $P_{A_1^0}(X^T\theta)$ once we have computed $P_{C_1^0}(X^T\theta)$. By Eqs. (5) and (6), for a non-leaf node $G_j^i$ we note that

$C_j^i = \sum_{k \in I_c(G_j^i)} A_k^{i+1}$, where $I_c(G_j^i) = \{k : G_k^{i+1} \subset G_j^i\}$.
(9)

Inspired by (9), we have the following result.

Lemma 4. Let $\{G_\ell \subset [p]\}_\ell$ be a set of nonoverlapping index sets, let $\{C_\ell \subseteq \mathcal{H}_{G_\ell}\}_\ell$ be a set of nonempty closed convex sets, and let $C = \sum_\ell C_\ell$. Then, $P_C(z) = \sum_\ell P_{C_\ell}(z_{G_\ell})$ for $z \in \mathbb{R}^p$.

Remark 1. In Lemma 4, if all $C_\ell$ are balls centered at $0$, then $P_C(z)$ admits a closed form solution.

By Lemma 4 and Eq. (9), we can further split $P_{C_1^0}(X^T\theta)$ in Eq. (8) as

$P_{C_1^0}(X^T\theta) = \sum_{k \in I_c(G_1^0)} P_{A_k^1}\big([X^T\theta]_{G_k^1}\big)$, where $I_c(G_1^0) = \{k : G_k^1 \subset G_1^0\}$.  (10)

Consider the right-hand side of Eq. (10). If $G_k^1$ is a leaf node, Eq. (7) implies that $A_k^1 = B_k^1$, and thus $P_{A_k^1}(\cdot)$ admits a closed form solution by part (i) of Lemma 3. Otherwise, we continue to split $P_{A_k^1}(\cdot)$ by Lemmas 3 and 4. This procedure always terminates once we reach the leaf nodes [see the last equality in Eq. (7)]. Therefore, by repeated application of Lemmas 3 and 4, the following algorithm computes the closed form solution of $P_{A_1^0}(\cdot)$.

Algorithm 1 Hierarchical Projection: $P_{A_1^0}(\cdot)$.
Input: $z \in \mathbb{R}^p$, the index tree $T$ as in Definition 1, and positive weights $w_j^i$ for all nodes $G_j^i$ in $T$.
Output: $u^0 = P_{A_1^0}(z)$, and $v^i$ for all $i \in 0 \cup [d]$.
1: Set $u^i \leftarrow 0 \in \mathbb{R}^p$ for all $i \in 0 \cup [d+1]$, and $v^i \leftarrow 0 \in \mathbb{R}^p$ for all $i \in 0 \cup [d]$.
2: for $i = d$ down to $0$ do  /* hierarchical projection */
3:   for $j = 1$ to $n_i$ do
4:     $v^i_{G_j^i} = P_{B_j^i}\big(z_{G_j^i} - u^{i+1}_{G_j^i}\big)$,  (11)
       $u^i_{G_j^i} \leftarrow u^{i+1}_{G_j^i} + v^i_{G_j^i}$.  (12)
5:   end for
6: end for

The time complexity of Algorithm 1 is similar to that of its proximal operator [16], i.e., $O\big(\sum_{i=0}^{d} \sum_{j=1}^{n_i} |G_j^i|\big)$, where $|G_j^i|$ is the number of features contained in node $G_j^i$. As $\sum_{j=1}^{n_i} |G_j^i| \leq p$ by Definition 1, the time complexity of Algorithm 1 is $O(pd)$, and thus $O(p \log p)$ for a balanced tree, where $d = O(\log p)$. The next result shows that the $u^0$ returned by Algorithm 1 is indeed the projection of $z$ onto $A_1^0$; in fact, we have the following more general results.

Theorem 5. For Algorithm 1, the following hold:
(i) $u^i_{G_j^i} = P_{A_j^i}(z_{G_j^i})$ for all $i \in 0 \cup [d]$, $j \in [n_i]$.
(ii) $u^{i+1}_{G_j^i} = P_{C_j^i}(z_{G_j^i})$ for every non-leaf node $G_j^i$.

4 MLFre Inspired by the KKT Conditions and Hierarchical Projection

In this section, we motivate MLFre via the KKT condition in Eq. (4) and the hierarchical projection in Algorithm 1. Note that for any node $G_j^i$, we have

$w_j^i\, \partial\varphi_j^i(\beta^*(\lambda)) = \begin{cases} \{\zeta \in \mathcal{H}_{G_j^i} : \|\zeta\| \leq w_j^i\}, & \text{if } [\beta^*(\lambda)]_{G_j^i} = 0, \\ w_j^i\, [\beta^*(\lambda)]_{G_j^i} / \|[\beta^*(\lambda)]_{G_j^i}\|, & \text{otherwise.} \end{cases}$  (13)

Moreover, the KKT condition in Eq. (4) implies that there exist $\{\xi_j^i \in w_j^i\, \partial\varphi_j^i(\beta^*(\lambda)) : i \in 0 \cup [d],\ j \in [n_i]\}$ such that

$X^T\theta^*(\lambda) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} \xi_j^i$.  (14)

Thus, if $\|\xi_j^i\| < w_j^i$, we can see that $[\beta^*(\lambda)]_{G_j^i} = 0$. However, we do not have direct access to $\xi_j^i$ even if $\theta^*(\lambda)$ is known, because $X^T\theta^*(\lambda)$ is a mixture (sum) of all the $\xi_j^i$, as shown in Eq. (14). Indeed, Algorithm 1 turns out to be much more useful than merely testing the feasibility of a given $\theta$: it is able to split all $\xi_j^i \in w_j^i\, \partial\varphi_j^i(\beta^*(\lambda))$ out of $X^T\theta^*(\lambda)$. This serves as a cornerstone in developing MLFre; Theorem 6 rigorously establishes this property of Algorithm 1.

Theorem 6. Let $v^i$, $i \in 0 \cup [d]$, be the output of Algorithm 1 with input $X^T\theta^*(\lambda)$, and let $\{\xi_j^i : i \in 0 \cup [d],\ j \in [n_i]\}$ be a set of vectors satisfying Eq. (14). Then, the following hold:
(i) If $[\beta^*(\lambda)]_{G_j^i} = 0$ and $[\beta^*(\lambda)]_{G_r^l} \neq 0$ for all $G_r^l \supset G_j^i$, then $P_{A_j^i}\big([X^T\theta^*(\lambda)]_{G_j^i}\big) = \sum_{\{(k,t):\, G_k^t \subseteq G_j^i\}} \xi_k^t$.
(ii) If $G_j^i$ is a non-leaf node and $[\beta^*(\lambda)]_{G_j^i} \neq 0$, then $P_{C_j^i}\big([X^T\theta^*(\lambda)]_{G_j^i}\big) = \sum_{\{(k,t):\, G_k^t \subset G_j^i\}} \xi_k^t$.
(iii) $v^i_{G_j^i} \in w_j^i\, \partial\varphi_j^i(\beta^*(\lambda))$ for all $i \in 0 \cup [d]$, $j \in [n_i]$.

Combining Eq. (13) and part (iii) of Theorem 6, we can see that

$\|v^i_{G_j^i}\| < w_j^i \Rightarrow [\beta^*(\lambda)]_{G_j^i} = 0$.  (15)

By plugging Eq. (11) and part (ii) of Theorem 5 into (15), we have $[\beta^*(\lambda)]_{G_j^i} = 0$ if

(a) $\big\| P_{B_j^i}\big([X^T\theta^*(\lambda)]_{G_j^i} - P_{C_j^i}([X^T\theta^*(\lambda)]_{G_j^i})\big) \big\| < w_j^i$, if $G_j^i$ is a non-leaf node,  (R1)
(b) $\big\| P_{B_j^i}\big([X^T\theta^*(\lambda)]_{G_j^i}\big) \big\| < w_j^i$, if $G_j^i$ is a leaf node.
(R2)

Moreover, the definition of $P_{B_j^i}$ implies that we can simplify (R1) and (R2) to the following form:

$\big\| [X^T\theta^*(\lambda)]_{G_j^i} - P_{C_j^i}\big([X^T\theta^*(\lambda)]_{G_j^i}\big) \big\| < w_j^i \Rightarrow [\beta^*(\lambda)]_{G_j^i} = 0$, if $G_j^i$ is a non-leaf node,  (R1')
$\big\| [X^T\theta^*(\lambda)]_{G_j^i} \big\| < w_j^i \Rightarrow [\beta^*(\lambda)]_{G_j^i} = 0$, if $G_j^i$ is a leaf node.  (R2')

However, (R1') and (R2') are not directly applicable for detecting inactive nodes, as they involve $\theta^*(\lambda)$. Inspired by SAFE [8], we first estimate a set $\Theta$ containing $\theta^*(\lambda)$. Let $[X^T\Theta]_{G_j^i} = \{[X^T\theta]_{G_j^i} : \theta \in \Theta\}$ and

$S_j^i(z) = z_{G_j^i} - P_{C_j^i}(z_{G_j^i})$.  (16)

Then, we can relax (R1') and (R2') to

$\sup_\zeta \big\{ \|S_j^i(\zeta)\| : \zeta_{G_j^i} \in \Xi_j^i \supseteq [X^T\Theta]_{G_j^i} \big\} < w_j^i \Rightarrow [\beta^*(\lambda)]_{G_j^i} = 0$, if $G_j^i$ is a non-leaf node,  (R1*)
$\sup_\zeta \big\{ \|\zeta_{G_j^i}\| : \zeta_{G_j^i} \in [X^T\Theta]_{G_j^i} \big\} < w_j^i \Rightarrow [\beta^*(\lambda)]_{G_j^i} = 0$, if $G_j^i$ is a leaf node.  (R2*)

In view of (R1*) and (R2*), we sketch the development of MLFre in the following three steps.
Step 1: We estimate a set $\Theta$ that contains $\theta^*(\lambda)$.
Step 2: We solve for the supremum values in (R1*) and (R2*), respectively.
Step 3: We develop MLFre by plugging the supremum values obtained in Step 2 into (R1*) and (R2*).

4.1 The Effective Interval of the Regularization Parameter $\lambda$

The geometric property of the dual problem in (2), i.e., $\theta^*(\lambda) = P_F(y/\lambda)$, implies that $\theta^*(\lambda) = y/\lambda$ if $y/\lambda \in F$. Moreover, (R1) applied to the root node $G_1^0$ yields $\beta^*(\lambda) = 0$ if $y/\lambda$ is an interior point of $F$. Indeed, the following theorem presents stronger results.

Theorem 7. For TGL, let $\lambda_{\max} = \max\{\lambda : y/\lambda \in F\}$ and let $S_1^0(\cdot)$ be defined by Eq. (16). Then,
(i) $\lambda_{\max}$ satisfies $\|S_1^0(X^T y/\lambda_{\max})\| = w_1^0$;
(ii) $\frac{y}{\lambda} \in F \Leftrightarrow \lambda \geq \lambda_{\max} \Leftrightarrow \theta^*(\lambda) = \frac{y}{\lambda} \Leftrightarrow \beta^*(\lambda) = 0$.

For more discussion of $\lambda_{\max}$, please refer to Section H in the supplements.

5 The Proposed Multi-Layer Feature Reduction Method for TGL

We follow the three steps in Section 4 to develop MLFre. Specifically, we first present an accurate estimation of the dual optimum in Section 5.1; then we solve for the supremum values in (R1*) and (R2*) in Section 5.2; finally, we present the proposed MLFre in Section 5.3.
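Since the screening rules repeatedly invoke the hierarchical projection of Algorithm 1, it is worth noting how little code it takes. The sketch below is our own illustration, not the authors' reference implementation: the tree is assumed to be given level by level as (weight, index-list) pairs, and the loop runs bottom-up, projecting each residual onto the node's ball as in Eq. (11) and accumulating as in Eq. (12).

```python
import numpy as np

def proj_ball(z, gamma):
    """P_B(z) for the ball B = {u : ||u|| <= gamma} (Lemma 3(i))."""
    n = np.linalg.norm(z)
    return z.copy() if n <= gamma else (gamma / n) * z

def hierarchical_projection(z, levels):
    """Algorithm 1 sketch: levels[i] lists the (weight, indices) of the
    depth-i nodes, top-down; returns the accumulated projection of z.

    A single vector u suffices for all the u^i of Algorithm 1, because
    nodes of the same depth are disjoint and each node only reads u on
    its own indices, which deeper levels have already filled in."""
    u = np.zeros_like(z)                  # plays the role of u^{i+1}
    for nodes in reversed(levels):        # i = d, d-1, ..., 0
        for w, g in nodes:
            idx = list(g)
            v = proj_ball(z[idx] - u[idx], w)   # Eq. (11)
            u[idx] += v                          # Eq. (12)
    return u
```

For a single-node tree this reduces to the plain ball projection; for a two-level tree the leaves are projected first and the root then projects only the remaining residual.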
5.1 Estimation of the Dual Optimum

We estimate the dual optimum via the geometric properties of projection operators [recall that $\theta^*(\lambda) = P_F(y/\lambda)$]. We first introduce a useful tool for characterizing projection operators.

Definition 8. [2] For a closed convex set $C$ and a point $z_0 \in C$, the normal cone to $C$ at $z_0$ is $N_C(z_0) = \{\zeta : \langle \zeta, z - z_0 \rangle \leq 0,\ \forall z \in C\}$.

Theorem 7 implies that $\theta^*(\lambda)$ is known for $\lambda \geq \lambda_{\max}$. Thus, we can estimate $\theta^*(\lambda)$ in terms of a known $\theta^*(\lambda_0)$. This leads to Theorem 9, which bounds the dual optimum by a small ball.

Theorem 9. For TGL, suppose that $\theta^*(\lambda_0)$ is known with $\lambda_0 \leq \lambda_{\max}$. For $\lambda \in (0, \lambda_0)$, define

$n(\lambda_0) = \begin{cases} \frac{y}{\lambda_0} - \theta^*(\lambda_0), & \text{if } \lambda_0 < \lambda_{\max}, \\ X S_1^0\big(X^T \frac{y}{\lambda_{\max}}\big), & \text{if } \lambda_0 = \lambda_{\max}, \end{cases}$

$r(\lambda, \lambda_0) = \frac{y}{\lambda} - \theta^*(\lambda_0), \qquad r^\perp(\lambda, \lambda_0) = r(\lambda, \lambda_0) - \frac{\langle r(\lambda,\lambda_0),\ n(\lambda_0) \rangle}{\|n(\lambda_0)\|^2}\, n(\lambda_0)$.

Then, the following hold: (i) $n(\lambda_0) \in N_F(\theta^*(\lambda_0))$; (ii) $\big\| \theta^*(\lambda) - \big(\theta^*(\lambda_0) + \tfrac{1}{2} r^\perp(\lambda, \lambda_0)\big) \big\| \leq \tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|$.

Theorem 9 indicates that $\theta^*(\lambda)$ lies inside the ball of radius $\tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|$ centered at $o(\lambda, \lambda_0) = \theta^*(\lambda_0) + \tfrac{1}{2} r^\perp(\lambda, \lambda_0)$.

5.2 Solving the Nonconvex Optimization Problems in (R1*) and (R2*)

We now solve for the supremum values in (R1*) and (R2*). For notational convenience, let

$\Theta = \{\theta : \|\theta - o(\lambda, \lambda_0)\| \leq \tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|\}$,  (17)
$\Xi_j^i = \{\zeta \in \mathcal{H}_{G_j^i} : \|\zeta - [X^T o(\lambda, \lambda_0)]_{G_j^i}\| \leq \tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|\,\|X_{G_j^i}\|_2\}$.  (18)

Theorem 9 implies that $\theta^*(\lambda) \in \Theta$, and thus $[X^T\Theta]_{G_j^i} \subseteq \Xi_j^i$ for all non-leaf nodes $G_j^i$. To develop MLFre via (R1*) and (R2*), we need to solve the following optimization problems:

$s_j^i(\lambda, \lambda_0) = \sup_\zeta \{\|S_j^i(\zeta)\| : \zeta \in \Xi_j^i\}$, if $G_j^i$ is a non-leaf node,  (19)
$s_j^i(\lambda, \lambda_0) = \sup_\zeta \{\|\zeta\| : \zeta \in \Xi_j^i\}$, if $G_j^i$ is a leaf node.  (20)

Before solving problems (19) and (20), we first introduce some notation.

Definition 10. For a non-leaf node $G_j^i$ of an index tree $T$, let $I_c(G_j^i) = \{k : G_k^{i+1} \subset G_j^i\}$. If $G_j^i \setminus \bigcup_{k \in I_c(G_j^i)} G_k^{i+1} \neq \emptyset$, we define a virtual child node of $G_j^i$ by $G_{j'}^{i+1} = G_j^i \setminus \bigcup_{k \in I_c(G_j^i)} G_k^{i+1}$ for $j' \in \{n_{i+1}+1, n_{i+1}+2, \ldots, n_{i+1}+n'_{i+1}\}$, where $n'_{i+1}$ is the number of virtual nodes of depth $i+1$.
We set the weights $w_{j'}^i = 0$ for all virtual nodes $G_{j'}^i$. Another useful concept is the so-called unique path between nodes of the tree.

Lemma 11. [16] For any non-root node $G_j^i$, there is a unique path from $G_j^i$ to the root $G_1^0$. Let the nodes on this path be $G_{r_l}^l$, where $l \in 0 \cup [i]$, $r_0 = 1$, and $r_i = j$. Then, the following hold:

$G_j^i \subset G_{r_l}^l$, for all $l \in 0 \cup [i-1]$,  (21)
$G_j^i \cap G_r^l = \emptyset$, for all $r \neq r_l$, $l \in [i-1]$, $r \in [n_i]$.  (22)

Solving Problem (19). We consider the following equivalent form of (19):

$\tfrac{1}{2}\big(s_j^i(\lambda, \lambda_0)\big)^2 = \sup_\zeta \big\{ \tfrac{1}{2}\|S_j^i(\zeta)\|^2 : \zeta \in \Xi_j^i \big\}$, if $G_j^i$ is a non-leaf node.  (23)

Although both the objective function and the feasible set of problem (23) are convex, the problem is nonconvex because we seek the supremum. We derive the closed form solutions of (19) and (23) as follows.

Theorem 12. Let $c = [X^T o(\lambda, \lambda_0)]_{G_j^i}$, $\gamma = \tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|\,\|X_{G_j^i}\|_2$, and let $v^i$, $i \in 0 \cup [d]$, be the output of Algorithm 1 with input $X^T o(\lambda, \lambda_0)$.
(i) Suppose that $c \notin C_j^i$. Then, $s_j^i(\lambda, \lambda_0) = \|v^i_{G_j^i}\| + \gamma$.
(ii) Suppose that node $G_j^i$ has a virtual child node. Then, for any $c \in C_j^i$, $s_j^i(\lambda, \lambda_0) = \gamma$.
(iii) Suppose that node $G_j^i$ has no virtual child node. Then, the following hold.
(iii.a) If $c \in \operatorname{rbd} C_j^i$, then $s_j^i(\lambda, \lambda_0) = \gamma$.
(iii.b) If $c \in \operatorname{ri} C_j^i$, then, for any node $G_k^t \subset G_j^i$ with $t \in \{i+1, \ldots, d\}$ and $k \in [n_t + n'_t]$, let the nodes on the path from $G_k^t$ to $G_j^i$ be $G_{r_l}^l$, $l = i, \ldots, t$, with $r_i = j$ and $r_t = k$, and define

$\Gamma\big(G_{r_{i+1}}^{i+1}, G_k^t\big) = \sum_{l=i+1}^{t} \big( w_{r_l}^l - \|v^l_{G_{r_l}^l}\| \big)$.  (24)

Then, $s_j^i(\lambda, \lambda_0) = \big( \gamma - \min_{\{(k,t):\, G_k^t \subset G_j^i\}} \Gamma\big(G_{r_{i+1}}^{i+1}, G_k^t\big) \big)_+$.

Solving Problem (20). We can solve problem (20) by the Cauchy-Schwarz inequality.

Theorem 13. For problem (20), we have $s_j^i(\lambda, \lambda_0) = \|[X^T o(\lambda, \lambda_0)]_{G_j^i}\| + \tfrac{1}{2}\|r^\perp(\lambda, \lambda_0)\|\,\|X_{G_j^i}\|_2$.

5.3 The Multi-Layer Screening Rule

In real-world applications, the optimal parameter value is usually unknown.
Commonly used approaches for determining an appropriate parameter value, such as cross validation and stability selection, solve TGL many times over a grid of parameter values. This process can be very time consuming. Motivated by this challenge, we present MLFre in the following theorem by plugging the supremum values found by Theorems 12 and 13 into (R1*) and (R2*), respectively.

Theorem 14. For the TGL problem, suppose that we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \cdots > \lambda_K$. For each integer $k = 0, \ldots, K-1$, we compute $\theta^*(\lambda_k)$ from a given $\beta^*(\lambda_k)$ via Eq. (3). Then, for $i = 1, \ldots, d$, MLFre takes the form

$s_j^i(\lambda_{k+1}, \lambda_k) < w_j^i \Rightarrow [\beta^*(\lambda_{k+1})]_{G_j^i} = 0$, for all $j \in [n_i]$.  (MLFre)

Remark 2. We apply MLFre to identify inactive nodes hierarchically in a top-down fashion. Note that we do not need to apply MLFre to a node $G_j^i$ if one of its ancestor nodes passes the rule.

Remark 3. To simplify notation, we consider TGL with a single tree in the proofs. However, all major results apply directly to TGL with multiple trees, as the trees are independent of each other. We note that many sparse models, such as Lasso, group Lasso, and sparse group Lasso, are special cases of TGL with multiple trees.

[Figure 1: Rejection ratios of MLFre on two synthetic data sets with different feature dimensions; six panels, for synthetic 1 and synthetic 2 with p = 20000, 50000, 100000.]

6 Experiments

We evaluate MLFre on both synthetic and real data sets using two measures. The first measure is the rejection ratio of MLFre at each level of the tree. Let $p_0$ be the number of zero coefficients in the solution vector and let $\mathcal{G}_i$ be the index set of the inactive nodes of depth $i$ identified by MLFre. The rejection ratio of the $i$th layer of MLFre is defined by $r_i = \frac{\sum_{k \in \mathcal{G}_i} |G_k^i|}{p_0}$, where $|G_k^i|$ is the number of features contained in node $G_k^i$.
The second measure is speedup, namely the ratio of the running time of the solver without screening to the running time of the solver with MLFre. For each data set, we run the solver combined with MLFre along a sequence of 100 parameter values equally spaced on the logarithmic scale of $\lambda/\lambda_{\max}$ from 1.0 to 0.05. The solver for TGL is from the SLEP package [15], which also provides an efficient routine to compute $\lambda_{\max}$.

6.1 Simulation Studies

Table 1: Running time (in seconds) for solving TGL along a sequence of 100 tuning parameter values of $\lambda$ equally spaced on the logarithmic scale of $\lambda/\lambda_{\max}$ from 1.0 to 0.05 by (a) the solver [15] without screening (third column) and (b) the solver with MLFre (fifth column).

Dataset     | p      | solver   | MLFre | MLFre+solver | speedup
synthetic 1 | 20000  | 483.96   | 1.03  | 30.17        | 16.04
synthetic 1 | 50000  | 1175.91  | 2.95  | 39.49        | 29.78
synthetic 1 | 100000 | 2391.43  | 6.57  | 58.91        | 40.60
synthetic 2 | 20000  | 470.54   | 1.19  | 37.87        | 12.43
synthetic 2 | 50000  | 1122.30  | 3.13  | 43.97        | 25.53
synthetic 2 | 100000 | 2244.06  | 6.18  | 60.96        | 36.81
ADNI+GMV    | 406262 | 20911.92 | 81.14 | 492.08       | 42.50
ADNI+WMV    | 406262 | 21855.03 | 80.83 | 556.19       | 39.29
ADNI+WBV    | 406262 | 20812.06 | 82.10 | 564.36       | 36.88

We perform experiments on two synthetic data sets, named synthetic 1 and synthetic 2, which are commonly used in the literature [21, 31]. The true model is $y = X\beta^* + 0.01\epsilon$, $\epsilon \sim N(0, 1)$. For each data set, we fix $N = 250$ and consider $p = 20000, 50000, 100000$. We create a tree of height 4, i.e., $d = 3$. The average sizes of the nodes of depth 1, 2, and 3 are 50, 10, and 1, respectively. Thus, for $p = 100000$, we have roughly $n_1 = 2000$, $n_2 = 10000$, and $n_3 = 100000$. For synthetic 1, the entries of the data matrix $X$ are i.i.d. standard Gaussian with zero pairwise correlation, i.e., $\operatorname{corr}(x_i, x_j) = 0$ for the $i$th and $j$th columns of $X$ with $i \neq j$. For synthetic 2, the entries of $X$ are drawn from a standard Gaussian with pairwise correlation $\operatorname{corr}(x_i, x_j) = 0.5^{|i-j|}$.
To construct $\beta^*$, we first randomly select 50% of the nodes of depth 1, and then randomly select 20% of the children nodes (of depth 2) of the remaining depth-1 nodes. The components of $\beta^*$ corresponding to the remaining nodes are populated from a standard Gaussian, and the remaining components are set to zero.

[Figure 2: Rejection ratios of MLFre on the ADNI data set with grey matter volume (GMV), white matter volume (WMV), and whole brain volume (WBV) as response vectors, respectively.]

Fig. 1 shows the rejection ratios of all three layers of MLFre. We can see that MLFre identifies almost all of the inactive nodes, i.e., $\sum_{i=1}^{3} r_i \geq 90\%$, and that the first layer contributes the most. Moreover, Fig. 1 also indicates that, as the feature dimension (and the number of nodes in each level) increases, MLFre identifies more inactive nodes, with $\sum_{i=1}^{3} r_i \approx 100\%$. Thus, we can expect an even greater capability of MLFre to identify inactive nodes on data sets of higher dimension. Table 1 shows the running time of the solver with and without MLFre. We observe significant speedups gained by MLFre, of up to 40 times. Take synthetic 1 with $p = 100000$, for example: the solver without MLFre takes about 40 minutes to solve TGL at 100 parameter values, whereas the solver combined with MLFre needs less than one minute for the same task. Table 1 also shows that the computational cost of MLFre itself is very low, negligible compared to that of the solver without MLFre. Moreover, as MLFre identifies more inactive nodes with increasing feature dimension, Table 1 shows that the speedup gained by MLFre becomes more significant as well.

6.2 Experiments on the ADNI Data Set

We perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) data set (http://adni.loni.usc.edu/). The data set consists of 747 patients with 406262 single nucleotide polymorphisms (SNPs).
We create the index tree such that $n_1 = 4567$, $n_2 = 89332$, and $n_3 = 406262$. Fig. 2 presents the rejection ratios of MLFre on the ADNI data set with grey matter volume (GMV), white matter volume (WMV), and whole brain volume (WBV) as response, respectively. We can see that MLFre identifies almost all inactive nodes, i.e., $\sum_{i=1}^{3} r_i \approx 100\%$. As a result, we observe from Table 1 significant speedups gained by MLFre, of about 40 times. Specifically, with GMV as response, the solver without MLFre takes about six hours to solve TGL at 100 parameter values, whereas the solver combined with MLFre needs only about eight minutes for the same task. Moreover, Table 1 also indicates that the computational cost of MLFre itself is very low, negligible compared to that of the solver without MLFre.

7 Conclusion

In this paper, we propose a novel multi-layer feature reduction (MLFre) method for TGL. Our major technical contributions are twofold. The first is a novel hierarchical projection algorithm that exactly and efficiently recovers the subgradients of the tree-guided regularizer with respect to each node from their mixture. The second is showing that a highly nontrivial nonconvex problem admits a closed form solution. To the best of our knowledge, MLFre is the first screening method applicable to TGL. An appealing feature of MLFre is that it is exact, in the sense that the identified inactive nodes are guaranteed to be absent from the sparse representation. Experiments on both synthetic and real data sets demonstrate that MLFre is very effective in identifying inactive nodes, leading to substantial savings in computational cost and memory usage without sacrificing accuracy. Moreover, MLFre's capability to identify inactive nodes is even more pronounced on higher dimensional data sets. We plan to generalize MLFre to more general and complicated sparse models, e.g., overlapping group Lasso with logistic loss.
In addition, we plan to apply MLFre to other applications, e.g., brain image analysis [10, 18] and natural language processing [27, 28].

Acknowledgments

This work is supported in part by research grants from NIH (R01 LM010730, U54 EB020403) and NSF (IIS-0953662, III-1539991, III-1539722).

References

[1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, Jan. 2012.
[2] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[3] M. Bazaraa, H. Sherali, and C. Shetty. Nonlinear Programming: Theory and Algorithms. Wiley-Interscience, 2006.
[4] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization, Second Edition. Canadian Mathematical Society, 2006.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] X. Chen, Q. Lin, S. Kim, J. Carbonell, and E. Xing. Smoothing proximal gradient method for general structured sparse regression. Annals of Applied Statistics, pages 719–752, 2012.
[7] W. Deng, W. Yin, and Y. Zhang. Group sparse optimization by alternating direction method. Technical report, Rice CAAM Report TR11-06, 2011.
[8] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific Journal of Optimization, 8:667–698, 2012.
[9] J.-B. Hiriart-Urruty. From convex optimization to nonconvex optimization: necessary and sufficient conditions for global optimality. In Nonsmooth Optimization and Related Topics. Springer, 1988.
[10] R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multiscale mining of fMRI data with hierarchical structured sparsity. SIAM Journal on Imaging Science, pages 835–856, 2012.
[11] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, 12:2297–2334, 2011.
[12] K. Jia, T.
Chan, and Y. Ma. Robust and practical face recognition via structured sparsity. In European Conference on Computer Vision, 2012.
[13] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In International Conference on Machine Learning, 2010.
[14] S. Kim and E. Xing. Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. The Annals of Applied Statistics, 2012.
[15] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
[16] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances in Neural Information Processing Systems, 2010.
[17] J. Liu, Z. Zhao, J. Wang, and J. Ye. Safe screening with variational inequalities and its application to lasso. In International Conference on Machine Learning, 2014.
[18] M. Liu, D. Zhang, P. Yap, and D. Shen. Tree-guided sparse coding for brain disease classification. In Medical Image Computing and Computer-Assisted Intervention, 2012.
[19] K. Ogawa, Y. Suzuki, and I. Takeuchi. Safe screening of non-support vectors in pathwise SVM computation. In ICML, 2013.
[20] A. Ruszczyński. Nonlinear Optimization. Princeton University Press, 2006.
[21] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society Series B, 74:245–266, 2012.
[22] J. Wang, W. Fan, and J. Ye. Fused lasso screening rules via the monotonicity of subdifferentials. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1–1, 2015.
[23] J. Wang, P. Wonka, and J. Ye. Scaling SVM and least absolute deviations via exact data reduction. In International Conference on Machine Learning, 2014.
[24] J. Wang, P. Wonka, and J. Ye. Lasso screening rules via dual polytope projection. Journal of Machine Learning Research, 16:1063–1101, 2015.
[25] J. Wang and J. Ye. Two-layer feature reduction for sparse-group lasso via decomposition of convex sets. In Advances in Neural Information Processing Systems, 2014.
[26] Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representation of high dimensional data on large scale dictionaries. In NIPS, 2011.
[27] D. Yogatama, M. Faruqui, C. Dyer, and N. Smith. Learning word representations with hierarchical sparse coding. In International Conference on Machine Learning, 2015.
[28] D. Yogatama and N. Smith. Linguistic structured sparsity in text categorization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2014.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68:49–67, 2006.
[30] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 2009.
[31] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B, 67:301–320, 2005.
From random walks to distances on unweighted graphs

Tatsunori B. Hashimoto, MIT EECS, thashim@mit.edu
Yi Sun, MIT Mathematics, yisun@mit.edu
Tommi S. Jaakkola, MIT EECS, tommi@mit.edu

Abstract

Large unweighted directed graphs are commonly used to capture relations between entities. A fundamental problem in the analysis of such networks is to properly define the similarity or dissimilarity between any two vertices. Despite the significance of this problem, statistical characterization of the proposed metrics has been limited. We introduce and develop a class of techniques for analyzing random walks on graphs using stochastic calculus. Using these techniques, we generalize results on the degeneracy of hitting times and analyze a metric based on the Laplace transformed hitting time (LTHT). The metric serves as a natural, provably well-behaved alternative to the expected hitting time. We establish a general correspondence between hitting times of the Brownian motion and analogous hitting times on the graph. We show that the LTHT is consistent with respect to the underlying metric of a geometric graph, preserves clustering tendency, and remains robust against the random addition of non-geometric edges. Tests on simulated and real-world data show that the LTHT matches theoretical predictions and outperforms alternatives.

1 Introduction

Many network metrics have been introduced to measure the similarity between any two vertices. Such metrics can be used for a variety of purposes, including uncovering missing edges or pruning spurious ones. Since the metrics tacitly assume that vertices lie in a latent (metric) space, one could expect that they also recover the underlying metric in some well-defined limit. Surprisingly, there are nearly no known results on this type of consistency. Indeed, it was recently shown [19] that the expected hitting time degenerates and does not measure any notion of distance.
We analyze an improved hitting-time metric – Laplace transformed hitting time (LTHT) – and rigorously evaluate its consistency, cluster-preservation, and robustness under a general network model which encapsulates the latent space assumption. This network model, specified in Section 2, posits that vertices lie in a latent metric space, and edges are drawn between nearby vertices in that space. To analyze the LTHT, we develop two key technical tools. We establish a correspondence between functionals of hitting time for random walks on graphs, on the one hand, and limiting Itô processes (Corollary 4.4) on the other. Moreover, we construct a weighted random walk on the graph whose limit is a Brownian motion (Corollary 4.1). We apply these tools to obtain three main results. First, our Theorem 3.5 recapitulates and generalizes the result of [19] pertaining to degeneration of expected hitting time in the limit. Our proof is direct and demonstrates the broader applicability of the techniques to general random walk based algorithms. Second, we analyze the Laplace transformed hitting time as a one-parameter family of improved distance estimators based on random walks on the graph. We prove that there exists a scaling limit for the parameter β such that the LTHT can become the shortest path distance (Theorem S5.2) or a consistent metric estimator averaging over many paths (Theorem 4.5). Finally, we prove that the LTHT captures the advantages of random-walk based metrics by respecting the cluster structure (Theorem 4.6) and robustly recovering similarity queries when the majority of edges carry no geometric information (Theorem 4.9). We now discuss the relation of our work to prior work on similarity estimation.

Quasi-walk metrics: There is a growing literature on graph metrics that attempts to correct the degeneracy of expected hitting time [19] by interpolating between expected hitting time and shortest path distance.
The work closest to ours is the analysis of the phase transition of the p-resistance metric in [1] which proves that p-resistances are nondegenerate for some p; however, their work did not address consistency or bias of p-resistances. Other approaches to quasi-walk metrics such as logarithmic-forest [3], distributed routing distances [16], truncated hitting times [12], and randomized shortest paths [8, 21] exist but their statistical properties are unknown. Our paper is the first to prove consistency properties of a quasi-walk metric. Nonparametric statistics: In the nonparametric statistics literature, the behavior of k-nearest neighbor and ε-ball graphs has been the focus of extensive study. For undirected graphs, Laplacian-based techniques have yielded consistency for clusters [18] and shortest paths [2] as well as the degeneracy of expected hitting time [19]. Algorithms for exactly embedding k-nearest neighbor graphs are similar and generate metric estimates, but require knowledge of the graph construction method, and their consistency properties are unknown [13]. Stochastic differential equation techniques similar to ours were applied to prove Laplacian convergence results in [17], while the process-level convergence was exploited in [6]. Our work advances the techniques of [6] by extracting more robust estimators from process-level information. Network analysis: The task of predicting missing links in a graph, known as link prediction, is one of the most popular uses of similarity estimation. The survey [9] compares several common link prediction methods on synthetic benchmarks. The consistency of some local similarity metrics such as the number of shared neighbors was analyzed under a single generative model for graphs in [11]. Our results extend this analysis to a global, walk-based metric under weaker model assumptions. 
2 Continuum limits of random walks on networks

2.1 Definition of a spatial graph

We take a generative approach to defining similarity between vertices. We suppose that each vertex $i$ of a graph is associated with a latent coordinate $x_i \in \mathbb{R}^d$ and that the probability of finding an edge between two vertices depends solely on their latent coordinates. In this model, given only the unweighted edge connectivity of a graph, we define natural distances between vertices as the distances between the latent coordinates $x_i$. Formally, let $X = \{x_1, x_2, \ldots\} \subset \mathbb{R}^d$ be an infinite sequence of points drawn i.i.d. from a differentiable density $p(x)$ with bounded log gradient and compact support $D$. A spatial graph is defined by the following:

Definition 2.1 (Spatial graph). Let $\varepsilon_n : X_n \to \mathbb{R}_{>0}$ be a local scale function and $h : \mathbb{R}_{\ge 0} \to [0, 1]$ a piecewise continuous function with $h(x) = 0$ for $x > 1$, $h(1) > 0$, and $h$ left-continuous at 1. The spatial graph $G_n$ corresponding to $\varepsilon_n$ and $h$ is the random graph with vertex set $X_n$ and a directed edge from $x_i$ to $x_j$ with probability $p_{ij} = h(|x_i - x_j|\,\varepsilon_n(x_i)^{-1})$.

This graph was proposed in [6] as the generalization of k-nearest neighbors to isotropic kernels. To make inference tractable, we focus on the large-graph, small-neighborhood limit as $n \to \infty$ and $\varepsilon_n(x) \to 0$. In particular, we will suppose that there exist scaling constants $g_n$ and a deterministic continuous function $\varepsilon : D \to \mathbb{R}_{>0}$ so that
$$g_n \to 0, \qquad g_n\, n^{\frac{1}{d+2}} \log(n)^{-\frac{1}{d+2}} \to \infty, \qquad \varepsilon_n(x)\, g_n^{-1} \to \varepsilon(x) \ \text{for } x \in X_n,$$
where the final convergence is uniform in $x$ and a.s. in the draw of $X$. The scaling constant $g_n$ represents a bound on the asymptotic sparsity of the graph. We give a few concrete examples to make the quantities $h$, $g_n$, and $\varepsilon_n$ clear.

1. The directed k-nearest neighbor graph is defined by setting $h(x) = \mathbf{1}_{x \in [0,1]}$, the indicator function of the unit interval, $\varepsilon_n(x)$ the distance to the $k$th nearest neighbor, and $g_n = (k/n)^{1/d}$ the rate at which $\varepsilon_n(x)$ approaches zero.

2.
A Gaussian kernel graph is approximated by setting $h(x) = \exp(-x^2/\sigma^2)\,\mathbf{1}_{x \in [0,1]}$. The truncation of the Gaussian tails at $\sigma$ is an analytic convenience rather than a fundamental limitation, and the bandwidth can be varied by rescaling $\varepsilon_n(x)$.

2.2 Continuum limit of the random walk

Our techniques rely on analysis of the limiting behavior of the simple random walk $X^n_t$ on a spatial graph $G_n$, viewed as a discrete-time Markov process with domain $D$. The increment at step $t$ of $X^n_t$ is a jump to a random point in $X_n$ which lies within the ball of radius $\varepsilon_n(X^n_t)$ around $X^n_t$. We observe three effects: (A) the random walk jumps more frequently towards regions of high density; (B) the random walk moves more quickly whenever $\varepsilon_n(X^n_t)$ is large; (C) for $\varepsilon_n$ small and a large step count $t$, the random variable $X^n_t - X^n_0$ is the sum of many small independent (but not necessarily identically distributed) increments. In the $n \to \infty$ limit, we may identify $X^n_t$ with a continuous-time stochastic process satisfying (A), (B), and (C) via the following result, which is a slight strengthening of [6, Theorem 3.4] obtained by applying [15, Theorem 11.2.3] in place of the original result of Stroock-Varadhan.

Theorem 2.2. The simple random walk $X^n_t$ converges uniformly in Skorokhod space $D([0, \infty), D)$ after a time scaling $\hat{t} = t g_n^2$ to the Itô process $Y_{\hat{t}}$ valued in the space of continuous functions $C([0, \infty), D)$ with reflecting boundary conditions on $D$, defined by
$$dY_{\hat{t}} = \nabla \log(p(Y_{\hat{t}}))\,\varepsilon(Y_{\hat{t}})^2/3\; d\hat{t} + \varepsilon(Y_{\hat{t}})/\sqrt{3}\; dW_{\hat{t}}. \qquad (1)$$

Effects (A), (B), and (C) may be seen in the stochastic differential equation (1) as follows. The direction of the drift is controlled by $\nabla \log(p(Y_{\hat{t}}))$, the rate of drift is controlled by $\varepsilon(Y_{\hat{t}})^2$, and the noise is driven by a Brownian motion $W_{\hat{t}}$ with location-dependent scaling $\varepsilon(Y_{\hat{t}})/\sqrt{3}$.¹ We view Theorem 2.2 as a method to understand the simple random walk $X^n_t$ through the continuous walk $Y_{\hat{t}}$.
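To make Definition 2.1 and the walk $X^n_t$ concrete, the directed k-nearest-neighbor instance and the simple random walk on it can be sketched in a few lines (assuming NumPy; the helper names are ours, not the paper's):

```python
import numpy as np

def directed_knn_graph(X, k):
    """Adjacency matrix of the directed k-nearest-neighbor spatial graph:
    Definition 2.1 with h(x) = 1_{x in [0,1]} and eps_n(x_i) the distance
    from x_i to its k-th nearest neighbor."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        # edge x_i -> each of its k nearest neighbors (index 0 is x_i itself)
        A[i, np.argsort(D[i])[1:k + 1]] = 1
    return A

def simple_random_walk(A, start, steps, rng):
    """Trajectory of the simple random walk X^n_t: each step jumps
    uniformly to one of the current vertex's out-neighbors."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(np.flatnonzero(A[path[-1]])))
    return path

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))          # latent coordinates in [0,1]^2
A = directed_knn_graph(X, k=10)
assert (A.sum(axis=1) == 10).all()      # out-degree is exactly k
path = simple_random_walk(A, start=0, steps=500, rng=rng)
```

Varying $k$ here varies $\varepsilon_n$ and hence the scaling constant $g_n = (k/n)^{1/d}$ discussed above.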
Attributes of stochastic processes such as the stationary distribution or hitting time may be defined for both $Y_{\hat{t}}$ and $X^n_t$, and in many cases Theorem 2.2 implies that an appropriately rescaled version of the discrete attribute will converge to the continuous one. Because attributes of the continuous process $Y_{\hat{t}}$ can reveal information about proximity between points, this provides a general framework for inference in spatial graphs. We use hitting times of the continuous process to a domain $E \subset D$ to prove properties of the hitting time of a simple random walk on a graph via the limit arguments of Theorem 2.2.

3 Degeneracy of expected hitting times in networks

The hitting time, commute time, and resistance distance are popular measures of distance based upon the random walk which are believed to be robust and capture the cluster structure of the network. However, it was shown in a surprising result in [19] that on undirected geometric graphs the scaled expected hitting time from $x_i$ to $x_j$ converges to the inverse of the degree of $x_j$. In Theorem 3.5, we give an intuitive explanation and generalization of this result by showing that if the random walk on a graph converges to any limiting Itô process in dimension $d \ge 2$, the scaled expected hitting time to any point converges to the inverse of the stationary distribution. This answers the open problem in [19] on the degeneracy of hitting times for directed graphs and graphs with general degree distributions, such as directed k-nearest neighbor graphs, lattices, and power-law graphs with convergent random walks. Our proof can be understood as first extending the transience or neighborhood recurrence of Brownian motion for $d \ge 2$ to more general Itô processes and then connecting hitting times on graphs to their Itô process equivalents.

3.1 Typical hitting times are large

We will prove the following lemma showing that hitting a given vertex quickly is unlikely.
Let $T^{x_i}_{x_j,n}$ be the hitting time to $x_j$ of $X^n_t$ started at $x_i$, and let $T^{x_i}_E$ be the continuous equivalent for $Y_{\hat{t}}$ to hit $E \subset D$.

¹Both the variance $\Theta(\varepsilon_n(x)^2)$ and expected value $\Theta(\nabla\log(p(x))\,\varepsilon_n(x)^2)$ of a single step in the simple random walk are $\Theta(g_n^2)$. The time scaling $\hat{t} = t g_n^2$ in Theorem 2.2 was chosen so that as $n \to \infty$ there are $g_n^{-2}$ discrete steps taken per unit time, meaning the total drift and variance per unit time tend to a non-trivial limit.

Lemma 3.1 (Typical hitting times are large). For any $d \ge 2$, $c > 0$, and $\delta > 0$, for large enough $n$ we have $P(T^{x_i}_{x_j,n} > c g_n^{-2}) > 1 - \delta$.

To prove Lemma 3.1, we require the following tail bound following from the Feynman-Kac theorem.

Theorem 3.2 ([10, Exercise 9.12], Feynman-Kac for the Laplace transform). The Laplace transform of the hitting time (LTHT) $u(x) = E[\exp(-\beta T^x_E)]$ is the solution to the boundary value problem
$$\tfrac{1}{2}\,\mathrm{Tr}[\sigma^T H(u)\,\sigma] + \mu(x)\cdot\nabla u - \beta u = 0$$
with boundary condition $u|_{\partial E} = 1$.

This will allow us to bound the hitting time to the ball $B(x_j, s)$ of radius $s$ centered at $x_j$.

Lemma 3.3. For $x, y \in D$, $d \ge 2$, and any $\delta > 0$, there exists $s > 0$ such that $E[e^{-T^x_{B(y,s)}}] < \delta$.

Proof. We compare the Laplace transformed hitting time of the general Itô process to that of Brownian motion via Feynman-Kac and handle the latter case directly. Details are in Section S2.1.

We now use Lemma 3.3 to prove Lemma 3.1.

Proof of Lemma 3.1. Our proof proceeds in two steps. First, we have $T^{x_i}_{x_j,n} \ge T^{x_i}_{B(x_j,s),n}$ a.s. for any $s > 0$ because $x_j \in B(x_j, s)$, so by Theorem 2.2 we have
$$\lim_{n\to\infty} E[e^{-T^{x_i}_{x_j,n}\, g_n^2}] \le \lim_{n\to\infty} E[e^{-T^{x_i}_{B(x_j,s),n}\, g_n^2}] = E[e^{-T^{x_i}_{B(x_j,s)}}]. \qquad (2)$$
Applying Lemma 3.3, we have $E[e^{-T^{x_i}_{B(x_j,s)}}] < \tfrac{1}{2}\delta e^{-c}$ for some $s > 0$. For large enough $n$, this combined with (2) implies $P(T^{x_i}_{x_j,n} \le c g_n^{-2})\, e^{-c} < \delta e^{-c}$ and hence $P(T^{x_i}_{x_j,n} \le c g_n^{-2}) < \delta$.
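On any finite graph, expected hitting times such as $E[T^{x_i}_{x_j,n}]$ can be computed exactly by a linear solve rather than by simulation, which is how degeneracy of the kind discussed in this section can be checked numerically. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def expected_hitting_times(P, j):
    """E[T_{i->j}] for every start i, from the linear system
    h_i = 1 + sum_k P_ik h_k for i != j, with h_j = 0 (the walk stops at x_j)."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != j]
    Q = P[np.ix_(idx, idx)]                 # sub-chain that avoids x_j
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[idx] = h
    return out

# sanity check: on the complete graph K_n the hitting time is exactly n - 1
n = 50
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
h = expected_hitting_times(P, j=0)
assert np.allclose(h[1:], n - 1)
```

Note that on $K_n$ the result $n - 1$ is essentially $1/\pi_j$ with $\pi_j = 1/n$, the flat-geometry analogue of the degeneracy phenomenon.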
Let $q_t(x_j, x_i)$ denote the probability that $X^n_t = x_j$ conditioned on $X^n_0 = x_i$. We make the following technical conjecture, which we assume holds for all spatial graphs.

(⋆) For $t = \Theta(g_n^{-2})$, the rescaled marginal $n\, q_t(x, x_i)$ is a.s. eventually uniformly equicontinuous.²

Let $\pi_{X_n}(x)$ denote the stationary distribution of $X^n_t$. The following was shown in [6, Theorem 2.1] under conditions implied by our condition (⋆) (Corollary S2.6).

Theorem 3.4. Assuming (⋆), for $a^{-1} = \int p(x)^2\, \varepsilon(x)^{-2}\, dx$, we have the a.s. limit
$$\hat\pi(x) := \lim_{n\to\infty} n\,\pi_{X_n}(x) = a\,\frac{p(x)}{\varepsilon(x)^2}.$$

We may now express the limit of expected hitting time in terms of this result.

Theorem 3.5. For $d \ge 2$ and any $i, j$, we have $\dfrac{E[T^{x_i}_{x_j,n}]}{n} \xrightarrow{\text{a.s.}} \dfrac{1}{\hat\pi(x_j)}$.

Proof. We give a sketch. By Lemma 3.1, the random walk started at $x_i$ does not hit $x_j$ within $c g_n^{-2}$ steps with high probability. By Theorem S2.5, the simple random walk $X^n_t$ mixes at an exponential rate, implying in Lemma S2.8 that the probability of first hitting at step $t > c g_n^{-2}$ is approximately the stationary distribution at $x_j$. The expected hitting time is then shown to approximate the expectation of a geometric random variable. See Section S2 for a full proof.

Theorem 3.5 is illustrated in Figures 1A and 1B, which show that, with only 3000 points, expected hitting times on a k-nearest neighbor graph degenerate to the stationary distribution.³

²Assumption (⋆) is related to smoothing properties of the graph Laplacian and is known to hold for undirected graphs [4]. No directed analogue is known, and [6] conjectured a weaker property for all spatial graphs. See Section S1 for further details.
³Surprisingly, [19] proved that 1-D hitting times diverge despite convergence of the continuous equivalent. This occurs because the discrete walk can jump past the target point. In Section S2.4, we consider 1-D hitting

Figure 1: Estimated distance from the orange starting point on a k-nearest neighbor graph constructed on two clusters. A and B show degeneracy of hitting times (Theorem 3.5).
C, D, and E show that the log-LTHT interpolates between hitting time and shortest path.

4 The Laplace transformed hitting time (LTHT)

In Theorem 3.5 we showed that expected hitting time is degenerate because a simple random walk mixes before hitting its target. To correct this we penalize longer paths. More precisely, consider for $\hat\beta > 0$ and $\beta_n = \hat\beta g_n^2$ the Laplace transforms $E[e^{-\hat\beta T^x_E}]$ and $E[e^{-\beta_n T^x_{E,n}}]$ of $T^x_E$ and $T^x_{E,n}$. These Laplace transformed hitting times (LTHTs) have three advantages. First, while the expected hitting time of a Brownian motion to a domain is dominated by long paths, the LTHT is dominated by direct paths. Second, the LTHT for the Itô process can be derived in closed form via the Feynman-Kac theorem, allowing us to make use of techniques from continuous stochastic processes to control the continuum LTHT. Lastly, the LTHT can be computed both by sampling and in closed form as a matrix inversion (Section S3). Now define the scaled log-LTHT as $-\log(E[e^{-\beta_n T^{x_i}_{x_j,n}}])\; g_n/\sqrt{2\beta_n}$. Taking different scalings for $\beta_n$ with $n$ interpolates between expected hitting time ($\beta_n \to 0$ on a fixed graph) and shortest path distance ($\beta_n \to \infty$) (Figures 1C, D, and E). In Theorem 4.5, we show that the intermediate scaling $\beta_n = \Theta(\hat\beta g_n^2)$ yields a consistent distance measure retaining the unique properties of hitting times. Most of our results on the LTHT are novel for any quasi-walk metric. While considering the Laplace transform of the hitting time is novel to our work, this metric has been used in the literature in an ad-hoc manner in various forms as a similarity metric for collaboration networks [20], hidden subgraph detection [14], and robust shortest path distance [21]. However, these papers only considered the elementary properties of the limits $\beta_n \to 0$ and $\beta_n \to \infty$. Our consistency proof demonstrates the advantage of the stochastic process approach.
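As noted above, the LTHT admits a closed form as a matrix inversion: conditioning on the first step gives $u_i = e^{-\beta} \sum_k P_{ik} u_k$ for $i \neq j$ with $u_j = 1$, a linear system. A minimal NumPy sketch of that solve (our own implementation; the paper's Section S3 may differ in details):

```python
import numpy as np

def ltht(P, j, beta):
    """Closed-form LTHT u_i = E[exp(-beta * T_{i->j})] for every start i.

    For i != j, one-step conditioning gives u_i = exp(-beta) * (P u)_i
    with u_j = 1, so the unknown entries solve a linear system."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != j]
    Q = P[np.ix_(idx, idx)]
    b = np.exp(-beta) * P[idx, j]
    u = np.linalg.solve(np.eye(n - 1) - np.exp(-beta) * Q, b)
    out = np.ones(n)
    out[idx] = u
    return out

# sanity check against the symmetric solution on the complete graph K_n,
# where u = exp(-beta) / ((n-1) - exp(-beta) * (n-2)) for every start vertex
n, beta = 30, 0.2
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
u = ltht(P, j=0, beta=beta)
expected = np.exp(-beta) / ((n - 1) - np.exp(-beta) * (n - 2))
assert np.allclose(u[1:], expected)
```

The scaled log-LTHT of the text is then just `-np.log(u) * g_n / np.sqrt(2 * beta)` for the chosen scaling of $\beta_n$.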
4.1 Consistency

It was shown previously that for $n$ fixed and $\beta_n \to \infty$, $-\log(E[\exp(-\beta_n T^{x_i}_{x_j,n})])\; g_n/\beta_n$ converges to the shortest path distance from $x_i$ to $x_j$. We investigate more precise behavior in terms of the scaling of $\beta_n$. There are two regimes: if $\beta_n = \omega(\log(g_n^d n))$, then the shortest path dominates and the LTHT converges to shortest path distance (see Theorem S5.2). If $\beta_n = \Theta(\hat\beta g_n^2)$, the graph log-LTHT converges to its continuous equivalent, which for large $\hat\beta$ averages over random walks concentrated around the geodesic. To show consistency for $\beta_n = \Theta(\hat\beta g_n^2)$, we proceed in four steps: (1) we reweight the random walk on the graph so the limiting process is Brownian motion; (2) we show that the log-LTHT for Brownian motion recovers latent distance; (3) we show that the log-LTHT for the reweighted walk converges to its continuous limit; (4) we conclude that the log-LTHT of the reweighted walk recovers latent distance.

(1) Reweighting the random walk to converge to Brownian motion: We define weights using the estimators $\hat p$ and $\hat\varepsilon$ for $p(x)$ and $\varepsilon(x)$ from [6].

³(cont.) times to small out-neighborhoods, which corrects this problem, and derive closed-form solutions (Theorem S2.12). This hitting time is non-degenerate but highly biased due to boundary terms (Corollary S2.14).

Theorem 4.1. Let $\hat p$ and $\hat\varepsilon$ be consistent estimators of the density and local scale and $A$ be the adjacency matrix. Then the random walk $\hat X^n_t$ defined below converges to a Brownian motion.
$$P(\hat X^n_{t+1} = x_j \mid \hat X^n_t = x_i) = \begin{cases} \dfrac{A_{i,j}\,\hat p(x_j)^{-1}}{\sum_k A_{i,k}\,\hat p(x_k)^{-1}}\;\hat\varepsilon(x_i)^{-2} & i \ne j \\ 1 - \hat\varepsilon(x_i)^{-2} & i = j \end{cases}$$

Proof. Reweighting by $\hat p$ and $\hat\varepsilon$ is designed to cancel the drift and diffusion terms in Theorem 2.2 by ensuring that as $n$ grows large, jumps have means approaching 0 and variances which are asymptotically equal (but decaying with $n$). See Theorem S4.1.⁴

(2) Log-LTHT for a Brownian motion: Let $W_t$ be a Brownian motion with $W_0 = x_i$, and let $T^{x_i}_{B(x_j,s)}$ be the hitting time of $W_t$ to $B(x_j, s)$. We show that the log-LTHT converges to distance.

Lemma 4.2.
For any $\alpha < 0$, if $\hat\beta = s^\alpha$, then as $s \to 0$ we have $-\log(E[\exp(-\hat\beta T^{x_i}_{B(x_j,s)})])/\sqrt{2\hat\beta} \to |x_i - x_j|$.

Proof. We consider the hitting time of a Brownian motion started at distance $|x_i - x_j|$ from the origin to distance $s$ of the origin, which is controlled by a Bessel process. See Subsection S6.1 for details.

(3) Convergence of the LTHT for $\beta_n = \Theta(\hat\beta g_n^2)$: To compare continuous and discrete log-LTHTs, we will first define the $s$-neighborhood of a vertex $x_i$ on $G_n$ as the graph equivalent of the ball $B(x_i, s)$.

Definition 4.3 ($s$-neighborhood). Let $\hat\varepsilon(x)$ be the consistent estimate of the local scale from [6], so that $\hat\varepsilon(x) \to \varepsilon(x)$ uniformly a.s. as $n \to \infty$. The $\hat\varepsilon$-weight of a path $x_{i_1} \to \cdots \to x_{i_l}$ is the sum $\sum_{m=1}^{l-1} \hat\varepsilon(x_{i_m})$ of vertex weights $\hat\varepsilon(x_{i_m})$. For $s > 0$ and $x \in G_n$, the $s$-neighborhood of $x$ is $NB^s_n(x) := \{y \mid \text{there is a path } x \to y \text{ of } \hat\varepsilon\text{-weight} \le g_n^{-1} s\}$.

For $x_i, x_j \in G_n$, let $\hat T^{x_i}_{NB^s_n(x_j),n}$ be the hitting time of the transformed walk on $G_n$ from $x_i$ to $NB^s_n(x_j)$. We now verify that hitting times to the $s$-neighborhood on graphs and to the $s$-radius ball coincide.

Corollary 4.4. For $s > 0$, we have $g_n^2\, \hat T^{x_i}_{NB^s_n(x_j),n} \xrightarrow{d} T^{x_i}_{B(x_j,s)}$.

Proof. We verify that the ball and the neighborhood have nearly identical sets of points and apply Theorem 2.2. See Subsection S6.2 for details.

(4) Proving consistency of the log-LTHT: Properly accounting for boundary effects, we obtain a consistency result for the log-LTHT for small-neighborhood hitting times.

Theorem 4.5. Let $x_i, x_j \in G_n$ be connected by a geodesic not intersecting $\partial D$. For any $\delta > 0$, there exists a choice of $\hat\beta$ and $s > 0$ so that if $\beta_n = \hat\beta g_n^2$, for large $n$ we have with high probability
$$\left| -\log(E[\exp(-\beta_n \hat T^{x_i}_{NB^s_n(x_j),n})])/\sqrt{2\hat\beta} - |x_i - x_j| \right| < \delta.$$

Proof of Theorem 4.5. The proof has three steps. First, we convert to the continuous setting via Corollary 4.4. Second, we show that the contribution of the boundary is negligible. The conclusion follows from the explicit computation of Lemma S6.1. Full details are in Section S6.
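The reweighted walk of Theorem 4.1 (step (1) above) is straightforward to construct once density and scale estimates are available. A NumPy sketch with synthetic $\hat p$ and $\hat\varepsilon$ standing in for the estimators of [6] (note that $\hat\varepsilon \ge 1$ is needed for the staying probability to be valid; in practice one would rescale):

```python
import numpy as np

def reweighted_walk(A, p_hat, eps_hat):
    """Transition matrix of the debiased walk of Theorem 4.1.

    Jumps are reweighted by 1/p_hat to cancel the density drift, and the
    lazy staying probability 1 - eps_hat^{-2} equalizes the local scale."""
    n = A.shape[0]
    W = A / p_hat[None, :]                 # A_ij * p_hat(x_j)^{-1}
    W = W / W.sum(axis=1, keepdims=True)   # normalize the jump distribution
    move = eps_hat ** -2                   # probability of moving at all
    P = W * move[:, None]
    P[np.diag_indices(n)] += 1.0 - move
    return P

rng = np.random.default_rng(1)
n = 40
A = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(A, 0)
A[A.sum(axis=1) == 0, 0] = 1.0             # avoid sink vertices
p_hat = rng.uniform(0.5, 2.0, size=n)      # hypothetical density estimates
eps_hat = rng.uniform(1.0, 2.0, size=n)    # hypothetical local scales >= 1
P = reweighted_walk(A, p_hat, eps_hat)
assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()
```

Dividing out $\hat p$ removes the drift toward dense regions (effect (A)), while the laziness term removes the speed variation (effect (B)), matching the cancellation described in the proof.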
The stochastic-process-limit-based proof of Theorem 4.5 implies that the log-LTHT is consistent and robust to small perturbations of the graph which preserve the same limit (Supp. Section S8).

⁴This is a special case of a more general theorem for transforming limits of graph random walks (Theorem S4.1). Figure S1 shows that this modification is highly effective in practice.

4.2 Bias

Random walk based metrics are often motivated as recovering a cluster-preserving metric. We now show that the log-LTHT of the un-weighted simple random walk preserves the underlying cluster structure. In the 1-D case, we provide a complete characterization.

Theorem 4.6. Suppose the spatial graph has $d = 1$ and $h(x) = \mathbf{1}_{x \in [0,1]}$. Let $T^{x_i}_{NB^{\hat\varepsilon(x_j) g_n}_n(x_j),n}$ be the hitting time of a simple random walk from $x_i$ to the out-neighborhood of $x_j$. Then
$$-\log(E[\exp(-\beta T^{x_i}_{NB^{\hat\varepsilon(x_j) g_n}_n(x_j),n})])/\sqrt{8\beta} \to \int_{x_i}^{x_j} \sqrt{m(x)}\, dx + o\!\left(\log(1 + e^{-\sqrt{2\beta}})/\sqrt{2\beta}\right),$$
where
$$m(x) = \frac{2}{\varepsilon(x)^2} + \frac{1}{\beta}\,\frac{\partial^2 \log(p(x))}{\partial x^2} + \frac{1}{\beta}\left(\frac{\partial \log(p(x))}{\partial x}\right)^2$$
defines a density-sensitive metric.

Proof. Apply the WKBJ approximation for Schrödinger equations to the Feynman-Kac PDE from Theorem 3.2. See Corollary S7.2 and Corollary S2.13 for a full proof.

The leading-order terms of the density-sensitive metric appropriately penalize crossing regions of large changes to the log density; this is not the case for the expected hitting time (Theorem S2.12).

4.3 Robustness

While shortest path distance is a consistent measure of the underlying metric, it breaks down catastrophically with the addition of a single non-geometric edge and does not meaningfully rank vertices that share an edge. In contrast, we show that the LTHT breaks ties between vertices via the resource allocation (RA) index, a robust local similarity metric, under Erdős-Rényi-type noise.⁵

Definition 4.7. The noisy spatial graph $G_n$ over $X_n$ with noise terms $q_1(n), \ldots, q_n(n)$ is constructed by drawing an edge from $x_i$ to $x_j$ with probability $p_{ij} = h(|x_i - x_j|\,\varepsilon_n(x_i)^{-1})(1 - q_j(n)) + q_j(n)$.
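For intuition behind the Feynman-Kac computations used in Lemma 4.2 and Theorem 4.6, the one-dimensional Brownian case can be worked out in closed form (a sketch under simplifying assumptions, not the paper's full argument, which handles general Itô processes and boundary effects):

```latex
% Feynman-Kac (Theorem 3.2) for standard Brownian motion in d = 1
% (\mu = 0, \sigma = 1), hitting the origin from x > 0:
\tfrac{1}{2} u''(x) = \beta\, u(x), \qquad u(0) = 1, \quad u \text{ bounded}.
% The bounded solution is
u(x) = E\!\left[e^{-\beta T^{x}_{0}}\right] = e^{-\sqrt{2\beta}\, x},
% so the scaled log-LTHT recovers the latent distance exactly:
-\log\!\left(E\!\left[e^{-\beta T^{x}_{0}}\right]\right) \big/ \sqrt{2\beta} = x.
```

This is the mechanism behind consistency: for the debiased walk the discrete LTHT converges to this Brownian quantity, whose logarithm is linear in distance.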
Define the directed RA index in terms of the out-neighborhood set $NB_n(x_i)$ and the in-neighborhood set $NB^{in}_n(x_j)$ as
$$R_{ij} := \sum_{x_k \in NB_n(x_i) \cap NB^{in}_n(x_j)} |NB_n(x_k)|^{-1}$$
and the two-step log-LTHT by $M^{ts}_{ij} := -\log(E[\exp(-\beta T^{x_i}_{x_j,n}) \mid T^{x_i}_{x_j,n} > 1])$.⁶ We show that the two-step log-LTHT and the RA index give equivalent methods for testing whether vertices are within distance $\varepsilon_n(x)$.

Theorem 4.8. If $\beta = \omega(\log(g_n^d n))$ and $x_i$ and $x_j$ have at least one common neighbor, then $M^{ts}_{ij} - 2\beta \to -\log(R_{ij}) + \log(|NB_n(x_i)|)$.

Proof. Let $P_{ij}(t)$ be the probability of going from $x_i$ to $x_j$ in $t$ steps, and $H_{ij}(t)$ the probability of not hitting before time $t$. Factoring the two-step hitting time yields
$$M^{ts}_{ij} = 2\beta - \log(P_{ij}(2)) - \log\left(1 + \sum_{t=3}^{\infty} \frac{P_{ij}(t)}{P_{ij}(2)}\, H_{ij}(t)\, e^{-\beta(t-2)}\right).$$
Let $k_{\max}$ be the maximal out-degree in $G_n$. The contribution of paths of length greater than 2 vanishes because $H_{ij}(t) \le 1$ and $P_{ij}(t)/P_{ij}(2) \le k_{\max}^2$, which is dominated by $e^{-\beta}$ for $\beta = \omega(\log(g_n^d n))$. Noting that $P_{ij}(2) = \frac{R_{ij}}{|NB_n(x_i)|}$ concludes. For full details see Theorem S9.1.

For edge identification within distance $\varepsilon_n(x)$, the RA index is robust even at noise level $q = o(g_n^{d/2})$.

⁵Modifying the graph by changing fewer than $g_n^2 n$ edges does not affect the continuum limit of the random graph, and therefore preserves the LTHT with parameter $\beta = \Theta(g_n^2)$. While this weak bound allows on average $o(1)$ noise edges per vertex, it does show that the LTHT is substantially more robust than shortest paths without modification. See Section S8 for proofs.
⁶The conditioning $T^{x_i}_{x_j,n} > 1$ is natural in link-prediction tasks where only pairs of disconnected vertices are queried. Empirically, we observe it is critical to performance (Figure 3).

Figure 2: The LTHT recovered deleted edges most consistently on a citation network.
Figure 3: The two-step LTHT (defined above Theorem 4.8) outperforms the others at word similarity estimation, including the basic log-LTHT.

Theorem 4.9.
If $q_i = q = o(g_n^{d/2})$ for all $i$, then for any $\delta > 0$ there are $c_1, c_2$, and $h_n$ so that for any $i, j$, with probability at least $1 - \delta$ we have
• $|x_i - x_j| < \min\{\varepsilon_n(x_i), \varepsilon_n(x_j)\}$ if $R_{ij} h_n < c_1$;
• $|x_i - x_j| > 2 \max\{\varepsilon_n(x_i), \varepsilon_n(x_j)\}$ if $R_{ij} h_n > c_2$.

Proof. The minimal RA index follows from standard concentration arguments (see S9.2).

5 Link prediction tasks

We compare the LTHT against other baseline measures of vertex similarity: shortest path distance, expected hitting time, number of common neighbors, and the RA index. A comprehensive evaluation of these quasi-walk metrics was performed in [8], who showed that a metric equivalent to the LTHT performed best. We consider two separate link prediction tasks on the largest connected component of vertices of degree at least five, fixing $\beta = 0.2$.⁷ The degree constraint ensures that local methods using the number of common neighbors, such as the resource allocation index, do not have an excessive number of ties. Code to generate the figures in this paper is contained in the supplement.

Citation network: The KDD 2003 challenge dataset [5] includes a directed, unweighted network of e-print arXiv citations whose dense connected component has 11,042 vertices and 222,027 edges. We use the same benchmark method as [9], where we delete a single edge and compare the similarity of the deleted edge against a set of control pairs of vertices $i, j$ which do not share an edge. We count the fraction of pairs on which each method ranks the deleted edge higher than the control pairs. We find that the LTHT is consistently best at this task (Figure 2).⁸

Associative Thesaurus network: The Edinburgh associative thesaurus [7] is a network with a dense connected component of 7754 vertices and 246,609 edges, in which subjects were shown a set of ten words and for each word were asked to respond with the first word that occurred to them.
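The directed RA index used as a baseline above (and characterized by Theorem 4.8) is inexpensive to compute from the adjacency matrix. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def directed_ra_index(A, i, j):
    """Directed resource-allocation index R_ij: the sum of 1/outdeg(x_k)
    over vertices x_k with edges x_i -> x_k and x_k -> x_j."""
    out_deg = A.sum(axis=1)
    common = np.flatnonzero(A[i] * A[:, j])   # NB(x_i) intersect NB_in(x_j)
    return float(np.sum(1.0 / out_deg[common]))

# tiny example: 0 -> {1, 2}, 1 -> {3}, 2 -> {3, 4}
A = np.zeros((5, 5))
A[0, 1] = A[0, 2] = A[1, 3] = A[2, 3] = A[2, 4] = 1
assert directed_ra_index(A, 0, 3) == 1.0 / 1 + 1.0 / 2   # via x_1 and x_2
```

By Theorem 4.8, ranking pairs by $R_{ij}/|NB_n(x_i)|$ and by the two-step log-LTHT is asymptotically equivalent for large $\beta$.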
Each vertex represents a word, and each edge is a weighted, directed edge where the weight from $x_i$ to $x_j$ is the number of subjects who responded with word $x_j$ given word $x_i$. We measure performance by whether strong associations with more than ten responses can be distinguished from weak ones with only one response. We find that the LTHT performs best and that preventing one-step jumps is critical to performance, as predicted by Theorem 4.8 (Figure 3).

6 Conclusion

Our work has developed an asymptotic equivalence between hitting times for random walks on graphs and those for diffusion processes. Using this, we have provided a short extension of the proof of the divergence of expected hitting times, and derived a new consistent graph metric that is theoretically principled, computationally tractable, and empirically successful at well-established link prediction benchmarks. These results open the way for the development of other principled quasi-walk metrics that can provably recover underlying latent similarities for spatial graphs.

⁷Results are qualitatively identical when varying β from 0.1 to 1; see the supplement for details.
⁸The two-step LTHT is not shown since it is equivalent to the LTHT in missing link prediction.

References

[1] M. Alamgir and U. von Luxburg. Phase transition in the family of p-resistances. In Advances in Neural Information Processing Systems, pages 379–387, 2011.
[2] M. Alamgir and U. von Luxburg. Shortest path distance in random k-nearest neighbor graphs. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1031–1038, 2012.
[3] P. Chebotarev. A class of graph-geodetic distances generalizing the shortest-path and the resistance distances. Discrete Applied Mathematics, 159(5):295–302, 2011.
[4] D. A. Croydon and B. M. Hambly. Local limit theorems for sequences of simple random walks on graphs. Potential Analysis, 29(4):351–389, 2008.
[5] J. Gehrke, P. Ginsparg, and J. Kleinberg. Overview of the 2003 KDD Cup.
ACM SIGKDD Explorations Newsletter, 5(2):149–151, 2003.
[6] T. B. Hashimoto, Y. Sun, and T. S. Jaakkola. Metric recovery from directed unweighted graphs. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 342–350, 2015.
[7] G. R. Kiss, C. Armstrong, R. Milroy, and J. Piper. An associative thesaurus of English and its computer analysis. The Computer and Literary Studies, pages 153–165, 1973.
[8] I. Kivimäki, M. Shimbo, and M. Saerens. Developments in the theory of randomized shortest paths with a comparison of graph node distances. Physica A: Statistical Mechanics and its Applications, 393:600–616, 2014.
[9] L. Lü and T. Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150–1170, 2011.
[10] B. Øksendal. Stochastic differential equations: An introduction with applications. Universitext. Springer-Verlag, Berlin, sixth edition, 2003.
[11] P. Sarkar, D. Chakrabarti, and A. W. Moore. Theoretical justification of popular link prediction heuristics. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 2722, 2011.
[12] P. Sarkar and A. W. Moore. A tractable approach to finding closest truncated-commute-time neighbors in large graphs. In Proc. UAI, 2007.
[13] B. Shaw and T. Jebara. Structure preserving embedding. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 937–944. ACM, 2009.
[14] S. T. Smith, E. K. Kao, K. D. Senne, G. Bernstein, and S. Philips. Bayesian discovery of threat networks. IEEE Transactions on Signal Processing, 62:5324–5338, 2014.
[15] D. W. Stroock and S. S. Varadhan. Multidimensional diffusion processes, volume 233. Springer Science & Business Media, 1979.
[16] A. Tahbaz-Salehi and A. Jadbabaie. A one-parameter family of distributed consensus algorithms with boundary: From shortest paths to mean hitting times.
In Decision and Control, 2006 45th IEEE Conference on, pages 4664–4669. IEEE, 2006.
[17] D. Ting, L. Huang, and M. I. Jordan. An analysis of the convergence of graph Laplacians. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1079–1086, 2010.
[18] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. The Annals of Statistics, pages 555–586, 2008.
[19] U. von Luxburg, A. Radl, and M. Hein. Hitting and commute times in large random neighborhood graphs. Journal of Machine Learning Research, 15:1751–1798, 2014.
[20] M. Yazdani. Similarity Learning Over Large Collaborative Networks. PhD thesis, École Polytechnique Fédérale de Lausanne, 2013.
[21] L. Yen, M. Saerens, A. Mantrach, and M. Shimbo. A family of dissimilarity measures between nodes generalizing both the shortest-path and the commute-time distances. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–793. ACM, 2008.
Tensorizing Neural Networks

Alexander Novikov1,4 Dmitry Podoprikhin1 Anton Osokin2 Dmitry Vetrov1,3
1Skolkovo Institute of Science and Technology, Moscow, Russia
2INRIA, SIERRA project-team, Paris, France
3National Research University Higher School of Economics, Moscow, Russia
4Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russia
novikov@bayesgroup.ru podoprikhin.dmitry@gmail.com anton.osokin@inria.fr vetrovd@yandex.ru

Abstract

Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train [17] format such that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks [21] we report a compression factor of the dense weight matrix of a fully-connected layer of up to 200,000 times, leading to a compression factor of the whole network of up to 7 times.

1 Introduction

Deep neural networks currently demonstrate state-of-the-art performance in many domains of large-scale machine learning, such as computer vision, speech recognition, text processing, etc. These advances have become possible because of algorithmic advances, large amounts of available data, and modern hardware. For example, convolutional neural networks (CNNs) [13, 21] show superior performance by a large margin on the task of image classification. These models have thousands of nodes and millions of learnable parameters and are trained using millions of images [19] on powerful Graphics Processing Units (GPUs).
The necessity of expensive hardware and long processing time are factors that complicate the application of such models on conventional desktops and portable devices. Consequently, a large number of works have tried to reduce both hardware requirements (e. g. memory demands) and running times (see Sec. 2). In this paper we consider probably the most frequently used layer of neural networks: the fully-connected layer. This layer consists of a linear transformation of a high-dimensional input signal to a high-dimensional output signal with a large dense matrix defining the transformation. For example, in modern CNNs the dimensions of the input and output signals of the fully-connected layers are of the order of thousands, bringing the number of parameters of the fully-connected layers up to millions. We use a compact multilinear format – Tensor-Train (TT-format) [17] – to represent the dense weight matrix of the fully-connected layers using few parameters while keeping enough flexibility to perform signal transformations. The resulting layer is compatible with the existing training algorithms for neural networks because all the derivatives required by the back-propagation algorithm [18] can be computed using the properties of the TT-format. We call the resulting layer a TT-layer and refer to a network with one or more TT-layers as TensorNet. We apply our method to popular network architectures proposed for several datasets of different scales: MNIST [15], CIFAR-10 [12], ImageNet [13]. We experimentally show that the networks with the TT-layers match the performance of their uncompressed counterparts but require up to 200 000 times fewer parameters, decreasing the size of the whole network by a factor of 7. The rest of the paper is organized as follows. We start with a review of the related work in Sec. 2. We introduce necessary notation and review the Tensor Train (TT) format in Sec. 3. In Sec.
4 we apply the TT-format to the weight matrix of a fully-connected layer and in Sec. 5 derive all the equations necessary for applying the back-propagation algorithm. In Sec. 6 we present the experimental evaluation of our ideas, followed by a discussion in Sec. 7.

2 Related work

With a sufficient amount of training data, big models usually outperform smaller ones. However, state-of-the-art neural networks have reached hardware limits both in terms of computational power and memory. In particular, modern networks reached the memory limit with 89% [21] or even 100% [25] of memory occupied by the weights of the fully-connected layers, so it is not surprising that numerous attempts have been made to make the fully-connected layers more compact. One of the most straightforward approaches is to use a low-rank representation of the weight matrices. Recent studies show that the weight matrix of the fully-connected layer is highly redundant and that by restricting its matrix rank it is possible to greatly reduce the number of parameters without a significant drop in the predictive accuracy [6, 20, 25]. An alternative approach to the problem of model compression is to tie random subsets of weights using special hashing techniques [4]. The authors reported a compression factor of 8 for a two-layered network on the MNIST dataset without loss of accuracy. Memory consumption can also be reduced by using lower numerical precision [1] or allowing fewer possible carefully chosen parameter values [9]. In our paper we generalize the low-rank ideas. Instead of searching for a low-rank approximation of the weight matrix we treat it as a multi-dimensional tensor and apply the Tensor Train decomposition algorithm [17]. This framework has already been successfully applied to several data-processing tasks, e. g. [16, 27]. Another possible advantage of our approach is the ability to use more hidden units than was available before.
A recent work [2] shows that it is possible to construct wide and shallow (i. e. not deep) neural networks with performance close to the state-of-the-art deep CNNs by training a shallow network on the outputs of a trained deep network. They report improved performance as the layer size increases and used up to 30 000 hidden units while restricting the matrix rank of the weight matrix in order to be able to keep and to update it during the training. Restricting the TT-ranks of the weight matrix (in contrast to the matrix rank) allows the use of much wider layers, potentially leading to the greater expressive power of the model. We demonstrate this effect by training a very wide model (262 144 hidden units) on the CIFAR-10 dataset that outperforms other non-convolutional networks. Matrix and tensor decompositions were recently used to speed up the inference time of CNNs [7, 14]. While we focus on fully-connected layers, Lebedev et al. [14] used the CP-decomposition to compress a 4-dimensional convolution kernel and then used the properties of the decomposition to speed up the inference time. This work shares the same spirit with our method and the approaches can be readily combined. Gilboa et al. exploit the properties of the Kronecker product of matrices to perform fast matrix-by-vector multiplication [8]. These matrices have the same structure as TT-matrices with unit TT-ranks. Compared to the Tucker format [23] and the canonical format [3], the TT-format is immune to the curse of dimensionality and its algorithms are robust. Compared to the Hierarchical Tucker format [11], TT is quite similar but has simpler algorithms for basic operations.

3 TT-format

Throughout this paper we work with arrays of different dimensionality. We refer to the one-dimensional arrays as vectors, the two-dimensional arrays as matrices, and the arrays of higher dimensions as tensors. Bold lower case letters (e. g. a) denote vectors, ordinary lower case letters (e. g.
$a(i) = a_i$) – vector elements, bold upper case letters (e. g. $A$) – matrices, ordinary upper case letters (e. g. $A(i, j)$) – matrix elements, calligraphic bold upper case letters (e. g. $\mathcal{A}$) – tensors, and ordinary calligraphic upper case letters (e. g. $\mathcal{A}(i) = \mathcal{A}(i_1, \dots, i_d)$) – tensor elements, where $d$ is the dimensionality of the tensor $\mathcal{A}$. We will call arrays explicit to highlight cases when they are stored explicitly, i. e. by enumeration of all the elements. A $d$-dimensional array (tensor) $\mathcal{A}$ is said to be represented in the TT-format [17] if for each dimension $k = 1, \dots, d$ and for each possible value of the $k$-th dimension index $j_k = 1, \dots, n_k$ there exists a matrix $G_k[j_k]$ such that all the elements of $\mathcal{A}$ can be computed as the following matrix product:

$$\mathcal{A}(j_1, \dots, j_d) = G_1[j_1] G_2[j_2] \cdots G_d[j_d]. \quad (1)$$

All the matrices $G_k[j_k]$ related to the same dimension $k$ are restricted to be of the same size $r_{k-1} \times r_k$. The values $r_0$ and $r_d$ equal 1 in order to keep the matrix product (1) of size $1 \times 1$. In what follows we refer to the representation of a tensor in the TT-format as the TT-representation or the TT-decomposition. The sequence $\{r_k\}_{k=0}^{d}$ is referred to as the TT-ranks of the TT-representation of $\mathcal{A}$ (or the ranks for short), and its maximum as the maximal TT-rank of the TT-representation of $\mathcal{A}$: $r = \max_{k=0,\dots,d} r_k$. The collections of the matrices $(G_k[j_k])_{j_k=1}^{n_k}$ corresponding to the same dimension (technically, 3-dimensional arrays $\mathcal{G}_k$) are called the cores. Oseledets [17, Th. 2.1] shows that for an arbitrary tensor $\mathcal{A}$ a TT-representation exists but is not unique. The ranks among different TT-representations can vary, and it is natural to seek a representation with the lowest ranks. We use the symbols $G_k[j_k](\alpha_{k-1}, \alpha_k)$ to denote the element of the matrix $G_k[j_k]$ in the position $(\alpha_{k-1}, \alpha_k)$, where $\alpha_{k-1} = 1, \dots, r_{k-1}$, $\alpha_k = 1, \dots, r_k$. Equation (1) can be equivalently rewritten as the sum of the products of the elements of the cores:

$$\mathcal{A}(j_1, \dots, j_d) = \sum_{\alpha_0, \dots, \alpha_d} G_1[j_1](\alpha_0, \alpha_1) \cdots G_d[j_d](\alpha_{d-1}, \alpha_d). \quad (2)$$

The representation of a tensor $\mathcal{A}$ via the explicit enumeration of all its elements requires storing $\prod_{k=1}^{d} n_k$ numbers, compared with $\sum_{k=1}^{d} n_k r_{k-1} r_k$ numbers if the tensor is stored in the TT-format. Thus, the TT-format is very efficient in terms of memory if the ranks are small. An attractive property of the TT-decomposition is the ability to efficiently perform several types of operations on tensors if they are in the TT-format: basic linear algebra operations, such as the addition of a constant and the multiplication by a constant, the summation and the entrywise product of tensors (the results of these operations are tensors in the TT-format, generally with increased ranks); and computation of global characteristics of a tensor, such as the sum of all elements and the Frobenius norm. See [17] for a detailed description of all the supported operations.

3.1 TT-representations for vectors and matrices

The direct application of the TT-decomposition to a matrix (2-dimensional tensor) coincides with the low-rank matrix format, and the direct TT-decomposition of a vector is equivalent to explicitly storing its elements. To be able to efficiently work with large vectors and matrices, the TT-format for them is defined in a special manner. Consider a vector $b \in \mathbb{R}^N$, where $N = \prod_{k=1}^{d} n_k$. We can establish a bijection $\mu$ between the coordinate $\ell \in \{1, \dots, N\}$ of $b$ and a $d$-dimensional vector-index $\mu(\ell) = (\mu_1(\ell), \dots, \mu_d(\ell))$ of the corresponding tensor $\mathcal{B}$, where $\mu_k(\ell) \in \{1, \dots, n_k\}$. The tensor $\mathcal{B}$ is then defined by the corresponding vector elements: $\mathcal{B}(\mu(\ell)) = b_\ell$. Building a TT-representation of $\mathcal{B}$ allows us to establish a compact format for the vector $b$. We refer to it as a TT-vector. Now we define a TT-representation of a matrix $W \in \mathbb{R}^{M \times N}$, where $M = \prod_{k=1}^{d} m_k$ and $N = \prod_{k=1}^{d} n_k$. Let bijections $\nu(t) = (\nu_1(t), \dots, \nu_d(t))$ and $\mu(\ell) = (\mu_1(\ell), \dots, \mu_d(\ell))$ map row and column indices $t$ and $\ell$ of the matrix $W$ to $d$-dimensional vector-indices whose $k$-th dimensions are of length $m_k$ and $n_k$ respectively, $k = 1, \dots, d$. From the matrix $W$ we can form a $d$-dimensional tensor $\mathcal{W}$ whose $k$-th dimension is of length $m_k n_k$ and is indexed by the tuple $(\nu_k(t), \mu_k(\ell))$. The tensor $\mathcal{W}$ can then be converted into the TT-format:

$$W(t, \ell) = \mathcal{W}((\nu_1(t), \mu_1(\ell)), \dots, (\nu_d(t), \mu_d(\ell))) = G_1[\nu_1(t), \mu_1(\ell)] \cdots G_d[\nu_d(t), \mu_d(\ell)], \quad (3)$$

where the matrices $G_k[\nu_k(t), \mu_k(\ell)]$, $k = 1, \dots, d$, serve as the cores with the tuple $(\nu_k(t), \mu_k(\ell))$ being an index. Note that a matrix in the TT-format is not restricted to be square. Although the index-vectors $\nu(t)$ and $\mu(\ell)$ are of the same length $d$, the sizes of the domains of the dimensions can vary. We call a matrix in the TT-format a TT-matrix. All operations available for the TT-tensors are applicable to the TT-vectors and the TT-matrices as well (for example, one can efficiently sum two TT-matrices and get the result in the TT-format). Additionally, the TT-format allows us to efficiently perform the matrix-by-vector (matrix-by-matrix) product. If only one of the operands is in the TT-format, the result is an explicit vector (matrix); if both operands are in the TT-format, the operation is even more efficient and the result is given in the TT-format as well (generally with increased ranks). For the case of the TT-matrix-by-explicit-vector product $c = Wb$, the computational complexity is $O(d r^2 m \max\{M, N\})$, where $d$ is the number of cores of the TT-matrix $W$, $m = \max_{k=1,\dots,d} m_k$, $r$ is the maximal rank and $N = \prod_{k=1}^{d} n_k$ is the length of the vector $b$. The ranks and, correspondingly, the efficiency of the TT-format for a vector (matrix) depend on the choice of the mapping $\mu(\ell)$ (mappings $\nu(t)$ and $\mu(\ell)$) between vector (matrix) elements and the underlying tensor elements. In what follows we use a column-major MATLAB reshape command (see http://www.mathworks.com/help/matlab/ref/reshape.html) to form a $d$-dimensional tensor from the data (e. g.
from a multichannel image), but one can choose a different mapping.

4 TT-layer

In this section we introduce the TT-layer of a neural network. In short, the TT-layer is a fully-connected layer with the weight matrix stored in the TT-format. We will refer to a neural network with one or more TT-layers as TensorNet. Fully-connected layers apply a linear transformation to an $N$-dimensional input vector $x$:

$$y = W x + b, \quad (4)$$

where the weight matrix $W \in \mathbb{R}^{M \times N}$ and the bias vector $b \in \mathbb{R}^M$ define the transformation. A TT-layer consists in storing the weights $W$ of the fully-connected layer in the TT-format, allowing the use of hundreds of thousands (or even millions) of hidden units while having a moderate number of parameters. To control the number of parameters one can vary the number of hidden units as well as the TT-ranks of the weight matrix. A TT-layer transforms a $d$-dimensional tensor $\mathcal{X}$ (formed from the corresponding vector $x$) to the $d$-dimensional tensor $\mathcal{Y}$ (which corresponds to the output vector $y$). We assume that the weight matrix $W$ is represented in the TT-format with the cores $G_k[i_k, j_k]$. The linear transformation (4) of a fully-connected layer can be expressed in the tensor form:

$$\mathcal{Y}(i_1, \dots, i_d) = \sum_{j_1, \dots, j_d} G_1[i_1, j_1] \cdots G_d[i_d, j_d] \, \mathcal{X}(j_1, \dots, j_d) + \mathcal{B}(i_1, \dots, i_d). \quad (5)$$

Direct application of the TT-matrix-by-vector operation to Eq. (5) yields the computational complexity of the forward pass $O(d r^2 m \max\{m, n\}^d) = O(d r^2 m \max\{M, N\})$.

5 Learning

Neural networks are usually trained with the stochastic gradient descent algorithm, where the gradient is computed using the back-propagation procedure [18]. Back-propagation allows computing the gradient of a loss function $L$ with respect to all the parameters of the network. The method starts with the computation of the gradient of $L$ w.r.t. the output of the last layer and proceeds sequentially through the layers in the reversed order while computing the gradient w.r.t. the parameters and the input of the layer, making use of the gradients computed earlier. Applied to the fully-connected layers (4), the back-propagation method computes the gradients w.r.t. the input $x$ and the parameters $W$ and $b$ given the gradients $\frac{\partial L}{\partial y}$ w.r.t. the output $y$:

$$\frac{\partial L}{\partial x} = W^\top \frac{\partial L}{\partial y}, \qquad \frac{\partial L}{\partial W} = \frac{\partial L}{\partial y} x^\top, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial y}. \quad (6)$$

In what follows we derive the gradients required to use the back-propagation algorithm with the TT-layer. To compute the gradient of the loss function w.r.t. the bias vector $b$ and w.r.t. the input vector $x$ one can use equations (6). The latter can be applied using the matrix-by-vector product (where the matrix is in the TT-format) with the complexity of $O(d r^2 n \max\{m, n\}^d) = O(d r^2 n \max\{M, N\})$.

Operation         | Time                         | Memory
FC forward pass   | O(MN)                        | O(MN)
TT forward pass   | O(d r^2 m max{M, N})         | O(r max{M, N})
FC backward pass  | O(MN)                        | O(MN)
TT backward pass  | O(d^2 r^4 m max{M, N})       | O(r^3 max{M, N})

Table 1: Comparison of the asymptotic complexity and memory usage of an M × N TT-layer and an M × N fully-connected layer (FC). The input and output tensor shapes are m1 × . . . × md and n1 × . . . × nd respectively (m = max_{k=1...d} m_k) and r is the maximal TT-rank.

To perform a step of stochastic gradient descent one can use equation (6) to compute the gradient of the loss function w.r.t. the weight matrix $W$, convert the gradient matrix into the TT-format (with the TT-SVD algorithm [17]) and then add this gradient (multiplied by a step size) to the current estimate of the weight matrix: $W_{k+1} = W_k + \gamma_k \frac{\partial L}{\partial W}$. However, the direct computation of $\frac{\partial L}{\partial W}$ requires $O(MN)$ memory. A better way to learn the TensorNet parameters is to compute the gradient of the loss function directly w.r.t. the cores of the TT-representation of $W$. In what follows we use shortened notation for prefix and postfix sequences of indices: $i_k^- := (i_1, \dots, i_{k-1})$, $i_k^+ := (i_{k+1}, \dots, i_d)$, $i = (i_k^-, i_k, i_k^+)$.
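The element access formula in Eq. (1) and its recovery of the full tensor can be sketched in a few lines of NumPy. This is our own illustration, not the authors' code; in particular the core layout `(n_k, r_{k-1}, r_k)` and the function name `tt_element` are assumptions, not the TT-Toolbox convention.

```python
import numpy as np

# Sketch of Eq. (1): A(j1,...,jd) = G1[j1] G2[j2] ... Gd[jd].
# Core k is stored as an array of shape (n_k, r_{k-1}, r_k).

def tt_element(cores, index):
    """Return one element of a TT-tensor; index is 0-based."""
    prod = np.ones((1, 1))                 # r_0 = 1
    for core, j in zip(cores, index):
        prod = prod @ core[j]              # (1, r_{k-1}) @ (r_{k-1}, r_k)
    return prod[0, 0]                      # r_d = 1, so the product is 1x1

# A 4 x 5 x 6 tensor with TT-ranks (1, 2, 3, 1): storing it explicitly
# takes 120 numbers, the cores take 4*1*2 + 5*2*3 + 6*3*1 = 56 numbers.
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(4, 1, 2), (5, 2, 3), (6, 3, 1)]]
# Recover the full tensor to check one element against the TT evaluation.
full = np.einsum('iab,jbc,kcd->ijk', *cores)
```

The chain of small matrix products is exactly why storage drops from $\prod_k n_k$ to $\sum_k n_k r_{k-1} r_k$ numbers when the ranks are small.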
We also introduce notations for partial core products:

$$P_k^-[i_k^-, j_k^-] := G_1[i_1, j_1] \cdots G_{k-1}[i_{k-1}, j_{k-1}], \qquad P_k^+[i_k^+, j_k^+] := G_{k+1}[i_{k+1}, j_{k+1}] \cdots G_d[i_d, j_d]. \quad (7)$$

We now rewrite the definition of the TT-layer transformation (5) for any $k = 2, \dots, d-1$:

$$\mathcal{Y}(i) = \mathcal{Y}(i_k^-, i_k, i_k^+) = \sum_{j_k^-, j_k, j_k^+} P_k^-[i_k^-, j_k^-] \, G_k[i_k, j_k] \, P_k^+[i_k^+, j_k^+] \, \mathcal{X}(j_k^-, j_k, j_k^+) + \mathcal{B}(i). \quad (8)$$

The gradient of the loss function $L$ w.r.t. the $k$-th core in the position $[\tilde{i}_k, \tilde{j}_k]$ can be computed using the chain rule:

$$\underbrace{\frac{\partial L}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}}_{r_{k-1} \times r_k} = \sum_i \frac{\partial L}{\partial \mathcal{Y}(i)} \frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}. \quad (9)$$

Given the gradient matrices $\frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}$, the summation (9) can be done explicitly in $O(M r_{k-1} r_k)$ time, where $M$ is the length of the output vector $y$. We now show how to compute the matrix $\frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}$ for any values of the core index $k \in \{1, \dots, d\}$ and $\tilde{i}_k \in \{1, \dots, m_k\}$, $\tilde{j}_k \in \{1, \dots, n_k\}$. For any $i = (i_1, \dots, i_d)$ such that $i_k \neq \tilde{i}_k$ the value of $\mathcal{Y}(i)$ doesn't depend on the elements of $G_k[\tilde{i}_k, \tilde{j}_k]$, making the corresponding gradient $\frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}$ equal zero. Similarly, any summand in Eq. (8) such that $j_k \neq \tilde{j}_k$ doesn't affect the gradient $\frac{\partial \mathcal{Y}(i)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]}$. These observations allow us to consider only $i_k = \tilde{i}_k$ and $j_k = \tilde{j}_k$. $\mathcal{Y}(i_k^-, \tilde{i}_k, i_k^+)$ is a linear function of the core $G_k[\tilde{i}_k, \tilde{j}_k]$ and its gradient equals the following expression:

$$\frac{\partial \mathcal{Y}(i_k^-, \tilde{i}_k, i_k^+)}{\partial G_k[\tilde{i}_k, \tilde{j}_k]} = \sum_{j_k^-, j_k^+} \underbrace{P_k^-[i_k^-, j_k^-]^\top}_{r_{k-1} \times 1} \underbrace{P_k^+[i_k^+, j_k^+]^\top}_{1 \times r_k} \mathcal{X}(j_k^-, \tilde{j}_k, j_k^+). \quad (10)$$

We denote the partial sum vector as $R_k[j_k^-, \tilde{j}_k, i_k^+] \in \mathbb{R}^{r_k}$:

$$R_k[j_1, \dots, j_{k-1}, \tilde{j}_k, i_{k+1}, \dots, i_d] = R_k[j_k^-, \tilde{j}_k, i_k^+] = \sum_{j_k^+} P_k^+[i_k^+, j_k^+] \, \mathcal{X}(j_k^-, \tilde{j}_k, j_k^+).$$

Vectors $R_k[j_k^-, \tilde{j}_k, i_k^+]$ for all the possible values of $k$, $j_k^-$, $\tilde{j}_k$ and $i_k^+$ can be computed via dynamic programming (by pushing sums w.r.t. each $j_{k+1}, \dots, j_d$ inside the equation and summing out one index at a time) in $O(d r^2 m \max\{M, N\})$.
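The tensor-form forward pass of Eq. (5), on which the derivation above operates, can be written as a single contraction for a small fixed $d$. This is a hedged sketch for $d = 3$ (our own code; the core layout `(m_k, n_k, r_{k-1}, r_k)` and all names are assumptions), checked against the explicitly materialized weight matrix.

```python
import numpy as np

# Sketch of the TT-layer forward pass y = Wx (Eq. (5)) for d = 3.
# Core k is stored with shape (m_k, n_k, r_{k-1}, r_k); r_0 = r_3 = 1.

def tt_matvec(cores, x_tensor):
    G1, G2, G3 = cores
    # Sum over the column indices p, q, r and the TT-rank indices a..d.
    return np.einsum('ipab,jqbc,krcd,pqr->ijk', G1, G2, G3, x_tensor)

rng = np.random.default_rng(0)
m, n, rank = (2, 3, 2), (3, 2, 4), 2
shapes = [(m[0], n[0], 1, rank), (m[1], n[1], rank, rank), (m[2], n[2], rank, 1)]
cores = [rng.standard_normal(s) for s in shapes]
x = rng.standard_normal(n)                 # input already reshaped to a tensor

y = tt_matvec(cores, x)                    # output tensor of shape (2, 3, 2)
# Materialize the dense W (feasible only for tiny sizes) for a sanity check.
W = np.einsum('ipab,jqbc,krcd->ijkpqr', *cores).reshape(np.prod(m), np.prod(n))
```

In practice the contraction is done one core at a time, which is what gives the $O(d r^2 m \max\{M, N\})$ forward cost from Table 1 instead of the $O(MN)$ dense cost.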
Substituting these vectors into (10) and using (again) dynamic programming yields all the necessary matrices for the summation (9). The overall computational complexity of the backward pass is $O(d^2 r^4 m \max\{M, N\})$. The presented algorithm reduces to a sequence of matrix-by-matrix products and permutations of dimensions and thus can be accelerated on a GPU device.

[Figure 1: The experiment on the MNIST dataset. We use a two-layered neural network and substitute the first 1024 × 1024 fully-connected layer with the TT-layer (solid lines) and with the matrix-rank-decomposition-based layer (dashed line). The solid lines of different colors correspond to different ways of reshaping the input and output vectors to tensors (the shapes, e.g. 32 × 32, 4 × 8 × 8 × 4, 4 × 4 × 4 × 4 × 4, 2 × 2 × 8 × 8 × 2 × 2, are reported in the legend). To obtain the points of the plots we vary the maximal TT-rank or the matrix rank. Axes: number of parameters in the weight matrix of the first layer vs. test error (%).]

6 Experiments

6.1 Parameters of the TT-layer

In this experiment we investigate the properties of the TT-layer and compare different strategies for setting its parameters: dimensions of the tensors representing the input/output of the layer and the TT-ranks of the compressed weight matrix. We run the experiment on the MNIST dataset [15] for the task of handwritten-digit recognition. As a baseline we use a neural network with two fully-connected layers (1024 hidden units) and rectified linear unit (ReLU) non-linearity, achieving 1.9% error on the test set. For more reshaping options we resize the original 28 × 28 images to 32 × 32. We train several networks differing in the parameters of the single TT-layer. The networks contain the following layers: the TT-layer with weight matrix of size 1024 × 1024, ReLU, and the fully-connected layer with the weight matrix of size 1024 × 10.
We test different ways of reshaping the input/output tensors and try different ranks of the TT-layer. As a simple compression baseline in place of the TT-layer we use a fully-connected layer such that the rank of the weight matrix is bounded (implemented as follows: two consecutive fully-connected layers with weight matrices of sizes 1024 × r and r × 1024, where r controls the matrix rank and the compression factor). The results of the experiment are shown in Figure 1. We conclude that the TT-ranks provide much better flexibility than the matrix rank when applied at the same compression level. In addition, we observe that the TT-layers with too small a number of values for each tensor dimension and with too few dimensions perform worse than their more balanced counterparts. Comparison with HashedNet [4]. We consider a two-layered neural network with 1024 hidden units and replace both fully-connected layers by the TT-layers. By setting all the TT-ranks in the network to 8 we achieved a test error of 1.6% with 12 602 parameters in total, and by setting all the TT-ranks to 6, a test error of 1.9% with 7 698 parameters. Chen et al. [4] report results on the same architecture. By tying random subsets of weights they compressed the network by a factor of 64 to 12 720 parameters in total with a test error of 2.79%.

6.2 CIFAR-10

The CIFAR-10 dataset [12] consists of 32 × 32 3-channel images assigned to 10 different classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The dataset contains 50000 train and 10000 test images. Following [10] we preprocess the images by subtracting the mean and performing global contrast normalization and ZCA whitening. As a baseline we use the CIFAR-10 Quick [22] CNN, which consists of convolutional, pooling and non-linearity layers followed by two fully-connected layers of sizes 1024 × 64 and 64 × 10.
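The low-rank baseline described above has an easy-to-state parameter budget: a 1024 × 1024 matrix of rank r is stored as two factors of sizes 1024 × r and r × 1024. A quick sketch of the arithmetic (our illustration; the rank value 16 is an arbitrary example, not one from the paper):

```python
# Parameter count of the rank-bounded baseline: W = U V with
# U of size m x r and V of size r x n.

def lowrank_params(m, n, r):
    return m * r + r * n

full_params = 1024 * 1024                  # explicit fully-connected layer
rank16 = lowrank_params(1024, 1024, 16)    # two thin factors instead of W
```

The compression factor is simply `full_params / rank16`; Figure 1 traces this trade-off by sweeping r (and, for the TT-layer, the maximal TT-rank).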
We fix the convolutional part of the network and substitute the fully-connected part by a 1024 × N TT-layer followed by ReLU and by an N × 10 fully-connected layer.

Architecture | TT-layers compr. | vgg-16 compr. | vgg-19 compr. | vgg-16 top 1 | vgg-16 top 5 | vgg-19 top 1 | vgg-19 top 5
FC FC FC     | 1        | 1   | 1   | 30.9 | 11.2 | 29.0 | 10.1
TT4 FC FC    | 50 972   | 3.9 | 3.5 | 31.2 | 11.2 | 29.8 | 10.4
TT2 FC FC    | 194 622  | 3.9 | 3.5 | 31.5 | 11.5 | 30.4 | 10.9
TT1 FC FC    | 713 614  | 3.9 | 3.5 | 33.3 | 12.8 | 31.9 | 11.8
TT4 TT4 FC   | 37 732   | 7.4 | 6   | 32.2 | 12.3 | 31.6 | 11.7
MR1 FC FC    | 3 521    | 3.9 | 3.5 | 99.5 | 97.6 | 99.8 | 99
MR5 FC FC    | 704      | 3.9 | 3.5 | 81.7 | 53.9 | 79.1 | 52.4
MR50 FC FC   | 70       | 3.7 | 3.4 | 36.7 | 14.9 | 34.5 | 15.8

Table 2: Substituting the fully-connected layers with the TT-layers in vgg-16 and vgg-19 networks on the ImageNet dataset. FC stands for a fully-connected layer; TT□ stands for a TT-layer with all the TT-ranks equal "□"; MR□ stands for a fully-connected layer with the matrix rank restricted to "□". We report the compression rate of the TT-layers matrices and of the whole network in the second, third and fourth columns.

With N = 3125 hidden units (contrary to 64 in the original network) we achieve a test error of 23.13% without fine-tuning, which is slightly better than the test error of the baseline (23.25%). The TT-layer treated input and output vectors as 4 × 4 × 4 × 4 × 4 and 5 × 5 × 5 × 5 × 5 tensors respectively. All the TT-ranks equal 8, making the number of parameters in the TT-layer equal to 4 160. The compression rate of the TensorNet compared with the baseline w.r.t. all the parameters is 1.24. In addition, substituting both fully-connected layers by the TT-layers yields a test error of 24.39% and reduces the number of parameters of the fully-connected layer matrices by a factor of 11.9 and the total number of parameters by a factor of 1.7. For comparison, in [6] the fully-connected layers in a CIFAR-10 CNN were compressed by a factor of at most 4.7 with a loss of about 2% in accuracy.
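The parameter count of 4 160 quoted above for the 1024 × 3125 TT-layer can be checked directly from the core sizes: core k holds m_k·n_k·r_{k-1}·r_k numbers. A short sketch of the arithmetic (our own helper; `tt_params` is not a function from the paper's code):

```python
# TT-layer size for input modes 4^5 = 1024, output modes 5^5 = 3125,
# and all intermediate TT-ranks equal to 8 (boundary ranks are 1).

def tt_params(ms, ns, ranks):
    # ranks has length d + 1 with ranks[0] = ranks[-1] = 1.
    return sum(m * n * r0 * r1
               for m, n, r0, r1 in zip(ms, ns, ranks[:-1], ranks[1:]))

count = tt_params([4] * 5, [5] * 5, [1, 8, 8, 8, 8, 1])
# 160 + 3*1280 + 160 = 4 160 parameters for a 1024 x 3125 transformation
```

Against the 1024 · 3125 = 3.2M entries of the dense matrix, this is roughly a 770× reduction for that single layer.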
6.2.1 Wide and shallow network

With a sufficient number of hidden units, even a neural network with two fully-connected layers and sigmoid non-linearity can approximate any decision boundary [5]. Traditionally, very wide shallow networks are not considered because of high computational and memory demands and the overfitting risk. TensorNet can potentially address both issues. We use a three-layered TensorNet of the following architecture: the TT-layer with the weight matrix of size 3 072 × 262 144, ReLU, the TT-layer with the weight matrix of size 262 144 × 4 096, ReLU, and the fully-connected layer with the weight matrix of size 4 096 × 10. We report a test error of 31.47%, which is (to the best of our knowledge) the best result achieved by a non-convolutional neural network.

6.3 ImageNet

In this experiment we evaluate the TT-layers on a large-scale task. We consider the 1000-class ImageNet ILSVRC-2012 dataset [19], which consists of 1.2 million training images and 50 000 validation images. We use the deep CNNs vgg-16 and vgg-19 [21] as the reference models. Both networks consist of two parts: the convolutional part and the fully-connected part. In both networks the fully-connected part consists of 3 fully-connected layers with weight matrices of sizes 25088 × 4096, 4096 × 4096 and 4096 × 1000. In each network we substitute the first fully-connected layer with the TT-layer. To do this we reshape the 25088-dimensional input vectors to tensors of size 2 × 7 × 8 × 8 × 7 × 4 and the 4096-dimensional output vectors to tensors of size 4 × 4 × 4 × 4 × 4 × 4. The remaining fully-connected layers are initialized randomly. The parameters of the convolutional parts are kept fixed as trained by Simonyan and Zisserman [21]. We train the TT-layer and the fully-connected layers on the training set. In Table 2 we vary the ranks of the TT-layer and report the compression factor of the TT-layers (vs.
the original fully-connected layer), the resulting compression factor of the whole network, and the top 1 and top 5 errors on the validation set. In addition, we substitute the second fully-connected layer with the TT-layer. As a baseline compression method we constrain the matrix rank of the weight matrix of the first fully-connected layer using the approach of [2].

[Footnote 2: After we had started to experiment on the vgg-16 network, the vgg-* networks were improved by the authors. Thus, we report results on a slightly outdated version of vgg-16 and the up-to-date version of vgg-19.]

Type                       | 1 im. time (ms) | 100 im. time (ms)
CPU fully-connected layer  | 16.1            | 97.2
CPU TT-layer               | 1.2             | 94.7
GPU fully-connected layer  | 2.7             | 33
GPU TT-layer               | 1.9             | 12.9

Table 3: Inference time for a 25088 × 4096 fully-connected layer and its corresponding TT-layer with all the TT-ranks equal 4. The memory usage for feeding forward one image is 392MB for the fully-connected layer and 0.766MB for the TT-layer.

In Table 2 we observe that the TT-layer in the best case manages to reduce the number of parameters in the matrix W of the largest fully-connected layer by a factor of 194 622 (from 25088 × 4096 parameters to 528) while increasing the top 5 error from 11.2 to 11.5. The compression factor of the whole network remains at the level of 3.9 because the TT-layer stops being the storage bottleneck. By compressing the largest of the remaining layers the compression factor goes up to 7.4. The baseline method, when providing similar compression rates, significantly increases the error. For comparison, consider the results of [26] obtained for the compression of the fully-connected layers of the Krizhevsky-type network [13] with the Fastfood method. That model achieves compression factors of 2-3 without decreasing the network error.

6.4 Implementation details

In all experiments we use our MATLAB extension of the MatConvNet framework [24].
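The headline numbers quoted above can be reproduced by hand. Two assumptions in this sketch are ours: that the TT2 row corresponds to all intermediate TT-ranks equal to 2 with the stated mode shapes, and that the dense memory figure uses single precision (4 bytes per weight).

```python
# Checking the TT2 row of Table 2 and the dense-layer memory in Table 3.

def tt_params(ms, ns, ranks):
    return sum(m * n * r0 * r1
               for m, n, r0, r1 in zip(ms, ns, ranks[:-1], ranks[1:]))

ms = [2, 7, 8, 8, 7, 4]          # modes of the reshaped 25088-dim input
ns = [4, 4, 4, 4, 4, 4]          # modes of the reshaped 4096-dim output
tt2 = tt_params(ms, ns, [1, 2, 2, 2, 2, 2, 1])   # core parameter count
compression = 25088 * 4096 / tt2                 # vs. the dense matrix
memory_mb = 25088 * 4096 * 4 / 2**20             # dense W, float32, in MB
```

The core count comes out to 528 parameters and a compression factor of about 194 622, matching the "from 25088 × 4096 parameters to 528" figure, and the dense weight matrix alone accounts for the 392MB reported in Table 3.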
For the operations related to the TT-format we use the TT-Toolbox, implemented in MATLAB as well. The experiments were performed on a computer with a quad-core Intel Core i5-4460 CPU, 16 GB RAM and a single NVidia GeForce GTX 980 GPU. We report the running times and the memory usage at the forward pass of the TT-layer and the baseline fully-connected layer in Table 3. We train all the networks with stochastic gradient descent with momentum (coefficient 0.9). We initialize all the parameters of the TT- and fully-connected layers with Gaussian noise and put L2-regularization (weight 0.0005) on them.

7 Discussion and future work

Recent studies indicate high redundancy in the current neural network parametrization. To exploit this redundancy we propose to use the TT-decomposition framework on the weight matrix of a fully-connected layer and to use the cores of the decomposition as the parameters of the layer. This allows us to train the fully-connected layers compressed by up to 200 000× compared with the explicit parametrization without a significant increase in error. Our experiments show that it is possible to capture complex dependencies within the data by using much more compact representations. On the other hand, it becomes possible to use much wider layers than was available before, and the preliminary experiments on the CIFAR-10 dataset show that wide and shallow TensorNets achieve promising results (setting a new state of the art for non-convolutional neural networks). Another appealing property of the TT-layer is faster inference time (compared with the corresponding fully-connected layer). All in all, a wide and shallow TensorNet can become a time- and memory-efficient model to use in real-time applications and on mobile devices. The main limiting factor for an M × N fully-connected layer is its number of parameters MN. The limiting factor for an M × N TT-layer is the maximal linear size max{M, N}.
As future work we plan to consider the inputs and outputs of layers in the TT-format, thus completely eliminating the dependency on M and N and allowing billions of hidden units in a TT-layer.

Acknowledgements. We would like to thank Ivan Oseledets for valuable discussions. A. Novikov, D. Podoprikhin, D. Vetrov were supported by RFBR project No. 15-31-20596 (mol-a-ved) and by Microsoft: Moscow State University Joint Research Center (RPD 1053945). A. Osokin was supported by the MSR-INRIA Joint Center. The results of the tensor toolbox application (in Sec. 6) are supported by Russian Science Foundation No. 14-11-00659.

Footnotes: 3 https://github.com/Bihaqo/TensorNet; 4 http://www.vlfeat.org/matconvnet/; 5 https://github.com/oseledets/TT-Toolbox

References

[1] K. Asanović and N. Morgan, "Experimental determination of precision requirements for back-propagation training of artificial neural networks," International Computer Science Institute, Tech. Rep., 1991. [2] J. Ba and R. Caruana, "Do deep nets really need to be deep?" in Advances in Neural Information Processing Systems 27 (NIPS), 2014, pp. 2654–2662. [3] J. D. Carroll and J. J. Chang, "Analysis of individual differences in multidimensional scaling via n-way generalization of Eckart-Young decomposition," Psychometrika, vol. 35, pp. 283–319, 1970. [4] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," in International Conference on Machine Learning (ICML), 2015, pp. 2285–2294. [5] G. Cybenko, "Approximation by superpositions of a sigmoidal function," Mathematics of control, signals and systems, pp. 303–314, 1989. [6] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. de Freitas, "Predicting parameters in deep learning," in Advances in Neural Information Processing Systems 26 (NIPS), 2013, pp. 2148–2156. [7] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R.
Fergus, "Exploiting linear structure within convolutional networks for efficient evaluation," in Advances in Neural Information Processing Systems 27 (NIPS), 2014, pp. 1269–1277. [8] E. Gilboa, Y. Saatçi, and J. P. Cunningham, "Scaling multidimensional inference for structured Gaussian processes," arXiv preprint, no. 1209.4120, 2012. [9] Y. Gong, L. Liu, M. Yang, and L. Bourdev, "Compressing deep convolutional networks using vector quantization," arXiv preprint, no. 1412.6115, 2014. [10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout networks," in International Conference on Machine Learning (ICML), 2013, pp. 1319–1327. [11] W. Hackbusch and S. Kühn, "A new scheme for the tensor representation," J. Fourier Anal. Appl., vol. 15, pp. 706–722, 2009. [12] A. Krizhevsky, "Learning multiple layers of features from tiny images," Master's thesis, Computer Science Department, University of Toronto, 2009. [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (NIPS), 2012, pp. 1097–1105. [14] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, "Speeding-up convolutional neural networks using fine-tuned CP-decomposition," in International Conference on Learning Representations (ICLR), 2014. [15] Y. LeCun, C. Cortes, and C. J. C. Burges, "The MNIST database of handwritten digits," 1998. [16] A. Novikov, A. Rodomanov, A. Osokin, and D. Vetrov, "Putting MRFs on a Tensor Train," in International Conference on Machine Learning (ICML), 2014, pp. 811–819. [17] I. V. Oseledets, "Tensor-Train decomposition," SIAM J. Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011. [18] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986. [19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A.
Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision (IJCV), 2015. [20] T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran, “Low-rank matrix factorization for deep neural network training with high-dimensional output targets,” in International Conference of Acoustics, Speech, and Signal Processing (ICASSP), 2013, pp. 6655–6659. [21] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015. [22] J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems 25 (NIPS), 2012, pp. 2951–2959. [23] L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966. [24] A. Vedaldi and K. Lenc, “Matconvnet – convolutional neural networks for MATLAB,” in Proceeding of the ACM Int. Conf. on Multimedia. [25] J. Xue, J. Li, and Y. Gong, “Restructuring of deep neural network acoustic models with singular value decomposition,” in Interspeech, 2013, pp. 2365–2369. [26] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang, “Deep fried convnets,” arXiv preprint, no. 1412.7149, 2014. [27] Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis, and L. Daniel, “Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, pp. 63–76, 2014. 9
On some provably correct cases of variational inference for topic models

Pranjal Awasthi, Department of Computer Science, Rutgers University, New Brunswick, NJ 08901, pranjal.awasthi@rutgers.edu
Andrej Risteski, Department of Computer Science, Princeton University, Princeton, NJ 08540, risteski@cs.princeton.edu

Abstract

Variational inference is an efficient, popular heuristic used in the context of latent variable models. We provide the first analysis of instances where variational inference algorithms converge to the global optimum, in the setting of topic models. Our initializations are natural, one of them being used in LDA-c, the most popular implementation of variational inference. In addition to providing intuition into why this heuristic might work in practice, the multiplicative, rather than additive, nature of the variational inference updates forces us to use non-standard proof arguments, which we believe might be of general theoretical interest.

1 Introduction

Over the last few years, heuristics for non-convex optimization have emerged as one of the most fascinating phenomena for theoretical study in machine learning. Methods like alternating minimization, EM, variational inference and the like enjoy immense popularity among ML practitioners, and with good reason: they're vastly more efficient than alternative available methods like convex relaxations, and are usually easily modified to suit different applications. Theoretical understanding, however, is sparse, and we know of very few instances where these methods come with formal guarantees. Among more classical results in this direction are the analyses of Lloyd's algorithm for K-means, which is very closely related to the EM algorithm for mixtures of Gaussians [20], [13], [14]. The recent work of [9] also characterizes global convergence properties of the EM algorithm for more general settings.
Another line of recent work has focused on a different heuristic called alternating minimization in the context of dictionary learning. [1], [6] prove that with appropriate initialization, alternating minimization can provably recover the ground truth. [22] have proven similar results in the context of phase retrieval. Another popular heuristic which has so far eluded such attempts is known as variational inference [19]. We provide the first characterization of global convergence of variational-inference-based algorithms for topic models [12]. We show that under natural assumptions on the topic-word matrix and the topic priors, along with natural initialization, variational inference converges to the parameters of the underlying ground truth model. To prove our result we need to overcome a number of technical hurdles which are unique to the nature of variational inference. Firstly, the difficulty in analyzing alternating minimization methods for dictionary learning is alleviated by the fact that one can come up with closed form expressions for the updates of the dictionary matrix. We do not have this luxury. Second, the "norm" in which variational inference naturally operates is KL divergence, which can be difficult to work with. We stress that the focus of this work is not to identify new instances of topic modeling that were previously not known to be efficiently solvable, but rather to provide understanding about the behaviour of variational inference, the de facto method for learning and inference in the context of topic models.

2 Latent variable models and EM

We briefly review EM and variational methods. The setting is latent variable models, where the observations $X_i$ are generated according to a distribution $P(X_i|\theta) = \sum_{Z_i} P(Z_i|\theta) P(X_i|Z_i, \theta)$, where $\theta$ are the parameters of the model, and $Z_i$ is a latent variable. Given the observations $X_i$, a common task is to find the max likelihood value of the parameter $\theta$:
$$\arg\max_\theta \sum_i \log P(X_i|\theta).$$
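To make the marginal-likelihood objective concrete, here is a toy instance (a hypothetical two-coin mixture, not an example from the paper) where the latent variable $Z_i$ is summed out explicitly:

```python
import math

# Hypothetical latent-variable model: a mixture of two biased coins.
# theta = (pi, p0, p1): mixture weight and the two head probabilities.
# Each observation X_i in {0, 1} is generated by first drawing a latent
# Z_i ~ Bernoulli(pi) and then X_i ~ Bernoulli(p_{Z_i}).

def log_likelihood(xs, pi, p0, p1):
    """Marginal log-likelihood sum_i log P(X_i | theta), with Z_i summed out."""
    total = 0.0
    for x in xs:
        lik0 = (1 - pi) * (p0 if x == 1 else 1 - p0)  # Z_i = 0 branch
        lik1 = pi * (p1 if x == 1 else 1 - p1)        # Z_i = 1 branch
        total += math.log(lik0 + lik1)
    return total

xs = [1, 1, 0, 1, 0, 0, 1, 1]
# This is exactly the objective that EM tries to maximize over theta:
print(log_likelihood(xs, pi=0.5, p0=0.2, p1=0.8))
```

Maximizing this quantity directly is hard because of the sum inside the logarithm; EM sidesteps it via the posterior over $Z_i$.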
The EM algorithm is an iterative method to achieve this, dating all the way back to [15] and [24] in the 70s. In the above framework it can be formulated as the following procedure, maintaining estimates $\theta^t$, $\tilde P^t(Z)$ of the model parameters and the posterior distribution over the hidden variables. In the E-step, we compute the distribution $\tilde P^t(Z) = P(Z|X, \theta^t)$. In the M-step, we set
$$\theta^{t+1} = \arg\max_\theta \sum_i \mathbb{E}_{\tilde P^t}[\log P(X_i, Z_i|\theta)].$$
Sometimes even the above two steps may not be computationally feasible, in which case they can be relaxed by choosing a family of simple distributions $F$, and performing the following updates. In the variational E-step, we compute the distribution $\tilde P^t(Z) = \arg\min_{P \in F} KL(P(Z) \,\|\, P(Z|X, \theta^t))$. In the M-step, we set
$$\theta^{t+1} = \arg\max_\theta \sum_i \mathbb{E}_{\tilde P^t}[\log P(X_i, Z_i|\theta)].$$
By picking the family $F$ appropriately, it's often possible to make both steps above run in polynomial time. As expected, neither of the above two families of approximations comes with any provable global convergence guarantees. With EM, the problem is ensuring that one does not get stuck in a local optimum. With variational EM, additionally, we are faced with the issue of in principle not even exploring the entire space of solutions.

3 Topic models and prior work

We focus on a particular, popular latent variable model - topic models [12]. The generative model over word documents is the following. For each document in the corpus, a proportion of topics $\gamma_1, \gamma_2, \ldots, \gamma_K$ is sampled according to a prior distribution $\alpha$. Then, for each position $p$ in the document, we pick a topic $Z_p$ according to a multinomial with parameters $\gamma_1, \ldots, \gamma_K$. Conditioned on $Z_p = i$, we pick a word $j$ from a multinomial with parameters $(\beta_{i,1}, \beta_{i,2}, \ldots, \beta_{i,N})$ to put in position $p$. The matrix of values $\{\beta_{i,j}\}$ is known as the topic-word matrix. The body of work on topic models is vast [11].
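The generative process above is short enough to transcribe directly. The following sketch samples one document; the sizes, the random seed, and the Dirichlet draws used for $\beta$ and $\gamma$ are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions for this example):
K, N_words, doc_len = 3, 8, 50
# Topic-word matrix {beta_{i,j}}: each row is a distribution over words.
beta = rng.dirichlet(np.ones(N_words), size=K)

def sample_document():
    gamma = rng.dirichlet(np.ones(K))        # topic proportions for this document
    words = []
    for _ in range(doc_len):
        z = rng.choice(K, p=gamma)           # topic Z_p for position p
        w = rng.choice(N_words, p=beta[z])   # word drawn from topic z's multinomial
        words.append(w)
    return gamma, words

gamma, words = sample_document()
print(len(words), gamma.sum())
```

Learning amounts to recovering $\beta$ (and properties of the prior on $\gamma$) from a corpus of such documents.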
Prior theoretical work relevant in the context of this paper includes the sequence of works by [7], [4], as well as [2], [16], [17] and [10]. [7] and [4] assume that the topic-word matrix contains "anchor words": each topic has a word which appears in that topic, and no other. [2] on the other hand work with a certain expansion assumption on the word-topic graph, which says that if one takes a subset $S$ of topics, the number of words in the support of these topics should be at least $|S| + s_{\max}$, where $s_{\max}$ is the maximum support size of any topic. Neither line of work needs any assumption on the topic priors, and both can handle (almost) arbitrarily short documents. The assumptions we make on the word-topic matrix will be related to the ones in the above works, but our documents will need to be long, so that the empirical counts of the words are close to their expected counts. Our priors will also be more structured. This is expected since we are trying to analyze an existing heuristic rather than develop a new algorithmic strategy. The case where the documents are short seems significantly more difficult. Namely, in that case there are two issues to consider. One is proving that the variational approximation to the posterior distribution over topics is not too bad. The second is proving that the updates do actually reach the global optimum. Assuming long documents allows us to focus on the second issue alone, which is already challenging. At a high level, the instances we consider will have the following structure:
• The topics will satisfy a weighted expansion property: for any set $S$ of topics of constant size, for any topic $i$ in this set, the probability mass on words which belong to $i$ and no other topic in $S$ will be large. (Similar to the expansion in [2], but only over constant-sized subsets.)
• The number of topics per document will be small.
Further, the probability of including a given topic in a document is almost independent of any other topics that might be included in the document already. Similar properties are satisfied by the Dirichlet prior, one of the most popular priors in topic modeling. (Originally introduced by [12].) The documents will also have a "dominating topic", similarly as in [10].
• For each word $j$ and a topic $i$ it appears in, there will be a decent proportion of documents that contain topic $i$ and no other topic containing $j$. These can be viewed as "local anchor documents" for that word-topic pair.

We state below, informally, our main result. See Sections 6 and 7 for more details.

Theorem. Under the above mentioned assumptions, popular variants of variational inference for topic models, with suitable initializations, provably recover the ground truth model in polynomial time.

4 Variational relaxation for learning topic models

In this section we briefly review the variational relaxation for topic models, following closely [12]. Throughout the paper, we will denote by $N$ the total number of words and by $K$ the number of topics. We will assume that we are working with a sample set of $D$ documents. We will also denote by $\tilde f_{d,j}$ the fractional count of word $j$ in document $d$ (i.e. $\tilde f_{d,j} = \mathrm{Count}(j)/N_d$, where $\mathrm{Count}(j)$ is the number of times word $j$ appears in the document, and $N_d$ is the number of words in the document). For topic models, variational updates are a way to approximate the computationally intractable E-step [23], as described in Section 2. Recall that the model parameters for topic models are the topic prior parameters $\alpha$ and the topic-word matrix $\beta$. The observable $X$ is the list of words in the document. The latent variables are the topic assignments $Z_j$ at each position $j$ in the document and the topic proportions $\gamma$. The variational E-step hence becomes $\tilde P^t(Z, \gamma) = \arg\min_{P \in F} KL(P(Z, \gamma) \,\|\, P(Z, \gamma|X, \alpha^t, \beta^t))$ for some family $F$ of distributions.
The family $F$ one usually considers is $P^t(\gamma, Z) = q(\gamma) \prod_{j=1}^{N_d} q'_j(Z_j)$, i.e. a mean-field family. In [12] it's shown that for Dirichlet priors $\alpha$ the optimal distributions $q, q'_j$ are a Dirichlet distribution for $q$, with some parameter $\tilde\gamma$, and multinomials for $q'_j$, with some parameters $\phi_j$. The variational EM updates are shown to have the following form. In the E-step, one runs to convergence the following updates on the $\phi$ and $\tilde\gamma$ parameters:
$$\phi_{d,j,i} \propto \beta^t_{i,w_{d,j}} e^{\mathbb{E}_q[\log(\gamma_{d,i})\,|\,\tilde\gamma_d]}, \qquad \tilde\gamma_{d,i} = \alpha^t_{d,i} + \sum_{j=1}^{N_d} \phi_{d,j,i}.$$
In the M-step, one updates the $\beta$ parameters by setting
$$\beta^{t+1}_{i,j} \propto \sum_{d=1}^{D} \sum_{j'=1}^{N_d} \phi^t_{d,j',i}\, w_{d,j,j'},$$
where $\phi^t_{d,j',i}$ is the converged value of $\phi_{d,j',i}$; $w_{d,j}$ is the word in document $d$, position $j$; and $w_{d,j,j'}$ is an indicator variable which is 1 if the word in position $j'$ in document $d$ is word $j$. The $\alpha$ Dirichlet parameters do not have a closed form expression and are updated via gradient descent.

4.1 Simplified updates in the long document limit

From the above updates it is difficult to assign an intuitive meaning to the $\tilde\gamma$ and $\phi$ parameters. (Indeed, it's not even clear what one would ideally like them to be at the global optimum.) We will however be working in the large document limit - and this will simplify the updates. In particular, in the E-step, in the large document limit, the first term in the update equation for $\tilde\gamma$ has a vanishing contribution. In this case, we can simplify the E-update as:
$$\phi_{d,j,i} \propto \beta^t_{i,j}\,\gamma_{d,i}, \qquad \gamma_{d,i} \propto \sum_{j=1}^{N_d} \phi_{d,j,i}.$$
Notice, importantly, that in the second update we now use variables $\gamma_{d,i}$ instead of $\tilde\gamma_{d,i}$, which are normalized such that $\sum_{i=1}^{K} \gamma_{d,i} = 1$. These correspond to the max-likelihood topic proportions, given our current estimates $\beta^t_{i,j}$ of the model parameters. The M-step will remain as is - but we will focus on the $\beta$ updates only, and ignore the $\alpha$ updates, as the $\alpha$ estimates disappeared from the E updates:
$$\beta^{t+1}_{i,j} \propto \sum_{d=1}^{D} \tilde f_{d,j}\, \gamma^t_{d,i},$$
where $\gamma^t_{d,i}$ is the converged value of $\gamma_{d,i}$.
In this case, the intuitive meaning of the $\beta^t$ and $\gamma^t$ variables is clear: they are estimates of the model parameters, and of the max-likelihood topic proportions given those estimates, respectively. The way we derived them, these updates appear to be an approximate form of the variational updates in [12]. However, it is possible to also view them in a more principled manner. These updates approximate the posterior distribution $P(Z, \gamma|X, \alpha^t, \beta^t)$ by first approximating it by $P(Z|X, \gamma^*, \alpha^t, \beta^t)$, where $\gamma^*$ is the max-likelihood value for $\gamma$ given our current estimates of $\alpha, \beta$, and then setting $P(Z|X, \gamma^*, \alpha^t, \beta^t)$ to be a product distribution. It is intuitively clear that in the large document limit this approximation should not be much worse than the one in [12], as the posterior concentrates around the maximum likelihood value. (And in fact, our proofs will work for finite, but long documents.) Finally, we will rewrite the above equations in a slightly more convenient form. Denoting $f_{d,j} = \sum_{i=1}^{K} \gamma_{d,i} \beta^t_{i,j}$, the E-step can be written as: iterate until convergence
$$\gamma_{d,i} = \gamma_{d,i} \sum_{j=1}^{N} \frac{\tilde f_{d,j}}{f_{d,j}}\, \beta^t_{i,j}.$$
The M-step becomes:
$$\beta^{t+1}_{i,j} = \beta^t_{i,j}\, \frac{\sum_{d=1}^{D} \frac{\tilde f_{d,j}}{f^t_{d,j}} \gamma^t_{d,i}}{\sum_{d=1}^{D} \gamma^t_{d,i}},$$
where $f^t_{d,j} = \sum_{i=1}^{K} \gamma^t_{d,i} \beta^t_{i,j}$ and $\gamma^t_{d,i}$ is the converged value of $\gamma_{d,i}$.

4.2 Alternating KL minimization and thresholded updates

We will further modify the E and M-step update equations we derived above. In a slightly modified form, these updates were used in a paper by [21] in the context of non-negative matrix factorization. There the authors proved that under these updates $\sum_{d=1}^{D} KL(\tilde f_d \,\|\, f^t_d)$ is non-increasing. One can easily modify their arguments to show that the same property is preserved if the E-step is replaced by a step $\gamma^t_d = \arg\min_{\gamma_d \in \Delta^K} KL(\tilde f_d \,\|\, f_d)$, where $\Delta^K$ is the $K$-dimensional simplex - i.e. minimizing the KL divergence between the counts and the "predicted counts" with respect to the $\gamma$ variables.
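The rewritten updates can be transcribed almost line for line. The sketch below is an illustration under our own assumptions (array shapes, iteration count, and the synthetic demo are not from the paper): `beta` is a $K \times N$ matrix with rows summing to 1, and the empirical frequencies $\tilde f$ form a $D \times N$ matrix whose rows sum to 1.

```python
import numpy as np

def e_step(f_tilde, beta, n_iter=200):
    """Iterate gamma_{d,i} <- gamma_{d,i} * sum_j (f~_{d,j}/f_{d,j}) beta_{i,j}."""
    D, K = f_tilde.shape[0], beta.shape[0]
    gamma = np.full((D, K), 1.0 / K)
    for _ in range(n_iter):
        f = gamma @ beta                      # f_{d,j} = sum_i gamma_{d,i} beta_{i,j}
        gamma = gamma * ((f_tilde / f) @ beta.T)
        # Each row of gamma stays normalized automatically: summing the update
        # over i gives sum_j f~_{d,j} = 1 whenever the rows of f_tilde sum to 1.
    return gamma

def m_step(f_tilde, beta, gamma):
    """beta^{t+1}_{i,j} = beta^t_{i,j} * (sum_d (f~/f^t) gamma) / (sum_d gamma)."""
    ratio = f_tilde / (gamma @ beta)          # (f~_{d,j} / f^t_{d,j})
    return beta * (ratio.T @ gamma).T / gamma.sum(axis=0)[:, None]

# Synthetic demo in the exact-frequency (long-document) limit.
rng = np.random.default_rng(0)
K, N, D = 3, 10, 20
beta_true = rng.random((K, N)); beta_true /= beta_true.sum(1, keepdims=True)
gamma_true = rng.random((D, K)); gamma_true /= gamma_true.sum(1, keepdims=True)
f_tilde = gamma_true @ beta_true              # f~ equals the model frequencies here
gamma_hat = e_step(f_tilde, beta_true)
beta_next = m_step(f_tilde, beta_true, gamma_true)
```

With exact frequencies and the true parameters, the ratio $\tilde f / f^t$ is identically 1, so the M-step leaves $\beta$ unchanged: $(\beta^*, \gamma^*)$ is a fixed point of the updates.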
(In fact, iterating the $\gamma$ updates above is a way to solve this convex minimization problem via a version of gradient descent which makes multiplicative, rather than additive, updates.) Thus the updates are performing alternating minimization using the KL divergence as the distance measure (with the difference that for the $\beta$ variables one essentially just performs a single gradient step). In this paper, we will make a very natural modification of the M-step. Intuitively, the update for $\beta^t_{i,j}$ goes over all appearances of the word $j$ and adds the "fractional assignment" of the word $j$ to topic $i$ under our current estimates of the variables $\beta, \gamma$. In the modified version we will only average over those documents $d$ where $\gamma^t_{d,i} > \gamma^t_{d,i'}, \forall i' \neq i$. The intuitive reason behind this modification is the following. The EM updates we are studying work with the KL divergence, which puts more weight on the larger entries. Thus, for the documents in $D_i$, the estimates for $\gamma^t_{d,i}$ should be better than they might be in the documents $D \setminus D_i$. (Of course, since the terms $f^t_{d,j}$ involve all the variables $\gamma^t_{d,i}$, it is not a priori clear that this modification will gain us much, but we will prove that it in fact does.) Formally, we discuss the three modifications of variational inference specified as Algorithms 1, 2 and 3 (we call them tEM, for thresholded EM):

Algorithm 1 KL-tEM
(E-step) Solve the following convex program for each document $d$:
$$\min_{\gamma^t_d} \sum_j \tilde f_{d,j} \log\left(\frac{\tilde f_{d,j}}{f^t_{d,j}}\right), \quad \text{s.t. } \gamma^t_{d,i} \geq 0,\; \sum_i \gamma^t_{d,i} = 1,\; \text{and } \gamma^t_{d,i} = 0 \text{ if } i \text{ is not in the support of document } d.$$
(M-step) Let $D_i$ be the set of documents $d$ s.t. $\gamma^t_{d,i} > \gamma^t_{d,i'}, \forall i' \neq i$. Set
$$\beta^{t+1}_{i,j} = \beta^t_{i,j}\, \frac{\sum_{d \in D_i} \frac{\tilde f_{d,j}}{f^t_{d,j}} \gamma^t_{d,i}}{\sum_{d \in D_i} \gamma^t_{d,i}}.$$

5 Initializations

We will consider two different strategies for initialization. First, we will consider the case where we initialize with the topic-word matrix and the document priors having the correct support.
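The thresholded M-step of KL-tEM (only documents whose current dominant topic is $i$ contribute to row $i$ of $\beta$) can be sketched as follows; the array shapes and the small fixed-point demo are illustrative assumptions:

```python
import numpy as np

def tem_m_step(f_tilde, beta, gamma):
    """Thresholded M-step: average only over D_i = {d : gamma_{d,i} is largest}."""
    K = beta.shape[0]
    ratio = f_tilde / (gamma @ beta)           # (f~_{d,j} / f^t_{d,j})
    dominant = gamma.argmax(axis=1)            # estimated dominant topic per document
    new_beta = beta.copy()
    for i in range(K):
        D_i = dominant == i                    # documents where topic i dominates
        if D_i.any():
            num = ratio[D_i].T @ gamma[D_i, i] # sum_{d in D_i} (f~/f^t) gamma_{d,i}
            new_beta[i] = beta[i] * num / gamma[D_i, i].sum()
    return new_beta

# Tiny fixed-point demo: with exact frequencies, the update leaves beta unchanged.
rng = np.random.default_rng(1)
beta = rng.random((3, 8)); beta /= beta.sum(1, keepdims=True)
gamma = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6], [0.5, 0.3, 0.2]])
f_tilde = gamma @ beta
assert np.allclose(tem_m_step(f_tilde, beta, gamma), beta)
```

The only change from the unthresholded M-step is the restriction of both sums to the set $D_i$.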
The analysis of tEM in this case will be the cleanest. While the main focus of the paper is tEM, we'll show that this initialization can actually be done efficiently for our case. Second, we will consider an initialization that is inspired by what the current LDA-c implementation uses. Concretely, we'll assume that the user has some way of finding, for each topic $i$, a seed document in which the proportion of topic $i$ is at least $C_l$. Then, when initializing, one treats this document as if it were pure: namely one sets $\beta^0_{i,j}$ to be the fractional count of word $j$ in this document. We do not attempt to design an algorithm to find these documents.

Algorithm 2 Iterative tEM
(E-step) Initialize $\gamma_{d,i}$ uniformly among the topics in the support of document $d$. Repeat
$$\gamma_{d,i} = \gamma_{d,i} \sum_{j=1}^{N} \frac{\tilde f_{d,j}}{f_{d,j}}\, \beta^t_{i,j} \qquad (4.1)$$
until convergence.
(M-step) Same as above.

Algorithm 3 Incomplete tEM
(E-step) Initialize $\gamma_{d,i}$ with the values obtained in the previous iteration, then perform just one step of (4.1).
(M-step) Same as before.

6 Case study 1: Sparse topic priors, support initialization

We start with a simple case. As mentioned, all of our results only hold in the long documents regime: we will assume that for each document $d$, the number of sampled words is large enough that one can approximate the expected frequencies of the words, i.e., one can find values $\gamma^*_{d,i}$ such that $\tilde f_{d,j} = (1 \pm \epsilon) \sum_{i=1}^{K} \gamma^*_{d,i} \beta^*_{i,j}$. We'll split the rest of the assumptions into those that apply to the topic-word matrix, and those that apply to the topic priors. Let's first consider the assumptions on the topic-word matrix. We will impose conditions that ensure the topics don't overlap too much. Namely, we assume:
• Words are discriminative: Each word appears in $o(K)$ topics.
• Almost disjoint supports: $\forall i, i'$, if the intersection of the supports of $i$ and $i'$ is $S$, then $\sum_{j \in S} \beta^*_{i,j} \leq o(1) \cdot \sum_j \beta^*_{i,j}$.

We also need assumptions on the topic priors. The documents will be sparse, and all topics will be roughly equally likely to appear.
There will be virtually no dependence between the topics: conditioning on the size or presence of a certain topic will not influence much the probability of another topic being included. These are analogues of distributions that have been analyzed for dictionary learning [6]. Formally:
• Sparse and gapped documents: Each of the documents in our samples has at most $T = O(1)$ topics. Furthermore, for each document $d$, the largest topic $i_0 = \arg\max_i \gamma^*_{d,i}$ is such that for any other topic $i'$, $\gamma^*_{d,i_0} - \gamma^*_{d,i'} > \rho$ for some (arbitrarily small) constant $\rho$.
• Dominant topic equidistribution: The probability that topic $i$ is such that $\gamma^*_{d,i} > \gamma^*_{d,i'}, \forall i' \neq i$ is $\Theta(1/K)$.
• Weak topic correlations and independent topic distribution: For all sets $S$ with $o(K)$ topics, it must be the case that
$$\mathbb{E}[\gamma^*_{d,i} \mid \gamma^*_{d,i} \text{ is dominating}] = (1 \pm o(1))\, \mathbb{E}[\gamma^*_{d,i} \mid \gamma^*_{d,i} \text{ is dominating},\; \gamma^*_{d,i'} = 0,\; i' \in S].$$
Furthermore, for any set $S$ of topics s.t. $|S| \leq T - 1$, $\Pr[\gamma^*_{d,i} > 0 \mid \gamma^*_{d,i'} > 0, \forall i' \in S] = \Theta(\frac{1}{K})$.

These assumptions are a less smooth version of properties of the Dirichlet prior. Namely, it's a folklore result that Dirichlet draws are sparse with high probability, for a certain reasonable range of parameters. This was formally proven by [25] - though sparsity there means a small number of large coordinates. It's also well known that the Dirichlet prior essentially cannot enforce any correlation between different topics.¹

¹ We show analogues of the weak topic correlations property and equidistribution in the supplementary material for completeness' sake.

The above assumptions can be viewed as a local notion of separability of the model, in the following sense. First, consider a particular document $d$. For each topic $i$ that participates in that document, consider the words $j$ which only appear in the support of topic $i$ in the document. In some sense, these words are local anchor words for that document: these words appear only in one topic of that document.
Because of the "almost disjoint supports" property, there will be a decent mass on these words in each document. Similarly, consider a particular non-zero element $\beta^*_{i,j}$ of the topic-word matrix. Let's call $D_l$ the set of documents where $\beta^*_{i',j} = 0$ for all other topics $i' \neq i$ appearing in that document. These documents are like local anchor documents for that word-topic pair: in those documents, the word appears as part of only topic $i$. It turns out the above properties imply there is a decent number of these for any word-topic pair. Finally, a technical condition: we will also assume that all nonzero $\gamma^*_{d,i}, \beta^*_{i,j}$ are at least $\frac{1}{\mathrm{poly}(N)}$. Intuitively, this means that if a topic is present, it needs to be reasonably large, and similarly for words in topics. Such assumptions also appear in the context of dictionary learning [6]. We will prove the following:

Theorem 1. Given an instance of topic modelling satisfying the properties specified above, where the number of documents is $\Omega(\frac{K \log^2 N}{\epsilon^2})$, if we initialize the supports of the $\beta^t_{i,j}$ and $\gamma^t_{d,i}$ variables correctly, then after $O(\log(1/\epsilon') + \log N)$ KL-tEM, iterative-tEM or incomplete-tEM updates, we recover the topic-word matrix and topic proportions to multiplicative accuracy $1 + \epsilon'$, for any $\epsilon'$ s.t. $1 + \epsilon' \geq \frac{1}{(1-\epsilon)^7}$.

Theorem 2. If the number of documents is $\Omega(K^4 \log^2 K)$, there is a polynomial-time procedure which with probability $1 - O(\frac{1}{K})$ correctly identifies the supports of the $\beta^*_{i,j}$ and $\gamma^*_{d,i}$ variables.

Provable convergence of tEM: The correctness of the tEM updates is proven in 3 steps:
• Identifying the dominating topic: First, we prove that if $\gamma^t_{d,i}$ is the largest among all topics in the document, then topic $i$ is actually the largest topic.
• Phase I: Getting constant multiplicative factor estimates: After initialization, after $O(\log N)$ rounds, we will get to variables $\beta^t_{i,j}, \gamma^t_{d,i}$ which are within a constant multiplicative factor of $\beta^*_{i,j}, \gamma^*_{d,i}$.
• Phase II (Alternating minimization - lower and upper bound evolution): Once the $\beta$ and $\gamma$ estimates are within a constant factor of their true values, we show that the lone words and documents have a boosting effect: they cause the multiplicative upper and lower bounds to improve at each round.

The updates we are studying are multiplicative, not additive, in nature, and the objective they are optimizing is non-convex, so the standard techniques do not work. The intuition behind our proof in Phase II can be described as follows. Consider one update for one of the variables, say $\beta^t_{i,j}$. We show that $\beta^{t+1}_{i,j} \approx \alpha \beta^*_{i,j} + (1 - \alpha) C^t \beta^*_{i,j}$ for some constant $C^t$ at time step $t$. Here $\alpha$ is fairly large (one should think of it as $1 - o(1)$), and comes from the existence of the local anchor documents. A similar equation holds for the $\gamma$ variables, in which case the "good" term comes from the local anchor words. Furthermore, we show that the error in the $\approx$ decreases over time, as does the value of $C^t$, so that eventually we can reach $\beta^*_{i,j}$. The analysis bears a resemblance to the state evolution and density evolution methods in the analysis of error-decoding algorithms - in the sense that we maintain a quantity about the evolving system and analyze how it evolves under the specified iterations. The quantities we maintain are quite simple - upper and lower multiplicative bounds on our estimates at any round $t$.

Initialization: Recall that the goal of this phase is to recover the supports - i.e. to find out which topics are present in a document, and to identify the support of each topic. We will find the topic supports first. This uses an idea inspired by [8] in the setting of dictionary learning. Roughly, we devise a test which takes as input two documents $d, d'$ and tries to determine whether the two documents have a topic in common. The test will have no false positives, i.e., it will never say YES if the documents don't have a topic in common, but it might say NO even if they do.
We then ensure that, with high probability, for each topic we find a pair of documents intersecting in that topic such that the test says YES.²

² The detailed initialization algorithm is included in the supplementary material.

7 Case study 2: Dominating topics, seeded initialization

Next, we'll consider an initialization which is essentially what the current implementation of LDA-c uses. Namely, we will call the following initialization a seeded initialization:
• For each topic $i$, the user supplies a document $d$ in which $\gamma^*_{d,i} \geq C_l$.
• We treat the document as if it only contains topic $i$ and initialize with $\beta^0_{i,j} = f^*_{d,j}$.

We show how to modify the previous analysis to show that, with a few more assumptions, this strategy works as well. Firstly, we will have to assume anchor words that make up a decent fraction of the mass of each topic. Second, we also assume that the words have a bounded dynamic range, i.e. the values of a word in two different topics are within a constant factor $B$ of each other. The documents are still gapped, but the gap now must be larger. Finally, in roughly a $1/B$ fraction of the documents where topic $i$ is dominant, that topic has proportion $1 - \delta$, for some small (but still constant) $\delta$. A similar assumption (a small fraction of almost pure documents) appeared in a recent paper by [10]. Formally, we have:
• Small dynamic range and large fraction of anchors: For each discriminative word $j$, if $\beta^*_{i,j} \neq 0$ and $\beta^*_{i',j} \neq 0$, then $\beta^*_{i,j} \leq B \beta^*_{i',j}$. Furthermore, each topic $i$ has anchor words whose total weight is at least $p$.
• Gapped documents: In each document, the largest topic has proportion at least $C_l$, and all the other topics are at most $C_s$, s.t.
$$C_l - C_s \geq \frac{1}{p}\sqrt{2\left(p \log\left(\tfrac{1}{C_l}\right) + (1-p)\log(B C_l)\right) + p \log(1+\epsilon)} + \epsilon.$$
• Small fraction of $1-\delta$ dominant documents: Among all the documents where topic $i$ is dominating, in an $8/B$ fraction of them, $\gamma^*_{d,i} \geq 1 - \delta$, where
$$\delta := \min\left(\frac{C_l^2}{2B^3} - \frac{1}{p}\sqrt{2\left(p \log\left(\tfrac{1}{C_l}\right) + (1-p)\log(B C_l)\right) + p \log(1+\epsilon)} - \epsilon,\; 1 - \frac{p}{C_l}\right).$$

The dependency between the parameters $B, p, C_l$ is a little difficult to parse, but if one thinks of $C_l$ as $1 - \eta$ for $\eta$ small, and $p \geq 1 - \frac{\eta}{\log B}$, then since $\log(\frac{1}{C_l}) \approx \eta$, roughly we want that $C_l - C_s \gg \frac{2}{p}\sqrt{\eta}$. (In other words, the weight we require to have on the anchors depends only logarithmically on the range $B$.) In the documents where the dominant topic has proportion $1 - \delta$, a similar reasoning as above gives that we want approximately $\gamma^*_{d,i} \geq 1 - \frac{1-2\eta}{2B^3} + \frac{2}{p}\sqrt{\eta}$. The precise statement is as follows:

Theorem 3. Given an instance of topic modelling satisfying the properties specified above, where the number of documents is $\Omega(\frac{K \log^2 N}{\epsilon^2})$, if we initialize with seeded initialization, then after $O(\log(1/\epsilon') + \log N)$ KL-tEM updates, we recover the topic-word matrix and topic proportions to multiplicative accuracy $1 + \epsilon'$, if $1 + \epsilon' \geq \frac{1}{(1-\epsilon)^7}$.

The proof is carried out in a few phases:
• Phase I: Anchor identification: We show that as long as we can identify the dominating topic in each of the documents, the anchor words will make progress: after $O(\log N)$ rounds, the topic-word estimates will be almost zero for the topics for which word $w$ is not an anchor. For the topic for which a word is an anchor, we'll have a good estimate.
• Phase II: Discriminative word identification: After the anchor words are properly identified in the previous phase, if $\beta^*_{i,j} = 0$, then $\beta^t_{i,j}$ will keep dropping and quickly reach almost zero. The values corresponding to $\beta^*_{i,j} \neq 0$ will be decently estimated.
• Phase III: Alternating minimization: After Phases I and II above, we are back to the scenario of the previous section: namely, there is improvement in each round.

During Phases I and II the intuition is the following: due to our initialization, even in the beginning, each topic is "correlated" with the correct values.
In a $\gamma$ update, we are minimizing $KL(\tilde f_d \,\|\, f_d)$ with respect to the $\gamma_d$ variables, so we need a way to argue that whenever the $\beta$ estimates are not too bad, minimizing this quantity provides an estimate of how far the optimal $\gamma_d$ variables are from $\gamma^*_d$. We show the following useful claim:

Lemma 4. If, for all topics $i$, $KL(\beta^*_i \,\|\, \beta^t_i) \leq R_\beta$, and $\min_{\gamma_d \in \Delta^K} KL(\tilde f_d \,\|\, f_d) \leq R_f$, then after running a KL divergence minimization step with respect to the $\gamma_d$ variables, we get that $\|\gamma^*_d - \gamma_d\|_1 \leq \frac{1}{p}\left(\sqrt{\frac{1}{2} R_\beta} + \frac{1}{2}\sqrt{R_f}\right) + \epsilon$.

This lemma critically uses the existence of anchor words - namely, we show $\|\beta^* v\|_1 \geq p \|v\|_1$. Intuitively, if one thinks of $v$ as $\gamma^* - \gamma^t$, then $\|\beta^* v\|_1$ will be large if $\|v\|_1$ is large. Hence, if $\|\beta^* - \beta^t\|_1$ is not too large, whenever $\|f^* - f^t\|_1$ is small, so is $\|\gamma^* - \gamma^t\|_1$. We will be able to maintain $R_\beta$ and $R_f$ small enough throughout the iterations, so that we can identify the largest topic in each of the documents.

8 On common words

We briefly remark on common words: words such that $\beta^*_{i,j} \leq \kappa \beta^*_{i',j}, \forall i, i'$, with $\kappa \leq B$. In this case, the proofs above, as they are, will not work,³ since common words do not have any lone documents. However, if a $1 - \frac{1}{\kappa^{100}}$ fraction of the documents where topic $i$ is dominant contains topic $i$ with proportion $1 - \frac{1}{\kappa^{100}}$, and furthermore, in each topic, the weight on these words is no more than $\frac{1}{\kappa^{100}}$, then our proofs still work with either initialization.⁴ The idea for the argument is simple: when the dominating topic is very large, we show that $\frac{f^*_{d,j}}{f^t_{d,j}}$ is very highly correlated with $\frac{\beta^*_{i,j}}{\beta^t_{i,j}}$, so these documents behave like anchor documents. Namely, one can show:

Theorem 5. If we additionally have common words satisfying the properties specified above, then after $O(\log(1/\epsilon') + \log N)$ KL-tEM updates in Case Study 2, or any of the tEM variants in Case Study 1, using the same initializations as before, we recover the topic-word matrix and topic proportions to multiplicative accuracy $1 + \epsilon'$, if $1 + \epsilon' \geq \frac{1}{(1-\epsilon)^7}$.
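The inequality $\|\beta^* v\|_1 \geq p \|v\|_1$ behind Lemma 4 can be checked numerically on a toy topic-word matrix where each topic carries anchor-word mass exactly $p$ (the construction, sizes, and value of $p$ below are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy construction: K topics over N words; word i (for i < K) is an anchor
# for topic i with mass p, and the remaining (1 - p) mass is spread over
# shared words. Rows of beta are valid word distributions.
K, N, p = 4, 12, 0.3
beta = np.zeros((K, N))
beta[:, K:] = rng.dirichlet(np.ones(N - K), size=K) * (1 - p)  # shared words
for i in range(K):
    beta[i, i] = p                           # word i anchors topic i

for _ in range(100):
    v = rng.standard_normal(K)               # think of v as gamma* - gamma^t
    # Restricting to the anchor coordinates alone, |(beta^T v)_i| = p |v_i|,
    # so the full l1 norm of beta^T v is at least p * ||v||_1.
    assert np.abs(beta.T @ v).sum() >= p * np.abs(v).sum() - 1e-9
print("inequality holds on all random trials")
```

The bound is tight only up to the mass outside the anchors; the proof in the paper uses exactly the anchor-coordinate argument in the comment.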
9 Discussion and open problems

In this work we provide the first characterization of sufficient conditions under which variational inference leads to optimal parameter estimates for topic models. Our proofs also suggest possible hard cases for variational inference, namely instances with a large dynamic range compared to the proportion of anchor words and/or correlated topic priors. It is not hard to hand-craft such instances where support initialization performs very badly, even with only anchor and common words. We made no effort to explore the optimal relationship between the dynamic range and the proportion of anchor words, as it is not clear what the "worst case" instances for this trade-off are. Seeded initialization, on the other hand, empirically works much better. We found that when C_l ≥ 0.6, and when the proportion of anchor words is as low as 0.2, variational inference recovers the ground truth, even on instances with a fairly large dynamic range. Our current proof methods are too weak to capture this observation. (In fact, even the largest topic is sometimes misidentified in the initial stages, so one cannot even run tEM, only the vanilla variational inference updates.) Analyzing the dynamics of variational inference in this regime seems like a challenging problem which would require significant new ideas.

³ We stress that we want to analyze whether variational inference will work or not. Handling common words algorithmically is easy: they can be detected and "filtered out" initially. Then we can perform the variational inference updates over the rest of the words only. This is in fact often done in practice.
⁴ See supplementary material.
GAP Safe screening rules for sparse multi-task and multi-class models

Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, Paris, 75013, France
firstname.lastname@telecom-paristech.fr

Abstract

High dimensional regression benefits from sparsity promoting regularizations. Screening rules leverage the known sparsity of the solution by ignoring some variables in the optimization, hence speeding up solvers. When the procedure is proven never to discard features wrongly, the rules are said to be safe. In this paper we derive new safe rules for generalized linear models regularized with ℓ1 and ℓ1/ℓ2 norms. The rules are based on duality gap computations and spherical safe regions whose diameters converge to zero. This allows us to safely discard more variables, in particular for low regularization parameters. The GAP Safe rule can cope with any iterative solver, and we illustrate its performance on coordinate descent for multi-task Lasso and binary and multinomial logistic regression, demonstrating significant speed-ups on all tested datasets with respect to previous safe rules.

1 Introduction

The computational burden of solving high dimensional regularized regression problems has led to a vast literature over the last couple of decades on accelerating algorithmic solvers. With the increasing popularity of ℓ1-type regularization, ranging from the Lasso [18] or group-Lasso [24] to regularized logistic regression and multi-task learning, many algorithmic methods have emerged to solve the associated optimization problems. Although for simple ℓ1 regularized least squares a specific algorithm (e.g., the LARS [8]) can be considered, for more general formulations, penalties, and possibly larger dimensions, coordinate descent has proved to be a surprisingly efficient strategy [12].
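For concreteness, the coordinate descent strategy mentioned above reduces, for ℓ1 problems, to iterated soft-thresholding. The following is our own minimal sketch (toy data and names are assumptions, not code from the paper):

```python
import numpy as np

def soft_threshold(rho, lam):
    """Proximal operator of the l1 norm: the core of each coordinate update."""
    return np.sign(rho) * np.maximum(np.abs(rho) - lam, 0.0)

def cd_lasso(X, y, lam, n_sweeps=200):
    """Plain cyclic coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1."""
    beta = np.zeros(X.shape[1])
    sq_norms = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            # Partial residual: remove feature j's current contribution.
            resid = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ resid, lam) / sq_norms[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.01 * rng.standard_normal(50)
beta = cd_lasso(X, y, lam=1.0)
```

Since the updates are monotone in the objective, the solution improves on the all-zeros starting point; screening rules accelerate exactly this kind of solver by shrinking the set of coordinates visited.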
Our main objective in this work is to propose a technique that can speed up any solver for such learning problems, and that is particularly well suited for coordinate descent methods, thanks to active set strategies. The safe rules introduced by [9] for generalized ℓ1 regularized problems are a set of rules that allow eliminating features whose associated coefficients are proved to be zero at the optimum. Relaxing the safe rule, one can obtain some more speed-up at the price of possible mistakes. Such heuristic strategies, called strong rules [19], reduce the computational cost using an active set strategy, but require difficult post-processing to check for features possibly wrongly discarded. Another road to speeding up screening methods has been the introduction of sequential safe rules [21, 23, 22]. The idea is to improve the screening thanks to the computations done for a previous regularization parameter. This scenario is particularly relevant in machine learning, where one computes solutions over a grid of regularization parameters, so as to select the best one (e.g., to perform cross-validation). Nevertheless, such strategies suffer from the same problem as strong rules, since relevant features can be wrongly disregarded: sequential rules usually rely on theoretical quantities that are not known to the solver, but only approximated. In particular, for such rules to work one needs the exact dual optimal solution for the previous regularization parameter. Recently, the introduction of safe dynamic rules [6, 5] has opened a promising venue by letting the screening be done not only at the beginning of the algorithm, but all along the iterations.
Following a method introduced for the Lasso [11], we generalize this dynamic safe rule, called the GAP Safe rule (because it relies on duality gap computations), to a large class of learning problems with the following benefits:
• a unified and flexible framework for a wider family of problems,
• easy to insert in existing solvers,
• proved to be safe,
• more efficient than previous safe rules,
• achieves fast true active set identification.
We introduce our general GAP Safe framework in Section 2. We then specialize it to important machine learning use cases in Section 3. In Section 4 we apply our GAP Safe rules to a multi-task Lasso problem, relevant for brain imaging with magnetoencephalography data, as well as to multinomial logistic regression regularized with the ℓ1/ℓ2 norm for joint feature selection.

2 GAP Safe rules

2.1 Model and notations

We denote by [d] the set {1, ..., d} for any integer d ∈ ℕ, and by Q^⊤ the transpose of a matrix Q. Our observation matrix is Y ∈ ℝ^{n×q}, where n represents the number of samples and q the number of tasks or classes. The design matrix X = [x^{(1)}, ..., x^{(p)}] = [x_1, ..., x_n]^⊤ ∈ ℝ^{n×p} has p explanatory variables (or features) column-wise, and n observations row-wise. The standard ℓ2 norm is written ‖·‖_2, the ℓ1 norm ‖·‖_1, the ℓ∞ norm ‖·‖_∞. The ℓ2 unit ball is denoted by B_2 (or simply B), and we write B(c, r) for the ℓ2 ball with center c and radius r. For a matrix B ∈ ℝ^{p×q}, we denote by ‖B‖_2² = ∑_{j=1}^p ∑_{k=1}^q B_{j,k}² the squared Frobenius norm, and by ⟨·,·⟩ the associated inner product. We consider the general optimization problem of minimizing a separable function with a group-Lasso regularization. The parameter to recover is a matrix B ∈ ℝ^{p×q}; for any j in [p], B_{j,:} is the j-th row of B, while for any k in [q], B_{:,k} is the k-th column. We would like to find

  B̂^{(λ)} ∈ argmin_{B ∈ ℝ^{p×q}} ∑_{i=1}^n f_i(x_i^⊤ B) + λ Ω(B) =: P_λ(B),   (1)

where f_i : ℝ^{1×q} → ℝ is a convex function with 1/γ-Lipschitz gradient.
So F : B ↦ ∑_{i=1}^n f_i(x_i^⊤ B) is also convex with Lipschitz gradient. The function Ω : ℝ^{p×q} → ℝ_+ is the ℓ1/ℓ2 norm Ω(B) = ∑_{j=1}^p ‖B_{j,:}‖_2, promoting a few rows of B to be non-zero at a time. The parameter λ is a non-negative constant controlling the trade-off between data fitting and regularization. Some elements of convex analysis used in the following are introduced here. For a convex function f : ℝ^d → [−∞, +∞], the Fenchel-Legendre transform¹ of f is the function f* : ℝ^d → [−∞, +∞] defined by f*(u) = sup_{z ∈ ℝ^d} ⟨z, u⟩ − f(z). The subdifferential of a function f at a point x is denoted by ∂f(x). The dual norm of Ω is the ℓ∞/ℓ2 norm and reads Ω*(B) = max_{j ∈ [p]} ‖B_{j,:}‖_2.

Remark 1. For ease of reading, all groups are weighted with equal strength, but the extension of our results to non-equal weights, as proposed in the original group-Lasso paper [24], would be straightforward.

2.2 Basic properties

First we recall the associated Fermat's condition and a dual formulation of the optimization problem:

Theorem 1 (Fermat's condition; see [3, Proposition 26.1] for a more general result). For any convex function f : ℝ^n → ℝ:
  x ∈ argmin_{x ∈ ℝ^n} f(x)  ⇔  0 ∈ ∂f(x).   (2)

¹ This is also often referred to as the (convex) conjugate of a function.

Theorem 2 ([9]). A dual formulation of (1) is given by
  Θ̂^{(λ)} = argmax_{Θ ∈ ∆_X} − ∑_{i=1}^n f_i*(−λ Θ_{i,:}) =: D_λ(Θ),   (3)
where ∆_X = {Θ ∈ ℝ^{n×q} : ∀j ∈ [p], ‖x^{(j)⊤} Θ‖_2 ≤ 1} = {Θ ∈ ℝ^{n×q} : Ω*(X^⊤ Θ) ≤ 1}. The primal and dual solutions are linked by
  ∀i ∈ [n], Θ̂^{(λ)}_{i,:} = −∇f_i(x_i^⊤ B̂^{(λ)})/λ.   (4)
Furthermore, Fermat's condition reads:
  ∀j ∈ [p], x^{(j)⊤} Θ̂^{(λ)} ∈ { B̂^{(λ)}_{j,:}/‖B̂^{(λ)}_{j,:}‖_2 } if B̂^{(λ)}_{j,:} ≠ 0, and x^{(j)⊤} Θ̂^{(λ)} ∈ B_2 if B̂^{(λ)}_{j,:} = 0.   (5)

Remark 2. Contrary to the primal, the dual problem has a unique solution under our assumption on the f_i. Indeed, the dual function is strongly concave, hence strictly concave.

Remark 3. For any Θ ∈ ℝ^{n×q}, let us introduce G(Θ) = [∇f_1(Θ_{1,:})^⊤, ..., ∇f_n(Θ_{n,:})^⊤] ∈ ℝ^{n×q}.
Then the primal/dual link can be written Θ̂^{(λ)} = −G(X B̂^{(λ)})/λ.

2.3 Critical parameter: λ_max

For λ large enough, the solution of the primal problem is simply 0. Thanks to Fermat's rule (2), 0 is optimal if and only if −∇F(0)/λ ∈ ∂Ω(0). Thanks to the properties of the dual norm Ω*, this is equivalent to Ω*(∇F(0)/λ) ≤ 1. Since ∇F(0) = X^⊤ G(0), 0 is a primal solution of P_λ if and only if λ ≥ λ_max := max_{j ∈ [p]} ‖x^{(j)⊤} G(0)‖_2 = Ω*(X^⊤ G(0)). This development shows that for λ ≥ λ_max, Problem (1) is trivial. So from now on, we only focus on the case where λ ≤ λ_max.

2.4 Screening rules description

Safe screening rules rely on a simple consequence of Fermat's condition:
  ‖x^{(j)⊤} Θ̂^{(λ)}‖_2 < 1  ⇒  B̂^{(λ)}_{j,:} = 0.   (6)
Stated in such a way, this relation is useless because Θ̂^{(λ)} is unknown (unless λ > λ_max). However, it is often possible to construct a set R ⊂ ℝ^{n×q}, called a safe region, containing it. Then, note that
  max_{Θ ∈ R} ‖x^{(j)⊤} Θ‖_2 < 1  ⇒  B̂^{(λ)}_{j,:} = 0.   (7)
The so-called safe screening rules consist in removing the variable j from the problem whenever the previous test is satisfied, since B̂^{(λ)}_{j,:} is then guaranteed to be zero. This property leads to considerable speed-ups in practice, especially with active set strategies; see for instance [11] for the Lasso case. A natural goal is to find safe regions that are as narrow as possible: smaller safe regions can only increase the number of screened-out variables. However, complex regions could lead to a computational burden limiting the benefit of screening. Hence, we focus on constructing R satisfying the trade-off:
• R is as small as possible and contains Θ̂^{(λ)}.
• Computing max_{Θ ∈ R} ‖x^{(j)⊤} Θ‖_2 is cheap.

2.5 Spheres as safe regions

Various shapes have been considered in practice for the set R, such as balls (referred to as spheres) [9], domes [11] or more refined sets (see [23] for a survey). Here we consider the so-called "sphere regions", choosing a ball R = B(c, r) as a safe region.
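As a quick numerical aside (our own illustration with hypothetical random data, not code from the paper): in the Lasso case (q = 1, where G(0) = −y), both λ_max and membership in ∆_X reduce to a single ℓ∞-type norm evaluation.

```python
import numpy as np

def omega_star(B):
    """Dual norm Omega* (l_inf/l2): max row-wise l2 norm; plain ||.||_inf when q = 1."""
    if B.ndim == 1:
        return np.abs(B).max()
    return np.linalg.norm(B, axis=1).max()

rng = np.random.default_rng(0)
n, p = 40, 80
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Lasso: G(0) = -y, hence lambda_max = Omega*(X^T G(0)) = ||X^T y||_inf.
lam_max = omega_star(X.T @ y)

# Dual feasibility: Theta lies in Delta_X iff Omega*(X^T Theta) <= 1.
# The scaled point y / lambda_max saturates the constraint, so it is feasible.
theta0 = y / lam_max
feasible = omega_star(X.T @ theta0) <= 1 + 1e-10
print(feasible)  # True
```

The small tolerance only absorbs floating-point rounding; mathematically the constraint holds with equality.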
One can easily obtain a control on max_{Θ ∈ B(c,r)} ‖x^{(j)⊤} Θ‖_2 by extending the computation of the support function of a ball [11, Eq. (9)] to the matrix case:
  max_{Θ ∈ B(c,r)} ‖x^{(j)⊤} Θ‖_2 ≤ ‖x^{(j)⊤} c‖_2 + r ‖x^{(j)}‖_2.
Note that here the center c is a matrix in ℝ^{n×q}. We can now state the safe sphere test:
  Sphere test: if ‖x^{(j)⊤} c‖_2 + r ‖x^{(j)}‖_2 < 1, then B̂^{(λ)}_{j,:} = 0.   (8)

2.6 GAP Safe rule description

In this section we derive a GAP Safe screening rule extending the one introduced in [11]. For this, we rely on the strong concavity of the dual objective function and on weak duality.

Finding a radius: Remember that for all i ∈ [n], f_i is differentiable with a 1/γ-Lipschitz gradient. As a consequence, each f_i* is γ-strongly convex [14, Theorem 4.2.2, p. 83], and so D_λ is γλ²-strongly concave: for all (Θ_1, Θ_2) ∈ ℝ^{n×q} × ℝ^{n×q},
  D_λ(Θ_2) ≤ D_λ(Θ_1) + ⟨∇D_λ(Θ_1), Θ_2 − Θ_1⟩ − (γλ²/2) ‖Θ_1 − Θ_2‖².
Specializing this inequality to Θ_1 = Θ̂^{(λ)} and Θ_2 = Θ ∈ ∆_X, one has
  D_λ(Θ) ≤ D_λ(Θ̂^{(λ)}) + ⟨∇D_λ(Θ̂^{(λ)}), Θ − Θ̂^{(λ)}⟩ − (γλ²/2) ‖Θ̂^{(λ)} − Θ‖².
By definition, Θ̂^{(λ)} maximizes D_λ on ∆_X, so ⟨∇D_λ(Θ̂^{(λ)}), Θ − Θ̂^{(λ)}⟩ ≤ 0. This implies D_λ(Θ) ≤ D_λ(Θ̂^{(λ)}) − (γλ²/2) ‖Θ̂^{(λ)} − Θ‖². By weak duality, D_λ(Θ̂^{(λ)}) ≤ P_λ(B) for all B ∈ ℝ^{p×q}, so:
  ∀B ∈ ℝ^{p×q}, ∀Θ ∈ ∆_X, D_λ(Θ) ≤ P_λ(B) − (γλ²/2) ‖Θ̂^{(λ)} − Θ‖²,
and we deduce the following theorem:

Theorem 3. ∀B ∈ ℝ^{p×q}, ∀Θ ∈ ∆_X,
  ‖Θ̂^{(λ)} − Θ‖_2 ≤ √( 2(P_λ(B) − D_λ(Θ)) / (γλ²) ) =: r̂_λ(B, Θ).   (9)

Provided one knows a dual feasible point Θ ∈ ∆_X and a B ∈ ℝ^{p×q}, it is possible to construct a safe sphere centered on Θ with radius r̂_λ(B, Θ). We now only need to build a (relevant) dual point to center such a ball. The results of Section 2.3 ensure that −G(0)/λ_max ∈ ∆_X, but this leads to a static rule, as introduced in [9]. We need a dynamic center to improve the screening as the solver proceeds.

Finding a center: Remember that Θ̂^{(λ)} = −G(X B̂^{(λ)})/λ. Now assume that one has a converging algorithm for the primal problem, i.e., B_k → B̂^{(λ)}.
Hence, a natural choice for creating a dual feasible point Θ_k is to choose it proportional to −G(X B_k), for instance by setting:
  Θ_k = R_k/λ if Ω*(X^⊤ R_k) ≤ λ, and Θ_k = R_k/Ω*(X^⊤ R_k) otherwise, where R_k = −G(X B_k).   (10)
A refined method consists in solving the one-dimensional problem argmax_{Θ ∈ ∆_X ∩ Span(R_k)} D_λ(Θ). In the Lasso and group-Lasso cases [5, 6, 11] such a step is simply a projection onto the intersection of a line and the (polytope) dual set, and can be computed efficiently. However, for logistic regression the computation is more involved, so we have opted for the simpler solution in Equation (10). This still provides converging safe rules (see Proposition 1).

Dynamic GAP Safe rule summarized: We can now state our dynamic GAP Safe rule at the k-th step of an iterative solver:
1. Compute B_k, and then obtain Θ_k and r̂_λ(B_k, Θ_k) using (10).
2. If ‖x^{(j)⊤} Θ_k‖_2 + r̂_λ(B_k, Θ_k) ‖x^{(j)}‖_2 < 1, then set B̂^{(λ)}_{j,:} = 0 and remove x^{(j)} from X.

Dynamic safe screening rules are more efficient than existing methods in practice because their ability to screen increases as the algorithm proceeds. Since sharper and sharper dual regions are available along the iterations, support identification is improved. Provided one relies on a primal converging algorithm, one can show that the dual sequence we propose converges too. The convergence of the primal is unaltered by our GAP Safe rule: screening out unnecessary coefficients of B_k can only decrease its distance to its original limit. Moreover, a practical consequence is that one can observe surprising situations where lowering the tolerance of the solver can reduce the computation time. This can happen in sequential setups.

Proposition 1. Let B_k be the current estimate of B̂^{(λ)} and let Θ_k, defined in Eq. (10), be the current estimate of Θ̂^{(λ)}. Then lim_{k→∞} B_k = B̂^{(λ)} implies lim_{k→∞} Θ_k = Θ̂^{(λ)}.

Hence, if the primal sequence converges to the optimum, our dual sequence converges as well.
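The two steps above can be sketched end-to-end for the Lasso (q = 1, squared loss, γ = 1), where R_k = y − X b_k and the dual objective is D_λ(θ) = λ y·θ − (λ²/2)‖θ‖². This is our own minimal sketch on toy data, not the paper's implementation:

```python
import numpy as np

def lasso_gap_safe(X, y, lam, n_sweeps=300, screen_every=10):
    """Coordinate descent for the Lasso with the dynamic GAP Safe rule:
    dual point from Eq. (10), radius from Theorem 3, sphere test (8)."""
    n, p = X.shape
    beta = np.zeros(p)
    sq_norms = (X ** 2).sum(axis=0)
    col_norms = np.sqrt(sq_norms)
    active = np.ones(p, dtype=bool)
    for k in range(n_sweeps):
        if k % screen_every == 0:
            R = y - X @ beta                              # R_k = -G(X b_k)
            theta = R / max(lam, np.abs(X.T @ R).max())   # dual point, Eq. (10)
            primal = 0.5 * R @ R + lam * np.abs(beta).sum()
            dual = lam * theta @ y - 0.5 * lam ** 2 * theta @ theta
            radius = np.sqrt(2 * max(primal - dual, 0.0)) / lam  # Theorem 3, gamma = 1
            scores = np.abs(X.T @ theta) + radius * col_norms
            active &= scores >= 1.0        # test (8): score < 1 certifies beta_j = 0
            beta[~active] = 0.0
        for j in np.flatnonzero(active):   # one sweep of coordinate descent
            rho = X[:, j] @ (y - X @ beta) + sq_norms[j] * beta[j]
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / sq_norms[j]
    return beta, active

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 60))
y = X[:, :3] @ np.array([1.0, -2.0, 1.5]) + 0.01 * rng.standard_normal(40)
lam = 0.3 * np.abs(X.T @ y).max()
beta, active = lasso_gap_safe(X, y, lam)
print(int(active.sum()), "of 60 features survive screening")
```

As the duality gap shrinks, the radius shrinks with it and more coordinates pass the test, so later sweeps touch only a small active set.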
Moreover, the radius of our safe sphere is (2(P_λ(B_k) − D_λ(Θ_k))/(γλ²))^{1/2}, and by strong duality this radius converges to 0; hence we have certified that our sequence of GAP Safe regions B(Θ_k, r̂_λ(B_k, Θ_k)) is a converging safe rule (in the sense introduced in [11, Definition 1]).

Remark 4. The active set obtained by our GAP Safe rule (i.e., the indices of the non-screened-out variables) converges to the equicorrelation set [20] E_λ := {j ∈ [p] : ‖x^{(j)⊤} Θ̂^{(λ)}‖_2 = 1}, allowing us to identify relevant features early (see Proposition 2 in the supplementary material for more details).

3 Special cases of interest

We now specialize our results to relevant supervised learning problems; see also Table 1.

3.1 Lasso

In the Lasso case q = 1 and the parameter is a vector: B = β ∈ ℝ^p, F(β) = (1/2)‖y − Xβ‖_2² = ∑_{i=1}^n (y_i − x_i^⊤ β)²/2, meaning that f_i(z) = (y_i − z)²/2 and Ω(β) = ‖β‖_1.

3.2 ℓ1/ℓ2 multi-task regression

In the multi-task Lasso, which is a special case of the group-Lasso, we assume that the observation is Y ∈ ℝ^{n×q}, F(B) = (1/2)‖Y − XB‖_2² = (1/2)∑_{i=1}^n ‖Y_{i,:} − x_i^⊤ B‖_2² (i.e., f_i(z) = ‖Y_{i,:} − z‖²/2) and Ω(B) = ∑_{j=1}^p ‖B_{j,:}‖_2. In signal processing, this model is also referred to as the Multiple Measurement Vector (MMV) problem. It allows the same features to be selected jointly for multiple regression tasks [1, 2].

Remark 5. Our framework could easily encompass the case of non-overlapping groups of various sizes and weights presented in [6]. Since our aim is mostly multi-task and multinomial applications, we have rather presented a matrix formulation.

3.3 ℓ1 regularized logistic regression

Here, we consider the formulation given in [7, Chapter 3] for two-class logistic regression. In this context, one observes for each i ∈ [n] a class label c_i ∈ {1, 2}.
This information can be recast as y_i = 1{c_i = 1}, and it is then customary to minimize (1) with
  F(β) = ∑_{i=1}^n ( −y_i x_i^⊤ β + log(1 + exp(x_i^⊤ β)) ),   (11)
where B = β ∈ ℝ^p (i.e., q = 1), f_i(z) = −y_i z + log(1 + exp(z)), and the penalty is simply the ℓ1 norm: Ω(β) = ‖β‖_1. Let us introduce Nh, the (binary) negative entropy function, defined by²
  Nh(x) = x log(x) + (1 − x) log(1 − x) if x ∈ [0, 1], and +∞ otherwise.   (12)
Then one can easily check that f_i*(z_i) = Nh(z_i + y_i) and γ = 4.

² With the convention 0 log(0) = 0.

Table 1: Useful ingredients for computing GAP Safe rules. We use lower case to indicate when the parameters are vectors (i.e., q = 1). The function RowNorm normalizes a (non-negative) matrix row-wise, so that each row sums to one.

  Lasso:             f_i(z) = (y_i − z)²/2;  f_i*(u) = ((y_i + u)² − y_i²)/2;  Ω(B) = ‖β‖_1;  λ_max = ‖X^⊤ y‖_∞;  G(Θ) = θ − y;  γ = 1.
  Multi-task regr.:  f_i(z) = ‖Y_{i,:} − z‖²/2;  f_i*(u) = (‖Y_{i,:} + u‖² − ‖Y_{i,:}‖²)/2;  Ω(B) = ∑_{j=1}^p ‖B_{j,:}‖_2;  λ_max = Ω*(X^⊤ Y);  G(Θ) = Θ − Y;  γ = 1.
  Logistic regr.:    f_i(z) = log(1 + e^z) − y_i z;  f_i*(u) = Nh(u + y_i);  Ω(B) = ‖β‖_1;  λ_max = ‖X^⊤(1_n/2 − y)‖_∞;  G(Θ) = e^θ/(1 + e^θ) − y;  γ = 4.
  Multinomial regr.: f_i(z) = log(∑_{k=1}^q e^{z_k}) − ∑_{k=1}^q Y_{i,k} z_k;  f_i*(u) = NH(u + Y_{i,:});  Ω(B) = ∑_{j=1}^p ‖B_{j,:}‖_2;  λ_max = Ω*(X^⊤(1_{n×q}/q − Y));  G(Θ) = RowNorm(e^Θ) − Y;  γ = 1.

3.4 ℓ1/ℓ2 multinomial logistic regression

We adapt the formulation given in [7, Chapter 3] for multinomial regression. In this context, one observes for each i ∈ [n] a class label c_i ∈ {1, ..., q}. This information can be recast into a matrix Y ∈ ℝ^{n×q} filled with 0's and 1's: Y_{i,k} = 1{c_i = k}. In the same spirit as the multi-task Lasso, a matrix B ∈ ℝ^{p×q} is formed by q vectors encoding the hyperplanes for the linear classification. The multinomial ℓ1/ℓ2 regularized regression reads:
  F(B) = ∑_{i=1}^n ( ∑_{k=1}^q −Y_{i,k} x_i^⊤ B_{:,k} + log( ∑_{k=1}^q exp(x_i^⊤ B_{:,k}) ) ),   (13)
with f_i(z) = ∑_{k=1}^q −Y_{i,k} z_k + log(∑_{k=1}^q exp(z_k)) to recover the formulation in (1). Let us introduce NH, the negative entropy function, defined by (still with the convention 0 log(0) = 0)
  NH(x) = ∑_{i=1}^q x_i log(x_i) if x ∈ Σ_q = {x ∈ ℝ^q_+ : ∑_{i=1}^q x_i = 1}, and +∞ otherwise.   (14)
Again, one can easily check that f_i*(z) = NH(z + Y_{i,:}) and γ = 1.

Remark 6. For multinomial logistic regression, D_λ implicitly encodes the additional constraint Θ ∈ dom D_λ = {Θ' : ∀i ∈ [n], −λΘ'_{i,:} + Y_{i,:} ∈ Σ_q}, where Σ_q is the q-dimensional simplex; see (14). As 0 and R_k/λ both belong to this set, any convex combination of them, such as Θ_k defined in (10), satisfies this additional constraint.

Remark 7. The intercept has been neglected in our models for simplicity. Our GAP Safe framework can also handle such a feature at the cost of more technical details (by adapting the results from [15], for instance). However, in practice, the intercept can be handled in the present formulation by adding a constant column to the design matrix X. The intercept is then regularized; however, if the constant is set high enough, the regularization is small and experiments show that it has little to no impact for high-dimensional problems. This is the strategy used by the Liblinear package [10].

4 Experiments

In this section we present results obtained with the GAP Safe rule. Results are on high dimensional data, both dense and sparse. The implementation was done in Python, with Cython for the performance-critical parts. It is based on the multi-task Lasso implementation of Scikit-Learn [17] and the coordinate descent logistic regression solver in the Lightning software [4]. In all experiments, the coordinate descent algorithm used follows the pseudo-code from [11], with a screening step every 10 iterations.

Figure 1: Experiments on a MEG/EEG brain imaging dataset (dense data with n = 360, p = 22,494 and q = 20). On the left: fraction of active variables as a function of λ and the number of iterations K. The GAP Safe strategy has a much longer range of λ with (red) small active sets. On the right: computation time to reach convergence using different screening strategies.
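The conjugacy claims behind Table 1 can be sanity-checked numerically. Below is our own brute-force check (toy data; all names are ours) that the logistic conjugate is Nh(· + y), and that the multinomial gradient map G(Θ) = RowNorm(e^Θ) − Y gives G(0) = 1/q − Y, from which λ_max follows.

```python
import numpy as np

# Logistic case: f(z) = -y z + log(1 + e^z); check f*(u) = Nh(u + y).
def f_logistic(z, y):
    return -y * z + np.log1p(np.exp(z))

def neg_entropy(x):
    """Binary negative entropy Nh on (0, 1)."""
    return x * np.log(x) + (1 - x) * np.log(1 - x)

y, u = 1.0, -0.3                      # u + y must lie in (0, 1) for Nh to be finite
z = np.linspace(-30, 30, 200001)
conj_numeric = np.max(z * u - f_logistic(z, y))   # brute-force sup_z <z,u> - f(z)
assert abs(conj_numeric - neg_entropy(u + y)) < 1e-4

# Multinomial case: G(Theta) = RowNorm(e^Theta) - Y, so G(0) = 1/q - Y and
# lambda_max = Omega*(X^T (1/q - Y)) with Omega* the max row-wise l2 norm.
def G_multinomial(Theta, Y):
    E = np.exp(Theta - Theta.max(axis=1, keepdims=True))  # stable softmax
    return E / E.sum(axis=1, keepdims=True) - Y

rng = np.random.default_rng(0)
n, p, q = 30, 12, 3
X = rng.standard_normal((n, p))
Y = np.eye(q)[rng.integers(0, q, size=n)]         # one-hot class indicators
assert np.allclose(G_multinomial(np.zeros((n, q)), Y), 1.0 / q - Y)
lam_max = np.linalg.norm(X.T @ G_multinomial(np.zeros((n, q)), Y), axis=1).max()
```

The same grid trick checks the quadratic conjugates of the first two columns as well.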
Note that we have not performed comparisons with the sequential screening rules commonly acknowledged as the state-of-the-art "safe" screening rules (such as the EDPP+ rule [21]), since we can show that this kind of rule is not safe. Indeed, the stopping criterion is based on dual gap accuracy, and comparisons would be unfair since such methods sometimes do not converge to the prescribed accuracy. This is backed up by a counter-example given in the supplementary material. Nevertheless, modifications of such rules, inspired by our GAP Safe rules, can make them safe. However, the obtained sequential rules are still outperformed by our dynamic strategies (see Figure 2 for an illustration).

4.1 ℓ1/ℓ2 multi-task regression

To demonstrate the benefit of the GAP Safe screening rule for a multi-task Lasso problem, we used neuroimaging data. Electroencephalography (EEG) and magnetoencephalography (MEG) are brain imaging modalities that allow active brain regions to be identified. The problem to solve is a multi-task regression problem with squared loss, where every task corresponds to a time instant. Using a multi-task Lasso, one can constrain the recovered sources to be identical during a short time interval [13]. This corresponds to a temporal stationarity assumption. In this experiment we used joint MEG/EEG data with 301 MEG and 59 EEG sensors, leading to n = 360. The number of possible sources is p = 22,494 and the number of time instants is q = 20. With a 1 kHz sampling rate, this is equivalent to saying that the sources stay the same for 20 ms. Results are presented in Figure 1. The GAP Safe rule is compared with the dynamic safe rule from [6]. The experimental setup consists in estimating the solutions of the multi-task Lasso problem for 100 values of λ on a logarithmic grid from λ_max to λ_max/10³. For the experiments on the left, a fixed number of iterations from 2 to 2^11 is allowed for each λ. The fraction of active variables is reported.
Figure 1 illustrates that the GAP Safe rule screens out many more variables than the compared method, and shows the converging nature of our safe regions: the more iterations performed, the more variables the rule allows us to screen. On the right, computation times confirm the effective speed-up. Our rule significantly improves the computation time for all duality gap tolerances from 10^{-2} to 10^{-8}, especially when accurate estimates are required, e.g., for feature selection.

4.2 ℓ1 binary logistic regression

Results on the Leukemia dataset are reported in Figure 2. We compare the dynamic strategy of GAP Safe to a sequential, non-dynamic rule such as Slores [22]. We do not compare to the actual Slores rule as it requires the previous dual optimal solution, which is not available. Slores is indeed not a safe method (see Section B in the supplementary material). Nevertheless, one can observe that dynamic strategies outperform purely sequential ones (see Section C in the supplementary material).

Figure 2: ℓ1 regularized binary logistic regression on the Leukemia dataset (n = 72; m = 7,129; q = 1). Simple sequential and fully dynamic GAP Safe screening rules are compared. On the left: fraction of the variables that are active. Each line corresponds to a fixed number of iterations for which the algorithm is run. On the right: computation times needed to solve the logistic regression path to the desired accuracy with 100 values of λ.

4.3 ℓ1/ℓ2 multinomial logistic regression

We also applied GAP Safe to an ℓ1/ℓ2 multinomial logistic regression problem on a sparse dataset. The data are bag-of-words features extracted from the News20 dataset (TF-IDF, removing English stop words and words occurring only once or more than 95% of the time). One can observe in Figure 3 the dynamic screening and its benefit as more iterations are performed.
GAP Safe leads to a significant speed-up: to reach a duality gap smaller than 10^{-2} on the 100 values of λ, we needed 1,353 s without screening and only 485 s when GAP Safe was activated.

Figure 3: Fraction of the variables that are active for ℓ1/ℓ2 regularized multinomial logistic regression on 3 classes of the News20 dataset (sparse data with n = 2,757; m = 13,010; q = 3). Computation was run on the best 10% of the features using χ² univariate feature selection [16]. Each line corresponds to a fixed number of iterations for which the algorithm is run.

5 Conclusion

This contribution detailed new safe rules for accelerating algorithms solving generalized linear models regularized with ℓ1 and ℓ1/ℓ2 norms. The proposed rules are safe, easy to implement, dynamic and converging, allowing significantly more variables to be discarded than alternative safe rules. The positive impact in terms of computation time was observed on all tested datasets and demonstrated here on a high dimensional regression task using brain imaging data, as well as on binary and multiclass classification problems on dense and sparse data. Extensions to other generalized linear models, e.g., Poisson regression, are expected to reach the same conclusion. Future work could investigate the optimal screening frequency and determining when the screening has correctly identified the support.

Acknowledgment

We acknowledge the support from the Chair Machine Learning for Big Data at Télécom ParisTech and from the Orange/Télécom ParisTech think tank Phi-TAB. This work benefited from the support of the "FMJH Program Gaspard Monge in optimization and operation research", and from the support to this program from EDF.

References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, pages 41–48, 2006.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[3] H. H. Bauschke and P. L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. Springer, New York, 2011.
[4] M. Blondel, K. Seki, and K. Uehara. Block coordinate descent algorithms for large-scale sparse multiclass classification. Machine Learning, 93(1):31–52, 2013.
[5] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval. A dynamic screening principle for the Lasso. In EUSIPCO, 2014.
[6] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval. Dynamic screening: accelerating first-order algorithms for the Lasso and group-Lasso. IEEE Trans. Signal Process., 63(19):20, 2015.
[7] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data: methods, theory and applications. Springer Series in Statistics. Springer, Heidelberg, 2011.
[8] B. Efron, T. Hastie, I. M. Johnstone, and R. Tibshirani. Least angle regression. Ann. Statist., 32(2):407–499, 2004.
[9] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. J. Pacific Optim., 8(4):667–698, 2012.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, 2008.
[11] O. Fercoq, A. Gramfort, and J. Salmon. Mind the duality gap: safer rules for the Lasso. In ICML, pages 333–342, 2015.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[13] A. Gramfort, M. Kowalski, and M. Hämäläinen. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. Phys. Med. Biol., 57(7):1937–1961, 2012.
[14] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex analysis and minimization algorithms II, volume 306. Springer-Verlag, Berlin, 1993.
[15] K. Koh, S.-J. Kim, and S. Boyd. An interior-point method for large-scale l1-regularized logistic regression. J. Mach. Learn. Res., 8(8):1519–1555, 2007.
[16] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA, USA, 1999.
[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: machine learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011.
[18] R. Tibshirani. Regression shrinkage and selection via the lasso. JRSSB, 58(1):267–288, 1996.
[19] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. JRSSB, 74(2):245–266, 2012.
[20] R. J. Tibshirani. The lasso problem and uniqueness. Electron. J. Stat., 7:1456–1490, 2013.
[21] J. Wang, P. Wonka, and J. Ye. Lasso screening rules via dual polytope projection. arXiv preprint arXiv:1211.3966, 2012.
[22] J. Wang, J. Zhou, J. Liu, P. Wonka, and J. Ye. A safe screening rule for sparse logistic regression. In NIPS, pages 1053–1061, 2014.
[23] Z. J. Xiang, Y. Wang, and P. J. Ramadge. Screening tests for lasso problems. arXiv preprint arXiv:1405.4897, 2014.
[24] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. JRSSB, 68(1):49–67, 2006.
The Pareto Regret Frontier for Bandits Tor Lattimore Department of Computing Science University of Alberta, Canada tor.lattimore@gmail.com Abstract Given a multi-armed bandit problem it may be desirable to achieve a smaller-than-usual worst-case regret for some special actions. I show that the price for such unbalanced worst-case regret guarantees is rather high. Specifically, if an algorithm enjoys a worst-case regret of B with respect to some action, then there must exist another action for which the worst-case regret is at least Ω(nK/B), where n is the horizon and K the number of actions. I also give upper bounds in both the stochastic and adversarial settings showing that this result cannot be improved. For the stochastic case the pareto regret frontier is characterised exactly up to constant factors. 1 Introduction The multi-armed bandit is the simplest class of problems that exhibit the exploration/exploitation dilemma. In each time step the learner chooses one of K actions and receives a noisy reward signal for the chosen action. A learner’s performance is measured in terms of the regret, which is the (expected) difference between the rewards it actually received and those it would have received (in expectation) by choosing the optimal action. Prior work on the regret criterion for finite-armed bandits has treated all actions uniformly and has aimed for bounds on the regret that do not depend on which action turned out to be optimal. I take a different approach and ask what can be achieved if some actions are given special treatment. Focussing on worst-case bounds, I ask whether or not it is possible to achieve improved worst-case regret for some actions, and what is the cost in terms of the regret for the remaining actions. Such results may be useful in a variety of cases. For example, a company that is exploring some new strategies might expect an especially small regret if its existing strategy turns out to be (nearly) optimal.
This problem has previously been considered in the experts setting, where the learner is allowed to observe the reward for all actions in every round, not only for the action actually chosen. The earliest work seems to be by Hutter and Poland [2005], where it is shown that the learner can assign a prior weight to each action and pays a worst-case regret of O(√(−n log ρ_i)) for expert i, where ρ_i is the prior belief in expert i and n is the horizon. The uniform regret is obtained by choosing ρ_i = 1/K, which leads to the well-known O(√(n log K)) bound achieved by the exponential weighting algorithm [Cesa-Bianchi, 2006]. The consequence of this is that an algorithm can enjoy a constant regret with respect to a single action while suffering minimally on the remainder. The problem was studied in more detail by Koolen [2013], where (remarkably) the author was able to exactly describe the pareto regret frontier when K = 2. Other related work (also in the experts setting) is where the objective is to obtain an improved regret against a mixture of available experts/actions [Even-Dar et al., 2008, Kapralov and Panigrahy, 2011]. In a similar vein, Sani et al. [2014] showed that algorithms for prediction with expert advice can be combined with minimal cost to obtain the best of both worlds. In the bandit setting I am only aware of the work by Liu and Li [2015], who study the effect of the prior on the regret of Thompson sampling in a special case. In contrast, the lower bound given here applies to all algorithms in a relatively standard setting. The main contribution of this work is a characterisation of the pareto regret frontier (the set of achievable worst-case regret bounds) for stochastic bandits. Let µ_i ∈ R be the unknown mean of the ith arm and assume that sup_{i,j} (µ_i − µ_j) ≤ 1. In each time step the learner chooses an action I_t ∈ {1, . . .
, K} and receives reward g_{I_t,t} = µ_{I_t} + η_t, where η_t is a noise term that I assume to be sampled independently from a 1-subgaussian distribution that may depend on I_t. This model subsumes both Gaussian and Bernoulli (or bounded) rewards. Let π be a bandit strategy, which is a function from histories of observations to an action I_t. Then the n-step expected pseudo-regret with respect to the ith arm is R^π_{µ,i} = nµ_i − E[Σ_{t=1}^n µ_{I_t}], where the expectation is taken with respect to the randomness in the noise and the actions of the policy. Throughout this work n will be fixed, so it is omitted from the notation. The worst-case expected pseudo-regret with respect to arm i is R^π_i = sup_µ R^π_{µ,i}. (1) This means that R^π ∈ R^K is a vector of worst-case pseudo-regrets with respect to each of the arms. Let B ⊂ R^K be the set defined by B = { B ∈ [0, n]^K : B_i ≥ min(n, Σ_{j≠i} n/B_j) for all i }. (2) The boundary of B is denoted by δB. The following theorem shows that δB describes the pareto regret frontier up to constant factors. Theorem. There exist universal constants c_1 = 8 and c_2 = 252 such that: Lower bound: for η_t ∼ N(0, 1) and all strategies π we have c_1(R^π + K) ∈ B. Upper bound: for all B ∈ B there exists a strategy π such that R^π_i ≤ c_2 B_i for all i. Observe that the lower bound relies on the assumption that the noise term be Gaussian, while the upper bound holds for subgaussian noise. The lower bound may be generalised to other noise models such as Bernoulli, but does not hold for all subgaussian noise models. For example, it does not hold if there is no noise (η_t = 0 almost surely). The lower bound also applies to the adversarial framework where the rewards may be chosen arbitrarily.
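As a quick numerical sanity check, membership in the set B of Eq. (2) is easy to test directly. The sketch below is illustrative only; the helper name `in_frontier` and the floating-point tolerance are ours, not the paper's. It verifies that the uniform point B_i = √(n(K−1)) lies in B:

```python
import math

def in_frontier(B, n, tol=1e-9):
    # Membership test for the set B of Eq. (2):
    # B_i >= min(n, sum_{j != i} n / B_j) and 0 <= B_i <= n for every i.
    # A small tolerance absorbs rounding when the constraint holds with equality.
    K = len(B)
    return all(
        0.0 <= B[i] <= n
        and B[i] + tol * n >= min(n, sum(n / B[j] for j in range(K) if j != i))
        for i in range(K)
    )

n, K = 5000, 10
uniform = [math.sqrt(n * (K - 1))] * K  # B_i = sqrt(n(K-1)) satisfies (2) with equality
print(in_frontier(uniform, n))
```

For the uniform point each constraint holds with equality, since Σ_{j≠i} n/B_j = (K−1)·n/√(n(K−1)) = √(n(K−1)) = B_i, which is why the tolerance is needed.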
Although I was not able to derive a matching upper bound in this case, a simple modification of the Exp-γ algorithm [Bubeck and Cesa-Bianchi, 2012] leads to an algorithm with R^π_1 ≤ B_1 and R^π_k ≲ (nK/B_1) log(nK/B_1²) for all k ≥ 2, where the regret is the adversarial version of the expected regret. Details are in the supplementary material. The new results seem elegant, but disappointing. In the experts setting we have seen that the learner can distribute a prior amongst the actions and obtain a bound on the regret depending in a natural way on the prior weight of the optimal action. In contrast, in the bandit setting the learner pays an enormously higher price to obtain a small regret with respect to even a single arm. In fact, the learner must essentially choose a single arm to favour, after which the regret for the remaining arms has very limited flexibility. Unlike in the experts setting, if even a single arm enjoys constant worst-case regret, then the worst-case regret with respect to all other arms is necessarily linear. 2 Preliminaries I use the same notation as Bubeck and Cesa-Bianchi [2012]. Define T_i(t) to be the number of times action i has been chosen after time step t and µ̂_{i,s} to be the empirical estimate of µ_i from the first s times action i was sampled. This means that µ̂_{i,T_i(t−1)} is the empirical estimate of µ_i at the start of the tth round. I use the convention that µ̂_{i,0} = 0. Since the noise model is 1-subgaussian we have, for all ε > 0, P{∃s ≤ t : µ̂_{i,s} − µ_i ≥ ε/s} ≤ exp(−ε²/(2t)). (3) This result is presumably well known, but a proof is included in the supplementary material for convenience. The optimal arm is i* = arg max_i µ_i with ties broken in some arbitrary way. The optimal reward is µ* = max_i µ_i. The gap between the mean rewards of the jth arm and the optimal arm is ∆_j = µ* − µ_j, and ∆_{ji} = µ_i − µ_j. The vector of worst-case regrets is R^π ∈ R^K and has been defined already in Eq. (1). I write R^π ≤ B ∈ R^K if R^π_i ≤ B_i for all i ∈ {1, . . . , K}.
For a vector R^π and x ∈ R we have (R^π + x)_i = R^π_i + x. 3 Understanding the Frontier Before proving the main theorem I briefly describe the features of the regret frontier. First notice that if B_i = √(n(K−1)) for all i, then B_i = √(n(K−1)) = Σ_{j≠i} √(n/(K−1)) = Σ_{j≠i} n/B_j. Thus B ∈ B as expected. This particular B is witnessed up to constant factors by MOSS [Audibert and Bubeck, 2009] and OC-UCB [Lattimore, 2015], but not UCB [Auer et al., 2002], which suffers R^ucb_i ∈ Ω(√(nK log n)). Of course the uniform choice of B is not the only option. Suppose the first arm is special, so B_1 should be chosen especially small. Assume without loss of generality that B_1 ≤ B_2 ≤ . . . ≤ B_K ≤ n. Then by the main theorem we have B_1 ≥ Σ_{i=2}^K n/B_i ≥ Σ_{i=2}^k n/B_i ≥ (k−1) n/B_k. Therefore B_k ≥ (k−1) n/B_1. (4) This also proves the claim in the abstract, since it implies that B_K ≥ (K−1)n/B_1. If B_1 is fixed, then choosing B_k = (k−1)n/B_1 does not lie on the frontier, because Σ_{k=2}^K n/B_k = Σ_{k=2}^K B_1/(k−1) ∈ Ω(B_1 log K). However, if H = Σ_{k=2}^K 1/(k−1) ∈ Θ(log K), then choosing B_k = (k−1)nH/B_1 does lie on the frontier and is a factor of log K away from the lower bound given in Eq. (4). Therefore, up to a log K factor, points on the regret frontier are characterised entirely by a permutation determining the order of worst-case regrets and the smallest worst-case regret. Perhaps the most natural choice of B (assuming again that B_1 ≤ . . . ≤ B_K) is B_1 = n^p and B_k = (k−1) n^{1−p} H for k > 1. For p = 1/2 this leads to a bound that is at most √K log K worse than that obtained by MOSS and OC-UCB while being a factor of √K better for a select few. Assumptions The assumption that ∆_i ∈ [0, 1] is used to avoid annoying boundary problems caused by the fact that time is discrete. This means that if ∆_i is extremely large, then even a single sample from this arm can cause a big regret bound.
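The arithmetic behind the frontier point B_1 = n^p, B_k = (k−1)n^{1−p}H can be checked directly. The snippet below is purely illustrative; it confirms that, with H = Σ_{k=2}^K 1/(k−1), the frontier constraint for arm 1 holds with equality, exactly as in the derivation above:

```python
n, K, p = 5000, 10, 0.5
H = sum(1.0 / (k - 1) for k in range(2, K + 1))      # H = sum_{k=2}^K 1/(k-1), Theta(log K)
B1 = n ** p
B = [B1] + [(k - 1) * n ** (1 - p) * H for k in range(2, K + 1)]

# The computation in the text: sum_{k=2}^K n/B_k = (n^p / H) * sum_{k=2}^K 1/(k-1) = n^p = B_1,
# so arm 1's constraint in Eq. (2) is met with equality (up to floating-point rounding).
lhs = sum(n / b for b in B[1:])
print(abs(lhs - B1) < 1e-9 * n)
```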
This assumption is already quite common; for example, a worst-case regret of Ω(√(Kn)) clearly does not hold if the gaps are permitted to be unbounded. Unfortunately there is no perfect resolution to this annoyance. Most elegant would be to allow time to be continuous with actions taken up to stopping times. Otherwise you have to deal with the discretisation/boundary problem with special cases, or make assumptions as I have done here. 4 Lower Bounds Theorem 1. Assume η_t ∼ N(0, 1) is sampled from a standard Gaussian. Let π be an arbitrary strategy; then 8(R^π + K) ∈ B. Proof. Assume without loss of generality that R^π_1 = min_i R^π_i (if this is not the case, then simply re-order the actions). If R^π_1 > n/8, then the result is trivial. From now on assume R^π_1 ≤ n/8. Let c = 4 and define ε_k = min(1/2, cR^π_k/n) ≤ 1/2. Define K vectors µ_1, . . . , µ_K ∈ R^K by (µ_k)_j = 1/2 + { 0 if j = 1; ε_k if j = k ≠ 1; −ε_j otherwise }. Therefore the optimal action for the bandit with means µ_k is k. Let A = {k : R^π_k ≤ n/8} and A′ = {k : k ∉ A}, and assume k ∈ A. Then R^π_k ≥(a) R^π_{µ_k,k} ≥(b) ε_k E^π_{µ_k}[Σ_{j≠k} T_j(n)] =(c) ε_k (n − E^π_{µ_k} T_k(n)) =(d) cR^π_k (n − E^π_{µ_k} T_k(n)) / n, where (a) follows since R^π_k is the worst-case regret with respect to arm k, (b) since the gap between the means of the kth arm and any other arm is at least ε_k (note that this is also true for k = 1, since ε_1 = min_k ε_k), (c) follows from the fact that Σ_i T_i(n) = n, and (d) from the definition of ε_k. Therefore n(1 − 1/c) ≤ E^π_{µ_k} T_k(n). (5) Therefore for k ≠ 1 with k ∈ A we have n(1 − 1/c) ≤ E^π_{µ_k} T_k(n) ≤(a) E^π_{µ_1} T_k(n) + nε_k √(E^π_{µ_1} T_k(n)) ≤(b) n − E^π_{µ_1} T_1(n) + nε_k √(E^π_{µ_1} T_k(n)) ≤(c) n/c + nε_k √(E^π_{µ_1} T_k(n)), where (a) follows from standard entropy inequalities and a similar argument as used by Auer et al. [1995] (details in the supplementary material), (b) since k ≠ 1 and E^π_{µ_1} T_1(n) + E^π_{µ_1} T_k(n) ≤ n, and (c) by Eq. (5).
Therefore E^π_{µ_1} T_k(n) ≥ (1 − 2/c)/ε_k², which implies that R^π_1 ≥ R^π_{µ_1,1} = Σ_{k=2}^K ε_k E^π_{µ_1} T_k(n) ≥ Σ_{k∈A−{1}} (1 − 2/c)/ε_k = (1/8) Σ_{k∈A−{1}} n/R^π_k. Therefore for all i ∈ A we have 8R^π_i ≥ Σ_{k∈A−{1}} (n/R^π_k) · (R^π_i/R^π_1) ≥ Σ_{k∈A−{i}} n/R^π_k. Therefore 8R^π_i + 8K ≥ Σ_{k≠i} n/R^π_k + 8K − Σ_{k∈A′−{i}} n/R^π_k ≥ Σ_{k≠i} n/R^π_k, which implies that 8(R^π + K) ∈ B as required. 5 Upper Bounds I now show that the lower bound derived in the previous section is tight up to constant factors. The algorithm is a generalisation of MOSS [Audibert and Bubeck, 2009] with two modifications. First, the widths of the confidence bounds are biased in a non-uniform way, and second, the upper confidence bounds are shifted. The new algorithm is functionally identical to MOSS in the special case that B_i is uniform. Define log₊(x) = max{0, log(x)}. Algorithm 1: Unbalanced MOSS. 1: Input: n and B_1, . . . , B_K. 2: n_i = n²/B_i² for all i. 3: for t ∈ 1, . . . , n do 4: I_t = arg max_i [ µ̂_{i,T_i(t−1)} + √( (4/T_i(t−1)) log₊(n_i/T_i(t−1)) ) − √(1/n_i) ]. 5: end for. Theorem 2. Let B ∈ B; then the strategy π given in Algorithm 1 satisfies R^π ≤ 252B. Corollary 3. For all µ the following hold: 1. R^π_{µ,i*} ≤ 252B_{i*}. 2. R^π_{µ,i*} ≤ min_i (n∆_i + 252B_i). The second part of the corollary is useful when B_{i*} is large, but there exists an arm for which n∆_i and B_i are both small. The proof of Theorem 2 requires a few lemmas. The first is a somewhat standard concentration inequality that follows from a combination of the peeling argument and Doob’s maximal inequality. The proof may be found in the supplementary material. Lemma 4. Let Z_i = max_{1≤s≤n} [ µ_i − µ̂_{i,s} − √((4/s) log₊(n_i/s)) ]. Then P{Z_i ≥ ∆} ≤ 20/(n_i ∆²) for all ∆ > 0. In the analysis of traditional bandit algorithms the gap ∆_{ji} measures how quickly the algorithm can detect the difference between arms i and j. By design, however, Algorithm 1 negatively biases its estimate of the empirical mean of arm i by √(1/n_i).
This has the effect of shifting the gaps, which I denote by ∆̄_{ji} and define to be ∆̄_{ji} = ∆_{ji} + √(1/n_j) − √(1/n_i) = µ_i − µ_j + √(1/n_j) − √(1/n_i). Lemma 5. Define the stopping time τ_{ji} by τ_{ji} = min{ s : µ̂_{j,s} + √((4/s) log₊(n_j/s)) ≤ µ_j + ∆̄_{ji}/2 }. If Z_i < ∆̄_{ji}/2, then T_j(n) ≤ τ_{ji}. Proof. Let t be the first time step such that T_j(t−1) = τ_{ji}. Then µ̂_{j,T_j(t−1)} + √( (4/T_j(t−1)) log₊(n_j/T_j(t−1)) ) − √(1/n_j) ≤ µ_j + ∆̄_{ji}/2 − √(1/n_j) = µ_j + ∆̄_{ji} − ∆̄_{ji}/2 − √(1/n_j) = µ_i − √(1/n_i) − ∆̄_{ji}/2 < µ̂_{i,T_i(t−1)} + √( (4/T_i(t−1)) log₊(n_i/T_i(t−1)) ) − √(1/n_i), which implies that arm j will not be chosen at time step t, and so also not at any subsequent time step by the same argument and induction. Therefore T_j(n) ≤ τ_{ji}. Lemma 6. If ∆̄_{ji} > 0, then Eτ_{ji} ≤ 40/∆̄²_{ji} + (64/∆̄²_{ji}) ProductLog(n_j ∆̄²_{ji}/64). Proof. Let s_0 be defined by s_0 = ⌈ (64/∆̄²_{ji}) ProductLog(n_j ∆̄²_{ji}/64) ⌉, which implies √((4/s_0) log₊(n_j/s_0)) ≤ ∆̄_{ji}/4. Therefore Eτ_{ji} = Σ_{s=1}^n P{τ_{ji} ≥ s} ≤ 1 + Σ_{s=1}^{n−1} P{ µ̂_{j,s} − µ_j ≥ ∆̄_{ji}/2 − √((4/s) log₊(n_j/s)) } ≤ 1 + s_0 + Σ_{s=s_0+1}^{n−1} P{ µ̂_{j,s} − µ_j ≥ ∆̄_{ji}/4 } ≤ 1 + s_0 + Σ_{s=s_0+1}^∞ exp(−s ∆̄²_{ji}/32) ≤ 1 + s_0 + 32/∆̄²_{ji} ≤ 40/∆̄²_{ji} + (64/∆̄²_{ji}) ProductLog(n_j ∆̄²_{ji}/64), where the last inequality follows since ∆̄_{ji} ≤ 2. Proof of Theorem 2. Let ∆ = 2/√n_i and A = {j : ∆_{ji} > ∆}. Then for j ∈ A we have ∆_{ji} ≤ 2∆̄_{ji} and ∆̄_{ji} ≥ √(1/n_i) + √(1/n_j). Letting ∆′ = √(1/n_i), we have R^π_{µ,i} = E[ Σ_{j=1}^K ∆_{ji} T_j(n) ] ≤ n∆ + E[ Σ_{j∈A} ∆_{ji} T_j(n) ] ≤(a) 2B_i + E[ Σ_{j∈A} ∆_{ji} τ_{ji} + n max_{j∈A} { ∆_{ji} : Z_i ≥ ∆̄_{ji}/2 } ] ≤(b) 2B_i + Σ_{j∈A} ( 80/∆̄_{ji} + (128/∆̄_{ji}) ProductLog(n_j ∆̄²_{ji}/64) ) + 4nE[Z_i 1{Z_i ≥ ∆′}] ≤(c) 2B_i + Σ_{j∈A} 90√n_j + 4nE[Z_i 1{Z_i ≥ ∆′}], where (a) follows by using Lemma 5 to bound T_j(n) ≤ τ_{ji} when Z_i < ∆̄_{ji}/2; on the other hand, the total number of pulls of arms j for which Z_i ≥ ∆̄_{ji}/2 is at most n. (b) follows by bounding τ_{ji} in expectation using Lemma 6. (c) follows from basic calculus and because for j ∈ A we have ∆̄_{ji} ≥ √(1/n_i). All that remains is to bound the expectation.
4nE[Z_i 1{Z_i ≥ ∆′}] ≤ 4n∆′ P{Z_i ≥ ∆′} + 4n ∫_{∆′}^∞ P{Z_i ≥ z} dz ≤ 160n/(∆′ n_i) = 160n/√n_i = 160B_i, where I have used Lemma 4 and simple identities. Putting it together we obtain R^π_{µ,i} ≤ 2B_i + Σ_{j∈A} 90√n_j + 160B_i ≤ 252B_i, where I applied the assumption B ∈ B and so Σ_{j≠i} √n_j = Σ_{j≠i} n/B_j ≤ B_i. The above proof may be simplified in the special case that B is uniform, where we recover the minimax regret of MOSS, but with perhaps a simpler proof than was given originally by Audibert and Bubeck [2009]. On Logarithmic Regret In a recent technical report I demonstrated empirically that MOSS suffers sub-optimal problem-dependent regret in terms of the minimum gap [Lattimore, 2015]. Specifically, it can happen that R^moss_{µ,i*} ∈ Ω((K/∆_min) log n), (6) where ∆_min = min_{i:∆_i>0} ∆_i. On the other hand, the order-optimal asymptotic regret can be significantly smaller. Specifically, UCB by Auer et al. [2002] satisfies R^ucb_{µ,i*} ∈ O( Σ_{i:∆_i>0} (1/∆_i) log n ), (7) which for unequal gaps can be much smaller than Eq. (6) and is asymptotically order-optimal [Lai and Robbins, 1985]. The problem is that MOSS explores only enough to obtain minimax regret, but sometimes obtains minimax regret even when a more conservative algorithm would do better. It is worth remarking that this effect is harder to observe than one might think. The example given in the aforementioned technical report is carefully tuned to exploit this failing, but still requires n = 10⁹ and K = 10³ before significant problems arise. In all other experiments MOSS was performing admirably in comparison to UCB. All these problems can be avoided by modifying UCB rather than MOSS. The cost is a factor of O(√log n). The algorithm is similar to Algorithm 1, but chooses the action that maximises the following index: I_t = arg max_i [ µ̂_{i,T_i(t−1)} + √( (2 + ε) log t / T_i(t−1) ) − √( (log n) / n_i ) ], where ε > 0 is a fixed arbitrary constant. Theorem 7.
If π is the strategy of unbalanced UCB with n_i = n²/B_i² and B ∈ B, then the regret of the unbalanced UCB satisfies: 1. (Problem-independent regret) R^π_{µ,i*} ∈ O(B_{i*} √log n). 2. (Problem-dependent regret) Let A = { i : ∆_i ≥ 2√((log n)/n_{i*}) }. Then R^π_{µ,i*} ∈ O( B_{i*} √log n · 1{A ≠ ∅} + Σ_{i∈A} (1/∆_i) log n ). The proof is deferred to the supplementary material. The indicator function in the problem-dependent bound vanishes for sufficiently large n provided n_{i*} ∈ ω(log(n)), which is equivalent to B_{i*} ∈ o(n/√log n). Thus for reasonable choices of B_1, . . . , B_K the algorithm is going to enjoy the same asymptotic performance as UCB. Theorem 7 may be proven for any index-based algorithm for which it can be shown that ET_i(n) ∈ O((1/∆_i²) log n), which includes (for example) KL-UCB [Cappé et al., 2013] and Thompson sampling (see analysis by Agrawal and Goyal [2012a,b] and the original paper by Thompson [1933]), but not OC-UCB [Lattimore, 2015] or MOSS [Audibert and Bubeck, 2009]. Experimental Results I compare MOSS and unbalanced MOSS in two simple simulated examples, both with horizon n = 5000. Each data point is an empirical average of ∼10⁴ i.i.d. samples, so error bars are too small to see. Code/data is available in the supplementary material. The first experiment has K = 2 arms and B_1 = n^{1/3} and B_2 = n^{2/3}. I plotted the results for µ = (0, −∆) for varying ∆. As predicted, the new algorithm performs significantly better than MOSS for positive ∆, and significantly worse otherwise (Fig. 1). The second experiment has K = 10 arms. This time B_1 = √n and B_k = (k−1)H√n with H = Σ_{k=1}^9 1/k. Results are shown for µ_k = ∆·1{k = i*} for ∆ ∈ [0, 1/2] and i* ∈ {1, . . . , 10}. Again, the results agree with the theory. The unbalanced algorithm is superior to MOSS for i* ∈ {1, 2} and inferior otherwise (Fig. 2). [Figure 1: regret of MOSS and Unbalanced MOSS as a function of ∆.]
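A minimal simulation in the spirit of the first experiment can be sketched as follows. The function `unbalanced_moss` is our illustrative implementation of Algorithm 1 with Gaussian noise, not the author's code (which is in the supplementary material); the forced initial pull of each arm is a standard implementation convention, assumed here to avoid an undefined index at T_i = 0:

```python
import math
import random

def unbalanced_moss(mu, n, B, rng):
    """One run of Algorithm 1 (Unbalanced MOSS) with N(0, 1) noise.
    Returns the pseudo-regret n * max(mu) - sum of means of the pulled arms."""
    K = len(mu)
    ni = [n * n / (b * b) for b in B]          # n_i = n^2 / B_i^2 (Algorithm 1, line 2)
    counts = [0] * K
    sums = [0.0] * K
    log_plus = lambda v: max(0.0, math.log(v))
    pulled = 0.0
    for _ in range(n):
        def index(i):
            if counts[i] == 0:
                return float("inf")            # pull each arm once before using the index
            mean = sums[i] / counts[i]
            width = math.sqrt(4.0 / counts[i] * log_plus(ni[i] / counts[i]))
            return mean + width - math.sqrt(1.0 / ni[i])
        i = max(range(K), key=index)
        counts[i] += 1
        sums[i] += mu[i] + rng.gauss(0.0, 1.0)
        pulled += mu[i]
    return n * max(mu) - pulled

rng = random.Random(0)
n = 2000
B = [n ** (1 / 3), n ** (2 / 3)]               # the first experiment's choice, K = 2
regret = unbalanced_moss([0.0, -0.3], n, B, rng)
```

Since every pulled arm has mean at most max(µ), the returned pseudo-regret is always non-negative and at most n times the gap.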
[Figure 2: regret as a function of θ = ∆ + (i* − 1)/2.] Sadly, the experiments serve only to highlight the plight of the biased learner, which suffers significantly worse results than its unbiased counterpart for most actions. 6 Discussion I have shown that the cost of favouritism for multi-armed bandit algorithms is rather serious. If an algorithm exhibits a small worst-case regret for a specific action, then the worst-case regret of the remaining actions is necessarily significantly larger than the well-known uniform worst-case bound of Ω(√(Kn)). This unfortunate result is in stark contrast to the experts setting, for which there exist algorithms that suffer constant regret with respect to a single expert at almost no cost for the remainder. Surprisingly, the best achievable (non-uniform) worst-case bounds are determined up to a permutation almost entirely by the value of the smallest worst-case regret. There are some interesting open questions. Most notably, in the adversarial setting I am not sure if the upper or lower bound is tight (or neither). It would also be nice to know if the constant factors can be determined exactly asymptotically, but so far this has not been done even in the uniform case. For the stochastic setting it is natural to ask if the OC-UCB algorithm can also be modified. Intuitively one would expect this to be possible, but it would require re-working the very long proof. Acknowledgements I am indebted to the very careful reviewers who made many suggestions for improving this paper. Thank you! References Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2012a. Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the Conference on Learning Theory (COLT), 2012b. Jean-Yves Audibert and Sébastien Bubeck.
Minimax policies for adversarial and stochastic bandits. In COLT, pages 217–226, 2009. Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pages 322–331. IEEE, 1995. Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002. Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated, 2012. ISBN 9781601986269. Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013. Nicolò Cesa-Bianchi. Prediction, Learning, and Games. Cambridge University Press, 2006. Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008. Marcus Hutter and Jan Poland. Adaptive online prediction by following the perturbed leader. The Journal of Machine Learning Research, 6:639–660, 2005. Michael Kapralov and Rina Panigrahy. Prediction strategies without loss. In Advances in Neural Information Processing Systems, pages 828–836, 2011. Wouter M. Koolen. The Pareto regret frontier. In Advances in Neural Information Processing Systems, pages 863–871, 2013. Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985. Tor Lattimore. Optimally confident UCB: Improved regret for finite-armed bandits. Technical report, 2015. URL http://arxiv.org/abs/1507.07880. Che-Yu Liu and Lihong Li. On the prior sensitivity of Thompson sampling. arXiv preprint arXiv:1506.03378, 2015.
Amir Sani, Gergely Neu, and Alessandro Lazaric. Exploiting easy data in online optimization. In Advances in Neural Information Processing Systems, pages 810–818, 2014. William Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Measuring Sample Quality with Stein’s Method Jackson Gorham Department of Statistics Stanford University Lester Mackey Department of Statistics Stanford University Abstract To improve the efficiency of Monte Carlo estimation, practitioners are turning to biased Markov chain Monte Carlo procedures that trade off asymptotic exactness for computational speed. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality measure based on Stein’s method that bounds the discrepancy between sample and target expectations over a large class of test functions. We use our tool to compare exact, biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference. 1 Introduction When faced with a complex target distribution, one often turns to Markov chain Monte Carlo (MCMC) [1] to approximate intractable expectations E_P[h(Z)] = ∫_X p(x)h(x)dx with asymptotically exact sample estimates E_Q[h(X)] = Σ_{i=1}^n q(x_i)h(x_i). These complex targets commonly arise as posterior distributions in Bayesian inference and as candidate distributions in maximum likelihood estimation [2]. In recent years, researchers [e.g., 3, 4, 5] have introduced asymptotic bias into MCMC procedures to trade off asymptotic correctness for improved sampling speed. The rationale is that more rapid sampling can reduce the variance of a Monte Carlo estimate and hence outweigh the bias introduced.
However, the added flexibility introduces new challenges for sampler and parameter selection, since standard sample quality measures, like effective sample size, asymptotic variance, trace and mean plots, and pooled and within-chain variance diagnostics, presume eventual convergence to the target [1] and hence do not account for asymptotic bias. To address this shortcoming, we develop a new measure of sample quality suitable for comparing asymptotically exact, asymptotically biased, and even deterministic sample sequences. The quality measure is based on Stein’s method and is attainable by solving a linear program. After outlining our design criteria in Section 2, we relate the convergence of the quality measure to that of standard probability metrics in Section 3, develop a streamlined implementation based on geometric spanners in Section 4, and illustrate applications to hyperparameter selection, convergence rate assessment, and the quantification of bias-variance tradeoffs in posterior inference in Section 5. We discuss related work in Section 6 and defer all proofs to the appendix. Notation We denote the ℓ₂, ℓ₁, and ℓ∞ norms on R^d by ‖·‖₂, ‖·‖₁, and ‖·‖∞ respectively. We will often refer to a generic norm ‖·‖ on R^d with associated dual norms ‖w‖* ≜ sup_{v∈R^d:‖v‖=1} ⟨w, v⟩ for vectors w ∈ R^d, ‖M‖* ≜ sup_{v∈R^d:‖v‖=1} ‖Mv‖* for matrices M ∈ R^{d×d}, and ‖T‖* ≜ sup_{v∈R^d:‖v‖=1} ‖T[v]‖* for tensors T ∈ R^{d×d×d}. We denote the j-th standard basis vector by e_j, the partial derivative ∂/∂x_k by ∇_k, and the gradient of any R^d-valued function g by ∇g, with components (∇g(x))_{jk} ≜ ∇_k g_j(x). 2 Quality Measures for Samples Consider a target distribution P with open convex support X ⊆ R^d and continuously differentiable density p. We assume that p is known up to its normalizing constant and that exact integration under P is intractable for most functions of interest. We will approximate expectations under P with the aid of a weighted sample, a collection of distinct sample points x_1, . . .
, x_n ∈ X with weights q(x_i) encoded in a probability mass function q. The probability mass function q induces a discrete distribution Q and an approximation E_Q[h(X)] = Σ_{i=1}^n q(x_i)h(x_i) for any target expectation E_P[h(Z)]. We make no assumption about the provenance of the sample points; they may arise as random draws from a Markov chain or even be deterministically selected. Our goal is to compare the fidelity of different samples approximating a common target distribution. That is, we seek to quantify the discrepancy between E_Q and E_P in a manner that (i) detects when a sequence of samples is converging to the target, (ii) detects when a sequence of samples is not converging to the target, and (iii) is computationally feasible. A natural starting point is to consider the maximum deviation between sample and target expectations over a class of real-valued test functions H, d_H(Q, P) = sup_{h∈H} |E_Q[h(X)] − E_P[h(Z)]|. (1) When the class of test functions is sufficiently large, the convergence of d_H(Q_m, P) to zero implies that the sequence of sample measures (Q_m)_{m≥1} converges weakly to P. In this case, the expression (1) is termed an integral probability metric (IPM) [6]. By varying the class of test functions H, we can recover many well-known probability metrics as IPMs, including the total variation distance, generated by H = {h : X → R | sup_{x∈X} |h(x)| ≤ 1}, and the Wasserstein distance (also known as the Kantorovich-Rubinstein or earth mover’s distance), d_{W‖·‖}, generated by H = W‖·‖ ≜ {h : X → R | sup_{x≠y∈X} |h(x) − h(y)|/‖x − y‖ ≤ 1}. The primary impediment to adopting an IPM as a sample quality measure is that exact computation is typically infeasible when generic integration under P is intractable. However, we could skirt this intractability by focusing on classes of test functions with known expectation under P.
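In one dimension the Wasserstein IPM between two empirical distributions is cheap to compute, which makes the intractability point concrete: evaluating d_H(Q, P) still requires integrating under P, here crudely approximated by a large exact reference sample, an option unavailable for the intractable targets considered in this paper. A sketch, assuming SciPy's `wasserstein_distance` is available:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
target = rng.standard_normal(100_000)       # stand-in for exact integration under P = N(0, 1)
good = rng.standard_normal(1_000)           # an (asymptotically exact) sample from P
biased = rng.standard_normal(1_000) + 1.0   # a sample from a shifted, biased distribution

d_good = wasserstein_distance(good, target)
d_biased = wasserstein_distance(biased, target)
print(d_good, d_biased)                     # the biased sample is much farther from P
```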
For example, if we consider only test functions h for which E_P[h(Z)] = 0, then the IPM value d_H(Q, P) is the solution of an optimization problem depending on Q alone. This, at a high level, is our strategy, but many questions remain. How do we select the class of test functions h? How do we know that the resulting IPM will track convergence and non-convergence of a sample sequence (Desiderata (i) and (ii))? How do we solve the resulting optimization problem in practice (Desideratum (iii))? To address the first two of these questions, we draw upon tools from Charles Stein’s method of characterizing distributional convergence. We return to the third question in Section 4. 3 Stein’s Method Stein’s method [7] for characterizing convergence in distribution classically proceeds in three steps: 1. Identify a real-valued operator T acting on a set G of R^d-valued functions of X for which E_P[(T g)(Z)] = 0 for all g ∈ G. (2) (One commonly considers real-valued functions g when applying Stein’s method, but we will find it more convenient to work with vector-valued g.) Together, T and G define the Stein discrepancy, S(Q, T, G) ≜ sup_{g∈G} |E_Q[(T g)(X)]| = sup_{g∈G} |E_Q[(T g)(X)] − E_P[(T g)(Z)]| = d_{T G}(Q, P), an IPM-type quality measure with no explicit integration under P. 2. Lower bound the Stein discrepancy by a familiar convergence-determining IPM d_H. This step can be performed once, in advance, for large classes of target distributions and ensures that, for any sequence of probability measures (µ_m)_{m≥1}, S(µ_m, T, G) converges to zero only if d_H(µ_m, P) does (Desideratum (ii)). 3. Upper bound the Stein discrepancy by any means necessary to demonstrate convergence to zero under suitable conditions (Desideratum (i)). In our case, the universal bound established in Section 3.3 will suffice. While Stein’s method is typically employed as an analytical tool, we view the Stein discrepancy as a promising candidate for a practical sample quality measure.
Indeed, in Section 4, we will adopt an optimization perspective and develop efficient procedures to compute the Stein discrepancy for any sample measure Q and appropriate choices of T and G. First, we assess the convergence properties of an equivalent Stein discrepancy in the subsections to follow. 3.1 Identifying a Stein Operator The generator method of Barbour [8] provides a convenient and general means of constructing operators T which produce mean-zero functions under P (2). Let (Z_t)_{t≥0} represent a Markov process with unique stationary distribution P. Then the infinitesimal generator A of (Z_t)_{t≥0}, defined by (Au)(x) = lim_{t→0} (E[u(Z_t) | Z_0 = x] − u(x))/t for u : R^d → R, satisfies E_P[(Au)(Z)] = 0 under mild conditions on A and u. Hence, a candidate operator T can be constructed from any infinitesimal generator. For example, the overdamped Langevin diffusion, defined by the stochastic differential equation dZ_t = (1/2)∇log p(Z_t)dt + dW_t for (W_t)_{t≥0} a Wiener process, gives rise to the generator (A_P u)(x) = (1/2)⟨∇u(x), ∇log p(x)⟩ + (1/2)⟨∇, ∇u(x)⟩. (3) After substituting g for (1/2)∇u, we obtain the associated Stein operator (T_P g)(x) ≜ ⟨g(x), ∇log p(x)⟩ + ⟨∇, g(x)⟩. (4) The Stein operator T_P is particularly well-suited to our setting, as it depends on P only through the derivative of its log density and hence is computable even when the normalizing constant of p is not. If we let ∂X denote the boundary of X (an empty set when X = R^d) and n(x) represent the outward unit normal vector to the boundary at x, then we may define the classical Stein set G‖·‖ ≜ { g : X → R^d | sup_{x≠y∈X} max( ‖g(x)‖*, ‖∇g(x)‖*, ‖∇g(x) − ∇g(y)‖*/‖x − y‖ ) ≤ 1 and ⟨g(x), n(x)⟩ = 0 for all x ∈ ∂X with n(x) defined } of sufficiently smooth functions satisfying a Neumann-type boundary condition. The following proposition, a consequence of integration by parts, shows that G‖·‖ is a suitable domain for T_P. Proposition 1. If E_P[‖∇log p(Z)‖] < ∞, then E_P[(T_P g)(Z)] = 0 for all g ∈ G‖·‖.
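Proposition 1 is easy to sanity-check by Monte Carlo for a tractable target. The sketch below uses P = N(0, 1), for which ∇log p(z) = −z, and an arbitrarily chosen smooth bounded function g(z) = tanh(z) (our choice, not the paper's); Eq. (4) then reads (T_P g)(z) = tanh(z)·(−z) + (1 − tanh(z)²):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_000)  # draws from the target P = N(0, 1)

# (T_P g)(z) for g(z) = tanh(z): <g, grad log p> + divergence term g'(z).
stein_values = np.tanh(z) * (-z) + (1.0 - np.tanh(z) ** 2)
print(stein_values.mean())          # approximately 0, as Proposition 1 predicts

# Evaluated on draws from a shifted distribution N(1, 1), the mean is far from zero,
# which is what makes the operator useful as a discrepancy ingredient.
x = z + 1.0
shifted_values = np.tanh(x) * (-x) + (1.0 - np.tanh(x) ** 2)
print(shifted_values.mean())        # clearly negative
```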
Together, $\mathcal{T}_P$ and $\mathcal{G}_{\|\cdot\|}$ form the classical Stein discrepancy $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|})$, our chief object of study.

3.2 Lower Bounding the Classical Stein Discrepancy

In the univariate setting (d = 1), it is known for a wide variety of targets P that the classical Stein discrepancy $\mathcal{S}(\mu_m, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|})$ converges to zero only if the Wasserstein distance $d_{\mathcal{W}_{\|\cdot\|}}(\mu_m, P)$ does [9, 10]. In the multivariate setting, analogous statements are available for multivariate Gaussian targets [11, 12, 13], but few other target distributions have been analyzed. To extend the reach of the multivariate literature, we show in Theorem 2 that the classical Stein discrepancy also determines Wasserstein convergence for a large class of strongly log-concave densities, including the Bayesian logistic regression posterior under Gaussian priors.

Theorem 2 (Stein Discrepancy Lower Bound for Strongly Log-concave Densities). If $\mathcal{X} = \mathbb{R}^d$, and $\log p$ is strongly concave with third and fourth derivatives bounded and continuous, then, for any probability measures $(\mu_m)_{m \geq 1}$, $\mathcal{S}(\mu_m, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}) \to 0$ only if $d_{\mathcal{W}_{\|\cdot\|}}(\mu_m, P) \to 0$.

We emphasize that the sufficient conditions in Theorem 2 are certainly not necessary for lower bounding the classical Stein discrepancy. We hope that the theorem and its proof will provide a template for lower bounding $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|})$ for other large classes of multivariate target distributions.

3.3 Upper Bounding the Classical Stein Discrepancy

We next establish sufficient conditions for the convergence of the classical Stein discrepancy to zero.

Proposition 3 (Stein Discrepancy Upper Bound). If $X \sim Q$ and $Z \sim P$ with $\nabla \log p(Z)$ integrable, then
$$\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}) \leq \mathbb{E}[\|X - Z\|] + \mathbb{E}[\|\nabla \log p(X) - \nabla \log p(Z)\|] + \mathbb{E}\big[\big\|\nabla \log p(Z)(X - Z)^\top\big\|\big]$$
$$\leq \mathbb{E}[\|X - Z\|] + \mathbb{E}[\|\nabla \log p(X) - \nabla \log p(Z)\|] + \sqrt{\mathbb{E}\big[\|\nabla \log p(Z)\|^2\big]\,\mathbb{E}\big[\|X - Z\|^2\big]}.$$
One implication of Proposition 3 is that $\mathcal{S}(Q_m, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|})$ converges to zero whenever $X_m \sim Q_m$ converges in mean-square to $Z \sim P$ and $\nabla \log p(X_m)$ converges in mean to $\nabla \log p(Z)$.
3.4 Extension to Non-uniform Stein Sets

The analyses and algorithms in this paper readily accommodate non-uniform Stein sets of the form
$$\mathcal{G}^{c_{1:3}}_{\|\cdot\|} \triangleq \Big\{ g : \mathcal{X} \to \mathbb{R}^d \;\Big|\; \sup_{x \neq y \in \mathcal{X}} \max\Big( \frac{\|g(x)\|^*}{c_1},\, \frac{\|\nabla g(x)\|^*}{c_2},\, \frac{\|\nabla g(x) - \nabla g(y)\|^*}{c_3\|x - y\|} \Big) \leq 1 \text{ and } \langle g(x), n(x)\rangle = 0,\ \forall x \in \partial\mathcal{X} \text{ with } n(x) \text{ defined} \Big\} \quad (5)$$
for constants $c_1, c_2, c_3 > 0$ known as Stein factors in the literature. We will exploit this additional flexibility in Section 5.2 to establish tight lower-bounding relations between the Stein discrepancy and Wasserstein distance for well-studied target distributions. For general use, however, we advocate the parameter-free classical Stein set and graph Stein sets to be introduced in the sequel. Indeed, any non-uniform Stein discrepancy is equivalent to the classical Stein discrepancy in a strong sense:

Proposition 4 (Equivalence of Non-uniform Stein Discrepancies). For any $c_1, c_2, c_3 > 0$,
$$\min(c_1, c_2, c_3)\,\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}) \leq \mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}^{c_{1:3}}_{\|\cdot\|}) \leq \max(c_1, c_2, c_3)\,\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}).$$

4 Computing Stein Discrepancies

In this section, we introduce an efficiently computable Stein discrepancy with convergence properties equivalent to those of the classical discrepancy. We restrict attention to the unconstrained domain $\mathcal{X} = \mathbb{R}^d$ in Sections 4.1-4.3 and present extensions for constrained domains in Section 4.4.

4.1 Graph Stein Discrepancies

Evaluating a Stein discrepancy $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G})$ for a fixed (Q, P) pair reduces to solving an optimization program over functions $g \in \mathcal{G}$. For example, the classical Stein discrepancy is the optimum
$$\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}) = \sup_g \sum_{i=1}^n q(x_i)\big(\langle g(x_i), \nabla \log p(x_i)\rangle + \langle \nabla, g(x_i)\rangle\big) \quad (6)$$
$$\text{s.t. } \|g(x)\|^* \leq 1,\ \|\nabla g(x)\|^* \leq 1,\ \|\nabla g(x) - \nabla g(y)\|^* \leq \|x - y\|,\ \forall x, y \in \mathcal{X}.$$
Note that the objective associated with any Stein discrepancy $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G})$ is linear in g and, since Q is discrete, only depends on g and $\nabla g$ through their values at each of the n sample points $x_i$.
The primary difficulty in solving the classical Stein program (6) stems from the infinitude of constraints imposed by the classical Stein set $\mathcal{G}_{\|\cdot\|}$. One way to avoid this difficulty is to impose the classical smoothness constraints at only a finite collection of points. To this end, for each finite graph G = (V, E) with vertices $V \subset \mathcal{X}$ and edges $E \subset V^2$, we define the graph Stein set
$$\mathcal{G}_{\|\cdot\|,Q,G} \triangleq \Big\{ g : \mathcal{X} \to \mathbb{R}^d \;\Big|\; \forall x \in V,\ \max\big(\|g(x)\|^*, \|\nabla g(x)\|^*\big) \leq 1 \text{ and, } \forall (x, y) \in E,$$
$$\max\Big( \frac{\|g(x) - g(y)\|^*}{\|x - y\|},\, \frac{\|\nabla g(x) - \nabla g(y)\|^*}{\|x - y\|},\, \frac{\|g(x) - g(y) - \nabla g(x)(x - y)\|^*}{\frac{1}{2}\|x - y\|^2},\, \frac{\|g(x) - g(y) - \nabla g(y)(x - y)\|^*}{\frac{1}{2}\|x - y\|^2} \Big) \leq 1 \Big\},$$
the family of functions which satisfy the classical constraints and certain implied Taylor compatibility constraints at pairs of points in E. Remarkably, if the graph $G_\infty$ consists of edges between all distinct sample points $x_i$, then the associated complete graph Stein discrepancy $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_\infty})$ is equivalent to the classical Stein discrepancy in the following strong sense.

Proposition 5 (Equivalence of Classical and Complete Graph Stein Discrepancies). If $\mathcal{X} = \mathbb{R}^d$, and $G_\infty = (\mathrm{supp}(Q), E_\infty)$ with $E_\infty = \{(x_i, x_l) \in \mathrm{supp}(Q)^2 : x_i \neq x_l\}$, then
$$\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}) \leq \mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_\infty}) \leq C_d\, \mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|}),$$
where $C_d$ is a constant, independent of (Q, P), depending only on the dimension d and norm $\|\cdot\|$.

Proposition 5 follows from the Whitney-Glaeser extension theorem for smooth functions [14, 15] and implies that the complete graph Stein discrepancy inherits all of the desirable convergence properties of the classical discrepancy. However, the complete graph also introduces order $n^2$ constraints, rendering computation infeasible for large samples. To achieve the same form of equivalence while enforcing only O(n) constraints, we will make use of sparse geometric spanner subgraphs.
4.2 Geometric Spanners

For a given dilation factor $t \geq 1$, a t-spanner [16, 17] is a graph G = (V, E) with weight $\|x - y\|$ on each edge $(x, y) \in E$ and a path between each pair $x' \neq y' \in V$ with total weight no larger than $t\|x' - y'\|$. The next proposition shows that spanner Stein discrepancies enjoy the same convergence properties as the complete graph Stein discrepancy.

Proposition 6 (Equivalence of Spanner and Complete Graph Stein Discrepancies). If $\mathcal{X} = \mathbb{R}^d$, $G_t = (\mathrm{supp}(Q), E)$ is a t-spanner, and $G_\infty = (\mathrm{supp}(Q), \{(x_i, x_l) \in \mathrm{supp}(Q)^2 : x_i \neq x_l\})$, then
$$\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_\infty}) \leq \mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_t}) \leq 2t^2\, \mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_\infty}).$$
Moreover, for any $\ell_p$ norm, a 2-spanner with $O(C_d n)$ edges can be computed in $O(C_d n \log(n))$ expected time for $C_d$ a constant depending only on d and $\|\cdot\|$ [18]. As a result, we will adopt a 2-spanner Stein discrepancy, $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|,Q,G_2})$, as our standard quality measure.

4.3 Decoupled Linear Programs

The final unspecified component of our Stein discrepancy is the choice of norm $\|\cdot\|$. We recommend the $\ell_1$ norm, as the resulting optimization problem decouples into d independent finite-dimensional linear programs (LPs) that can be solved in parallel. More precisely, $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|_1,Q,(V,E)})$ equals
$$\sum_{j=1}^d \sup_{\gamma_j \in \mathbb{R}^{|V|},\, \Gamma_j \in \mathbb{R}^{d \times |V|}} \sum_{i=1}^{|V|} q(v_i)\big(\gamma_{ji} \nabla_j \log p(v_i) + \Gamma_{jji}\big) \quad (7)$$
$$\text{s.t. } \|\gamma_j\|_\infty \leq 1,\ \|\Gamma_j\|_\infty \leq 1, \text{ and } \forall i \neq l : (v_i, v_l) \in E,$$
$$\max\Big( \frac{|\gamma_{ji} - \gamma_{jl}|}{\|v_i - v_l\|_1},\, \frac{\|\Gamma_j(e_i - e_l)\|_\infty}{\|v_i - v_l\|_1},\, \frac{|\gamma_{ji} - \gamma_{jl} - \langle \Gamma_j e_i, v_i - v_l\rangle|}{\frac{1}{2}\|v_i - v_l\|_1^2},\, \frac{|\gamma_{ji} - \gamma_{jl} - \langle \Gamma_j e_l, v_i - v_l\rangle|}{\frac{1}{2}\|v_i - v_l\|_1^2} \Big) \leq 1.$$
We have arbitrarily numbered the elements $v_i$ of the vertex set V so that $\gamma_{ji}$ represents the function value $g_j(v_i)$, and $\Gamma_{jki}$ represents the gradient value $\nabla_k g_j(v_i)$.

4.4 Constrained Domains

A small modification to the unconstrained formulation (7) extends our tractable Stein discrepancy computation to any domain defined by coordinate boundary constraints, that is, to $\mathcal{X} = (\alpha_1, \beta_1) \times \cdots \times (\alpha_d, \beta_d)$ with $-\infty \leq \alpha_j < \beta_j \leq \infty$ for all j.
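The t-spanner notion of Section 4.2 admits a classic greedy construction: visit candidate edges in order of increasing length and keep an edge only if the current graph cannot already connect its endpoints within t times the direct distance. The sketch below is illustrative only – it runs in $O(n^2)$ pair checks, unlike the near-linear randomized construction of [18] used in the paper:

```python
import heapq
import itertools
import numpy as np

def greedy_t_spanner(points, t=2.0):
    """Greedy t-spanner: examine pairs in order of increasing distance and add
    an edge only if the spanner built so far has no path of length <= t * d."""
    n = len(points)
    dist = lambda i, j: float(np.linalg.norm(points[i] - points[j]))
    adj = {i: {} for i in range(n)}

    def graph_dist(src, dst, cutoff):
        # Dijkstra truncated at `cutoff`; returns inf if no short path exists
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                nd = d + w
                if nd < best.get(v, float("inf")) and nd <= cutoff:
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    pairs = sorted(itertools.combinations(range(n), 2), key=lambda e: dist(*e))
    edges = []
    for i, j in pairs:
        d = dist(i, j)
        if graph_dist(i, j, t * d) > t * d:
            adj[i][j] = adj[j][i] = d
            edges.append((i, j))
    return edges

rng = np.random.default_rng(1)
pts = rng.standard_normal((40, 2))
edges = greedy_t_spanner(pts, t=2.0)
print(len(edges), "edges vs", 40 * 39 // 2, "in the complete graph")
```

Every pair either received a direct edge when visited or already had a path of length at most $t$ times its distance, and later insertions only shorten paths, so the output is a valid t-spanner with far fewer than $\binom{n}{2}$ edges.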
Specifically, for each dimension j, we augment the j-th coordinate linear program of (7) with the boundary compatibility constraints
$$\max\Big( \frac{|\gamma_{ji}|}{|v_{ij} - b_j|},\, \frac{|\Gamma_{jki}|}{|v_{ij} - b_j|},\, \frac{|\gamma_{ji} - \Gamma_{jji}(v_{ij} - b_j)|}{\frac{1}{2}(v_{ij} - b_j)^2} \Big) \leq 1, \quad \text{for each } i,\ b_j \in \{\alpha_j, \beta_j\} \cap \mathbb{R}, \text{ and } k \neq j. \quad (8)$$
These additional constraints ensure that our candidate function and gradient values can be extended to a smooth function satisfying the boundary conditions $\langle g(z), n(z)\rangle = 0$ on $\partial\mathcal{X}$. Proposition 15 in the appendix shows that the spanner Stein discrepancy so computed is strongly equivalent to the classical Stein discrepancy on $\mathcal{X}$.

Algorithm 1 summarizes the complete solution for computing our recommended, parameter-free spanner Stein discrepancy in the multivariate setting. Notably, the spanner step is unnecessary in the univariate setting, as the complete graph Stein discrepancy $\mathcal{S}(Q, \mathcal{T}_P, \mathcal{G}_{\|\cdot\|_1,Q,G_\infty})$ can be computed directly by sorting the sample and boundary points and only enforcing constraints between consecutive points in this ordering. Thus, the complete graph Stein discrepancy is our recommended quality measure when d = 1, and a recipe for its computation is given in Algorithm 2.

Algorithm 1: Multivariate Spanner Stein Discrepancy
  input: Q, coordinate bounds $(\alpha_1, \beta_1), \ldots, (\alpha_d, \beta_d)$ with $-\infty \leq \alpha_j < \beta_j \leq \infty$ for all j
  $G_2 \leftarrow$ compute sparse 2-spanner of $\mathrm{supp}(Q)$
  for j = 1 to d (in parallel): $r_j \leftarrow$ solve j-th coordinate linear program (7) with graph $G_2$ and boundary constraints (8)
  return $\sum_{j=1}^d r_j$

Algorithm 2: Univariate Complete Graph Stein Discrepancy
  input: Q, bounds $(\alpha, \beta)$ with $-\infty \leq \alpha < \beta \leq \infty$
  $(x_{(1)}, \ldots, x_{(n')}) \leftarrow \mathrm{SORT}(\{x_1, \ldots, x_n, \alpha, \beta\} \cap \mathbb{R})$
  return $\sup_{\gamma \in \mathbb{R}^{n'},\, \Gamma \in \mathbb{R}^{n'}} \sum_{i=1}^{n'} q(x_{(i)})\big(\gamma_i \tfrac{d}{dx} \log p(x_{(i)}) + \Gamma_i\big)$
    s.t. $\|\Gamma\|_\infty \leq 1$; $\forall i \leq n'$, $|\gamma_i| \leq \mathbb{I}[\alpha < x_{(i)} < \beta]$; and, $\forall i < n'$,
    $\max\Big( \frac{|\gamma_i - \gamma_{i+1}|}{x_{(i+1)} - x_{(i)}},\, \frac{|\Gamma_i - \Gamma_{i+1}|}{x_{(i+1)} - x_{(i)}},\, \frac{|\gamma_i - \gamma_{i+1} - \Gamma_i(x_{(i)} - x_{(i+1)})|}{\frac{1}{2}(x_{(i+1)} - x_{(i)})^2},\, \frac{|\gamma_i - \gamma_{i+1} - \Gamma_{i+1}(x_{(i)} - x_{(i+1)})|}{\frac{1}{2}(x_{(i+1)} - x_{(i)})^2} \Big) \leq 1$

5 Experiments

We now turn to an empirical evaluation of our proposed quality measures.
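To make the structure of the univariate program concrete, here is a minimal sketch of the Algorithm 2 LP for the unbounded case $\mathcal{X} = \mathbb{R}$, using SciPy's generic LP solver rather than the paper's Julia/Gurobi setup. Each absolute-value constraint is encoded as a pair of linear inequalities over the stacked variables $(\gamma, \Gamma)$:

```python
import numpy as np
from scipy.optimize import linprog

def univariate_graph_stein_discrepancy(x, score):
    """Complete graph Stein discrepancy LP for d = 1, unbounded domain:
    variables are function values gamma_i = g(x_(i)) and derivative values
    Gamma_i = g'(x_(i)) at the sorted points, with smoothness and Taylor
    compatibility constraints between consecutive points only."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    q = np.full(n, 1.0 / n)
    s = score(x)                       # (d/dx) log p at the sample points

    # maximize sum_i q_i (gamma_i * s_i + Gamma_i)  ==  minimize the negative
    c = -np.concatenate([q * s, q])    # variable order: [gamma, Gamma]

    A, b = [], []
    def add_abs(row, bound):           # |row . z| <= bound as two inequalities
        A.append(row); b.append(bound)
        A.append([-v for v in row]); b.append(bound)

    for i in range(n - 1):
        d = x[i + 1] - x[i]
        row = np.zeros(2 * n); row[i], row[i + 1] = 1.0, -1.0
        add_abs(row.tolist(), d)                    # |gamma_i - gamma_{i+1}| <= d
        row = np.zeros(2 * n); row[n + i], row[n + i + 1] = 1.0, -1.0
        add_abs(row.tolist(), d)                    # |Gamma_i - Gamma_{i+1}| <= d
        row = np.zeros(2 * n); row[i], row[i + 1], row[n + i] = 1.0, -1.0, d
        add_abs(row.tolist(), d * d / 2)            # Taylor check around x_i
        row = np.zeros(2 * n); row[i], row[i + 1], row[n + i + 1] = 1.0, -1.0, d
        add_abs(row.tolist(), d * d / 2)            # Taylor check around x_{i+1}

    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(-1, 1)] * (2 * n), method="highs")
    return -res.fun

rng = np.random.default_rng(0)
score = lambda x: -x                                # target P = N(0, 1)
good = univariate_graph_stein_discrepancy(rng.standard_normal(100), score)
bad = univariate_graph_stein_discrepancy(rng.standard_normal(100) + 2.0, score)
print(good, "<", bad)                               # shifted sample scores worse
```

The box bounds $(-1, 1)$ implement $|\gamma_i| \leq 1$ and $\|\Gamma\|_\infty \leq 1$; the boundary constraints (8) of the constrained case are omitted from this sketch.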
We compute all spanners using the efficient C++ greedy spanner implementation of Bouts et al. [19] and solve all optimization programs using Julia for Mathematical Programming [20] with the default Gurobi 6.0.4 solver [21]. All reported timings are obtained using a single core of an Intel Xeon CPU E5-2650 v2 @ 2.60GHz.

5.1 A Simple Example

We begin with a simple example to illuminate a few properties of the Stein diagnostic. For the target P = N(0, 1), we generate a sequence of sample points i.i.d. from the target and a second sequence i.i.d. from a scaled Student's t distribution with matching variance and 10 degrees of freedom. The left panel of Figure 1 shows that the complete graph Stein discrepancy applied to the first n Gaussian sample points decays to zero at an $n^{-0.52}$ rate, while the discrepancy applied to the scaled Student's t sample remains bounded away from zero. The middle panel displays optimal Stein functions g recovered by the Stein program for different sample sizes. Each g yields a test function $h \triangleq \mathcal{T}_P g$, featured in the right panel, that best discriminates the sample Q from the target P. Notably, the Student's t test functions exhibit relatively large magnitude values in the tails of the support.

5.2 Comparing Discrepancies

We show in Theorem 14 in the appendix that, when d = 1, the classical Stein discrepancy is the optimum of a convex quadratically constrained quadratic program with a linear objective, O(n) variables, and O(n) constraints. This offers the opportunity to directly compare the behavior of the graph and classical Stein discrepancies.
Figure 1: Left: Complete graph Stein discrepancy for a N(0, 1) target. Middle / right: Optimal Stein functions g and discriminating test functions $h = \mathcal{T}_P g$ recovered by the Stein program.

We will also compare to the Wasserstein distance $d_{\mathcal{W}_{\|\cdot\|}}$, which is computable for simple univariate target distributions [22] and provably lower bounds the non-uniform Stein discrepancies (5) with $c_{1:3} = (0.5, 0.5, 1)$ for P = Unif(0, 1) and $c_{1:3} = (1, 4, 2)$ for P = N(0, 1) [9, 23]. For N(0, 1) and Unif(0, 1) targets and several random number generator seeds, we generate a sequence of sample points i.i.d. from the target distribution and plot the non-uniform classical and complete graph Stein discrepancies and the Wasserstein distance as functions of the first n sample points in Figure 2. Two apparent trends are that the graph Stein discrepancy very closely approximates the classical and that both Stein discrepancies track the fluctuations in Wasserstein distance even when a magnitude separation exists. In the Unif(0, 1) case, the Wasserstein distance in fact equals the classical Stein discrepancy because $\mathcal{T}_P g = g'$ is a Lipschitz function.

Figure 2: Comparison of discrepancy measures for sample sequences drawn i.i.d. from their targets.

5.3 Selecting Sampler Hyperparameters

Stochastic Gradient Langevin Dynamics (SGLD) [3] with constant step size ε is a biased MCMC procedure designed for scalable inference.
It approximates the overdamped Langevin diffusion, but, because no Metropolis-Hastings (MH) correction is used, the stationary distribution of SGLD deviates increasingly from its target as ε grows. If ε is too small, however, SGLD explores the sample space too slowly. Hence, an appropriate choice of ε is critical for accurate posterior inference. To illustrate the value of the Stein diagnostic for this task, we adopt the bimodal Gaussian mixture model (GMM) posterior of [3] as our target. For a range of step sizes ε, we use SGLD with minibatch size 5 to draw 50 independent sequences of length n = 1000, and we select the value of ε with the highest median quality – either the maximum effective sample size (ESS, a standard diagnostic based on autocorrelation [1]) or the minimum spanner Stein discrepancy – across these sequences. The average discrepancy computation consumes 0.4s for spanner construction and 1.4s per coordinate linear program. As seen in Figure 3a, ESS, which does not detect distributional bias, selects the largest step size presented to it, while the Stein discrepancy prefers an intermediate value. The rightmost plot of Figure 3b shows that a representative SGLD sample of size n using the ε selected by ESS is greatly overdispersed; the leftmost is greatly underdispersed due to slow mixing. The middle sample, with ε selected by the Stein diagnostic, most closely resembles the true posterior.

5.4 Quantifying a Bias-Variance Trade-off

The approximate random walk MH (ARWMH) sampler [5] is a second biased MCMC procedure designed for scalable posterior inference. Its tolerance parameter ε controls the number of datapoint likelihood evaluations used to approximate the standard MH correction step. Qualitatively, a larger ε implies fewer likelihood computations, more rapid sampling, and a more rapid reduction of variance. A smaller ε yields a closer approximation to the MH correction and less bias in the sampler stationary distribution.
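For reference, the SGLD update of Section 5.3 can be sketched in a few lines. The toy conjugate Gaussian model below is an illustrative stand-in for the paper's GMM target (chosen so the exact posterior is known); the minibatch gradient is rescaled by n / batch size to keep it unbiased, and no MH correction is applied:

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, data, eps, n_steps, batch_size, rng):
    """Minimal SGLD sketch: each step follows a minibatch-estimated gradient of
    the log posterior plus N(0, eps) noise, with no Metropolis-Hastings
    correction -- hence a stationary bias that grows with eps."""
    theta = float(theta0)
    n = len(data)
    samples = []
    for _ in range(n_steps):
        batch = data[rng.integers(0, n, size=batch_size)]
        grad = grad_log_post(theta, batch, n)
        theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)
    return np.array(samples)

# Toy model: theta ~ N(0, 1), x_i | theta ~ N(theta, 1).
def grad_log_post(theta, batch, n):
    prior = -theta                                  # d/dtheta log N(theta; 0, 1)
    lik = (n / len(batch)) * np.sum(batch - theta)  # rescaled minibatch term
    return prior + lik

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=100)
draws = sgld_sample(grad_log_post, 0.0, data, eps=1e-3, n_steps=5000,
                    batch_size=5, rng=rng)
# Exact posterior here is N(sum(x) / (n + 1), 1 / (n + 1))
print(draws[1000:].mean(), data.sum() / 101)
```

With a moderate ε the post-burn-in draws concentrate near the true posterior mean; pushing ε up inflates the stationary spread (the bias that ESS misses and the Stein diagnostic detects), while shrinking it slows mixing.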
We will use the Stein discrepancy to explicitly quantify this bias-variance trade-off. We analyze a dataset of 53 prostate cancer patients with six binary predictors and a binary outcome indicating whether cancer has spread to surrounding lymph nodes [24]. Our target is the Bayesian logistic regression posterior [1] under a N(0, I) prior on the parameters. We run RWMH (ε = 0) and ARWMH (ε = 0.1 and batch size = 2) for $10^5$ likelihood evaluations, discard the points from the first $10^3$ evaluations, and thin the remaining points to sequences of length 1000. The discrepancy computation time for 1000 points averages 1.3s for the spanner and 12s for a coordinate LP. Figure 4 displays the spanner Stein discrepancy applied to the first n points in each sequence as a function of the likelihood evaluation count. We see that the approximate sample is of higher Stein quality for smaller computational budgets but is eventually overtaken by the asymptotically exact sequence.

Figure 3: (a) Step size selection criteria. (b) Representative SGLD samples for step sizes ε = 5e−05, 5e−03, and 5e−02.
[Figure 3: (a) ESS maximized at ε = 5 × 10^{−2}; Stein discrepancy minimized at ε = 5 × 10^{−3}. (b) 1000 SGLD sample points with equidensity contours of p overlaid.]

[Figure 4: Bias-variance trade-off curves for Bayesian logistic regression with approximate RWMH: spanner Stein discrepancy, normalized probability error, mean error, and second moment error versus the number of likelihood evaluations, for hyperparameters ε = 0 and ε = 0.1.]

To corroborate our result, we use a Metropolis-adjusted Langevin chain [25] of length 10^7 as a surrogate Q* for the target and compute several error measures for each sample Q: the normalized probability error max_l |E[σ(⟨X, w_l⟩) − σ(⟨Z, w_l⟩)]| / ‖w_l‖_1, the mean error max_j |E[X_j − Z_j]| / max_j |E_{Q*}[Z_j]|, and the second moment error max_{j,k} |E[X_j X_k − Z_j Z_k]| / max_{j,k} |E_{Q*}[Z_j Z_k]|, for X ∼ Q, Z ∼ Q*, σ(t) ≜ 1/(1 + e^{−t}), and w_l the l-th datapoint covariate vector. The measures, also found in Figure 4, accord with the Stein discrepancy quantification.

5.5 Assessing Convergence Rates

The Stein discrepancy can also be used to assess the quality of deterministic sample sequences. In Figure 5 in the appendix, for P = Unif(0, 1), we plot the complete graph Stein discrepancies of the first n points of an i.i.d.
Unif(0, 1) sample, a deterministic Sobol sequence [26], and a deterministic kernel herding sequence [27] defined by the norm ‖h‖²_H = ∫₀¹ (h′(x))² dx. We use the median value over 50 sequences in the i.i.d. case and estimate the convergence rate for each sampler using the slope of the best least squares affine fit to each log-log plot. The discrepancy computation time averages 0.08s for n = 200 points, and the recovered rates of n^{−0.49} and n^{−1} for the i.i.d. and Sobol sequences accord with the expected O(1/√n) and O(log(n)/n) bounds from the literature [28, 26]. As witnessed also in other metrics [29], the herding rate of n^{−0.96} outpaces its best known bound of d_H(Q_n, P) = O(1/√n), suggesting an opportunity for sharper analysis.

6 Discussion of Related Work

We have developed a quality measure suitable for comparing biased, exact, and deterministic sample sequences by exploiting an infinite class of known target functionals. The diagnostics of [30, 31] also account for asymptotic bias but lose discriminating power by considering only a finite collection of functionals. For example, for a N(0, 1) target, the score statistic of [31] cannot distinguish two samples with equal first and second moments. Maximum mean discrepancy (MMD) on a characteristic Hilbert space [32] takes full distributional bias into account but is only viable when the expected kernel evaluations are easily computed under the target. One can approximate MMD, but this requires access to a separate trustworthy ground-truth sample from the target.

Acknowledgments

The authors thank Madeleine Udell, Andreas Eberle, and Jessica Hwang for their pointers and feedback and Quirijn Bouts, Kevin Buchin, and Francis Bach for sharing their code and counsel.

References

[1] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[2] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. Computer Science and Statistics: Proc. 23rd Symp. Interface, pages 156–163, 1991.
[3] M. Welling and Y.-W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pages 681–688, 2011.
[4] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML'12), 2012.
[5] A. Korattikara, Y. Chen, and M. Welling. Austerity in MCMC land: Cutting the Metropolis–Hastings budget. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
[6] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
[7] C. Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, pages 583–602, Berkeley, CA, 1972. University of California Press.
[8] A. D. Barbour. Stein's method and Poisson process convergence. J. Appl. Probab., (Special Vol. 25A):175–184, 1988. A celebration of applied probability.
[9] L. H. Y. Chen, L. Goldstein, and Q.-M. Shao. Normal approximation by Stein's method. Springer Science & Business Media, 2010.
[10] S. Chatterjee and Q.-M. Shao. Nonnormal approximation by Stein's method of exchangeable pairs with application to the Curie–Weiss model. Annals of Applied Probability, 21(2):464–483, 2011.
[11] G. Reinert and A. Röllin. Multivariate normal approximation with Stein's method of exchangeable pairs under a general linearity condition. Annals of Probability, 37(6):2150–2173, 2009.
[12] S. Chatterjee and E. Meckes. Multivariate normal approximation using exchangeable pairs. ALEA, 4:257–283, 2008.
[13] E. Meckes. On Stein's method for multivariate normal approximation. In High dimensional probability V: The Luminy volume, pages 153–178.
Institute of Mathematical Statistics, 2009.
[14] G. Glaeser. Étude de quelques algèbres tayloriennes. J. Analyse Math., 6:1–124; erratum, insert to 6 (1958), no. 2, 1958.
[15] P. Shvartsman. The Whitney extension problem and Lipschitz selections of set-valued mappings in jet-spaces. Transactions of the American Mathematical Society, 360(10):5529–5550, 2008.
[16] P. Chew. There is a planar graph almost as good as the complete graph. In Proceedings of the Second Annual Symposium on Computational Geometry, SCG '86, pages 169–177, New York, NY, 1986. ACM.
[17] D. Peleg and A. A. Schäffer. Graph spanners. Journal of Graph Theory, 13(1):99–116, 1989.
[18] S. Har-Peled and M. Mendel. Fast construction of nets in low-dimensional metrics and their applications. SIAM Journal on Computing, 35(5):1148–1184, 2006.
[19] Q. W. Bouts, A. P. ten Brink, and K. Buchin. A framework for computing the greedy spanner. In Proceedings of the Thirtieth Annual Symposium on Computational Geometry, SOCG'14, pages 11:11–11:19, New York, NY, 2014. ACM.
[20] M. Lubin and I. Dunning. Computing in operations research using Julia. INFORMS Journal on Computing, 27(2):238–248, 2015.
[21] Gurobi Optimization. Gurobi optimizer reference manual, 2015. URL http://www.gurobi.com.
[22] S. S. Vallender. Calculation of the Wasserstein distance between probability distributions on the line. Theory of Probability & Its Applications, 18(4):784–786, 1974.
[23] C. Döbler. Stein's method of exchangeable pairs for the Beta distribution and generalizations. arXiv:1411.4477, 2014.
[24] A. Canty and B. D. Ripley. boot: Bootstrap R (S-Plus) Functions, 2015. R package version 1.3-15.
[25] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341–363, 1996.
[26] R. E. Caflisch. Monte Carlo and quasi-Monte Carlo methods. Acta Numerica, 7:1–49, 1998.
[27] Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding.
In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI'10), 2010.
[28] E. del Barrio, E. Giné, and C. Matrán. Central limit theorems for the Wasserstein distance between the empirical and the true distributions. Ann. Probab., 27(2):1009–1071, 1999.
[29] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In Proceedings of the 29th International Conference on Machine Learning (ICML'12), 2012.
[30] A. Zellner and C.-K. Min. Gibbs sampler convergence criteria. Journal of the American Statistical Association, 90(431):921–927, 1995.
[31] Y. Fan, S. P. Brooks, and A. Gelman. Output assessment for Monte Carlo simulations via the score statistic. Journal of Computational and Graphical Statistics, 15(1), 2006.
[32] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In Advances in Neural Information Processing Systems, pages 513–520, 2006.
Predtron: A Family of Online Algorithms for General Prediction Problems Prateek Jain Microsoft Research, INDIA prajain@microsoft.com Nagarajan Natarajan University of Texas at Austin, USA naga86@cs.utexas.edu Ambuj Tewari University of Michigan, Ann Arbor, USA tewaria@umich.edu Abstract Modern prediction problems arising in multilabel learning and learning to rank pose unique challenges to the classical theory of supervised learning. These problems have large prediction and label spaces of a combinatorial nature and involve sophisticated loss functions. We offer a general framework to derive mistake driven online algorithms and associated loss bounds. The key ingredients in our framework are a general loss function, a general vector space representation of predictions, and a notion of margin with respect to a general norm. Our general algorithm, Predtron, yields the perceptron algorithm and its variants when instantiated on classic problems such as binary classification, multiclass classification, ordinal regression, and multilabel classification. For multilabel ranking and subset ranking, we derive novel algorithms, notions of margins, and loss bounds. A simulation study confirms the behavior predicted by our bounds and demonstrates the flexibility of the design choices in our framework. 1 Introduction Classical supervised learning problems, such as binary and multiclass classification, share a number of characteristics. The prediction space (the space in which the learner makes predictions) is often the same as the label space (the space from which the learner receives supervision). Because directly learning discrete valued prediction functions is hard, one learns real-valued or vector-valued functions. These functions generate continuous predictions that are converted into discrete ones via simple mappings, e.g., via the ‘sign’ function (binary classification) or the ‘argmax’ function (multiclass classification). 
Also, the most commonly used loss function is simple, viz. the 0-1 loss. In contrast, modern prediction problems, such as multilabel learning, multilabel ranking, and subset ranking, do not share these characteristics. In order to handle these problems, we need a more general framework that offers more flexibility. First, it should allow for the possibility of having different label and prediction spaces. Second, it should allow practitioners to use creative, new ways to map continuous, vector-valued predictions to discrete ones. Third, it should permit the use of general loss functions. Extensions of the theory of classical supervised learning to modern prediction problems have begun. For example, the work on calibration dimension [1] can be viewed as extending one aspect of the theory, viz. that of calibrated surrogates and consistent algorithms based on convex optimization. This paper deals with the extension of another interesting part of classical supervised learning: mistake driven algorithms such as perceptron (resp. winnow) and their analyses in terms of ℓ_2 (resp. ℓ_1) margins [2, Section 7.3]. We make a number of contributions. First, we provide a general framework (Section 2) whose ingredients include an arbitrary loss function and an arbitrary representation of discrete predictions in a continuous space. The framework is abstract enough to be of general applicability but it offers enough mathematical structure so that we can derive a general online algorithm, Predtron (Algorithm 1), along with an associated loss bound (Theorem 1) under an abstract margin condition (Section 2.2). Second, we show that our framework unifies several perceptron-like algorithms for classical problems such as binary classification, multiclass classification, ordinal regression, and multilabel classification (Section 3).
Even for these classical problems, we get some new results, for example, when the loss function treats labels asymmetrically or when there exists a 'reject' option in classification. Third, we apply our framework to two modern prediction problems: subset ranking (Section 4) and multilabel ranking (Section 5). In both of these problems, the prediction space (rankings) is different from the supervision space (set of labels or vector of relevance scores). For these two problems, we propose interesting, novel notions of correct prediction with a margin and derive mistake bounds under a loss derived from NDCG, a ranking measure that pays more attention to the performance at the top of a ranked list. Fourth, our techniques based on online convex optimization (OCO) can effortlessly incorporate notions of margins w.r.t. non-Euclidean norms, such as the ℓ_1 norm, group norm, and trace norm. Such flexibility is important in modern prediction problems where the learned parameter can be a high dimensional vector or a large matrix with low group or trace norm. Finally, we test our theory in a simulation study (Section 6) dealing with the subset ranking problem, showing how our framework can be adapted to a specific prediction problem. We investigate different margin notions as we vary two key design choices in our abstract framework: the map used to convert continuous predictions into discrete ones, and the choice of the norm used in the definition of margin. Related Work. Our general algorithm is related to the perceptron and online gradient descent algorithms used in structured prediction [3, 4]. But, to the best of our knowledge, our emphasis on keeping label and prediction spaces possibly distinct, our use of a general representation of predictions, and our investigation of generalized notions of margins are all novel.
The use of simplex coding in multiclass problems [5] inspired the use of maximum similarity / minimum distance decoding to obtain discrete predictions from continuous ones. Our proofs use results about Online Gradient Descent and Online Mirror Descent from the Online Convex Optimization literature [6].

2 Framework and Main Result

The key ingredients in classic supervised learning are an input space, an output space and a loss function. In this paper, the input space X ⊆ R^p will always be some subset of a finite dimensional Euclidean space. Our algorithms maintain prediction functions as a linear combination of the seen inputs. As a result, they easily kernelize and the theory extends, in a straightforward way, to the case when the input space is a, possibly infinite dimensional, reproducing kernel Hilbert space (RKHS).

2.1 Labels, Predictions, and Scores

We will distinguish between the label space and the prediction space. The former is the space where the training labels come from whereas the latter is the space where the learning algorithm has to make predictions in. Both spaces will be assumed to be finite. Therefore, without any loss of generality, we can identify the label space with [ℓ] = {1, ..., ℓ} and the prediction space with [k], where ℓ, k are positive, but perhaps very large, integers. A given loss function L : [k] × [ℓ] → R₊ maps a prediction σ ∈ [k] and a label y ∈ [ℓ] to a non-negative loss L(σ, y). The loss L can equivalently be thought of as a k × ℓ matrix with loss values as entries. Define the set of correct predictions for a label y as Σ_y = {σ_y ∈ [k] : L(σ_y, y) = 0}. We assume that, for every label y, the set Σ_y is non-empty. That is, every column of the loss matrix has a zero entry. Also, let c_L = min_{L(σ,y)>0} L(σ, y) and C_L = max_{σ,y} L(σ, y) be the minimum (non-zero) and maximum entries in the loss matrix. In an online setting, the learner will see a stream of examples (X_τ, Y_τ) ∈ X × [ℓ].
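The loss-matrix quantities just defined (Σ_y, c_L, C_L) can be computed directly; the small asymmetric loss matrix below is our own toy example, not one from the paper:

```python
import numpy as np

# Toy weighted loss matrix for k = 2 predictions and l = 2 labels
# (rows: predictions sigma, columns: labels y) -- our own example.
L = np.array([[0., 2.],
              [1., 0.]])

def correct_set(L, y):
    """Sigma_y = {sigma : L(sigma, y) = 0}, the correct predictions for y."""
    return set(np.flatnonzero(L[:, y] == 0))

c_L = L[L > 0].min()   # minimum non-zero entry of the loss matrix
C_L = L.max()          # maximum entry of the loss matrix
```

Every column having a zero entry guarantees each `correct_set(L, y)` is non-empty, matching the assumption in the text.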
The learner will predict scores using a linear predictor W ∈ R^{d×p}. However, the predicted scores WX_τ will be in R^d, not in the prediction space [k]. So, we need a function pred : R^d → [k] to convert scores into actual predictions. We will assume that there is a unique representation rep(σ) ∈ R^d of each prediction σ such that ‖rep(σ)‖_2 = 1 for all σ. Given this, a natural transformation of scores into predictions is given by the following maximum similarity decoding:

pred(t) ∈ argmax_{σ∈[k]} ⟨rep(σ), t⟩,   (1)

where ties in the "argmax" can be broken arbitrarily. There are some nice consequences of the definition of pred above. First, because ‖rep(σ)‖_2 = 1, maximum similarity decoding is equivalent to nearest neighbor decoding: pred(t) ∈ argmin_σ ‖rep(σ) − t‖_2. Second, we have a homogeneity property: pred(ct) = pred(t) if c > 0. Third, rep serves as an "inverse" of pred in the following sense. We have pred(rep(σ)) = σ for all σ. Moreover, rep(pred(t)) is more similar to t than the representation of any other prediction σ: for all t ∈ R^d and σ ∈ [k], ⟨rep(pred(t)), t⟩ ≥ ⟨rep(σ), t⟩. In view of these facts, we will use pred^{-1}(σ) and rep(σ) interchangeably. Using pred, the loss function L can be extended to a function defined on R^d × [ℓ] as L(t, y) = L(pred(t), y). With a little abuse of notation, we will continue to denote this new function also by L.

2.2 Margins

We say that a score t is compatible with a label y if the set of σ's that achieve the maximum in the definition (1) of pred is exactly Σ_y. That is, argmax_{σ∈[k]} ⟨pred^{-1}(σ), t⟩ = Σ_y. Hence, for any σ_y ∈ Σ_y and σ ∉ Σ_y, we have ⟨pred^{-1}(σ_y), t⟩ > ⟨pred^{-1}(σ), t⟩. The notion of margin makes this requirement stronger. We say that a score t has a margin γ > 0 on label y iff t is compatible with y and

for all σ_y ∈ Σ_y, σ ∉ Σ_y:  ⟨pred^{-1}(σ_y), t⟩ ≥ ⟨pred^{-1}(σ), t⟩ + γ.

Note that margin scales with t: if t has margin γ on y then ct has margin cγ on y for any positive c.
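The maximum similarity decoding in (1), together with its nearest-neighbor equivalence and homogeneity properties, can be sketched as follows (function names are ours; ties resolve to the first maximizer):

```python
import numpy as np

def pred(t, reps):
    """Maximum similarity decoding: argmax_sigma <rep(sigma), t>.
    `reps` is a (k, d) array whose rows are the unit-norm rep(sigma)."""
    return int(np.argmax(reps @ t))
```

For binary classification with rep = +1/−1 this reduces to sign decoding, and because the rows of `reps` are unit norm, the same index minimizes ‖rep(σ) − t‖_2.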
If we are using linear predictions t = WX, we say that W has margin γ on (X, y) iff t = WX has margin γ on y. We say that W has margin γ on a dataset (X_1, y_1), ..., (X_n, y_n) iff W has margin γ on (X_τ, y_τ) for all τ ∈ [n]. Finally, a dataset (X_1, y_1), ..., (X_n, y_n) is said to be linearly separable with margin γ if there is a unit norm¹ W* such that W* has margin γ on (X_1, y_1), ..., (X_n, y_n).

2.3 Algorithm

Just like the classic perceptron algorithm, our generalized perceptron algorithm (Algorithm 1) is mistake driven. That is, it only updates on a round when a mistake, i.e., a non-zero loss, is incurred. On a mistake round, it makes a rank-one update of the form W_{τ+1} = W_τ − g_τ · X_τ^⊤ where g_τ ∈ R^d, X_τ ∈ R^p. Therefore, W_τ always has a representation of the form Σ_i g_i X_i^⊤. The prediction on a fresh input X is given by Σ_i g_i ⟨X_i, X⟩, which means the algorithm, just like the original perceptron, can be kernelized. We will give a loss bound for the algorithm using tools from Online Convex Optimization (OCO). Define the function φ : R^d × [ℓ] → R as

φ(t, y) = max_{σ∈[k]} { L(σ, y) − ⟨pred^{-1}(σ_y) − pred^{-1}(σ), t⟩ },   (2)

where σ_y ∈ Σ_y is an arbitrary member of Σ_y. For any y, φ(·, y) is a point-wise maximum of linear functions and hence convex. Also, φ is non-negative: choose σ = σ_y to lower bound the maximum. The inner product part vanishes and the loss L(σ_y, y) vanishes too because σ_y ∈ Σ_y. Given the definition of φ, Algorithm 1 can be described succinctly as follows. At round τ, if L(W_τ X_τ, Y_τ) > 0, then W_{τ+1} = W_τ − η∇_W φ(WX_τ, Y_τ); otherwise W_{τ+1} = W_τ.

¹Here, we mean that the Frobenius norm ‖W*‖_F equals 1. Of course, the notion of margin can be generalized to any norm including the entry-based ℓ_1 norm ‖W‖_1 and the spectrum-based ℓ_1 norm ‖W‖_{S(1)} (also called the nuclear or trace norm). See Appendix B.2.

Algorithm 1 Predtron: Extension of the Perceptron Algorithm to General Prediction Problems
1: W_1 ← 0
2: for τ = 1, 2, ... do
3:   Receive X_τ ∈ R^p
4:   Predict σ_τ = pred(W_τ X_τ) ∈ [k]
5:   Receive label y_τ ∈ [ℓ]
6:   if L(σ_τ, y_τ) > 0 then
7:     (t, y) = (W_τ X_τ, y_τ)
8:     σ̃_τ = argmax_{σ∈[k]} { L(σ, y) − ⟨pred^{-1}(σ_y) − pred^{-1}(σ), t⟩ } ∈ [k]
9:     ∇_τ = (pred^{-1}(σ̃_τ) − pred^{-1}(σ_y)) · X_τ^⊤ ∈ R^{d×p}
10:    W_{τ+1} = W_τ − η∇_τ
11:  else
12:    W_{τ+1} = W_τ
13:  end if
14: end for

Theorem 1. Suppose the dataset (X_1, y_1), ..., (X_n, y_n) is linearly separable with margin γ. Then the sequence W_τ generated by Algorithm 1 with η = c_L/(4R²) satisfies the loss bound

Σ_{τ=1}^n L(W_τ X_τ, y_τ) ≤ 4R²C_L² / (c_L γ²),

where ‖X_τ‖_2 ≤ R for all τ.

Note that the bound above assumes perfect linear separability. However, just like the classic perceptron, the bound will degrade gracefully when the best linear predictor does not have enough margin on the data set. The Predtron algorithm has some interesting variants, two of which we consider in the appendix. A loss driven version, Predtron.LD, enjoys a loss bound that gets rid of the C_L/c_L factor in the bound above. A version, Predtron.Link, that uses link functions to deal with margins defined with respect to non-Euclidean norms is also considered.

3 Relationship to Existing Results

It is useful to discuss a few concrete applications of the abstract framework introduced in the last section. Several existing loss bounds can be readily derived by applying our bound for the generalized perceptron algorithm in Theorem 1. In some cases, our framework yields a different algorithm than existing counterparts, yet admitting identical loss bounds, up to constants.

Binary Classification. We begin with the classical perceptron algorithm for binary classification (i.e., ℓ = 2) [7]: L_{0-1}(σ, y) = 1 if σ ≠ y and 0 otherwise.
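One round of the mistake-driven update in Algorithm 1 can be sketched for finite prediction and label spaces as follows; here σ_y is picked as an arbitrary zero-loss prediction for y, and the helper names and the explicit (k, d) representation array are our own conventions, not the paper's:

```python
import numpy as np

def predtron_step(W, x, y, reps, L, eta):
    """One round of Predtron: predict, and update only on a mistake.
    W: (d, p) predictor; x: (p,) input; y: label index;
    reps: (k, d) unit-norm representations; L: (k, l) loss matrix."""
    t = W @ x
    sigma = int(np.argmax(reps @ t))        # pred(W x), Eq. (1)
    if L[sigma, y] == 0:                    # no mistake: no update
        return W
    sigma_y = int(np.argmin(L[:, y]))       # some sigma_y in Sigma_y
    # surrogate argmax: max_sigma L(sigma, y) - <rep(sigma_y) - rep(sigma), t>
    scores = L[:, y] - (reps[sigma_y] - reps) @ t
    sigma_tilde = int(np.argmax(scores))
    grad = np.outer(reps[sigma_tilde] - reps[sigma_y], x)  # rank-one update
    return W - eta * grad
```

With reps = (+1, −1) and the 0-1 loss matrix, this reduces to a perceptron-style update; the step size η = c_L/(4R²) of Theorem 1 is passed in as `eta`.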
Letting rep(σ) be +1 for the positive class and −1 for the negative class, with predictor vector W_τ ∈ R^{1×p}, and thus pred(t) = sign(t), Algorithm 1 reduces to the original perceptron algorithm; Theorem 1 yields an identical mistake bound on a linearly separable dataset with margin γ (if the classical margin is γ, ours works out to be 2γ), i.e., Σ_{τ=1}^n L_{0-1}(W_τ X_τ, y_τ) ≤ R²/γ². We can also easily incorporate asymmetric losses. Let L_α(σ, y) = α_y if σ ≠ y and 0 otherwise. We then have the following result.

Corollary 2. Consider the perceptron with weighted loss L_α. Assume α_1 ≥ α_2 without loss of generality. Then the sequence W_τ generated by Algorithm 1 satisfies the weighted mistake bound

Σ_{τ=1}^n L_α(W_τ X_τ, y_τ) ≤ 4R²α_1² / (α_2² γ²).

We are not aware of such results for weighted loss. Previous work [8] studies perceptrons with uneven margins, and the loss bound there only implies a bound on the unweighted loss Σ_{τ=1}^n L_{0-1}(t_τ, y_τ). In a technical note, Rätsch and Kivinen [9] provide a mistake bound of the form (without proof) Σ_{τ=1}^n L_α(W_τ X_τ, y_τ) ≤ R²/(4γ²), but for the specific choice of weights α_1 = a² and α_2 = (1 − a)² for any a ∈ [0, 1]. Another interesting extension is obtained by allowing the predictions to have a REJECT option. Define L_REJ(REJECT, y) = β_y and L_REJ(σ, y) = L_{0-1}(σ, y) otherwise. Assume 1 ≥ β_1 ≥ β_2 > 0 without loss of generality. Choosing the standard basis vectors in R² to be rep(σ) for the positive and the negative classes, and rep(REJECT) = (1/√2) Σ_{σ∈{1,2}} rep(σ), we obtain Σ_{τ=1}^n L_REJ(W_τ X_τ, y_τ) ≤ 4R²β_1² / (γ²β_2²) (see Appendix C.1).

Multiclass Classification. Each instance is assigned exactly one of m classes (i.e., ℓ = m). Extending binary classification, we choose the standard basis vectors in R^m to be rep(σ) for the m classes. The learner predicts a score t ∈ R^m using the predictor W ∈ R^{m×p}. So pred(t) = argmax_i t_i. Let w_j denote the jth row of W (corresponding to label j).
The definition of margin becomes

⟨w_y, X⟩ − max_{j≠y} ⟨w_j, X⟩ ≥ γ,

which is identical to the multiclass margin studied earlier [10]. For the multiclass 0-1 loss L_{0-1}, we recover their bound, up to constants². Moreover, our surrogate φ for L_{0-1}, namely φ(t, y) = max{0, 1 + max_{σ≠y} t_σ − t_y}, matches the multiclass extension of the Hinge loss studied by [11]. Finally, note that it is straightforward to obtain loss bounds for a multiclass perceptron with REJECT option by naturally extending the definitions of rep and L_REJ from the binary case.

Ordinal Regression. The goal is to assign ordinal classes (such as ratings) to a set of objects {X_1, X_2, ...} described by their features X_i ∈ R^p. In many cases, precise rating information may not be available, but only relative ranks; i.e., the observations consist of object-rank pairs (X_τ, y_τ) where y_τ ∈ [ℓ]. Y is totally ordered by the ">" relation, which in turn induces a partial ordering on the objects (X_j is preferred to X_{j'} if y_j > y_{j'}; X_j and X_{j'} are not comparable if y_j = y_{j'}). For the ranking loss L(σ, y) = |σ − y|, the PRank perceptron algorithm [12] enjoys the bound Σ_{τ=1}^n L(σ_τ, y_τ) ≤ (ℓ − 1)(R² + 1)/γ̃², where γ̃ is a certain rank margin. By a reduction to multiclass classification with ℓ classes, Algorithm 1 achieves the loss bound 4(ℓ − 1)²R²/γ² (albeit for a different margin γ).

Multilabel Classification. This setting generalizes multiclass classification in that instances are assigned subsets of m classes rather than unique classes, i.e., ℓ = 2^m. The loss function L of interest may dictate the choice of rep and in turn pred. For example, consider the following subset losses that treat labels as well as predictions as subsets: (i) subset 0-1 loss: L_IsErr(σ, y) = 1 if σ ≠ y and 0 otherwise; (ii) Hamming loss: L_Ham(σ, y) = |σ ∪ y| − |σ ∩ y|; and (iii) error set size: L_ErrSetSize(σ, y) = |{(r, s) ∈ y × ([m] \ y) : r ∉ σ, s ∈ σ}|.
A natural choice of rep then is the scaled subset indicator vector in {+1, −1}^d, where d = m = log₂ ℓ, which can be expressed as rep(σ) = (1/√m)(Σ_{j∈σ} e_j − Σ_{j∉σ} e_j) (where the e_j's are the standard basis vectors in R^m). The learner predicts a score t ∈ R^m using a matrix W ∈ R^{m×p}. Note that pred(t) = sign(t), where sign is applied component-wise. The number of predictions is 2^m, but we show in Appendix C.2 that the surrogate (2) and its gradient can be efficiently computed for all of the above losses.

4 Subset Ranking

In subset ranking [13], the task is to learn to rank a number of documents in order of their relevance to a query. We will assume, for simplicity, that the number of documents per query is a constant that we denote by m. The input space is a subset of R^{m×p₀} that we can identify with R^p for p = mp₀. Each row of an input matrix corresponds to a p₀-dimensional feature vector derived jointly using the query and one of the documents associated with it. The predictions σ are all m! permutations of degree m. The most natural (but by no means the only one) representation of permutations is to set rep(σ) = −σ/Z, where σ(i) is the position of document i in the predicted ranking and the normalization Z ensures that rep(σ) is a unit vector. Note that the dimension d of this representation is equal to m. The minus sign in this representation ensures that pred(t) outputs a permutation that corresponds to sorting the entries of t in decreasing order, a common convention in existing work. A more general representation is obtained by setting rep(σ) = f(σ)/Z, where f : R → R is a strictly decreasing real-valued function that is applied entry-wise to σ.

²The perceptron algorithm in [10] is based on a slightly different loss defined as L_ErrSet(t, y) = 1 if |{r ≠ y : t_r ≥ t_y}| > 0 and 0 otherwise (where t = WX). This loss upper bounds L_{0-1} (because of the way ties are handled, there can be rounds when L_{0-1} is 0 but L_ErrSet is 1).
The normalization Z = √(Σ_{i=1}^m f²(i)) ensures that ‖rep(σ)‖_2 = 1. To convert an input matrix X ∈ R^p (p = mp₀) into a score vector t ∈ R^m, it seems that we need to learn a matrix W ∈ R^{m×mp₀}. However, a natural permutation invariance requirement (if the associated documents are presented in a permuted fashion, the output scores should also get permuted in the same way) reduces the dimensionality of W to p₀ (see, e.g., [14] for more details). Thus, given a vector w ∈ R^{p₀}, we get the score vector as t = Xw. The label space consists of relevance score vectors y ∈ {0, 1, ..., Y_max}^m, where Y_max is typically between 1 and 4 (yielding 2 to 5 grades of relevance). Note that the prediction space (of size k = m!) is different from the label space (of size ℓ = (Y_max + 1)^m). A variety of loss functions have been used in subset ranking. For multigraded relevance judgments, a very popular choice is NDCG, defined as

NDCG(σ, y) = [Σ_{i=1}^m (2^{y(i)} − 1) / log₂(1 + σ(i))] / Z(y),

where Z(y) is a normalization constant ensuring NDCG stays bounded by 1. To convert it into a loss we define L_NDCG = 1 − NDCG. Note that any permutation that sorts y in decreasing order gets zero L_NDCG. One might worry that the computation of the surrogate defined in (2) and its gradient might require an enumeration of m! permutations. The next lemma allays such a concern.

Lemma 3. When L = L_NDCG and rep(σ) is chosen as above, the computation of the surrogate (2), as well as its gradient, can be reduced to solving a linear assignment problem and hence can be done in O(m³) time.

We now give a result explaining what it means for a score vector t to have a margin γ on y when we use a representation of the form described above. Without loss of generality, we may assume that y is sorted in decreasing order of relevance judgements.

Lemma 4. Suppose rep(σ) = f(σ)/Z for a strictly decreasing function f : R → R and Z = √(Σ_{i=1}^m f²(i)). Let y be a non-constant relevance judgement vector sorted in decreasing order.
Suppose i₁ < i₂ < ... < i_N, N ≥ 1, are the positions where the relevance drops by a grade or more (i.e., y(i_j) < y(i_j − 1)). Then t has a margin γ on y iff t is compatible with y and, for j ∈ [N],

t_{i_j − 1} ≥ t_{i_j} + γZ / (f(i_j − 1) − f(i_j)),

where we define i₀ = 1, i_{N+1} = m + 1 to handle boundary cases.

Note that if we choose f(i) = −i^α, α > 1, then f(i_j − 1) − f(i_j) = O(i_j^{α−1}) for large i_j. In that case, the margin condition above requires less separation between documents with different relevance scores down the list (when viewed in decreasing order of relevance scores) than at the top of the list. We end this section with a loss bound for L_NDCG under a margin condition.

Corollary 5. Suppose L = L_NDCG and rep(σ) is as in Lemma 4. Then, assuming the dataset is linearly separable with margin γ, the sequence generated by Algorithm 1 with line 9 replaced by ∇_τ = X_τ^⊤(pred^{-1}(σ̃_τ) − pred^{-1}(σ_y)) ∈ R^{p₀×1} satisfies

Σ_{τ=1}^n L_NDCG(X_τ w_τ, y_τ) ≤ 2^{Y_max+3} · m² log₂²(2m) · R² / γ²,

where ‖X_τ‖_op ≤ R.

Note that the result above uses the standard ℓ₂-norm based notion of margin. Imagine a subset ranking problem where only a small number of features are relevant. It is therefore natural to consider a notion of margin where the weight vector that ranks everything perfectly has low ℓ₁ norm, instead of low ℓ₂ norm. The ℓ₁ margin also appears in the analysis of AdaBoost [2, Definition 6.2]. We can use a special case of a more general algorithm given in the appendix (Appendix B.2, Algorithm 3). Specifically, we replace line 10 with the step w_{τ+1} = (∇ψ)^{-1}(∇ψ(w_τ) − ∇_τ), where ψ(w) = ½‖w‖_r². We set r = log(p₀)/(log(p₀) − 1). The mapping ∇ψ and its inverse can both be easily computed (see, e.g., [6, p. 145]).

Corollary 6. Suppose L = L_NDCG and rep(σ) is as in Lemma 4. Then, assuming the dataset is linearly separable with margin γ by a unit ℓ₁ norm w*
(‖w⋆‖₁ = 1), the sequence generated by Algorithm 3 with Φ chosen as above (and line 9 modified as in Corollary 5) satisfies

Σ_{τ=1}^n L_NDCG(X_τ w_τ, y_τ) ≤ 9 · 2^{Y_max+3} · m² log₂²(2m) · R² · log p₀ / γ²,

where max_{j=1,...,p₀} ‖X_{τ,j}‖₂ ≤ R and X_{τ,j} denotes the j-th column of X_τ.

5 Multilabel Ranking

As discussed in Section 3, in multilabel classification both the prediction space and the label space are {0, 1}^m, with sizes k = ℓ = 2^m. In multilabel ranking, however, the learner has to output rankings as predictions. So, as in the previous section, we have k = m! since the prediction σ can be any one of the m! permutations of the labels. As before, we choose rep(σ) = f(σ)/Z and hence d = m. However, unlike the previous section, the input is no longer a matrix but a vector X ∈ R^p. A prediction t ∈ R^d is obtained as WX, where W ∈ R^{m×p}. Note the contrast with the last section: there, inputs are matrices and a weight vector is learned; here, inputs are vectors and a weight matrix is learned. Since we output rankings, it is reasonable to use a loss that takes positions of labels into account. We can use L = L_NDCG. Algorithm 1 now immediately applies, and Lemma 3 already showed that it is efficiently implementable. We have the following straightforward corollary.

Corollary 7. Suppose L = L_NDCG and rep(σ) is as in Lemma 4. Then, assuming the dataset is linearly separable with margin γ, the sequence generated by Algorithm 1 satisfies

Σ_{τ=1}^n L_NDCG(W_τ X_τ, y_τ) ≤ 2^{Y_max+3} · m² log₂²(2m) · R² / γ²,

where ‖X_τ‖₂ ≤ R. The bound above matches, up to loss-specific constants, the corresponding bound for the multiclass multilabel perceptron (MMP) algorithm studied by [15]. The definition of margin used by [15] for MMP is different from ours, since their algorithms are designed specifically for multilabel ranking. Just like them, we can also consider other losses, e.g., precision at the top K positions. Another perceptron-style algorithm for multilabel ranking adopts a pairwise approach of comparing two labels at a time [16].
However, no loss bounds are derived there. The result above uses the standard Frobenius-norm based margin. Imagine a multilabel problem where only a small number of features are relevant across all labels. Then it is natural to consider a notion of margin where the matrix that ranks everything perfectly has low group (2, 1) norm instead of low Frobenius norm, where ‖W‖_{2,1} = Σ_{j=1}^p ‖W_j‖₂ (W_j denotes a column of W). We again use a special case of Algorithm 3 (Appendix B.2). Specifically, we replace line 10 with the step W_{τ+1} = (∇Φ)⁻¹(∇Φ(W_τ) − r_τ), where Φ(W) = ½‖W‖²_{2,r}. Recall that the group (2, r)-norm is the ℓ_r norm of the ℓ₂ norms of the columns of W. We set r = log(p)/(log(p) − 1). The mapping ∇Φ and its inverse can both be easily computed (see, e.g., [17, Eq. (2)]).

Corollary 8. Suppose L = L_NDCG and rep(σ) is as in Lemma 4. Then, assuming the dataset is linearly separable with margin γ by a unit group-norm W⋆ (‖W⋆‖_{2,1} = 1), the sequence generated by Algorithm 3 with Φ chosen as above satisfies

Σ_{τ=1}^n L_NDCG(W_τ X_τ, y_τ) ≤ 9 · 2^{Y_max+3} · m² log₂²(2m) · R² · log p / γ²,

where ‖X_τ‖_∞ ≤ R.

Figure 1: Subset Ranking: NDCG loss for different pred⁻¹ choices with varying n (plot (a)) and m (plot (b)). As predicted by Lemmas 4 and 5, pred⁻¹(σ(i)) = −i^{1.1} is more accurate than 1/i. (c): ℓ₁ vs. ℓ₂ margin: L_NDCG for two Predtron variants based on the ℓ₁ and ℓ₂ margins. Data is generated using the ℓ₁ margin notion but with varying sparsity of the optimal scoring function w⋆.
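As a concrete illustration of the loss used throughout these results, here is a minimal Python sketch of L_NDCG = 1 − NDCG following the definition in Section 4, where σ(i) is the 1-based position assigned to document i. The function names are ours, not the paper's:

```python
import math

def dcg(sigma, y):
    # sigma[i] is the 1-based position assigned to document i
    return sum((2 ** y[i] - 1) / math.log2(1 + sigma[i]) for i in range(len(y)))

def ndcg_loss(sigma, y):
    # Z(y): the best achievable DCG, attained by sorting y in decreasing order
    ideal = sorted(y, reverse=True)
    z = sum((2 ** g - 1) / math.log2(1 + pos) for pos, g in enumerate(ideal, start=1))
    return 1.0 - dcg(sigma, y) / z

y = [3, 1, 2, 0]
assert abs(ndcg_loss([1, 3, 2, 4], y)) < 1e-9   # a sorting permutation: (near) zero loss
assert 0.0 < ndcg_loss([4, 2, 3, 1], y) < 1.0   # a bad ranking incurs positive loss
```

As noted in Section 4, any permutation that sorts y in decreasing order attains zero loss, which the first assertion checks.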
6 Experiments

We now present simulation results to demonstrate the application of our proposed Predtron framework to subset ranking. We also demonstrate that the empirical results match the trend predicted by our error bounds, hinting at the tightness of our (upper) bounds. Due to lack of space, we focus only on the subset ranking problem. We would also like to stress that we do not claim that the basic version of Predtron itself (with η = 1) provides a state-of-the-art ranker. Instead, we wish to demonstrate the applicability and flexibility of our framework in a controlled setting.

We generated n data points X_τ ∈ R^{m×p₀} using a Gaussian distribution with independent rows. The i-th row of X_τ represents a document and is sampled from a spherical Gaussian centered at μ_i. We selected a w⋆ ∈ R^{p₀} and a set of thresholds [ζ₁, . . . , ζ_{m+1}] to generate relevance scores; we set ζ_j = 1/j for all 2 ≤ j ≤ m, with ζ₁ = +∞ and ζ_{m+1} = −∞. We set the relevance score y_τ(i) of the i-th document in the τ-th document set as y_τ(i) = m − j iff ζ_{j+1} ≤ ⟨X_τ(i), w⋆⟩ ≤ ζ_j. That is, y_τ(i) ∈ [m − 1]. We measure the performance of a given method using the NDCG loss L_NDCG defined in Section 4. Note that L_NDCG is less sensitive to errors in predictions for the less relevant documents in the list. On the other hand, our selection of the thresholds ζ_i implies that the gap between scores of lower-ranked documents is very small compared to the higher-ranked ones, and hence the chance of making mistakes lower down the list is higher.

Figure 1(a) shows L_NDCG (on a test set) for our Predtron algorithm (see Section 4) with different pred⁻¹ functions. For pred⁻¹(σ(i)) = f₂(i) = −i^{1.1}, the gap f₂(i − 1) − f₂(i) is monotonically increasing in i. On the other hand, for pred⁻¹(σ(i)) = f₁(i) = 1/i, the gap f₁(i − 1) − f₁(i) is monotonically decreasing in i. Lemma 4 shows that the mistake bound (in terms of L_NDCG) of Predtron is better when the pred⁻¹ function is chosen as f₂(i) = −i^{1.1} (and likewise f₃(i) = −i²) rather than f₁(i) = 1/i.
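The monotonicity claims about these gaps are easy to check numerically; a small sketch, with f1 and f2 as in the experiment:

```python
# Compare the successive gaps f(i-1) - f(i) for the pred^{-1} choices in the
# experiment: f1(i) = 1/i has gaps that shrink down the list (demanding ever
# larger score separation there), while f2(i) = -i**1.1 has gaps that grow.
def gaps(f, m=20):
    return [f(i - 1) - f(i) for i in range(2, m + 1)]

f1 = lambda i: 1.0 / i
f2 = lambda i: -(i ** 1.1)

g1, g2 = gaps(f1), gaps(f2)
assert all(a > b for a, b in zip(g1, g1[1:]))  # f1 gaps strictly decrease in i
assert all(a < b for a, b in zip(g2, g2[1:]))  # f2 gaps strictly increase in i
```

By Lemma 4, the required score separation at position i_j scales as γZ/(f(i_j − 1) − f(i_j)), so shrinking gaps (f1) demand ever larger separation down the list while growing gaps (f2) relax it.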
Clearly, Figure 1(a) empirically validates this mistake bound: L_NDCG goes to almost 0 for f₂ and f₃ with just 60 training points, while f₁-based Predtron has large loss even with n = 100 training points. Next, we fix the number of training instances at n = 30 and vary the number of documents m. As the gap between the ζ_i decreases for larger i, increasing m implies reducing the margin. Naturally, Predtron with the above-mentioned inverse functions has monotonically increasing loss (see Figure 1(b)). However, f₂ and f₃ provide zero-loss solutions for larger m than f₁ does. Finally, we conduct an experiment to show that by selecting an appropriate notion of margin, Predtron can obtain more accurate solutions. To this end, we generate data from [−1, 1]^{p₀} and select a sparse w⋆. Predtron with the ℓ₂-margin notion, i.e., standard gradient descent, has a √p₀ dependency in the error bounds, while the ℓ₁-margin version (see Corollary 6) has only an s log(p₀) dependence. This dependency is also revealed by Figure 1(c): increasing p₀ with fixed s leads to a minor increase in the loss for ℓ₁-based Predtron but to significantly higher loss for ℓ₂-based Predtron.

Acknowledgments

A. Tewari acknowledges the support of NSF under grant IIS-1319810.

References

[1] Harish G. Ramaswamy and Shivani Agarwal. Classification calibration dimension for general multiclass losses. In Advances in Neural Information Processing Systems, pages 2078–2086, 2012.
[2] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[3] Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, pages 1–8, 2002.
[4] Nathan D. Ratliff, J. Andrew Bagnell, and Martin Zinkevich. (Approximate) subgradient methods for structured prediction.
In International Conference on Artificial Intelligence and Statistics, pages 380–387, 2007.
[5] Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, and Jean-Jacques Slotine. Multiclass learning with simplex coding. In Advances in Neural Information Processing Systems, pages 2789–2797, 2012.
[6] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[7] Albert B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, volume 12, pages 615–622, 1962.
[8] Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, and Jaz S. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 379–386, 2002.
[9] Gunnar Rätsch and Jyrki Kivinen. Extended classification with modified perceptron, 2002. Presented at the NIPS 2002 Workshop: Beyond Classification and Regression: Learning Rankings, Preferences, Equality Predicates, and Other Structures; abstract available at http://www.cs.cornell.edu/people/tj/ranklearn/raetsch_kivinen.pdf.
[10] Koby Crammer and Yoram Singer. Ultraconservative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951–991, 2003.
[11] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2002.
[12] Koby Crammer and Yoram Singer. Pranking with ranking. Advances in Neural Information Processing Systems, 14:641–647, 2002.
[13] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140–5154, 2008.
[14] Ambuj Tewari and Sougata Chaudhuri. Generalization error bounds for learning to rank: Does the length of document lists matter?
In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of JMLR Workshop and Conference Proceedings, 2015.
[15] Koby Crammer and Yoram Singer. A family of additive online algorithms for category ranking. The Journal of Machine Learning Research, 3:1025–1058, 2003.
[16] Eneldo Loza Mencía and Johannes Fürnkranz. Pairwise learning of multilabel classifications with perceptrons. In IEEE International Joint Conference on Neural Networks, pages 2899–2906, 2008.
[17] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Regularization techniques for learning with matrices. Journal of Machine Learning Research, 13:1865–1890, 2012.
Dependent Multinomial Models Made Easy: Stick Breaking with the Pólya-Gamma Augmentation

Scott W. Linderman∗, Harvard University, Cambridge, MA 02138, swl@seas.harvard.edu
Matthew J. Johnson∗, Harvard University, Cambridge, MA 02138, mattjj@csail.mit.edu
Ryan P. Adams, Twitter & Harvard University, Cambridge, MA 02138, rpa@seas.harvard.edu

Abstract

Many practical modeling problems involve discrete data that are best represented as draws from multinomial or categorical distributions. For example, nucleotides in a DNA sequence, children's names in a given state and year, and text documents are all commonly modeled with multinomial distributions. In all of these cases, we expect some form of dependency between the draws: the nucleotide at one position in the DNA strand may depend on the preceding nucleotides, children's names are highly correlated from year to year, and topics in text may be correlated and dynamic. These dependencies are not naturally captured by the typical Dirichlet-multinomial formulation. Here, we leverage a logistic stick-breaking representation and recent innovations in Pólya-gamma augmentation to reformulate the multinomial distribution in terms of latent variables with jointly Gaussian likelihoods, enabling us to take advantage of a host of Bayesian inference techniques for Gaussian models with minimal overhead.

1 Introduction

It is often desirable to model discrete data in terms of continuous latent structure. In applications involving text corpora, discrete-valued time series, or polling and purchasing decisions, we may want to learn correlations or spatiotemporal dynamics and leverage these structures to improve inferences and predictions.
However, adding these continuous latent dependence structures often comes at the cost of significantly complicating inference: such models may require specialized, one-off inference algorithms, such as a non-conjugate variational optimization, or they may only admit very general inference tools like particle MCMC [1] or elliptical slice sampling [2], which can be inefficient and difficult to scale. Developing, extending, and applying these models has remained a challenge.

In this paper we aim to provide a class of such models that are easy and efficient. We develop models for categorical and multinomial data in which dependencies among the multinomial parameters are modeled via latent Gaussian distributions or Gaussian processes, and we show that this flexible class of models admits a simple auxiliary-variable method that makes inference easy, fast, and modular. This construction not only makes these models simple to develop and apply, but also allows the resulting inference methods to use off-the-shelf algorithms and software for Gaussian processes and linear Gaussian dynamical systems.

The paper is organized as follows. After providing background material and defining our general models and inference methods, we demonstrate the utility of this class of models by applying it to three domains as case studies. First, we develop a correlated topic model for text corpora. Second, we study an application to modeling the spatial and temporal patterns in birth names given only sparse data. Finally, we provide a new continuous state-space model for discrete-valued sequences, including text and human DNA. In each case, given our model construction and auxiliary-variable method, inference algorithms are easy to develop and very effective in experiments. Code to use these models, write new models that leverage these inference methods, and reproduce the figures in this paper is available at github.com/HIPS/pgmult.

∗These authors contributed equally.
2 Modeling correlations in multinomial parameters

In this section, we discuss an auxiliary variable scheme that allows multinomial observations to appear as Gaussian likelihoods within a larger probabilistic model. The key trick, discussed in the following sections, is to introduce Pólya-gamma random variables into the joint distribution over data and parameters in such a way that the resulting marginal leaves the original model intact. The integral identity underlying the Pólya-gamma augmentation scheme [3] is

(e^ψ)^a / (1 + e^ψ)^b = 2^{−b} e^{κψ} ∫₀^∞ e^{−ωψ²/2} p(ω | b, 0) dω,    (1)

where κ = a − b/2 and p(ω | b, 0) is the density of the Pólya-gamma distribution PG(b, 0), which does not depend on ψ. Consider a likelihood function of the form

p(x | ψ) = c(x) (e^ψ)^{a(x)} / (1 + e^ψ)^{b(x)}    (2)

for some functions a, b, and c. Such likelihoods arise, e.g., in logistic regression and in binomial and negative binomial regression [3]. Using (1) along with a prior p(ψ), we can write the joint density of (ψ, x) as

p(ψ, x) = p(ψ) c(x) (e^ψ)^{a(x)} / (1 + e^ψ)^{b(x)} = ∫₀^∞ p(ψ) c(x) 2^{−b(x)} e^{κ(x)ψ} e^{−ωψ²/2} p(ω | b(x), 0) dω.    (3)

The integrand of (3) defines a joint density on (ψ, x, ω) which admits p(ψ, x) as a marginal density. Conditioned on these auxiliary variables ω, we have

p(ψ | x, ω) ∝ p(ψ) e^{κ(x)ψ} e^{−ωψ²/2},    (4)

which is Gaussian when p(ψ) is Gaussian. Furthermore, by the exponential tilting property of the Pólya-gamma distribution, we have ω | ψ, x ∼ PG(b(x), ψ). Thus the identity (1) gives rise to a conditionally conjugate augmentation scheme for Gaussian priors and likelihoods of the form (2). This augmentation scheme has been used to develop Gibbs sampling and variational inference algorithms for Bernoulli, binomial [3], and negative binomial [4] regression models with logit link functions, and for the multinomial distribution with a multi-class logistic link function [3, 5].
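To make (4) concrete, here is a tiny sketch, ours rather than the paper's, of the Gaussian conditional that results from completing the square when the prior p(ψ) is a univariate Gaussian N(μ₀, σ₀²):

```python
# Completing the square in (4): with a Gaussian prior psi ~ N(mu0, s0sq) and the
# augmented likelihood term exp(kappa*psi - omega*psi^2/2), the conditional of psi
# is Gaussian with precision 1/s0sq + omega and mean (mu0/s0sq + kappa)/precision.
def psi_conditional(mu0, s0sq, kappa, omega):
    prec = 1.0 / s0sq + omega
    return (mu0 / s0sq + kappa) / prec, 1.0 / prec  # (mean, variance)

# e.g. a Bernoulli/logistic likelihood with a(x) = x, b(x) = 1, so kappa = x - 1/2
mean, var = psi_conditional(mu0=0.0, s0sq=1.0, kappa=0.5, omega=2.0)
assert abs(var - 1.0 / 3.0) < 1e-12 and abs(mean - 1.0 / 6.0) < 1e-12
```

A Gibbs sweep would alternate this update with drawing ω | ψ, x ∼ PG(b(x), ψ); an actual Pólya-gamma sampler is needed for that step and is not sketched here.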
The multi-class logistic "softmax" function, π_LN(ψ), maps a real-valued vector ψ ∈ R^K to a probability vector π ∈ [0, 1]^K by setting π_k = e^{ψ_k} / Σ_{j=1}^K e^{ψ_j}. It is commonly used in multi-class regression [6] and correlated topic modeling [7]. Correlated multinomial parameters can be modeled with a Gaussian prior on the vector ψ, though the resulting models are not conjugate. The Pólya-gamma augmentation can be applied to such models [3, 5], but it only provides single-site Gibbs updating of ψ. This paper develops a joint augmentation in the sense that, given the auxiliary variables, the entire vector ψ is resampled as a block in a single Gibbs update.

2.1 A new Pólya-gamma augmentation for the multinomial distribution

First, rewrite the K-dimensional multinomial recursively in terms of K − 1 binomial densities:

Mult(x | N, π) = Π_{k=1}^{K−1} Bin(x_k | N_k, π̃_k),    (5)

N_k = N − Σ_{j<k} x_j,    π̃_k = π_k / (1 − Σ_{j<k} π_j),    k = 2, 3, . . . , K,    (6)

where N₁ = N = Σ_k x_k and π̃₁ = π₁. For convenience, we define N(x) ≡ [N₁, . . . , N_{K−1}]. This decomposition of the multinomial density is a "stick-breaking" representation in which each π̃_k represents the fraction of the remaining probability mass assigned to the k-th component. We let π̃_k = σ(ψ_k), where σ(·) denotes the logistic function, and define the function π_SB : R^{K−1} → [0, 1]^K, which maps a vector ψ to a normalized probability vector π.

Figure 1: Correlated 2D Gaussian priors on ψ and their implied densities on π_SB(ψ). See text for details.

Next, we rewrite the density into the form required by (1) by substituting σ(ψ_k) for π̃_k:

Mult(x | N, ψ) = Π_{k=1}^{K−1} Bin(x_k | N_k, σ(ψ_k)) = Π_{k=1}^{K−1} (N_k choose x_k) σ(ψ_k)^{x_k} (1 − σ(ψ_k))^{N_k − x_k}    (7)
= Π_{k=1}^{K−1} (N_k choose x_k) (e^{ψ_k})^{x_k} / (1 + e^{ψ_k})^{N_k}.    (8)

Choosing a_k(x) = x_k and b_k(x) = N_k for each k = 1, 2, . . .
, K − 1, we can then introduce Pólya-gamma auxiliary variables ω_k corresponding to each coordinate ψ_k; dropping terms that do not depend on ψ and completing the square yields

p(x, ω | ψ) ∝ Π_{k=1}^{K−1} e^{(x_k − N_k/2)ψ_k − ω_k ψ_k²/2} ∝ N(Ω^{−1}κ(x) | ψ, Ω^{−1}),    (9)

where Ω ≡ diag(ω) and κ(x) ≡ x − N(x)/2. That is, conditioned on ω, the likelihood of ψ under the augmented multinomial model is proportional to a diagonal Gaussian distribution.

Figure 1 shows how several Gaussian densities map to probability densities on the simplex. Correlated Gaussians (left) put most probability mass near the π₁ = π₂ axis of the simplex, and anticorrelated Gaussians (center) put mass along the sides, where π₁ is large when π₂ is small and vice versa. Finally, a nearly isotropic Gaussian approximates a symmetric Dirichlet. Appendix A gives a closed-form expression for the density on π induced by a Gaussian distribution on ψ, and also an expression for a diagonal Gaussian that approximates a Dirichlet by matching moments.

3 Correlated topic models

Figure 2: A comparison of correlated topic model performance. The left panel shows a subset of the inferred topic correlations for the AP News corpus. Two examples are highlighted: (a) positive correlation between the topics (house, committee, congress, law) and (Bush, Dukakis, president, campaign), and (b) anticorrelation between (percent, year, billion, rate) and (court, case, attorney, judge). The middle and right panels demonstrate the efficacy of our SB-CTM relative to competing models on the AP News corpus and the 20 Newsgroups corpus, respectively.

Latent Dirichlet Allocation (LDA) [8] is a popular model for learning topics from text corpora. The Correlated Topic Model (CTM) [7] extends LDA by including a Gaussian correlation structure among topics. This correlation model is powerful not only because it reveals correlations among
topics but also because inferring such correlations can significantly improve predictions, especially when inferring the remaining words in a document after only a few have been revealed [7]. However, the addition of this Gaussian correlation structure breaks the Dirichlet-multinomial conjugacy of LDA, making estimation, and particularly Bayesian inference and model-averaged predictions, more challenging. An approximate maximum likelihood approach using variational EM [7] is often effective, but a fully Bayesian approach which integrates out parameters may be preferable, especially when making predictions based on a small number of revealed words in a document. A recent Bayesian approach based on a Pólya-gamma augmentation of the logistic normal CTM (LN-CTM) [5] provides a Gibbs sampling algorithm with conjugate updates, but the Gibbs updates are limited to single-site resampling of one scalar at a time, which can lead to slow mixing in correlated models. In this section we show that MCMC sampling in a correlated topic model based on the stick-breaking construction (SB-CTM) can be significantly more efficient than sampling in the LN-CTM while maintaining the same integration advantage over EM.

In the standard LDA model, each topic β_t (t = 1, 2, . . . , T) is a distribution over a vocabulary of V possible words, and each document d has a distribution over topics θ_d (d = 1, 2, . . . , D). The n-th word in document d is denoted w_{n,d} for n = 1, 2, . . . , N_d. When each β_t and θ_d is given a symmetric Dirichlet prior with parameters α_β and α_θ, respectively, the generative model is

β_t ∼ Dir(α_β),  θ_d ∼ Dir(α_θ),  z_{n,d} | θ_d ∼ Cat(θ_d),  w_{n,d} | z_{n,d}, {β_t} ∼ Cat(β_{z_{n,d}}).    (10)
The CTM replaces the Dirichlet prior on each θ_d with a correlated prior induced by first sampling a correlated Gaussian vector ψ_d ∼ N(μ, Σ) and then applying the logistic normal map, θ_d = π_LN(ψ_d). Analogously, our SB-CTM generates the correlation structure by instead applying the stick-breaking logistic map, θ_d = π_SB(ψ_d). The goal is then to infer the posterior distribution over the topics β_t, the documents' topic allocations ψ_d, and their mean and correlation structure (μ, Σ), where the parameters (μ, Σ) are given a conjugate normal-inverse-Wishart (NIW) prior. Modeling correlation structure within the topics β can be done analogously.

For fully Bayesian inference in the SB-CTM, we develop a Gibbs sampler that exploits the block conditional Gaussian structure provided by the stick-breaking construction. The Gibbs sampler iteratively samples z | w, β, ψ; β | z, w; ψ | z, μ, Σ, ω; and μ, Σ | ψ, as well as the auxiliary variables ω | ψ, z. The first two are standard updates for LDA models, so we focus on the latter three. Using the identities derived in Section 2.1, the conditional density of each ψ_d | z_d, μ, Σ, ω can be written

p(ψ_d | z_d, ω_d) ∝ N(Ω_d^{−1} κ(c_d) | ψ_d, Ω_d^{−1}) N(ψ_d | μ, Σ) ∝ N(ψ_d | μ̃, Σ̃),    (11)

where we have defined

μ̃ = Σ̃ (κ(c_d) + Σ^{−1} μ),  Σ̃ = (Ω_d + Σ^{−1})^{−1},  c_{d,t} = Σ_n I[z_{n,d} = t],  Ω_d = diag(ω_d),

and so ψ_d is resampled as a joint Gaussian. The correlation structure parameters μ and Σ are sampled from their conditional NIW distribution. Finally, the auxiliary variables ω are sampled as Pólya-gamma random variables, with ω_d | z_d, ψ_d ∼ PG(N(c_d), ψ_d). A feature of the stick-breaking construction is that the auxiliary variable update is embarrassingly parallel.

We compare the performance of this Gibbs sampling algorithm for the SB-CTM to the Gibbs sampling algorithm for the LN-CTM [5], which uses a different Pólya-gamma augmentation, as well as to the original variational EM algorithm for the CTM and to collapsed Gibbs sampling in standard LDA.
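For reference, the stick-breaking map π_SB of Section 2.1, on which these block updates rely, can be sketched in a few lines. This is our illustration; the actual implementation in pgmult may differ:

```python
import math

def pi_sb(psi):
    # Stick-breaking map pi_SB: R^{K-1} -> K-simplex (Section 2.1). Each
    # sigma(psi_k) is the fraction of the remaining mass given to component k.
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    pi, remaining = [], 1.0
    for p in psi:
        frac = sigmoid(p)
        pi.append(remaining * frac)
        remaining *= 1.0 - frac
    pi.append(remaining)  # the K-th component receives whatever stick is left
    return pi

pi = pi_sb([0.0, 0.0, 0.0])  # psi = 0 gives [1/2, 1/4, 1/8, 1/8]
assert abs(sum(pi) - 1.0) < 1e-12 and all(p > 0 for p in pi)
```

Note the asymmetry visible in the example: unlike the softmax π_LN, the all-zeros vector does not map to the uniform distribution, since earlier components claim their share of the stick first.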
Figure 2 shows results on both the AP News dataset and the 20 Newsgroups dataset, where models were trained on a random subset of 95% of the complete documents and tested on the remaining 5% by estimating held-out likelihoods of half the words given the other half. The collapsed Gibbs sampler for LDA is fast, but because it does not model correlations, its predictive ability is significantly constrained. The variational EM algorithm for the CTM is reasonably fast, but its point estimate does not quite match the performance obtained by integrating out parameters via MCMC in this setting. The LN-CTM Gibbs sampler continues to improve slowly but is limited by its single-site updates, while the SB-CTM sampler seems to both mix effectively and execute efficiently due to its block Gaussian updating.

The SB-CTM demonstrates that the stick-breaking construction and the corresponding Pólya-gamma augmentation make inference in correlated topic models both easy to implement and computationally efficient. The block conditional Gaussianity also makes inference algorithms modular and compositional: the construction immediately extends to dynamic topic models (DTMs) [9], in which the latent ψ_d evolve according to linear Gaussian dynamics, and inference can be implemented simply by applying off-the-shelf code for Gaussian linear dynamical systems (see Section 5). Finally, because LDA is so commonly used as a component of other models (e.g., for images [10]), easy, effective, modular inference for CTMs and DTMs is a promising general tool.

4 Gaussian processes with multinomial observations

Consider the United States census data, which lists the first names of children born in each state for the years 1910–2013. Suppose we wish to predict the probability of a particular name in New York State in the years 2012 and 2013, given observed names in earlier years.
We might reasonably expect that name probabilities vary smoothly over time as names rise and fall in popularity, and that name probabilities are similar in neighboring states. A Gaussian process naturally captures these prior intuitions about spatiotemporal correlations, but the observed name counts are most naturally modeled as multinomial draws from latent probability distributions over names for each combination of state and year. We show how efficient inference can be performed in this otherwise difficult model by leveraging the Pólya-gamma augmentation.

Let Z ∈ R^{M×D} denote the matrix of D-dimensional inputs and X ∈ N^{M×K} the observed K-dimensional count vectors for each input. In our example, each row z_m of Z corresponds to the year, latitude, and longitude of an observation, and K is the number of names. Underlying these observations we introduce a set of latent variables ψ_{m,k} such that the probability vector at input z_m is π_m = π_SB(ψ_{m,:}). The auxiliary variables for the k-th name, ψ_{:,k}, are linked via a Gaussian process with covariance matrix C, whose entry C_{i,j} is the covariance between inputs z_i and z_j under the GP prior, and mean vector μ_k. The covariance matrix is shared by all names, and the mean is set empirically to match the measured name probability. The full model is then

ψ_{:,k} ∼ GP(μ_k, C),    x_m ∼ Mult(N_m, π_SB(ψ_{m,:})).

To perform inference, we introduce auxiliary Pólya-gamma variables ω_{m,k} for each ψ_{m,k}. Conditioned on these variables, the conditional distribution of ψ_{:,k} is

p(ψ_{:,k} | Z, X, ω, μ, C) ∝ N(Ω_k^{−1} κ(X_{:,k}) | ψ_{:,k}, Ω_k^{−1}) N(ψ_{:,k} | μ_k, C) ∝ N(ψ_{:,k} | μ̃_k, Σ̃_k),

Σ̃_k = (C^{−1} + Ω_k)^{−1},    μ̃_k = Σ̃_k (κ(X_{:,k}) + C^{−1} μ_k),

where Ω_k = diag(ω_{:,k}). The auxiliary variables are updated according to their conditional distribution: ω_{m,k} | x_m, ψ_{m,k} ∼ PG(N_{m,k}, ψ_{m,k}), where N_{m,k} = N_m − Σ_{j<k} x_{m,j}.

Figure 3: A spatiotemporal Gaussian process applied to the names of children born in the United States from 1960–2013. With a limited dataset of only 50 observations per state/year, the stick-breaking and logistic normal multinomial GPs (SBM GP and LNM GP) outperform naïve approaches in predicting the top and bottom 10 names (bottom left, parentheses: std. error). Our SBM GP, which leverages the Pólya-gamma augmentation, is considerably more efficient than the non-conjugate LNM GP (bottom right).

Average number of names correctly predicted:

Model       | 2012 Top 10 | 2012 Bot. 10 | 2013 Top 10 | 2013 Bot. 10
Static 2011 | 4.2 (1.3)   | 0.7 (1.2)    | 4.2 (1.4)   | 0.8 (1.0)
Raw GP      | 4.9 (1.1)   | 0.7 (0.9)    | 5.0 (1.0)   | 0.8 (0.9)
LNM GP      | 6.7 (1.4)   | 4.8 (1.7)    | 6.8 (1.4)   | 4.6 (1.7)
SBM GP      | 7.3 (1.0)   | 4.0 (1.8)    | 7.0 (1.0)   | 3.9 (1.4)

Figure 3 illustrates the power of this approach on U.S. census data. The top two plots show the inferred probabilities under our stick-breaking multinomial GP model for the full dataset. Interesting spatiotemporal correlations in name probability are uncovered. In this large-count regime, the posterior uncertainty is negligible since we observe thousands of names per state and year, and simply modeling the transformed empirical probabilities with a GP works well. However, in the sparse data regime with only N_m = 50 observations per input, it greatly improves performance to model uncertainty in the latent probabilities using a Gaussian process with multinomial observations.
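The block update for ψ_{:,k} above is simply a Gaussian posterior computation. A toy numpy sketch, with illustrative values rather than census data, and with gamma draws standing in for actual Pólya-gamma draws (both are positive, which is all the linear algebra needs):

```python
import numpy as np

# Block Gaussian update for psi_{:,k} given Polya-gamma variables omega_{:,k}
# (Section 4): Sigma_post = (C^{-1} + Omega_k)^{-1},
# mu_post = Sigma_post (kappa(X_{:,k}) + C^{-1} mu_k).
rng = np.random.default_rng(0)
M = 5                                   # number of (year, lat, lon) inputs
A = rng.standard_normal((M, M))
C = A @ A.T + M * np.eye(M)             # a valid GP covariance over the inputs
mu_k = np.zeros(M)
omega = rng.gamma(2.0, 1.0, size=M)     # stand-in for PG(N_{m,k}, psi_{m,k}) draws
kappa = rng.standard_normal(M)          # kappa(X_{:,k}) = x_{:,k} - N_{:,k}/2

C_inv = np.linalg.inv(C)
Sigma_post = np.linalg.inv(C_inv + np.diag(omega))
mu_post = Sigma_post @ (kappa + C_inv @ mu_k)

# Observations add precision, so the posterior covariance shrinks:
# C - Sigma_post is positive semidefinite.
assert np.all(np.linalg.eigvalsh(C - Sigma_post) > -1e-8)
```

In a real sampler this update alternates with drawing the ω_{m,k} from their PG conditionals; the shrinkage assertion at the end reflects that the augmented likelihood only ever adds precision to the GP prior.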
The bottom panels compare four methods of predicting future names in the years 2012 and 2013 for a down-sampled dataset with N_m = 50: predicting based on the empirical probability measured in 2011; a standard GP fit to the empirical probabilities transformed by π_SB^{−1} (Raw GP); a GP whose outputs are transformed by the logistic normal function π_LN to obtain multinomial probabilities (LNM GP), fit using elliptical slice sampling [2]; and our stick-breaking multinomial GP (SBM GP). In terms of the ability to predict the top and bottom 10 names, the multinomial models are both comparable to each other and vastly superior to the naïve approaches. The SBM GP model is considerably faster than the logistic normal version, as shown in the bottom right panel. The augmented Gibbs sampler is more efficient than the elliptical slice sampling algorithm used to handle the non-conjugacy in the LNM GP. Moreover, we are able to make collapsed predictions in which we compute the predictive distribution of the test ψ's given ω, integrating out the training ψ. In contrast, the LNM GP must condition on the training GP values in order to make predictions, and effectively integrates over training samples using MCMC. Appendix B goes into greater detail on how marginal predictions are computed and why they are more efficient than predicting conditioned on a single value of ψ.

Figure 4: Predictive log likelihood comparison of time series models with multinomial observations.

5 Multinomial linear dynamical systems

While discrete-state hidden Markov models (HMMs) are ubiquitous for modeling time series and sequence data, it can be preferable to use a continuous state space model. In particular, while discrete states have no intrinsic geometry, continuous states can correspond to natural Euclidean embeddings [11]. These considerations are particularly relevant to text, where word embeddings [12] have proven to be a powerful tool.
Gaussian linear dynamical systems (LDS) provide very efficient learning and inference algorithms, but they can typically only be applied when the observations are themselves linear with Gaussian noise. While it is possible to apply a Gaussian LDS to count vectors [11], the resulting model is misspecified in the sense that, as a continuous density, the model assigns zero probability to training and test data. However, Belanger and Kakade [11] show that this model can still be used for several machine learning tasks with compelling performance, and that the efficient algorithms afforded by the misspecified Gaussian assumptions confer a significant computational advantage. Indeed, the authors have observed that such a Gaussian model is “worth exploring, since multinomial models with softmax link functions prevent closed-form M step updates and require expensive” computations [13]; this paper aims to bridge precisely this gap and enable efficient Gaussian LDS computational methods to be applied while maintaining multinomial emissions and an asymptotically unbiased representation of the posterior. While there are other approximation schemes that effectively extend some of the benefits of LDSs to nonlinear, non-Gaussian settings, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF) [14, 15], these methods do not allow for asymptotically unbiased Bayesian inference, can have complex behavior, and can make model learning a challenge. Alternatively, particle MCMC (pMCMC) [1] is a very powerful algorithm that provides unbiased Bayesian inference for very general state space models, but it does not enjoy the efficient block updates or conjugacy of LDSs or HMMs. 
The stick-breaking multinomial linear dynamical system (SBM-LDS) generates states via a linear Gaussian dynamical system but generates multinomial observations via the stick-breaking map:

z_0 | µ_0, Σ_0 ∼ N(µ_0, Σ_0),   z_t | z_{t−1}, A, B ∼ N(A z_{t−1}, B),   x_t | z_t, C ∼ Mult(N_t, π_SB(C z_t)),

where z_t ∈ R^D is the system state at time t and x_t ∈ N^K are the multinomial observations. We suppress notation for conditioning on A, B, C, µ_0, and Σ_0, which are system parameters of appropriate sizes that are given conjugate priors. The logistic normal multinomial LDS (LNM-LDS) is defined analogously but uses π_LN in place of π_SB. To produce a Gibbs sampler with fully conjugate updates, we augment the observations with Pólya-gamma random variables ω_{t,k}. As a result, the conditional state sequence z_{1:T} | ω_{1:T}, x_{1:T} is jointly distributed according to a Gaussian LDS in which the diagonal observation potential at time t is N(Ω_t^{−1} κ(x_t) | C z_t, Ω_t^{−1}). Thus the state sequence can be jointly sampled using off-the-shelf LDS software, and the system parameters can similarly be updated using standard algorithms. The only remaining update is to the auxiliary variables, which are sampled according to ω_t | z_t, C, x ∼ PG(N(x_t), C z_t). We compare the SBM-LDS and the Gibbs sampling inference algorithm to three baseline methods: an LNM-LDS using pMCMC and ancestor resampling [16] for inference, an HMM using Gibbs sampling, and a "raw" LDS which treats the multinomial observation vectors as observations in R^K as in [11]. We examine each method's performance on three experiments: modeling a sequence of 682 amino acids from human DNA with 22-dimensional observations, a set of 20 random AP news articles with an average of 77 words per article and a vocabulary size of 200 words, and an excerpt of 4000 words from Lewis Carroll's Alice's Adventures in Wonderland with a vocabulary of 1000 words.
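The SBM-LDS generative process above can be forward-simulated directly. The following is an illustrative sketch (all function and variable names are our own, and the parameter values are arbitrary; `B_chol` and `Sig0_chol` are Cholesky factors of the noise covariances):

```python
import numpy as np

def sample_sbm_lds(A, B_chol, C, mu0, Sig0_chol, N, T, rng):
    """Forward-sample the SBM-LDS: linear Gaussian dynamics on z_t,
    multinomial emissions x_t ~ Mult(N, pi_SB(C z_t))."""
    z = mu0 + Sig0_chol @ rng.standard_normal(mu0.shape[0])
    zs, xs = [], []
    for _ in range(T):
        sig = 1.0 / (1.0 + np.exp(-(C @ z)))          # logistic stick fractions
        pi = np.append(sig, 1.0) * np.concatenate(([1.0], np.cumprod(1.0 - sig)))
        zs.append(z)
        xs.append(rng.multinomial(N, pi))
        z = A @ z + B_chol @ rng.standard_normal(z.shape[0])
    return np.array(zs), np.array(xs)

rng = np.random.default_rng(0)
D, K, T = 2, 4, 5          # K categories need K-1 stick-breaking dimensions
A = 0.9 * np.eye(D)
B_chol = 0.1 * np.eye(D)
C = rng.standard_normal((K - 1, D))
zs, xs = sample_sbm_lds(A, B_chol, C, np.zeros(D), np.eye(D), N=50, T=T, rng=rng)
```

Each emitted count vector sums to the total count N, and the latent trajectory is a standard Gaussian LDS, which is what the Pólya-gamma augmentation exploits for conjugate updates.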
We reserved the final 10 amino acids, 10 words per news article, and 100 words from Alice for computing predictive likelihoods. Each linear dynamical model had a 10-dimensional state space, while the HMM had 10 discrete states (HMMs with 20, 30, and 40 states all performed worse on these tasks). Figure 4 (left panels) shows the predictive log likelihood for each method on each experiment, normalized by the number of counts in the test dataset and relative to the likelihood under a multinomial model fit to the training data mean. For the DNA data, which has the smallest "vocabulary" size, the HMM achieves the highest predictive likelihood, but the SBM-LDS edges out the other LDS methods. On the two text datasets, the SBM-LDS outperforms the other methods, particularly in Alice where the vocabulary is larger and the document is longer. In terms of run time, the SBM-LDS is orders of magnitude faster than the LNM-LDS with pMCMC (right panel) because it mixes much more efficiently over the latent trajectories.

6 Related Work

The stick-breaking transformation used herein was applied to categorical models by Khan et al. [17], but they used a local variational bound instead of the Pólya-gamma augmentation. Their promising results corroborate our findings of improved performance using this transformation. However, their generalized expectation-maximization algorithm is not fully Bayesian, and does not integrate into existing Gaussian modeling and inference code as easily as our augmentation. Conversely, Chen et al. [5] used the Pólya-gamma augmentation in conjunction with the logistic normal transformation for correlated topic modeling, exploiting the conditional conjugacy of a single entry ψ_k | ω_k, ψ_{¬k} with a Gaussian prior. Unlike our stick-breaking transformation, which admits block Gibbs sampling over the entire vector ψ simultaneously, their approach is limited to single-site Gibbs sampling.
As shown in our correlated topic model experiments, this has dramatic effects on inferential performance. Moreover, it precludes analytical marginalization and integration with existing Gaussian modeling algorithms. For example, it is not immediately applicable to inference in linear dynamical systems with multinomial observations.

7 Conclusion

These case studies demonstrate that the stick-breaking multinomial model construction paired with the Pólya-gamma augmentation yields a flexible class of models with easy, efficient, and compositional inference. In addition to making these models easy, the methods developed here can also enable new models for multinomial and mixed data: the latent continuous structures used here to model correlations and state-space structure can be leveraged to explore new models for interpretable feature embeddings, interacting time series, and dependence with other covariates.

8 Acknowledgements

S.W.L. is supported by a Siebel Scholarship and the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. M.J.J. is supported by the Harvard/MIT Joint Research Grants Program. R.P.A. is supported by NSF IIS-1421780 as well as the Alfred P. Sloan Foundation.

References

[1] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
[2] Iain Murray, Ryan P. Adams, and David J.C. MacKay. Elliptical slice sampling. Journal of Machine Learning Research: Workshop and Conference Proceedings (AISTATS), 9:541–548, 2010.
[3] Nicholas G Polson, James G Scott, and Jesse Windle. Bayesian inference for logistic models using Pólya-gamma latent variables. Journal of the American Statistical Association, 108(504):1339–1349, 2013.
[4] Mingyuan Zhou, Lingbo Li, David Dunson, and Lawrence Carin. Lognormal and gamma mixed negative binomial regression.
In Proceedings of the International Conference on Machine Learning, volume 2012, page 1343, 2012.
[5] Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng, and Bo Zhang. Scalable inference for logistic-normal topic models. In Advances in Neural Information Processing Systems, pages 2445–2453, 2013.
[6] Chris C Holmes, Leonhard Held, et al. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1):145–168, 2006.
[7] David Blei and John Lafferty. Correlated topic models. Advances in Neural Information Processing Systems, 18:147, 2006.
[8] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[9] David M Blei and John D Lafferty. Dynamic topic models. In Proceedings of the International Conference on Machine Learning, pages 113–120. ACM, 2006.
[10] Xiaogang Wang and Eric Grimson. Spatial latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 1577–1584, 2008.
[11] David Belanger and Sham Kakade. A linear dynamical system model for text. In Proceedings of the International Conference on Machine Learning, 2015.
[12] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the International Conference on Machine Learning, pages 160–167. ACM, 2008.
[13] David Belanger and Sham Kakade. Embedding word tokens using a linear dynamical system. In NIPS 2014 Modern ML+NLP Workshop, 2014.
[14] Eric A Wan and Rudolph Van Der Merwe. The unscented Kalman filter for nonlinear estimation. In Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000 (AS-SPCC), pages 153–158. IEEE, 2000.
[15] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005.
[16] Fredrik Lindsten, Thomas Schön, and Michael I Jordan. Ancestor sampling for particle Gibbs.
In Advances in Neural Information Processing Systems, pages 2591–2599, 2012.
[17] Mohammad E Khan, Shakir Mohamed, Benjamin M Marlin, and Kevin P Murphy. A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In International Conference on Artificial Intelligence and Statistics, pages 610–618, 2012.
MCMC for Variationally Sparse Gaussian Processes

James Hensman, CHICAS, Lancaster University, james.hensman@lancaster.ac.uk
Alexander G. de G. Matthews, University of Cambridge, am554@cam.ac.uk
Maurizio Filippone, EURECOM, maurizio.filippone@eurecom.fr
Zoubin Ghahramani, University of Cambridge, zoubin@cam.ac.uk

Abstract

Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte-Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at github.com/sparseMCMC.

1 Introduction

Gaussian process models are attractive for machine learning because of their flexible nonparametric nature. By combining a GP prior with different likelihoods, a multitude of machine learning tasks can be tackled in a probabilistic fashion [1]. There are three things to consider when using a GP model: approximation of the posterior function (especially if the likelihood is non-Gaussian); computation, storage, and inversion of the covariance matrix, which scales poorly in the number of data; and estimation (or marginalization) of the covariance function parameters. A multitude of approximation schemes have been proposed for efficient computation when the number of data is large. Early strategies were based on retaining a subset of the data [2].
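The poor scaling referred to above is the O(N³) cost of factorizing the N×N covariance matrix, which is exactly the step sparse approximations avoid. A minimal sketch of building a covariance matrix from an RBF kernel and sampling from the GP prior (illustrative code, not from the paper's implementation):

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

# Sampling from the GP prior N(0, Kff) requires a Cholesky factorization,
# the O(N^3) step whose cost motivates inducing-point methods.
X = np.linspace(0.0, 1.0, 6)[:, None]
Kff = rbf_kernel(X, X) + 1e-9 * np.eye(6)   # jitter for numerical stability
L = np.linalg.cholesky(Kff)
f = L @ np.random.default_rng(0).standard_normal(6)
```

The same `rbf_kernel` evaluated on pairs (X, Z) and (Z, Z) would give the cross-covariances K_uf and K_uu used by the sparse methods discussed below.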
Snelson and Ghahramani [3] introduced an inducing point approach, where the model is augmented with additional variables, and Titsias [4] used these ideas in a variational approach. Other authors have introduced approximations based on the spectrum of the GP [5, 6], or which exploit specific structures within the covariance matrix [7, 8], or by making unbiased stochastic estimates of key computations [9]. In this work, we extend the variational inducing point framework, which we prefer for general applicability (no specific requirements are made of the data or covariance function), and because the variational inducing point approach can be shown to minimize the KL divergence to the posterior process [10]. To approximate the posterior function and covariance parameters, Markov chain Monte-Carlo (MCMC) approaches provide asymptotically exact approximations. Murray and Adams [11] and Filippone et al. [12] examine schemes which iteratively sample the function values and covariance parameters. Such sampling schemes require computation and inversion of the full covariance matrix at each iteration, making them unsuitable for large problems. Computation may be reduced somewhat by considering variational methods, approximating the posterior using some fixed family of distributions [13, 14, 15, 16, 1, 17], though many covariance matrix inversions are generally required. Recent works [18, 19, 20] have proposed inducing point schemes which can reduce the computation required substantially, though the posterior is assumed Gaussian and the covariance parameters are estimated by (approximate) maximum likelihood.

Table 1: Existing variational approaches

Reference | p(y | f) | Sparse | Posterior | Hyperparam.
Williams & Barber [21] [also 14, 17] | probit/logit | no | Gaussian (assumed) | point estimate
Titsias [4] | Gaussian | yes | Gaussian (optimal) | point estimate
Chai [18] | softmax | yes | Gaussian (assumed) | point estimate
Nguyen and Bonilla [1] | any factorized | no | Mixture of Gaussians | point estimate
Hensman et al. [20] | probit | yes | Gaussian (assumed) | point estimate
This work | any factorized | yes | free-form | free-form

Table 1 places our work in the context of existing variational methods for GPs. This paper presents a general inference scheme, with the only concession to approximation being the variational inducing point assumption. Non-Gaussian posteriors are permitted through MCMC, with the computational benefits of the inducing point framework. The scheme jointly samples the inducing-point representation of the function with the covariance function parameters; with sufficient inducing points our method approaches full Bayesian inference over GP values and the covariance parameters. We show empirically that the number of required inducing points is substantially smaller than the dataset size for several real problems.

2 Stochastic process posteriors

The model is set up as follows. We are presented with some data inputs X = {x_n}_{n=1}^N and responses y = {y_n}_{n=1}^N. A latent function is assumed drawn from a GP with zero mean and covariance function k(x, x′) with (hyper-)parameters θ. Consistency of the GP means that only those points with data are considered: the latent vector f represents the values of the function at the observed points, f = {f(x_n)}_{n=1}^N, and has conditional distribution p(f | X, θ) = N(f | 0, K_ff), where K_ff is a matrix composed of evaluating the covariance function at all pairs of points in X. The data likelihood depends on the latent function values: p(y | f). To make a prediction for latent function values at test points f* = {f(x*)}_{x* ∈ X*}, the posterior function values and parameters are integrated:

p(f* | y) = ∫∫ p(f* | f, θ) p(f, θ | y) dθ df.
(1)

In order to make use of the computational savings offered by the variational inducing point framework [4], we introduce additional input points to the function, Z, and collect the responses of the function at those points into the vector u = {u_m = f(z_m)}_{m=1}^M. With some variational posterior q(u, θ), new points are predicted similarly to the exact solution:

q(f*) = ∫∫ p(f* | u, θ) q(u, θ) dθ du.   (2)

This makes clear that the approximation is a stochastic process in the same fashion as the true posterior: the length of the predictions vector f* is potentially unbounded, covering the whole domain. To obtain a variational objective, first consider the support of u under the true posterior, and for f under the approximation. In the above, these points are subsumed into the prediction vector f*: from here we shall be more explicit, letting f be the points of the process at X, u be the points of the process at Z, and f* be a large vector containing all other points of interest¹. All of the free parameters of the model are then f*, f, u, θ, and using a variational framework, we aim to minimize the Kullback-Leibler divergence between the approximate and true posteriors:

K ≜ KL[q(f*, f, u, θ) || p(f*, f, u, θ | y)]
  = −E_{q(f*, f, u, θ)} [ log ( p(f* | u, f, θ) p(u | f, θ) p(f, θ | y) ) / ( p(f* | u, f, θ) p(f | u, θ) q(u, θ) ) ]   (3)

where the conditional distributions for f* have been expanded to make clear that they are the same under the true and approximate posteriors, and X, Z and X* have been omitted for clarity.

¹ The vector f* here is considered finite but large enough to contain any point of interest for prediction. The infinite case follows Matthews et al. [10], is omitted here for brevity, and results in the same solution.
Straightforward identities simplify the expression,

K = −E_{q(f, u, θ)} [ log ( p(u | f, θ) p(f | θ) p(θ) p(y | f) / p(y) ) / ( p(f | u, θ) q(u, θ) ) ]
  = −E_{q(f, u, θ)} [ log ( p(u | θ) p(θ) p(y | f) ) / q(u, θ) ] + log p(y),   (4)

resulting in the variational inducing-point objective investigated by Titsias [4], aside from the inclusion of θ. This can be rearranged to give the following informative expression:

K = KL[ q(u, θ) || p(u | θ) p(θ) exp{E_{p(f | u, θ)}[log p(y | f)]} / C ] − log C + log p(y).   (5)

Here C is an intractable constant which normalizes the distribution and is independent of q. Minimizing the KL divergence on the right hand side reveals that the optimal variational distribution is

log q̂(u, θ) = E_{p(f | u, θ)}[log p(y | f)] + log p(u | θ) + log p(θ) − log C.   (6)

For general likelihoods, since the optimal distribution does not take any particular form, we intend to sample from it using MCMC, thus combining the benefits of variationally-sparse Gaussian processes with a free-form posterior. Sampling is feasible using standard methods since log q̂ is computable up to a constant, using O(NM²) computations. After completing this work, it was brought to our attention that a similar suggestion had been made in [22], though the idea was dismissed because "prediction in sparse GP models typically involves some additional approximations". Our presentation of the approximation consisting of the entire stochastic process makes clear that no additional approximations are required. To sample effectively, the following are proposed.

Whitening the prior. Noting that the problem (6) appears similar to a standard GP for u, albeit with an interesting 'likelihood', we make use of an ancillary augmentation u = Rv, with RR⊤ = K_uu, v ∼ N(0, I).
This results in the optimal variational distribution

log q̂(v, θ) = E_{p(f | u=Rv)}[log p(y | f)] + log p(v) + log p(θ) − log C.   (7)

Previously [11, 12] this parameterization has been used with schemes which alternate between sampling the latent function values (represented by v or u) and the parameters θ. Our scheme uses HMC across v and θ jointly, whose effectiveness is examined throughout the experiment section.

Quadrature. The first term in (6) is the expected log-likelihood. In the case of factorization across the data-function pairs, this results in N one-dimensional integrals. For Gaussian or Poisson likelihoods these integrals are tractable; otherwise they can be approximated by Gauss-Hermite quadrature. Given the current sample v, the expectations are computed w.r.t. p(f_n | v, θ) = N(µ_n, γ_n), with:

µ = A⊤v;   γ = diag(K_ff − A⊤A);   A = R⁻¹K_uf;   RR⊤ = K_uu,   (8)

where the kernel matrices K_uf, K_uu are computed similarly to K_ff, but over the pairs in (X, Z), (Z, Z) respectively. From here, one can compute the expected likelihood, and it is subsequently straightforward to compute derivatives in terms of K_uf, diag(K_ff), and R.

Reverse mode differentiation of Cholesky. To compute derivatives with respect to θ and Z we use reverse-mode differentiation (backpropagation) of the derivative through the Cholesky matrix decomposition, transforming ∂log q̂(v, θ)/∂R into ∂log q̂(v, θ)/∂K_uu, and then ∂log q̂(v, θ)/∂θ. This is discussed by Smith [23], and results in an O(M³) operation; an efficient Cython implementation is provided in the supplement.

3 Treatment of inducing point positions & inference strategy

A natural question is, what strategy should be used to select the inducing points Z? In the original inducing point formulation [3], the positions Z were treated as parameters to be optimized. One could interpret them as parameters of the approximate prior covariance [24].
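The moments in equation (8) above reduce to one Cholesky factorization and a triangular solve; a small NumPy sketch (our own naming, not the authors' code):

```python
import numpy as np

def conditional_moments(Kuu, Kuf, kff_diag, v):
    """Per-point moments of p(f_n | v, theta) in the whitened parameterization
    u = R v, RR^T = Kuu (eq. 8): mu = A^T v, gamma = diag(Kff - A^T A),
    with A = R^{-1} Kuf."""
    R = np.linalg.cholesky(Kuu)
    A = np.linalg.solve(R, Kuf)          # A = R^{-1} Kuf
    mu = A.T @ v
    gamma = kff_diag - (A * A).sum(axis=0)
    return mu, gamma
```

When Z coincides with X (so K_uu ≈ K_ff), the conditional variances γ collapse to (numerically) zero, a useful sanity check on an implementation.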
The variational formulation [4] treats them as parameters of the variational approximation, thus protecting from over-fitting as they form part of the variational posterior. In this work, since we propose a Bayesian treatment of the model, we question whether it is feasible to treat Z in a Bayesian fashion. Since u and Z are auxiliary parameters, the form of their distribution does not affect the marginals of the model. The term p(u | Z) has been defined by the consistency with the GP in order to preserve the posterior-process interpretation above (i.e. u should be points on the GP), but we are free to choose p(Z). Omitting dependence on θ for clarity, and choosing w.l.o.g. q(u, Z) = q(u | Z)q(Z), the bound on the marginal likelihood, similarly to (4), is given by

L = E_{p(f | u, Z) q(u | Z) q(Z)} [ log ( p(y | f) p(u | Z) p(Z) ) / ( q(u | Z) q(Z) ) ].   (9)

The bound can be maximized w.r.t. p(Z) by noting that the term only appears inside a (negative) KL divergence: −E_{q(Z)}[log q(Z)/p(Z)]. Substituting the optimal p(Z) = q(Z) reduces (9) to

L = E_{q(Z)} [ E_{p(f | u, Z) q(u | Z)} [ log ( p(y | f) p(u | Z) ) / q(u | Z) ] ],   (10)

which can now be optimized w.r.t. q(Z). Since no entropy term appears for q(Z), the bound is maximized when the distribution becomes a Dirac delta. In summary, since we are free to choose a prior for Z which maximizes the amount of information captured by u, the optimal distribution becomes p(Z) = q(Z) = δ(Z − Ẑ). This formally motivates optimizing the inducing points Z.

Derivatives for Z. For completeness we also include the derivative of the free-form objective with respect to the inducing point positions. Substituting the optimal distribution q̂(u, θ) into (4) to give K̂ and then differentiating, we obtain

∂K̂/∂Z = −∂log C/∂Z = −E_{q̂(v, θ)} [ ∂/∂Z E_{p(f | u=Rv)}[log p(y | f)] ].   (11)

Since we aim to draw samples from q̂(v, θ), evaluating this free-form inducing point gradient using samples seems plausible but challenging. Instead we use the following strategy.

1.
Fit a Gaussian approximation to the posterior. We follow [20] in fitting a Gaussian approximation to the posterior. The positions of the inducing points are initialized using k-means clustering of the data. The values of the latent function are represented by a mean vector (initialized randomly) and a lower-triangular matrix L forms the approximate posterior covariance as LL⊤. For large problems (such as the MNIST experiment), stochastic optimization using AdaDelta is used. Otherwise, LBFGS is used. After a few hundred iterations with the inducing point positions fixed, they are optimized in free form alongside the variational parameters and covariance function parameters.

2. Initialize the model using the approximation. Having found a satisfactory approximation, the HMC strategy takes the optimized inducing point positions from the Gaussian approximation. The initial value of v is drawn from the Gaussian approximation, and the covariance parameters are initialized at the (approximate) MAP value.

3. Tuning HMC. The HMC algorithm has two free parameters to tune: the number of leapfrog steps and the step-length. We follow a strategy inspired by Wang et al. [25], where the number of leapfrog steps is drawn randomly from 1 to L_max, and Bayesian optimization is used to maximize the expected square jump distance (ESJD), penalized by √L_max. Rather than allow an adaptive (but convergent) scheme as in [25], we run the optimization for 30 iterations of 30 samples each, and use the best parameters for a long run of HMC.

4. Run tuned HMC to obtain predictions. Having tuned the HMC, it is run for several thousand iterations to obtain a good approximation to q̂(v, θ). The samples are used to estimate the integral in equation (2). The following section investigates the effectiveness of the proposed sampling scheme.

4 Experiments

4.1 Efficient sampling using Hamiltonian Monte Carlo

This section illustrates the effectiveness of Hamiltonian Monte Carlo in sampling from q̂(v, θ).
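A single HMC transition with an identity mass matrix can be sketched as follows; this is a generic textbook leapfrog implementation (not the authors' tuned sampler, which randomizes the number of leapfrog steps and tunes both free parameters by Bayesian optimization):

```python
import numpy as np

def hmc_step(logp_and_grad, x0, step_size, n_leapfrog, rng):
    """One HMC transition: leapfrog integration of Hamiltonian dynamics
    followed by a Metropolis accept/reject step. logp_and_grad returns
    (log p(x), d log p(x)/dx); the mass matrix is the identity."""
    p = rng.standard_normal(x0.shape)
    x = x0.copy()
    logp, grad = logp_and_grad(x)
    h0 = logp - 0.5 * (p @ p)
    for _ in range(n_leapfrog):
        p = p + 0.5 * step_size * grad      # half step in momentum
        x = x + step_size * p               # full step in position
        logp, grad = logp_and_grad(x)
        p = p + 0.5 * step_size * grad      # half step in momentum
    h1 = logp - 0.5 * (p @ p)
    return x if np.log(rng.uniform()) < h1 - h0 else x0
```

In the paper's setting, `logp_and_grad` would evaluate log q̂(v, θ) of equation (7) and its gradient over the joint vector (v, θ); here it could be any differentiable unnormalized log-density.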
As already pointed out, the form assumed by the optimal variational distribution q̂(v, θ) in equation (6) resembles the joint distribution in a GP model with a non-Gaussian likelihood. For a fixed θ, sampling v is relatively straightforward, and this can be done efficiently using HMC [12, 26, 27] or Elliptical Slice Sampling [28]. A well-tuned HMC has been reported to be extremely efficient in sampling the latent variables, and this motivates our effort to extend this efficiency to the sampling of hyper-parameters as well. This is also particularly appealing due to the convenience offered by the proposed representation of the model. The problem of drawing samples from the posterior distribution over v, θ has been investigated in detail in [11, 12]. In these works, it has been advocated to alternate between the sampling of v and θ in a Gibbs sampling fashion and to condition the sampling of θ on a suitably chosen transformation of the latent variables. For each likelihood model, we compare efficiency and convergence speed of the proposed HMC sampler with a Gibbs sampler where v is sampled using HMC and θ is sampled using the Metropolis-Hastings algorithm. To make the comparison fair, we constrained the mass matrix in HMC and the covariance in MH to be isotropic, and any parameters of the proposal were tuned using Bayesian optimization. Unlike in the proposed HMC sampler, for the Gibbs sampler we did not penalize the objective function of the Bayesian optimization for large numbers of leapfrog steps, as in this case HMC proposals on the latent variables are computationally cheaper than those on the hyper-parameters. We report efficiency in sampling from q̂(v, θ) using Effective Sample Size (ESS) and Time Normalized (TN)-ESS.
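The ESS reported throughout can be estimated from the chain's autocorrelations; the following is one common variant, truncating the autocorrelation sum at the first non-positive lag (estimator conventions differ between packages):

```python
import numpy as np

def effective_sample_size(chain):
    """ESS = n / (1 + 2 * sum of leading positive autocorrelations).
    A simple initial-positive-sequence truncation is used."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)
    tau = 1.0
    for r in acf[1:]:
        if r <= 0.0:
            break
        tau += 2.0 * r
    return n / tau
```

An i.i.d. chain gives an ESS close to its length, while a strongly autocorrelated chain gives a much smaller value; TN-ESS then simply divides ESS by wall-clock time.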
In the supplement we include convergence plots based on the Potential Scale Reduction Factor (PSRF) computed from ten parallel chains; each chain is initialized from the VB solution and individually tuned using Bayesian optimization.

4.2 Binary Classification

We first use the image dataset [29] to investigate the benefits of the approach over a Gaussian approximation, and to investigate the effect of changing the number of inducing points, as well as optimizing the inducing points under the Gaussian approximation. The data are 18-dimensional: we investigated the effect of our approximation using both ARD (one lengthscale per dimension) and an isotropic RBF kernel. The data were split randomly into 1000/1019 train/test sets; the log predictive density over ten random splits is shown in Figure 1. Following the strategy outlined above, we fitted a Gaussian approximation to the posterior, with Z initialized with k-means. Figure 1 investigates the difference in performance when Z is optimized using the Gaussian approximation, compared to just using k-means for Z. Whilst our strategy is not guaranteed to find the global optimum, it is clear that it improves the performance. The second part of Figure 1 shows the performance improvement of our sampling approach over the Gaussian approximation. We drew 10,000 samples, discarding the first 1000: we see a consistent improvement in performance once M is large enough. For small M, the Gaussian approximation appears to work very well. The supplement contains a similar figure for the case where a single lengthscale is shared: there, the improvement of the MCMC method over the Gaussian approximation is smaller but consistent. We speculate that the larger gains for ARD are due to posterior uncertainty in the lengthscales, which is poorly represented by a point in the Gaussian/MAP approximation. The ESS and TN-ESS are comparable between HMC and the Gibbs sampler.
In particular, for 100 inducing points and the RBF covariance, ESS and TN-ESS for HMC are 11 and 1.0·10⁻³ and for the Gibbs sampler are 53 and 5.1·10⁻³. For the ARD covariance, ESS and TN-ESS for HMC are 14 and 5.1·10⁻³ and for the Gibbs sampler are 1.6 and 1.5·10⁻⁴. Convergence, however, seems to be faster for HMC, especially for the ARD covariance (see the supplement).

4.3 Log Gaussian Cox Processes

We apply our methods to log Gaussian Cox processes [30]: doubly stochastic models where the rate of an inhomogeneous Poisson process is given by a Gaussian process. The main difficulty for inference lies in the fact that the likelihood of the GP requires an integral over the domain, which is typically intractable. For low-dimensional problems, this integral can be approximated on a grid; assuming that the GP is constant over the width of the grid leads to a factorizing Poisson likelihood for each of the grid points. Whilst some recent approaches allow for a grid-free approach [19], these usually require concessions in the model, such as an alternative link function, and do not approach full Bayesian inference over the covariance function parameters.

Figure 1: Performance of the method on the image dataset, with one lengthscale per dimension. Left: box-plots show performance for varying numbers of inducing points and Z strategies. Optimizing Z using the Gaussian approximation offers significant improvement over the k-means strategy. Right: improvement of the performance of the Gaussian approximation method, with the same inducing points. The method offers consistent performance gains when the number of inducing points is larger. The supplement contains a similar figure with only a single lengthscale.
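The factorizing grid Poisson likelihood described in Section 4.3 can be sketched as follows (the log(counts!) constant is dropped, as it does not depend on f):

```python
import numpy as np

def lgcp_grid_loglik(f, counts, cell_area):
    """Factorized Poisson log-likelihood for a log Gaussian Cox process on a
    grid: the rate in cell j is exp(f_j) * cell_area, treating the GP as
    constant over each cell. Constant terms in the counts are omitted."""
    log_rate = f + np.log(cell_area)
    return float(np.sum(counts * log_rate - np.exp(log_rate)))
```

Because this likelihood factorizes over grid cells, the one-dimensional expectations required by equation (8) are tractable in closed form for the Poisson case, as noted in the Quadrature paragraph above.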
Figure 2: The posterior of the rates for the coal mining disaster data. Left: posterior rates using our variational MCMC method and a Gaussian approximation. Data are shown as vertical bars. Right: posterior samples for the covariance function parameters using MCMC. The Gaussian approximation estimated the parameters as (12.06, 0.55).

Coal mining disasters. On the one-dimensional coal-mining disaster data, we held out 50% of the data at random and, using a grid of 100 points with 30 evenly spaced inducing points Z, fitted a Gaussian approximation to the posterior process with an (approximate) MAP estimate for the covariance function parameters (variance and lengthscale of an RBF kernel). With Gamma priors on the covariance parameters we ran our sampling scheme using HMC, drawing 3000 samples. The resulting posterior approximations are shown in Figure 2, alongside the true posterior obtained using a sampling scheme similar to ours (but without the inducing point approximation). The free-form variational approximation matches the true posterior closely, whilst the Gaussian approximation misses important detail. The approximate and true posteriors over covariance function parameters are shown in the right hand part of Figure 2; there is minimal discrepancy in the distributions. Over 10 random splits of the data, the average held-out log-likelihood was −1.229 for the Gaussian approximation and −1.225 for the free-form MCMC variant; the average difference was 0.003, and the MCMC variant was always better than the Gaussian approximation. We attribute this improved performance to marginalization of the covariance function parameters. Efficiency of HMC is greater than for the Gibbs sampler; ESS and TN-ESS for HMC are 6.7 and 3.1·10⁻² and for the Gibbs sampler are 9.7 and 1.9·10⁻².
Also, chains converge within a few thousand iterations for both methods, although convergence for HMC is faster (see the supplement).

Figure 3: Pine sapling data. From left to right: reported locations of pine saplings; posterior mean intensity on a 32x32 grid using full MCMC; posterior mean intensity on a 32x32 grid (with sparsity, using 225 inducing points); posterior mean intensity on a 64x64 grid (using 225 inducing points).

Pine saplings. The advantages of the proposed approximation become prominent as the number of grid points grows, an effect emphasized with increasing dimension of the domain. We fitted a similar model to the above to the pine sapling data [30]. We compared the sampling solution obtained using 225 inducing points on a 32x32 grid to the gold standard full MCMC run with the same prior and grid size. Figure 3 shows that the agreement between the variational sampling and full sampling is very close. However, the variational method was considerably faster. Using a single core on a desktop computer, it required 3.4 seconds to obtain one effective sample for a well-tuned variational method, whereas it took 554 seconds for well-tuned full MCMC. This effect becomes even larger as we increase the resolution of the grid to 64x64, which gives a better approximation to the underlying smooth function, as can be seen in Figure 3. It took 4.7 seconds to obtain one effective sample for the variational method, but the gold standard MCMC comparison was now computationally extremely challenging to run for even a single HMC step, because it requires linear algebra operations using O(N³) flops with N = 4096.

4.4 Multi-class Classification

To do multi-class classification with Gaussian processes, one latent function is defined for each of the classes. The functions are defined a priori independent, but covary a posteriori because of the likelihood.
Chai [18] studies a sparse variational approximation to the softmax multi-class likelihood restricted to a Gaussian approximation. Here, following [31, 32, 33], we use a robust-max likelihood. Given a vector f_n containing K latent functions evaluated at the point x_n, the probability that the label takes the integer value y_n is 1 − ϵ if y_n = argmax f_n, and ϵ/(K − 1) otherwise. As Girolami and Rogers [31] discuss, the ‘soft’ probit-like behaviour is recovered by adding a diagonal ‘nugget’ to the covariance function. In this work, ϵ was fixed to 0.001, though it would also be possible to treat this as a parameter for inference. The expected log-likelihood is E_{p(f_n | v, θ)}[log p(y_n | f_n)] = p log(1 − ϵ) + (1 − p) log(ϵ/(K − 1)), where p is the probability that the labelled function is largest, which is computable using one-dimensional quadrature. An efficient Cython implementation is contained in the supplement.

Toy example To investigate the proposed posterior approximation in the multivariate classification case, we turn to the toy data shown in Figure 4. We drew 750 data points from three Gaussian distributions. The synthetic data was chosen to include non-linear decision boundaries and ambiguous decision areas. Figure 4 shows that there are differences between the variational and sampling solutions, with the sampling solution being more conservative in general (the contours of 95% confidence are smaller). As one would expect, at the decision boundary there are strong correlations between the functions which could not be captured by the Gaussian approximation we are using. Note the movement of inducing points away from k-means and towards the decision boundaries. Efficiency of HMC and the Gibbs sampler is comparable. In the RBF case, ESS and TN-ESS for HMC are 1.9 and 3.8 · 10^−4, and for the Gibbs sampler 2.5 and 3.6 · 10^−4. In the ARD case, ESS and TN-ESS for HMC are 1.2 and 2.8 · 10^−3, and for the Gibbs sampler 5.1 and 6.8 · 10^−4.
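The robust-max expected log-likelihood is easy to check numerically. The sketch below is an illustration, not the paper's implementation: p is estimated by Monte Carlo over independent Gaussian marginals rather than by the one-dimensional quadrature used in the paper, and all values are made up for the example.

```python
import numpy as np

def expected_log_lik_robust_max(mu, var, y, eps=1e-3, n_samples=20000, seed=0):
    """E[log p(y_n | f_n)] under independent Gaussian marginals N(mu_k, var_k)
    for the robust-max likelihood:
    p(y | f) = 1 - eps if y == argmax(f), else eps / (K - 1)."""
    K = len(mu)
    rng = np.random.default_rng(seed)
    f = mu + np.sqrt(var) * rng.standard_normal((n_samples, K))
    p = np.mean(np.argmax(f, axis=1) == y)   # Monte Carlo estimate of Pr(f_y largest)
    return p * np.log(1 - eps) + (1 - p) * np.log(eps / (K - 1))

mu = np.array([5.0, 0.0, 0.0])               # class 0 is clearly preferred
var = np.array([0.1, 0.1, 0.1])
ell_right = expected_log_lik_robust_max(mu, var, y=0)
ell_wrong = expected_log_lik_robust_max(mu, var, y=1)
```

When the labelled function dominates, p approaches 1 and the expected log-likelihood approaches log(1 − ϵ); for a wrong label it falls toward log(ϵ/(K − 1)).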
For both cases, the Gibbs sampler struggles to reach convergence, even though the average acceptance rates are similar to those recommended for the two samplers individually.

MNIST The MNIST dataset is a well-studied benchmark with a defined training/test split. We used 500 inducing points, initialized from the training data using k-means. A Gaussian approximation was optimized using minibatch-based optimization over the means and variances of q(u), as well as the inducing points and covariance function parameters. The accuracy on the held-out data was 98.04%, significantly improving on previous approaches to classifying these digits using GP models.

Figure 4: A toy multiclass problem. Left: the Gaussian approximation; colored points show the simulated data, lines show posterior probability contours at 0.3, 0.95, 0.99. Inducing point positions are shown as black points. Middle: the free-form solution with 10,000 posterior samples. The free-form solution is more conservative (the contours are smaller). Right: posterior samples for v at the same position but across different latent functions. The posterior exhibits strong correlations and edges.

Figure 5: Left: three k-means centers used to initialize the inducing point positions. Center: the positions of the same inducing points after optimization. Right: difference.

For binary classification, Hensman et al. [20] reported that their Gaussian approximation resulted in movement of the inducing point positions toward the decision boundary. The same effect appears in the multivariate case, as shown in Figure 5, which shows three of the 500 inducing points used in the MNIST problem. The three examples were initialized close to the many six digits, and after optimization have moved close to other digits (five and four). The last example still appears to be a six, but has moved to a more ‘unusual’ six shape, supporting the function at another extremity. Similar effects are observed for all inducing-point digits.
Having optimized the inducing point positions with the approximate q(v) and an estimate for θ, we used these optimal inducing points to draw samples from v and θ. This did not result in an increase in accuracy, but did improve the log-density on the test set from −0.068 to −0.064. Evaluating the gradients for the sampler took approximately 0.4 seconds on a desktop machine, and we were easily able to draw 1000 samples. This dataset size has generally been viewed as challenging in the GP community, and consequently there are not many published results to compare with. One recent work [34] reports a 94.05% accuracy using variational inference and a GP latent variable model.

5 Discussion We have presented an inference scheme for general GP models. The scheme significantly reduces the computational cost whilst approaching exact Bayesian inference, making minimal assumptions about the form of the posterior. The improvements in accuracy in comparison with the Gaussian approximation of previous works have been demonstrated, as has the quality of the approximation to the hyper-parameter distribution. Our MCMC scheme was shown to be effective for several likelihoods, and we note that the automatic tuning of the sampling parameters worked well over hundreds of experiments. This paper shows that MCMC methods are feasible for inference in large GP problems, addressing the unfair stereotype of ‘slow’ MCMC.

Acknowledgments JH was funded by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1 and a Google Focussed Research award.

References [1] T. V. Nguyen and E. V. Bonilla. Automated variational inference for Gaussian process models. In NIPS, pages 1404–1412, 2014. [2] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Comp., 14(3):641–668, 2002. [3] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257–1264, 2005. [4] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes.
In AISTATS, pages 567–574, 2009. [5] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. Figueiras-Vidal. Sparse spectrum Gaussian process regression. JMLR, 11:1865–1881, 2010. [6] A. Solin and S. Särkkä. Hilbert space methods for reduced-rank Gaussian process regression. arXiv preprint 1401.5508, 2014. [7] A. G. Wilson, E. Gilboa, A. Nehorai, and J. P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In NIPS, pages 3626–3634, 2014. [8] S. Särkkä. Bayesian Filtering and Smoothing, volume 3. Cambridge University Press, 2013. [9] M. Filippone and R. Engler. Enabling scalable stochastic gradient-based inference for Gaussian processes by employing the Unbiased LInear System SolvEr (ULISSE). In ICML, 2015. [10] A. G. D. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani. On sparse variational methods and the KL divergence between stochastic processes. arXiv preprint 1504.07027, 2015. [11] I. Murray and R. P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. In NIPS, pages 1732–1740, 2010. [12] M. Filippone, M. Zhong, and M. Girolami. A comparative evaluation of stochastic-based inference methods for Gaussian process models. Mach. Learn., 93(1):93–114, 2013. [13] M. N. Gibbs and D. J. C. MacKay. Variational Gaussian process classifiers. IEEE Trans. Neural Netw., 11(6):1458–1464, 2000. [14] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Comp., 21(3):786–792, 2009. [15] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. JMLR, 6:1679–1704, 2005. [16] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. JMLR, 9:2035–2078, 2008. [17] E. Khan, S. Mohamed, and K. P. Murphy. Fast Bayesian inference for non-conjugate Gaussian process regression. In NIPS, pages 3140–3148, 2012. [18] K. M. A. Chai. Variational multinomial logit Gaussian process.
JMLR, 13(1):1745–1808, June 2012. [19] C. Lloyd, T. Gunter, M. A. Osborne, and S. J. Roberts. Variational inference for Gaussian process modulated Poisson processes. In ICML, 2015. [20] J. Hensman, A. Matthews, and Z. Ghahramani. Scalable variational Gaussian process classification. In AISTATS, pages 351–360, 2014. [21] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Trans. Pattern Anal. Mach. Intell., 20(12):1342–1351, 1998. [22] M. K. Titsias, N. Lawrence, and M. Rattray. Markov chain Monte Carlo algorithms for Gaussian processes. In D. Barber, A. T. Cemgil, and S. Chiappa, editors, Bayesian Time Series Models, 2011. [23] S. P. Smith. Differentiation of the Cholesky algorithm. J. Comp. Graph. Stat., 4(2):134–147, 1995. [24] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005. [25] Z. Wang, S. Mohamed, and N. De Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo. In ICML, volume 28, pages 1462–1470, 2013. [26] J. Vanhatalo and A. Vehtari. Sparse log Gaussian processes via MCMC for spatial epidemiology. In Gaussian Processes in Practice, volume 1, pages 73–89, 2007. [27] O. F. Christensen, G. O. Roberts, and J. S. Rosenthal. Scaling limits for the transient phase of local Metropolis-Hastings algorithms. JRSS:B, 67(2):253–268, 2005. [28] I. Murray, R. P. Adams, and D. J. C. MacKay. Elliptical slice sampling. In AISTATS, volume 9, 2010. [29] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Mach. Learn., 42(3):287–320, 2001. [30] J. Møller, A. R. Syversveen, and R. P. Waagepetersen. Log Gaussian Cox processes. Scand. Stat., 25(3):451–482, 1998. [31] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Comp., 18(8):1790–1817, 2006. [32] H. Kim and Z. Ghahramani. Bayesian Gaussian process classification with the EM-EP algorithm.
IEEE TPAMI, 28(12):1948–1959, 2006. [33] D. Hernández-Lobato, J. M. Hernández-Lobato, and P. Dupont. Robust multi-class Gaussian process classification. In NIPS, pages 280–288, 2011. [34] Y. Gal, M. van der Wilk, and C. E. Rasmussen. Distributed variational inference in sparse Gaussian process regression and latent variable models. In NIPS, 2014.
Action-Conditional Video Prediction using Deep Networks in Atari Games Junhyuk Oh Xiaoxiao Guo Honglak Lee Richard Lewis Satinder Singh University of Michigan, Ann Arbor, MI 48109, USA {junhyuk,guoxiao,honglak,rickl,baveja}@umich.edu Abstract Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future image-frames depend on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.

1 Introduction Over the years, deep learning approaches (see [5, 26] for a survey) have shown great success in many visual perception problems (e.g., [16, 7, 32, 9]). However, modeling videos (building a generative model) is still a very challenging problem because it often involves high-dimensional natural-scene data with complex temporal dynamics. Thus, recent studies have mostly focused on modeling simple video data, such as bouncing balls or small patches, where the next frame is highly predictable given the previous frames [29, 20, 19].
In many applications, however, future frames depend not only on previous frames but also on control or action variables. For example, the first-person-view in a vehicle is affected by wheel-steering and acceleration. The camera observation of a robot is similarly dependent on its movement and changes of its camera angle. More generally, in vision-based reinforcement learning (RL) problems, learning to predict future images conditioned on actions amounts to learning a model of the dynamics of the agent-environment interaction, an essential component of model-based approaches to RL. In this paper, we focus on Atari games from the Arcade Learning Environment (ALE) [1] as a source of challenging action-conditional video modeling problems. While not composed of natural scenes, frames in Atari games are high-dimensional, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional images conditioned by control inputs. This paper proposes, evaluates, and contrasts two spatio-temporal prediction architectures based on deep networks that incorporate action variables (See Figure 1). Our experimental results show that our architectures are able to generate realistic frames over 100-step action-conditional future frames without diverging in some Atari games. We show that the representations learned by our architectures 1) approximately capture natural similarity among actions, and 2) discover which objects are directly controlled by the agent’s actions and which are only indirectly influenced or not controlled. 
We evaluated the usefulness of our architectures for control in two ways: 1) by replacing emulator frames with predicted frames in a previously-learned model-free controller (DQN; DeepMind's state-of-the-art Deep-Q-Network for Atari Games [21]), and 2) by using the predicted frames to drive a more informed than random exploration strategy to improve a model-free controller (also DQN).

Figure 1: Proposed Encoding-Transformation-Decoding network architectures: (a) feedforward encoding; (b) recurrent encoding.

2 Related Work Video Prediction using Deep Networks. The problem of video prediction has led to a variety of architectures in deep learning. A recurrent temporal restricted Boltzmann machine (RTRBM) [29] was proposed to learn temporal correlations from sequential data by introducing recurrent connections in an RBM. A structured RTRBM (sRTRBM) [20] scaled up the RTRBM by learning dependency structures between observations and hidden variables from data. More recently, Michalski et al. [19] proposed a higher-order gated autoencoder that defines multiplicative interactions between consecutive frames and mapping units, and showed that the temporal prediction problem can be viewed as learning and inferring higher-order interactions between consecutive images. Srivastava et al. [28] applied a sequence-to-sequence learning framework [31] to the video domain, and showed that long short-term memory (LSTM) [12] networks are capable of generating video of bouncing handwritten digits. In contrast to these previous studies, this paper tackles problems where control variables affect temporal dynamics, and in addition scales up spatio-temporal prediction to larger-size images.

ALE: Combining Deep Learning and RL. Atari 2600 games provide challenging environments for RL because of high-dimensional visual observations, partial observability, and delayed rewards.
Approaches that combine deep learning and RL have made significant advances [21, 22, 11]. Specifically, DQN [21] combined Q-learning [36] with a convolutional neural network (CNN) and achieved state-of-the-art performance on many Atari games. Guo et al. [11] used the ALE-emulator for making action-conditional predictions with slow UCT [15], a Monte-Carlo tree search method, to generate training data for a fast-acting CNN, which outperformed DQN on several domains. Throughout this paper we will use DQN to refer to the architecture used in [21] (a more recent work [22] used a deeper CNN with more data to produce the currently best-performing Atari game players). Action-Conditional Predictive Model for RL. The idea of building a predictive model for vision-based RL problems was introduced by Schmidhuber and Huber [27]. They proposed a neural network that predicts the attention region given the previous frame and an attention-guiding action. More recently, Lenz et al. [17] proposed a recurrent neural network with multiplicative interactions that predicts the physical coordinate of a robot. Compared to this previous work, our work is evaluated on much higher-dimensional data with complex dependencies among observations. There have been a few attempts to learn from ALE data a transition-model that makes predictions of future frames. One line of work [3, 4] divides game images into patches and applies a Bayesian framework to predict patch-based observations. However, this approach assumes that neighboring patches are enough to predict the center patch, which is not true in Atari games because of many complex interactions. The evaluation in this prior work is 1-step prediction loss; in contrast, here we make and evaluate long-term predictions both for quality of pixels generated and for usefulness to control. 
3 Proposed Architectures and Training Method The goal of our architectures is to learn a function f : (x_{1:t}, a_t) → x_{t+1}, where x_t and a_t are the frame and action variables at time t, and x_{1:t} are the frames from time 1 to time t. Figure 1 shows our two architectures, each composed of encoding layers that extract spatio-temporal features from the input frames (§3.1), action-conditional transformation layers that transform the encoded features into a prediction of the next frame in high-level feature space by introducing action variables as additional input (§3.2), and finally decoding layers that map the predicted high-level features into pixels (§3.3). Our contributions are in the novel action-conditional deep convolutional architectures for high-dimensional, long-term prediction as well as in the novel use of the architectures in vision-based RL domains.

3.1 Two Variants: Feedforward Encoding and Recurrent Encoding Feedforward encoding takes a fixed history of previous frames as an input, which is concatenated through channels (Figure 1a), and stacked convolution layers extract spatio-temporal features directly from the concatenated frames. The encoded feature vector h^{enc}_t ∈ R^n at time t is:

h^{enc}_t = CNN(x_{t−m+1:t}),    (1)

where x_{t−m+1:t} ∈ R^{(m·c)×h×w} denotes m frames of h × w pixel images with c color channels. CNN is a mapping from raw pixels to a high-level feature vector using multiple convolution layers and a fully-connected layer at the end, each of which is followed by a non-linearity. This encoding can be viewed as early fusion [14] (other types of fusion, e.g., late fusion or 3D convolution [35], can also be applied to this architecture). Recurrent encoding takes one frame as an input for each time-step and extracts spatio-temporal features using an RNN in which the temporal dynamics are modeled by the recurrent layer on top of the high-level feature vector extracted by the convolution layers (Figure 1b).
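The channel-wise concatenation feeding Eq. (1) can be sketched in a few lines. This is a minimal numpy illustration of early fusion, not the paper's implementation; the frame sizes follow the data description later in the paper, and the helper name is my own.

```python
import numpy as np

def concat_frames(frames):
    """Early fusion: stack m frames of shape (c, h, w) into a single
    (m * c, h, w) input for the feedforward encoder's first convolution."""
    m, c, h, w = frames.shape
    return frames.reshape(m * c, h, w)

history = np.random.rand(4, 3, 210, 160)   # last m = 4 RGB frames
x = concat_frames(history)                 # shape (12, 210, 160)
```

The first convolution layer then sees all m · c channels at once, which is how its filters can pick up temporal correlations directly from pixels.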
In this paper, an LSTM without peephole connections is used for the recurrent layer, as follows:

[h^{enc}_t, c_t] = LSTM(CNN(x_t), h^{enc}_{t−1}, c_{t−1}),    (2)

where c_t ∈ R^n is a memory cell that retains information from a deep history of inputs. Intuitively, CNN(x_t) is given as input to the LSTM so that the LSTM captures temporal correlations from high-level spatial features.

3.2 Multiplicative Action-Conditional Transformation We use multiplicative interactions between the encoded feature vector and the control variables:

h^{dec}_{t,i} = Σ_{j,l} W_{ijl} h^{enc}_{t,j} a_{t,l} + b_i,    (3)

where h^{enc}_t ∈ R^n is an encoded feature, h^{dec}_t ∈ R^n is an action-transformed feature, a_t ∈ R^a is the action vector at time t, W ∈ R^{n×n×a} is a 3-way tensor weight, and b ∈ R^n is a bias. When the action a is represented using a one-hot vector, using a 3-way tensor is equivalent to using a different weight matrix for each action. This enables the architecture to model different transformations for different actions. The advantages of multiplicative interactions have been explored in image and text processing [33, 30, 18]. In practice the 3-way tensor is not scalable because of its large number of parameters. Thus, we approximate the tensor by factorizing it into three matrices as follows [33]:

h^{dec}_t = W^{dec} (W^{enc} h^{enc}_t ⊙ W^a a_t) + b,    (4)

where W^{dec} ∈ R^{n×f}, W^{enc} ∈ R^{f×n}, W^a ∈ R^{f×a}, b ∈ R^n, and f is the number of factors. Unlike the 3-way tensor, the above factorization shares the weights between different actions by mapping them to the size-f factors. This sharing may be desirable relative to the 3-way tensor when there are common temporal dynamics in the data across different actions (discussed further in §4.3).

3.3 Convolutional Decoding It has recently been shown that a CNN is capable of generating an image effectively using upsampling followed by convolution with a stride of 1 [8].
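Eq. (3) and the factorization of Eq. (4) are easy to verify numerically. The sketch below uses tiny random weights (all dimensions arbitrary) to show that, for a one-hot action, the 3-way tensor of Eq. (3) reduces to that action's own weight matrix, and to show the shape of the factored form of Eq. (4).

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, f = 6, 3, 4
W = rng.standard_normal((n, n, a))         # 3-way tensor of Eq. (3)
b = rng.standard_normal(n)
h_enc = rng.standard_normal(n)
action = np.eye(a)[2]                      # one-hot action vector

# Eq. (3): h_dec_i = sum_{j,l} W_ijl * h_enc_j * a_l + b_i
h_dec = np.einsum("ijl,j,l->i", W, h_enc, action) + b

# For a one-hot action, this is simply the action's own weight matrix:
h_dec_direct = W[:, :, 2] @ h_enc + b

# Eq. (4): factored approximation sharing weights across actions
W_dec = rng.standard_normal((n, f))
W_enc = rng.standard_normal((f, n))
W_a = rng.standard_normal((f, a))
h_dec_factored = W_dec @ ((W_enc @ h_enc) * (W_a @ action)) + b
```

The factored form replaces the n · n · a tensor parameters with f · (2n + a) parameters, which is where the scalability claimed in the text comes from.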
Similarly, we use the “inverse” operation of convolution, called deconvolution, which maps a 1 × 1 spatial region of the input to d × d using deconvolution kernels. The effect of s × s upsampling can be achieved without explicitly upsampling the feature map by using a stride of s. We found that this operation is more efficient than upsampling followed by convolution because of the smaller number of convolutions with larger stride. In the proposed architecture, the transformed feature vector h^{dec} is decoded into pixels as follows:

x̂_{t+1} = Deconv(Reshape(h^{dec})),    (5)

where Reshape is a fully-connected layer whose hidden units form a 3D feature map, and Deconv consists of multiple deconvolution layers, each of which is followed by a non-linearity except for the last deconvolution layer.

3.4 Curriculum Learning with Multi-Step Prediction It is almost inevitable for a predictive model to make noisy predictions of high-dimensional images. When the model is trained on a 1-step prediction objective, small prediction errors can compound through time. To alleviate this effect, we use a multi-step prediction objective. More specifically, given the training data D = {(x^{(i)}_1, a^{(i)}_1), ..., (x^{(i)}_{T_i}, a^{(i)}_{T_i})}_{i=1}^{N}, the model is trained to minimize the average squared error over K-step predictions as follows:

L_K(θ) = (1 / 2K) Σ_i Σ_t Σ_{k=1}^{K} ‖ x̂^{(i)}_{t+k} − x^{(i)}_{t+k} ‖²,    (6)

where x̂^{(i)}_{t+k} is a k-step future prediction. Intuitively, the network is repeatedly unrolled through K time steps by using its prediction as an input for the next time-step. The model is trained in multiple phases based on increasing K, as suggested by Michalski et al. [19]. In other words, the model is trained to predict short-term future frames and fine-tuned to predict longer-term future frames after the previous phase converges. We found that this curriculum learning [6] approach is necessary to stabilize the training.
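The K-step objective of Eq. (6) can be sketched with a toy one-step predictor that is unrolled on its own outputs. Everything here is a stand-in for illustration (a 1-D "frame" and a rolling dynamics rule), not the paper's network or data.

```python
import numpy as np

def k_step_loss(predict, frames, actions, K):
    """Eq. (6) for a single sequence: squared error over K-step rollouts,
    feeding each prediction back in as the input for the next step."""
    total = 0.0
    T = len(frames)
    for t in range(T - K):
        x = frames[t]
        for k in range(1, K + 1):
            x = predict(x, actions[t + k - 1])      # x-hat_{t+k}
            total += np.sum((x - frames[t + k]) ** 2)
    return total / (2 * K)

# toy dynamics: the (scalar) action rolls a 1-D 'frame'
frames_arr = np.array([np.roll(np.arange(8.0), t) for t in range(6)])
actions = [1] * 6
perfect = lambda x, a: np.roll(x, a)
loss_perfect = k_step_loss(perfect, frames_arr, actions, K=3)
loss_identity = k_step_loss(lambda x, a: x, frames_arr, actions, K=3)
```

A perfect model incurs zero loss however deep the rollout, while an identity model's errors accumulate at every unrolled step, which is exactly the compounding the curriculum is meant to control.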
Stochastic gradient descent with backpropagation through time (BPTT) is used to optimize the parameters of the network.

4 Experiments In the experiments that follow, we have the following goals for our two architectures. 1) To evaluate the predicted frames in two ways: qualitatively evaluating the generated video, and quantitatively evaluating the pixel-based squared error. 2) To evaluate the usefulness of predicted frames for control in two ways: by replacing the emulator's frames with predicted frames for use by DQN, and by using the predictions to improve exploration in DQN. 3) To analyze the representations learned by our architectures. We begin by describing the details of the data, the model architecture, and the baselines.

Data and Preprocessing. We used our replication of DQN to generate game-play video datasets using an ϵ-greedy policy with ϵ = 0.3, i.e. DQN is forced to choose a random action with 30% probability. For each game, the dataset consists of about 500,000 training frames and 50,000 test frames with actions chosen by DQN. Following DQN, actions are chosen once every 4 frames, which reduces the video from 60 fps to 15 fps. The number of actions available in games varies from 3 to 18, and they are represented as one-hot vectors. We used full-resolution RGB images (210 × 160) and preprocessed the images by subtracting mean pixel values and dividing each pixel value by 255.

Network Architecture. Across all game domains, we use the same network architecture as follows. The encoding layers consist of 4 convolution layers and one fully-connected layer with 2048 hidden units. The convolution layers use 64 (8 × 8), 128 (6 × 6), 128 (6 × 6), and 128 (4 × 4) filters with a stride of 2. Every layer is followed by a rectified linear function [23]. In the recurrent encoding network, an LSTM layer with 2048 hidden units is added on top of the fully-connected layer. The number of factors in the transformation layer is 2048.
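The preprocessing just described (mean-pixel subtraction, scaling by 255, one-hot action encoding) can be sketched as follows. The shapes and action count are illustrative stand-ins, and the helper names are my own.

```python
import numpy as np

def preprocess_frames(frames, mean_pixel):
    """Subtract per-pixel mean values and divide by 255."""
    return (frames.astype(np.float32) - mean_pixel) / 255.0

def one_hot_actions(actions, num_actions):
    """Encode integer action indices as one-hot vectors."""
    return np.eye(num_actions, dtype=np.float32)[actions]

# two raw 210 x 160 RGB frames (channels-first) as uint8 pixel values
frames = np.random.randint(0, 256, size=(2, 3, 210, 160), dtype=np.uint8)
mean_pixel = frames.mean(axis=0)           # per-pixel mean over the dataset
x = preprocess_frames(frames, mean_pixel)  # roughly in [-1, 1]
a = one_hot_actions(np.array([0, 2]), num_actions=3)
```

Centering and scaling keeps the network inputs in a small symmetric range, and the one-hot vectors are what the multiplicative transformation layer consumes.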
The decoding layers consist of one fully-connected layer with 11264 (= 128 × 11 × 8) hidden units followed by 4 deconvolution layers. The deconvolution layers use 128 (4 × 4), 128 (6 × 6), 128 (6 × 6), and 3 (8 × 8) filters with a stride of 2. For the feedforward encoding network, the last 4 frames are given as an input for each time-step. The recurrent encoding network takes one frame for each time-step, but it is unrolled through the last 11 frames to initialize the LSTM hidden units before making a prediction. Our implementation is based on the Caffe toolbox [13].

Details of Training. We use the curriculum learning scheme above with three phases of increasing prediction-step objectives of 1, 3 and 5 steps, and learning rates of 10^−4, 10^−5, and 10^−5, respectively. RMSProp [34, 10] is used with momentum of 0.9, (squared) gradient momentum of 0.95, and min squared gradient of 0.01. The batch sizes for the three training phases are 32, 8, and 8 for the feedforward encoding network and 4, 4, and 4 for the recurrent encoding network, respectively. When the recurrent encoding network is trained on the 1-step prediction objective, the network is unrolled through 20 steps and predicts the last 10 frames by taking ground-truth images as input. Gradients are clipped at [−0.1, 0.1] before the non-linearity of each LSTM gate, as suggested by [10].

Two Baselines for Comparison. The first baseline is a multi-layer perceptron (MLP) that takes the last frame as input and has 4 hidden layers with 400, 2048, 2048, and 400 units. The action input is concatenated to the second hidden layer. This baseline uses approximately the same number of parameters as the recurrent encoding model. The second baseline, no-action feedforward (or naFf), is the same as the feedforward encoding model (Figure 1a) except that the transformation layer consists of one fully-connected layer that does not get the action as input.
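The spatial sizes produced by the stride-2 deconvolution layers follow the usual transposed-convolution arithmetic. The sketch below omits padding for simplicity (the actual implementation's padding will change the exact sizes; without padding the four layers slightly overshoot the 210 × 160 target).

```python
def deconv_out(size, kernel, stride, pad=0):
    """Output size of a transposed convolution (deconvolution):
    out = (in - 1) * stride + kernel - 2 * pad."""
    return (size - 1) * stride + kernel - 2 * pad

# the 11 x 8 reshaped feature map grown by the four stride-2 deconv layers
h, w = 11, 8
for k in (4, 6, 6, 8):
    h, w = deconv_out(h, k, 2), deconv_out(w, k, 2)
```

With zero padding this gives 222 × 174 after the four layers, which shows why a small amount of padding is needed to land exactly on the 210 × 160 output resolution.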
Figure 2: Example of predictions over 250 steps in Freeway. The ‘Step’ and ‘Action’ columns show the number of prediction steps and the actions taken respectively. The white boxes indicate the object controlled by the agent. From prediction step 256 to 257 the controlled object crosses the top boundary and reappears at the bottom; this non-linear shift is predicted by our architectures and is not predicted by MLP and naFf. The horizontal movements of the uncontrolled objects are predicted by our architectures and naFf but not by MLP.

Figure 3: Mean squared error over 100-step predictions for (a) Seaquest, (b) Space Invaders, (c) Freeway, (d) QBert, and (e) Ms Pacman.

4.1 Evaluation of Predicted Frames Qualitative Evaluation: Prediction video. The prediction videos of our models and baselines are available in the supplementary material and at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/action-conditional-video-prediction. As seen in the videos, the proposed models make qualitatively reasonable predictions over 30–500 steps depending on the game. In all games, the MLP baseline quickly diverges, and the naFf baseline fails to predict the controlled object. An example of long-term predictions is illustrated in Figure 2. We observed that both of our models predict complex local translations well, such as the movement of vehicles and the controlled object. They can predict interactions between objects, such as the collision of two objects. Since our architectures effectively extract hierarchical features using the CNN, they are able to make predictions that require a global context.
For example, in Figure 2, the model predicts the sudden change of the location of the controlled object (from the top to the bottom) at step 257. However, both of our models have difficulty in accurately predicting small objects, such as bullets in Space Invaders. The reason is that the squared error signal is small when the model fails to predict small objects during training. Another difficulty is in handling stochasticity. In Seaquest, e.g., new objects appear from the left side or right side randomly, and so are hard to predict. Although our models do generate new objects with reasonable shapes and movements (e.g., after appearing they move as in the true frames), the generated frames do not necessarily match the ground truth.

Quantitative Evaluation: Squared Prediction Error. Mean squared error over 100-step predictions is reported in Figure 3. Our predictive models outperform the two baselines in all domains. However, the gap between our predictive models and the naFf baseline is not large except for Seaquest. This is due to the fact that the object controlled by the action occupies only a small part of the image.

Figure 4: Comparison between the two encoding models (feedforward and recurrent) on (a) Ms Pacman (28 × 28 crop) and (b) Space Invaders (90 × 90 crop). (a) The controlled object is moving along a horizontal corridor. As the recurrent encoding model makes a small translation error at the 4th frame, the true position of the object is in the crossroad while the predicted position is still in the corridor. The (true) object then moves upward, which is not possible from the predicted position, and so the predicted object keeps moving right. This is less likely to happen with feedforward encoding because its position prediction is more accurate. (b) The objects move down after staying at the same location for the first five steps.
The feedforward encoding model fails to predict this movement because it only gets the last four frames as input, while the recurrent model predicts this downward movement more correctly.

Figure 5: Game play performance using the predictive model as an emulator, for (a) Seaquest, (b) Space Invaders, (c) Freeway, (d) QBert, and (e) Ms Pacman. ‘Emulator’ and ‘Rand’ correspond to the performance of DQN with true frames and random play respectively. The x-axis is the number of steps of prediction before re-initialization. The y-axis is the average game score measured from 30 plays.

Qualitative Analysis of Relative Strengths and Weaknesses of Feedforward and Recurrent Encoding. We hypothesize that feedforward encoding can model more precise spatial transformations because its convolutional filters can learn temporal correlations directly from pixels in the concatenated frames. In contrast, convolutional filters in recurrent encoding can learn only spatial features from the one-frame input, and the temporal context has to be captured by the recurrent layer on top of the high-level CNN features, without localized information. On the other hand, recurrent encoding is potentially better for modeling arbitrarily long-term dependencies, whereas feedforward encoding is not suitable for long-term dependencies because it requires more memory and parameters as more frames are concatenated into the input. As evidence, in Figure 4a we show a case where feedforward encoding is better at predicting the precise movement of the controlled object, while recurrent encoding makes a 1-2 pixel translation error. This small error leads to entirely different predicted frames after a few steps.
Since the feedforward and recurrent architectures are identical except for the encoding part, we conjecture that this result is due to the failure of precise spatio-temporal encoding in recurrent encoding. On the other hand, recurrent encoding is better at predicting when the enemies move in Space Invaders (Figure 4b). This is due to the fact that the enemies move after 9 steps, which is hard for feedforward encoding to predict because it takes only the last four frames as input. We observed similar results showing that feedforward encoding cannot handle long-term dependencies in other games.

4.2 Evaluating the Usefulness of Predictions for Control

Replacing Real Frames with Predicted Frames as Input to DQN. To evaluate how useful the predictions are for playing the games, we implement an evaluation method that uses the predictive model to replace the game emulator. More specifically, a DQN controller that takes the last four frames is first pre-trained using real frames and then used to play the games based on an ϵ-greedy policy with ϵ = 0.05, where the input frames are generated by our predictive model instead of the game emulator. To evaluate how the depth of predictions influences the quality of control, we re-initialize the predictions using the true last frames after every n steps of prediction, for 1 ≤ n ≤ 100. Note that the DQN controller never takes a true frame, just the outputs of our predictive models. The results are shown in Figure 5. Unsurprisingly, replacing real frames with predicted frames reduces the score. However, in all the games, using the model to repeatedly predict only a few time steps yields a score very close to that of using real frames. Our two architectures produce much better scores than the two baselines for deep predictions than the relatively small differences in squared error would suggest. The likely cause of this is that our models are better able to predict the movement of the controlled object relative to the baselines, even though such an ability may not always lead to better squared error. In three out of the five games the score remains much better than the score of random play even when using 100 steps of prediction.

Table 1: Average game score of DQN over 100 plays with standard error. The first row and the second row show the performance of our DQN replication with different exploration strategies.

Model                        Seaquest     S. Invaders  Freeway     QBert       Ms Pacman
DQN - Random exploration     13119 (538)  698 (20)     30.9 (0.2)  3876 (106)  2281 (53)
DQN - Informed exploration   13265 (577)  681 (23)     32.2 (0.2)  8238 (498)  2522 (57)

Figure 6: Comparison between two exploration methods on Ms Pacman: (a) random exploration, (b) informed exploration. Each heat map shows the trajectories of the controlled object measured over 2500 steps for the corresponding method.

Figure 7: Cosine similarity between every pair of action factors (see text for details).

Improving DQN via Informed Exploration. To learn control in an RL domain, exploration of actions and states is necessary because without it the agent can get stuck in a bad sub-optimal policy. In DQN, the CNN-based agent was trained using an ϵ-greedy policy in which the agent chooses either a greedy action or a random action by flipping a coin with probability ϵ. Such random exploration is a basic strategy that produces sufficient exploration, but it can be slower than more informed exploration strategies. Thus, we propose an informed exploration strategy that follows the ϵ-greedy policy, but chooses exploratory actions that lead to the frame that has been visited least often (in the last d time steps), rather than random actions. Implementing this strategy requires a predictive model because the next frame for each possible action has to be considered.
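To make the least-visited criterion concrete, the following is a minimal NumPy sketch of the kernel-based visit-frequency estimate that is formalized in Equation (7); the function names and the `model.predict`/`model.actions` interface are our own illustration, not the authors' code.

```python
import numpy as np

def frame_kernel(x, y, delta=0.0, sigma=100.0):
    # Per-pixel squared differences are shifted by the threshold delta,
    # clipped to [0, 1], summed, and passed through exp(-(.)/sigma).
    d2 = np.clip((x - y) ** 2 - delta, 0.0, 1.0)
    return np.exp(-d2.sum() / sigma)

def visit_frequency(predicted, memory, delta=0.0, sigma=100.0):
    # n_D(x): summed similarity of a predicted frame to the frames
    # stored in the trajectory memory D.
    return sum(frame_kernel(predicted, m, delta, sigma) for m in memory)

def informed_exploratory_action(model, state, memory):
    # Hypothetical model interface: pick the action whose predicted next
    # frame has been visited least often.
    return min(model.actions,
               key=lambda a: visit_frequency(model.predict(state, a), memory))
```

With this sketch, an exploratory step would score each candidate action by the visit frequency of its predicted next frame and pick the minimum, as described above.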
The method works as follows. The most recent d frames are stored in a trajectory memory, denoted D = {x^(i)}_{i=1}^d. The predictive model is used to get the next frame x^(a) for every action a. We estimate the visit frequency of every predicted frame by summing the similarity between the predicted frame and the d most recent frames stored in the trajectory memory, using a Gaussian kernel, as follows:

n_D(x^(a)) = Σ_{i=1}^{d} k(x^(a), x^(i)),  k(x, y) = exp(−Σ_j min(max((x_j − y_j)² − δ, 0), 1)/σ),   (7)

where δ is a threshold and σ is a kernel bandwidth. The trajectory memory size is 200 for QBert and 20 for the other games, δ = 0 for Freeway and 50 for the others, and σ = 100 for all games. For computational efficiency, we trained a new feedforward encoding network on 84 × 84 gray-scaled images as they are used as input for DQN. The details of the network architecture are provided in the supplementary material. Table 1 summarizes the results. The informed exploration improves DQN's performance using our predictive model in three of five games, with the most significant improvement in QBert. Figure 6 shows how the informed exploration strategy improves the initial experience of DQN.

4.3 Analysis of Learned Representations

Similarity among Action Representations. In the factored multiplicative interactions, every action is linearly transformed to f factors (W^a a in Equation 4). In Figure 7 we present the cosine similarity between every pair of action factors after training in Seaquest. 'N' and 'F' correspond to 'no-operation' and 'fire'. Arrows correspond to movements with (black) or without (white) 'fire'. There are positive correlations between actions that have the same movement directions (e.g., 'up' and 'up+fire'), and negative correlations between actions that have opposing directions. These relationships are reasonable and are discovered automatically in the course of learning good predictions.

Distinguishing Controlled and Uncontrolled Objects is itself a hard and interesting problem.
Bellemare et al. [2] proposed a framework to learn contingent regions of an image affected by agent action, suggesting that contingency awareness is useful for model-free agents. We show that our architectures implicitly learn contingent regions as they learn to predict the entire image.

Figure 8: Distinguishing controlled and uncontrolled objects; panels show the previous frame, the next frame, and predictions. The 'Action' image shows a prediction given only the learned action factors with high variance; the 'Non-Action' image shows a prediction given only the low-variance factors.

In our architectures, a factor f_i = (W^a_{i,:})^⊤ a with higher variance measured over all possible actions, Var(f_i) = E_a[(f_i − E_a[f_i])²], is more likely to transform an image differently depending on actions, and so we assume such factors are responsible for transforming the parts of the image related to actions. We therefore collected the high-variance (referred to as "highvar") factors from the model trained on Seaquest (around 40% of factors), and collected the remaining factors into a low-variance ("lowvar") subset. Given an image and an action, we did two controlled forward propagations: giving only highvar factors (by setting the other factors to zeros) and vice versa. The results are visualized as 'Action' and 'Non-Action' in Figure 8. Interestingly, given only highvar factors (Action), the model sharply predicts the movement of the object controlled by actions, while the other parts are mean pixel values. In contrast, given only lowvar factors (Non-Action), the model predicts the movement of the other objects and the background (e.g., oxygen), and the controlled object stays at its previous location. This result implies that our model learns to distinguish between controlled objects and uncontrolled objects and to transform them using disentangled representations (see [25, 24, 37] for related work on disentangling factors of variation).
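The variance-based split described above can be sketched in a few lines; this is our own illustration of the idea (the shape convention for the factor matrix and the quantile threshold are assumptions, not taken from the paper).

```python
import numpy as np

def split_factors_by_variance(Wa, quantile=0.6):
    # Wa has shape (num_factors, num_actions); for a one-hot action a,
    # factor i is f_i = Wa[i] @ a, so row i lists f_i across all actions.
    var = Wa.var(axis=1)                 # Var_a(f_i), uniform over actions
    thresh = np.quantile(var, quantile)
    high = np.flatnonzero(var > thresh)  # "highvar" (action) factors
    low = np.flatnonzero(var <= thresh)  # "lowvar" (non-action) factors
    return high, low

def masked_factors(Wa, action, keep):
    # Controlled forward pass: keep the selected factors, zero out the rest.
    f = Wa @ action
    mask = np.zeros_like(f)
    mask[keep] = 1.0
    return f * mask
```

A controlled forward propagation as in Figure 8 would then feed `masked_factors(Wa, a, high)` (or `low`) into the decoding part of the network in place of the full factor vector.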
5 Conclusion

This paper introduced two different novel deep architectures that predict future frames conditioned on actions, and showed qualitatively and quantitatively that they are able to predict visually-realistic and useful-for-control frames over 100-step futures on several Atari game domains. To our knowledge, this is the first paper to show good deep predictions in Atari games. Since our architectures are domain independent, we expect that they will generalize to many vision-based RL problems. In future work we will learn models that predict future reward in addition to predicting future frames and evaluate the performance of our architectures in model-based RL.

Acknowledgments. This work was supported by NSF grant IIS-1526059, Bosch Research, and ONR grant N00014-13-1-0762. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.

References
[1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
[2] M. G. Bellemare, J. Veness, and M. Bowling. Investigating contingency awareness using Atari 2600 games. In AAAI, 2012.
[3] M. G. Bellemare, J. Veness, and M. Bowling. Bayesian learning of recursively factored environments. In ICML, 2013.
[4] M. G. Bellemare, J. Veness, and E. Talvitie. Skip context tree switching. In ICML, 2014.
[5] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[6] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[7] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In CVPR, 2012.
[8] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[10] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, 2014.
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
[15] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, 2006.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[17] I. Lenz, R. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In RSS, 2015.
[18] R. Memisevic. Learning to relate images. IEEE TPAMI, 35(8):1829-1846, 2013.
[19] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent grammar cells. In NIPS, 2014.
[20] R. Mittelman, B. Kuipers, S. Savarese, and H. Lee. Structured recurrent temporal restricted Boltzmann machines. In ICML, 2014.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[22] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[23] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[24] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold interaction. In ICML, 2014.
[25] S. Rifai, Y. Bengio, A. Courville, P. Vincent, and M. Mirza. Disentangling factors of variation for facial expression recognition. In ECCV, 2012.
[26] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.
[27] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2:125-134, 1991.
[28] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[29] I. Sutskever, G. E. Hinton, and G. W. Taylor. The recurrent temporal restricted Boltzmann machine. In NIPS, 2009.
[30] I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks. In ICML, 2011.
[31] I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[33] G. W. Taylor and G. E. Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In ICML, 2009.
[34] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. Coursera, 2012.
[35] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.
[36] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.
[37] J. Yang, S. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
Unified View of Matrix Completion under General Structural Constraints

Suriya Gunasekar, UT at Austin, USA, suriya@utexas.edu
Arindam Banerjee, UMN Twin Cities, USA, banerjee@cs.umn.edu
Joydeep Ghosh, UT at Austin, USA, ghosh@ece.utexas.edu

Abstract

Matrix completion problems have been widely studied under special low dimensional structures such as low rank or structure induced by decomposable norms. In this paper, we present a unified analysis of matrix completion under general low-dimensional structural constraints induced by any norm regularization. We consider two estimators for the general problem of structured matrix completion, and provide unified upper bounds on the sample complexity and the estimation error. Our analysis relies on generic chaining, and we establish two intermediate results of independent interest: (a) in characterizing the size or complexity of low dimensional subsets in high dimensional ambient space, a certain partial complexity measure encountered in the analysis of matrix completion problems is characterized in terms of the well understood complexity measure of Gaussian widths, and (b) it is shown that a form of restricted strong convexity holds for matrix completion problems under general norm regularization. Further, we provide several non-trivial examples of structures included in our framework, notably including the recently proposed spectral k-support norm.

1 Introduction

The task of completing the missing entries of a matrix from an incomplete subset of (potentially noisy) entries is encountered in many applications, including recommendation systems, data imputation, covariance matrix estimation, and sensor localization, among others. Ill-posed high dimensional estimation problems, where the number of parameters to be estimated is much higher than the number of observations, have been extensively studied in the recent literature.
However, matrix completion problems are particularly ill-posed: not only are the observations limited (high dimensional), but the measurements are also extremely localized, i.e., the observations consist of individual matrix entries. The localized measurement model, in contrast to random Gaussian or sub-Gaussian measurements, poses additional complications in general high dimensional estimation. For well-posed estimation in high dimensional problems, including matrix completion, it is imperative that low dimensional structural constraints are imposed on the target. For matrix completion, the special case of the low-rank constraint has been widely studied. Several existing works propose tractable estimators with near-optimal recovery guarantees for (approximate) low-rank matrix completion [8, 7, 28, 26, 18, 19, 22, 11, 20, 21]. A recent work [16] addresses the extension to structures with decomposable norm regularization. However, the scope of matrix completion extends to low dimensional structures far beyond simple low-rankness or decomposable norm structures. In this paper, we consider a unified statistical analysis of matrix completion under a general set of low dimensional structures that are induced by any suitable norm regularization. We provide a statistical analysis of two generalized matrix completion estimators: the constrained norm minimizer and the generalized matrix Dantzig selector (Section 2.2). The main results in the paper (Theorems 1a-1b) provide unified upper bounds on the sample complexity and estimation error of these estimators for matrix completion under any norm regularization. Existing results on matrix completion with low rank or other decomposable structures can be obtained as special cases of our general results. Our unified analysis of sample complexity is motivated by recent work on high dimensional estimation using global (sub-)Gaussian measurements [10, 1, 35, 3, 37, 5].
A key ingredient in the recovery analysis of high dimensional estimation involves establishing a certain variation of the Restricted Isometry Property (RIP) [9] for the measurement operator. It has been shown that such properties are satisfied by Gaussian and sub-Gaussian measurement operators with high probability. Unfortunately, as has been noted before by Candès et al. [8], owing to the highly localized measurements, such conditions are not satisfied in the matrix completion problem, and the existing results based on global (sub-)Gaussian measurements are not directly applicable. In fact, a key question we consider is: given the radically limited measurement model in matrix completion, by how much does the sample complexity of estimation increase beyond the known sample complexity bounds for global (sub-)Gaussian measurements? Our results upper bound the sample complexity for matrix completion at only a log d factor larger than that of global (sub-)Gaussian measurements [10, 3, 5]. While this result is known for low rank matrix completion using nuclear norm minimization [26, 20], with a careful use of generic chaining we show that the log d factor suffices for structures induced by any norm! As a key intermediate result, we show that a useful form of restricted strong convexity (RSC) [27] holds for the localized measurements encountered in matrix completion under general norm regularized structures. This result substantially generalizes existing RSC results for matrix completion under the special cases of nuclear norm and decomposable norm regularization [26, 16]. For our analysis, we use tools from generic chaining [33] to characterize the main results (Theorems 1a-1b) in terms of the Gaussian width (Definition 1) of certain error sets. Gaussian widths provide a powerful geometric characterization for quantifying the complexity of a structured low dimensional subset in a high dimensional ambient space.
Such a unified characterization in terms of the Gaussian width has the advantage that numerous tools have been developed in the literature for bounding the Gaussian width of structured sets, and this literature can be readily leveraged to derive new recovery guarantees for matrix completion under suitable structural constraints (Appendix D.2). In addition to the theoretical elegance of such a unified framework, identifying useful but potentially non-decomposable low dimensional structures is of significant practical interest. The broad class of structures enforced through symmetric convex bodies and symmetric atomic sets [10] can be analyzed under this paradigm (Section 2.1). Such specialized structures can potentially capture the constraints in certain applications better than simple low-rankness. In particular, we discuss in detail a non-trivial example: the spectral k-support norm introduced by McDonald et al. [25].

To summarize, the key contributions of the paper are:
• Theorems 1a-1b provide unified upper bounds on the sample complexity and estimation error for matrix completion estimators using general norm regularization: a substantial generalization of the existing results on matrix completion under structural constraints.
• Theorem 1a is applied to derive statistical results for the special case of matrix completion under spectral k-support norm regularization.
• An intermediate result, Theorem 5, shows that under any norm regularization, a form of Restricted Strong Convexity (RSC) holds in the matrix completion setting with extremely localized measurements. Further, a certain partial measure of the complexity of a set is encountered in the matrix completion analysis (12). Another intermediate result, Theorem 2, provides bounds on this partial complexity measure in terms of the better understood complexity measure of Gaussian width. These intermediate results are of independent interest beyond the scope of the paper.
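As a quick illustration of how the Gaussian width w_G(S) = E sup_{X∈S} ⟨X, G⟩ (formally Definition 1, in the Notations below) quantifies the size of a structured set, the following Monte Carlo sketch (our own illustration, not from the paper) estimates w_G for the set of unit-Frobenius-norm rank-one matrices, where the supremum is exactly the largest singular value of G; the estimate sits below the classical bound √d₁ + √d₂.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, trials = 40, 40, 200

# For S = {X : rank(X) = 1, ||X||_F = 1}, sup_{X in S} <X, G> = sigma_max(G),
# so the Gaussian width can be estimated by averaging top singular values.
top_sv = [np.linalg.svd(rng.standard_normal((d1, d2)), compute_uv=False)[0]
          for _ in range(trials)]
width_est = float(np.mean(top_sv))

# Classical comparison: E[sigma_max(G)] <= sqrt(d1) + sqrt(d2).
print(round(width_est, 2), round(np.sqrt(d1) + np.sqrt(d2), 2))
```

The same Monte Carlo recipe applies to any set for which the support function sup_{X∈S} ⟨X, G⟩ can be evaluated, e.g. via a dual norm.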
Notations and Preliminaries. Indexes i, j are typically used to index the rows and columns of matrices, respectively, and index k is used to index the observations. e_i, e_j, e_k, etc. denote the standard basis vectors in appropriate dimensions (for brevity we omit the explicit dependence on dimension unless necessary). Notations G and g are used to denote a matrix and a vector, respectively, of independent standard Gaussian random variables. P(.) and E(.) denote the probability of an event and the expectation of a random variable, respectively. Given an integer N, let [N] = {1, 2, ..., N}. The Euclidean norm in a vector space is denoted ∥x∥₂ = √⟨x, x⟩. For a matrix X with singular values σ₁ ≥ σ₂ ≥ ..., common norms include the Frobenius norm ∥X∥_F = √(Σ_i σ_i²), the nuclear norm ∥X∥_* = Σ_i σ_i, the spectral norm ∥X∥_op = σ₁, and the maximum norm ∥X∥_∞ = max_{ij} |X_ij|. Also let S^{d₁d₂−1} = {X ∈ R^{d₁×d₂} : ∥X∥_F = 1} and B^{d₁d₂} = {X ∈ R^{d₁×d₂} : ∥X∥_F ≤ 1}. Finally, given a norm ∥.∥ defined on a vector space V, its dual norm is given by ∥X∥* = sup_{∥Y∥≤1} ⟨X, Y⟩.

Definition 1 (Gaussian Width). The Gaussian width of a set S ⊂ R^{d₁×d₂} is a widely studied measure of the complexity of a subset in a high dimensional ambient space and is given by:

w_G(S) = E_G sup_{X∈S} ⟨X, G⟩,   (1)

where recall that G is a matrix of independent standard Gaussian random variables. Some key results on the Gaussian width are discussed in Appendix D.2.

Definition 2 (Sub-Gaussian Random Variable [36]). The sub-Gaussian norm of a random variable X is given by ∥X∥_{Ψ₂} = sup_{p≥1} p^{−1/2} (E|X|^p)^{1/p}. X is said to be b-sub-Gaussian if ∥X∥_{Ψ₂} ≤ b. Equivalently, X is sub-Gaussian if one of the following conditions is satisfied for some constants k₁, k₂, and k₃ [Lemma 5.5 of [36]]: (1) ∀p ≥ 1, (E|X|^p)^{1/p} ≤ b√p; (2) ∀t > 0, P(|X| > t) ≤ e^{1 − t²/(k₁²b²)}; (3) E[e^{k₂X²/b²}] ≤ e; or (4) if EX = 0, then ∀s > 0, E[e^{sX}] ≤ e^{k₃s²b²/2}.

Definition 3 (Restricted Strong Convexity (RSC)).
A function L is said to satisfy Restricted Strong Convexity (RSC) at Θ with respect to a subset S if, for some RSC parameter κ_L > 0,

∀∆ ∈ S,  L(Θ + ∆) − L(Θ) − ⟨∇L(Θ), ∆⟩ ≥ κ_L ∥∆∥_F².   (2)

Definition 4 (Spikiness Ratio [26]). For X ∈ R^{d₁×d₂}, a measure of the "spikiness" is given by:

α_sp(X) = √(d₁d₂) ∥X∥_∞ / ∥X∥_F.   (3)

Definition 5 (Norm Compatibility Constant [27]). The compatibility constant of a norm R : V → R under a closed convex cone C ⊂ V is defined as follows:

Ψ_R(C) = sup_{X∈C\{0}} R(X) / ∥X∥_F.   (4)

2 Structured Matrix Completion

Denote the ground truth target matrix as Θ* ∈ R^{d₁×d₂}; let d = d₁ + d₂. In noisy matrix completion, the observations consist of individual entries of Θ* observed through an additive noise channel.

Sub-Gaussian Noise: Given a list of independently sampled standard basis matrices Ω = {E_k = e_{i_k} e_{j_k}^⊤ : i_k ∈ [d₁], j_k ∈ [d₂]}, with potential duplicates, the observations (y_k) ∈ R^{|Ω|} are given by:

y_k = ⟨Θ*, E_k⟩ + ξη_k, for k = 1, 2, ..., |Ω|,   (5)

where η ∈ R^{|Ω|} is a noise vector of independent sub-Gaussian random variables with E[η_k] = 0 and ∥η_k∥_{Ψ₂} = 1 (recall ∥.∥_{Ψ₂} from Definition 2), and ξ² is the scaled variance of the noise per observation (note Var(η_k) ≤ constant). Also, without loss of generality, assume the normalization ∥Θ*∥_F = 1.

Uniform Sampling: Assume that the entries in Ω are drawn independently and uniformly:

E_k ∼ uniform{e_i e_j^⊤ : i ∈ [d₁], j ∈ [d₂]}, for E_k ∈ Ω.   (6)

Given Ω, define the linear operator P_Ω : R^{d₁×d₂} → R^Ω as follows (e_k ∈ R^{|Ω|}):

P_Ω(X) = Σ_{k=1}^{|Ω|} ⟨X, E_k⟩ e_k.   (7)

Structural Constraints: For matrix completion with |Ω| < d₁d₂, low dimensional structural constraints on Θ* are necessary for well-posedness. We consider a generalized constraint setting wherein, for some low-dimensional model space M, Θ* ∈ M is enforced through a surrogate norm regularizer R(.). We make no further assumptions on R other than it being a norm in R^{d₁×d₂}.
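A small sketch (our own illustration) of the sampling operator P_Ω from Equation (7) and its adjoint P*_Ω, which appears in the Dantzig selector (10) below; since Ω may contain duplicate indices, the adjoint must accumulate contributions rather than overwrite them.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 5, 7, 30

# Uniformly sampled index list Omega (duplicates allowed), as in Equation (6).
rows = rng.integers(0, d1, size=n)
cols = rng.integers(0, d2, size=n)

def P_Omega(X):
    # P_Omega : R^{d1 x d2} -> R^{|Omega|}, Equation (7).
    return X[rows, cols]

def P_Omega_adj(y):
    # Adjoint P*_Omega: scatter-add each observation back to its entry.
    M = np.zeros((d1, d2))
    np.add.at(M, (rows, cols), y)  # duplicates must accumulate
    return M

# Adjoint identity <P_Omega(X), y> = <X, P*_Omega(y)>.
X = rng.standard_normal((d1, d2))
y = rng.standard_normal(n)
assert np.isclose(P_Omega(X) @ y, np.sum(X * P_Omega_adj(y)))
```

Note the use of `np.add.at` rather than plain fancy-index assignment: the latter would silently keep only one of several duplicate observations, breaking the adjoint identity.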
Low Spikiness: In matrix completion under the uniform sampling model, further restrictions on Θ* (beyond the low dimensional structure) are required to ensure that the most informative entries of the matrix are observed with high probability [8]. Early work assumed stringent matrix incoherence conditions for low-rank completion to preclude such matrices [7, 18, 19], while more recent work [11, 26] relaxes these assumptions to a more intuitive restriction on the spikiness ratio, defined in (3). However, under this relaxation only approximate recovery is typically guaranteed in the low-noise regime, as opposed to near exact recovery under incoherence assumptions.

Assumption 1 (Spikiness Ratio). There exists α* > 0 such that ∥Θ*∥_∞ = α_sp(Θ*) ∥Θ*∥_F / √(d₁d₂) ≤ α* / √(d₁d₂). □

2.1 Special Cases and Applications

We briefly introduce some interesting examples of structural constraints with practical applications.

Example 1 (Low Rank and Decomposable Norms). Low-rankness is the most common structure used in many matrix estimation problems, including collaborative filtering, PCA, spectral clustering, etc. Convex estimators using nuclear norm ∥Θ∥_* regularization have been widely studied statistically [8, 7, 28, 26, 18, 19, 22, 11, 20, 21]. A recent work [16] extends the analysis of low rank matrix completion to general decomposable norms, i.e. norms R such that ∀X, Y ∈ (M, M^⊥), R(X + Y) = R(X) + R(Y).

Example 2 (Spectral k-support Norm). A non-trivial and significant example of norm regularization that is not decomposable is the spectral k-support norm recently introduced by McDonald et al. [25]. The spectral k-support norm is essentially the vector k-support norm [2] applied to the singular values σ(Θ) of a matrix Θ ∈ R^{d₁×d₂}. Without loss of generality, let d̄ = d₁ = d₂. Let G_k = {g ⊆ [d̄] : |g| ≤ k} be the set of all subsets of [d̄] of cardinality at most k, and denote the set V(G_k) = {(v_g)_{g∈G_k} : v_g ∈ R^{d̄}, supp(v_g) ⊆ g}.
The spectral k-support norm is given by:

∥Θ∥_{k-sp} = inf_{v∈V(G_k)} { Σ_{g∈G_k} ∥v_g∥₂ : Σ_{g∈G_k} v_g = σ(Θ) },   (8)

McDonald et al. [25] showed that the spectral k-support norm is a special case of the cluster norm [17]. It was further shown that in multi-task learning, wherein the tasks (columns of Θ*) are assumed to be clustered into dense groups, the cluster norm provides a trade-off between the intra-cluster variance, the (inverse) inter-cluster variance, and the norm of the task vectors. Both [17] and [25] demonstrate superior empirical performance of cluster norms (and the k-support norm) over traditional trace norm and spectral elastic net minimization on benchmark matrix completion and multi-task learning datasets. However, there is no known work on the statistical analysis of matrix completion with spectral k-support norm regularization. In Section 3.2, we discuss the consequences of our main theorem for this non-trivial special case.

Example 3 (Additive Decomposition). Elementwise sparsity is a common structure often assumed in high-dimensional estimation problems. However, in matrix completion, elementwise sparsity conflicts with Assumption 1 (and with the more traditional incoherence assumptions). Indeed, it is easy to see that with high probability most of the |Ω| ≪ d₁d₂ uniformly sampled observations will be zero, and an informed prediction is infeasible. However, elementwise sparse structures are often used within an additive decomposition framework, wherein Θ* = Σ_k Θ^(k), such that each component matrix Θ^(k) is in turn structured (e.g. low rank + sparse, as used for robust PCA [6]). In such structures, there is no scope for recovering sparse components outside the observed indices, and it is assumed that: Θ^(k) is sparse ⇒ supp(Θ^(k)) ⊆ Ω. Further, the sparsity assumption might still conflict with the spikiness assumption. In such cases, consistent matrix completion is feasible under additional regularity assumptions if the superposed matrix is non-spiky.
A candidate norm regularizer for such structures is the weighted infimum convolution of the individual structure inducing norms [6, 39]:

R_w(Θ) = inf { Σ_k w_k R_k(Θ^(k)) : Σ_k Θ^(k) = Θ }.

Example 4 (Other Applications). Other potential applications, including cut matrices [30, 10], structures induced by compact convex sets, norms inducing structured sparsity assumptions on the spectrum of Θ*, etc., can also be handled under the paradigm of this paper.

2.2 Structured Matrix Estimator

Let R be the norm surrogate for the structural constraints on Θ*, and let R* denote its dual norm. We propose and analyze two convex estimators for the task of structured matrix completion:

Constrained Norm Minimizer:
Θ̂_cn = argmin_{∥Θ∥_∞ ≤ α*/√(d₁d₂)} R(Θ) s.t. ∥P_Ω(Θ) − y∥₂ ≤ λ_cn.   (9)

Generalized Matrix Dantzig Selector:
Θ̂_ds = argmin_{∥Θ∥_∞ ≤ α*/√(d₁d₂)} R(Θ) s.t. (√(d₁d₂)/|Ω|) R*(P*_Ω(P_Ω(Θ) − y)) ≤ λ_ds,   (10)

where P*_Ω : R^Ω → R^{d₁×d₂} is the linear adjoint of P_Ω, i.e. ⟨P_Ω(X), y⟩ = ⟨X, P*_Ω(y)⟩.

Note: Theorems 1a-1b give consistency results for (9) and (10), respectively, under certain conditions on the parameters λ_cn > 0, λ_ds > 0, and α* > 1. In particular, these conditions assume knowledge of the noise variance ξ² and the spikiness ratio α_sp(Θ*). In practice, both ξ and α_sp(Θ*) are typically unknown, and the parameters are tuned by validating on held out data.

3 Main Results

We define the following "restricted" error cone and its subset:

T_R = T_R(Θ*) = cone{∆ : R(Θ* + ∆) ≤ R(Θ*)}, and E_R = T_R ∩ S^{d₁d₂−1}.   (11)

Let Θ̂_cn and Θ̂_ds be the estimates from (9) and (10), respectively. If λ_cn and λ_ds are chosen such that Θ* belongs to the feasible sets in (9) and (10), respectively, then the error matrices ∆̂_cn = Θ̂_cn − Θ* and ∆̂_ds = Θ̂_ds − Θ* are contained in T_R.

Theorem 1a (Constrained Norm Minimizer). Under the problem setup in Section 2, let Θ̂_cn = Θ* + ∆̂_cn be the estimate from (9) with λ_cn = 2ξ.
For large enough c₀, if |Ω| > c₀² w_G²(E_R) log d, then there exists an RSC parameter κ_{c₀} > 0 and constants c₁, c₂, c₃ such that, with probability greater than 1 − exp(−c₁w_G²(E_R)) − 2 exp(−c₂w_G²(E_R) log d),

(1/(d₁d₂)) ∥∆̂_cn∥_F² ≤ 4 max{ c₃ξ²/κ_{c₀}, (α*²/(d₁d₂)) c₀²w_G²(E_R) log d / |Ω| }.

Theorem 1b (Matrix Dantzig Selector). Under the problem setup in Section 2, let Θ̂_ds = Θ* + ∆̂_ds be the estimate from (10) with λ_ds ≥ 2ξ (√(d₁d₂)/|Ω|) R*(P*_Ω(w)). For large enough c₀, if |Ω| > c₀² w_G²(E_R) log d, then there exists an RSC parameter κ_{c₀} > 1 and a constant c₁ such that, with probability greater than 1 − exp(−c₁w_G²(E_R)),

∥∆̂_ds∥_F² ≤ 4 max{ λ_ds² Ψ_R²(T_R)/κ_{c₀}², α*² c₀²w_G²(E_R) log d / |Ω| }.

Recall the Gaussian width w_G and the subspace compatibility constant Ψ_R from (1) and (4), respectively.

Remarks:
1. If R(Θ) = ∥Θ∥_* and rank(Θ*) = r, then w_G²(E_R) ≤ 3dr, Ψ_R(T_R) ≤ 2r, and (√(d₁d₂)/|Ω|) ∥P*_Ω(η)∥₂ ≤ 2√(d log d/|Ω|) w.h.p. [10, 14, 26]. Using these bounds in Theorem 1b recovers near-optimal results for low rank matrix completion under the spikiness assumption [26].
2. For both estimators, the upper bound on the sample complexity is dominated by the square of the Gaussian width, which is often considered the effective dimension of a subset in a high dimensional space and plays a key role in high dimensional estimation under Gaussian measurement ensembles. The results show that, independent of R(.), the upper bound on the sample complexity for consistent matrix completion with highly localized measurements is within a log d factor of the known sample complexity of ∼w_G²(E_R) for estimation from Gaussian measurements [3, 10, 37, 5].
3. The first term in the estimation error bounds in Theorems 1a-1b scales with ξ², which is the per-observation noise variance (up to a constant). The second term is an upper bound on the error that arises due to the unidentifiability of Θ* within a certain radius under the spikiness constraints [26]; in contrast, [7] show exact recovery when ξ = 0 using more stringent matrix incoherence conditions.
4.
Bound on b∆cn from Theorem 1a is comparable to the result by Cand´es et al. [7] for low rank matrix completion under non–low–noise regime, where the first term dominates, and those of [10, 35] for high dimensional estimation under Gaussian measurements. With a bound on w2 G(ER), it is easy to specialize this result for new structural constraints. However, this bound is potentially loose and asymptotically converges to a constant error proportional to the noise variance ξ2. 5. The estimation error bound in Theorem 1b is typically sharper than that in Theorem 1a. However, for specific structures, using application of Theorem 1b requires additional bounds on ER∗(PΩ(W)) and ΨR(TR) besides w2 G(ER). 3.1 Partial Complexity Measures Recall that for wG(S) = E supX∈S⟨X, G⟩and R|Ω| ∋g ∼N(0, I|Ω|) is a standard normal vector. 5 Definition 6 (Partial Complexity Measures). Given a randomly sampled collection Ω= {Ek ∈ Rd1×d2}, and a random vector η ∈R|Ω|, the partial η–complexity measure of S is given by: wΩ,η(S) = EΩ,η sup X∈S−S ⟨X, P ∗ Ω(η)⟩. (12) The special cases where η is a vector of standard Gaussian g, or standard Rademacher ϵ (i.e. ϵk ∈ {−1, 1} w.p. 1/2) variables, are of particular interest. In the case of symmetric η, like g and ϵ, wΩ,η(S) = 2EΩ,η supX∈S⟨X, P ∗ Ω(η)⟩, and the later expression will be used interchangeably ignoring the constant term. Theorem 2 (Partial Gaussian Complexity). Let S ⊂Bd1d2, and let Ωbe sampled according to (6). ∃universal constants K1, K2, K3, and K4 such that: wΩ,g(S) ≤K1 s |Ω| d1d2 wG(S) + min n K2 r EΩsup X,Y ∈S ∥PΩ(X −Y )∥2 2, K3 sup X∈S αsp(X) √d1d2 o (13) Further, for a centered i.i.d. 1–sub–Gaussian vector η ∈R|Ω|, wΩ,η(S) ≤K4wΩ,g(S). Note: For Ω⊊[d1]×[d2], the second term in (13) is a consequence of the localized measurements. 3.2 Spectral k–Support Norm We introduced spectral k–support norm in Section 2.1. 
The estimators from (9) and (10) for spectral k–support norm can be efficiently solved through proximal methods using the proximal operators derived in [25]. We are interested in the statistical guarantees for matrix completion using spectral k–support norm regularization. We extend the analysis for upper bounding the Gaussian width of the descent cone for the vector k–support norm by [29] to the case of spectral k–support norm. WLOG let d1 = d2 = ¯d. Let σ∗∈R ¯d be the vector of singular values of Θ∗sorted in non–ascending order. Let r ∈{0, 1, 2, . . . , k −1} be the unique integer satisfying: σ∗ k−r−1 > 1 r+1 Pp i=k−r σ∗ i ≥σ∗ k−r. Denote I2 = {1, 2, . . . , k −r −1}, I1 = {k −r, k −r + 1, . . . , s}, and I0 = {s + 1, s + 2, . . . , ¯d}. Finally, for I ⊆[ ¯d], (σ∗ I)i = 0 ∀i ∈Ic, and (σ∗ I)i = σ∗ i ∀i ∈I. Lemma 3. If rank of Θ∗is s and ER is the error set from R(Θ) = ∥Θ∥k–sp, then w2 G(ER) ≤s(2 ¯d −s) + (r + 1)2∥σ∗ I2∥2 2 ∥σ∗ I1∥2 1 + |I1|  (2 ¯d −s). Proof of the above lemma is provided in the appendix. Lemma 3 can be combined with Theorem 1a to obtain recovery guarantees for matrix completion under spectral k–support norm. 4 Discussions and Related Work Sample Complexity: For consistent recovery in high dimensional convex estimation, it is desirable that the descent cone at the target parameter Θ∗is “small” relative to the feasible set (enforced by the observations) of the estimator. Thus, a measure of complexity/size of the error cone at Θ∗is crucial in establishing sample complexity and estimation error bounds. Results in this paper are largely characterized in terms of a widely used complexity measure of Gaussian width wG(.), and can be compared with the literature on estimation from Gaussian measurements. Error Bounds: Theorem 1a provides estimation error bounds that depends only on the Gaussian width of the descent cone. In non–low–noise regime, this result is comparable to analogous results of constrained norm minimization [6, 10, 35]. 
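The integer r and the index sets I2, I1 used in Lemma 3 mirror the standard computational formula for the vector k-support norm [2, 25]. As a concrete illustration, the sketch below (our own implementation, not code from the paper, using the convention that the zeroth sorted magnitude is +infinity) evaluates that formula; it reduces to the l1 norm at k = 1 and to the l2 norm at k = d.

```python
import numpy as np

def k_support_norm(x, k):
    """Vector k-support norm via the sorted-magnitude formula.
    Finds the unique r in {0,...,k-1} with s_{k-r-1} > (1/(r+1)) sum_{i=k-r}^d s_i >= s_{k-r}
    (1-based, s sorted descending, convention s_0 = inf), then evaluates
    norm^2 = sum_{i<=k-r-1} s_i^2 + (1/(r+1)) (sum_{i>=k-r} s_i)^2."""
    s = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]      # descending magnitudes
    tail = np.concatenate([np.cumsum(s[::-1])[::-1], [0.0]])   # tail[i] = sum_{j >= i} s[j]
    for r in range(k):
        left = np.inf if k - r - 1 == 0 else s[k - r - 2]
        avg = tail[k - r - 1] / (r + 1)
        if left > avg >= s[k - r - 1]:
            head = s[: k - r - 1]
            return float(np.sqrt(head @ head + avg * tail[k - r - 1]))
    return float(np.linalg.norm(s))  # not reached for valid k <= len(x)
```

For [3, 2, 1] and k = 2 this gives sqrt(18), which sits between the l2 value sqrt(14) and the l1 value 6, as expected of an interpolating norm.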
However, this bound is potentially loose owing to mismatched data–fit term using squared loss, and asymptotically converges to a constant error proportional to the noise variance ξ2. A tighter analysis on the estimation error can be obtained for the matrix Dantzig selector (10) from Theorem 1b. However, application of Theorem 1b requires computing high probability upper bound on R∗(PΩ(W)). The literature on norms of random matrices [13, 24, 36, 34] can be exploited in deriving such bounds. Beside, in special cases: if R(.) ≥K∥.∥∗, then KR∗(.) ≤∥.∥op can be used to obtain asymptotically consistent results. Finally, under near zero–noise, the second term in the results of Theorem 1 dominates, and bounds are weaker than that of [6, 19] owing to the relaxation of stronger incoherence assumption. 6 Related Work and Future Directions: The closest related work is the result on consistency of matrix completion under decomposable norm regularization by [16]. Results in this paper are a strict generalization to general norm regularized (not necessarily decomposable) matrix completion. We provide non–trivial examples of application where structures enforced by such non–decomposable norms are of interest. Further, in contrast to our results that are based on Gaussian width, the RSC parameter in [16] depends on a modified complexity measure κR(d, |Ω|) (see definition in [16]). An advantage of results based on Gaussian width is that, application of Theorem 1 for special cases can greatly benefit from the numerous tools in the literature for the computation of wG(.). Another closely related line of work is the non–asymptotic analysis of high dimensional estimation under random Gaussian or sub–Gaussian measurements [10, 1, 35, 3, 37, 5]. However, the analysis from this literature rely on variants of RIP of the measurement ensemble [9], which is not satisfied by the the extremely localized measurements encountered in matrix completion[8]. 
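For concreteness, the following is a minimal numerical sketch in the spirit of the estimators (9)-(10), specialized to the nuclear norm R = ||.||_* and written in penalized (Lagrangian) rather than constrained form: proximal gradient with singular-value soft-thresholding, plus an optional entrywise clip standing in for the spikiness constraint ||Theta||_inf <= alpha*/sqrt(d1 d2). Function names, step size, and iteration count are our own choices, not the paper's.

```python
import numpy as np

def svt_complete(y_obs, mask, lam=0.02, alpha=None, n_iter=1000, step=1.0):
    """Proximal gradient for 0.5 * ||P_Omega(Theta) - y||_F^2 + lam * ||Theta||_*,
    a Lagrangian cousin of (9) with R the nuclear norm.  `mask` is the 0/1 indicator
    of Omega; `alpha`, if given, enforces the entrywise bound ||Theta||_inf <= alpha."""
    Theta = np.zeros_like(y_obs)
    for _ in range(n_iter):
        grad = mask * (Theta - y_obs)                        # gradient of the data-fit term
        U, s, Vt = np.linalg.svd(Theta - step * grad, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)                  # singular-value soft-thresholding
        Theta = (U * s) @ Vt
        if alpha is not None:
            Theta = np.clip(Theta, -alpha, alpha)            # spikiness-style clip
    return Theta
```

This is only a sketch of the convex-programming flavor: the paper's estimators are constrained rather than penalized, and for a general (non-nuclear) norm R a different proximal operator would be required.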
In an intermediate result, we establish a form of RSC for matrix completion under general norm regularization: a result that was previously known only for nuclear norm and decomposable norm regularization. In future work, it is of interest to derive matching lower bounds on estimation error for matrix completion under general low dimensional structures, along the lines of [22, 5] and explore special case applications of the results in the paper. We also plan to derive explicit characterization of λds in terms of Gaussian width of unit balls by exploiting generic chaining results for general Banach spaces [33]. 5 Proof Sketch Proofs of the lemmas are provided in the Appendix. 5.1 Proof of Theorem 1 Define the following set of β–non–spiky matrices in Rd1×d2 for constant c0 from Theorem 1: A(β)= ( X : αsp(X) = √d1d2∥X∥∞ ∥X∥F ≤β ) . (14) Define, βc0 = 1 c0 s |Ω| w2 G(ER) log d (15) Case 1: Spiky Error Matrix When the error matrix from (9) or (10) has large spikiness ratio, following bound on error is immediate using ∥b∆∥∞≤∥bΘ∥∞+∥Θ∗∥∞≤2α∗/√d1d2 in (3). Proposition 4 (Spiky Error Matrix). For the constant c0 in Theorem 1a, if αsp(b∆cn) /∈A(βc0), then ∥b∆cn∥F ≤2c0α∗ q w2 G(ER) log d |Ω| . An analogous result also holds for b∆ds. □ Case 2: Non–Spiky Error Matrix Let b∆ds, b∆cn ∈A(β). 5.1.1 Restricted Strong Convexity (RSC) Recall TR and ER from (11). The most significant step in the proof of Theorem 1 involves showing that over a useful subset of TR, a form of RSC (2) is satisfied by a squared loss penalty. Theorem 5 (Restricted Strong Convexity). Let |Ω| > c2 0w2 G(ER) log d, for large enough constant c0; further, sub–sampling excess samples such that |Ω| ∼Ω(w2 G(ER) log2 d). There exists a RSC parameter κc0 = 1−δc0 > 0, such that the following holds w.p. greater that 1−exp(−c1w2 G(ER)), ∀X ∈TR ∩A(β), d1d2 |Ω| ∥PΩ(X)∥2 2 ≥κc0∥X∥2 F . Proof in Appendix A combines empirical process tools along with Theorem 2. 
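The spikiness ratio alpha_sp driving the set A(beta) in (14) is directly computable; a one-line sketch (the function name is ours). A constant matrix attains the minimum value 1, while a single-spike d x d matrix attains the maximum value d.

```python
import numpy as np

def spikiness(X):
    """alpha_sp(X) = sqrt(d1 d2) * ||X||_inf / ||X||_F.
    Ranges over [1, sqrt(d1 d2)]: 1 for perfectly flat matrices,
    sqrt(d1 d2) for a matrix supported on a single entry."""
    d1, d2 = X.shape
    return np.sqrt(d1 * d2) * np.abs(X).max() / np.linalg.norm(X)
```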
□ Recall from (5), that y −PΩ(Θ∗) = ξw, where w ∈R|Ω| consists of independent sub–Gaussian random variables with E[wk] = 0 and ∥wk∥Ψ2 = 1 (recall ∥.∥Ψ2 from Definition 2). 7 5.1.2 Constrained Norm Minimizer Lemma 6. Under the conditions of Theorem 1, let c1 be a constant such that ∀k, Var(wk) ≤ c1. ∃a universal constant c2 such that, if λcn ≥2c1ξ p |Ω|, then with probability greater than 1 −2 exp (−c2|Ω|), (a) b∆ds ∈TR, and (b) ∥PΩ(b∆cn)∥2 ≤2λcn. □ Using λcn =2c1ξ p |Ω| in (9), if b∆cn ∈A(β), then using Theorem 5 and Lemma 6, w.h.p. ∥b∆cn∥2 F d1d2 ≤1 κc0 ∥PΩ(b∆cn)∥2 2 |Ω| ≤4c2 1ξ2 κc0 . (16) 5.1.3 Matrix Dantzig Selector Proposition 7. λds ≥ξ √d1d2 |Ω| R∗P ∗ Ω(w) ⇒w.h.p. (a) b∆ds ∈TR; (b) √d1d2 |Ω| R∗P ∗ Ω(PΩ(b∆ds))≤2λds. Above result follows from optimality of bΘds and triangle inequality. Also, √d1d2 |Ω| ∥PΩ(b∆ds)∥2 2 ≤ √d1d2 |Ω| R∗P ∗ Ω(PΩ(b∆ds))R(b∆ds) ≤2λdsΨR(TR)∥b∆ds∥F , where recall norm compatibility constant ΨR(TR) from (4). Finally, using Theorem 5, w.h.p. ∥b∆ds∥2 F d1d2 ≤1 |Ω| ∥PΩ(b∆ds)∥2 2 κc0 ≤4λdsΨR(TR) κc0 ∥b∆ds∥F √d1d2 . (17) 5.2 Proof of Theorem 2 Let the entries of Ω= {Ek = eike⊤ jk : k = 1, 2, . . . , |Ω|} be sampled as in (6). Recall that g ∈R|Ω| is a standard normal vector. For a compact S ⊆Rd1×d2, it suffices to prove Theorem 2 for a dense countable subset of S. Overloading S to such a countable subset, define following random process: (XΩ,g(X))X∈S, where XΩ,g(X) = ⟨X, P ∗ Ω(g)⟩= P k⟨X, Ek⟩gk. (18) We start with a key lemma in the proof of Theorem 2. Proof of this lemma, provided in Appendix B, uses tools from the broad topic of generic chaining developed in recent works [31, 33]. Lemma 8. ∃constants k1, k2 such that for S ⊆Sd1d2−1, then wΩ,g(S) = E sup X∈S XΩ,g(X) ≤k1 s |Ω| d1d2 wG(S) + k2 r E sup X,Y ∈S ∥PΩ(X −Y )∥2 2. □ Lemma 9. 
There exists constants k3, k4, such that for S ⊆Sd1d2−1 E sup X,Y ∈S ∥PΩ(X −Y )∥2 2 ≤k3 sup X∈S αsp(X) √d1d2 wΩ,g(S) + k4 |Ω| d1d2 w2 G(S) Theorem 2 follows by combining Lemma 8 and Lemma 9, and simplifying using √ ab ≤a/2 + b/2 and triangle inequality (See Appendix B). The statement in Theorem 2 about partial sub–Gaussian complexity follows from a standard result in empirical process given in Lemma 12. Acknowledgments We thank the anonymous reviewers for helpful comments and suggestions. S. Gunasekar and J. Ghosh acknowledge funding funding from NSF grants IIS-1421729, IIS-1417697, and IIS1116656. A. Banerjee acknowledges NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and NASA grant NNX12AQ39A. 8 References [1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: A geometric theory of phase transitions in convex optimization. Inform. Inference, 2014. [2] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In NIPS, 2012. [3] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In NIPS, 2014. [4] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with bregman divergences. JMLR, 2005. [5] T. Cai, T. Liang, and A. Rakhlin. Geometrizing local rates of convergence for linear inverse problems. arXiv preprint, 2014. [6] E. J. Cand´es, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? ACM, 2011. [7] E. J. Cand´es and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010. [8] E. J. Cand´es and B. Recht. Exact matrix completion via convex optimization. FoCM, 2009. [9] Emmanuel J Candes and Terence Tao. Decoding by linear programming. Information Theory, IEEE Transactions on, 2005. [10] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 2012. [11] M. A. Davenport, Y. Plan, E. Berg, and M. Wootters. 
1-bit matrix completion. Inform. Inference, 2014. [12] R. M. Dudley. The sizes of compact subsets of hilbert space and continuity of gaussian processes. Journal of Functional Analysis, 1967. [13] A. Edelman. Eigenvalues and condition numbers of random matrices. Journal on Matrix Analysis and Applications, 1988. [14] M. Fazel, H Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In American Control Conference, 2001. [15] J. Forster and M. Warmuth. Relative expected instantaneous loss bounds. Journal of Computer and System Sciences, 2002. [16] S. Gunasekar, P. Ravikumar, and J. Ghosh. Exponential family matrix completion under structural constraints. In ICML, 2014. [17] L. Jacob, J. P. Vert, and F. R. Bach. Clustered multi-task learning: A convex formulation. In NIPS, 2009. [18] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. IT, 2010. [19] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. JMLR, 2010. [20] O. Klopp. Noisy low-rank matrix completion with general sampling distribution. Bernoulli, 2014. [21] O. Klopp. Matrix completion by singular value thresholding: sharp bounds. arXiv preprint arXiv, 2015. [22] Vladimir Koltchinskii, Karim Lounici, Alexandre B Tsybakov, et al. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 2011. [23] M. Ledoux and M. Talagrand. Probability in Banach Spaces: isoperimetry and processes. Springer, 1991. [24] A. E. Litvak, A. Pajor, M. Rudelson, and N. Tomczak-Jaegermann. Smallest singular value of random matrices and geometry of random polytopes. Advances in Mathematics, 2005. [25] A. M. McDonald, M. Pontil, and D. Stamos. New perspectives on k-support and cluster norms. arXiv preprint, 2014. [26] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. JMLR, 2012. [27] S. Negahban, B. 
Yu, M. J. Wainwright, and P. Ravikumar. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In NIPS, 2009. [28] B. Recht. A simpler approach to matrix completion. JMLR, 2011. [29] E. Richard, G. Obozinski, and J.-P. Vert. Tight convex relaxations for sparse matrix factorization. In ArXiv e-prints, 2014. [30] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Learning Theory. Springer, 2005. [31] M. Talagrand. Majorizing measures: the generic chaining. The Annals of Probability, 1996. [32] M. Talagrand. Majorizing measures without measures. Annals of probability, 2001. [33] M. Talagrand. Upper and Lower Bounds for Stochastic Processes. Springer, 2014. [34] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 2012. [35] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. arXiv preprint, 2014. [36] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed sensing, pages 210–268, 2012. [37] R. Vershynin. Estimation in high dimensions: a geometric perspective. ArXiv e-prints, 2014. [38] A. G. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its Applications, 1992. [39] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013. 9
When are Kalman-Filter Restless Bandits Indexable? Christopher Dance and Tomi Silander Xerox Research Centre Europe 6 chemin de Maupertuis, Meylan, Is`ere, France {dance,silander}@xrce.xerox.com Abstract We study the restless bandit associated with an extremely simple scalar Kalman filter model in discrete time. Under certain assumptions, we prove that the problem is indexable in the sense that the Whittle index is a non-decreasing function of the relevant belief state. In spite of the long history of this problem, this appears to be the first such proof. We use results about Schur-convexity and mechanical words, which are particular binary strings intimately related to palindromes. 1 Introduction We study the problem of monitoring several time series so as to maintain a precise belief while minimising the cost of sensing. Such problems can be viewed as POMDPs with belief-dependent rewards [3] and their applications include active sensing [7], attention mechanisms for multiple-object tracking [22], as well as online summarisation of massive data from time-series [4]. Specifically, we discuss the restless bandit [24] associated with the discrete-time Kalman filter [19]. Restless bandits generalise bandit problems [6, 8] to situations where the state of each arm (project, site or target) continues to change even if the arm is not played. As with bandit problems, the states of the arms evolve independently given the actions taken, suggesting that there might be efficient algorithms for large-scale settings, based on calculating an index for each arm, which is a real number associated with the (belief-)state of that arm alone. However, while bandits always have an optimal index policy (select the arm with the largest index), it is known that no index policy can be optimal for some discrete-state restless bandits [17] and such problems are in general PSPACE-hard even to approximate to any non-trivial factor [10]. 
Further, in this paper we address restless bandits with real-valued rather than discrete states. On the other hand, Whittle proposed a natural index policy for restless bandits [24], but this policy only makes sense when the restless bandit is indexable (Section 2). Briefly, a restless bandit is said to be indexable when an optimal solution to a relaxed version of the problem consists in playing all arms whose indices exceed a given threshold. (The relaxed version of the problem relaxes the constraint on the number of arms pulled per turn to a constraint on the average number of arms pulled per turn). Under certain conditions, indexability implies a form of asymptotic optimality of Whittle’s policy for the original problem [23, 20]. Restless bandits associated with scalar Kalman(-Bucy) filters in continuous time were recently shown to be indexable [12] and the corresponding discrete-time problem has attracted considerable attention over a long period [15, 11, 16, 21]. However, that attention has produced no satisfactory proof of indexability – even for scalar time-series and even if we assume that there is a monotone optimal policy for the single-arm problem, which is a policy that plays the arm if and only if the relevant belief-state exceeds some threshold (here the relevant belief-state is a posterior variance). Theorem 1 of this paper addresses that gap. After formalising the problem (Section 2), we describe the concepts and intuition (Section 3) behind the main result (Section 4). The main tools are mechanical words (which are not sufficiently well-known) and Schur convexity. As these tools are associated with rather general theorems, we believe that future work (Section 5) should enable substantial generalisation of our results. 1 2 Problem and Index We consider the problem of tracking N time-series, which we call arms, in discrete time. 
The state Zi,t ∈R of arm i at time t ∈Z+ evolves as a standard-normal random walk independent of everything but its immediate past (Z+, R−and R+ all include zero). The action space is U := {1, . . . , N}. Action ut = i makes an expensive observation Yi,t of arm i which is normally-distributed about Zi,t with precision bi ∈R+ and we receive cheap observations Yj,t of each other arm j with precision aj ∈R+ where aj < bj and aj = 0 means no observation at all. Let Zt, Yt, Ht, Ft be the state, observation, history and observed history, so that Zt := (Z1,t, . . . , ZN,t), Yt := (Y1,t, . . . , YN,t), Ht := ((Z0, u0, Y0), . . . , (Zt, ut, Yt)) and Ft := ((u0, Y0), . . . , (ut, Yt)). Then we formalise the above as (1· is the indicator function) Zi,0 ∼N(0, 1), Zi,t+1 | Ht ∼N(Zi,t, 1), Yi,t | Ht−1, Zt, ut ∼N  Zi,t, 1ut̸=i ai + 1ut=i bi  . Note that this setting is readily generalised to E[(Zi,t+1 −Zi,t)2] ̸= 1 by a change of variables. Thus the posterior belief is given by the Kalman filter as Zi,t | Ft ∼N( ˆZi,t, xi,t) where the posterior mean is ˆZi,t ∈R and the error variance xi,t ∈R+ satisfies xi,t+1 = φi,1ut+1=i(xi,t) where φi,0(x) := x + 1 aix + ai + 1 and φi,1(x) := x + 1 bix + bi + 1. (1) Problem KF1. Let π be a policy so that ut = π(Ft−1). Let xπ i,t be the error variance under π. The problem is to choose π so as to minimise the following objective for discount factor β ∈[0, 1). The objective consists of a weighted sum of error variances xπ i,t with weights wi ∈R+ plus observation costs hi ∈R+ for i = 1, . . . , N: E " ∞ X t=0 N X i=1 βt  hi1ut=i + wixπ i,t # = ∞ X t=0 N X i=1 βt  hi1ut=i + wixπ i,t where the equality follows as (1) is a deterministic mapping (and assuming π is deterministic). Single-Arm Problem and Whittle Index. Now fix an arm i and write xπ t , φ0(·), . . . instead of xπ t,i, φi,0(·), . . . . 
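The variance recursion (1) is a scalar Riccati-type update: the predicted variance grows by 1, then conditioning on an observation of precision c in {a_i, b_i} gives posterior variance (x+1)/(c(x+1)+1). A minimal numerical sketch (our own code, not the paper's) that also iterates to the steady-state variance:

```python
def phi(x, c):
    """One step of recursion (1): predict (variance x + 1), then observe with
    precision c, giving posterior variance (x + 1) / (c * (x + 1) + 1)."""
    return (x + 1.0) / (c * (x + 1.0) + 1.0)

def steady_state_variance(c, x0=1.0, n_iter=200):
    """Iterate phi_c to its unique fixed point on R_+ (phi_c is an increasing contraction)."""
    x = x0
    for _ in range(n_iter):
        x = phi(x, c)
    return x
```

For c = 1/2 the fixed point solves 0.5 x^2 + 0.5 x - 1 = 0, i.e. x = 1, which is a convenient hand-check; and since b > a, the expensive-observation fixed point y_1 lies below the cheap one y_0, as required by assumption A2 later in the paper.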
Say there are now two actions ut = 0, 1 corresponding to cheap and expensive observations respectively and the expensive observation now costs h + ν where ν ∈R. The singlearm problem is to choose a policy, which here is an action sequence, π := (u0, u1, . . . ) so as to minimise V π(x|ν) := ∞ X t=0 βt {(h + ν)ut + wxπ t } where x0 = x. (2) Let Q(x, α|ν) be the optimal cost-to-go in this problem if the first action must be α and let π∗be an optimal policy, so that Q(x, α|ν) := (h + ν)α + wx + βV π∗(φα(x)|ν). For any fixed x ∈R+, the value of ν for which actions u0 = 0 and u0 = 1 are both optimal is known as the Whittle index λW (x) assuming it exists and is unique. In other words The Whittle index λW (x) is the solution to Q(x, 0|λW (x)) = Q(x, 1|λW (x)). (3) Let us consider a policy which takes action u0 = α then acts optimally producing actions uα∗ t (x) and error variances xα∗ t (x). Then (3) gives ∞ X t=0 βt  (h + λW (x))u0∗ t + wx0∗ t (x) = ∞ X t=0 βt  (h + λW (x))u1∗ t + wx1∗ t (x) . Solving this linear equation for the index λW (x) gives λW (x) = w P∞ t=1 βt(x0∗ t (x) −x1∗ t (x)) P∞ t=0 βt(u1∗ t (x) −u0∗ t (x)) −h. (4) Whittle [24] recognised that for his index policy (play the arm with the largest λW (x)) to make sense, any arm which receives an expensive observation for added cost ν, must also receive an expensive observation for added cost ν′ < ν. Such problems are said to be indexable. The question resolved by this paper is whether Problem KF1 is indexable. Equivalently, is λW (x) non-decreasing in x ∈R+? 2 xt xt+1 ?0(x) ?1(x) A B C D E xt 0* xt xt+1 ?0(x) ?1(x) F G H I J xt 1* Figure 1: Orbit x0∗ t (x) traces the path ABCDE . . . for the word 01w = 01101. Orbit x1∗ t (x) traces the path FGHIJ . . . for the word 10w = 10101. Word w = 101 is a palindrome. 3 Main Result, Key Concepts and Intuition We make the following intuitive assumption about threshold (monotone) policies. A1. 
For some x ∈R+ depending on ν ∈R, the policy ut = 1xt≥x is optimal for problem (2). Note that under A1, definition (3) means the policy ut = 1xt>x is also optimal, so we can choose u0∗ t (x) := 0 if x0∗ t−1(x) ≤x 1 otherwise and x0∗ t (x) := φ0(x0∗ t−1(x)) if x0∗ t−1(x) ≤x φ1(x0∗ t−1(x)) otherwise u1∗ t (x) := 0 if x1∗ t−1(x) < x 1 otherwise and x1∗ t (x) := φ0(x1∗ t−1(x)) if x1∗ t−1(x) < x φ1(x1∗ t−1(x)) otherwise          (5) where x0∗ 0 (x) = x1∗ 0 (x) = x. We refer to x0∗ t (x), x1∗ t (x) as the x-threshold orbits (Figure 1). We are now ready to state our main result. Theorem 1. Suppose a threshold policy (A1) is optimal for the single-arm problem (2). Then Problem KF1 is indexable. Specifically, for any b > a ≥0 let φ0(x) := x + 1 ax + a + 1, φ1(x) := x + 1 bx + b + 1 and for any w ∈R+, h ∈R and 0 < β < 1, let λW (x) := w P∞ t=1 βt(x0∗ t (x) −x1∗ t (x)) P∞ t=0 βt(u1∗ t (x) −u0∗ t (x)) −h (6) in which action sequences u0∗ t (x), u1∗ t (x) and error variance sequences x0∗ t (x), x1∗ t (x) are given in terms of φ0, φ1 by (5). Then λW (x) is a continuous and non-decreasing function of x ∈R+. We are now ready to describe the key concepts underlying this result. Words. In this paper, a word w is a string on {0, 1}∗with kth letter wk and wi:j := wiwi+1 . . . wj. The empty word is ϵ, the concatenation of words u, v is uv, the word that is the n-fold repetition of w is wn, the infinite repetition of w is wω and ˜w is the reverse of w, so w = ˜w means w is a palindrome. The length of w is |w| and |w|u is the number of times that word u appears in w, overlaps included. Christoffel, Sturmian and Mechanical Words. It turns out that the action sequences in (5) are given by such words, so the following definitions are central to this paper. 
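The threshold orbits (5) and the index (6) can be evaluated numerically by truncating the infinite discounted sums at a horizon T. The sketch below (our own code and parameter choices, not the paper's) does exactly this; on a grid of thresholds the computed lambda_W(x) comes out non-decreasing, as Theorem 1 predicts.

```python
import numpy as np

def threshold_orbit(x, a, b, strict, T):
    """Orbit of (5) from x_0 = x: play the expensive observation (action 1) when
    the variance is above the threshold x.  strict=True gives orbit x^{0*}
    (tie-break '> x', so the first action is 0); strict=False gives orbit x^{1*}
    (tie-break '>= x', so the first action is 1)."""
    phi = lambda v, c: (v + 1.0) / (c * (v + 1.0) + 1.0)
    xs, us, cur = [x], [], x
    for _ in range(T):
        act = 1 if (cur > x if strict else cur >= x) else 0
        cur = phi(cur, b if act else a)
        us.append(act)
        xs.append(cur)
    return np.array(xs), np.array(us)

def whittle_index(x, a, b, beta=0.9, w=1.0, h=0.0, T=2000):
    """Truncated evaluation of the index formula (6)."""
    x0, u0 = threshold_orbit(x, a, b, True, T)
    x1, u1 = threshold_orbit(x, a, b, False, T)
    num = w * np.sum(beta ** np.arange(1, T + 1) * (x0[1:] - x1[1:]))
    den = np.sum(beta ** np.arange(T) * (u1 - u0))
    return num / den - h
```

Note that the two tie-breaking rules alone encode the forced first actions u_0 = 0 and u_0 = 1, since both orbits start exactly at the threshold.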
3 (0,00001) (00001,0001) (0001,0001001) (0001001,001) (001,00100101) (00100101,00101) (00101,0010101) (0010101,01) (01,0101011) (0101011,01011) (01011,01011011) (01011011,011) (011,0110111) (0110111,0111) (0111,01111) (01111,1) (0,0001) (0001,001) (001,00101) (00101,01) (01,01011) (01011,011) (011,0111) (0111,1) (0,001) (001,01) (01,011) (011,1) (0,01) (01,1) (0,1) Figure 2: Part of the Christoffel tree. The Christoffel tree (Figure 2) is an infinite complete binary tree [5] in which each node is labelled with a pair (u, v) of words. The root is (0, 1) and the children of (u, v) are (u, uv) and (uv, v). The Christoffel words are the words 0, 1 and the concatenations uv for all (u, v) in that tree. The fractions |uv|1/|uv|0 form the Stern-Brocot tree [9] which contains each positive rational number exactly once. Also, infinite paths in the Stern-Brocot tree converge to the positive irrational numbers. Analogously, Sturmian words could be thought of as infinitely-long Christoffel words. Alternatively, among many known characterisations, the Christoffel words can be defined as the words 0, 1 and the words 0w1 where a := |0w1|1/|0w1| and (01w)n := ⌊(n + 1)a⌋−⌊na⌋ for any relatively prime natural numbers |0w1|0 and |0w1|1 and for n = 1, 2, . . . , |0w1|. The Sturmian words are then the infinite words 0w1w2 · · · where, for n = 1, 2, . . . and a ∈(0, 1)\Q, (01w1w2 · · · )n := ⌊(n + 1)a⌋−⌊na⌋. We use the notation 0w1 for Sturmian words although they are infinite. The set of mechanical words is the union of the Christoffel and Sturmian words [13]. (Note that the mechanical words are sometimes defined in terms of infinite repetitions of the Christoffel words.) Majorisation. As in [14], let x, y ∈Rm and let x(i) and y(i) be their elements sorted in ascending order. We say x is weakly supermajorised by y and write x ≺w y if j X k=1 x(k) ≥ j X k=1 y(k) for all j = 1, . . . , m. If this is an equality for j = m we say x is majorised by y and write x ≺y. 
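The Christoffel tree of Figure 2 is easy to enumerate programmatically, which gives a direct check of Proposition 3 below (every Christoffel word of length at least 2 has the form 0w1 with w a palindrome). A small sketch, our own code:

```python
def christoffel_words(depth):
    """Enumerate Christoffel words uv by expanding the Christoffel tree to `depth`
    levels.  Root (u, v) = (0, 1); the children of (u, v) are (u, uv) and (uv, v)."""
    words, frontier = [], [("0", "1")]
    for _ in range(depth):
        nxt = []
        for u, v in frontier:
            words.append(u + v)
            nxt += [(u, u + v), (u + v, v)]
        frontier = nxt
    return words
```

Expanding four levels yields the 15 words of the first four rows of Figure 2, all of which start with 0, end with 1, and have a palindromic interior.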
It turns out that x ≺y ⇔ j X k=1 x[k] ≤ j X k=1 y[k] for j = 1, . . . , m −1 with equality for j = m where x[k], y[k] are the sequences sorted in descending order. For x, y ∈Rm we have [14] x ≺y ⇔ m X i=1 f(xi) ≤ m X i=1 f(yi) for all convex functions f : R →R. More generally, a real-valued function φ defined on a subset A of Rm is said to be Schur-convex on A if x ≺y implies that φ(x) ≤φ(y). M¨obius Transformations. Let µA(x) denote the M¨obius transformation µA(x) := A11x+A12 A21x+A22 where A ∈R2×2. M¨obius transformations such as φ0(·), φ1(·) are closed under composition, so for any word w we define φw(x) := φw|w| ◦· · · ◦φw2 ◦φw1(x) and φϵ(x) := x. Intuition. Here is the intuition behind our main result. For any x ∈R+, the orbits in (5) correspond to a particular mechanical word 0, 1 or 0w1 depending on the value of x (Figure 1). Specifically, for any word u, let yu be the fixed point of the mapping φu on R+ so that φu(yu) = yu and yu ∈R+. Then the word corresponding to x is 1 for 0 ≤x ≤y1, 0w1 for x ∈[y01w, y10w] and 0 for y0 ≤x < ∞. In passing we note that these fixed points are sorted in ascending order by the ratio ρ := |01w|0/|01w|1 of counts of 0s to counts of 1s, as 4 |01w|0 / |01w|1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 y01w and ?w(0) 0 20 40 60 80 100 Figure 3: Lower fixed points y01w of Christoffel words (black dots), majorisation points for those words (black circles) and the tree of φw(0) (blue). illustrated by Figure 3. Interestingly, it turns out that ratio ρ is a piecewise-constant yet continuous function of x, reminiscent of the Cantor function. Also, composition of M¨obius transformations is homeomorphic to matrix multiplication so that µA ◦µB(x) = µAB(x) for any A, B ∈R2×2. Thus, the index (6) can be written in terms of the orbits of a linear system (11) given by 0, 1 or 0w1. Further, if A ∈R2×2 and det(A) = 1 then the gradient of the corresponding M¨obius transformation is the convex function dµA(x) dx = 1 (A21x + A22)2 . 
So the gradient of the index is the difference of the sums of a convex function of the linear-system orbits. However, such sums are Schur-convex functions and it follows that the index is increasing because one orbit weakly supermajorises the other, as we now show for the case 0w1 (noting that the proof is easier for words 0, 1). As 0w1 is a mechanical word, w is a palindrome. Further, if w is a palindrome, it turns out that the difference between the linear-system orbits increases with x. So, we might define the majorisation point for w as the x for which one orbit majorises the other. Quite remarkably, if w is a palindrome then the majorisation point is φw(0) (Proposition 7). Indeed the black circles and blue dots of Figure 3 coincide. Finally, φw(0) is less than or equal to y01w which is the least x for which the orbits correspond to the word 0w1. Indeed, the blue dots of Figure 3 are below the corresponding black dots. Thus one orbit does indeed supermajorise the other. 4 Proof of Main Result 4.1 Mechanical Words The M¨obius transformations of (1) satisfy the following assumption for I := R+. We prove that the fixed point yw of word w (the solution to φw(x) = x on I) is unique in the supplementary material. Assumption A2. Functions φ0 : I →I, φ1 : I →I, where I is an interval of R, are increasing and non-expansive, so for all x, y ∈I : x < y and for k ∈{0, 1} we have φk(x) < φk(y) | {z } increasing and φk(y) −φk(x) < y −x | {z } non-expansive . Furthermore, the fixed points y0, y1 of φ0, φ1 on I satisfy y1 < y0. Hence the following two propositions (supplementary material) apply to φ0, φ1 of (1) on I = R+. 5 Proposition 1. Suppose A2 holds, x ∈I and w is a non-empty word. Then x < φw(x) ⇔φw(x) < yw ⇔x < yw and x > φw(x) ⇔φw(x) > yw ⇔x > yw. For a given x, in the notation of (5), we call the shortest word u such that (u1∗ 1 , u1∗ 2 , . . . ) = uω the x-threshold word. 
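The majorisation orders used above reduce to simple partial-sum comparisons on sorted sequences; a sketch (our own code, with a small floating-point tolerance):

```python
import numpy as np

def weakly_supermajorised(x, y, tol=1e-12):
    """x majorised-w y: each ascending partial sum of x dominates that of y."""
    return bool(np.all(np.cumsum(np.sort(x)) >= np.cumsum(np.sort(y)) - tol))

def majorised(x, y, tol=1e-12):
    """x majorised by y: weak supermajorisation plus equal totals."""
    return weakly_supermajorised(x, y, tol) and abs(np.sum(x) - np.sum(y)) <= tol
```

The classical example is (2, 2, 2) majorised by (1, 2, 3), and indeed sums of any convex function respect this order, e.g. 4 + 4 + 4 <= 1 + 4 + 9 for f(t) = t^2.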
Proposition 2 generalises a recent result about x-threshold words in a setting where φ0, φ1 are linear [18]. Proposition 2. Suppose A2 holds and 0w1 is a mechanical word. Then 0w1 is the x-threshold word ⇔x ∈[y01w, y10w]. Also, if x0, x1 ∈I with x0 ≥y0 and x1 ≤y1 then the x0- and x1-threshold words are 0 and 1. We also use the following very interesting fact (Proposition 4.2 on p.28 of [5]). Proposition 3. Suppose 0w1 is a mechanical word. Then w is a palindrome. 4.2 Properties of the Linear-System Orbits M(w) and Prefix Sums S(w) Definition. Assume that a, b ∈R+ and a < b. Consider the matrices F :=  1 1 a 1 + a  , G :=  1 1 b 1 + b  and K :=  −1 −1 0 1  so that the M¨obius transformations µF , µG are the functions φ0, φ1 of (1) and GF −FG = (b−a)K. Given any word w ∈{0, 1}∗, we define the matrix product M(w) M(w) := M(w|w|) · · · M(w1), where M(ϵ) := I, M(0) := F and M(1) := G where I ∈R2×2 is the identity and the prefix sum S(w) as the matrix polynomial S(w) := |w| X k=1 M(w1:k), where S(ϵ) := 0 (the all-zero matrix). (7) For any A ∈R2×2, let tr(A) be the trace of A, let Aij = [A]ij be the entries of A and let A ≥0 indicate that all entries of A are non-negative. Remark. Clearly, det(F) = det(G) = 1 so that det(M(w)) = 1 for any word w. Also, S(w) corresponds to the partial sums of the linear-system orbits, as hinted in the previous section. The following proposition captures the role of palindromes (proof in the supplementary material). Proposition 4. Suppose w is a word, p is a palindrome and n ∈Z+. Then 1. M(p) = fh+1 h+f f h2−1 h+f h ! for some f, h ∈R, 2. tr(M(10p)) = tr(M(01p)), 3. If u ∈{p(10p)n, (10p)n10} then M(u) −M(˜u) = λK for some λ ∈R−, 4. If w is a prefix of p then [M(p(10p)n10w)]22 ≤[M(p(01p)n01w)]22, 5. [M((10p)n10w)]21 ≥[M((01p)n01w)]21, 6. [M((10p)n1)]21 ≥[M((01p)n0)]21. We now demonstrate a surprisingly simple relation between S(w) and M(w). Proposition 5. Suppose w is a palindrome. Then S21(w) = M22(w) −1 and S22(w) = M12(w) + S21(w). 
(8)

Furthermore, if Δ_k := [S(10w) M(w(10w)^k) − S(01w) M(w(01w)^k)]_{22}, then

Δ_k = 0 for all k ∈ Z_+.   (9)

Proof. Let us write M := M(w), S := S(w). We prove (8) by induction on |w|. In the base case w ∈ {ϵ, 0, 1}. For w = ϵ, M_{22} − 1 = 0 = S_{21} and M_{12} + S_{21} = 0 = S_{22}. For w ∈ {0, 1}, M_{22} − 1 = c = S_{21} and M_{12} + S_{21} = 1 + c = S_{22} for some c ∈ {a, b}. For the inductive step, in accordance with Claim 1 of Proposition 4, assume w ∈ {0v0, 1v1} for some word v satisfying

M(v) = \begin{pmatrix} \frac{fh+1}{h+f} & f \\ \frac{h^2-1}{h+f} & h \end{pmatrix}, \quad S(v) = \begin{pmatrix} c & d \\ h-1 & f+h-1 \end{pmatrix}

for some c, d, f, h ∈ R. For w = 1v1, M := M(1v1) = G M(v) G and S := S(1v1) = G M(v) G + S(v) G + G. Calculating the corresponding matrix products and sums gives

S_{21} = (bh + h + bf − 1)(bh + 2h + bf + f + 1)(h + f)^{−1} = M_{22} − 1,
S_{22} − S_{21} = bh + 2h + bf + f = M_{12},

as claimed. For w = 0v0 the claim also holds, as F = G|_{b=a}. This completes the proof of (8).

Furthermore part. Let A := S(w)FG + FG + G and B := S(w)GF + GF + F. Then

Δ_k = [(A (M(w)FG)^k − B (M(w)GF)^k) M(w)]_{22}   (10)

by definition of S(·). By Claim 1 of Proposition 4 and (8) we know that

M(w) = \begin{pmatrix} \frac{fh+1}{h+f} & f \\ \frac{h^2-1}{h+f} & h \end{pmatrix}, \quad S(w) = \begin{pmatrix} c & d \\ h-1 & f+h-1 \end{pmatrix}

for some c, d, f, h ∈ R. Substituting these expressions and the definitions of F, G into the definitions of A, B and then into (10) for k ∈ {0, 1} directly gives Δ_0 = Δ_1 = 0 (although this calculation is long). Now consider the case k ≥ 2. Claim 2 of Proposition 4 says tr(M(10w)) = tr(M(01w)), and clearly det(M(10w)) = det(M(01w)) = 1. Thus we can diagonalise as

M(w)FG =: U D U^{−1}, \quad M(w)GF =: V D V^{−1}, \quad D := diag(λ, 1/λ)

for some λ ≥ 1, so that

Δ_k = [A U D^k U^{−1} M(w) − B V D^k V^{−1} M(w)]_{22} =: γ_1 λ^k + γ_2 λ^{−k}.

So, if λ = 1 then Δ_k = γ_1 + γ_2 = Δ_0, and we already showed that Δ_0 = 0. Otherwise λ ≠ 1, so Δ_0 = Δ_1 = 0 implies γ_1 + γ_2 = γ_1 λ + γ_2 λ^{−1} = 0, which gives γ_1 = γ_2 = 0. Thus for any k ∈ Z_+ we have Δ_k = γ_1 λ^k + γ_2 λ^{−k} = 0.

4.3 Majorisation

The following is a straightforward consequence of results in [14], proved in the supplementary material.
We emphasize that the notation ≺w has nothing to do with the use of w as a word.

Proposition 6. Suppose x, y ∈ R^m_+ and f : R → R is a symmetric function that is convex and decreasing on R_+. Then

x ≺w y and β ∈ [0, 1]  ⇒  Σ_{i=1}^m β^i f(x_(i)) ≥ Σ_{i=1}^m β^i f(y_(i)).

For any x ∈ R and any fixed word w, define, for n ∈ Z_+ and k = 1, . . . , m,

x_{nm+k}(x) := [M((10w)^n (10w)_{1:k}) v(x)]_2,   σ_x^{(n)} := (x_{nm+1}(x), . . . , x_{nm+m}(x)),
y_{nm+k}(x) := [M((01w)^n (01w)_{1:k}) v(x)]_2,   σ_y^{(n)} := (y_{nm+1}(x), . . . , y_{nm+m}(x)),   (11)

where m := |10w| and v(x) := (x, 1)^T.

Proposition 7. Suppose w is a palindrome and x ≥ φ_w(0). Then σ_x^{(n)} and σ_y^{(n)} are ascending sequences on R_+ and σ_x^{(n)} ≺w σ_y^{(n)} for any n ∈ Z_+.

Proof. Clearly φ_w(0) ≥ 0, so x ≥ 0 and hence v(x) ≥ 0. So for any word u and letter c ∈ {0, 1} we have M(uc)v(x) = M(c)M(u)v(x) ≥ M(u)v(x) ≥ 0, as M(c) ≥ I. Thus x_{k+1}(x) ≥ x_k(x) ≥ 0 and y_{k+1}(x) ≥ y_k(x) ≥ 0. In conclusion, σ_x^{(n)} and σ_y^{(n)} are ascending sequences on R_+.

Now φ_w(0) = [M(w)]_{12} / [M(w)]_{22}. Thus [A v(φ_w(0))]_2 = [A M(w)]_{22} / [M(w)]_{22} for any A ∈ R^{2×2}. So

x_{nm+k}(φ_w(0)) − y_{nm+k}(φ_w(0)) = (1/[M(w)]_{22}) [(M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k})) M(w)]_{22} ≤ 0

for k = 2, . . . , m by Claim 4 of Proposition 4. So all but the first term of the sum T_m(φ_w(0)) are non-positive, where

T_j(x) := Σ_{k=1}^j (x_{nm+k}(x) − y_{nm+k}(x)).

Thus T_1(φ_w(0)) ≥ T_2(φ_w(0)) ≥ . . . ≥ T_m(φ_w(0)). But

T_m(φ_w(0)) = (1/[M(w)]_{22}) Σ_{k=1}^m [(M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k})) M(w)]_{22}
            = (1/[M(w)]_{22}) [S(10w) M(w(10w)^n) − S(01w) M(w(01w)^n)]_{22} = 0,

where the last step follows from (9). So T_j(φ_w(0)) ≥ 0 for j = 1, . . . , m. Yet Claims 5 and 6 of Proposition 4 give

(d/dx) T_j(x) = Σ_{k=1}^j [M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k})]_{21} ≥ 0.

So for x ≥ φ_w(0) we have T_j(x) ≥ 0 for j = 1, . . . , m, which means that σ_x^{(n)} ≺w σ_y^{(n)}.

4.4 Indexability

Theorem 1. The index λ_W(x) of (6) is continuous and non-decreasing for x ∈ R_+.

Proof. As the weight w is non-negative and the cost h is a constant, we only need to prove the result for λ(x) := λ_W(x)|_{w=1, h=0}, and we can then use w to denote a word.
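The partial-sum computation at the heart of the proof of Proposition 7 can be spot-checked numerically. In the sketch below, the palindrome w = "010" and the parameters a = 0.3, b = 0.7 are assumed sample values; the sequences of (11) are built directly from the matrix products:

```python
# Spot-check of Proposition 7 at x = phi_w(0): both sequences of (11) are
# ascending and the partial sums T_j are non-negative.
import numpy as np

a, b = 0.3, 0.7
F = np.array([[1.0, 1.0], [a, 1.0 + a]])
G = np.array([[1.0, 1.0], [b, 1.0 + b]])

def M(w):
    out = np.eye(2)
    for letter in w:
        out = (F if letter == "0" else G) @ out
    return out

w = "010"                                # assumed sample palindrome
m = len("10" + w)
Mw = M(w)
x = Mw[0, 1] / Mw[1, 1]                  # phi_w(0) = [M(w)]_12 / [M(w)]_22
v = np.array([x, 1.0])

for n in range(3):
    xs = [(M(("10" + w) * n + ("10" + w)[:k]) @ v)[1] for k in range(1, m + 1)]
    ys = [(M(("01" + w) * n + ("01" + w)[:k]) @ v)[1] for k in range(1, m + 1)]
    assert all(s <= t for s, t in zip(xs, xs[1:]))       # ascending sequences
    assert all(s <= t for s, t in zip(ys, ys[1:]))
    T = np.cumsum(np.array(xs) - np.array(ys))
    assert np.all(T >= -1e-6)                            # T_j(phi_w(0)) >= 0
```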
By Proposition 2, x ∈ [y_{01w}, y_{10w}] for some mechanical word 0w1. (The cases x ∉ (y_1, y_0) are clarified in the supplementary material.) Let us show that the hypotheses of Proposition 7 are satisfied by w and x. Firstly, w is a palindrome by Proposition 3. Secondly, φ_{w01}(0) ≥ 0, and as φ_w(·) is monotonically increasing it follows that φ_w(φ_{w01}(0)) ≥ φ_w(0). Equivalently, φ_{01w}(φ_w(0)) ≥ φ_w(0), so that φ_w(0) ≤ y_{01w} by Proposition 1. Hence x ≥ y_{01w} ≥ φ_w(0). Thus Proposition 7 applies, showing that the sequences σ_x^{(n)} and σ_y^{(n)}, with elements x_{nm+k}(x) and y_{nm+k}(x) as defined in (11), are non-decreasing sequences on R_+ with σ_x^{(n)} ≺w σ_y^{(n)}. Also, 1/x² is a symmetric function that is convex and decreasing on R_+. Therefore Proposition 6 applies, giving

Σ_{k=1}^m ( β^{nm+k−1} / (x_{nm+k}(x))² − β^{nm+k−1} / (y_{nm+k}(x))² ) ≥ 0 for any n ∈ Z_+, where m := |01w|.   (12)

Also, Proposition 2 shows that the x-threshold orbits are (φ_{u_1}(x), . . . , φ_{u_{1:k}}(x), . . . ) and (φ_{l_1}(x), . . . , φ_{l_{1:k}}(x), . . . ), where u := (01w)^ω and l := (10w)^ω. So the denominator of (6) is

Σ_{k=0}^∞ β^k (1_{l_{k+1}=1} − 1_{u_{k+1}=1}) = Σ_{k=0}^∞ β^{mk} (1 − β)

⇒ λ(x) = ((1 − β^m)/(1 − β)) Σ_{k=1}^∞ β^{k−1} ( φ_{u_{1:k}}(x) − φ_{l_{1:k}}(x) ).

Note that (d/dx) (ex + f)/(gx + h) = 1/(gx + h)² for any eh − fg = 1. Then (12) gives

dλ(x)/dx = ((1 − β^m)/(1 − β)) Σ_{n=0}^∞ Σ_{k=1}^m ( β^{nm+k−1} / (x_{nm+k}(x))² − β^{nm+k−1} / (y_{nm+k}(x))² ) ≥ 0.

But λ(x) is continuous for x ∈ R_+ (as shown in the supplementary material). Therefore we conclude that λ(x) is non-decreasing for x ∈ R_+.

5 Further Work

One might attempt to prove that Assumption A1 holds using general results about monotone optimal policies for two-action MDPs based on submodularity [2] or multimodularity [1]. However, we find counter-examples to the required submodularity condition. Rather, we are optimistic that the ideas of this paper themselves offer an alternative approach to proving A1. It would then be natural to extend our results to settings where the underlying state evolves as Z_{t+1} | H_t ∼ N(mZ_t, 1) for some multiplier m ≠ 1, and to cost functions other than the variance.
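Two identities used in the proof of Theorem 1, namely the geometric-series form of the denominator of (6) and the derivative formula for unimodular Möbius maps, can be verified numerically. In the sketch below, a, b, β, w and the truncation length K are assumed sample values:

```python
# Spot-check of the denominator identity and the unimodular derivative formula
# used in the proof of Theorem 1 (assumed sample values throughout).
import numpy as np

a, b, beta = 0.3, 0.7, 0.9
F = np.array([[1.0, 1.0], [a, 1.0 + a]])
G = np.array([[1.0, 1.0], [b, 1.0 + b]])

def M(word):
    out = np.eye(2)
    for c in word:
        out = (F if c == "0" else G) @ out
    return out

# Denominator of (6): sum_k beta^k (1[l_{k+1}=1] - 1[u_{k+1}=1]) with
# l = (10w)^omega, u = (01w)^omega collapses to (1 - beta)/(1 - beta^m),
# since l and u differ only in their first two letters each period.
w = "010"
m = len("10" + w)
K = 2000                                  # truncation; beta^K is negligible
l = ("10" + w) * (K // m + 1)
u = ("01" + w) * (K // m + 1)
lhs = sum(beta ** k * ((l[k] == "1") - (u[k] == "1")) for k in range(K))
assert np.isclose(lhs, (1.0 - beta) / (1.0 - beta ** m))

# d/dx (ex + f)/(gx + h) = 1/(gx + h)^2 whenever eh - fg = 1; checked by a
# central finite difference for the Moebius map of M("0110") (det = 1).
(e, f), (g, h) = M("0110")
assert np.isclose(e * h - f * g, 1.0)
mob = lambda t: (e * t + f) / (g * t + h)
x, eps = 0.4, 1e-6
fd = (mob(x + eps) - mob(x - eps)) / (2.0 * eps)
assert np.isclose(fd, 1.0 / (g * x + h) ** 2, rtol=1e-5)
```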
Finally, the question of the indexability of the discrete-time Kalman filter in multiple dimensions remains open.

References

[1] E. Altman, B. Gaujal, and A. Hordijk. Multimodularity, convexity, and optimization properties. Mathematics of Operations Research, 25(2):324–347, 2000.
[2] E. Altman and S. Stidham Jr. Optimality of monotonic policies for two-action Markovian decision processes, with applications to control of queues with delayed information. Queueing Systems, 21(3-4):267–291, 1995.
[3] M. Araya, O. Buffet, V. Thomas, and F. Charpillet. A POMDP extension with belief-dependent rewards. In Neural Information Processing Systems, pages 64–72, 2010.
[4] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 671–680, 2014.
[5] J. Berstel, A. Lauve, C. Reutenauer, and F. Saliola. Combinatorics on Words: Christoffel Words and Repetitions in Words. CRM Monograph Series, 2008.
[6] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, Vol. 5. NOW, 2012.
[7] Y. Chen, H. Shioi, C. Montesinos, L. P. Koh, S. Wich, and A. Krause. Active detection via adaptive submodularity. In Proceedings of the 31st International Conference on Machine Learning, pages 55–63, 2014.
[8] J. Gittins, K. Glazebrook, and R. Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.
[9] R. Graham, D. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, 1994.
[10] S. Guha, K. Munagala, and P. Shi. Approximation algorithms for restless bandit problems. Journal of the ACM, 58(1):3, 2010.
[11] B. La Scala and B. Moran. Optimal target tracking with restless bandits. Digital Signal Processing, 16(5):479–487, 2006.
[12] J. Le Ny, E. Feron, and M. Dahleh.
Scheduling continuous-time Kalman filters. IEEE Trans. Automatic Control, 56(6):1381–1394, 2011.
[13] M. Lothaire. Algebraic Combinatorics on Words. Cambridge University Press, 2002.
[14] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Science & Business Media, 2010.
[15] L. Meier, J. Peschon, and R. Dressler. Optimal control of measurement subsystems. IEEE Trans. Automatic Control, 12(5):528–536, 1967.
[16] J. Niño-Mora and S. Villar. Multitarget tracking via restless bandit marginal productivity indices and Kalman filter in discrete time. In Proceedings of the 48th IEEE Conference on Decision and Control, pages 2905–2910, 2009.
[17] R. Ortner, D. Ryabko, P. Auer, and R. Munos. Regret bounds for restless Markov bandits. In Algorithmic Learning Theory, pages 214–228. Springer, 2012.
[18] B. Rajpathak, H. Pillai, and S. Bandyopadhyay. Analysis of stable periodic orbits in the one dimensional linear piecewise-smooth discontinuous map. Chaos, 22(3):033126, 2012.
[19] T. Thiele. Sur la compensation de quelques erreurs quasi-systématiques par la méthode des moindres carrés. C. A. Reitzel, 1880.
[20] I. Verloop. Asymptotic optimal control of multi-class restless bandits. CNRS Technical Report, hal-00743781, 2014.
[21] S. Villar. Restless Bandit Index Policies for Dynamic Sensor Scheduling Optimization. PhD thesis, Statistics Department, Universidad Carlos III de Madrid, 2012.
[22] E. Vul, G. Alvarez, J. B. Tenenbaum, and M. J. Black. Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model. In Neural Information Processing Systems, pages 1955–1963, 2009.
[23] R. R. Weber and G. Weiss. On an index policy for restless bandits. Journal of Applied Probability, pages 637–648, 1990.
[24] P. Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability, pages 287–298, 1988.
3D Object Proposals for Accurate Object Class Detection

Xiaozhi Chen1  Kaustav Kundu2  Yukun Zhu2  Andrew Berneshawi2  Huimin Ma1  Sanja Fidler2  Raquel Urtasun2
1 Department of Electronic Engineering, Tsinghua University
2 Department of Computer Science, University of Toronto
chenxz12@mails.tsinghua.edu.cn, {kkundu, yukun}@cs.toronto.edu, andrew.berneshawi@mail.utoronto.ca, mhmpub@tsinghua.edu.cn, {fidler, urtasun}@cs.toronto.edu

Abstract

The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, the ground plane, as well as several depth-informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.

1 Introduction

Due to the development of advanced warning systems, cameras are available on board almost every new car produced in the last few years. Computer vision provides a very cost-effective solution not only to improve safety, but also to one of the holy grails of AI: fully autonomous self-driving cars. In this paper we are interested in 2D and 3D object detection for autonomous driving. With the large success of deep learning in the past years, the object detection community has shifted from simple appearance scoring on exhaustive sliding windows [1] to more powerful, multi-layer visual representations [2, 3] extracted from a smaller set of object/region proposals [4, 5]. This resulted in over 20% absolute performance gains [6, 7] on the PASCAL VOC benchmark [8].
The motivation behind these bottom-up grouping approaches is to provide a moderate number of region proposals among which at least a few accurately cover the ground-truth objects. These approaches typically over-segment an image into superpixels and group them based on several similarity measures [4, 5]. This is the strategy behind Selective Search [4], which is used in most state-of-the-art detectors these days. Contours in the image have also been exploited in order to locate object proposal boxes [9]. Another successful approach is to frame the problem as energy minimization, where a parametrized family of energies represents various biases for grouping, thus yielding multiple diverse solutions [10].

Interestingly, the state-of-the-art R-CNN approach [6] does not work well on the autonomous driving benchmark KITTI [11], falling significantly behind the current top performers [12, 13]. This is due to the low achievable recall of the underlying box proposals on this benchmark. KITTI images contain many small objects, severe occlusion, highly saturated areas and shadows. Furthermore, KITTI's evaluation requires a much higher overlap with the ground truth for cars in order for a detection to count as correct. Since most existing object/region proposal methods rely on grouping superpixels based on intensity and texture, they fail in these challenging conditions.

Figure 1: Features. From left to right: original image, stereo reconstruction, depth-based features and our prior. In the third image, purple is free space (F in Eq. (2)) and occupancy is yellow (S in Eq. (1)). In the prior, the ground plane is green and red to blue indicates distance to the ground.

In this paper, we propose a new object proposal approach that exploits stereo information as well as contextual models specific to the domain of autonomous driving. Our method reasons in 3D and places proposals in the form of 3D bounding boxes.
We exploit object size priors, the ground plane, as well as several depth-informed features such as free space, point densities inside the box, visibility and distance to the ground. Our experiments show a significant improvement in achievable recall over the state-of-the-art at all overlap thresholds and object occlusion levels, demonstrating that our approach produces highly accurate object proposals. In particular, we achieve a 25% higher recall for 2K proposals than the state-of-the-art RGB-D method MCG-D [14]. Combined with CNN scoring, our method outperforms all published results on object detection for Car, Cyclist and Pedestrian on KITTI [11]. Our code and data are online: http://www.cs.toronto.edu/~3dop.

2 Related Work

With the wide success of deep networks [2, 3], which typically operate on a fixed spatial scope, there has been increased interest in object proposal generation. Existing approaches range from purely RGB [4, 9, 10, 5, 15, 16] and RGB-D [17, 14, 18, 19] to video [20]. In RGB, most approaches combine superpixels into larger regions based on color and texture similarity [4, 5]. These approaches produce around 2,000 proposals per image, achieving nearly perfect achievable recall on the PASCAL VOC benchmark [8]. In [10], regions are proposed by defining parametric affinities between pixels and solving the energy using parametric min-cut. The proposed solutions are then scored using simple Gestalt-like features, and typically only 150 top-ranked proposals are needed to succeed in consequent recognition tasks [21, 22, 7]. [16] introduces learning into proposal generation with parametric energies. Exhaustively sampled bounding boxes are scored in [23] using several "objectness" features. BING [15] also scores windows, using an object closure measure as a proxy for "objectness". Edgeboxes [9] score millions of windows based on contour information inside and on the boundary of each window. A detailed comparison is done in [24].
Fewer approaches exist that exploit RGB-D. [17, 18] extend CPMC [10] with additional affinities that encourage the proposals to respect occlusion boundaries. [14] extends MCG [5] to 3D with an additional set of depth-informed features, showing significant improvements in performance with respect to past work. In [19], RGB-D videos are used to propose boxes around very accurate point clouds. Relevant to our work is Sliding Shapes [25], which exhaustively evaluates 3D cuboids in RGB-D scenes. This approach, however, utilizes an object scoring function trained on a large number of rendered views of CAD models, and uses complex class-based potentials that make the method slow in both training and inference. Our work advances over prior work by exploiting the typical sizes of objects in 3D, the ground plane, and very efficient depth-informed scoring functions.

Related to our work are also detection approaches for autonomous driving. In [26], objects are pre-detected via a poselet-like approach and a deformable wireframe model is then fit using the image information inside the box. Pepik et al. [27] extend the Deformable Part-based Model [1] to 3D by linking parts across different viewpoints and using a 3D-aware loss function. In [28], an ensemble of models derived from visual and geometrical clusters of object instances is employed. In [13], Selective Search boxes are re-localized using top-down, object-level information. [29] proposes a holistic model that re-reasons about DPM detections based on priors from cartographic maps. On KITTI, the best performing method so far is the recently proposed 3DVP [12], which uses the ACF detector [30] and learned occlusion patterns in order to improve performance on occluded cars.

3 3D Object Proposals

The goal of our approach is to output a diverse set of object proposals in the context of autonomous driving. Since 3D reasoning is of crucial importance in this domain, we place our proposals in 3D and represent them as cuboids.
We assume a stereo image pair as input and compute depth via the state-of-the-art approach by Yamaguchi et al. [31]. We use depth to compute a point cloud x and conduct all our reasoning in this domain. We next describe our notation and present our framework.

Figure 2: Proposal recall: recall vs. the number of candidates for Car, Pedestrian and Cyclist in the (a) Easy, (b) Moderate and (c) Hard regimes (curves: BING, SS, EB, MCG, MCG-D, Ours). We use a 0.7 overlap threshold for Car, and 0.5 for Pedestrian and Cyclist.

3.1 Proposal Generation as Energy Minimization

We represent each object proposal with a 3D bounding box, denoted by y, which is parametrized by a tuple (x, y, z, θ, c, t), where (x, y, z) denotes the center of the 3D box and θ represents its azimuth angle. Note that each box y in principle lives in a continuous space; however, for efficiency we reason in a discretized space (details in Sec. 3.2).
Here, c denotes the object class of the box and t ∈ {1, . . . , T_c} indexes the set of 3D box "templates", which represent the physical size variations of each object class c. The templates are learned from the training data. We formulate the proposal generation problem as inference in a Markov Random Field (MRF) which encodes the fact that the proposal y should enclose a high-density region in the point cloud. Furthermore, since the point cloud represents only the visible portion of the 3D space, y should not overlap with the free space that lies within the rays between the points in the point cloud and the camera. If that were the case, the box would in fact occlude the point cloud, which is not possible. We also encode the fact that the point cloud should not extend vertically beyond our placed 3D box, and that the height of the point cloud in the immediate vicinity of the box should be lower than the box. Our MRF energy thus takes the following form:

E(x, y) = w_{c,pcd}^⊤ φ_pcd(x, y) + w_{c,fs}^⊤ φ_fs(x, y) + w_{c,ht}^⊤ φ_ht(x, y) + w_{c,ht-contr}^⊤ φ_{ht-contr}(x, y).

Note that our energy depends on the object class via the class-specific weights w_c, which are trained using structured SVM [32] (details in Sec. 3.4). We now explain each potential in more detail.

Point Cloud Density: This potential encodes the density of the point cloud within the box:

φ_pcd(x, y) = ( Σ_{p ∈ Ω(y)} S(p) ) / |Ω(y)|,   (1)

where S(p) indicates whether the voxel p is occupied or not (contains point cloud points), and Ω(y) denotes the set of voxels inside the box defined by y. Fig. 1 visualizes the potential. This potential simply counts the fraction of occupied voxels inside the box. It can be efficiently computed in constant time via integral accumulators, a generalization of integral images to 3D.

Free Space: This potential encodes the constraint that the free space between the point cloud and the camera cannot be occupied by the box.
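The integral-accumulator idea used for both the density and the free-space potentials is a 3D summed-volume table: after one linear pass over the grid, any axis-aligned box sum costs eight lookups. A minimal sketch on a toy occupancy grid (the grid, the helper names and the box coordinates are ours, not from the paper):

```python
# 3D summed-volume table: constant-time box sums for Eqs. (1) and (2).
import numpy as np

rng = np.random.default_rng(0)
occ = (rng.random((40, 30, 50)) < 0.1).astype(np.int64)    # occupancy grid S(p)
free = (rng.random((40, 30, 50)) < 0.3).astype(np.int64)   # free-space grid F(p)

def accumulate(grid):
    """One pass: acc[i, j, k] = grid[:i, :j, :k].sum()."""
    acc = np.zeros(tuple(d + 1 for d in grid.shape), dtype=np.int64)
    acc[1:, 1:, 1:] = grid.cumsum(0).cumsum(1).cumsum(2)
    return acc

acc_occ, acc_free = accumulate(occ), accumulate(free)

def box_sum(acc, x0, x1, y0, y1, z0, z1):
    """Sum over [x0:x1, y0:y1, z0:z1] via 8-corner inclusion-exclusion."""
    return (acc[x1, y1, z1] - acc[x0, y1, z1] - acc[x1, y0, z1] - acc[x1, y1, z0]
            + acc[x0, y0, z1] + acc[x0, y1, z0] + acc[x1, y0, z0] - acc[x0, y0, z0])

def phi_pcd(x0, x1, y0, y1, z0, z1):       # Eq. (1): fraction of occupied voxels
    vol = (x1 - x0) * (y1 - y0) * (z1 - z0)
    return box_sum(acc_occ, x0, x1, y0, y1, z0, z1) / vol

def phi_fs(x0, x1, y0, y1, z0, z1):        # Eq. (2): fraction of non-free voxels
    vol = (x1 - x0) * (y1 - y0) * (z1 - z0)
    return 1.0 - box_sum(acc_free, x0, x1, y0, y1, z0, z1) / vol

assert box_sum(acc_occ, 3, 20, 5, 25, 10, 40) == occ[3:20, 5:25, 10:40].sum()
assert 0.0 <= phi_pcd(3, 20, 5, 25, 10, 40) <= 1.0
```

The eight-corner lookup replaces an O(volume) sum per candidate box, which is what makes exhaustive scoring of many box configurations feasible.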
Let F represent a free-space grid, where F(p) = 1 means that the ray from the camera to the voxel p does not hit an occupied voxel, i.e., voxel p lies in the free space. We define the potential as follows:

φ_fs(x, y) = ( Σ_{p ∈ Ω(y)} (1 − F(p)) ) / |Ω(y)|.   (2)

This potential thus tries to minimize the free space inside the box, and can also be computed efficiently using integral accumulators.

Height Prior: This potential encodes the fact that the height of the point cloud inside the box should be close to the mean height of the object class c:

φ_ht(x, y) = (1 / |Ω(y)|) Σ_{p ∈ Ω(y)} H_c(p),   (3)

with

H_c(p) = exp[ −(1/2) ((d_p − μ_{c,ht}) / σ_{c,ht})² ] if S(p) = 1, and H_c(p) = 0 otherwise,   (4)

where d_p indicates the height of the voxel p above the road plane lying below it. Here, μ_{c,ht} and σ_{c,ht} are the MLE estimates of the mean height and standard deviation obtained by assuming a Gaussian distribution of the data. Integral accumulators can be used to efficiently compute these features as well.

Height Contrast: This potential encodes the fact that the point cloud that surrounds the bounding box should have a lower height than the point cloud inside the box. This is encoded as:

φ_{ht-contr}(x, y) = φ_ht(x, y) / ( φ_ht(x, y⁺) − φ_ht(x, y) ),   (5)

where y⁺ represents the cuboid obtained by extending y by 0.6 m in the direction of each face.

3.2 Discretization and Accumulators

Our point cloud is defined with respect to a left-handed coordinate system, where the positive Z-axis is along the viewing direction of the camera and the Y-axis is along the direction of gravity. We discretize the continuous space such that the width of each voxel is 0.2 m in each dimension. We compute the occupancy, free-space and height prior grids in this discretized space. Following the idea of integral images, we compute our accumulators in 3D.

3.3 Inference

Inference in our model is performed by minimizing the energy defined in Eq.
(??):

y* = argmin_y E(x, y).

Due to the efficient computation of the features using integral accumulators, evaluating each configuration y takes constant time. Still, evaluating exhaustively over the entire grid would be slow. In order to reduce the search space, we carve out certain regions of the grid by skipping configurations which do not overlap with the point cloud. We further reduce the search space along the vertical dimension by placing all our bounding boxes on the road plane, y = y_road. We estimate the road by partitioning the image into superpixels and training a road classifier using a neural net with several 2D and 3D features. We then use RANSAC on the predicted road pixels to fit the ground plane. Using the ground plane considerably reduces the search space along the vertical dimension. However, since the points are noisy at large distances from the camera, we sample additional proposal boxes at locations farther than 20 m from the camera. We sample these boxes at heights y = y_road ± σ_road, where σ_road is the MLE estimate of the standard deviation obtained by assuming a Gaussian distribution of the distance between objects and the estimated ground plane. Using our sampling strategy, scoring all possible configurations takes only a fraction of a second.

Figure 3: Recall vs IoU for 500 proposals, for Car, Pedestrian and Cyclist in the (a) Easy, (b) Moderate and (c) Hard regimes. The number next to the labels indicates the average recall (AR). AR values (BING / SS / EB / MCG / MCG-D / Ours): Car: Easy 12.3/26.7/37.5/45.1/49.6/65.6, Moderate 7.7/18/26.5/36.1/38.8/58.3, Hard 7.1/16.5/23/31.3/32.8/57.8; Pedestrian: Easy 7.6/5.4/9.2/15/19.6/49.9, Moderate 6.6/5.1/7.7/13.3/16.1/44.8, Hard 6.1/5/6.9/12.2/14/39.7; Cyclist: Easy 6.2/7.5/4.9/10.8/10.8/55.2, Moderate 4.1/6/4.3/8/10.2/40.8, Hard 4.4/6.1/4.4/8.1/10.7/40.8.

Note that by minimizing our energy we obtain only the single best object candidate. In order to generate N diverse proposals, we sort the values of E(x, y) for all y and perform greedy inference: we pick the top-scoring proposal, perform NMS, and iterate. The entire inference process and feature computation take on average 1.2 s per image for N = 2000 proposals.

3.4 Learning

We learn the weights {w_{c,pcd}, w_{c,fs}, w_{c,ht}, w_{c,ht-contr}} of the model using structured SVM [32]. Given N ground-truth input-output pairs {x^(i), y^(i)}, i = 1, . . . , N, the parameters are learnt by solving the following optimization problem:

min_{w ∈ R^D} (1/2) ||w||² + (C/N) Σ_{i=1}^N ξ_i
s.t. w^⊤ ( φ(x^(i), y) − φ(x^(i), y^(i)) ) ≥ Δ(y^(i), y) − ξ_i, for all y ≠ y^(i).

We use the parallel cutting plane implementation of [33] to solve this minimization problem. We use the Intersection-over-Union (IoU) between the set of GT boxes y^(i) and candidates y as the task loss Δ(y^(i), y). We compute IoU in 3D as the volume of intersection of two 3D boxes divided by the volume of their union. This is a very strict measure that encourages accurate 3D placement of the proposals.

3.5 Object Detection and Orientation Estimation Network

We use our object proposal method for the task of object detection and orientation estimation.
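The greedy diverse-proposal step of Sec. 3.3 (sort candidates by energy, take the best remaining one, suppress near-duplicates, iterate) can be sketched with an axis-aligned stand-in for the 3D IoU of Sec. 3.4; the paper's boxes also carry an azimuth, and the 0.75 suppression threshold and the toy boxes below are our assumptions:

```python
# Greedy top-N proposal extraction with NMS over an axis-aligned 3D IoU.
def iou_3d(p, q):
    """IoU of axis-aligned 3D boxes given as (x0, y0, z0, x1, y1, z1)."""
    ix = max(0.0, min(p[3], q[3]) - max(p[0], q[0]))
    iy = max(0.0, min(p[4], q[4]) - max(p[1], q[1]))
    iz = max(0.0, min(p[5], q[5]) - max(p[2], q[2]))
    inter = ix * iy * iz
    vol = lambda r: (r[3] - r[0]) * (r[4] - r[1]) * (r[5] - r[2])
    return inter / (vol(p) + vol(q) - inter)

def top_n_proposals(boxes, energies, n, iou_thresh=0.75):
    order = sorted(range(len(boxes)), key=lambda i: energies[i])  # low energy first
    keep = []
    for i in order:
        if all(iou_3d(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
            if len(keep) == n:
                break
    return keep

# Toy data: the second box is a near-duplicate of the first with lower energy.
boxes = [(0, 0, 0, 2, 2, 2), (0, 0, 0, 2, 2, 2.1), (5, 5, 5, 7, 7, 7)]
energies = [1.0, 0.5, 2.0]
assert top_n_proposals(boxes, energies, 2) == [1, 2]   # duplicate suppressed
```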
We score bounding box proposals using a CNN. Our network is built on Fast R-CNN [34], which shares convolutional features across all proposals and uses an ROI pooling layer to compute proposal-specific features.

Table 1: Average Precision (AP) (in %) on the test set of the KITTI Object Detection Benchmark. Dashes indicate that a method reports no result for that class.

                       |       Cars          |    Pedestrians      |      Cyclists
Method                 | Easy   Mod.   Hard  | Easy   Mod.   Hard  | Easy   Mod.   Hard
LSVM-MDPM-sv [35, 1]   | 68.02  56.48  44.18 | 47.74  39.36  35.95 | 35.04  27.50  26.21
SquaresICF [36]        |   -      -      -   | 57.33  44.42  40.08 |   -      -      -
DPM-C8B1 [37]          | 74.33  60.99  47.16 | 38.96  29.03  25.61 | 43.49  29.04  26.20
MDPM-un-BB [1]         | 71.19  62.16  48.43 |   -      -      -   |   -      -      -
DPM-VOC+VP [27]        | 74.95  64.71  48.76 | 59.48  44.86  40.37 | 42.43  31.08  28.23
OC-DPM [38]            | 74.94  65.95  53.86 |   -      -      -   |   -      -      -
AOG [39]               | 84.36  71.88  59.27 |   -      -      -   |   -      -      -
SubCat [28]            | 84.14  75.46  59.71 | 54.67  42.34  37.95 |   -      -      -
DA-DPM [40]            |   -      -      -   | 56.36  45.51  41.08 |   -      -      -
Fusion-DPM [41]        |   -      -      -   | 59.51  46.67  42.05 |   -      -      -
R-CNN [42]             |   -      -      -   | 61.61  50.13  44.79 |   -      -      -
FilteredICF [43]       |   -      -      -   | 61.14  53.98  49.29 |   -      -      -
pAUCEnsT [44]          |   -      -      -   | 65.26  54.49  48.60 | 51.62  38.03  33.38
MV-RGBD-RF [45]        |   -      -      -   | 70.21  54.56  51.25 | 54.02  39.72  34.82
3DVP [12]              | 87.46  75.77  65.38 |   -      -      -   |   -      -      -
Regionlets [13]        | 84.75  76.45  59.70 | 73.14  61.15  55.21 | 70.41  58.72  51.83
Ours                   | 93.04  88.64  79.10 | 81.78  67.47  64.70 | 78.39  68.94  61.37

Table 2: AOS scores (in %) on the test set of KITTI's Object Detection and Orientation Estimation Benchmark. Dashes indicate that a method reports no result for that class.

                       |       Cars          |    Pedestrians      |      Cyclists
Method                 | Easy   Mod.   Hard  | Easy   Mod.   Hard  | Easy   Mod.   Hard
AOG [39]               | 43.81  38.21  31.53 |   -      -      -   |   -      -      -
DPM-C8B1 [37]          | 59.51  50.32  39.22 | 31.08  23.37  20.72 | 27.25  19.25  17.95
LSVM-MDPM-sv [35, 1]   | 67.27  55.77  43.59 | 43.58  35.49  32.42 | 27.54  22.07  21.45
DPM-VOC+VP [27]        | 72.28  61.84  46.54 | 53.55  39.83  35.73 | 30.52  23.17  21.58
OC-DPM [38]            | 73.50  64.42  52.40 |   -      -      -   |   -      -      -
SubCat [28]            | 83.41  74.42  58.83 | 44.32  34.18  30.76 |   -      -      -
3DVP [12]              | 86.92  74.59  64.11 |   -      -      -   |   -      -      -
Ours                   | 91.44  86.10  76.52 | 72.94  59.80  57.03 | 70.13  58.68  52.35

We extend this basic network by adding a context branch after the last convolutional layer, and an orientation regression loss to jointly learn object location and orientation.
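The AP numbers in Tables 1 and 2 summarize ranked detections as the area under the precision-recall curve. A minimal sketch of that computation on toy data (KITTI's official evaluation uses its own interpolation scheme, so this is only illustrative; names and data are ours):

```python
# Area-under-precision-recall AP on a toy ranked detection list.
def average_precision(scores, is_positive, num_gt):
    """scores: detection confidences; is_positive: whether each detection
    matched a ground-truth object; num_gt: number of ground-truth objects."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if is_positive[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)   # rectangle under the curve
        prev_recall = recall
    return ap

# Two true detections ranked first and third among three, two GT objects:
ap = average_precision([0.9, 0.8, 0.7], [True, False, True], num_gt=2)
assert abs(ap - (0.5 * 1.0 + 0.5 * (2 / 3))) < 1e-12
```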
Features output from the original and the context branches are concatenated and fed to the prediction layers. The context regions are obtained by enlarging the candidate boxes by a factor of 1.5. We use the smooth L1 loss [34] for orientation regression, and OxfordNet [3] trained on ImageNet to initialize the weights of the convolutional layers and the branch for candidate boxes. The parameters of the context branch are initialized by copying the weights from the original branch. We then fine-tune the network end-to-end on the KITTI training set.

4 Experimental Evaluation

We evaluate our approach on the challenging KITTI autonomous driving dataset [11], which contains three object classes: Car, Pedestrian, and Cyclist. KITTI's object detection benchmark has 7,481 training and 7,518 test images. Evaluation is done in three regimes: easy, moderate and hard, containing objects at different occlusion and truncation levels. The moderate regime is used to rank the competing methods in the benchmark. Since the test ground-truth labels are not available, we split the KITTI training set into train and validation sets (each containing half of the images). We ensure that our training and validation sets do not come from the same video sequences, and evaluate the performance of our bounding box proposals on the validation set.

Following [4, 24], we use oracle recall as the metric. For each ground-truth (GT) object we find the proposal that overlaps the most in IoU (i.e., the "best proposal"). We say that a GT instance has been recalled if the IoU exceeds 70% for cars, and 50% for pedestrians and cyclists. This follows the standard KITTI setup. Oracle recall thus computes the percentage of recalled GT objects, i.e., the best achievable recall. We also show how different numbers of generated proposals affect recall.

Comparison to the State-of-the-art: We compare our approach to several baselines: MCG-D [14], MCG [5], Selective Search (SS) [4], BING [15], and Edge Boxes (EB) [9]. Fig.
2 shows recall as a function of the number of candidates. We can see that by using 1,000 proposals we achieve around 90% recall for Cars in the moderate and hard regimes, while for easy we need only 200 candidates to get the same recall. Notice that other methods saturate or require orders of magnitude more candidates to reach 90% recall. For Pedestrians and Cyclists our results show similar improvements over the baselines. Note that while we use depth-based features, MCG-D uses both depth- and appearance-based features, and all other methods use only appearance features. This shows the importance of 3D information in the autonomous driving scenario. Furthermore, the other methods use class-agnostic proposals to generate the candidates, whereas we generate them based on the object class. This allows us to achieve higher recall values by exploiting size priors tailored to each class. Fig. 3 shows recall for 500 proposals as a function of the IoU overlap. Our approach significantly outperforms the baselines, particularly for Cyclists.

Running Time: Table 3 shows the running time of different proposal methods. Our approach is fairly efficient and can compute all features and proposals in 1.2 s on a single core.

Table 3: Running time of different proposal methods.
Method          | BING | Selective Search | Edge Boxes (EB) | MCG | MCG-D | Ours
Time (seconds)  | 0.01 | 15               | 1.5             | 100 | 160   | 1.2

Qualitative Results: Figs. 4 and 5 show qualitative results for cars and pedestrians. We show the input RGB image, the top 100 proposals, the GT boxes in 3D, as well as the proposals from our method with the best 3D IoU (chosen among 2,000 proposals).

Figure 4: Qualitative results for the Car class. We show the original image, the 100 top-scoring proposals, ground-truth 3D boxes, and our best set of proposals that cover the ground truth.

Figure 5: Qualitative examples for the Pedestrian class, in the same format as Figure 4.
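The oracle-recall metric described above can be sketched as follows, here with 2D boxes for brevity (helper names and toy data are ours; the per-class thresholds follow the KITTI setup described in the text):

```python
# Oracle recall: fraction of GT objects whose best-overlapping proposal
# exceeds the class-specific IoU threshold.
def iou(p, q):
    """IoU of axis-aligned 2D boxes given as (x0, y0, x1, y1)."""
    ix = max(0.0, min(p[2], q[2]) - max(p[0], q[0]))
    iy = max(0.0, min(p[3], q[3]) - max(p[1], q[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(p) + area(q) - inter)

def oracle_recall(gt_boxes, gt_classes, proposals, thresholds=None):
    thresholds = thresholds or {"Car": 0.7, "Pedestrian": 0.5, "Cyclist": 0.5}
    recalled = sum(
        1 for box, cls in zip(gt_boxes, gt_classes)
        if max((iou(box, p) for p in proposals), default=0.0) >= thresholds[cls]
    )
    return recalled / len(gt_boxes)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
cls = ["Car", "Pedestrian"]
props = [(1, 0, 10, 10), (100, 100, 110, 110)]
# The Car is covered with IoU 0.9; nothing overlaps the Pedestrian.
assert oracle_recall(gt, cls, props) == 0.5
```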
Our method produces very precise proposals even for the more difficult (far away or occluded) objects. Object Detection: To evaluate our full object detection pipeline, we report results on the test set of the KITTI benchmark. The results are presented in Table 1. Our approach outperforms all the competitors significantly across all categories. In particular, we achieve 12.19%, 6.32% and 10.22% improvements in AP for Cars, Pedestrians, and Cyclists, respectively, in the moderate setting. Object Orientation Estimation: Average Orientation Similarity (AOS) [11] is used as the evaluation metric in the object detection and orientation estimation task. Results on the KITTI test set are shown in Table 2. Our approach again outperforms all competing approaches by a large margin. In particular, our approach achieves ∼12% higher scores than 3DVP [12] on Cars in the moderate and hard regimes. The improvements on Pedestrians and Cyclists are even more significant, as they are more than 20% higher than those of the second best method. Suppl. material: We refer the reader to the supplementary material for many additional results.

5 Conclusion

We have presented a novel approach to object proposal generation in the context of autonomous driving. In contrast to most existing work, we take advantage of stereo imagery and reason directly in 3D. We formulate the problem as inference in a Markov random field encoding object size priors, the ground plane, and a variety of depth-informed features. Our approach significantly outperforms existing state-of-the-art object proposal methods on the challenging KITTI benchmark. In particular, for 2K proposals our approach achieves a 25% higher recall than the state-of-the-art RGB-D method MCG-D [14]. Combined with CNN scoring, our method significantly outperforms all previously published object detection results for all three object classes on the KITTI benchmark [11]. Acknowledgements: The work was partially supported by NSFC 61171113, NSERC and Toyota Motor Corporation.

References

[1] P. F.
Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2010.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
[4] K. Van de Sande, J. Uijlings, T. Gevers, and A. Smeulders. Segmentation as selective search for object recognition. In ICCV, 2011.
[5] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv:1311.2524, 2013.
[7] Y. Zhu, R. Urtasun, R. Salakhutdinov, and S. Fidler. SegDeepM: Exploiting segmentation and context in deep neural networks for object detection. In CVPR, 2015.
[8] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results.
[9] L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
[10] J. Carreira and C. Sminchisescu. CPMC: Automatic object segmentation using constrained parametric min-cuts. PAMI, 34(7):1312–1328, 2012.
[11] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012.
[12] Y. Xiang, W. Choi, Y. Lin, and S. Savarese. Data-driven 3D voxel patterns for object category recognition. In CVPR, 2015.
[13] C. Long, X. Wang, G. Hua, M. Yang, and Y. Lin. Accurate object detection with location relaxation and regionlets relocalization. In ACCV, 2014.
[14] S. Gupta, R. Girshick, P. Arbelaez, and J. Malik. Learning rich features from RGB-D images for object detection and segmentation. In ECCV, 2014.
[15] M. Cheng, Z. Zhang, M. Lin, and P. Torr. BING: Binarized normed gradients for objectness estimation at 300fps. In CVPR, 2014.
[16] T. Lee, S. Fidler, and S. Dickinson. A learning framework for generating region proposals with mid-level cues. In ICCV, 2015.
[17] D. Banica and C. Sminchisescu. CPMC-3D-O2P: Semantic segmentation of RGB-D images using CPMC and second order pooling. CoRR, abs/1312.7715, 2013.
[18] D. Lin, S. Fidler, and R. Urtasun. Holistic scene understanding for 3D object detection with RGBD cameras. In ICCV, 2013.
[19] A. Karpathy, S. Miller, and L. Fei-Fei. Object discovery in 3D scenes via shape analysis. In ICRA, 2013.
[20] D. Oneata, J. Revaud, J. Verbeek, and C. Schmid. Spatio-temporal object detection proposals. In ECCV, 2014.
[21] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu. Semantic segmentation with second-order pooling. In ECCV, 2012.
[22] S. Fidler, R. Mottaghi, A. Yuille, and R. Urtasun. Bottom-up segmentation for top-down detection. In CVPR, 2013.
[23] B. Alexe, T. Deselares, and V. Ferrari. Measuring the objectness of image windows. PAMI, 2012.
[24] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? arXiv:1502.05082, 2015.
[25] S. Song and J. Xiao. Sliding shapes for 3D object detection in depth images. In ECCV, 2014.
[26] M. Zia, M. Stark, and K. Schindler. Towards scene understanding with detailed 3D object representations. IJCV, 2015.
[27] B. Pepik, M. Stark, P. Gehler, and B. Schiele. Multi-view and 3D deformable part models. PAMI, 2015.
[28] E. Ohn-Bar and M. M. Trivedi. Learning to detect vehicles by clustering appearance patterns. IEEE Transactions on Intelligent Transportation Systems, 2015.
[29] S. Wang, S. Fidler, and R. Urtasun. Holistic 3D scene understanding from a single geo-tagged image. In CVPR, 2015.
[30] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. PAMI, 2014.
[31] K. Yamaguchi, D. McAllester, and R. Urtasun. Efficient joint segmentation, occlusion labeling, stereo and flow estimation. In ECCV, 2014.
[32] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector learning for interdependent and structured output spaces. In ICML, 2004.
[33] A. Schwing, S. Fidler, M. Pollefeys, and R. Urtasun. Box in the box: Joint 3D layout and object reasoning from single images. In ICCV, 2013.
[34] R. Girshick. Fast R-CNN. In ICCV, 2015.
[35] A. Geiger, C. Wojek, and R. Urtasun. Joint 3D estimation of objects and scene layout. In NIPS, 2011.
[36] R. Benenson, M. Mathias, T. Tuytelaars, and L. Van Gool. Seeking the strongest rigid detector. In CVPR, 2013.
[37] J. Yebes, L. Bergasa, R. Arroyo, and A. Lázaro. Supervised learning and evaluation of KITTI's cars detector with DPM. In IV, 2014.
[38] B. Pepik, M. Stark, P. Gehler, and B. Schiele. Occlusion patterns for object class detection. In CVPR, 2013.
[39] B. Li, T. Wu, and S. Zhu. Integrating context and occlusion for car detection by hierarchical and-or model. In ECCV, 2014.
[40] J. Xu, S. Ramos, D. Vázquez, and A. Lopez. Hierarchical adaptive structural SVM for domain adaptation. arXiv:1408.5400, 2014.
[41] C. Premebida, J. Carreira, J. Batista, and U. Nunes. Pedestrian detection combining RGB and dense LIDAR data. In IROS, 2014.
[42] J. Hosang, M. Omran, R. Benenson, and B. Schiele. Taking a deeper look at pedestrians. arXiv, 2015.
[43] S. Zhang, R. Benenson, and B. Schiele. Filtered channel features for pedestrian detection. arXiv:1501.05759, 2015.
[44] S. Paisitkriangkrai, C. Shen, and A. van den Hengel. Pedestrian detection with spatially pooled features and structured ensemble learning. arXiv:1409.5209, 2014.
[45] A. Gonzalez, G. Villalonga, J. Xu, D. Vázquez, J. Amores, and A. Lopez. Multiview random forest of local experts combining RGB and LIDAR data for pedestrian detection. In IV, 2015.
Interpolating Convex and Non-Convex Tensor Decompositions via the Subspace Norm

Qinqing Zheng, University of Chicago, qinqing@cs.uchicago.edu
Ryota Tomioka, Toyota Technological Institute at Chicago, tomioka@ttic.edu

Abstract

We consider the problem of recovering a low-rank tensor from its noisy observation. Previous work has shown a recovery guarantee with signal-to-noise ratio O(n^(⌈K/2⌉/2)) for recovering a Kth order rank-one tensor of size n × ··· × n by recursive unfolding. In this paper, we first improve this bound to O(n^(K/4)) by a much simpler approach, but with a more careful analysis. Then we propose a new norm called the subspace norm, which is based on the Kronecker products of factors obtained by the proposed simple estimator. The imposed Kronecker structure allows us to show a nearly ideal O(√n + √(H^(K−1))) bound, in which the parameter H controls the blend from the non-convex estimator to mode-wise nuclear norm minimization. Furthermore, we empirically demonstrate that the subspace norm achieves the nearly ideal denoising performance even with H = O(1).

1 Introduction

Tensors are a natural way to express higher order interactions for a variety of data, and tensor decomposition has been successfully applied to wide areas ranging from chemometrics and signal processing to neuroimaging; see [15, 18] for a survey. Moreover, it has recently become an active topic in the context of learning latent variable models [3]. Many problems related to tensors, such as finding the rank or the best rank-one approximation of a tensor, are known to be NP-hard [11, 8]. Nevertheless we can address statistical questions, such as how well we can recover a low-rank tensor from its randomly corrupted version (tensor denoising) or from partial observations (tensor completion). Since we can convert a tensor into a matrix by an operation known as unfolding, recent work [25, 19, 20, 13] has shown that we do get nontrivial guarantees by using some norms or singular value decompositions.
More specifically, Richard & Montanari [20] have shown that when a rank-one Kth order tensor of size n × ··· × n is corrupted by standard Gaussian noise, a nontrivial bound can be shown with high probability if the signal-to-noise ratio satisfies β/σ ≿ n^(⌈K/2⌉/2), by a method called recursive unfolding (we write a_n ≿ b_n if there is a constant C > 0 such that a_n ≥ C · b_n). Note that β/σ ≿ √n is sufficient for matrices (K = 2) and also for tensors if we use the best rank-one approximation (which is known to be NP-hard) as an estimator. On the other hand, Jain & Oh [13] analyzed the tensor completion problem and proposed an algorithm that requires O(n^(3/2) · polylog(n)) samples for K = 3; while information-theoretically we need at least Ω(n) samples, and the intractable maximum likelihood estimator would require O(n · polylog(n)) samples. Therefore, in both settings, there is a wide gap between the ideal estimator and current polynomial time algorithms. A subtle question that we will address in this paper is whether we need to unfold the tensor so that the resulting matrix becomes as square as possible, which was the reasoning underlying both [19, 20].

Table 1: Comparison of the required signal-to-noise ratio β/σ of different algorithms for recovering a Kth order rank-one tensor of size n × ··· × n contaminated by Gaussian noise with standard deviation σ; see model (2). The bound for the ordinary unfolding is shown in Corollary 1, the bound for the subspace norm in Theorem 2, and the ideal estimator is analyzed in Appendix A.

Overlapped/latent nuclear norm [23]        | O(n^((K−1)/2))
Recursive unfolding [20]/square norm [19]  | O(n^(⌈K/2⌉/2))
Ordinary unfolding                         | O(n^(K/4))
Subspace norm (proposed)                   | O(√n + √(H^(K−1)))
Ideal                                      | O(√(nK log K))

As a parallel development, non-convex estimators based on alternating minimization or nonlinear optimization [1, 21] have been widely applied and have performed very well when appropriately set up.
Therefore it would be of fundamental importance to connect the wisdom of non-convex estimators with the more theoretically motivated estimators that recently emerged. In this paper, we explore such a connection by defining a new norm based on Kronecker products of factors that can be obtained by simple mode-wise singular value decomposition (SVD) of unfoldings (see the notation section below), also known as the higher-order singular value decomposition (HOSVD) [6, 7]. We first study the non-asymptotic behavior of the leading singular vector of the ordinary (rectangular) unfolding X_(k) and show a nontrivial bound for signal-to-noise ratio β/σ ≿ n^(K/4). Thus the result also applies to odd order tensors, confirming a conjecture in [20]. Furthermore, this motivates us to use the solutions of mode-wise truncated SVDs to construct a new norm. We propose the subspace norm, which predicts an unknown low-rank tensor as a mixture of K low-rank tensors, in which each term takes the form

fold_k(M^(k)(P̂^(1) ⊗ ··· ⊗ P̂^(k−1) ⊗ P̂^(k+1) ⊗ ··· ⊗ P̂^(K))^⊤),

where fold_k is the inverse of the unfolding (·)_(k), ⊗ denotes the Kronecker product, and P̂^(k) ∈ R^(n×H) is an orthonormal matrix estimated from the mode-k unfolding of the observed tensor, for k = 1, ..., K; H is a user-defined parameter, and M^(k) ∈ R^(n×H^(K−1)). Our theory tells us that with sufficiently high signal-to-noise ratio the estimated P̂^(k) spans the true factors. We highlight our contributions below:

1. We prove that the required signal-to-noise ratio for recovering a Kth order rank-one tensor from the ordinary unfolding is O(n^(K/4)). Our analysis shows a curious two-phase behavior: with high probability, when n^(K/4) ≾ β/σ ≾ n^(K/2), the error shows a fast decay as 1/β^4; for β/σ ≿ n^(K/2), the error decays more slowly as 1/β^2. We confirm this in a numerical simulation.

2. The proposed subspace norm is an interpolation between the intractable estimators that directly control the rank (e.g., HOSVD) and the tractable norm-based estimators.
It becomes equivalent to the latent trace norm [23] when H = n, at the cost of an increased signal-to-noise ratio threshold (see Table 1).

3. The proposed estimator is more efficient than previously proposed norm-based estimators, because the size of the SVD required in the algorithm is reduced from n × n^(K−1) to n × H^(K−1).

4. We also empirically demonstrate that the proposed subspace norm performs nearly optimally for constant order H.

Notation. Let X ∈ R^(n_1×n_2×···×n_K) be a Kth order tensor. We will often use n_1 = ··· = n_K = n to simplify the notation, but all the results in this paper generalize to general dimensions. The inner product between a pair of tensors is defined as the inner product of them viewed as vectors, i.e., ⟨X, W⟩ = ⟨vec(X), vec(W)⟩. For u ∈ R^(n_1), v ∈ R^(n_2), w ∈ R^(n_3), u ∘ v ∘ w denotes the n_1 × n_2 × n_3 rank-one tensor whose (i, j, k) entry is u_i v_j w_k. The rank of X is the minimum number of rank-one tensors required to write X as a linear combination of them. A mode-k fiber of tensor X is an n_k dimensional vector obtained by fixing all but the kth index of X. The mode-k unfolding X_(k) of tensor X is an n_k × ∏_{k′≠k} n_{k′} matrix constructed by concatenating all the mode-k fibers along columns. We denote the spectral and Frobenius norms for matrices by ∥·∥ and ∥·∥_F, respectively.

2 The power of ordinary unfolding

2.1 A perturbation bound for the left singular vector

We first establish a bound on recovering the left singular vector of a rank-one n × m matrix (with m > n) perturbed by random Gaussian noise. Consider the following model, known as the information plus noise model [4]:

X̃ = βuv^⊤ + σE,   (1)

where u and v are unit vectors, β is the signal strength, σ is the noise standard deviation, and the noise matrix E is assumed to be random with entries sampled i.i.d. from the standard normal distribution. Our goal is to lower-bound the correlation between u and the top left singular vector û of X̃ for signal-to-noise ratio β/σ ≿ (mn)^(1/4) with high probability.
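The mode-k unfolding defined above can be written in a few lines of NumPy. This is a hedged sketch with one common (row-major) column-ordering convention; the paper's analysis does not depend on which consistent ordering is used.

```python
# Minimal sketch of mode-k unfolding and its inverse: move mode k to the
# front and flatten the remaining modes along the columns.
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: an n_k x prod_{k' != k} n_{k'} matrix."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold(., k) for a tensor of the given shape."""
    front = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(front), 0, k)

X = np.random.randn(3, 4, 5)
assert unfold(X, 1).shape == (4, 15)
assert np.allclose(fold(unfold(X, 1), 1, X.shape), X)
```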
A direct application of the classic Wedin perturbation theorem [28] to the rectangular matrix X̃ does not provide us the desired result. This is because it requires the signal-to-noise ratio β/σ ≥ 2∥E∥. Since the spectral norm of E scales as O_p(√n + √m) [27], this would mean that we require β/σ ≿ m^(1/2); i.e., the threshold is dominated by the number of columns m, if m ≥ n. Alternatively, we can view û as the leading eigenvector of X̃X̃^⊤, a square matrix. Our key insight is that we can decompose X̃X̃^⊤ as follows:

X̃X̃^⊤ = (β^2 uu^⊤ + mσ^2 I) + (σ^2 EE^⊤ − mσ^2 I) + βσ(uv^⊤E^⊤ + Evu^⊤).

Note that u is the leading eigenvector of the first term, because adding an identity matrix does not change the eigenvectors. Moreover, we notice that there are two noise terms: the first is a centered Wishart matrix and is independent of the signal β; the second is Gaussian distributed and depends on the signal β. This implies a two-phase behavior corresponding to either the Wishart or the Gaussian noise term being dominant, depending on the value of β. Interestingly, we get a different speed of convergence for each of these phases, as we show in the next theorem (the proof is given in Appendix D.1).

Theorem 1. There exists a constant C such that with probability at least 1 − 4e^(−n), if m/n ≥ C,

|⟨û, u⟩| ≥ 1 − Cnm/(β/σ)^4, if √m > β/σ ≥ (Cnm)^(1/4),
|⟨û, u⟩| ≥ 1 − Cn/(β/σ)^2, if β/σ ≥ √m;

otherwise, |⟨û, u⟩| ≥ 1 − Cn/(β/σ)^2 if β/σ ≥ √(Cn).

In other words, if X̃ has sufficiently many more columns than rows, as the signal-to-noise ratio β/σ increases, û first converges to u as 1/β^4, and then as 1/β^2. Figure 1(a) illustrates these results. We randomly generate a rank-one 100 × 10000 matrix perturbed by Gaussian noise, and measure the distance between û and u. The phase transition happens at β/σ = (nm)^(1/4), and there are two regimes with different convergence rates, as Theorem 1 predicts.
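A simulation in the spirit of Figure 1(a) can be sketched in a few lines. This is our own hedged reproduction (with smaller sizes than the paper's n = 100, m = 10000, and a helper name `correlation` that we introduce), showing that recovery of u kicks in once β/σ clears the (nm)^(1/4) threshold of Theorem 1.

```python
# Rank-one n x m matrix plus Gaussian noise; track |<u, u_hat>| as the signal
# strength beta crosses the (nm)^(1/4) threshold. Sizes chosen small for speed.
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 50, 2000, 1.0
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(m); v /= np.linalg.norm(v)

def correlation(beta):
    X = beta * np.outer(u, v) + sigma * rng.standard_normal((n, m))
    # leading left singular vector of X (= leading eigenvector of X X^T)
    u_hat = np.linalg.svd(X, full_matrices=False)[0][:, 0]
    return abs(u_hat @ u)

thresh = (n * m) ** 0.25
weak, strong = correlation(0.1 * thresh), correlation(10 * thresh)
assert strong > 0.9 > weak  # recovery only above the threshold
```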
2.2 Tensor Unfolding

Now let us apply the above result to the tensor version of the information plus noise model studied by [20]. We consider a rank-one n × ··· × n tensor (signal) contaminated by Gaussian noise as follows:

Y = X* + σE = β u^(1) ∘ ··· ∘ u^(K) + σE,   (2)

where the factors u^(k) ∈ R^n, k = 1, ..., K, are unit vectors, which are not necessarily identical, and the entries of E ∈ R^(n×···×n) are i.i.d. samples from the normal distribution N(0, 1). Note that this is slightly more general (and easier to analyze) than the symmetric setting studied by [20].

Figure 1: Numerical demonstration of Theorem 1 and Corollary 1. (a) Synthetic experiment (n = 100, m = 10000) showing the phase transition at β/σ = (nm)^(1/4) and regimes with different rates of convergence; see Theorem 1. (b) Synthetic experiment showing the phase transition at β = σ(∏_k n_k)^(1/4) for odd order tensors; see Corollary 1.

Several estimators for recovering X* from its noisy version Y have been proposed (see Table 1). Both the overlapped nuclear norm and the latent nuclear norm discussed in [23] achieve the relative performance guarantee

|||X̂ − X*|||_F / β ≤ O_p(σ√(n^(K−1)) / β),   (3)

where X̂ is the estimator. This bound implies that if we want to obtain a relative error smaller than ε, we need the signal-to-noise ratio β/σ to scale as β/σ ≿ √(n^(K−1))/ε. Mu et al. [19] proposed the square norm, defined as the nuclear norm of the matrix obtained by grouping the first ⌊K/2⌋ indices along the rows and the last ⌈K/2⌉ indices along the columns. This norm improves the right-hand side of inequality (3) to O_p(σ√(n^(⌈K/2⌉))/β), which translates to requiring β/σ ≿ √(n^(⌈K/2⌉))/ε for obtaining relative error ε.
The intuition here is that the more square the unfolding is, the better the bound becomes. However, there is no improvement for K = 3. Richard and Montanari [20] studied the (symmetric version of) model (2) and proved that a recursive unfolding algorithm achieves the factor recovery error dist(û^(k), u^(k)) = ε with β/σ ≿ √(n^(⌈K/2⌉))/ε with high probability, where dist(u, u′) := min(∥u − u′∥, ∥u + u′∥). They also showed that the randomly initialized tensor power method [7, 16, 3] can achieve the same error ε with the slightly worse threshold β/σ ≿ max(√n/ε^2, n^(K/2)) √(K log K), also with high probability. The reasoning underlying both [19] and [20] is that a squarer unfolding is better. However, if we take the (ordinary) mode-k unfolding

Y_(k) = β u^(k)(u^(k−1) ⊗ ··· ⊗ u^(1) ⊗ u^(K) ⊗ ··· ⊗ u^(k+1))^⊤ + σE_(k),   (4)

we can see (4) as an instance of the information plus noise model (1) with m/n = n^(K−2). Thus the ordinary unfolding satisfies the condition of Theorem 1 for n or K large enough.

Corollary 1. Consider a K(≥3)th order rank-one tensor contaminated by Gaussian noise as in (2). There exists a constant C such that if n^(K−2) ≥ C, with probability at least 1 − 4Ke^(−n), we have

dist^2(û^(k), u^(k)) ≤ 2Cn^K/(β/σ)^4, if n^((K−1)/2) > β/σ ≥ C^(1/4) n^(K/4),
dist^2(û^(k), u^(k)) ≤ 2Cn/(β/σ)^2, if β/σ ≥ n^((K−1)/2),

for k = 1, ..., K, where û^(k) is the leading left singular vector of the rectangular unfolding Y_(k).

This proves that, as conjectured by [20], the threshold β/σ ≿ n^(K/4) applies not only to the even order case but also to the odd order case. Note that Hopkins et al. [10] have shown a similar result without the sharp rate of convergence. The above corollary easily extends to a more general n_1 × ··· × n_K tensor by replacing the conditions by √(∏_{ℓ≠k} n_ℓ) > β/σ ≥ (C ∏_{k=1}^K n_k)^(1/4) and β/σ ≥ √(∏_{ℓ≠k} n_ℓ). The result also holds when X* has rank higher than 1; see Appendix E. We demonstrate this result in Figure 1(b).
The models behind the experiment are slightly more general ones, in which [n_1, n_2, n_3] = [20, 40, 60] or [40, 80, 120] and the signal X* is rank two with β_1 = 20 and β_2 = 10. The plot shows the inner products ⟨u_1^(1), û_1^(1)⟩ and ⟨u_2^(1), û_2^(1)⟩ as a measure of the quality of estimating the two mode-1 factors. The horizontal axis is the normalized noise standard deviation σ(∏_{k=1}^K n_k)^(1/4). We can clearly see that the inner product decays symmetrically around β_1 and β_2, as predicted by Corollary 1, for both tensors.

3 Subspace norm for tensors

Suppose the true tensor X* ∈ R^(n×···×n) admits a minimum Tucker decomposition [26] of rank (R, ..., R):

X* = Σ_{i_1=1}^R ··· Σ_{i_K=1}^R β_{i_1 i_2 ... i_K} u_{i_1}^(1) ∘ ··· ∘ u_{i_K}^(K).   (5)

If the core tensor C = (β_{i_1 ... i_K}) ∈ R^(R×···×R) is superdiagonal, the above decomposition reduces to the canonical polyadic (CP) decomposition [9, 15]. The mode-k unfolding of the true tensor X* can be written as follows:

X*_(k) = U^(k) C_(k) (U^(1) ⊗ ··· ⊗ U^(k−1) ⊗ U^(k+1) ⊗ ··· ⊗ U^(K))^⊤,   (6)

where C_(k) is the mode-k unfolding of the core tensor C, and U^(k) = [u_1^(k), ..., u_R^(k)] is an n × R matrix for k = 1, ..., K. Note that U^(k) is not necessarily orthogonal. Let X*_(k) = P^(k) Λ^(k) Q^(k)⊤ be the SVD of X*_(k). We observe that

Q^(k) ∈ Span(P^(1) ⊗ ··· ⊗ P^(k−1) ⊗ P^(k+1) ⊗ ··· ⊗ P^(K))   (7)

because of (6) and U^(k) ∈ Span(P^(k)). Corollary 1 shows that the left singular vectors P^(k) can be recovered under mild conditions; thus the span of the right singular vectors can also be recovered. Inspired by this, we define a norm that models a tensor X as a mixture of tensors Z^(1), ..., Z^(K). We require that the mode-k unfolding of Z^(k), i.e., Z^(k)_(k), has a low-rank factorization Z^(k)_(k) = M^(k) S^(k)⊤, where M^(k) ∈ R^(n×H^(K−1)) is a variable, and S^(k) ∈ R^(n^(K−1)×H^(K−1)) is a fixed arbitrary orthonormal basis of some subspace, which we choose later to have the Kronecker structure in (7).
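The span property (7) can be checked numerically for a small third-order Tucker tensor. This is an illustrative sketch under a hypothetical row-major unfolding convention, for which the Kronecker factors appear in the order U^(2) ⊗ U^(3) for mode 1; other unfolding conventions permute this order but leave the span statement intact.

```python
# Build a Tucker tensor and verify that its mode-1 unfolding lies in the
# Kronecker subspace spanned by kron(U2, U3), i.e., the row space of X_(1)
# is contained in Span(U^(2) x U^(3)) as in (7).
import numpy as np

rng = np.random.default_rng(1)
n, R = 6, 2
U = [np.linalg.qr(rng.standard_normal((n, R)))[0] for _ in range(3)]
C = rng.standard_normal((R, R, R))
# Tucker tensor X = C x_1 U1 x_2 U2 x_3 U3
X = np.einsum('abc,ia,jb,kc->ijk', C, *U)

X1 = X.reshape(n, -1)          # mode-1 unfolding (row-major convention)
S = np.kron(U[1], U[2])        # orthonormal basis of the Kronecker subspace
assert np.allclose(X1 @ S @ S.T, X1)  # projecting onto the subspace is lossless
```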
In the following, we define the subspace norm, suggest an approach to construct the right factors S^(k), and finally prove the denoising bound.

3.1 The subspace norm

Consider a Kth order tensor of size n × ··· × n.

Definition 1. Let S^(1), ..., S^(K) be matrices such that S^(k) ∈ R^(n^(K−1)×H^(K−1)) with H ≤ n. The subspace norm for a Kth order tensor X associated with {S^(k)}_{k=1}^K is defined as

|||X|||_s := inf_{{M^(k)}_{k=1}^K} Σ_{k=1}^K ∥M^(k)∥_* if X ∈ Span({S^(k)}_{k=1}^K), and +∞ otherwise,

where ∥·∥_* is the nuclear norm, and Span({S^(k)}_{k=1}^K) := { X ∈ R^(n×···×n) : ∃ M^(1), ..., M^(K), X = Σ_{k=1}^K fold_k(M^(k) S^(k)⊤) }.

In the next lemma (proven in Appendix D.2), we show that the dual norm of the subspace norm has a simple, appealing form. As we see in Theorem 2, it avoids the O(√(n^(K−1))) scaling (see the first column of Table 1) by restricting the influence of the noise term to the subspace defined by S^(1), ..., S^(K).

Lemma 1. The dual norm of |||·|||_s is the semi-norm

|||X|||_{s*} = max_{k=1,...,K} ∥X_(k) S^(k)∥,

where ∥·∥ is the spectral norm.

3.2 Choosing the subspace

A natural question that arises is how to choose the matrices S^(1), ..., S^(K).

Lemma 2. Let X*_(k) = P^(k) Λ^(k) Q^(k)⊤ be the SVD of X*_(k), where P^(k) is n × R and Q^(k) is n^(K−1) × R. Assume that R ≤ n and U^(k) has full column rank. It holds that for all k, (i) U^(k) ∈ Span(P^(k)); (ii) Q^(k) ∈ Span(P^(1) ⊗ ··· ⊗ P^(k−1) ⊗ P^(k+1) ⊗ ··· ⊗ P^(K)).

Proof. We prove the lemma in Appendix D.4.

Corollary 1 shows that when the signal-to-noise ratio is high enough, we can recover P^(k) with high probability. Hence we suggest the following three-step approach for tensor denoising: (i) For each k, unfold the observation tensor at mode k and compute the top H left singular vectors; concatenate these vectors to obtain an n × H matrix P̂^(k). (ii) Construct S^(k) as S^(k) = P̂^(1) ⊗ ··· ⊗ P̂^(k−1) ⊗ P̂^(k+1) ⊗ ··· ⊗ P̂^(K).
(iii) Solve the subspace norm regularized minimization problem

min_X (1/2)|||Y − X|||_F^2 + λ|||X|||_s,   (8)

where the subspace norm is associated with the above defined {S^(k)}_{k=1}^K. See Appendix B for details.

3.3 Analysis

Let Y ∈ R^(n×···×n) be a tensor corrupted by Gaussian noise with standard deviation σ as follows:

Y = X* + σE.   (9)

We define a slightly modified estimator X̂ as follows:

X̂ = argmin_{X, {M^(k)}_{k=1}^K} { (1/2)|||Y − X|||_F^2 + λ|||X|||_s : X = Σ_{k=1}^K fold_k(M^(k) S^(k)⊤), {M^(k)}_{k=1}^K ∈ M(ρ) },   (10)

where M(ρ) is a restriction of the set of matrices M^(k) ∈ R^(n×H^(K−1)), k = 1, ..., K, defined as follows:

M(ρ) := { {M^(k)}_{k=1}^K : ∥fold_k(M^(k) S^(k)⊤)_(ℓ)∥ ≤ (ρ/K)(√n + √(H^(K−1))), ∀ℓ ≠ k }.

This restriction makes sure that M^(k), k = 1, ..., K, are incoherent, i.e., each M^(k) has a spectral norm that is as low as that of a random matrix when unfolded at a different mode ℓ. Similar assumptions were used in low-rank plus sparse matrix decomposition [2, 12] and for the denoising bound for the latent nuclear norm [23]. Then we have the following statement (we prove this in Appendix D.3).

Theorem 2. Let X_p be any tensor that can be expressed as

X_p = Σ_{k=1}^K fold_k(M_p^(k) S^(k)⊤),

which satisfies the above incoherence condition {M_p^(k)}_{k=1}^K ∈ M(ρ), and let r_k be the rank of M_p^(k) for k = 1, ..., K. In addition, we assume that each S^(k) is constructed as S^(k) = P̂^(1) ⊗ ··· ⊗ P̂^(k−1) ⊗ P̂^(k+1) ⊗ ··· ⊗ P̂^(K) with (P̂^(k))^⊤ P̂^(k) = I_H. Then there are universal constants c_0 and c_1 such that any solution X̂ of the minimization problem (10) with

λ = |||X_p − X*|||_{s*} + c_0 σ(√n + √(H^(K−1)) + √(2 log(K/δ)))

satisfies the following bound with probability at least 1 − δ:

|||X̂ − X*|||_F ≤ |||X_p − X*|||_F + c_1 λ √(Σ_{k=1}^K r_k).

Note that the right-hand side of the bound consists of two terms. The first term is the approximation error. This term will be zero if X* lies in Span({S^(k)}_{k=1}^K).
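Steps (i) and (ii) of the three-step approach above can be sketched directly from their description. This is a hedged illustration (the solver for step (iii), problem (8), is omitted; see Appendix B); the helper names and the row-major unfolding convention are our assumptions.

```python
# Step (i): mode-wise truncated SVDs of the noisy tensor give P_hat^(k).
# Step (ii): the Kronecker products of all factors except mode k give S^(k).
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def subspace_factors(Y, H):
    """Top-H left singular vectors of each mode-k unfolding of Y."""
    return [np.linalg.svd(unfold(Y, k), full_matrices=False)[0][:, :H]
            for k in range(Y.ndim)]

def subspace_basis(P, k):
    """S^(k): Kronecker product of all P_hat^(l) with l != k."""
    S = np.eye(1)
    for l in range(len(P)):
        if l != k:
            S = np.kron(S, P[l])
    return S

Y = np.random.default_rng(2).standard_normal((5, 6, 7))
P = subspace_factors(Y, H=2)
S0 = subspace_basis(P, 0)
assert S0.shape == (6 * 7, 2 * 2)
assert np.allclose(S0.T @ S0, np.eye(4))  # orthonormal columns
```

Because each P̂^(k) has orthonormal columns, their Kronecker product S^(k) does too, which is exactly what Definition 1 and Lemma 1 assume.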
This is the case if we choose S^(k) = I_{n^(K−1)} as in the latent nuclear norm, or if the condition of Corollary 1 is satisfied for the smallest β_R when we use the Kronecker product construction we proposed. Note that the regularization constant λ should also scale with the dual subspace norm of the residual X_p − X*. The second term is the estimation error with respect to X_p. If we take X_p to be the orthogonal projection of X* onto Span({S^(k)}_{k=1}^K), we can ignore the contribution of the residual to λ, because (X_p − X*)_(k) S^(k) = 0. Then the estimation error scales mildly with the dimensions n, H^(K−1) and with the sum of the ranks. Note that if we take S^(k) = I_{n^(K−1)}, we have H^(K−1) = n^(K−1), and we recover the guarantee (3).

4 Experiments

In this section, we conduct tensor denoising experiments on synthetic and real datasets to numerically confirm our analysis in the previous sections.

4.1 Synthetic data

We randomly generated the true rank-two tensor X* of size 20 × 30 × 40 with singular values β_1 = 20 and β_2 = 10. The true factors are generated as random matrices with orthonormal columns. The observation tensor Y is then generated by adding Gaussian noise with standard deviation σ to X*. Our approach is compared to the CP decomposition, the overlapped approach, and the latent approach. The CP decomposition is computed by Tensorlab [22] with 20 random initializations. We assume CP knows the true rank is 2. For the subspace norm, we use Algorithm 2 described in Section 3. We also select the top 2 singular vectors when constructing the Û^(k)'s. We computed the solutions for 20 values of the regularization parameter λ logarithmically spaced between 1 and 100. For the overlapped and the latent norm, we use the ADMM described in [25]; we also computed 20 solutions with the same λ's used for the subspace norm. We measure the performance by the relative error, defined as |||X̂ − X*|||_F / |||X*|||_F.
We report the minimum error obtained by choosing the optimal regularization parameter or the optimal initialization. Although the regularization parameter could be selected by leaving out some entries and measuring the error on those entries, we will not go into tensor completion here for the sake of simplicity. Figure 2 (a) and (b) show the results of this experiment. The left panel shows the relative error for 3 representative values of λ for the subspace norm. The black dash-dotted line shows the minimum error across all the λ's. The magenta dashed line shows the error corresponding to the theoretically motivated choice λ = σ(max_k(√(n_k) + √(H^(K−1))) + √(2 log K)) for each σ. The two vertical lines are the thresholds of σ from Corollary 1 corresponding to β_1 and β_2, namely β_1/(∏_k n_k)^(1/4) and β_2/(∏_k n_k)^(1/4). The figure confirms that there is a rather sharp increase in the error around the theoretically predicted places (see also Figure 1(b)). We can also see that the optimal λ should grow linearly with σ. For large σ (small SNR), the best relative error is 1, since the optimal choice of the regularization parameter λ leads to predicting X̂ = 0. Figure 2 (b) compares the performance of the subspace norm to the other approaches. For each method the smallest error corresponding to the optimal choice of the regularization parameter λ is shown.

Figure 2: Tensor denoising. (a) The subspace approach with three representative λ's on synthetic data. (b) Comparison of different methods on synthetic data. (c) Comparison on the amino acids data.
In addition, to place the numbers in context, we plot the line corresponding to

Relative error = (√(R Σ_k n_k log K) / |||X*|||_F) · σ,   (11)

which we call "optimistic". This can be motivated by considering the (non-tractable) maximum likelihood estimator for the CP decomposition (see Appendix A). Clearly, the errors of CP, the subspace norm, and "optimistic" grow at the same rate, much more slowly than those of the overlapped and latent approaches. The error of CP increases beyond 1, as no regularization is imposed (see Appendix C for more experiments). We can see that both CP and the subspace norm behave near optimally in this setting, although such behavior is guaranteed for the subspace norm, whereas it is hard to give any such guarantee for the CP decomposition based on nonlinear optimization.

4.2 Amino acids data

The amino acids dataset [5] is a semi-realistic dataset commonly used as a benchmark for low-rank tensor modeling. It consists of five laboratory-made samples, each containing different amounts of tyrosine, tryptophan, and phenylalanine. The spectra of their excitation wavelength (250–300 nm) and emission (250–450 nm) are measured by fluorescence, which gives a 5 × 201 × 61 tensor. As the true factors are known to be these three acids, this data perfectly suits the CP model. The true rank is fed into CP and the proposed approach as H = 3. We computed the solutions of CP for 20 different random initializations, and the solutions of the other approaches for 20 different values of λ. For the subspace and the overlapped approach, the λ's are logarithmically spaced between 10^3 and 10^5. For the latent approach, the λ's are logarithmically spaced between 10^4 and 10^6. Again, we include the optimistic scaling (11) to put the numbers in context. Figure 2(c) shows the smallest relative error achieved by all the methods we compare. Similar to the synthetic data, both CP and the subspace norm behave nearly ideally, though the relative error of CP can be larger than 1 due to the lack of regularization.
Interestingly, the theoretically suggested scaling of the regularization parameter $\lambda$ is almost optimal.

5 Conclusion

We have settled a conjecture posed by [20] and showed that an $O(n^{K/4})$ signal-to-noise ratio is indeed sufficient also for odd-order tensors. Moreover, our analysis shows an interesting two-phase behavior of the error. This finding led us to the development of the proposed subspace norm. The proposed norm is defined with respect to a set of orthonormal matrices $\hat{P}^{(1)}, \ldots, \hat{P}^{(K)}$, which are estimated by mode-wise singular value decompositions. We have analyzed the denoising performance of the proposed norm, and shown that the error can be bounded by the sum of two terms, which can be interpreted as an approximation error term coming from the first (non-convex) step, and an estimation error term coming from the second (convex) step.
Biologically Inspired Dynamic Textures for Probing Motion Perception Jonathan Vacher CNRS UNIC and Ceremade Univ. Paris-Dauphine 75775 Paris Cedex 16, FRANCE vacher@ceremade.dauphine.fr Andrew Isaac Meso Institut de Neurosciences de la Timone UMR 7289 CNRS/Aix-Marseille Université 13385 Marseille Cedex 05, FRANCE andrew.meso@univ-amu.fr Laurent Perrinet Institut de Neurosciences de la Timone UMR 7289 CNRS/Aix-Marseille Université 13385 Marseille Cedex 05, FRANCE laurent.perrinet@univ-amu.fr Gabriel Peyré CNRS and Ceremade Univ. Paris-Dauphine 75775 Paris Cedex 16, FRANCE peyre@ceremade.dauphine.fr

Abstract

Perception is often described as a predictive process based on an optimal inference with respect to a generative model. We study here the principled construction of a generative model specifically crafted to probe motion perception. In that context, we first provide an axiomatic, biologically-driven derivation of the model. This model synthesizes random dynamic textures which are defined by stationary Gaussian distributions obtained by the random aggregation of warped patterns. Importantly, we show that this model can equivalently be described as a stochastic partial differential equation. Using this characterization of motion in images, it allows us to recast motion-energy models into a principled Bayesian inference framework. Finally, we apply these textures in order to psychophysically probe speed perception in humans. In this framework, while the likelihood is derived from the generative model, the prior is estimated from the observed results and accounts for the perceptual bias in a principled fashion.

1 Motivation

A normative explanation for the function of perception is to infer relevant hidden parameters from the sensory input with respect to a generative model [7].
Equipped with some prior knowledge about this representation, this corresponds to the Bayesian brain hypothesis, as has been perfectly illustrated by the particular case of motion perception [19]. However, the Gaussian hypothesis related to the parameterization of knowledge in these models (for instance in the formalization of the prior and of the likelihood functions) does not always fit with psychophysical results [17]. As such, a major challenge is to refine the definition of generative models so that they conform to the widest variety of results. From this observation, the estimation problem inherent to perception is linked to the definition of an adequate generative model. In particular, the simplest generative model to describe visual motion is the luminance conservation equation. It states that luminance $I(x, t)$ for $(x, t) \in \mathbb{R}^2 \times \mathbb{R}$ is approximately conserved along trajectories defined as integral lines of a vector field $v(x, t) \in \mathbb{R}^2$. The corresponding generative model defines random fields as solutions to the stochastic partial differential equation (sPDE)

$$\langle v, \nabla I \rangle + \frac{\partial I}{\partial t} = W, \quad (1)$$

where $\langle \cdot, \cdot \rangle$ denotes the Euclidean scalar product in $\mathbb{R}^2$ and $\nabla I$ is the spatial gradient of $I$. To match the statistics of natural scenes or some category of textures, the driving term $W$ is usually defined as a colored noise corresponding to some average spatio-temporal coupling, and is parameterized by a covariance matrix $\Sigma$, while the field is usually a constant vector $v(x, t) = v_0$ accounting for a full-field translation with constant speed. Ultimately, the application of this generative model is essential for probing the visual system, for instance to understand how observers might detect motion in a scene.
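The luminance conservation model (1) can be simulated directly on a grid. The sketch below is a minimal explicit Euler / finite-difference integration of the advection equation driven by spatially correlated noise; the grid size, filter width, and time step are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 64, 0.1, 50
v0 = np.array([1.0, 0.0])          # constant full-field translation speed
I = np.zeros((N, N))

def grad(I):
    # centred spatial differences with periodic boundaries
    gx = (np.roll(I, -1, 0) - np.roll(I, 1, 0)) / 2.0
    gy = (np.roll(I, -1, 1) - np.roll(I, 1, 1)) / 2.0
    return gx, gy

def colored_noise(N, rng, width=2.0):
    # crude spatially correlated driving term W: Gaussian-filtered white noise in Fourier
    k = np.fft.fftfreq(N)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    flt = np.exp(-(KX**2 + KY**2) * (2 * np.pi * width)**2 / 2)
    return np.fft.ifft2(np.fft.fft2(rng.standard_normal((N, N))) * flt).real

for _ in range(steps):
    gx, gy = grad(I)
    # explicit Euler step of  <v, grad I> + dI/dt = W
    I = I - dt * (v0[0] * gx + v0[1] * gy) + np.sqrt(dt) * colored_noise(N, rng)

print(I.shape)
```

The resulting field is a drifting, spatially correlated texture; a stable and spectrally exact discretization is precisely what the sPDE/AR formulation developed later in the paper provides.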
Indeed, as shown by [9, 19], the negative log-likelihood corresponding to the luminance conservation model (1) and determined by a hypothesized speed $v_0$ is proportional to the value of the motion-energy model [1]

$$\left\| \langle v_0, \nabla (K \star I) \rangle + \frac{\partial (K \star I)}{\partial t} \right\|^2,$$

where $K$ is the whitening filter corresponding to the inverse of $\Sigma$, and $\star$ is the convolution operator. Using some prior knowledge on the distribution of motions, for instance a preference for slow speeds, this indeed leads to a Bayesian formalization of this inference problem [18]. This has been successful in accounting for a large class of psychophysical observations [19]. As a consequence, such probabilistic frameworks allow one to connect different models from computer vision to neuroscience with a unified, principled approach. However, the model defined in (1) is obviously quite simplistic with respect to the complexity of natural scenes. It is therefore useful here to relate this problem to solutions proposed by texture synthesis methods in the computer vision community. Indeed, the literature on static texture synthesis is abundant (see [16] and the references therein for applications in computer graphics). Of particular interest for us is the work of Galerne et al. [6], which proposes a stationary Gaussian model restricted to static textures. Realistic dynamic texture models are, however, less studied, and the most prominent method is the non-parametric Gaussian auto-regressive (AR) framework of [3], which has been refined in [20].

Contributions. Here, we seek to engender a better understanding of motion perception by improving generative models for dynamic texture synthesis. From that perspective, we motivate the generation of optimal stimulation within a stationary Gaussian dynamic texture model. We base our model on a previously defined heuristic [10, 11] coined "Motion Clouds". Our first contribution is an axiomatic derivation of this model, seen as a shot noise aggregation of dynamically warped "textons". This formulation is important to provide a clear understanding of the effects of the model's parameters manipulated during psychophysical experiments. Within our generative model, they correspond to the average translation speed and orientation of the "textons" and the standard deviations of random fluctuations around these averages. Our second contribution (proved in the supplementary materials) is to demonstrate an explicit equivalence between this model and a class of linear stochastic partial differential equations (sPDE). This shows that our model is a generalization of the well-known luminance conservation equation. This sPDE formulation has two chief advantages: it allows for real-time synthesis using an AR recurrence, and it allows one to recast the log-likelihood of the model as a generalization of the classical motion-energy model, which in turn is crucial to allow for a Bayesian modeling of perceptual biases. Our last contribution is an illustrative application of this model to the psychophysical study of motion perception in humans.

Figure 1: Parameterization of the class of Motion Clouds stimuli. The illustration relates the parametric changes in MC to real-world (top row) and observer (second row) movements. (A) Orientation changes resulting from scene rotation are parameterized through $\theta$, as shown in the bottom row where a horizontal (a) and an obliquely oriented (b) MC are compared. (B) Zoom movements, either from scene looming or observer movements in depth, are characterised by scale changes reflected by a scale or frequency term $z$, shown for a larger or closer object (b) compared to a more distant one (a). (C) Translational movements in the scene are characterised by $V$, using the same formulation for static (a), slow (b) and fast (c) moving MC, with the variability in these speeds quantified by $\sigma_V$. ($\xi$ and $\tau$) in the third row are the spatial and temporal frequency scale parameters.
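The correspondence between the negative log-likelihood and the motion-energy model can be illustrated with a minimal sketch: for a rigidly translating 1-D texture, the residual $\|v\,\partial_x I + \partial_t I\|^2$ is minimized at the true speed. The whitening filter $K$ is replaced here by a pre-smoothed texture for brevity; all sizes and the candidate speed grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, v_true = 128, 9, 1           # v_true: integer pixel shift per frame

# Smooth 1-D random texture (Gaussian-filtered white noise stands in for K * I,
# i.e. we skip the explicit whitening filter for this sketch).
k = np.fft.fftfreq(N)
f = np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * np.exp(-(2 * np.pi * 4 * k)**2 / 2)).real

# Movie translating at v_true pixels/frame: I(x, t) = f(x - v_true * t)
I = np.stack([np.roll(f, v_true * t) for t in range(T)])

def motion_energy(v):
    # || v * dI/dx + dI/dt ||^2 over interior frames (central differences)
    It = (I[2:] - I[:-2]) / 2.0
    Ix = (np.roll(I[1:-1], -1, axis=1) - np.roll(I[1:-1], 1, axis=1)) / 2.0
    return float(np.sum((v * Ix + It)**2))

candidates = [-2, -1, 0, 1, 2]
v_hat = min(candidates, key=motion_energy)
print(v_hat)   # → 1
```

Minimizing the motion energy over candidate speeds recovers the true translation, which is exactly the maximum-likelihood reading of the model (1).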
This application shows how the model allows us to define a likelihood, which enables a simple fitting procedure to determine the prior driving the perceptual bias.

Notations. In the following, we will denote by $(x, t) \in \mathbb{R}^2 \times \mathbb{R}$ the space/time variable, and by $(\xi, \tau) \in \mathbb{R}^2 \times \mathbb{R}$ the corresponding frequency variables. If $f(x, t)$ is a function defined on $\mathbb{R}^3$, then $\hat{f}(\xi, \tau)$ denotes its Fourier transform. For $\xi \in \mathbb{R}^2$, we denote by $\xi = ||\xi||(\cos(\angle\xi), \sin(\angle\xi)) \in \mathbb{R}^2$ its polar coordinates. For a function $g$ on $\mathbb{R}^2$, we denote $\bar{g}(x) = g(-x)$. In the following, we denote with a capital letter such as $A$ a random variable, by $a$ a realization of $A$, and we let $P_A(a)$ be the corresponding distribution of $A$.

2 Axiomatic Construction of a Dynamic Texture Stimulation Model

Solving a model-based estimation problem and finding optimal dynamic textures for stimulating an instance of such a model can be seen as equivalent mathematical problems. In the luminance conservation model (1), the generative model is parameterized by a spatio-temporal coupling function, which is encoded in the covariance $\Sigma$ of the driving noise and the motion flow $v_0$. This coupling (covariance) is essential as it quantifies the extent of the spatial integration area as well as the integration dynamics, an important issue in neuroscience when considering the implementation of integration mechanisms from the local to the global scale. In particular, it is important to understand modular sensitivity in the various lower visual areas with different spatio-temporal selectivities, such as the Primary Visual Cortex (V1) or, ascending the processing hierarchy, the Middle Temporal area (MT). For instance, by varying the frequency bandwidth of such dynamic textures, distinct mechanisms for perception and action have been identified [11]. However, such textures were based on a heuristic [10], and our goal here is to develop a principled, axiomatic definition.
2.1 From Shot Noise to Motion Clouds

We propose a mathematically-sound derivation of a general parametric model of dynamic textures. This model is defined by aggregation, through summation, of a basic spatial "texton" template $g(x)$. The summation reflects a transparency hypothesis, which has been adopted for instance in [6]. While one could argue that this hypothesis is overly simplistic and does not model occlusions or edges, it leads to a tractable framework of stationary Gaussian textures, which has proved useful to model static micro-textures [6] and dynamic natural phenomena [20]. The simplicity of this framework allows for a fine tuning of frequency-based (Fourier) parameterization, which is desirable for the interpretation of psychophysical experiments. We define a random field as

$$I_\lambda(x, t) \stackrel{\text{def.}}{=} \frac{1}{\sqrt{\lambda}} \sum_{p \in \mathbb{N}} g(\varphi_{A_p}(x - X_p - V_p t)) \quad (2)$$

where $\varphi_a : \mathbb{R}^2 \to \mathbb{R}^2$ is a planar warping parameterized by a finite dimensional vector $a$. Intuitively, this model corresponds to a dense mixing of stereotyped, static textons as in [6]. The originality is two-fold. First, the components of this mixing are derived from the texton by visual transformations $\varphi_{A_p}$ which may correspond to arbitrary transformations such as zooms or rotations, illustrated in Figure 1. Second, we explicitly model the motion (position $X_p$ and speed $V_p$) of each individual texton. The parameters $(X_p, V_p, A_p)_{p \in \mathbb{N}}$ are independent random vectors. They account for the variability in the position of objects or observers and their speed, thus mimicking natural motions in an ambient scene. The set of translations $(X_p)_{p \in \mathbb{N}}$ is a 2-D Poisson point process of intensity $\lambda > 0$. The following section instantiates this idea and proposes canonical choices for these variabilities. The warping parameters $(A_p)_p$ are distributed according to a distribution $P_A$. The speed parameters $(V_p)_p$ are distributed according to a distribution $P_V$ on $\mathbb{R}^2$.
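Model (2) can be simulated directly by summing warped, translated textons. The sketch below uses a fixed number of textons as a stand-in for the Poisson point process, and its distribution choices (log-normal scales, von Mises orientations, Gaussian speed jitter) anticipate Section 2.3; all numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def texton(x, y, sigma=0.05, xi0=8.0):
    # oriented Gabor texton g with central frequency xi0 along x
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xi0 * x)

def motion_cloud_frame(t, n_textons=200, lam=200.0, size=1.0, N=128):
    """One frame of I_lambda(x, t) = lam^{-1/2} * sum_p g(phi_{A_p}(x - X_p - V_p t))."""
    xs = np.linspace(0, size, N)
    X, Y = np.meshgrid(xs, xs, indexing='ij')
    I = np.zeros((N, N))
    for _ in range(n_textons):
        Xp = rng.uniform(0, size, 2)                                  # texton position
        Vp = np.array([0.2, 0.0]) + 0.02 * rng.standard_normal(2)     # speed ~ v0 + jitter
        theta, z = rng.vonmises(0.0, 8.0), rng.lognormal(0.0, 0.1)    # warping A_p = (theta, z)
        dx, dy = X - Xp[0] - Vp[0] * t, Y - Xp[1] - Vp[1] * t
        # phi_a(x) = z * R_{-theta} x
        u = z * (np.cos(theta) * dx + np.sin(theta) * dy)
        w = z * (-np.sin(theta) * dx + np.cos(theta) * dy)
        I += texton(u, w)
    return I / np.sqrt(lam)

frame = motion_cloud_frame(0.0)
print(frame.shape)
```

Rendering `motion_cloud_frame(t)` for increasing `t` produces a drifting texture; as the texton density grows, the field approaches the stationary Gaussian limit stated in Proposition 1 below.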
The following result shows that the model (2) converges to a stationary Gaussian field and gives the parameterization of the covariance. Its proof follows from a specialization of [5, Theorem 3.1] to our setting.

Proposition 1. $I_\lambda$ is stationary with bounded second order moments. Its covariance is $\Sigma(x, t, x', t') = \gamma(x - x', t - t')$ where $\gamma$ satisfies

$$\forall (x, t) \in \mathbb{R}^3, \quad \gamma(x, t) = \iint c_g(\varphi_a(x - \nu t)) \, P_V(\nu) \, P_A(a) \, d\nu \, da \quad (3)$$

where $c_g = g \star \bar{g}$ is the auto-correlation of $g$. When $\lambda \to +\infty$, it converges (in the sense of finite dimensional distributions) toward a stationary Gaussian field $I$ of zero mean and covariance $\Sigma$.

2.2 Definition of "Motion Clouds"

We detail this model here with warpings as rotations and scalings (see Figure 1). These account for the characteristic orientations and sizes (or spatial scales) in a scene with respect to the observer:

$$\forall a = (\theta, z) \in [-\pi, \pi) \times \mathbb{R}^*_+, \quad \varphi_a(x) \stackrel{\text{def.}}{=} z R_{-\theta}(x),$$

where $R_\theta$ is the planar rotation of angle $\theta$. We now give some physical and biological motivation underlying our particular choice for the distributions of the parameters. We assume that the distributions $P_Z$ and $P_\Theta$ of spatial scales $z$ and orientations $\theta$, respectively (see Figure 1), are independent and have densities, thus considering

$$\forall a = (\theta, z) \in [-\pi, \pi) \times \mathbb{R}^*_+, \quad P_A(a) = P_Z(z) \, P_\Theta(\theta).$$

The speed vector $\nu$ is assumed to be randomly fluctuating around a central speed $v_0$, so that

$$\forall \nu \in \mathbb{R}^2, \quad P_V(\nu) = P_{||V - v_0||}(||\nu - v_0||). \quad (4)$$

In order to obtain "optimal" responses to the stimulation (as advocated by [21]), it makes sense to define the texton $g$ to be equal to an oriented Gabor acting as an atom, based on the structure of a standard receptive field of V1. Each would have a scale $\sigma$ and a central frequency $\xi_0$. Since the orientation and scale of the texton are handled by the $(\theta, z)$ parameters, we can impose without loss of generality the normalization $\xi_0 = (1, 0)$.
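The warping $\varphi_a(x) = z R_{-\theta}(x)$ and the Gabor texton just defined are straightforward to implement; the sketch below does so with illustrative parameter values and checks that the warp scales distances by exactly $z$ (rotations are isometries).

```python
import numpy as np

def phi(a, x):
    """Warping phi_a(x) = z * R_{-theta} x for a = (theta, z)."""
    theta, z = a
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])     # planar rotation R_{-theta}
    return z * (R @ x)

def gabor(x, sigma=0.1, xi0=np.array([1.0, 0.0])):
    # texton g: Gaussian envelope of width sigma times a grating of frequency xi0 = (1, 0)
    x = np.asarray(x, float)
    return np.exp(-np.dot(x, x) / (2 * sigma**2)) * np.cos(2 * np.pi * np.dot(xi0, x))

x = np.array([0.3, -0.2])
y = phi((np.pi / 4, 2.0), x)
print(np.linalg.norm(y) / np.linalg.norm(x))   # → 2.0
```

Because orientation and scale are carried by $(\theta, z)$, the texton itself can stay in the normalized position $\xi_0 = (1, 0)$, as stated in the text.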
In the special case where $\sigma \to 0$, $g$ is a grating of frequency $\xi_0$, and the image $I$ is a dense mixture of drifting gratings, whose power-spectrum has a closed form expression detailed in Proposition 2. Its proof can be found in the supplementary materials. We call this Gaussian field a Motion Cloud (MC); it is parameterized by the envelopes $(P_Z, P_\Theta, P_V)$ and has central frequency and speed $(\xi_0, v_0)$. Note that it is possible to consider any arbitrary textons $g$, which would give rise to more complicated parameterizations for the power spectrum $\hat{g}$, but we decided here to stick to the simple case of gratings.

Proposition 2. When $g(x) = e^{i\langle x, \xi_0 \rangle}$, the image $I$ defined in Proposition 1 is a stationary Gaussian field of covariance having the power-spectrum

$$\forall (\xi, \tau) \in \mathbb{R}^2 \times \mathbb{R}, \quad \hat{\gamma}(\xi, \tau) = \frac{P_Z(||\xi||)}{||\xi||^2} \, P_\Theta(\angle\xi) \, \mathcal{L}(P_{||V - v_0||})\!\left( \frac{-\tau + \langle v_0, \xi \rangle}{||\xi||} \right), \quad (5)$$

where the linear transform $\mathcal{L}$ is such that $\forall u \in \mathbb{R}, \ \mathcal{L}(f)(u) = \int_{-\pi}^{\pi} f(-u/\cos(\varphi)) \, d\varphi$.

Remark 1. Note that the envelope of $\hat{\gamma}$ is shaped along a cone in the spatial and temporal domains. This is an important and novel contribution when compared to a Gaussian formulation like a classical Gabor. In particular, the bandwidth is then constant around the speed plane or the orientation line with respect to spatial frequency. Basing the generation of the textures on all possible translations, rotations and zooms, we thus provide a principled approach to show that bandwidth should be proportional to spatial frequency to provide a better model of moving textures.

2.3 Biologically-inspired Parameter Distributions

We now give meaningful specializations for the probability distributions $(P_Z, P_\Theta, P_{||V - v_0||})$, which are inspired by some known scaling properties of the visual transformations relevant to dynamic scene perception. First, small, centered, linear movements of the observer along the axis of view (orthogonal to the plane of the scene) generate centered planar zooms of the image.
From the linear modeling of the observer's displacement and the subsequent multiplicative nature of zoom, scaling should follow a Weber-Fechner law, stating that subjective sensation, when quantified, is proportional to the logarithm of stimulus intensity. Thus, we choose the scaling $z$ drawn from a log-normal distribution $P_Z$, defined in (6). The bandwidth $\sigma_Z$ quantifies the variance in the amplitude of zooms of individual textons relative to the set characteristic scale $z_0$. Similarly, the texture is perturbed by variation in the global angle $\theta$ of the scene: for instance, the head of the observer may roll slightly around its normal position. The von Mises distribution, as a good approximation of the warped Gaussian distribution around the unit circle, is an adapted choice for the distribution of $\theta$, with mean $\theta_0$ and bandwidth $\sigma_\Theta$, see (6). We may similarly consider that the position of the observer is variable in time. To first order, movements perpendicular to the axis of view dominate, generating random perturbations to the global translation $v_0$ of the image at speed $\nu - v_0 \in \mathbb{R}^2$. These perturbations are for instance described by a Gaussian random walk: take for instance tremors, which are constantly jittering, small ($\leq 1$ deg) movements of the eye. This justifies the choice of a radial distribution (4) for $P_V$. This radial distribution $P_{||V - v_0||}$ is thus selected as a bell-shaped function of width $\sigma_V$, and we choose here a Gaussian function for simplicity, see (6). Note that, as detailed in the supplementary material, a slightly different bell function (with a more complicated expression) should be used to obtain an exact equivalence with the sPDE discretization mentioned in Section 4. The distributions of the parameters are thus chosen as

$$P_Z(z) \propto \frac{z_0}{z} \, e^{-\frac{\ln(z/z_0)^2}{2 \ln(1 + \sigma_Z^2)}}, \quad P_\Theta(\theta) \propto e^{\frac{\cos(2(\theta - \theta_0))}{4 \sigma_\Theta^2}} \quad \text{and} \quad P_{||V - v_0||}(r) \propto e^{-\frac{r^2}{2 \sigma_V^2}}. \quad (6)$$

Remark 2.
Note that in practice we have parametrized $P_Z$ by its mode $m_Z = \mathrm{argmax}_z P_Z(z)$ and standard deviation $d_Z = \sqrt{\int z^2 P_Z(z) \, dz}$; see the supplementary material and [4].

Figure 2: Graphical representation of the covariance $\gamma$ (left; note the cone-like shape of the envelopes) and an example of synthesized dynamics for narrow-band and broad-band Motion Clouds (right).

Plugging these expressions (6) into the definition (5) of the power spectrum of the motion cloud, one obtains a parameterization which is very similar to the one originally introduced in [11]. The following table gives the speed $v_0$ and frequency $(\theta_0, z_0)$ central parameters in terms of amplitude and orientation, each one being coupled with the relevant dispersion parameters. Figures 1 and 2 show a graphical display of the influence of these parameters.

(mean, dispersion): Speed $(v_0, \sigma_V)$; Freq. orientation $(\theta_0, \sigma_\Theta)$; Freq. amplitude $(z_0, \sigma_Z)$ or $(m_Z, d_Z)$.

Remark 3. Note that the final envelope of $\hat{\gamma}$ is in agreement with the formulation that is used in [10]. However, that previous derivation was based on a heuristic which intuitively emerged from a long interaction between modelers and psychophysicists. Herein, we justified these different points from first principles.

Remark 4. The MC model can equally be described as a stationary solution of a stochastic partial differential equation (sPDE). This sPDE formulation is important since we aim to deal with dynamic stimulation, which should be described by a causal equation which is local in time. This is crucial for numerical simulations, since this allows us to perform real-time synthesis of stimuli using an auto-regressive time discretization. This is a significant departure from previous Fourier-based implementations of dynamic stimulation [10, 11].
This is also important to simplify the application of MC inside a Bayesian model of psychophysical experiments (see Section 3). The derivation of an equivalent sPDE model exploits a spectral formulation of MCs as Gaussian random fields. The full proof along with the synthesis algorithm can be found in the supplementary material.

3 Psychophysical Study: Speed Discrimination

To exploit the useful features of our MC model and provide a generalizable proof of concept based on motion perception, we consider here the problem of judging the relative speed of moving dynamical textures and the impact of both average spatial frequency and average duration of temporal correlations.

3.1 Methods

The task was to discriminate the speed $v \in \mathbb{R}$ of MC stimuli moving with a horizontal central speed $v = (v, 0)$. We assign as independent experimental variable the most represented spatial frequency $m_Z$, which we denote in the following by $z$ for easier reading. The other parameters are set to the following values: $\sigma_V = \frac{1}{t^\star z}$, $\theta_0 = \frac{\pi}{2}$, $\sigma_\Theta = \frac{\pi}{12}$, and $d_Z = 1.0$ c/°. Note that $\sigma_V$ is thus dependent on the value of $z$ (which is computed from $m_Z$ and $d_Z$; see Remark 2 and the supplementary material) to ensure that $t^\star = \frac{1}{\sigma_V z}$ stays constant. This parameter $t^\star$ controls the temporal frequency bandwidth, as illustrated in the middle of Figure 2. We used a two-alternative forced choice (2AFC) paradigm. In each trial, a grey fixation screen with a small dark fixation spot was followed by two stimulus intervals of 250 ms each, separated by a grey 250 ms inter-stimulus interval. The first stimulus had parameters $(v_1, z_1)$ and the second had parameters $(v_2, z_2)$. At the end of the trial, a grey screen appeared asking the participant to report which one of the two intervals was perceived as moving faster by pressing one of two buttons, that is, whether $v_1 > v_2$ or $v_2 > v_1$.
Given reference values $(v^\star, z^\star)$, for each trial, $(v_1, z_1)$ and $(v_2, z_2)$ are selected so that

$$v_i = v^\star, \ z_i \in z^\star + \Delta_Z \quad \text{and} \quad v_j \in v^\star + \Delta_V, \ z_j = z^\star, \quad \text{where} \quad \Delta_V = \{-2, -1, 0, 1, 2\} \ \text{and} \ \Delta_Z = \{-0.48, -0.21, 0, 0.32, 0.85\},$$

where $(i, j) = (1, 2)$ or $(i, j) = (2, 1)$ (i.e. the ordering is randomized across trials), and where $z$ values are expressed in cycles per degree (c/°) and $v$ values in °/s. Ten repetitions of each of the 25 possible combinations of these parameters are made per block of 250 trials, and at least four such blocks were collected per condition tested. The outcome of these experiments is summarized by psychometric curves $\hat{\varphi}_{v^\star, z^\star}$, where for all $(v - v^\star, z - z^\star) \in \Delta_V \times \Delta_Z$, the value $\hat{\varphi}_{v^\star, z^\star}(v, z)$ is the empirical probability (each averaged over the typically 40 trials) that a stimulus generated with parameters $(v^\star, z)$ is moving faster than a stimulus with parameters $(v, z^\star)$. To assess the validity of our model, we tested four different scenarios by considering all possible choices among $z^\star = 1.28$ c/°, $v^\star \in \{5°/s, 10°/s\}$, and $t^\star \in \{0.1\,s, 0.2\,s\}$, which corresponds to combinations of low/high speeds and a pair of temporal frequency parameters. Stimuli were generated on a Mac running OS 10.6.8 and displayed on a 20" Viewsonic p227f monitor with resolution 1024 × 768 at 100 Hz. Routines were written using Matlab 7.10.0 and Psychtoolbox 3.0.9 controlled the stimulus display. Observers sat 57 cm from the screen in a dark room. Three observers with normal or corrected-to-normal vision took part in these experiments. They gave their informed consent and the experiments received ethical approval from the Aix-Marseille Ethics Committee in accordance with the declaration of Helsinki.

3.2 Bayesian modeling

To make full use of our MC paradigm in analyzing the obtained results, we follow the methodology of the Bayesian observer used for instance in [13, 12, 8].
We assume the observer makes its decision using a Maximum A Posteriori (MAP) estimator

$$\hat{v}_z(m) = \mathrm{argmin}_v \left[ -\log(P_{M|V,Z}(m|v, z)) - \log(P_{V|Z}(v|z)) \right]$$

computed from some internal representation $m \in \mathbb{R}$ of the observed stimulus. For simplicity, we assume that the observer estimates $z$ from $m$ without bias. To simplify the numerical analysis, we assume that the likelihood is Gaussian, with a variance independent of $v$. Furthermore, we assume that the prior is Laplacian, as this gives a good description of the a priori statistics of speeds in natural images [2]:

$$P_{M|V,Z}(m|v, z) = \frac{1}{\sqrt{2\pi}\,\sigma_z} e^{-\frac{|m - v|^2}{2\sigma_z^2}} \quad \text{and} \quad P_{V|Z}(v|z) \propto e^{a_z v} \mathbf{1}_{[0, v_{\max}]}(v), \quad (7)$$

where $v_{\max} > 0$ is a cutoff speed ensuring that $P_{V|Z}$ is a well defined density even if $a_z > 0$. Both $a_z$ and $\sigma_z$ are unknown parameters of the model, and are obtained from the outcome of the experiments by a fitting process we now explain.

3.3 Likelihood and Prior Estimation

Following for instance [13, 12, 8], the theoretical psychometric curve obtained by a Bayesian decision model is

$$\varphi_{v^\star, z^\star}(v, z) \stackrel{\text{def.}}{=} \mathbb{E}(\hat{v}_{z^\star}(M_{v, z^\star}) > \hat{v}_z(M_{v^\star, z}))$$

where $M_{v,z} \sim \mathcal{N}(v, \sigma_z^2)$ is a Gaussian variable having the distribution $P_{M|V,Z}(\cdot|v, z)$. The following proposition shows that in our special case of a Gaussian likelihood and a Laplacian prior, it can be computed in closed form. Its proof follows closely the derivation of [12, Appendix A], and can be found in the supplementary materials.

Proposition 3. In the special case of the estimator (3.2) with the parameterization (7), one has

$$\varphi_{v^\star, z^\star}(v, z) = \psi\!\left( \frac{v - v^\star - a_{z^\star} \sigma_{z^\star}^2 + a_z \sigma_z^2}{\sqrt{\sigma_{z^\star}^2 + \sigma_z^2}} \right) \quad (8)$$

where $\psi(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-s^2/2} \, ds$ is a sigmoid function. One can fit the experimental psychometric function to compute the perceptual bias term $\mu_{z, z^\star} \in \mathbb{R}$ and an uncertainty $\lambda_{z, z^\star}$ such that

$$\hat{\varphi}_{v^\star, z^\star}(v, z) \approx \psi\!\left( \frac{v - v^\star - \mu_{z, z^\star}}{\lambda_{z, z^\star}} \right).$$

Remark 5. Note that in practice we perform the fit in a log-speed domain, i.e. we consider $\varphi_{\tilde{v}^\star, z^\star}(\tilde{v}, z)$ where $\tilde{v} = \ln(1 + v/v_0)$ with $v_0 = 0.3$°/s, following [13].
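The closed form of Proposition 3 is simple to implement: $\psi$ is the standard normal CDF, and the psychometric curve (8) only needs the prior slopes $a_z$ and likelihood widths $\sigma_z$. The parameter values below are hypothetical, for illustration only.

```python
import math

def psi(t):
    # standard normal CDF: psi(t) = (1/sqrt(2*pi)) * integral_{-inf}^{t} e^{-s^2/2} ds
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi_curve(v, z, v_star, z_star, a, sigma):
    """Theoretical psychometric curve (Eq. 8); a[z] and sigma[z] hold the prior slope
    and likelihood width per spatial frequency (hypothetical values below)."""
    num = v - v_star - a[z_star] * sigma[z_star]**2 + a[z] * sigma[z]**2
    return psi(num / math.sqrt(sigma[z_star]**2 + sigma[z]**2))

a = {0.8: -1.0, 1.28: -1.5}        # negative slopes: a slow-speed prior
sigma = {0.8: 0.2, 1.28: 0.15}

# At v = v* and z = z*, the bias terms cancel and the curve sits at psi(0) = 0.5.
print(phi_curve(10.0, 1.28, 10.0, 1.28, a, sigma))   # → 0.5
```

Fitting this two-parameter sigmoid to the empirical proportions then yields the bias $\mu_{z,z^\star}$ and width $\lambda_{z,z^\star}$ used in the estimation step of Section 3.3.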
By comparing the theoretical and experimental psychophysical curves, (8) and the fitted form above, one thus obtains the following expressions:

$$\sigma_z^2 = \lambda_{z, z^\star}^2 - \frac{1}{2} \lambda_{z^\star, z^\star}^2 \quad \text{and} \quad a_z = \frac{a_{z^\star} \sigma_{z^\star}^2}{\sigma_z^2} - \frac{\mu_{z, z^\star}}{\sigma_z^2}.$$

The only remaining unknown is $a_{z^\star}$, which can be set to any negative number based on previous work on low-speed priors or, alternatively, estimated in future work with a more refined fitting method.

3.4 Psychophysical Results

The main results are summarized in Figure 3, showing the parameters $\mu_{z, z^\star}$ in Figure 3(a) and the parameters $\sigma_z$ in Figure 3(b). Spatial frequency has a positive effect on perceived speed: speed is systematically perceived as faster as spatial frequency is increased; moreover, this shift cannot simply be explained as the result of an increase in the likelihood width (Figure 3(b)) at the tested spatial frequency, as previously observed for contrast changes [13, 12]. Therefore the positive effect could be explained by a negative effect in the prior slopes $a_z$ as the spatial frequency increases. However, we do not have any explanation for the observed constant likelihood width, as it is not consistent with the speed width of the stimuli $\sigma_V = \frac{1}{t^\star z}$, which is decreasing with spatial frequency.

3.5 Discussion

We exploited the principled and ecologically motivated parameterization of MC to ask about the effect of scene scaling on speed judgements. In the experimental task, MC stimuli in which the spatial scale content was systematically varied (via frequency manipulations) around a central frequency of 1.28 c/° were found to be perceived as slightly faster at higher frequencies and slightly slower at lower frequencies. The effects were most prominent at the faster speed tested, of 10 °/s, relative to those at 5 °/s. The fitted psychometric functions were compared to those predicted by a Bayesian model in which the likelihood, or the observer's sensory representation, was characterised by a simple Gaussian.
Indeed, for this small data set, intended as a proof of concept, the model was able to explain these systematic biases for spatial frequency as shifts in our a priori on speed during the perceptual judgements, as the likelihood widths are constant across tested frequencies but lower at the higher of the tested speeds. Thus, having a larger measured bias in the case of the smaller likelihood width (faster speed) is consistent with a key role for the prior in the observed perceptual bias. A larger data set, including more standard spatial frequencies and the use of more observers, is needed to disambiguate the model's predicted prior function.

Figure 3: 2AFC speed discrimination results. (a) The task generates psychometric functions which show shifts in the point of subjective equality (PSE bias $\mu_{z, z^\star}$) for the range of test $z$. Stimuli of lower frequency with respect to the reference (the intersection of the dotted horizontal and vertical lines gives the reference stimulus) are perceived as going slower; those with greater mean frequency are perceived as going relatively faster. This effect is observed under all conditions but is stronger at the highest speed and for subject 1. (b) The estimated likelihood widths $\sigma_z$ appear noisy but roughly constant as a function of $z$ for each subject. Widths are generally higher for $v = 5$ (red) than $v = 10$ (blue) traces. The parameter $t^\star$ does not show a significant effect across the conditions tested.
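The parameter-recovery step of Section 3.3, which maps the fitted bias $\mu_{z,z^\star}$ and width $\lambda_{z,z^\star}$ back to the likelihood width $\sigma_z$ and prior slope $a_z$, can be sanity-checked with a quick round trip on hypothetical ground-truth values:

```python
import math

# Ground-truth (hypothetical) parameters at the reference and a test frequency
sigma_ref, sigma_z = 0.15, 0.20
a_ref, a_z = -1.5, -2.0

# Forward model (Eq. 8): the fitted psychometric parameters
mu = a_ref * sigma_ref**2 - a_z * sigma_z**2       # bias mu_{z,z*}
lam = math.sqrt(sigma_ref**2 + sigma_z**2)         # width lambda_{z,z*}
lam_ref = math.sqrt(2) * sigma_ref                 # width at z = z*

# Inversion used in Section 3.3
sigma_z2_hat = lam**2 - 0.5 * lam_ref**2
a_z_hat = a_ref * sigma_ref**2 / sigma_z2_hat - mu / sigma_z2_hat

print(sigma_z2_hat, a_z_hat)
```

The inversion recovers $\sigma_z^2$ and $a_z$ exactly, confirming the algebra; in practice only $a_{z^\star}$ remains to be fixed externally, as noted in the text.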
4 Conclusions

We have proposed and detailed a generative model for the estimation of the motion of images based on a formalization of small perturbations from the observer's point of view during parameterized rotations, zooms and translations. We connected these transformations to descriptions of ecologically motivated movements of both observers and the dynamic world. The fast synthesis of naturalistic textures optimized to probe motion perception was then demonstrated, through fast GPU implementations applying auto-regression techniques, with much potential for future experimentation. This extends previous work from [10] by providing an axiomatic formulation. Finally, we used the stimuli in a psychophysical task and showed that these textures allow one to further understand the processes underlying speed estimation. By linking them directly to the standard Bayesian formalism, we show that the sensory representations of the stimulus (the likelihoods) in such models can be described directly from the generative MC model. In our case we showed this through the influence of spatial frequency on speed estimation. We have thus provided just one example of how the optimized motion stimulus and accompanying theoretical work might serve to improve our understanding of the inference behind perception. The code associated with this work is available at https://jonathanvacher.github.io.

Acknowledgements

We thank Guillaume Masson for useful discussions during the development of the experiments. We also thank Manon Bouyé and Élise Amfreville for proofreading. LUP was supported by EC FP7-269921, "BrainScaleS". The work of JV and GP was supported by the European Research Council (ERC project SIGMA-Vision). AIM and LUP were supported by SPEED ANR-13-SHS2-0006.

References

[1] Adelson, E. H. and Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–99.

[2] Dong, D. (2010).
Maximizing causal information of natural scenes in motion. In Ilg, U. J. and Masson, G. S., editors, Dynamics of Visual Motion Processing, pages 261–282. Springer US.

[3] Doretto, G., Chiuso, A., Wu, Y. N., and Soatto, S. (2003). Dynamic textures. International Journal of Computer Vision, 51(2):91–109.

[4] Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379–2394.

[5] Galerne, B. (2011). Stochastic image models and texture synthesis. PhD thesis, ENS de Cachan.

[6] Galerne, B., Gousseau, Y., and Morel, J. M. (2011). Micro-texture synthesis by phase randomization. Image Processing On Line, 1.

[7] Gregory, R. L. (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society B: Biological Sciences, 290(1038):181–197.

[8] Jogan, M. and Stocker, A. A. (2015). Signal integration in human visual speed perception. The Journal of Neuroscience, 35(25):9381–9390.

[9] Nestares, O., Fleet, D., and Heeger, D. (2000). Likelihood functions and confidence bounds for total-least-squares problems. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), volume 1, pages 523–530. IEEE Comput. Soc.

[10] Sanz-Leon, P., Vanzetta, I., Masson, G. S., and Perrinet, L. U. (2012). Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception. Journal of Neurophysiology, 107(11):3217–3226.

[11] Simoncini, C., Perrinet, L. U., Montagnini, A., Mamassian, P., and Masson, G. S. (2012). More is not always better: adaptive gain control explains dissociation between perception and action. Nature Neuroscience, 15(11):1596–1603.

[12] Sotiropoulos, G., Seitz, A. R., and Seriès, P. (2014). Contrast dependency and prior expectations in human speed perception. Vision Research, 97(0):16–23.

[13] Stocker, A. A. and Simoncelli, E. P. (2006). Noise characteristics and prior expectations in human visual speed perception.
Nature Neuroscience, 9(4):578–585. [14] Unser, M. and Tafti, P. (2014). An Introduction to Sparse Stochastic Processes. Cambridge University Press, Cambridge, UK. 367 p. [15] Unser, M., Tafti, P. D., Amini, A., and Kirshner, H. (2014). A unified formulation of Gaussian versus sparse stochastic processes - part II: discrete-domain theory. IEEE Transactions on Information Theory, 60(5):3036–3051. [16] Wei, L. Y., Lefebvre, S., Kwatra, V., and Turk, G. (2009). State of the art in example-based texture synthesis. In Eurographics 2009, State of the Art Report, EG-STAR. Eurographics Association. [17] Wei, X.-X. and Stocker, A. A. (2012). Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference. In Bartlett, P. L., Pereira, F. C. N., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, NIPS, pages 1313–1321. [18] Weiss, Y. and Fleet, D. J. (2001). Velocity likelihoods in biological and machine vision. In Probabilistic Models of the Brain: Perception and Neural Function, pages 81–100. [19] Weiss, Y., Simoncelli, E. P., and Adelson, E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604. [20] Xia, G. S., Ferradans, S., Peyré, G., and Aujol, J. F. (2014). Synthesizing and mixing stationary Gaussian texture models. SIAM Journal on Imaging Sciences, 7(1):476–508. [21] Young, R. A. and Lesperance, R. M. (2001). The Gaussian derivative model for spatial-temporal vision: II. Cortical data. Spatial Vision, 14(3):321–390.
Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling Xiaocheng Shang∗ University of Edinburgh x.shang@ed.ac.uk Zhanxing Zhu∗ University of Edinburgh zhanxing.zhu@ed.ac.uk Benedict Leimkuhler University of Edinburgh b.leimkuhler@ed.ac.uk Amos J. Storkey University of Edinburgh a.storkey@ed.ac.uk Abstract Monte Carlo sampling for Bayesian posterior inference is a common approach used in machine learning. The Markov Chain Monte Carlo procedures that are used are often discrete-time analogues of associated stochastic differential equations (SDEs). These SDEs are guaranteed to leave invariant the required posterior distribution. An area of current research addresses the computational benefits of stochastic gradient methods in this setting. Existing techniques rely on estimating the variance or covariance of the subsampling error, and typically assume constant variance. In this article, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. The proposed method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications. 1 Introduction In machine learning applications, direct sampling with the entire large-scale dataset is computationally infeasible. For instance, standard Markov Chain Monte Carlo (MCMC) methods [16], as well as typical Hybrid Monte Carlo (HMC) methods [3, 6, 9], require the calculation of the acceptance probability and the creation of informed proposals based on the whole dataset. In order to improve computational efficiency, a number of stochastic gradient methods [4, 5, 20, 21] have been proposed in the setting of Bayesian sampling based on random (and much smaller) subsets to approximate the likelihood of the whole dataset, thus substantially reducing the computational cost in practice. 
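As a toy illustration of this subsampling idea (our own sketch; the Gaussian model, data, and all variable names are hypothetical and not from the paper), the N/n-rescaled minibatch gradient of the log likelihood is an unbiased estimator of the full-data gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 50
x = rng.standard_normal(N)                      # toy dataset

# Gaussian likelihood N(x_i | theta, 1) with a flat prior:
# grad_theta log pi(x_i | theta) = x_i - theta
def grad_log_lik_full(theta):
    return np.sum(x - theta)

def grad_log_lik_noisy(theta, idx):
    # minibatch estimate with the N/n rescaling
    return (N / len(idx)) * np.sum(x[idx] - theta)

theta = 0.3
# with the full index set the estimator is exact
assert np.isclose(grad_log_lik_noisy(theta, np.arange(N)), grad_log_lik_full(theta))

# averaged over many random subsets it is unbiased (up to Monte Carlo error)
est = np.mean([grad_log_lik_noisy(theta, rng.choice(N, n, replace=False))
               for _ in range(4000)])
```

Any single minibatch gradient is noisy; it is exactly this subsampling noise that the thermostat methods discussed below are designed to dissipate.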
Welling and Teh proposed the so-called Stochastic Gradient Langevin Dynamics (SGLD) [21], combining the ideas of stochastic optimization [18] and traditional Brownian dynamics, with a sequence of stepsizes decreasing to zero. A fixed stepsize is often adopted in practice, which is the choice in this article as in Vollmer et al. [20], where a modified SGLD (mSGLD) was also introduced that was designed to reduce sampling bias. SGLD generates samples from first order Brownian dynamics, and thus, with a fixed timestep, one can show that it is unable to dissipate excess noise in gradient approximations while maintaining the desired invariant distribution [4]. A Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) method was proposed by Chen et al. [4], which relies on second order Langevin dynamics and incorporates a parameter-dependent diffusion matrix that is intended to effectively offset the stochastic perturbation of the gradient. However, it is difficult to accommodate the additional diffusion term in practice. (∗ The first and second authors contributed equally, and the listed author order was decided by lot.) Moreover, as pointed out in [5], poor estimation of it may have a significant adverse influence on the sampling of the target distribution; for example the effective system temperature may be altered. The “thermostat” idea, which is widely used in molecular dynamics [7, 13], was recently adopted in the Stochastic Gradient Nosé-Hoover Thermostat (SGNHT) by Ding et al. [5] in order to adjust the kinetic energy during simulation in such a way that the canonical ensemble is preserved (i.e. so that a prescribed constant temperature distribution is maintained). In fact, the SGNHT method is essentially equivalent to the Adaptive Langevin (Ad-Langevin) thermostat proposed earlier by Jones and Leimkuhler [10] in the molecular dynamics setting (see [15] for discussion).
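For reference, the SGLD update with a fixed stepsize h is a single noisy gradient step plus injected Gaussian noise. The following is a minimal hedged sketch (our own code, not the authors'; a toy standard Gaussian target whose exact gradient stands in for the noisy gradient):

```python
import numpy as np

def sgld_step(theta, grad_noisy_U, h, rng):
    # One SGLD update with fixed stepsize h:
    #   theta <- theta - h * grad U~(theta) + sqrt(2h) * N(0, I)
    noise = np.sqrt(2.0 * h) * rng.standard_normal(theta.shape)
    return theta - h * grad_noisy_U(theta) + noise

# Toy target: standard Gaussian, U(theta) = theta^2 / 2, so grad U(theta) = theta.
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for t in range(50_000):
    theta = sgld_step(theta, lambda th: th, h=1e-2, rng=rng)
    samples.append(theta[0])
samples = np.asarray(samples[5_000:])   # discard burn-in
```

With a clean gradient this is plain first-order Langevin dynamics; the point made in the text is that when the gradient carries subsampling noise, a fixed-stepsize SGLD has no mechanism to dissipate it.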
Despite the substantial interest generated by these methods, the mathematical foundation for stochastic gradient methods has been incomplete. The underlying dynamics of the SGNHT [5] was taken up by Leimkuhler and Shang [15], together with the design of discretization schemes with high effective order of accuracy. SGNHT methods are designed based on the assumption of constant noise variance. In this article, we propose a Covariance-Controlled Adaptive Langevin (CCAdL) thermostat that can handle parameter-dependent noise, improving both robustness and reliability in practice, and which can effectively speed up the convergence to the desired invariant distribution in large-scale machine learning applications. The rest of the article is organized as follows. In Section 2, we describe the setting of Bayesian sampling with noisy gradients and briefly review existing techniques. In Section 3, we construct the novel Covariance-Controlled Adaptive Langevin (CCAdL) method that can effectively dissipate parameter-dependent noise while maintaining the correct distribution. Various numerical experiments are performed in Section 4 to verify the usefulness of CCAdL in a wide range of large-scale machine learning applications. Finally, we summarize our findings in Section 5.
2 Bayesian Sampling with Noisy Gradients
In the typical setting of Bayesian sampling [3, 19], one is interested in drawing states from a posterior distribution defined as
π(θ|X) ∝ π(X|θ) π(θ) ,   (1)
where θ ∈ R^{N_d} is the parameter vector of interest, X denotes the entire dataset, and π(X|θ) and π(θ) are the likelihood and prior distributions, respectively. We introduce a potential energy function U(θ) by defining π(θ|X) ∝ exp(−βU(θ)), where β is a positive parameter that can be interpreted as being proportional to the reciprocal temperature of an associated physical system, i.e. β^{−1} = k_B T (k_B is the Boltzmann constant and T is the temperature). In practice, β is often set to unity for notational simplicity.
Taking the logarithm of (1) yields
U(θ) = −log π(X|θ) − log π(θ) .   (2)
Assuming the data are independent and identically distributed (i.i.d.), the logarithm of the likelihood can be calculated as
log π(X|θ) = ∑_{i=1}^{N} log π(x_i|θ) ,   (3)
where N is the size of the entire dataset. However, as already mentioned, it is computationally infeasible to deal with the entire large-scale dataset at each timestep as would typically be required in MCMC and HMC methods. Instead, in order to improve the efficiency, a random (and much smaller, n ≪ N) subset is preferred in stochastic gradient methods, in which the likelihood of the dataset for given parameters is approximated as
log π(X|θ) ≈ (N/n) ∑_{i=1}^{n} log π(x_{r_i}|θ) ,   (4)
where {x_{r_i}}_{i=1}^{n} represents a random subset of X. Thus, the “noisy” potential energy can be written as
Ũ(θ) = −(N/n) ∑_{i=1}^{n} log π(x_{r_i}|θ) − log π(θ) ,   (5)
where the negative gradient of the potential is referred to as the “noisy” force, i.e. F̃(θ) = −∇Ũ(θ). Our goal is to correctly sample the Gibbs distribution ρ(θ) ∝ exp(−βU(θ)) of (1). As in [4, 5], the gradient noise is assumed to be Gaussian with mean zero and unknown variance, in which case one may rewrite the noisy force as
F̃(θ) = −∇U(θ) + Σ(θ)^{1/2} M^{1/2} R ,   (6)
where M typically is a diagonal matrix, Σ(θ) represents the covariance matrix of the noise, and R is a vector of i.i.d. standard normal random variables. Note that Σ(θ)^{1/2} M^{1/2} R here is actually equivalent to N(0, Σ(θ)M). In a typical setting of numerical integration with associated stepsize h, one has
h F̃(θ) = h ( −∇U(θ) + Σ(θ)^{1/2} M^{1/2} R ) = −h ∇U(θ) + √h (hΣ(θ))^{1/2} M^{1/2} R ,   (7)
and therefore, assuming a constant covariance matrix (i.e. Σ = σ²I, where I is the identity matrix), the SGNHT method by Ding et al.
[5] has the following underlying dynamics, written as a standard Itô stochastic differential equation (SDE) system [15]:
dθ = M^{−1} p dt ,
dp = −∇U(θ) dt + σ√h M^{1/2} dW − ξ p dt + (2Aβ^{−1})^{1/2} M^{1/2} dW_A ,   (8)
dξ = µ^{−1} ( pᵀ M^{−1} p − N_d k_B T ) dt ,
where, colloquially, dW and dW_A, respectively, represent vectors of independent Wiener increments, and are often informally denoted by N(0, dt I) [4]. The coefficient (2Aβ^{−1})^{1/2} M^{1/2} represents the strength of artificial noise added into the system to improve ergodicity, and A, which can be termed the “effective friction”, is a positive parameter proportional to the variance of the noise. The auxiliary variable ξ ∈ R is governed by a Nosé-Hoover device [8, 17] via a negative feedback mechanism, i.e. when the instantaneous temperature (average kinetic energy per degree of freedom), calculated as
k_B T = pᵀ M^{−1} p / N_d ,   (9)
is below the target temperature, the “dynamical friction” ξ would decrease, allowing an increase of temperature, while ξ would increase when the temperature is above the target. µ is a coupling parameter which is referred to as the “thermal mass” in the molecular dynamics setting.
Proposition 1 (see Jones and Leimkuhler [10]): The SGNHT method (8) preserves the modified Gibbs (stationary) distribution
ρ̃_β(θ, p, ξ) = Z^{−1} exp(−βH(θ, p)) exp(−βµ(ξ − ξ̄)²/2) ,   (10)
where Z is the normalizing constant, H(θ, p) = pᵀ M^{−1} p/2 + U(θ) is the Hamiltonian, and
ξ̄ = A + βhσ²/2 .   (11)
Proposition 1 tells us that the SGNHT method can adaptively dissipate excess noise pumped into the system while maintaining the correct distribution. The variance of the gradient noise, σ², does not need to be known a priori. As long as σ² is constant, the auxiliary variable ξ will be able to automatically find its mean value ξ̄ on the fly. However, with a parameter-dependent covariance matrix Σ(θ), the SGNHT method (8) would not produce the required target distribution (10). Ding et al.
[5] claimed that it is reasonable to assume the covariance matrix Σ(θ) is constant when the size of the dataset, N, is large, in which case the variance of the posterior of θ is small. The magnitude of the posterior variance does not actually relate to the constancy of Σ, however, and in general Σ is not constant. Simply ignoring the non-constancy of Σ can have a significant impact on the performance of the method (most notably on the stability, measured by the largest usable stepsize). Therefore, it is essential to have an approach that can handle parameter-dependent noise. In the following section we propose a covariance-controlled thermostat that can effectively dissipate parameter-dependent noise while maintaining the target stationary distribution.
3 Covariance-Controlled Adaptive Langevin Thermostat
As mentioned in the previous section, the SGNHT method (8) can only dissipate noise with a constant covariance matrix. A parameter-dependent covariance matrix does not in general imply the required “thermal equilibrium”, i.e. the system cannot be expected to converge to the desired invariant distribution (10), typically resulting in poor estimation of functions of the parameters of interest. In fact, in that case it is not clear whether an invariant distribution exists at all. In order to construct a stochastic-dynamical system that preserves the canonical distribution, we suggest adding a suitable damping (viscous) term to effectively dissipate the parameter-dependent gradient noise. To this end, we propose the following Covariance-Controlled Adaptive Langevin (CCAdL) thermostat:
dθ = M^{−1} p dt ,
dp = −∇U(θ) dt + (hΣ(θ))^{1/2} M^{1/2} dW − (h/2) β Σ(θ) p dt − ξ p dt + (2Aβ^{−1})^{1/2} M^{1/2} dW_A ,   (12)
dξ = µ^{−1} ( pᵀ M^{−1} p − N_d k_B T ) dt .
Proposition 2: The CCAdL thermostat (12) preserves the modified Gibbs (stationary) distribution
ρ̂_β(θ, p, ξ) = Z^{−1} exp(−βH(θ, p)) exp(−βµ(ξ − A)²/2) .
(13)
Proof: The Fokker-Planck equation corresponding to (12) is
ρ_t = L†ρ := −M^{−1}p·∇_θρ + ∇U(θ)·∇_pρ + (h/2)∇_p·(Σ(θ)M∇_pρ) + (h/2)β∇_p·(Σ(θ)pρ) + ξ∇_p·(pρ) + Aβ^{−1}∇_p·(M∇_pρ) − µ^{−1}(pᵀM^{−1}p − N_d k_B T)∇_ξρ .
Inserting ρ̂_β (13) into the Fokker-Planck operator L† shows that it vanishes. □
The incorporation of the parameter-dependent covariance matrix Σ(θ) in (12) is intended to offset the covariance matrix coming from the gradient approximation. However, in practice one does not know Σ(θ) a priori. Thus instead one must estimate Σ(θ) during the simulation, a task which will be addressed in Section 3.1. This procedure is related to the method used in the SGHMC method proposed by Chen et al. [4], which uses dynamics of the following form:
dθ = M^{−1} p dt ,
dp = −∇U(θ) dt + (hΣ(θ))^{1/2} M^{1/2} dW − A p dt + (2β^{−1}(AI − hΣ(θ)/2))^{1/2} M^{1/2} dW_A .   (14)
It can be shown that the SGHMC method preserves the Gibbs canonical distribution
ρ_β(θ, p) = Z^{−1} exp(−βH(θ, p)) .   (15)
Although both CCAdL (12) and SGHMC (14) preserve their respective invariant distributions, let us note several advantages of the former over the latter in practice: (i) CCAdL and SGHMC both require estimation of the covariance matrix Σ(θ) during simulation, which can be costly in high dimension. In numerical experiments, we have found that simply using the diagonal of the covariance matrix, at significantly reduced computational cost, works quite well in CCAdL. By contrast, it is difficult to find a suitable value of the parameter A in SGHMC, since one has to make sure the matrix AI − hΣ(θ)/2 is positive semi-definite. One may attempt to use a large value of the “effective friction” A and/or a small stepsize h. However, too large a friction would essentially reduce SGHMC to SGLD, which is not desirable, as pointed out in [4], while an extremely small stepsize would significantly impact the computational efficiency. (ii) Estimation of the covariance matrix Σ(θ) unavoidably introduces additional noise in both CCAdL and SGHMC.
Nonetheless, CCAdL can still effectively control the system temperature (i.e. maintain the correct distribution of the momenta) due to the use of the stabilizing Nosé-Hoover control, while in SGHMC poor estimation of the covariance matrix may lead to significant deviations of the system temperature (as well as the distribution of the momenta), resulting in poor sampling of the parameters of interest.
3.1 Covariance Estimation of Noisy Gradients
Under the assumption that the noise of the stochastic gradient follows a normal distribution, we apply a similar method to that of [2] to estimate the covariance matrix associated with the noisy gradient. If we let g(θ; x) = ∇_θ log π(x|θ) and assume that the size of the subset, n, is large enough for the central limit theorem to hold, we have
(1/n) ∑_{i=1}^{n} g(θ_t; x_{r_i}) ∼ N( E_x[g(θ_t; x)], (1/n) I_t ) ,   (16)
where I_t = Cov[g(θ_t; x)] is the covariance of the gradient at θ_t. Given the noisy (stochastic) gradient based on the current subset, ∇Ũ(θ_t) = −(N/n) ∑_{i=1}^{n} g(θ_t; x_{r_i}) − ∇log π(θ_t), and the clean (full) gradient, ∇U(θ_t) = −∑_{i=1}^{N} g(θ_t; x_i) − ∇log π(θ_t), we have E_x[∇Ũ(θ_t)] = E_x[∇U(θ_t)], and thus
∇Ũ(θ_t) = ∇U(θ_t) + N(0, (N²/n) I_t) ,   (17)
i.e. Σ(θ_t) = N² I_t / n. Assuming θ_t does not change dramatically over time, we use the moving average update to estimate I_t,
Î_t = (1 − κ_t) Î_{t−1} + κ_t V(θ_t) ,   (18)
where κ_t = 1/t, and
V(θ_t) = (1/(n−1)) ∑_{i=1}^{n} ( g(θ_t; x_{r_i}) − ḡ(θ_t) )( g(θ_t; x_{r_i}) − ḡ(θ_t) )ᵀ   (19)
is the empirical covariance of the gradient; ḡ(θ_t) represents the mean gradient of the log likelihood computed from a subset. As proved in [2], this estimator has a convergence order of O(1/N).
Algorithm 1 Covariance-Controlled Adaptive Langevin (CCAdL)
1: Input: h, A, {κ_t}_{t=1}^{T̂}.
2: Initialize θ_0, p_0, Î_0, and ξ_0 = A.
3: for t = 1, 2, . . . , T̂ do
4:   θ_t = θ_{t−1} + p_{t−1}h;
5:   Estimate Î_t using Eq. (18);
6:   p_t = p_{t−1} − ∇Ũ(θ_t)h − (h/2)(N²/n)Î_t p_{t−1}h − ξ_{t−1}p_{t−1}h + √(2Ah) N(0, 1);
7:   ξ_t = ξ_{t−1} + (p_tᵀp_t/N_d − 1)h;
8: end for
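To make Algorithm 1 concrete, the following is a hedged toy transcription (our own code, not the authors'): a one-dimensional Gaussian-mean model with a flat prior, diagonal (here scalar) covariance estimation as in Eq. (18), and M = I, β = 1, µ = N_d as in the text. With a flat prior the posterior is N(x̄, 1/N), so the chain should concentrate around the sample mean x̄:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, h, A, Nd = 1000, 100, 1e-3, 1.0, 1
x = rng.standard_normal(N) + 0.5           # data from N(0.5, 1); flat prior on theta
# per-sample gradient of the log likelihood N(x_i | theta, 1): g_i = x_i - theta
theta, p, xi, I_hat = 0.0, 0.0, A, 0.0
samples = []
for t in range(1, 60_001):
    theta += p * h                                          # line 4
    idx = rng.choice(N, n, replace=False)
    g = x[idx] - theta
    kappa = 1.0 / t
    I_hat = (1 - kappa) * I_hat + kappa * g.var(ddof=1)     # line 5: Eq. (18), diagonal
    grad_noisy = -(N / n) * g.sum()                         # noisy force gradient, Eq. (5)
    p += (-grad_noisy * h                                   # line 6
          - 0.5 * h * (N**2 / n) * I_hat * p * h            #   covariance-controlled damping
          - xi * p * h + np.sqrt(2 * A * h) * rng.standard_normal())
    xi += (p * p / Nd - 1.0) * h                            # line 7: Nose-Hoover feedback
    samples.append(theta)
post_mean = np.mean(samples[10_000:])   # should approach x.mean() (posterior sd ~ 1/sqrt(N))
```

Note how the extra damping term with coefficient (h/2)(N²/n)Î_t balances, by fluctuation-dissipation, the subsampling noise of variance hΣ(θ) per unit time; no explicit gradient noise is injected because the minibatch itself supplies it.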
As already mentioned, estimating the full covariance matrix is computationally infeasible in high dimension. However, we have found that employing a diagonal approximation of the covariance matrix (i.e. only estimating the variance along each dimension of the noisy gradient) works quite well in practice, as demonstrated in Section 4. The procedure of the CCAdL method is summarized in Algorithm 1, where we simply used M = I, β = 1, and µ = N_d in order to be consistent with the original implementation of SGNHT [5]. Note that this is a simple, first order (in terms of the stepsize) algorithm. A recent article [15] has introduced schemes with higher order of accuracy, but our interest here is in the direct comparison of the underlying machinery of SGHMC, SGNHT, and CCAdL, so we avoid further modifications and enhancements related to timestepping at this stage. In the following section, we compare the newly-established CCAdL method with SGHMC and SGNHT on various machine learning tasks to demonstrate the benefits of CCAdL in Bayesian sampling with a noisy gradient.
4 Numerical Experiments
4.1 Bayesian Inference for Gaussian Distribution
We first compare the performance of the newly-established CCAdL method with SGHMC and SGNHT on a simple task using synthetic data, i.e. Bayesian inference of both the mean and variance of a one-dimensional normal distribution. We apply the same experimental setting as in [5]. We generated N = 100 samples from the standard normal distribution N(0, 1). We used the likelihood function N(x_i|µ, γ^{−1}) and assigned a Normal-Gamma distribution as the prior, i.e. µ, γ ∼ N(µ|0, γ) Gam(γ|1, 1). Then the corresponding posterior distribution is another Normal-Gamma distribution, i.e. (µ, γ)|X ∼ N(µ|µ_N, (κ_N γ)^{−1}) Gam(γ|α_N, β_N), with
µ_N = N x̄/(N + 1) ,  κ_N = 1 + N ,  α_N = 1 + N/2 ,  β_N = 1 + ∑_{i=1}^{N} (x_i − x̄)²/2 + N x̄²/(2(1 + N)) ,
where x̄ = ∑_{i=1}^{N} x_i/N.
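The closed-form posterior parameters above are straightforward to compute directly; the following hedged sketch (function name ours) simply transcribes the formulas from the text:

```python
import numpy as np

def normal_gamma_posterior(x):
    # Posterior parameters for likelihood N(x_i | mu, gamma^{-1}) with
    # prior mu, gamma ~ N(mu | 0, gamma) Gam(gamma | 1, 1), as given in the text.
    N, xbar = len(x), float(np.mean(x))
    mu_N = N * xbar / (N + 1)
    kappa_N = 1 + N
    alpha_N = 1 + N / 2
    beta_N = 1 + np.sum((x - xbar) ** 2) / 2 + N * xbar ** 2 / (2 * (1 + N))
    return mu_N, kappa_N, alpha_N, beta_N
```

For instance, with four observations all equal to 1 this gives µ_N = 0.8, κ_N = 5, α_N = 3, and β_N = 1.4, which makes the exact posterior easy to plot against the samplers' output.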
A random subset of size n = 10 was selected at each timestep to approximate the full gradient, resulting in the following stochastic gradients:
∇_µ Ũ = (N + 1)µγ − γ(N/n) ∑_{i=1}^{n} x_{r_i} ,
∇_γ Ũ = 1 − (N + 1)/(2γ) + µ²/2 + (N/(2n)) ∑_{i=1}^{n} (x_{r_i} − µ)² .
It can be seen that the variance of the stochastic gradient noise is no longer constant; it depends on the size of the subset, n, and on the values of µ and γ at each iteration. This directly violates the constant noise variance assumption of SGNHT [5], while CCAdL adjusts to the varying noise variance. The marginal distributions of µ and γ obtained from the various methods with different combinations of h and A are compared in Figure 1, with Table 1 giving the corresponding root mean square error (RMSE) of the distribution and the autocorrelation time from 10⁶ samples. In most of the cases, both SGNHT and CCAdL easily outperform the SGHMC method, possibly due to the presence of the Nosé-Hoover device, with SGHMC only showing superiority for small values of h and large values of A, neither of which is desirable in practice, as discussed in Section 3. Between SGNHT and the newly-proposed CCAdL method, the latter achieves better performance in each of the cases investigated, highlighting the importance of the covariance control with parameter-dependent noise.
[Figure 1 here: density plots of µ (top row) and γ (bottom row), curves True, SGHMC, SGNHT, CCAdL, for panels (a) h = 0.001, A = 1; (b) h = 0.001, A = 10; (c) h = 0.01, A = 1; (d) h = 0.01, A = 10.]
Figure 1: Comparisons of marginal distribution (density) of µ (top row) and γ (bottom row) with various values of h and A indicated in each column.
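One can sanity-check the two stochastic gradients above against each other: for a fixed subset they are partial derivatives of the same noisy potential Ũ(µ, γ), so the mixed partials ∂(∇_µŨ)/∂γ and ∂(∇_γŨ)/∂µ must agree. A hedged numerical check (our own code; data and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100
x = rng.standard_normal(N)
sub = x[:10]                       # a fixed minibatch
n = len(sub)

def grad_mu(mu, gamma):
    # transcription of grad_mu U~ from the text
    return (N + 1) * mu * gamma - gamma * (N / n) * np.sum(sub)

def grad_gamma(mu, gamma):
    # transcription of grad_gamma U~ from the text
    return (1.0 - (N + 1) / (2.0 * gamma) + mu ** 2 / 2.0
            + (N / (2.0 * n)) * np.sum((sub - mu) ** 2))

mu, gamma, eps = 0.3, 1.2, 1e-5
d_mu_by_gamma = (grad_mu(mu, gamma + eps) - grad_mu(mu, gamma - eps)) / (2 * eps)
d_gamma_by_mu = (grad_gamma(mu + eps, gamma) - grad_gamma(mu - eps, gamma)) / (2 * eps)
```

Both central differences are exact here (one gradient is linear in γ, the other quadratic in µ), so the two numbers coincide up to rounding, confirming that the printed formulas derive from a common potential.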
The peak region is highlighted in the inset.
Table 1: Comparisons of (RMSE, autocorrelation time) of (µ, γ) of various methods for Bayesian inference of Gaussian mean and variance.
Methods | h = 0.001, A = 1 | h = 0.001, A = 10 | h = 0.01, A = 1 | h = 0.01, A = 10
SGHMC | (0.0148, 236.12) | (0.0029, 333.04) | (0.0531, 29.78) | (0.0132, 39.33)
SGNHT | (0.0037, 238.32) | (0.0035, 406.71) | (0.0044, 26.71) | (0.0043, 55.00)
CCAdL | (0.0034, 238.06) | (0.0031, 402.45) | (0.0021, 26.71) | (0.0035, 54.43)
4.2 Large-scale Bayesian Logistic Regression
We then consider a Bayesian logistic regression model trained on the benchmark MNIST dataset for binary classification of digits 7 and 9, using 12,214 training data points and a test set of size 2037. A 100-dimensional random projection of the original features was used. We used the likelihood function π({x_i, y_i}_{i=1}^{N} | w) ∝ ∏_{i=1}^{N} 1/(1 + exp(−y_i wᵀx_i)) and the prior distribution π(w) ∝ exp(−wᵀw/2). A subset of size n = 500 was used at each timestep. Since the dimensionality of this problem is not that high, a full covariance estimation was used for CCAdL. We investigate the convergence speed of each method by measuring the test log likelihood using the posterior mean against the number of passes over the entire dataset; see Figure 2 (top row). CCAdL displays significant improvements over SGHMC and SGNHT for different values of h and A: (1) CCAdL converges much faster than the other two, which also indicates its faster mixing speed and shorter burn-in period; (2) CCAdL is robust to different values of the “effective friction” A, while SGHMC and SGNHT rely on a relatively large value of A (especially the SGHMC method), which is intended to dominate the gradient noise. To compare the sample quality obtained from each method, Figure 2 (bottom row) plots the two-dimensional marginal posterior distribution in randomly-selected dimensions 2 and 5, based on 10⁶ samples from each method after the burn-in period (i.e.
we start to collect samples when the test log likelihood stabilizes). The true (reference) distribution was obtained by a sufficiently long run of standard HMC. We implemented 10 runs of standard HMC and found there was no variation between these runs, which qualifies it as the true (reference) distribution. Again, CCAdL shows much better performance than SGHMC and SGNHT. Note that SGHMC does not even fit in the region of the plot, and in fact it shows significant deviation even in the estimation of the mean.
[Figure 2 here: top row, test log likelihood against number of passes; bottom row, two-dimensional marginals of (w2, w5); curves SGHMC, SGNHT, CCAdL with A = 1 and A = 10, plus True (HMC); panels (a) h = 0.2×10⁻⁴, (b) h = 0.5×10⁻⁴, (c) h = 1×10⁻⁴.]
Figure 2: Comparisons of Bayesian Logistic Regression of various methods on the MNIST dataset of digits 7 and 9 with various values of h and A: (top row) test log likelihood using the posterior mean against the number of passes over the entire dataset; (bottom row) two-dimensional marginal posterior distribution in (randomly selected) dimensions 2 and 5 with A = 10 fixed, based on 10⁶ samples from each method after the burn-in period (i.e. we start to collect samples when the test log likelihood stabilizes). The magenta circle is the true (reference) posterior mean obtained from standard HMC, and crosses represent the sample means computed from the various methods.
Ellipses represent iso-probability contours covering 95% of the probability mass. Note that the contour of SGHMC is well beyond the scale of the figure and thus we do not include it here.
4.3 Discriminative Restricted Boltzmann Machine (DRBM)
The DRBM [11] is a self-contained non-linear classifier, and the gradient of its discriminative objective can be computed explicitly. Due to limited space, we refer the reader to [11] for more details. We trained a DRBM on different large-scale multi-class datasets from the LIBSVM1 dataset collection, including connect-4, letter, and SensIT Vehicle acoustic. Detailed information on these datasets is presented in Table 2. We selected the number of hidden units using cross-validation to achieve the best results. Since the dimension of the parameters, N_d, is relatively high, we only used diagonal covariance matrix estimation for CCAdL to significantly reduce the computational cost, i.e. only estimating the variance along each dimension. The size of the subset was chosen as 500–1000 to obtain a reasonable variance estimation. For each dataset, we chose the first 20% of the total number of passes over the entire dataset as the burn-in period, and collected the remaining samples for prediction.
Table 2: Datasets used in DRBM with corresponding parameter configurations.
Datasets | training/test set | classes | features | hidden units | total number of parameters N_d
connect-4 | 54,046/13,511 | 3 | 126 | 20 | 2603
letter | 10,500/5,000 | 26 | 16 | 100 | 4326
acoustic | 78,823/19,705 | 3 | 50 | 20 | 1083
The error rate on the test set, computed by the various methods using the posterior mean against the number of passes over the entire dataset, is plotted in Figure 3. It can be observed that SGHMC and SGNHT only work well with a large value of the effective friction A, which corresponds to a strong random walk effect and thus slows down the convergence.
On the contrary, CCAdL works reliably (much better than the other two) over a wide range of A, and, more importantly, in the large stepsize regime, which speeds up the convergence rate in relation to the computational work performed. It can easily be seen that the performance of SGHMC heavily relies on using a small value of h and a large value of A, which significantly limits its usefulness in practice.
1 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html
[Figure 3 here: test error against number of passes for SGHMC, SGNHT, and CCAdL; rows connect-4 (h = 0.5, 1, 2 ×10⁻³; A = 10, 50), letter (h = 1, 2, 5 ×10⁻³; A = 1, 10), and acoustic (h = 0.2, 0.5, 1 ×10⁻³; A = 1, 10).]
Figure 3: Comparisons of DRBM on datasets connect-4 (top
row), letter (middle row), and acoustic (bottom row) with various values of h and A indicated: test error rate of the various methods using the posterior mean against the number of passes over the entire dataset.
5 Conclusions and Future Work
In this article, we have proposed a novel Covariance-Controlled Adaptive Langevin (CCAdL) formulation that can effectively dissipate parameter-dependent noise while maintaining a desired invariant distribution. CCAdL combines ideas of SGHMC and SGNHT from the literature, but achieves significant improvements over each of these methods in practice. The additional error introduced by covariance estimation is expected to be small in a relative sense, i.e. substantially smaller than the error arising from the noisy gradient. Our findings have been verified in large-scale machine learning applications. In particular, we have consistently observed that SGHMC relies on a small stepsize h and large friction A, which significantly reduces its usefulness in practice as discussed. The techniques presented in this article could be of use in the more general setting of large-scale Bayesian sampling and optimization, which we leave for future work. A naive nonsymmetric splitting method has been applied to CCAdL for a fair comparison in this article. However, we point out that the optimal design of splitting methods for ergodic SDE systems has been explored recently in the mathematics community [1, 13, 14]. Moreover, it has been shown in [15] that a certain type of symmetric splitting method for the Ad-Langevin/SGNHT method with a clean (full) gradient inherits the superconvergence property (i.e. fourth order convergence to the invariant distribution for configurational quantities) recently demonstrated in the setting of Langevin dynamics [12, 14]. We leave further exploration of this direction in the context of noisy gradients for future work.
References
[1] A. Abdulle, G. Vilmart, and K. C. Zygalakis.
Long time accuracy of Lie-Trotter splitting methods for Langevin dynamics. SIAM Journal on Numerical Analysis, 53(1):1–16, 2015. [2] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning, pages 1591–1598, 2012. [3] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011. [4] T. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, pages 1683–1691, 2014. [5] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian sampling using stochastic gradient thermostats. In Advances in Neural Information Processing Systems 27, pages 3203–3211, 2014. [6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987. [7] D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications, Second Edition. Academic Press, 2001. [8] W. G. Hoover. Computational Statistical Mechanics, Studies in Modern Thermodynamics. Elsevier Science, 1991. [9] A. M. Horowitz. A generalized guided Monte Carlo algorithm. Physics Letters B, 268(2):247–252, 1991. [10] A. Jones and B. Leimkuhler. Adaptive stochastic methods for sampling driven molecular systems. The Journal of Chemical Physics, 135:084125, 2011. [11] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In Proceedings of the 25th International Conference on Machine Learning, pages 536–543, 2008. [12] B. Leimkuhler and C. Matthews. Rational construction of stochastic numerical methods for molecular sampling. Applied Mathematics Research eXpress, 2013(1):34–56, 2013. [13] B. Leimkuhler and C. Matthews. Molecular Dynamics: With Deterministic and Stochastic Numerical Methods. Springer, 2015. [14] B. Leimkuhler, C. Matthews, and G.
Stoltz. The computation of averages from equilibrium and nonequilibrium Langevin molecular dynamics. IMA Journal of Numerical Analysis, 2015. [15] B. Leimkuhler and X. Shang. Adaptive thermostats for noisy gradient systems. arXiv preprint arXiv:1505.06889, 2015. [16] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087, 1953. [17] S. Nosé. A unified formulation of the constant temperature molecular dynamics methods. The Journal of Chemical Physics, 81:511, 1984. [18] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(2):400–407, 1951. [19] C. Robert and G. Casella. Monte Carlo Statistical Methods, Second Edition. Springer, 2004. [20] S. J. Vollmer, K. C. Zygalakis, and Y. W. Teh. (Non-) asymptotic properties of stochastic gradient Langevin dynamics. arXiv preprint arXiv:1501.00438, 2015. [21] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pages 681–688, 2011.
Semi-supervised Sequence Learning

Andrew M. Dai, Google Inc., adai@google.com
Quoc V. Le, Google Inc., qvl@google.com

Abstract

We present two approaches to using unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, i.e., a language model in NLP. The second approach is to use a sequence autoencoder, which reads the input sequence into a vector and then predicts the input sequence again. These two algorithms can be used as a “pretraining” step for a later supervised sequence learning algorithm. In other words, the parameters obtained from the pretraining step can then be used as a starting point for other supervised training models. In our experiments, we find that long short-term memory recurrent networks, after being pretrained with the two approaches, become more stable to train and generalize better. With pretraining, we were able to achieve strong performance in many classification tasks, such as text classification with IMDB or DBpedia, or image recognition with CIFAR-10.

1 Introduction

Recurrent neural networks (RNNs) are powerful tools for modeling sequential data, yet training them by back-propagation through time [37, 27] can be difficult [9]. For that reason, RNNs have rarely been used for natural language processing tasks such as text classification despite their ability to preserve word ordering. On a variety of document classification tasks, we find that it is possible to train an LSTM [10] RNN to achieve good performance with careful tuning of hyperparameters. We also find that a simple pretraining step can significantly stabilize the training of LSTMs. A simple pretraining method is to use a recurrent language model as a starting point of the supervised network. A slightly better method is to use a sequence autoencoder, which uses an RNN to read a long input sequence into a single vector. This vector will then be used to reconstruct the original sequence.
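The sequence autoencoder idea can be sketched with a toy vanilla RNN in numpy. This is a minimal illustration, not the paper's implementation: the paper uses LSTMs, all sizes and variable names here are invented, and the greedy decoder is a simplification. It does mirror the paper's weight sharing, where the decoder reuses the encoder's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8                        # toy vocabulary size and hidden size
E  = rng.normal(0, 0.1, (V, H))    # input embeddings
W  = rng.normal(0, 0.1, (H, H))    # recurrent weights, shared by encoder and decoder
Wo = rng.normal(0, 0.1, (H, V))    # output projection (decoder only)

def step(h, x_id):
    # one RNN step: combine the previous hidden state with the embedded token
    return np.tanh(h @ W + E[x_id])

def autoencode(seq):
    # encoder: read the whole sequence into a single hidden vector
    h = np.zeros(H)
    for x in seq:
        h = step(h, x)
    # decoder: reuse the same weights to emit a reconstruction, greedily
    out, x = [], seq[-1]
    for _ in seq:
        h = step(h, x)
        x = int(np.argmax(h @ Wo))
        out.append(x)
    return out

recon = autoencode([1, 2, 3, 4])
print(len(recon))  # the reconstruction has the same length as the input
```

After training such a model to reconstruct its inputs, the embedding and recurrent weights would serve as the initialization for the supervised network, which is the use made of them in the paper.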
The weights obtained from pretraining can then be used as an initialization for the standard LSTM RNNs. We believe that this semi-supervised approach [1] is superior to other unsupervised sequence learning methods, e.g., Paragraph Vectors [19], because it allows for easy fine-tuning. In our experiments with document classification tasks on 20 Newsgroups [17] and DBpedia [20], and sentiment analysis on IMDB [22] and Rotten Tomatoes [26], LSTMs pretrained by recurrent language models or sequence autoencoders are usually better than LSTMs initialized randomly. Another important result from our experiments is that it is possible to use unlabeled data from related tasks to improve the generalization of a subsequent supervised model. For example, using unlabeled data from Amazon reviews to pretrain the sequence autoencoders can improve classification accuracy on Rotten Tomatoes from 79.0% to 83.3%, equivalent to adding substantially more labeled data. This evidence supports the thesis that it is possible to use unsupervised learning with more unlabeled data to improve supervised learning. With sequence autoencoders, and outside unlabeled data, LSTMs are able to match or surpass previously reported results.

Our semi-supervised learning approach is related to Skip-Thought vectors [14], with two differences. The first difference is that Skip-Thought has a harder objective, because it predicts adjacent sentences. The second is that Skip-Thought is a pure unsupervised learning algorithm, without fine-tuning.

2 Sequence autoencoders and recurrent language models

Our approach to sequence autoencoding is inspired by the work in sequence-to-sequence learning (also known as seq2seq) by Sutskever et al. [32], which has been successfully used for machine translation [21, 11], text parsing [33], image captioning [35], video analysis [31], speech recognition [4] and conversational modeling [28, 34].
Key to their approach is the use of a recurrent network as an encoder to read an input sequence into a hidden state, which is the input to a decoder recurrent network that predicts the output sequence. The sequence autoencoder is similar to the above concept, except that it is an unsupervised learning model. The objective is to reconstruct the input sequence itself. That means we replace the output sequence in the seq2seq framework with the input sequence. In our sequence autoencoders, the weights for the decoder network and the encoder network are the same (see Figure 1).

Figure 1: The sequence autoencoder for the sequence “WXYZ”. The sequence autoencoder uses a recurrent network to read the input sequence into the hidden state, which can then be used to reconstruct the original sequence.

We find that the weights obtained from the sequence autoencoder can be used as an initialization for another supervised network, one which tries to classify the sequence. We hypothesize that this is because the network can already memorize the input sequence. This, together with the fact that the gradients have shortcuts, is our hypothesis for why the sequence autoencoder is a good and stable approach to initializing recurrent networks. A significant property of the sequence autoencoder is that it is unsupervised, and thus can be trained with large quantities of unlabeled data to improve its quality. Our result is that additional unlabeled data can improve the generalization ability of recurrent networks. This is especially useful for tasks that have limited labeled data. We also find that recurrent language models [2, 24] can be used as a pretraining method for LSTMs. This is equivalent to removing the encoder part of the sequence autoencoder in Figure 1. Our experimental results show that this approach works better than LSTMs with random initialization.

3 Overview of baselines

In our experiments, we use LSTM recurrent networks [10] because they are generally better than RNNs.
Our LSTM implementation is standard and has input gates, forget gates, and output gates [6, 7, 8]. We compare this basic LSTM against an LSTM initialized with the sequence autoencoder method. When the LSTM is initialized with a sequence autoencoder, the method is called SA-LSTM in our experiments. When the LSTM is initialized with a language model, the method is called LM-LSTM. We also compare our method to other baselines, e.g., bag-of-words methods or paragraph vectors, previously reported on the same datasets. In most of our experiments, our output layer predicts the document label from the LSTM output at the last timestep. We also experiment with the approach of putting the label at every timestep and linearly increasing the weights of the prediction objectives from 0 to 1 [25]. This way we can inject gradients into earlier steps in the recurrent networks. We call this approach linear label gain. Lastly, we also experiment with the method of jointly training the supervised learning task with the sequence autoencoder and call this method joint training.

4 Experiments

In our experiments with LSTMs, we follow the basic recipes as described in [7, 32] by clipping the cell outputs and gradients. The benchmarks of focus are text understanding tasks, with all datasets being publicly available. The tasks are sentiment analysis (IMDB and Rotten Tomatoes) and text classification (20 Newsgroups and DBpedia). Commonly used methods on these datasets, such as bag-of-words or n-grams, typically ignore long-range ordering information (e.g., modifiers and their objects may be separated by many unrelated words), so one would expect recurrent methods which preserve ordering information to perform well. Nevertheless, due to the difficulty in optimizing these networks, recurrent models are not the method of choice for document classification. In our experiments with the sequence autoencoder, we train it to reproduce the full document after reading all the input words.
In other words, we do not perform any truncation or windowing. We add an end-of-sentence marker to the end of each input sequence and train the network to start reproducing the sequence after that marker. To speed up performance and reduce GPU memory usage, we perform truncated backpropagation up to 400 timesteps from the end of the sequence. We preprocess the text so that punctuation is treated as separate tokens, and we ignore any non-English characters and words in the DBpedia text. We also remove words that only appear once in each dataset and do not perform any term weighting or stemming. After training the recurrent language model or the sequence autoencoder for roughly 500K steps with a batch size of 128, we use both the word embedding parameters and the LSTM weights to initialize the LSTM for the supervised task. We then train on that task while fine-tuning both the embedding parameters and the weights, and use early stopping when the validation error starts to increase. We choose the dropout parameters based on a validation set. Using SA-LSTMs, we are able to match or surpass reported results for all datasets. It is important to emphasize that previous best results come from various different methods, so it is significant that one method achieves strong results for all datasets, presumably because such a method can be used as a general model for any similar task. A summary of the results of the experiments is shown in Table 1. More details of the experiments are as follows.

Table 1: A summary of the error rates of SA-LSTMs and previous best reported results.
Dataset          SA-LSTM  Previous best result
IMDB             7.24%    7.42%
Rotten Tomatoes  16.7%    18.5%
20 Newsgroups    15.6%    17.1%
DBpedia          1.19%    1.74%

4.1 Sentiment analysis experiments with IMDB

In this first set of experiments, we benchmark our methods on the IMDB movie sentiment dataset, proposed by Maas et al. [22].1 There are 25,000 labeled and 50,000 unlabeled documents in the training set and 25,000 in the test set.
We use 15% of the labeled training documents as a validation set. The average length of each document is 241 words and the maximum length of a document is 2,526 words. The previous baselines are bag-of-words, ConvNets [13] or Paragraph Vectors [19]. Since the documents are long, one might expect that it is difficult for recurrent networks to learn. We however find that, with tuning, it is possible to train LSTM recurrent networks to fit the training set. For example, if we set the size of the hidden state to be 512 units and truncate the backprop to be 400, an LSTM can do fairly well. With random embedding dimension dropout [38] and random word dropout (not published previously), we are able to reach performance of around 86.5% accuracy in the test set, which is approximately 5% worse than most baselines.

1 http://ai.Stanford.edu/amaas/data/sentiment/index.html

Fundamentally, the main problem with this approach is that it is unstable: if we were to increase the number of hidden units or to increase the number of backprop steps, the training breaks down very quickly: the objective function explodes even with careful tuning of the gradient clipping. This is because LSTMs are sensitive to the hyperparameters for long documents. In contrast, we find that the SA-LSTM works better and is more stable. If we use the sequence autoencoders, changing the size of the hidden state or the number of backprop steps hardly affects the training of LSTMs. This is important because the models become more practical to train. Using sequence autoencoders, we overcome the optimization instability in LSTMs in such a way that it is fast and easy to achieve perfect classification on the training set. To avoid overfitting, we again use input dimension dropout, with the dropout rate chosen on a validation set. We find that dropping out 80% of the input embedding dimensions works well for this dataset. The results of our experiments are shown in Table 2 together with previous baselines.
We also add an additional baseline where we initialize an LSTM with word2vec embeddings on the training set.

Table 2: Performance of models on the IMDB sentiment classification task.
Model                                              Test error rate
LSTM with tuning and dropout                       13.50%
LSTM initialized with word2vec embeddings          10.00%
LM-LSTM (see Section 2)                            7.64%
SA-LSTM (see Figure 1)                             7.24%
SA-LSTM with linear gain (see Section 3)           9.17%
SA-LSTM with joint training (see Section 3)        14.70%
Full+Unlabeled+BoW [22]                            11.11%
WRRBM + BoW (bnc) [22]                             10.77%
NBSVM-bi (Naïve Bayes SVM with bigrams) [36]       8.78%
seq2-bown-CNN (ConvNet with dynamic pooling) [12]  7.67%
Paragraph Vectors [19]                             7.42%

The results confirm that SA-LSTM with input embedding dropout can be as good as previous best results on this dataset. In contrast, LSTMs without sequence autoencoders have trouble optimizing the objective because of long-range dependencies in the documents. Using language modeling (LM-LSTM) as an initialization works well, achieving 8.98%, but less well than the SA-LSTM. This is perhaps because language modeling is a short-term objective, so the hidden state only captures the ability to predict the next few words. In the above table, we use 1,024 units for the memory cells and 512 units for the input embedding layer in the LM-LSTM and SA-LSTM. We also use a hidden layer of 30 units with dropout of 50% between the last hidden state and the classifier. We continue to use these settings in the following experiments. In Table 3, we present some examples from the IMDB dataset that are correctly classified by SA-LSTM but not by a bigram NBSVM model. These examples often have long-term dependencies or sarcasm that is difficult to detect by solely looking at short phrases.

4.2 Sentiment analysis experiments with Rotten Tomatoes and the positive effects of additional unlabeled data

The success on the IMDB dataset convinces us to test our methods on another sentiment analysis task to see if similar gains can be obtained.
The benchmark of focus in this experiment is the Rotten Tomatoes dataset [26].2 The dataset has 10,662 documents, which are randomly split into 80% for training, 10% for validation and 10% for test. The average length of each document is 22 words and the maximum length is 52 words. Thus, compared to IMDB, this dataset is smaller both in terms of the number of documents and the number of words per document.

2 http://www.cs.cornell.edu/people/pabo/movie-review-data/

Table 3: IMDB sentiment classification examples that are correctly classified by SA-LSTM and incorrectly by NBSVM-bi.
Text: Looking for a REAL super bad movie? If you wanna have great fun, don’t hesitate and check this one! Ferrigno is incredibly bad but is also the best of this mediocrity. Sentiment: Negative
Text: A professional production with quality actors that simply never touched the heart or the funny bone no matter how hard it tried. The quality cast, stark setting and excellent cinemetography made you hope for Fargo or High Plains Drifter but sorry, the soup had no seasoning...or meat for that matter. A 3 (of 10) for effort. Sentiment: Negative
Text: The screen-play is very bad, but there are some action sequences that i really liked. I think the image is good, better than other romanian movies. I liked also how the actors did their jobs. Sentiment: Negative

Our first observation is that it is easier to train LSTMs on this dataset than on the IMDB dataset, and the gaps between LSTMs, LM-LSTMs and SA-LSTMs are smaller than before. This is because movie reviews in Rotten Tomatoes are sentences whereas reviews in IMDB are paragraphs. As this dataset is small, our methods tend to severely overfit the training set. Combining SA-LSTMs with 95% input embedding and 50% word dropout improves generalization and allows the model to achieve 19.3% test set error. Tuning the SA-LSTM further on the validation set can improve the result to 19.3% error rate on the test set.
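The two regularizers used above, word dropout and input-embedding-dimension dropout, can be sketched as follows. This is a minimal sketch of the general techniques, not the paper's exact recipe (nor that of [38]); all names, the placeholder-token choice, and the inverted-dropout rescaling are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def word_dropout(token_ids, p, unk_id=0):
    # randomly replace a fraction p of the input words with a placeholder token
    ids = np.array(token_ids)
    ids[rng.random(len(ids)) < p] = unk_id
    return ids

def embedding_dim_dropout(emb, p):
    # zero out a fraction p of the embedding dimensions (the same dimensions at
    # every timestep), with inverted-dropout rescaling at training time
    keep = (rng.random(emb.shape[1]) >= p).astype(emb.dtype)
    return emb * keep / (1.0 - p)

emb = rng.normal(size=(6, 10))          # 6 timesteps, 10 embedding dimensions
out = embedding_dim_dropout(emb, 0.8)   # e.g. dropping 80% of dimensions, as on IMDB
print(out.shape)
```

Because entire embedding dimensions are dropped jointly across timesteps, each surviving dimension is either zeroed everywhere or rescaled everywhere, which is what makes this a dimension-level rather than element-level dropout.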
To improve the performance, we add unlabeled data from the IMDB dataset from the previous experiment and Amazon movie reviews [23] to the autoencoder training stage.3 We also run a control experiment where we use the pretrained word vectors trained by word2vec from Google News.

Table 4: Performance of models on the Rotten Tomatoes sentiment classification task.
Model                                                                      Test error rate
LSTM with tuning and dropout                                               20.3%
LM-LSTM                                                                    21.9%
LSTM with linear gain                                                      22.2%
SA-LSTM                                                                    19.3%
LSTM with word vectors from word2vec Google News                           20.5%
SA-LSTM with unlabeled data from IMDB                                      18.6%
SA-LSTM with unlabeled data from Amazon reviews                            16.7%
MV-RNN [29]                                                                21.0%
NBSVM-bi [36]                                                              20.6%
CNN-rand [13]                                                              23.5%
CNN-non-static (ConvNet with word vectors from word2vec Google News) [13]  18.5%

The results for this set of experiments are shown in Table 4. Our observation is that if we use the word vectors from word2vec, there is only a small gain of 0.5%. This is perhaps because the recurrent weights play an important role in our model and are not initialized properly in this experiment. However, if we use IMDB to pretrain the sequence autoencoders, the error decreases from 20.5% to 18.6%, nearly a 2% gain in accuracy; if we use Amazon reviews, a larger unlabeled dataset (7.9 million movie reviews), to pretrain the sequence autoencoders, the error goes down to 16.7%, which is another 2% gain in accuracy.

3 The dataset is available at http://snap.stanford.edu/data/web-Amazon.html, which has 34 million general product reviews, but we only use 7.9 million movie reviews in our experiments.

This brings us to the question of how well this method of using unlabeled data fares compared to adding more labeled data. As argued by Socher et al. [30], one reason the methods are not yet perfect is the lack of labeled training data; they therefore proposed to use more labeled data by labeling an additional 215,154 phrases created by the Stanford Parser.
The use of more labeled data allowed their method to achieve around 15% error on the test set, an improvement of approximately 5% over older methods with less labeled data. We compare our method to their reported results [30] on sentence-level classification. As our method does not have access to valuable labeled data, one might expect that our method is severely disadvantaged and should not perform on the same level. However, with unlabeled data and sequence autoencoders, we are able to obtain 16.7%, ranking second amongst many other methods that have access to a much larger corpus of labeled data. The fact that unlabeled data can compensate for the lack of labeled data is very significant, as unlabeled data are much cheaper than labeled data. The results are shown in Table 5.

Table 5: More unlabeled data vs. more labeled data. Performance of SA-LSTM with additional unlabeled data and previous models with additional labeled data on the Rotten Tomatoes task.
Model                                                                Test error rate
LSTM initialized with word2vec embeddings trained on Amazon reviews  21.7%
SA-LSTM with unlabeled data from Amazon reviews                      16.7%
NB [30]                                                              18.2%
SVM [30]                                                             20.6%
BiNB [30]                                                            16.9%
VecAvg [30]                                                          19.9%
RNN [30]                                                             17.6%
MV-RNN [30]                                                          17.1%
RNTN [30]                                                            14.6%

4.3 Text classification experiments with 20 newsgroups

The experiments so far have been done on datasets where the number of tokens in a document is relatively small, a few hundred words. Our next question is whether it is possible to use SA-LSTMs for tasks that have a substantial number of words, such as web articles or emails, and where the content consists of many different topics. For that purpose, we carry out the next experiments on the 20 newsgroups dataset [17].4 There are 11,293 documents in the training set and 7,528 in the test set. We use 15% of the training documents as a validation set. Each document is an email with an average length of 267 words and a maximum length of 11,925 words.
Attachments, PGP keys, duplicates and empty messages are removed. As the newsgroup documents are long, it was previously considered improbable for recurrent networks to learn anything from the dataset. The best methods are often simple bag-of-words. We repeat the same experiments with LSTMs and SA-LSTMs on this dataset. Similar to observations made in previous experiments, SA-LSTMs are generally more stable to train than LSTMs. To improve generalization of the models, we again use input embedding dropout and word dropout chosen on the validation set. With 70% input embedding dropout and 75% word dropout, SA-LSTM achieves 15.6% test set error, which is much better than previous classifiers on this dataset. Results are shown in Table 6.

Table 6: Performance of models on the 20 newsgroups classification task.
Model                   Test error rate
LSTM                    18.0%
LM-LSTM                 15.3%
LSTM with linear gain   71.6%
SA-LSTM                 15.6%
Hybrid Class RBM [18]   23.8%
RBM-MLP [5]             20.5%
SVM + Bag-of-words [3]  17.1%
Naïve Bayes [3]         19.0%

4 http://qwone.com/~jason/20Newsgroups/

4.4 Character-level document classification experiments with DBpedia

In this set of experiments, we turn our attention to another challenging task of categorizing Wikipedia pages by reading character-by-character inputs. The dataset of focus is the DBpedia dataset [20], which was also used to benchmark convolutional neural nets in Zhang and LeCun [39]. Note that unlike other datasets in Zhang and LeCun [39], DBpedia has no duplication or tainting issues, so we assume that their experimental results are valid on this dataset. DBpedia is a crowdsourced effort to extract information from Wikipedia and categorize it into an ontology. For this experiment, we follow the same procedure suggested in Zhang and LeCun [39]. The task is to classify DBpedia abstracts into one of 14 categories after reading the character-by-character input. The dataset is split into 560,000 training examples and 70,000 test examples.
A DBpedia document has an average of 300 characters while the maximum length of all documents is 13,467 characters. As this dataset is large, overfitting is not an issue and thus we do not perform any dropout on the input or recurrent layers. For this dataset, we use a two-layered LSTM; each layer has 512 hidden units and the input embedding has 128 units.

Table 7: Performance of models on the DBpedia character level classification task.
Model                                  Test error rate
LSTM                                   13.64%
LM-LSTM                                1.50%
LSTM with linear gain                  1.32%
SA-LSTM                                2.34%
SA-LSTM with linear gain               1.23%
SA-LSTM with 3 layers and linear gain  1.19%
SA-LSTM (word-level)                   1.40%
Bag-of-words                           3.57%
Small ConvNet                          1.98%
Large ConvNet                          1.73%

On this dataset, we find that the linear label gain as described in Section 3 is an effective mechanism to inject gradients into earlier steps in LSTMs. This linear gain method works well and achieves 1.32% test set error, which is better than SA-LSTM. Combining SA-LSTM and the linear gain method achieves 1.19% test set error, a significant improvement over the results of convolutional networks, as shown in Table 7.

4.5 Object classification experiments with CIFAR-10

In these experiments, we attempt to see if our pretraining methods extend to non-textual data. To do this, we train an LSTM to read the CIFAR-10 image dataset row-by-row (where the input at each timestep is an entire row of pixels) and output the class of the image at the end. We use the same method as in [16] to perform data augmentation. We also trained an LSTM to do next-row prediction given the current row (we denote this as LM-LSTM) and an LSTM to predict the image by rows after reading all its rows (SA-LSTM). We then fine-tune these on the classification task. We present the results in Table 8.
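Reading an image row-by-row, as described above, amounts to reshaping each image into a sequence whose input at each timestep is one full row of pixels. A minimal sketch with a stand-in array (the actual CIFAR-10 loading and augmentation pipeline is not shown, and the next-row pairing is our illustration of the LM-LSTM objective):

```python
import numpy as np

# stand-in for one CIFAR-10 image: 32x32 pixels, 3 color channels
img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)

# 32 timesteps, each carrying one full row of pixels (32 px * 3 channels = 96 values)
seq = img.reshape(32, 32 * 3)
print(seq.shape)  # (32, 96)

# next-row prediction pairs, as in the LM-LSTM variant: predict row t+1 from row t
inputs, targets = seq[:-1], seq[1:]
```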
While we do not achieve the results attained by state-of-the-art convolutional networks, our 2-layer pretrained LM-LSTM is able to exceed the results of the baseline convolutional DBN model [15] despite not using any convolutions, and outperforms the non-pretrained LSTM.

Table 8: Performance of models on the CIFAR-10 object classification task.
Model                  Test error rate
1-layer LSTM           25.0%
1-layer LM-LSTM        23.1%
1-layer SA-LSTM        25.1%
2-layer LSTM           26.0%
2-layer LM-LSTM        18.7%
2-layer SA-LSTM        26.0%
Convolution DBNs [15]  21.1%

5 Discussion

In this paper, we found that it is possible to use LSTM recurrent networks for NLP tasks such as document classification. We also find that a language model or a sequence autoencoder can help stabilize the learning in recurrent networks. On five benchmarks that we tried, LSTMs can become a general classifier that reaches or surpasses the performance levels of all previous baselines.

Acknowledgements: We thank Oriol Vinyals, Ilya Sutskever, Greg Corrado, Vijay Vasudevan, Manjunath Kudlur, Rajat Monga, Matthieu Devin, and the Google Brain team for their help.

References

[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817–1853, December 2005. [2] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. In JMLR, 2003. [3] A. Cardoso-Cachopo. Datasets for single-label text categorization. http://web.ist.utl.pt/acardoso/datasets/, 2015. [Online; accessed 25-May-2015]. [4] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015. [5] Y. Dauphin and Y. Bengio. Stochastic ratio matching of RBMs for sparse high-dimensional inputs. In NIPS, 2013. [6] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000. [7] A. Graves. Generating sequences with recurrent neural networks. In Arxiv, 2013.
[8] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search space odyssey. In ICML, 2015. [9] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks, 2001. [10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997. [11] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. In ICML, 2014. [12] R. Johnson and T. Zhang. Effective use of word order for text categorization with convolutional neural networks. In NAACL, 2014. [13] Y. Kim. Convolutional neural networks for sentence classification, 2014. [14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In NIPS, 2015. [15] A. Krizhevsky. Convolutional deep belief networks on CIFAR-10. Technical report, University of Toronto, 2010. [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. [17] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995. [18] H. Larochelle, M. Mandel, R. Pascanu, and Y. Bengio. Learning algorithms for the classification restricted boltzmann machine. JMLR, 2012. [19] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML, 2014. [20] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer, et al. DBpedia – a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 2014. [21] T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014. [22] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis.
In ACL, 2011. [23] J. McAuley and J. Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In RecSys, pages 165–172. ACM, 2013. [24] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010. [25] J. Y. H. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015. [26] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005. [27] D. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 1986. [28] L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. In EMNLP, 2015. [29] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In EMNLP, 2012. [30] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013. [31] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015. [32] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. [33] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In NIPS, 2015. [34] O. Vinyals and Q. V. Le. A neural conversational model. In ICML Deep Learning Workshop, 2015. [35] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2014. [36] S. I. Wang and C. D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012. [37] P. J. Werbos. Beyond regression: New tools for prediction and analysis in the behavioral sciences.
PhD thesis, Harvard, 1974. [38] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. [39] X. Zhang and Y. LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model

Wei Sun, Yahoo Labs, Sunnyvale, CA, sunweisurrey@yahoo-inc.com
Zhaoran Wang, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, zhaoran@princeton.edu
Han Liu, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, hanliu@princeton.edu
Guang Cheng, Department of Statistics, Purdue University, West Lafayette, IN, chengg@stat.purdue.edu

Abstract

We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies.

1 Introduction

High-dimensional tensor-valued data are prevalent in many fields such as personalized recommendation systems and brain imaging research [1, 2]. Traditional recommendation systems are mainly based on the user-item matrix, whose entries denote each user's preference for a particular item. To incorporate additional information into the analysis, such as the temporal behavior of users, we need to consider a user-item-time tensor.
For another example, functional magnetic resonance imaging (fMRI) data can be viewed as a three-way (third-order) tensor, since it contains brain measurements taken at different locations over time under various experimental conditions. Also, in the microarray study of aging [3], thousands of gene expression measurements are recorded on 16 tissue types of 40 mice with varying ages, which forms a four-way gene-tissue-mouse-age tensor. In this paper, we study the estimation of the conditional independence structure within tensor data. For example, in the microarray study of aging we are interested in the dependency structure across different genes, tissues, ages and even mice. Assuming the data are drawn from a tensor normal distribution, a straightforward way to estimate this structure is to vectorize the tensor and estimate the underlying Gaussian graphical model associated with the vector. Such an approach ignores the tensor structure and requires estimating a rather high-dimensional precision matrix with an insufficient sample size. For instance, in the aforementioned fMRI application the sample size is one if we aim to estimate the dependency structure across different locations, time points and experimental conditions. To address this problem, a popular approach is to assume that the covariance matrix of the tensor normal distribution is separable, in the sense that it is the Kronecker product of small covariance matrices, each of which corresponds to one way of the tensor. Under this assumption, our goal is to estimate the precision matrix corresponding to each way of the tensor. See §1.1 for a detailed survey of previous work. Despite the fact that the assumption of a Kronecker product structure on the covariance makes the statistical model much more parsimonious, it poses significant challenges. In particular, the penalized negative log-likelihood function is non-convex with respect to the unknown sparse precision matrices.
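The Kronecker-product (separable) covariance can be made concrete for K = 2 with a short NumPy check. This is a sketch with our own variable names, not the authors' code: it verifies the identity vec(AZB) = (Bᵀ ⊗ A) vec(Z) (with vec stacking columns), which is exactly why a sample T = Σ₁^{1/2} Z Σ₂^{1/2} with i.i.d. standard normal Z has cov(vec(T)) = Σ₂ ⊗ Σ₁.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 3, 4

# vec() stacks columns (Fortran order), matching the paper's definition of vec(T).
vec = lambda M: M.flatten(order="F")

# Identity behind the separable covariance: vec(A Z B) = (B^T kron A) vec(Z).
A = rng.standard_normal((m1, m1))
B = rng.standard_normal((m2, m2))
Z = rng.standard_normal((m1, m2))
lhs = vec(A @ Z @ B)
rhs = np.kron(B.T, A) @ vec(Z)
assert np.allclose(lhs, rhs)

# Hence, for K = 2, T = Sigma1^{1/2} Z Sigma2^{1/2} with Z i.i.d. N(0, 1)
# has cov(vec(T)) = Sigma2 kron Sigma1 by the mixed-product property of kron.
```

The same mixed-product property, (Bᵀ ⊗ A)(B ⊗ Aᵀ) = (BᵀB) ⊗ (AAᵀ), extends the argument to general K.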
Consequently, there exists a gap between computational and statistical theory. More specifically, as we will show in §1.1, existing literature mostly focuses on establishing the existence of a local optimum that has desired statistical guarantees, rather than offering efficient algorithmic procedures that provably achieve the desired local optima. In contrast, we analyze an alternating minimization algorithm which iteratively minimizes the non-convex objective function with respect to each individual precision matrix while fixing the others. The established theoretical guarantees of the proposed algorithm are as follows. Suppose that we have n observations from a K-th order tensor normal distribution. We denote by m_k, s_k, d_k (k = 1, ..., K) the dimension, sparsity, and maximum number of non-zero entries in each row of the precision matrix corresponding to the k-th way of the tensor. Besides, we define m = ∏_{k=1}^K m_k. The k-th precision matrix estimator from our alternating minimization algorithm achieves a √(m_k(m_k + s_k) log m_k/(nm)) statistical rate of convergence in Frobenius norm, which is minimax-optimal since this is the best rate one can obtain even when the rest K−1 true precision matrices are known [4]. Furthermore, under an extra irrepresentability condition, we establish a √(m_k log m_k/(nm)) rate of convergence in max norm, which is also optimal, and a d_k √(m_k log m_k/(nm)) rate of convergence in spectral norm. These estimation consistency results and a sufficiently large signal strength condition further imply the model selection consistency of recovering all the edges. A notable implication of these results is that, when K ≥ 3, our alternating minimization algorithm can achieve estimation consistency in Frobenius norm even if we only have access to one tensor sample, which is often the case in practice. This phenomenon has not been observed in previous work.
Finally, we conduct extensive experiments to evaluate the numerical performance of the proposed alternating minimization method. Under the guidance of the theory, we propose a way to significantly accelerate the algorithm without sacrificing statistical accuracy.

1.1 Related work and our contribution

A special case of our sparse tensor graphical model with K = 2 is the sparse matrix graphical model, which is studied by [5–8]. In particular, [5] and [6] only establish the existence of a local optimum with desired statistical guarantees. Meanwhile, [7] considers an algorithm that is similar to ours. However, the statistical rates of convergence obtained by [6, 7] are much slower than ours when K = 2. See Remark 3.6 in §3.1 for a detailed comparison. For K = 2, our statistical rate of convergence in Frobenius norm recovers the result of [5]. In other words, our theory confirms that the desired local optimum studied by [5] not only exists, but is also attainable by an efficient algorithm. In addition, for the matrix graphical model, [8] establishes statistical rates of convergence in spectral and Frobenius norms for the estimator attained by a similar algorithm. Their results achieve estimation consistency in spectral norm with only one matrix observation. However, their rate is slower than ours with K = 2. See Remark 3.11 in §3.2 for a detailed discussion. Furthermore, we allow K to increase and establish estimation consistency, even in Frobenius norm, for n = 1. Most importantly, all these results focus on the matrix graphical model and cannot handle the aforementioned motivating applications such as the gene-tissue-mouse-age tensor dataset. In the context of the sparse tensor graphical model with a general K, [9] shows the existence of a local optimum with the desired rates, but does not show that any efficient algorithm provably attains such a local optimum.
In contrast, we prove that our alternating minimization algorithm achieves an estimator with the desired statistical rates. To achieve this, we apply a novel theoretical framework that separately considers the population and sample optimizers, establishing the one-step convergence of the population optimizer (Theorem 3.1) and the optimal rate of convergence of the sample optimizer (Theorem 3.4). A new concentration result (Lemma B.1) is developed for this purpose, which is also of independent interest. Moreover, we establish additional theoretical guarantees including the optimal rate of convergence in max norm, the estimation consistency in spectral norm, and the graph recovery consistency of the proposed sparse precision matrix estimator. In addition to the literature on graphical models, our work is also closely related to a recent line of research on alternating minimization for non-convex optimization problems [10–13]. These existing results mostly focus on problems such as dictionary learning, phase retrieval and matrix decomposition. Hence, our statistical model and analysis are completely different from theirs. Also, our paper is related to a recent line of work on tensor decomposition. See, e.g., [14–17] and the references therein. Compared with them, our work focuses on the graphical model structure within tensor-valued data.

Notation: For a matrix A = (A_{i,j}) ∈ R^{d×d}, we denote by ‖A‖_max, ‖A‖_2, ‖A‖_F its max, spectral, and Frobenius norm, respectively. We define ‖A‖_{1,off} := ∑_{i≠j} |A_{i,j}| as its off-diagonal ℓ1 norm and |||A|||_∞ := max_i ∑_j |A_{i,j}| as the maximum absolute row sum. Denote by vec(A) the vectorization of A, which stacks the columns of A. Let tr(A) be the trace of A. For an index set S ⊆ {(i, j) : i, j ∈ {1, ..., d}}, we define [A]_S as the matrix whose entry indexed by (i, j) ∈ S equals A_{i,j}, and is zero otherwise. We denote by 1_d the identity matrix of dimension d × d. Throughout this paper, we use C, C1, C2, . . .
to denote generic absolute constants, whose values may vary from line to line.

2 Sparse tensor graphical model

2.1 Preliminary

We employ the tensor notation used by [18]. Throughout this paper, higher order tensors are denoted by boldface Euler script letters, e.g., T. We consider a K-th order tensor T ∈ R^{m1×m2×···×mK}. When K = 1 it reduces to a vector, and when K = 2 it reduces to a matrix. The (i1, ..., iK)-th element of the tensor T is denoted by T_{i1,...,iK}. Meanwhile, we define the vectorization of T as vec(T) := (T_{1,1,...,1}, ..., T_{m1,1,...,1}, ..., T_{1,m2,...,mK}, ..., T_{m1,m2,...,mK})^T ∈ R^m with m = ∏_k m_k. In addition, we define the Frobenius norm of a tensor T as ‖T‖_F := (∑_{i1,...,iK} T²_{i1,...,iK})^{1/2}. For tensors, a fiber refers to the higher order analogue of the rows and columns of matrices. A fiber is obtained by fixing all but one of the indices of the tensor; e.g., a mode-k fiber of T is given by T_{i1,...,ik−1,:,ik+1,...,iK}. Matricization, also known as unfolding, is the process of transforming a tensor into a matrix. We denote by T_{(k)} the mode-k matricization of a tensor T, which arranges the mode-k fibers to be the columns of the resulting matrix. Another useful operation on tensors is the k-mode product. The k-mode product of a tensor T ∈ R^{m1×m2×···×mK} with a matrix A ∈ R^{J×m_k} is denoted by T ×_k A and is of size m1 × ··· × m_{k−1} × J × m_{k+1} × ··· × mK. Its entries are defined as (T ×_k A)_{i1,...,ik−1,j,ik+1,...,iK} := ∑_{ik=1}^{m_k} T_{i1,...,iK} A_{j,ik}. In addition, for a list of matrices {A1, ..., AK} with A_k ∈ R^{m_k×m_k}, k = 1, ..., K, we define T × {A1, ..., AK} := T ×_1 A1 ×_2 ··· ×_K AK.

2.2 Model

A tensor T ∈ R^{m1×m2×···×mK} follows the tensor normal distribution with zero mean and covariance matrices Σ1, ..., ΣK, denoted T ∼ TN(0; Σ1, ..., ΣK), if its probability density function is

p(T | Σ1, ..., ΣK) = (2π)^{−m/2} {∏_{k=1}^K |Σ_k|^{−m/(2m_k)}} exp(−‖T × Σ^{−1/2}‖²_F / 2),   (2.1)

where m = ∏_{k=1}^K m_k and Σ^{−1/2} := {Σ_1^{−1/2}, ..., Σ_K^{−1/2}}.
When K = 1, this tensor normal distribution reduces to the vector normal distribution with zero mean and covariance Σ1. According to [9, 18], it can be shown that T ∼ TN(0; Σ1, ..., ΣK) if and only if vec(T) ∼ N(vec(0); ΣK ⊗ ··· ⊗ Σ1), where vec(0) ∈ R^m and ⊗ is the matrix Kronecker product. We consider parameter estimation for the tensor normal model. Assume that we observe independently and identically distributed tensor samples T1, ..., Tn from TN(0; Σ*_1, ..., Σ*_K). We aim to estimate the true covariance matrices (Σ*_1, ..., Σ*_K) and their corresponding true precision matrices (Ω*_1, ..., Ω*_K), where Ω*_k = (Σ*_k)^{−1} (k = 1, ..., K). To address the identifiability issue in the parameterization of the tensor normal distribution, we assume that ‖Ω*_k‖_F = 1 for k = 1, ..., K. This renormalization assumption does not change the graph structure of the original precision matrix. A standard approach to estimate Ω*_k, k = 1, ..., K, is the maximum likelihood method via (2.1). Up to a constant, the negative log-likelihood function of the tensor normal distribution is tr[S(ΩK ⊗ ··· ⊗ Ω1)] − ∑_{k=1}^K (m/m_k) log|Ω_k|, where S := (1/n) ∑_{i=1}^n vec(T_i) vec(T_i)^T. To encourage the sparsity of each precision matrix in the high-dimensional scenario, we consider a penalized log-likelihood estimator, which is obtained by minimizing

q_n(Ω1, ..., ΩK) := (1/m) tr[S(ΩK ⊗ ··· ⊗ Ω1)] − ∑_{k=1}^K (1/m_k) log|Ω_k| + ∑_{k=1}^K P_{λk}(Ω_k),   (2.2)

where P_{λk}(·) is a penalty function indexed by the tuning parameter λ_k. In this paper, we focus on the lasso penalty [19], i.e., P_{λk}(Ω_k) = λ_k ‖Ω_k‖_{1,off}. This estimation procedure applies similarly to a broad family of other penalty functions. We call the penalized model from (2.2) the sparse tensor graphical model. It reduces to the sparse vector graphical model [20, 21] when K = 1, and to the sparse matrix graphical model [5–8] when K = 2.
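The mode-k matricization and k-mode product of §2.1 can be sketched in NumPy as follows. This is our own illustration (helper names `unfold` and `mode_k_product` are ours), and the unfolding below assumes the Kolda–Bader column ordering of [18]; the characteristic identity (T ×_k A)_{(k)} = A T_{(k)} holds either way, as long as the same unfolding is used on both sides.

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(T, k):
    """Mode-k matricization: mode-k fibers become the columns (Fortran order)."""
    return np.reshape(np.moveaxis(T, k, 0), (T.shape[k], -1), order="F")

def mode_k_product(T, A, k):
    """k-mode product T x_k A: contract the k-th axis of T with the columns of A."""
    return np.moveaxis(np.tensordot(A, T, axes=(1, k)), 0, k)

T = rng.standard_normal((2, 3, 4))
A = rng.standard_normal((5, 3))
# Characteristic identity: the mode-k unfolding of (T x_k A) equals A @ T_(k).
assert np.allclose(unfold(mode_k_product(T, A, 1), 1), A @ unfold(T, 1))
```

This identity is what reduces the per-mode update in the estimation section below to an ordinary matrix computation.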
Our framework generalizes them to fulfill the demand of capturing the graphical structure of higher order tensor-valued data.

2.3 Estimation

This section introduces the estimation procedure for the sparse tensor graphical model. A computationally efficient algorithm is provided to estimate the precision matrix for each way of the tensor. Recall that in (2.2), q_n(Ω1, ..., ΩK) is jointly non-convex with respect to Ω1, ..., ΩK. Nevertheless, q_n(Ω1, ..., ΩK) is bi-convex, since it is convex in Ω_k when the remaining K−1 precision matrices are fixed. The bi-convex property plays a critical role in our algorithm construction and its theoretical analysis in §3. Exploiting this bi-convexity, we propose to solve the non-convex problem by alternately updating one precision matrix with the others fixed. Note that, for any k = 1, ..., K, minimizing (2.2) with respect to Ω_k while fixing the remaining K−1 precision matrices is equivalent to minimizing

L(Ω_k) := (1/m_k) tr(S_k Ω_k) − (1/m_k) log|Ω_k| + λ_k ‖Ω_k‖_{1,off}.   (2.3)

Here S_k := (m_k/(nm)) ∑_{i=1}^n V_i^k (V_i^k)^T, where V_i^k := [T_i × {Ω_1^{1/2}, ..., Ω_{k−1}^{1/2}, 1_{m_k}, Ω_{k+1}^{1/2}, ..., Ω_K^{1/2}}]_{(k)}, with × the tensor product operation and [·]_{(k)} the mode-k matricization operation defined in §2.1. The result in (2.3) can be shown by noting that V_i^k = [T_i]_{(k)} (Ω_K^{1/2} ⊗ ··· ⊗ Ω_{k+1}^{1/2} ⊗ Ω_{k−1}^{1/2} ⊗ ··· ⊗ Ω_1^{1/2})^T according to the properties of mode-k matricization shown by [18]. Hereafter, we drop the superscript k of V_i^k if there is no confusion. Note that minimizing (2.3) corresponds to estimating a vector-valued Gaussian graphical model and can be solved efficiently via the glasso algorithm [21].

Algorithm 1 Solve sparse tensor graphical model via Tensor lasso (Tlasso)
1: Input: tensor samples T1, ..., Tn, tuning parameters λ1, ..., λK, maximum number of iterations T.
2: Initialize Ω_1^{(0)}, ..., Ω_K^{(0)} randomly as symmetric and positive definite matrices and set t = 0.
3: Repeat:
4: t = t + 1.
5: For k = 1, ..., K:
6: Given Ω_1^{(t)}, ..., Ω_{k−1}^{(t)}, Ω_{k+1}^{(t−1)}, ..., Ω_K^{(t−1)}, solve (2.3) for Ω_k^{(t)} via glasso [21].
7: Normalize Ω_k^{(t)} such that ‖Ω_k^{(t)}‖_F = 1.
8: End For
9: Until t = T.
10: Output: Ω̂_k = Ω_k^{(T)} (k = 1, ..., K).

The details of our Tensor lasso (Tlasso) algorithm are shown in Algorithm 1. It starts with a random initialization and then alternately updates each precision matrix until convergence. In §3, we will show that the statistical properties of the obtained estimator are insensitive to the choice of the initialization (see the discussion following Theorem 3.5).

3 Theory of statistical optimization

We first prove the estimation errors in Frobenius norm, max norm, and spectral norm, and then provide the model selection consistency of our Tlasso estimator. We defer all the proofs to the appendix.

3.1 Estimation error in Frobenius norm

Based on the penalized log-likelihood in (2.2), we define the population log-likelihood function as

q(Ω1, ..., ΩK) := (1/m) E{tr[vec(T) vec(T)^T (ΩK ⊗ ··· ⊗ Ω1)]} − ∑_{k=1}^K (1/m_k) log|Ω_k|.   (3.1)

By minimizing q(Ω1, ..., ΩK) with respect to Ω_k, k = 1, ..., K, we obtain the population minimization function with the parameter Ω_{[K]−k} := {Ω1, ..., Ω_{k−1}, Ω_{k+1}, ..., ΩK}, i.e.,

M_k(Ω_{[K]−k}) := argmin_{Ω_k} q(Ω1, ..., ΩK).   (3.2)

Theorem 3.1. For any k = 1, ..., K, if Ω_j (j ≠ k) satisfies tr(Σ*_j Ω_j) ≠ 0, then the population minimization function in (3.2) satisfies M_k(Ω_{[K]−k}) = m [m_k ∏_{j≠k} tr(Σ*_j Ω_j)]^{−1} Ω*_k.

Theorem 3.1 shows a surprising phenomenon: the population minimization function recovers the true precision matrix up to a constant in only one iteration. If Ω_j = Ω*_j, j ≠ k, then M_k(Ω_{[K]−k}) = Ω*_k. Otherwise, after a normalization such that ‖M_k(Ω_{[K]−k})‖_F = 1, the normalized population minimization function still fully recovers Ω*_k. This observation suggests that setting T = 1 in Algorithm 1 is sufficient.
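The alternating structure of Algorithm 1 can be sketched in NumPy as follows. This is a simplified illustration under our own assumptions, not the authors' implementation: the glasso inner step of line 6 is replaced by the unpenalized (λ_k = 0) minimizer of (2.3), Ω_k = S_k^{-1}, and all helper names (`unfold`, `mode_product`, `psd_sqrt`, `tlasso`) are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def unfold(T, k):  # mode-k matricization: mode-k fibers as columns
    return np.reshape(np.moveaxis(T, k, 0), (T.shape[k], -1), order="F")

def mode_product(T, A, k):  # k-mode product T x_k A
    return np.moveaxis(np.tensordot(A, T, axes=(1, k)), 0, k)

def psd_sqrt(M):  # symmetric PSD square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def tlasso(samples, n_iter=1):
    dims = samples[0].shape
    K, n, m = len(dims), len(samples), np.prod(dims)
    Omega = [np.eye(mk) for mk in dims]  # identity init, as in the simulations
    for _ in range(n_iter):
        for k in range(K):
            # Whiten every mode except k with the current Omega_j^{1/2}.
            roots = [psd_sqrt(Omega[j]) if j != k else np.eye(dims[k])
                     for j in range(K)]
            Sk = np.zeros((dims[k], dims[k]))
            for X in samples:
                V = X
                for j in range(K):
                    V = mode_product(V, roots[j], j)
                Vk = unfold(V, k)
                Sk += (dims[k] / (n * m)) * (Vk @ Vk.T)
            # Inner solver: the paper uses glasso on (2.3); as an unpenalized
            # stand-in we take the lambda_k = 0 minimizer, Omega_k = Sk^{-1}.
            Om = np.linalg.inv(Sk)
            Omega[k] = Om / np.linalg.norm(Om)  # unit Frobenius norm (line 7)
    return Omega

samples = [rng.standard_normal((4, 3, 5)) for _ in range(20)]
Omega_hat = tlasso(samples, n_iter=1)
```

Swapping the `np.linalg.inv` line for a graphical-lasso solver applied to S_k would recover the sparse update of line 6.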
Such a suggestion will be further supported by our numerical results. In practice, when (3.1) is unknown, we approximate it via its sample version q_n(Ω1, ..., ΩK) defined in (2.2), which gives rise to the statistical error in the estimation procedure. Analogously to (3.2), we define the sample-based minimization function with parameter Ω_{[K]−k} as

M̂_k(Ω_{[K]−k}) := argmin_{Ω_k} q_n(Ω1, ..., ΩK).   (3.3)

In order to prove the estimation error, it remains to quantify the statistical error induced by finite samples. The following two regularity conditions are assumed for this purpose.

Condition 3.2 (Bounded Eigenvalues). For any k = 1, ..., K, there is a constant C1 > 0 such that 0 < C1 ≤ λ_min(Σ*_k) ≤ λ_max(Σ*_k) ≤ 1/C1 < ∞, where λ_min(Σ*_k) and λ_max(Σ*_k) refer to the minimal and maximal eigenvalues of Σ*_k, respectively.

Condition 3.2 requires the uniform boundedness of the eigenvalues of the true covariance matrices Σ*_k. It has been commonly assumed in the graphical model literature [22].

Condition 3.3 (Tuning). For any k = 1, ..., K and some constant C2 > 0, the tuning parameter λ_k satisfies (1/C2) √(log m_k/(n m m_k)) ≤ λ_k ≤ C2 √(log m_k/(n m m_k)).

Condition 3.3 specifies the choice of the tuning parameters. In practice, a data-driven tuning procedure [23] can be performed to approximate the optimal choice of the tuning parameters. Before characterizing the statistical error, we define a sparsity parameter for Ω*_k, k = 1, ..., K. Let S_k := {(i, j) : [Ω*_k]_{i,j} ≠ 0}. Denote the sparsity parameter s_k := |S_k| − m_k, which is the number of nonzero entries in the off-diagonal part of Ω*_k. For each k = 1, ..., K, we define B(Ω*_k) as the set containing Ω*_k and its neighborhood for some sufficiently large constant radius α > 0, i.e.,

B(Ω*_k) := {Ω ∈ R^{m_k×m_k} : Ω = Ω^T; Ω ≻ 0; ‖Ω − Ω*_k‖_F ≤ α}.   (3.4)

Theorem 3.4. Assume Conditions 3.2 and 3.3 hold. For any k = 1, . . .
, K, the statistical error of the sample-based minimization function defined in (3.3) satisfies that, for any fixed Ω_j ∈ B(Ω*_j) (j ≠ k),

‖M̂_k(Ω_{[K]−k}) − M_k(Ω_{[K]−k})‖_F = O_P(√(m_k(m_k + s_k) log m_k / (nm))),   (3.5)

where M_k(Ω_{[K]−k}) and M̂_k(Ω_{[K]−k}) are defined in (3.2) and (3.3), and m = ∏_{k=1}^K m_k.

Theorem 3.4 establishes the statistical error associated with M̂_k(Ω_{[K]−k}) for arbitrary Ω_j ∈ B(Ω*_j) with j ≠ k. In comparison, previous work on the existence of a local solution with the desired statistical property only establishes results similar to Theorem 3.4 for Ω_j = Ω*_j with j ≠ k. The extension to arbitrary Ω_j ∈ B(Ω*_j) involves non-trivial technical barriers. In particular, we first establish the rate of convergence of the difference between a sample-based quadratic form and its expectation (Lemma B.1) via concentration of Lipschitz functions of Gaussian random variables [24]. This result is also of independent interest. We then carefully characterize the rate of convergence of S_k defined in (2.3) (Lemma B.2). Finally, we develop (3.5) using the results for vector-valued graphical models developed by [25]. Combining Theorem 3.1 and Theorem 3.4, we obtain the rate of convergence of the Tlasso estimator in Frobenius norm, which is our main result.

Theorem 3.5. Assume that Conditions 3.2 and 3.3 hold. For any k = 1, ..., K, if the initialization satisfies Ω_j^{(0)} ∈ B(Ω*_j) for any j ≠ k, then the estimator Ω̂_k from Algorithm 1 with T = 1 satisfies

‖Ω̂_k − Ω*_k‖_F = O_P(√(m_k(m_k + s_k) log m_k / (nm))),   (3.6)

where m = ∏_{k=1}^K m_k and B(Ω*_j) is defined in (3.4).

Theorem 3.5 shows that, as long as the initialization is within a constant distance of the truth, our Tlasso algorithm attains a consistent estimator after only one iteration. The initialization condition Ω_j^{(0)} ∈ B(Ω*_j) trivially holds, since for any Ω_j^{(0)} that is positive definite with unit Frobenius norm we have ‖Ω_j^{(0)} − Ω*_j‖_F ≤ 2, by noting that ‖Ω*_j‖_F = 1 (j = 1, . . .
, K) for the identifiability of the tensor normal distribution. In the literature, [9] shows that there exists a local minimizer of (2.2) whose convergence rate achieves (3.6). However, it is unknown whether their algorithm can find such a minimizer, since there could be many other local minimizers. A notable implication of Theorem 3.5 is that, when K ≥ 3, the estimator from our Tlasso algorithm can achieve estimation consistency even if we only have access to one observation, i.e., n = 1, which is often the case in practice. To see this, suppose that K = 3 and n = 1. When the dimensions m1, m2, and m3 are of the same order of magnitude and s_k = O(m_k) for k = 1, 2, 3, all three error rates corresponding to k = 1, 2, 3 in (3.6) converge to zero. This result indicates that the estimation of the k-th precision matrix takes advantage of the information from the j-th way (j ≠ k) of the tensor data. Consider a simple case in which K = 2 and one precision matrix Ω*_1 = 1_{m_1} is known. In this scenario the rows of the matrix data are independent, and hence the effective sample size for estimating Ω*_2 is in fact nm_1. The optimality result for the vector-valued graphical model [4] implies that the optimal rate for estimating Ω*_2 is √((m_2 + s_2) log m_2/(nm_1)), which matches our result in (3.6). Therefore, the rate in (3.6) obtained by our Tlasso estimator is minimax-optimal, since it is the best rate one can obtain even when Ω*_j (j ≠ k) are known. As far as we know, this phenomenon has not been discovered by any previous work on tensor graphical models.

Remark 3.6. For K = 2, our tensor graphical model reduces to the matrix graphical model with Kronecker product covariance structure [5–8]. In this case, the rate of convergence of Ω̂_1 in (3.6) reduces to √((m_1 + s_1) log m_1/(nm_2)), which is much faster than √(m_2(m_1 + s_1)(log m_1 + log m_2)/n) established by [6] and √((m_1 + m_2) log[max(m_1, m_2, n)]/(nm_2)) established by [7].
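The n = 1 consistency claim above can be checked by direct substitution into (3.6): with K = 3, n = 1, m_1 ≍ m_2 ≍ m_3 ≍ d and s_k = O(m_k), so that m ≍ d³,

```latex
\sqrt{\frac{m_k(m_k+s_k)\log m_k}{n\,m}}
  \;\asymp\; \sqrt{\frac{d\cdot O(d)\cdot \log d}{d^{3}}}
  \;=\; O\!\Big(\sqrt{\frac{\log d}{d}}\Big)\;\longrightarrow\; 0
  \quad\text{as } d\to\infty,
```

so each of the three per-mode errors vanishes even with a single tensor sample.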
In the literature, [5] shows that there exists a local minimizer of the objective function whose estimation errors match ours. However, it is unknown whether their estimator can achieve this convergence rate. In contrast, our theorem confirms that our algorithm is able to find such an estimator with the optimal rate of convergence.

3.2 Estimation error in max norm and spectral norm

We next show the estimation error in max norm and spectral norm. Trivially, these estimation errors are bounded by the Frobenius-norm error shown in Theorem 3.5. To develop improved rates of convergence in max and spectral norms, we need to impose stronger conditions on the true parameters. We first introduce some important notation. Denote by d_k the maximum number of non-zeros in any row of the true precision matrix Ω*_k, that is,

d_k := max_{i ∈ {1,...,m_k}} |{j ∈ {1, ..., m_k} : [Ω*_k]_{i,j} ≠ 0}|,   (3.7)

with |·| the cardinality of the set. For each covariance matrix Σ*_k, we define κ_{Σ*_k} := |||Σ*_k|||_∞. Denote the Hessian matrix Γ*_k := (Ω*_k)^{−1} ⊗ (Ω*_k)^{−1} ∈ R^{m_k²×m_k²}, whose entry [Γ*_k]_{(i,j),(s,t)} corresponds to the second-order partial derivative of the objective function with respect to [Ω_k]_{i,j} and [Ω_k]_{s,t}. We define its sub-matrix indexed by the index set S_k as [Γ*_k]_{S_k,S_k} = [(Ω*_k)^{−1} ⊗ (Ω*_k)^{−1}]_{S_k,S_k}, which is the |S_k| × |S_k| matrix with rows and columns of Γ*_k indexed by S_k. Moreover, we define κ_{Γ*_k} := |||([Γ*_k]_{S_k,S_k})^{−1}|||_∞. In order to establish the rate of convergence in max norm, we need to impose an irrepresentability condition on the Hessian matrix.

Condition 3.7 (Irrepresentability). For each k = 1, ..., K, there exists some α_k ∈ (0, 1] such that max_{e ∈ S_k^c} ‖[Γ*_k]_{e,S_k} ([Γ*_k]_{S_k,S_k})^{−1}‖_1 ≤ 1 − α_k.

Condition 3.7 controls the influence of the non-connected terms in S_k^c on the connected edges in S_k. This condition has been widely applied in lasso-penalized models [26, 27].

Condition 3.8 (Bounded Complexity). For each k = 1, . . .
, K, the parameters κ_{Σ*_k} and κ_{Γ*_k} are bounded, and the parameter d_k in (3.7) satisfies d_k = o(√(nm/(m_k log m_k))).

Theorem 3.9. Suppose Conditions 3.2, 3.3, 3.7 and 3.8 hold. Assume s_k = O(m_k) for k = 1, ..., K, and assume the m_k's are of the same order, i.e., m_1 ≍ m_2 ≍ ··· ≍ m_K. For each k, if the initialization satisfies Ω_j^{(0)} ∈ B(Ω*_j) for any j ≠ k, then the estimator Ω̂_k from Algorithm 1 with T = 2 satisfies

‖Ω̂_k − Ω*_k‖_max = O_P(√(m_k log m_k / (nm))).   (3.8)

In addition, the edge set of Ω̂_k is a subset of the true edge set of Ω*_k, that is, supp(Ω̂_k) ⊆ supp(Ω*_k).

Theorem 3.9 shows that our Tlasso estimator achieves the optimal rate of convergence in max norm [4]. Here we consider the estimator obtained after two iterations, since we require a new concentration inequality (Lemma B.3) for the sample covariance matrix, which is built upon the estimator in Theorem 3.5. A direct consequence of Theorem 3.9 is the estimation error in spectral norm.

Corollary 3.10. Suppose the conditions of Theorem 3.9 hold. For any k = 1, ..., K, we have

‖Ω̂_k − Ω*_k‖_2 = O_P(d_k √(m_k log m_k / (nm))).   (3.9)

Remark 3.11. We now compare our rate of convergence in spectral norm for K = 2 with that established in the sparse matrix graphical model literature. In particular, [8] establishes the rate O_P(√(m_k(s_k ∨ 1) log(m_1 ∨ m_2)/(nm_k))) for k = 1, 2. Therefore, when d_k² ≤ s_k ∨ 1, which holds for example in bounded-degree graphs, our obtained rate is faster. However, our faster rate comes at the price of assuming the irrepresentability condition. Using recent advances in nonconvex regularization [28], we can eliminate the irrepresentability condition. We leave this to future work.

3.3 Model selection consistency

Theorem 3.9 ensures that the estimated precision matrix correctly excludes all non-informative edges and includes all the true edges (i, j) with |[Ω*_k]_{i,j}| > C √(m_k log m_k/(nm)) for some constant C > 0.
Therefore, in order to achieve model selection consistency, it suffices to assume that, for each k = 1, ..., K, the minimal signal θ_k := min_{(i,j) ∈ supp(Ω*_k)} |[Ω*_k]_{i,j}| is not too small.

Theorem 3.12. Under the conditions of Theorem 3.9, if θ_k ≥ C √(m_k log m_k/(nm)) for some constant C > 0, then for any k = 1, ..., K, sign(Ω̂_k) = sign(Ω*_k) with high probability.

Theorem 3.12 indicates that our Tlasso estimator is able to correctly recover the graphical structure of each way of the high-dimensional tensor data. To the best of our knowledge, this is the first model selection consistency result for high-dimensional tensor graphical models.

4 Simulations

We compare the proposed Tlasso estimator with two alternatives. The first is the direct graphical lasso (Glasso) approach [21], which applies the glasso to the vectorized tensor data to estimate Ω*_1 ⊗ ··· ⊗ Ω*_K directly. The second alternative is the iterative penalized maximum likelihood method (P-MLE) proposed by [9], whose termination condition is set to ∑_{k=1}^K ‖Ω̂_k^{(t)} − Ω̂_k^{(t−1)}‖_F ≤ 0.001 K. For simplicity, in our Tlasso algorithm we set the initialization of the k-th precision matrix to 1_{m_k} for each k = 1, ..., K, and the total number of iterations to T = 1. The tuning parameter λ_k is set to 20 √(log m_k/(n m m_k)). For a fair comparison, the same tuning parameter is applied in the P-MLE method. In the direct Glasso approach, the tuning parameter is chosen by cross-validation via the huge package [29]. We consider two simulations with a third-order tensor, i.e., K = 3. In Simulation 1, we construct a triangle graph, while in Simulation 2, we construct a four-nearest-neighbor graph for each precision matrix. An illustration of the generated graphs is shown in Figure 1. In each simulation, we consider three scenarios: s1: n = 10 and (m1, m2, m3) = (10, 10, 10); s2: n = 50 and (m1, m2, m3) = (10, 10, 10); s3: n = 10 and (m1, m2, m3) = (100, 5, 5).
We repeat each example 100 times and compute the averaged computational time, the averaged estimation error of the Kronecker product of precision matrices, (m1 m2 m3)^{−1} ‖Ω̂_1 ⊗ ··· ⊗ Ω̂_K − Ω*_1 ⊗ ··· ⊗ Ω*_K‖_F, the true positive rate (TPR), and the true negative rate (TNR). More specifically, let a*_{i,j} denote the (i, j)-th entry of Ω*_1 ⊗ ··· ⊗ Ω*_K, and define TPR := ∑_{i,j} 1(â_{i,j} ≠ 0, a*_{i,j} ≠ 0) / ∑_{i,j} 1(a*_{i,j} ≠ 0) and TNR := ∑_{i,j} 1(â_{i,j} = 0, a*_{i,j} = 0) / ∑_{i,j} 1(a*_{i,j} = 0). As shown in Figure 1, our Tlasso is dramatically faster than both alternative methods. In Scenario s3, Tlasso takes about five seconds per replicate and P-MLE takes about 500 seconds, while the direct Glasso method takes more than one hour and is omitted from the plot. The Tlasso algorithm is not only computationally efficient but also enjoys superior estimation accuracy. In all examples, the direct Glasso method has significantly larger errors than Tlasso because it ignores the tensor graphical structure. Tlasso outperforms P-MLE in Scenarios s1 and s2 and is comparable to it in Scenario s3.

Figure 1: Left two plots: illustrations of the generated graphs; middle two plots: computational time; right two plots: estimation errors. In each group of two plots, the left (right) is for Simulation 1 (2).

Table 1 shows the variable selection performance. Our Tlasso identifies almost all edges in these six examples, while the Glasso and P-MLE methods miss several true edges. On the other hand, Tlasso tends to include more non-connected edges than the other methods.

Table 1: A comparison of variable selection performance. Here TPR and TNR denote the true positive rate and true negative rate.
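The TPR/TNR definitions above can be sketched in NumPy as follows (a small illustration with our own helper name and a toy Kronecker-structured matrix, not the paper's simulation code):

```python
import numpy as np

def tpr_tnr(a_hat, a_star):
    """True positive / true negative rates over entries of the Kronecker product."""
    est, truth = (a_hat != 0), (a_star != 0)
    tpr = np.sum(est & truth) / np.sum(truth)
    tnr = np.sum(~est & ~truth) / np.sum(~truth)
    return tpr, tnr

# Toy truth: Kronecker product with 8 nonzero and 8 zero entries.
a_star = np.kron(np.eye(2), np.array([[1.0, 0.5], [0.5, 1.0]]))
a_hat = a_star.copy()
a_hat[0, 1] = 0.0               # simulate one missed true edge
tpr, tnr = tpr_tnr(a_hat, a_star)  # TPR = 7/8, TNR = 1.0
```

A perfect support recovery gives TPR = TNR = 1; over-selection lowers TNR, under-selection lowers TPR, matching the trade-off reported in Table 1.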
              Glasso                        P-MLE                         Tlasso
              TPR           TNR            TPR           TNR             TPR            TNR
Sim 1   s1    0.27 (0.002)  0.96 (0.000)   1 (0)         0.89 (0.002)    1 (0)          0.76 (0.004)
        s2    0.34 (0.000)  0.93 (0.000)   1 (0)         0.89 (0.002)    1 (0)          0.76 (0.004)
        s3    /             /              1 (0)         0.93 (0.001)    1 (0)          0.70 (0.004)
Sim 2   s1    0.08 (0.000)  0.96 (0.000)   0.93 (0.004)  0.88 (0.002)    1 (0)          0.65 (0.005)
        s2    0.15 (0.000)  0.92 (0.000)   1 (0)         0.85 (0.002)    1 (0)          0.63 (0.005)
        s3    /             /              0.82 (0.001)  0.93 (0.001)    0.99 (0.001)   0.38 (0.002)

Acknowledgement

We would like to thank the anonymous reviewers for their helpful comments. Han Liu is grateful for the support of NSF CAREER Award DMS1454377, NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. Guang Cheng's research is sponsored by NSF CAREER Award DMS1151692, NSF DMS1418042, a Simons Fellowship in Mathematics, ONR N00014-15-1-2331 and a grant from the Indiana Clinical and Translational Sciences Institute.

References

[1] S. Rendle and L. Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In International Conference on Web Search and Data Mining, 2010.
[2] G. I. Allen. Sparse higher-order principal components analysis. In International Conference on Artificial Intelligence and Statistics, 2012.
[3] J. Zahn, S. Poosala, A. Owen, D. Ingram, et al. AGEMAP: A gene expression database for aging in mice. PLOS Genetics, 3:2326–2337, 2007.
[4] T. Cai, W. Liu, and H. H. Zhou. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Annals of Statistics, 2015.
[5] C. Leng and C. Y. Tang. Sparse matrix graphical models. Journal of the American Statistical Association, 107:1187–1200, 2012.
[6] J. Yin and H. Li. Model selection and estimation in the matrix normal graphical model. Journal of Multivariate Analysis, 107:119–140, 2012.
[7] T. Tsiligkaridis, A. O. Hero, and S. Zhou. On convergence of Kronecker graphical Lasso algorithms. IEEE Transactions on Signal Processing, 61:1743–1755, 2013.
[8] S. Zhou.
Gemini: Graph estimation with matrix variate normal instances. Annals of Statistics, 42:532–562, 2014.
[9] S. He, J. Yin, H. Li, and X. Wang. Graphical model selection and estimation for high dimensional tensor data. Journal of Multivariate Analysis, 128:165–185, 2014.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Symposium on Theory of Computing, pages 665–674, 2013.
[11] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796–2804, 2013.
[12] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere. arXiv:1504.06785, 2015.
[13] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. arXiv:1503.00778, 2015.
[14] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773–2832, 2014.
[15] W. Sun, J. Lu, H. Liu, and G. Cheng. Provable sparse tensor decomposition. arXiv:1502.01425, 2015.
[16] S. Zhe, Z. Xu, X. Chu, Y. Qi, and Y. Park. Scalable nonparametric multiway data analysis. In International Conference on Artificial Intelligence and Statistics, 2015.
[17] S. Zhe, Z. Xu, Y. Qi, and P. Yu. Sparse Bayesian multiview learning for simultaneous association discovery and diagnosis of Alzheimer's disease. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[18] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
[19] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.
[20] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94:19–35, 2007.
[21] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso.
Biostatistics, 9:432–441, 2008. [22] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008. [23] W. Sun, J. Wang, and Y. Fang. Consistent selection of tuning parameters via variable selection stability. Journal of Machine Learning Research, 14:3419–3440, 2013. [24] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 2011. [25] J. Fan, Y. Feng, and Y. Wu. Network exploration via the adaptive Lasso and scad penalties. Annals of Statistics, 3:521–541, 2009. [26] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006. [27] P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing `1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011. [28] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics, 42:2164–2201, 2014. [29] T. Zhao, H. Liu, K. Roeder, J. Lafferty, and L. Wasserman. The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research, 13:1059–1062, 2012. [30] A. Gupta and D. Nagar. Matrix variate distributions. Chapman and Hall/CRC Press, 2000. [31] P. Hoff. Separable covariance arrays via the Tucker product, with applications to multivariate relational data. Bayesian Analysis, 6:179–196, 2011. [32] A.P. Dawid. Some matrix-variate distribution theory: Notational considerations and a bayesian application. Biometrika, 68:265–274, 1981. [33] S. Negahban and M.J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39:1069–1097, 2011. 9
Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent Ian E.H. Yen ∗ Kai Zhong ∗ Cho-Jui Hsieh † Pradeep Ravikumar ∗ Inderjit S. Dhillon ∗ ∗University of Texas at Austin †University of California at Davis ∗{ianyen,pradeepr,inderjit}@cs.utexas.edu zhongkai@ices.utexas.edu †chohsieh@ucdavis.edu Abstract Over the past decades, Linear Programming (LP) has been widely used in different areas and is considered one of the mature technologies in numerical optimization. However, the complexity of state-of-the-art algorithms (i.e. the interior-point method and the primal and dual simplex methods) is still unsatisfactory for problems in machine learning with a huge number of variables and constraints. In this paper, we investigate a general LP algorithm based on the combination of Augmented Lagrangian and Coordinate Descent (AL-CD), giving an iteration complexity of O((log(1/ϵ))^2) with O(nnz(A)) cost per iteration, where nnz(A) is the number of non-zeros in the m × n constraint matrix A; in practice, one can further reduce the cost per iteration to the order of non-zeros in the columns (rows) corresponding to the active primal (dual) variables through an active-set strategy. The algorithm thus yields a tractable alternative to standard LP methods for large-scale problems with sparse solutions and nnz(A) ≪ mn. We conduct experiments on large-scale LP instances from ℓ1-regularized multi-class SVM, Sparse Inverse Covariance Estimation, and Nonnegative Matrix Factorization, where the proposed approach finds solutions of 10^-3 precision orders of magnitude faster than state-of-the-art implementations of interior-point and simplex methods.
1 Introduction Linear Programming (LP) has been studied since the early 19th century and has become one of the representative tools of numerical optimization, with wide applications in machine learning such as ℓ1-regularized SVM [1], MAP inference [2], nonnegative matrix factorization [3], exemplar-based clustering [4, 5], sparse inverse covariance estimation [6], and Markov Decision Processes [7]. However, as the demand for scalability keeps increasing, the scalability of existing LP solvers has become unsatisfactory. In particular, most algorithms in machine learning targeting large-scale data have a complexity linear in the data size [8, 9, 10], while the complexity of state-of-the-art LP solvers (i.e. the Interior-Point method and the Primal and Dual Simplex methods) is still at least quadratic in the number of variables or constraints [11]. The quadratic complexity comes from the need to solve each linear system exactly in both the simplex and interior point methods. In particular, the simplex method, when traversing from one corner point to another, requires the solution of a linear system whose dimension is linear in the number of variables or constraints, while in an Interior-Point method, finding the Newton direction requires solving a linear system of similar size. While there are sparse variants of LU and Cholesky decomposition that can exploit the sparsity pattern of the matrix in a linear system, the worst-case complexity for solving such a system is at least quadratic in the dimension, except for very special cases such as a tri-diagonal or band-structured matrix.
1 Our solver has been released here: http://www.cs.utexas.edu/~ianyen/LPsparse/
For the interior point method (IPM), one remedy for the high complexity is employing an iterative method such as Conjugate Gradient (CG) to solve each linear system inexactly. However, this can hardly tackle the ill-conditioned linear systems produced by IPM when the iterates approach the boundary of the constraints [12].
Though substantial research has been devoted to the development of preconditioners that can help iterative methods mitigate the effect of ill-conditioning [12, 13], creating a preconditioner of tractable size is a challenging problem in itself [13]. Most commercial LP software thus still relies on exact methods to solve the linear system. On the other hand, some dual or primal (stochastic) sub-gradient descent methods have cheap per-iteration cost, but require O(1/ϵ^2) iterations to find a solution of ϵ precision, and in practice can hardly even find a feasible solution satisfying all constraints [14]. The Augmented Lagrangian Method (ALM) was invented as early as 1969, and since then several works have developed Linear Programming solvers based on ALM [15, 16, 17]. However, the challenge of ALM is that it produces a series of bound-constrained quadratic problems that, in the traditional sense, are harder to solve than the linear systems produced by IPM or Simplex methods [17]. Specifically, in a Projected-CG approach [18], one needs to solve several linear systems via CG to find a solution to the bound-constrained quadratic program, with no guarantee on how many iterations this requires. On the other hand, the Projected Gradient Method (PGM), despite its guaranteed iteration complexity, has very slow convergence in practice. More recently, Multi-block ADMM [19, 20] was proposed as a variant of ALM that, in each iteration, updates the blocks of primal variables for only one pass (or even less) before each dual update; this, however, requires a much smaller step size in the dual update to ensure convergence [20, 21] and thus a large number of iterations to converge to moderate precision. To our knowledge, there is still no report of a significant improvement of ALM-based methods over IPM or Simplex methods for Linear Programming.
In recent years, the Coordinate Descent (CD) method has demonstrated efficiency in many machine learning problems with bound constraints or other non-smooth terms [9, 10, 22, 23, 24, 25], and has a solid analysis of its iteration complexity [26, 27]. In this work, we show that a CD algorithm can be naturally combined with ALM to solve Linear Programs more efficiently than existing methods on large-scale problems. We provide an O((log(1/ϵ))^2) iteration complexity of the Augmented Lagrangian with Coordinate Descent (AL-CD) algorithm that bounds the total number of CD updates required for an ϵ-precise solution, and describe an implementation of AL-CD that has cost O(nnz(A)) for each pass of CD. In practice, an active-set strategy is introduced to further reduce the cost of each iteration to the active set of variables and constraints for primal-sparse and dual-sparse LPs respectively, where a primal-sparse LP has most of its variables equal to zero, and a dual-sparse LP has few binding constraints at the optimal solution. Note that, unlike in IPM, the conditioning of each subproblem in ALM does not worsen over iterations [15, 16]. The AL-CD framework thus provides an alternative to interior point and simplex methods when it is infeasible to solve an n × n (or m × m) linear system exactly.

2 Sparse Linear Program
We are interested in solving linear programs of the form

  min_{x∈R^n} f(x) = c^T x
  s.t. A_I x ≤ b_I,  A_E x = b_E,  x_j ≥ 0, j ∈ [n_b],   (1)

where A_I is an m_I × n matrix of coefficients and A_E is m_E × n. Without loss of generality, we assume the non-negativity constraints are imposed on the first n_b variables, denoted x_b, so that x = [x_b; x_f] and c = [c_b; c_f]. The inequality and equality coefficient matrices can then be partitioned as A_I = [A_I,b A_I,f] and A_E = [A_E,b A_E,f]. The dual problem of (1) then takes the form

  min_{y∈R^m} g(y) = b^T y
  s.t. −A_b^T y ≤ c_b,  −A_f^T y = c_f,  y_i ≥ 0, i ∈ [m_I],   (2)

where m = m_I + m_E, b = [b_I; b_E], A_b = [A_I,b; A_E,b], A_f = [A_I,f; A_E,f], and y = [y_I; y_E].
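As a concrete instance of the primal form (1), the small sketch below builds a tiny LP with one inequality row, one equality row, and non-negativity on the first n_b variables, and solves it with the off-the-shelf scipy.optimize.linprog solver (standing in for the paper's AL-CD purely for illustration; the instance itself is made up):

```python
import numpy as np
from scipy.optimize import linprog

# Tiny instance of the primal LP (1):
#   min  x1 + 2*x2 - x3
#   s.t. x1 + x2 <= 4        (A_I x <= b_I)
#        x2 + x3  = 2        (A_E x  = b_E)
#        x1, x2 >= 0, x3 free (n_b = 2)
c   = [1.0, 2.0, -1.0]
A_I = [[1.0, 1.0, 0.0]]; b_I = [4.0]
A_E = [[0.0, 1.0, 1.0]]; b_E = [2.0]
bounds = [(0, None), (0, None), (None, None)]  # first n_b variables non-negative

res = linprog(c, A_ub=A_I, b_ub=b_I, A_eq=A_E, b_eq=b_E, bounds=bounds)
# Substituting x3 = 2 - x2 gives objective x1 + 3*x2 - 2, so the optimum is
# x = (0, 0, 2) with value -2.
```

Checking such toy instances against a standard solver is a convenient way to validate the problem assembly before switching to a large-scale method.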
In most LPs occurring in machine learning, m and n are both on the order of 10^5–10^6, for which an algorithm with cost O(mn), O(n^2) or O(m^2) is unacceptable. Fortunately, there are usually various types of sparsity present in the problem that can be exploited to lower the complexity. First, the constraint matrix A = [A_I; A_E] is usually quite sparse in the sense that nnz(A) ≪ mn, and one can compute the matrix-vector product Ax in O(nnz(A)). However, in most current LP solvers, not only matrix-vector products but also linear systems involving A need to be solved, which in general costs much more than O(nnz(A)) and can be up to O(min(n^3, m^3)) in the worst case. In particular, the simplex-type methods, when moving from one corner to another, require solving a linear system that involves a sub-matrix of A with columns corresponding to the basic variables [11], while in an interior point method (IPM), one also needs to solve a normal-equation system with matrix A D_t A^T to obtain the Newton direction, where D_t is a diagonal matrix that gradually enforces complementary slackness as the IPM iteration t grows [11]. While one remedy for the high complexity is to employ an iterative method such as Conjugate Gradient (CG) to solve the system inexactly within IPM, this approach can hardly handle the ill-conditioning that occurs when the IPM iterates approach the boundary [12]. The Augmented Lagrangian approach, on the other hand, does not suffer from such asymptotic ill-conditioning, and thus an iterative method with complexity linear in O(nnz(A)) can be used to produce a sufficiently accurate solution for each sub-problem. Besides sparsity in the constraint matrix A, two other types of structure, which we term primal and dual sparsity, are also prevalent in the context of machine learning.
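To make the O(nnz(A)) matrix-vector cost concrete, here is a minimal compressed-sparse-row (CSR) mat-vec in plain Python; the inner loop touches only the stored non-zeros, never the m·n dense entries. This is an illustrative sketch, not the paper's implementation:

```python
# Minimal CSR (compressed sparse row) matrix-vector product: the cost is
# proportional to nnz(A), independent of the dense m*n size.
def csr_matvec(indptr, indices, data, x):
    m = len(indptr) - 1
    y = [0.0] * m
    for i in range(m):
        # inner loop touches only the stored non-zeros of row i
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[2, 0, 0],
#      [0, 0, 3],
#      [1, 4, 0]]   stored with nnz(A) = 4 entries
indptr, indices = [0, 1, 2, 4], [0, 2, 0, 1]
data = [2.0, 3.0, 1.0, 4.0]
y = csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0])  # -> [2.0, 3.0, 5.0]
```

Production codes would of course use a library sparse type (e.g. scipy.sparse CSR), which uses the same indptr/indices/data layout.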
A primal-sparse LP is an LP whose optimal solution x* comprises only a few non-zero elements, while a dual-sparse LP is an LP with few binding constraints at the optimum, which correspond to the non-zero dual variables. In the following, we give two examples of sparse LPs.

L1-Regularized Support Vector Machine The problem of L1-regularized multi-class Support Vector Machine [1] is

  min_{w_m, ξ_i} λ Σ_{m=1}^k ‖w_m‖_1 + Σ_{i=1}^l ξ_i
  s.t. w_{y_i}^T x_i − w_m^T x_i ≥ e_i^m − ξ_i, ∀(i, m),   (3)

where e_i^m = 0 if y_i = m and e_i^m = 1 otherwise. The task is dual-sparse since, among all samples i and classes m, only those that lead to misclassification become binding constraints. The problem (3) is also primal-sparse, since it performs feature selection through the ℓ1-penalty. Note the constraint matrix in (3) is sparse as well, since each constraint involves only two weight vectors, and the patterns x_i can themselves be sparse.

Sparse Inverse Covariance Estimation Sparse Inverse Covariance Estimation aims to find a sparse matrix Ω that approximates the inverse of the covariance matrix. One of the most popular approaches solves a program of the form [6]

  min_{Ω∈R^{d×d}} ‖Ω‖_1  s.t. ‖SΩ − I_d‖_max ≤ λ,   (4)

which is primal-sparse due to the ‖·‖_1 penalty. The problem has a dense constraint matrix, which however has special structure: the coefficient matrix S can be decomposed into a product of two low-rank and (possibly) sparse n × d matrices, S = Z^T Z. In case Z is sparse or n ≪ d, this decomposition can be exploited to solve the Linear Program much more efficiently. We discuss how to utilize this structure in Section 4.3.

3 Primal and Dual Augmented Coordinate Descent
In this section, we describe an Augmented Lagrangian Method (ALM) that carefully exploits the sparsity in an LP. The choice between the primal and dual ALM depends on the type of sparsity present in the LP.
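One column of (4) can be written as an LP by the standard split ω = ω⁺ − ω⁻ with ω⁺, ω⁻ ≥ 0: minimize 1^T(ω⁺ + ω⁻) subject to −λ1 ≤ S(ω⁺ − ω⁻) − e_j ≤ λ1. The sketch below assembles this LP on toy data (an off-the-shelf solver again stands in for AL-CD purely for illustration; the data and λ are made up):

```python
import numpy as np
from scipy.optimize import linprog

# One column subproblem of (4): min ||w||_1  s.t. ||S w - e_j||_max <= lam,
# linearized via w = wp - wn with wp, wn >= 0.
rng = np.random.default_rng(0)
d, lam = 5, 0.2
Z = rng.normal(size=(20, d))
S = Z.T @ Z / 20                  # S factors as Z^T Z (up to scaling), cf. Sec. 4.3
e0 = np.eye(d)[0]                 # right-hand side for column j = 0

c = np.ones(2 * d)                # ||w||_1 = 1^T wp + 1^T wn
A_ub = np.vstack([np.hstack([S, -S]),    #  S w - e0 <= lam
                  np.hstack([-S, S])])   # -S w + e0 <= lam
b_ub = np.concatenate([e0 + lam, lam - e0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * d))
w = res.x[:d] - res.x[d:]
```

The instance is feasible (w = S⁻¹ e0 satisfies the box constraint exactly), and the ℓ1 objective drives many entries of w to zero, which is precisely the primal sparsity discussed above.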
In particular, a primal ALM can solve a problem with few non-zero variables more efficiently, while the dual ALM is more efficient for problems with few binding constraints. In the following, we describe the algorithm only from the primal point of view; the dual version is obtained by exchanging the roles of the primal (1) and dual (2) problems.

Algorithm 1 (Primal) Augmented Lagrangian Method
Initialization: y^0 ∈ R^m and η_0 > 0.
repeat
  1. Solve (6) to obtain (x^{t+1}, ξ^{t+1}) from y^t.
  2. Update y^{t+1} = y^t + η_t [A_I x^{t+1} − b_I + ξ^{t+1}; A_E x^{t+1} − b_E].
  3. t = t + 1.
  4. Increase η_t by a constant factor if necessary.
until ‖[A_I x^t − b_I]_+‖_∞ ≤ ϵ and ‖A_E x^t − b_E‖_∞ ≤ ϵ.

3.1 Augmented Lagrangian Method (Dual Proximal Method)
Let g(y) be the dual objective function (2), extended to take value ∞ if y is infeasible. The primal AL algorithm can be interpreted as a dual proximal point algorithm [16] that at each iteration t solves

  y^{t+1} = argmin_y g(y) + (1/(2η_t)) ‖y − y^t‖^2.   (5)

Since g(y) is nonsmooth, (5) is no easier to solve than the original dual problem. However, the dual of (5) takes the form

  min_{x, ξ} F(x, ξ) = c^T x + (η_t/2) ‖ [A_I x − b_I + ξ; A_E x − b_E] + (1/η_t) [y_I^t; y_E^t] ‖^2
  s.t. x_b ≥ 0, ξ ≥ 0,   (6)

which is a bound-constrained quadratic problem. Note that, given (x, ξ) as the Lagrange multipliers of (5), the corresponding y minimizing the Lagrangian L(x, ξ, y) is

  y(x, ξ) = η_t [A_I x − b_I + ξ; A_E x − b_E] + [y_I^t; y_E^t],   (7)

and thus one can solve for (x*, ξ*) in (6) and recover y^{t+1} through (7). The resulting algorithm is sketched in Algorithm 1. For problems of medium scale, (6) is not easier to solve than a linear system due to the non-negativity constraints, and thus ALM is not preferred over IPM in the traditional sense. However, for large-scale problems with m × n ≫ nnz(A), ALM becomes advantageous since: (i) the conditioning of (6) does not worsen over iterations, which allows iterative methods to solve it approximately in time proportional to O(nnz(A)).
(ii) For a primal-sparse (dual-sparse) problem, most of the primal (dual) variables become binding at zero as the iterates approach the optimal solution, which yields potentially much smaller subproblems.

3.2 Solving Subproblem via Coordinate Descent
Given a dual solution y^t, we employ a variant of the Randomized Coordinate Descent (RCD) method to solve subproblem (6). First, we note that, given x, the variables ξ can be minimized in closed form as

  ξ(x) = [b_I − A_I x − y_I^t/η_t]_+,   (8)

where the function [v]_+ truncates each element of vector v to be non-negative, [v]_+i = max{v_i, 0}. Then (6) can be rewritten as

  min_{x: x_b ≥ 0} F̂(x) = c^T x + (η_t/2) ‖ [ [A_I x − b_I + y_I^t/η_t]_+ ; A_E x − b_E + y_E^t/η_t ] ‖^2.   (9)

Algorithm 2 RCD for subproblem (6)
INPUT: η_t > 0 and (x^{t,0}, w^{t,0}, v^{t,0}) satisfying relations (11), (12).
OUTPUT: (x^{t,k}, w^{t,k}, v^{t,k})
repeat
  1. Pick a coordinate j uniformly at random.
  2. Compute ∇_j F̂(x), ∇²_j F̂(x).
  3. Obtain the Newton direction d*_j.
  4. Do line search (15) to find the step size.
  5. Update x^{t,k+1} ← x^{t,k} + β^r d*_j.
  6. Maintain relations (11), (12).
  7. k ← k + 1.
until ‖d*(x)‖_∞ ≤ ϵ_t.

Algorithm 3 PN-CG for subproblem (6)
INPUT: η_t > 0 and (x^{t,0}, w^{t,0}, v^{t,0}) satisfying relations (11), (12).
OUTPUT: (x^{t,k}, w^{t,k}, v^{t,k})
repeat
  1. Identify the active variables A^{t,k}.
  2. Compute [∇_j F(x)]_{A^{t,k}} and set D^{t,k}.
  3. Find the Newton direction d*_{A^{t,k}} with CG.
  4. Find the step size via projected line search.
  5. Update x^{t,k+1} ← [x^{t,k} + β^r d*]_+.
  6. Maintain relations (11), (12).
  7. k ← k + 1.
until ‖d*_{A^{t,k}}‖_∞ ≤ ϵ_t.

Denote the objective function of (9) as F̂(x). The gradient of (9) can be expressed as

  ∇F̂(x) = c + η_t A_I^T [w]_+ + η_t A_E^T v,   (10)
  where w = A_I x − b_I + y_I^t/η_t   (11)
  and   v = A_E x − b_E + y_E^t/η_t,   (12)

and the (generalized) Hessian of (9) is

  ∇²F̂(x) = η_t A_I^T D(w) A_I + η_t A_E^T A_E,   (13)

where D(w) is an m_I × m_I diagonal matrix with D_ii(w) = 1 if w_i > 0 and D_ii(w) = 0 otherwise. The RCD algorithm then proceeds as follows. In each iteration k, it picks a coordinate j ∈ {1, .., n} uniformly at random and minimizes w.r.t.
the coordinate. The minimization is conducted by a single-variable Newton step, which first finds the Newton direction d*_j by minimizing a quadratic approximation,

  d*_j = argmin_d ∇_j F̂(x^{t,k}) d + (1/2) ∇²_j F̂(x^{t,k}) d²  s.t. x^{t,k}_j + d ≥ 0,   (14)

and then conducts a line search to find the smallest r ∈ {0, 1, 2, ...} satisfying

  F̂(x^{t,k} + β^r d*_j e_j) − F̂(x^{t,k}) ≤ σ β^r ∇_j F̂(x^{t,k}) d*_j   (15)

for line-search parameters σ ∈ (0, 1/2], β ∈ (0, 1), where e_j denotes the vector with jth element equal to 1 and all others equal to 0. Note the single-variable problem (14) has the closed-form solution

  d*_j = [x^{t,k}_j − ∇_j F̂(x^{t,k})/∇²_j F̂(x^{t,k})]_+ − x^{t,k}_j,   (16)

which in a naive implementation takes O(nnz(A)) time due to the computation of (11) and (12). However, in a more careful implementation, one can maintain the relations (11), (12) as follows whenever a coordinate x_j is updated by β^r d*_j:

  [w^{t,k+1}; v^{t,k+1}] = [w^{t,k}; v^{t,k}] + β^r d*_j [a^I_j; a^E_j],   (17)

where a_j = [a^I_j; a^E_j] denotes the jth column of A_I and A_E. Then the gradient and (generalized) second derivative with respect to the jth coordinate,

  ∇_j F̂(x) = c_j + η_t ⟨a^I_j, [w]_+⟩ + η_t ⟨a^E_j, v⟩,
  ∇²_j F̂(x) = η_t ( Σ_{i: w_i > 0} (a^I_{i,j})² + Σ_i (a^E_{i,j})² ),   (18)

can be computed in O(nnz(a_j)) time. Similarly, for each coordinate update, one can evaluate the change in function value F̂(x^{t,k} + d*_j e_j) − F̂(x^{t,k}) in O(nnz(a_j)) by computing only the terms involving the jth variable. The overall procedure for solving the subproblem is summarized in Algorithm 2. In practice, a random permutation is used instead of uniform sampling to ensure that every coordinate is updated once before proceeding to the next round, which speeds up convergence and eases the checking of the stopping condition ‖d*(x)‖_∞ ≤ ϵ_t; an active-set strategy is also employed to avoid updating variables with d*_j = 0. We describe the details in Section 4.

3.3 Convergence Analysis
In this section, we prove the iteration complexity of the AL-CD method.
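Before turning to the analysis, the overall method of Sections 3.1–3.2 (Algorithm 1's outer loop with Algorithm 2's coordinate updates (16)–(18) inside) can be sketched end to end. The toy LP below has only equality constraints, so the [·]_+ terms of (9)–(13) drop out and each coordinate step (16) is an exact minimization; the fixed η, iteration counts, and cyclic (rather than randomized) coordinate order are illustrative choices, not the paper's settings:

```python
import numpy as np

# AL-CD sketch on:  min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0
# (optimum x* = (1, 0), optimal dual y* = -1).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
eta = 1.0
x, y = np.zeros(2), np.zeros(1)
col_sq = (A ** 2).sum(axis=0)              # ||a_j||^2, curvature pieces of (18)
for t in range(10):                        # outer ALM iterations (Algorithm 1)
    v = A @ x - b + y / eta                # residual, eq. (12)
    for k in range(20):                    # inner coordinate steps (Algorithm 2)
        j = k % 2
        g = c[j] + eta * (A[:, j] @ v)     # coordinate gradient, eq. (18)
        H = eta * col_sq[j]                # coordinate curvature, eq. (18)
        d = max(x[j] - g / H, 0.0) - x[j]  # closed-form step, eq. (16)
        x[j] += d
        v += d * A[:, j]                   # O(nnz(a_j)) maintenance, eq. (17)
    y = y + eta * (A @ x - b)              # dual (multiplier) update, eq. (7)
```

On this instance the iterates reach x = (1, 0) and y = −1 exactly after two outer iterations and then stay there, matching the fixed point of the dual proximal updates.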
Existing analyses [26, 27] show that Randomized Coordinate Descent can be up to n times faster than gradient-based methods under certain conditions. However, proving a global linear rate of convergence requires the objective function to be strongly convex, which is not true for our sub-problem (6). Here we follow the approach of [28, 29] to show global linear convergence of Algorithm 2 by exploiting the fact that, when restricted to a constant subspace, (6) is strongly convex. All proofs are included in the appendix.

Theorem 1 (Linear Convergence). Denote by F* the optimum of (6) and let x̄ = [x; ξ]. The iterates {x̄^k}_{k=0}^∞ of the RCD Algorithm 2 satisfy

  E[F(x̄^{k+1})] − F* ≤ (1 − 1/(γn)) (E[F(x̄^k)] − F*),   (19)

where γ = max{ 16 η_t M θ (F^0 − F*), 2Mθ(1 + 4L_g²), 6 }, M = max_{j∈[n̄]} ‖ā_j‖² is an upper bound on the coordinate-wise second derivative, L_g is the local Lipschitz constant of the function g(z) = η_t ‖z − b + y^t/η_t‖², and θ is the constant of Hoffman's bound, which depends on the polyhedron formed by the set of optimal solutions.

The following theorem bounds the number of iterations required to find an ϵ_0-precise solution in terms of the proximal minimization (5).

Theorem 2 (Inner Iteration Complexity). Denote by y(x̄^k) the dual solution (7) corresponding to the primal iterate x̄^k. To guarantee

  ‖y(x̄^k) − y^{t+1}‖ ≤ ϵ_0   (20)

with probability 1 − p, it suffices to run the RCD Algorithm 2 for

  k ≥ 2γn log( sqrt( 2(F(x̄^0) − F*) / (η_t p) ) · (1/ϵ_0) )

iterations.

We now establish the overall iteration complexity of AL-CD. Note that the existing linear convergence analysis of ALM for Linear Programming [16] assumes exact solutions of subproblem (6), which is not possible in practice. Our next theorem extends the linear convergence result to the case where subproblems are solved inexactly and, in particular, bounds the total number of coordinate descent updates required to find an ϵ-accurate solution.

Theorem 3 (Iteration Complexity).
Denote by {ŷ^t}_{t=1}^∞ the sequence of iterates obtained from the inexact dual proximal updates, by {y^t}_{t=1}^∞ the sequence generated by exact updates, and by y_{S*} the projection of y onto the set of optimal dual solutions. To guarantee ‖ŷ^t − ŷ^t_{S*}‖ ≤ 2ϵ with probability 1 − p, it suffices to run Algorithm 1 for

  T = (1 + 1/α) log(LR/ϵ)   (21)

outer iterations with η_t = (1 + α)L, and to solve each sub-problem (6) by running Algorithm 2 for

  k ≥ 2γn ( log(ω/ϵ) + (3/2) log( (1 + 1/α) log(LR/ϵ) ) )   (22)

inner iterations, where L is a constant depending on the polyhedral set of optimal solutions, ω = sqrt( 2(1+α)L(F^0 − F*) / p ), R = ‖prox_{η_t g}(y^0) − y^0‖, and F^0, F* are upper and lower bounds on the initial and optimal objective values of the subproblem, respectively.

3.4 Fast Asymptotic Convergence via Projected Newton-CG
The RCD algorithm converges efficiently to a solution of moderate precision, but in some problems a higher precision may be required. In such cases, we switch the subproblem solver from RCD to a Projected Newton-CG (PN-CG) method once the iterates are close enough to the optimum. Note the Projected Newton method does not have a global iteration complexity guarantee, but converges fast for iterates very close to the optimum. Denote by F(x) the objective in (9). Each iteration of PN-CG begins by finding the set of active variables, defined as

  A^{t,k} = { j | x^{t,k}_j > 0 ∨ ∇_j F(x^{t,k}) < 0 }.   (23)

The algorithm then fixes x^{t,k}_j = 0 for all j ∉ A^{t,k} and solves the Newton linear system with respect to j ∈ A^{t,k},

  [∇²_{A^{t,k}} F(x^{t,k})] d = −[∇_{A^{t,k}} F(x^{t,k})],   (24)

to obtain the direction d* for the current active variables. Let d_{A^{t,k}} denote the size-n vector taking the values of d* for j ∈ A^{t,k} and the value 0 for j ∉ A^{t,k}. The algorithm then conducts a projected line search to find the smallest r ∈ {0, 1, 2, ...} satisfying

  F([x^{t,k} + β^r d_{A^{t,k}}]_+) − F(x^{t,k}) ≤ σ β^r ∇F(x^{t,k})^T d_{A^{t,k}},   (25)

and updates x by x^{t,k+1} ← [x^{t,k} + β^r d_{A^{t,k}}]_+.
Compared to the interior point method, one key to the tractability of this approach lies in the conditioning of the linear system (24), which does not worsen as the outer iteration t increases, so an iterative Conjugate Gradient (CG) method can be used to obtain an accurate solution without factorizing the Hessian matrix. The only operation required within CG is the Hessian-vector product

  [∇²_{A^{t,k}} F(x^{t,k})] s = η_t [A_I^T D(w^{t,k}) A_I + A_E^T A_E]_{A^{t,k}} s,   (26)

where the operator [·]_{A^{t,k}} takes the sub-matrix with row and column indices belonging to A^{t,k}. For a primal- or dual-sparse LP, the product (26) can be evaluated very efficiently, since it only involves the non-zero elements in the columns of A_I, A_E belonging to the active set and the rows of A_I corresponding to the binding constraints, for which D_ii(w^{t,k}) > 0. The overall cost of the product (26) is only O( nnz([A_I]_{D^{t,k}, A^{t,k}}) + nnz([A_E]_{:, A^{t,k}}) ), where D^{t,k} = {i | w^{t,k}_i > 0} is the set of currently binding constraints. Since the computational bottleneck of PN-CG is the CG iterations for solving the linear system (24), the efficient computation of the product (26) reduces the overall complexity of PN-CG significantly. The whole procedure is summarized in Algorithm 3.

4 Practical Issues
4.1 Precision of Subproblem Minimization
In practice, it is unnecessary to solve subproblem (6) to high precision, especially in the early iterations of ALM. In our implementation, we employ a two-phase strategy: in the first phase we limit the cost spent on each sub-problem (6) to a constant multiple of nnz(A), while in the second phase we dynamically increase the AL parameter η_t and the inner precision ϵ_t to ensure sufficient decrease in the primal and dual infeasibility, respectively. The two-phase strategy is particularly useful for primal- or dual-sparse problems, where the sub-problems in the latter phase have smaller active sets, resulting in less computation even when solved to high precision.
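The restricted Hessian-vector product (26) amounts to masking the binding rows of A_I and the active columns; the sketch below (random illustrative data, not the paper's code) checks the masked product against a dense computation of the generalized Hessian (13) restricted to the active set:

```python
import numpy as np

# Hessian-vector product (26): restrict to active variables and binding rows,
# then compare against the dense generalized Hessian (13).
rng = np.random.default_rng(2)
AI, AE = rng.normal(size=(5, 4)), rng.normal(size=(2, 4))
w = rng.normal(size=5)            # residuals (11); w_i > 0 marks a binding row
eta = 1.5
active = np.array([True, False, True, True])   # active variable set A^{t,k}
s = rng.normal(size=int(active.sum()))

# Masked product: only binding rows of A_I and active columns contribute.
AI_a = AI[w > 0][:, active]
AE_a = AE[:, active]
hv = eta * (AI_a.T @ (AI_a @ s) + AE_a.T @ (AE_a @ s))

# Dense reference: full generalized Hessian (13), restricted to A^{t,k}.
D = np.diag((w > 0).astype(float))
H = eta * (AI.T @ D @ AI + AE.T @ AE)
hv_ref = H[np.ix_(active, active)] @ s
```

In a sparse implementation the two matrix-vector products in `hv` are the only work per CG step, which is exactly the nnz-based cost stated above.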
4.2 Active-Set Strategy
Our implementation of Algorithm 2 maintains an active set of variables A, which initially contains all variables; during the RCD iterations, any variable x_j binding at 0 whose gradient ∇_j F exceeds a threshold δ is excluded from A until the end of the current subproblem solve. A is re-initialized after each dual proximal update (7). Note that in the initial phase the cost spent on each subproblem is a constant multiple of nnz(A), so if |A| is small, more iterations are spent on the active variables, yielding faster convergence.

4.3 Dealing with a Decomposable Constraint Matrix
When an m × n constraint matrix A = UV^T can be decomposed into the product of an m × r matrix U and an r × n matrix V^T, and r ≪ min{m, n} or nnz(U) + nnz(V) ≪ nnz(A), we can re-formulate the constraint Ax ≤ b as U z ≤ b, V^T x = z with auxiliary variables z ∈ R^r. This new representation reduces the cost of the Hessian-vector product in Algorithm 3 and the cost of each pass of CD in Algorithm 2 from O(nnz(A)) to O(nnz(U) + nnz(V)).

5 Numerical Experiments

Table 1: Timing results (in sec. unless specified otherwise) on multiclass L1-regularized SVM.
Data        nb          mI         P-Simp.   D-Simp.   Barrier   D-ALCD   P-ALCD
rcv1        4,833,738   778,200    > 48hr    > 48hr    > 48hr    3,452    3,155
news        2,498,415   302,765    > 48hr    37,912    > 48hr    148      395
sector      11,597,992  666,848    > 48hr    9,282     > 48hr    1,419    2,029
mnist       75,620      540,000    6,454     2,556     73,036    146      7,207
cod-rna.rf  69,537      59,535     86,130    5,738     > 48hr    3,130    2,676
vehicle     79,429      157,646    3,296     143.33    8,858     31       598
real-sim    114,227     72,309     > 48hr    49,405    89,476    179      297

Table 2: Timing results (in sec. unless specified otherwise) on Sparse Inverse Covariance Estimation.
Data      nb      mI      mE      nf      P-Simp   D-Simp   Barrier   D-ALCD   P-ALCD
textmine  60,876  60,876  43,038  43,038  > 48hr   > 48hr   > 48hr    43,096   18,507
E2006     55,834  55,834  32,174  32,174  > 48hr   > 48hr   94,623    > 48hr   4,207
dorothea  47,232  47,232  1,600   1,600   3,980    103      82        47       38

Table 3: Timing results (in sec. unless specified otherwise)
for Nonnegative Matrix Factorization.
Data       nb          mI          P-Simp.   D-Simp.   Barrier   D-ALCD   P-ALCD
micromass  2,896,770   4,107,438   > 96hr    > 96hr    280,230   12,966   12,119
ocr        6,639,433   13,262,864  > 96hr    > 96hr    284,530   40,242   > 96hr

In this section, we compare the AL-CD algorithm with state-of-the-art implementations of the interior point and primal/dual Simplex methods in the commercial LP solver CPLEX, which is among the most efficient of many LP solvers as investigated in [30]. For all experiments, the stopping criterion requires both primal and dual infeasibility (in the ℓ∞-norm) to be smaller than 10^-3, with the initial subproblem tolerance ϵ_t = 10^-2 and η_t = 1. The LP instances are generated from L1-SVM (3), Sparse Inverse Covariance Estimation (4), and Nonnegative Matrix Factorization [3]. For the Sparse Inverse Covariance Estimation problem, we use the technique introduced in Section 4.3 to decompose the low-rank matrix S, and since (4) decomposes into d independent problems, one for each column of the estimated matrix, we report results on only one of them. The data sources and statistics are included in the appendix. Across all experiments, we observe that the proposed primal and dual AL-CD methods become particularly advantageous when the matrix A is sparse. For example, for the text data sets rcv1, real-sim, and news in Table 1, the matrix A is particularly sparse and AL-CD can be orders of magnitude faster than the other approaches by avoiding solving an n × n linear system exactly. In addition, the dual AL-CD (like the dual simplex) is more efficient on the L1-SVM problems due to their strong dual sparsity, while the primal AL-CD is more efficient on the primal-sparse Inverse Covariance Estimation problems. For the Nonnegative Matrix Factorization problem, neither the dual nor the primal LP solutions are particularly sparse due to the choice of matrix approximation tolerance (1% of #samples), but the AL-CD approach is still comparatively more efficient.
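The Section 4.3 reformulation used above for the covariance instances can be illustrated directly: for A = UV^T, the product Ax is computed through the factors via z = V^T x, costing O((m + n) r) with dense factors (and O(nnz(U) + nnz(V)) with sparse ones) instead of O(mn). A minimal check on made-up data:

```python
import numpy as np

# A = U V^T with r << min(m, n): replace A x <= b by U z <= b, V^T x = z.
rng = np.random.default_rng(3)
m, n, r = 50, 40, 3
U, V = rng.normal(size=(m, r)), rng.normal(size=(n, r))
A = U @ V.T                  # dense m x n matrix, but only rank r
x = rng.normal(size=n)

z = V.T @ x                  # auxiliary variables of the reformulation
lhs_direct = A @ x           # O(m*n) work on the dense product
lhs_split  = U @ z           # O((m+n)*r) work through the two factors
```

Both expressions give the same left-hand side for the constraint Ax ≤ b, so feasibility is unchanged while every matrix-vector product inside CD or CG becomes cheap.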
Acknowledgement We acknowledge the support of ARO via W911NF-12-1-0390, and the support of NSF via grants CCF-1320746, CCF-1117055, IIS-1149803, IIS-1320894, IIS-1447574, DMS1264033, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical Sciences.
References
[1] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. NIPS, 2004.
[2] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[3] N. Gillis and R. Luce. Robust near-separable nonnegative matrix factorization using linear optimization. JMLR, 2014.
[4] A. Nellore and R. Ward. Recovery guarantees for exemplar-based clustering. arXiv, 2013.
[5] I. Yen, X. Lin, K. Zhong, P. Ravikumar, and I. Dhillon. A convex exemplar-based approach to MAD-Bayes dirichlet process mixture models. In ICML, 2015.
[6] M. Yuan. High dimensional inverse covariance matrix estimation via linear programming. JMLR, 2010.
[7] D. Bello and G. Riano. Linear programming solvers for Markov decision processes. In Systems and Information Engineering Design Symposium, pages 90–95, 2006.
[8] T. Joachims. Training linear SVMs in linear time. In KDD. ACM, 2006.
[9] C. Hsieh, K. Chang, C. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, volume 307. ACM, 2008.
[10] G. Yuan, K. Chang, C. Hsieh, and C. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. JMLR, 11, 2010.
[11] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.
[12] J. Gondzio. Interior point methods 25 years later. EJOR, 2012.
[13] J. Gondzio. Matrix-free interior point method. Computational Optimization and Applications, 2012.
[14] V. Eleuterio and D. Lucia. Finding approximate solutions for large scale linear programs. Thesis, 2009.
[15] Yu. G. Evtushenko, A. I. Golikov, and N. Mollaverdy.
Augmented Lagrangian method for large-scale linear programming problems. Optimization Methods and Software, 20(4-5):515–524, 2005.
[16] F. Delbos and J. C. Gilbert. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. 2003.
[17] O. Güler. Augmented Lagrangian algorithms for linear programming. Journal of Optimization Theory and Applications, 75(3):445–470, 1992.
[18] J. Moré and G. Toraldo. On the solution of large quadratic programming problems with bound constraints. SIAM Journal on Optimization, 1(1):93–113, 1991.
[19] M. Hong and Z. Luo. On linear convergence of alternating direction method of multipliers. arXiv, 2012.
[20] H. Wang, A. Banerjee, and Z. Luo. Parallel direction method of multipliers. In NIPS, 2014.
[21] C. Chen, B. He, Y. Ye, and X. Yuan. The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Mathematical Programming, 2014.
[22] I. Dhillon, P. Ravikumar, and A. Tewari. Nearest neighbor based greedy coordinate descent. In NIPS, 2011.
[23] I. Yen, C. Chang, T. Lin, S. Lin, and S. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In KDD. ACM, 2013.
[24] I. Yen, S. Lin, and S. Lin. A dual-augmented block minimization framework for learning with limited memory. In NIPS, 2015.
[25] K. Zhong, I. Yen, I. Dhillon, and P. Ravikumar. Proximal quasi-Newton for computationally intensive l1-regularized m-estimators. In NIPS, 2014.
[26] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.
[27] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[28] P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization.
The Journal of Machine Learning Research, 15(1):1523–1548, 2014. [29] I. Yen, C. Hsieh, P. Ravikumar, and I.S. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014. [30] B. Meindl and M. Templ. Analysis of commercial and free and open source solvers for linear optimization problems. Eurostat and Statistics Netherlands, 2012. [31] A.J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 49(4):263–265, 1952. [32] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007. [33] I. Yen, T. Lin, S. Lin, P. Ravikumar, and I. Dhillon. Sparse random feature algorithm as coordinate descent in Hilbert space. In NIPS, 2014. 9
Lifted Symmetry Detection and Breaking for MAP Inference

Tim Kopp, University of Rochester, Rochester, NY, tkopp@cs.rochester.edu
Parag Singla, I.I.T. Delhi, Hauz Khas, New Delhi, parags@cse.iitd.ac.in
Henry Kautz, University of Rochester, Rochester, NY, kautz@cs.rochester.edu

Abstract

Symmetry breaking is a technique for speeding up propositional satisfiability testing by adding constraints to the theory that restrict the search space while preserving satisfiability. In this work, we extend symmetry breaking to the problem of model finding in weighted and unweighted relational theories, a class of problems that includes MAP inference in Markov Logic and similar statistical-relational languages. We introduce term symmetries, which are induced by an evidence set and extend to symmetries over a relational theory. We identify the important special case of term equivalent symmetries, showing that such symmetries can be found in low-degree polynomial time. We show how to break an exponential number of these symmetries with added constraints whose number is linear in the size of the domain. We demonstrate the effectiveness of these techniques through experiments in two relational domains. We also discuss the connections between relational symmetry breaking and work on lifted inference in statistical-relational reasoning.

1 Introduction

Symmetry-breaking is an approach to speeding up satisfiability testing by adding constraints, called symmetry-breaking predicates (SBPs), to a theory [7, 1, 16]. Symmetries in the theory define a partitioning over the space of truth assignments, where the assignments in a partition either all satisfy or all fail to satisfy the theory. The added SBPs rule out some but not all of the truth assignments in the partitions, thus reducing the size of the search space while preserving satisfiability. We extend the notion of symmetry-breaking to model-finding in relational theories.
A relational theory is specified by a set of first-order axioms over finite domains, optional weights on the axioms or predicates of the theory, and a set of ground literals representing evidence. By model finding we mean satisfiability testing (unweighted theories), weighted MaxSAT (weights on axioms), or maximum weighted model finding (weights on predicates). The weighted versions of model finding encompass MAP inference in Markov Logic and similar statistical-relational languages. We introduce methods for finding symmetries in a relational theory that do not depend upon solving graph isomorphism over its full propositional grounding. We show how graph isomorphism can be applied to just the evidence portion of a relational theory in order to find the set of what we call term symmetries. We go on to define the important subclass of term equivalent symmetries, and show that they can be found in O(nM log M) time, where n is the number of constants and M is the size of the evidence. Next we provide the formulation for breaking term and term equivalent symmetries. An inherent problem in symmetry-breaking is that a propositional theory may have an exponential number of symmetries, so breaking them individually would increase the size of the theory exponentially. This is typically handled by breaking only a portion of the symmetries. We show that term equivalent symmetries provide a compact representation of exponentially many symmetries, and an exponentially large subset of these can be broken by a small (linear) number of SBPs. We demonstrate these ideas on two relational domains and compare our approach to other methods for MAP inference in Markov Logic.

2 Background

Symmetry Breaking for SAT. Symmetry-breaking for satisfiability testing, introduced by Crawford et al. [7], is based upon concepts from group theory. A permutation θ is a mapping from a set L to itself.
A permutation group is a set of permutations that is closed under composition and contains the identity and a unique inverse for every element. A literal is an atom or its negation. A clause is a disjunction over literals. A CNF theory T is a set (conjunction) of clauses. Let L be the set of literals of T. We consider only permutations that respect negation, that is θ(¬l) = ¬θ(l) (l ∈ L). The action of a permutation on a theory, written θ(T), is the CNF formula created by applying θ to each literal in T. We say θ is a symmetry of T if it results in the same theory, i.e. θ(T) = T. A model M is a truth assignment to the atoms of a theory. The action of θ on M, written θ(M), is the model where θ(M)(P) = M(θ(P)). The key property of θ being a symmetry of T is that M |= T iff θ(M) |= T. The orbit of a model M under a symmetry group Θ is the set of models that can be obtained by applying any of the symmetries in Θ. A symmetry group divides the space of models into disjoint sets, where the models in an orbit either all satisfy or all do not satisfy the theory. The idea of symmetry-breaking is to add clauses to T that rule out many of the models, but are guaranteed not to rule out at least one model in each orbit. Note that symmetry-breaking preserves satisfiability of a theory. Symmetries can be found in CNF theories using a reduction to graph isomorphism, a problem that is thought to require super-polynomial time in the worst case, but which can often be efficiently solved in practice [18]. The added clauses are called symmetry-breaking predicates (SBPs). If we place a fixed order on the atoms of the theory, then a model can be associated with a binary number, where the i-th digit, 0 or 1, specifies the value of the i-th atom, false or true. Lex-leader SBPs rule out models that are not the lexicographically-smallest members of their orbits. The formulation below is equivalent to the lex-leader SBP given by Crawford et al.
in [7]:

$$\mathrm{SBP}(\theta) = \bigwedge_{1 \le i \le n} \left[ \left( \bigwedge_{1 \le j < i} v_j \Leftrightarrow \theta(v_j) \right) \Rightarrow \big( v_i \Rightarrow \theta(v_i) \big) \right] \qquad (1)$$

where v_i is the i-th variable in the ordering of n variables, and θ is a symmetry over variables. Even though graph isomorphism is relatively fast in practice, a theory may have exponentially many symmetries. Therefore, breaking all the symmetries is often impractical, though partial symmetry-breaking is still useful. It is possible to devise new SBPs that can break exponentially more symmetries than the standard form described above; we do so in Section 5.2.

Relational Theories. We define a relational theory as a tuple T = (F, W, E), where F is a set of first-order formulas, W a mapping of predicates and negated predicates to strictly positive real numbers (weights), and E is a set of evidence. We restrict the formulas in F to be built from predicates, variables, quantifiers, and logical connectives, but no constants or function symbols. E is a set of ground literals; that is, literals built from predicates and constant symbols. The predicate arguments and constant symbols are typed. Universal and existential quantification is over the set of the theory's constants D (i.e. the constants that appear in its evidence). Any constants not appearing explicitly in the evidence can be incorporated by introducing a unary predicate for each constant type and adding the groundings for those constants to the evidence. Any formula containing a constant can be made constant-free by introducing a new unary predicate for each constant, and then including that predicate applied to that constant in the evidence. A ground theory can be seen as a special case of a relational theory where each predicate is argument free. We define the weight of a positive ground literal P(C1, . . . , Ck) of a theory as W(P), and the weight of a negative ground literal ¬P(C1, . . . , Ck) as W(¬P). In other words, all positive groundings of a literal have the same weight, as do all negative groundings.
The weight of a model M with respect to a theory (F, W, E) is 0 if M fails to satisfy any part of F or E; otherwise, it is the product of the weights of the ground atoms that are true in M. Maximum weighted model-finding is the task of finding a model of maximum weight with respect to T. A relational theory can be taken to define a probability distribution over the set of models, where the probability of a model is proportional to its weight. Maximum weighted model-finding thus computes MAP (most probable explanation) for a given theory. Ordinary satisfiability corresponds to the case where W simply sets the weights of all literals to 1. Languages such as Markov Logic [12] use an alternative representation and specify real-valued weights on formulas rather than positive weights on predicates and their negations. The MAP problem can be formulated as the weighted-MaxSAT problem, i.e. finding a model maximizing the sum of the weights of satisfied clauses. This can be translated to our notation by introducing a new predicate for each original formula, whose arguments are the free variables in the original formula. F asserts that the predicate is equivalent to the original formula, and W asserts that the weight of the new predicate is e raised to the weight of the original formula. Solving weighted MaxSAT in the alternate representation is thus identical to solving maximum weighted model-finding in the translated theory. For the rest of the discussion in this paper, we will assume that the theory is specified with weights on predicates (and their negations).

3 Related Work

Our work has connections to research in both the machine learning and constraint-satisfaction research communities. Most research in statistical-relational machine learning has concentrated on modifying or creating novel probabilistic inference algorithms to exploit symmetries, as opposed to symmetry-breaking's solver-independent approach.
Developments include lifted versions of variable elimination [27, 8], message passing [29, 30, 23], and DPLL [14]. Our approach of defining symmetries using group theory and detecting them by graph isomorphism is shared by Bui et al.'s work on lifted variational inference [5] and Apsel et al.'s work on cluster signatures [2]. Bui notes that symmetry groups can be defined on the basis of unobserved constants in the domain, while we have developed methods to explicitly find symmetries among constants that do appear in the evidence. Niepert also gives a group-theoretic formalism of symmetries in relational theories [24, 25], applying them to MCMC methods. Another line of work is to make use of problem transformations. First-order knowledge compilation [10, 11] transforms a relational problem into a form for which MAP and marginal inference is tractable. This is a much more extensive and computationally complex transformation than symmetry-breaking. Mladenov et al. [22] propose an approach to translate a linear program over a relational domain into an equivalent lifted linear program based on message passing computations, which can then be solved using any off-the-shelf LP solver. Recent work on MAP inference in Markov Logic has identified special cases where a relational formula can be transformed by replacing a quantified formula with a single grounding of the formula [21]. Relatively little work in SRL has explicitly examined the role of evidence, separate from the first-order part of a theory, on symmetries. One exception is [31], which presents a heuristic method for approximating an evidence set in order to increase the number of symmetries it induces. Bui et al. [4] consider the case of a theory plus evidence pair, where the theory has symmetries that are broken by the evidence. They show that if the evidence is soft and consists only of unary predicates, then lifting based on the theory followed by incorporation of evidence enables polynomial time inference.
Extending the results of [4], Van den Broeck and Darwiche [9] show that dealing with binary evidence is NP-hard in general but can be done efficiently if there is a corresponding low-rank Boolean matrix factorization. They also propose approximation schemes based on this idea. We briefly touch upon the extensive literature that has grown around the use of symmetries in constraint satisfaction. Symmetry detection has been based either on graph isomorphism on propositional theories as in the original work by Crawford et al. [7]; by interchangeability of rows and/or columns in CSPs specified in matrix form [20]; by checking for other special cases of geometric symmetries [28]; or by determining that domain elements for a variable are exchangeable [3]. (The last is a special case of our term equivalent symmetries.) Researchers have suggested symmetry-aware modifications to backtracking CSP solvers for variable selection, branch pruning, and nogood learning [20, 13]. A recent survey of symmetry breaking for CSP [32] described alternatives to the lex-leader formulation of SBPs, including one based on Gray codes.

4 Symmetries in Relational Theories

In this section, we will formally introduce the notion of symmetries over relational theories and give efficient algorithms to find them. Symmetries of a relational theory can be defined in terms of symmetries over the corresponding ground theory.

Definition 4.1. Let T denote a relational theory. Let T^G denote the theory obtained by grounding the formulas in T. Let L denote the set of (ground) literals in T^G. We say that a permutation θ of the set L is a symmetry of the relational theory T if θ maps the ground theory T^G back to itself, i.e. θ(T^G) = T^G. We denote the action of θ on the original theory as θ(T) = T.

A straightforward way to find symmetries over a relational theory T is to first map it to the corresponding ground theory T^G and then find symmetries over it using reduction to graph isomorphism.
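Definition 4.1 can be verified directly for a candidate permutation. The sketch below is our own illustration (the clause representation and function name are hypothetical, not from the paper): it applies θ to every clause of a ground CNF theory and tests whether the clause set is reproduced.

```python
def is_theory_symmetry(clauses, theta):
    """Check Definition 4.1 directly: a permutation `theta` over ground
    literals (given as a dict, assumed to respect negation) is a symmetry
    of the ground theory iff applying it to every clause reproduces the
    same set of clauses."""
    ground = {frozenset(c) for c in clauses}
    permuted = {frozenset(theta[lit] for lit in c) for c in ground}
    return permuted == ground
```

This check costs time linear in the size of the grounding per candidate, which is exactly why the paper avoids enumerating candidates at the ground level and instead works on the evidence.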
The complexity of finding symmetries in this way is the same as that of graph isomorphism, which is believed to be worst-case super-polynomial. Further, the number of symmetries found is potentially exponential in the number of ground literals. This is particularly significant for relational theories since the number of ground literals itself is exponential in the highest predicate arity. Computing symmetries at the ground level can therefore be prohibitively expensive for theories with high predicate arity and many constants. In our work below, we exploit the underlying template structure of the relational theory to directly generate symmetries based on the evidence.

4.1 Term Symmetries

We introduce the notion of symmetries defined over terms (constants) appearing in a theory T, called term symmetries.

Definition 4.2. Let T be a relational theory. Let D be the set of constants appearing in the theory. Then, a permutation θ over the term set D is said to be a term symmetry with respect to evidence E if application of θ on the terms appearing in E, denoted by θ(E), maps E back to itself. We will also refer to θ as an evidence symmetry for the set E.

The problem of finding term symmetries can be reduced to colored graph isomorphism. We construct a graph G as follows: for each predicate P and its negation, G has a node, and a unique color is assigned to every such node. G also has a unique color for each type in the domain. There is a node for every term, which takes the color of its type. We call an ordered list of terms an argument list, e.g., given the literal P(C1, C2), where C1, C2 ∈ D, the argument list is (C1, C2). The type of an argument list is simply the cross-product of the types of the terms appearing in it. G has a node for every argument list appearing in the evidence, which takes the color of its type. For every evidence literal, there is an edge between the predicate node (or its negation) and the corresponding argument list node.
There is also an edge between the argument list node and each of the terms appearing in the list. Thus, for the previous example, an edge will be placed between the node for P and the node for (C1, C2), as well as an edge each between (C1, C2) and the nodes for C1 and C2. Any automorphism of G will map (negated) predicate nodes to themselves, and terms will be mapped in a manner that preserves their association with the corresponding predicate node in the evidence. Hence, automorphisms of G will correspond to term symmetries in evidence E. Next, we will establish a relationship between permutations of terms in the evidence and permutations of literals in the ground theory.

Definition 4.3. Let T be a relational theory. Let E be the evidence set and let D be the set of terms appearing in E. Given a permutation θ of the terms in the set D, we associate a corresponding permutation θ_T over the ground literals of the form P(C1, C2, . . . , Ck) in T, such that θ_T(P(C1, C2, . . . , Ck)) = P(θ(C1), θ(C2), . . . , θ(Ck)) (and similarly for negated literals).

We can now associate a symmetry θ over the terms with a symmetry of the theory T. The following lemma is proven in the supplementary material:

Lemma 4.1. Let T be a relational theory. Let E denote the evidence set. Let D be the set of terms appearing in E. If θ is a term symmetry for the evidence E, then the associated theory permutation θ_T is also a symmetry of T.

In order to find the term symmetries, we can resort to solving a graph isomorphism problem of size O(|E|), where |E| is the number of literals in the evidence. Directly finding symmetries over the ground literals requires solving a problem of size O(|D|^k), |D| being the number of constants and k the highest predicate arity. In the worst case where everything is fully observed, |E| = O(|D|^k), but in practice it is much smaller.
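The colored-graph construction above can be sketched as follows. This is our own illustrative encoding (the evidence representation as (sign, predicate, argument-tuple) triples and all names are assumptions); the final automorphism computation would be delegated to a tool such as nauty or saucy, which is not shown.

```python
def build_evidence_graph(evidence, term_types):
    """Build the colored graph of Section 4.1 from evidence literals.

    `evidence` is a set of (sign, pred, args) triples, e.g.
    (True, 'P', ('C1', 'C2')); `term_types` maps each term to its type.
    Returns (colors, edges): a color per node, and undirected edges.
    Automorphisms of this graph correspond to term symmetries of E."""
    colors, edges = {}, set()
    for sign, pred, args in evidence:
        pnode = ('pred', sign, pred)
        colors[pnode] = ('pred', sign, pred)   # unique color per (negated) predicate
        anode = ('args', args)
        colors[anode] = ('argtype', tuple(term_types[t] for t in args))
        edges.add((pnode, anode))              # edge for the evidence literal
        for t in args:                         # argument-list-to-term edges
            tnode = ('term', t)
            colors[tnode] = ('type', term_types[t])
            edges.add((anode, tnode))
    return colors, edges
```

For the literal P(C1, C2) this yields exactly the four nodes and three edges described in the text.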
Next, we present an important subclass of term symmetries, called term equivalent symmetries, which capture a wide subset of all the symmetries present in the theory, and can be efficiently detected and broken.

4.2 Term Equivalent Symmetries

A term equivalent symmetry is a set of term symmetries that divides the set of terms into equivalence classes such that any permutation which maps terms within the same equivalence class is a symmetry of the evidence set. Let Z = {Z1, Z2, . . . , Zm} denote a partition of the term set D into m disjoint subsets. Given a partition Z, we say that two terms are term equivalent (with respect to Z) if they occur in the same component of Z. We define a partition preserving permutation as follows.

Definition 4.4. Given a set of terms D and its disjoint partition Z, we say that a permutation θ of the terms in D is a partition preserving permutation of D with respect to the partition Z if ∀Cj, Ck ∈ D, θ(Cj) = Ck implies that ∃Zi ∈ Z such that Cj, Ck ∈ Zi.

In other words, θ is partition preserving if it permutes terms within the same component of Z. The set of all partition preserving permutations (with respect to a partition Z) forms a group. We will denote this group by ΘZ. It is easy to see that ΘZ divides the set of terms into equivalence classes. Next, we define the notion of term equivalent symmetries.

Definition 4.5. Let T be a relational theory and E denote the evidence set. Let D be the set of terms in E and Z be a disjoint partition of terms in D. Then, given the partition preserving permutation group ΘZ, we say that ΘZ is a term equivalent symmetry group of D if ∀θ ∈ ΘZ, θ is a symmetry of E. We will refer to each symmetry θ ∈ ΘZ as a term equivalent symmetry of E. A partition Z of term set D is a term equivalent partition if the partition preserving group ΘZ is a symmetry group of D. We refer to each partition element Zi as a term equivalent subset.
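Definition 4.4 is simple to check for a concrete permutation. A minimal sketch (our own helper, with the permutation given as a dict; not code from the paper):

```python
def is_partition_preserving(theta, partition):
    """Check Definition 4.4: `theta` (a dict term -> term) must map
    every term to a term in the same component of the partition Z."""
    component = {t: i for i, zi in enumerate(partition) for t in zi}
    return all(component[a] == component[b] for a, b in theta.items())
```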
The term equivalent symmetry group can be thought of as a set of symmetry subgroups ΘZi, one for each term subset Zi, such that Zi allows for all possible permutations of terms within the set Zi and defines an identity mapping for terms in other subsets. Note that the size of the term equivalent symmetry group is given by $\prod_{i=1}^{m} |Z_i|!$. Despite its large size, it can be very efficiently represented by simply storing the partition Z over the term set D. Note that this formulation works equally well for typed as well as untyped theories; in a typed theory no two terms of differing types can appear in the same subset. Next, we will look at an efficient algorithm for finding a partition Z which corresponds to a term equivalent symmetry group over D. Let the evidence be given by E = {l1, l2, . . . , lk}, where each li is a ground literal. Intuitively, two terms are term equivalent if they occur in exactly the same context in the evidence. For example, if the evidence for constant A is {P1(A, X), P2(A, Y, A)}, then the context for term A is P1(∗, X), P2(∗, Y, ∗). Note that here the positions where A occurred in the evidence have been marked by a ∗. Any other term sharing the same context would be term equivalent to A. To find the set of all the equivalent terms, we first compute the context for each term. Then, we sort each context based on some lexicographic order defined over predicate symbols and term symbols. Once the context has been sorted, we can simply hash the context for each term and put those which have the same context in the same equivalence class. If the evidence size is given by |E| = M and the number of terms in the evidence is n, then the above procedure will take O(nM log M) time. The M log M factor is for sorting the context for a single term. Hashing the sorted context takes constant time. This is done for each term, hence the factor of n. See the supplement for more details and an example.
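The context-hashing procedure just described can be sketched as follows. This is a minimal reimplementation under our own evidence encoding of (sign, predicate, argument-tuple) triples; the authors' actual implementation may differ in representation and tie-breaking order.

```python
from collections import defaultdict

def term_equivalent_partition(evidence):
    """Partition terms into term equivalent classes by hashing each
    term's sorted context: two terms land in the same class iff they
    occur in exactly the same evidence contexts, with their own
    occurrence positions masked by '*'."""
    contexts = defaultdict(list)
    for sign, pred, args in evidence:
        for term in set(args):
            masked = tuple('*' if a == term else a for a in args)
            contexts[term].append((sign, pred, masked))
    classes = defaultdict(set)
    for term, ctx in contexts.items():
        classes[tuple(sorted(ctx))].add(term)  # sorted context is the hash key
    return list(classes.values())
```

On the running example with a second constant B mirroring A's evidence, A and B fall into one class while X and Y stay separate, since their contexts mention the distinct constants A and B.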
5 Breaking Symmetries Over Terms

In this section, we provide the SBP formulation for term as well as term equivalent symmetries.

5.1 Breaking Term Symmetries

Consider a relational theory T that has term symmetries {θ1, . . . , θk}. Fix an ordering P1, . . . , Pr over the predicates, an ordering over the predicate positions, and an ordering C1, . . . , C|D| over the terms. If T is typed, fix an ordering over types and an ordering over the terms within each type. This induces a straightforward ordering over the ground atoms of the theory G1, . . . , Gn. Let θ be a term symmetry. Consider the following SBP (based on Equation 1) to break θ:

$$\mathrm{SBP}(\theta) = \bigwedge_{1 \le i \le n} \left[ \left( \bigwedge_{1 \le j < i} G_j \Leftrightarrow \theta(G_j) \right) \Rightarrow \big( G_i \Rightarrow \theta(G_i) \big) \right]$$

Theorem 5.1. If T is weighted, the max model for T ∧ SBP(θ) is a max model for T, and it has the same weight in both theories.

The proof of this theorem follows from [7]. Essentially, an ordering is placed on the atoms of the theory, which induces an ordering on the models. The SBP constraints ensure that if a set of models are in the same orbit, i.e. symmetric under θ, then only the first of those models in the ordering is admitted.

5.2 Breaking Term Equivalent Symmetries

Let Z = {Z1, . . . , Z|Z|} be the term equivalent partitioning over the terms. Let θ^i_{j,k} be the term symmetry that swaps Cj and Ck, the j-th and k-th constants in an ordering over the term equivalent subset Zi, and maps everything else to identity. We next show how to break exponentially many symmetries that respect the term equivalent partition Z, using a linear-sized SBP. Consider the following SBP (CSBP stands for Composite SBP):

$$\mathrm{CSBP}(Z) = \bigwedge_{i=1}^{|Z|} \; \bigwedge_{j=1}^{|Z_i|-1} \mathrm{SBP}(\theta^i_{j,j+1})$$

For each term (equivalent) symmetry of the form θ^i_{j,j+1}, we apply the SBP formulation from Section 5.1 to break it. The formulation cleverly chooses which symmetries to explicitly break, so that exponentially many symmetries are broken. The inner conjunction iterates over all of the terms in a term equivalent class.
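The symmetries selected by CSBP, and the effect of the Section 5.1 constraints, can be illustrated with a small sketch (our own encoding, with hypothetical names): `adjacent_swaps` enumerates the transpositions θ^i_{j,j+1}, and `satisfies_sbp` checks the lex-leader constraints for one such swap against a candidate model.

```python
def adjacent_swaps(partition):
    """Enumerate the symmetries explicitly broken by CSBP(Z): for each
    term equivalent subset, the swaps of adjacent terms under a fixed
    ordering (all other terms mapped to identity)."""
    swaps = []
    for subset in partition:
        ordered = sorted(subset)  # any fixed ordering works
        swaps += [(ordered[j], ordered[j + 1]) for j in range(len(ordered) - 1)]
    return swaps

def satisfies_sbp(model, atoms, swap):
    """Check the lex-leader SBP of Section 5.1 for one term
    transposition. `atoms` is the fixed ordering of ground atoms, each
    a (predicate, argument-tuple) pair; `model` maps atoms to 0/1."""
    a, b = swap
    rename = {a: b, b: a}
    def perm(atom):
        pred, args = atom
        return (pred, tuple(rename.get(t, t) for t in args))
    for atom in atoms:
        if model[atom] and not model[perm(atom)]:
            return False          # equality guard held so far, G_i -> theta(G_i) violated
        if model[atom] != model[perm(atom)]:
            break                 # all later constraints are vacuously satisfied
    return True
```

For a single predicate over two equivalent terms, exactly one model of each two-element orbit survives the check, matching the one-model-per-orbit guarantee.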
An ordering is placed on the terms, and the only symmetries that are explicitly broken are those that swap two adjacent terms and map everything else to identity. All of the adjacent pairs of terms in the ordering are broken in this way, with a linear number of calls to SBP. As we show below, this excludes $\Omega(2^{|Z_i|})$ models from the search space, while preserving at least one model in each orbit. The outer conjunction iterates over the different term equivalent subsets, breaking each one of them in turn. The following theorem states this in formal terms (see the supplement for a proof):

Theorem 5.2. CSBP(Z) removes $\Omega\big(\sum_{i=1}^{|Z|} 2^{|Z_i|}\big)$ models from the search space while preserving at least one model in each orbit induced by the term equivalent symmetry group ΘZ.

Corollary 5.1. If T is weighted, the max model for T ∧ CSBP(Z) is a max model for T, and it has the same weight in both theories.

6 Experiments

We pitted our term and term equivalent SBP formulations against other inference systems to demonstrate their efficacy. For each domain we tested on, we compared the following algorithms: (1) Vanilla: running an exact MaxSAT solver on the grounded instance; (2) Shatter: running an exact MaxSAT solver on the grounded instance with SBPs added by the Shatter tool [1]; (3) Term: running an exact MaxSAT solver on the grounded instance with SBPs added by our term symmetry detection algorithm; (4) Tequiv: running an exact MaxSAT solver on the grounded instance with SBPs added by our term equivalent symmetry detection algorithm; (5) RockIt: running RockIt [26] on the MLN; (6) PTP: running the PTP algorithm [14] for marginal inference on the MLN; and (7) MWS: running the approximate MaxWalkSAT algorithm used by Alchemy-2 on the grounded instance.
These algorithms were chosen so that our methods would be compared against a variety of other techniques: ground techniques including regular MaxSAT (Vanilla), propositional symmetry-breaking (Shatter), and local search (MWS), and lifted techniques including cutting planes (RockIt) and lifted rules (PTP). It should be noted that all the algorithms except MWS and RockIt are exact. RockIt solves an LP relaxation, but it always gave us an exact solution (whenever it could run). Therefore, we report the solution quality only for MWS. All the experiments were run on a system with a 2.2GHz Xeon processor and 62GB of memory. The algorithms for Term and Tequiv were implemented as a pre-processing step before running a MaxSAT solver. The MWS runs were allowed a varying number of flips and up to five restarts. For MWS, the reported results are with equal probability (0.5) of making a random or a greedy move; other settings were tried, and the results were similar. Both PTP and RockIt were run using the default settings of their parameters. We experimented on two relational domains: the classic pigeonhole problem (PHP) with two different variants, and an Advisor domain (details below). (The publicly available implementation of PTP supports only marginal inference; though marginal inference is a somewhat harder problem than MAP, PTP failed to run even on the smallest of instances. Alchemy-2 is available at https://code.google.com/p/alchemy-2/.) For ground instances that required an exact MaxSAT solver, we experimented with three different solvers: MiniMaxSAT [15], Open-WBO [19], and Sat4j [17]. This was done because different solvers and heuristics tend to work better on different application domains. We found that for PHP (variant 1) Sat4j worked best, for PHP (variant 2) Open-WBO was the best, and for Advisor MiniMaxSAT worked the best. We used the best MaxSAT solver for each domain.

PHP: In the PHP problem, we are given n pigeons and n−1 holes.
The goal is to fit each pigeon in a unique hole (the problem is unsatisfiable). We tried two variants of this problem. In the first variant, there are hard constraints that ensure that every hole has at most one pigeon and every pigeon is in at most one hole. There is a soft constraint that states that every pigeon is in every hole. The goal is to find a max-model of the problem. The comparison with the various algorithms is given in Table 1a. Vanilla times out except for the smallest instance. PTP times out on all instances. Our algorithms consistently outperform RockIt but are not able to outperform Shatter (except for smaller instances, where they are marginally better). MWS was run on the largest instance (n=60) and results are shown in Table 1d. MWS is unable to find the optimal solution after 10 million flips. In the second PHP variant, there are hard constraints that ensure that every pigeon is in precisely one hole. There is a soft constraint that each hole has at most one pigeon. The comparison with the various algorithms is given in Table 1b. Vanilla times out except for the two smallest instances, and PTP times out for all the instances. Our systems consistently outperform RockIt, and outperform Shatter for the larger instances. Tequiv outperforms Term because the detection step is much faster. MWS (Table 1d) is able to find the optimal within 100,000 flips and is significantly faster than all other algorithms. Though MWS does better on this variant of PHP, it fails badly on the first one. Further, there is no way for MWS to know if it has reached the optimal, unlike our (exact) approaches.

Advisor Domain: The advisor-advisee domain has three types: student, prof, and research area. There are predicates to indicate student/prof interest in an area and an advisor relationship between students and profs. The theory has hard constraints that ensure that a) each student has precisely one advisor and b) students and their advisors share an interest.
It also has a small negatively-weighted (soft) constraint saying that a prof advises two different students. The interests of all students and profs are fully observed. Students have one interest; profs have one or more (chosen randomly). There is at least one prof interested in each area. The results for this domain are given in Table 1c. Vanilla is able to run only on the two smallest instances. PTP times out on every instance as before. Our algorithms outperform Shatter but are outperformed by RockIt. MWS was run on the two larger instances (see Table 1c) and it is able to outperform our system. In order to make sure that the poor performance of PTP is not due to evidence, we modified the PHP formulation to work with a no-evidence formulation. This did not help PTP either. Based on these results, there is no clear winner among the algorithms which performs the best on all of these domains. Among all the algorithms, Term, Tequiv and Shatter are the only ones which could run to completion on all the instances that we tried. Among these, our algorithms won on PHP variant 2 (large instances) and the Advisor domain. Tequiv outperformed Term on PHP variant 2 (large instances), and they performed similarly in the Advisor domain. In the PHP variants, all of the pigeons are in one term equivalent symmetry group, and all of the holes are in another. The PHP is of special interest, because pigeonhole structure was shown to be present in hard random SAT formulas with very high probability [6]. In the Advisor domain, there is a term equivalent symmetry group of students for each area, and a term equivalent symmetry group of professors for each set of areas.

7 Conclusion and Future Work

In this work, we have provided the theoretical foundation for using symmetry-breaking techniques from satisfiability testing for MAP inference in weighted relational theories.
We have presented the class of term symmetries, and a subclass of term equivalent symmetries, both of which are found in the evidence given with the theory. We have presented ways to detect and break both these classes. For the class of term equivalent symmetries, the detection method runs in low-polynomial time, and the number of SBPs required to break the symmetries is linear in the domain size. We have implemented the algorithms presented, and compared them against other systems on a set of relational domains. Future work includes carefully characterizing the cases where our algorithms perform better than other systems, experimenting on additional real-world domains, comparing with more lifted inference approaches, and the application of similar techniques to the problem of marginal inference.

Table 1: Experiments with Lifted Symmetries for MAP Inference

(a) Soft Pigeon Hole Domain, Variant 1

#Pigeon | Vanilla | Shatter     | Tequiv      | Term        | RockIt | PTP
5       | 0.246   | 0.01 + 0.39 | 0.01 + 0.24 | 0.03 + 0.26 | 1.15   | TO
10      | TO      | 0.02 + 0.88 | 0.04 + 0.74 | 0.09 + 0.68 | 1.47   | TO
15      | TO      | 0.08 + 1.85 | 0.13 + 1.37 | 0.21 + 1.27 | 2.46   | TO
20      | TO      | 0.22 + 3.1  | 0.36 + 3.9  | 0.51 + 3.7  | 2.70   | TO
30      | TO      | 1.6 + 12.7  | 1.4 + 20    | 2.2 + 14    | 3.88   | TO
40      | TO      | 7 + 72      | 4.06 + 93.3 | 6.30 + 88.3 | F      | TO
60      | TO      | 54 + 889    | 18 + 1128   | 28 + 1128   | F      | TO

(b) Soft Pigeon Hole Domain, Variant 2

#Pigeon | Vanilla | Shatter       | Tequiv       | Term         | RockIt | PTP
5       | 0.002   | 0.003 + 0.001 | 1.28 + 0.002 | 0.04 + 0.001 | 1.1    | TO
10      | 37.44   | 0.01 + 0.006  | 1.19 + 0.004 | 0.07 + 0.005 | 6.6    | TO
15      | TO      | 0.06 + 0.01   | 1.29 + 0.01  | 0.19 + 0.01  | TO     | TO
40      | TO      | 3.7 + 0.59    | 4.9 + 0.17   | 6.2 + 0.21   | F      | TO
80      | TO      | 115 + 8.36    | 50 + 1.7     | 86 + 2.9     | F      | TO
125     | TO      | 1090 + 20.98  | 269 + 10     | 486 + 18     | F      | TO

(c) Advisor Domain

#Prof–#Student–#Area | Vanilla | Shatter     | Tequiv      | Term        | RockIt | PTP
4–12–2               | 2.3     | 0.01 + 0.10 | 0.02 + 0.01 | 0.11 + 0.01 | 1.25   | TO
4–16–2               | 168     | 0.01 + 0.17 | 0.03 + 0.05 | 0.08 + 0.04 | 2.06   | TO
6–18–3               | TO      | 0.02 + 0.90 | 0.05 + 0.84 | 0.09 + 0.80 | 6.56   | TO
6–24–3               | TO      | 0.03 + 105  | 0.07 + 45   | 0.15 + 47   | 1.47   | TO
8–24–4               | TO      | 0.07 + 62   | 0.10 + 56   | 0.16 + 55   | 13.4   | TO

(d) MaxWalkSAT Results
Problem  Size    Optimal  100,000 iter.  1,000,000 iter.  10,000,000 iter.
                          best   time    best   time      best   time
pigeon1  60      3481     3496   4.4     3494   39        3491   383
pigeon2  125     1        1      3.8     —      —         —      —
advisor  6-18-3  2094     2094   0.4     —      —         —      —
advisor  8-24-4  3552     3552   0.5     —      —         —      —

All times are given in seconds. "TO" indicates the program timed out (30 min). "F" indicates the program failed without timing out (instance too large). The Vanilla column is the time to find the max model of the ground theory. The first three subtables give results for exact algorithms. The Shatter, Term, and Tequiv columns are the same with propositional, term, and term equivalent symmetries broken, respectively. These columns are given as x + y, where x is the time to detect and break the symmetries, and y is the time to find the max model. The RockIt column is the time it takes RockIt to find the max model. The lifted PTP algorithm was run on every instance, and timed out on each one. The last subtable gives the results for the approximate algorithm MaxWalkSAT, giving the time it takes to perform a certain number of iterations, as well as the comparison between the best model found on that run and the optimal model.

References
[1] Fadi A. Aloul, Igor L. Markov, and Karem A. Sakallah. Shatter: Efficient symmetry-breaking for Boolean satisfiability. In Proc. of DAC, pages 836–839, 2003.
[2] Udi Apsel, Kristian Kersting, and Martin Mladenov. Lifting relational MAP-LPs using cluster signatures. In Proc. of AAAI, pages 2403–2409, 2014.
[3] Gilles Audemard, Belaid Benhamou, and Laurent Henocque. Predicting and detecting symmetries in FOL finite model search. J. Automated Reasoning, 36(3):177–212, 2006.
[4] Hung B. Bui, Tuyen N. Huynh, and Rodrigo de Salvo Braz. Exact lifted inference with distinct soft evidence on every object. In Proc. of AAAI, 2012.
[5] Hung Hai Bui, Tuyen N. Huynh, and Sebastian Riedel. Automorphism groups of graphical models and lifted variational inference. In Proc. of UAI, pages 132–141, 2013.
[6] Vasek Chvátal and Endre Szemerédi. Many hard examples for resolution. J. ACM, 35(4):759–768, 1988.
[7] James Crawford, Matthew Ginsberg, Eugene Luks, and Amitabha Roy. Symmetry-breaking predicates for search problems. In Proc. of KR, pages 148–159, 1996.
[8] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proc. of IJCAI, pages 1319–1325, 2005.
[9] Guy Van den Broeck and Adnan Darwiche. On the complexity and approximation of binary evidence in lifted inference. In Proc. of NIPS, pages 2868–2876, 2013.
[10] Guy Van den Broeck and Jesse Davis. Conditioning in first-order knowledge compilation and lifted probabilistic inference. In Proc. of AAAI, 2012.
[11] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proc. of IJCAI, pages 2178–2185, 2011.
[12] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[13] Pierre Flener, Justin Pearson, Meinolf Sellmann, Pascal Van Hentenryck, and Magnus Agren. Dynamic structural symmetry breaking for constraint satisfaction problems. Constraints, 14(4):506–538, 2009.
[14] Vibhav Gogate and Pedro M. Domingos. Probabilistic theorem proving. In Proc. of UAI, pages 256–265, 2011.
[15] Federico Heras, Javier Larrosa, and Albert Oliveras. MiniMaxSAT: An efficient weighted Max-SAT solver. J. Artificial Intelligence Research, 31:1–32, 2008.
[16] Hadi Katebi, Karem A. Sakallah, and Igor L. Markov. Symmetry and satisfiability: An update. In Theory and Applications of Satisfiability Testing, volume 6175, pages 113–127, 2010.
[17] Daniel Le Berre and Anne Parrain. The Sat4j library, release 2.2. JSAT, 7:59–64, 2010.
[18] E. M. Luks. Isomorphism of graphs of bounded valence can be tested in polynomial time. J.
of Computer System Science, 25:42–65, 1982.
[19] Ruben Martins, Vasco Manquinho, and Inês Lynce. Open-WBO: A modular MaxSAT solver. In Theory and Applications of Satisfiability Testing, volume 8561 of Lecture Notes in Computer Science, 2014.
[20] Pedro Meseguer and Carme Torras. Exploiting symmetries within constraint satisfaction search. Artificial Intelligence, 129(1-2):133–163, 2001.
[21] Happy Mittal, Prasoon Goyal, Vibhav G. Gogate, and Parag Singla. New rules for domain independent lifted MAP inference. In Proc. of NIPS, pages 649–657, 2014.
[22] Martin Mladenov, Babak Ahmadi, and Kristian Kersting. Lifted linear programming. In Proc. of AISTATS, pages 788–797, 2012.
[23] Martin Mladenov, Amir Globerson, and Kristian Kersting. Lifted message passing as reparametrization of graphical models. In Proc. of UAI, pages 603–612, 2014.
[24] Mathias Niepert. Markov chains on orbits of permutation groups. In Proc. of UAI, pages 624–633, 2012.
[25] Mathias Niepert. Symmetry-aware marginal density estimation. In Proc. of AAAI, 2013.
[26] Jan Noessner, Mathias Niepert, and Heiner Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models. In Proc. of AAAI, 2013.
[27] David Poole. First-order probabilistic inference. In Proc. of IJCAI, pages 985–991, 2003.
[28] Meinolf Sellmann and Pascal Van Hentenryck. Structural symmetry breaking. In Proc. of IJCAI, pages 298–303, 2005.
[29] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In Proc. of AAAI, pages 1094–1099, 2008.
[30] Parag Singla, Aniruddh Nath, and Pedro M. Domingos. Approximate lifting techniques for belief propagation. In Proc. of AAAI, pages 2497–2504, 2014.
[31] Deepak Venugopal and Vibhav Gogate. Evidence-based clustering for scalable inference in Markov logic. In Proc. of ECML/PKDD, pages 258–273, 2014.
[32] Toby Walsh. Symmetry breaking constraints: Recent results. In Proc. of AAAI, 2012.
Private Graphon Estimation for Sparse Graphs∗ Christian Borgs Jennifer T. Chayes Microsoft Research New England Cambridge, MA, USA. {cborgs,jchayes}@microsoft.com Adam Smith Pennsylvania State University University Park, PA, USA. asmith@psu.edu Abstract We design algorithms for fitting a high-dimensional statistical model to a large, sparse network without revealing sensitive information of individual members. Given a sparse input graph G, our algorithms output a node-differentially private nonparametric block model approximation. By node-differentially private, we mean that our output hides the insertion or removal of a vertex and all its adjacent edges. If G is an instance of the network obtained from a generative nonparametric model defined in terms of a graphon W, our model guarantees consistency: as the number of vertices tends to infinity, the output of our algorithm converges to W in an appropriate version of the L2 norm. In particular, this means we can estimate the sizes of all multi-way cuts in G. Our results hold as long as W is bounded, the average degree of G grows at least like the log of the number of vertices, and the number of blocks goes to infinity at an appropriate rate. We give explicit error bounds in terms of the parameters of the model; in several settings, our bounds improve on or match known nonprivate results. 1 Introduction Differential Privacy. Social and communication networks have been the subject of intense study over the last few years. However, while these networks comprise a rich source of information for science, they also contain highly sensitive private information. What kinds of information can we release about these networks while preserving the privacy of their users? Simple measures, such as removing obvious identifiers, do not work; for example, several studies reidentified individuals in the graph of a social network even after all vertex and edge attributes were removed. 
Such attacks highlight the need for statistical and learning algorithms that provide rigorous privacy guarantees. Differential privacy [17] provides meaningful guarantees in the presence of arbitrary side information. In the context of traditional statistical data sets, differential privacy is now well-developed. By contrast, differential privacy in the context of graph data is much less developed. There are two main variants of graph differential privacy: edge and node differential privacy. Intuitively, edge differential privacy ensures that an algorithm’s output does not reveal the inclusion or removal of a particular edge in the graph, while node differential privacy hides the inclusion or removal of a node together with all its adjacent edges. Edge privacy is a weaker notion (hence easier to achieve) and has been studied more extensively. Several authors designed edge-differentially private algorithms for fitting generative graph models (e.g. [24]; see the full version for further references), but these do not appear to generalize to node privacy with meaningful accuracy guarantees. The stronger notion, node privacy, corresponds more closely to what was achieved in the case of traditional data sets, and to what one would want in order to protect an individual’s data: it ensures that no matter what an analyst observing the released information knows ahead of time, she learns the same things about an individual Alice regardless of whether Alice’s data are used or not. (∗ A full version of this extended abstract is available at http://arxiv.org/abs/1506.06162.) In particular, no assumptions are needed on the way the individuals’ data are generated (they need not even be independent). Node privacy was studied more recently [21, 14, 6, 26], with a focus on the release of descriptive statistics (such as the number of triangles in a graph). Unfortunately, differential privacy’s stringency makes the design of accurate, private algorithms challenging.
In this work, we provide the first algorithms for node-private inference of a high-dimensional statistical model that does not admit simple sufficient statistics. Modeling Large Graphs via Graphons. Traditionally, large graphs have been modeled using various parametric models, one of the most popular being the stochastic block model [20]. Here one postulates that an observed graph was generated by first assigning vertices at random to one of k groups, and then connecting two vertices with a probability that depends on their assigned groups. As the number of vertices of the graph in question grows, we do not expect the graph to be well described by a stochastic block model with a fixed number of blocks. In this paper we consider nonparametric models (where the number of parameters need not be fixed or even finite) given in terms of a graphon. A graphon is a measurable, bounded function W : [0, 1]² → [0, ∞) such that W(x, y) = W(y, x), which for convenience we take to be normalized: ∫W = 1. Given a graphon, we generate a graph on n vertices by first assigning i.i.d. uniform labels in [0, 1] to the vertices, and then connecting vertices with labels x, y with probability ρnW(x, y), where ρn is a parameter determining the density of the generated graph Gn, with ρn∥W∥∞ ≤ 1. We call Gn a W-random graph with target density ρn (or simply a ρnW-random graph). To our knowledge, random graph models of the above form were first introduced under the name latent position graphs [19], and are special cases of a more general model of “inhomogeneous random graphs” defined in [7], which is the first place where n-dependent target densities ρn were considered. For both dense graphs (whose target density does not depend on the number of vertices) and sparse graphs (those for which ρn → 0 as n → ∞), this model is related to the theory of convergent graph sequences, [8, 23, 9, 10] and [11, 12], respectively. Estimation and Identifiability.
Assuming that Gn is generated in this way, we are then faced with the task of estimating W from a single observation of a graph Gn. To our knowledge, this task was first explicitly considered in [4], which considered graphons describing stochastic block models with a fixed number of blocks. This was generalized to models with a growing number of blocks [27, 15], while the first estimation of the nonparametric model was proposed in [5]. Most of the literature on estimating the nonparametric model makes additional assumptions on the function W, the most common one being that after a measure-preserving transformation, the integral of W over one variable is a strictly monotone function of the other, corresponding to an asymptotically strictly monotone degree distribution of Gn. (This assumption is quite restrictive: in particular, such results do not apply to graphons that represent block models.) For our purposes, the most relevant works are Wolfe and Olhede [28], Gao et al. [18], Chatterjee [13] and Abbe and Sandon [2] (as well as recent work done concurrently with this research [22]), which provide consistent estimators without monotonicity assumptions (see “Comparison to Previous Nonprivate Bounds” below). One issue that makes estimation of graphons challenging is identifiability: multiple graphons can lead to the same distribution on Gn. Specifically, two graphons W and W̃ lead to the same distribution on W-random graphs if and only if there are measure-preserving maps φ, φ̃ : [0, 1] → [0, 1] such that W^φ = W̃^φ̃, where W^φ is defined by W^φ(x, y) = W(φ(x), φ(y)) [16]. Hence, there is no “canonical graphon” that an estimation procedure can output, but rather an equivalence class of graphons. Some of the literature circumvents identifiability by making strong additional assumptions, such as strict monotonicity, that imply the existence of canonical equivalence class representatives.
We make no such assumptions, but instead define consistency in terms of a metric on these equivalence classes, rather than on graphons as functions. We use a variant of the L2 metric,

    δ2(W, W′) = inf_{φ:[0,1]→[0,1]} ∥W^φ − W′∥2, where φ ranges over measure-preserving bijections.   (1)

Our Contributions. In this paper we construct an algorithm that produces an estimate Ŵ from a single instance Gn of a W-random graph with target density ρn (or simply ρ, when n is clear from the context). We aim for several properties: 1. Ŵ is differentially private; 2. Ŵ is consistent, in the sense that δ2(W, Ŵ) → 0 in probability as n → ∞; 3. Ŵ has a compact representation (in our case, as a matrix with o(n) entries); 4. The procedure works for sparse graphs, that is, when the density ρ is small; 5. On input Gn, Ŵ can be calculated efficiently. Here we give an estimation procedure that obeys the first four properties, leaving the question of polynomial-time algorithms for future work. Given an input graph Gn, a privacy parameter ϵ and a target number k of blocks, our algorithm A produces a k-block graphon Ŵ = A(Gn) such that • A is ϵ-differentially node-private. The privacy guarantee holds for all inputs, independent of modeling assumptions. • If (1) W is an arbitrary graphon, normalized so ∫W = 1, (2) the expected average degree (n−1)ρ grows at least as fast as log n, and (3) k goes to infinity sufficiently slowly with n, then, when Gn is ρW-random, the estimate Ŵ for W is consistent (that is, δ2(Ŵ, W) → 0, both in probability and almost surely). • We give a nonprivate variant of A that converges assuming only ω(1) average degree. Combined with the general theory of convergent graph sequences, these results in particular give a node-private procedure for estimating the edge density of all cuts in a ρW-random graph; see Section 2.2 below.
The main idea of our algorithm is to use the exponential mechanism of [25] to select a block model which approximately minimizes the ℓ2 distance to the observed adjacency matrix of G, under the best possible assignment of nodes to blocks (this explicit search over assignments makes the algorithm take exponential time). In order to get an algorithm that is accurate on sparse graphs, we need several nontrivial extensions of current techniques. To achieve privacy, we use a new variation of the Lipschitz extension technique of [21, 14] to reduce the sensitivity of the δ2 distance. While those works used Lipschitz extensions for noise addition, we use Lipschitz extensions inside the “exponential mechanism” [25] (to control the sensitivity of the score functions). To bound our algorithm’s error, we provide a new analysis of the ℓ2-minimization algorithm; we show that approximate minimizers are not too far from the actual minimizer (a “stability” property). Both aspects of our work are enabled by restricting the ℓ2-minimization to a set of block models whose density (in fact, L∞ norm) is not much larger than that of the underlying graph. The algorithm is presented in Section 3. Our most general result proves consistency for arbitrary graphons W but does not provide a concrete rate of convergence. However, we provide explicit rates under various assumptions on W. Specifically, we relate the error of our estimator to two natural error terms involving the graphon W: the error ϵ(O)_k(W) of the best k-block approximation to W in the L2 norm (see (4) below) and an error term ϵn(W) measuring the L2-distance between the graphon W and the matrix of probabilities Hn(W) generating the graph Gn (see (5) below). In terms of these error terms, Theorem 1 shows

    δ2(W, Ŵ) ≤ ϵ(O)_k(W) + 2ϵn(W) + O_P( (log k/(ρn))^{1/4} + (k² log n/(nϵ))^{1/2} + 1/(ρϵn) ),   (2)

provided the average degree ρn grows at least like log n.
Along the way, we provide a novel analysis of a straightforward, nonprivate least-squares estimator that does not require an assumption on the average degree, and leads to an error bound with a better dependence on k:

    δ2(W, Ŵ_nonprivate) ≤ ϵ(O)_k(W) + 2ϵn(W) + O_P( (log k/(ρn) + k²/(ρn²))^{1/4} ).   (3)

It follows from the theory of graph convergence that for all graphons W, we have ϵ(O)_k(W) → 0 as k → ∞ and ϵn(W) → 0 almost surely as n → ∞. By selecting k appropriately, the nonprivate algorithm converges for any bounded graphon as long as ρn → ∞ with n; the private algorithm converges whenever ρn ≥ 6 log n (e.g., for constant ϵ). As proven in the full version, we also have ϵn(W) = O_P(ϵ(O)_k(W) + (k/n)^{1/4}), though this upper bound is loose in many cases. As a specific instantiation of these bounds, let us consider the case that W is exactly described by a k-block model, in which case ϵ(O)_k(W) = 0 and ϵn(W) = O_P((k/n)^{1/4}) (see the full version for a proof). For k ≤ (n/log² n)^{1/3}, ρ ≥ log(k)/k and constant ϵ, our private estimator has an asymptotic error that is dominated by the (unavoidable) error of ϵn(W) = (k/n)^{1/4}, showing that we do not lose anything due to privacy in this special case. Another special case is when W is α-Hölder continuous, in which case ϵ(O)_k(W) = O(k^{−α}) and ϵn(W) = O_P(n^{−α/2}); see Remark 2 below. Comparison to Previous Nonprivate Bounds. We provide the first consistency bounds for estimation of a nonparametric graph model subject to node differential privacy. Along the way, for sparse graphs, we provide more general consistency results than were previously known, regardless of privacy. In particular, to the best of our knowledge, no prior results give a consistent estimator for W that works for sparse graphs without any additional assumptions besides boundedness. When compared to results for nonprivate algorithms applied to graphons obeying additional assumptions, our bounds are often incomparable, and in other cases match the existing bounds.
We start by considering graphons which are themselves step functions with a known number of steps k. In the dense case, the nonprivate algorithms of [18] and [13], as well as our nonprivate algorithm, give an asymptotic error that is dominated by the term ϵn(W) = O((k/n)^{1/4}), which is of the same order as our private estimator as long as k = õ(n^{1/3}). [28] provided the first convergence results for estimating graphons in the sparse regime. Assuming that W is bounded above and below (so it takes values in a range [λ1, λ2] where λ1 > 0), they analyze an inefficient algorithm (the MLE). The bounds of [28] are incomparable to ours, though for the case of k-block graphons, both their bounds and our nonprivate bound are dominated by the term (k/n)^{1/4} when ρ > (log k)/k and k ≤ ρn. A different sequence of works shows how to consistently estimate the underlying block model with a fixed number of blocks k in polynomial time for very sparse graphs (as for our nonprivate algorithm, the only thing needed is that nρ → ∞) [3, 1, 2]; we are not aware of concrete bounds on the convergence rate. For the case of dense α-Hölder-continuous graphons, the results of [18] give an error which is dominated by the term ϵn(W) = O_P(n^{−α/2}). For α < 1/2, our nonprivate bound matches this bound, while for α > 1/2 it is worse. [28] considers the sparse case. The rate of their estimator is incomparable to that of ours; further, their analysis requires a lower bound on the edge probabilities, while ours does not. Very recently, after our paper was submitted, both the bounds of [28] and our nonprivate bound (3) were substantially improved [22], leading to an error bound where the 4th root in (3) is replaced by a square root (at the cost of an extra constant multiplying the oracle error). See the full version for a more detailed discussion of the previous literature.

2 Preliminaries

2.1 Notation

For a graph G on [n] = {1, . . . , n}, we use E(G) and A(G) to denote the edge set and the adjacency matrix of G, respectively. The edge density ρ(G) is defined as the number of edges divided by (n choose 2). Finally, the degree di of a vertex i in G is the number of edges containing i. We use the same notation for a weighted graph with nonnegative edge weights βij, where now ρ(G) = (2/(n(n−1))) Σ_{i<j} βij and di = Σ_{j≠i} βij. We use Gn to denote the set of weighted graphs on n vertices with weights in [0, 1], and Gn,d to denote the set of all graphs in Gn that have maximal degree at most d. From Matrices to Graphons. We define a graphon to be a bounded, measurable function W : [0, 1]² → R+ such that W(x, y) = W(y, x) for all x, y ∈ [0, 1]. It will be convenient to embed the set of symmetric n × n matrices with nonnegative entries into graphons as follows: let Pn = (I1, . . . , In) be the partition of [0, 1] into adjacent intervals of length 1/n. Define W[A] to be the step function which equals Aij on Ii × Ij. If A is the adjacency matrix of an unweighted graph G, we use W[G] for W[A]. Distances. For p ∈ [1, ∞) we define the Lp norm of an n × n matrix A and a (Borel-)measurable function f : [0, 1]² → R by ∥A∥p = ((1/n²) Σ_{i,j} |Aij|^p)^{1/p} and ∥f∥p = (∫ |f(x, y)|^p dx dy)^{1/p}, respectively. Associated with the L2 norm is a scalar product, defined as ⟨A, B⟩ = (1/n²) Σ_{i,j} Aij Bij for two n × n matrices A and B, and ⟨U, W⟩ = ∫ U(x, y)W(x, y) dx dy for two square-integrable functions U, W : [0, 1]² → R. Note that with this notation, the edge density and the L1 norm are related by ∥G∥1 = ((n−1)/n) ρ(G). Recalling (1), we define the δ2 distance between two matrices A, B, or between a matrix A and a graphon W, by δ2(A, B) = δ2(W[A], W[B]) and δ2(A, W) = δ2(W[A], W). In addition, we will also use the in general larger distances δ̂2(A, B) and δ̂2(A, W), defined by taking a minimum over matrices A′ which are obtained from A by a relabelling of the indices: δ̂2(A, B) = min_{A′} ∥A′ − B∥2 and δ̂2(A, W) = min_{A′} ∥W[A′] − W∥2.
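To make the relabeling distance concrete, the following is an illustrative sketch (not from the paper) that brute-forces δ̂2(A, B) over all index relabelings, using the normalized L2 norm defined above. It is only feasible for tiny n, since the number of relabelings grows as n!.

```python
import itertools
import numpy as np

def l2_norm(M):
    # Normalized L2 norm from Section 2.1: ((1/n^2) * sum_ij |M_ij|^2)^(1/2).
    n = M.shape[0]
    return np.sqrt((M ** 2).sum() / n ** 2)

def hat_delta2(A, B):
    # Brute-force delta2-hat(A, B) = min over relabelings A' of ||A' - B||_2.
    # Exponential in n; for illustration only, not the paper's algorithm.
    n = A.shape[0]
    return min(
        l2_norm(A[np.ix_(p, p)] - B)
        for p in itertools.permutations(range(n))
    )
```

For example, the adjacency matrices of two isomorphic graphs are at δ̂2 distance zero, even when their entrywise L2 distance is positive.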
2.2 W-random graphs, graph convergence and multi-way cuts

W-random graphs and stochastic block models. Given a graphon W, we define a random n × n matrix Hn = Hn(W) by choosing n “positions” x1, . . . , xn i.i.d. uniformly at random from [0, 1] and then setting (Hn)ij = W(xi, xj). If ∥W∥∞ ≤ 1, then Hn(W) has entries in [0, 1], and we can form a random graph Gn = Gn(W) on n vertices by choosing an edge between two vertices i < j with probability (Hn)ij, independently for all i < j. Following [23] we call Gn(W) a W-random graph and Hn(W) a W-weighted random graph. We incorporate a target density ρn (or simply ρ, when n is clear from the context) by normalizing W so that ∫W = 1 and taking G to be a sample from Gn(ρW). In other words, we set Q = Hn(ρW) = ρHn(W) and then connect i to j with probability Qij, independently for all i < j. Stochastic block models are specific examples of W-random graphs in which W is constant on sets of the form Ii × Ij, where (I1, . . . , Ik) is a partition of [0, 1] into intervals of possibly different lengths. On the other hand, an arbitrary graphon W can be well approximated by a block model. Indeed, let

    ϵ(O)_k(W) = min_B ∥W − W[B]∥2,   (4)

where the minimum runs over all k × k matrices B. By a straightforward argument (see, e.g., [11]), ϵ(O)_k(W) = ∥W − W_{Pk}∥2 → 0 as k → ∞. We will take this approximation as a benchmark for our approach, and consider it the error an “oracle” could obtain (hence the superscript O). Another key term in our algorithm’s error guarantee is the distance between Hn(W) and W,

    ϵn(W) = δ̂2(Hn(W), W).   (5)

It goes to zero as n → ∞ by the following lemma, which follows easily from the results of [11]. Lemma 1. Let W be a graphon with ∥W∥∞ < ∞. With probability one, ∥Hn(W)∥1 → ∥W∥1 and ϵn(W) → 0. Convergence. Given a sequence of W-random graphs with target densities ρn, one might wonder whether the graphs Gn = Gn(ρnW) converge to W in a suitable metric.
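The two-stage sampling just described (i.i.d. positions, then independent edges with probabilities Qij = ρW(xi, xj)) can be sketched directly. The code below is an illustrative sketch; the block-model graphon `W_block` used as an example is ours, not from the paper.

```python
import numpy as np

def sample_graph(W, n, rho, rng):
    # Stage 1: i.i.d. uniform positions x_1..x_n; (H_n)_ij = W(x_i, x_j).
    x = rng.uniform(size=n)
    Q = rho * W(x[:, None], x[None, :])   # Q = rho * H_n(W); requires rho*||W||_inf <= 1
    # Stage 2: include each edge {i, j}, i < j, independently with probability Q_ij.
    U = rng.uniform(size=(n, n))
    A = np.triu((U < Q).astype(int), k=1)
    return A + A.T                        # symmetric 0/1 adjacency with zero diagonal

# Example: a normalized 2-block graphon (a stochastic block model):
# value 1.5 within a block, 0.5 across blocks, so its integral is 1.
W_block = lambda x, y: np.where((x < 0.5) == (y < 0.5), 1.5, 0.5)
```

With target density ρ = 0.4 we have ρ∥W_block∥∞ = 0.6 ≤ 1, so the edge probabilities are valid.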
The answer is yes, and involves the so-called cut-metric δ□, first introduced in [9]. Its definition is identical to the definition (1) of the metric δ2, except that instead of the L2 norm ∥·∥2, it involves the Frieze-Kannan cut-norm ∥W∥□, defined as the supremum of ∫_{S×T} W over all measurable sets S, T ⊆ [0, 1]. In the metric δ□, the W-random graphs Gn = Gn(ρW) then converge to W in the sense that δ□((1/ρ(Gn)) W[Gn], W) → 0; see [11] for the proof. Estimation of Multi-Way Cuts. Using the results of [12], the convergence of Gn in the cut-metric δ□ implies many interesting results for estimating various quantities defined on the graph Gn. Indeed, a consistent approximation Ŵ to W in the metric δ2 is clearly consistent in the weaker metric δ□. This distance, in turn, controls various quantities of interest to computer scientists, e.g., the size of all multi-way cuts, implying that a consistent estimator for W also gives consistent estimators for all multi-way cuts. See the full version for details.

2.3 Differential Privacy for Graphs

The goal of this paper is the development of a differentially private algorithm for graphon estimation. The privacy guarantees are formulated for worst-case inputs: we do not assume that G is generated from a graphon when analyzing privacy. This ensures that the guarantee remains meaningful no matter what an analyst knows ahead of time about G. In this paper, we consider node privacy. We call two graphs G and G′ node neighbors if one can be obtained from the other by removing one node and its adjacent edges. Definition 1 (ϵ-node-privacy). A randomized algorithm A is ϵ-node-private if for all events S in the output space of A and all node neighbors G, G′, Pr[A(G) ∈ S] ≤ exp(ϵ) · Pr[A(G′) ∈ S]. We also need the notion of the node-sensitivity of a function f : Gn → R, defined as the maximum of |f(G) − f(G′)| over node neighbors G, G′. The node sensitivity is the Lipschitz constant of f, viewed as a map between appropriate metrics.
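A standard building block for node-private release of a low-sensitivity statistic is the Laplace mechanism, which Algorithm 1 below uses for its density estimate ρ̂. The sketch is illustrative and assumes the caller supplies a valid node-sensitivity bound; the claim that the edge density has node-sensitivity on the order of 2/n (so that spending ϵ/2 of the budget yields Lap(4/(nϵ)) noise, as in Algorithm 1) is our reading of the algorithm, not a statement proved here.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, eps, rng):
    # Releasing value + Lap(sensitivity/eps) is eps-differentially private
    # for any statistic whose (node-)sensitivity is at most `sensitivity`.
    return value + rng.laplace(loc=0.0, scale=sensitivity / eps)
```

For instance, `laplace_mechanism(rho, 2.0 / n, eps / 2, rng)` adds Laplace noise of scale 4/(nϵ) to the edge density, matching the first step of Algorithm 1.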
3 Differentially Private Graphon Estimation

3.1 Least-squares Estimation

Given a graph generated by an unknown graphon W as input, our goal is to recover a block-model approximation to W. The basic nonprivate algorithm we emulate is least-squares estimation, which outputs the k × k matrix B that is closest to the input adjacency matrix A in the distance δ̂2(B, A) = min_π ∥Bπ − A∥2, where the minimum runs over all equipartitions π of [n] into k classes, i.e., over all maps π : [n] → [k] such that all classes have size as close to n/k as possible, i.e., such that ||π⁻¹(i)| − n/k| < 1 for all i, and Bπ is the n × n block matrix with entries (Bπ)xy = B_{π(x)π(y)}. If A is the adjacency matrix of a graph G, we write δ̂2(B, G) instead of δ̂2(B, A). In the above notation, the basic algorithm we would want to emulate is then the algorithm which outputs the least-squares fit B̂ = argmin_B δ̂2(B, G), where the argmin runs over all symmetric k × k matrices B.

3.2 Towards a Private Algorithm

Our algorithm uses a carefully chosen instantiation of the exponential mechanism of McSherry and Talwar [25]. The most direct application of their framework would be to output a random k × k matrix B̂ according to the probability distribution Pr(B̂ = B) ∝ exp(−C δ̂2²(B, A)), for some C > 0. The resulting algorithm is ϵ-differentially private if we set C to be ϵ over twice the node-sensitivity of the “score function”, here δ̂2²(B, ·). But this value of C turns out to be too small to produce an output that is a good approximation to the least-squares estimator. Indeed, for a given matrix B and equipartition π, the node-sensitivity of ∥G − Bπ∥2² can be as large as 1/n, leading to a value of C which is too small to produce useful results for sparse graphs. To address this, we first note that we can work with an equivalent score that is much less sensitive.
Given B and π, we subtract off the squared norm of G to obtain the following:

    score(B, π; G) = ∥G∥2² − ∥G − Bπ∥2² = 2⟨G, Bπ⟩ − ∥Bπ∥2²,   (6)
    score(B; G) = max_π score(B, π; G),   (7)

where the max ranges over equipartitions π : [n] → [k]. For a fixed input graph G, maximizing the score is the same as minimizing the distance, i.e. argmin_B δ̂2(B, G) = argmax_B score(B; G). The sensitivity of the new score is then bounded by (2/n²)·∥B∥∞ times the maximum degree in G (since G only affects the score via the inner product ⟨G, Bπ⟩). But this is still problematic since, a priori, we have no control over either the size of ∥B∥∞ or the maximal degree of G. To keep the sensitivity low, we make two modifications: first, we only optimize over matrices B whose entries are bounded by (roughly) ρn (since a good estimator will have entries which are not much larger than ∥ρnW∥∞, which is of order ρn); second, we restrict the score to be accurate only on graphs whose maximum degree is at most a constant times the average degree, since this is what one expects for graphs generated from a bounded graphon. While the first restriction can be directly enforced by the algorithm, the second is more delicate, since we need to provide privacy for all inputs, including graphs with very large maximum degree. We employ an idea from [6, 21]: we first consider the restriction of score(B, π; ·) to Gn,dn, where dn will be chosen to be of the order of the average degree of G, and then extend it back to all graphs while keeping the sensitivity low.

3.3 Private Estimation Algorithm

Our final algorithm takes as input the privacy parameter ϵ, the graph G, a number k of blocks, and a constant λ ≥ 1 that will have to be chosen large enough to guarantee consistency of the algorithm.

Algorithm 1: Private Estimation Algorithm
Input: ϵ > 0, λ ≥ 1, an integer k and a graph G on n vertices.
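Equations (6)-(7) and the exponential-mechanism sampling over scores can be sketched as follows. This is an illustrative toy, not the paper's implementation: it brute-forces the max over equipartitions (feasible only for tiny n), omits the Lipschitz-extension and private-density steps of Algorithm 1, and the parameter `eps_over_4delta` is a stand-in for the factor ϵ/(4Δ).

```python
import itertools
import numpy as np

def score(B, pi, G):
    # score(B, pi; G) = 2<G, B_pi> - ||B_pi||_2^2   (equation (6)),
    # with the normalized inner product and L2 norm of Section 2.1.
    Bpi = B[np.ix_(pi, pi)]                  # lift k x k block matrix B to n x n
    n2 = G.shape[0] ** 2
    return (2 * (G * Bpi).sum() - (Bpi ** 2).sum()) / n2

def max_score(B, G, k):
    # score(B; G) = max over equipartitions pi of score(B, pi; G)  (equation (7)).
    n = G.shape[0]
    base = [i % k for i in range(n)]         # one balanced labeling of [n] into [k]
    pis = set(itertools.permutations(base))  # all equipartitions, deduplicated
    return max(score(B, list(pi), G) for pi in pis)

def sample_exp_mechanism(candidates, G, k, eps_over_4delta, rng):
    # Toy version of Algorithm 1's sampling step:
    # Pr(B_hat = B) proportional to exp(eps/(4*Delta) * score(B; G)).
    logits = np.array([eps_over_4delta * max_score(B, G, k) for B in candidates])
    p = np.exp(logits - logits.max())        # subtract max for numerical stability
    p /= p.sum()
    return candidates[rng.choice(len(candidates), p=p)]
```

One can check numerically that the two expressions in (6) agree, i.e. that ∥G∥2² − ∥G − Bπ∥2² equals 2⟨G, Bπ⟩ − ∥Bπ∥2², and that as ϵ/(4Δ) grows the sampler concentrates on the highest-scoring candidate.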
Output: a k × k block graphon (represented as a k × k matrix B̂) estimating ρW.
1. Compute an (ϵ/2)-node-private density approximation ρ̂ = ρ(G) + Lap(4/(nϵ)).
2. Set d = λρ̂n (the target maximum degree) and µ = λρ̂ (the target L∞ norm for B̂).
3. For each B and π, let ŝcore(B, π; ·) denote a nondecreasing Lipschitz extension (from [21]) of score(B, π; ·) from Gn,d to Gn such that ŝcore(B, π; A) ≤ score(B, π; A) for all matrices A, and define ŝcore(B; A) = max_π ŝcore(B, π; A).
4. Return B̂, sampled from the distribution Pr(B̂ = B) ∝ exp( (ϵ/(4∆)) ŝcore(B; A) ), where ∆ = 4dµ/n² = 4λ²ρ̂²/n and B ranges over the set Bµ = {B ∈ [0, µ]^{k×k} : all entries Bij are multiples of 1/n}.

Our main results about the private algorithm are the following lemma and theorem.

Lemma 2. Algorithm 1 is ϵ-node-private.

Theorem 1 (Performance of the Private Algorithm). Let W : [0, 1]² → [0, Λ] be a normalized graphon, let 0 < ρΛ ≤ 1, let G = Gn(ρW), λ ≥ 1, and let k be an integer. Assume that ρn ≥ 6 log n, 8Λ ≤ λ ≤ √n, and 2 ≤ k ≤ min{n√ρ/2, e^{ρn/2}}. Then Algorithm 1 outputs an approximation (ρ̂, B̂) such that

    δ2(W, (1/ρ̂) W[B̂]) ≤ ϵ(O)_k(W) + 2ϵn(W) + O_P( (λ² log k/(ρn))^{1/4} + λ (k² log n/(nϵ))^{1/2} + λ/(nρϵ) ).

Remark 1. While Theorem 1 is stated in terms of bounds which hold in probability, our proofs yield statements which hold almost surely as n → ∞.

Remark 2. Under additional assumptions on the graphon W, we obtain tighter bounds. For example, if we assume that W is Hölder continuous, i.e., there exist constants α ∈ (0, 1] and C < ∞ such that |W(x, y) − W(x′, y′)| ≤ Cδ^α whenever |x − x′| + |y − y′| ≤ δ, then we have that ϵ(O)_k(W) = O(k^{−α}) and ϵn(W) = O_P(n^{−α/2}).

Remark 3. When considering the “best” block model approximation to W, one might want to consider block models with unequal block sizes; in a similar way, one might want to construct a private algorithm that outputs a block model with unequal size blocks, and produces a bound in terms of this best block model approximation instead of ϵ(O)_k(W).
This can be proved with our methods, with the minimal block size taking the role of 1/k in all our statements. 3.4 Non-Private Estimation Algorithm We also analyze a simple, non-private algorithm, which outputs the argmin of ˆδ2(·, A) over all k × k matrices whose entries are bounded by λρ(G). (Independently of our work, this non-private algorithm was also proposed and analysed in [22].) Our bound (3) refers to this restricted least square algorithm, and does not require any assumptions on the average degree. As in (2), we suppress the dependence of the error on λ. To include it, one has to multiply the OP term in (3) by √ λ. 4 Analysis of the Private and Non-Private Algorithm At a high level, our proof of Theorem 1 (as well as our new bounds on non-private estimation) follow from the fact that for all B and π, the expected score E[Score(B, π; G)] is equal to the score Score(B, π; Q), combined with a concentration argument. As a consequence, the maximizer ˆB of Score(B; G) will approximately minimize the L2-distance ˆδ2(B, Q), which in turn will approximately minimize ∥1 ρW[B]−W∥2, thus relating the L2-error of our estimator ˆB to the “oracle error” ϵ(O) k (W) defined in (4). Our main concentration statement is captured in the following proposition. To state it, we define, for every symmetric n × n matrix Q with vanishing diagonal, Bern0(Q) to be the distribution over symmetric matrices A with zero diagonal such that the entries {Aij : i < j} are independent Bernouilli random variables with EAij = Qij. Proposition 1. Let µ > 0, Q ∈[0, 1]n×n be a symmetric matrix with vanishing diagonal, and A ∼Bern0(Q). If 2 ≤k ≤min{n p ρ(Q), eρ(Q)n} and ˆB ∈Bµ is such that Score( ˆB; A) ≥max B∈Bµ Score(B; A) −ν2 for some ν > 0, then with probability at least 1 −2e−n, ˆδ2( ˆB, Q) ≤min B∈Bµ ˆδ2(B, Q) + ν + O 4 s µ2ρ(Q) k2 n2 + log k n ! . 
(8) Morally, the proposition contains almost all that is needed to establish the bound (3) proving consistency of the non-private algorithm (which, in fact, only involves the case ν = 0), even though there are several additional steps needed to complete the proof. The proposition also contains an extra ingredient which is a crucial input for the analysis of the private algorithm: it states that if instead of an optimal, least square estimator, we output an estimator whose score is only approximately maximal, then the excess error introduced by the approximation is small. To apply the proposition, we then establish a lemma which gives us a lower bound on the score of the output ˆB in terms of the maximal score and an excess error ν. There are several steps needed to execute this strategy, the most important ones involving a rigorous control of the error introduced by the Lipschitz extension inside the exponential algorithm. We defer the details to the full version. Acknowledgments. A.S. was supported by NSF award IIS-1447700 and a Google Faculty Award. Part of this work was done while visiting Boston University’s Hariri Institute for Computation and Harvard University’s Center for Research on Computation and Society. 8 References [1] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. arXiv:1503.00609, 2015. [2] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. Manuscript, 2015. [3] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. arXiv:1405.3267, 2014. [4] P. J. Bickel and A. Chen. A nonparametric view of network models and newman-girvan and other modularities. Proceedings of the National Academy of Sciences of the United States of America, 106:21068– 21073, 2009. [5] P. J. Bickel, A. Chen, and E. Levina. The method of moments and degree distributions for network models. 
Annals of Statistics, 39(5):2280–2301, 2011. [6] J. Blocki, A. Blum, A. Datta, and O. Sheffet. Differentially private data analysis of social networks via restricted sensitivity. In Innovations in Theoretical Computer Science (ITCS), pages 87–96, 2013. [7] B. Bollobas, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Struct. Algorithms, 31:3–122, 2007. [8] C. Borgs, J. T. Chayes, L. Lov´asz, V. S´os, and K. Vesztergombi. Counting graph homomorphisms. In Topics in Discrete Mathematics (eds. M. Klazar, J. Kratochvil, M. Loebl, J. Matousek, R. Thomas, P.Valtr), pages 315–371. Springer, 2006. [9] C. Borgs, J. T. Chayes, L. Lov´asz, V. S´os, and K. Vesztergombi. Convergent graph sequences I: Subgraph frequencies, metric properties, and testing. Advances in Math., 219:1801–1851, 2008. [10] C. Borgs, J. T. Chayes, L. Lov´asz, V. S´os, and K. Vesztergombi. Convergent graph sequences II: Multiway cuts and statistical physics. Ann. of Math., 176:151–219, 2012. [11] C. Borgs, J. T. Chayes, H. Cohn, and Y. Zhao. An Lp theory of sparse graph convergence I: limits, sparse random graph models, and power law distributions. arXiv:1401.2906, 2014. [12] C. Borgs, J. T. Chayes, H. Cohn, and Y. Zhao. An Lp theory of sparse graph convergence II: LD convergence, quotients, and right convergence. arXiv:1408.0744, 2014. [13] S. Chatterjee. Matrix estimation by universal singular value thresholding. Annals of Statistics, 43(1): 177–214, 2015. [14] S. Chen and S. Zhou. Recursive mechanism: towards node differential privacy and unrestricted joins. In ACM SIGMOD International Conference on Management of Data, pages 653–664, 2013. [15] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, 99:273–284, 2012. [16] P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. Rendiconti di Matematica, 28: 33—61, 2008. [17] C. Dwork, F. McSherry, K. Nissim, and A. Smith. 
Calibrating noise to sensitivity in private data analysis. In S. Halevi and T. Rabin, editors, TCC, volume 3876, pages 265–284, 2006. [18] C. Gao, Y. Lu, and H. H. Zhou. Rate-optimal graphon estimation. arXiv:1410.5837, 2014. [19] P. D. Hoff, A. E. Raftery, and M. S. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002. [20] P. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Soc Netw, 5:109–137, 1983. [21] S. P. Kasiviswanathan, K. Nissim, S. Raskhodnikova, and A. Smith. Analyzing graphs with nodedifferential privacy. In Theory of Cryptography Conference (TCC), pages 457–476, 2013. [22] O. Klopp, A. Tsybakov, and N. Verzelen. Oracle inequalities for network models and sparse graphon estimation. arXiv:1507.04118, 2015. [23] L. Lov´asz and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96:933–957, 2006. [24] W. Lu and G. Miklau. Exponential random graph estimation under differential privacy. In 20th ACM SIGKDD International Conference on Knowledge discovery and data mining, pages 921–930, 2014. [25] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103. IEEE, 2007. [26] S. Raskhodnikova and A. Smith. High-dimensional Lipschitz extensions and node-private analysis of network data. arXiv:1504.07912, 2015. [27] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Ann. Statist., 39(4):1878–1915, 08 2011. [28] P. Wolfe and S. C. Olhede. Nonparametric graphon estimation. arXiv:1309.5936, 2013. 9
Online Learning with Adversarial Delays Kent Quanrud∗and Daniel Khashabi† Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 {quanrud2,khashab2}@illinois.edu Abstract We study the performance of standard online learning algorithms when the feedback is delayed by an adversary. We show that online-gradient-descent [1] and follow-the-perturbed-leader [2] achieve regret O( √ D) in the delayed setting, where D is the sum of delays of each round’s feedback. This bound collapses to an optimal O( √ T) bound in the usual setting of no delays (where D = T). Our main contribution is to show that standard algorithms for online learning already have simple regret bounds in the most general setting of delayed feedback, making adjustments to the analysis and not to the algorithms themselves. Our results help affirm and clarify the success of recent algorithms in optimization and machine learning that operate in a delayed feedback model. 1 Introduction Consider the following simple game. Let K be a bounded set, such as the unit ℓ1 ball or a collection of n experts. Each round t, we pick a point xt ∈K. An adversary then gives us a cost function ft, and we incur the loss ℓt = ft(xt). After T rounds, our total loss is the sum LT = PT t=1 ℓt, which we want to minimize. We cannot hope to beat the adversary, so to speak, when the adversary picks the cost function after we select our point. There is margin for optimism, however, if rather than evaluate our total loss in absolute terms, we compare our strategy to the best fixed point in hindsight. The regret of a strategy x1, . . . , xT ∈K is the additive difference R(T) = PT t=1 ft(xt) −arg minx∈K PT t=1 ft(x). Surprisingly, one can obtain positive results in terms of regret. 
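For concreteness, the regret of a fixed strategy against the best fixed expert in hindsight can be computed directly. The sketch below (a finite expert set with uniform random losses) is only an illustration, not part of the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 100, 3                       # rounds, number of experts
losses = rng.uniform(size=(T, n))   # loss of each expert at each round

# A fixed (deliberately naive) strategy: always play expert 0.
strategy_loss = losses[:, 0].sum()

# Best fixed expert in hindsight, and the regret of the strategy.
best_fixed_loss = losses.sum(axis=0).min()
regret = strategy_loss - best_fixed_loss
# regret >= 0 here, since "always expert 0" is itself one of the fixed strategies.
```

A good online algorithm keeps this quantity sublinear in T even when the losses are chosen adversarially rather than at random.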
Kalai and Vempala showed that a simple and randomized follow-the-leader type algorithm achieves R(T) = O( √ T) in expectation for linear cost functions [2] (here, the big-O notation assumes that the diameter of K and the ft’s are bounded by constants). If K is convex, then even if the cost vectors are more generally convex cost functions (where we incur losses of the form ℓt = ft(xt), with ft a convex function), Zinkevich showed that gradient descent achieves regret R(T) = O( √ T) [1]. There is a large body of theoretical literature about this setting, called online learning (see for example the surveys by Blum [3], Shalev-Shwartz [4], and Hazan [5]). Online learning is general enough to be applied to a diverse family of problems. For example, Kalai and Vempala’s algorithm can be applied to online combinatorial problems such as shortest paths [6], decision trees [7], and data structures [8, 2]. In addition to basic machine learning problems with convex loss functions, Zinkevich considers applications to industrial optimization, where the ∗http://illinois.edu/~quanrud2/. Supported in part by NSF grants CCF-1217462, CCF1319376, CCF-1421231, CCF-1526799. †http://illinois.edu/~khashab2/. Supported in part by a grant from Google. 1 value of goods is not known until after the goods are produced. Other examples of applications of online learning include universal portfolios in finance [9] and online topic-ranking for multi-labeled documents [10]. The standard setting assumes that the cost vector ft (or more generally, the feedback) is given to and processed by the player before making the next decision in round t + 1. Philosophically, this is not how decisions are made in real life: we rush through many different things at the same time with no pause for careful consideration, and we may not realize our mistakes for a while. Unsurprisingly, the assumption of immediate feedback is too restrictive for many real applications. 
In online advertising, online learning algorithms try to predict and serve ads that optimize for clicks [11]. The algorithm learns by observing whether or not an ad is clicked, but in production systems, a massive number of ads are served between the moment an ad is displayed to a user and the moment the user has decided to either click or ignore that ad. In military applications, online learning algorithms are used by radio jammers to identify efficient jamming strategies [12]. After a jammer attempts to disrupt a packet between a transmitter and a receiver, it does not know if the jamming attempt succeeded until an acknowledgement packet is sent by the receiver. In cloud computing, online learning helps devise efficient resource allocation strategies, such as finding the right mix of cheaper (and inconsistent) spot instances and more reliable (and expensive) on-demand instances when renting computers for batch jobs [13]. The learning algorithm does not know how well an allocation strategy worked for a batch job until the batch job has ended, by which time many more batch jobs have already been launched. In finance, online learning algorithms managing portfolios are subject to information and transaction delays from the market, and financial firms invest heavily to minimize these delays. One strategy to handle delayed feedback is to pool independent copies of a fixed learning algorithm, each of which acts as an undelayed learner over a subsequence of the rounds. Each round is delegated to a single instance from the pool of learners, and the learner is required to wait for and process its feedback before rejoining the pool. If there are no learners available, a new copy is instantiated and added to the pool. The size of the pool is proportional to the maximum number of outstanding delays at any point of decision, and the overall regret is bounded by the sum of regrets of the individual learners. 
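The pooling reduction just described might look like the following sketch. The `Learner` class (a toy follow-the-leader over a finite action set) and all names here are hypothetical stand-ins; the reductions of [14, 15] treat the base learner as a black box:

```python
class Learner:
    """Toy base learner: follow-the-leader on per-action average loss."""
    def __init__(self, n):
        self.totals = [0.0] * n
        self.counts = [0] * n

    def predict(self):
        avgs = [t / c if c else 0.0 for t, c in zip(self.totals, self.counts)]
        return min(range(len(avgs)), key=lambda i: avgs[i])

    def update(self, action, loss):
        self.totals[action] += loss
        self.counts[action] += 1


def pooled_play(losses, delays, n_actions):
    """losses[t][a]: loss of action a in round t; delays[t] = d_t >= 1."""
    T = len(losses)
    pool, busy, arrivals, picks = [], {}, {}, []
    for t in range(T):
        # Deliver feedback arriving at round t; those learners rejoin the pool.
        for s in arrivals.pop(t, []):
            learner, action = busy.pop(s)
            learner.update(action, losses[s][action])
            pool.append(learner)
        # Delegate round t to a free learner, instantiating one if needed.
        learner = pool.pop() if pool else Learner(n_actions)
        action = learner.predict()
        picks.append(action)
        busy[t] = (learner, action)
        arrivals.setdefault(t + delays[t], []).append(t)
    return picks
```

With unit delays a single learner handles every round, recovering the undelayed setting; longer delays force the pool to grow with the number of outstanding feedbacks.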
This approach is analyzed for constant delays by Weinberger and Ordentlich [14], and a more sophisticated analysis is given by Joulani et al. [15]. If α is the expected maximum number of outstanding feedbacks, then Joulani et al. obtain a regret bound on the order of O( √ αT) (in expectation) for the setting considered here. The blackbox nature of this approach begets simultaneous bounds for other settings such as partial information and stochastic rewards. Although maintaining copies of learners in proportion to the delay may be prohibitively resource intensive, Joulani et al. provide a more efficient variant for the stochastic bandit problem, a setting not considered here. Another line of research is dedicated to scaling gradient descent type algorithms to distributed settings, where asynchronous processors naturally introduce delays in the learning framework. A classic reference in this area is the book of Bertsekas and Tsitskilis [16]. If the data is very sparse, so that input instances and their gradients are somewhat orthogonal, then intuitively we can apply gradients out of order without significant interference across rounds. This idea is explored by Recht et al. [17], who analyze and test parallel algorithm on a restricted class of strongly convex loss functions, and by Duchi et al. [18] and McMahan and Streeter [19], who design and analyze distributed variants of adaptive gradient descent [20]. Perhaps the most closely related work in this area is by Langford et al., who study the online-gradient-descent algorithm of Zinkevich when the delays are bounded by a constant number of rounds [21]. Research in this area has largely moved on from the simplistic models considered here; see [22, 23, 24] for more recent developments. The impact of delayed feedback in learning algorithms is also explored by Riabko [25] under the framework of “weak teachers”. For the sake of concreteness, we establish the following notation for the delayed setting. 
For each round t, let dt ∈Z+ be a non-negative integer delay. The feedback from round t is delivered at the end of round t + dt −1, and can be used in round t + dt. In the standard setting with no delays, dt = 1 for all t. For each round t, let Ft = {u ∈[T] : u + du −1 = t} be the set of rounds whose feedback appears at the end of round t. We let D = PT t=1 dt denote the sum of all delays; in the standard setting with no delays, we have D = T. In this paper, we investigate the implications of delayed feedback when the delays are adversarial (i.e., arbitrary), with no assumptions or restrictions made on the adversary. Rather than design new 2 algorithms that may generate a more involved analysis, we study the performance of the classical algorithms online-gradient-descent and follow-the-perturbed-leader, essentially unmodified, when the feedback is delayed. In the delayed setting, we prove that both algorithms have a simple regret bound of O( √ D). These bounds collapse to match the well-known O( √ T) regret bounds if there are no delays (i.e., where D = T). Paper organization In Section 2, we analyze the online-gradient-descent algorithm in the delayed setting, giving upper bounds on the regret as a function of the sum of delays D. In Section 3, we analyze the follow-the-perturbed-leader in the delayed setting and derive a regret bound in terms of D. Due to space constraints, extensions to online-mirror-descent and follow-the-lazy-leader are deferred to the appendix. We conclude and propose future directions in Section 4. 2 Delayed gradient descent Convex optimization In online convex optimization, the input domain K is convex, and each cost function ft is convex. For this setting, Zinkevich proposed a simple online algorithm, called online-gradient-descent, designed as follows [1]. The first point, x1, is picked in K arbitrarily. 
After picking the tth point xt, online-gradient-descent computes the gradient ∇ft|xt of the loss function at xt, and chooses xt+1 = πK(xt −η∇ft|xt) in the subsequent round, for some parameter η ∈R>0. Here, πK is the projection that maps a point x′ to its nearest point in K (discussed further below). Zinkevich showed that, assuming the Euclidean diameter of K and the Euclidean lengths of all gradients ∇ft|x are bounded by constants, online-gradientdescent has an optimal regret bound of O( √ T). Delayed gradient descent In the delayed setting, the loss function ft is not necessarily given by the adversary before we pick the next point xt+1 (or even at all). The natural generalization of online-gradient-descent to this setting is to process the convex loss functions and apply their gradients the moment they are delivered. That is, we update x′ t+1 = xt −η X s∈Ft ∇fs|xs, for some fixed parameter η, and then project xt+1 = πK(x′ t+1) back into K to choose our (t + 1)th point. In the setting of Zinkevich, we have Ft = {t} for each t, and this algorithm is exactly online-gradient-descent. Note that a gradient ∇fs|xs does not need to be timestamped by the round s from which it originates, which is required by the pooling strategies of Weinberger and Ordentlich [14] and Joulani et al. [15] in order to return the feedback to the appropriate learner. Theorem 2.1. Let K be a convex set with diameter 1, let f1, . . . , fT be convex functions over K with ∥∇ft|x∥2 ≤L for all x ∈K and t ∈[T], and let η ∈R be a fixed parameter. In the presence of adversarial delays, online-gradient-descent selects points x1, . . . , xT ∈K such that for all y ∈K, T X t=1 ft(xt) − T X t=1 ft(y) = O 1 η + ηL2(T + D)  , where D denotes the sum of delays over all rounds t ∈[T]. For η = 1/L √ T + D, Theorem 2.1 implies a regret bound of O(L √ D + T) = O(L √ D). This choice of η requires prior knowledge of the final sum D. 
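The delayed update rule above (sum the delivered gradients, step, then project) can be sketched as follows. The Euclidean-ball feasible set, the linear losses, and all function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection pi_K onto a ball, as one concrete convex K."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def delayed_ogd(grads, delays, eta):
    """Apply each round's gradient the moment it is delivered.

    grads:  (T, n) array; grads[s] stands in for grad f_s at x_s
            (linear losses, so the gradient does not depend on the point)
    delays: delays[t] = d_t >= 1; feedback from round t is usable in round t + d_t
    """
    T, n = grads.shape
    x = np.zeros(n)                      # x_1, an arbitrary point of K
    arrivals, history = {}, []
    for t in range(T):
        delivered = arrivals.pop(t, [])  # the set F_{t-1} of landed feedbacks
        if delivered:
            # x_t = pi_K(x_{t-1} - eta * sum of delivered gradients)
            x = project_ball(x - eta * grads[delivered].sum(axis=0))
        history.append(x.copy())         # the point played in round t
        arrivals.setdefault(t + delays[t], []).append(t)
    return np.array(history)
```

With `delays` all equal to 1 this is exactly online-gradient-descent; larger delays simply postpone when each gradient enters the update.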
When this sum is not known, one can calculate D on the fly: if there are δ outstanding (undelivered) cost functions at a round t, then D increases by exactly δ. Obviously, δ ≤T and T ≤D, so D at most doubles. We can therefore employ the “doubling trick” of Auer et al. [26] to dynamically adjust η as D grows. In the undelayed setting analyzed by Zinkevich, we have D = T, and the regret bound of Theorem 2.1 matches that obtained by Zinkevich. If each delay dt is bounded by some fixed value τ, Theorem 2.1 implies a regret bound of O(L √ τT) that matches that of Langford et al. [21]. In both of these special cases, the regret bound is known to be tight. 3 Before proving Theorem 2.1, we review basic definitions and facts on convexity. A function f : K →R is convex if f((1 −α)x + αy) ≤(1 −α)f(x) + αf(y) ∀x, y ∈K, α ∈[0, 1]. If f is differentiable, then f is convex iff f(x) + ∇f|x · (y −x) ≤f(y) ∀x, y ∈K. (1) For f convex but not necessarily differentiable, a subgradient of f at x is any vector that can replace ∇f|x in equation (1). The (possible empty) set of gradients of f at x is denoted by ∂f(x). The gradient descent may occasionally update along a gradient that takes us out of the constrained domain K. If K is convex, then we can simply project the point back into K. Lemma 2.2. Let K be a closed convex set in a normed linear space X and x ∈X a point, and let x′ ∈K be the closest point in K to x. Then, for any point y ∈K, ∥x −y∥2 ≤∥x′ −y∥2. We let πK denote the map taking a point x to its closest point in the convex set K. Proof of Theorem 2.1. Let y = arg minx∈K(f1(x) + · · · + fT (x)) be the best point in hindsight at the end of all T rounds. For t ∈[T], by convexity of ft, we have, ft(y) ≥ft(xt) + ∇ft|xt · (y −xt). Fix t ∈[T], and consider the distance between xt+1 and y. By Lemma 2.2, we know that ∥xt+1 −y∥2 ≤ x′ t+1 −y 2, where x′ t+1 = xt −η P s∈Ft ∇fs|xs. We split the sum of gradients applied in a single round and consider them one by one. 
For each s ∈Ft, let Ft,s = {r ∈Ft : r < s}, and let xt,s = xt−η P r∈Ft,s ∇fr|xr. Suppose Ft is nonempty, and fix s′ = max Ft to be the last index in Ft. By Lemma 2.2, we have, ∥xt+1 −y∥2 2 ≤ x′ t+1 −y 2 2 = xt,s′ −η∇fs′|xs′ −y 2 2 = ∥xt,s′ −y∥2 2 −2η ∇fs′|xs′ · (xt,s′ −y)  + η2 ∇fs′|xs′ 2 2. Repeatedly unrolling the first term in this fashion gives ∥xt+1 −y∥2 2 ≤∥xt −y∥2 2 −2η X s∈Ft ∇fs|xs · (xt,s −y) + η2 X s∈Ft ∥∇fs|xs∥2 2. For each s ∈Ft, by convexity of f, we have, −∇fs|xs · (xt,s −y) = ∇fs|xs · (y −xt,s) = ∇fs|xs · (y −xs) + ∇fs|xs · (xs −xt,s) ≤fs(y) −fs(xs) + ∇fs|xs · (xs −xt,s). By assumption, we also have ∥∇fs|xs∥2 ≤L for each s ∈Ft. With respect to the distance between xt+1 and y, this gives, ∥xt+1 −y∥2 2 ≤∥xt −y∥2 2 + 2η X s∈Ft (fs(y) −fs(xs) + ∇fs|xs · (xs −xt,s)) + η2 · |Ft| · L2. Solving this inequality for the regret terms P s∈Ft fs(xs)−fs(y) and taking the sum of inequalities over all rounds t ∈[T], we have, T X t=1 (ft(xt) −ft(y)) = T X t=1 X s∈Ft fs(xs) −fs(y) ≤1 2η · T X t=1 ∥xt −y∥2 2 −∥xt+1 −y∥2 2 + 2η X s∈Ft ∇fs|xs · (xs −xt,s) + η2 · |Ft| · L2 ! = 1 2η T X t=1 ∥xt −y∥2 2 −∥xt+1 −y∥2 2 ! + η 2TL2 + T X t=1 X s∈Ft ∇fs|xs · (xs −xt,s) ≤1 2η + η 2TL2 + T X t=1 X s∈Ft ∇fs|xs · (xs −xt,s). (2) 4 The first two terms are familiar from the standard analysis of online-gradient-descent. It remains to analyze the last sum, which we call the delay term. Each summand ∇fs|xs · (xs −xt,s) in the delay term contributes loss proportional to the distance between the point xs when the gradient ∇fs|xs is generated and the point xt,s when the gradient is applied. This distance is created by the other gradients that are applied in between, and the number of such in-between gradients are intimately tied to the total delay, as follows. By Cauchy-Schwartz, the delay term is bounded above by T X t=1 X s∈Ft ∇fs|xs · (xs −xt,s) ≤ T X t=1 X s∈Ft ∥∇fs|xs∥2∥xs −xt,s∥2 ≤L T X t=1 X s∈Ft ∥xs −xt,s∥2. (3) Consider a single term ∥xs −xt,s∥2 for fixed t ∈[T] and s ∈Ft. 
Intuitively, the difference xt,s−xs is roughly the sum of gradients received between round s and when we apply the gradient from round s in round t. More precisely, by applying the triangle inequality and Lemma 2.2, we have, ∥xt,s −xs∥2 ≤∥xt,s −xt∥2 + ∥xt −xs∥2 ≤∥xt,s −xt∥2 + ∥x′ t −xs∥2. For the same reason, we have ∥x′ t −xs∥2 ≤∥x′ t −xt−1∥2 + x′ t−1 −xs 2, and unrolling in this fashion, we have, ∥xt,s −xs∥2 ≤∥xt,s −xt∥2 + t−1 X r=s x′ r+1 −xr 2 ≤η X p∈Ft,s ∇fp|xp 2 + η t−1 X r=s X q∈Fr ∇fq|xq 2 ≤η · L · |Ft,s| + t−1 X r=s |Fr| ! . (4) After substituting equation (4) into equation (3), it remains to bound the sum PT t=1 P s∈Ft(|Ft,s|+ Pt−1 r=s|Fr|). Consider a single term |Ft,s| + Pt−1 r=s|Fr| in the sum. This quantity counts, for a gradient ∇fs|xs from round s delivered just before round t ≥s, the number of other gradients that are applied while ∇fs|xs is withheld. Fix two rounds s and t, and consider an intermediate round r ∈{s, . . . , t}. If r < t then fix q ∈Fr, and if r = t then fix q ∈Ft,s. The feedback from round q is applied in a round r between round s and round t. We divide our analysis into two scenarios. In one case, q ≤s, and the gradient from round q appears only after s, as in the following diagram. q / ∇fq|xq % · · · / s / ∇fs|xs $ · · · / r / · · · / t In the other case, q > s, as in the following diagram. s / ∇fs|xs ) · · · / q / ∇fq|xq " · · · / r / · · · / t For each round u, let du denote the number of rounds the gradient feedback is delayed (so u ∈ Fu+du). There are at most ds instances of the latter case, since q must lie in s+1, . . . , t. The first case can be charged to dq. To bound the first case, observe that for fixed q, the number of indices s such that q < s ≤dq + q ≤ds + s is at most dq. That is, all instances of the second case for a fixed q can be charged to dq. 
Between the two cases, we have ∑_{t=1}^T ∑_{s∈F_t} (|F_{t,s}| + ∑_{r=s}^{t−1} |F_r|) ≤ 2 ∑_{t=1}^T d_t, and the delay term is bounded by ∑_{t=1}^T ∑_{s∈F_t} ∇f_s|_{x_s} · (x_s − x_{t,s}) ≤ 2η · L² ∑_{t=1}^T d_t. With respect to the overall regret, this gives ∑_{t=1}^T (f_t(x_t) − f_t(y)) ≤ 1/(2η) + η · L² (T/2 + 2 ∑_{t=1}^T d_t) = O(1/η + ηL²D), as desired. ■ Remark 2.3. The delay term ∑_{t=1}^T ∑_{s∈F_t} ∇f_s|_{x_s} · (x_s − x_{t,s}) is a natural point of entry for a sharper analysis based on strong sparseness assumptions. The distance x_s − x_{t,s} is measured by its projection onto the gradient ∇f_s|_{x_s}, and the preceding proof assumes the worst case and bounds the dot product with the Cauchy–Schwarz inequality. If, for example, we assume that the gradients are pairwise orthogonal and analyze online-gradient-descent in the unconstrained setting, then the dot product ∇f_s|_{x_s} · (x_s − x_{t,s}) is 0 and the delay term vanishes altogether. 3 Delaying the Perturbed Leader Discrete online linear optimization In discrete online linear optimization, the input domain K ⊂ ℝⁿ is a (possibly discrete) set with bounded diameter, and each cost function f_t is of the form f_t(x) = c_t · x for a bounded-length cost vector c_t. The previous algorithm online-gradient-descent does not apply here because K is not convex. A natural algorithm for this problem is follow-the-leader. Each round t, let y_t = argmin_{x∈K} x · (c_1 + ··· + c_t) be the optimum choice over the first t cost vectors. The algorithm picking y_t in round t is called be-the-leader, and can be shown to have zero regret. Of course, be-the-leader is infeasible, since the cost vector c_t is revealed after picking y_t. follow-the-leader tries the next best thing, picking y_{t−1} in round t. Unfortunately, this strategy can have linear regret, largely because it is a deterministic algorithm that can be manipulated by an adversary. Kalai and Vempala [2] gave a simple and elegant correction called follow-the-perturbed-leader. Let ϵ > 0 be a parameter to be fixed later, and let Q_ϵ = [0, 1/ϵ]ⁿ be the cube of side length 1/ϵ.
Each round t, follow-the-perturbed-leader picks a vector c_0 ∈ Q_ϵ uniformly at random, and then selects x_t = argmin_{x∈K} x · (c_0 + c_1 + ··· + c_{t−1}) to optimize over the previous costs plus the random perturbation c_0. With the diameter of K and the lengths ∥c_t∥ of the cost vectors held constant, Kalai and Vempala showed that follow-the-perturbed-leader has regret O(√T) in expectation. Following the delayed and perturbed leader More generally, follow-the-perturbed-leader optimizes over all information available to the algorithm, plus some additional noise to smoothen the worst-case analysis. If the cost vectors are delayed, we naturally interpret follow-the-perturbed-leader to optimize over all cost vectors c_t delivered in time for round t when picking its point x_t. That is, the tth leader becomes the best choice with respect to all cost vectors delivered in the first t rounds: y^d_t = argmin_{x∈K} ∑_{s=1}^t ∑_{r∈F_s} c_r · x (we use the superscript d to emphasize the delayed setting). The tth perturbed leader optimizes over all cost vectors delivered through the first t rounds in addition to the random perturbation c_0 ∈ Q_ϵ: ỹ^d_t = argmin_{x∈K} (c_0 · x + ∑_{s=1}^t ∑_{r∈F_s} c_r · x). In the delayed setting, follow-the-perturbed-leader chooses x_t = ỹ^d_{t−1} in round t. We claim that follow-the-perturbed-leader has a direct and simple regret bound in terms of the sum of delays D that collapses to Kalai and Vempala's O(√T) regret bound in the undelayed setting. Theorem 3.1. Let K ⊆ ℝⁿ be a set with L1-diameter ≤ 1, let c_1, …, c_T ∈ ℝⁿ with ∥c_t∥₁ ≤ 1 for all t, and let ϵ > 0. In the presence of adversarial delays, follow-the-perturbed-leader picks points x_1, …, x_T ∈ K such that for all y ∈ K, ∑_{t=1}^T E[c_t · x_t] ≤ ∑_{t=1}^T c_t · y + O(ϵ^{−1} + ϵD). For ϵ = 1/√D, Theorem 3.1 implies a regret bound of O(√D). When D is not known a priori, the doubling trick can be used to adjust ϵ dynamically (see the discussion following Theorem 2.1).
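Delayed follow-the-perturbed-leader admits a short sketch. Representing the finite decision set as the rows of a matrix `K`, drawing a fresh perturbation each round, and all names below are simplifications for illustration, not the paper's code:

```python
import numpy as np

def delayed_ftpl(costs, delays, K, eps, seed=0):
    """Pick x_t by optimizing over delivered costs plus fresh noise c0.

    costs:  (T, n) array of cost vectors c_t (revealed with delay d_t)
    delays: delays[t] = d_t >= 1
    K:      (m, n) array whose rows are the feasible points (finite K)
    eps:    perturbation scale; c0 is uniform on [0, 1/eps]^n
    """
    rng = np.random.default_rng(seed)
    T, n = costs.shape
    delivered = np.zeros(n)          # sum of cost vectors delivered so far
    arrivals, picks = {}, []
    for t in range(T):
        for s in arrivals.pop(t, []):
            delivered += costs[s]
        c0 = rng.uniform(0.0, 1.0 / eps, size=n)
        # x_t = perturbed leader over the costs delivered before round t
        picks.append(K[np.argmin(K @ (c0 + delivered))])
        arrivals.setdefault(t + delays[t], []).append(t)
    return np.array(picks)
```

With unit delays this is Kalai and Vempala's algorithm verbatim; only the bookkeeping of when each c_t joins the accumulated sum changes in the delayed setting.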
6 To analyze follow-the-perturbed-leader in the presence of delays, we introduce the notion of a prophet, who is a sort of omniscient leader who sees the feedback immediately. Formally, the tth prophet is the best point with respect to all the cost vectors over the first t rounds: zt = arg min x∈K (c1 + · · · + ct) · x. The tth perturbed prophet is the best point with respect to all the cost vectors over the first t rounds, in addition to a perturbation c0 ∈Qϵ: ˜zt = arg min x∈K (c0 + c1 + · · · + ct) · x. (5) The prophets and perturbed prophets behave exactly as the leaders and perturbed leaders in the setting of Kalai and Vempala with no delays. In particular, we can apply the regret bound of Kalai and Vempala to the (infeasible) strategy of following the perturbed prophet. Lemma 3.2 ([2]). Let K ⊆Rn be a set with L1-diameter ≤1, let c1, . . . , cT ∈Rn be cost vectors bounded by ∥ct∥1 ≤1 for all t, and let ϵ > 0. If ˜z1, . . . , ˜zT −1 ∈K are chosen per equation (5), then PT t=1 E[ct · ˜zi−1] ≤PT t=1 ct · y + O ϵ−1 + ϵT  . for all y ∈K. The analysis by Kalai and Vempala observes that when there are no delays, two consecutive perturbed leaders ˜yt and ˜yt+1 are distributed similarly over the random noise [2, Lemma 3.2]. Instead, we will show that ˜yd t and ˜zt are distributed in proportion to delays. We first require a technical lemma that is implicit in [2]. Lemma 3.3. Let K be a set with L1-diameter ≤1, and let u, v ∈Rn be vectors. Let y, z ∈Rn be random vectors defined by y = arg miny∈K(q + u) · y and z = arg minz∈K(q + v) · z, where q is chosen uniformly at random from Q = Qn i=1[0, r], for some fixed length r > 0. Then, for any vector c, E[c · z] −E[c · y] ≤∥v −u∥1∥c∥∞ r . Proof. Let Q′ = v+Q and Q′′ = u+Q, and write y = arg miny∈K q′′·y and z = arg minz∈K q′·z, where q′ ∈Q′ and q′′ ∈Q′′ are chosen uniformly at random. Then E[c · z] −E[c · y] = Eq′′∈Q′′[c · z] −Eq′∈Q′[c · y]. 
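The cube-overlap computation that drives Lemma 3.3 is easy to sanity-check numerically. Here `missed_mass` is a hypothetical helper evaluating P[q′ ∈ Q′ \ Q″] exactly from the product formula in the proof, and the vectors u, v are arbitrary test data:

```python
import numpy as np

def missed_mass(u, v, r):
    """P[q' in Q' \\ Q''] for Q' = v + [0,r]^n and Q'' = u + [0,r]^n."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    overlap = np.prod(np.clip(r - np.abs(v - u), 0.0, None))
    return 1.0 - overlap / r ** len(u)

u = np.array([0.2, -0.1, 0.05])
v = np.array([0.0, 0.3, 0.1])
r = 2.0
# The lemma's bound: the missed probability mass is at most ||v - u||_1 / r.
assert missed_mass(u, v, r) <= np.abs(v - u).sum() / r
```

The assertion is just the inequality 1 − ∏(1 − a_i) ≤ ∑ a_i applied with a_i = |v_i − u_i|/r, which is the step the proof uses to bound vol(Q′ ∩ Q″).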
Subtracting P[q′ ∈Q′ ∩Q′′]Eq′∈Q′∩Q′′[c · z] from both terms on the right, we have Eq′′∈Q′′[c · z] −Eq′∈Q′[c · y] = P[q′′ ∈Q′′ \ Q′] · Eq′′∈Q′′\Q′[c · z] −P[q′ ∈Q′ \ Q′′] · Eq′∈Q′\Q′′[c · y] By symmetry, P[q′′ ∈Q′′ \ Q′] = P[q′ ∈Q′ \ Q′′], and we have, E[c · z] −E[c · y] ≤(P[q′′ ∈Q′′ \ Q′])Eq′′∈Q′′\Q′,q′∈Q′\Q′′[c · (z −y)]. By assumption, K has L1-diameter ≤1, so ∥y −z∥1 ≤1, and by Hölder’s inequality, we have, E[c · z] −E[c · y] ≤P[q′′ ∈Q′′ \ Q′]∥c∥∞. It remains to bound P[q′′ ∈Q′′ \ Q′] = P[q′ ∈Q′ \ Q′′]. If ∥v −u∥1 ≤r, we have, vol(Q′ ∩Q′′) = n Y i=1 (r −|vi −ui|) = vol(Q′) n Y i=1  1 −|(vi −ui)| r  ≥vol(Q′)  1 −∥v −u∥1 r  . Otherwise, if ∥u −v∥1 > r, then vol(Q′ ∩Q′′) = 0 ≥vol(Q′)(1 −∥v −u∥1/r). In either case, we have, P[q′ ∈Q′ \ Q′′] = vol(Q′ ∩Q′′) vol(Q′) ≤1 −vol(Q′ ∩Q′′) vol(Q′) ≤∥v −u∥1 r , and the claim follows. ■ Lemma 3.3 could also have been proven geometrically in similar fashion to Kalai and Vempala. 7 Lemma 3.4. PT t=1 E[ct · ˜zt−1] −E  ct · ˜yd t−1  ≤ϵD, where D is the sum of delays of all cost vectors. Proof. Let ut = Pt s=1 ct be the sum of all costs through the first t rounds, and vt = P s:s+ds≤t ct be the sum of cost vectors actually delivered through the first t rounds. Then the perturbed prophet ˜zt−1 optimizes over c0 + ut−1 and ˜yd t−1 optimizes over c0 + vt−1. By Lemma 3.3, for each t, we have Ec0∼Qϵ[ct · ˜zt−1] −Ec0∼Qϵ  ct · ˜yd t−1  ≤ϵ · ∥ut−1 −vt−1∥1∥ct∥∞≤ϵ · |{s < t : s + ds ≥t}| Summed over all T rounds, we have, T X t=1 Ec0[ct · ˜zt] −Ec0  ct · ˜yd t  ≤ϵ T X t=1 |{s < t : s + ds ≥t}|. The sum PT t=1|{s < t : s + ds ≥t}| charges each cost vector cs once for every round it is delayed, and therefore equals D. Thus, PT t=1 Ec0[ct · ˜zt] −Ec0  ct · ˜yd t  ≤ϵD, as desired. ■ Now we complete the proof of Theorem 3.1. Proof of Theorem 3.1. By Lemma 3.4 and Lemma 3.2, we have, T X t=1 E  ct · ˜yd t−1  ≤ T X t=1 E[ct · ˜zt−1] + ϵD ≤arg min x∈K T X t=1 E[ct · x] + O(ϵ−1 + ϵD), as desired. 
■

4 Conclusion

We prove O(√D) regret bounds for online-gradient-descent and follow-the-perturbed-leader in the delayed setting, directly extending the O(√T) regret bounds known in the undelayed setting. More importantly, by deriving a simple bound as a function of the delays, without any restriction on the delays, we establish a simple and intuitive model for measuring delayed learning. This work suggests natural relationships between the regret bounds of online learning algorithms and delays in the feedback. Beyond analyzing existing algorithms, we hope that optimizing the regret as a function of D may inspire different (and hopefully simple) algorithms that readily model real-world applications and scale nicely to distributed environments.

Acknowledgements

We thank Avrim Blum for introducing us to the area of online learning and helping us with several valuable discussions. We thank the reviewers for their careful and insightful reviews: finding errors, referencing relevant works, and suggesting a connection to mirror descent.

References

[1] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proc. 20th Int. Conf. Mach. Learning (ICML), pages 928–936, 2003.
[2] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. J. Comput. Sys. Sci., 71:291–307, 2005. Extended abstract in Proc. 16th Ann. Conf. Comp. Learning Theory (COLT), 2003.
[3] A. Blum. On-line algorithms in machine learning. In A. Fiat and G. Woeginger, editors, Online Algorithms, volume 1442 of LNCS, chapter 14, pages 306–325. Springer Berlin Heidelberg, 1998.
[4] S. Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn., 4(2):107–194, 2011.
[5] E. Hazan. Introduction to online convex optimization. Internet draft available at http://ocobook.cs.princeton.edu, 2015.
[6] E. Takimoto and M. Warmuth. Path kernels and multiplicative updates. J. Mach. Learn. Research, 4:773–818, 2003.
[7] D.
Helmbold and R. Schapire. Predicting nearly as well as the best pruning of a decision tree. Mach. Learn. J., 27(1):61–68, 1997.
[8] A. Blum, S. Chawla, and A. Kalai. Static optimality and dynamic search optimality in lists and trees. Algorithmica, 36(3):249–260, 2003.
[9] T. M. Cover. Universal portfolios. Math. Finance, 1(1):1–29, 1991.
[10] K. Crammer and Y. Singer. A family of additive online algorithms for category ranking. J. Mach. Learn. Research, 3:1025–1058, 2003.
[11] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Herbrich, S. Bowers, and J. Quiñonero Candela. Practical lessons from predicting clicks on ads at Facebook. In Proc. 20th ACM Conf. Knowl. Disc. and Data Mining (KDD), pages 1–9. ACM, 2014.
[12] S. Amuru and R. M. Buehrer. Optimal jamming using delayed learning. In 2014 IEEE Military Comm. Conf. (MILCOM), pages 1528–1533. IEEE, 2014.
[13] I. Menache, O. Shamir, and N. Jain. On-demand, spot, or both: Dynamic resource allocation for executing batch jobs in the cloud. In 11th Int. Conf. on Autonomic Comput. (ICAC), 2014.
[14] M. J. Weinberger and E. Ordentlich. On delayed prediction of individual sequences. IEEE Trans. Inf. Theory, 48(7):1959–1976, 2002.
[15] P. Joulani, A. György, and C. Szepesvári. Online learning under delayed feedback. In Proc. 30th Int. Conf. Mach. Learning (ICML), volume 28, 2013.
[16] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, 1989.
[17] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In Adv. Neural Info. Proc. Sys. 24 (NIPS), pages 693–701, 2011.
[18] J. Duchi, M. I. Jordan, and B. McMahan. Estimation, optimization, and parallelism when data is sparse. In Adv. Neural Info. Proc. Sys. 26 (NIPS), pages 2832–2840, 2013.
[19] H. B. McMahan and M. Streeter. Delay-tolerant algorithms for asynchronous distributed online learning. In Adv. Neural Info. Proc. Sys.
27 (NIPS), pages 2915–2923, 2014.
[20] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Research, 12:2121–2159, July 2011.
[21] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In Adv. Neural Info. Proc. Sys. 22 (NIPS), pages 2331–2339, 2009.
[22] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. J. Mach. Learn. Research, 16:285–322, 2015.
[23] J. C. Duchi, T. Chaturapruek, and C. Ré. Asynchronous stochastic convex optimization. CoRR, abs/1508.00882, 2015. To appear in Adv. Neural Info. Proc. Sys. 28 (NIPS), 2015.
[24] S. J. Wright. Coordinate descent algorithms. Math. Prog., 151(1):3–34, 2015.
[25] D. Riabko. On the flexibility of theoretical models for pattern recognition. PhD thesis, University of London, April 2005.
[26] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. J. Assoc. Comput. Mach., 44(3):426–485, 1997.
Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems

Yuxin Chen, Department of Statistics, Stanford University, Stanford, CA 94305, yxchen@stanford.edu
Emmanuel J. Candès, Department of Mathematics and Department of Statistics, Stanford University, Stanford, CA 94305, candes@stanford.edu

Abstract

This paper is concerned with finding a solution x to a quadratic system of equations y_i = |⟨a_i, x⟩|², i = 1, . . . , m. We demonstrate that it is possible to solve unstructured random quadratic systems in n variables exactly from O(n) equations in linear time, that is, in time proportional to reading the data {a_i} and {y_i}. This is accomplished by a novel procedure which, starting from an initial guess given by a spectral initialization procedure, attempts to minimize a nonconvex objective. The proposed algorithm is distinguished from prior approaches by regularizing the initialization and descent procedures in an adaptive fashion, discarding terms that bear too much influence on the initial estimate or search directions. These careful selection rules, which effectively serve as a variance reduction scheme, provide a tighter initial guess, more robust descent directions, and thus enhanced practical performance. Further, this procedure also achieves a near-optimal statistical accuracy in the presence of noise. Empirically, we demonstrate that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.

1 Introduction

Suppose we are given a response vector y = [y_i]_{1≤i≤m} generated from a quadratic transformation of an unknown object x ∈ R^n/C^n, i.e.

y_i = |⟨a_i, x⟩|², i = 1, · · · , m,   (1)

where the feature/design vectors a_i ∈ R^n/C^n are known. In other words, we acquire measurements about the linear products ⟨a_i, x⟩ with all signs/phases missing. Can we hope to recover x from this nonlinear system of equations?
This problem can be recast as a quadratically constrained quadratic program (QCQP), which subsumes as special cases various classical combinatorial problems with Boolean variables (e.g. the NP-complete stone problem [1, Section 3.4.1]). In the physical sciences, this problem is commonly referred to as phase retrieval [2]; the reason is that in many imaging applications (e.g. X-ray crystallography, diffraction imaging, microscopy) it is infeasible to record the phases of the diffraction patterns, so that we can only record |Ax|², where x is the electrical field of interest. Moreover, this problem finds applications in estimating mixtures of linear regressions, since one can transform the latent membership variables into missing phases [3]. Despite its importance across various fields, solving the quadratic system (1) is combinatorial in nature and, in general, NP-complete.

To be more realistic albeit more challenging, the acquired samples are almost always corrupted by some amount of noise, namely,

y_i ≈ |⟨a_i, x⟩|², i = 1, · · · , m.   (2)

For instance, in imaging applications the data are best modeled by independent Poisson random variables

y_i ∼ Poisson(|⟨a_i, x⟩|²), i = 1, · · · , m,   (3)

which captures the variation in the number of photons detected by a sensor. While we shall pay special attention to the Poisson noise model due to its practical relevance, the current work aims to accommodate general, or even deterministic, noise structures.

1.1 Nonconvex optimization

Assuming independent samples, the first attempt is to seek the maximum likelihood estimate (MLE):

minimize_z − ∑_{i=1}^m ℓ(z; y_i),   (4)

where ℓ(z; y_i) represents the log-likelihood of a candidate z given the outcome y_i. As an example, under the Poisson data model (3), one has (up to some constant offset)

ℓ(z; y_i) = y_i log(|a_i^* z|²) − |a_i^* z|².   (5)

Computing the MLE, however, is in general intractable, since ℓ(z; y_i) is not concave in z.
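To make the objective concrete, the Poisson log-likelihood (5) is easy to evaluate numerically. The sketch below (real-valued, on an invented toy instance) also checks its invariance under a global sign flip, one visible symptom of the nonconcavity:

```python
import numpy as np

def poisson_loglik(z, A, y):
    # sum_i [ y_i * log(|a_i^T z|^2) - |a_i^T z|^2 ], constant offsets dropped as in (5)
    s = (A @ z) ** 2
    return float(np.sum(y * np.log(s) - s))

rng = np.random.default_rng(1)
n, m = 10, 80
x = rng.standard_normal(n)                 # ground truth
A = rng.standard_normal((m, n))            # rows are the design vectors a_i
y = rng.poisson((A @ x) ** 2)              # Poisson measurements, model (3)

# z and -z are indistinguishable given y, so the objective cannot be concave on
# any set containing both points and the segment between them.
assert np.isclose(poisson_loglik(x, A, y), poisson_loglik(-x, A, y))
```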
Fortunately, under unstructured random systems, the problem is not as ill-posed as it might seem, and is solvable via convenient convex programs with optimal statistical guarantees [4–12]. The basic paradigm is to lift the quadratically constrained problem into a linearly constrained problem by introducing a matrix variable X = xx^* and relaxing the rank-one constraint. Nevertheless, working with the auxiliary matrix variable significantly increases the computational complexity, which exceeds the order of n³ and is prohibitively expensive for large-scale data.

This paper follows a different route, which attempts recovery by minimizing the nonconvex objective (4) or (5) directly (e.g. [2, 13–19]). The main incentive is the potential computational benefit, since this strategy operates directly upon vectors instead of lifting decision variables to higher dimensions. Among this class of procedures, one natural candidate is the family of gradient-descent-type algorithms developed with respect to the objective (4). This paradigm can be regarded as performing a variant of stochastic gradient descent over the random samples {(y_i, a_i)}_{1≤i≤m} as an approximation to maximizing the population likelihood L(z) := E_{(y,a)}[ℓ(z; y)].

While in general nonconvex optimization falls short of performance guarantees, a recently proposed approach called Wirtinger Flow (WF) [13] promises efficiency under random features. In a nutshell, WF initializes the iterate via a spectral method, and then successively refines the estimate via the following update rule:

z^{(t+1)} = z^{(t)} + (µ_t / m) ∑_{i=1}^m ∇ℓ(z^{(t)}; y_i),

where z^{(t)} denotes the t-th iterate of the algorithm, and µ_t is the learning rate. Here, ∇ℓ(z; y_i) represents the Wirtinger derivative with respect to z, which reduces to the ordinary gradient in the real setting.
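In the real-valued case, one plain (untruncated) WF-style gradient step on the Poisson objective can be sketched as follows. The toy instance and step size here are invented, and the check starts near the truth, since without truncation the update is only reliable inside a local basin:

```python
import numpy as np

def wf_step(z, A, y, mu):
    # One update z <- z + (mu/m) * sum_i grad l(z; y_i) for the real Poisson objective,
    # with grad l(z; y_i) = 2 * (y_i - (a_i^T z)^2) / (a_i^T z) * a_i.
    m = len(y)
    Az = A @ z
    nu = 2.0 * (y - Az ** 2) / Az          # per-sample weights; can blow up if a_i^T z ~ 0
    return z + (mu / m) * (A.T @ nu)

rng = np.random.default_rng(2)
n, m = 10, 80
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                           # noiseless measurements

z = x + 0.01 * rng.standard_normal(n)      # start near the truth
for _ in range(100):
    z = wf_step(z, A, y, mu=0.1)
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x))
assert err < 1e-5
```

Note that the whole step is two matrix-vector products (A z and Aᵀν), which is the structure the truncation introduced below preserves.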
Under Gaussian designs, WF (i) allows exact recovery from O(n log n) noise-free quadratic equations [13];¹ (ii) recovers x up to ε-accuracy within O(mn² log(1/ε)) time (or flops) [13]; and (iii) is stable and converges to the MLE under Gaussian noise [20]. Despite these intriguing guarantees, the computational complexity of WF still far exceeds the best that one can hope for. Moreover, its sample complexity is a logarithmic factor away from the information-theoretic limit.

1.2 This paper: Truncated Wirtinger Flow

This paper develops a novel linear-time algorithm, called Truncated Wirtinger Flow (TWF), that achieves a near-optimal statistical accuracy. The distinguishing features include a careful initialization procedure and a more adaptive gradient flow. Informally, TWF entails two stages:

1. Initialization: compute an initial guess z^{(0)} by means of a spectral method applied to a subset T_0 of the data {y_i} that do not bear too much influence on the spectral estimates;
2. Loop: for 0 ≤ t < T,

z^{(t+1)} = z^{(t)} + (µ_t / m) ∑_{i∈T_{t+1}} ∇ℓ(z^{(t)}; y_i)   (6)

for some index set T_{t+1} ⊆ {1, · · · , m} over which the ∇ℓ(z^{(t)}; y_i) are well-controlled.

¹ f(n) = O(g(n)) or f(n) ≲ g(n) (resp. f(n) ≳ g(n)) means there exists a constant c > 0 such that |f(n)| ≤ c|g(n)| (resp. |f(n)| ≥ c|g(n)|). f(n) ≍ g(n) means f(n) and g(n) are orderwise equivalent.

Figure 1: (a) Relative errors of CG and TWF vs. iteration count, where n = 1000 and m = 8n. (b) Relative MSE vs. SNR in dB, where n = 100. The curves are shown for two settings: TWF for solving quadratic equations (blue), and MLE had we observed additional phase information (green).

We highlight three aspects of the proposed algorithm, with details deferred to Section 2.
(a) In contrast to WF and other gradient descent variants, we regularize both the initialization and the gradient flow in a more cautious manner by operating only upon some iteration-varying index sets T_t. The main point is that enforcing such careful selection rules leads to a tighter initialization and more robust descent directions.

(b) TWF sets the learning rate µ_t in a far more liberal fashion (e.g. µ_t ≡ 0.2 under suitable conditions), as opposed to the situation in WF, which recommends µ_t = O(1/n).

(c) Computationally, each iterative step mainly consists in calculating {∇ℓ(z; y_i)}, which is inexpensive and can often be performed in linear time, that is, in time proportional to evaluating the data and the constraints. Take the real-valued Poisson likelihood (5) for example:

∇ℓ(z; y_i) = 2 ((y_i / |a_i^⊤ z|²) a_i a_i^⊤ z − a_i a_i^⊤ z) = 2 · ((y_i − |a_i^⊤ z|²) / (a_i^⊤ z)) · a_i, 1 ≤ i ≤ m,

which essentially amounts to two matrix-vector products. To see this, rewrite

∑_{i∈T_{t+1}} ∇ℓ(z^{(t)}; y_i) = A^⊤ v, with v_i = 2 (y_i − |a_i^⊤ z^{(t)}|²) / (a_i^⊤ z^{(t)}) if i ∈ T_{t+1}, and v_i = 0 otherwise,

where A := [a_1, · · · , a_m]^⊤. Hence, A z^{(t)} gives v and A^⊤ v the desired truncated gradient.

1.3 Numerical surprises

The power of TWF is best illustrated by numerical examples. Since x and e^{−jφ} x are indistinguishable given y, we evaluate a solution based on a metric that disregards the global phase [13]:

dist(z, x) := min_{ϕ∈[0,2π)} ∥e^{−jϕ} z − x∥.   (7)

In the sequel, TWF operates according to the Poisson log-likelihood (5), and takes µ_t ≡ 0.2. We first compare the computational efficiency of TWF for solving quadratic systems with that of conjugate gradient (CG) for solving least-squares problems. As is well known, CG is among the most popular methods for solving large-scale least-squares problems, and hence offers a desired benchmark. We run TWF and CG respectively over the following two problems:

(a) find x ∈ R^n s.t. b_i = a_i^⊤ x, 1 ≤ i ≤ m;
(b) find x ∈ R^n s.t. b_i = |a_i^⊤ x|, 1 ≤ i ≤ m,

where m = 8n, x ∼ N(0, I), and the a_i are drawn independently from N(0, I).
This yields a well-conditioned design matrix A, for which CG converges extremely fast [21]. The relative estimation errors of both methods are reported in Fig. 1(a), where TWF is seeded by 10 power iterations. The iteration counts are plotted in different scales so that 4 TWF iterations are tantamount to 1 CG iteration. Since each iteration of CG and TWF involves two matrix-vector products Az and A^⊤v, the numerical plots lead to a surprisingly positive observation for such an unstructured design:

Figure 2: Recovery after (top) truncated spectral initialization, and (bottom) 50 TWF iterations.

Even when all phase information is missing, TWF is capable of solving a quadratic system of equations only about 4 times² slower than solving a least-squares problem of the same size!

The numerical surprise extends to noisy quadratic systems. Under the Poisson data model, Fig. 1(b) displays the relative mean-square error (MSE) of TWF when the signal-to-noise ratio (SNR) varies; here, the relative MSE and the SNR are defined as³

MSE := dist²(ˆx, x) / ∥x∥² and SNR := 3∥x∥²,   (8)

where ˆx is an estimate. Both SNR and MSE are displayed on a dB scale (i.e. the values of 10 log₁₀(SNR) and 10 log₁₀(MSE) are plotted). To evaluate the quality of the TWF solution, we compare it with the MLE applied to an ideal problem where the phases (i.e. {ϕ_i = sign(a_i^⊤ x)}) are revealed a priori. The presence of this precious side information gives away the phase retrieval problem and allows us to compute the MLE via convex programming. As illustrated in Fig. 1(b), TWF solves the quadratic system with nearly the best possible accuracy, since it only incurs an extra 1.5 dB loss compared to the ideal MLE with all true phases revealed. To demonstrate the scalability of TWF on real data, we apply TWF to a 320×1280 image.
Consider a type of physically realizable measurements called coded diffraction patterns (CDP) [22], where

y^{(l)} = |F D^{(l)} x|², 1 ≤ l ≤ L,   (9)

where m = nL, |z|² denotes the vector of entrywise squared magnitudes, and F is the DFT matrix. Here, D^{(l)} is a diagonal matrix whose diagonal entries are randomly drawn from {1, −1, j, −j}, which models signal modulation before diffraction. We generate L = 12 masks for measurements, and run TWF on a MacBook Pro with a 3 GHz Intel Core i7. We run 50 truncated power iterations and 50 TWF iterations, which in total cost 43.9 seconds for each color channel. The relative errors after initialization and after the TWF iterations are 0.4773 and 2.2 × 10⁻⁵, respectively; see Fig. 2.

1.4 Main results

We corroborate the preceding numerical findings with theoretical support. For concreteness, we assume TWF proceeds according to the Poisson log-likelihood (5). We suppose the samples (y_i, a_i) are independently and randomly drawn from the population, and model the random features a_i as

a_i ∼ N(0, I_n).   (10)

To start with, the following theorem confirms the performance of TWF under noiseless data.

² Similar phenomena arise in many other experiments we've conducted (e.g. when the sample size m ranges from 6n to 20n). In fact, this factor seems to improve slightly as m/n increases.
³ To justify the definition of SNR, note that the signals and noise are captured by µ_i = (a_i^⊤ x)² and y_i − µ_i, respectively. The SNR is thus given by ∑_{i=1}^m µ_i² / ∑_{i=1}^m Var[y_i] = ∑_{i=1}^m |a_i^⊤ x|⁴ / ∑_{i=1}^m |a_i^⊤ x|² ≈ 3m∥x∥⁴ / (m∥x∥²) = 3∥x∥².

Theorem 1 (Exact recovery). Consider the noiseless case (1) with an arbitrary x ∈ R^n. Suppose that the learning rate µ_t is either taken to be a constant µ_t ≡ µ > 0 or chosen via a backtracking line search. Then there exist some constants 0 < ρ, ν < 1 and µ_0, c_0, c_1, c_2 > 0 such that with probability exceeding 1 − c_1 exp(−c_2 m), the TWF estimates (Algorithm 1) obey

dist(z^{(t)}, x) ≤ ν (1 − ρ)^t ∥x∥, ∀t ∈ N,   (11)

provided that m ≥ c_0 n and µ ≤ µ_0.
As discussed below, we can take µ_0 ≈ 0.3. Theorem 1 justifies two intriguing properties of TWF. To begin with, TWF recovers the ground truth exactly as soon as the number of equations is on the same order as the number of unknowns, which is information-theoretically optimal. More surprisingly, TWF converges at a geometric rate, i.e. it achieves ε-accuracy (i.e. dist(z^{(t)}, x) ≤ ε∥x∥) within at most O(log(1/ε)) iterations. As a result, the time taken for TWF to solve the quadratic system is proportional to the time taken to read the data, which confirms the linear-time complexity of TWF. These results outperform the theoretical guarantees of WF [13], which requires O(mn² log(1/ε)) runtime and O(n log n) sample complexity.

Notably, the performance gain of TWF is the result of the key algorithmic changes. Rather than maximizing the data usage at each step, TWF exploits the samples at hand in a more selective manner, which effectively trims away those components that are too influential on either the initial guess or the search directions, thus reducing the volatility of each movement. With a tighter initial guess and better-controlled search directions in place, TWF is able to proceed with a more aggressive learning rate. Taken collectively, these efforts enable the appealing convergence properties of TWF.

Next, we turn to more realistic noisy data by accounting for a general additive noise model:

y_i = |⟨a_i, x⟩|² + η_i, 1 ≤ i ≤ m,   (12)

where η_i represents a noise term. The stability of TWF is demonstrated in the theorem below.

Theorem 2 (Stability). Consider the noisy case (12). Suppose that the learning rate µ_t is either taken to be a positive constant µ_t ≡ µ or chosen via a backtracking line search. If

m ≥ c_0 n, µ ≤ µ_0, and ∥η∥_∞ ≤ c_1 ∥x∥²,   (13)

then with probability at least 1 − c_2 exp(−c_3 m), the TWF estimates (Algorithm 1) satisfy

dist(z^{(t)}, x) ≲ ∥η∥ / (√m ∥x∥) + (1 − ρ)^t ∥x∥, ∀t ∈ N,   (14)

for all x ∈ R^n. Here, 0 < ρ < 1 and µ_0, c_0, c_1, c_2, c_3 > 0 are some universal constants.
Alternatively, if one regards the SNR for the model (12) as

SNR := ∑_{i=1}^m |⟨a_i, x⟩|⁴ / ∥η∥² ≈ 3m∥x∥⁴ / ∥η∥²,   (15)

then we immediately arrive at another form of performance guarantee stated in terms of the SNR:

dist(z^{(t)}, x) ≲ ∥x∥ / √SNR + (1 − ρ)^t ∥x∥, ∀t ∈ N.   (16)

As a consequence, the relative error of TWF reaches O(SNR^{−1/2}) within a logarithmic number of iterations. It is worth emphasizing that the above stability guarantee is deterministic, holding for any noise structure obeying (13). Encouragingly, this statistical accuracy is nearly unimprovable, as revealed by a minimax lower bound that we provide in the supplemental materials.

We pause to remark that several other nonconvex methods have been proposed for solving quadratic equations, which exhibit intriguing empirical performance. A partial list includes the error reduction schemes by Fienup [2], alternating minimization [14], the Kaczmarz method [17], and generalized approximate message passing [15]. However, most of them fall short of theoretical support. The analytical difficulty arises since these methods employ the same samples in each iteration, which introduces complicated dependencies across all iterates. To circumvent this issue, [14] proposes a sample-splitting version of the alternating minimization method that employs fresh samples in each iteration. Despite the mathematical convenience, the sample complexity of this approach is O(n log³ n + n log² n · log(1/ε)), which is a factor of O(log³ n) from optimal and is empirically largely outperformed by the variant that reuses all samples. In contrast, our algorithm uses the same pool of samples all the time and is therefore practically appealing. Besides, the approach in [14] does not come with provable stability guarantees. Numerically, each iteration of Fienup's algorithm (or alternating minimization) involves solving a least-squares problem, and the algorithm converges in tens or hundreds of iterations.
This is computationally more expensive than TWF, whose computational complexity is merely about 4 times that of solving a least-squares problem.

2 Algorithm: Truncated Wirtinger Flow

This section explains the basic principles of Truncated Wirtinger Flow. For notational convenience, we denote A := [a_1, · · · , a_m]^⊤ and A(M) := {a_i^⊤ M a_i}_{1≤i≤m} for any M ∈ R^{n×n}.

2.1 Truncated gradient stage

Figure 3: The locus of −(1/2)∇ℓ_i(z) for all unit vectors a_i, shown for z = (3, 6) and x = (2.7, 8). The red arrows depict those directions with large weights.

In the case of independent real-valued data, the descent direction of the WF updates, which is the gradient of the Poisson log-likelihood, can be expressed as follows:

∑_{i=1}^m ∇ℓ(z; y_i) = ∑_{i=1}^m [2 (y_i − |a_i^⊤ z|²) / (a_i^⊤ z)] a_i =: ∑_{i=1}^m ν_i a_i,   (17)

where ν_i represents the weight assigned to each feature a_i. Unfortunately, a gradient of this form is non-integrable and hence uncontrollable. To see this, consider any fixed z ∈ R^n. The typical value of min_{1≤i≤m} |a_i^⊤ z| is on the order of (1/m)∥z∥, leading to some excessively large weights ν_i. Notably, an underlying premise for a nonconvex procedure to succeed is to ensure that all iterates reside within a basin of attraction, that is, a neighborhood surrounding x within which x is the unique stationary point of the objective. When a gradient is unreasonably large, the iterative step might overshoot and end up leaving this basin of attraction. Consequently, WF moving along the preceding direction might not come close to the truth unless z is already very close to x. This is observed in numerical simulations.⁴

TWF addresses this challenge by discarding terms having too high a leverage on the search direction; this is achieved by regularizing the weights ν_i via appropriate truncation.
Specifically,

z^{(t+1)} = z^{(t)} + (µ_t / m) ∇ℓ_tr(z^{(t)}), ∀t ∈ N,   (18)

where ∇ℓ_tr(·) denotes the truncated gradient given by

∇ℓ_tr(z) := ∑_{i=1}^m [2 (y_i − |a_i^⊤ z|²) / (a_i^⊤ z)] a_i 1_{E^i_1(z) ∩ E^i_2(z)}   (19)

for some appropriate truncation criteria specified by E^i_1(·) and E^i_2(·). In our algorithm, we take E^i_1(z) and E^i_2(z) to be two collections of events given by

E^i_1(z) := { α^lb_z ∥z∥ ≤ |a_i^⊤ z| ≤ α^ub_z ∥z∥ };   (20)
E^i_2(z) := { |y_i − |a_i^⊤ z|²| ≤ (α_h / m) ∥y − A(zz^⊤)∥_1 · |a_i^⊤ z| / ∥z∥ },   (21)

where α^lb_z, α^ub_z, α_h are predetermined truncation thresholds. In words, we drop components whose size falls outside some confidence range, a range where the magnitudes of both the numerator and denominator of ν_i are comparable to their respective mean values. This paradigm could be counter-intuitive at first glance, since one might expect the larger terms to be better aligned with the desired search direction. The issue, however, is that the large terms are extremely volatile and could dominate all other components in an undesired way. In contrast, TWF makes use of only gradient components of typical sizes, which slightly increases the bias but remarkably reduces the variance of the descent direction. We expect such gradient regularization and variance reduction schemes to be beneficial for solving a broad family of nonconvex problems.

2.2 Truncated spectral initialization

A key step to ensure meaningful convergence is to seed TWF with some point inside the basin of attraction, which proves crucial for other nonconvex procedures as well. An appealing initialization

⁴ For complex-valued data, WF converges empirically, as min_i |a_i^⊤ z| is much larger than in the real case.

Algorithm 1 Truncated Wirtinger Flow.
Input: measurements {y_i | 1 ≤ i ≤ m} and feature vectors {a_i | 1 ≤ i ≤ m}; truncation thresholds α^lb_z, α^ub_z, α_h, and α_y satisfying (by default, α^lb_z = 0.3, α^ub_z = α_h = 5, and α_y = 3)

0 < α^lb_z ≤ 0.5, α^ub_z ≥ 5, α_h ≥ 5, and α_y ≥ 3.
(25)

Initialize z^{(0)} to be √(mn / ∑_{i=1}^m ∥a_i∥²) · λ˜z, where λ = √((1/m) ∑_{i=1}^m y_i) and ˜z is the leading eigenvector of

Y = (1/m) ∑_{i=1}^m y_i a_i a_i^* 1_{|y_i| ≤ α_y² λ²}.   (22)

Loop: for t = 0 : T do

z^{(t+1)} = z^{(t)} + (2µ_t / m) ∑_{i=1}^m [(y_i − |a_i^* z^{(t)}|²) / (z^{(t)*} a_i)] a_i 1_{E^i_1 ∩ E^i_2},   (23)

where

E^i_1 := { α^lb_z ≤ (√n / ∥a_i∥) · |a_i^* z^{(t)}| / ∥z^{(t)}∥ ≤ α^ub_z },
E^i_2 := { |y_i − |a_i^* z^{(t)}|²| ≤ α_h K_t · (√n / ∥a_i∥) · |a_i^* z^{(t)}| / ∥z^{(t)}∥ },   (24)

and K_t := (1/m) ∑_{l=1}^m |y_l − |a_l^* z^{(t)}|²|.

Output z^{(T)}.

procedure is the spectral method [14], [13], which initializes z^{(0)} as the leading eigenvector of Ỹ := (1/m) ∑_{i=1}^m y_i a_i a_i^⊤. This is based on the observation that for any fixed unit vector x, E[Ỹ] = I + 2xx^⊤, whose principal component is exactly x with an eigenvalue of 3. Unfortunately, the success of this method requires a sample complexity exceeding n log n. To see this, recall that max_i y_i ≈ 2 log m. Letting k = arg max_i y_i and ã_k := a_k / ∥a_k∥, one can derive

ã_k^⊤ Ỹ ã_k ≥ ã_k^⊤ (m^{−1} a_k a_k^⊤ y_k) ã_k ≈ (2n log m) / m,

which dominates x^⊤ Ỹ x ≈ 3 unless m ≳ n log m. As a result, ã_k is closer to the principal component of Ỹ than x when m ≍ n. This drawback turns out to be a substantial practical issue.

Figure 4: Relative initialization error of the spectral and truncated spectral methods when a_i ∼ N(0, I) and m = 6n.

This issue can be remedied if we preclude those data y_i with large magnitudes when running the spectral method. Specifically, we propose to initialize z^{(0)} as the leading eigenvector of

Y := (1/m) ∑_{i=1}^m y_i a_i a_i^⊤ 1_{|y_i| ≤ α_y² (1/m) ∑_{l=1}^m y_l}   (26)

followed by proper scaling so as to ensure ∥z^{(0)}∥ ≈ ∥x∥. As illustrated in Fig. 4, the empirical advantage of the truncated spectral method becomes increasingly more remarkable as n grows.

2.3 Choice of algorithmic parameters

One important implementation detail is the learning rate µ_t. There are two alternatives that work well in both theory and practice:

1. Fixed size. Take µ_t ≡ µ for some constant µ > 0.
As long as µ is not too large, this strategy always works. Under the condition (25), our theorems hold for any positive constant µ < 0.28.

2. Backtracking line search with truncated objective. This strategy performs a line search along the descent direction and determines an appropriate learning rate that guarantees sufficient improvement with respect to the truncated objective. Details are deferred to the supplement.

Other algorithmic details to specify are the truncation thresholds α_h, α^lb_z, α^ub_z, and α_y. The present paper isolates a concrete set of combinations, given in (25). In all theory and numerical experiments presented in this work, we assume that the parameters fall within this range.

Figure 5: (a) Empirical success rates for real Gaussian design; (b) empirical success rates for complex Gaussian design; (c) relative MSE (averaged over 100 runs) vs. SNR for Poisson data.

3 More numerical experiments and discussion

We conduct more extensive numerical experiments to corroborate our main results and verify the applicability of TWF on practical problems. For all experiments conducted herein, we take a fixed step size µ_t ≡ 0.2, employ 50 power iterations for initialization, and run T = 1000 gradient iterations. The truncation levels are taken to be the default values α^lb_z = 0.3, α^ub_z = α_h = 5, and α_y = 3.

We first apply TWF to a sequence of noiseless problems with n = 1000 and varying m. We generate the object x at random, and produce the feature vectors a_i in two different ways: (1) a_i ∼ N(0, I) independently; (2) a_i ∼ N(0, I) + jN(0, I) independently.
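For reference, the full pipeline of Algorithm 1 can be reproduced in miniature. The following simplified, real-valued sketch runs on a small invented instance with the default thresholds and µ_t ≡ 0.2; the √(mn/∑∥a_i∥²) rescaling of the initializer is dropped, since it is close to 1 for Gaussian designs:

```python
import numpy as np

def dist(z, x):
    # metric (7) specialized to the real case: the global sign is unrecoverable
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))

def twf(A, y, T=300, mu=0.2, a_lb=0.3, a_ub=5.0, a_h=5.0, a_y=3.0):
    m, n = A.shape
    lam2 = y.mean()
    # truncated spectral initialization, cf. (22): discard samples with very large y_i
    keep = y <= a_y ** 2 * lam2
    Y = (A[keep].T * y[keep]) @ A[keep] / m
    z = np.sqrt(lam2) * np.linalg.eigh(Y)[1][:, -1]     # scaled leading eigenvector
    for _ in range(T):
        Az = A @ z
        resid = np.abs(y - Az ** 2)
        Kt = resid.mean()
        ratio = (np.sqrt(n) / np.linalg.norm(A, axis=1)) * np.abs(Az) / np.linalg.norm(z)
        E1 = (a_lb <= ratio) & (ratio <= a_ub)          # events E_1^i of (24)
        E2 = resid <= a_h * Kt * ratio                  # events E_2^i of (24)
        nu = np.where(E1 & E2, 2.0 * (y - Az ** 2) / Az, 0.0)
        z = z + (mu / m) * (A.T @ nu)                   # truncated gradient step (23)
    return z

rng = np.random.default_rng(3)
n, m = 20, 240                                          # m = 12n equations
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                                        # noiseless system (1)

z = twf(A, y)
assert dist(z, x) / np.linalg.norm(x) < 1e-4
```

With noisy y the same loop applies unchanged; only the achievable accuracy degrades, in line with Theorem 2.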
A Monte Carlo trial is declared a success if the estimate ˆx obeys dist(ˆx, x) / ∥x∥ ≤ 10⁻⁵. Fig. 5(a) and 5(b) illustrate the empirical success rates of TWF (averaged over 100 runs for each m) for noiseless data, indicating that m ≥ 5n and m ≥ 4.5n are often sufficient under real and complex Gaussian designs, respectively. For the sake of comparison, we simulate the empirical success rates of WF, with the step size µ_t = min{1 − e^{−t/330}, 0.2} as recommended by [13]. As shown in Fig. 5, TWF outperforms WF under random Gaussian features, implying that TWF exhibits either a better convergence rate or enhanced phase transition behavior.

Next, we empirically evaluate the stability of TWF under noisy data. We set n = 1000, produce a_i ∼ N(0, I) independently, and generate y_i according to the Poisson model (3). Fig. 5(c) shows the relative mean-square error, on the dB scale, with varying SNR (cf. (8)). As can be seen, the empirical relative MSE scales inversely with the SNR, which matches our stability guarantees in Theorem 2 (since on the dB scale, the slope is about −1, as predicted by the theory (16)).

Figure 6: Empirical success rate for mixed regression (p = 0.5), with n = 1000 and m ranging from 5n to 10n.

While this work focuses on the Poisson-type objective for concreteness, the proposed paradigm carries over to a variety of nonconvex objectives, and might have implications for solving other problems that involve latent variables, e.g. matrix completion [23–25], sparse coding [26], dictionary learning [27], and mixture problems (e.g. [28, 29]). We conclude this paper with an example on estimating mixtures of linear regressions. Imagine

y_i ≈ a_i^⊤ β_1 with probability p, and y_i ≈ a_i^⊤ β_2 otherwise, 1 ≤ i ≤ m,   (27)

where β_1, β_2 are unknown.
It has been shown in [3] that in the noiseless case, the ground truth satisfies

f_i(β_1, β_2) := y_i² + 0.5 · a_i^⊤ (β_1 β_2^⊤ + β_2 β_1^⊤) a_i − a_i^⊤ (β_1 + β_2) · y_i = 0, 1 ≤ i ≤ m,

which forms a set of quadratic constraints (in particular, if one further knows β_1 = −β_2, then this reduces to the form (1)). Running TWF with the nonconvex objective ∑_{i=1}^m f_i²(z_1, z_2) (with the assistance of the 1-D grid search proposed in [29], applied right after truncated initialization) yields accurate estimates of β_1, β_2 under minimal sample complexity, as illustrated in Fig. 6.

Acknowledgments

E. C. is partially supported by NSF under grant CCF-0963835 and by the Math + X Award from the Simons Foundation. Y. C. is supported by the same NSF grant.

References

[1] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization, volume 2. 2001.
[2] J. R. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21:2758–2769, 1982.
[3] Y. Chen, X. Yi, and C. Caramanis. A convex formulation for mixed regression with two components: Minimax optimal rates. In Conference on Learning Theory (COLT), 2014.
[4] E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
[5] I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1–2):47–81, 2015.
[6] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing. Optics Express, 19(16), 2011.
[7] E. J. Candès and X. Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. Foundations of Computational Math., 14(5):1017–1026, 2014.
[8] H. Ohlsson, A. Yang, R. Dong, and S. Sastry. CPRL: an extension of compressive sensing to the phase retrieval problem.
In Advances in Neural Information Processing Systems (NIPS), 2012. [9] Y. Chen, Y. Chi, and A. J. Goldsmith. Exact and stable covariance estimation from quadratic sampling via convex programming. IEEE Trans. on Inf. Theory, 61(7):4034–4059, 2015. [10] T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. Annals of Stats. [11] K. Jaganathan, S. Oymak, and B. Hassibi. Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. In IEEE ISIT, pages 1473–1477, 2012. [12] D. Gross, F. Krahmer, and R. Kueng. A partial derandomization of phaselift using spherical designs. Journal of Fourier Analysis and Applications, 21(2):229–266, 2015. [13] E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, April 2015. [14] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. NIPS, 2013. [15] P. Schniter and S. Rangan. Compressive phase retrieval via generalized approximate message passing. IEEE Transactions on Signal Processing, 63(4):1043–1055, Feb 2015. [16] A. Repetti, E. Chouzenoux, and J.-C. Pesquet. A nonconvex regularized approach for phase retrieval. International Conference on Image Processing, pages 1753–1757, 2014. [17] K. Wei. Phase retrieval via Kaczmarz methods. arXiv:1502.01822, 2015. [18] C. White, R. Ward, and S. Sanghavi. The local convexity of solving quadratic equations. arXiv:1506.07868, 2015. [19] Y. Shechtman, A. Beck, and Y. C. Eldar. GESPAR: Efficient phase retrieval of sparse signals. IEEE Transactions on Signal Processing, 62(4):928–938, 2014. [20] M. Soltanolkotabi. Algorithms and Theory for Clustering and Nonconvex Quadratic Programming. PhD thesis, Stanford University, 2014. [21] L. N. Trefethen and D. Bau III. Numerical linear algebra, volume 50. SIAM, 1997. [22] E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval from coded diffraction patterns. 
to appear in Applied and Computational Harmonic Analysis, 2014. [23] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010. [24] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In ACM Symposium on Theory of Computing, pages 665–674, 2013. [25] R. Sun and Z. Luo. Guaranteed matrix completion via nonconvex factorization. FOCS, 2015. [26] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. Conference on Learning Theory (COLT), 2015. [27] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere. ICML, 2015. [28] S. Balakrishnan, M. J. Wainwright, and B. Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014. [29] X. Yi, C. Caramanis, and S. Sanghavi. Alternating minimization for mixed linear regression. International Conference on Machine Learning, June 2014.
Statistical Topological Data Analysis – A Kernel Perspective Roland Kwitt Department of Computer Science University of Salzburg rkwitt@gmx.at Stefan Huber IST Austria stefan.huber@ist.ac.at Marc Niethammer Department of Computer Science and BRIC UNC Chapel Hill mn@cs.unc.edu Weili Lin Department of Radiology and BRIC UNC Chapel Hill weili_lin@med.unc.edu Ulrich Bauer Department of Mathematics Technische Universität München (TUM) ulrich@bauer.org Abstract We consider the problem of statistical computations with persistence diagrams, a summary representation of topological features in data. These diagrams encode persistent homology, a widely used invariant in topological data analysis. While several avenues towards a statistical treatment of the diagrams have been explored recently, we follow an alternative route that is motivated by the success of methods based on the embedding of probability measures into reproducing kernel Hilbert spaces. In fact, a positive definite kernel on persistence diagrams has recently been proposed, connecting persistent homology to popular kernel-based learning techniques such as support vector machines. However, important properties of that kernel enabling a principled use in the context of probability measure embeddings remain to be explored. Our contribution is to close this gap by proving universality of a variant of the original kernel, and to demonstrate its effective use in twosample hypothesis testing on synthetic as well as real-world data. 1 Introduction Over the past years, advances in adopting methods from algebraic topology to study the “shape” of data (e.g., point clouds, images, shapes) have given birth to the field of topological data analysis (TDA) [5]. In particular, persistent homology has been widely established as a tool for capturing “relevant” topological features at multiple scales. 
The output is a summary representation in the form of so-called barcodes or persistence diagrams, which, roughly speaking, encode the life span of the features. These “topological summaries” have been successfully used in a variety of different fields, including, but not limited to, computer vision and medical imaging. Applications range from the analysis of cortical surface thickness [8] to the structure of brain networks [15], brain artery trees [2] or histology images for breast cancer analysis [22]. Despite the success of TDA in these areas, a statistical treatment of persistence diagrams (e.g., computing means or variances) turns out to be difficult, not least because of the unusual structure of the barcodes as intervals, rather than numerical quantities [1]. While substantial advancements in the direction of statistical TDA have been made by studying the structure of the space of persistence diagrams endowed with p-Wasserstein metrics (or variants thereof) [18, 19, 28, 11], it is technically and computationally challenging to work in this space. In a machine learning context, we would rather work with Hilbert spaces, primarily due to the highly regular structure and the abundance of readily available and well-studied methods for statistics and learning. One way to circumvent issues such as non-uniqueness of the Fréchet mean [18] or computationally intensive algorithmic strategies [28] is to consider mappings of persistence barcodes into linear function spaces. Statistical computations can then be performed based on probability theory on Banach spaces [14]. However, the methods proposed in [4] cannot guarantee that different probability distributions can always be distinguished by a statistical test. Contribution. In this work, we consider the task of statistical computations with persistence diagrams.
Our contribution is to approach this problem by leveraging the theory of embedding probability measures into reproducing kernel Hilbert spaces [23], in our case, probability measures on the space of persistence diagrams. In particular, we start with a recently introduced kernel on persistence diagrams by Reininghaus et al. [20] and identify missing properties that are essential for a well-founded use in the aforementioned framework. By enforcing mild restrictions on the underlying space, we can in fact close the remaining gaps and prove that a minor modification of the kernel is universal in the sense of Steinwart [25] (see Section 3). Our experiments demonstrate, on a couple of synthetic and real-world data samples, how this universal kernel enables a principled solution to the selected problem of (kernel-based) two-sample hypothesis testing. Related work. In the following, we focus our attention on work related to a statistical treatment of persistent homology. Since this is a rather new field, several avenues are pursued in parallel. Mileyko et al. [18] study properties of the set of persistence diagrams when endowed with the p-Wasserstein metric. They show, for instance, that under this metric, the space is Polish and the Fréchet mean exists. However, it is not unique and no algorithmic solution is provided. Turner et al. [28] later show that the L2-Wasserstein metric on the set of persistence diagrams yields a geodesic space, and that the additional structure can be leveraged to construct an algorithm for computing the Fréchet mean and to prove a law of large numbers. In [19], Munch et al. take a different approach and introduce a probabilistic variant of the Fréchet mean as a probability measure on persistence diagrams. While this yields a unique mean, the solution itself is not a persistence diagram anymore. Techniques for computing confidence sets for persistence diagrams are investigated by Fasy et al. [11]. 
The authors focus on the Bottleneck metric (i.e., a special case of the p-Wasserstein metric when p = ∞), remarking that similar results could potentially be obtained for the case of the p-Wasserstein metric under stronger assumptions on the underlying topological space. While the aforementioned results concern properties of the set of persistence diagrams equipped with p-Wasserstein metrics, a different strategy is advocated by Bubenik in [4]. The key idea is to circumvent the peculiarities of the metric by mapping persistence diagrams into function spaces. One such representation is the persistence landscape, i.e., a sequence of 1-Lipschitz functions in a Banach space. While it is in general not possible to go back and forth between landscapes and persistence diagrams, the Banach space structure enables a well-founded theoretical treatment of statistical concepts, such as averages or confidence intervals [14]. Chazal et al. [6] establish additional convergence results and propose a bootstrap procedure for obtaining confidence sets. Another, less statistically oriented, approach towards a convenient summary of persistence barcodes is followed by Adcock et al. [1]. The idea is to attach numerical quantities to persistence barcodes, which can then be used as input to any machine learning algorithm in the form of feature vectors. This strategy is rooted in a study of algebraic functions on barcodes. However, it does not necessarily guarantee stability of the persistence summary representation, which is typically a desired property of a feature map [20]. Our proposed approach to statistical TDA is also closely related to work in the field of kernel-based learning techniques [21] or, to be more specific, to the embedding of probability measures into a RKHS [23] and the study of suitable kernel functions in that context [7, 24]. 
In fact, the idea of mapping probability measures into a RKHS has led to many developments generalizing statistical concepts, such as two-sample testing [13], testing for conditional independence, or statistical inference [12], from Euclidean spaces to other domains equipped with a kernel. In the context of supervised learning with TDA, Reininghaus et al. [20] recently established a first connection to kernel-based learning techniques via the definition of a positive definite kernel on persistence diagrams. While positive definiteness is sufficient for many techniques, such as support vector machines or kernel PCA, additional properties are required in the context of embedding probability measures. Organization. Section 2 briefly reviews some background material and introduces some notation. In Section 3, we show how a slight modification of the kernel in [20] fits into the framework of embedding probability measures into a RKHS. Section 4 presents a set of experiments on synthetic and real data, highlighting the advantages of the kernel. Finally, Section 5 summarizes the main contributions and discusses future directions. 2 Background Since our discussion of statistical TDA from a kernel perspective is largely decoupled from how the topological summaries are obtained, we only review two important notions for the theory of persistent homology: filtrations and persistence diagrams. For a thorough treatment of the topic, we refer the reader to [10]. We also briefly review the concept of embedding probability measures into a RKHS, following [23]. Filtrations. A standard approach to TDA assigns to some metric space (M, d_M) a growing sequence of simplicial complexes (indexed by a parameter t ∈ R), typically referred to as a filtration. Recall that an abstract simplicial complex is a collection of nonempty sets that is closed under taking nonempty subsets. Persistent homology then studies the evolution of the homology of these complexes for a growing parameter t.
Some widely used constructions, particularly for point cloud data, are the Vietoris–Rips and the Čech complex. The Vietoris–Rips complex is a simplicial complex with vertex set M such that [x0, . . . , xm] is an m-simplex iff max_{i,j≤m} d_M(x_i, x_j) ≤ t. For a point set M ⊂ R^d in Euclidean space, the Čech complex is a simplicial complex with vertex set M ⊂ R^d such that [x0, . . . , xm] is an m-simplex iff the closed balls of radius t centered at the x_i have a non-empty common intersection. A more general way of obtaining a filtration is to consider the sublevel sets f^{−1}((−∞, t]), for t ∈ R, of a function f : X → R on a topological space X. For instance, in the case of surface meshes, a commonly used function is the heat kernel signature (HKS) [27]. The Čech and Vietoris–Rips filtrations appear as special cases, both being sublevel set filtrations of an appropriate function on the subsets (abstract simplices) of the vertex set M: for the Čech filtration, the function assigns to each subset the radius of its smallest enclosing sphere, while for the Vietoris–Rips filtration, the function assigns to each subset its diameter (equivalently, the length of its longest edge). Persistence diagrams. Studying the evolution of the topology of a filtration allows us to capture interesting properties of the metric or function used to generate the filtration. Persistence diagrams provide a concise description of the changes in homology that occur during this process.
Fig. 1: A function f : R → R and its 0-th persistence diagram (axes: birth and death).
Existing connected components may merge, cycles may appear, etc. This leads to the appearance and disappearance of homological features of different dimension. Persistent homology tracks the birth b and death d of such topological features. The multiset of points p, where each point p = (b, d) corresponds to a birth/death time pair, is called the persistence diagram of the filtration.
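The Vietoris–Rips rule above (a simplex enters at parameter t once all its pairwise distances are at most t) admits a direct, if naive, implementation; a minimal sketch for simplices up to dimension 2, with function and variable names of our own choosing:

```python
import numpy as np
from itertools import combinations

def vietoris_rips(points, t, max_dim=2):
    """All simplices [x0, ..., xk] (k <= max_dim) whose pairwise
    distances are all <= t, returned as index tuples."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    simplices = [(i,) for i in range(n)]          # vertices always present
    for k in range(2, max_dim + 2):               # edges, triangles, ...
        for s in combinations(range(n), k):
            if all(d[i, j] <= t for i, j in combinations(s, 2)):
                simplices.append(s)
    return simplices

# unit equilateral triangle: all edges (and the 2-simplex) appear at t = 1
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 3**0.5 / 2)]
complex_t1 = vietoris_rips(tri, 1.0)
```

For real data one would use a dedicated library, but the brute-force version makes the filtration parameter t explicit: raising t only ever adds simplices, which is what makes the sequence a filtration.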
An example of a persistence diagram for 0-dimensional features (i.e., connected components) of a function f : X → R with X = R is shown in Fig. 1. We use the identifiers F, G to denote persistence diagrams in the remainder of the paper. Since d > b, all points lie in the half-plane above the diagonal. RKHS embedding of probability measures. An important concept for our work is the embedding of probability measures into reproducing kernel Hilbert spaces [23]. Consider a Borel probability measure P defined on a compact metric space (X, d), which we observe through the i.i.d. sample X = {x_i}_{i=1}^m with x_i ∼ P. Furthermore, let k : X × X → R be a positive definite kernel, i.e., a function which realizes an inner product k(x, y) = ⟨φ(x), φ(y)⟩_G with x, y ∈ X in some Hilbert space G for some (possibly unknown) map φ : X → G (see [26, Definition 4.1]). Also, let H be the associated RKHS, generated by the functions k_x = k(x, ·) : X → R induced by the kernel, i.e., H = span{k_x : x ∈ X} = span{⟨φ(x), φ(·)⟩_G : x ∈ X}, with the scalar product ⟨k_x, k_y⟩_H = k(x, y). The linear structure on the RKHS H admits the construction of means. The embedding of a probability measure P on X is now accomplished via the mean map µ : P ↦ µ_P = E_{x∼P}[k_x]. If this map is injective, the kernel k is called characteristic. This is true, in particular, if H is dense in the space of continuous functions X → R (with the supremum norm), in which case we refer to the kernel as universal [25]. While a universal kernel is always characteristic, the converse is not true. Since it has been shown [13] that the empirical estimate of the mean, µ_X = (1/m) Σ_i k_{x_i}, is a good proxy for µ_P, the injectivity of µ can be used to define distances between distributions P and Q, observed via samples X = {x_i}_{i=1}^m and Y = {y_i}_{i=1}^n.
Specifically, this can be done via the maximum mean discrepancy MMD[F, P, Q] = sup_{f∈F} (E_{x∼P}[f(x)] − E_{y∼Q}[f(y)]), (1) where F denotes a suitable class of functions X → R, and E_{x∼P}[f(x)] denotes the expectation of f(x) w.r.t. P (which can be written as ⟨µ_P, f⟩ by virtue of the reproducing property of k). Gretton et al. [13] restrict F to functions on a unit ball in H, i.e., F = {f ∈ H : ∥f∥_H ≤ 1}, and show that Eq. (1) can be expressed as the RKHS distance between the means µ_P and µ_Q of the measures P and Q as MMD²[F, P, Q] = ∥µ_P − µ_Q∥²_H. Empirical estimates of this quantity are given in [13]. This connection is of particular importance to us, since it allows for two-sample hypothesis testing in a principled manner given a suitable (characteristic/universal) kernel. Prominent examples of universal kernels for X = R^d are the Gaussian RBF kernel k(x, y) = e^{−γ∥x−y∥²} and the kernel e^{⟨x,y⟩}. However, without a characteristic/universal kernel, MMD[F, P, Q] = 0 does not imply P = Q. A well-known example of a non-characteristic kernel is the scalar product kernel k(x, y) = ⟨x, y⟩ with x, y ∈ R^d. Even if P ≠ Q, e.g., if the variances of the distributions differ, the MMD will still be zero if the means are equal. In the context of a statistical treatment of persistent homology, the ability to embed probability measures on the space of persistence diagrams into a RKHS is appealing. Specifically, the problem of testing whether two different samples exhibit significantly different homological features – as captured in the persistence diagram – boils down to a two-sample test with null hypothesis H0 : µ_P = µ_Q vs. a general alternative HA : µ_P ≠ µ_Q, where P and Q are probability measures on the set of persistence diagrams. The computation of this test only involves evaluations of the kernel. Enabling this procedure via a suitable universal kernel will be discussed next.
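The empirical estimate of MMD²[F, P, Q] from [13] only involves averages of Gram-matrix entries. A minimal sketch with a Gaussian RBF kernel (the biased estimator; names are ours):

```python
import numpy as np

def gaussian_gram(X, Y, gamma=1.0):
    """Gram matrix of the universal Gaussian RBF kernel exp(-gamma ||x-y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of ||mu_P - mu_Q||_H^2."""
    return (gaussian_gram(X, X, gamma).mean()
            + gaussian_gram(Y, Y, gamma).mean()
            - 2.0 * gaussian_gram(X, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Y = rng.standard_normal((500, 2))          # same distribution as X
Z = rng.standard_normal((500, 2)) + 2.0    # shifted distribution
```

With a characteristic kernel, mmd2(X, Y) stays near zero for samples from the same distribution while mmd2(X, Z) is clearly positive; with the (non-characteristic) linear kernel, the shifted-variance example from the text would yield zero despite P ≠ Q.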
3 The universal persistence scale space kernel In the following, for 1 ≤ q ≤ ∞, we let D_q = {F | d_{W,q}(F, ∅) < ∞} denote the metric space of persistence diagrams with the q-Wasserstein metric d_{W,q}, where ∅ is the empty diagram. (The q-Wasserstein metric is defined as d_{W,q}(F, G) = inf_γ (Σ_{x∈F} ∥x − γ(x)∥_∞^q)^{1/q}, where γ ranges over all bijections from F ∪ D to G ∪ D, with D denoting the multiset of diagonal points (t, t), each with countably infinite multiplicity.) In [18, Theorem 1], Mileyko et al. show that (D_q, d_{W,q}) is a complete metric space. When the subscript q is omitted, we do not refer to any specific instance of the q-Wasserstein metric. Let us fix numbers N ∈ N and R ∈ R. We denote by S the subset of D consisting of those persistence diagrams that are birth-death bounded by R (i.e., for every D ∈ S the birth/death times of its points are less than or equal to R; see [18, Definition 5]) and whose total multiplicities (i.e., the sum of multiplicities of all points in a diagram) are bounded by N. While this might appear restrictive at first sight, it does not really pose a limitation in practice. In fact, for data generated by some finite process (e.g., meshes have a finite number of vertices/faces, images have limited resolution, etc.), establishing N and R is typically not a problem. We remark that the aforementioned restriction is similar to enforcing boundedness of the support of persistence landscapes in [4, Section 3.6]. In [20], Reininghaus et al. introduce the persistence scale space (PSS) kernel as a stable, multi-scale kernel on the set D of persistence diagrams of finite total multiplicity, i.e., each diagram contains only finitely many points. Let p = (b, d) denote a point in a diagram F ∈ D, and let p̄ = (d, b) denote its mirror image across the diagonal. Further, let Ω = {x = (x1, x2) ∈ R², x2 ≥ x1}. The feature map Φσ : D → L2(Ω) is given as the solution of a heat diffusion problem with a Dirichlet boundary condition on the diagonal by
Φσ(F) : Ω → R,  x ↦ (1/(4πσ)) Σ_{p∈F} [ e^{−∥x−p∥²/(4σ)} − e^{−∥x−p̄∥²/(4σ)} ].  (2) The kernel kσ : D × D → R is then given in closed form as kσ(F, G) = ⟨Φσ(F), Φσ(G)⟩_{L2(Ω)} = (1/(8πσ)) Σ_{p∈F, q∈G} [ e^{−∥p−q∥²/(8σ)} − e^{−∥p−q̄∥²/(8σ)} ]  (3) for σ > 0 and F, G ∈ D. By construction, positive definiteness of kσ is guaranteed. The kernel is stable in the sense that the distance dσ(F, G) = (kσ(F, F) + kσ(G, G) − 2kσ(F, G))^{1/2} is bounded, up to a constant, by d_{W,1}(F, G) [20, Theorem 2]. We have the following property: Proposition 1. Restricting the kernel in Eq. (3) to S × S, the mean map µ sends a probability measure P on S to an element µ_P ∈ H. Proof. The claim immediately follows from [13, Lemma 3] and [24, Proposition 2], since kσ is measurable and bounded on S, and hence µ_P ∈ H. □ While positive definiteness enables the use of kσ in many kernel-based learning techniques [21], we are interested in assessing whether it is universal, or whether we can construct a universal kernel from kσ (see Section 2). The following theorem of Christmann and Steinwart [7] is particularly relevant to this question. Theorem 1. (cf. Theorem 2.2 of [7]) Let X be a compact metric space and G a separable Hilbert space such that there exists a continuous and injective map Φ : X → G. Furthermore, let K : R → R be a function that is analytic on some neighborhood of 0, i.e., it can locally be expressed by its Taylor series K(t) = Σ_{n=0}^∞ a_n t^n, t ∈ [−r, r]. If a_n > 0 for all n ∈ N₀, then k : X × X → R, k(x, y) = K(⟨Φ(x), Φ(y)⟩_G) = Σ_{n=0}^∞ a_n ⟨Φ(x), Φ(y)⟩_G^n  (4) is a universal kernel. Kernels of the form of Eq. (4) are typically referred to as Taylor kernels. Note that universality of a kernel on X refers to a specific choice of metric on X. By the same argument as for the linear dot-product kernel in R^d (see above), the PSS kernel kσ cannot be universal with respect to the metric d_{kσ}, which is induced by the scalar product defining kσ. On the other hand, it is unclear whether kσ is universal with respect to the metric d_{W,q}.
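For small diagrams, Eqs. (2) and (3) can be evaluated directly. The sketch below (our own naming) implements the feature map and the closed-form kernel, and exercises two properties that follow from the definitions: symmetry of kσ, and additivity of Φσ over diagram points, i.e., Φσ(F ∪ G) = Φσ(F) + Φσ(G):

```python
import numpy as np

def pss_phi(F, X, sigma):
    """Evaluate the feature map Phi_sigma(F) of Eq. (2) at points X in Omega."""
    F, X = np.asarray(F, float), np.asarray(X, float)
    out = np.zeros(len(X))
    for p in F:
        out += (np.exp(-((X - p) ** 2).sum(1) / (4 * sigma))
                - np.exp(-((X - p[::-1]) ** 2).sum(1) / (4 * sigma)))
    return out / (4 * np.pi * sigma)

def pss_kernel(F, G, sigma):
    """Closed-form k_sigma(F, G) of Eq. (3); q[::-1] is the mirrored point."""
    total = 0.0
    for p in np.asarray(F, float):
        for q in np.asarray(G, float):
            total += (np.exp(-((p - q) ** 2).sum() / (8 * sigma))
                      - np.exp(-((p - q[::-1]) ** 2).sum() / (8 * sigma)))
    return total / (8 * np.pi * sigma)

F = [(0.2, 1.0), (0.4, 0.9)]   # toy diagrams as (birth, death) pairs
G = [(0.25, 1.1)]
grid = np.array([(0.1, 0.9), (0.2, 0.7), (0.3, 1.2)])
```

The additivity of Φσ over points is also what makes the mean PSS function in the experiments cheap to compute: averaging the N feature maps equals (1/N)·Φσ of the union of the diagrams.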
However, we do have the following result: Proposition 2. The kernel k_σ^U : S × S → R, k_σ^U(F, G) = exp(kσ(F, G)),  (5) is universal with respect to the metric d_{W,1}. Proof. We prove this proposition by means of Theorem 1. We set G = L2(Ω), which is a separable Hilbert space. As shown in Reininghaus et al. [20], the feature map Φσ : D → L2(Ω) is injective. Furthermore, it is continuous by construction, as the metric on D is induced by the norm on L2(Ω), and so is Φσ restricted to S. The function K : R → R is defined as x ↦ exp(x), and hence is analytic on R. Its Taylor coefficients a_n are 1/n!, and thus are positive for every n. It remains to show that (S, d_{W,1}) is a compact metric space. First, define R = Ω^N ∩ ([−R, R]²)^N, which is a bounded, closed, and therefore compact subspace of (R²)^N. Now consider the function f : R → S that maps (p1, . . . , pN) ∈ R to the persistence diagram {p_i : 1 ≤ i ≤ N, p_i ∉ ∂Ω} ∈ S. We note that for all D = {p1, . . . , pn} ∈ S, with n ≤ N, there exists an X ∈ R, e.g., X = (p1, . . . , pn, 0, . . . , 0), such that f(X) = D; this implies S = f(R). Next, we show that f is 1-Lipschitz continuous w.r.t. the 1-Wasserstein distance on persistence diagrams, i.e., for all X = (p1, . . . , pN), Y = (q1, . . . , qN) ∈ R: d_{W,1}(f(X), f(Y)) ≤ d(X, Y), where we define d as inf_γ Σ_{1≤i≤N} ∥p_i − γ(p_i)∥_∞, with γ ranging over all bijections between {p1, . . . , pN} and {q1, . . . , qN}. In other words, d corresponds to the 1-Wasserstein distance without allowing matches to the diagonal. Now, by definition, d_{W,1}(f(X), f(Y)) ≤ d(X, Y), because all bijections considered by d are also admissible for d_{W,1}. Since R is thus compact and f is continuous, we have that S = f(R) is compact as well. □
Fig. 2: Visualization of the mean PSS function (right), taken over 30 samples from a double-annulus (cf. [19]); the average of Φσ(F1), . . . , Φσ(FN) exhibits two modes corresponding to the two holes.
We refer to the kernel of Eq. (5) as the universal persistence scale-space (u-PSS) kernel. Remark. While we prove Prop.
1 for the PSS kernel in Eq. (3), it obviously also holds for k_σ^U, since exponentiation invalidates neither measurability nor boundedness. Relation to persistence landscapes. As the feature map Φσ of Eq. (2) defines a function-valued summary of persistent homology in the Hilbert space L2(Ω), the results on probability in Banach spaces [14], used in [4] for persistence landscapes, naturally apply to Φσ as well. This includes, for instance, the law of large numbers and the central limit theorem [4, Theorems 9, 10]. Conversely, considering a persistence landscape λ(D) as a function in L2(N × R) or L2(R²) yields a positive definite kernel ⟨λ(·), λ(·)⟩_{L2} on persistence diagrams. However, it is unclear whether a universal kernel can be constructed from persistence landscapes in a way similar to the definition of k_σ^U. In particular, we are not aware of a proof that the construction of persistence landscapes, considered as functions in L2, is continuous with respect to d_{W,q} for some 1 ≤ q ≤ ∞. For a more detailed treatment of the differences between Φσ and persistence landscapes, we refer the reader to [20]. 4 Experiments We first describe a set of experiments on synthetic data appearing in previous work to illustrate the use of the PSS feature map Φσ and the universal persistence scale-space kernel on two different tasks. We then present two applications on real-world data, where we assess differences in the persistent homology of functions on 3D surfaces of lateral ventricles and corpora callosa with respect to different group assignments (i.e., age, demented/non-demented). In all experiments, the filtrations and persistence diagrams are obtained using Dipha², which can directly handle our types of input data. Source code to reproduce the experiments is available at https://goo.gl/KouBPT. 4.1 Synthetic data Computation of the mean PSS function. We repeat the experiment from [19, 4] of sampling from the union of two overlapping annuli.
In particular, we repeatedly (N = 30 times) draw samples of size 100 (out of 10000), and then compute persistence diagrams F1, . . . , FN for 1-dimensional features by considering sublevel sets of the distance function from the points. Finally, we compute the mean of the PSS functions Φσ(F_i) defined by the feature map from Eq. (2). This simply amounts to computing (1/N) · Φσ(F1 ∪ · · · ∪ FN). A visualization of the pointwise average, for a fixed choice of σ, is shown in Fig. 2. We remind the reader that the convergence results used in [4] equally hold for this feature map, as explained in Section 3. In particular, the above process of taking means converges to the expected value of the PSS function. As can be seen in Fig. 2, the two 1-dimensional holes manifest themselves as two “bumps” at different positions in the mean PSS function.
²Available online: https://code.google.com/p/dipha/
Fig. 3: Left: Illustration of one random sample (of size 200) on a sphere and a torus in R³ with equal surface area (torus: (r − 2)² + z² = 1; sphere: r² = 2π). To generate a noisy sample, we add Gaussian noise N(0, 0.1) to each point in a sample. Right: Two-sample hypothesis testing results (H0 : P = Q vs. HA : P ≠ Q) for 0- and 1-dimensional features, with and without noise. The box plots show the variation in p-values (y-axis) over a selection of values for σ as a function of increasing sample size (x-axis, 10 to 100). Sample sizes for which the median p-value is less than the chosen significance level (here: 0.05) are marked green, and red otherwise.
Torus vs. sphere.
In this slightly more involved example, we repeat an experiment from [4, Section 4.3] on the problem of discriminating between a sphere and a torus in R³, based on random samples drawn from both objects. In particular, we repeatedly (N times) draw samples from the torus and the sphere (corresponding to measures P and Q) and then compute persistence diagrams. Eventually, we test the null hypothesis H0 : P = Q, i.e., that the samples were drawn from the same object; cf. [4] for a thorough description of the full setup. We remark that our setup uses the Delaunay triangulation of the point samples instead of the Coxeter–Freudenthal–Kuhn triangulation of a regular grid as in [4]. Conceptually, the important difference is in the two-sample testing strategy. In [4], two factors influence the test: (1) the choice of a functional to map the persistence landscape to a scalar and (2) the choice of test statistic. Bubenik chooses a z-test to test for equality between the mean persistence landscapes. In contrast, we can test for true equality in distribution. This is possible since universality of the kernel ensures that the MMD of Eq. (1) is a metric for the space of probability measures on persistence diagrams. All p-values are obtained by bootstrapping the test statistic under H0 over 10⁴ random permutations. We further vary the number of samples per object used to compute the MMD statistic from N = 10 to N = 100, and add Gaussian noise N(0, 0.1) in one experiment. Results are shown in Fig. 3 over a selection of u-PSS scales σ ∈ {100, 10, 1, 0.1, 0.01, 0.001}. For 0-dimensional features and no noise, we can always reject H0 at the α = 0.05 significance level. For 1-dimensional features and no noise, we need at least 60 samples to reliably reject H0 at the same level. 4.2 Real-world data We use two real-world datasets in our experiments: (1) 3D surfaces of the corpus callosum and (2) 3D surfaces of the lateral ventricles from neonates.
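The permutation bootstrap used for the p-values above is kernel-agnostic: permute the labels of the pooled sample, recompute the (biased) MMD² statistic on the permuted Gram matrix, and report the fraction of permutations at or above the observed value. A minimal sketch with our own naming; the paper applies this with the u-PSS kernel on persistence diagrams, while here a Gaussian kernel on points is used purely to keep the example self-contained:

```python
import numpy as np

def mmd2_from_gram(K, m):
    """Biased MMD^2 from the joint Gram matrix of the pooled sample
    (first m rows/cols = sample X, the rest = sample Y)."""
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def permutation_pvalue(K, m, B=1000, seed=0):
    """Bootstrap the test statistic under H0 : P = Q by permuting labels."""
    rng = np.random.default_rng(seed)
    observed = mmd2_from_gram(K, m)
    n = K.shape[0]
    hits = 0
    for _ in range(B):
        perm = rng.permutation(n)
        hits += mmd2_from_gram(K[np.ix_(perm, perm)], m) >= observed
    return (hits + 1) / (B + 1)   # add-one correction keeps p > 0

rng = np.random.default_rng(1)
pooled = np.r_[rng.standard_normal((40, 2)),
               rng.standard_normal((40, 2)) + 1.5]   # clearly different groups
K = np.exp(-((pooled[:, None] - pooled[None]) ** 2).sum(-1))
pval = permutation_pvalue(K, 40, B=500)
```

Because only the Gram matrix enters, swapping in a persistence-diagram kernel changes nothing about the testing machinery itself.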
The corpus callosum surfaces were obtained from the longitudinal dataset of the OASIS brain database³. We use all subject data from the first visit, and the grouping criterion is disease state: dementia vs. non-dementia. Note that the demented group is comprised of individuals with very mild to mild AD. This discrimination is based on the clinical dementia rating (CDR) score; Marcus et al. [17] explain this dataset in detail. The lateral ventricle dataset is an extended version of [3]. It contains data from 43 neonates. All subjects were repeatedly imaged approximately every 3 months (starting from 2 weeks) in the first year and every 6 months in the second year. According to Bompard et al. [3], ventricle growth is the dominant effect and occurs in a non-uniform manner, most significantly during the first 6 months. This raises the question whether age also has an impact on the shape of these brain structures that can be detected by the persistent homology of the HKS function (see Setup below, or Section 2). Hence, we set our grouping criterion to be developmental age: ≤ 6 months vs. > 6 months. It is important to note that the heat kernel signature is not scale-invariant. For that reason, we normalize the (mean-subtracted) configuration matrices (containing the vertex coordinates of each mesh) by their Euclidean norm, cf. [9]. This ensures that our analysis is not biased by growth (scaling) effects.
³Available online: http://www.oasis-brains.org
Fig. 4: Left: Effect of increasing HKS time t_i, illustrated on one exemplary surface mesh of both datasets. Right: Contour plots of p-values, estimated via random permutations, shown as a function of the u-PSS kernel scale σ and the HKS time. (a) (Right) lateral ventricles; grouping: subjects ≤ 6 months vs. > 6 months. (b) Corpora callosa; grouping: demented vs. non-demented subjects.
Setup.
We follow an experimental setup similar to [16] and [20], and compute the heat kernel signature [27] for various times t_i as a function defined on the 3D surface meshes. In all experiments, we use the proposed u-PSS kernel k_σ^U of Eq. (5) and vary the HKS time t_i in 1 = t1 < t2 < · · · < t20 = 10.5; regarding the u-PSS kernel scale σ_i, we sweep from 10⁻⁹ = σ1 < · · · < σ10 = 10¹. Null (H0) and alternative (HA) hypotheses are defined as in Section 2, with two samples of persistence diagrams {F_i}_{i=1}^m and {G_i}_{i=1}^n. The test statistic under H0 is bootstrapped using B = 5 · 10⁴ random permutations. This is also the setup recommended in [13] for low sample sizes. Results. Figure 4 shows the estimated p-values for both datasets as a function of the u-PSS kernel scale and the HKS time for 1-dimensional features. The false discovery rate is controlled by the Benjamini–Hochberg procedure. On the lateral ventricle data, we observe p-values < 0.01 (for the right ventricles), especially around HKS times t10 to t15, cf. Fig. 4(a). Since the results for left and right lateral ventricles are similar, only the p-value plots for the right lateral ventricle are shown. In general, the results indicate that, at specific settings of t_i, the HKS function captures salient shape features of the surface, which lead to statistically significant differences in the persistent homology. We do, however, point out that there is no clear guideline on how to choose the HKS time. In fact, setting t_i too low might emphasize noise, while setting t_i too high tends to smooth out details, as can be seen in the illustration of the HKS time on the left-hand side of Fig. 4. On the corpus callosum data, cf. Fig. 4(b), no significant differences in the persistent homology of the two groups (again for 1-dimensional features) can be identified, with p-values ranging from 0.1 to 0.9. This does not allow us to reject H0 at any reasonable level.
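The heat kernel signature used in this setup is computed from an eigendecomposition: HKS_t(v) = Σ_i e^{−λ_i t} φ_i(v)², where (λ_i, φ_i) are the eigenpairs of a Laplacian. The paper uses the Laplace–Beltrami operator of a surface mesh [27]; the toy sketch below (our own naming) substitutes a small graph Laplacian to keep it self-contained:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS_t(v) = sum_i exp(-lambda_i * t) * phi_i(v)^2, from the
    eigendecomposition of a symmetric (graph or mesh) Laplacian L."""
    lam, phi = np.linalg.eigh(L)              # lam ascending, phi columns
    return np.stack([(np.exp(-lam * t) * phi ** 2).sum(axis=1) for t in times])

# path graph on 4 vertices as a toy stand-in for a surface mesh
A = np.diag([1.0, 1.0, 1.0], 1)
A = A + A.T                                   # adjacency matrix
L = np.diag(A.sum(1)) - A                     # combinatorial Laplacian
hks = heat_kernel_signature(L, [0.1, 1.0, 10.0])
```

Small times keep all eigenmodes and hence fine detail, while large times are dominated by the low-frequency modes, which mirrors the smoothing effect of large t_i described above.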
5 Discussion With the introduction of a universal kernel for persistence diagrams in Section 3, we enable the use of this topological summary representation in the framework of embedding probability measures into reproducing kernel Hilbert spaces. While our experiments are mainly limited to two-sample hypothesis testing, our kernel allows the use of a wide variety of statistical techniques and learning methods which are situated in that framework. It is important to note that our construction, via Theorem 1, essentially depends on a restriction of the set D to a compact metric space. We remark that similar conditions are required in [4] in order to enable statistical computations, e.g., constraining the support of the persistence landscapes. However, it will be interesting to investigate which properties of the kernel remain valid when lifting these restrictions. From an application point of view, we have shown that we can test for a statistical difference in the distribution of persistence diagrams. This is in contrast to previous work, where hypothesis testing is typically limited to testing for specific properties of the distributions, such as equality in mean. Acknowledgements. This work has been partially supported by the Austrian Science Fund, project no. KLI 00012. We also thank the anonymous reviewers for their valuable comments/suggestions.

References
[1] A. Adcock, E. Carlsson, and G. Carlsson. The ring of algebraic functions on persistence bar codes. arXiv, available at http://arxiv.org/abs/1304.0530, 2013.
[2] P. Bendich, J.S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery trees. arXiv, available at http://arxiv.org/abs/1411.6652, 2014.
[3] L. Bompard, S. Xu, M. Styner, B. Paniagua, M. Ahn, Y. Yuan, V. Jewells, W. Gao, D. Shen, H. Zhu, and W. Lin. Multivariate longitudinal shape analysis of human lateral ventricles during the first twenty-four months of life. PLoS One, 2014.
[4] P. Bubenik.
Statistical topological data analysis using persistence landscapes. JMLR, 16:77–102, 2015.
[5] G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009.
[6] F. Chazal, B.T. Fasy, F. Lecci, A. Rinaldo, and L. Wasserman. Stochastic convergence of persistence landscapes and silhouettes. In SoCG, 2014.
[7] A. Christmann and I. Steinwart. Universal kernels on non-standard input spaces. In NIPS, 2010.
[8] M.K. Chung, P. Bubenik, and P.T. Kim. Persistence diagrams of cortical surface data. In IPMI, 2009.
[9] I.L. Dryden and K.V. Mardia. Statistical shape analysis. Wiley series in probability and statistics. Wiley, 1998.
[10] H. Edelsbrunner and J. Harer. Computational Topology. An Introduction. AMS, 2010.
[11] B. Fasy, F. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, and A. Singh. Confidence sets for persistence diagrams. Ann. Statist., 42(6):2301–2339, 2014.
[12] K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. JMLR, 14:3753–3783, 2013.
[13] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 13:723–773, 2012.
[14] M. Ledoux and M. Talagrand. Probability in Banach spaces. Classics in Mathematics. Springer, 1991.
[15] H. Lee, M.K. Chung, H. Kang, and D.S. Lee. Hole detection in metabolic connectivity of Alzheimer's disease using k-Laplacian. In MICCAI, 2014.
[16] C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In CVPR, 2014.
[17] D.S. Marcus, A.F. Fotenos, J.G. Csernansky, J.C. Morris, and R.L. Buckner. Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. Cognitive Neurosci., 22(12):2677–2684, 2010.
[18] Y. Mileyko, S. Mukherjee, and J. Harer. Probability measures on the space of persistence diagrams. Inverse Probl., 27(12), 2011.
[19] E. Munch, P. Bendich, S. Mukherjee, J. Mattingly, and J. Harer. Probabilistic Fréchet means and statistics on vineyards.
CoRR, 2013. http://arxiv.org/abs/1307.6530.
[20] J. Reininghaus, U. Bauer, S. Huber, and R. Kwitt. A stable multi-scale kernel for topological machine learning. In CVPR, 2015.
[21] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[22] N. Singh, H. D. Couture, J. S. Marron, C. Perou, and M. Niethammer. Topological descriptors of histology images. In MLMI, 2014.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. Hilbert space embedding for distributions. In ALT, 2007.
[24] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[25] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. JMLR, 2:67–93, 2001.
[26] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[27] J. Sun, M. Ovsjanikov, and L. Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In SGP, 2009.
[28] K. Turner, Y. Mileyko, S. Mukherjee, and J. Harer. Fréchet means for distributions of persistence diagrams. Discrete Comput. Geom., 52(1):44–70, 2014.
A Structural Smoothing Framework For Robust Graph-Comparison

Pinar Yanardag, Department of Computer Science, Purdue University, West Lafayette, IN, 47906, USA, ypinar@purdue.edu
S.V.N. Vishwanathan, Department of Computer Science, University of California, Santa Cruz, CA, 95064, USA, vishy@ucsc.edu

Abstract

In this paper, we propose a general smoothing framework for graph kernels by taking structural similarity into account, and apply it to derive smoothed variants of popular graph kernels. Our framework is inspired by state-of-the-art smoothing techniques used in natural language processing (NLP). However, unlike NLP applications that primarily deal with strings, we show how one can apply smoothing to a richer class of inter-dependent sub-structures that naturally arise in graphs. Moreover, we discuss extensions of the Pitman-Yor process that can be adapted to smooth structured objects, thereby leading to novel graph kernels. Our kernels are able to tackle the diagonal dominance problem while respecting the structural similarity between features. Experimental evaluation shows that our kernels not only achieve statistically significant improvements over the unsmoothed variants, but also outperform several other graph kernels in the literature. Our kernels are competitive in terms of runtime, and offer a viable option for practitioners.

1 Introduction

In many applications we are interested in computing similarities between structured objects such as graphs. For instance, one might aim to classify chemical compounds by predicting whether a compound is active in an anti-cancer screen or not. A kernel function, which corresponds to a dot product in a reproducing kernel Hilbert space, offers a flexible way to solve this problem [19]. R-convolution [10] is a framework for computing kernels between discrete objects where the key idea is to recursively decompose structured objects into sub-structures.
Let ⟨·, ·⟩_H denote a dot product in a reproducing kernel Hilbert space, let G represent a graph, and let φ(G) represent a vector of sub-structure frequencies. The kernel between two graphs G and G′ is computed as K(G, G′) = ⟨φ(G), φ(G′)⟩_H. Many existing graph kernels can be viewed as instances of R-convolution kernels. For instance, the graphlet kernel [22] decomposes a graph into graphlets, the Weisfeiler-Lehman Subtree kernel (referred to as Weisfeiler-Lehman for the rest of the paper) [23] decomposes a graph into subtrees, and the shortest-path kernel [1] decomposes a graph into shortest paths. However, R-convolution based graph kernels suffer from a few drawbacks. First, the size of the feature space often grows exponentially. As the size of the space grows, the probability that two graphs will contain similar sub-structures becomes very small. Therefore, a graph becomes similar to itself but not to any other graph in the training data. This is well known as the diagonal dominance problem [11], where the resulting kernel matrix is close to the identity matrix. Second, lower-order sub-structures tend to be more numerous, while a vast majority of the sub-structures occur rarely. In other words, a few sub-structures dominate the distribution. This exhibits a strong power-law behavior and results in underestimation of the true distribution. Third, the sub-structures used to define a graph kernel are often related to each other. However, an R-convolution kernel only respects exact matchings. This problem is particularly important when noise is present in the training data, and considering partial similarity between sub-structures might alleviate the noise problem.

Figure 1: Graphlets of size k ≤ 5.

Our solution: In this paper, we propose to tackle the above problems by using a general framework to smooth graph kernels that are defined using a frequency vector of decomposed structures.
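All of the frequency-vector kernels this framework targets reduce to a sparse dot product over sub-structure counts. A minimal sketch, with our own toy representation (sub-structures keyed by strings in a `Counter`), not the paper's implementation:

```python
from collections import Counter

def r_convolution_kernel(counts_g, counts_g2):
    """K(G, G') = <phi(G), phi(G')>: dot product of sparse sub-structure
    frequency vectors, represented as Counters keyed by sub-structure id."""
    # Iterate over the smaller feature set for efficiency.
    small, large = sorted((counts_g, counts_g2), key=len)
    return sum(c * large.get(s, 0) for s, c in small.items())

# Toy example: "sub-structures" are edge labels of two small graphs.
g1 = Counter({"A-B": 2, "B-C": 1})
g2 = Counter({"A-B": 1, "C-D": 3})
```

Here `r_convolution_kernel(g1, g2)` only rewards exactly matching sub-structures, which is precisely the limitation (no partial similarity) that the smoothing framework below addresses.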
We use structure information by encoding relationships between lower- and higher-order sub-structures in order to derive our method. The remainder of this paper is structured as follows. In Section 2, we review three families of graph kernels for which our smoothing is applicable. In Section 3, we review smoothing methods for multinomial distributions. In Section 4, we introduce a framework for smoothing structured objects. In Section 5, we propose a Bayesian variant of our model that is extended from the hierarchical Pitman-Yor process [25]. In Section 6, we discuss related work. In Section 7, we compare smoothed graph kernels to their unsmoothed variants as well as to other state-of-the-art graph kernels. We report results on classification accuracy on several benchmark datasets as well as their noisy variants. Section 8 concludes the paper.

2 Graph kernels

Existing graph kernels based on R-convolution can be categorized into three major families: graph kernels based on limited-sized subgraphs [e.g. 22], graph kernels based on subtree patterns [e.g. 18, 21], and graph kernels based on walks [e.g. 27] or paths [e.g. 1]. Graph kernels based on subgraphs: A graphlet G [17] is a non-isomorphic sub-graph of size k (see Figure 1). Given two graphs G and G′, the kernel [22] is defined as K_GK(G, G′) = ⟨f^G, f^G′⟩, where f^G and f^G′ are vectors of normalized counts of graphlets; that is, the i-th component of f^G (resp. f^G′) denotes the frequency of graphlet G_i occurring as a sub-graph of G (resp. G′). Graph kernels based on subtree patterns: Weisfeiler-Lehman [21] is a popular instance of graph kernels that decompose a graph into its subtree patterns. It simply iterates over each vertex in a graph, and compresses the label of the vertex and the labels of its neighbors into a multiset label. The vertex is then relabeled with the compressed label to be used for the next iteration.
The algorithm concludes after running for h iterations, and the compressed labels are used for constructing a frequency vector for each graph. Formally, given G and G′, this kernel is defined as K_WL(G, G′) = ⟨l^G, l^G′⟩, where l^G contains the frequency of each compressed label occurring in h iterations. Graph kernels based on walks or paths: The shortest-path graph kernel [1] is a popular instance of this family. This kernel simply compares the sorted endpoints and the length of the shortest paths that are common between two graphs. Formally, let P_G represent the set of all shortest paths in graph G, and let p_i ∈ P_G denote a triplet (l_s, l_e, n_k), where n_k is the length of the path and l_s and l_e are the labels of the source and sink vertices, respectively. The kernel between graphs G and G′ is defined as K_SP(G, G′) = ⟨p^G, p^G′⟩, where the i-th component of p^G (resp. p^G′) contains the frequency of the i-th triplet occurring in graph G (resp. G′).

3 Smoothing multinomial distributions

In this section, we briefly review smoothing techniques for multinomial distributions. Let e_1, e_2, . . . , e_m represent a sequence of m discrete events drawn from a ground set A = {1, 2, . . . , V}.

Figure 2: Topologically sorted graphlet DAG for k ≤ 5, where nodes are colored based on degree.

Suppose we would like to estimate the probability P(e_i = a) for some a ∈ A. It is well known that the maximum likelihood estimate (MLE) can be computed as P_MLE(e_i = a) = c_a / m, where c_a denotes the number of times the event a appears in the observed sequence and m = Σ_j c_j denotes the total number of observed events. However, the MLE of the multinomial distribution is spiky, since it assigns zero probability to events that did not occur in the observed sequence. In other words, an event with low probability is often estimated to have zero probability mass.
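The zero-mass problem of the plain MLE is easy to see on a toy event sequence. A minimal sketch, with our own function name:

```python
from collections import Counter

def mle(observed, ground_set):
    """Maximum likelihood estimate of a multinomial over `ground_set`
    from an observed event sequence: P_MLE(e_i = a) = c_a / m."""
    counts, m = Counter(observed), len(observed)
    return {a: counts[a] / m for a in ground_set}

# Event "c" is in the ground set but was never observed.
P = mle(["a", "a", "b"], ground_set=["a", "b", "c"])
```

The unseen event `"c"` receives exactly zero probability, which is the spike the smoothing methods reviewed next are designed to remove.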
The general idea behind smoothing is to adjust the MLE of the probabilities by pushing high probabilities downwards and low or zero probabilities upwards in order to produce a more accurate distribution on the events [30]. Interpolated smoothing methods offer a flexible solution between the higher-order maximum likelihood model and a lower-order smoothed model (the so-called fallback model). The way the fallback model is designed is the key to defining a new smoothing method.¹ Absolute discounting [15] and interpolated Kneser-Ney [12] are two popular instances of interpolated smoothing methods:

P_A(e_i = a) = max{c_a − d, 0} / m + (m_d · d / m) · P′_A(e_i = a).   (1)

Here, d > 0 is a discount factor, m_d := |{a : c_a > d}| is the number of events whose counts are larger than d, and P′_A is the fallback distribution. Absolute discounting defines the fallback distribution as the smoothed version of the lower-order MLE, while Kneser-Ney uses an unusual estimate of the fallback distribution based on the number of different contexts that the event follows in the lower-order model.

4 Smoothing structured objects

In this section, we first propose a new interpolated smoothing framework that is applicable to a richer set of objects, such as graphs, by using a directed acyclic graph (DAG). We then discuss how to design such DAGs for various graph kernels.

4.1 Structural smoothing

The key to designing a new smoothing method is to define a fallback distribution which not only incorporates domain knowledge but is also easy to estimate recursively. Suppose we have access to a weighted DAG where every node at the k-th level represents an event from the ground set A. Moreover, let w_ij denote the weight of the edge connecting event i to event j, and let P_a (resp. C_a) denote the parents (resp. children) of event a ∈ A in the DAG. We define our structural smoothing for events at level k as follows:

P^k_SS(e_i = a) = max{c_a − d, 0} / m + (m_d · d / m) · Σ_{j ∈ P_a} P^{k−1}_SS(j) · w_ja / Σ_{a′ ∈ C_j} w_ja′.
(2)

The way to understand the above equation is as follows: we subtract a fixed discounting factor d from every observed event, which accumulates to a total mass of m_d · d. Each event a receives some portion of this accumulated probability mass from its parents. The proportion of the mass that a parent j at level k − 1 transmits to a given child a depends on the weight w_ja between the parent and the child (normalized by the sum of the weights of the edges from j to all its children) and on the probability mass P^{k−1}_SS(j) assigned to node j. In other words, the portion of the total discounted mass a child event a is able to obtain depends on how authoritative its parents are, and on how strong the relationship between the child and its parents is.

¹ See Table 2 in [3] for a summary of various smoothing algorithms using this general framework.

4.2 Designing the DAG

In order to construct a DAG for smoothing structured objects, we first construct a vocabulary V that denotes the set of all unique sub-structures that are going to be smoothed. Each item in the vocabulary V corresponds to a node in the DAG. V can be generated statically or dynamically, based on the type of sub-structure the graph kernel exploits. For instance, it requires a one-time O(2^k) effort to generate the vocabulary of size-≤k graphlets for the graphlet kernel. However, one needs to build the vocabulary dynamically for the Weisfeiler-Lehman and shortest-path kernels, since the sub-structures depend on the node labels observed in the datasets. After constructing the vocabulary V, the parent/child relationship between sub-structures needs to be obtained. Given a sub-structure s of size k, we apply a transformation to find all possible sub-structures of size k − 1 that s can be reduced to. Each sub-structure s′ obtained by this transformation is assigned as a parent of s.
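Before turning to kernel-specific DAGs, the recursions of Eqs. (1) and (2) can be sketched on a toy example. The event names, edge weights, and given parent probabilities below are our own illustrative assumptions, not taken from the paper:

```python
def absolute_discounting(counts, m, d, fallback):
    """Eq. (1): subtract d from every observed count and redistribute the
    freed mass m_d * d / m according to the fallback distribution."""
    m_d = sum(1 for c in counts.values() if c > d)
    return {a: max(counts.get(a, 0) - d, 0) / m + (m_d * d / m) * p
            for a, p in fallback.items()}

def structural_smoothing(counts, m, d, parents, children, weights, parent_probs):
    """Eq. (2): like Eq. (1), but each event inherits the freed mass from
    its DAG parents, in proportion to the normalized edge weights."""
    m_d = sum(1 for c in counts.values() if c > d)
    smoothed = {}
    for a in parents:
        inherited = sum(parent_probs[j] * weights[(j, a)]
                        / sum(weights[(j, c)] for c in children[j])
                        for j in parents[a])
        smoothed[a] = max(counts.get(a, 0) - d, 0) / m + (m_d * d / m) * inherited
    return smoothed

# Plain absolute discounting with a uniform fallback.
Pa = absolute_discounting({"a": 2, "b": 1}, m=3, d=0.5,
                          fallback={"a": 1/3, "b": 1/3, "c": 1/3})

# Toy two-level DAG: level k-1 events p1, p2; level k events a, b, c.
parents = {"a": ["p1"], "b": ["p1", "p2"], "c": ["p2"]}
children = {"p1": ["a", "b"], "p2": ["b", "c"]}
weights = {("p1", "a"): 1, ("p1", "b"): 1, ("p2", "b"): 1, ("p2", "c"): 1}
P = structural_smoothing({"a": 3, "b": 1}, m=4, d=0.5,
                         parents=parents, children=children,
                         weights=weights, parent_probs={"p1": 0.6, "p2": 0.4})
```

In both cases the unseen event ends up with positive mass, and the result remains a proper distribution because the retained mass and the redistributed mass sum to one.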
After obtaining the parent/child relationship between sub-structures, the DAG is constructed by drawing a directed edge from each parent to its child nodes. Since all descendants of a given sub-structure at depth k − 1 are at depth k, this yields a topological ordering of the vertices, and hence the resulting graph is indeed a DAG. Next, we discuss how to construct such DAGs for different graph kernels. Graphlet Kernel: We construct the vocabulary V for the graphlet kernel by enumerating all canonical graphlets of size up to k.² Each canonically-labeled graphlet is a node in the DAG. We then apply a transformation to infer the parent/child relationship between graphlets as follows: we place a directed edge from graphlet G to G′ if, and only if, G can be obtained from G′ by deleting a node. In other words, all edges from a graphlet G of size k − 1 point to a graphlet G′ of size k. In order to assign weights to the edges, given a graphlet pair G and G′, we count the number of times G can be obtained from G′ by deleting a node (call this number n_GG′). Recall that G is of size k − 1 and G′ is of size k, and therefore n_GG′ can be at most k. Let C_G denote the set of children of node G in the DAG, and n_G := Σ_{Ḡ ∈ C_G} n_GḠ. Then we define the weight w_GG′ of the edge connecting G and G′ as n_GG′ / n_G. The idea here is that the weight encodes the proportion of the different ways of extending G that result in the graphlet G′. For instance, consider G_15 and its parents G_5, G_6, G_7 (see Figure 2 for the DAG of graphlets with size k ≤ 5). Even if graphlet G_15 is not observed in the training data, it still receives a probability mass proportional to the edge weights from its parents, which overcomes the sparsity problem of unseen data. Weisfeiler-Lehman Kernel: The Weisfeiler-Lehman kernel performs an exact matching between the compressed multiset labels.
For instance, given two labels ABCDE and ABCDF, it simply assigns zero value to their similarity, even though the two labels are partially similar. In order to smooth the Weisfeiler-Lehman kernel, we first run the original algorithm and obtain the multiset representation of each graph in the dataset. We then apply a transformation to infer the parent/child relationship between compressed labels as follows: in each iteration of the Weisfeiler-Lehman algorithm, and for each multiset label of size k in the vocabulary, we generate its parents by computing all subsets of size k − 1 while keeping the root node fixed. For instance, the parents of a multiset label ABCDE are {ABCD, ABCE, ABDE, ACDE}. Then, we simply construct the DAG by drawing a directed edge from parent labels to children. Notice that considering only the set of labels generated by the Weisfeiler-Lehman kernel is not sufficient for constructing a valid DAG. For instance, it might be the case that none of the possible parents of a given label exists in the vocabulary, simply due to the sparsity problem (e.g., out of all possible parents of ABCDE, we might only observe ABCE in the training data). Thus, restricting ourselves to the original vocabulary would leave such labels orphaned in the DAG. Therefore, we consider so-called pseudo parents as part of the vocabulary when constructing the DAG. Since the sub-structures in this kernel are data-dependent, we use a uniform weight between a parent and its children. Shortest-Path Kernel: Similar to the other graph kernels discussed above, the shortest-path graph kernel does not take partial similarities into account. For instance, given two shortest paths ABCDE and ABCDF (compressed as AE5 and AF5, respectively), it assigns zero to their similarity since their sink labels are different. However, one can notice that shortest-path sub-structures exhibit a strong dependency relationship.
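The two label constructions used in the examples above, Weisfeiler-Lehman sub-label parents and compressed shortest-path labels, can be sketched as follows. The plain-string encoding is our own toy representation:

```python
from itertools import combinations

def wl_parents(label):
    """Parents of a compressed WL label: all sub-labels obtained by
    dropping one neighbor label while keeping the root (first character)
    fixed, e.g. ABCDE -> {ABCD, ABCE, ABDE, ACDE}."""
    root, neighbors = label[0], label[1:]
    return {root + "".join(s) for s in combinations(neighbors, len(neighbors) - 1)}

def sp_compress(path):
    """Compressed shortest-path label: sorted endpoint labels concatenated
    with the path size, e.g. 'ABCDE' -> 'AE5'."""
    return "".join(sorted((path[0], path[-1]))) + str(len(path))
```

Because `combinations` preserves the original ordering, the generated parents stay in the canonical sorted form the kernel uses for its multiset labels.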
For instance, given a shortest path p_ij = {ABCDE} of size k, one can derive the shortest paths {ABCD, ABC, AB} of size < k as a consequence of the optimal sub-structure property; that is, one can show that all sub-paths of a shortest path are also shortest paths with the same source node [6]. In order to smooth the shortest-path kernel, we first build the vocabulary by computing all shortest paths for each graph. Let p_ij be a shortest path of size k and p_ij′ be a shortest path of size k − 1 that is obtained by removing the sink node of p_ij. Let l_ij be the compressed form of p_ij that represents the sorted labels of its endpoints i and j concatenated with its length (resp. l_ij′). Then, in order to build the DAG, we draw a directed edge from l_ij′ at depth k − 1 to l_ij at depth k if and only if p_ij′ is a sub-path of p_ij. In other words, all ancestors of l_ij consist of the compressed labels obtained from sub-paths of p_ij of size < k. Similar to the Weisfeiler-Lehman kernel, we assign a uniform weight between parents and children.

² We used Nauty [13] to obtain canonically-labeled isomorphic representations of graphlets.

Figure 3: An illustration of table assignment, adapted from [9]. In this example, the labels at the tables are given by (l_1, . . . , l_4) = (G_44, G_30, G_32, G_44). Black dots indicate the number of occurrences of each label in 10 draws from the Pitman-Yor process.

5 Pitman-Yor Smoothing

Pitman-Yor processes are known to produce power-law distributions [8]. A novel interpretation of interpolated Kneser-Ney was proposed by [25] as approximate inference in a hierarchical Bayesian model consisting of Pitman-Yor processes [16]. In a similar spirit, we extend our model to adopt the Pitman-Yor process as an alternative smoothing framework. A Pitman-Yor process P on a ground set G_{k+1} of size-(k + 1) graphlets is defined via P_{k+1} ∼ PY(d_{k+1}, θ_{k+1}, P_k), where d_{k+1} is a discount parameter with 0 ≤ d_{k+1} < 1, θ_{k+1} > −d_{k+1} is a strength parameter, and P_k is a base distribution.
The most intuitive way to understand draws from the Pitman-Yor process is via the Chinese restaurant process (see Figure 3). Consider a restaurant with an infinite number of tables, where customers enter the restaurant one by one. The first customer sits at the first table, and a graphlet is assigned to it by drawing a sample from the base distribution, since this table is occupied for the first time. The label of the first table is the first graphlet drawn from the Pitman-Yor process. Subsequent customers, when they enter the restaurant, decide to sit at an already occupied table with probability proportional to c_i − d_{k+1}, where c_i represents the number of customers already sitting at table i. If they sit at an already occupied table, then the label of that table denotes the next graphlet drawn from the Pitman-Yor process. On the other hand, with probability proportional to θ_{k+1} + d_{k+1}·t, where t is the current number of occupied tables, a new customer might decide to occupy a new table. In this case, the base distribution is invoked to label this table with a graphlet. Intuitively, the reason this process generates power-law behavior is that popular graphlets, which are served on tables with a large number of customers, have a higher probability of attracting new customers and hence of being generated again, similar to a rich-gets-richer phenomenon.

Algorithm 1: Insert a Customer
Input: d_{k+1}, θ_{k+1}, P_k
t ← 0        // occupied tables
c ← ()       // counts of customers per table
l ← ()       // labels of tables
if t = 0 then
    t ← 1; append 1 to c
    draw graphlet G_i ∼ P_k      // insert customer in parent
    draw G_j ∼ w_ij; append G_j to l
    return G_j
else
    with probability ∝ max(0, c_j − d):
        c_j ← c_j + 1
        return l_j
    with probability ∝ θ + d·t:
        t ← t + 1; append 1 to c
        draw graphlet G_i ∼ P_k  // insert customer in parent
        draw G_j ∼ w_ij; append G_j to l
        return G_j
end if

In a hierarchical Pitman-Yor process, the base distribution P_k is itself recursively defined via a Pitman-Yor process P_k ∼ PY(d_k, θ_k, P_{k−1}).
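The basic table-assignment dynamics can be sketched as follows. This is our own simplification with a non-hierarchical base distribution passed as a callback; the dictionary-based restaurant representation is an assumption, not the paper's C++ implementation:

```python
import random

def insert_customer(restaurant, d, theta, draw_from_base):
    """Seat one customer in a Pitman-Yor Chinese restaurant (a sketch of
    the Algorithm 1 dynamics).  `restaurant` holds parallel lists of
    per-table customer counts and table labels; `draw_from_base` labels a
    freshly opened table (in the hierarchical model this would be a draw
    from the parent restaurant, reweighted by the DAG edge weights)."""
    counts, labels = restaurant["counts"], restaurant["labels"]
    t = len(counts)
    # Occupied table i attracts the customer with weight c_i - d;
    # a new table is opened with weight theta + d * t.
    w = [max(c - d, 0.0) for c in counts] + [theta + d * t]
    i = random.choices(range(t + 1), weights=w)[0]
    if i < t:                    # join an existing table
        counts[i] += 1
        return labels[i]
    counts.append(1)             # open a new table and label it
    labels.append(draw_from_base())
    return labels[-1]

random.seed(0)
r = {"counts": [], "labels": []}
draws = [insert_customer(r, d=0.5, theta=1.0,
                         draw_from_base=lambda: random.choice("ABCDE"))
         for _ in range(200)]
```

Tables with many customers accumulate weight and keep attracting more, which is exactly the rich-gets-richer dynamic described above.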
In order to label a table, we need a draw from P_k, which is obtained by inserting a customer into the corresponding restaurant. However, adopting the traditional hierarchical Pitman-Yor process is not straightforward in our case, since the size of the context differs between levels of the hierarchy; that is, a child restaurant in the hierarchy can have more than one parent restaurant to request a label from. In other words, P_{k+1} is defined over G_{k+1} of size n_{k+1}, while P_k is defined over G_k of size n_k ≤ n_{k+1}. Therefore, one needs a transformation function between base distributions of different sizes. We incorporate edge weights between parent and child restaurants by using the same weighting scheme as in Section 4.2. This changes the Chinese restaurant process as follows: when we need to label a table, we first draw a size-k graphlet G_i ∼ P_k by inserting a customer into the corresponding restaurant. Given G_i, we then draw a size-(k + 1) graphlet G_j with probability proportional to w_ij, where w_ij is obtained from the DAG. See Algorithm 1 for pseudo-code for inserting a customer. Deletion of a customer is handled similarly (see Algorithm 2).

Algorithm 2: Delete a Customer
Input: d, θ, P_0, C, L, t
with probability ∝ c_l:
    c_l ← c_l − 1
    G_j ← l_j
    if c_l = 0 then
        P_k ∝ 1/w_ij
        delete c_l from c
        delete l_j from l
        t ← t − 1
    end if
return G

6 Related work

A survey of the most popular graph kernel methods has already been given in the previous sections. Several methods have been proposed for smoothing structured objects [4], [20]. Our framework is similar to dependency tree kernels [4], since both methods use the notion of smoothing for structured objects. However, our method addresses the problem of smoothing the counts of structured objects. Thus, while smoothing is achieved by using a DAG, we discard the DAG once the counts are smoothed.
Another related work is propagation kernels [14], which define graph features as counts of similar node-label distributions on the respective graphs by using locality-sensitive hashing (LSH). Our framework not only considers node-label distributions, but also explicitly incorporates structural similarity via the DAG. Another similar work is the recently proposed framework of [29], which learns the co-occurrence relationship between sub-structures by using neural language models. However, their framework does not respect the structural similarity between sub-structures, which is an important property to consider, especially in the presence of noise in edges or labels.

7 Experiments

The aim of our experiments is threefold. First, we want to show that smoothing graph kernels significantly improves the classification accuracy. Second, we want to show that the smoothed kernels are comparable to or outperform state-of-the-art graph kernels in terms of classification accuracy, while remaining competitive in terms of computational requirements. Third, we want to show that our methods outperform the base kernels when edge or label noise is present.

Table 1: Comparison of classification accuracy (± standard deviation) of shortest-path (SP), Weisfeiler-Lehman (WL), and graphlet (GK) kernels with their smoothed variants. Smoothed variants with statistically significant improvements over the base kernels are shown in bold, as measured by a t-test with a p-value of ≤0.05. Ramon & Gärtner (Ram & Gär), p-random walk, and random walk kernels are included for additional comparison, where >72H indicates the computation did not finish in 72 hours. Runtimes for constructing the DAG and smoothing (SMTH) the counts are also reported (as DAG/SMTH), where ” indicates seconds and ’ indicates minutes.

DATASET         MUTAG        PTC          ENZYMES      PROTEINS     NCI1         NCI109
SP              85.22 ±2.43  58.24 ±2.44  40.10 ±1.50  75.07 ±0.54  73.00 ±0.24  73.00 ±0.21
SMOOTHED SP     87.94 ±2.58  60.82 ±1.84  42.27 ±1.07  75.85 ±0.28  73.26 ±0.24  73.01 ±0.31
WL              82.22 ±1.87  60.41 ±1.93  53.88 ±0.95  74.49 ±0.49  84.13 ±0.22  83.83 ±0.31
SMOOTHED WL     87.44 ±1.95  60.47 ±2.39  55.30 ±0.65  75.53 ±0.50  84.66 ±0.18  84.72 ±0.21
GK              81.33 ±1.02  55.56 ±1.46  27.32 ±0.96  69.69 ±0.46  62.46 ±0.19  62.33 ±0.14
SMOOTHED GK     83.17 ±0.64  58.44 ±1.00  30.90 ±1.51  69.83 ±0.46  62.48 ±0.15  62.48 ±0.11
PYP GK          83.11 ±1.23  57.44 ±1.44  29.63 ±1.30  70.00 ±0.80  62.50 ±0.20  62.68 ±0.18
RAM & GÄR       84.88 ±1.86  58.47 ±0.90  16.96 ±1.46  70.73 ±0.35  56.61 ±0.53  54.62 ±0.23
P-RANDOM WALK   80.05 ±1.64  59.38 ±1.66  30.01 ±1.00  71.16 ±0.35  >72H         >72H
RANDOM WALK     83.72 ±1.50  57.85 ±1.30  24.16 ±1.64  74.22 ±0.42  >72H         >72H
DAG/SMTH (GK)   6”/1”        6”/1”        6”/1”        6”/1”        6”/3”        6”/3”
DAG/SMTH (SP)   3”/1”        19”/1”       45”/1”       9’/1”        9’/17”       10’/16”
DAG/SMTH (WL)   1”/2”        1”/17”       10”/12’      7’/70’       2”/21’       2”/21’
DAG/SMTH (PYP)  6”/5”        6”/12”       6”/21”       6”/1’        6”/8’        6”/8’

Datasets. We used the following benchmark datasets commonly used for graph kernels: MUTAG, PTC, ENZYMES, PROTEINS, NCI1, and NCI109. MUTAG is a dataset of 188 mutagenic aromatic and heteroaromatic nitro compounds [5] with 7 discrete labels. PTC [26] is a dataset of 344 chemical compounds with 19 discrete labels. ENZYMES is a dataset of 600 protein tertiary structures obtained from [2], with 3 discrete labels. PROTEINS is a dataset of 1113 graphs obtained from [2], with 3 discrete labels. NCI1 and NCI109 [28] are two balanced datasets of chemical compounds of size 4110 and 4127, with 37 and 38 labels, respectively.

Experimental setup. We compare our framework against representative instances of the major families of graph kernels in the literature.
In addition to the base kernels, we also compare our smoothed kernels with the random walk kernel [7], the Ramon-Gärtner subtree kernel [18], and the p-step random walk kernel [24]. The random walk, p-step random walk, and Ramon-Gärtner kernels are written in Matlab and obtained from [22]. All other kernels were coded in Python, except Pitman-Yor smoothing, which is coded in C++.³ We used a parallel implementation for smoothing the counts of the Weisfeiler-Lehman kernel for efficiency. All kernels are normalized to have unit length in the feature space. Moreover, we use 10-fold cross validation with a binary C-Support Vector Machine (SVM), where the C value for each fold is independently tuned using training data from that fold. In order to exclude random effects of the fold assignments, this experiment is repeated 10 times, and the average prediction accuracy over the 10 experiments is reported together with standard deviations.⁴

³ We modified the open source implementation of PYP: https://github.com/redpony/cpyp.
⁴ Implementations of the original and smoothed versions of the kernels, the datasets, and a detailed discussion of the parameter selection procedure with the list of parameters used in our experiments can be accessed from http://web.ics.purdue.edu/~ypinar/nips.

Figure 4: Classification accuracy vs. noise for base graph kernels (dashed lines) and their smoothed variants (non-dashed lines).

7.1 Results

In our first experiment, we compare the base kernels with their smoothed variants. As can be seen from Table 1, smoothing improves the classification accuracy of every base kernel on every dataset, with the majority of the improvements being statistically significant at p ≤ 0.05. We observe that even though smoothing improves the accuracy of graphlet kernels on PROTEINS and NCI1, the improvements are not statistically significant. We believe this is because these datasets are not as sensitive to structural noise as the other datasets; thus, considering the partial similarities
do not improve the results significantly. Moreover, PYP-smoothed graphlet kernels achieve statistically significant improvements on most of the datasets; however, they are outperformed by the smoothed graphlet kernels introduced in Section 4. In our second experiment, we picked the best smoothed kernel in terms of classification accuracy for each dataset and compared it against the performance of state-of-the-art graph kernels (see Table 1). Smoothed kernels outperform the other methods on all datasets, and the results are statistically significant on every dataset except PTC. In our third experiment, we investigated the runtime behavior of our framework, which has two major costs. First, one has to compute a DAG by using the original feature vectors. Next, the constructed DAG needs to be used to compute smoothed representations of the feature vectors. Table 1 shows the total wall-clock runtime taken by all graphs for constructing the DAG and smoothing the counts on each dataset. As can be seen from the runtimes, our framework adds a constant factor to the original runtime for most of the datasets. While the DAG creation in the Weisfeiler-Lehman kernel also adds a negligible overhead, the cost of smoothing becomes significant if the vocabulary size gets prohibitively large, due to the exponentially growing nature of the kernel w.r.t. the subtree parameter h. Finally, in our fourth experiment, we test the performance of graph kernels when edge or label noise is present. For edge noise, we randomly removed and added {10%, 20%, 30%} of the edges in each graph. For label noise, we randomly flipped {25%, 50%, 75%} of the node labels in each graph, where random labels are selected proportionally to the original label distribution of the graph. Figure 4 shows the performance of smoothed graph kernels under noise. As can be seen from the figure, smoothed kernels are able to outperform their base variants when noise is present.
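The noise protocols just described can be sketched as follows. The undirected edge-set and label-list graph representation is our own toy choice; the paper's exact sampling details may differ:

```python
import random

def flip_labels(labels, fraction, rng):
    """Flip `fraction` of the node labels; replacement labels are drawn
    in proportion to the graph's original label distribution (sampling
    uniformly from the original label list achieves exactly that)."""
    noisy = list(labels)
    idx = rng.sample(range(len(labels)), int(round(fraction * len(labels))))
    for i in idx:
        noisy[i] = rng.choice(labels)
    return noisy

def perturb_edges(edges, nodes, fraction, rng):
    """Remove a `fraction` of the edges, then add the same number of
    random fresh edges between distinct nodes (edge-noise sketch)."""
    kept = set(edges)
    k = int(round(fraction * len(kept)))
    for e in rng.sample(sorted(kept), k):
        kept.discard(e)
    added = 0
    while added < k:
        u, v = rng.sample(nodes, 2)
        e = (min(u, v), max(u, v))
        if e not in kept:
            kept.add(e)
            added += 1
    return kept

rng = random.Random(1)
labels = ["A"] * 8 + ["B"] * 2
noisy = flip_labels(labels, 0.5, rng)
edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
new_edges = perturb_edges(edges, list(range(5)), 0.25, rng)
```

Note that drawing a replacement with `rng.choice(labels)` can reproduce the original label, so at most (not exactly) the given fraction of labels ends up changed, which matches the "selected proportionally to the original label distribution" protocol.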
An interesting observation is that even though a significant amount of edge noise is added to the PROTEINS and NCI datasets, the performance of the base kernels does not change drastically. This further supports our observation that these datasets are not as sensitive to structural noise as the other datasets.

8 Conclusion and Future Work

We presented a novel framework for smoothing graph kernels, inspired by smoothing techniques from natural language processing, and applied our method to state-of-the-art graph kernels. Our framework is rather general and lends itself to many extensions. For instance, by defining domain-specific parent-child relationships, one can construct different DAGs with different weighting schemes. Another interesting extension of our smoothing framework would be to apply it to graphs with continuous labels. Moreover, even though we restricted ourselves to graph kernels in this paper, our framework is applicable to any R-convolution kernel that uses a frequency-vector-based representation, such as string kernels.

9 Acknowledgments

We thank Hyokun Yun for his tremendous help in implementing Pitman-Yor Processes. We also thank the anonymous NIPS reviewers for their constructive comments, and Jiasen Yang, Joon Hee Choi, Amani Abu Jabal, and Parameswaran Raman for reviewing early drafts of the paper. This work is supported by the National Science Foundation under grant No. 1219015.

References

[1] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In ICML, pages 74–81, 2005.
[2] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. In ISMB, Detroit, USA, 2005.
[3] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. In ACL, pages 310–318, 1996.
[4] D. Croce, A. Moschitti, and R. Basili. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of EMNLP, pages 1034–1046.
Association for Computational Linguistics, 2011.
[5] A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J. Med. Chem., 34:786–797, 1991.
[6] A. Feragen, N. Kasenburg, J. Petersen, M. de Bruijne, and K. Borgwardt. Scalable kernels for graphs with continuous attributes. In NIPS, pages 216–224, 2013.
[7] T. Gärtner, P. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In COLT, pages 129–143, 2003.
[8] S. Goldwater, T. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. NIPS, 2006.
[9] S. Goldwater, T. L. Griffiths, and M. Johnson. Producing power-law distributions and damping word frequencies with two-stage language models. JMLR, 12:2335–2382, 2011.
[10] D. Haussler. Convolution kernels on discrete structures. Technical Report UCS-CRL-99-10, UC Santa Cruz, 1999.
[11] J. Kandola, T. Graepel, and J. Shawe-Taylor. Reducing kernel matrix diagonal dominance using semidefinite programming. In COLT, volume 2777 of Lecture Notes in Computer Science, pages 288–302, Washington, DC, 2003.
[12] R. Kneser and H. Ney. Improved backing-off for M-gram language modeling. In ICASSP, 1995.
[13] B. D. McKay. Nauty user's guide (version 2.4). Australian National University, 2007.
[14] M. Neumann, R. Garnett, P. Moreno, N. Patricia, and K. Kersting. Propagation kernels for partially labeled graphs. In ICML–2012 Workshop on Mining and Learning with Graphs, Edinburgh, UK, 2012.
[15] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependences in stochastic language modeling. In Computer Speech and Language, pages 1–38, 1994.
[16] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25(2):855–900, 1997.
[17] N. Przulj.
Biological network comparison using graphlet degree distribution. In ECCB, 2006.
[18] J. Ramon and T. Gärtner. Expressivity versus efficiency of graph kernels. Technical report, First International Workshop on Mining Graphs, Trees and Sequences (held with ECML/PKDD'03), 2003.
[19] B. Schölkopf and A. J. Smola. Learning with Kernels. 2002.
[20] A. Severyn and A. Moschitti. Fast support vector machines for convolution tree kernels. Data Mining and Knowledge Discovery, 25(2):325–357, 2012.
[21] N. Shervashidze and K. Borgwardt. Fast subtree kernels on graphs. In NIPS, 2010.
[22] N. Shervashidze, S. V. N. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt. Efficient graphlet kernels for large graph comparison. In AISTATS, 2009.
[23] N. Shervashidze, P. Schweitzer, E. J. Van Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman graph kernels. JMLR, 12:2539–2561, 2011.
[24] A. J. Smola and R. Kondor. Kernels and regularization on graphs. In COLT, pages 144–158, 2003.
[25] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In ACL, 2006.
[26] H. Toivonen, A. Srinivasan, R. D. King, S. Kramer, and C. Helma. Statistical evaluation of the predictive toxicology challenge 2000-2001. Bioinformatics, 19(10):1183–1193, July 2003.
[27] S. V. N. Vishwanathan, N. N. Schraudolph, I. R. Kondor, and K. M. Borgwardt. Graph kernels. JMLR, 2010.
[28] N. Wale, I. A. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347–375, 2008.
[29] P. Yanardag and S. Vishwanathan. Deep graph kernels. In KDD, pages 1365–1374. ACM, 2015.
[30] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to information retrieval. ACM Trans. Inf. Syst., 22(2):179–214, 2004.
Bandits with Unobserved Confounders: A Causal Approach

Elias Bareinboim∗ (Department of Computer Science, Purdue University, eb@purdue.edu)
Andrew Forney∗ (Department of Computer Science, University of California, Los Angeles, forns@cs.ucla.edu)
Judea Pearl (Department of Computer Science, University of California, Los Angeles, judea@cs.ucla.edu)

Abstract

The Multi-Armed Bandit problem constitutes an archetypal setting for sequential decision-making, permeating multiple domains including engineering, business, and medicine. One of the hallmarks of a bandit setting is the agent's capacity to explore its environment through active intervention, which contrasts with the ability to collect passive data by estimating associational relationships between actions and payouts. The existence of unobserved confounders, namely unmeasured variables affecting both the action and the outcome variables, implies that these two data-collection modes will in general not coincide. In this paper, we show that formalizing this distinction has conceptual and algorithmic implications for the bandit setting. The current generation of bandit algorithms implicitly tries to maximize rewards by estimating the experimental distribution, which we show is not always the best strategy to pursue. Indeed, to achieve low regret in certain realistic classes of bandit problems (namely, in the face of unobserved confounders), both experimental and observational quantities are required by the rational agent. Based on this realization, we propose an optimization metric (employing both experimental and observational distributions) that bandit agents should pursue, and illustrate its benefits over traditional algorithms.

1 Introduction

The Multi-Armed Bandit (MAB) problem is one of the most popular settings encountered in the sequential decision-making literature [Rob52, LR85, EDMM06, Sco10, BCB12], with applications across multiple disciplines.
The main challenge in a prototypical bandit instance is to determine a sequence of actions that maximizes payouts, given that each arm's reward distribution is initially unknown to the agent. Accordingly, the problem revolves around determining the best strategy for learning this distribution (exploring) while, simultaneously, using the agent's accumulated samples to identify the current "best" arm so as to maximize profit (exploiting). Different algorithms employ different strategies to balance exploration and exploitation, but a standard definition of the "best" arm is the one with the highest payout rate associated with it. We will show that, perhaps surprisingly, the definition of the "best" arm is more involved when unobserved confounders are present. This paper complements the vast MAB literature, which encompasses many variants, including adversarial bandits (in which an omnipotent adversary can dynamically shift the reward distributions to thwart the player's best strategies) [BFK10, AS95, BS12], contextual bandits (in which the payout, and therefore the best choice of action, is a function of one or more observed environmental variables) [LZ08, DHK+11, Sli14], and many different constraints and assumptions on the underlying generative model and payout structure [SBCAY14]. For a recent survey, see [BCB12]. This work addresses the MAB problem when unobserved confounders are present (called MABUC, for short), which is arguably the most sensible assumption in real-world, practical applications (and is obviously weaker than assuming the absence of confounders). (∗The authors contributed equally to this paper.)
To support this claim, we should first note that, in the experimental design literature, Fisher's very motivation for randomizing the treatment assignment was to eliminate the influence of unobserved confounders: factors that simultaneously affect the treatment (or bandit arm) and the outcome (or bandit payout) but are not accounted for in the analysis. In reality, the reason such factors are not accounted for explicitly in the analysis is that many of them are unknown a priori to the modeller [Fis51]. The study of unobserved confounders is one of the central themes in the modern literature on causal inference. To appreciate the challenges posed by these confounders, consider the comparison between a randomized clinical trial conducted by the Food and Drug Administration (FDA) and physicians prescribing drugs in their offices. A key tenet of any FDA trial is the use of randomization for the treatment assignment, which precisely protects against biases that might be introduced by physicians. Specifically, physicians may prescribe Drug A for their wealthier patients, who have better nutrition than their less wealthy ones, when, unknown to the doctors, the wealthy patients would recover without treatment. On the other hand, physicians may avoid prescribing the expensive Drug A to their less privileged patients, who (again unknown to the doctors) tend to have less stable immune systems, causing negative reactions to the drug. If a naive estimate of the drug's causal effect is computed based on the physicians' data (obtained through random sampling, but not random assignment), the drug will appear more effective than it is in practice, a bias that would otherwise be avoided by random assignment. Confounding biases (of varying magnitude) appear in almost any application in which the goal is to learn policies (instead of statistical associations), and randomization of the treatment assignment is one established tool to combat them [Pea00].
To the best of our knowledge, no method in the bandit literature has studied the issue of unobserved confounding explicitly, in spite of its pervasiveness in real-world applications. Specifically, no MAB technique makes a clear-cut distinction between experimental exploration (through random assignment, as required by the FDA) and observational data (as given by random sampling in the doctors' offices). In this paper, we explicitly acknowledge, formalize, and then exploit these different data-collection modes. More specifically, our contributions are as follows:

• We show that current bandit algorithms implicitly attempt to maximize rewards by estimating the experimental distribution, which does not guarantee an optimal strategy when unobserved confounders are present (Section 2).

• Based on this observation, we translate the MAB problem into causal language, and then suggest a more appropriate metric that bandit players should optimize when unobserved confounders are present. This leads to a new exploitation principle that can take advantage of data collected under both observational and experimental modes (Section 3).

• We empower Thompson Sampling with this new principle and run extensive simulations. The experiments suggest that the new strategy is statistically efficient and consistent (Section 4).

2 Challenges due to Unobserved Confounders

In this section, we discuss how the maximization of rewards plays out in a bandit instance with unobserved confounders. Consider a scenario in which a greedy casino decides to demo two new models of slot machines, say M1 and M2 for simplicity, and wishes to make them as lucrative as possible. As such, they perform a battery of observational studies (using random sampling) to compare various traits of the casino's gamblers with their typical slot machine choices.
From these studies, the casino learns that two factors, when combined, well predict the gambling habits of players (unknown to the players themselves): player inebriation and machine conspicuousness (say, whether or not a machine is blinking). Coding both of these traits as binary variables, we let B ∈ {0, 1} denote whether or not a machine is blinking, and D ∈ {0, 1} denote whether or not the gambler is drunk. As it turns out, a gambler's "natural" choice of machine, X ∈ {M1, M2}, can be modelled by the structural equation

X ← fX(B, D) = (D ∧ ¬B) ∨ (¬D ∧ B) = D ⊕ B    (1)

Figure 1: Performance of different bandit strategies in the greedy casino example. Left panel: no algorithm is able to perform better than random guessing. Right panel: regret grows without bounds.

Moreover, the casino learns that every gambler has an equal chance of being intoxicated and each machine has an equal chance of blinking its lights at a given time, namely, P(D = 0) = P(D = 1) = 0.5 and P(B = 0) = P(B = 1) = 0.5. The casino's executives decide to take advantage of these propensities by introducing a new type of reactive slot machine that will tailor payout rates to whether or not it believes (via sensor input, assumed to be perfect for this problem) a gambler is intoxicated. Suppose also that a new gambling law requires that casinos maintain a minimum attainable payout rate for slots of 30%. Cognizant of this new law, while still wanting to maximize profits by exploiting gamblers' natural arm choices, the casino executives modify their new slots with the payout rates depicted in Table 1a.

(a)
            D = 0            D = 1
            B = 0    B = 1   B = 0    B = 1
  X = M1    *0.10    0.50    0.40     *0.20
  X = M2    0.50     *0.10   *0.20    0.40

(b)
            P(y|X)   P(y|do(X))
  X = M1    0.15     0.3
  X = M2    0.15     0.3

Table 1: (a) Payout rates decided by reactive slot machines as a function of arm choice, sobriety, and machine conspicuousness. Players' natural arm choices under D, B are indicated by asterisks.
(b) Payout rates according to the observational distribution, P(Y = 1|X), and the experimental distribution, P(Y = 1|do(X)), where Y = 1 represents winning (shown in the table) and 0 otherwise.

The state, blind to the casino's payout strategy, decides to perform a randomized study to verify whether the win rates meet the 30% payout requisite. Wary that the casino might try to inflate payout rates for the inspectors, the state recruits random players from the casino floor, pays them to play a random slot, and then observes the outcome. Their randomized experiment yields a favorable outcome for the casino, with win rates meeting precisely the 30% cutoff. The data looks like Table 1b (third column), assuming a binary payout Y ∈ {0, 1}, where 0 represents losing and 1 winning. As students of causal inference, and still suspicious of the casino's ethical standards, we decide to go to the casino's floor and observe the win rates of players based on their natural arm choices (through random sampling). We encounter a distribution close to Table 1b (second column), which shows that the casino is actually paying ordinary gamblers only 15% of the time. In summary, the casino is at the same time (1) exploiting the natural predilections of the gamblers' arm choices as a function of their intoxication and the machine's blinking behavior (based on Eq. 1), (2) paying, on average, less than legally allowed (15% instead of 30%), and (3) fooling the state's inspectors, since the randomized trial payout meets the 30% legal requirement. As machine learning researchers, we decide to run a battery of experiments using the standard bandit algorithms (e.g., ϵ-greedy, Thompson Sampling, UCB1, EXP3) to test the new slot machines on the casino floor. We obtain the data shown in Figure 1a, which shows that the probability of choosing the correct action is no better than a random coin flip even after a considerable number of steps. We note, somewhat surprised, that the cumulative regret (Fig.
1b) shows no signs of abating, and that we are apparently unable to learn a superior arm. We also note that the results obtained by the standard algorithms coincide with the randomized study conducted by the state (purple line). Under the presence of unobserved confounders, such as in the casino example, P(y|do(X)) does not capture the information required to maximize payout, but rather the average payout akin to choosing arms by a coin flip. Specifically, the payout given by coin flipping is the same for both machines, P(Y = 1|do(X = M1)) = P(Y = 1|do(X = M2)) = 0.3, which means that the arms are statistically indistinguishable in the limit of large sample size. Further, if we consider using the observational data from watching gamblers on the casino floor (based on their natural predilections), the average payoff will also appear independent of the machine choice, P(Y = 1|X = M1) = P(Y = 1|X = M2) = 0.15, albeit with an even lower payout.1 Based on these observations, we can see why no arm choice is better than the other under either distribution alone, which explains why any algorithm based on these distributions alone will fail to learn an optimal policy. More fundamentally, we should be puzzled by the disagreement between the observational and interventional distributions. This residual difference may encode knowledge about the unobserved confounders, which may indeed give some indication of how to differentiate the arms, as well as a sensible strategy to play better than pure chance. In the next section, we will use causal machinery to realize this idea.

3 Bandits as a Causal Inference Problem

We will use the language of structural causal models [Pea00, Ch.
7] for expressing the bandit data-generating process and for allowing the explicit manipulation of some key concepts in our analysis, i.e., confounding, observational and experimental distributions, and counterfactuals (to be defined).

Definition 3.1. (Structural Causal Model) ([Pea00, Ch. 7]) A structural causal model M is a 4-tuple ⟨U, V, F, P(u)⟩ where:
1. U is a set of background variables (also called exogenous) that are determined by factors outside of the model,
2. V is a set {V1, V2, ..., Vn} of observable variables (also called endogenous) that are determined by variables in the model (i.e., determined by variables in U ∪ V),
3. F is a set of functions {f1, f2, ..., fn} such that each fi is a mapping from the respective domains of Ui ∪ PAi to Vi, where Ui ⊆ U and PAi ⊆ V \ Vi, and the entire set F forms a mapping from U to V. In other words, each fi in vi ← fi(pai, ui), i = 1, ..., n, assigns a value to Vi that depends on the values of the select set of variables (Ui ∪ PAi), and
4. P(u) is a probability distribution over the exogenous variables.

Each structural model M is associated with a directed acyclic graph G, where nodes correspond to the endogenous variables V and edges represent functional relationships; i.e., there exists an edge from X to Y whenever X appears in the argument of Y's function. We next define the MABUC problem within the structural semantics.

Definition 3.2. (K-Armed Bandits with Unobserved Confounders) A K-armed bandit problem with unobserved confounders is defined as a model M with a reward distribution over P(u) where:
1. Xt ∈ {x1, ..., xk} is an observable variable encoding the player's arm choice from one of k arms, decided by Nature in the observational case, and by do(Xt = π(x0, y0, ..., xt−1, yt−1)), for a strategy π, in the experimental case (i.e., when the strategy decides the choice),
2. Ut represents the unobserved variable encoding the payout rate of arm xt as well as the propensity to choose xt, and
3.
Yt ∈ {0, 1} is a reward (0 for losing, 1 for winning) from choosing arm xt under the unobserved confounder state ut, decided by yt = fy(xt, ut).

1One may surmise that these ties are just contrived examples, or perhaps numerical coincidences, which do not appear in realistic bandit instances. Unfortunately, that is not the case, as shown in the other scenarios discussed in the paper. This phenomenon is indeed a manifestation of the deeper problem arising from the lack of control for the unobserved confounders.

Figure 2: (a) Model for the standard MAB sequential decision game. (b) Model for the MABUC sequential decision game. In each model, solid nodes denote observed variables and open nodes represent unobserved variables. Square nodes denote the strategy's arm choice at time t. Dashed lines illustrate influences on future time trials that are not pictured.

First note that this definition also applies to the MAB problem (without confounding), as shown in Fig. 2a. The standard MAB instance is defined by constraining the MABUC definition such that Ut affects only the outcome variable Yt; there is no edge from Ut to Xt (Def. 3.2.2). In the unconfounded case, it is clear that P(y|do(x)) = P(y|x) [Pea00, Ch. 3], which means that the payouts associated with flipping a coin to randomize the treatment or observing (through random sampling) the player gambling on the casino's floor based on their natural predilections will yield the same answer. The variable U carries the unobserved payout parameters of each arm, which is usually the target of analysis.2,3 Fig. 2b provides a graphical representation of the MABUC problem. Note that πt represents the system's choice policy, which is affected by the unobserved factors encoded through the arrow from Ut to πt. One way to understand this arrow is through the idea of players' natural predilections.
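To make Definition 3.2 concrete, the greedy casino of Section 2 can be written out as such a model, and the two columns of Table 1b recovered by enumerating the four equally likely confounder states. The following sketch is ours; the payout tables are transcribed from Table 1a, with arm 0 standing for M1:

```python
from itertools import product

# Payout rates from Table 1a, indexed as payout[arm][(D, B)]; arm 0 is M1.
payout = [
    {(0, 0): 0.10, (0, 1): 0.50, (1, 0): 0.40, (1, 1): 0.20},  # M1
    {(0, 0): 0.50, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.40},  # M2
]

def natural_arm(d, b):
    # Structural equation (1): X <- D xor B (0 -> M1, 1 -> M2).
    return d ^ b

states = list(product([0, 1], [0, 1]))  # each (D, B) state has probability 1/4

# Observational mode: gamblers follow their natural predilection.
obs = {0: 0.0, 1: 0.0}
for d, b in states:
    obs[natural_arm(d, b)] += payout[natural_arm(d, b)][(d, b)] * 0.25
# Each arm is natural in exactly half the states, so condition on X = a:
obs = {a: v / 0.5 for a, v in obs.items()}

# Experimental mode: do(X = a) overrides the predilection in every state.
exp = {a: sum(payout[a][(d, b)] for d, b in states) / 4 for a in (0, 1)}

print({a: round(v, 4) for a, v in obs.items()})  # both arms 0.15, matching P(y|X)
print({a: round(v, 4) for a, v in exp.items()})  # both arms 0.3, matching P(y|do(X))
```

The enumeration reproduces Table 1b exactly: both arms look identical under either data-collection mode on its own, which is the point of the example.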
In the example from the previous section, the predilection would correspond to the choices arising when the gambler is allowed to play freely on the casino's floor (e.g., drunk players desiring to play on the blinking machines) or to doctors prescribing drugs based on their gut feeling (e.g., physicians prescribing the more expensive drug to their wealthier patients). These predilections are encoded in the observational distribution P(y|x). On the other hand, the experimental distribution P(y|do(x)) encodes the process in which the natural predilections are overridden by external policies. In our example, this distribution arises when the government's inspectors flip a coin and send gamblers to machines based on the coin's outcome, regardless of their predilections. Remarkably, it is possible to use the information embedded in these distinct data-collection modes (and their corresponding distributions) to understand players' predilections and perform better than random guessing in these bandit instances. To witness, assume there exists an oracle on the casino's floor operating by the following protocol. The oracle observes the gamblers until they are about to play a given machine. The oracle intercepts each gambler who is about to pull the arm of machine M1, for example, and suggests that the player contemplate whether following his predilection (M1) or going against it (playing M2) would lead to a better outcome. The drunk gambler, who is a clever machine learning student familiar with Fig. 1, says that this evaluation cannot be computed a priori. He affirms that, despite spending hours at the casino estimating the payoff distribution based on players' natural predilections (namely, P(y|x)), it is not feasible to relate this distribution to the hypothetical construction of what would have happened had he decided to play differently.
He also acknowledges that the experimental distribution P(y|do(x)), devoid of the gamblers' predilections, does not support any clear comparison against his personal strategy. The oracle says that this type of reasoning is possible, but first one needs to define the concept of a counterfactual.

Definition 3.3. (Counterfactual) ([Pea00, pp. 204]) Let X and Y be two subsets of endogenous variables in V. The counterfactual sentence "Y would be y (in situation u), had X been x" is interpreted as the equality Yx(u) = y, with Yx(u) being the potential response of Y to X = x.

2On a more fundamental level, it is clear that unconfoundedness is (implicitly) assumed not to hold in the general case. Otherwise, the equality between the observational and experimental distributions would imply that no randomization of the action needs to be carried out, since standard random sampling would recover the same distribution. In that case, many works in the literature would be acting suboptimally since, in general, experiments are more expensive to perform than collecting data through random sampling.

3The interventional nature of the MAB problem is virtually not discussed in the literature; one of the few exceptions is the causal interpretation of Thompson Sampling established in [OB10].
The former statement can be written in counterfactual notation, probabilistically, as E(YX=1 = 1|X = 1), which reads as “the expected value of winning (Y = 1) had I play machine 1 given that I am about to play machine 1”, which contrasts with the alternative hypothesis E(YX=0 = 1|X = 1), which reads as “the expected value of winning (Y = 1) had I play machine 1 given that I am about to play machine 0”. This is also known in the literature as the effect of the treatment on the treated (ETT) [Pea00]. So, instead of using a decision rule comparing the average payouts across arms, namely (for action a), argmax a E(Y |do(X = a)), (2) which was shown in the previous section to be insufficient to handle the MABUC, we should consider the rule using the comparison between the average payouts obtained by players for choosing in favour or against their intuition, respectively, argmax a E(YX=a = 1|X = x), (3) where x is the player’s natural predilection and a is their final decision. We will call this procedure RDC (regret decision criterion), to emphasize the counterfactual nature of this reasoning step and the idea of following or disobeying the agent’s intuition, which is motivated by the notion of regret. Remarkably, RDC accounts for the agents individuality and the fact that their natural inclination encodes valuable information about the confounders that also affect the payout. In the binary case, for example, assuming that X = 1 is the player’s natural choice at some time step, if E(YX=0 = 1|X = 1) is greater than E(YX=1 = 1|X = 1), this would imply that the player should refrain of playing machine X = 1 to play machine X = 0. Assuming one wants to implement an algorithm based on RDC, the natural question that arises is how the quantities entailed by Eq. 3 can be computed from data. For the factors in the form E(YX=1 = 1|X = 1), the consistency axiom [Pea00, pp. 229] implies that E(YX=1 = 1|X = 1) = E(Y = 1|X = 1), where the l.h.s. is estimable from observational data. 
Counterfactuals of the form E(YX=a = 1|X = x), where a ≠ x, can be computed in the binary case through algebraic means [Pea00, pp. 396-7]. For the general case, however, ETT is not computable without knowledge of the causal graph.4 Here, ETT will be computed in an alternative fashion, based on the idea of intention-specific randomization. The main idea is to randomize within intention-specific groups, namely, to interrupt the reasoning agent before they execute their choice, treat this choice as an intention, deliberate, and then act. We discuss the algorithmic implementation of this randomization next.

4 Applications & Experiments

Based on the previous discussion, we can revisit the greedy casino example from Section 2, apply RDC, and use the following inequality to guide the agent's decisions:

E(YX=0|X = 1) > E(YX=1|X = 1) ⇔ E(YX=0|X = 1) > P(Y|X = 1)    (4)

There are different ways of incorporating this heuristic into traditional bandit algorithms; we describe one such approach taking the Thompson Sampling algorithm as the basis [OB10, CL11, AG11]. (For simulation source code, see https://github.com/ucla-csl/mabuc.) Our proposed algorithm, Causal Thompson Sampling (TSC), takes the following steps. (1) TSC first accepts an observational distribution as input, which it then uses to seed estimates of ETT quantities; i.e., for actions a and intuition x, by consistency we may seed knowledge of E(YX=a|X = x) = Pobs(y|x), ∀a = x. With large samples from an input set of observations, this seeding reduces (and possibly eliminates) the need to explore the payout rates associated with following intuition, leaving only the "disobeying intuition" payout rates for the agent to learn. (2) At each time step, our oracle observes the agent's arm-choice predilection and then uses RDC to determine their best choice.5 Lastly, note that the seeding in (1) immediately improves the accuracy of our comparison between arms, viz. a superior arm will emerge more quickly than had we not seeded. We can exploit this early lead in accuracy by weighting the more favorable arm, making it more likely to be chosen earlier in the learning process (which empirically improves the convergence rate, as shown in the simulations).

4Graphical conditions for identifying ETT [Pea00, SP07] are orthogonal to the bandit problem studied in this paper, since no detailed knowledge about the causal graph (as well as infinite samples) is assumed here.

Algorithm 1 Causal Thompson Sampling (TSC)
1: procedure TSC(Pobs, T)
2:   E(YX=a|X) ← Pobs(y|X)   (seed distribution)
3:   for t = [1, ..., T] do
4:     x ← intuition(t)   (get intuition for trial)
5:     Q1 ← E(YX=x′|X = x)   (estimated payout for counter-intuition)
6:     Q2 ← P(y|X = x)   (estimated payout for intuition)
7:     w ← [1, 1]   (initialize weights)
8:     bias ← 1 − |Q1 − Q2|   (compute weighting strength)
9:     if Q1 > Q2 then w[x] ← bias else w[x′] ← bias   (choose arm to bias)
10:    a ← argmax(β(sM1,x, fM1,x) × w[1], β(sM2,x, fM2,x) × w[2])   (choose arm)6
11:    y ← pull(a)   (receive reward)
12:    E(YX=a|X = x) ← y|a, x   (update)

In the next section, we provide simulations to support the efficacy of TSC in the MABUC context. For simplicity, we present two simulation results for the model described in Section 2.7 Experiment 1 employs the "Greedy Casino" parameterization found in Table 1, whereas Experiment 2 employs the "Paradoxical Switching" parameterization found in Table 2. Each experiment compares the performance of traditional Thompson Sampling bandit players versus TSC.
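Algorithm 1 can be exercised end-to-end on the Greedy Casino parameterization. The sketch below is our own minimal reading of the pseudocode (per-intuition Beta posteriors, observational seeding, and the bias weighting of line 9), not the authors' released implementation; see their repository for the latter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Table 1a payout rates: payout[arm][(D, B)]; intuition x = D xor B (0 -> M1).
payout = [
    {(0, 0): 0.10, (0, 1): 0.50, (1, 0): 0.40, (1, 1): 0.20},
    {(0, 0): 0.50, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.40},
]
p_obs = [0.15, 0.15]  # observational seed P(y | X = x), from Table 1b

def tsc(T=2000, seed_strength=100):
    # s[a][x], f[a][x]: Beta successes/failures for arm a under intuition x.
    s = np.ones((2, 2))
    f = np.ones((2, 2))
    for x in (0, 1):  # seed the "obey intuition" arm from observational data
        s[x, x] += p_obs[x] * seed_strength
        f[x, x] += (1 - p_obs[x]) * seed_strength
    rewards, disobeyed = [], []
    for _ in range(T):
        d, b = rng.integers(2), rng.integers(2)
        x = d ^ b                                        # intuition for this trial
        q1 = s[1 - x, x] / (s[1 - x, x] + f[1 - x, x])   # counter-intuition estimate
        q2 = p_obs[x]                                    # intuition estimate
        w = np.ones(2)
        bias = 1 - abs(q1 - q2)
        if q1 > q2:
            w[x] = bias        # downweight the intuitive arm (line 9)
        else:
            w[1 - x] = bias    # downweight the counter-intuitive arm
        samples = [rng.beta(s[a, x], f[a, x]) * w[a] for a in (0, 1)]
        a = int(np.argmax(samples))                      # line 10
        y = int(rng.random() < payout[a][(d, b)])        # pull the arm
        s[a, x] += y
        f[a, x] += 1 - y
        rewards.append(y)
        disobeyed.append(a != x)
    return float(np.mean(rewards)), float(np.mean(disobeyed))

avg_reward, disobey_rate = tsc()
```

With the seeded estimate of 0.15 for the intuitive arm and a counter-intuition arm whose true rate is 0.45, the agent learns in our runs to disobey its intuition most of the time, pushing its average reward well above the 0.30 attainable by intuition-blind play.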
(a) D = 0 D = 1 B = 0 B = 1 B = 0 B = 1 X = M1 *0.40 0.30 0.30 *0.40 X = M2 0.60 *0.10 *0.20 0.60 (b) P(y|X) P(y|do(X)) X = M1 0.4 0.35 X = M2 0.15 0.375 Table 2: “Paradoxical Switching” parameterization. (a) Payout rates decided by reactive slot machines as a function of arm choice, sobriety, and machine conspicuousness. Players’ natural arm choices under D, B are indicated by asterisks. (b) Payout rates associated with the observational and experimental distributions, respectively. Procedure. All reported simulations are partitioned into rounds of T = 1000 trials averaged over N = 1000 Monte Carlo repetitions. At each time step in a single round, (1) values for the unobserved confounders (B, D) and observed intuitive arm choice (x) are selected by their respective structural equations (see Section 2), (2) the player observes the value of x, (3) the player chooses an arm based on their given strategy to maximize reward (which may or may not employ x), and finally, (4) the player receives a Bernoulli reward Y ∈{0, 1} and records the outcome. Furthermore, at the start of every round, players possess knowledge of the problem’s observational distribution, i.e., each player begins knowing P(Y |X) (see Table 2b). However, only causallyempowered strategies will be able to make use of this knowledge, since this distribution is not, as we’ve seen, the correct one to maximize. Candidate algorithms. Standard Thompson Sampling (TS) attempts to maximize rewards based on P(y|do(X)), ignoring the intuition x. Z-Empowered Thompson Sampling (TSZ) treats the 5Note that using predilection as a criteria for the inequality does not uniquely map to the contextual bandit problem. To understand this point, note that not all variables are equally legitimate for confounding control in causal settings, while the agent’s predilection is certainly one of such variables in our setup. 
Specifically, deciding whether a variable qualifies as a causal context requires a much deeper understanding of the data-generating model, which is usually not available in the general case.
[6] The notation β(s_{Mk,x}, f_{Mk,x}) means sampling from a Beta distribution with parameters equal to the successes (s_{Mk,x}) and failures (f_{Mk,x}) encountered when choosing action x on machine Mk.
[7] For additional experimental results and parameterizations, see the appendix of [BFP15].

Figure 3: Simulation results for Experiments 1 and 2 comparing standard Thompson Sampling (TS), Z-Empowered Thompson Sampling (TSZ), and Causal Thompson Sampling (TSC).

predilection as a new context variable, Z, and attempts to maximize based on P(y|do(X), Z) at each round. Causal Thompson Sampling (TSC), as described above, employs the ETT inequality and the input observational distribution.

Evaluation metrics. We assessed each algorithm's performance with standard bandit evaluation metrics: (1) the probability of choosing the optimal arm, and (2) cumulative regret. As in traditional bandit problems, these measures are recorded as a function of the time step t, averaged over all N round repetitions. Note, however, that traditional definitions of regret are not phrased in terms of unobserved confounders; our metrics, by contrast, compare each algorithm's chosen arm to the optimal arm for the given instantiation of Bt and Dt, even though these instantiations are never directly available to the players. We believe this is a fair operationalization of our evaluation metrics because it allows us to compare the regret experienced by our algorithms against a truly optimal (albeit hypothetical) policy that has access to the unobserved confounders.

Experiment 1: "Greedy Casino." The Greedy Casino parameterization (specified in Table 1) illustrates the scenario in which each arm's payout appears to be equivalent under the observational and experimental distributions alone.
Only by combining the two distributions and conditioning on a player's predilection can we obtain the optimal policy. Simulations for Experiment 1 support the efficacy of the causal approach (see Figure 3). Analyses revealed a significant difference in the regret experienced by TSZ (M = 11.03, SD = 15.96) compared to TSC (M = 0.94, SD = 15.39), t(999) = 14.52, p < .001. Standard TS was, predictably, not competitive, experiencing high regret (M = 150.47, SD = 14.09).

Experiment 2: "Paradoxical Switching." The Paradoxical Switching parameterization (specified in Table 2a) illustrates the scenario in which one arm (M1) appears superior under the observational distribution, but the other arm (M2) appears superior under the experimental one. Again, we must use causal analyses to resolve this ambiguity and obtain the optimal policy. Simulations for Experiment 2 also support the efficacy of the causal approach (see Figure 3). Analyses revealed a significant difference in the regret experienced by TSZ (M = 13.39, SD = 17.15) compared to TSC (M = 4.71, SD = 17.90), t(999) = 11.28, p < .001. Standard TS was, again predictably, not competitive, experiencing high regret (M = 83.56, SD = 15.75).

5 Conclusions

In this paper, we considered a new class of bandit problems with unobserved confounders (MABUC) that are arguably more realistic than traditional formulations. We showed that MABUC instances are not amenable to standard algorithms that rely solely on the experimental distribution. More fundamentally, this led to an understanding that in MABUC instances the optimization task is not attainable through estimation of the experimental distribution alone, but relies on both experimental and observational quantities rooted in counterfactual theory and based on the agents' predilections. To take advantage of our findings, we empowered the Thompson Sampling algorithm in two different ways. We first added a new rule capable of improving the choice of which arm to explore.
We then jump-started the algorithm by leveraging non-experimental (observational) data that is often available but overlooked. Simulations demonstrated that in general settings these changes lead to more effective decision-making, with faster convergence and lower regret.

References

[AG11] S. Agrawal and N. Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. CoRR, abs/1111.1797, 2011.
[AS95] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 322–331, Oct 1995.
[BCB12] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5:1–122, 2012.
[BFK10] R. Busa-Fekete and B. Kégl. Fast boosting using adversarial bandits. In J. Fürnkranz and T. Joachims, editors, 27th International Conference on Machine Learning (ICML 2010), pages 143–150, Haifa, Israel, June 2010.
[BFP15] E. Bareinboim, A. Forney, and J. Pearl. Bandits with unobserved confounders: A causal approach. Technical Report R-460, <http://ftp.cs.ucla.edu/pub/stat_ser/r460L.pdf>, Cognitive Systems Laboratory, UCLA, 2015.
[BS12] S. Bubeck and A. Slivkins. The best of both worlds: stochastic and adversarial bandits. CoRR, abs/1202.4473, 2012.
[CL11] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2249–2257. Curran Associates, Inc., 2011.
[DHK+11] Miroslav Dudík, Daniel Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, and Tong Zhang. Efficient optimal learning for contextual bandits. CoRR, abs/1106.2369, 2011.
[EDMM06] Eyal Even-Dar, Shie Mannor, and Yishay Mansour.
Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. J. Mach. Learn. Res., 7:1079–1105, December 2006.
[Fis51] R. A. Fisher. The Design of Experiments. Oliver and Boyd, Edinburgh, 6th edition, 1951.
[LR85] T. L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[LZ08] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 817–824. Curran Associates, Inc., 2008.
[OB10] P. A. Ortega and D. A. Braun. A minimum relative entropy principle for learning and acting. J. Artif. Int. Res., 38(1):475–511, May 2010.
[Pea00] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000. Second ed., 2009.
[Rob52] Herbert Robbins. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc., 58(5):527–535, 1952.
[SBCAY14] Yevgeny Seldin, Peter L. Bartlett, Koby Crammer, and Yasin Abbasi-Yadkori. Prediction with limited advice and multiarmed bandits with paid observations. In International Conference on Machine Learning, Beijing, China, 2014.
[Sco10] S. L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[Sli14] A. Slivkins. Contextual bandits with similarity information. J. Mach. Learn. Res., 15(1):2533–2568, January 2014.
[SP07] I. Shpitser and J. Pearl. What counterfactuals can be tested. In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI 2007), pages 352–359. AUAI Press, Vancouver, BC, Canada, 2007.
2015
186
5,687
Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients
Bo Xie1, Yingyu Liang2, Le Song1
1 Georgia Institute of Technology, bo.xie@gatech.edu, lsong@cc.gatech.edu
2 Princeton University, yingyul@cs.princeton.edu

Abstract

Nonlinear component analyses such as kernel Principal Component Analysis (KPCA) and kernel Canonical Correlation Analysis (KCCA) are widely used in machine learning, statistics and data analysis, but they cannot scale up to big datasets. Recent attempts have employed random feature approximations to convert the problem to the primal form for linear computational complexity. However, to obtain high-quality solutions, the number of random features should be of the same order of magnitude as the number of data points, making such approaches not directly applicable to the regime with millions of data points. We propose a simple, computationally efficient, and memory-friendly algorithm based on "doubly stochastic gradients" to scale up a range of kernel nonlinear component analyses, such as kernel PCA, CCA and SVD. Despite the non-convex nature of these problems, our method enjoys theoretical guarantees that it converges at the rate Õ(1/t) to the global optimum, even for the top k eigen subspace. Unlike many alternatives, our algorithm does not require explicit orthogonalization, which is infeasible on big datasets. We demonstrate the effectiveness and scalability of our algorithm on large-scale synthetic and real-world datasets.

1 Introduction

Scaling up nonlinear component analysis has been challenging due to prohibitive computation and memory requirements. Recently, methods such as Randomized Component Analysis (RCA) [12] have been able to scale to larger datasets by leveraging random feature approximations. Such methods approximate the kernel function with explicit random feature mappings, then perform the subsequent steps in the primal form, resulting in linear computational complexity.
Nonetheless, theoretical analysis [18, 12] shows that in order to get high-quality results, the number of random features should grow linearly with the number of data points. Experimentally, one often sees that the statistical performance of the algorithm improves as the number of random features increases.

Another approach to scaling up kernel component analysis is to use stochastic gradient descent and online updates [15, 16]. These stochastic methods have also been extended to the kernel case [9, 5, 8]. They require much less computation than their batch counterparts, converge at an O(1/t) rate, and are naturally applicable to the streaming data setting. Despite that, they share a severe drawback: all data points used in the updates need to be saved, rendering them impractical for large datasets.

In this paper, we propose to use "doubly stochastic gradients" for nonlinear component analysis. This technique is a general framework for scaling up kernel methods [6] for convex problems and has been successfully applied to many popular kernel machines such as kernel SVM, kernel ridge regression, and Gaussian processes. It uses two types of stochastic approximation simultaneously: random data points instead of the whole dataset (as in stochastic update rules), and random features instead of the true kernel functions (as in RCA). These two approximations lead to the following benefits:
(1) Computation efficiency. The key computation is the generation of a mini-batch of random features and their evaluation on a mini-batch of data points, which is very efficient.
(2) Memory efficiency. Instead of storing training data points, we keep only a small program for regenerating the random features, and sample previously used random features according to pre-specified random seeds. This leads to huge savings: the memory requirement up to step t is O(t), independent of the dimension of the data.
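The memory-saving trick in point (2) — regenerating random features from stored seeds instead of storing the features themselves — can be sketched as follows. The cosine feature map used here is the standard random Fourier construction, an assumption for illustration; the function name is hypothetical.

```python
import numpy as np

def random_feature(i, x, sigma=1.0, D=1):
    """Regenerate the i-th random Fourier feature from its seed alone.
    Nothing but the integer seed i needs to be stored between iterations."""
    rng = np.random.RandomState(i)  # pre-specified seed i
    omega = rng.normal(0.0, 1.0 / sigma, size=(D, x.shape[0]))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(omega @ x + b)

x = np.array([0.3, -1.2])
# Evaluating the same feature twice yields bit-identical values, so past
# features never have to be kept in memory.
print(np.array_equal(random_feature(7, x), random_feature(7, x)))
```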
(3) Adaptability. Unlike approaches that can only work with a number of random features fixed beforehand, the doubly stochastic approach is able to increase the model complexity by using more features as new data points arrive, and thus enjoys the advantage of nonparametric methods.

Although at first glance our method appears similar to the approach in [6], the two methods are fundamentally different. In [6], the authors address convex problems, whereas our problem is highly non-convex. The convergence result in [6] crucially relies on properties of convex functions, which do not translate to our problem. Instead, our analysis centers around the stochastic update of power iterations, which uses a different set of proof techniques.

In this paper, we make the following contributions.
General framework. We show that the general framework of doubly stochastic updates can be applied to various kernel component analysis tasks, including KPCA, KSVD, and KCCA.
Strong theoretical guarantee. We prove that the finite-time convergence rate of the doubly stochastic approach is Õ(1/t). This is a significant result since (1) the global convergence result concerns a non-convex problem, and (2) the guarantee is for update rules without explicit orthogonalization. Previous works require explicit orthogonalization, which is impractical for kernel methods on large datasets.
Strong empirical performance. Our algorithm can scale to datasets with millions of data points. Moreover, it can often find much better solutions thanks to the ability to use many more random features. We demonstrate these benefits on both synthetic and real-world datasets.

Since kernel PCA is a typical task, we focus on it in this paper and describe the other tasks in Section 4.3. Although we only state the guarantee for kernel PCA, the analysis naturally carries over to the other tasks.

2 Related work

Many efforts have been devoted to scaling up kernel methods.
The random feature approach [17, 18] approximates the kernel function with explicit random feature mappings and solves the problem in the primal form, thus circumventing the quadratic computational complexity. It has been applied to various kernel methods [11, 6, 12], among which the most related to our work is RCA [12]. One drawback of RCA is that its theoretical guarantees cover only kernel matrix approximation: they say nothing about how close the solution obtained from RCA is to the true solution. In contrast, we provide a finite-time convergence rate for how our solution approaches the true solution. In addition, even though a moderate number of random features can work well for tens of thousands of data points, datasets with tens of millions of data points require many more random features. Our online approach allows the number of random features, and hence the flexibility of the function class, to grow with the number of data points. This makes our method suitable for the data streaming setting, which is not possible with previous approaches.

Online algorithms for PCA have a long history. Oja proposed two stochastic update rules for approximating the first eigenvector and provided convergence proofs in [15, 16], respectively. These rules have been extended to the generalized Hebbian update rules [19, 20, 3] that compute the top k eigenvectors (the subspace case). Similar rules have also been derived from the perspective of optimization and stochastic gradient descent [20, 2], and further generalized to the kernel case [9, 5, 8]. However, online kernel PCA needs to store all the training data, which is impractical for large datasets. Our doubly stochastic method avoids this problem by using random features and keeping only a small program for regenerating previously used random features according to pre-specified seeds. As a result, it can scale up to tens of millions of data points.
Regarding finite-time convergence rates, [3] proved the O(1/t) rate for the top eigenvector in linear PCA using Oja's rule. For the same task, [21] proposed a noise-reduced PCA with a linear convergence rate, where the rate is in terms of epochs, i.e., the number of passes over the whole dataset. The noisy power method presented in [7] provides linear convergence for a subspace, although it only converges linearly to a constant error level. In addition, its updates require explicit orthogonalization, which is impractical for kernel methods. In comparison, our method converges at an O(1/t) rate for a subspace, without the need for orthogonalization.

Algorithm 1: {αi}_{i=1}^t = DSGD-KPCA(P(x), k)
Require: P(ω), φ_ω(x).
1: for i = 1, ..., t do
2:   Sample xi ∼ P(x).
3:   Sample ωi ∼ P(ω) with seed i.
4:   hi = Evaluate(xi, {αj}_{j=1}^{i−1}) ∈ R^k.
5:   αi = ηi φ_{ωi}(xi) hi.
6:   αj = αj − ηi (αj⊤ hi) hi, for j = 1, ..., i − 1.
7: end for

Algorithm 2: h = Evaluate(x, {αi}_{i=1}^t)
Require: P(ω), φ_ω(x).
1: Set h = 0 ∈ R^k.
2: for i = 1, ..., t do
3:   Sample ωi ∼ P(ω) with seed i.
4:   h = h + φ_{ωi}(x) αi.
5: end for

3 Preliminaries

Kernels and covariance operators. A kernel k(x, y) : X × X → R is a function that is positive definite (PD), i.e., for all n > 1, c1, ..., cn ∈ R, and x1, ..., xn ∈ X, we have ∑_{i,j=1}^n ci cj k(xi, xj) ≥ 0. A reproducing kernel Hilbert space (RKHS) F on X is a Hilbert space of functions from X to R. F is an RKHS if and only if there exists a kernel k(x, x′) such that ∀x ∈ X, k(x, ·) ∈ F, and ∀f ∈ F, ⟨f(·), k(x, ·)⟩_F = f(x). Given P(x) and k(x, x′) with RKHS F, the covariance operator A : F → F is a linear self-adjoint operator defined as A f(·) := E_x[f(x) k(x, ·)] for all f ∈ F; furthermore, ⟨g, A f⟩_F = E_x[f(x) g(x)] for all g ∈ F. Let F = (f1(·), f2(·), ..., fk(·)) be a list of k functions in the RKHS. We define the matrix-like notation A F(·) := (A f1(·), ..., A fk(·)), and F⊤ A F is a k × k matrix whose (i, j)-th element is ⟨fi, A fj⟩_F.
The outer product of a function v ∈ F defines a linear operator v v⊤ such that (v v⊤) f(·) := ⟨v, f⟩_F v(·) for all f ∈ F. Let V = (v1(·), ..., vk(·)) be a list of k functions; then a weighted sum of such operators can be denoted in matrix-like notation as V Σk V⊤ := ∑_{i=1}^k λi vi vi⊤, where Σk is a diagonal matrix with λi as its i-th diagonal entry.

Eigenfunctions and kernel PCA. A function v is an eigenfunction of A with corresponding eigenvalue λ if A v(·) = λ v(·). Given a set of eigenfunctions {vi} and associated eigenvalues {λi}, where ⟨vi, vj⟩_F = δij, we can write the eigendecomposition as A = V Σk V⊤ + V⊥ Σ⊥ V⊥⊤, where V is the list of the top k eigenfunctions, Σk is a diagonal matrix of the corresponding eigenvalues, V⊥ is the list of the remaining eigenfunctions, and Σ⊥ is a diagonal matrix of the remaining eigenvalues. Kernel PCA aims to identify the top k subspace V. In the finite-data case, the empirical covariance operator is A = (1/n) ∑_i k(xi, ·) ⊗ k(xi, ·). According to the representer theorem, vi = ∑_{j=1}^n αi^j k(xj, ·), where the {αi}_{i=1}^k ∈ R^n are weights over the data points. Using A v(·) = λ v(·) and the kernel trick, we obtain K αi = λi αi, where K is the n × n Gram matrix.

Random feature approximation. The random feature approximation for shift-invariant kernels k(x, y) = k(x − y), e.g., the Gaussian RBF kernel, relies on the identity k(x − y) = ∫_{R^d} e^{iω⊤(x−y)} dP(ω) = E[φ_ω(x) φ_ω(y)], since the Fourier transform of a PD function is nonnegative and can thus be considered a (scaled) probability measure [17]. We can therefore approximate the kernel function by an empirical average of samples from this distribution: k(x, y) ≈ (1/B) ∑_i φ_{ωi}(x) φ_{ωi}(y), where the {ωi}_{i=1}^B are i.i.d. samples drawn from P(ω). For the Gaussian RBF kernel, k(x − x′) = exp(−∥x − x′∥²/2σ²), this yields a Gaussian distribution P(ω) ∝ exp(−σ²∥ω∥²/2). See [17] for more details.
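The approximation above is easy to check numerically: for the Gaussian RBF kernel, drawing ω ∼ N(0, I/σ²) and averaging products of features recovers k(x, y) up to O(1/√B) Monte Carlo error. This sketch uses the cosine form of φ_ω, which is an assumption consistent with [17] rather than code from the paper.

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """Exact Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def rff_kernel(x, y, sigma, B, rng):
    """Monte Carlo estimate k(x,y) ~ (1/B) sum_i phi_{w_i}(x) phi_{w_i}(y)
    with w_i ~ N(0, I / sigma^2)."""
    d = x.shape[0]
    omega = rng.normal(0.0, 1.0 / sigma, size=(B, d))
    b = rng.uniform(0.0, 2 * np.pi, size=B)
    phi = lambda z: np.sqrt(2.0) * np.cos(omega @ z + b)
    return phi(x) @ phi(y) / B

rng = np.random.default_rng(0)
x, y = np.array([0.5, -0.2]), np.array([-0.1, 0.4])
exact = rbf_kernel(x, y, sigma=1.0)
approx = rff_kernel(x, y, sigma=1.0, B=20000, rng=rng)
print(abs(exact - approx))  # shrinks as O(1/sqrt(B))
```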
4 Algorithm

In this section, we describe an efficient algorithm based on "doubly stochastic gradients" to scale up kernel PCA. KPCA is essentially an eigenvalue problem in a functional space. Traditional approaches convert it to the dual form, leading to another eigenvalue problem whose size equals the number of training points, which is not scalable. Other approaches solve it in the primal form with stochastic functional gradient descent; however, these algorithms need to store all the training points seen so far, and quickly run into memory issues when working with millions of data points.

We propose to tackle the problem with "doubly stochastic gradients", in which we make two unbiased stochastic approximations. One source of stochasticity comes from sampling data points, as in stochastic gradient descent. The other comes from using random features to approximate the kernel.

One technical difficulty in designing doubly stochastic KPCA is the explicit orthogonalization step required in the update rules, which ensures that the top k eigenfunctions are orthogonal. This is infeasible for kernel methods on a large dataset, since it requires solving an increasingly larger KPCA problem in every iteration. To solve this problem, we formulate the orthogonality constraints as Lagrange multipliers, which leads to an Oja-style update rule. The new update enjoys small per-iteration complexity and converges to the ground-truth subspace.

We present the algorithm by first deriving the stochastic functional gradient update without random feature approximations, then introducing the doubly stochastic updates. For simplicity of presentation, the following description uses one data point and one random feature at a time, but typically a mini-batch of data points and random features is used in each iteration.

4.1 Stochastic functional gradient update

Kernel PCA can be formulated as the following non-convex optimization problem:

  max_G tr(G⊤ A G)  s.t. G⊤ G = I,   (1)

where G := (g1, ..., gk) and gi is the i-th function. Gradient descent on the Lagrangian leads to

  G_{t+1} = G_t + ηt (I − G_t G_t⊤) A G_t.

Using the stochastic approximation A_t f(·) = f(x_t) k(x_t, ·) for A, we have A_t G_t = k(x_t, ·) g_t⊤ and G_t⊤ A_t G_t = g_t g_t⊤, where g_t = (g_t^1(x_t), ..., g_t^k(x_t))⊤. Therefore, the update rule is

  G_{t+1} = G_t (I − ηt g_t g_t⊤) + ηt k(x_t, ·) g_t⊤.   (2)

This rule can also be derived using stochastic gradients and Oja's rule [15, 16].

4.2 Doubly stochastic update

The update rule (2) must store all the data points it has seen so far, which is impractical for large-scale datasets. To address this issue, we use the random feature approximation k(x, ·) ≈ φ_ω(x) φ_ω(·). Denoting by H_t the function obtained at iteration t, the update rule becomes

  H_{t+1} = H_t (I − ηt h_t h_t⊤) + ηt φ_{ω_t}(x_t) φ_{ω_t}(·) h_t⊤,   (3)

where h_t is the evaluation of H_t at the current data point: h_t = (h_t^1(x_t), ..., h_t^k(x_t))⊤. The specific updates in terms of the coefficients are summarized in Algorithms 1 and 2. Note that in theory new random features are drawn in each iteration, but in practice one can revisit these random features.

4.3 Extensions

Locating individual eigenfunctions. The algorithm only finds the eigen subspace, not necessarily the individual eigenfunctions. A modified version, the Generalized Hebbian Algorithm (GHA) [19], can be used for this purpose:

  G_{t+1} = G_t + ηt A_t G_t − ηt G_t UT[G_t⊤ A_t G_t],

where UT[·] is an operator that sets the lower-triangular part to zero.

Latent variable models and kernel SVD. Recently, spectral methods have been proposed to learn latent variable models with provable guarantees [1, 22], in which the key computation is SVD. Our algorithm can be straightforwardly extended to solve kernel SVD, with two simultaneous update rules. The algorithm is summarized in Algorithm 3. See the supplementary for derivation details.
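A minimal linear-case illustration of the Oja-style rule behind (2): taking the feature map to be the identity and using the full covariance A in place of the stochastic A_t, the update G ← G + η(I − GGᵀ)AG drives G toward the top-k eigenspace with no orthogonalization step. All settings here (dimensions, step size, iteration count) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Oja-style subspace rule from (2), linear full-batch case:
#   G <- G + eta * (I - G G^T) A G
# No orthogonalization is ever performed on G.
d, k = 6, 2
M = rng.normal(size=(d, d))
A = M @ M.T                                   # PSD "covariance" matrix
G = np.linalg.qr(rng.normal(size=(d, k)))[0]  # orthonormal initialization
eta = 0.1 / np.linalg.norm(A, 2)              # conservative step size

for _ in range(10000):
    G = G + eta * (np.eye(d) - G @ G.T) @ (A @ G)

# squared cosine of the largest principal angle to the true top-k eigenspace
V = np.linalg.eigh(A)[1][:, -k:]              # eigh: ascending eigenvalues
s = np.linalg.svd(V.T @ np.linalg.qr(G)[0], compute_uv=False)
print(s.min() ** 2)
```

Because the update direction is orthogonal to the columns of G whenever GᵀG = I, the iterate stays close to orthonormal without ever calling an orthogonalization routine, which is the point of the Oja-style formulation.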
Kernel CCA and generalized eigenvalue problems. Given two variables X and Y, CCA finds two projections such that the correlations between the two projected variables are maximized. This is equivalent to a generalized eigenvalue problem, which can also be solved in our framework. We present the updates for the coefficients in Algorithm 4, with derivation details in the supplementary.

Kernel sliced inverse regression. Kernel sliced inverse regression [10] performs sufficient dimension reduction, in which the low-dimensional representation found preserves the statistical correlation with the targets. It also reduces to a generalized eigenvalue problem, and has been shown to find the same subspace as KCCA [10].

Algorithm 3: DSGD-KSVD
Require: P(ω), φ_ω(x), k. Output: {αi, βi}_{i=1}^t
1: for i = 1, ..., t do
2:   Sample xi ∼ P(x). Sample yi ∼ P(y).
3:   Sample ωi ∼ P(ω) with seed i.
4:   ui = Evaluate(xi, {αj}_{j=1}^{i−1}) ∈ R^k.
5:   vi = Evaluate(yi, {βj}_{j=1}^{i−1}) ∈ R^k.
6:   W = ui vi⊤ + vi ui⊤.
7:   αi = ηi φ_{ωi}(xi) vi.
8:   βi = ηi φ_{ωi}(yi) ui.
9:   αj = αj − ηi W αj, for j = 1, ..., i − 1.
10:  βj = βj − ηi W βj, for j = 1, ..., i − 1.
11: end for

Algorithm 4: DSGD-KCCA
Require: P(ω), φ_ω(x), k. Output: {αi, βi}_{i=1}^t
1: for i = 1, ..., t do
2:   Sample xi ∼ P(x). Sample yi ∼ P(y).
3:   Sample ωi ∼ P(ω) with seed i.
4:   ui = Evaluate(xi, {αj}_{j=1}^{i−1}) ∈ R^k.
5:   vi = Evaluate(yi, {βj}_{j=1}^{i−1}) ∈ R^k.
6:   W = ui vi⊤ + vi ui⊤.
7:   αi = ηi φ_{ωi}(xi) [vi − W ui].
8:   βi = ηi φ_{ωi}(yi) [ui − W vi].
9: end for

5 Analysis

In this section, we provide finite-time convergence guarantees for our algorithm. As discussed in the previous section, explicit orthogonalization is not scalable in the kernel case; we therefore need to provide guarantees for the updates without orthogonalization. This challenge is even more prominent when using random features, since they introduce additional variance. Furthermore, our guarantees are w.r.t. the top k-dimensional subspace.
Although convergence without normalization has been established before for a top eigenvector [15, 16], the subspace case is complicated by the fact that there are k angles between two k-dimensional subspaces, and we need to bound the largest of them. To the best of our knowledge, our result is the first finite-time convergence result for a subspace without explicit orthogonalization.

Note that even though our algorithm appears similar to [6] on the surface, the underlying analysis is fundamentally different. The result in [6] only applies to convex problems, where every local optimum is a global optimum, while the problems we consider are highly non-convex. As a result, many of the techniques that [6] builds upon are not applicable.

Conditions and assumptions. We focus on the case when a good initialization V0 is given:

  V0⊤ V0 = I,  cos²θ(V, V0) ≥ 1/2.   (4)

In other words, we analyze the later stage of the convergence, which is typical in the literature (e.g., [21]). The early stage can be analyzed using established techniques (e.g., [3]). In practice, one can achieve a good initialization by solving a small RCA problem [12] with, e.g., thousands of data points and random features. Throughout the paper we suppose |k(x, x′)| ≤ κ and |φ_ω(x)| ≤ φ, and regard κ and φ as constants. Note that this holds for all the kernels and corresponding random features considered. We further regard the eigengap λk − λ_{k+1} as a constant, which is also true for typical applications and datasets.

5.1 Update without random features

Our guarantee is stated in terms of the cosine of the principal angle between the computed subspace and the ground-truth eigen subspace (also called the potential function):

  cos²θ(V, Gt) = min_w ∥V⊤ Gt w∥² / ∥Gt w∥².

Consider two different update rules, one with explicit orthogonalization and one without:

  F_{t+1} ← orth(F_t + ηt A_t F_t),
  G_{t+1} ← G_t + ηt (I − G_t G_t⊤) A_t G_t,

where A_t is the empirical covariance of a mini-batch. Our final guarantee for G_t is the following.
Theorem 1. Assume (4) and suppose the mini-batch sizes satisfy ∥A − A_i∥ < (λk − λ_{k+1})/8 for all 1 ≤ i ≤ t. Then there exist step sizes ηi = O(1/i) such that 1 − cos²θ(V, G_t) = O(1/t).

The convergence rate O(1/t) is of the same order as that of computing only the top eigenvector in linear PCA [3]. The bound requires the mini-batch size to be large enough that the spectral norm of A is approximated up to the order of the eigengap; this is because the increase of the potential is of the order of the eigengap. Similar terms appear in the analysis of the noisy power method [7], which, however, requires orthogonalization and is not suitable for the kernel case. We do not specify the mini-batch size, but by assuming suitable data distributions it is possible to obtain explicit bounds; see for example [23, 4].

Proof sketch. We first prove the guarantee for the orthogonalized subspace F_t, which is more convenient to analyze, and then show that the updates for F_t and G_t are first-order equivalent, so G_t enjoys the same guarantee. To do so, we require Lemmas 2 and 3 below.

Lemma 2. 1 − cos²θ(V, F_t) = O(1/t).

Let c²_t denote cos²θ(V, F_t); then a key step in proving the lemma is to show the recurrence

  c²_{t+1} ≥ c²_t (1 + 2ηt (λk − λ_{k+1} − 2∥A − A_t∥)(1 − c²_t)) − O(η²_t).   (5)

We need the mini-batch size large enough that 2∥A − A_t∥ is smaller than the eigengap. Another key element in the proof of the theorem is the first-order equivalence of the two update rules. To show this, we introduce F(G_t) ← orth(G_t + ηt A_t G_t) to denote the subspace obtained by applying the update rule of F_t to G_t. We show that the potentials of G_{t+1} and F(G_t) are close:

Lemma 3. cos²θ(V, G_{t+1}) = cos²θ(V, F(G_t)) ± O(η²_t).

The lemma means that applying the two update rules to the same input results in two subspaces with similar potentials. Then by (5) we have 1 − cos²θ(V, G_t) = O(1/t), which gives the theorem. The proof of Lemma 3 is based on the observation that cos²θ(V, X) = λmin(V⊤X(X⊤X)⁻¹X⊤V).
Comparing the Taylor expansions w.r.t. ηt for X = G_{t+1} and X = F(G_t) leads to the lemma.

5.2 Doubly stochastic update

The H_t computed in the doubly stochastic update is no longer in the RKHS, so the principal angle is not well defined. Instead, we compare the evaluations of functions from H_t and from the true principal subspace V at a point x. Formally, we show that for any function v ∈ V with unit norm ∥v∥_F = 1, there exists a function h in H_t such that for any x, the error err := |v(x) − h(x)|² is small with high probability. To do so, we introduce a companion update rule

  ˜G_{t+1} ← ˜G_t + ηt k(x_t, ·) h_t⊤ − ηt ˜G_t h_t h_t⊤,

which produces functions in the RKHS, but makes use of the function values h_t ∈ H_t, which lie outside the RKHS. Let w = ˜G_t⊤ v be the coefficients of v projected onto ˜G_t, h = H_t w, and z = ˜G_t w. Then the error decomposes as

  |v(x) − h(x)|² = |v(x) − z(x) + z(x) − h(x)|²
                 ≤ 2|v(x) − z(x)|² + 2|z(x) − h(x)|²
                 ≤ 2κ² ∥v − z∥²_F  (term I: Lemma 5)  +  2|z(x) − h(x)|²  (term II: Lemma 6).   (6)

By definition, ∥v − z∥²_F = ∥v∥²_F − ∥z∥²_F ≤ 1 − cos²θ(V, ˜G_t), so the first error term can be bounded by the guarantee on ˜G_t, which can be obtained by arguments similar to those of Theorem 1. For the second term, note that ˜G_t is defined in such a way that the difference between z(x) and h(x) is a martingale, which can be bounded by careful analysis.

Theorem 4. Assume (4) and suppose that for all 1 ≤ i ≤ t the mini-batch sizes satisfy ∥A − A_i∥ < (λk − λ_{k+1})/8 and are of order Ω(ln(t/δ)). Then there exist step sizes ηi = O(1/i) such that the following holds: if Ω(1) = λk(˜G_i⊤ ˜G_i) ≤ λ1(˜G_i⊤ ˜G_i) = O(1) for all 1 ≤ i ≤ t, then for any x and any function v in the span of V with unit norm ∥v∥_F = 1, with probability at least 1 − δ there exists h in the span of H_t satisfying

  |v(x) − h(x)|² = O((1/t) ln(t/δ)).

The point-wise error thus scales as Õ(1/t) with the step t.
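The potential function used throughout this analysis can be computed directly from the identity cos²θ(V, X) = λmin(V⊤X(X⊤X)⁻¹X⊤V) quoted above. The following small numpy sketch (our own, with hypothetical names) checks two basic properties: the potential is 1 for any basis of the same span, and lies strictly between 0 and 1 for a generic subspace.

```python
import numpy as np

def potential(V, X):
    # cos^2 of the largest principal angle between span(V) and span(X),
    # via cos^2(theta) = lambda_min(V^T X (X^T X)^{-1} X^T V);
    # V is assumed orthonormal, X merely of full column rank.
    P = X @ np.linalg.solve(X.T @ X, X.T)  # orthogonal projector onto span(X)
    return np.linalg.eigvalsh(V.T @ P @ V).min()

rng = np.random.default_rng(1)
V = np.linalg.qr(rng.normal(size=(5, 2)))[0]
p_same = potential(V, V @ rng.normal(size=(2, 2)))  # same span, new basis
p_rand = potential(V, rng.normal(size=(5, 2)))      # generic subspace
print(p_same, p_rand)
```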
Besides the condition that ∥A − A_i∥ is up to the order of the eigengap, we additionally need the random features to approximate the kernel function up to constant accuracy on all the data points up to time t, which eventually leads to the Ω(ln(t/δ)) mini-batch sizes. Finally, we need ˜G_i⊤ ˜G_i to be roughly isotropic, i.e., ˜G_i to be roughly orthonormal. Intuitively, this should be true for the following reasons: ˜G_0 is orthonormal; the update for ˜G_t is close to that for G_t, which in turn is close to F_t, which is orthonormal.

[Figure 1: (a) Convergence of our algorithm on the synthetic dataset; it is on par with the Õ(1/t) rate denoted by the dashed red line. (b) Recovery of the top three eigenfunctions; our algorithm (in red) matches the ground truth (dashed blue).]

[Figure 2: Visualization of the molecular space dataset by the first two principal components. Bluer dots represent lower PCE values, redder dots higher PCE values. (a) Kernel PCA; (b) linear PCA. (Best viewed in color.)]

Proof sketch. In order to bound term I in (6), we show:

Lemma 5. 1 − cos²θ(V, ˜G_t) = O((1/t) ln(t/δ)).

This is proved by arguments similar to those used for the recurrence (5), except with an additional error term caused by the fact that the update rule for ˜G_{t+1} uses the evaluation h_t(x_t) rather than ˜g_t(x_t). Bounding this additional term relies on bounding the difference h_t(x) − ˜g_t(x), which is also what we need for bounding term II in (6). For this, we show:

Lemma 6. For any x and unit vector w, with probability ≥ 1 − δ over (D_t, ω_t), |˜g_t(x)w − h_t(x)w|² = O((1/t) ln(t/δ)).

The key to proving this lemma is that our construction of ˜G_t ensures that the difference between ˜g_t(x)w and h_t(x)w consists of their differences in each time step.
Furthermore, the difference forms a martingale and thus can be bounded by Azuma's inequality. See the supplementary material for details.

6 Experiments

Synthetic dataset with analytical solution We first verify the convergence rate of DSGD-KPCA on a synthetic dataset with an analytical solution for the eigenfunctions [24]. If the data follow a Gaussian distribution and we use a Gaussian kernel, the eigenfunctions are given by the Hermite polynomials. We generated 1 million such 1-D data points and ran DSGD-KPCA with a total of 262144 random features. In each iteration, we use a data mini-batch of size 512 and a random-feature mini-batch of size 128. After all random features are generated, we revisit and adjust the coefficients of existing random features. The kernel bandwidth is set to the true bandwidth. The step size is scheduled as $\eta_t = \theta_0/(1 + \theta_1 t)$, where $\theta_0$ and $\theta_1$ are two parameters. We use a small $\theta_1 \approx 0.01$ so that in early stages the step size is large enough to arrive at a good initial solution. Figure 1a shows the convergence rate of the proposed algorithm seeking the top $k = 3$ subspace. The potential function is the squared sine of the principal angle. We can see the algorithm indeed converges at the rate $O(1/t)$. Figure 1b shows the recovered top $k = 3$ eigenfunctions compared with the ground truth. The solution coincides with one eigenfunction and deviates only slightly from the other two.

Kernel PCA visualization on molecular space dataset The MolecularSpace dataset contains 2.3 million molecular motifs [6]. We are interested in visualizing the dataset with KPCA. The data are represented by sorted Coulomb matrices of size 75 × 75 [14]. Each molecule also has an attribute called power conversion efficiency (PCE). We use a Gaussian kernel with bandwidth chosen by the “median trick”. We ran kernel PCA with a total of 16384 random features, with a feature mini-batch size of 512 and a data mini-batch size of 1024. We ran 4000 iterations with step size $\eta_t = 1/(1 + 0.001 t)$.
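The experimental pipeline just described can be sketched in a few lines. The following is a minimal, scaled-down illustration (not the authors' implementation): it approximates a Gaussian kernel with random Fourier features and runs a mini-batch Oja-style subspace update with the step-size schedule $\eta_t = \theta_0/(1 + \theta_1 t)$; the dimensions, batch sizes, and iteration count are placeholders far smaller than the settings reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features for a Gaussian kernel with bandwidth sigma:
# phi(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, I / sigma^2), b ~ U[0, 2*pi].
d, D, k, sigma = 1, 256, 3, 1.0
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

G = np.linalg.qr(rng.normal(size=(D, k)))[0]  # orthonormal initial coefficients

for t in range(1, 501):
    X = rng.normal(size=(512, d))              # data mini-batch
    Phi = features(X)
    H = Phi @ G                                # evaluations of the current functions
    eta = 1.0 / (1.0 + 0.01 * t)               # eta_t = theta0 / (1 + theta1 * t)
    # Mini-batch Oja-style update toward the top-k subspace in feature space
    G = G + eta * (Phi.T @ H - G @ (H.T @ H)) / len(X)
```

The update is the classical Oja subspace rule applied to the empirical feature covariance of each mini-batch; the decaying step size keeps the coefficient matrix roughly orthonormal, in line with the isotropy condition discussed above.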
Figure 2 presents a visualization obtained by projecting the data onto the top two principal components. Compared with linear PCA, KPCA shrinks the distances between the clusters and brings out the important structures in the dataset. We can also see that higher PCE values tend to lie towards the center of the ring structure.

Nonparametric Latent Variable Model We learn latent variable models with DSGD-KSVD using one million data points [22], achieving higher-quality solutions compared with two other approaches. The dataset consists of two latent components: one is a Gaussian distribution and the other a Gamma distribution with shape parameter $\alpha = 1.2$. DSGD-KSVD uses a total of 8192 random features, with a feature mini-batch of size 256 and a data mini-batch of size 512. We compare with 1) random Fourier features and 2) random Nyström features, both with a fixed set of 2048 functions [12].

Figure 3: Recovered latent components: (a) DSGD-KSVD, (b) 2048 random features, (c) 2048 Nyström features.

Figure 4: Comparison on the KUKA dataset (mean squared error vs. random feature dimension, for random Fourier features, Nyström features, and DSGD).

Figure 3 shows the learned conditional distributions for each component. We can see DSGD-KSVD achieves almost perfect recovery, while the Fourier and Nyström random-feature methods either confuse high-density areas or incorrectly estimate the spread of the conditional distributions.

KCCA on MNIST8M We compare our algorithm on MNIST8M in the KCCA task, which consists of 8.1 million digits and their transformations.
We divide each image into the left and right halves and learn their correlations. The evaluation criterion is the total correlation on the top $k = 50$ canonical correlation directions, measured on a separate test set of size 10000. We compare with 1) random Fourier features and 2) random Nyström features, on both total correlation and running time. Our algorithm uses a total of 20480 features, with feature mini-batches of size 2048 and data mini-batches of size 1024, for 3000 iterations. The kernel bandwidth is set using the “median trick” and is the same for all methods. All algorithms are run 5 times, and the mean is reported. The results are presented in Table 1. Our algorithm achieves the best test-set correlations in running time comparable to random Fourier features. This is especially significant for random Fourier features, since the running time would increase by almost four times if the number of features were doubled. In addition, Nyström features generally achieve better results than Fourier features since they are data-dependent. We can also see that for large datasets, it is important to use more random features for better performance.

Table 1: KCCA results on MNIST8M (top 50 largest correlations)

  # of feat   Random features       Nyström features
              corrs.   minutes      corrs.   minutes
  256         25.2     3.2          30.4     3.0
  512         30.7     7.0          35.3     5.1
  1024        35.3     13.9         38.0     10.1
  2048        38.8     54.3         41.1     27.0
  4096        41.5     186.7        42.7     71.0

  DSGD-KCCA (20480 features): corrs. 43.5, 183.2 minutes
  Linear CCA: corrs. 27.4, 1.1 minutes

Kernel sliced inverse regression on KUKA dataset We evaluate our algorithm in the setting of kernel sliced inverse regression [10], a way to perform sufficient dimension reduction (SDR) for high-dimensional regression. After performing SDR, we fit a linear regression model using the projected input data and evaluate the mean squared error (MSE). The dataset records rhythmic motions of a KUKA arm at various speeds, representing realistic settings for robots [13].
We use a variant that contains 2 million data points generated by the SL simulator. The KUKA robot has 7 joints, and the high-dimensional regression problem is to predict the torques from the positions, velocities and accelerations of the joints. The input has 21 dimensions while the output has 7 dimensions. Since there are seven independent joints, we set the reduced dimension to seven. We randomly select 20% as the test set and, out of the remaining training set, randomly choose 5000 points as a validation set to select step sizes. The total number of random features is 10240, with both the feature mini-batch and the data mini-batch of size 1024. We run a total of 2000 iterations using step size $\eta_t = 15/(1 + 0.001 t)$. Figure 4 shows the regression errors for the different methods. The error decreases with more random features, and our algorithm achieves the lowest MSE by using 10240 random features. Nyström features do not perform as well in this setting, probably because the spectrum decreases slowly (there are seven independent joints); Nyström features are known to work well for quickly decreasing spectra.

Acknowledgments The research was supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR N00014-15-1-2340, NSF IIS-1218749, NSF CAREER IIS-1350983, NSF CCF-0832797, CCF-1117309, CCF-1302518, DMS-1317308, a Simons Investigator Award, and a Simons Collaboration Grant.

References

[1] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y.-K. Liu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. CoRR, abs/1204.6703, 2012.
[2] R. Arora, A. Cotter, and N. Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, pages 1815–1823, 2013.
[3] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In Advances in Neural Information Processing Systems, pages 3174–3182, 2013.
[4] T. T. Cai and H. H. Zhou.
Optimal rates of convergence for sparse covariance matrix estimation. The Annals of Statistics, 40(5):2389–2420, 2012.
[5] T.-J. Chin and D. Suter. Incremental kernel principal component analysis. IEEE Transactions on Image Processing, 16(6):1662–1674, 2007.
[6] B. Dai, B. Xie, N. He, Y. Liang, A. Raj, M.-F. F. Balcan, and L. Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[7] M. Hardt and E. Price. The noisy power method: A meta algorithm with applications. In Advances in Neural Information Processing Systems, pages 2861–2869, 2014.
[8] P. Honeine. Online kernel principal component analysis: A reduced-order model. IEEE Trans. Pattern Anal. Mach. Intell., 34(9):1814–1826, 2012.
[9] K. Kim, M. O. Franz, and B. Schölkopf. Iterative kernel principal component analysis for image modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(9):1351–1366, 2005.
[10] M. Kim and V. Pavlovic. Covariance operator based dimensionality reduction with extension to semi-supervised settings. In International Conference on Artificial Intelligence and Statistics, pages 280–287, 2009.
[11] Q. Le, T. Sarlos, and A. J. Smola. Fastfood: Computing Hilbert space expansions in loglinear time. In International Conference on Machine Learning, 2013.
[12] D. Lopez-Paz, S. Sra, A. Smola, Z. Ghahramani, and B. Schölkopf. Randomized nonlinear component analysis. In International Conference on Machine Learning (ICML), 2014.
[13] F. Meier, P. Hennig, and S. Schaal. Incremental local Gaussian regression. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 972–980. Curran Associates, Inc., 2014.
[14] G. Montavon, K. Hansen, S. Fazli, M. Rupp, F. Biegler, A. Ziehe, A. Tkatchenko, A. von Lilienfeld, and K.-R. Müller. Learning invariant representations of molecules for atomization energy prediction.
In Neural Information Processing Systems, pages 449–457, 2012.
[15] E. Oja. A simplified neuron model as a principal component analyzer. J. Math. Biology, 15:267–273, 1982.
[16] E. Oja. Subspace Methods of Pattern Recognition. John Wiley and Sons, New York, 1983.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[18] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Neural Information Processing Systems, 2009.
[19] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward network. Neural Networks, 2:459–473, 1989.
[20] N. N. Schraudolph, S. Günter, and S. V. N. Vishwanathan. Fast iterative kernel PCA. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge, MA, June 2007. MIT Press.
[21] O. Shamir. A stochastic PCA algorithm with an exponential convergence rate. arXiv preprint arXiv:1409.2848, 2014.
[22] L. Song, A. Anandkumar, B. Dai, and B. Xie. Nonparametric estimation of multi-view latent variable models. In International Conference on Machine Learning (ICML), 2014.
[23] R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, 25(3):655–686, 2012.
[24] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In P. Langley, editor, Proc. Intl. Conf. Machine Learning, pages 1159–1166, San Francisco, California, 2000. Morgan Kaufmann Publishers.
A Market Framework for Eliciting Private Data

Bo Waggoner, Harvard SEAS, bwaggoner@fas.harvard.edu
Rafael Frongillo, University of Colorado, raf@colorado.edu
Jacob Abernethy, University of Michigan, jabernet@umich.edu

Abstract We propose a mechanism for purchasing information from a sequence of participants. The participants may simply hold data points they wish to sell, or may have more sophisticated information; either way, they are incentivized to participate as long as they believe their data points are representative or their information will improve the mechanism's future prediction on a test set. The mechanism, which draws on the principles of prediction markets, has a bounded budget and minimizes generalization error for Bregman divergence loss functions. We then show how to modify this mechanism to preserve the privacy of participants' information: at any given time, the current prices and predictions of the mechanism reveal almost no information about any one participant, yet in total over all participants, information is accurately aggregated.

1 Introduction

A firm that relies on the ability to make difficult predictions can gain a lot from a large collection of data. The goal is often to estimate values y ∈ Y given observations x ∈ X according to an appropriate class of hypotheses F describing the relationship between x and y (for example, y = a · x + b for linear regression). In classic statistical learning theory, the goal is formalized as attempting to approximately solve
$$\min_{f \in \mathcal{F}} \; \mathbb{E}_{x,y}\, \mathrm{Loss}(f; (x, y)) \quad (1)$$
where Loss(·) is an appropriate loss function and (x, y) is drawn from an unknown distribution. In the present paper we are concerned with the case in which the data are not drawn or held by a central authority but are instead inherently distributed.
By this we mean that the data is (disjointly) partitioned across a set of agents, with agent i privately possessing some portion of the dataset Si, and agents have no obvious incentive to reveal this data to the firm seeking it. The vast swaths of data available in our personal email accounts could provide massive benefits to a range of companies, for example, but users are typically loath to provide account credentials, even when asked politely. We will be concerned with the design of financial mechanisms that provide a community of agents, each holding a private set of data, an incentive to contribute to the solution of a large learning or prediction task. Here we use the term 'mechanism' to mean an algorithmic interface that can receive and answer queries, as well as engage in monetary exchange (deposits and payouts). Our aim will be to design such a mechanism that satisfies the following three properties:

1. The mechanism is efficient in that it approaches a solution to (1) as the amount of data and participation grows, while spending a constant, fixed total budget.
2. The mechanism is incentive-compatible in the sense that agents are rewarded when their contributions provide marginal value in terms of improved hypotheses, and are not rewarded for bad or misleading information.
3. The mechanism provides reasonable privacy guarantees, so that no agent j (or outside observer) can manipulate the mechanism in order to infer the contributions of agent i ≠ j.

Ultimately we would like our mechanism to approach the performance of a learning algorithm that had direct access to all the data, while spending only a constant budget to acquire data and improve predictions, and while protecting participants' privacy. Our construction relies on the recent surge of literature on prediction markets [13, 14, 19, 20], popular for some time in the field of economics and recently studied in great detail in computer science [8, 16, 6, 15, 18, 1].
A prediction market is a mechanism designed for the purpose of information aggregation, particularly when there is some underlying future event about which many members of the population may have private and useful information. For instance, it may elicit predictions about which team will win an upcoming sporting event, or which candidate will win an election. These predictions are eventually scored on the actual outcome of the event. Applying these prediction market techniques allows participants to essentially “trade in a market” based on their data. (This approach is similar to prior work on crowdsourcing contests [3].) Members of the population have private information, just as with prediction markets — in this case, data points or beliefs — and the goal is to incentivize them to reveal and aggregate that information into a final hypothesis or prediction. Their final profits are tied to the outcome of a test set of data, with each participant being paid in accordance with how much their information improved the performance on the test set. Our techniques depart from the framework of [3] in two significant aspects: (a) we focus on the particular problem of data aggregation, and most of our results take advantage of kernel methods; and (b) our mechanisms are the first to combine differential privacy guarantees with data aggregation in a prediction-market framework. This framework will provide efficiency and truthfulness. We will also show how to achieve privacy in many scenarios. We will give mechanisms where the prices and predictions published satisfy (ϵ, δ)-differential privacy [10] with respect to each participant’s data. The mechanism’s output can still give reasonable predictions while no observer can infer much about any participant’s input data. 2 Mechanisms for Eliciting and Aggregating Data We now give a broad description of the mechanism we will study. 
In brief, we imagine a central authority (the mechanism, or market) maintaining a hypothesis f^t representing the current aggregation of all the contributions made thus far. A new (or returning) participant may query f^t at no cost, perhaps evaluating the quality of its predictions on a privately-held dataset, and can then propose an update df^{t+1} to f^t, possibly requiring an investment (a “bet”). Bets are evaluated at the close of the market, when a true data sample is generated (analogous to a test set), and payouts are distributed according to the quality of the updates. After describing this initial framework as Mechanism 1, which is based loosely on the setting of [3], we turn our attention to the special case in which our hypotheses must lie in a reproducing kernel Hilbert space (RKHS) [17] for a given kernel k(·, ·). This kernel-based “nonparametric mechanism” is particularly well suited to the problem of data aggregation, as the betting space of the participants consists essentially of updates of the form df^t = α_t k(z_t, ·), where z_t is the data object offered by the participant and α_t ∈ R is the “magnitude” of the bet. A drawback of Mechanism 1 is the lack of privacy guarantees associated with the betting protocol: utilizing one's data to make bets or investments in the mechanism can lead to a loss of privacy for the owner of that data. When a participant submits a bet of the form df^t = α_t k(z_t, ·), where z_t could contain sensitive personal information, another participant may be able to infer z_t by querying the mechanism. One of the primary contributions of the present work, detailed in Section 3, is a technique that allows productive participation in the mechanism while maintaining a guarantee on the privacy of the submitted data.

2.1 The General Template

There is a space of examples X × Y, where x ∈ X are features and y ∈ Y are labels.
The mechanism designer chooses a function space F consisting of functions f : X × Y → R, assumed to have Hilbert space structure; one may view F as either the hypothesis class or the associated loss class, that is, where f_h(x, y) measures the loss/performance of hypothesis h on observation x and label y. In either case we will refer to f ∈ F as a hypothesis, eliding the distinction between f_h and h.

The pricing scheme of the mechanism relies on a convex cost function C_x(·) : F → R, which is parameterized by elements x ∈ X but whose domain is the set of hypotheses F. The cost function is publicly available and determined in advance. The interaction with the mechanism is a sequential process of querying and betting. On round t − 1 the mechanism publishes a hypothesis f^{t−1}, the “state” of the market, which participants may query. Each participant arrives sequentially, and on round t a participant may place a “bet” df^t ∈ F, also called a “trade” or “update”, modifying the hypothesis f^{t−1} → f^t = f^{t−1} + df^t. Finally, participation ends and the mechanism samples (or reveals) a test example¹ (x, y) from the underlying distribution and pays (or charges) each participant according to the relative performance of their marginal contributions. Precisely, the total reward for participant t's bet df^t is the value df^t(x, y) minus the cost C_x(f^t) − C_x(f^{t−1}).

Mechanism 1: The Market Template
  MARKET announces f^0 ∈ F
  for t = 1, 2, . . . , T do
    PARTICIPANT may query the functions ∇_f C_x(f^{t−1}) and f^{t−1}(x, y) for examples (x, y)
    PARTICIPANT t may submit a bet df^t ∈ F to MARKET
    MARKET updates the state f^t = f^{t−1} + df^t
  MARKET observes a true sample (x, y)
  for t = 1, 2, . . . , T do
    PARTICIPANT t receives payment df^t(x, y) + C_x(f^{t−1}) − C_x(f^t)

The design of cost-function prediction markets has been an area of active research over the past several years, starting with [8] and continuing with many further refinements and generalizations [1, 6, 15].
The general idea is that the mechanism can efficiently provide price quotes via a function C(·) which acts as a potential on the space of outstanding shares; see [1] for a thorough review. In the present work we have added an additional twist, which is that the function C_x(·) is given an additional parameterization by the observation x. We will not dive too deeply into the theoretical aspects of this generalization, but it is a straightforward extension of existing theory.

Key special case: exponential family mechanism. For those more familiar with statistics and machine learning, there is a natural and canonical family of problems that can be cast within the general framework of Mechanism 1, which we will call the exponential family prediction mechanism following [2]. Assume that F can be parameterized as F = {f_θ : θ ∈ R^d}, that we are given a sufficient-statistics summary function φ : X × Y → R^d, and that function evaluation is given by f_θ(x, y) = ⟨θ, φ(x, y)⟩. We let
$$C_x(f) := \log \int_{\mathcal{Y}} \exp(f(x, y))\,dy, \quad \text{so that} \quad C_x(f_\theta) = \log \int_{\mathcal{Y}} \exp(\langle \theta, \varphi(x, y)\rangle)\,dy.$$
In other words, we have chosen our mechanism to encode a particular exponential family model, with C_x(·) chosen as the conditional log-partition function over the distribution on y given x. If the market has settled on a function f_θ, then one may interpret the aggregate market belief on the distribution over X × Y as
$$p_\theta(x, y) = \exp(\langle \theta, \varphi(x, y)\rangle - A(\theta)), \quad \text{where} \quad A(\theta) = \log \int_{\mathcal{X}\times\mathcal{Y}} \exp(\langle \theta, \varphi(x, y)\rangle)\,dx\,dy.$$
How may we view this as a “market aggregate” belief? Notice that if a trader observes the market state f_θ and she is considering a bet of the form df = f_{θ′} − f_θ, the eventual profit will be
$$f_{\theta'}(x, y) - f_\theta(x, y) + C_x(f_\theta) - C_x(f_{\theta'}) = \log \frac{p_{\theta'}(y \mid x)}{p_\theta(y \mid x)}.$$
That is, the profit is precisely the conditional log-likelihood ratio of the update θ → θ′.

Example: Logistic regression. Let X = R^k, Y = {−1, 1}, and take F to be the set of functions f_θ(x, y) = y · (θ^⊤x) for θ ∈ R^k.
Then by our construction, C_x(f) = log(exp(f(x, 1)) + exp(f(x, −1))) = log(exp(θ^⊤x) + exp(−θ^⊤x)), and we let f^0 = f_0 ≡ 0. The payoff of a participant placing a bet which moves the market state to f^1 = f_θ, upon outcome (x, y), is
$$f_\theta(x, y) + C_x(f_0) - C_x(f_\theta) = y\theta^\top x + \log 2 - \log(\exp(\theta^\top x) + \exp(-\theta^\top x)) = \log 2 - \log(1 + \exp(-2y\theta^\top x)),$$
which is simply the negative logistic loss of the parameter choice 2θ. A participant wishing to maximize profit under a belief distribution p(x, y) should therefore choose θ via logistic regression,
$$\theta^* = \arg\min_\theta \; \mathbb{E}_{(x,y)\sim p}\left[\log(1 + \exp(-2y\theta^\top x))\right]. \quad (2)$$

¹This can easily be extended to a test set by taking the average performance over the test set.

2.2 Properties of the Market

We next describe two nice properties of Mechanism 1: incentive compatibility and bounded budget. Recall that, for the exponential family markets discussed above, a trader moving the market hypothesis from f^{t−1} to f^t is compensated according to the conditional log-likelihood ratio of f^{t−1} and f^t on the test data point. The implication is that traders are incentivized to minimize the KL divergence between the market's estimate of the distribution and the true underlying distribution. We refer to this property as incentive compatibility because traders' interests are aligned with the mechanism designer's. This property indeed holds generally for Mechanism 1, where the KL divergence is replaced with a general Bregman divergence corresponding to the Fenchel conjugate of C_x(·); see Proposition 1 in the appendix for details. Given that the mechanism must make a sequence of (possibly negative) payments to traders, a natural question is whether there is the potential for a large downside for the mechanism in terms of total payment (budget). In the context of the exponential family mechanism, this question is easy to answer: after a sequence of bets moving the market state parameter θ_0 → θ_1 → . . .
→ θ_final, the total loss to the mechanism corresponds to the total payouts made to traders,
$$\sum_i \left[ f_{\theta_{i+1}}(x, y) - f_{\theta_i}(x, y) + C_x(f_{\theta_i}) - C_x(f_{\theta_{i+1}}) \right] = \log \frac{p_{\theta_{\mathrm{final}}}(y \mid x)}{p_{\theta_0}(y \mid x)};$$
that is, the worst-case loss is exactly the worst-case conditional log-likelihood ratio. In the context of logistic regression this quantity can always be guaranteed to be no more than log 2 as long as the initial parameter is set to θ = 0. For Mechanism 1 more generally, one has tight bounds on the worst-case loss following from such results for prediction markets [1, 8], and we give a more detailed statement in Proposition 2 in the appendix.

Price sensitivity parameter λ_C. In choosing the cost-function family C = {C_x : x ∈ X}, an important consideration is the “scale” of each C_x, or how quickly changes in the market hypothesis f^t translate to changes in the “instantaneous prices” ∇C_x(f^t) (which give the marginal cost of an infinitesimal bet df^{t+1}). Formally, this is captured by the price sensitivity λ_C, defined as the upper bound on the operator norm (with respect to the L1 norm) of the Hessian of the cost function C_x (over all x). A choice of small λ_C translates to a small worst-case budget required by the mechanism. However, it means that the market prices are sensitive, in that the same update df^t changes the prices much more quickly. When we consider protecting the privacy of trader updates in Section 3, we will see that privacy imposes restrictions on the price sensitivity.

2.3 A Nonparametric Mechanism via Kernel Methods

The framework we have discussed thus far involves a general function space F as the “state” of the mechanism, with the contributions by participants taking the form of modifications to these functions. One of the downsides of this generic template is that participants may not be able to reason about F, and they may have information about the optimal f only through their own privately-held dataset S ⊂ X × Y.
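The logistic market and its telescoping budget are easy to check numerically. The sketch below (illustrative, with made-up bets) instantiates Mechanism 1 for the logistic case f_θ(x, y) = y · (θ^⊤x), pays each participant df^t(x, y) + C_x(f^{t−1}) − C_x(f^t), and verifies that the total payout equals the conditional log-likelihood ratio between the final and initial market states.

```python
import numpy as np

def C(theta, x):
    # Conditional log-partition over y in {-1, +1}: log(e^{theta.x} + e^{-theta.x})
    s = float(theta @ x)
    return float(np.logaddexp(s, -s))

def run_market(bets, x, y, theta0):
    """Pay each participant df^t(x, y) + C_x(f^{t-1}) - C_x(f^t)."""
    theta, payments = theta0, []
    for d in bets:
        new_theta = theta + d
        payments.append(y * float(d @ x) + C(theta, x) - C(new_theta, x))
        theta = new_theta
    return payments, theta

rng = np.random.default_rng(0)
x, y, theta0 = rng.normal(size=4), 1, np.zeros(4)
bets = [rng.normal(scale=0.3, size=4) for _ in range(5)]  # made-up trades
payments, theta_final = run_market(bets, x, y, theta0)

def loglik(theta):  # log p_theta(y | x)
    return y * float(theta @ x) - C(theta, x)

# Total payout telescopes to the conditional log-likelihood ratio, and with
# theta0 = 0 it can never exceed log 2 (the bounded-budget guarantee):
assert np.isclose(sum(payments), loglik(theta_final) - loglik(theta0))
assert sum(payments) <= np.log(2) + 1e-9
```

The second assertion holds because log p_θ0(y|x) = −log 2 at θ_0 = 0 while log p_θfinal(y|x) ≤ 0.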
A more specific class of functions would be those parameterized by actual data. This brings us to a well-studied type of nonparametric hypothesis class, namely the reproducing kernel Hilbert space (RKHS). We can design a market based on an RKHS, which we will refer to as a kernel market, bringing together a number of ideas including recent work of [21] as well as kernel exponential families [4]. We have a positive semidefinite kernel k : Z × Z → R and an associated reproducing kernel Hilbert space F, with basis {f_z(·) = k(z, ·) : z ∈ Z}. The reproducing property states that for all f ∈ F, ⟨f, k(z, ·)⟩ = f(z). Each hypothesis f ∈ F can thus be expressed as f(·) = Σ_s α_s k(z_s, ·) for some collection of points {(α_s, z_s)}.

The kernel approach has several nice properties. One is a natural extension of the exponential family mechanism using an RKHS as a building block of the class of exponential family distributions [4]. A key assumption in the exponential family mechanism is that evaluating f can be viewed as an inner product in some feature space; this is precisely what a kernel framework provides. Specifically, assume we have some PSD kernel k : X × X → R, where Y = {−1, 1}. Then we can define the associated classification kernel k̂ : (X × Y) × (X × Y) → R according to k̂((x, y), (x′, y′)) := yy′ k(x, x′). Under certain conditions [4], we can again take C_x(f) = log ∫_Y exp(f(x, y)) dy, and for any f in the RKHS associated with k̂, we have an associated distribution of the form p_f(x, y) ∝ exp(f(x, y)). And again, a participant updating the market from f^{t−1} to f^t is rewarded by the conditional log-likelihood ratio of f^{t−1} and f^t on the test data.

The second nice property mirrors that of standard kernel learning methods, namely that under certain conditions one need only search the subset of the RKHS spanned by the basis {k((x_i, y_i), ·) : (x_i, y_i) ∈ S}, where S is the set of available data; this is a direct consequence of the Representer Theorem [17].
In the context of the kernel market, this suggests that participants need only interact with the mechanism by pushing updates that lie in the span of their own data. In other words, we only need to consider updates of the form df = α k((x, y), ·). This naturally suggests the idea of directly purchasing data points from traders.

Buying Data Points. So far, we have supposed that a participant knows which trade df^t she prefers to make. But what if she simply has a data point (x, y) drawn from the underlying distribution? We would like to give this trader a “simple” trading interface in which she can sell her data to the mechanism without having to reason about the correct df^t for this data point. Our proposal is to mimic the behavior of natural learning algorithms, such as stochastic gradient descent, when presented with (x, y). The market can offer the trader the purchase bundle corresponding to the update of the learning algorithm on this data point. In principle, this approach can be used with any online learning algorithm. In particular, stochastic gradient descent gives a clean update rule, which we now describe. The expected profit (the negative of the expected loss) for a trade df^t is E_x[ E_{y|x}[df^t(x, y)] − (C_x(f^{t−1} + df^t) − C_x(f^{t−1})) ]. Given a draw (x, y), the loss on which to take a gradient step is C_x(f^{t−1} + df^t) − C_x(f^{t−1}) − df^t(x, y), whose gradient at df^t = 0 is ∇_{f^{t−1}} C_x − δ_{x,y} (where δ_{x,y} is the indicator on the data point (x, y)). This suggests that the market offer the participant the trade df^t = ε(δ_{x,y} − ∇_{f^{t−1}} C_x), where ε can be chosen arbitrarily as a “learning rate”. This can be interpreted as buying a unit of shares in the participant's data point (x, y), then “hedging” by selling a small amount of all other shares in proportion to their current prices (recall that the current prices are given by ∇_{f^t} C_x).
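In the parametric logistic market of Section 2.1, the parametric analogue of this “standard bundle” is simply a stochastic-gradient step on the negative conditional log-likelihood of the participant's own data point. A minimal sketch, with illustrative values:

```python
import numpy as np

def sgd_trade(theta, x, y, eps=0.05):
    # Standard-bundle analogue in parameter space for f_theta(x, y) = y * (theta @ x):
    # a gradient ascent step on log p_theta(y|x) = y*(theta@x) - log(e^{theta.x}+e^{-theta.x}),
    # i.e. theta <- theta + eps * x * (y - tanh(theta @ x)).
    return eps * x * (y - np.tanh(theta @ x))

def loglik(theta, x, y):
    s = theta @ x
    return y * s - np.logaddexp(s, -s)

theta = np.zeros(2)                      # initial market state f_0 = 0
x, y = np.array([1.0, 2.0]), 1           # the trader's data point
theta_new = theta + sgd_trade(theta, x, y)
# A small step on one's own data point improves the market's fit to it:
assert loglik(theta_new, x, y) > loglik(theta, x, y)
```

With a small enough ε this step moves the market toward the trader's data point while hedging against the current prices, mirroring the function-space bundle described above.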
In the kernel setting, the choice of stochastic gradient descent may be somewhat problematic, because it can result in non-sparse share purchases. It may instead be desirable to use algorithms that guarantee sparse updates; a modern discussion of such approaches can be found in [22, 23]. Given this framework, participants with access to a private set of samples from the true underlying distribution can simply opt for this “standard bundle” corresponding to their data point, which is precisely a stochastic gradient descent update. With a small enough learning rate, and assuming that the data point is truly independent of the current hypothesis (i.e., (x, y) has not previously been incorporated), the trade is guaranteed to make at least some positive profit in expectation. More sophisticated alternative strategies are of course also possible, but even the proposed simple bet type has earning potential.

3 Protecting Participants' Privacy

We now extend the mechanism to protect the privacy of the participants: an adversary observing the hypotheses and prices of the mechanism, and even controlling the trades of other participants, should not be able to infer too much about any one trader's update df^t. This is especially relevant when participants sell data to the mechanism and this data is sensitive, e.g. medical data. Here, privacy is formalized by (ϵ, δ)-differential privacy, to be defined shortly. One intuitive characterization is that, for any prior distribution an adversary has about a trader's data, the adversary's posterior belief after observing the mechanism would be approximately the same even if the trader did not participate at all. The idea is that, rather than posting the exact prices and trades made in the market, we will publish noisy versions, with the random noise giving the above guarantee. A naive approach would be to add independent noise to each participant's trade.
However, this would require a prohibitively large amount of noise; the final market hypothesis would be determined by the random noise just as much as by the data and trades. The central challenge is to add carefully correlated noise that is large enough to hide the effects of any one participant’s data point, but not so large that the prices (equivalently, hypothesis) become meaningless. We show this is possible by adjusting the “price sensitivity” λ_C of the mechanism, a measure of how fast prices change in response to trades, defined in Section 2.2. It will turn out to suffice to set the price sensitivity to be O(1/polylog T) when there are T participants. This can roughly be interpreted as saying that any one participant does not move the market price noticeably (so their privacy is protected), but just O(polylog T) traders together can move the prices completely. We now formally define differential privacy and discuss two useful tools at our disposal. 3.1 Differential Privacy and Tools Differential privacy in our context is defined as follows. Consider a randomized function M operating on inputs of the form ⃗f = (df^1, . . . , df^T) and having outputs of the form s. Then M is (ϵ, δ)-differentially private if, for any coordinate t of the vector, any two distinct df^t_1, df^t_2, and any (measurable) set of outputs S, we have Pr[M(f^{−t}, df^t_1) ∈ S] ≤ e^ϵ Pr[M(f^{−t}, df^t_2) ∈ S] + δ. The notation f^{−t} means the vector ⃗f with the t-th entry removed. Intuitively, M is private if modifying the t-th entry in the vector to a different entry does not change the distribution on outputs too much. In our case, the data to be protected will be the trade df^t of each participant t, and the space of outputs will be the entire sequence of prices/predictions published by the mechanism. To preserve privacy, each trade must have a bounded size (e.g. consist only of one data point).
To enforce this, we define the following parameter chosen by the mechanism designer: Δ = max_{allowed df} √⟨df, df⟩, (3) where the maximum is over all trades df allowed by the mechanism. That is, Δ is a scalar capturing the maximum allowed size of any one trade. For instance, if all trades are restricted to be of the form df = αk(z, ·), then we would have Δ = max_{α,z} α√k(z, z). We next describe the two tools we require. Tool 1: Private functions via Gaussian processes. Given a current market state f^t = f^0 + df^1 + · · · + df^t, where f^t lies in a RKHS, we construct a “private” version ˆf^t such that queries to ˆf^t are “accurate” (close to the outputs of f^t) but also private with respect to each df^j. In fact, it will become convenient to privately output partial sums of trades, so we wish to output a ˆf_{t1:t2} that is private and approximates f_{t1:t2} = Σ_{j=t1}^{t2} df^j. This is accomplished by the following construction due to [11]. Theorem 1 ([11], Corollary 9). Let G be the sample path of a Gaussian process with mean zero and whose covariance is given by the kernel function k.² Then ˆf_{t1:t2} = f_{t1:t2} + (Δ√(2 ln(2/δ))/ϵ) · G (4) is (ϵ, δ)-differentially private with respect to each df^j for j ∈ {t1, . . . , t2}. In general, ˆf_{t1:t2} may be an infinite-dimensional object and thus impossible to finitely represent. In this case, the theorem implies that releasing the results of any number of queries ˆf_{t1:t2}(z) is differentially private. (Of course, the more queries that are released, the larger the chance of high error on some query.) This is computationally feasible as each sample G(z) is simply a sample from a Gaussian having known covariance with the previous samples drawn. Unfortunately, it would not be sufficient to independently release ˆf_{1:t} at each time t, because the amount of noise required would be prohibitive. This leads us to our next tool.
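A minimal sketch of the construction in Theorem 1, under assumptions not in the text (an RBF kernel, scalar query points, and a small jitter term for numerical stability); the function names are illustrative:

```python
import numpy as np

def rbf(z1, z2, gamma=1.0):
    # Illustrative kernel choice; any positive-definite k works in Theorem 1.
    return np.exp(-gamma * (z1 - z2) ** 2)

def private_queries(f, queries, eps, delta, Delta, kernel=rbf, seed=0):
    # Answer queries to f-hat = f + (Delta * sqrt(2 ln(2/delta)) / eps) * G,
    # where G is a sample path of a zero-mean GP with covariance k (Equation 4).
    rng = np.random.default_rng(seed)
    K = np.array([[kernel(a, b) for b in queries] for a in queries])
    K += 1e-9 * np.eye(len(queries))  # jitter keeps the covariance numerically PSD
    G = rng.multivariate_normal(np.zeros(len(queries)), K)
    scale = Delta * np.sqrt(2.0 * np.log(2.0 / delta)) / eps
    return np.array([f(q) for q in queries]) + scale * G
```

Each additional query conditions on the previously drawn samples through the joint Gaussian, matching the “known covariance with the previous samples drawn” remark above.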
²Formally, each G(z) is a random variable and, for any finite subset of Z, the corresponding variables are distributed as a multivariate normal with covariance given by k. [Figure 1 shows trades df^0, . . . , df^16 arranged on a timeline, with arrows linking each t back to s(t).] Figure 1: Picturing the continual observation technique for preserving privacy. Each df^t is a trade (e.g. a data point sold to the market). The goal is to release, at each time step t, a noisy version of f^t = Σ_{j=1}^{t} df^j. To do so, start at t and follow the arrow back to s(t). Take the partial sum of df^j for j from s(t) to t and add some random noise. Trace the next arrow from s(t) to s(s(t)) to get another partial sum and add noise to that sum as well. Repeat until 0 is reached, then add together all the noisy partial sums to get the output at time t, which will equal f^t plus noise. The key point is that we can re-use many of the noisy partial sums in many different time steps. For instance, the noisy partial sum from 0 to 8 can be re-used when releasing all of f^9, . . . , f^15. Meanwhile, each df^t participates in few noisy partial sums (the number of arrows passing above it). Tool 2: Continual observation technique. The idea of this technique, pioneered by [9, 5], is to construct ˆf^t = Σ_{j=0}^{t} df^j by adding together noisy partial sums of the form ˆf_{t1:t2} as constructed in Equation 4. The idea for choosing these partial sums is pictured in Figure 1: For a function s(t) that returns an integer smaller than t, we take ˆf^t = ˆf_{s(t)+1:t} + ˆf_{s(s(t))+1:s(t)} + · · · + ˆf_{0:0}. Specifically, s(t) is determined by writing t in binary, then flipping the rightmost “one” bit to zero. This is pictured in Figure 1. The intuition behind why this technique helps is twofold. First, the total noise in ˆf^t is the sum of noises of its partial sums, and it turns out that there are at most ⌈log T⌉ terms.
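The partial-sum decomposition just described can be sketched directly; s(t) is exactly "clear the lowest set bit of t":

```python
def s(t):
    # Write t in binary and flip the rightmost "one" bit to zero (Figure 1).
    return t & (t - 1)

def partial_sums(t):
    # Intervals (s(t)+1, t), (s(s(t))+1, s(t)), ... whose noisy sums,
    # added together, release f^t = df^1 + ... + df^t.
    out = []
    while t > 0:
        out.append((s(t) + 1, t))
        t = s(t)
    return out
```

For t = 13 = 1101₂ this yields (13, 13), (9, 12), (1, 8): the number of intervals equals the number of one bits of t, hence at most ⌈log t⌉ + 1.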
Second, the total noise we need to add to protect privacy is governed by how many different partial sums each df^j participates in, and it turns out that this number is also at most ⌈log T⌉. This allows for much better privacy and accuracy guarantees than naively treating each step independently. 3.2 Mechanism and Results Combining our market template in Mechanism 1 with the above privacy tools, we obtain Mechanism 2. There are some key differences. First, we have a bound Q on the total number of queries. (Each query x returns the instantaneous prices in the market for x.) This is because each query reveals information about the participants, so intuitively, allowing too many queries must sacrifice either privacy or accuracy. Fortunately, this bound Q can be an arbitrarily large polynomial in the number of traders without affecting the quality of the results. Second, we have PAC-style guarantees on accuracy: with probability 1 − γ, all price queries return values within α of their true prices. Third, it is no longer straightforward to compute and represent the market prices ∇C_x(ˆf^t) unless Y is finite. We leave the more general analysis of Mechanism 2 to future work. Either exactly or approximately, Mechanism 2 inherits the desirable properties of Mechanism 1, such as bounded budget and incentive-compatibility (that is, participants are incentivized to minimize the risk of the market hypothesis). In addition, we show that it preserves privacy while maintaining accuracy, for an appropriate choice of the price sensitivity λ_C. Theorem 2. Consider Mechanism 2, where Δ is the maximum trade size (Equation 3) and d = |Y|. Then Mechanism 2 is (ϵ, δ)-differentially private and, with T traders and Q price queries, has the following accuracy guarantee: with probability 1 − γ, for each query x the returned prices satisfy ∥∇C_x(ˆf^t) − ∇C_x(f^t)∥_∞ ≤ α by setting λ_C = αϵ / ( 2dΔ² √( ln(Qd/γ) · ln(2 log T / δ) ) · (log T)³ ).
If one for example takes δ, γ = exp[−polylog(Q, T)], then except for a superpolynomially low failure probability, Mechanism 2 answers all queries to within accuracy α by setting the price sensitivity to be λ_C = O(αϵ/polylog(Q, T)). We note, however, that this is a somewhat weaker guarantee than is usually desired in the differential privacy literature, where ideally δ is exponentially small.

Mechanism 2: Privacy Protected Market
Parameters: ϵ, δ (privacy), α, γ (accuracy), k (kernel), Δ (trade size), Q (#queries), T (#traders)
MARKET announces ˆf^0 = f^0, sets r = 0, sets C with λ_C = λ_C(ϵ, δ, α, γ, Δ, Q, T) (Theorem 2)
for t = 1, 2, . . . , T do
  PARTICIPANT t proposes a bet df^t
  MARKET updates true position f^t = f^{t−1} + df^t
  MARKET instantiates ˆf_{s(t)+1:t} as defined in Equation 4
  while r ≤ Q and some OBSERVER wishes to make a query do
    OBSERVER r submits pricing query on x
    MARKET returns prices ∇C_x(ˆf^t), where ˆf^t = ˆf_{s(t)+1:t} + ˆf_{s(s(t))+1:s(t)} + · · · + ˆf_{0:0}
    MARKET sets r ← r + 1
MARKET observes a true sample (x, y)
for t = 1, 2, . . . , T do
  PARTICIPANT receives payment f^{t−1}(x, y) − f^t(x, y) − C_x(ˆf^{t−1} + df^t) + C_x(ˆf^{t−1})

Computing ∇C_x(ˆf^t). We have already discussed limiting to finite |Y| in order to efficiently compute the marginal prices ∇C_x(ˆf^t). However, it is still not immediately clear how to compute these prices, and hence how to implement Mechanism 2. Here, we show that the problem can be solved when C comes from an exponential family, so that C_x(f) = log ∫_Y exp[f(x, y)] dy. In this case, the marginal prices given by the gradient of C have a nice exponential-weights form, namely the price of shares in (x, y) is p^t_x(y) = ∇_y C_x(f^t) = e^{f^t(x,y)} / Σ_{y'∈Y} e^{f^t(x,y')}. Thus evaluating the prices can be done by evaluating f^t(x, y) for each y ∈ Y. We also note that the worst-case bound used here could be greatly improved by taking into account the structure of the kernel.
For “smooth” cases such as the Gaussian kernel, querying a second point very close to the first one requires very little additional randomness and builds up very little additional error. We gave only a worst-case bound that holds for all kernels. Adding a transaction fee. In the appendix, we discuss the potential need for transaction fees. Adding a small Θ(α) fee suffices to deter arbitrage opportunities introduced by noisy pricing. Discussion The main contribution of this work was to bring together several tools to construct a mechanism for incentivized data aggregation with “contest-like” incentive properties, privacy guarantees, and limited downside for the mechanism. Our proposed mechanisms are also extensions of the prediction market literature. Building upon the work of Abernethy et al. [1] we introduce the following innovations: • Conditional markets. Our framework of Mechanism 1 can be interpreted as a prediction market for conditional predictions p(y|x) rather than a classic market, which would elicit the joint distribution p(x, y), or just the marginals. (This is similar to decision markets [12, 7], but without the associated incentive problems.) Naturally then, we couple conditional predictions with restricted hypothesis spaces, allowing F to capture, e.g., a linear relationship between x and y. • Nonparametric securities. We also extend to nonparametric hypothesis spaces using kernels, following the kernel-based scoring rules of [21]. • Privacy guarantees. We provide the first private prediction market (to our knowledge), showing that information about individual trades is not revealed. Our approach for preserving privacy also holds in the classic prediction market setting with similar privacy and accuracy guarantees. Many directions remain for future work. These mechanisms could be made more practical, and perhaps even better privacy guarantees could be derived, especially in nonparametric settings.
One could also explore the connections to similar settings, such as when agents have costs for acquiring data. Acknowledgements J. Abernethy acknowledges the generous support of the US National Science Foundation under CAREER Grant IIS-1453304 and Grant IIS-1421391. References [1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2), May 2013. [2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 395–412. ACM, 2014. [3] Jacob D. Abernethy and Rafael M. Frongillo. A collaborative mechanism for crowdsourcing prediction problems. In Advances in Neural Information Processing Systems, pages 2600–2608, 2011. [4] Stéphane Canu and Alex Smola. Kernel methods and the exponential family. Neurocomputing, 69(7):714–720, 2006. [5] T-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Transactions on Information and System Security (TISSEC), 14(3):26, 2011. [6] Y. Chen and J. W. Vaughan. A new understanding of prediction markets via no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC), pages 189–198, 2010. [7] Yiling Chen, Ian Kash, Mike Ruberry, and Victor Shnayder. Decision markets with good incentives. In Internet and Network Economics, pages 72–83. Springer, 2011. [8] Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), pages 49–56, 2007. [9] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 715–724. ACM, 2010. [10] Cynthia Dwork and Aaron Roth.
The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 2014. [11] Rob Hall, Alessandro Rinaldo, and Larry Wasserman. Differential privacy for functions and functional data. The Journal of Machine Learning Research, 14(1):703–727, 2013. [12] R. Hanson. Decision markets. Entrepreneurial Economics: Bright Ideas from the Dismal Science, pages 79–85, 2002. [13] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105–119, 2003. [14] R. Hanson. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets, 1(1):3–15, 2007. [15] Abraham Othman and Tuomas Sandholm. Automated market makers that enable new settings: extending constant-utility cost functions. In Proceedings of the Second Conference on Auctions, Market Mechanisms and their Applications (AMMA), pages 19–30, 2011. [16] David M. Pennock and Rahul Sami. Computational aspects of prediction markets. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory, chapter 26. Cambridge University Press, 2007. [17] Bernhard Schölkopf and Alexander J. Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press, 2002. [18] Amos J. Storkey. Machine learning markets. In Proceedings of AI and Statistics (AISTATS), pages 716–724, 2011. [19] J. Wolfers and E. Zitzewitz. Prediction markets. Journal of Economic Perspectives, 18(2):107–126, 2004. [20] Justin Wolfers and Eric Zitzewitz. Interpreting prediction market prices as probabilities. Technical report, National Bureau of Economic Research, 2006. [21] Erik Zawadzki and Sébastien Lahaie. Nonparametric scoring rules. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. [22] Lijun Zhang, Rong Jin, Chun Chen, Jiajun Bu, and Xiaofei He. Efficient online learning for large-scale sparse kernel logistic regression.
In AAAI, 2012. [23] Lijun Zhang, Jinfeng Yi, Rong Jin, Ming Lin, and Xiaofei He. Online kernel learning with a near optimal sparsity bound. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 621–629, 2013.
A Generalization of Submodular Cover via the Diminishing Return Property on the Integer Lattice Tasuku Soma The University of Tokyo tasuku soma@mist.i.u-tokyo.ac.jp Yuichi Yoshida National Institute of Informatics, and Preferred Infrastructure, Inc. yyoshida@nii.ac.jp Abstract We consider a generalization of the submodular cover problem based on the concept of the diminishing return property on the integer lattice. We are motivated by real scenarios in machine learning that cannot be captured by (traditional) submodular set functions. We show that the generalized submodular cover problem can be applied to various problems and devise a bicriteria approximation algorithm. Our algorithm is guaranteed to output a log-factor approximate solution that satisfies the constraints with the desired accuracy. The running time of our algorithm is roughly O(n log(nr) log r), where n is the size of the ground set and r is the maximum value of a coordinate. The dependency on r is exponentially better than that of the naive reduction algorithms. Several experiments on real and artificial datasets demonstrate that the solution quality of our algorithm is comparable to naive algorithms, while the running time is several orders of magnitude faster. 1 Introduction A function f : 2^S → R_+ is called submodular if f(X) + f(Y) ≥ f(X ∪ Y) + f(X ∩ Y) for all X, Y ⊆ S, where S is a finite ground set. An equivalent and more intuitive definition is by the diminishing return property: f(X ∪ {s}) − f(X) ≥ f(Y ∪ {s}) − f(Y) for all X ⊆ Y and s ∈ S \ Y. In the last decade, the optimization of a submodular function has attracted particular interest in the machine learning community. One reason for this is that many real-world models naturally admit the diminishing return property.
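On a small ground set, the set-function definitions above can be verified by brute force; a sketch (the coverage example below is illustrative, not from the paper):

```python
from itertools import combinations

def is_submodular(f, S):
    # Brute-force check of the diminishing return property:
    # f(X ∪ {s}) - f(X) >= f(Y ∪ {s}) - f(Y) for all X ⊆ Y and s ∉ Y.
    subsets = [frozenset(c) for k in range(len(S) + 1) for c in combinations(S, k)]
    for Y in subsets:
        for X in subsets:
            if not X <= Y:
                continue
            for t in S - Y:
                if f(X | {t}) - f(X) < f(Y | {t}) - f(Y) - 1e-12:
                    return False
    return True

# A coverage function f(X) = |union of cover[s] for s in X| is submodular.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
f_cov = lambda X: len(set().union(*[cover[t] for t in X])) if X else 0
```

By contrast, a function with increasing marginals, such as f(X) = |X|², fails the check.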
For example, document summarization [12, 13], influence maximization in viral marketing [7], and sensor placement [10] can be described with the concept of submodularity, and efficient algorithms have been devised by exploiting submodularity (for further details, refer to [8]). A variety of proposed models in machine learning [4, 13, 18] boil down to the submodular cover problem [21]; for given monotone and nonnegative submodular functions f, c : 2^S → R_+, and α > 0, we are to minimize c(X) subject to f(X) ≥ α. (1) Intuitively, c(X) and f(X) represent the cost and the quality of a solution, respectively. The objective of this problem is to find X of minimum cost with the worst quality guarantee α. Although this problem is NP-hard since it generalizes the set cover problem, a simple greedy algorithm achieves a tight log-factor approximation and performs very well in practice. The aforementioned submodular models are based on the submodularity of a set function, a function defined on 2^S. However, we often encounter problems that cannot be captured by a set function. Let us give two examples: Sensor Placement: Let us consider the following sensor placement scenario. Suppose that we have several types of sensors with various energy levels. We assume a simple trade-off between information gain and cost. Sensors of a high energy level can collect a considerable amount of information, but we have to pay a high cost for placing them. Sensors of a low energy level can be placed at a low cost, but they can only gather limited information. In this scenario, we want to decide which type of sensor should be placed at each spot, rather than just deciding whether to place a sensor or not. Such a scenario is beyond the existing models based on submodular set functions. Optimal Budget Allocation: A similar situation also arises in the optimal budget allocation problem [2].
In this problem, we want to allocate budget among ad sources so that (at least) a certain number of customers is influenced while minimizing the total budget. Again, we have to decide how much budget should be set aside for each ad source, and hence set functions cannot capture the problem. We note that a function f : 2^S → R_+ can be seen as a function defined on a Boolean hypercube {0, 1}^S. Then, the above real scenarios prompt us to generalize the submodularity and the diminishing return property to functions defined on the integer lattice Z^S_+. The most natural generalization of the diminishing return property to a function f : Z^S_+ → R_+ is the following inequality: f(x + χ_s) − f(x) ≥ f(y + χ_s) − f(y) (2) for x ≤ y and s ∈ S, where χ_s is the s-th unit vector. If f satisfies (2), then f also satisfies the following lattice submodular inequality: f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y) (3) for all x, y ∈ Z^S_+, where ∨ and ∧ are the coordinate-wise max and min operations, respectively. While the submodularity and the diminishing return property are equivalent for set functions, this is not the case for functions over the integer lattice; the diminishing return property (2) is stronger than the lattice submodular inequality (3). We say that f is lattice submodular if f satisfies (3), and if f further satisfies (2) we say that f is diminishing return submodular (DR-submodular for short). One might feel that the DR-submodularity (2) is too restrictive. However, considering the fact that the diminishing return is more crucial in applications, we may regard the DR-submodularity (2) as the most natural generalization of the submodularity, at least for applications mentioned so far [17, 6]. For example, under a natural condition, the objective function in the optimal budget allocation satisfies (2) [17]. The DR-submodularity was also considered in the context of submodular welfare [6].
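Inequality (2) is easy to probe numerically on a small box {0, . . . , r}^n; the checker and the toy objective below are illustrative sketches, not from the paper:

```python
from itertools import product

def is_dr_submodular(f, n, r):
    # Check inequality (2): f(x + chi_s) - f(x) >= f(y + chi_s) - f(y)
    # for all x <= y in the box {0, ..., r}^n and every coordinate s.
    grid = list(product(range(r + 1), repeat=n))
    for x in grid:
        for y in grid:
            if not all(a <= b for a, b in zip(x, y)):
                continue
            for t in range(n):
                if x[t] + 1 > r or y[t] + 1 > r:
                    continue
                xs = tuple(v + (i == t) for i, v in enumerate(x))
                ys = tuple(v + (i == t) for i, v in enumerate(y))
                if f(xs) - f(x) < f(ys) - f(y) - 1e-12:
                    return False
    return True

# Toy budget-allocation objective: probability of reaching one customer,
# where each unit of budget on source t succeeds independently w.p. p[t].
p = [0.3, 0.5]
f_ba = lambda x: 1.0 - (1 - p[0]) ** x[0] * (1 - p[1]) ** x[1]
```

The toy objective passes the check, while a function such as f(x) = x(1)·x(2), whose marginals grow with the other coordinate, does not.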
In this paper, we consider the following generalization of the submodular cover problem for set functions: Given a monotone DR-submodular function f : Z^S_+ → R_+, a subadditive function c : Z^S_+ → R_+, α > 0, and r ∈ Z_+, we are to minimize c(x) subject to f(x) ≥ α, 0 ≤ x ≤ r1, (4) where we say that c is subadditive if c(x + y) ≤ c(x) + c(y) for all x, y ∈ Z^S_+. We call problem (4) the DR-submodular cover problem. This problem encompasses problems that boil down to the submodular cover problem for set functions and their generalizations to the integer lattice. Furthermore, the cost function c is generalized to a subadditive function. In particular, we note that two examples given above can be rephrased using this problem (see Section 4 for details). If c is also monotone DR-submodular, one can reduce the problem (4) to the set version (1) (for technical details, see Section 3.1). The problem of this naive reduction is that it only yields a pseudo-polynomial time algorithm; the running time depends on r rather than log r. Since r can be huge in many practical settings (e.g., the maximum energy level of a sensor), even linear dependence on r could make an algorithm impractical. Furthermore, for a general subadditive function c, this naive reduction does not work. 1.1 Our Contribution For the problem (4), we devise a bicriteria approximation algorithm based on the decreasing threshold technique of [3]. More precisely, our algorithm takes the additional parameters 0 < ϵ, δ < 1. The output x ∈ Z^S_+ of our algorithm is guaranteed to satisfy that c(x) is at most (1 + 3ϵ)ρ(1 + log(d/β)) times the optimum and f(x) ≥ (1 − δ)α, where ρ is the curvature of c (see Section 3 for the definition), d = max_s f(χ_s) is the maximum value of f over all standard unit vectors, and β is the minimum value of the positive increments of f in the feasible region.
Running Time (dependency on r): An important feature of our algorithm is that the running time depends on the bit length of r only polynomially, whereas the naive reduction algorithms depend on it exponentially as mentioned above. More precisely, the running time of our algorithm is O((n/ϵ) log(nr·c_max/(δ·c_min)) log r), which is polynomial in the input size, whereas the naive algorithm is only a pseudo-polynomial time algorithm. In fact, our experiments using real and synthetic datasets show that our algorithm is considerably faster than naive algorithms. Furthermore, in terms of the objective value (that is, the cost of the output), our algorithm also exhibits comparable performance. Approximation Guarantee: Our approximation guarantee on the cost is almost tight. Note that the DR-submodular cover problem (4) includes the set cover problem, in which we are given a collection of sets, and we want to find a minimum number of sets that covers all the elements. In our context, S corresponds to the collection of sets, the cost c is the number of chosen sets, and f is the number of covered elements. It is known that we cannot obtain an o(log m)-approximation unless P = NP, where m is the number of elements [16]. However, since for the set cover problem we have ρ = 1, d = O(m), and β = 1, our approximation guarantee is O(log m). 1.2 Related Work Our result can be compared with several results in the literature for the submodular cover problem for set functions. It is shown by Wolsey [21] that if c(X) = |X|, a simple greedy algorithm yields a (1 + log(d/β))-approximation, which coincides with our approximation ratio except for the (1 + 3ϵ) factor. Note that ρ = 1 when c(X) = |X|, or more generally, when c is modular. Recently, Wan et al. [20] discussed a slightly different setting, in which c is also submodular and both f and c are integer valued. They proved that the greedy algorithm achieves a ρH(d)-approximation, where H(d) = 1 + 1/2 + · · · + 1/d is the d-th harmonic number.
Again, their ratio asymptotically coincides with our approximation ratio (note that β ≥ 1 when f is integer valued). Another common submodular-based model in machine learning is in the form of the submodular maximization problem: Given a monotone submodular set function f : {0, 1}^S → R_+ and a feasible set P ⊆ [0, 1]^S (e.g., a matroid polytope or a knapsack polytope), we want to maximize f(x) subject to x ∈ P ∩ {0, 1}^S. Such models can be widely found in various tasks as already described. We note that the submodular cover problem and the submodular maximization problem are somewhat dual to each other. Indeed, Iyer and Bilmes [5] showed that a bicriteria algorithm for one of these problems yields a bicriteria algorithm for the other. Being parallel to our setting, generalizing the submodular maximization problem to the integer lattice Z^S_+ is a natural question. In this direction, Soma et al. [17] considered the maximization of lattice submodular functions (not necessarily being DR-submodular) and devised a constant-factor approximation pseudo-polynomial time algorithm. We note that our result is not implied by [17] via the duality of [5]. In fact, such a reduction only yields a pseudo-polynomial time algorithm. 1.3 Organization of This Paper The rest of this paper is organized as follows: Section 2 sets the mathematical basics of submodular functions over the integer lattice. Section 3 describes our algorithm and the statement of our main theorem. In Section 4, we show various experimental results using real and artificial datasets. Section 5 sketches the proof of the main theorem. Finally, we conclude the paper in Section 6. 2 Preliminaries Let S be a finite set. For each s ∈ S, we denote the s-th unit vector by χ_s; that is, χ_s(t) = 1 if t = s, otherwise χ_s(t) = 0. A function f : Z^S → R is said to be lattice submodular if f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y) for all x, y ∈ Z^S. A function f is monotone if f(x) ≥ f(y) for all x, y ∈ Z^S with x ≥ y.
For x, y ∈ Z^S and a function f : Z^S → R, we denote f(y | x) := f(y + x) − f(x). A function f is diminishing return submodular (or DR-submodular) if f(x + χ_s) − f(x) ≥ f(y + χ_s) − f(y) for each x ≤ y ∈ Z^S and s ∈ S. For a DR-submodular function f, one can immediately check that f(kχ_s | x) ≥ f(kχ_s | y) for arbitrary x ≤ y, s ∈ S, and k ∈ Z_+. A function f is subadditive if f(x + y) ≤ f(x) + f(y) for x, y ∈ Z^S. For each x ∈ Z^S_+, we define {x} to be the multiset in which each s ∈ S is contained x(s) times. In [17], a lattice submodular function f : Z^S → R is said to have the diminishing return property if f is coordinate-wise concave: f(x + 2χ_s) − f(x + χ_s) ≤ f(x + χ_s) − f(x) for each x ∈ Z^S and s ∈ S. We note that our definition is consistent with [17]. Formally, we have the following lemma, whose proof can be found in the Appendix. Lemma 2.1. A function f : Z^S → R is DR-submodular if and only if f is lattice submodular and coordinate-wise concave. The following is fundamental for a monotone DR-submodular function. A proof is placed in the Appendix due to space limitations. Lemma 2.2. For a monotone DR-submodular function f, f(x) − f(y) ≤ Σ_{s∈{x}} f(χ_s | y) for arbitrary x, y ∈ Z^S. 3 Algorithm for the DR-Submodular Cover Recall the DR-submodular cover problem (4). Let f : Z^S_+ → R_+ be a monotone DR-submodular function and let c : Z^S_+ → R_+ be a subadditive cost function. The objective is to minimize c(x) subject to f(x) ≥ α and 0 ≤ x ≤ r1, where α > 0 and r ∈ Z_+ are the given constants. Without loss of generality, we can assume that max{f(x) : 0 ≤ x ≤ r1} = α (otherwise, we can consider f̂(x) := min{f(x), α} instead of f). Furthermore, we can assume c(x) > 0 for any x ∈ Z^S_+. A pseudocode description of our algorithm is presented in Algorithm 1. The algorithm can be viewed as a modified version of the greedy algorithm and works as follows: We start with the initial solution x = 0 and increase each coordinate of x gradually.
To determine the amount of increments, the algorithm maintains a threshold θ that is initialized to be sufficiently large. For each s ∈ S, the algorithm finds the largest integer step size 0 < k ≤ r − x(s) such that the marginal cost-gain ratio f(kχ_s | x)/(k·c(χ_s)) is above the threshold θ. If such k exists, the algorithm updates x to x + kχ_s. After repeating this for each s ∈ S, the algorithm decreases the threshold θ by a factor of (1 − ϵ). If x becomes feasible, the algorithm returns the current x. Even if x does not become feasible, the final x satisfies f(x) ≥ (1 − δ)α if we iterate until θ gets sufficiently small.

Algorithm 1 Decreasing Threshold for the DR-Submodular Cover Problem
Input: f : Z^S_+ → R_+, c : Z^S_+ → R_+, r ∈ N, α > 0, ϵ > 0, δ > 0.
Output: 0 ≤ x ≤ r1 such that f(x) ≥ α.
1: x ← 0, d ← max_{s∈S} f(χ_s), c_min ← min_{s∈S} c(χ_s), c_max ← max_{s∈S} c(χ_s)
2: for (θ = d/c_min; θ ≥ δ/(n·c_max·r·d); θ ← θ(1 − ϵ)) do
3:   for all s ∈ S do
4:     Find the maximum integer 0 < k ≤ r − x(s) such that f(kχ_s | x)/(k·c(χ_s)) ≥ θ, with binary search.
5:     If such k exists, then x ← x + kχ_s.
6:   If f(x) ≥ α then break the outer for loop.
7: return x

Before we claim the theorem, we need to define several parameters on f and c. Let β := min{f(χ_s | x) : s ∈ S, x ∈ Z^S_+, f(χ_s | x) > 0} and d := max_s f(χ_s). Let c_max := max_s c(χ_s) and c_min := min_s c(χ_s). Define the curvature of c to be ρ := min_{x*: optimal solution} Σ_{s∈{x*}} c(χ_s) / c(x*). (5) Definition 3.1. For γ ≥ 1 and 0 < δ < 1, a vector x ∈ Z^S_+ is a (γ, δ)-bicriteria approximate solution if c(x) ≤ γ · c(x*), f(x) ≥ (1 − δ)α, and 0 ≤ x ≤ r1. Our main theorem is described below. We sketch the proof in Section 5. Theorem 3.2. Algorithm 1 outputs a ((1 + 3ϵ)ρ(1 + log(d/β)), δ)-bicriteria approximate solution in O((n/ϵ) log(nr·c_max/(δ·c_min)) log r) time. 3.1 Discussion Integer-valued Case. Let us make a simple remark on the case that f is integer valued. Without loss of generality, we can assume α ∈ Z_+. Then, Algorithm 1 always returns a feasible solution for any 0 < δ < 1/α.
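A compact sketch of Algorithm 1 under simplifying assumptions not in the text (a modular cost c(x) = Σ_s c1[s]·x(s) and f(0) = 0); the binary search on line 4 is valid because f(kχ_s | x)/k is nonincreasing in k for a DR-submodular f:

```python
def dr_cover(f, c1, S, r, alpha, eps=0.1, delta=0.01):
    # Decreasing-threshold greedy for the DR-submodular cover problem.
    # f maps a dict {s: x(s)} to a float; c1[s] = c(chi_s) (modular cost assumed).
    x = {t: 0 for t in S}
    n = len(S)
    d = max(f({u: (1 if u == t else 0) for u in S}) for t in S)  # max_s f(chi_s)
    cmin, cmax = min(c1.values()), max(c1.values())
    theta = d / cmin
    while theta >= delta / (n * cmax * r * d):
        for t in S:
            lo, hi, best = 1, r - x[t], 0
            while lo <= hi:  # largest k with f(k*chi_t | x) / (k * c(chi_t)) >= theta
                k = (lo + hi) // 2
                z = dict(x)
                z[t] += k
                if (f(z) - f(x)) / (k * c1[t]) >= theta:
                    best, lo = k, k + 1
                else:
                    hi = k - 1
            if best:
                x[t] += best
            if f(x) >= alpha:
                return x
        theta *= 1 - eps
    return x
```

With a coordinate-wise concave objective such as f(x) = Σ_s w_s(1 − 2^{−x(s)}), the sketch returns a vector meeting the quality threshold while adding large steps early (at high θ) and small refinements later.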
Therefore, our algorithm can be easily modified to an approximation algorithm if f is integer valued. Definition of Curvature. Several authors [5, 19] use a different notion of curvature called the total curvature, whose natural extension for a function over the integer lattice is as follows: The total curvature κ of c : Z^S_+ → R_+ is defined as κ := 1 − min_{s∈S} c(χ_s | r1 − χ_s)/c(χ_s). Note that κ = 0 if c is modular, while ρ = 1 if c is modular. For example, Iyer and Bilmes [5] devised a bicriteria approximation algorithm whose approximation guarantee is roughly O((1 − κ)^{−1} log(d/β)). Let us investigate the relation between ρ and κ for DR-submodular functions. One can show that 1 − κ ≤ ρ ≤ (1 − κ)^{−1} (see Lemma E.1 in the Appendix), which means that our bound in terms of ρ is tighter than one in terms of (1 − κ)^{−1}. Comparison to Naive Reduction Algorithm. If c is also a monotone DR-submodular function, one can reduce (4) to the set version (1) as follows. For each s ∈ S, create r copies of s and let S̃ be the set of these copies. For X̃ ⊆ S̃, define x_X̃ ∈ Z^S_+ to be the integral vector such that x_X̃(s) is the number of copies of s contained in X̃. Then, f̃(X̃) := f(x_X̃) is submodular. Similarly, c̃(X̃) := c(x_X̃) is also submodular if c is a DR-submodular function. Therefore we may apply a standard greedy algorithm of [20, 21] to the reduced problem, and this is exactly what Greedy does in our experiment (see Section 4). However, this straightforward reduction only yields a pseudo-polynomial time algorithm since |S̃| = nr; even if the original algorithm was linear, the resulting algorithm would require O(nr) time. Indeed this difference is not negligible since r can be quite large in practical applications, as illustrated by our experimental evaluation. Lazy Evaluation. We finally note that we can combine the lazy evaluation technique [11, 14], which significantly reduces runtime in practice, with our algorithm.
Specifically, we first push all the elements of S into a max-based priority queue, where the key of an element s ∈ S is f(χ_s)/c(χ_s). Then the inner loop of Algorithm 1 is modified as follows: instead of checking all the elements in S, we pop elements whose keys are at least θ. For each popped element s ∈ S, we find k with 0 < k ≤ r − x(s) and f(kχ_s | x)/(k·c(χ_s)) ≥ θ by binary search. If there is such a k, we update x to x + kχ_s. Finally, we push s again with the key f(χ_s | x)/c(χ_s) if x(s) < r. The correctness of this technique follows from the DR-submodularity of f. In particular, the key of each element s ∈ S in the queue is always at least f(χ_s | x)/c(χ_s), where x is the current vector. Hence, we never miss an s ∈ S with f(kχ_s | x)/(k·c(χ_s)) ≥ θ.

4 Experiments

4.1 Experimental Setting

We conducted experiments on a Linux server with an Intel Xeon E5-2690 (2.90 GHz) processor and 256 GB of main memory. The experiments required at most 4 GB of memory. All the algorithms were implemented in C++ and compiled with g++ 4.6.3. In our experiments, the cost function c : Z^S_+ → R_+ is always chosen as c(x) = ∥x∥_1 := Σ_{s∈S} x(s). Let f : Z^S_+ → R_+ be a submodular function and α the worst quality guarantee. We implemented the following four methods:

• Decreasing-threshold is our method with the lazy evaluation technique. We chose δ = 0.01 unless stated otherwise.
• Greedy is a method in which, starting from x = 0, we iteratively increment x(s) for the s ∈ S that maximizes f(x + χ_s) − f(x) until we get f(x) ≥ α. We also implemented the lazy evaluation technique [11].
• Degree is a method in which we assign x(s) a value proportional to the marginal f(χ_s) − f(0), where ∥x∥_1 is determined by binary search so that f(x) ≥ α. Strictly speaking, x(s) is only approximately proportional to the marginal since x(s) must be an integer.
• Uniform is a method that returns k·1 for the minimum k ∈ Z_+ such that f(k·1) ≥ α.
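The lazy variant described above can be sketched as a single threshold pass. This Python sketch uses our own oracle interface (the same as earlier sketches) and a linear scan in place of the binary search; it is illustrative only, not the reference implementation.

```python
import heapq

def lazy_pass(f, c, x, S, r, theta):
    """One threshold pass of Algorithm 1 with lazy evaluation (a sketch).
    Keys are stale upper bounds f(chi_s | x)/c(s); by DR-submodularity the
    true ratio never exceeds a previously computed one, so popping until
    the top key drops below theta cannot miss a qualifying element."""
    def unit_gain(s):
        y = dict(x); y[s] += 1
        return f(y) - f(x)
    # heapq is a min-heap, so keys are negated to get max-heap behavior
    heap = [(-unit_gain(s) / c(s), s) for s in S if x[s] < r]
    heapq.heapify(heap)
    while heap and -heap[0][0] >= theta:
        _, s = heapq.heappop(heap)
        # find the max k whose average gain ratio is >= theta
        # (linear scan for clarity; the paper uses binary search)
        k = 0
        for kk in range(1, r - x[s] + 1):
            y = dict(x); y[s] += kk
            if (f(y) - f(x)) / (kk * c(s)) >= theta:
                k = kk
            else:
                break
        if k:
            x[s] += k
        if x[s] < r:
            # re-insert with a freshly computed key
            heapq.heappush(heap, (-unit_gain(s) / c(s), s))
    return x
```

A popped element whose recomputed ratio falls below θ is simply pushed back with its new (smaller) key, so the pass terminates without revisiting it.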
We use the following real-world and synthetic datasets to confirm the accuracy and efficiency of our method against the other methods. We set r = 100,000 for both problems.

Sensor placement. We used a dataset acquired by running simulations on a 129-vertex sensor network used in the Battle of the Water Sensor Networks (BWSN) [15]. We used the “bwsn-utilities” [1] program to simulate 3000 random injection events in this network for a duration of 96 hours. Let S and E be the set of the 129 sensors in the network and the set of the 3000 events, respectively. For each sensor s ∈ S and event e ∈ E, a value z(s, e) is provided, which denotes the time, in minutes, at which the pollution reached s after the injection time.¹ We define a function f : Z^S_+ → R_+ as follows. Let x ∈ Z^S_+ be a vector, where we regard x(s) as the energy level of the sensor s. Suppose that when the pollution reaches a sensor s, the probability that we detect it is 1 − (1 − p)^{x(s)}, where p = 0.0001. In other words, by spending one unit of energy, we obtain an extra chance of detecting the pollution with probability p. For each event e ∈ E, let s_e be the first sensor at which the pollution is detected in that injection event; note that s_e is a random variable. Let z_∞ = max_{e∈E, s∈S} z(s, e). Then, we define f as

f(x) = E_{e∈E} E_{s_e}[z_∞ − z(s_e, e)],

where z(s_e, e) is defined as z_∞ when no sensor detects the pollution. Intuitively speaking, E_{s_e}[z_∞ − z(s_e, e)] expresses how much time we saved in the event e on average; we then take the average over all the events. A similar function was used in [11] to measure the performance of a sensor allocation, although they only considered the case p = 1, which corresponds to the case that by spending one unit of energy at a sensor s, we can always detect the pollution that reaches s. We note that f(x) is DR-submodular (see Lemma F.1 for the proof).

Budget allocation problem.
In order to observe the behavior of our algorithm on large-scale instances, we created a synthetic instance of the budget allocation problem [2, 17] as follows. The instance can be represented as a bipartite graph (S, T; E), where S is a set of 5,000 vertices and T is a set of 50,000 vertices. We regard a vertex in S as an ad source and a vertex in T as a person. We fix the degrees of the vertices in S so that their distribution obeys a power law with exponent γ := 2.5; that is, the fraction of ad sources with out-degree d is proportional to d^{−γ}. For a vertex s ∈ S of the chosen degree d, we pick d vertices in T uniformly at random and connect them to s with edges. We define a function f : Z^S_+ → R_+ as

f(x) = Σ_{t∈T} (1 − Π_{s∈Γ(t)} (1 − p)^{x(s)}),   (6)

where Γ(t) is the set of vertices connected to t and p = 0.0001. Here, we suppose that, by investing a unit cost in an ad source s ∈ S, we get an extra chance of influencing a person t ∈ T with s ∈ Γ(t) with probability p. Then, f(x) is the expected number of people influenced by the ad sources. We note that f is known to be a monotone DR-submodular function [17].

4.2 Experimental Results

Figure 1 shows the obtained objective value ∥x∥_1 for various choices of the worst quality guarantee α on each dataset. We chose ϵ = 0.01 in Decreasing threshold. We observe that Decreasing threshold attains almost the same objective value as Greedy, and that it outperforms Degree and Uniform. Figure 2 shows the runtime for various choices of the worst quality guarantee α on each dataset, again with ϵ = 0.01 in Decreasing threshold. We observe that the runtime of Decreasing threshold grows significantly more slowly than that of Greedy.

¹ Although three other values are provided, they showed similar empirical results and we omit them.
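The influence function of Eq. (6) can be evaluated directly from the adjacency structure. A small illustrative sketch; the `neighbors` interface and all names are our own:

```python
def influence(x, neighbors, p=0.0001):
    """Evaluate f(x) = sum_t [1 - prod_{s in Gamma(t)} (1-p)^{x(s)}], the
    expected number of influenced people (Eq. 6).  neighbors[t] lists the
    ad sources connected to person t; x maps sources to budgets."""
    total = 0.0
    for t, srcs in neighbors.items():
        miss = 1.0                       # probability that t is not influenced
        for s in srcs:
            miss *= (1.0 - p) ** x.get(s, 0)
        total += 1.0 - miss
    return total
```

With a single edge and p = 0.5, one unit of budget influences the person with probability 0.5, and the marginal gain of a second unit (0.25) is smaller than that of the first, illustrating the diminishing-return property.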
[Figure 1: Objective values. (a) Sensor placement (BWSN); (b) Budget allocation (synthetic).]
[Figure 2: Runtime. (a) Sensor placement (BWSN); (b) Budget allocation (synthetic).]
[Figure 3: Effect of ϵ. (a) Relative cost increase; (b) Runtime.]

Figures 3(a) and 3(b) show the relative increase of the objective value and the runtime, respectively, of our method against Greedy on the BWSN dataset. We observe that the relative increase of the objective value gets smaller as α increases. This phenomenon can be explained by considering the extreme case α = f(r·1): in this case, we need to choose x = r·1 anyway in order to achieve the worst quality guarantee, and the order in which the coordinates of x are increased does not matter. Also, we can see that the empirical runtime grows as a function of 1/ϵ, which matches our theoretical bound.

5 Proof of Theorem 3.2

In this section, we outline the proof of the main theorem; proofs of some minor claims can be found in the Appendix. First, we introduce some notation. Assume that x is updated L times in the algorithm, and let x_i be the variable x after the i-th update (i = 0, ..., L). Note that x_0 = 0 and that x_L is the final output of the algorithm. Let si ∈S and ki ∈Z+ be the pair used in the i-th update for i = 1, . . .
, L; that is, x_i = x_{i−1} + k_i·χ_{s_i} for i = 1, ..., L. Let µ_0 := 0 and µ_i := k_i·c(χ_{s_i}) / f(k_i·χ_{s_i} | x_{i−1}) for i = 1, ..., L. Let µ̂_0 := 0 and µ̂_i := θ_i^{−1} for i = 1, ..., L, where θ_i is the threshold value at the i-th update. Note that µ̂_{i−1} ≤ µ̂_i for i = 1, ..., L. Let x* be an optimal solution such that ρ·c(x*) = Σ_{s∈{x*}} c(χ_s). We regard that at the i-th update, the elements of {x*} are charged by the value µ_i(f(χ_s | x_{i−1}) − f(χ_s | x_i)). Then, the total charge on {x*} is defined as

T(x, f) := Σ_{s∈{x*}} Σ_{i=1}^{L} µ_i (f(χ_s | x_{i−1}) − f(χ_s | x_i)).

Claim 5.1. Fix 1 ≤ i ≤ L arbitrarily and let θ be the threshold value at the i-th update. Then

f(k_i·χ_{s_i} | x_{i−1}) / (k_i·c(χ_{s_i})) ≥ θ  and  f(χ_s | x_{i−1}) / c(χ_s) ≤ θ/(1 − ϵ)  (s ∈ S).

Eliminating θ from the inequalities in Claim 5.1, we obtain

k_i·c(χ_{s_i}) / f(k_i·χ_{s_i} | x_{i−1}) ≤ (1/(1 − ϵ)) · c(χ_s) / f(χ_s | x_{i−1})  (i = 1, ..., L, s ∈ S).   (7)

Furthermore, we have µ_i ≤ µ̂_i ≤ µ_i/(1 − ϵ) for i = 1, ..., L.

Claim 5.2. c(x) ≤ T(x, f)/(1 − ϵ).

Claim 5.3. For each s ∈ {x*}, the total charge on s is at most (1/(1 − ϵ))(1 + log(d/β))·c(χ_s).

Proof. Fix s ∈ {x*} and let l be the minimum i such that f(χ_s | x_i) = 0. By (7), we have

µ_i = k_i·c(χ_{s_i}) / f(k_i·χ_{s_i} | x_{i−1}) ≤ (1/(1 − ϵ)) · c(χ_s) / f(χ_s | x_{i−1})  (i = 1, ..., l).

Then, we have

Σ_{i=1}^{L} µ_i(f(χ_s | x_{i−1}) − f(χ_s | x_i))
= Σ_{i=1}^{l−1} µ_i(f(χ_s | x_{i−1}) − f(χ_s | x_i)) + µ_l·f(χ_s | x_{l−1})
≤ (1/(1 − ϵ))·c(χ_s) [ Σ_{i=1}^{l−1} (f(χ_s | x_{i−1}) − f(χ_s | x_i)) / f(χ_s | x_{i−1}) + f(χ_s | x_{l−1}) / f(χ_s | x_{l−1}) ]
= (1/(1 − ϵ))·c(χ_s) [ 1 + Σ_{i=1}^{l−1} (1 − f(χ_s | x_i) / f(χ_s | x_{i−1})) ]
≤ (1/(1 − ϵ))·c(χ_s) [ 1 + Σ_{i=1}^{l−1} log( f(χ_s | x_{i−1}) / f(χ_s | x_i) ) ]   (since 1 − 1/x ≤ log x for x ≥ 1)
= (1/(1 − ϵ))·c(χ_s) [ 1 + log( f(χ_s | x_0) / f(χ_s | x_{l−1}) ) ]
≤ (1/(1 − ϵ)) (1 + log(d/β)) c(χ_s).

Proof of Theorem 3.2. Combining these claims, we have

c(x) ≤ (1/(1 − ϵ))·T(x, f) ≤ (1/(1 − ϵ))²·(1 + log(d/β))·Σ_{s∈{x*}} c(χ_s) ≤ (1 + 3ϵ)·(1 + log(d/β))·ρ·c(x*).

Thus, x is an approximate solution with the desired ratio. It remains to see that x approximately satisfies the constraint; that is, f(x) ≥ (1 − δ)α.
We now consider a slightly modified version of the algorithm in which the threshold is decreased until f(x) = α, and let x′ be its output. Then, we have

f(x′) − f(x) ≤ Σ_{s∈{x′}} f(χ_s | x) ≤ Σ_{s∈{x′}} δ·d·c(χ_s)/(c_max·n·r) ≤ δd ≤ δα.

The second inequality holds because, once the outer loop of Algorithm 1 has terminated, every remaining marginal ratio is below the final threshold δd/(n·c_max·r); the third holds since c(χ_s) ≤ c_max and |{x′}| ≤ nr. Thus f(x) ≥ (1 − δ)α.

6 Conclusions

In this paper, motivated by real scenarios in machine learning, we generalized the submodular cover problem via the diminishing return property over the integer lattice. We proposed a bicriteria approximation algorithm with the following properties: (i) the approximation ratio for the cost almost matches the one guaranteed by the greedy algorithm [21] and is almost tight in general; (ii) we can satisfy the worst solution quality to the desired accuracy; (iii) the running time of our algorithm is roughly O(n log n log r), whose dependency on r is exponentially better than that of the greedy algorithm. We confirmed experimentally that, compared with the greedy algorithm, the solution quality of our algorithm is almost the same while its runtime is several orders of magnitude smaller.

Acknowledgments

The first author is supported by a JSPS Grant-in-Aid for JSPS Fellows. The second author is supported by a JSPS Grant-in-Aid for Young Scientists (B) (No. 26730009), a MEXT Grant-in-Aid for Scientific Research on Innovative Areas (24106003), and JST, ERATO, Kawarabayashi Large Graph Project. The authors thank Satoru Iwata and Yuji Nakatsukasa for reading a draft of this paper.

References
[1] http://www.water-simulation.com/wsp/about/bwsn/.
[2] N. Alon, I. Gamzu, and M. Tennenholtz. Optimizing budget allocation among channels and influencers. In Proc. of WWW, pages 381–388, 2012.
[3] A. Badanidiyuru and J. Vondrák. Fast algorithms for maximizing submodular functions. In Proc. of SODA, pages 1497–1514, 2014.
[4] Y. Chen, H. Shioi, C. A. F. Montesinos, L. P. Koh, S. Wich, and A. Krause. Active detection via adaptive submodularity.
In Proc. of ICML, pages 55–63, 2014.
[5] R. Iyer and J. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In Proc. of NIPS, pages 2436–2444, 2013.
[6] M. Kapralov, I. Post, and J. Vondrák. Online submodular welfare maximization: Greedy is optimal. In Proc. of SODA, pages 1216–1225, 2012.
[7] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In Proc. of KDD, pages 137–146, 2003.
[8] A. Krause and D. Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems, pages 71–104. Cambridge University Press, 2014.
[9] A. Krause and J. Leskovec. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 134(6):516–526, 2008.
[10] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research, 9:235–284, 2008.
[11] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In Proc. of KDD, pages 420–429, 2007.
[12] H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In Proc. of NAACL, pages 912–920, 2010.
[13] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proc. of NAACL, pages 510–520, 2011.
[14] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization Techniques, Lecture Notes in Control and Information Sciences, 7:234–243, 1978.
[15] A. Ostfeld, J. G. Uber, E. Salomons, J. W. Berry, W. E. Hart, C. A. Phillips, J.-P. Watson, G. Dorini, P. Jonkergouw, Z. Kapelan, F. di Pierro, S.-T. Khu, D. Savic, D. Eliades, M. Polycarpou, S. R. Ghimire, B. D. Barkdoll, R.
Gueli, J. J. Huang, E. A. McBean, W. James, A. Krause, J. Leskovec, S. Isovitsch, J. Xu, C. Guestrin, J. VanBriesen, M. Small, P. Fischbeck, A. Preis, M. Propato, O. Piller, G. B. Trachtman, Z. Y. Wu, and T. Walski. The battle of the water sensor networks (BWSN): A design challenge for engineers and algorithms. Journal of Water Resources Planning and Management, 134(6):556–568, 2008.
[16] R. Raz and S. Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proc. of STOC, pages 475–484, 1997.
[17] T. Soma, N. Kakimura, K. Inaba, and K. Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In Proc. of ICML, 2014.
[18] H. O. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In Proc. of ICML, 2014.
[19] M. Sviridenko, J. Vondrák, and J. Ward. Optimal approximation for submodular and supermodular optimization with bounded curvature. In Proc. of SODA, pages 1134–1148, 2015.
[20] P.-J. Wan, D.-Z. Du, P. Pardalos, and W. Wu. Greedy approximations for minimum submodular cover with submodular cost. Computational Optimization and Applications, 45(2):463–474, 2009.
[21] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
Data Generation as Sequential Decision Making Philip Bachman McGill University, School of Computer Science phil.bachman@gmail.com Doina Precup McGill University, School of Computer Science dprecup@cs.mcgill.ca Abstract We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation – perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [9]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets. 1 Introduction Directed generative models are naturally interpreted as specifying sequential procedures for generating data. We traditionally think of this process as sampling, but one could also view it as making sequences of decisions for how to set the variables at each node in a model, conditioned on the settings of its parents, thereby generating data from the model. The large body of existing work on reinforcement learning provides powerful tools for addressing such sequential decision making problems. We encourage the use of these tools to understand and improve the extended processes currently driving advances in generative modelling. We show how sequential decision making can be applied to general prediction tasks by developing models which construct predictions by iteratively refining a working hypothesis under guidance from exogenous input and endogenous feedback. 
We begin this paper by reinterpreting several recent generative models as sequential decision making processes, and then show how changes inspired by this point of view can improve the performance of the LSTM-based model introduced in [3]. Next, we explore the connections between directed generative models and reinforcement learning more fully by developing an approach to training policies for sequential data imputation. We base our approach on formulating imputation as a finite-horizon Markov Decision Process, which one can also interpret as a deep, directed graphical model. We propose two policy representations for the imputation MDP. One extends the model in [3] by inserting an explicit feedback loop into the generative process, and the other addresses the MDP more directly. We train our models/policies using techniques motivated by guided policy search [9, 10, 11, 8]. We examine their qualitative and quantitative performance across imputation problems covering a range of difficulties (i.e. different amounts of data to impute and different “missingness mechanisms”), and across multiple datasets. Given the relative paucity of existing approaches to the general imputation problem, we compare our models to each other and to two simple baselines. We also test how our policies perform when they use fewer/more steps to refine their predictions. As imputation encompasses both classification and standard (i.e. unconditional) generative modelling, our work suggests that further study of models for the general imputation problem is worthwhile. The performance of our models suggests that sequential stochastic construction of predictions, guided by both input and feedback, should prove useful for a wide range of problems. Training these models can be challenging, but lessons from reinforcement learning may bring some relief.
2 Directed Generative Models as Sequential Decision Processes

Directed generative models have grown in popularity relative to their undirected counterparts [6, 14, 12, 4, 5, 16, 15] (etc.). Reasons include: the development of efficient methods for training them, the ease of sampling from them, and the tractability of bounds on their log-likelihoods. Growth in available computing power compounds these benefits. One can interpret the (ancestral) sampling process in a directed model as repeatedly setting subsets of the latent variables to particular values, in a sequence of decisions conditioned on preceding decisions. Each subsequent decision restricts the set of potential outcomes for the overall sequence. Intuitively, these models encode stochastic procedures for constructing plausible observations. This section formally explores this perspective.

2.1 Deep AutoRegressive Networks

The deep autoregressive networks investigated in [4] define distributions of the following form:

p(x) = Σ_z p(x|z) p(z),  with  p(z) = p_0(z_0) Π_{t=1}^{T} p_t(z_t | z_0, ..., z_{t−1})   (1)

in which x indicates a generated observation and z_0, ..., z_T represent latent variables in the model. The distribution p(x|z) may be factored similarly to p(z). The form of p(z) in Eqn. 1 can represent arbitrary distributions over the latent variables, and the work in [4] mainly concerned approaches to parameterizing the conditionals p_t(z_t | z_0, ..., z_{t−1}) that restricted representational power in exchange for computational tractability. To appreciate the generality of Eqn. 1, consider using z_t that are univariate, multivariate, structured, etc. One can interpret any model based on this sequential factorization of p(z) as a non-stationary policy p_t(z_t | s_t) for selecting each action z_t in a state s_t, with each s_t determined by all z_{t′} for t′ < t, and train it using some form of policy search.
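The sequential-decision reading of Eqn. 1 is easy to make concrete. Below is a minimal sketch in which each conditional p_t(z_t | z_<t) is a Bernoulli whose parameter is an arbitrary function of the prefix; the particular chain and its parameters are invented purely for illustration:

```python
import random

def ancestral_sample(T, step_prob, seed=None):
    """Sample z_0, ..., z_T by treating each conditional p_t(z_t | z_<t)
    as a policy choosing an action given the state z_<t."""
    rng = random.Random(seed)
    z = [1 if rng.random() < 0.5 else 0]        # z_0 ~ p_0, Bernoulli(0.5)
    for t in range(1, T + 1):
        q = step_prob(z)                        # p_t(z_t = 1 | z_0 .. z_{t-1})
        z.append(1 if rng.random() < q else 0)
    return z

# e.g. a "sticky" chain: repeat the previous symbol with probability 0.9
traj = ancestral_sample(10, lambda prefix: 0.9 if prefix[-1] == 1 else 0.1, seed=0)
```

Each call to `step_prob` plays the role of one conditional in the factorization; swapping in richer state-dependent functions recovers the general non-stationary policy view.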
2.2 Generalized Guided Policy Search

We adopt a broader interpretation of guided policy search than one might initially take from, e.g., [9, 10, 11, 8]. We provide a review of guided policy search in the supplementary material. Our expanded definition of guided policy search includes any optimization of the general form:

minimize_{p,q} E_{i_q∼I_q} E_{i_p∼I_p(·|i_q)} [ E_{τ∼q(τ|i_q,i_p)} [ℓ(τ, i_q, i_p)] + λ·div(q(τ|i_q, i_p), p(τ|i_p)) ]   (2)

in which p indicates the primary policy, q indicates the guide policy, I_q indicates a distribution over information available only to q, I_p indicates a distribution over information available to both p and q, ℓ(τ, i_q, i_p) computes the cost of trajectory τ in the context of i_q/i_p, and div(q(τ|i_q, i_p), p(τ|i_p)) measures the dissimilarity between the trajectory distributions generated by p/q. As λ > 0 goes to infinity, Eqn. 2 enforces the constraint p(τ|i_p) = q(τ|i_q, i_p), ∀τ, i_p, i_q. Terms for controlling, e.g., the entropy of p/q can also be added. The power of the objective in Eqn. 2 stems from two main points: the guide policy q can use information i_q that is unavailable to the primary policy p, and the primary policy need only be trained to minimize the dissimilarity term div(q(τ|i_q, i_p), p(τ|i_p)).

For example, a directed model structured as in Eqn. 1 can be interpreted as specifying a policy for a finite-horizon MDP whose terminal state distribution encodes p(x). In this MDP, the state at time 1 ≤ t ≤ T + 1 is determined by {z_0, ..., z_{t−1}}. The policy picks an action z_t ∈ Z_t at time 1 ≤ t ≤ T, and picks an action x ∈ X at time t = T + 1. I.e., the policy can be written as p_t(z_t | z_0, ..., z_{t−1}) for 1 ≤ t ≤ T, and as p(x | z_0, ..., z_T) for t = T + 1. The initial state z_0 ∈ Z_0 is drawn from p_0(z_0). Executing the policy for a single trial produces a trajectory τ ≜ {z_0, ..., z_T, x}, and the distribution over the x in these trajectories is just p(x) in the corresponding directed generative model.
The authors of [4] train deep autoregressive networks by maximizing a variational lower bound on the training set log-likelihood. To do this, they introduce a variational distribution q which provides q_0(z_0|x*) and q_t(z_t | z_0, ..., z_{t−1}, x*) for 1 ≤ t ≤ T, with the final step q(x | z_0, ..., z_T, x*) given by a Dirac delta at x*. Given these definitions, the training in [4] can be interpreted as guided policy search for the MDP described in the previous paragraph. Specifically, the variational distribution q provides a guide policy q(τ|x*) over trajectories τ ≜ {z_0, ..., z_T, x*}:

q(τ|x*) ≜ q(x | z_0, ..., z_T, x*) q_0(z_0|x*) Π_{t=1}^{T} q_t(z_t | z_0, ..., z_{t−1}, x*)   (3)

The primary policy p generates trajectories distributed according to:

p(τ) ≜ p(x | z_0, ..., z_T) p_0(z_0) Π_{t=1}^{T} p_t(z_t | z_0, ..., z_{t−1})   (4)

which does not depend on x*. In this case, x* corresponds to the guide-only information i_q ∼ I_q in Eqn. 2. We now rewrite the variational optimization as:

minimize_{p,q} E_{x*∼D_X} [ E_{τ∼q(τ|x*)} [ℓ(τ, x*)] + KL(q(τ|x*) || p(τ)) ]   (5)

where ℓ(τ, x*) ≜ 0 and D_X indicates the target distribution for the terminal state of the primary policy p.¹ When expanded, the KL term in Eqn. 5 becomes:

KL(q(τ|x*) || p(τ)) = E_{τ∼q(τ|x*)} [ log (q_0(z_0|x*) / p_0(z_0)) + Σ_{t=1}^{T} log (q_t(z_t | z_0, ..., z_{t−1}, x*) / p_t(z_t | z_0, ..., z_{t−1})) − log p(x* | z_0, ..., z_T) ]   (6)

Thus, the variational approach used in [4] for training directed generative models can be interpreted as a form of generalized guided policy search. As the form in Eqn. 1 can represent any finite directed generative model, the preceding derivation extends to all models we discuss in this paper.²

2.3 Time-reversible Stochastic Processes

One can simplify Eqn. 1 by assuming suitable forms for X and Z_0, ..., Z_T. E.g., the authors of [16] proposed a model in which Z_t ≡ X for all t and p_0(x_0) was Gaussian.
We can write their model as:

p(x_T) = Σ_{x_0,...,x_{T−1}} p_T(x_T | x_{T−1}) p_0(x_0) Π_{t=1}^{T−1} p_t(x_t | x_{t−1})   (7)

where p(x_T) indicates the terminal state distribution of the non-stationary, finite-horizon Markov process determined by {p_0(x_0), p_1(x_1|x_0), ..., p_T(x_T | x_{T−1})}. Note that, throughout this paper, we (ab)use sums over latent variables and trajectories which could/should be written as integrals. The authors of [16] observed that, for any reasonably smooth target distribution D_X and sufficiently large T, one can define a “reverse-time” stochastic process q_t(x_{t−1}|x_t) with simple, time-invariant dynamics that transforms q(x_T) ≜ D_X into the Gaussian distribution p_0(x_0). This q is given by:

q_0(x_0) = Σ_{x_1,...,x_T} q_1(x_0|x_1) D_X(x_T) Π_{t=2}^{T} q_t(x_{t−1}|x_t) ≈ p_0(x_0)   (8)

Next, we define q(τ) as the distribution over trajectories τ ≜ {x_0, ..., x_T} generated by the reverse-time process determined by {q_1(x_0|x_1), ..., q_T(x_{T−1}|x_T), D_X(x_T)}. We define p(τ) as the distribution over trajectories generated by the “forward-time” process in Eqn. 7. The training in [16] is equivalent to guided policy search using guide trajectories sampled from q, i.e. it uses the objective:

minimize_{p,q} E_{τ∼q(τ)} [ log (q_1(x_0|x_1) / p_0(x_0)) + Σ_{t=1}^{T−1} log (q_{t+1}(x_t|x_{t+1}) / p_t(x_t|x_{t−1})) + log (D_X(x_T) / p_T(x_T | x_{T−1})) ]   (9)

which corresponds to minimizing KL(q || p). If the log-densities in Eqn. 9 are tractable, then this minimization can be done using basic Monte Carlo. If, as in [16], the reverse-time process q is not trained, then Eqn. 9 simplifies to: minimize_p E_{q(τ)} [−log p_0(x_0) − Σ_{t=1}^{T} log p_t(x_t | x_{t−1})]. This trick for generating guide trajectories exhibiting a particular distribution over terminal states x_T – i.e. running dynamics backwards in time starting from x_T ∼ D_X – may prove useful in settings other than those considered in [16]. E.g., the LapGAN model in [1] learns to approximately invert a fixed (and information-destroying) reverse-time process.
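The guide-trajectory trick of this subsection — run a reverse-time kernel starting from a data sample x_T ∼ D_X — can be sketched generically. The additive-Gaussian kernel below is our own stand-in for illustration, not the process actually used in [16]:

```python
import random

def reverse_guide_trajectory(xT, T, reverse_step, seed=None):
    """Generate a guide trajectory {x_0, ..., x_T} by starting from a data
    sample x_T and sampling the reverse-time kernel q_t(x_{t-1} | x_t)
    backwards T times.  `reverse_step(x, rng)` is any sampler for the
    reverse kernel."""
    rng = random.Random(seed)
    traj = [xT]
    x = xT
    for t in range(T, 0, -1):
        x = reverse_step(x, rng)   # x_{t-1} ~ q_t(. | x_t)
        traj.append(x)
    traj.reverse()                 # reorder as x_0, ..., x_T
    return traj

# e.g. additive Gaussian noising as a fixed, untrained reverse kernel
noise = lambda x, rng: [xi + rng.gauss(0.0, 0.3) for xi in x]
guide = reverse_guide_trajectory([1.0, -1.0], T=4, reverse_step=noise, seed=1)
```

The resulting trajectories terminate exactly at the data sample, which is what makes them usable as guide trajectories for training the forward-time process.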
The supplementary material expands on the content of this subsection, including a derivation of Eqn. 9 as a bound on E_{x∼D_X}[−log p(x)].

¹ We could pull the −log p(x*|z_0, ..., z_T) term from the KL and put it in the cost ℓ(τ, x*), but we prefer the “path-wise KL” formulation for its elegance. We abuse notation using KL(δ(x = x*) || p(x)) ≜ −log p(x*).
² This also includes all generative models implemented and executed on an actual computer.

2.4 Learning Generative Stochastic Processes with LSTMs

The authors of [3] introduced a model for sequentially-deep generative processes. We interpret their model as a primary policy p which generates trajectories τ ≜ {z_0, ..., z_T, x} with distribution:

p(τ) ≜ p(x | s_θ(τ_{<x})) p_0(z_0) Π_{t=1}^{T} p_t(z_t),  with  τ_{<x} ≜ {z_0, ..., z_T}   (10)

in which τ_{<x} indicates a latent trajectory and s_θ(τ_{<x}) indicates a state trajectory {s_0, ..., s_T} computed recursively from τ_{<x} using the update s_t ← f_θ(s_{t−1}, z_t) for t ≥ 1. The initial state s_0 is given by a trainable constant. Each state s_t ≜ [h_t; v_t] represents the joint hidden/visible state h_t/v_t of an LSTM, and f_θ(state, input) computes a standard LSTM update.³ The authors of [3] defined all p_t(z_t) as isotropic Gaussians and defined the output distribution p(x | s_θ(τ_{<x})) as p(x|c_T), where c_T ≜ c_0 + Σ_{t=1}^{T} ω_θ(v_t). Here, c_0 is a trainable constant and ω_θ(v_t) is, e.g., an affine transform of v_t. Intuitively, ω_θ(v_t) transforms v_t into a refinement of the “working hypothesis” c_{t−1}, which gets updated to c_t = c_{t−1} + ω_θ(v_t). p is governed by parameters θ which affect f_θ, ω_θ, s_0, and c_0. The supplementary material provides pseudo-code and an illustration for this model.

To train p, the authors of [3] introduced a guide policy q with trajectory distribution:

q(τ|x*) ≜ q(x | s_φ(τ_{<x}), x*) q_0(z_0|x*) Π_{t=1}^{T} q_t(z_t | s̃_t, x*),  with  τ_{<x} ≜ {z_0, ..., z_T}   (11)

in which s_φ(τ_{<x}) indicates a state trajectory {s̃_0, ..., s̃_T} computed recursively from τ_{<x} using the guide policy’s state update s̃_t ← f_φ(s̃_{t−1}, g_φ(s_θ(τ_{<t}), x*)).
In this update, s̃_{t−1} is the previous guide state and g_φ(s_θ(τ_{<t}), x*) is a deterministic function of x* and the partial (primary) state trajectory s_θ(τ_{<t}) ≜ {s_0, ..., s_{t−1}}, which is computed recursively from τ_{<t} ≜ {z_0, ..., z_{t−1}} using the state update s_t ← f_θ(s_{t−1}, z_t). The output distribution q(x | s_φ(τ_{<x}), x*) is defined as a Dirac delta at x*.⁴ Each q_t(z_t | s̃_t, x*) is a diagonal Gaussian distribution with means and log-variances given by an affine function L_φ(ṽ_t) of ṽ_t. q_0(z_0) is defined as identical to p_0(z_0). q is governed by parameters φ which affect the state updates f_φ(s̃_{t−1}, g_φ(s_θ(τ_{<t}), x*)) and the step distributions q_t(z_t | s̃_t, x*). g_φ(s_θ(τ_{<t}), x*) corresponds to the “read” operation of the encoder network in [3]. Using our definitions for p/q, the training objective in [3] is given by:

minimize_{p,q} E_{x*∼D_X} E_{τ∼q(τ|x*)} [ Σ_{t=1}^{T} log (q_t(z_t | s̃_t, x*) / p_t(z_t)) − log p(x* | s(τ_{<x})) ]   (12)

which can be written more succinctly as E_{x*∼D_X} KL(q(τ|x*) || p(τ)). This objective upper-bounds E_{x*∼D_X}[−log p(x*)], where p(x) ≜ Σ_{τ_{<x}} p(x | s_θ(τ_{<x})) p(τ_{<x}).

2.5 Extending the LSTM-based Generative Model

We propose changing p in Eqn. 10 to: p(τ) ≜ p(x | s_θ(τ_{<x})) p_0(z_0) Π_{t=1}^{T} p_t(z_t | s_{t−1}). We define p_t(z_t | s_{t−1}) as a diagonal Gaussian distribution with means and log-variances given by an affine function L_θ(v_{t−1}) of v_{t−1} (recall that s_t ≜ [h_t; v_t]), and we define p_0(z_0) as an isotropic Gaussian. We set s_0 using s_0 ← f_θ(z_0), where f_θ is a trainable function (e.g. a neural network). Intuitively, our changes make the model more like a typical policy by conditioning its “action” z_t on its state s_{t−1}, and upgrade the model to an infinite mixture by placing a distribution over its initial state s_0. We also consider using c_t ≜ L_θ(h_t), which transforms the hidden part of the LSTM state s_t directly into an observation. This makes h_t a working memory in which to construct an observation. The supplementary material provides pseudo-code and an illustration for this model.
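As a sketch of the additive-canvas idea (c_t = c_{t−1} + ω_θ(v_t)), the snippet below substitutes a toy tanh recurrence and fixed linear maps for the trained LSTM and affine transforms of [3]; it shows the control flow of the sampler only, not the actual architecture:

```python
import math, random

def generate(T, dim, seed=0):
    """Additive-canvas sampler sketch: at each step draw z_t, update a
    recurrent state v_t, and add a refinement omega(v_t) to the working
    canvas c_t.  The recurrence and readout are toy stand-ins for the
    LSTM update f_theta and the affine map omega_theta."""
    rng = random.Random(seed)
    W = [[0.1 * ((i + j) % 3 - 1) for j in range(dim)] for i in range(dim)]
    v = [0.0] * dim                                  # (visible) state, s_0
    c = [0.0] * dim                                  # canvas c_0
    for t in range(T):
        z = [rng.gauss(0.0, 1.0) for _ in range(dim)]        # z_t ~ p_t
        v = [math.tanh(sum(W[i][j] * v[j] for j in range(dim)) + z[i])
             for i in range(dim)]                    # s_t <- f(s_{t-1}, z_t)
        c = [c[i] + 0.5 * v[i] for i in range(dim)]  # c_t = c_{t-1} + omega(v_t)
    return [1.0 / (1.0 + math.exp(-ci)) for ci in c]  # sigma(c_T)
```

Each pass through the loop refines the working hypothesis; applying the sigmoid only at the end mirrors reading the observation off the final canvas c_T.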
We train this model by optimizing the objective:

minimize_{p,q} E_{x*∼D_X} E_{τ∼q(τ|x*)} [ log (q_0(z_0|x*) / p_0(z_0)) + Σ_{t=1}^{T} log (q_t(z_t | s̃_t, x*) / p_t(z_t | s_{t−1})) − log p(x* | s(τ_{<x})) ]   (13)

where we now have to deal with p_t(z_t | s_{t−1}), p_0(z_0), and q_0(z_0|x*), which could be treated as constants in the model from [3]. We define q_0(z_0|x*) as a diagonal Gaussian distribution whose means and log-variances are given by a trainable function g_φ(x*).

[Figure 1: The left block shows σ(c_t) for t ∈ {1, 3, 5, 9, 16}, for a policy p with c_t ≜ c_0 + Σ_{t′=1}^{t} L_θ(v_{t′}). The right block is analogous, for a model using c_t ≜ L_θ(h_t).]

When trained for the binarized MNIST benchmark used in [3], our extended model scored a negative log-likelihood of 85.5 on the test set.⁵ For comparison, the score reported in [3] was 87.4.⁶ After fine-tuning the variational distribution (i.e. q) on the test set, our model’s score improved to 84.8, which is quite strong considering it is an upper bound; for comparison, the best upper bound reported for this benchmark in [15] was 85.1. When the model used the alternate c_T ≜ L_θ(h_T), the raw/fine-tuned test scores were 85.9/85.3. Fig. 1 shows samples from the model. Model/test code is available at http://github.com/Philip-Bachman/Sequential-Generation.

³ For those unfamiliar with LSTMs, a good introduction can be found in [2]. We use LSTMs including input gates, forget gates, output gates, and peephole connections for all tests presented in this paper.
⁴ It may be useful to relax this assumption.

3 Developing Models for Sequential Imputation

The goal of imputation is to estimate p(x^u | x^k), where x ≜ [x^u; x^k] indicates a complete observation with known values x^k and missing values x^u. We define a mask m ∈ M as a (disjoint) partition of x into x^u/x^k. By expanding x^u to include all of x, one recovers standard generative modelling. By shrinking x^u to include a single element of x, one recovers standard classification/regression.
Given distribution DM over m ∈M and distribution DX over x ∈X, the objective for imputation is: minimize p E x∼DX E m∼DM  −log p(xu|xk)  (14) We now describe a finite-horizon MDP for which guided policy search minimizes a bound on the objective in Eqn. 14. The MDP is defined by mask distribution DM, complete observation distribution DX , and the state spaces {Z0, ..., ZT } associated with each of T steps. Together, DM and DX define a joint distribution over initial states and rewards in the MDP. For the trial determined by x ∼DX and m ∼DM, the initial state z0 ∼p(z0|xk) is selected by the policy p based on the known values xk. The cost ℓ(τ, xu, xk) suffered by trajectory τ ≜{z0, ..., zT } in the context (x, m) is given by −log p(xu|τ, xk), i.e. the negative log-likelihood of p guessing the missing values xu after following trajectory τ, while seeing the known values xk. We consider a policy p with trajectory distribution p(τ|xk) ≜p(z0|xk) QT t=1 p(zt|z0, ..., zt−1, xk), where xk is determined by x/m for the current trial and p can’t observe the missing values xu. With these definitions, we can find an approximately optimal imputation policy by solving: minimize p E x∼DX E m∼DM E τ∼p(τ|xk)  −log p(xu|τ, xk)  (15) I.e. the expected negative log-likelihood of making a correct imputation on any given trial. This is a valid, but loose, upper bound on the imputation objective in Eq. 14 (from Jensen’s inequality). We can tighten the bound by introducing a guide policy (i.e. a variational distribution). As with the unconditional generative models in Sec. 2, we train p to imitate a guide policy q shaped by additional information (here it’s xu). This q generates trajectories with distribution q(τ|xu, xk) ≜ q(z0|xu, xk) QT t=1 q(zt|z0, ..., zt−1, xu, xk). Given this p and q, guided policy search solves: minimize p,q E x∼DX E m∼DM  E τ∼q(τ|iq,ip) [−log q(xu|τ, iq, ip)] + KL(q(τ|iq, ip) || p(τ|ip))  (16) where we define iq ≜xu, ip ≜xk, and q(xu|τ, iq, ip) ≜p(xu|τ, ip). 
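The looseness of the bound in Eq. 15 relative to Eq. 14 is exactly Jensen's inequality: −log E_τ[p(x_u|τ, x_k)] ≤ E_τ[−log p(x_u|τ, x_k)]. A toy numerical check with made-up per-trajectory likelihoods:

```python
import numpy as np

rng = np.random.default_rng(0)
# Imagine per-trajectory likelihoods p(x_u | tau, x_k) for 1000 sampled
# trajectories on one trial (hypothetical values, for illustration only).
likelihoods = rng.uniform(0.05, 0.9, size=1000)

marginal_nll = -np.log(likelihoods.mean())  # -log E_tau[ p(x_u|tau,x_k) ], Eq. 14-style
bound = (-np.log(likelihoods)).mean()       # E_tau[ -log p(x_u|tau,x_k) ], Eq. 15
assert bound > marginal_nll                 # Jensen: valid but loose upper bound
```

Tightening this gap is precisely what introducing the guide policy q in Eq. 16 is for: the KL(q||p) term replaces the slack lost by moving the log inside the expectation.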
Footnotes: 5. Data splits from: http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist 6. The model in [3] significantly improves its score to 80.97 when using an image-specific architecture.

3.1 A Direct Representation for Sequential Imputation Policies

We define an imputation trajectory as c_τ ≜ {c_0, ..., c_T}, where each partial imputation c_t ∈ X is computed from a partial step trajectory τ_{<t} ≜ {z_1, ..., z_t}. A partial imputation c_{t−1} encodes the policy’s guess for the missing values x_u immediately prior to selecting step z_t, and c_T gives the policy’s final guess. At each step of iterative refinement, the policy selects a z_t based on c_{t−1} and the known values x_k, and then updates its guesses to c_t based on c_{t−1} and z_t. By iteratively refining its guesses based on feedback from earlier guesses and the known values, the policy can construct complexly structured distributions over its final guess c_T after just a few steps. This happens naturally, without any post-hoc MRFs/CRFs (as in many approaches to structured prediction), and without sampling values in c_T one at a time (as required by existing NADE-type models [7]). This property of our approach should prove useful for many tasks. We consider two ways of updating the guesses in c_t, mirroring those described in Sec. 2. The first way sets c_t ← c_{t−1} + ω_θ(z_t), where ω_θ(z_t) is a trainable function. We set c_0 ≜ [c^u_0; c^k_0] using a trainable bias. The second way sets c_t ← ω_θ(z_t). We indicate models using the first type of update with the suffix -add, and models using the second type of update with -jump. Our primary policy p_θ selects z_t at each step 1 ≤ t ≤ T using p_θ(z_t|c_{t−1}, x_k), which we restrict to be a diagonal Gaussian. This is a simple, stationary policy. Together, the step selector p_θ(z_t|c_{t−1}, x_k) and the imputation constructor ω_θ(z_t) fully determine the behaviour of the primary policy. The supplementary material provides pseudo-code and an illustration for this model. We construct a guide policy q similarly to p.
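The two guess-update rules (-add and -jump) can be sketched directly. Here the imputation constructor ω_θ is stood in for by a fixed linear map W; in the actual models it is a trainable network, and these names are ours:

```python
import numpy as np

def omega_theta(z):
    """Stand-in imputation constructor: maps a step z_t into guess space.
    (A fixed linear map here; a trainable network in practice.)"""
    return W @ z

def update_add(c_prev, z):   # "-add": refine the running guess additively
    return c_prev + omega_theta(z)

def update_jump(c_prev, z):  # "-jump": replace the guess outright
    return omega_theta(z)

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))
c0, z1, z2 = np.zeros(6), rng.standard_normal(3), rng.standard_normal(3)

# After two steps, -add has accumulated both refinements, while -jump only
# remembers the latest step.
c_add = update_add(update_add(c0, z1), z2)
c_jump = update_jump(update_jump(c0, z1), z2)
assert np.allclose(c_add, W @ z1 + W @ z2)
assert np.allclose(c_jump, W @ z2)
```

The difference matters for credit assignment: with -add, every step leaves a trace in c_T; with -jump, earlier steps influence c_T only through the policy's state.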
The guide policy shares the imputation constructor ωθ(zt) with the primary policy. The guide policy incorporates additional information x ≜[xu; xk], i.e. the complete observation for which the primary policy must reconstruct some missing values. The guide policy chooses steps using qφ(zt|ct−1, x), which we restrict to be a diagonal Gaussian. We train the primary/guide policy components ωθ, pθ, and qφ simultaneously on the objective: minimize θ,φ E x∼DX E m∼DM  E τ∼qφ(τ|xu,xk) [−log q(xu|cu T )] + KL(q(τ|xu, xk) || p(τ|xk))  (17) where q(xu|cu T ) ≜p(xu|cu T ). We train our models using Monte-Carlo roll-outs of q, and stochastic backpropagation as in [6, 14]. Full implementations and test code are available from http:// github.com/Philip-Bachman/Sequential-Generation. 3.2 Representing Sequential Imputation Policies using LSTMs To make it useful for imputation, which requires conditioning on the exogenous information xk, we modify the LSTM-based model from Sec. 2.5 to include a “read” operation in its primary policy p. We incorporate a read operation by spreading p over two LSTMs, pr and pw, which respectively “read” and “write” an imputation trajectory cτ ≜{c0, ..., cT }. Conveniently, the guide policy q for this model takes the same form as the primary policy’s reader pr. This model also includes an “infinite mixture” initialization step, as used in Sec. 2.5, but modified to incorporate conditioning on x and m. The supplementary material provides pseudo-code and an illustration for this model. Following the infinite mixture initialization step, a single full step of execution for p involves several substeps: first p updates the reader state using sr t ←f r θ (sr t−1, ωr θ(ct−1, sw t−1, xk)), then p selects a step zt ∼pθ(zt|vr t ), then p updates the writer state using sw t ←f w θ (sw t−1, zt), and finally p updates its guesses by setting ct ←ct−1 + ωw θ (vw t ) (or ct ←ωw θ (hw t )). 
In these updates, s^{r,w}_t ≜ [h^{r,w}_t; v^{r,w}_t] refer to the states of the reader (r) and writer (w) LSTMs. The LSTM updates f^{r,w}_θ and the read/write operations ω^{r,w}_θ are governed by the policy parameters θ. We train p to imitate trajectories sampled from a guide policy q. The guide policy shares the primary policy’s writer updates f^w_θ and write operation ω^w_θ, but has its own reader updates f^q_φ and read operation ω^q_φ. At each step, the guide policy: updates the guide state s^q_t ← f^q_φ(s^q_{t−1}, ω^q_φ(c_{t−1}, s^w_{t−1}, x)), then selects z_t ∼ q_φ(z_t|v^q_t), then updates the writer state s^w_t ← f^w_θ(s^w_{t−1}, z_t), and finally updates its guesses c_t ← c_{t−1} + ω^w_θ(v^w_t) (or c_t ← ω^w_θ(h^w_t)). As in Sec. 3.1, the guide policy’s read operation ω^q_φ gets to see the complete observation x, while the primary policy only gets to see the known values x_k. We restrict the step distributions p_θ/q_φ to be diagonal Gaussians whose means and log-variances are affine functions of v^r_t / v^q_t. The training objective has the same form as Eq. 17.

[Figure 2 plots omitted: (a)/(b) show imputation NLL vs. mask probability for TM-orc, TM-hon, VAE-imp, GPSI-add, GPSI-jump, LSTM-add and LSTM-jump; (c) shows NLL vs. the number of refinement steps for the GPSI models.]

Figure 2: (a) Comparing the performance of our imputation models against several baselines, using MNIST digits. The x-axis indicates the % of pixels which were dropped completely at random, and the scores are normalized by the number of imputed pixels. (b) A closer view of results from (a), just for our models. (c) The effect of increased iterative refinement steps for our GPSI models.
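One full step of the reader/writer primary policy described above can be sketched with stand-in networks. Everything here is a simplification with hypothetical names and sizes: the LSTMs are replaced by plain tanh recurrences, and the read/write operations by fixed linear maps:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_v, d_x, d_r = 3, 6, 8, 5   # hypothetical step / state / data / read sizes

def make_update(d_in):
    """Stand-in for an LSTM update f(s, u); a plain tanh recurrence for brevity."""
    A = 0.1 * rng.standard_normal((d_v, d_v))
    B = 0.1 * rng.standard_normal((d_v, d_in))
    return lambda s, u: np.tanh(A @ s + B @ u)

f_r, f_w = make_update(d_r), make_update(d_z)             # reader / writer updates
W_read = 0.1 * rng.standard_normal((d_r, 2 * d_x + d_v))  # stand-in read op
W_mu = 0.1 * rng.standard_normal((d_z, d_v))              # step means from reader state
W_write = 0.1 * rng.standard_normal((d_x, d_v))           # stand-in write op

def policy_step(c_prev, s_r, s_w, x_k):
    """One full step: read (c, s_w, x_k) -> update reader -> select z_t ->
    update writer -> refine the guess additively ("-add" style)."""
    r = W_read @ np.concatenate([c_prev, s_w, x_k])
    s_r = f_r(s_r, r)                                # reader state update
    z = W_mu @ s_r + 0.1 * rng.standard_normal(d_z)  # z_t ~ p(z_t | reader state)
    s_w = f_w(s_w, z)                                # writer state update
    return c_prev + W_write @ s_w, s_r, s_w          # c_t <- c_{t-1} + write op

c, s_r, s_w, x_k = np.zeros(d_x), np.zeros(d_v), np.zeros(d_v), rng.standard_normal(d_x)
for _ in range(4):
    c, s_r, s_w = policy_step(c, s_r, s_w, x_k)
```

The guide policy has the same shape, except its read operation also sees the full observation x and its reader parameters are its own.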
4 Experiments

We tested the performance of our sequential imputation models on three datasets: MNIST (28x28), SVHN (cropped, 32x32) [13], and TFD (48x48) [17]. We converted images to grayscale and shift/scaled them to be in the range [0...1] prior to training/testing. We measured the imputation log-likelihood log q(x_u|c^u_T) using the true missing values x_u and the models’ guesses given by σ(c^u_T). We report negative log-likelihoods, so lower scores are better in all of our tests. We refer to variants of the model from Sec. 3.1 as GPSI-add and GPSI-jump, and to variants of the model from Sec. 3.2 as LSTM-add and LSTM-jump. Except where noted, the GPSI models used 6 refinement steps and the LSTM models used 16.

We tested imputation under two types of data masking: missing completely at random (MCAR) and missing at random (MAR). In MCAR, we masked pixels uniformly at random from the source images, and indicate removal of d% of the pixels by MCAR-d. In MAR, we masked square regions, with the occlusions located uniformly at random within the borders of the source image. We indicate occlusion of a d × d square by MAR-d. On MNIST, we tested MCAR-d for d ∈ {50, 60, 70, 80, 90}; MCAR-100 corresponds to unconditional generation. On TFD and SVHN we tested MCAR-80. On MNIST, we tested MAR-d for d ∈ {14, 16}. On TFD we tested MAR-25 and on SVHN we tested MAR-17. For test trials we sampled masks from the same distribution used in training, and we sampled complete observations from a held-out test set. Fig. 2 and Tab. 1 present quantitative results from these tests. Fig. 2(c) shows the behavior of our GPSI models when we allowed them fewer/more refinement steps.

            |     MNIST     |      TFD       |      SVHN
            | MAR-14 MAR-16 | MCAR-80 MAR-25 | MCAR-80 MAR-17
  LSTM-add  |  170    167   |  1381   1377   |   525    568
  LSTM-jump |  172    169   |   –      –     |    –      –
  GPSI-add  |  177    175   |  1390   1380   |   531    569
  GPSI-jump |  183    177   |  1394   1384   |   540    572
  VAE-imp   |  374    394   |  1416   1399   |   567    624

Table 1: Imputation performance in various settings.
Details of the tests are provided in the main text. Lower scores are better. Due to time constraints, we did not test LSTM-jump on TFD or SVHN. These scores are normalized for the number of imputed pixels. We tested our models against three baselines. The baselines were “variational auto-encoder imputation”, honest template matching, and oracular template matching. VAE imputation ran multiple steps of VAE reconstruction, with the known values held fixed and the missing values re-estimated with each reconstruction step.8 After 16 refinement steps, we scored the VAE based on its best 7GPSI stands for “Guided Policy Search Imputer”. The tag “-add” refers to additive guess updates, and “-jump” refers to updates that fully replace the guesses. 8We discuss some deficiencies of VAE imputation in the supplementary material. 7 (a) (b) (c) Figure 3: This figure illustrates the policies learned by our models. (a): models trained for (MNIST, MAR-16). From top→bottom the models are: GPSI-add, GPSI-jump, LSTM-add, LSTM-jump. (b): models trained for (TFD, MAR-25), with models in the same order as (a) – but without LSTMjump. (c): models trained for (SVHN, MAR-17), with models arranged as for (b). guesses. Honest template matching guessed the missing values based on the training image which best matched the test image’s known values. Oracular template matching was like honest template matching, but matched directly on the missing values. Our models significantly outperformed the baselines. In general, the LSTM-based models outperformed the more direct GPSI models. We evaluated the log-likelihood of imputations produced by our models using the lower bounds provided by the variational objectives with respect to which they were trained. Evaluating the template-based imputations was straightforward. For VAE imputation, we used the expected log-likelihood of the imputations sampled from multiple runs of the 16-step imputation process. 
This provides a valid, but loose, lower bound on their log-likelihood. As shown in Fig. 3, the imputations produced by our models appear promising. The imputations are generally of high quality, and the models are capable of capturing strongly multi-modal reconstruction distributions (see subfigure (a)). The behavior of the GPSI models changed intriguingly when we swapped the imputation constructor: using the -jump imputation constructor, the imputation policy learned by the direct model was rather inscrutable. Fig. 2(c) shows that additive guess updates extracted more value from using more refinement steps. When trained on the binarized MNIST benchmark discussed in Sec. 2.5, i.e. with binarized images and subject to MCAR-100, the LSTM-add model produced raw/fine-tuned scores of 86.2/85.7. The LSTM-jump model scored 87.1/86.3. Anecdotally, on this task, these “closed-loop” models seemed more prone to overfitting than the “open-loop” models in Sec. 2.5. The supplementary material provides further qualitative results.

5 Discussion

We presented a point of view which links methods for training directed generative models with policy search in reinforcement learning. We showed how our perspective can guide improvements to existing models. The importance of these connections will only grow as generative models rapidly increase in structural complexity and effective decision depth. We introduced the notion of imputation as a natural generalization of standard, unconditional generative modelling. Depending on the relation between the data-to-generate and the available information, imputation spans from full unconditional generative modelling to classification/regression. We showed how to successfully train sequential imputation policies comprising millions of parameters using an approach based on guided policy search [9]. Our approach outperforms the baselines quantitatively and appears qualitatively promising.
Incorporating, e.g., the local read/write mechanisms from [3] should provide further improvements.

References

[1] Emily L Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative models using a Laplacian pyramid of adversarial networks. arXiv:1506.05751 [cs.CV], 2015.
[2] Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850 [cs.NE], 2013.
[3] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning (ICML), 2015.
[4] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning (ICML), 2014.
[5] Diederik P Kingma, Danilo J Rezende, Shakir Mohamed, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems (NIPS), 2014.
[6] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[7] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In International Conference on Machine Learning (ICML), 2011.
[8] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014.
[9] Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013.
[10] Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[11] Sergey Levine and Vladlen Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.
[12] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In International Conference on Machine Learning (ICML), 2014.
[13] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[14] Danilo Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML), 2014.
[15] Danilo J Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning (ICML), 2015.
[16] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), 2015.
[17] Joshua Susskind, Adam Anderson, and Geoffrey E Hinton. The Toronto Face Database. 2010.
Space-Time Local Embeddings

Ke Sun1∗ Jun Wang2 Alexandros Kalousis3,1 Stéphane Marchand-Maillet1
1 Viper Group, Computer Vision and Multimedia Laboratory, University of Geneva, sunk.edu@gmail.com, Stephane.Marchand-Maillet@unige.ch
2 Expedia, Switzerland, jwang1@expedia.com
3 Business Informatics Department, University of Applied Sciences, Western Switzerland, Alexandros.Kalousis@hesge.ch

Abstract

Space-time is a profound concept in physics. This concept was shown to be useful for dimensionality reduction. We present basic definitions with interesting counter-intuitions. We give theoretical propositions to show that space-time is a more powerful representation than Euclidean space. We apply this concept to manifold learning for preserving local information. Empirical results on non-metric datasets show that more information can be preserved in space-time.

1 Introduction

As a simple and intuitive representation, the Euclidean space ℜd has been widely used in various learning tasks. In dimensionality reduction, n given high-dimensional points in ℜD, or their pairwise (dis-)similarities, are usually represented as a corresponding set of points in ℜd (d < D). The representation power of ℜd is limited. Some of its limitations are listed next. (1) The maximum number of points which can share a common nearest neighbor is limited (2 for ℜ; 5 for ℜ2) [1, 2], while such centralized structures do exist in real data. (2) ℜd can at most embed (d + 1) points with uniform pair-wise similarities. It is hard to model pair-wise relationships with less variance. (3) Even if d is large enough, ℜd as a metric space must satisfy the triangle inequality, and therefore must admit transitive similarities [2], meaning that a neighbor’s neighbor should also be nearby. Such relationships can be violated on real data, e.g. social networks. (4) The Gram matrix of n real vectors must be positive semi-definite (p.s.d.).
Therefore ℜd cannot faithfully represent the negative eigen-spectrum of input similarities, which was discovered to be meaningful [3]. To tackle the above limitations of Euclidean embeddings, a commonly-used method is to impose a statistical mixture model. Each embedding point is a random point over several candidate locations w.r.t. some mixture weights. These candidate locations can be in the same ℜd [4], which allows an embedding point to jump across a long distance through a “statistical worm-hole”. Or, they can be in m independent ℜd’s [2, 5], resulting in m different views of the input data. Another approach beyond Euclidean embeddings is to change the embedding destination to a curved space Md. This Md can be a Riemannian manifold [6] with a positive definite metric, or equivalently, a curved surface embedded in a Euclidean space [7, 8]; learning such an embedding requires a closed-form expression of the distance measure. This Md can also be semi-Riemannian [9] with an indefinite metric. This semi-Riemannian representation, under the names “pseudo-Euclidean space”, “Minkowski space”, or, more conveniently, “space-time”, was shown [3, 7, 10–12] to be a powerful representation for non-metric datasets. In these works, an embedding is obtained through a spectral decomposition of a “pseudo-Gram” matrix, which is computed based on some input data. On the other hand, manifold learning methods [4, 13, 14] are capable of learning a p.s.d. kernel Gram matrix that encapsulates useful information into a narrow band of its eigen-spectrum. Usually, local neighborhood information is more strongly preserved as compared to non-local information [4, 15], so that the input information is unfolded in a non-linear manner to achieve the desired compactness. The present work advocates the space-time representation. Section 2 introduces the basic concepts. Section 3 gives several simple propositions that describe the representation power of space-time.
(∗ Corresponding author.)
As novel contributions, section 4 applies the space-time representation to manifold learning. Section 5 shows that using the same number of parameters, more information can be preserved by such embeddings as compared to Euclidean embeddings. This leads to new data visualization techniques. Section 6 concludes and discusses possible extensions.

2 Space-time

The fundamental measurements in geometry are established by the concept of a metric [6]. Intuitively, it is a locally- or globally-defined inner product. The metric of a Euclidean space ℜd is everywhere identity. The inner product between any two vectors y1 and y2 is ⟨y1, y2⟩ = y1^T I_d y2, where I_d is the d × d identity matrix. A space-time ℜ^{ds,dt} is a (ds + dt)-dimensional real vector space, where ds ≥ 0, dt ≥ 0, and the metric is

M = \begin{pmatrix} I_{ds} & 0 \\ 0 & −I_{dt} \end{pmatrix}.   (1)

This metric is not trivial. It is semi-Riemannian with a background in physics [9]. A point in ℜ^{ds,dt} is called an event, denoted by y = (y^1, ..., y^{ds}, y^{ds+1}, ..., y^{ds+dt})^T. The first ds dimensions are space-like, where the measurements are exactly the same as in a Euclidean space. The last dt dimensions are time-like, which cause counter-intuitions. In accordance with the metric M in eq. (1), for all y1, y2 ∈ ℜ^{ds,dt},

⟨y1, y2⟩ = Σ_{l=1}^{ds} y1^l y2^l − Σ_{l=ds+1}^{ds+dt} y1^l y2^l.   (2)

In analogy to using inner products to define distances, the following definition gives a dissimilarity measure between two events in ℜ^{ds,dt}.

Definition 1. The space-time interval, or shortly interval, between any two events y1 and y2 is

c(y1, y2) = ⟨y1, y1⟩ + ⟨y2, y2⟩ − 2⟨y1, y2⟩ = Σ_{l=1}^{ds} (y1^l − y2^l)^2 − Σ_{l=ds+1}^{ds+dt} (y1^l − y2^l)^2.   (3)

The space-time interval c(y1, y2) can be positive, zero or negative. With respect to a reference point y0 ∈ ℜ^{ds,dt}, the set {y : c(y, y0) = 0} is called a light cone. Figure 1a shows a light cone in ℜ^{2,1}. Within the light cone, c(y, y0) < 0, i.e., negative intervals occur; outside the light cone, c(y, y0) > 0.
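Eqs. (2) and (3) translate directly into code. A small NumPy sketch (function names are ours), checked against the light-cone definition in ℜ^{2,1}:

```python
import numpy as np

def st_inner(y1, y2, ds, dt):
    """Inner product in R^{ds,dt} w.r.t. the metric M = diag(I_ds, -I_dt), eq. (2)."""
    M = np.diag(np.concatenate([np.ones(ds), -np.ones(dt)]))
    return y1 @ M @ y2

def st_interval(y1, y2, ds, dt):
    """Space-time interval of eq. (3); may be positive, zero, or negative."""
    d = y1 - y2
    return st_inner(d, d, ds, dt)

# In R^{2,1}, (1, 0, 1) lies on the light cone of the origin: c = 1 - 1 = 0,
# while (0, 0, 1) lies inside it and has a negative interval.
origin = np.zeros(3)
on_cone = np.array([1.0, 0.0, 1.0])
inside = np.array([0.0, 0.0, 1.0])
assert st_interval(on_cone, origin, 2, 1) == 0.0
assert st_interval(inside, origin, 2, 1) < 0.0
```
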
The following counter-intuitions help to establish the concept of space-time.

A low-dimensional ℜ^{ds,dt} can accommodate an arbitrarily large number of events sharing a common nearest neighbor. In ℜ^{2,1}, let A = (0, 0, 1), and put {B1, B2, . . .} evenly on the circle {(y^1, y^2, 0) : (y^1)^2 + (y^2)^2 = 1} at time 0. Then, A is the unique nearest neighbor of B1, B2, . . . .

A low-dimensional ℜ^{ds,dt} can represent uniform pair-wise similarities between an arbitrarily large number of points. In ℜ^{1,1}, the similarities within {A_i : A_i = (i, i)}_{i=1}^{n} are uniform.

In ℜ^{ds,dt}, the triangle inequality is not necessarily satisfied. In ℜ^{2,1}, let A = (−1, 0, 0), B = (0, 0, 1), C = (1, 0, 0). Then c(A, C) > c(A, B) + c(B, C). The trick is that, as B’s absolute time value increases, its intervals with all events at time 0 are shrinking. Correspondingly, similarity measures in ℜ^{ds,dt} can be non-transitive: the fact that B is similar to A and C independently does not necessarily mean that A and C are similar.

A neighborhood of y0 ∈ ℜ^{2,1} is {(y^1, y^2, y^3) : (y^1 − y0^1)^2 + (y^2 − y0^2)^2 − (y^3 − y0^3)^2 ≤ ϵ}, where ϵ ∈ ℜ. This hyperboloid has infinite “volume”, no matter how small ϵ is. Comparatively, a neighborhood in ℜd is much narrower, with an exponentially shrinking volume as its radius decreases.

[Figure 1 plots omitted: (a) a light cone in a space-time; (b) equal-interval contours c = 1, 0.5, −0.5, −1 in ℜ^{1,1}; (c) the sub-manifolds g(K^{2,1}_n) and g(K^{3,0}_n) of ∆n with the projections p̂^{2,1} and p̂^{3,0} of p⋆.]

Figure 1: (a) A space-time; (b) A space-time “compass” in ℜ^{1,1}. The colored lines show equal-interval contours with respect to the origin; (c) All possible embeddings in ℜ^{2,1} (resp. ℜ3) are mapped to a sub-manifold of ∆n, as shown by the red (resp. blue) line. Dimensionality reduction projects the input p⋆ onto these sub-manifolds, e.g. by minimizing the KL divergence.

3 The representation capability of space-time

This section formally discusses some basic properties of ℜ^{ds,dt} in relation to dimensionality reduction.
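The counter-intuitions above are easy to verify numerically with the interval of eq. (3); a small sketch:

```python
import numpy as np

def interval(y1, y2, ds=2, dt=1):
    """Space-time interval of eq. (3) in R^{ds,dt}."""
    d = y1 - y2
    return np.sum(d[:ds] ** 2) - np.sum(d[ds:] ** 2)

# Many events sharing one nearest neighbor in R^{2,1}: A sits at time 1 above
# the center of a unit circle of events at time 0.
A = np.array([0.0, 0.0, 1.0])
B = [np.array([np.cos(t), np.sin(t), 0.0])
     for t in np.linspace(0, 2 * np.pi, 50, endpoint=False)]
for i, b in enumerate(B):
    c_to_A = interval(b, A)                         # = 1 - 1 = 0
    others = [interval(b, B[j]) for j in range(len(B)) if j != i]
    assert all(c_to_A < c for c in others)          # A is b's unique nearest neighbor

# Triangle-inequality violation in R^{2,1}: c(A,C) = 4 > 0 + 0 = c(A,B) + c(B,C).
A2, B2, C2 = np.array([-1.0, 0, 0]), np.array([0.0, 0, 1.0]), np.array([1.0, 0, 0])
assert interval(A2, C2) > interval(A2, B2) + interval(B2, C2)
```
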
We first build a tool to shift between two different representations of an embedding: a matrix of c(y_i, y_j) and a matrix of ⟨y_i, y_j⟩. From straightforward derivations, we have

Lemma 1. C_n = {C ∈ ℜ^{n×n} : ∀i, C_ii = 0; ∀i ≠ j, C_ij = C_ji} and K_n = {K ∈ ℜ^{n×n} : ∀i, Σ_{j=1}^{n} K_ij = 0; ∀i ≠ j, K_ij = K_ji} are two families of real symmetric matrices, with dim(C_n) = dim(K_n) = n(n − 1)/2. A linear mapping from C_n to K_n and its inverse are given by

K(C) = −(1/2)(I_n − (1/n) ee^T) C (I_n − (1/n) ee^T),   C(K) = diag(K) e^T + e diag(K)^T − 2K,   (4)

where e = (1, · · · , 1)^T, and diag(K) means the diagonal entries of K as a column vector.

C_n and K_n are the sets of interval matrices and “pseudo-Gram” matrices, respectively [3, 12]. In particular, a p.s.d. K ∈ K_n means a Gram matrix, and the corresponding C(K) means a squared distance matrix. The double centering mapping K(C) is widely used to generate a (pseudo-)Gram matrix from a dissimilarity matrix.

Proposition 2. For any C⋆ ∈ C_n, there exist n events in ℜ^{ds,dt}, with ds + dt ≤ n − 1, whose intervals are C⋆.

Proof. For any C⋆ ∈ C_n, K⋆ = K(C⋆) has the eigen-decomposition K⋆ = Σ_{l=1}^{rank(K⋆)} λ⋆_l v⋆_l (v⋆_l)^T, where rank(K⋆) ≤ n − 1 and {v⋆_l} are orthonormal. For each l = 1, · · · , rank(K⋆), √|λ⋆_l| v⋆_l gives the coordinates in one dimension, which is space-like if λ⋆_l > 0 or time-like if λ⋆_l < 0.

Remark 2.1. ℜ^{ds,dt} (ds + dt ≤ n − 1) can represent any interval matrix C⋆ ∈ C_n, or equivalently, any K⋆ ∈ K_n. Comparatively, ℜd (d ≤ n − 1) can only represent {K ∈ K_n : K ⪰ 0}.

A pair-wise distance matrix in ℜd is invariant to rotations; in other words, the direction information of a point cloud is completely discarded. In ℜ^{ds,dt}, some direction information is kept to distinguish between space-like and time-like dimensions. As shown in fig. 1b, one can tell the direction in ℜ^{1,1} by moving a point along the curve {(y^1)^2 + (y^2)^2 = 1} and measuring its interval w.r.t. the origin.
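The mappings of eq. (4) and the construction in the proof of Proposition 2 can be checked numerically. A NumPy sketch (function names are ours): double centering, its inverse, and recovering event coordinates whose signed squared differences reproduce the interval matrix:

```python
import numpy as np

def K_of_C(C):
    """Double centering, eq. (4): interval matrix -> pseudo-Gram matrix."""
    n = len(C)
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ C @ J

def C_of_K(K):
    """Inverse mapping of eq. (4): C_ij = K_ii + K_jj - 2 K_ij."""
    d = np.diag(K)
    return d[:, None] + d[None, :] - 2 * K

rng = np.random.default_rng(0)
n = 6
C = rng.standard_normal((n, n)); C = C + C.T; np.fill_diagonal(C, 0)  # C in C_n
K = K_of_C(C)
assert np.allclose(K.sum(axis=1), 0)     # rows of K in K_n sum to zero
assert np.allclose(C_of_K(K), C)         # the two maps are mutual inverses on C_n

# Proposition 2: sqrt(|lambda_l|) v_l gives coordinates; positive eigenvalues
# are space-like (+) and negative ones time-like (-).
lam, V = np.linalg.eigh(K)
Y = V * np.sqrt(np.abs(lam))             # one column of coordinates per dimension
s = np.sign(lam)
D = ((Y[:, None, :] - Y[None, :, :]) ** 2 * s).sum(-1)
assert np.allclose(D, C)                 # signed squared differences = intervals
```

Note that a random hollow symmetric C generally yields an indefinite K, which is exactly the case a purely Euclidean embedding cannot represent.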
Local embedding techniques often use similarity measures in a statistical simplex ∆n = {p = (p_ij) : 1 ≤ i < j ≤ n; ∀i, ∀j, p_ij > 0; Σ_{i<j} p_ij = 1}. This ∆n has one less dimension than C_n and K_n, so that dim(∆n) = n(n − 1)/2 − 1. A mapping from K_n (C_n) to ∆n is given by

p_ij ∝ f(C_ij(K)),   (5)

where f(·) is a positive-valued strictly monotonically decreasing function, so that a large probability mass is assigned to a pair of events with a small interval. Proposition 2 trivially extends to

Proposition 3. For any p⋆ ∈ ∆n, there exist n events in ℜ^{ds,dt}, with ds + dt ≤ n − 1, whose similarities are p⋆.

Remark 3.1. ℜ^{ds,dt} (ds + dt ≤ n − 1) can represent any n × n symmetric positive similarities.

Typically in eq. (5) we have f(x) = exp(−x). The pre-image in C_n of any given p⋆ ∈ ∆n is the curve {C⋆ + 2δ(ee^T − I_n) : δ ∈ ℜ}, where C⋆_ij = −ln p⋆_ij for all i ≠ j, and 2δ(ee^T − I_n) means a uniform increment on the off-diagonal entries of C⋆. By eq. (4), the corresponding curve in K_n is {K⋆(δ) = K⋆ + δ(I_n − (1/n) ee^T) : δ ∈ ℜ}, where K⋆(0) = K⋆ = K(C⋆). Because (I_n − (1/n) ee^T) shares with K⋆ a common eigenvector e with zero eigenvalue, and the rest of its eigenvalues are all 1, there exist orthonormal vectors {v⋆_l}_{l=1}^{n−1} and real numbers {λ⋆_l}_{l=1}^{rank(K⋆)}, such that K⋆ = Σ_{l=1}^{rank(K⋆)} λ⋆_l v⋆_l (v⋆_l)^T and I_n − (1/n) ee^T = Σ_{l=1}^{n−1} v⋆_l (v⋆_l)^T. Therefore

K⋆(δ) = Σ_{l=1}^{rank(K⋆)} (λ⋆_l + δ) v⋆_l (v⋆_l)^T + Σ_{l=rank(K⋆)+1}^{n−1} δ v⋆_l (v⋆_l)^T.   (6)

Depending on δ, K⋆(δ) can be negative definite, positive definite, or somewhere in between. This is summarized in the following theorem.

Theorem 4. If f(x) = exp(−x) in eq. (5), the pre-image in K_n of any p⋆ ∈ ∆n is a continuous curve {K⋆(δ) : δ ∈ ℜ}. There exist δ0, δ1 ∈ ℜ such that K⋆(δ) ≺ 0 for all δ < δ0, K⋆(δ) ≻ 0 for all δ > δ1, and the number of positive eigenvalues of K⋆(δ) increases monotonically with δ.
There is no particular reason to favor a space-only model, because the objective of dimensionality reduction is to get a compact model with a small number of dimensions, regardless of whether they are space-like or time-like. Formally, K ds,dt n = {K+ −K−: rank(K+) ≤ds; rank(K−) ≤dt; K+ ⪰0; K−⪰0} is a low-rank subset of Kn. In the domain Kn, dimensionality reduction based on the input p⋆finds some ˆ Kds,dt ∈ K ds,dt n , which is close to the curve K⋆(δ). In the probability domain ∆n, the image of K ds,dt n under some mapping g : Kn →∆n is g(K ds,dt n ). As shown in fig. 1c, dimensionality reduction finds some ˆpds,dt ∈g(K ds,dt n ), so that ˆpds,dt is the closest point to p⋆w. r. t. some information theoretic measure. The proximity of p⋆to ˆpds,dt, i. e. its proximity to g(K ds,dt n ), measures the quality of the model ℜds,dt as the embedding target space, when the model scale or the number of dimensions is given. We will investigate the latter approach, which depends on the choice of ds, dt, the mapping g, and some proximity measure on ∆n. We will show that, with the same number of dimensions ds + dt, the region g(K ds,dt n ) with space-time-mixed dimensions is naturally close to certain input p⋆. 4 Space-time local embeddings We project a given similarity matrix p⋆∈∆n to some ˆ K ∈K ds,dt n , or equivalently, to a set of events Y = {yi}n i=1 ⊂ℜds,dt, so that ∀i, ∀j, ⟨yi, yj⟩= ˆ Kij as in eq. (2), and the similarities among these events resemble p⋆. As discussed in section 3, a mapping g : Kn →∆n helps transfer K ds,dt n into a sub-manifold of ∆n, so that the projection can be done inside ∆n. This mapping expressed in the event coordinates is given by pij(Y ) ∝exp ∥yt i −yt j∥2 1 + ∥ys i −ys j∥2 , (7) where ys = (y1, . . . , yds)T , yt = (yds+1, . . . , yds+dt)T , and ∥· ∥denotes the 2-norm. For any pair of events yi and yj, pij(Y ) increases when their space coordinates move close, and/or when their time coordinates move away. 
This agrees with the basic intuitions of space-time. For time-like dimensions, the heat kernel is used to make p_ij(Y) sensitive to time variations. This helps to suppress events with large absolute time values, which make the embedding less interpretable. For space-like dimensions, the Student-t kernel, as suggested by t-SNE [13], is used, so that there could be more “volume” to accommodate the often high-dimensional input data. Based on our experience, this hybrid parametrization of p_ij(Y) can better model real data as compared to alternative parametrizations. Similar to SNE [4] and t-SNE [13], an optimal embedding can be obtained by minimizing the Kullback-Leibler (KL) divergence from the input p⋆ to the output p(Y), given by

KL(Y) = Σ_{i<j} p⋆_ij ln( p⋆_ij / p_ij(Y) ).   (8)

According to some straightforward derivations, its gradients are

∂KL/∂y^t_i = −2 Σ_{j≠i} (p⋆_ij − p_ij(Y)) (y^t_i − y^t_j),   (9)

∂KL/∂y^s_i = 2 Σ_{j≠i} [1 / (1 + ∥y^s_i − y^s_j∥^2)] (p⋆_ij − p_ij(Y)) (y^s_i − y^s_j),   (10)

where ∀i, ∀j, p⋆_ij = p⋆_ji and p_ij(Y) = p_ji(Y). As an intuitive interpretation of a gradient descent process w.r.t. eqs. (9) and (10), we have that if p_ij(Y) < p⋆_ij, i.e. y_i and y_j are put too far from each other, then y^s_i and y^s_j are attracting, and y^t_i and y^t_j are repelling, so that their space-time interval becomes shorter; if p_ij(Y) > p⋆_ij, then y_i and y_j are repelling in space and attracting in time. During gradient descent, {y^s_i} are updated by the delta-bar-delta scheme as used in t-SNE [13], where each scalar parameter has its own adaptive learning rate initialized to γs > 0; {y^t_i} are updated based on one global adaptive learning rate initialized to γt > 0. The learning of time should be more cautious, because p_ij(Y) is more sensitive to time variations by eq. (7). Therefore, the ratio γt/γs should be very small, e.g. 1/100.
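Eqs. (8)–(10) can be implemented and cross-checked against a finite-difference approximation of the objective. A NumPy sketch (our function names; a plain loop rather than an optimized vectorization):

```python
import numpy as np

def st_similarity(Y, ds):
    """p_ij(Y) of eq. (7), symmetric, with the sum over pairs i < j equal to 1."""
    n = len(Y)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s2 = np.sum((Y[i, :ds] - Y[j, :ds]) ** 2)
            t2 = np.sum((Y[i, ds:] - Y[j, ds:]) ** 2)
            W[i, j] = W[j, i] = np.exp(t2) / (1.0 + s2)
    return W / W[np.triu_indices(n, 1)].sum()

def kl(Y, P_star, ds):
    """KL divergence of eq. (8)."""
    P = st_similarity(Y, ds)
    iu = np.triu_indices(len(Y), 1)
    return np.sum(P_star[iu] * np.log(P_star[iu] / P[iu]))

def grads(Y, P_star, ds):
    """Analytic gradients of eqs. (9) (time part) and (10) (space part)."""
    n = len(Y)
    P = st_similarity(Y, ds)
    G = np.zeros_like(Y)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d_s = Y[i, :ds] - Y[j, :ds]
            d_t = Y[i, ds:] - Y[j, ds:]
            G[i, :ds] += 2 * (P_star[i, j] - P[i, j]) * d_s / (1 + d_s @ d_s)
            G[i, ds:] += -2 * (P_star[i, j] - P[i, j]) * d_t
    return G

# Finite-difference check on a small random problem in R^{2,1}.
rng = np.random.default_rng(0)
n, ds = 5, 2
Y = rng.standard_normal((n, 3))
S = np.abs(rng.standard_normal((n, n))); S = S + S.T; np.fill_diagonal(S, 0)
P_star = S / S[np.triu_indices(n, 1)].sum()
G = grads(Y, P_star, ds)
eps = 1e-6
for idx in [(0, 0), (1, 2), (3, 1)]:          # space and time coordinates
    Yp = Y.copy(); Yp[idx] += eps
    Ym = Y.copy(); Ym[idx] -= eps
    num = (kl(Yp, P_star, ds) - kl(Ym, P_star, ds)) / (2 * eps)
    assert abs(num - G[idx]) < 1e-4
```

Note the sign pattern the check confirms: a too-small p_ij pulls the pair together in space while pushing it apart in time, shrinking the interval, as described above.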
5 Empirical results

Aiming at potential applications in data visualization and social network analysis, we compare SNE [4], t-SNE [13], and the method proposed in section 4, denoted as SNEST. They are based on the same optimizer but correspond to different sub-manifolds of ∆_n, as presented by the curves in fig. 1c. Given different embeddings of the same dataset using the same number of dimensions, we perform model selection based on the KL divergence, as explained at the end of section 3.

We generated a toy dataset SCHOOL, representing a school with two classes. Each class has 20 students standing evenly on a circle, where each student communicates with his or her 4 nearest neighbours, and one teacher, who communicates with all the students in the same class and with the teacher of the other class. The input p⋆ is distributed evenly over the pairs (i, j) who are socially connected.

NIPS22 contains a 4197 × 3624 author-document matrix from NIPS 1988 to 2009 [2]. After discarding the authors who have only one NIPS paper, we get 1418 authors who co-authored 2121 papers. The co-authorship matrix is CA ∈ ℜ^{1418×1418}, where CA_{ij} denotes the number of papers that author i co-authored with author j. The input similarity p⋆ is computed so that p⋆_{ij} ∝ CA_{ij} (1/Σ_j CA_{ij} + 1/Σ_i CA_{ij}), where the number of co-authored papers is normalized by each author's total number of papers. NIPS17 is built in the same way using only the first 17 volumes.

GrQc is an arXiv co-authorship graph [16] with 5242 nodes and 14496 edges. After removing one isolated node, a matrix CA ∈ ℜ^{5241×5241} gives the numbers of co-authored papers between any two authors who submitted to the general relativity and quantum cosmology category from January 1993 to April 2003. The input similarity p⋆ satisfies p⋆_{ij} ∝ CA_{ij} (1/Σ_j CA_{ij} + 1/Σ_i CA_{ij}).

W5000 is the semantic similarities among 5000 English words, given as WS ∈ ℜ^{5000×5000} [2, 17]. Each WS_{ij} is an asymmetric non-negative similarity from word i to word j.
The input is normalized into a probability vector p⋆ so that p⋆_{ij} ∝ WS_{ij}/Σ_j WS_{ij} + WS_{ji}/Σ_i WS_{ji}. W1000 is built in the same way using a subset of 1000 words.

Table 1 shows the KL divergence in eq. (8). In most cases, SNEST has the lowest KL for a fixed number of free parameters. On NIPS22, GrQc, W1000 and W5000, the embedding by SNEST in ℜ^{2,1} is even better than SNE and t-SNE in ℜ^4, meaning that the embedding by SNEST is both compact and faithful. This is in contrast to the mixture approach for visualization [2], which multiplies the number of parameters to get a faithful representation. Fixing the free parameters to two dimensions, t-SNE in ℜ^2 has the best overall performance, and SNEST in ℜ^{1,1} is worse. We also discovered that, using d dimensions, ℜ^{d−1,1} usually performs better than alternative choices such as ℜ^{d−2,2}, which are not shown due to space limitations. A time-like dimension allows adaptation to non-metric data. The investigated similarities, however, are mainly space-like, in the sense that a random pair of people or words is more likely to be dissimilar (space-like) rather than similar (time-like). In our experience, on such datasets, good performance is often achieved with mainly space-like dimensions mixed with a small number of time-like dimensions, e.g. ℜ^{2,1} or ℜ^{3,1}, as suggested by table 1.

Table 1: KL divergence of different embeddings. After repeated runs on different configurations for each embedding, the minimal KL that we have achieved within 5000 epochs is shown. The bold numbers show the winners among SNE, t-SNE and SNEST using the same number of parameters.

                  SCHOOL  NIPS17  NIPS22  GrQc   W1000  W5000
SNE → ℜ^2          0.52    1.88    2.98   3.19   3.67   4.93
SNE → ℜ^3          0.36    0.85    1.79   1.82   3.20   4.42
SNE → ℜ^4          0.19    0.35    1.01   1.03   2.76   3.93
t-SNE → ℜ^2        0.61    0.88    1.29   1.24   2.15   3.00
t-SNE → ℜ^3        0.58    0.85    1.23   1.14   2.00   2.79
t-SNE → ℜ^4        0.58    0.84    1.22   1.11   1.96   2.74
SNEST → ℜ^{1,1}    0.43    0.91    1.62   2.34   2.59   3.64
SNEST → ℜ^{2,1}    0.31    0.60    0.97   1.00   1.92   2.57
SNEST → ℜ^{3,1}    0.29    0.54    0.93   0.88   1.79   2.39

Figure 2: (a) The embedding of SCHOOL by SNEST in ℜ^{2,1}. The black (resp. colored) dots denote the students (resp. teachers). The paper coordinates (resp. color) represent the space (resp. time) coordinates. The links represent social connections. (b) The contour of exp(∥y_i^t − y_j^t∥²)/(1 + ∥y_i^s − y_j^s∥²) in eq. (7) as a function of ∥y_i^s − y_j^s∥ (x-axis) and ∥y_i^t − y_j^t∥ (y-axis). The unit of the displayed levels is 10⁻³.

To interpret the embeddings, fig. 2a presents the embedding of SCHOOL in ℜ^{2,1}, where space and time are represented by paper coordinates and three color levels, respectively. Each class is embedded as a circle. The center of each class, the teacher, is lifted to a different time, so as to be near to all students in the same class. One teacher being blue while the other is red creates a "hyper-link" between the teachers, because their large time difference makes them nearby in ℜ^{2,1}. Figures 3 and 4 show the embeddings of NIPS22 and W5000 in ℜ^{2,1}. Similar to the (t-)SNE visualizations [2, 4, 13], it is easy to find close authors or words embedded nearby. The learned p(Y), however, is not equivalent to the visual proximity, because of the counter-intuitive time dimension. How much does the visual proximity reflect the underlying p(Y)? From the histogram of the time coordinates, we see that the time values are in the narrow range [−1.5, 1.5], while the range of the space coordinates is at least 100 times larger. Figure 2b shows the similarity function on the right-hand side of eq. (7) over an interesting range of ∥y_i^s − y_j^s∥ and ∥y_i^t − y_j^t∥. In this range, large similarity values are very sensitive to space variations, and their red level curves are almost vertical, meaning that the similarity information is largely carried by space coordinates.
Therefore, the visualization of neighborhoods is relatively accurate: visually nearby points are indeed similar, and proximity within a neighborhood is informative regarding p(Y). On the other hand, small similarity values are less sensitive to space variations, and their blue level curves span a large distance in space, meaning that the visual distance between dissimilar points is less informative regarding p(Y). For example, a visual distance of 165 with a time difference of 1 has roughly the same similarity as a visual distance of 100 with no time difference. This is a matter of embedding dissimilar samples far or very far apart, and it does not affect the visual perception much, which naturally requires less accuracy on such samples. However, perception errors could still occur in these plots, although they become increasingly unlikely as the observation radius shrinks. In viewing such visualizations, one must take into account the time represented by the colors and font sizes, and remember that a point with a large absolute time value should be weighted higher in similarity judgments.

Figure 3: An embedding of NIPS22 in ℜ^{2,1}. "Major authors" with at least 10 NIPS papers or with a time value in the range (−∞, −1] ∪ [1, ∞) are shown by their names; other authors are shown by small dots. The paper coordinates are the space-like dimensions. The positions of the displayed names are adjusted up to a tiny radius to avoid text overlap. The color of each name represents the time dimension. The font size is proportional to the absolute time value.

Consider the learning of y_i by eq. (9): if the input p⋆_{ij} is larger than what can be faithfully modeled in a space-only model, then j will push i to a different time. Therefore, the absolute value of time is a measure of significance. By fig. 2a, the connection hubs, and points with remote connections, are more likely to be at a different time. Emphasizing the embedding points with large absolute time values helps the user to focus on important points. One can easily identify well-known authors and popular words in figs. 3 and 4. This type of information is not discovered by traditional embeddings.

6 Conclusions and Discussions

We advocate the use of space-time representation for non-metric data.
While previous works on such embeddings [3, 12] compute an indefinite kernel by simple transformations of the input data, we learn a low-rank indefinite kernel by manifold learning, trying to better preserve the neighbours [4]. We discovered that, using the same number of dimensions, certain input information is better preserved in space-time than in Euclidean space. We built a space-time visualizer of non-metric data, which automatically discovers important points.

Figure 4: An embedding of W5000 in ℜ^{2,1}. Only a subset is shown for a clear visualization. The position of each word represents its space coordinates, up to tiny adjustments to avoid overlap. The color of each word shows its time value. The font size represents the absolute time value.

To enhance the proposed visualization, an interactive interface could allow the user to select one reference point and show the true similarity values, e.g., by aligning the other points so that the visual distances correspond to the similarities. Proper constraints or regularization could be introduced so that the time values are discrete or sparse, making the resulting embedding easier to interpret. The proposed learning is on a sub-manifold K_n^{d_s,d_t} ⊂ K_n, or a corresponding sub-manifold of ∆_n. Another interesting sub-manifold of K_n could be {K − t t^T : K ≻ 0, t ∈ ℜ^n}, which extends the p.s.d. cone to any matrix in K_n with a compact negative eigen-spectrum. It is possible to construct a sub-manifold of K_n so that the embedder can learn whether a dimension is space-like or time-like. As another axis of future investigation, given the large family of manifold learners, there can be many ways to project the input information onto these sub-manifolds. The proposed method SNEST is based on the KL divergence in ∆_n. Some immediate extensions could be based on other dissimilarity measures in K_n or ∆_n. This could also be useful for faithful representations of graph datasets with indefinite weights.
Acknowledgments

This work has been supported by the Department of Computer Science, University of Geneva, in collaboration with Swiss National Science Foundation project MAAYA (grant number 144238).

References

[1] K. Zeger and A. Gersho. How many points in Euclidean space can have a common nearest neighbor? In International Symposium on Information Theory, page 109, 1994.
[2] L. van der Maaten and G. E. Hinton. Visualizing non-metric similarities in multiple maps. Machine Learning, 87(1):33–55, 2012.
[3] J. Laub and K. R. Müller. Feature discovery in non-metric pairwise data. JMLR, 5(Jul):801–818, 2004.
[4] G. E. Hinton and S. T. Roweis. Stochastic neighbor embedding. In NIPS 15, pages 833–840. MIT Press, 2003.
[5] J. Cook, I. Sutskever, A. Mnih, and G. E. Hinton. Visualizing similarity data with a mixture of maps. In AISTATS'07, pages 67–74, 2007.
[6] J. Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer, 6th edition, 2011.
[7] R. C. Wilson, E. R. Hancock, E. Pekalska, and R. P. W. Duin. Spherical embeddings for non-Euclidean dissimilarities. In CVPR'10, pages 1903–1910, 2010.
[8] D. Lunga and O. Ersoy. Spherical stochastic neighbor embedding of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 51(2):857–871, 2013.
[9] B. O'Neill. Semi-Riemannian Geometry With Applications to Relativity. Number 103 in Series: Pure and Applied Mathematics. Academic Press, 1983.
[10] L. Goldfarb. A unified approach to pattern recognition. Pattern Recognition, 17(5):575–582, 1984.
[11] E. Pekalska and R. P. W. Duin. The Dissimilarity Representation for Pattern Recognition: Foundations and Applications. World Scientific, 2005.
[12] J. Laub, J. Macke, K. R. Müller, and F. A. Wichmann. Inducing metric violations in human similarity judgements. In NIPS 19, pages 777–784. MIT Press, 2007.
[13] L. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. JMLR, 9(Nov):2579–2605, 2008.
[14] N. D. Lawrence.
Spectral dimensionality reduction via maximum entropy. In AISTATS'11, JMLR W&CP 15, pages 51–59, 2011.
[15] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In ICML'04, pages 839–846, 2004.
[16] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data, 1(1), 2007.
[17] D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. The University of South Florida word association, rhyme, and word fragment norms. 1998. http://www.usf.edu/FreeAssociation.
Mixing Time Estimation in Reversible Markov Chains from a Single Sample Path

Daniel Hsu, Columbia University, djhsu@cs.columbia.edu
Aryeh Kontorovich, Ben-Gurion University, karyeh@cs.bgu.ac.il
Csaba Szepesvári, University of Alberta, szepesva@cs.ualberta.ca

Abstract

This article provides the first procedure for computing a fully data-dependent interval that traps the mixing time t_mix of a finite reversible ergodic Markov chain at a prescribed confidence level. The interval is computed from a single finite-length sample path from the Markov chain, and does not require knowledge of any parameters of the chain. This stands in contrast to previous approaches, which either only provide point estimates, or require a reset mechanism, or additional prior knowledge. The interval is constructed around the relaxation time t_relax, which is strongly related to the mixing time, and the width of the interval converges to zero roughly at a √n rate, where n is the length of the sample path. Upper and lower bounds are given on the number of samples required to achieve constant-factor multiplicative accuracy. The lower bounds indicate that, unless further restrictions are placed on the chain, no procedure can achieve this accuracy level before seeing each state at least Ω(t_relax) times on average. Finally, future directions of research are identified.

1 Introduction

This work tackles the challenge of constructing fully empirical bounds on the mixing time of Markov chains based on a single sample path. Let (X_t)_{t=1,2,...} be an irreducible, aperiodic, time-homogeneous Markov chain on a finite state space [d] := {1, 2, ..., d} with transition matrix P. Under this assumption, the chain converges to its unique stationary distribution π = (π_i)_{i=1}^d regardless of the initial state distribution q:

lim_{t→∞} Pr_q(X_t = i) = lim_{t→∞} (q P^t)_i = π_i   for each i ∈ [d].
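This convergence is easy to check numerically. A minimal sketch (the two-state chain below is our own illustrative example, not from the paper):

```python
import numpy as np

# For an ergodic chain, q P^t converges to the stationary distribution pi
# regardless of the initial distribution q.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # ergodic 2-state transition matrix
q = np.array([1.0, 0.0])            # arbitrary initial distribution

dist = q.copy()
for _ in range(200):                # iterate dist <- dist P
    dist = dist @ P

# Detailed balance pi_1 * 0.1 = pi_2 * 0.2 gives pi = (2/3, 1/3).
pi = np.array([2.0, 1.0]) / 3.0
print(np.abs(dist - pi).max())      # tiny after 200 steps
```

The convergence is geometric, governed by the second eigenvalue of P (here 0.7), which foreshadows the role of the spectral gap below.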
The mixing time t_mix of the Markov chain is the number of time steps required for the chain to be within a fixed threshold of its stationary distribution:

t_mix := min{ t ∈ N : sup_q max_{A⊂[d]} |Pr_q(X_t ∈ A) − π(A)| ≤ 1/4 }.   (1)

Here, π(A) = Σ_{i∈A} π_i is the probability assigned to set A by π, and the supremum is over all possible initial distributions q. The problem studied in this work is the construction of a non-trivial confidence interval C_n = C_n(X_1, X_2, ..., X_n, δ) ⊂ [0, ∞], based only on the observed sample path (X_1, X_2, ..., X_n) and δ ∈ (0, 1), that succeeds with probability 1 − δ in trapping the value of the mixing time t_mix.

This problem is motivated by the numerous scientific applications and machine learning tasks in which the quantity of interest is the mean π(f) = Σ_i π_i f(i) for some function f of the states of a Markov chain. This is the setting of the celebrated Markov Chain Monte Carlo (MCMC) paradigm [1], but the problem also arises in performance prediction involving time-correlated data, as is common in reinforcement learning [2]. Observable bounds on mixing times are useful in the design and diagnostics of these methods; they yield effective approaches to assessing the estimation quality, even when a priori knowledge of the mixing time or correlation structure is unavailable.

Main results. We develop the first procedure for constructing non-trivial and fully empirical confidence intervals for the Markov mixing time. Consider a reversible ergodic Markov chain on d states with absolute spectral gap γ⋆ and stationary distribution minorized by π⋆. As is well known [3, Theorems 12.3 and 12.4],

(t_relax − 1) ln 2 ≤ t_mix ≤ t_relax ln(4/π⋆),   (2)

where t_relax := 1/γ⋆ is the relaxation time. Hence, it suffices to estimate γ⋆ and π⋆. Our main results are summarized as follows.

1.
In Section 3.1, we show that in some problems n = Ω((d log d)/γ⋆ + 1/π⋆) observations are necessary for any procedure to guarantee constant multiplicative accuracy in estimating γ⋆ (Theorems 1 and 2). Essentially, in some problems every state may need to be visited about log(d)/γ⋆ times, on average, before an accurate estimate of the mixing time can be provided, regardless of the actual estimation procedure used.

2. In Section 3.2, we give a point estimator for γ⋆, and prove in Theorem 3 that it achieves multiplicative accuracy from a single sample path of length Õ(1/(π⋆γ⋆³)).¹ We also provide a point estimator for π⋆ that requires a sample path of length Õ(1/(π⋆γ⋆)). This establishes the feasibility of estimating the mixing time in this setting. However, the valid confidence intervals suggested by Theorem 3 depend on the unknown quantities π⋆ and γ⋆. We also discuss the importance of reversibility, and some possible extensions to non-reversible chains.

3. In Section 4, the construction of valid fully empirical confidence intervals for π⋆ and γ⋆ is considered. First, the difficulty of the task is explained, i.e., why the standard approach of turning the finite-time confidence intervals of Theorem 3 into fully empirical ones fails. Combining several results from perturbation theory in a novel fashion, we propose a new procedure and prove that it avoids slow convergence (Theorem 4). We also explain how to combine the empirical confidence intervals from Algorithm 1 with the non-empirical bounds from Theorem 3 to produce valid empirical confidence intervals. We prove in Theorem 5 that the width of these new intervals converges to zero asymptotically at least as fast as those from either Theorem 3 or Theorem 4.

Related work. There is a vast statistical literature on estimation in Markov chains.
For instance, it is known that under the above assumptions on (X_t)_t, the law of large numbers guarantees that the sample mean π_n(f) := (1/n) Σ_{t=1}^n f(X_t) converges almost surely to π(f) [4], while the central limit theorem tells us that as n → ∞, the distribution of the deviation √n(π_n(f) − π(f)) will be normal with mean zero and asymptotic variance lim_{n→∞} n Var(π_n(f)) [5].

Although these asymptotic results help us understand the limiting behavior of the sample mean over a Markov chain, they say little about the finite-time non-asymptotic behavior, which is often needed for the prudent evaluation of a method or even its algorithmic design [6–13]. To address this need, numerous works have developed Chernoff-type bounds on Pr(|π_n(f) − π(f)| > ε), thus providing valuable tools for non-asymptotic probabilistic analysis [6, 14–16]. These probability bounds are larger than corresponding bounds for independent and identically distributed (iid) data due to the temporal dependence; intuitively, for the Markov chain to yield a fresh draw X_{t′} that behaves as if it were independent of X_t, one must wait Θ(t_mix) time steps. Note that the bounds generally depend on distribution-specific properties of the Markov chain (e.g., P, t_mix, γ⋆), which are often unknown a priori in practice. Consequently, much effort has been put towards estimating these unknown quantities, especially in the context of MCMC diagnostics, in order to provide data-dependent assessments of estimation accuracy [e.g., 11, 12, 17–19]. However, these approaches generally only provide asymptotic guarantees, and hence fall short of our goal of empirical bounds that are valid for any finite-length sample path.

Learning with dependent data is another main motivation of our work. Many results from statistical learning and empirical process theory have been extended to sufficiently fast mixing, dependent

¹The Õ(·) notation suppresses logarithmic factors.
data [e.g., 20–26], providing learnability assurances (e.g., generalization error bounds). These results are often given in terms of mixing coefficients, which can be consistently estimated in some cases [27]. However, the convergence rates of the estimates from [27], which are needed to derive confidence bounds, are given in terms of unknown mixing coefficients. When the data comes from a Markov chain, these mixing coefficients can often be bounded in terms of mixing times, and hence our main results provide a way to make them fully empirical, at least in the limited setting we study.

It is possible to eliminate many of the difficulties presented above when allowed more flexible access to the Markov chain. For example, given a sampling oracle that generates independent transitions from any given state (akin to a "reset" device), the mixing time becomes an efficiently testable property in the sense studied in [28, 29]. On the other hand, when one only has a circuit-based description of the transition probabilities of a Markov chain over an exponentially large state space, there are complexity-theoretic barriers for many MCMC diagnostic problems [30].

2 Preliminaries

2.1 Notations

We denote the set of positive integers by N, and the set of the first d positive integers {1, 2, ..., d} by [d]. The non-negative part of a real number x is [x]_+ := max{0, x}, and ⌈x⌉_+ := max{0, ⌈x⌉}. We use ln(·) for the natural logarithm, and log(·) for the logarithm with an arbitrary constant base. Boldface symbols are used for vectors and matrices (e.g., v, M), and their entries are referenced by subindexing (e.g., v_i, M_{i,j}). For a vector v, ∥v∥ denotes its Euclidean norm; for a matrix M, ∥M∥ denotes its spectral norm. We use Diag(v) to denote the diagonal matrix whose (i, i)-th entry is v_i. The probability simplex is denoted by ∆_{d−1} = {p ∈ [0, 1]^d : Σ_{i=1}^d p_i = 1}, and we regard vectors in ∆_{d−1} as row vectors.
2.2 Setting

Let P ∈ (∆_{d−1})^d ⊂ [0, 1]^{d×d} be a d × d row-stochastic matrix for an ergodic (i.e., irreducible and aperiodic) Markov chain. This implies there is a unique stationary distribution π ∈ ∆_{d−1} with π_i > 0 for all i ∈ [d] [3, Corollary 1.17]. We also assume that P is reversible (with respect to π):

π_i P_{i,j} = π_j P_{j,i},   i, j ∈ [d].   (3)

The minimum stationary probability is denoted by π⋆ := min_{i∈[d]} π_i. Define the matrices

M := Diag(π) P   and   L := Diag(π)^{−1/2} M Diag(π)^{−1/2}.

The (i, j)-th entry of the matrix M contains the doublet probabilities associated with P: M_{i,j} = π_i P_{i,j} is the probability of seeing state i followed by state j when the chain is started from its stationary distribution. The matrix M is symmetric on account of the reversibility of P, and hence it follows that L is also symmetric. (We will strongly exploit this symmetry in our results.) Further, L = Diag(π)^{1/2} P Diag(π)^{−1/2}, hence L and P are similar and thus their eigenvalue systems are identical. Ergodicity and reversibility imply that the eigenvalues of L are contained in the interval (−1, 1], and that 1 is an eigenvalue of L with multiplicity 1 [3, Lemmas 12.1 and 12.2]. Denote and order the eigenvalues of L as 1 = λ_1 > λ_2 ≥ ··· ≥ λ_d > −1. Let λ⋆ := max{λ_2, |λ_d|}, and define the (absolute) spectral gap to be γ⋆ := 1 − λ⋆, which is strictly positive on account of ergodicity.

Let (X_t)_{t∈N} be a Markov chain whose transition probabilities are governed by P. For each t ∈ N, let π^{(t)} ∈ ∆_{d−1} denote the marginal distribution of X_t, so π^{(t+1)} = π^{(t)} P for t ∈ N. Note that the initial distribution π^{(1)} is arbitrary, and need not be the stationary distribution π.

The goal is to estimate π⋆ and γ⋆ from the length-n sample path (X_t)_{t∈[n]}, and also to construct fully empirical confidence intervals that trap π⋆ and γ⋆ with high probability; in particular, the construction of the intervals should not depend on any unobservable quantities, including π⋆ and γ⋆ themselves.
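As a concrete check of these definitions, the following sketch builds M and L for a small reversible chain, verifies that L is symmetric and shares its spectrum with P, and evaluates the two-sided mixing-time bound of eq. (2). The three-state birth-death chain is our own illustrative example:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])     # reversible birth-death chain

# stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()                          # here pi = (1/4, 1/2, 1/4)

M = np.diag(pi) @ P                     # doublet probabilities, symmetric
L = np.diag(pi ** 0.5) @ P @ np.diag(pi ** -0.5)

lam = np.sort(np.linalg.eigvalsh(L))[::-1]    # 1 = lam_1 > lam_2 >= ...
gamma_star = 1.0 - max(lam[1], abs(lam[-1]))  # absolute spectral gap
t_relax = 1.0 / gamma_star
pi_star = pi.min()

# the two-sided bound of eq. (2) on the mixing time
lower = (t_relax - 1.0) * np.log(2.0)
upper = t_relax * np.log(4.0 / pi_star)
```

For this chain the eigenvalues of P (and of L) are 1, 0.5 and 0, so γ⋆ = 0.5 and t_relax = 2, and eq. (2) brackets t_mix between ln 2 and 2 ln 16.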
As mentioned in the introduction, it is well known that the mixing time of the Markov chain, t_mix (defined in Eq. (1)), is bounded in terms of π⋆ and γ⋆, as shown in Eq. (2). Moreover, convergence rates for empirical processes on Markov chain sequences are also often given in terms of mixing coefficients that can ultimately be bounded in terms of π⋆ and γ⋆ (as we will show in the proof of our first result). Therefore, valid confidence intervals for π⋆ and γ⋆ can be used to make these rates fully observable.

3 Point estimation

In this section, we present lower and upper bounds on achievable rates for estimating the spectral gap as a function of the length of the sample path n.

3.1 Lower bounds

The purpose of this section is to show lower bounds on the number of observations necessary to achieve a fixed multiplicative (or even just additive) accuracy in estimating the spectral gap γ⋆. By Eq. (2), the multiplicative accuracy lower bound for γ⋆ gives the same lower bound for estimating the mixing time. Our first result holds even for two-state Markov chains and shows that a sequence length of Ω(1/π⋆) is necessary to achieve even a constant additive accuracy in estimating γ⋆.

Theorem 1. Pick any π̄ ∈ (0, 1/4). Consider any estimator γ̂⋆ that takes as input a random sample path of length n ≤ 1/(4π̄) from a Markov chain starting from any desired initial state distribution. There exists a two-state ergodic and reversible Markov chain distribution with spectral gap γ⋆ ≥ 1/2 and minimum stationary probability π⋆ ≥ π̄ such that

Pr[|γ̂⋆ − γ⋆| ≥ 1/8] ≥ 3/8.

Next, considering d-state chains, we show that a sequence of length Ω(d log(d)/γ⋆) is required to estimate γ⋆ up to a constant multiplicative accuracy. Essentially, the sequence may have to visit all d states at least log(d)/γ⋆ times each, on average. This holds even if π⋆ is within a factor of two of the largest possible value 1/d that it can take, i.e., when π is nearly uniform.

Theorem 2.
There is an absolute constant c > 0 such that the following holds. Pick any positive integer d ≥ 3 and any ¯γ ∈ (0, 1/2). Consider any estimator ˆγ⋆ that takes as input a random sample path of length n < cd log(d)/¯γ from a d-state reversible Markov chain starting from any desired initial state distribution. There is an ergodic and reversible Markov chain distribution with spectral gap γ⋆ ∈ [¯γ, 2¯γ] and minimum stationary probability π⋆ ≥ 1/(2d) such that Pr[|ˆγ⋆ − γ⋆| ≥ ¯γ/2] ≥ 1/4.

The proofs of Theorems 1 and 2 are given in Appendix A.2

2A full version of this paper, with appendices, is available on arXiv [31].

3.2 A plug-in based point estimator and its accuracy

Let us now consider the problem of estimating γ⋆. For this, we construct a natural plug-in estimator. Along the way, we also provide an estimator for the minimum stationary probability, allowing one to use the bounds from Eq. (2) to trap the mixing time. Define the random matrix M̂ ∈ [0, 1]d×d and random vector ˆπ ∈ ∆d−1 by

M̂i,j := |{t ∈ [n−1] : (Xt, Xt+1) = (i, j)}| / (n−1), i, j ∈ [d],
ˆπi := |{t ∈ [n] : Xt = i}| / n, i ∈ [d].

Furthermore, define Sym(L̂) := (L̂ + L̂⊤)/2 to be the symmetrized version of the (possibly non-symmetric) matrix L̂ := Diag(ˆπ)−1/2 M̂ Diag(ˆπ)−1/2. Let ˆλ1 ≥ ˆλ2 ≥ · · · ≥ ˆλd be the eigenvalues of Sym(L̂). Our estimator of the minimum stationary probability π⋆ is ˆπ⋆ := mini∈[d] ˆπi, and our estimator of the spectral gap γ⋆ is ˆγ⋆ := 1 − max{ˆλ2, |ˆλd|}. These estimators have the following accuracy guarantees:

Theorem 3. There exists an absolute constant C > 0 such that the following holds. Assume the estimators ˆπ⋆ and ˆγ⋆ described above are formed from a sample path of length n from an ergodic and reversible Markov chain. Let γ⋆ > 0 denote the spectral gap and π⋆ > 0 the minimum stationary probability. For any δ ∈ (0, 1), with probability at least 1 − δ,

|ˆπ⋆ − π⋆| ≤ C ( √( π⋆ log(d/(π⋆δ)) / (γ⋆n) ) + log(d/(π⋆δ)) / (γ⋆n) ) (4)

and

|ˆγ⋆ − γ⋆| ≤ C ( √( log(d/δ) · log(n/(π⋆δ)) / (π⋆γ⋆n) ) + log(1/γ⋆) / (γ⋆n) ).
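The plug-in estimators of Section 3.2 are directly computable from a sample path. A minimal numpy sketch (the simulation helper and the small floor guarding against unvisited states are my additions, not part of the paper's definitions):

```python
import numpy as np

def simulate_chain(P, n, seed=0):
    """Draw a length-n sample path from transition matrix P, starting in state 0."""
    rng = np.random.default_rng(seed)
    d = P.shape[0]
    path = np.empty(n, dtype=int)
    path[0] = 0
    for t in range(1, n):
        path[t] = rng.choice(d, p=P[path[t - 1]])
    return path

def plug_in_estimators(path, d):
    """Plug-in estimators of Section 3.2: doublet frequencies M_hat, empirical
    state frequencies pi_hat, and the spectral-gap estimate gamma_hat from the
    eigenvalues of Sym(L_hat)."""
    path = np.asarray(path)
    n = len(path)
    M_hat = np.zeros((d, d))
    np.add.at(M_hat, (path[:-1], path[1:]), 1.0)
    M_hat /= (n - 1)
    pi_hat = np.bincount(path, minlength=d) / n
    s = np.sqrt(np.maximum(pi_hat, 1e-12))      # guard against unvisited states
    L_hat = M_hat / np.outer(s, s)              # Diag(pi)^{-1/2} M Diag(pi)^{-1/2}
    lam = np.sort(np.linalg.eigvalsh((L_hat + L_hat.T) / 2))[::-1]
    return pi_hat.min(), 1.0 - max(lam[1], abs(lam[-1]))
```

On a long path from the two-state chain with γ⋆ = 0.5 the estimate lands close to the truth, in line with Theorem 3.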
(5)

Theorem 3 implies that the sequence lengths required to estimate π⋆ and γ⋆ to within constant multiplicative factors are, respectively, Õ(1/(π⋆γ⋆)) and Õ(1/(π⋆γ⋆³)). By Eq. (2), the second of these is also a bound on the required sequence length to estimate tmix. The proof of Theorem 3 is based on analyzing the convergence of the sample averages M̂ and ˆπ to their expectations, and then using perturbation bounds for eigenvalues to derive a bound on the error of ˆγ⋆. However, since these averages are formed using a single sample path from a (possibly) non-stationary Markov chain, we cannot use standard large deviation bounds; moreover, applying Chernoff-type bounds for Markov chains to each entry of M̂ would result in a significantly worse sequence length requirement, roughly a factor of d larger. Instead, we adapt probability tail bounds for sums of independent random matrices [32] to our non-i.i.d. setting by directly applying a blocking technique of [33] as described in the article of [20]. Due to ergodicity, the convergence rate can be bounded without any dependence on the initial state distribution π(1). The proof of Theorem 3 is given in Appendix B.

Note that because the eigenvalues of L are the same as those of the transition probability matrix P, we could have instead opted to estimate P, say, using simple frequency estimates obtained from the sample path, and then computing the second largest eigenvalue of this empirical estimate P̂. In fact, this approach is a way to extend to non-reversible chains, as we would no longer rely on the symmetry of M or L. The difficulty with this approach is that P lacks the structure required by certain strong eigenvalue perturbation results. One could instead invoke the Ostrowski–Elsner theorem [cf. Theorem 1.4 on Page 170 of 34], which bounds the matching distance between the eigenvalues of a matrix A and its perturbation A + E by O(∥E∥^{1/d}).
Since ∥P̂ − P∥ is expected to be of size O(n^{−1/2}), this approach will give a confidence interval for γ⋆ whose width shrinks at a rate of O(n^{−1/(2d)})—an exponential slow-down compared to the rate from Theorem 3. As demonstrated through an example from [34], the dependence on the d-th root of the norm of the perturbation cannot be avoided in general. Our approach based on estimating a symmetric matrix affords us the use of perturbation results that exploit more structure.

Returning to the question of obtaining a fully empirical confidence interval for γ⋆ and π⋆, we notice that, unfortunately, Theorem 3 falls short of being directly suitable for this, at least without further assumptions. This is because the deviation terms themselves depend inversely on both γ⋆ and π⋆, and hence can never rule out 0 (or an arbitrarily small positive value) as a possibility for γ⋆ or π⋆.3 In effect, the fact that the Markov chain could be slow mixing and the long-term frequency of some

3Using Theorem 3, it is possible to trap γ⋆ in the union of two empirical confidence intervals—one around ˆγ⋆ and the other around zero, both of which shrink in width as the sequence length increases.

Algorithm 1 Empirical confidence intervals
Input: Sample path (X1, X2, . . . , Xn), confidence parameter δ ∈ (0, 1).
1: Compute state visit counts and smoothed transition probability estimates:
   Ni := |{t ∈ [n−1] : Xt = i}|, i ∈ [d];
   Ni,j := |{t ∈ [n−1] : (Xt, Xt+1) = (i, j)}|,
   P̂i,j := (Ni,j + 1/d) / (Ni + 1), (i, j) ∈ [d]².
2: Let Â# be the group inverse of Â := I − P̂.
3: Let ˆπ ∈ ∆d−1 be the unique stationary distribution for P̂.
4: Compute eigenvalues ˆλ1 ≥ ˆλ2 ≥ · · · ≥ ˆλd of Sym(L̂), where L̂ := Diag(ˆπ)^{1/2} P̂ Diag(ˆπ)^{−1/2}.
5: Spectral gap estimate: ˆγ⋆ := 1 − max{ˆλ2, |ˆλd|}.
6: Empirical bounds for |P̂i,j − Pi,j| for (i, j) ∈ [d]²: c := 1.01,
   τn,δ := inf{t ≥ 0 : 2d²(1 + ⌈log_c(2n/t)⌉₊) e^{−t} ≤ δ}, and
   B̂i,j := ( √(cτn,δ/(2Ni)) + √( cτn,δ/(2Ni) + √(2c P̂i,j(1 − P̂i,j) τn,δ / Ni) + ((5/3)τn,δ + |P̂i,j − 1/d|) / Ni ) )².
7: Relative sensitivity of π:
   ˆκ := (1/2) max{ Â#j,j − min{ Â#i,j : i ∈ [d] } : j ∈ [d] }.
8: Empirical bounds for maxi∈[d] |ˆπi − πi| and maxi∈[d] max{ |√(πi/ˆπi) − 1|, |√(ˆπi/πi) − 1| }:
   ˆb := ˆκ max{ B̂i,j : (i, j) ∈ [d]² },
   ˆρ := (1/2) maxi∈[d] max{ ˆb/ˆπi , ˆb/[ˆπi − ˆb]₊ }.
9: Empirical bounds for |ˆγ⋆ − γ⋆|:
   ˆw := 2ˆρ + ˆρ² + (1 + 2ˆρ + ˆρ²) ( Σ_{(i,j)∈[d]²} (ˆπi/ˆπj) B̂²i,j )^{1/2}.

states could be small makes it difficult to be confident in the estimates provided by ˆγ⋆ and ˆπ⋆. This suggests that in order to obtain fully empirical confidence intervals, we need an estimator that is not subject to such effects—we pursue this in Section 4. Theorem 3 thus primarily serves as a point of comparison for what is achievable in terms of estimation accuracy when one does not need to provide empirical confidence bounds.

4 Fully empirical confidence intervals

In this section, we address the shortcoming of Theorem 3 and give fully empirical confidence intervals for the stationary probabilities and the spectral gap γ⋆. The main idea is to use the Markov property to eliminate the dependence of the confidence intervals on the unknown quantities (including π⋆ and γ⋆). Specifically, we estimate the transition probabilities from the sample path using simple frequency estimates: as a consequence of the Markov property, for each state, the frequency estimates converge at a rate that depends only on the number of visits to the state, and in particular the rate (given the visit count of the state) is independent of the mixing time of the chain.

As discussed in Section 3, it is possible to form a confidence interval for γ⋆ based on the eigenvalues of an estimated transition probability matrix by appealing to the Ostrowski–Elsner theorem. However, as explained earlier, this would lead to a slow O(n^{−1/(2d)}) rate.
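Steps 1 through 5 and 7 of Algorithm 1 above can be sketched in a few lines. This is a hedged reconstruction, not the paper's exact computational route: I compute ˆπ as a left eigenvector of the smoothed P̂, and the group inverse via the fundamental-matrix identity A# = (I − P̂ + 1ˆπᵀ)^{−1} − 1ˆπᵀ (which satisfies the defining group-inverse equations for A = I − P̂), rather than the (d−1)×(d−1) inversion of [35].

```python
import numpy as np

def algorithm1_sketch(path, d):
    """Sketch of Algorithm 1, steps 1-5 and 7 (reconstruction under the
    assumptions stated in the lead-in)."""
    path = np.asarray(path)
    Nij = np.zeros((d, d))
    np.add.at(Nij, (path[:-1], path[1:]), 1.0)
    N = Nij.sum(axis=1)
    P_hat = (Nij + 1.0 / d) / (N + 1.0)[:, None]   # smoothed estimates, step 1
    # stationary distribution of the (ergodic) smoothed chain, step 3
    w, V = np.linalg.eig(P_hat.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()
    # group inverse of A = I - P_hat via the fundamental matrix, step 2
    W = np.outer(np.ones(d), pi)                   # the rank-one matrix 1 pi^T
    A_sharp = np.linalg.inv(np.eye(d) - P_hat + W) - W
    # spectral gap estimate, steps 4-5
    s = np.sqrt(pi)
    L_hat = (s[:, None] * P_hat) / s[None, :]
    lam = np.sort(np.linalg.eigvalsh((L_hat + L_hat.T) / 2))[::-1]
    gamma_hat = 1.0 - max(lam[1], abs(lam[-1]))
    # relative sensitivity of pi, step 7
    kappa_hat = 0.5 * max(A_sharp[j, j] - A_sharp[:, j].min() for j in range(d))
    return pi, gamma_hat, kappa_hat
```

For the two-state chain with p = 0.2, q = 0.3 one can check by hand that A# = [[0.8, −0.8], [−1.2, 1.2]] and hence κ = 1, consistent with the bound κ ≤ d/γ⋆ = 4.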
We avoid this slow rate by using an estimate of the symmetric matrix L, so that we can use a stronger perturbation result (namely Weyl's inequality, as in the proof of Theorem 3) available for symmetric matrices. To form an estimate of L based on an estimate of the transition probabilities, one possibility is to use the frequency-based estimate of π from Section 3 and appeal to the relation L = Diag(π)^{1/2} P Diag(π)^{−1/2} to form a plug-in estimate. However, as noted in Section 3.2, confidence intervals for the entries of π formed this way may depend on the mixing time. Indeed, such an estimate of π does not exploit the Markov property.

We adopt a different strategy for estimating π, which leads to our construction of empirical confidence intervals, detailed in Algorithm 1. We form the matrix P̂ using smoothed frequency estimates of P (Step 1), then compute the so-called group inverse Â# of Â = I − P̂ (Step 2), followed by finding the unique stationary distribution ˆπ of P̂ (Step 3), this way decoupling the bound on the accuracy of ˆπ from the mixing time. The group inverse Â# of Â is uniquely defined; and if P̂ defines an ergodic chain (which is the case here due to the use of the smoothed estimates), Â# can be computed at the cost of inverting a (d−1)×(d−1) matrix [35, Theorem 5.2].4 Further, once given Â#, the unique stationary distribution ˆπ of P̂ can be read out from the last row of Â# [35, Theorem 5.3]. The group inverse is also used to compute the sensitivity of π. Based on ˆπ and P̂, we construct the plug-in estimate L̂ of L, and use the eigenvalues of its symmetrization to form the estimate ˆγ⋆ of the spectral gap (Steps 4 and 5). In the remaining steps, we use perturbation analyses to relate ˆπ and π, viewing P as a perturbation of P̂; and also to relate ˆγ⋆ and γ⋆, viewing L as a perturbation of Sym(L̂).
Both analyses give error bounds entirely in terms of observable quantities (e.g., ˆκ), tracing back to empirical error bounds for the smoothed frequency estimates of P. The most computationally expensive step in Algorithm 1 is the computation of the group inverse Â#, which, as noted, reduces to matrix inversion. Thus, with a standard implementation of matrix inversion, the algorithm's time complexity is O(n + d³), while its space complexity is O(d²).

To state our main theorem concerning Algorithm 1, we first define κ to be analogous to ˆκ from Step 7, with Â# replaced by the group inverse A# of A := I − P. The result is as follows.

Theorem 4. Suppose Algorithm 1 is given as input a sample path of length n from an ergodic and reversible Markov chain and a confidence parameter δ ∈ (0, 1). Let γ⋆ > 0 denote the spectral gap, π the unique stationary distribution, and π⋆ > 0 the minimum stationary probability. Then, on an event of probability at least 1 − δ,

πi ∈ [ˆπi − ˆb, ˆπi + ˆb] for all i ∈ [d], and γ⋆ ∈ [ˆγ⋆ − ˆw, ˆγ⋆ + ˆw].

Moreover, ˆb and ˆw almost surely satisfy (as n → ∞)

ˆb = O( max_{(i,j)∈[d]²} κ √( Pi,j log log n / (πi n) ) ), ˆw = O( (κ/π⋆) √( log log n / (π⋆n) ) + √( d log log n / (π⋆n) ) ).5

The proof of Theorem 4 is given in Appendix C. As mentioned above, the obstacle encountered in Theorem 3 is avoided by exploiting the Markov property. We establish fully observable upper and lower bounds on the entries of P that converge at a √(n / log log n) rate using standard martingale tail inequalities; this justifies the validity of the bounds from Step 6. Properties of the group inverse [35, 36] and eigenvalue perturbation theory [34] are used to validate the empirical bounds on πi and γ⋆ developed in the remaining steps of the algorithm.

The first part of Theorem 4 provides valid empirical confidence intervals for each πi and for γ⋆, which are simultaneously valid at confidence level δ.
The second part of Theorem 4 shows that the width of the intervals decreases as the sequence length increases. We show in Appendix C.5 that κ ≤ d/γ⋆, and hence

ˆb = O( max_{(i,j)∈[d]²} (d/γ⋆) √( Pi,j log log n / (πi n) ) ), ˆw = O( (d/(π⋆γ⋆)) √( log log n / (π⋆n) ) ).

It is easy to combine Theorems 3 and 4 to yield intervals whose widths shrink at least as fast as both the non-empirical intervals from Theorem 3 and the empirical intervals from Theorem 4. Specifically, first determine lower bounds on π⋆ and γ⋆ using Algorithm 1:

π⋆ ≥ mini∈[d] [ˆπi − ˆb]₊ , γ⋆ ≥ [ˆγ⋆ − ˆw]₊ ;

then plug these lower bounds in for π⋆ and γ⋆ in the deviation bounds in Eq. (5) from Theorem 3. This yields a new interval centered around the estimate of γ⋆ from Theorem 3, and it no longer depends on unknown quantities. The interval is a valid 1 − 2δ probability confidence interval for γ⋆, and for sufficiently large n, its width shrinks at the rate given in Eq. (5). We can similarly construct an empirical confidence interval for π⋆ using Eq. (4), which is valid on the same 1 − 2δ probability event.6 Finally, we can take the intersection of these new intervals with the corresponding intervals from Algorithm 1. This is summarized in the following theorem, which we prove in Appendix D.

Theorem 5. The following holds under the same conditions as Theorem 4. For any δ ∈ (0, 1), the confidence intervals Û and V̂ described above for π⋆ and γ⋆, respectively, satisfy π⋆ ∈ Û and γ⋆ ∈ V̂ with probability at least 1 − 2δ.

4The group inverse of a square matrix A, a special case of the Drazin inverse, is the unique matrix A# satisfying AA#A = A, A#AA# = A#, and A#A = AA#.
5In Theorems 4 and 5, our use of big-O notation is as follows. For a random sequence (Yn)n and a (non-random) positive sequence (εθ,n)n parameterized by θ, we say "Yn = O(εθ,n) holds almost surely as n → ∞" if there is some universal constant C > 0 such that for all θ, lim supn→∞ Yn/εθ,n ≤ C holds almost surely.
Furthermore, the widths of these intervals almost surely satisfy (as n → ∞)

|Û| = O( √( π⋆ log(d/(π⋆δ)) / (γ⋆n) ) ), |V̂| = O( min{ √( log(d/δ) · log(n/(π⋆δ)) / (π⋆γ⋆n) ), ˆw } ),

where ˆw is the width from Algorithm 1.

5 Discussion

The construction used in Theorem 5 applies more generally: given a confidence interval of the form In = In(γ⋆, π⋆, δ) for some confidence level δ and a fully empirical confidence set En(δ) for (γ⋆, π⋆) at the same level, I′n = En(δ) ∩ ∪(γ,π)∈En(δ) In(γ, π, δ) is a valid fully empirical 2δ-level confidence interval whose asymptotic width matches that of In up to lower order terms, under reasonable assumptions on En and In. In particular, this suggests that future work should focus on closing the gap between the lower and upper bounds on the accuracy of point estimation. Another interesting direction is to reduce the computational cost: the current cubic cost in the number of states can be too high even when the number of states is only moderately large.

Perhaps more important, however, is to extend our results to large state space Markov chains: in most practical applications the state space is continuous or is exponentially large in some natural parameters. As follows from our lower bounds, without further assumptions, the problem of fully data-dependent estimation of the mixing time is intractable for information-theoretic reasons. Interesting directions for future work thus must consider Markov chains with specific structure. Parametric classes of Markov chains, including but not limited to Markov chains with factored transition kernels with a few factors, are a promising candidate for such future investigations. The results presented here are a first step in the ambitious research agenda outlined above, and we hope that they will serve as a point of departure for further insights in the area of fully empirical estimation of Markov chain parameters based on a single sample path.

References

[1] J. S. Liu. Monte Carlo Strategies in Scientific Computing.
Springer Series in Statistics. Springer-Verlag, 2001.
[2] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). A Bradford Book, 1998.
[3] D. Levin, Y. Peres, and E. Wilmer. Markov Chains and Mixing Times. AMS, 2008.
[4] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Springer, 1993.
[5] C. Kipnis and S. R. S. Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys., 104(1):1–19, 1986.

6For the π⋆ interval, we only plug in lower bounds on π⋆ and γ⋆ where these quantities appear as 1/π⋆ and 1/γ⋆ in Eq. (4). It is then possible to "solve" for observable bounds on π⋆. See Appendix D for details.

[6] I. Kontoyiannis, L. A. Lastras-Montaño, and S. P. Meyn. Exponential bounds and stopping rules for MCMC and general Markov chains. In VALUETOOLS, page 45, 2006.
[7] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, pages 65–72, 2006.
[8] V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, pages 672–679, 2008.
[9] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009.
[10] L. Li, M. L. Littman, T. J. Walsh, and A. L. Strehl. Knows what it knows: a framework for self-aware learning. Machine Learning, 82(3):399–443, 2011.
[11] J. M. Flegal and G. L. Jones. Implementing MCMC: estimating with confidence. In Handbook of Markov Chain Monte Carlo, pages 175–197. Chapman & Hall/CRC, 2011.
[12] B. M. Gyori and D. Paulin. Non-asymptotic confidence intervals for MCMC in practice. arXiv:1212.2016, 2014.
[13] A. Swaminathan and T. Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In ICML, 2015.
[14] D. Gillman. A Chernoff bound for random walks on expander graphs. SIAM Journal on Computing, 27(4):1203–1220, 1998.
[15] C. A. León and F. Perron.
Optimal Hoeffding bounds for discrete reversible Markov chains. Annals of Applied Probability, pages 958–970, 2004.
[16] D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20:1–32, 2015.
[17] S. T. Garren and R. L. Smith. Estimating the second largest eigenvalue of a Markov transition matrix. Bernoulli, 6:215–242, 2000.
[18] G. L. Jones and J. P. Hobert. Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statist. Sci., 16(4):312–334, 2001.
[19] Y. Atchadé. Markov Chain Monte Carlo confidence intervals. Bernoulli, 2015. (to appear).
[20] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, January 1994.
[21] R. L. Karandikar and M. Vidyasagar. Rates of uniform convergence of empirical means with mixing processes. Statistics and Probability Letters, 58(3):297–307, 2002.
[22] D. Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Transactions on Information Theory, 49(1):338–345, 2003.
[23] M. Mohri and A. Rostamizadeh. Stability bounds for non-i.i.d. processes. In NIPS, 2008.
[24] M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In NIPS, 2009.
[25] I. Steinwart and A. Christmann. Fast learning from non-i.i.d. observations. In NIPS, 2009.
[26] I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. Journal of Multivariate Analysis, 100(1):175–194, 2009.
[27] D. McDonald, C. Shalizi, and M. Schervish. Estimating beta-mixing coefficients. In AISTATS, pages 516–524, 2011.
[28] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing that distributions are close. In FOCS, pages 259–269. IEEE, 2000.
[29] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing closeness of discrete distributions. Journal of the ACM (JACM), 60(1):4:2–4:25, 2013.
[30] N. Bhatnagar, A.
Bogdanov, and E. Mossel. The computational complexity of estimating MCMC convergence time. In RANDOM, pages 424–435. Springer, 2011.
[31] D. Hsu, A. Kontorovich, and C. Szepesvári. Mixing time estimation in reversible Markov chains from a single sample path. CoRR, abs/1506.02903, 2015.
[32] J. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 2015.
[33] S. Bernstein. Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Mathematische Annalen, 97:1–59, 1927.
[34] G. W. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, Boston, 1990.
[35] C. D. Meyer Jr. The role of the group generalized inverse in the theory of finite Markov chains. SIAM Review, 17(3):443–464, 1975.
[36] G. Cho and C. Meyer. Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications, 335:137–150, 2001.
Online Rank Elicitation for Plackett-Luce: A Dueling Bandits Approach

Balázs Szörényi, Technion, Haifa, Israel / MTA-SZTE Research Group on Artificial Intelligence, Hungary, szorenyibalazs@gmail.com
Róbert Busa-Fekete, Adil Paul, Eyke Hüllermeier, Department of Computer Science, University of Paderborn, Paderborn, Germany, {busarobi,adil.paul,eyke}@upb.de

Abstract

We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution. Following the setting of the dueling bandits problem, the learner is allowed to query pairwise comparisons between alternatives, i.e., to sample pairwise marginals of the distribution in an online fashion. Using this information, the learner seeks to reliably predict the most probable ranking (or top alternative). Our approach is based on constructing a surrogate probability distribution over rankings based on a sorting procedure, for which the pairwise marginals provably coincide with the marginals of the Plackett-Luce distribution. In addition to a formal performance and complexity analysis, we present first experimental studies.

1 Introduction

Several variants of learning-to-rank problems have recently been studied in an online setting, with preferences over alternatives given in the form of stochastic pairwise comparisons [6]. Typically, the learner is allowed to select (presumably most informative) alternatives in an active way—making a connection to multi-armed bandits, where single alternatives are chosen instead of pairs, this is also referred to as the dueling bandits problem [28]. Methods for online ranking can mainly be distinguished with regard to the assumptions they make about the probabilities pi,j that, in a direct comparison between two alternatives i and j, the former is preferred over the latter. If these probabilities are not constrained at all, a complexity that grows quadratically in the number M of alternatives is essentially unavoidable [27, 8, 9].
Yet, by exploiting (stochastic) transitivity properties, which are quite natural in a ranking context, it is possible to devise algorithms with better performance guarantees, typically of the order M log M [29, 28, 7]. The idea of exploiting transitivity in preference-based online learning establishes a natural connection to sorting algorithms. Naively, for example, one could simply apply an efficient sorting algorithm such as MergeSort as an active sampling scheme, thereby producing a random order of the alternatives. What can we say about the optimality of such an order? The problem is that the probability distribution (on rankings) induced by the sorting algorithm may not be well attuned to the original preference relation (i.e., the probabilities pi,j).

In this paper, we will therefore combine a sorting algorithm, namely QuickSort [15], and a stochastic preference model that harmonize well with each other—in a technical sense to be detailed later on. This harmony was first presented in [1], and our main contribution is to show how it can be exploited for online rank elicitation. More specifically, we assume that pairwise comparisons obey the marginals of a Plackett-Luce model [24, 19], a widely used parametric distribution over rankings (cf. Section 5). Despite the quadratic worst-case complexity of QuickSort, we succeed in developing its budgeted version (presented in Section 6) with a complexity of O(M log M). While only returning partial orderings, this version allows us to devise PAC-style algorithms that find, respectively, a close-to-optimal item (Section 7) and a close-to-optimal ranking of all items (Section 8), both with high probability.

2 Related Work

Several studies have recently focused on preference-based versions of the multi-armed bandit setup, also known as dueling bandits [28, 6, 30], where the online learner is only able to compare arms in a pairwise manner.
The outcome of the pairwise comparisons essentially informs the learner about pairwise preferences, i.e., whether or not an option is preferred to another one. A first group of papers, including [28, 29], assumes the probability distributions of pairwise comparisons to possess certain regularity properties, such as strong stochastic transitivity. A second group does not make assumptions of that kind; instead, a target ("ground-truth") ranking is derived from the pairwise preferences, for example using the Copeland, Borda count, or Random Walk procedures [9, 8, 27]. Our work is obviously closer to the first group of methods. In particular, the study presented in this paper is related to [7], which investigates a similar setup for the Mallows model.

There are several approaches to estimating the parameters of the Plackett-Luce (PL) model, including standard statistical methods such as likelihood estimation [17] and Bayesian parameter estimation [14]. Pairwise marginals are also used in [26], in connection with the method-of-moments approach; nevertheless, the authors assume that full rankings are observed from a PL model. Algorithms for noisy sorting [2, 3, 12] assume a total order over the items, and that the comparisons are representative of that order (if i precedes j, then the probability of option i being preferred to j is bigger than some λ > 1/2). In [25], the data is assumed to consist of pairwise comparisons generated by a Bradley-Terry model; however, comparisons are not chosen actively but according to some fixed probability distribution.

Pure exploration algorithms for the stochastic multi-armed bandit problem sample the arms a certain number of times (not necessarily known in advance), and then output a recommendation, such as the best arm or the m best arms [4, 11, 5, 13].
While our algorithms can be viewed as pure exploration strategies, too, we do not assume that numerical feedback can be generated for individual options; instead, our feedback is qualitative and refers to pairs of options.

3 Notation

A set of alternatives/options/items to be ranked is denoted by I. To keep the presentation simple, we assume that items are identified by natural numbers, so I = [M] = {1, . . . , M}. A ranking is a bijection r on I, which can also be represented as a vector r = (r1, . . . , rM) = (r(1), . . . , r(M)), where rj = r(j) is the rank of the jth item. The set of rankings can be identified with the symmetric group SM of order M. Each ranking r naturally defines an associated ordering o = (o1, . . . , oM) ∈ SM of the items, namely the inverse o = r−1 defined by or(j) = j for all j ∈ [M].

For a permutation r, we write r(i, j) for the permutation in which ri and rj, the ranks of items i and j, are replaced with each other. We denote by L(ri = j) = {r ∈ SM | ri = j} the subset of permutations for which the rank of item i is j, and by L(rj > ri) = {r ∈ SM | rj > ri} those for which the rank of j is higher than the rank of i, that is, item i is preferred to j, written i ≻ j. We write i ≻r j to indicate that i is preferred to j with respect to ranking r.

We assume SM to be equipped with a probability distribution P : SM → [0, 1]; thus, for each ranking r, we denote by P(r) the probability to observe this ranking. Moreover, for each pair of items i and j, we denote by

pi,j = P(i ≻ j) = Σ_{r∈L(rj>ri)} P(r)   (1)

the probability that i is preferred to j (in a ranking randomly drawn according to P). These pairwise probabilities are called the pairwise marginals of the ranking distribution P. We denote the matrix composed of the values pi,j by P = [pi,j]1≤i,j≤M.

4 Preference-based Approximations

Our learning problem essentially consists of making good predictions about properties of P.
Concretely, we consider two different goals of the learner, depending on whether the application calls for the prediction of a single item or a full ranking of items.

In the first problem, which we call PAC-Item or simply PACI, the goal is to find an item that is almost as good as the optimal one, with optimality referring to the Condorcet winner. An item i⋆ is a Condorcet winner if pi⋆,i > 1/2 for all i ≠ i⋆. Then, we call an item j a PAC-item if it is beaten by the Condorcet winner with at most an ε-margin: |pi⋆,j − 1/2| < ε. This setting coincides with those considered in [29, 28]. Obviously, it requires the existence of a Condorcet winner, which is indeed guaranteed in our approach, thanks to the assumption of a Plackett-Luce model.

The second problem, called AMPR, is defined as finding the most probable ranking [7], that is, r⋆ = argmax_{r∈SM} P(r). This problem is especially challenging for ranking distributions for which the order of two items is hard to elicit (because many entries of P are close to 1/2). Therefore, we again relax the goal of the learner and only require it to find a ranking r with the following property: there is no pair of items 1 ≤ i, j ≤ M such that r⋆i < r⋆j, ri > rj and pi,j > 1/2 + ε. Put in words, the ranking r is allowed to differ from r⋆ only for those items whose pairwise probabilities are close to 1/2. Any ranking r satisfying this property is called an approximately most probable ranking (AMPR). Both goals are meant to be achieved with probability at least 1 − δ, for some δ > 0.

Our learner operates in an online setting. In each iteration, it is allowed to gather information by asking for a single pairwise comparison between two items—or, using the dueling bandits jargon, to pull two arms. Thus, it selects two items i and j, and then observes either preference i ≻ j or j ≻ i; the former occurs with probability pi,j as defined in (1), the latter with probability pj,i = 1 − pi,j.
Based on this observation, the learner updates its estimates and decides either to continue the learning process or to terminate and return its prediction. What we are mainly interested in is the sample complexity of the learner, that is, the number of pairwise comparisons it queries prior to termination.

Before tackling the problems introduced above, we need some additional notation. The pair of items chosen by the learner in the t-th comparison is denoted (it, jt), where it < jt, and the feedback received is defined as ot = 1 if it ≻ jt and ot = 0 if jt ≻ it. The set of steps among the first t iterations in which the learner decides to compare items i and j is denoted by I^t_{i,j} = {ℓ ∈ [t] | (iℓ, jℓ) = (i, j)}, and the size of this set by n^t_{i,j} = #I^t_{i,j}.1 The proportion of "wins" of item i against item j up to iteration t is then given by p̂^t_{i,j} = (1/n^t_{i,j}) Σ_{ℓ∈I^t_{i,j}} oℓ. Since our samples are independent and identically distributed (i.i.d.), the relative frequency p̂^t_{i,j} is a reasonable estimate of the pairwise probability (1).

5 The Plackett-Luce Model

The Plackett-Luce (PL) model is a widely-used probability distribution on rankings [24, 19]. It is parameterized by a "skill" vector v = (v1, . . . , vM) ∈ R^M₊ and mimics the successive construction of a ranking by selecting items position by position, each time choosing one of the remaining items i with a probability proportional to its skill vi. Thus, with o = r−1, the probability of a ranking r is

P(r | v) = ∏_{i=1}^{M} voi / (voi + voi+1 + · · · + voM).   (2)

As an appealing property of the PL model, we note that the marginal probabilities (1) are very easy to calculate [21], as they are simply given by

pi,j = vi / (vi + vj).   (3)

Likewise, the most probable ranking r⋆ can be obtained quite easily, simply by sorting the items according to their skill parameters, that is, r⋆i < r⋆j iff vi > vj.
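Formulas (2) and (3) are easy to compute directly. A small sketch (the example skill vector is my own); summing the ranking probabilities (2) over all orderings that place item i before item j recovers the marginal (3), which is exactly the consistency the text describes:

```python
from itertools import permutations

def pl_ranking_prob(ordering, v):
    """Probability (2) of the ranking whose associated ordering o = r^{-1}
    (items listed best-to-worst) is `ordering`, under skill vector v."""
    prob, remaining = 1.0, list(ordering)
    for i in list(ordering):
        prob *= v[i] / sum(v[j] for j in remaining)  # pick i among the rest
        remaining.remove(i)
    return prob

def pl_marginal(i, j, v):
    """Pairwise marginal (3): probability that item i is preferred to item j."""
    return v[i] / (v[i] + v[j])
```

For v = (3, 2, 1), summing over the three orderings that put item 0 before item 1 gives 1/3 + 1/6 + 1/10 = 3/5, matching (3).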
Moreover, the PL model satisfies strong stochastic transitivity, i.e., pi,k ≥ max(pi,j, pj,k) whenever pi,j ≥ 1/2 and pj,k ≥ 1/2 [18].

6 Ranking Distributions based on Sorting

In the classical sorting literature, the outcome of pairwise comparisons is deterministic and determined by an underlying total order of the items, namely the order the sorting algorithm seeks to find. Now, if the pairwise comparisons are stochastic, the sorting algorithm can still be run; however, the result it will return is a random ranking. Interestingly, this is another way to define a probability distribution over rankings: P(r) = P(r | P) is the probability that r is returned by the algorithm if stochastic comparisons are specified by P. Obviously, this view is closely connected to the problem of noisy sorting (see the related work section).

In a recent work by Ailon [1], the well-known QuickSort algorithm is investigated in a stochastic setting, where the pairwise comparisons are drawn from the pairwise marginals of the Plackett-Luce model. Several interesting properties are shown about the ranking distribution based on QuickSort, notably the property of pairwise stability. We denote the QuickSort-based ranking distribution by PQS(· | P), where the matrix P contains the marginals (3) of the Plackett-Luce model. Then, it can be shown that PQS(· | P) obeys the property of pairwise stability, which means that it preserves the marginals, although the distributions themselves might not be identical, i.e., PQS(· | P) ≠ P(· | v).

Theorem 1 (Theorem 4.1 in [1]). Let P be given by the pairwise marginals (3), i.e., pi,j = vi/(vi + vj). Then, pi,j = PQS(i ≻ j | P) = Σ_{r∈L(rj>ri)} PQS(r | P).

One drawback of the QuickSort algorithm is its complexity: to generate a random ranking, it compares O(M²) items in the worst case.

1We omit the index t if there is no danger of confusion.
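The QuickSort-based sampling process just described is short to implement, and the pairwise stability of Theorem 1 can be checked empirically. A sketch (this is my rendering of the sampler, not code from the paper):

```python
import random

def quicksort_rank(A, v, rng):
    """QuickSort-based random ordering (best-to-worst) of the items in A:
    each comparison against the pivot is resolved by a fresh draw from the
    PL marginal (3)."""
    if len(A) <= 1:
        return list(A)
    pivot = rng.choice(A)
    before, after = [], []
    for j in A:
        if j == pivot:
            continue
        # pivot preferred to j with probability v[pivot] / (v[pivot] + v[j])
        if rng.random() < v[pivot] / (v[pivot] + v[j]):
            after.append(j)
        else:
            before.append(j)
    return quicksort_rank(before, v, rng) + [pivot] + quicksort_rank(after, v, rng)
```

With skills v = (2, 1), item 0 should come out first with frequency close to the marginal 2/3, illustrating Theorem 1 on the simplest possible instance.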
Next, we shall introduce a budgeted version of the QuickSort algorithm, which terminates if the algorithm compares too many pairs, namely, more than $O(M \log M)$. Upon termination, the modified QuickSort algorithm only returns a partial order. Nevertheless, we will show that it still preserves the pairwise stability property.

6.1 The Budgeted QuickSort-based Algorithm

Algorithm 1 BQS(A, B)
Require: A, the set to be sorted, and a budget B
Ensure: $(r, B'')$, where $B''$ is the remaining budget, and r is the (partial) order that was constructed based on $B - B''$ samples
1: initialize r to be the empty partial order over A
2: if $B \le 0$ or $|A| \le 1$ then return $(r, 0)$
3: pick an element $i \in A$ uniformly at random
4: for all $j \in A \setminus \{i\}$ do
5:   draw a random sample $o_{i,j}$ according to the PL marginal (3)
6:   update r accordingly
7: $A_0 = \{j \in A \mid j \ne i \ \wedge \ o_{i,j} = 0\}$
8: $A_1 = \{j \in A \mid j \ne i \ \wedge \ o_{i,j} = 1\}$
9: $(r', B') = \mathrm{BQS}(A_0, B - |A| + 1)$
10: $(r'', B'') = \mathrm{BQS}(A_1, B')$
11: update r based on r' and r''
12: return $(r, B'')$

Algorithm 1 shows a budgeted version of the QuickSort-based random ranking generation process described in the previous section. It works in a way quite similar to the standard QuickSort-based algorithm, with the notable difference of terminating as soon as the number of pairwise comparisons exceeds the budget B, which is a parameter assumed as an input. Obviously, the BQS algorithm run with A = [M] and $B = \infty$ (or $B > M^2$) recovers the original QuickSort-based sampling algorithm as a special case. A run of BQS(A, ∞) can be represented quite naturally as a random tree τ: the root is labeled [M], and whenever a call to BQS(A, B) initiates a recursive call BQS(A', B'), a child node with label A' is added to the node with label A. Note that each such tree determines a ranking, denoted $r_\tau$, in a natural way.
The random ranking generated by BQS(A, ∞) for some subset $A \subseteq [M]$ was analyzed by Ailon [1], who showed that it gives back the same marginals as the original Plackett-Luce model (as recalled in Theorem 1). Now, for $B > 0$, denote by $\tau_B$ the tree the algorithm would have returned for the budget B instead of ∞.² Additionally, let $\mathcal{T}^B$ denote the set of all possible outcomes of $\tau_B$, and for two distinct indices i and j, let $\mathcal{T}^B_{i,j}$ denote the set of all trees $T \in \mathcal{T}^B$ in which i and j are incomparable in the associated ranking (i.e., some leaf of T is labelled by a superset of {i, j}). The main result of this section is that BQS does not introduce any bias in the marginals (3), i.e., Theorem 1 also holds for the budgeted version of BQS.

Proposition 2. For any $B > 0$, any set $A \subseteq I$ and any indices $i, j \in A$, the partial order $r = r_{\tau_B}$ generated by BQS(A, B) satisfies $P(i \succ_r j \mid \tau_B \in \mathcal{T}^B \setminus \mathcal{T}^B_{i,j}) = \frac{v_i}{v_i + v_j}$.

That is, whenever two items i and j are comparable by the partial ranking r generated by BQS, $i \succ_r j$ with probability exactly $\frac{v_i}{v_i + v_j}$. The basic idea of the proof (deferred to the appendix) is to show that, conditioned on the event that i and j are incomparable by r, $i \succ_r j$ would have been obtained with probability $\frac{v_i}{v_i + v_j}$ in case the execution of BQS had been continued (see Claim 6). The result then follows by combining this with Theorem 1.

²Put differently, τ is obtained from $\tau_B$ by continuing the execution of BQS ignoring the stopping criterion $B \le 0$.

7 The PAC-Item Problem and its Analysis

Algorithm 2 PLPAC(δ, ε)
1: for i, j = 1 → M do ▷ Initialization
2:   $\hat{p}_{i,j} = 0$ ▷ $\hat{P} = [\hat{p}_{i,j}]_{M \times M}$
3:   $n_{i,j} = 0$ ▷ $\hat{N} = [n_{i,j}]_{M \times M}$
4: set A = {1, . . . , M}
5: repeat
6:   r = BQS(A, a − 1) where a = #A ▷ Sorting-based random ranking
7:   update the entries of $\hat{P}$ and $\hat{N}$ corresponding to A based on r
8:   set $c_{i,j} = \sqrt{\frac{1}{2 n_{i,j}} \log \frac{4 M^2 n_{i,j}^2}{\delta}}$ for all $i \ne j$
9:   for $(i, j \in A) \wedge (i \ne j)$ do
10:    if $\hat{p}_{i,j} + c_{i,j} < 1/2$ then
11:      A = A \ {i}
▷ Discard
12: C = {i ∈ A | (∀j ∈ A \ {i}) $\hat{p}_{i,j} - c_{i,j} > 1/2 - \epsilon$}
13: until (#C ≥ 1)
14: return C

Our algorithm for finding the PAC item is based on the sorting-based sampling technique described in the previous section. The pseudocode of the algorithm, called PLPAC, is shown in Algorithm 2. In each iteration, we generate a ranking, which is partial (line 6), and translate this ranking into pairwise comparisons that are used to update the estimates of the pairwise marginals. Based on these estimates, we apply a simple elimination strategy, which consists of eliminating an item i if it is significantly beaten by another item j, that is, $\hat{p}_{i,j} + c_{i,j} < 1/2$ (lines 9–11). Finally, the algorithm terminates when it finds a PAC-item i, for which, by definition, $|p_{i^*,i} - 1/2| < \epsilon$. To identify an item i as a PAC-item, it is enough to guarantee that i is not beaten by any $j \in A$ with a margin bigger than ε, that is, $p_{i,j} > 1/2 - \epsilon$ for all $j \in A$. This sufficient condition is implemented in line 12. Since we only have empirical estimates of the $p_{i,j}$ values, the test of this condition does of course also take the confidence intervals into account. Note that $v_i = v_j$, $i \ne j$, implies $p_{i,j} = 1/2$. In this case, it is not possible to decide whether $p_{i,j}$ is above 1/2 or not on the basis of a finite number of pairwise comparisons. The ε-relaxation of the goal to be achieved provides a convenient way to circumvent this problem.

7.1 Sample Complexity Analysis of PLPAC

First, let $r_t$ denote the (partial) ordering produced by BQS in the t-th iteration. Note that each of these (partial) orderings defines a bucket order: the indices are partitioned into different classes (buckets) in such a way that no pair is comparable within one class, but pairs from different classes are; thus, if i and i′ belong to some class and j and j′ belong to some other class, then either $i \succ_{r_t} j$ and $i' \succ_{r_t} j'$, or $j \succ_{r_t} i$ and $j' \succ_{r_t} i'$.
More specifically, the BQS algorithm with budget a − 1 (line 6) always results in a bucket order containing only two buckets, since no recursive call is carried out with this budget. One can then show that the optimal arm $i^*$ and an arbitrary arm $i \ne i^*$ fall into different buckets "often enough". This observation allows us to upper-bound the number of pairwise comparisons taken by PLPAC with high probability. The proof of the next theorem is deferred to Appendix B.

Theorem 3. Set $\Delta_i = \frac{1}{2}\max\{\epsilon,\, p_{i^*,i} - 1/2\} = \frac{1}{2}\max\{\epsilon,\, \frac{v_{i^*} - v_i}{2(v_{i^*} + v_i)}\}$ for each index $i \ne i^*$. With probability at least $1 - \delta$, after $O\left(\max_{i \ne i^*} \frac{1}{\Delta_i^2} \log \frac{M}{\Delta_i \delta}\right)$ calls of BQS with budget M − 1, PLPAC terminates and outputs an ε-optimal arm. Therefore, the total number of samples is $O\left(M \max_{i \ne i^*} \frac{1}{\Delta_i^2} \log \frac{M}{\Delta_i \delta}\right)$.

In Theorem 3, the dependence on M is of order $M \log M$. It is easy to show that $\Omega(M \log M)$ is a lower bound; therefore our result is optimal from this point of view. Our model assumptions based on the PL model imply some regularity properties for the pairwise marginals, such as strong stochastic transitivity and the stochastic triangle inequality (see Appendix A of [28] for the proof). Therefore, the INTERLEAVED FILTER [28] and BEAT THE MEAN [29] algorithms can be directly applied in our online framework. Both algorithms achieve a similar sample complexity of order $M \log M$. Yet, our experimental study in Section 9.1 clearly shows that, provided our model assumptions on pairwise marginals are valid, PLPAC outperforms both algorithms in terms of empirical sample complexity.

8 The AMPR Problem and its Analysis

For strictly more than two elements, the sorting-based surrogate distribution and the PL distribution are in general not identical, although their mode rankings coincide [1]. The mode $r^*$ of a PL model is the ranking that sorts the items in decreasing order of their skill values: $r_i < r_j$ iff $v_i > v_j$ for any $i \ne j$.
Moreover, since $v_i > v_j$ implies $p_{i,j} > 1/2$, sorting based on the Copeland score $b_i = \#\{1 \le j \le M \mid (i \ne j) \wedge (p_{i,j} > 1/2)\}$ yields a most probable ranking $r^*$. Our algorithm is based on estimating the Copeland scores of the items. Its pseudo-code is shown in Algorithm 3 in Appendix C. As a first step, it generates rankings based on sorting, which are used to update the pairwise probability estimates $\hat{P}$. Then, it computes a lower bound $\underline{b}_i$ and an upper bound $\overline{b}_i$ for each of the scores $b_i$. The lower bound is given as $\underline{b}_i = \#\{j \in [M] \setminus \{i\} \mid \hat{p}_{i,j} - c > 1/2\}$, which is the number of items that are beaten by item i based on the current empirical estimates of the pairwise marginals. Similarly, the upper bound is given as $\overline{b}_i = \underline{b}_i + s_i$, where $s_i = \#\{j \in [M] \setminus \{i\} \mid 1/2 \in [\hat{p}_{i,j} - c, \hat{p}_{i,j} + c]\}$. Obviously, $s_i$ is the number of pairs for which, based on the current empirical estimates, it cannot be decided whether $p_{i,j}$ is above or below 1/2. As an important observation, note that there is no need to generate a full ranking based on sorting in every case, because if $[\underline{b}_i, \overline{b}_i] \cap [\underline{b}_j, \overline{b}_j] = \emptyset$, then we already know the order of items i and j with respect to $r^*$. Motivated by this observation, consider the interval graph G = ([M], E) based on the intervals $[\underline{b}_i, \overline{b}_i]$, where $E = \{(i, j) \in [M]^2 \mid [\underline{b}_i, \overline{b}_i] \cap [\underline{b}_j, \overline{b}_j] \ne \emptyset\}$. Denote the connected components of this graph by $C_1, \ldots, C_k \subseteq [M]$. Obviously, if two items belong to different components, then they do not need to be compared anymore. Therefore, it is enough to call the sorting-based sampling on the connected components. Finally, the algorithm terminates if the goal is achieved (line 20). More specifically, it terminates if there is no pair of items i and j whose ordering with respect to $r^*$ is not yet elicited, i.e., $[\underline{b}_i, \overline{b}_i] \cap [\underline{b}_j, \overline{b}_j] \ne \emptyset$, unless their pairwise probability is close to 1/2, i.e., $|p_{i,j} - 1/2| < \epsilon$.
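The Copeland-score bounds and the interval-graph decomposition can be sketched as follows. This is an illustrative sketch with our own function names; for brevity it assumes a single common confidence radius `c` for all pairs, whereas Algorithm 3 uses a radius that depends on each pair's sample count:

```python
def copeland_bounds(p_hat, c, M):
    """Lower/upper bounds on the Copeland scores b_i.

    b_lo[i] counts items i beats significantly; s_i counts undecided
    pairs, so b_hi[i] = b_lo[i] + s_i.
    """
    b_lo, b_hi = [0] * M, [0] * M
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            if p_hat[i][j] - c > 0.5:
                b_lo[i] += 1
            elif p_hat[i][j] - c <= 0.5 <= p_hat[i][j] + c:
                b_hi[i] += 1  # undecided pair: contributes to s_i
    for i in range(M):
        b_hi[i] += b_lo[i]
    return b_lo, b_hi

def components(b_lo, b_hi):
    """Connected components of the interval graph on [b_lo_i, b_hi_i].

    Items in different components never need to be compared again.
    """
    M = len(b_lo)
    order = sorted(range(M), key=lambda i: b_lo[i])
    comps, current, reach = [], [], -1
    for i in order:
        if current and b_lo[i] > reach:  # gap: intervals cannot overlap
            comps.append(current)
            current = []
        current.append(i)
        reach = max(reach, b_hi[i])
    if current:
        comps.append(current)
    return comps
```

Sorting the intervals by their lower endpoints lets the components be found in a single sweep, so the decomposition itself costs only $O(M \log M)$.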
8.1 Sample Complexity Analysis of PLPAC-AMPR

Denote by $q_M$ the expected number of comparisons of the (standard) QuickSort algorithm on M elements, namely, $q_M = 2M \log M + O(\log M)$ (see e.g. [22]). Thanks to the concentration property of the performance of the QuickSort algorithm, no pair of items falls into the same bucket "too often" in the bucket order output by BQS. This observation allows us to upper-bound the number of pairwise comparisons taken by PLPAC-AMPR with high probability. The proof of the next theorem is deferred to Appendix D.

Theorem 4. Set $\Delta'_{(i)} = \frac{1}{2}\max\{\epsilon,\, \frac{v_{(i)} - v_{(i+1)}}{2(v_{(i)} + v_{(i+1)})}\}$ for each $1 \le i < M$, where $v_{(i)}$ denotes the i-th largest skill parameter. With probability at least $1 - \delta$, after $O\left(\max_{1 \le i \le M-1} \frac{1}{(\Delta'_{(i)})^2} \log \frac{M}{\Delta'_{(i)} \delta}\right)$ calls of BQS with budget $\frac{3}{2} q_M$, the algorithm PLPAC-AMPR terminates and outputs an ε-approximation of the most probable ranking. Therefore, the total number of samples is $O\left((M \log M) \max_{1 \le i \le M-1} \frac{1}{(\Delta'_{(i)})^2} \log \frac{M}{\Delta'_{(i)} \delta}\right)$.

Remark 5. The RankCentrality algorithm proposed in [23] converts the empirical pairwise marginals $\hat{P}$ into a row-stochastic matrix $\hat{Q}$. Then, considering $\hat{Q}$ as the transition matrix of a Markov chain, it ranks the items based on its stationary distribution. In [25], the authors show that if the pairwise marginals obey a PL distribution, this algorithm produces the mode of this distribution if the sample size is sufficiently large. In their setup, the learning algorithm has no influence on the selection of pairs to be compared; instead, comparisons are sampled using a fixed underlying distribution over the pairs. For any sampling distribution, their PAC bound is of order at least $M^3$, whereas our sample complexity bound in Theorem 4 is of order $M \log^2 M$.

9 Experiments

Our approach strongly exploits the assumption of a data generating process that can be modeled by means of a PL distribution.
The experimental studies presented in this section are mainly aimed at showing that it does so successfully, namely, that it has advantages over other approaches in situations where this model assumption is indeed valid. To this end, we work with synthetic data. Nevertheless, in order to get an idea of the robustness of our algorithm toward violation of the model assumptions, some first experiments on real data are presented in Appendix I.³

9.1 The PAC-Item Problem

We compared our PLPAC algorithm with other preference-based algorithms applicable in our setting, namely INTERLEAVED FILTER (IF) [28], BEAT THE MEAN (BTM) [29] and MALLOWSMPI [7]. While each of these algorithms follows a successive elimination strategy and discards items one by one, they differ with regard to the sampling strategy they follow. Since the time horizon must be given in advance for IF, we ran it with $T \in \{100, 1000, 10000\}$, subsequently referred to as IF(T). The BTM algorithm can be accommodated in our setup as is (see Algorithm 3 in [29]). The MALLOWSMPI algorithm assumes a Mallows model [20] instead of PL as the underlying probability distribution over rankings, and it seeks to find the Condorcet winner; it can be applied in our setting, too, since a Condorcet winner does exist for PL. Since the baseline methods, except for BTM, are not able to handle an ε-approximation, we ran our algorithm with ε = 0 (and made sure that $v_i \ne v_j$ for all $1 \le i \ne j \le M$).

[Figure 1: Sample complexity (number of pairwise comparisons) versus the number of arms $M \in \{5, 10, 15\}$ for PLPAC, IF(100), IF(1000), IF(10000), BTM and MALLOWSMPI; panels (a) c = 0, (b) c = 2, (c) c = 5; δ = 0.1, ε = 0. The results are averaged over 100 repetitions.]
We tested the learning algorithms by setting the parameters of PL to $v_i = 1/(c + i)$ with $c \in \{0, 1, 2, 3, 5\}$. The parameter c controls the complexity of the rank elicitation task, since the gaps between the pairwise probabilities and 1/2 are of the form $|p_{i,j} - 1/2| = \left|\frac{1}{2} - \frac{1}{1 + \frac{i+c}{j+c}}\right|$, which converges to zero as $c \to \infty$. We evaluated the algorithms on this test case with varying numbers of items $M \in \{5, 10, 15\}$ and with various values of the parameter c, and plotted the sample complexities, that is, the number of pairwise comparisons taken by the algorithms prior to termination. The results are shown in Figure 1 (only for $c \in \{0, 2, 5\}$; the remaining plots are deferred to Appendix E). As can be seen, the PLPAC algorithm significantly outperforms the baseline methods if the pairwise comparisons match the model assumption, namely, if they are drawn from the marginals of a PL distribution. MALLOWSMPI achieves a performance that is slightly worse than PLPAC for M = 5, and its performance is among the worst for M = 15. This can be explained by the elimination strategy of MALLOWSMPI, which heavily relies on the existence of a gap $\min_{i \ne j} |p_{i,j} - 1/2| > 0$ between all pairwise probabilities and 1/2; in our test case, the minimal gap $p_{M-1,M} - 1/2 = \frac{1}{2 - 1/(c+M)} - \frac{1}{2} > 0$ gets smaller with increasing M and c. The poor performance of BTM for large c and M can be explained by the same argument.

9.2 The AMPR Problem

Since the RankCentrality algorithm produces the most probable ranking if the pairwise marginals obey a PL distribution and the sample size is sufficiently large (cf. Remark 5), it was taken as a baseline. Using the same test case as before, input data of various sizes was generated for RankCentrality based on uniform sampling of the pairs to be compared. Its performance is shown by the black lines in Figure 2 (the results for $c \in \{1, 3, 4\}$ are again deferred to Appendix F).
The accuracy in a single run of the algorithm is 1 if the output of RankCentrality is identical to the most probable ranking, and 0 otherwise; this accuracy was averaged over 100 runs.

³In addition, we conducted some experiments to assess the impact of the parameter ε and to test our algorithms based on Clopper-Pearson confidence intervals. These experiments are deferred to Appendices H and G due to lack of space.

[Figure 2: Sample complexity for finding the approximately most probable ranking (AMPR): optimal recovery fraction versus sample size for RankCentrality and PLPAC-AMPR with $M \in \{5, 10, 15\}$; panels (a) c = 0, (b) c = 2, (c) c = 5; δ = 0.05, ε = 0. The results are averaged over 100 repetitions.]

We also ran our PLPAC-AMPR algorithm and determined the number of pairwise comparisons it takes prior to termination. The horizontal lines in Figure 2 show the empirical sample complexity achieved by PLPAC-AMPR with ε = 0. In accordance with Theorem 4, the accuracy of PLPAC-AMPR was always significantly higher than $1 - \delta$ (actually equal to 1 in almost every case). As can be seen, RankCentrality slightly outperforms PLPAC-AMPR in terms of sample complexity, that is, it achieves an accuracy of 1 for a smaller number of pairwise comparisons. Keep in mind, however, that PLPAC-AMPR only terminates when its output is correct with probability at least $1 - \delta$. Moreover, it computes the confidence intervals for the statistics it uses based on the Chernoff-Hoeffding bound, which is known to be very conservative.
As opposed to this, RankCentrality is an offline algorithm without any performance guarantee if the sample size is not sufficiently large (see Remark 5). Therefore, it is not surprising that, asymptotically, its empirical sample complexity shows a better behavior than the complexity of our online learner. As a final remark, ranking distributions can in principle be defined based on any sorting algorithm, for example MergeSort. However, to the best of our knowledge, pairwise stability has not yet been shown for any sorting algorithm other than QuickSort. We empirically tested the MergeSort algorithm in our experimental study, simply by using it in place of budgeted QuickSort in the PLPAC-AMPR algorithm. We found MergeSort inappropriate for the PL model, since the accuracy of PLPAC-AMPR, when used with MergeSort instead of QuickSort, drops drastically on complex tasks; for details, see Appendix J. The question of pairwise stability of different sorting algorithms for various ranking distributions, such as the Mallows model, is an interesting research avenue to be explored.

10 Conclusion and Future Work

In this paper, we studied different problems of online rank elicitation based on pairwise comparisons under the assumption of a Plackett-Luce model. Taking advantage of this assumption, our idea is to construct a surrogate probability distribution over rankings based on a sorting procedure, namely QuickSort, for which the pairwise marginals provably coincide with the marginals of the PL distribution. In this way, we manage to exploit the (stochastic) transitivity properties of PL, which is at the origin of the efficiency of our approach, together with the idea of replacing the original QuickSort with a budgeted version of this algorithm. In addition to a formal performance and complexity analysis of our algorithms, we also presented first experimental studies showing the effectiveness of our approach.
Needless to say, in addition to the problems studied in this paper, there are many other interesting problems that can be tackled within the preference-based framework of online learning. For example, going beyond a single item or ranking, we may look for a good estimate $\hat{P}$ of the entire distribution P, for example, an estimate with small Kullback-Leibler divergence: $KL(P, \hat{P}) < \epsilon$. With regard to the use of sorting algorithms, another interesting open question is the following: is there any sorting algorithm with a worst-case complexity of order $M \log M$ which preserves the marginal probabilities? This question might be difficult to answer since, as we conjecture, the MergeSort and InsertionSort algorithms, which are both well-known algorithms with an $M \log M$ complexity, do not satisfy this property.

References
[1] Nir Ailon. Reconciling real scores with binary comparisons: A new logistic based model for ranking. In Advances in Neural Information Processing Systems 21, pages 25–32, 2008.
[2] M. Braverman and E. Mossel. Noisy sorting without resampling. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 268–276, 2008.
[3] M. Braverman and E. Mossel. Sorting from noisy information. CoRR, abs/0910.1191, 2009.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th ALT, pages 23–37, Berlin, Heidelberg, 2009. Springer-Verlag.
[5] S. Bubeck, T. Wang, and N. Viswanathan. Multiple identifications in multi-armed bandits. In Proceedings of the 30th ICML, pages 258–265, 2013.
[6] R. Busa-Fekete and E. Hüllermeier. A survey of preference-based online learning with bandit algorithms. In Algorithmic Learning Theory (ALT), volume 8776, pages 18–39, 2014.
[7] R. Busa-Fekete, E. Hüllermeier, and B. Szörényi. Preference-based rank elicitation using statistical models: The case of Mallows. In ICML, volume 32 (2), pages 1071–1079, 2014.
[8] R. Busa-Fekete, B. Szörényi, and E. Hüllermeier. PAC rank elicitation through adaptive sampling of stochastic pairwise preferences. In AAAI, pages 1701–1707, 2014.
[9] R. Busa-Fekete, B. Szörényi, P. Weng, W. Cheng, and E. Hüllermeier. Top-k selection based on adaptive sampling of noisy preferences. In Proceedings of the 30th ICML, JMLR W&CP, volume 28, 2013.
[10] C. J. Clopper and E. S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404–413, 1934.
[11] E. Even-Dar, S. Mannor, and Y. Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Proceedings of the 15th COLT, pages 255–270, 2002.
[12] Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with noisy information. SIAM J. Comput., 23(5):1001–1018, October 1994.
[13] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. In NIPS 24, pages 2222–2230, 2011.
[14] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In Proceedings of the 26th ICML, pages 377–384, 2009.
[15] C. A. R. Hoare. Quicksort. Comput. J., 5(1):10–15, 1962.
[16] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[17] D. R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32(1):384–406, 2004.
[18] R. Luce and P. Suppes. Handbook of Mathematical Psychology, chapter Preference, Utility and Subjective Probability, pages 249–410. Wiley, 1965.
[19] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[20] C. Mallows. Non-null ranking models. Biometrika, 44(1):114–130, 1957.
[21] John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.
[22] C. J. H. McDiarmid and R. B. Hayward. Large deviations for Quicksort. Journal of Algorithms, 21(3):476–507, 1996.
[23] S. Negahban, S. Oh, and D. Shah. Iterative ranking from pairwise comparisons. In Advances in Neural Information Processing Systems, pages 2483–2491, 2012.
[24] R. Plackett. The analysis of permutations. Applied Statistics, 24:193–202, 1975.
[25] Arun Rajkumar and Shivani Agarwal. A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In ICML, pages 118–126, 2014.
[26] H. A. Soufiani, W. Z. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for rank aggregation. In Advances in Neural Information Processing Systems (NIPS), pages 2706–2714, 2013.
[27] T. Urvoy, F. Clerot, R. Féraud, and S. Naamane. Generic exploration and k-armed voting bandits. In Proceedings of the 30th ICML, JMLR W&CP, volume 28, pages 91–99, 2013.
[28] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 78(5):1538–1556, 2012.
[29] Y. Yue and T. Joachims. Beat the mean bandit. In Proceedings of the ICML, pages 241–248, 2011.
[30] M. Zoghi, S. Whiteson, R. Munos, and M. Rijke. Relative upper confidence bound for the k-armed dueling bandit problem. In ICML, pages 10–18, 2014.
Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets

Pascal Vincent∗, Alexandre de Brébisson, Xavier Bouthillier
Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, Québec, CANADA
∗and CIFAR

Abstract

An important class of problems involves training deep neural networks with sparse prediction targets of very high dimension D. These occur naturally in e.g. neural language models or the learning of word-embeddings, often posed as predicting the probability of next words among a vocabulary of size D (e.g. 200 000). Computing the equally large, but typically non-sparse D-dimensional output vector from a last hidden layer of reasonable dimension d (e.g. 500) incurs a prohibitive O(Dd) computational cost for each example, as does updating the D × d output weight matrix and computing the gradient needed for backpropagation to previous layers. While efficient handling of large sparse network inputs is trivial, the case of large sparse targets is not, and has thus so far been sidestepped with approximate alternatives such as hierarchical softmax or sampling-based approximations during training. In this work we develop an original algorithmic approach which, for a family of loss functions that includes squared error and spherical softmax, can compute the exact loss, gradient update for the output weights, and gradient for backpropagation, all in $O(d^2)$ per example instead of O(Dd), remarkably without ever computing the D-dimensional output. The proposed algorithm yields a speedup of $\frac{D}{4d}$, i.e. two orders of magnitude for typical sizes, for that critical part of the computations that often dominates the training time in this kind of network architecture.

1 Introduction

Many modern applications of neural networks have to deal with data represented, or representable, as very large sparse vectors.
Such representations arise in natural language related tasks, where the dimension D of that vector is typically (a multiple of) the size of the vocabulary, and also in the sparse user-item matrices of collaborative-filtering applications. It is trivial to handle very large sparse inputs to a neural network in a computationally efficient manner: the forward propagation and update to the input weight matrix after backpropagation are correspondingly sparse. By contrast, training with very large sparse prediction targets is problematic: even if the target is sparse, the computation of the equally large network output and the corresponding gradient update to the huge output weight matrix are not sparse and thus computationally prohibitive. This has been a practical problem ever since Bengio et al. [1] first proposed using a neural network for learning a language model, in which case the computed output vector represents the probability of the next word and is the size of the considered vocabulary, which is becoming increasingly large in modern applications [2]. Several approaches have been proposed to attempt to address this difficulty, essentially by sidestepping it. They fall into two categories:

• Sampling or selection based approximations consider and compute only a tiny fraction of the output's dimensions, sampled at random or heuristically chosen. The reconstruction sampling of Dauphin et al. [3], the efficient use of biased importance sampling in Jean et al. [4], and the use of Noise Contrastive Estimation [5] in Mnih and Kavukcuoglu [6] and Mikolov et al. [7] all fall under this category, as does the more recent use of approximate Maximum Inner Product Search based on Locality Sensitive Hashing techniques [8, 9] to select a good candidate subset.

• Hierarchical softmax [10, 7] imposes a heuristically defined hierarchical tree structure for the computation of the normalized probability of the target class.
Compared to the initial problem of considering all D output dimensions, both kinds of approaches are crude approximations. In the present work, we will instead investigate a way to actually perform the exact gradient update that corresponds to considering all D outputs, but do so implicitly, in a computationally efficient manner, without actually computing the D outputs. This approach works for a relatively restricted class of loss functions, the simplest of which is linear output with squared error (a natural choice for sparse real-valued regression targets). The most common choice for multiclass classification, the softmax loss, is not part of that family, but we may use an alternative, the spherical softmax, which will also yield normalized class probabilities. For simplicity and clarity, our presentation will focus on squared error and on an online setting. We will briefly discuss its extension to minibatches and to the class of possible loss functions in Sections 3.5 and 3.6.

2 The problem

2.1 Problem definition and setup

We are concerned with gradient-descent based training of a deep feed-forward neural network with target vectors of very high dimension D (e.g. D = 200 000) but that are sparse, i.e. a comparatively small number, at most K ≪ D, of the elements of the target vector are non-zero. Such a K-sparse vector will typically be stored and represented compactly as 2K numbers corresponding to pairs (index, value). A network to be trained with such targets will naturally have an equally large output layer of dimension D. We can also optionally allow the input to the network to be a similarly high-dimensional sparse vector of dimension $D_{in}$. Between the large sparse target, output, and (optionally large sparse) input, we suppose the network's intermediate hidden layers to be of smaller, more typically manageable, dimension d ≪ D (e.g. d = 500).¹

Mathematical notation: Vectors are denoted using lower-case letters, e.g.
h, and are considered column-vectors; corresponding row vectors are denoted with a transpose, e.g. $h^T$. Matrices are denoted using upper-case letters, e.g. W, with $W^T$ the transpose of W. The i-th column of W is denoted $W_i$, and its i-th row $W_{:i}$ (both viewed as a column vector). $U^{-T} = (U^{-1})^T$ denotes the transpose of the inverse of a square matrix. $I_d$ is the d × d identity matrix.

Network architecture: We consider a standard feed-forward neural network architecture as depicted in Figure 1. An input vector $x \in \mathbb{R}^{D_{in}}$ is linearly transformed into a linear activation $a^{(1)} = W^{(1)T} x + b^{(1)}$ through a $D_{in} \times d$ input weight matrix $W^{(1)}$ (and an optional bias vector $b^{(1)} \in \mathbb{R}^d$). This is typically followed by a non-linear transformation s to yield the representation of the first hidden layer $h^{(1)} = s(a^{(1)})$. This first hidden layer representation is then similarly transformed through a number of subsequent non-linear layers (which can be of any usual kind amenable to backpropagation), e.g. $h^{(k)} = s(a^{(k)})$ with $a^{(k)} = W^{(k)T} h^{(k-1)} + b^{(k)}$, until we obtain the last hidden layer representation $h = h^{(m)}$. We then obtain the final D-dimensional network output as $o = Wh$, where W is a D × d output weight matrix, which will be our main focus in this work. Finally, the network's D-dimensional output o is compared to the D-dimensional target vector y associated with input x using squared error, yielding loss $L = \|o - y\|^2$.

Training procedure: This architecture is a typical (possibly deep) multi-layer feed-forward neural network architecture with a linear output layer and squared error loss. Its parameters (weight matrices and bias vectors) will be trained by gradient descent, using gradient backpropagation [11, 12, 13] to efficiently compute the gradients. The procedure is shown in Figure 1.
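Concretely, the output-layer part of this procedure (forward pass, squared-error loss, backpropagated gradient, and weight update) can be sketched naively as follows; this is precisely the O(Dd) computation that the paper sets out to avoid. The function name and plain-list representation are illustrative, and the K-sparse target is passed as (index, value) lists:

```python
def output_layer_step(W, h, y_idx, y_val, lr):
    """Naive O(Dd) output-layer pass: o = W h, loss L = ||o - y||^2,
    grad_h = W^T grad_o, and update W <- W - lr * grad_o h^T.

    The K-sparse target y is given as parallel lists (y_idx, y_val).
    Returns (loss, grad_h); W is updated in place.
    """
    D, d = len(W), len(h)
    o = [sum(W[i][t] * h[t] for t in range(d)) for i in range(D)]   # O(Dd)
    y = [0.0] * D
    for i, val in zip(y_idx, y_val):
        y[i] = val
    grad_o = [2.0 * (o[i] - y[i]) for i in range(D)]
    loss = sum((o[i] - y[i]) ** 2 for i in range(D))
    # gradient for backpropagation, computed before touching W       # O(Dd)
    grad_h = [sum(W[i][t] * grad_o[i] for i in range(D)) for t in range(d)]
    for i in range(D):                                              # O(Dd)
        for t in range(d):
            W[i][t] -= lr * grad_o[i] * h[t]
    return loss, grad_h
```

All three stages touch every one of the D × d entries of W, which is exactly why this part dominates training time when D is very large.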
Given an example from the training set as an (input, target) pair (x, y), a pass of forward propagation proceeds as outlined above, computing the hidden representation of each hidden layer in turn based on the previous one, and finally the network's predicted output o and associated loss L. A pass of gradient backpropagation then works in the opposite direction, starting from $\nabla_o = \frac{\partial L}{\partial o} = 2(o - y)$ and propagating back the gradients $\nabla_{h^{(k)}} = \frac{\partial L}{\partial h^{(k)}}$ and $\nabla_{a^{(k)}} = \frac{\partial L}{\partial a^{(k)}}$ upstream through the network. The corresponding gradient contributions on parameters (weights and biases), collected along the way, are straightforward once we have the associated $\nabla_{a^{(k)}}$. Specifically they are $\nabla_{b^{(k)}} = \nabla_{a^{(k)}}$ and $\nabla_{W^{(k)}} = h^{(k-1)} (\nabla_{a^{(k)}})^T$. Similarly for the input layer $\nabla_{W^{(1)}} = x (\nabla_{a^{(1)}})^T$, and for the output layer $\nabla_W = (o - y) h^T$.

¹Our approach does not impose any restriction on the architecture nor size of the hidden layers, as long as they are amenable to usual gradient backpropagation.

[Figure 1: The computational problem posed by very large sparse targets. Dealing with sparse input efficiently is trivial, with both the forward and backward propagation phases easily achieved in O(Kd). However this is not the case with large sparse targets. They incur a prohibitive computational cost of O(Dd) at the output layer, as forward propagation, gradient backpropagation and weight update each require accessing all D × d elements of the large output weight matrix.]
Parameters are then updated through a gradient descent step W^{(k)} ← W^{(k)} − η∇W^{(k)} and b^{(k)} ← b^{(k)} − η∇b^{(k)}, where η is a positive learning rate. Similarly for the output layer, which will be our main focus here: W ← W − η∇W.

2.2 The easy part: input layer forward propagation and weight update

It is easy and straightforward to efficiently compute the forward propagation, and the backpropagation and weight update part for the input layer, when we have a very large D_in-dimensional but K-sparse input vector x with an appropriate sparse representation. Specifically we suppose that x is represented as a pair of vectors u, v of length (at most) K, where u contains integer indexes and v the associated real values of the elements of x, such that x_i = 0 if i ∉ u, and x_{u_k} = v_k.

• Forward propagation through the input layer: The sparse representation of x as the positions of K elements together with their values makes it cheap to compute W^{(1)T} x. Even though W^{(1)} may be a huge full D_in × d matrix, only K of its rows (those corresponding to the non-zero entries of x) need to be visited and summed to compute W^{(1)T} x. Precisely, with our (u, v) sparse representation of x, this operation can be written as W^{(1)T} x = Σ_{k=1}^{K} v_k W^{(1)}_{:u_k}, where each W^{(1)}_{:u_k} is a d-dimensional vector, making this an O(Kd) operation rather than O(Dd).

• Gradient and update through the input layer: Let us for now suppose that we were able to get gradients (through backpropagation) up to the first hidden layer activations a^{(1)} ∈ R^d in the form of a gradient vector ∇a^{(1)} = ∂L/∂a^{(1)}. The corresponding gradient-based update to the input layer weights W^{(1)} is simply W^{(1)} ← W^{(1)} − ηx(∇a^{(1)})^T. This is a rank-one update to W^{(1)}. Here again, we see that only the K rows of W^{(1)} associated to the (at most) K non-zero entries of x need to be modified. Precisely this operation can be written as W^{(1)}_{:u_k} ← W^{(1)}_{:u_k} − ηv_k∇a^{(1)} for all k ∈ {1, ..., K}, making this again an O(Kd) operation rather than O(Dd).
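As a concrete illustration, the two O(Kd) input-layer operations above can be sketched in NumPy (a minimal sketch with made-up toy sizes; the paper's D_in would be far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, d, K = 1000, 8, 3          # toy sizes, chosen only for illustration
W1 = rng.normal(size=(D_in, d))  # input weight matrix W^(1)

# (u, v) sparse representation of x: indices and values of the K non-zeros
u = np.array([3, 250, 999])
v = np.array([0.5, -1.0, 2.0])

# Forward: W^(1)T x touches only the K rows W1[u_k, :]  ->  O(Kd)
a1 = (v[:, None] * W1[u]).sum(axis=0)

# Dense O(Dd) check
x = np.zeros(D_in); x[u] = v
assert np.allclose(a1, W1.T @ x)

# Update: W1 <- W1 - eta * x (grad_a1)^T is rank-one, touching the same K rows
eta = 0.01
grad_a1 = rng.normal(size=d)
W1_ref = W1 - eta * np.outer(x, grad_a1)   # naive dense reference
W1[u] -= eta * v[:, None] * grad_a1        # O(Kd) in-place update
assert np.allclose(W1, W1_ref)
```

Only the K indexed rows are ever read or written, which is what makes the input layer cheap regardless of D_in.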
2.3 The hard part: output layer propagation and weight update

Given some network input x, we suppose we can compute without difficulty through forward propagation the associated last hidden layer representation h ∈ R^d. From then on:

• Computing the final output o = Wh incurs a prohibitive computational cost of O(Dd) since W is a full D × d matrix. Note that there is a priori no reason for representation h to be sparse (e.g. with a sigmoid non-linearity), but even if it was, this would not fundamentally change the problem, since it is D that is extremely large, and we supposed d reasonably sized already. Computing the residual (o − y) and associated squared error loss ∥o − y∥^2 incurs an additional O(D) cost.

• The gradient on h that we need to backpropagate to lower layers is ∇h = ∂L/∂h = 2W^T(o − y), which is another O(Dd) matrix-vector product.

• Finally, when performing the corresponding output weight update W ← W − η(o − y)h^T, we see that it is a rank-one update that updates all D × d elements of W, which again incurs a prohibitive O(Dd) computational cost.

For very large D, all three of these O(Dd) operations are prohibitive, and the fact that y is sparse, seen from this perspective, doesn't help, since neither o nor o − y will be sparse.

3 A computationally efficient algorithm for performing the exact online gradient update

Previously proposed workarounds are approximate or use stochastic sampling. We propose a different approach that results in the exact same, yet efficient, gradient update, remarkably without ever having to compute the large output o.

3.1 Computing the squared error loss L and the gradient with respect to h efficiently

Suppose that we have, for a network input example x, computed the last hidden representation h ∈ R^d through forward propagation. The network's D-dimensional output o = Wh is then in principle compared to the high-dimensional target y ∈ R^D. The corresponding squared error loss is L = ∥Wh − y∥^2.
As we saw in Section 2.3, computing it in the direct naive way would have a prohibitive computational complexity of O(Dd + D) = O(Dd), because computing output Wh with a full D × d matrix W and a typically non-sparse h is O(Dd). Similarly, to backpropagate the gradient through the network, we need to compute the gradient of loss L with respect to the last hidden layer representation h. This is ∇h = ∂L/∂h = ∂∥Wh − y∥^2/∂h = 2W^T(Wh − y). So again, if we were to compute it directly in this manner, the computational complexity would be a prohibitive O(Dd). Provided we have maintained an up-to-date matrix Q = W^T W, which is of reasonable size d × d and can be cheaply maintained as we will see in Section 3.3, we can rewrite these two operations so as to perform them in O(d^2):

Loss computation:
L = ∥Wh − y∥^2 = (Wh − y)^T (Wh − y) = h^T W^T Wh − y^T Wh − h^T W^T y + y^T y = h^T Qh − 2h^T (W^T y) + y^T y,   (1)
where Qh is O(d^2), W^T y is O(Kd), and y^T y is O(K).

Gradient on h:
∇h = ∂L/∂h = ∂∥Wh − y∥^2/∂h = 2W^T(Wh − y) = 2(W^T Wh − W^T y) = 2(Qh − W^T y),   (2)
where again Qh is O(d^2) and W^T y is O(Kd).

The terms in O(Kd) and O(K) are due to leveraging the K-sparse representation of target vector y. With K ≪ D and d ≪ D, we get altogether a computational cost of O(d^2), which can be several orders of magnitude cheaper than the prohibitive O(Dd) of the direct approach.

3.2 Efficient gradient update of W

The gradient of the squared error loss with respect to output layer weight matrix W is ∂L/∂W = ∂∥Wh − y∥^2/∂W = 2(Wh − y)h^T. And the corresponding gradient descent update to W would be W_new ← W − 2η(Wh − y)h^T, where η is a positive learning rate. Again, computed in this manner, this induces a prohibitive O(Dd) computational complexity, both to compute output and residual Wh − y, and then to update all the Dd elements of W (since generally neither Wh − y nor h will be sparse). All D × d elements of W must be accessed during this update. On the surface this seems hopeless.
But we will now see how we can achieve the exact same update on W in O(d^2). The trick is to represent W implicitly as the factorization W = V U, with W of size D × d, V of size D × d and U of size d × d, and to update U and V instead:

a) U_new = U − 2η(Uh)h^T   (3)
b) V_new = V + 2ηy(U_new^{-T}h)^T   (4)

This results in implicitly updating W as we did explicitly in the naive approach, as we now prove:

V_new U_new = (V + 2ηy(U_new^{-T}h)^T) U_new
= V U_new + 2ηy(U_new^{-T}h)^T U_new
= V U_new + 2ηyh^T U_new^{-1} U_new
= V (U − 2η(Uh)h^T) + 2ηyh^T
= V U − 2ηV Uhh^T + 2ηyh^T
= V U − 2η(V Uh − y)h^T
= W − 2η(Wh − y)h^T
= W_new

We see that the update of U in Eq. 3 is a simple O(d^2) operation. Following this simple rank-one update to U, we can use the Sherman-Morrison formula to derive the corresponding rank-one update to U^{-T}, which will also be O(d^2):

U_new^{-T} = U^{-T} + (2η / (1 − 2η∥h∥^2)) (U^{-T}h)h^T   (5)

It is then easy to compute U_new^{-T}h, an O(d^2) operation needed in Eq. 4. The ensuing rank-one update of V in Eq. 4, thanks to the K-sparsity of y, is only O(Kd): only the K rows of V associated to non-zero elements in y are accessed and updated, instead of all D rows of W we had to modify in the naive update! Note that with the factored representation of W as V U, we only have W implicitly, so the W^T y terms that entered in the computation of L and ∇h in the previous paragraph need to be adapted slightly as ŷ = W^T y = U^T(V^T y), which becomes O(d^2 + Kd) rather than O(Kd) in computational complexity. But this doesn't change the overall O(d^2) complexity of these computations.

3.3 Bookkeeping: keeping an up-to-date Q and U^{-T}

We have already seen, in Eq. 5, how we can cheaply maintain an up-to-date U^{-T} following our update of U. Similarly, following our updates to U and V, we need to keep an up-to-date Q = W^T W, which is needed to efficiently compute the loss L (Eq. 1) and gradient ∇h (Eq. 2).
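Before turning to the Q update, the factored updates of Eqs. (3)-(5) are easy to verify numerically. The sketch below (toy sizes, made up for illustration) checks both the Sherman-Morrison update of U^{-T} and the claimed equivalence V_new U_new = W − 2η(Wh − y)h^T:

```python
import numpy as np

rng = np.random.default_rng(2)
D, d, eta = 2000, 10, 0.01
U = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # well-conditioned toy U
V = rng.normal(size=(D, d))
Uinv_T = np.linalg.inv(U).T
W = V @ U                                       # the implicit output matrix
h = rng.normal(size=d)
y = np.zeros(D); y[[3, 17, 500]] = [1.0, -2.0, 0.5]   # K-sparse target

W_naive = W - 2 * eta * np.outer(W @ h - y, h)  # naive O(Dd) reference update

# Factored updates, Eqs. (3)-(5)
U_new = U - 2 * eta * np.outer(U @ h, h)
Uinv_T_new = Uinv_T + (2 * eta / (1 - 2 * eta * (h @ h))) * np.outer(Uinv_T @ h, h)
V_new = V + 2 * eta * np.outer(y, Uinv_T_new @ h)   # only K rows of V really change

assert np.allclose(Uinv_T_new, np.linalg.inv(U_new).T)   # Sherman-Morrison, Eq. (5)
assert np.allclose(V_new @ U_new, W_naive)               # implicit update matches
```

Note that the denominator 1 − 2η∥h∥^2 must stay away from zero for Eq. (5) to be valid; the toy η is chosen small enough for that.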
We have shown that updates to U and V in Equations 3 and 4 are equivalent to implicitly updating W as W_new ← W − 2η(Wh − y)h^T, and this translates into the following update to Q = W^T W:

ẑ = Qh − U^T(V^T y)
Q_new = Q − 2η(hẑ^T + ẑh^T) + (4η^2 L)hh^T   (6)

The proof is straightforward, but due to space constraints we put it in the supplementary material. One can see that this last bookkeeping operation also has an O(d^2) computational complexity.

3.4 Putting it all together: detailed algorithm and expected benefits

We have seen that we can efficiently compute the cost L and the gradient with respect to h (to be later backpropagated further), as well as update U and V and perform the bookkeeping for U^{-T} and Q. Algorithm 1 describes the detailed algorithmic steps that we put together from the equations derived above. Having K ≪ d ≪ D, we see that the proposed algorithm requires O(d^2) operations, whereas the standard approach required O(Dd) operations. If we take K ≈ d, we may state more precisely that the proposed algorithm, for computing the loss and the gradient updates, will require roughly 12d^2 operations, whereas the standard approach required roughly 3Dd operations. So overall the proposed algorithmic change corresponds to a computational speedup by a factor of D/(4d). For D = 200 000 and d = 500 the expected speedup is thus 100. Note that the advantage is not only in computational complexity, but also in memory access. For each example, the standard approach needs to access and change all D × d elements of matrix W, whereas the proposed approach only accesses the much smaller number K × d of elements of V, as well as the three d × d matrices U, U^{-T}, and Q. So overall we have a substantially faster algorithm, which, while doing so implicitly, will nevertheless perform the exact same gradient update as the standard approach.
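This exactness is also easy to check numerically for Eqs. (1), (2) and (6): the sketch below computes L, ∇h and the Q bookkeeping from Q and the sparse y, then compares against the naive O(Dd) computations (toy sizes chosen only for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, eta = 5000, 20, 0.002
W = rng.normal(size=(D, d))
Q = W.T @ W                        # the maintained d x d matrix
h = rng.normal(size=d)

# K-sparse target y as (indices, values)
idx = np.array([7, 42, 1000, 4999])
val = np.array([1.0, 2.0, -0.5, 3.0])
y = np.zeros(D); y[idx] = val      # dense copy, for the naive reference only

Wty = W[idx].T @ val               # W^T y using only K rows, O(Kd)
z = Q @ h - Wty                    # z_hat
L = h @ z - h @ Wty + val @ val    # Eq. (1): h^T Q h - 2 h^T W^T y + y^T y
grad_h = 2.0 * z                   # Eq. (2)
Q_new = Q - 2 * eta * (np.outer(h, z) + np.outer(z, h)) \
          + (4 * eta**2 * L) * np.outer(h, h)          # Eq. (6)

# Naive O(Dd) references
o = W @ h
W_new = W - 2 * eta * np.outer(o - y, h)
assert np.isclose(L, ((o - y) ** 2).sum())
assert np.allclose(grad_h, 2 * W.T @ (o - y))
assert np.allclose(Q_new, W_new.T @ W_new)
```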
We want to emphasize here that our approach is completely different from simply chaining two linear layers U and V and performing ordinary gradient descent updates on them: this would result in the same prohibitive computational complexity as the standard approach, and such ordinary separate gradient updates to U and V would not be equivalent to the ordinary gradient update to W = V U.

Algorithm 1: Efficient computation of cost L, gradient on h, and update to parameters U and V

Step | Operation | Computational complexity | Number of multiply-adds
1 | ĥ = Qh | O(d^2) | d^2
2 | ŷ = U^T(V^T y) | O(Kd + d^2) | Kd + d^2
3 | ẑ = ĥ − ŷ | O(d) | d
4 | ∇h = 2ẑ | O(d) | d
5 | L = h^T ĥ − 2h^T ŷ + y^T y | O(2d + K) | 2d + K + 1
6 | U_new = U − 2η(Uh)h^T | O(d^2) | 2d^2 + d
7 | U_new^{-T} = U^{-T} + (2η / (1 − 2η∥h∥^2))(U^{-T}h)h^T | O(d^2) | 2d^2 + 2d + 3
8 | V_new = V + 2ηy(U_new^{-T}h)^T | O(d^2 + Kd) | d^2 + K + Kd
9 | Q_new = Q − 2η(hẑ^T + ẑh^T) + (4η^2 L)hh^T | O(d^2) | 3d^2 + 2d + 4
Altogether: O(d^2), provided K < d ≪ D; ≈ 12d^2 elementary operations

3.5 Controlling numerical stability and extension to the minibatch case

The update of U in Equation 3 may over time lead U to become ill-conditioned. To prevent this, we regularly (every 100 updates) monitor its condition number. If either the smallest or largest singular value moves outside an acceptable range (see footnote 2), we bring it back to 1 by doing an appropriate rank-1 update to V (which costs Dd operations, but is only done rarely). Our algorithm can also be straightforwardly extended to the minibatch case (the derivations are given in the supplementary material section) and yields the same theoretical speedup factor with respect to the standard naive approach. But one needs to be careful in order to keep the computation of U^{-T}h reasonably efficient: depending on the size of the minibatch m, it may be more efficient to solve the corresponding linear equation for each minibatch from scratch, rather than updating U^{-T} with the Woodbury equation (which generalizes the Sherman-Morrison formula for m > 1).
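For concreteness, the nine steps of Algorithm 1 can be collected into a single function. The following is a sketch of the single-example case (variable names and toy sizes are ours, not the paper's), verified against the naive update of the implicit W = V U:

```python
import numpy as np

def lst_step(U, Uinv_T, V, Q, h, y_idx, y_val, eta):
    """One O(d^2) training step following Algorithm 1 (W is implicit as V @ U)."""
    h_hat = Q @ h                                   # step 1
    y_hat = U.T @ (V[y_idx].T @ y_val)              # step 2: U^T (V^T y)
    z = h_hat - y_hat                               # step 3
    grad_h = 2.0 * z                                # step 4: gradient to backprop
    L = h @ h_hat - 2.0 * (h @ y_hat) + y_val @ y_val         # step 5
    U_new = U - 2 * eta * np.outer(U @ h, h)        # step 6
    Uinv_T_new = Uinv_T + (2 * eta / (1 - 2 * eta * (h @ h))) \
                 * np.outer(Uinv_T @ h, h)          # step 7 (Sherman-Morrison)
    V_new = V.copy()
    V_new[y_idx] += 2 * eta * np.outer(y_val, Uinv_T_new @ h)  # step 8: K rows only
    Q_new = Q - 2 * eta * (np.outer(h, z) + np.outer(z, h)) \
            + (4 * eta**2 * L) * np.outer(h, h)     # step 9
    return U_new, Uinv_T_new, V_new, Q_new, L, grad_h

# Check against the naive O(Dd) update on a toy problem
rng = np.random.default_rng(5)
D, d, eta = 3000, 12, 0.002
U = np.eye(d); V = rng.normal(size=(D, d))
W = V @ U; Q = W.T @ W
h = rng.normal(size=d)
y_idx = np.array([5, 77, 2049]); y_val = np.array([1.0, -1.0, 2.0])
y = np.zeros(D); y[y_idx] = y_val

U2, Ui2, V2, Q2, L, g = lst_step(U, np.linalg.inv(U).T, V, Q, h, y_idx, y_val, eta)
assert np.isclose(L, ((W @ h - y) ** 2).sum())
assert np.allclose(g, 2 * W.T @ (W @ h - y))
assert np.allclose(V2 @ U2, W - 2 * eta * np.outer(W @ h - y, h))
```

The copy of V is only for the correctness check; an actual implementation would update the K rows in place, as the paper's complexity accounting assumes.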
Footnote 2: More details on our numerical stabilization procedure can be found in the supplementary material.

3.6 Generalization to a broader class of loss functions

The approach that we just detailed for a linear output and squared error can be extended to a broader, though restricted, family of loss functions. We call it the spherical family of loss functions because it includes the spherical alternative to the softmax, thus named in [14]. Basically it contains any loss function that can be expressed as a function of only the o_c associated with non-zero y_c and of ∥o∥^2 = Σ_j o_j^2, the squared norm of the whole output vector, which we can compute cheaply, irrespective of D, as we did above (see footnote 3). This family does not include the standard softmax loss log(exp(o_c) / Σ_j exp(o_j)), but it does include the spherical softmax (see footnote 4): log((o_c^2 + ϵ) / Σ_j (o_j^2 + ϵ)). Due to space constraints we will not detail this extension here, only give a sketch of how it can be obtained. Deriving it may not appear obvious at first, but it is relatively straightforward once we realize that: a) the gain in computing the squared error loss comes from being able to very cheaply compute the sum of squared activations ∥o∥^2 (a scalar quantity), and will thus apply equally well to other losses that can be expressed based on that quantity (like the spherical softmax); b) generalizing our gradient update trick to such losses follows naturally from gradient backpropagation: the gradient is first backpropagated from the final loss to the scalar sum of squared activations, and from there on follows the same path and update procedure as for the squared error loss.

4 Experimental validation

We implemented both a CPU version using BLAS and a parallel GPU (CUDA) version using cuBLAS of the proposed algorithm (see footnote 5). We evaluated the GPU and CPU implementations by training word embeddings with simple neural language models, in which a probability map of the next word given its preceding n-gram is learned by a neural network.
We used an Nvidia Titan Black GPU and an i7-4820K @ 3.70GHz CPU, and ran experiments on the one billion word dataset [15], which is composed of 0.8 billion words belonging to a vocabulary of 0.8 million words. We evaluated the resulting word embeddings with the recently introduced Simlex-999 score [16], which measures the similarity between words. We also compared our approach to unfactorised versions and to a two-layer hierarchical softmax. Figures 2 and 3 (left) illustrate the practical speedup of our approach for the output layer only. Figure 3 (right) shows that our LST (Large Sparse Target) models are much faster to train than the softmax models and converge to only slightly lower Simlex-999 scores. Table 1 summarizes the speedups for the different output layers we tried, both on CPU and GPU. We also empirically verified that our proposed factored algorithm learns the same model weights (V U) as the corresponding naive unfactorised algorithm's W, as it theoretically should, and follows the same learning curves (as a function of number of iterations, not time!).

5 Conclusion and future work

We introduced a new algorithmic approach to efficiently compute the exact gradient updates for training deep networks with very large sparse targets. Remarkably, the complexity of the algorithm is independent of the target size, which allows tackling very large problems. Our CPU and GPU implementations yield speedups similar to the theoretical one and can thus be used in practical applications, which could be explored in further work. In particular, neural language models seem good candidates. But it remains unclear how using a loss function other than the usual softmax might affect the quality of the resulting word embeddings, so further research needs to be carried out in this direction. This includes empirically investigating natural extensions of the approach we described to other possible losses in the spherical family, such as the spherical softmax.
Acknowledgements: We wish to thank Yves Grandvalet for stimulating discussions, Çağlar Gülçehre for pointing us to [14], the developers of Theano [17, 18] and Blocks [19] for making these libraries available to build on, and NSERC and Ubisoft for their financial support.

Footnote 3: In addition, loss functions in this family are also allowed to depend on sum(o) = Σ_j o_j, which we can also compute cheaply without computing o, by tracking w̄ = Σ_j W_{:j}, whereby sum(o) = Σ_j W_{:j}^T h = w̄^T h.
Footnote 4: where c is the correct class label, and ϵ is a small positive constant that we added to the spherical interpretation in [14] for numerical stability: to guarantee we never divide by 0 nor take the log of 0.
Footnote 5: Open source code is available at: https://github.com/pascal20100/factored_output_layer

Table 1: Speedups with respect to the baseline naive model on CPU, for a minibatch of 128 and the whole vocabulary of D = 793471 words. This is a model with two hidden layers of d = 300 neurons.

Model | output layer only speedup | whole model speedup
cpu unfactorised (naive) | 1 | 1
gpu unfactorised (naive) | 6.8 | 4.7
gpu hierarchical softmax | 125.2 | 178.1
cpu factorised | 763.3 | 501
gpu factorised | 3257.3 | 1852.3

Figure 2: Timing of different algorithms. Time taken by forward and backward propagations in the output layer, including weight update, on a minibatch of size 128, for different sizes of vocabulary D, on both CPU and GPU. The input size d is fixed to 300. The timing of a two-layer hierarchical softmax efficient GPU implementation (h_softmax) is also provided for comparison. The right plot is in log-log scale.
As expected, the timings of the factorised versions are independent of vocabulary size.

Figure 3: Left: practical and theoretical speedups for different sizes of vocabulary D and fixed input size d = 300. The practical unfact/fact speedup is similar to the theoretical one. Right: evolution of the Simlex-999 score obtained with different models as a function of training time (CPU softmax times were extrapolated from fewer iterations). Softmax models are zero-hidden-layer models, while our large sparse target (LST) models have two hidden layers. These were the best architectures retained in both cases (surprisingly, the softmax models with hidden layers performed no better on this task). The extra non-linear layers in LST may help compensate for the lack of a softmax. LST models converge to slightly lower scores at a similar speed as the hierarchical softmax model, but significantly faster than softmax models.

References

[1] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems 13 (NIPS’00), pages 932–938, 2001.
[2] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
[3] Y. Dauphin, X. Glorot, and Y. Bengio. Large-scale learning of embeddings with reconstruction sampling. In Proceedings of the 28th International Conference on Machine Learning, ICML ’11, 2011.
[4] S. Jean, K. Cho, R. Memisevic, and Y. Bengio.
On using very large target vocabulary for neural machine translation. In ACL-IJCNLP’2015, 2015. arXiv:1412.2007.
[5] M. Gutmann and A. Hyvarinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS’10), 2010.
[6] A. Mnih and K. Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems 26, pages 2265–2273, 2013.
[7] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS’2013, pages 3111–3119, 2013.
[8] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pages 2321–2329, 2014.
[9] S. Vijayanarasimhan, J. Shlens, R. Monga, and J. Yagnik. Deep networks with large output spaces. arXiv:1412.7479, 2014.
[10] F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 246–252, 2005.
[11] D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
[12] Y. LeCun. Une procédure d’apprentissage pour Réseau à seuil assymétrique. In Cognitiva 85: A la Frontière de l’Intelligence Artificielle, des Sciences de la Connaissance et des Neurosciences, pages 599–604, 1985.
[13] Y. LeCun. Learning processes in an asymmetric threshold network. In Disordered Systems and Biological Organization, pages 233–240. Les Houches 1985, 1986.
[14] Y. Ollivier. Riemannian metrics for neural networks. CoRR, abs/1303.0818, 2013.
[15] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling.
In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pages 2635–2639, 2014.
[16] F. Hill, R. Reichart, and A. Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. CoRR, abs/1408.3456, 2014.
[17] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010. Oral presentation.
[18] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[19] B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio. Blocks and Fuel: Frameworks for deep learning. ArXiv e-prints, June 2015.
A Gaussian Process Model of Quasar Spectral Energy Distributions

Andrew Miller*, Albert Wu
School of Engineering and Applied Sciences, Harvard University
acm@seas.harvard.edu, awu@college.harvard.edu
Jeffrey Regier, Jon McAuliffe
Department of Statistics, University of California, Berkeley
{jeff, jon}@stat.berkeley.edu
Dustin Lang
McWilliams Center for Cosmology, Carnegie Mellon University
dstn@cmu.edu
Prabhat, David Schlegel
Lawrence Berkeley National Laboratory
{prabhat, djschlegel}@lbl.gov
Ryan Adams†
School of Engineering and Applied Sciences, Harvard University
rpa@seas.harvard.edu

Abstract

We propose a method for combining two sources of astronomical data, spectroscopy and photometry, that carry information about sources of light (e.g., stars, galaxies, and quasars) at extremely different spectral resolutions. Our model treats the spectral energy distribution (SED) of the radiation from a source as a latent variable that jointly explains both photometric and spectroscopic observations. We place a flexible, nonparametric prior over the SED of a light source that admits a physically interpretable decomposition and allows us to tractably perform inference. We use our model to predict the distribution of the redshift of a quasar from five-band (low spectral resolution) photometric data, the so-called “photo-z” problem. Our method shows that tools from machine learning and Bayesian statistics allow us to leverage multiple resolutions of information to make accurate predictions with well-characterized uncertainties.

1 Introduction

Enormous amounts of astronomical data are collected by a range of instruments at multiple spectral resolutions, providing information about billions of sources of light in the observable universe [1, 10]. Among these data are measurements of the spectral energy distributions (SEDs) of sources of light (e.g. stars, galaxies, and quasars).
The SED describes the distribution of energy radiated by a source over the spectrum of wavelengths or photon energy levels. SEDs are of interest because they convey information about a source's physical properties, including type, chemical composition, and redshift, which will be an estimand of interest in this work. The SED can be thought of as a latent function of which we can only obtain noisy measurements. Measurements of SEDs, however, are produced by instruments at widely varying spectral resolutions: some instruments measure many wavelengths simultaneously (spectroscopy), while others average over large swaths of the energy spectrum and report a low-dimensional summary (photometry). Spectroscopic data describe a source's SED in finer detail than broadband photometric data. For example, the Baryonic Oscillation Spectroscopic Survey [5] measures SED samples at over four thousand wavelengths between 3,500 and 10,500 Å. In contrast, the Sloan Digital Sky Survey (SDSS) [1] collects spectral information in only 5 broad spectral bins by using broadband filters (called u, g, r, i, and z), but at a much higher spatial resolution. Photometric preprocessing models can then aggregate pixel information into five band-specific fluxes and their uncertainties [17], reflecting the weighted average response over a large range of the wavelength spectrum. The two methods of spectral information collection are graphically compared in Figure 1.

* http://people.seas.harvard.edu/~acm/
† http://people.seas.harvard.edu/~rpa/

Figure 1: Left: example of a BOSS-measured quasar SED with SDSS band filters, S_b(λ), b ∈ {u, g, r, i, z}, overlaid. Right: the same quasar's photometrically measured band fluxes. Spectroscopic measurements include noisy samples at thousands of wavelengths, whereas SDSS photometric fluxes reflect the (weighted) response over a large range of wavelengths.
Despite carrying less spectral information, broadband photometry is more widely available and exists for a larger number of sources than spectroscopic measurements. This work develops a method for inferring physical properties of sources by jointly modeling spectroscopic and photometric data. One use of our model is to measure the redshift of quasars for which we only have photometric observations. Redshift is a phenomenon in which the observed SED of a source of light is stretched toward longer (redder) wavelengths. This effect is due to a combination of radial velocity with respect to the observer and the expansion of the universe (termed cosmological redshift) [8, 7]. Quasars, or quasi-stellar radio sources, are extremely distant and energetic sources of electromagnetic radiation that can exhibit high redshift [16]. Accurate estimates and uncertainties of redshift measurements from photometry have the potential to guide the use of higher spectral resolution instruments to study sources of interest. Furthermore, accurate photometric models can aid the automation of identifying source types and estimating physical characteristics of faintly observed sources in large photometric surveys [14]. To jointly describe both resolutions of data, we directly model a quasar's latent SED and the process by which it generates spectroscopic and photometric observations. Representing a quasar's SED as a latent random measure, we describe a Bayesian inference procedure to compute the marginal probability distribution of a quasar's redshift given observed photometric fluxes and their uncertainties. The following section provides relevant application and statistical background. Section 3 describes our probabilistic model of SEDs and broadband photometric measurements. Section 4 outlines our MCMC-based inference method for efficiently computing statistics of the posterior distribution.
Section 5 presents redshift and SED predictions from photometric measurements, among other model summaries, and a quantitative comparison between our method and two existing “photo-z” methods. We conclude with a discussion of directions for future work.

2 Background

The SEDs of most stars are roughly approximated by Planck's law for black body radiators and stellar atmosphere models [6]. Quasars, on the other hand, have complicated SEDs characterized by some salient features, such as the Lyman-α forest, which is the absorption of light at many wavelengths from neutral hydrogen gas between the earth and the quasar [19]. One of the most interesting properties of quasars (and galaxies) conveyed by the SED is redshift, which gives us insight into an object's distance and age. Redshift affects our observation of SEDs by “stretching” the wavelengths, λ ∈ Λ, of the quasar's rest frame SED, skewing toward longer (redder) wavelengths. Denoting the rest frame SED of a quasar n as a function f_n^{(rest)} : Λ → R_+, the effect of redshift with value z_n (typically between 0 and 7) on the observation-frame SED is described by the relationship

f_n^{(obs)}(λ) = f_n^{(rest)}(λ / (1 + z_n)).   (1)

Some observed quasar spectra and their “de-redshifted” rest frame spectra are depicted in Figure 2.

Figure 2: Spectroscopic measurements of multiple quasars at different redshifts, z. The upper graph depicts the sample spectrograph in the observation frame, intuitively thought of as “stretched” by a factor (1 + z). The lower figure depicts the “de-redshifted” (rest frame) version of the same quasar spectra. The two lines show the corresponding locations of the characteristic peak in each reference frame. Note that the x-axis has been changed to ease the visualization; the transformation is much more dramatic. The appearance of translation is due to missing data; we don't observe SED samples outside the range 3,500-10,500 Å.
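Eq. (1)'s stretching can be illustrated with a hypothetical rest-frame SED consisting of a single Gaussian emission line; the 1216 Å center loosely echoes the Lyman-α line mentioned above, and the line width is made up:

```python
import numpy as np

# Hypothetical rest-frame SED: one Gaussian emission line centered at 1216 A
def f_rest(lam):
    return np.exp(-0.5 * ((lam - 1216.0) / 20.0) ** 2)

def f_obs(lam, z):
    # Eq. (1): the observed SED is the rest-frame SED evaluated at lam / (1 + z)
    return f_rest(lam / (1.0 + z))

z = 2.5
lam_grid = np.linspace(3500.0, 10500.0, 7001)   # 1 A spacing, BOSS-like range
observed = f_obs(lam_grid, z)

# The line peak moves from 1216 A to (1 + z) * 1216 = 4256 A
peak = lam_grid[np.argmax(observed)]
assert abs(peak - (1 + z) * 1216.0) < 1.0
```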
3 Model

This section describes our probabilistic model of spectroscopic and photometric observations.

Spectroscopic flux model. The SED of a quasar is a non-negative function $f : \Lambda \to \mathbb{R}_+$, where $\Lambda$ denotes the range of wavelengths and $\mathbb{R}_+$ are the non-negative real numbers representing flux density. Our model specifies a quasar's rest frame SED as a latent random function. Quasar SEDs are highly structured, and we model this structure by imposing the assumption that each SED is a convex mixture of $K$ latent, positive basis functions. The model assumes there are a small number ($K$) of latent features or characteristics and that each quasar can be described by a short vector of mixing weights over these features. We place a normalized log-Gaussian process prior on each of these basis functions (described in the supplementary material). The generative procedure for quasar spectra begins with a shared basis
$$\beta_k(\cdot) \overset{iid}{\sim} \mathcal{GP}(0, K_\theta), \quad k = 1, \dots, K, \qquad B_k(\cdot) = \frac{\exp(\beta_k(\cdot))}{\int_\Lambda \exp(\beta_k(\lambda))\, d\lambda}, \quad (2)$$
where $K_\theta$ is the kernel and $B_k$ is the exponentiated and normalized version of $\beta_k$. For each quasar $n$,
$$w_n \sim p(w) \ \text{ s.t. } \textstyle\sum_k w_{n,k} = 1, \qquad m_n \sim p(m) \ \text{ s.t. } m_n > 0, \qquad z_n \sim p(z), \quad (3)$$
where $w_n$ mixes over the latent types, $m_n$ is the apparent brightness, $z_n$ is the quasar's redshift, and the distributions $p(w)$, $p(m)$, and $p(z)$ are priors to be specified later. As each positive SED basis function $B_k$ is normalized to integrate to one, and each quasar's weight vector $w_n$ also sums to one, the latent normalized SED is then constructed as
$$f_n^{(\mathrm{rest})}(\cdot) = \sum_k w_{n,k} B_k(\cdot) \quad (4)$$
and we define the unnormalized SED $\tilde{f}_n^{(\mathrm{rest})}(\cdot) \equiv m_n \cdot f_n^{(\mathrm{rest})}(\cdot)$. This parameterization admits the interpretation of $f_n^{(\mathrm{rest})}(\cdot)$ as a probability density scaled by $m_n$. This interpretation allows us to separate out the apparent brightness, which is a function of distance and overall luminosity, from the SED itself, which carries information pertinent to the estimand of interest, redshift.

Figure 3: Graphical model representation of the joint photometry and spectroscopy model. The left shaded variables represent spectroscopically measured samples and their variances. The right shaded variables represent photometrically measured fluxes and their variances. The upper box represents the latent basis, with GP prior parameters $\ell$ and $\nu$. Note that $N_{\mathrm{spec}} + N_{\mathrm{photo}}$ replicates of $w_n$, $m_n$, and $z_n$ are instantiated.

For each quasar with spectroscopic data, we observe noisy samples of the redshifted and scaled spectral energy distribution at a grid of $P$ wavelengths $\lambda \in \{\lambda_1, \dots, \lambda_P\}$. For quasar $n$, our observation frame samples are conditionally distributed as
$$x_{n,\lambda} \mid z_n, w_n, \{B_k\} \overset{ind}{\sim} \mathcal{N}\!\left(\tilde{f}_n^{(\mathrm{rest})}\!\left(\frac{\lambda}{1+z_n}\right), \sigma^2_{n,\lambda}\right), \quad (5)$$
where $\sigma^2_{n,\lambda}$ is the known measurement variance from the instruments used to make the observations. The BOSS spectra (and our rest frame basis) are stored in units of $10^{-17}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\text{Å}^{-1}$.

Photometric flux model. Photometric data summarize the amount of energy observed over a large swath of the wavelength spectrum. Roughly, a photometric flux measures (proportionally) the number of photons recorded by the instrument over the duration of an exposure, filtered by a band-specific sensitivity curve. We express flux in nanomaggies [15]. Photometric fluxes and measurement error derived from broadband imagery have been computed directly from pixels [17]. For each quasar $n$, SDSS photometric data are measured in five bands, $b \in \{u, g, r, i, z\}$, yielding a vector of five flux values and their variances, $y_n$ and $\tau^2_{n,b}$. Each band, $b$, measures photon observations at each wavelength in proportion to a known filter sensitivity, $S_b(\lambda)$. The filter sensitivities for the SDSS ugriz bands are depicted in Figure 1, with an example observation frame quasar SED overlaid.
The actual measured fluxes can be computed by integrating the full object's spectrum, $m_n \cdot f_n^{(\mathrm{obs})}(\lambda)$, against the filters. For a band $b \in \{u, g, r, i, z\}$,
$$\mu_b(f_n^{(\mathrm{rest})}, z_n) = \int f_n^{(\mathrm{obs})}(\lambda)\, S_b(\lambda)\, C(\lambda)\, d\lambda, \quad (6)$$
where $C(\lambda)$ is a conversion factor to go from the units of $f_n(\lambda)$ to nanomaggies (details of this conversion are available in the supplementary material). The function $\mu_b$ takes in a rest frame SED and a redshift ($z$), and maps them to the observed $b$-band specific flux. The results of this projection onto SDSS bands are modeled as independent Gaussian random variables with known variance
$$y_{n,b} \mid f_n^{(\mathrm{rest})}, z_n \overset{ind}{\sim} \mathcal{N}\!\left(\mu_b(f_n^{(\mathrm{rest})}, z_n), \tau^2_{n,b}\right). \quad (7)$$
Conditioned on the basis, $B = \{B_k\}$, we can represent $f_n^{(\mathrm{rest})}$ with a low-dimensional vector. Note that $f_n^{(\mathrm{rest})}$ is a function of $w_n$, $z_n$, $m_n$, and $B$ (see Equation 4), so we can think of $\mu_b$ as a function of $w_n$, $z_n$, $m_n$, and $B$. We overload notation and re-write the conditional likelihood of photometric observations as
$$y_{n,b} \mid w_n, z_n, m_n, B \sim \mathcal{N}\!\left(\mu_b(w_n, z_n, m_n, B), \tau^2_{n,b}\right). \quad (8)$$
Intuitively, what gives us statistical traction in inferring the posterior distribution over $z_n$ is the structure learned in the latent basis, $B$, and weights $w$, i.e., the features that correspond to distinguishing bumps and dips in the SED.

Note on priors. For photometric weight and redshift inference, we use a flat prior on $z_n \in [0, 8]$, and empirically derived priors for $m_n$ and $w_n$ from the sample of spectroscopically measured sources. The choice of priors is described in the supplementary material.

4 Inference

Basis estimation. For computational tractability, we first compute a maximum a posteriori (MAP) estimate of the basis, $B^{\mathrm{map}}$, to condition on.
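Before turning to inference, a sketch of how Equations (2)-(4) and (6) compose numerically: build a normalized SED from basis functions and mixture weights, then project it through a filter curve on a discrete wavelength grid. The names, the toy box filter, and the Riemann-sum quadrature are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

# Rest-frame wavelength grid (Angstroms); uniform spacing simplifies sums.
lam = np.linspace(500.0, 4000.0, 2000)
dlam = lam[1] - lam[0]
rng = np.random.default_rng(0)
K = 4

# Toy stand-in for Equation (2): positive basis, each integrating to one.
B = np.exp(rng.normal(size=(K, lam.size)))
B /= B.sum(axis=1, keepdims=True) * dlam

# Equations (3)-(4): simplex weights and brightness give the scaled SED.
w, m = rng.dirichlet(np.ones(K)), 3.0
f_rest = w @ B                 # integrates to one
f_tilde = m * f_rest           # integrates to m

def band_flux(lam_rest, f_scaled, z, S_b, C=lambda l: 1.0):
    """Equation (6) by change of variables: integrate the redshifted SED
    against the filter S_b over observation-frame wavelengths (Riemann sum)."""
    lam_obs = lam_rest * (1.0 + z)       # stretched grid, spacing (1+z)*dlam
    dlam_obs = lam_obs[1] - lam_obs[0]
    return np.sum(f_scaled * S_b(lam_obs) * C(lam_obs)) * dlam_obs

# Toy box filter between 4,000 and 6,000 Angstroms.
S_box = lambda l: ((l > 4000.0) & (l < 6000.0)).astype(float)
flux = band_flux(lam, f_tilde, z=1.5, S_b=S_box)
```

The change of variables uses $f^{(\mathrm{obs})}((1+z)\lambda) = f^{(\mathrm{rest})}(\lambda)$, so the rest-frame samples can be reused directly on the stretched grid.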
Using the spectroscopic data, $\{x_{n,\lambda}, \sigma^2_{n,\lambda}, z_n\}$, we compute a discretized MAP estimate of $\{B_k\}$ by directly optimizing the unnormalized (log) posterior implied by the likelihood in Equation 5, the GP prior over $B$, and diffuse priors over $w_n$ and $m_n$,
$$p\left(\{w_n, m_n\}, \{B_k\} \mid \{x_{n,\lambda}, \sigma^2_{n,\lambda}, z_n\}\right) \;\propto\; \prod_{n=1}^{N} p(x_{n,\lambda} \mid z_n, w_n, m_n, \{B_k\})\, p(\{B_k\})\, p(w_n)\, p(m_n). \quad (9)$$
We use gradient descent with momentum and L-BFGS [12] directly on the parameters $\beta_k$, $w_{n,k}$, and $\log(m_n)$ for the $N_{\mathrm{spec}}$ spectroscopically measured quasars. Gradients were automatically computed using autograd [9]. Following [18], we first resample the observed spectra into a common rest frame grid, $\lambda_0 = (\lambda_{0,1}, \dots, \lambda_{0,V})$, easing computation of the likelihood. We note that although our model places a full distribution over $B_k$, efficiently integrating out those parameters is left for future work.

Sampling $w_n$, $m_n$, and $z_n$. The Bayesian "photo-z" task requires that we compute posterior marginal distributions of $z$, integrating out $w$ and $m$. To compute these distributions, we construct a Markov chain over the state space including $z$, $w$, and $m$ that leaves the target posterior distribution invariant. We treat the inference problem for each photometrically measured quasar, $y_n$, independently. Conditioned on a basis $B_k$, $k = 1, \dots, K$, our goal is to draw posterior samples of $w_n$, $m_n$, and $z_n$ for each $n$. The unnormalized posterior can be expressed as
$$p(w_n, m_n, z_n \mid y_n, B) \;\propto\; p(y_n \mid w_n, m_n, z_n, B)\, p(w_n, m_n, z_n), \quad (10)$$
where the likelihood term is defined in Equation 8. Note that due to analytic intractability, we numerically integrate expressions involving $\int_\Lambda f_n^{(\mathrm{obs})}(\lambda)\, d\lambda$ and $S_b(\lambda)$. Because the observation $y_n$ can often be well explained by various redshifts and weight settings, the resulting marginal posterior, $p(z_n \mid X, y_n, B)$, is often multi-modal, with regions of near zero probability between modes. Intuitively, this is due to the information loss in the SED-to-photometric flux integration step.
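In log space, the unnormalized posterior of Equation (10) is just the Gaussian log-likelihood of Equation (8) plus a log prior. A minimal sketch, assuming the model fluxes $\mu_b$ have already been computed elsewhere and with hypothetical names:

```python
import numpy as np

def log_post_unnorm(y, tau2, mu, log_prior):
    """Unnormalized log posterior of Equation (10).

    y, tau2: observed five-band fluxes and their known variances;
    mu: model fluxes mu_b(w_n, z_n, m_n, B) for the same bands;
    log_prior: log p(w_n, m_n, z_n) evaluated at the current state.
    """
    y, tau2, mu = (np.asarray(a, dtype=float) for a in (y, tau2, mu))
    # Independent Gaussian likelihood of Equation (8), summed over bands.
    log_lik = -0.5 * np.sum((y - mu) ** 2 / tau2 + np.log(2.0 * np.pi * tau2))
    return log_lik + log_prior

lp = log_post_unnorm([1.0, 2.0], [1.0, 1.0], [1.0, 2.0], 0.0)
```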
This multi-modal property is problematic for many standard MCMC techniques. Single-chain MCMC methods have to jump between modes or travel through a region of near-zero probability, resulting in slow mixing. To combat this effect, we use parallel tempering [4], a method that is well suited to constructing Markov chains on multi-modal distributions. Parallel tempering instantiates $C$ independent chains, each sampling from the target distribution raised to an inverse temperature. Given a target distribution, $\pi(x)$, the constructed chains sample $\pi_c(x) \propto \pi(x)^{1/T_c}$, where $T_c$ controls how "hot" (i.e., how close to uniform) each chain is. At each iteration, swaps between chains are proposed and accepted with a standard Metropolis-Hastings acceptance probability
$$\Pr(\text{accept swap } c, c') = \frac{\pi_c(x_{c'})\, \pi_{c'}(x_c)}{\pi_c(x_c)\, \pi_{c'}(x_{c'})}. \quad (11)$$
Within each chain, we use component-wise slice sampling [11] to generate samples that leave each chain's distribution invariant. Slice sampling is a (relatively) tuning-free MCMC method, a convenient property when sampling from thousands of independent posteriors. We found parallel tempering to be essential for convincing posterior simulations. MCMC diagnostics and comparisons to single-chain samplers are available in the supplemental material.

5 Experiments and Results

We conduct three experiments to test our model, where each experiment measures redshift predictive accuracy for a different train/test split of spectroscopically measured quasars from the DR10QSO dataset [13] with confirmed redshifts in the range $z \in (0.01, 5.85)$. Our experiments split train/test in the following ways: (i) randomly, (ii) by r-band fluxes, (iii) by redshift values. In split (ii), we train on the brightest 90% of quasars and test on a subset of the remaining. Split (iii) takes the lowest 85% of quasars as training data and a subset of the brightest 15% as test cases. Splits (ii) and (iii) are intended to test the method's robustness to different training and testing distributions, mimicking the discovery of fainter and farther sources. For each split, we find a MAP estimate of the basis, $B_1, \dots, B_K$, and weights, $w_n$, to use as a prior for photometric inference. For computational purposes, we limit our training sample to a random subsample of 2,000 quasars. The following sections outline the resulting model fit and inferred SEDs and redshifts.

Figure 4: Top: MAP estimate of the latent bases $B = \{B_k\}_{k=1}^K$. Note the different ranges of the x-axis (wavelength). Each basis function distributes its mass across different regions of the spectrum to explain different salient features of quasar spectra in the rest frame. Bottom: model reconstruction of a training-sample SED.

Basis validation. We examined multiple choices of $K$ using out-of-sample likelihood on a validation set. In the following experiments we set $K = 4$, which balances generalizability and computational tradeoffs. Discussion of this validation is provided in the supplementary material.

SED Basis. We depict a MAP estimate of $B_1, \dots, B_K$ in Figure 4. Our basis decomposition enjoys the benefit of physical interpretability due to our density-estimate formulation of the problem. Basis $B_4$ places mass on the Lyman-α peak around 1,216 Å, allowing the model to capture the co-occurrence of more peaked SEDs with a bump around 1,550 Å. Basis $B_1$ captures the H-α emission line at around 6,500 Å. Because of the flexible nonparametric priors on $B_k$, our model is able to automatically learn these features from data. The positivity of the basis and weights distinguishes our model from PCA-based methods, which sacrifice physical interpretability.

Photometric measurements. For each test quasar, we construct an 8-chain parallel tempering sampler, run it for 8,000 iterations, and discard the first 4,000 samples as burn-in. Given posterior samples of $z_n$, we take the posterior mean as a point estimate.
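The swap move of Equation (11) that drives the 8-chain sampler can be sketched in a few lines, computed in log space for numerical stability; the names and the toy Gaussian targets are illustrative assumptions:

```python
import numpy as np

def propose_swap(log_pi_c, log_pi_cp, x_c, x_cp, rng):
    """Propose exchanging the states of two tempered chains (Equation 11).

    Accepts with probability min(1, pi_c(x_c') pi_c'(x_c) /
    (pi_c(x_c) pi_c'(x_c'))), evaluated via the tempered log densities.
    """
    log_ratio = (log_pi_c(x_cp) + log_pi_cp(x_c)
                 - log_pi_c(x_c) - log_pi_cp(x_cp))
    if np.log(rng.uniform()) < log_ratio:
        return x_cp, x_c        # swap accepted: chains exchange states
    return x_c, x_cp            # swap rejected

# Two chains targeting N(0,1) at temperatures 1 and 4 (log density scaled
# by 1/T_c); a swap that moves the cold chain toward the mode is accepted.
cold = lambda x: -0.5 * x ** 2
hot = lambda x: -0.5 * x ** 2 / 4.0
new_cold, new_hot = propose_swap(cold, hot, 2.0, 0.0, np.random.default_rng(1))
```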
Figure 5 compares the posterior mean to spectroscopic measurements (for three different data-split experiments), where the gray lines denote posterior sample quantiles. In general there is a strong correspondence between spectroscopically measured redshift and our posterior estimate. In cases where the posterior mean is off, our distribution often covers the spectroscopically confirmed value with probability mass. This is clear upon inspection of posterior marginal distributions that exhibit extreme multi-modal behavior. To combat this multi-modality, it is necessary to inject the model with more information to eliminate plausible hypotheses; this information could come from another measurement (e.g., a new photometric band), or from structured prior knowledge over the relationship between $z_n$, $w_n$, and $m_n$. Our method simply fits a mixture of Gaussians to the spectroscopically measured $(w_n, m_n)$ sample to formulate a prior distribution; dependencies between $z_n$, $w_n$, and $m_n$, similar to the XDQSOz technique, will be incorporated in future work.

Figure 5: Comparison of spectroscopically (x-axis) and photometrically (y-axis) measured redshifts from the SED model for three different data splits. The left reflects a random selection of 4,000 quasars from the DR10QSO dataset. The right graph reflects a selection of 4,000 test quasars from the upper 15% ($z_{\mathrm{cutoff}} \approx 2.7$), where all training was done on lower redshifts. The red estimates are posterior means.

Figure 6: Left: inferred SEDs from photometric data. The black line is a smoothed approximation to the "true" SED using information from the full spectral data. The red line is a sample from the posterior, $f_n^{(\mathrm{obs})}(\lambda) \mid X, y_n, B$, which imputes the entire SED from only five flux measurements. Note that the bottom sample is from the left mode, which under-predicts redshift. Right: corresponding posterior predictive distributions, $p(z_n \mid X, y_n, B)$. The black line marks the spectroscopically confirmed redshift; the red line marks the posterior mean. Note the difference in scale of the x-axis.

5.1 Comparisons

We compare the performance of our redshift estimator with two recent photometric redshift estimators, XDQSOz [2] and a neural network [3]. The method in [2] is a conditional density estimator that discretizes the range of one flux band (the i-band) and fits a mixture of Gaussians to the joint distribution over the remaining fluxes and redshifts. One disadvantage of this approach is that there is no physical significance to the mixture of Gaussians, and no model of the latent SED. Furthermore, the original method trains and tests the model on a pre-specified range of i-magnitudes, which is problematic when predicting redshifts on much brighter or dimmer stars. The regression approach from [3] employs a neural network with two hidden layers, with the SDSS fluxes as inputs. More features (e.g., more photometric bands) can be incorporated into all models, but we limit our experiments to the five SDSS bands for the sake of comparison. Further detail on these two methods and a broader review of "photo-z" approaches are available in the supplementary material.

Average error and test distribution. We compute mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) to measure predictive performance. Table 1 compares prediction errors for the three different approaches (XD, NN, Spec). Our experiments show that accurate redshift measurements are attainable even when the distribution of the training set is different from the test set, by directly modeling the SED itself. Our method dramatically outperforms [2] and [3] in split (iii), particularly for very high redshift fluxes. We also note that our training set is derived from only 2,000 examples, whereas the training sets for XDQSOz and the neural network were ≈80,000 and ≈50,000 quasars, respectively.
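The three error metrics used here can be computed as follows; this is a sketch assuming the standard definitions, with MAPE taken relative to the spectroscopic redshift:

```python
import numpy as np

def photo_z_errors(z_spec, z_photo):
    """MAE, MAPE, and RMSE between spectroscopic and photometric redshifts."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_photo = np.asarray(z_photo, dtype=float)
    err = z_photo - z_spec
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err) / z_spec)   # relative to the true redshift
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mape, rmse

mae, mape, rmse = photo_z_errors([1.0, 2.0], [2.0, 2.0])
```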
This small-training-set shortcoming can be overcome with more sophisticated inference techniques for the non-negative basis. Despite this, the SED-based predictions are comparable. Additionally, because we are directly modeling the latent SED, our method admits a posterior estimate of the entire SED. Figure 6 displays posterior SED samples and their corresponding redshift marginals for test-set quasars inferred from only SDSS photometric measurements.

                          MAE                    MAPE                   RMSE
split                 XD     NN     Spec     XD     NN     Spec     XD     NN     Spec
random (all)        0.359  0.773  0.485   0.293  0.533  0.430   0.519  0.974  0.808
flux (all)          0.308  0.483  0.497   0.188  0.283  0.339   0.461  0.660  0.886
redshift (all)      0.841  0.736  0.619   0.237  0.214  0.183   1.189  0.923  0.831
random (z > 2.35)   0.247  0.530  0.255   0.091  0.183  0.092   0.347  0.673  0.421
flux (z > 2.33)     0.292  0.399  0.326   0.108  0.143  0.124   0.421  0.550  0.531
redshift (z > 3.20) 1.327  1.149  0.806   0.357  0.317  0.226   1.623  1.306  0.997
random (z > 3.11)   0.171  0.418  0.289   0.050  0.117  0.082   0.278  0.540  0.529
flux (z > 2.86)     0.373  0.493  0.334   0.112  0.144  0.103   0.606  0.693  0.643
redshift (z > 3.80) 2.389  2.348  0.829   0.582  0.569  0.198   2.504  2.405  1.108

Table 1: Prediction error for three train-test splits, (i) random, (ii) flux-based, (iii) redshift-based, corresponding to XDQSOz [2] (XD), the neural network approach [3] (NN), and our SED-based model (Spec). The middle and lowest sections correspond to test redshifts in the upper 50% and 10%, respectively. The XDQSOz and NN models were trained on (roughly) 80,000 and 50,000 example quasars, respectively, while the Spec models were trained on 2,000.

6 Discussion

We have presented a generative model of two sources of information at very different spectral resolutions to form an estimate of the latent spectral energy distribution of quasars. We also described an efficient MCMC-based inference algorithm for computing posterior statistics given photometric observations.
Our model accurately predicts and characterizes uncertainty about redshifts from only photometric observations and a small number of separate spectroscopic examples. Moreover, we showed that we can make reasonable estimates of the unobserved SED itself, from which we can make inferences about other physical properties informed by the full SED. We see multiple avenues of future work. Firstly, we can extend the model of SEDs to incorporate more expert knowledge. One such augmentation would include a fixed collection of features, curated by an expert, corresponding to physical properties already known about a class of sources. Furthermore, we can also extend our model to directly incorporate photometric pixel observations, as opposed to preprocessed flux measurements. Secondly, we note that our method is more computationally burdensome than XDQSOz and the neural network approach. Another avenue of future work is to find accurate approximations of these posterior distributions that are cheaper to compute. Lastly, we can extend our methodology to galaxies, whose SEDs can be quite complicated. Galaxy observations have spatial extent, complicating their SEDs. The combination of SED and spatial appearance modeling and computationally efficient inference procedures is a promising route toward the automatic characterization of millions of sources from the enormous amounts of data available in massive photometric surveys.

Acknowledgments

The authors would like to thank Matthew Hoffman and members of the HIPS lab for helpful discussions. This work is supported by the Applied Mathematics Program within the Office of Science Advanced Scientific Computing Research of the U.S. Department of Energy under contract No. DE-AC02-05CH11231. This work used resources of the National Energy Research Scientific Computing Center (NERSC). We would like to thank Tina Butler, Tina Declerck and Yushu Yao for their assistance.
References

[1] Shadab Alam, Franco D Albareti, Carlos Allende Prieto, F Anders, Scott F Anderson, Brett H Andrews, Eric Armengaud, Éric Aubourg, Stephen Bailey, Julian E Bautista, et al. The eleventh and twelfth data releases of the Sloan digital sky survey: Final data from SDSS-III. arXiv preprint arXiv:1501.00963, 2015.

[2] Jo Bovy, Adam D Myers, Joseph F Hennawi, David W Hogg, Richard G McMahon, David Schiminovich, Erin S Sheldon, Jon Brinkmann, Donald P Schneider, and Benjamin A Weaver. Photometric redshifts and quasar probabilities from a single, data-driven generative model. The Astrophysical Journal, 749(1):41, 2012.

[3] M Brescia, S Cavuoti, R D'Abrusco, G Longo, and A Mercurio. Photometric redshifts for quasars in multi-band surveys. The Astrophysical Journal, 772(2):140, 2013.

[4] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.

[5] Kyle S Dawson, David J Schlegel, Christopher P Ahn, Scott F Anderson, Éric Aubourg, Stephen Bailey, Robert H Barkhouser, Julian E Bautista, Alessandra Beifiori, Andreas A Berlind, et al. The baryon oscillation spectroscopic survey of SDSS-III. The Astronomical Journal, 145(1):10, 2013.

[6] RO Gray, PW Graham, and SR Hoyt. The physical basis of luminosity classification in the late A-, F-, and early G-type stars. II. Basic parameters of program stars and the role of microturbulence. The Astronomical Journal, 121(4):2159, 2001.

[7] Edward Harrison. The redshift-distance and velocity-distance laws. The Astrophysical Journal, 403:28–31, 1993.

[8] David W Hogg. Distance measures in cosmology. arXiv preprint astro-ph/9905116, 1999.

[9] Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Autograd: Reverse-mode differentiation of native Python. ICML Workshop on Automatic Machine Learning, 2015.
[10] D Christopher Martin, James Fanson, David Schiminovich, Patrick Morrissey, Peter G Friedman, Tom A Barlow, Tim Conrow, Robert Grange, Patrick N Jelinsky, Bruno Millard, et al. The galaxy evolution explorer: A space ultraviolet survey mission. The Astrophysical Journal Letters, 619(1), 2005.

[11] Radford M Neal. Slice sampling. Annals of Statistics, pages 705–741, 2003.

[12] Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782, 1980.

[13] Isabelle Pâris, Patrick Petitjean, Éric Aubourg, Nicholas P Ross, Adam D Myers, Alina Streblyanska, Stephen Bailey, Patrick B Hall, Michael A Strauss, Scott F Anderson, et al. The Sloan digital sky survey quasar catalog: tenth data release. Astronomy & Astrophysics, 563:A54, 2014.

[14] Jeffrey Regier, Andrew Miller, Jon McAuliffe, Ryan Adams, Matt Hoffman, Dustin Lang, David Schlegel, and Prabhat. Celeste: Variational inference for a generative model of astronomical images. In Proceedings of The 32nd International Conference on Machine Learning, 2015.

[15] SDSS-III. Measures of flux and magnitude. 2013. https://www.sdss3.org/dr8/algorithms/magnitudes.php.

[16] Joseph Silk and Martin J Rees. Quasars and galaxy formation. Astronomy and Astrophysics, 1998.

[17] Chris Stoughton, Robert H Lupton, Mariangela Bernardi, Michael R Blanton, Scott Burles, Francisco J Castander, AJ Connolly, Daniel J Eisenstein, Joshua A Frieman, GS Hennessy, et al. Sloan digital sky survey: early data release. The Astronomical Journal, 123(1):485, 2002.

[18] Jakob Walcher, Brent Groves, Tamás Budavári, and Daniel Dale. Fitting the integrated spectral energy distributions of galaxies. Astrophysics and Space Science, 331(1):1–51, 2011.

[19] David H Weinberg, Romeel Davé, Neal Katz, and Juna A Kollmeier. The Lyman-alpha forest as a cosmological tool. Proceedings of the 13th Annual Astrophysics Conference in Maryland, 666, 2003.
Fast Convergence of Regularized Learning in Games

Vasilis Syrgkanis, Microsoft Research, New York, NY. vasy@microsoft.com
Alekh Agarwal, Microsoft Research, New York, NY. alekha@microsoft.com
Haipeng Luo, Princeton University, Princeton, NJ. haipengl@cs.princeton.edu
Robert E. Schapire, Microsoft Research, New York, NY. schapire@microsoft.com

Abstract

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games. When each player in a game uses an algorithm from our class, their individual regret decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at $O(T^{-1})$, an improvement upon the worst-case $O(T^{-1/2})$ rates. We show a black-box reduction for any algorithm in the class to achieve $\tilde{O}(T^{-1/2})$ rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan [17] and Daskalakis et al. [4], who only analyzed two-player zero-sum games for specific algorithms.

1 Introduction

What happens when players in a game interact with one another, all of them acting independently and selfishly to maximize their own utilities? If they are smart, we intuitively expect their utilities, both individually and as a group, to grow, perhaps even to approach the best possible. We also expect the dynamics of their behavior to eventually reach some kind of equilibrium. Understanding these dynamics is central to game theory as well as its various application areas, including economics, network routing, auction design, and evolutionary biology. It is natural in this setting for the players to each make use of a no-regret learning algorithm for making their decisions, an approach known as decentralized no-regret dynamics.
No-regret algorithms are a strong match for playing games because their regret bounds hold even in adversarial environments. As a benefit, these bounds ensure that each player's utility approaches optimality. When played against one another, it can also be shown that the sum of utilities approaches an approximate optimum [2, 18], and the player strategies converge to an equilibrium under appropriate conditions [6, 1, 8], at rates governed by the regret bounds. Well-known families of no-regret algorithms include multiplicative weights [13, 7], Mirror Descent [14], and Follow the Regularized/Perturbed Leader [12]. (See [3, 19] for excellent overviews.) For all of these, the average regret vanishes at the worst-case rate of $O(1/\sqrt{T})$, which is unimprovable in fully adversarial scenarios. However, the players in our setting are facing other similar, predictable no-regret learning algorithms, a chink that hints at the possibility of improved convergence rates for such dynamics. This was first observed and exploited by Daskalakis et al. [4]. For two-player zero-sum games, they developed a decentralized variant of Nesterov's accelerated saddle point algorithm [15] and showed that each player's average regret converges at the remarkable rate of $O(1/T)$. Although the resulting dynamics are somewhat unnatural, in later work, Rakhlin and Sridharan [17] showed surprisingly that the same convergence rate holds for a simple variant of Mirror Descent with the seemingly minor modification that the last utility observation is counted twice. Although major steps forward, both these works are limited to two-player zero-sum games, the very simplest case. As such, they do not cover many practically important settings, such as auctions or routing games, which are decidedly not zero-sum, and which involve many independent actors.
In this paper, we vastly generalize these techniques to the practically important but far more challenging case of arbitrary multi-player normal-form games, giving natural no-regret dynamics whose convergence rates are much faster than previously possible for this general setting.

Contributions. We show that the average welfare of the game, that is, the sum of player utilities, converges to approximately optimal welfare at the rate $O(1/T)$, rather than the previously known rate of $O(1/\sqrt{T})$. Concretely, we show a natural class of regularized no-regret algorithms with recency bias that achieve welfare at least $(\lambda/(1+\mu))\,\mathrm{OPT} - O(1/T)$, where $\lambda$ and $\mu$ are parameters in a smoothness condition on the game introduced by Roughgarden [18]. For the same class of algorithms, we show that each individual player's average regret converges to zero at the rate $O(T^{-3/4})$. Thus, our results entail an algorithm for computing coarse correlated equilibria in a decentralized manner with significantly faster convergence than existing methods. We additionally give a black-box reduction that preserves the fast rates in favorable environments, while robustly maintaining $\tilde{O}(1/\sqrt{T})$ regret against any opponent in the worst case.

Even for two-person zero-sum games, our results for general games expose a hidden generality and modularity underlying the previous results [4, 17]. First, our analysis identifies stability and recency bias as key structural ingredients of an algorithm with fast rates. This covers the Optimistic Mirror Descent of Rakhlin and Sridharan [17] as an example, but also applies to optimistic variants of Follow the Regularized Leader (FTRL), including dependence on arbitrary weighted windows in the history as opposed to just the utility from the last round. Recency bias is a behavioral pattern commonly observed in game-theoretic environments [9]; as such, our results can be viewed as a partial theoretical justification.
Second, previous approaches [4, 17] achieved both faster convergence against similar algorithms and $\tilde{O}(1/\sqrt{T})$ regret rates against adversaries via ad-hoc modifications of specific algorithms. We give a black-box modification which is not algorithm specific and works for all these optimistic algorithms. Finally, we simulate a 4-bidder simultaneous auction game, and compare our optimistic algorithms against Hedge [7] in terms of utilities, regrets, and convergence to equilibria.

2 Repeated Game Model and Dynamics

Consider a static game $G$ among a set $N$ of $n$ players. Each player $i$ has a strategy space $S_i$ and a utility function $u_i : S_1 \times \dots \times S_n \to [0, 1]$ that maps a strategy profile $s = (s_1, \dots, s_n)$ to a utility $u_i(s)$. We assume that the strategy space of each player is finite and has cardinality $d$, i.e., $|S_i| = d$. We denote by $w = (w_1, \dots, w_n)$ a profile of mixed strategies, where $w_i \in \Delta(S_i)$ and $w_{i,x}$ is the probability of strategy $x \in S_i$. Finally, let $U_i(w) = \mathbb{E}_{s \sim w}[u_i(s)]$ be the expected utility of player $i$.

We consider the setting where the game $G$ is played repeatedly for $T$ time steps. At each time step $t$ each player $i$ picks a mixed strategy $w_i^t \in \Delta(S_i)$. At the end of the iteration each player $i$ observes the expected utility he would have received had he played any possible strategy $x \in S_i$. More formally, let $u_{i,x}^t = \mathbb{E}_{s_{-i} \sim w_{-i}^t}[u_i(x, s_{-i})]$, where $s_{-i}$ is the set of strategies of all but the $i$th player, and let $u_i^t = (u_{i,x}^t)_{x \in S_i}$. At the end of each iteration each player $i$ observes $u_i^t$. Observe that the expected utility of a player at iteration $t$ is simply the inner product $\langle w_i^t, u_i^t \rangle$.

No-regret dynamics. We assume that the players each decide their strategy $w_i^t$ based on a vanishing-regret algorithm. Formally, for each player $i$, the regret after $T$ time steps is equal to the maximum gain he could have achieved by switching to any other fixed strategy:
$$r_i(T) = \sup_{w_i^* \in \Delta(S_i)} \sum_{t=1}^{T} \left\langle w_i^* - w_i^t,\, u_i^t \right\rangle.$$
The algorithm has vanishing regret if $r_i(T) = o(T)$.

Approximate Efficiency of No-Regret Dynamics. We are interested in analyzing the average welfare of such vanishing-regret sequences. For a given strategy profile $s$ the social welfare is defined as the sum of the player utilities: $W(s) = \sum_{i \in N} u_i(s)$. We overload notation to denote $W(w) = \mathbb{E}_{s \sim w}[W(s)]$. We want to lower bound how far the average welfare of the sequence is with respect to the optimal welfare of the static game: $\mathrm{OPT} = \max_{s \in S_1 \times \dots \times S_n} W(s)$. This is the optimal welfare achievable in the absence of player incentives, i.e., if a central coordinator could dictate each player's strategy. We next define a class of games first identified by Roughgarden [18] on which we can approximate the optimal welfare using decoupled no-regret dynamics.

Definition 1 (Smooth game [18]). A game is $(\lambda, \mu)$-smooth if there exists a strategy profile $s^*$ such that for any strategy profile $s$: $\sum_{i \in N} u_i(s_i^*, s_{-i}) \geq \lambda\, \mathrm{OPT} - \mu\, W(s)$.

In words, any player using his optimal strategy continues to do well irrespective of other players' strategies. This condition directly implies near-optimality of no-regret dynamics, as we show below.

Proposition 2. In a $(\lambda, \mu)$-smooth game, if each player $i$ suffers regret at most $r_i(T)$, then:
$$\frac{1}{T} \sum_{t=1}^{T} W(w^t) \;\geq\; \frac{\lambda}{1+\mu}\, \mathrm{OPT} - \frac{1}{1+\mu} \cdot \frac{1}{T} \sum_{i \in N} r_i(T) \;=\; \frac{1}{\rho}\, \mathrm{OPT} - \frac{1}{1+\mu} \cdot \frac{1}{T} \sum_{i \in N} r_i(T),$$
where the factor $\rho = (1+\mu)/\lambda$ is called the price of anarchy (POA).

This proposition is essentially a more explicit version of Roughgarden's result [18]; we provide a proof in the appendix for completeness. The result shows that the convergence to the POA is driven by the quantity $\frac{1}{1+\mu} \frac{1}{T} \sum_{i \in N} r_i(T)$. There are many algorithms which achieve a regret rate of $r_i(T) = O(\sqrt{\log(d)\, T})$, in which case the latter theorem would imply that the average welfare converges to the POA at a rate of $O(n \sqrt{\log(d)/T})$. As we will show, for some natural classes of no-regret algorithms the average welfare converges at the much faster rate of $O(n^2 \log(d)/T)$.
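The regret $r_i(T)$ driving Proposition 2 is easy to compute from a trace of play: since the objective is linear in $w_i^*$, the supremum over the simplex is attained at a vertex, i.e. the best fixed action in hindsight. A sketch with illustrative names:

```python
import numpy as np

def external_regret(ws, us):
    """r_i(T) = sup_{w*} sum_t <w* - w_t, u_t> for one player.

    ws: (T, d) array of played mixed strategies (rows on the simplex);
    us: (T, d) array of observed expected-utility vectors u_i^t.
    The sup over mixed strategies is attained at the single action
    maximizing cumulative utility.
    """
    ws, us = np.asarray(ws, dtype=float), np.asarray(us, dtype=float)
    realized = float(np.sum(ws * us))            # sum_t <w_t, u_t>
    best_fixed = float(np.max(us.sum(axis=0)))   # best action in hindsight
    return best_fixed - realized

# Always playing action 0 while action 1 pays 1 each round gives regret T.
r = external_regret([[1.0, 0.0]] * 3, [[0.0, 1.0]] * 3)
```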
3 Fast Convergence to Approximate Efficiency

In this section, we present our main theoretical results characterizing a class of no-regret dynamics which lead to faster convergence in smooth games. We begin by describing this class.

Definition 3 (RVU property). We say that a vanishing-regret algorithm satisfies the Regret bounded by Variation in Utilities (RVU) property with parameters $\alpha > 0$ and $0 < \beta \leq \gamma$ and a pair of dual norms $(\|\cdot\|, \|\cdot\|_*)$¹ if its regret on any sequence of utilities $u^1, u^2, \dots, u^T$ is bounded as
$$\sum_{t=1}^{T} \langle w^* - w^t, u^t \rangle \;\leq\; \alpha + \beta \sum_{t=1}^{T} \|u^t - u^{t-1}\|_*^2 - \gamma \sum_{t=1}^{T} \|w^t - w^{t-1}\|^2. \quad (1)$$

Typical online learning algorithms such as Mirror Descent and FTRL do not satisfy the RVU property in their vanilla form, as the middle term grows as $\sum_{t=1}^{T} \|u^t\|_*^2$ for these methods. However, Rakhlin and Sridharan [16] give a modification of Mirror Descent with this property, and we will present a similar variant of FTRL in the sequel. We now present two sets of results when each player uses an algorithm with this property. The first discusses the convergence of the social welfare, while the second governs the convergence of the individual players' utilities at a fast rate.

¹The dual to a norm $\|\cdot\|$ is defined as $\|v\|_* = \sup_{\|u\| \leq 1} \langle u, v \rangle$.

3.1 Fast Convergence of Social Welfare

Given Proposition 2, we only need to understand the evolution of the sum of players' regrets $\sum_{i \in N} r_i(T)$ in order to obtain convergence rates of the social welfare. Our main result in this section bounds this sum when each player uses dynamics with the RVU property.

Theorem 4. Suppose that the algorithm of each player $i$ satisfies the RVU property with parameters $\alpha$, $\beta$ and $\gamma$ such that $\beta \leq \gamma/(n-1)^2$ and $\|\cdot\| = \|\cdot\|_1$. Then $\sum_{i \in N} r_i(T) \leq \alpha n$.

Proof. Since $u_i(s) \leq 1$, the definitions imply: $\|u_i^t - u_i^{t-1}\|_* \leq \sum_{s_{-i}} \big| \prod_{j \neq i} w_{j,s_j}^t - \prod_{j \neq i} w_{j,s_j}^{t-1} \big|$. The latter is the total variation distance between two product distributions. By known properties of total variation (see e.g.
[11]), this is bounded by the sum of the total variations of each marginal distribution:
$$\sum_{s_{-i}} \Big| \prod_{j \neq i} w_{j,s_j}^t - \prod_{j \neq i} w_{j,s_j}^{t-1} \Big| \;\leq\; \sum_{j \neq i} \|w_j^t - w_j^{t-1}\|. \quad (2)$$
By Jensen's inequality, $\big(\sum_{j \neq i} \|w_j^t - w_j^{t-1}\|\big)^2 \leq (n-1) \sum_{j \neq i} \|w_j^t - w_j^{t-1}\|^2$, so that
$$\sum_{i \in N} \|u_i^t - u_i^{t-1}\|_*^2 \;\leq\; (n-1) \sum_{i \in N} \sum_{j \neq i} \|w_j^t - w_j^{t-1}\|^2 \;=\; (n-1)^2 \sum_{i \in N} \|w_i^t - w_i^{t-1}\|^2.$$
The theorem follows by summing the RVU property (1) over the players $i$ and observing that the summation of the second terms is smaller than that of the third terms, and can therefore be dropped.

Remark: The rates from the theorem depend on $\alpha$, which will be $O(1)$ in the sequel. The above theorem extends to the case where $\|\cdot\|$ is any norm equivalent to the $\ell_1$ norm; the resulting requirement on $\beta$ in terms of $\gamma$ can, however, be more stringent. Also, unlike previous results [4, 17], the theorem does not require that all players use the same no-regret algorithm, as long as each player's algorithm satisfies the RVU property with a common bound on the constants. We now instantiate the result with examples that satisfy the RVU property with different constants.

3.1.1 Optimistic Mirror Descent

The optimistic mirror descent (OMD) algorithm of Rakhlin and Sridharan [16] is parameterized by an adaptive predictor sequence $M_i^t$ and a regularizer² $R$ which is 1-strongly convex³ with respect to a norm $\|\cdot\|$. Let $D_R$ denote the Bregman divergence associated with $R$. The update rule is defined as follows: let $g_i^0 = \arg\min_{g \in \Delta(S_i)} R(g)$ and $\Phi(u, g) = \arg\max_{w \in \Delta(S_i)} \eta \langle w, u \rangle - D_R(w, g)$; then
$$w_i^t = \Phi(M_i^t, g_i^{t-1}), \qquad g_i^t = \Phi(u_i^t, g_i^{t-1}).$$
The following proposition can be obtained for this method.

Proposition 5. The OMD algorithm using stepsize $\eta$ and $M_i^t = u_i^{t-1}$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta$, $\gamma = 1/(8\eta)$, where $R = \max_i \sup_f D_R(f, g_i^0)$.

The proposition follows by further crystallizing the arguments of Rakhlin and Sridharan [17], and we provide a proof in the appendix for completeness.
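For the entropy regularizer, the Bregman step $\Phi$ has the closed multiplicative-weights form $w \propto g \cdot \exp(\eta u)$, so OMD with the one-step predictor $M_i^t = u_i^{t-1}$ can be sketched in a few lines. This is an illustration under that particular choice of regularizer; the function names are our own.

```python
# Optimistic Mirror Descent with the (negative) entropy regularizer.
# Phi(u, g) = argmax_w eta*<w,u> - KL(w || g)  =>  w proportional to g*exp(eta*u).
import math

def md_step(g, u, eta):
    w = [gx * math.exp(eta * ux) for gx, ux in zip(g, u)]
    z = sum(w)
    return [wx / z for wx in w]

def omd(utilities, eta):
    d = len(utilities[0])
    g = [1.0 / d] * d            # g^0: entropy is minimized at the uniform point
    m = [0.0] * d                # predictor M^1 (no information yet)
    plays = []
    for u in utilities:
        plays.append(md_step(g, m, eta))  # w^t = Phi(M^t, g^{t-1})
        g = md_step(g, u, eta)            # g^t = Phi(u^t, g^{t-1})
        m = u                             # one-step recency bias: M^{t+1} = u^t
    return plays

plays = omd([[1.0, 0.0]] * 50, eta=0.5)
print(plays[-1])  # mass concentrates on the first strategy
```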
The above proposition, along with Theorem 4, immediately yields the following corollary, which had been proved by Rakhlin and Sridharan [17] for two-person zero-sum games, and which we here extend to general games.

Corollary 6. If each player runs OMD with $M_i^t = u_i^{t-1}$ and stepsize $\eta = 1/(\sqrt{8}(n-1))$, then $\sum_{i \in N} r_i(T) \leq nR/\eta \leq n(n-1)\sqrt{8}R = O(1)$.

The corollary follows by noting that the condition $\beta \leq \gamma/(n-1)^2$ is met with our choice of $\eta$.

²Here and in the sequel, we can use a different regularizer $R_i$ for each player $i$, without qualitatively affecting any of the results.
³$R$ is 1-strongly convex if $R\big(\frac{u+v}{2}\big) \leq \frac{R(u)+R(v)}{2} - \frac{\|u-v\|^2}{8}$ for all $u, v$.

3.1.2 Optimistic Follow the Regularized Leader

We next consider a different class of algorithms, denoted optimistic follow the regularized leader (OFTRL). This algorithm is similar but not equivalent to OMD, and is an analogous extension of standard FTRL [12]. It takes the same parameters as OMD and is defined as follows: let $w_i^0 = \arg\min_{w \in \Delta(S_i)} R(w)$ and
$$w_i^T = \arg\max_{w \in \Delta(S_i)} \Big\langle w,\; \sum_{t=1}^{T-1} u_i^t + M_i^T \Big\rangle - \frac{R(w)}{\eta}.$$
We consider three variants of OFTRL with different choices of the sequence $M_i^t$, incorporating recency bias in different forms.

One-step recency bias: The simplest form of OFTRL uses $M_i^t = u_i^{t-1}$ and obtains the following result, where $R = \max_i \big( \sup_{f \in \Delta(S_i)} R(f) - \inf_{f \in \Delta(S_i)} R(f) \big)$.

Proposition 7. The OFTRL algorithm using stepsize $\eta$ and $M_i^t = u_i^{t-1}$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta$ and $\gamma = 1/(4\eta)$.

Combined with Theorem 4, this yields the following constant bound on the total regret of all players:

Corollary 8. If each player runs OFTRL with $M_i^t = u_i^{t-1}$ and $\eta = 1/(2(n-1))$, then $\sum_{i \in N} r_i(T) \leq nR/\eta \leq 2n(n-1)R = O(1)$.
Rakhlin and Sridharan [16] also analyze an FTRL variant, but require a self-concordant barrier for the constraint set as opposed to an arbitrary strongly convex regularizer, and their bound is missing the crucial negative terms of the RVU property which are essential for obtaining Theorem 4.

H-step recency bias: More generally, given a window size $H$, one can define $M_i^t = \frac{1}{H}\sum_{\tau=t-H}^{t-1} u_i^\tau$. We have the following proposition.

Proposition 9. The OFTRL algorithm using stepsize $\eta$ and $M_i^t = \frac{1}{H}\sum_{\tau=t-H}^{t-1} u_i^\tau$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta H^2$ and $\gamma = 1/(4\eta)$.

Setting $\eta = 1/(2H(n-1))$, we obtain the analogue of Corollary 8, with an extra factor of $H$.

Geometrically discounted recency bias: The next proposition considers an alternative form of recency bias which includes all previous utilities, but with geometric discounting.

Proposition 10. The OFTRL algorithm using stepsize $\eta$ and $M_i^t = \frac{1}{\sum_{\tau=0}^{t-1} \delta^{-\tau}} \sum_{\tau=0}^{t-1} \delta^{-\tau} u_i^\tau$ satisfies the RVU property with constants $\alpha = R/\eta$, $\beta = \eta/(1-\delta)^3$ and $\gamma = 1/(8\eta)$.

Note that these choices of $M_i^t$ can also be used in OMD with qualitatively similar results.

3.2 Fast Convergence of Individual Utilities

The previous section showed implications of the RVU property for the social welfare. This section complements those results with a similar guarantee for each player's individual utility.

Theorem 11. Suppose that the players use algorithms satisfying the RVU property with parameters $\alpha > 0$, $\beta > 0$, $\gamma \geq 0$. If we further have the stability property $\|w_i^t - w_i^{t+1}\| \leq \Delta$, then for any player $i$, $\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \leq \alpha + \beta \Delta^2 (n-1)^2 T$.

Similar reasoning as in Theorem 4 yields $\|u_i^t - u_i^{t-1}\|_*^2 \leq (n-1) \sum_{j \neq i} \|w_j^t - w_j^{t-1}\|^2 \leq (n-1)^2 \Delta^2$, and summing the terms gives the theorem.

Noting that OFTRL satisfies the RVU property with the constants given in Proposition 7 and the stability property with $\Delta = 2\eta$ (see Lemma 20 in the appendix), we have the following corollary.

Corollary 12.
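The three predictor sequences $M_i^t$ above can be illustrated with small helpers (our own code, assuming utility vectors are stored oldest-first):

```python
# One-step, H-step averaged, and geometrically discounted recency bias,
# computed from a list `past` of earlier utility vectors (oldest first).

def one_step(past):                      # M_t = u^{t-1}
    return past[-1]

def h_step(past, H):                     # M_t = average of the last H utilities
    window = past[-H:]
    return [sum(col) / len(window) for col in zip(*window)]

def discounted(past, delta):             # M_t = sum_tau delta^{-tau} u^tau / Z
    weights = [delta ** -tau for tau in range(len(past))]
    z = sum(weights)
    return [sum(wt * u[x] for wt, u in zip(weights, past)) / z
            for x in range(len(past[0]))]

past = [[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]]
print(one_step(past))        # [1.0, 0.0]
print(h_step(past, 2))       # [1.0, 0.0]
```

Note that for $\delta < 1$ the weights $\delta^{-\tau}$ grow with $\tau$, so more recent utilities are weighted more heavily, matching the "recency bias" interpretation.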
If all players use the OFTRL algorithm with $M_i^t = u_i^{t-1}$ and $\eta = (n-1)^{-1/2} T^{-1/4}$, then $\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \leq (R+4)\sqrt{n-1} \cdot T^{1/4}$.

Similar results hold for the other forms of recency bias, as well as for OMD. Corollary 12 gives a fast convergence rate of the players' strategies to the set of coarse correlated equilibria (CCE) of the game. This improves the previously known $\sqrt{T}$ convergence rate to CCE (e.g. [10]) using natural, decoupled no-regret dynamics as defined in [4].

4 Robustness to Adversarial Opponents

So far we have shown simple dynamics with rapid convergence properties in favorable environments, i.e., when each player in the game uses an algorithm with the RVU property. It is natural to wonder whether this comes at the cost of worst-case guarantees when some players do not use algorithms with this property. Rakhlin and Sridharan [17] address this concern by modifying the OMD algorithm with additional smoothing and adaptive step-sizes, so as to preserve the fast rates in the favorable case while still guaranteeing $O(1/\sqrt{T})$ regret for each player no matter how the opponents play. It is not obvious how this modification might extend to other procedures, and it seems undesirable to abandon the black-box regret transformations we used to obtain Theorem 4. In this section, we present a generic way of transforming an algorithm which satisfies the RVU property so that it retains the fast convergence in favorable settings, but always guarantees a worst-case regret of $\tilde{O}(1/\sqrt{T})$.

In order to present our modification, we need a parametric form of the RVU property which also involves a tunable parameter of the algorithm. For most online learning algorithms, this corresponds to the step-size parameter.

Definition 13 (RVU($\rho$) property).
We say that a parametric algorithm $A(\rho)$ satisfies the Regret bounded by Variation in Utilities($\rho$) (RVU($\rho$)) property with parameters $\alpha, \beta, \gamma > 0$ and a pair of dual norms $(\|\cdot\|, \|\cdot\|_*)$ if its regret on any sequence of utilities $u^1, u^2, \dots, u^T$ is bounded as
$$\sum_{t=1}^{T} \langle w^* - w^t, u^t \rangle \;\leq\; \frac{\alpha}{\rho} + \rho\beta \sum_{t=1}^{T} \|u^t - u^{t-1}\|_*^2 - \frac{\gamma}{\rho} \sum_{t=1}^{T} \|w^t - w^{t-1}\|^2. \quad (3)$$

In both the OMD and OFTRL algorithms from Section 3, the parameter $\rho$ is precisely the stepsize $\eta$. We now show an adaptive choice of $\rho$ according to an epoch-based doubling schedule.

Black-box reduction. Given a parametric algorithm $A(\rho)$ as a black box, we construct a wrapper $A'$ based on the doubling trick. The algorithm of each player proceeds in epochs. At each epoch $r$, player $i$ maintains an upper bound $B_r$ on the quantity $\sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2$. We start with a parameter $\eta^*$ and $B_1 = 1$, and for $\tau = 1, 2, \dots, T$ repeat:

1. Play according to $A(\eta_r)$ and receive $u_i^\tau$.
2. If $\sum_{t=1}^{\tau} \|u_i^t - u_i^{t-1}\|_*^2 \geq B_r$:
(a) Update $r \leftarrow r+1$, $B_r \leftarrow 2B_r$, $\eta_r = \min\{\alpha/\sqrt{B_r},\; \eta^*\}$, with $\alpha$ as in Equation (3).
(b) Start a new run of $A$ with parameter $\eta_r$.

Theorem 14. Algorithm $A'$ achieves regret at most the minimum of the following two terms:
$$\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \;\leq\; \log(T)\Big( 2 + \frac{\alpha}{\eta^*} + (2 + \eta^*\beta) \sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2 \Big) - \frac{\gamma}{\eta^*} \sum_{t=1}^{T} \|w_i^t - w_i^{t-1}\|^2; \quad (4)$$
$$\sum_{t=1}^{T} \langle w_i^* - w_i^t, u_i^t \rangle \;\leq\; \log(T)\Bigg( 1 + \frac{\alpha}{\eta^*} + (1 + \alpha\beta) \sqrt{2 \sum_{t=1}^{T} \|u_i^t - u_i^{t-1}\|_*^2} \Bigg). \quad (5)$$

That is, the algorithm satisfies the RVU property, and also has regret that can never exceed $\tilde{O}(\sqrt{T})$. The theorem thus yields the following corollary, which illustrates the stated robustness of $A'$.

Corollary 15. Algorithm $A'$, with $\eta^* = \frac{\gamma}{(2+\beta)(n-1)^2 \log(T)}$, achieves regret $\tilde{O}(\sqrt{T})$ against any adversarial sequence, while at the same time satisfying the conditions of Theorem 4. Thereby, if all players use such an algorithm, then $\sum_{i \in N} r_i(T) \leq n \log(T)(\alpha/\eta^* + 2) = \tilde{O}(1)$.
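The epoch-based wrapper above can be sketched schematically as follows. The `play()`/`update()` interface and the minimal Hedge base learner are our own illustrative assumptions; any algorithm $A(\rho)$ with a stepsize parameter could be plugged in instead.

```python
# Schematic sketch of the black-box wrapper A' (doubling trick).
import math

class Hedge:
    """Minimal base learner A(eta) on the d-dimensional simplex."""
    def __init__(self, eta, d=2):
        self.eta, self.score = eta, [0.0] * d
    def play(self):
        w = [math.exp(self.eta * s) for s in self.score]
        z = sum(w)
        return [x / z for x in w]
    def update(self, u):
        self.score = [s + ux for s, ux in zip(self.score, u)]

def wrapper(make_alg, utilities, alpha, eta_star):
    B, eta = 1.0, eta_star
    alg, variation, prev_u, plays = make_alg(eta), 0.0, None, []
    for u in utilities:
        plays.append(alg.play())
        alg.update(u)
        if prev_u is not None:
            # l_infinity norm: the dual of the l1 norm used in Theorem 4
            variation += max(abs(a - b) for a, b in zip(u, prev_u)) ** 2
        prev_u = u
        if variation >= B:                   # epoch ends: double the budget,
            B *= 2                           # shrink the stepsize, restart A
            eta = min(alpha / math.sqrt(B), eta_star)
            alg = make_alg(eta)
    return plays

plays = wrapper(Hedge, [[1.0, 0.0], [0.0, 1.0]] * 10, alpha=1.0, eta_star=0.5)
```

With a stable utility sequence the budget is never exceeded and the wrapper behaves exactly like $A(\eta^*)$; against a volatile (adversarial) sequence the stepsize shrinks as $\alpha/\sqrt{B_r}$, which is the source of the worst-case $\tilde{O}(\sqrt{T})$ guarantee.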
Figure 1: Maximum and sum of individual regrets over time under the Hedge (blue) and Optimistic Hedge (red) dynamics.

Proof. Observe that for such $\eta^*$ we have $(2 + \eta^*\beta)\log(T) \leq (2+\beta)\log(T) \leq \frac{\gamma}{\eta^*(n-1)^2}$. Therefore, algorithm $A'$ satisfies the sufficient conditions of Theorem 4.

If $A(\rho)$ is the OFTRL algorithm, then we know by Proposition 7 that the above result applies with $\alpha = R = \max_w R(w)$, $\beta = 1$, $\gamma = \frac{1}{4}$ and $\rho = \eta$. Setting $\eta^* = \frac{\gamma}{(2+\beta)(n-1)^2} = \frac{1}{12(n-1)^2}$, the resulting algorithm $A'$ will have regret at most $\tilde{O}(n^2\sqrt{T})$ against an arbitrary adversary, while if all players use algorithm $A'$ then $\sum_{i \in N} r_i(T) = O(n^3 \log(T))$.

An analogue of Theorem 11 can also be established for this algorithm:

Corollary 16. If $A$ satisfies the RVU($\rho$) property, and also $\|w_i^t - w_i^{t-1}\| \leq \Delta\rho$, then $A'$ with $\eta^* = T^{-1/4}$ achieves regret $\tilde{O}(T^{1/4})$ if played against itself, and $\tilde{O}(\sqrt{T})$ against any opponent.

Once again, OFTRL satisfies the above conditions with $\Delta = 2$, implying robust convergence.

5 Experimental Evaluation

We analyzed the performance of optimistic follow the regularized leader with the entropy regularizer, which corresponds to the Hedge algorithm [7] modified so that the last iteration's utility for each strategy is double counted; we refer to it as Optimistic Hedge. More formally, the probability of player $i$ playing strategy $j$ at iteration $T$ is proportional to $\exp\big(\eta \cdot \big(\sum_{t=1}^{T-2} u_{ij}^t + 2u_{ij}^{T-1}\big)\big)$, rather than $\exp\big(\eta \cdot \sum_{t=1}^{T-1} u_{ij}^t\big)$ as is standard for Hedge.

We studied a simple auction where $n$ players bid for $m$ items. Each player has a value $v$ for getting at least one item and no extra value for additional items. The utility of a player is the value of the allocation he receives minus the payment he has to make.
The game is defined as follows: simultaneously, each player picks one of the $m$ items and submits a bid on that item (we assume bids are discretized). For each item, the highest bidder wins and pays his bid. We let the players play this game repeatedly, with each player invoking either Hedge or Optimistic Hedge. This game, and generalizations of it, are known to be $(1 - 1/e, 0)$-smooth [20] if we also view the auctioneer as a player whose utility is the revenue. The welfare of the game is the value of the resulting allocation; hence this is not a constant-sum game. The welfare maximization problem corresponds to the unweighted bipartite matching problem. The POA captures how far from the optimal matching the average allocation of the dynamics is. By smoothness, we know it converges to at least $1 - 1/e$ of the optimal.

Fast convergence of individual and average regret. We ran the game for $n = 4$ bidders, $m = 4$ items and valuation $v = 20$. The bids are discretized to be any integer in $[1, 20]$. We find that the sum of the regrets and the maximum individual regret of each player are remarkably lower under Optimistic Hedge than under Hedge. In Figure 1 we plot the maximum individual regret as well as the sum of the regrets under the two algorithms, using $\eta = 0.1$ for both methods. Thus convergence to the set of coarse correlated equilibria is substantially faster under Optimistic Hedge, confirming our results in Section 3.2.

Figure 2: Expected bid and per-iteration utility of a player on one of the four items over time, under Hedge (blue) and Optimistic Hedge (red) dynamics.

We also observe similar behavior when each player only has value for a randomly picked player-specific subset of items, or uses other step sizes.

More stable dynamics.
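A scaled-down version of this experiment can be sketched as follows (2 bidders, 2 items, integer bids 1–3, value $v = 4$, exact expected-utility feedback as in Section 2; the tie-breaking rule in favor of player 0 and all sizes are our own simplifications, not the paper's setup):

```python
# Miniature simultaneous first-price auction played with (Optimistic) Hedge.
import math

BIDS = [1, 2, 3]
ACTIONS = [(item, b) for item in range(2) for b in BIDS]   # (item, bid) pairs
V = 4.0                                                    # value for one item

def util(a0, a1, p):
    """First-price utility of player p under the pure profile (a0, a1)."""
    acts = (a0, a1)
    item, bid = acts[p]
    o_item, o_bid = acts[1 - p]
    win = item != o_item or bid > o_bid or (bid == o_bid and p == 0)
    return V - bid if win else 0.0

def expected_utils(p, w_opp):
    """Expected utility of each action of player p against opponent mix w_opp."""
    return [sum(w_opp[j] * (util(a, ACTIONS[j], 0) if p == 0
                            else util(ACTIONS[j], a, 1))
                for j in range(len(ACTIONS)))
            for a in ACTIONS]

def weights(score, eta):
    w = [math.exp(eta * s) for s in score]
    z = sum(w)
    return [x / z for x in w]

def run(T=200, eta=0.1, optimistic=True):
    scores = [[0.0] * len(ACTIONS) for _ in range(2)]
    last = [[0.0] * len(ACTIONS) for _ in range(2)]
    ws = None
    for _ in range(T):
        # Optimistic Hedge double counts the last utility vector
        ws = [weights([s + (l if optimistic else 0.0)
                       for s, l in zip(scores[p], last[p])], eta)
              for p in (0, 1)]
        for p in (0, 1):
            u = expected_utils(p, ws[1 - p])
            scores[p] = [s + x for s, x in zip(scores[p], u)]
            last[p] = u
    return ws

final = run()
```

Plotting a single action's probability over time for `optimistic=True` versus `optimistic=False` reproduces, in miniature, the stability contrast discussed below.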
We observe that the behavior under Optimistic Hedge is more stable than under Hedge. In Figure 2, we plot the expected bid of a player on one of the items and his expected utility under the two dynamics. Hedge exhibits the sawtooth behavior that was observed in the generalized first-price auction run by Overture (see [5, p. 21]). In stark contrast, Optimistic Hedge leads to much more stable expected bids over time. This stability property of Optimistic Hedge is one of the main intuitive reasons for the fast convergence of its regret.

Welfare. In this class of games, we did not observe any significant difference between the average welfare of the two methods. The key reason is the following: the proof that no-regret dynamics are approximately efficient (Proposition 2) only relies on the fact that each player does not have regret against the strategy $s_i^*$ used in the definition of a smooth game. In this game, regret against these strategies is experimentally comparable under both algorithms, even though regret against the best fixed strategy is remarkably different. This indicates the possibility of faster welfare rates for Hedge as well. In Appendix H, we show fast convergence of the efficiency of Hedge for cost-minimization games, though with a worse POA.

6 Discussion

This work extends and generalizes a growing body of work on decentralized no-regret dynamics in many ways. We demonstrate a class of no-regret algorithms which enjoy rapid convergence when played against each other, while remaining robust to adversarial opponents. This has implications for the computation of correlated equilibria, as well as for understanding the behavior of agents in complex multi-player games. There are a number of interesting questions and directions for future research suggested by our results, including the following:

Convergence rates for vanilla Hedge: The fast rates of our paper do not apply to algorithms such as Hedge without modification.
Is the modification needed to satisfy the RVU property only sufficient for fast rates, or is it also necessary? If not, are there counterexamples? In the supplement, we include a sketch hinting at such a counterexample, but also showing fast rates to a worse equilibrium than our optimistic algorithms.

Convergence of players' strategies: The OFTRL algorithm often produces much more stable trajectories empirically, as the players converge to an equilibrium, as opposed to, say, Hedge. A precise quantification of this desirable behavior would be of great interest.

Better rates with partial information: If the players do not observe the expected utility function, but only the moves of the other players at each round, can we still obtain faster rates?

References

[1] A. Blum and Y. Mansour. Learning, regret minimization, and equilibria. In Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory, chapter 4, pages 4–30. Cambridge University Press, 2007.
[2] Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 373–382, New York, NY, USA, 2008. ACM.
[3] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. Games and Economic Behavior, 92:327–348, 2014.
[5] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. Working Paper 11765, National Bureau of Economic Research, November 2005.
[6] Dean P. Foster and Rakesh V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1–2):40–55, 1997.
[7] Yoav Freund and Robert E. Schapire.
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[8] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1):79–103, 1999.
[9] Drew Fudenberg and Alexander Peysakhovich. Recency, records and recaps: Learning and non-equilibrium behavior in a simple decision problem. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pages 971–986, New York, NY, USA, 2014. ACM.
[10] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[11] Wassily Hoeffding and J. Wolfowitz. Distinguishability of sets of distributions. Annals of Mathematical Statistics, 29(3):700–718, 1958.
[12] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005. Learning Theory 2003.
[13] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[14] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. 1983.
[15] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[16] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In COLT 2013, pages 993–1019, 2013.
[17] Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pages 3066–3074, 2013.
[18] T. Roughgarden. Intrinsic robustness of the price of anarchy. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pages 513–522, New York, NY, USA, 2009. ACM.
[19] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, February 2012.
[20] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 211–220, New York, NY, USA, 2013. ACM.
Communication Complexity of Distributed Convex Learning and Optimization

Yossi Arjevani, Weizmann Institute of Science, Rehovot 7610001, Israel. yossi.arjevani@weizmann.ac.il
Ohad Shamir, Weizmann Institute of Science, Rehovot 7610001, Israel. ohad.shamir@weizmann.ac.il

Abstract

We study the fundamental limits to communication-efficient distributed methods for convex learning and optimization, under different assumptions on the information available to individual machines, and the types of functions considered. We identify cases where existing algorithms are already worst-case optimal, as well as cases where room for further improvement is still possible. Among other things, our results indicate that without similarity between the local objective functions (due to statistical data similarity or otherwise), many communication rounds may be required, even if the machines have unbounded computational power.

1 Introduction

We consider the problem of distributed convex learning and optimization, where a set of $m$ machines, each with access to a different local convex function $F_i : \mathbb{R}^d \to \mathbb{R}$ and a convex domain $W \subseteq \mathbb{R}^d$, attempt to solve the optimization problem
$$\min_{w \in W} F(w) \quad \text{where} \quad F(w) = \frac{1}{m}\sum_{i=1}^{m} F_i(w). \quad (1)$$
A prominent application is empirical risk minimization, where the goal is to minimize the average loss over some dataset, and each machine has access to a different subset of the data. Letting $\{z_1, \dots, z_N\}$ be the dataset composed of $N$ examples, and assuming the loss function $\ell(w, z)$ is convex in $w$, the empirical risk minimization problem $\min_{w \in W} \frac{1}{N}\sum_{i=1}^{N} \ell(w, z_i)$ can be written as in Eq. (1), where $F_i(w)$ is the average loss over machine $i$'s examples. The main challenge in solving such problems is that communication between the different machines is usually slow and constrained, at least compared to the speed of local processing. On the other hand, the datasets involved in distributed learning are usually large and high-dimensional.
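The decomposition in Eq. (1) can be verified numerically on a toy example (illustrative code of ours, using a scalar squared loss): splitting $N$ examples equally among $m$ machines and averaging the local objectives recovers the global empirical risk exactly.

```python
# Check that (1/m) * sum_i F_i(w) equals the global empirical risk
# (1/N) * sum_i loss(w, z_i) when the data is split into equal shards.

def loss(w, z):
    x, y = z
    return 0.5 * (w * x - y) ** 2

def local_objective(w, shard):
    return sum(loss(w, z) for z in shard) / len(shard)

data = [(1.0, 2.0), (2.0, 1.0), (-1.0, 0.5), (0.5, -1.0)]
m = 2
shards = [data[0::m], data[1::m]]               # equal-size split

w = 0.7
global_risk = sum(loss(w, z) for z in data) / len(data)
distributed = sum(local_objective(w, s) for s in shards) / m
print(abs(global_risk - distributed))           # agrees up to rounding
```

Note the equality relies on the shards having equal size; with unequal shards the average of local objectives is a weighted version of the global risk.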
Therefore, machines cannot simply communicate their entire data to each other, and the question is how well we can solve problems such as Eq. (1) using as little communication as possible. As datasets continue to increase in size, and parallel computing platforms become more and more common (from multiple cores on a single CPU to large-scale and geographically distributed computing grids), distributed learning and optimization methods have been the focus of much research in recent years, with just a few examples including [25, 4, 2, 27, 1, 5, 13, 23, 16, 17, 8, 7, 9, 11, 20, 19, 3, 26]. Most of this work studied algorithms for this problem, which provide upper bounds on the required time and communication complexity. In this paper, we take the opposite direction, and study the fundamental performance limitations in solving Eq. (1), under several different sets of assumptions. We identify cases where existing algorithms are already optimal (at least in the worst case), as well as cases where room for further improvement is still possible.

Since a major constraint in distributed learning is communication, we focus on studying the amount of communication required to optimize Eq. (1) up to some desired accuracy $\epsilon$. More precisely, we consider the number of communication rounds that are required, where in each communication round the machines can generally broadcast to each other information linear in the problem's dimension $d$ (e.g. a point in $W$ or a gradient). This applies to virtually all algorithms for large-scale learning we are aware of, where sending vectors and gradients is feasible, but computing and sending larger objects, such as Hessians ($d \times d$ matrices), is not. Our results pertain to several possible settings (see Sec. 2 for precise definitions). First, we distinguish between the local functions being merely convex or strongly convex, and whether they are smooth or not.
These distinctions are standard in studying optimization algorithms for learning, and capture important properties such as the regularization and the type of loss function used. Second, we distinguish between a setting where the local functions are related – e.g., because they reflect statistical similarities in the data residing at different machines – and a setting where no relationship is assumed. For example, in the extreme case where the data was split uniformly at random between machines, one can show that quantities such as the values, gradients and Hessians of the local functions differ only by $\delta = O(1/\sqrt{n})$, where $n$ is the sample size per machine, due to concentration-of-measure effects. Such similarities can be used to speed up the optimization/learning process, as was done in e.g. [20, 26]. Both the $\delta$-related and the unrelated setting can be considered in a unified way, by letting $\delta$ be a parameter and studying the attainable lower bounds as a function of $\delta$. Our results can be summarized as follows:

• First, we define a mild structural assumption on the algorithm (which is satisfied by all reasonable approaches we are aware of), which allows us to provide the lower bounds described below on the number of communication rounds required to reach a given suboptimality $\epsilon$.

– When the local functions can be unrelated, we prove a lower bound of $\Omega(\sqrt{1/\lambda}\,\log(1/\epsilon))$ for smooth and $\lambda$-strongly convex functions, and $\Omega(\sqrt{1/\epsilon})$ for smooth convex functions. These lower bounds are matched by a straightforward distributed implementation of accelerated gradient descent. In particular, the results imply that many communication rounds may be required to get a high-accuracy solution, and moreover, that no algorithm satisfying our structural assumption can do better, even if we endow the local machines with unbounded computational power. For non-smooth functions, we show a lower bound of $\Omega(\sqrt{1/(\lambda\epsilon)})$ for $\lambda$-strongly convex functions, and $\Omega(1/\epsilon)$ for general convex functions.
Although we leave a full derivation to future work, it seems these lower bounds can be matched in our framework by an algorithm combining acceleration and Moreau proximal smoothing of the local functions.

– When the local functions are related (as quantified by the parameter $\delta$), we prove a communication-round lower bound of $\Omega(\sqrt{\delta/\lambda}\,\log(1/\epsilon))$ for smooth and $\lambda$-strongly convex functions. For quadratics, this bound is matched (up to constants and logarithmic factors) by the recently proposed DISCO algorithm [26]. However, obtaining an optimal algorithm for general strongly convex and smooth functions in the $\delta$-related setting, let alone for non-smooth or non-strongly-convex functions, remains open.

• We also study the attainable performance without posing any structural assumptions on the algorithm, but in the more restricted case where only a single round of communication is allowed. We prove that in a broad regime, the performance of any distributed algorithm may be no better than a 'trivial' algorithm which returns the minimizer of one of the local functions, as long as the number of bits communicated is less than $\Omega(d^2)$. Therefore, in our setting, no communication-efficient one-round distributed algorithm can provide non-trivial performance in the worst case.

Related Work

There have been several previous works which considered lower bounds in the context of distributed learning and optimization, but to the best of our knowledge, none of them provide a similar type of results. Perhaps the most closely related paper is [22], which studied the communication complexity of distributed optimization, and showed that $\Omega(d \log(1/\epsilon))$ bits of communication are necessary between the machines for $d$-dimensional convex problems. However, in our setting this does not lead to any non-trivial lower bound on the number of communication rounds (indeed, just specifying a $d$-dimensional vector up to accuracy $\epsilon$ requires $O(d \log(1/\epsilon))$ bits).
More recently, [2] considered lower bounds for certain types of distributed learning problems, but not convex ones in an agnostic, distribution-free framework. In the context of lower bounds for one-round algorithms, the results of [6] imply that $\Omega(d^2)$ bits of communication are required to solve linear regression in one round of communication. However, that paper assumes a different model than ours, where the function to be optimized is not split among the machines as in Eq. (1), with each $F_i$ convex. Moreover, issues such as strong convexity and smoothness are not considered. [20] proves an impossibility result for a one-round distributed learning scheme, even when the local functions are not merely related, but actually result from splitting data uniformly at random between machines. On the flip side, that result is for a particular algorithm, and does not apply to every possible method. Finally, we emphasize that distributed learning and optimization can be studied under many settings, including ones different from those studied here. For example, one can consider distributed learning on a stream of i.i.d. data [19, 7, 10, 8], or settings where the computing architecture is different, e.g. where the machines have a shared memory, or the function to be optimized is not split as in Eq. (1). Studying lower bounds in such settings is an interesting topic for future work.

2 Notation and Framework

The only vector and matrix norms used in this paper are the Euclidean norm and the spectral norm, respectively. $e_j$ denotes the $j$-th standard unit vector. We let $\nabla G(w)$ and $\nabla^2 G(w)$ denote the gradient and Hessian of a function $G$ at $w$, if they exist. $G$ is smooth (with parameter $L$) if it is differentiable and the gradient is $L$-Lipschitz. In particular, if $w^* = \arg\min_{w \in W} G(w)$, then $G(w) - G(w^*) \leq \frac{L}{2}\|w - w^*\|^2$. $G$ is strongly convex (with parameter $\lambda$) if for any $w, w' \in W$,
$$G(w') \geq G(w) + \langle g, w' - w \rangle + \frac{\lambda}{2}\|w' - w\|^2,$$
where $g \in \partial G(w)$ is a subgradient of $G$ at $w$.
In particular, if $w^* = \arg\min_{w \in W} G(w)$, then $G(w) - G(w^*) \geq \frac{\lambda}{2}\|w - w^*\|^2$. Any convex function is also strongly convex with $\lambda = 0$. A special case of smooth convex functions are quadratics, where $G(w) = w^\top A w + b^\top w + c$ for some positive semidefinite matrix $A$, vector $b$ and scalar $c$. In this case, $\lambda$ and $L$ correspond to the smallest and largest eigenvalues of $A$.

We model the distributed learning algorithm as an iterative process, where in each round the machines may perform some local computations, followed by a communication round in which each machine broadcasts a message to all other machines. We make no assumptions on the computational complexity of the local computations. After all communication rounds are completed, a designated machine provides the algorithm's output (possibly after additional local computation). Clearly, without any assumptions on the number of bits communicated, the problem can be trivially solved in one round of communication (e.g. each machine communicates the function $F_i$ to the designated machine, which then solves Eq. (1)). However, in practical large-scale scenarios this is infeasible, and the size of each message (measured by the number of bits) is typically on the order of $\tilde{O}(d)$, enough to send a $d$-dimensional real-valued vector¹, such as points in the optimization domain or gradients, but not larger objects such as $d \times d$ Hessians. In this model, our main question is the following: how many rounds of communication are necessary in order to solve problems such as Eq. (1) to some given accuracy $\epsilon$?

As discussed in the introduction, we first need to distinguish between different assumptions on the possible relation between the local functions. One natural situation is when no significant relationship can be assumed, for instance when the data is arbitrarily split or is gathered by each machine from statistically dissimilar sources. We denote this as the unrelated setting. However, this assumption is often unnecessarily pessimistic.
Often the data allocation process is more random, or we can assume that the different data sources for each machine have statistical similarities (to give a simple example, consider learning from users' activity across a geographically distributed computing grid, each servicing its own local population). We will capture such similarities, in the context of quadratic functions, using the following definition:

Definition 1. We say that a set of quadratic functions

    F_i(w) := w⊤A_i w + b_i⊤w + c_i,    A_i ∈ ℝ^{d×d}, b_i ∈ ℝ^d, c_i ∈ ℝ

are δ-related if for any i, j ∈ {1, …, k}, it holds that

    ∥A_i − A_j∥ ≤ δ,    ∥b_i − b_j∥ ≤ δ,    |c_i − c_j| ≤ δ.

[Footnote 1: The Õ hides constants and factors logarithmic in the required accuracy of the solution. The idea is that we can represent real numbers up to some arbitrarily high machine precision, enough so that finite-precision issues are not a problem.]

For example, in the context of linear regression with the squared loss over a bounded subset of ℝ^d, and assuming mn data points with bounded norm are randomly and equally split among m machines, it can be shown that the conditions above hold with δ = O(1/√n) [20]. The choice of δ provides us with a spectrum of learning problems ranked by difficulty: When δ = Ω(1), this generally corresponds to the unrelated setting discussed earlier. When δ = O(1/√n), we get the situation typical of randomly partitioned data. When δ = 0, all the local functions have essentially the same minimizers, in which case Eq. (1) can be trivially solved with zero communication, just by letting one machine optimize its own local function. We note that although Definition 1 can be generalized to non-quadratic functions, we do not need it for the results presented here.

We end this section with an important remark. In this paper, we prove lower bounds for the δ-related setting, which includes as a special case the commonly-studied setting of randomly partitioned data (in which case δ = O(1/√n)).
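As an illustration of Definition 1, the following snippet builds a few quadratics that are δ-related by construction and checks the three conditions directly. The helper name and the perturbation scheme are our own, purely for illustration:

```python
import numpy as np

d, m, delta = 10, 4, 0.5

def are_delta_related(As, bs, cs, delta):
    """Check Definition 1 for quadratics F_i(w) = w^T A_i w + b_i^T w + c_i."""
    k = len(As)
    return all(
        np.linalg.norm(As[i] - As[j], 2) <= delta      # spectral norm on A
        and np.linalg.norm(bs[i] - bs[j]) <= delta     # Euclidean norm on b
        and abs(cs[i] - cs[j]) <= delta
        for i in range(k) for j in range(k)
    )

# Perturb a reference quadratic by at most 0.2 in each component, so every
# pairwise difference is at most 0.4 < delta.
A0, b0, c0 = np.eye(d), np.ones(d), 0.0
eps = np.linspace(-0.2, 0.2, m)
As = [A0 + e * np.eye(d) for e in eps]                 # ||A_i - A_j|| = |e_i - e_j|
bs = [b0 + e * np.ones(d) / np.sqrt(d) for e in eps]   # ||b_i - b_j|| = |e_i - e_j|
cs = [c0 + e for e in eps]

assert are_delta_related(As, bs, cs, delta)
# A perturbation of size 2*delta violates the definition, as expected.
assert not are_delta_related([A0, A0 + 2 * delta * np.eye(d)], [b0, b0], [0.0, 0.0], delta)
```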
However, our bounds do not directly apply to random partitioning, since they use δ-related constructions which do not correspond to randomly partitioned data. In fact, very recent work [12] has cleverly shown that for randomly partitioned data, and for certain reasonable regimes of strong convexity and smoothness, it is actually possible to get better performance than what is indicated by our lower bounds. However, this encouraging result crucially relies on the random partition property, and on parameter regimes which limit how much each data point needs to be "touched", hence preserving key statistical independence properties. We suspect that it may be difficult to improve on our lower bounds under substantially weaker assumptions.

3 Lower Bounds Using a Structural Assumption

In this section, we present lower bounds on the number of communication rounds, where we impose a certain mild structural assumption on the operations performed by the algorithm. Roughly speaking, our lower bounds pertain to a very large class of algorithms, which are based on linear operations involving points, gradients, and vector products with local Hessians and their inverses, as well as solving local optimization problems involving such quantities. At each communication round, the machines can share any of the vectors they have computed so far. Formally, we consider algorithms which satisfy the assumption stated below. For convenience, we state it for smooth functions (which are differentiable) and discuss the case of non-smooth functions in Sec. 3.2.

Assumption 1. For each machine j, define a set W_j ⊂ ℝ^d, initially W_j = {0}. Between communication rounds, each machine j iteratively computes and adds to W_j some finite number of points w, each satisfying

    γw + ν∇F_j(w) ∈ span{ w′, ∇F_j(w′), (∇²F_j(w′) + D)w″, (∇²F_j(w′) + D)⁻¹w″ :
        w′, w″ ∈ W_j, D diagonal, ∇²F_j(w′) exists, (∇²F_j(w′) + D)⁻¹ exists }    (2)

for some γ, ν ≥ 0 such that γ + ν > 0.
After every communication round, let W_j := ∪_{i=1}^m W_i for all j. The algorithm's final output (provided by the designated machine j) is a point in the span of W_j.

This assumption requires several remarks:

• Note that W_j is not an explicit part of the algorithm: it simply includes all points computed by machine j so far, or communicated to it by other machines, and is used to define the set of new points which the machine is allowed to compute.

• The assumption bears some resemblance to, but is far weaker than, standard assumptions used to provide lower bounds for iterative optimization algorithms. For example, a common assumption (see [14]) is that each computed point w must lie in the span of the previous gradients. This corresponds to a special case of Assumption 1, where γ = 1, ν = 0, and the span is only over gradients of previously computed points. Moreover, it also allows (for instance) exact optimization of each local function, which is a subroutine in some distributed algorithms (e.g. [27, 25]), by setting γ = 0, ν = 1 and computing a point w satisfying γw + ν∇F_j(w) = 0. By allowing the span to include previous gradients, we also incorporate algorithms which perform optimization of the local function plus terms involving previous gradients and points, such as [20], as well as algorithms which rely on local Hessian information and preconditioning, such as [26]. In summary, the assumption is satisfied by most techniques for black-box convex optimization that we are aware of. Finally, we emphasize that we do not restrict the number or computational complexity of the operations performed between communication rounds.

• The requirement that γ, ν ≥ 0 is to exclude algorithms which solve non-convex local optimization problems of the form min_w F_j(w) + γ∥w∥² with γ < 0, which are unreasonable in practice and can sometimes break our lower bounds.
• The assumption that W_j is initially {0} (namely, that the algorithm starts from the origin) is purely for convenience, and our results can be easily adapted to any other starting point by shifting all functions accordingly.

The techniques we employ in this section are inspired by lower bounds on the iteration complexity of first-order methods for standard (non-distributed) optimization (see for example [14]). These are based on the construction of 'hard' functions, where each gradient (or subgradient) computation can only provide a small improvement in the objective value. In our setting, the dynamics are roughly similar, but the necessity of many gradient computations is replaced by many communication rounds. This is achieved by constructing suitable local functions, where at any time point no individual machine can 'progress' on its own, without information from other machines.

3.1 Smooth Local Functions

We begin by presenting a lower bound when the local functions F_i are strongly convex and smooth:

Theorem 1. For any even number m of machines, any distributed algorithm which satisfies Assumption 1, and for any λ ∈ [0, 1), δ ∈ (0, 1), there exist m local quadratic functions over ℝ^d (where d is sufficiently large) which are 1-smooth, λ-strongly convex, and δ-related, such that if w* = arg min_{w∈ℝ^d} F(w), then the number of communication rounds required to obtain ŵ satisfying F(ŵ) − F(w*) ≤ ϵ (for any ϵ > 0) is at least

    (1/4) ( √(1 + δ(1/λ − 1)) − 1 ) log( λ∥w*∥² / (4ϵ) ) − 1/2  =  Ω( √(δ/λ) · log( λ∥w*∥² / ϵ ) )

if λ > 0, and at least √(3δ/(32ϵ)) ∥w*∥ − 2 if λ = 0.

The assumption of m being even is purely for technical convenience, and can be discarded at the cost of making the proof slightly more complex. Also, note that m does not appear explicitly in the bound, but may appear implicitly, via δ (for example, in a statistical setting δ may depend on the number of data points per machine, and may be larger if the same dataset is divided among more machines).
Let us contrast our lower bound with some existing algorithms and guarantees in the literature. First, regardless of whether the local functions are similar or not, we can always simulate any gradient-based method designed for a single machine, by iteratively computing gradients of the local functions, and performing a communication round to compute their average. Clearly, this will be a gradient of the objective function F(·) = (1/m) Σ_{i=1}^m F_i(·), which can be fed into any gradient-based method such as gradient descent or accelerated gradient descent [14]. The resulting number of required communication rounds is then equal to the number of iterations. In particular, using accelerated gradient descent for smooth and λ-strongly convex functions yields a round complexity of O(√(1/λ) log(∥w*∥²/ϵ)), and O(∥w*∥√(1/ϵ)) for smooth convex functions. This matches our lower bound (up to constants and log factors) when the local functions are unrelated (δ = Ω(1)). When the functions are related, however, the upper bounds above are highly sub-optimal: even if the local functions are completely identical, and δ = 0, the number of communication rounds will remain the same as when δ = Ω(1). To utilize function similarity while guaranteeing arbitrarily small ϵ, the two most relevant algorithms are DANE [20] and the more recent DISCO [26]. For smooth and λ-strongly convex functions which are either quadratic or satisfy a certain self-concordance condition, DISCO achieves Õ(1 + √(δ/λ)) round complexity ([26, Thm. 2]), which matches our lower bound in terms of the dependence on δ, λ. However, for non-quadratic losses, the round complexity bounds are somewhat worse, and there are no guarantees for strongly convex and smooth functions which are not self-concordant. Thus, the question of the optimal round complexity for such functions remains open. The full proof of Thm.
1 appears in the supplementary material, and is based on the following idea. For simplicity, suppose we have two machines, with local functions F₁, F₂ defined as follows:

    F₁(w) = (δ(1−λ)/4) w⊤A₁w − (δ(1−λ)/2) e₁⊤w + (λ/2)∥w∥²    (3)
    F₂(w) = (δ(1−λ)/4) w⊤A₂w + (λ/2)∥w∥²,

where

    A₁ = [ 1  0  0  0  0 …        A₂ = [ 1 −1  0  0  0 …
           0  1 −1  0  0 …              −1  1  0  0  0 …
           0 −1  1  0  0 …               0  0  1 −1  0 …
           0  0  0  1 −1 …               0  0 −1  1  0 …
           0  0  0 −1  1 …               0  0  0  0  1 …
           ⋮  ⋮  ⋮  ⋮  ⋮  ⋱ ]            ⋮  ⋮  ⋮  ⋮  ⋮  ⋱ ]

It is easy to verify that for δ, λ ≤ 1, both F₁(w) and F₂(w) are 1-smooth and λ-strongly convex, as well as δ-related. Moreover, the optimum of their average is a point w* with non-zero entries in all coordinates. However, since each local function has a block-diagonal quadratic term, it can be shown that for any algorithm satisfying Assumption 1, after T communication rounds, the points computed by the two machines can only have the first T + 1 coordinates non-zero. No machine will be able to further 'progress' on its own, and cause additional coordinates to become non-zero, without another communication round. This leads to a lower bound on the optimization error which depends on T, resulting in the theorem statement after a few computations.

3.2 Non-smooth Local Functions

Remaining in the framework of algorithms satisfying Assumption 1, we now turn to discuss the situation where the local functions are not necessarily smooth or differentiable. For simplicity, our formal results here will be in the unrelated setting, and we only informally discuss their extension to a δ-related setting (in a sense relevant to non-smooth functions). Formally defining δ-related non-smooth functions is possible but not altogether trivial, and is therefore left to future work. We adapt Assumption 1 to the non-smooth case, by allowing gradients to be replaced by arbitrary subgradients at the same points.
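The "frontier growth" mechanism in the proof idea for Thm. 1 above can be observed numerically. The following toy simulation is our own simplification (plain alternating gradient steps with one communication round each, rather than the full class of algorithms covered by Assumption 1); it shows the highest non-zero coordinate advancing by exactly one per round:

```python
import numpy as np

d, delta, lam = 12, 0.5, 0.1

# A1: a lone 1 in the top-left corner, then 2x2 blocks on coordinates (2,3), (4,5), ...
A1 = np.zeros((d, d)); A1[0, 0] = 1.0
for i in range(1, d - 1, 2):
    A1[i:i + 2, i:i + 2] = [[1, -1], [-1, 1]]
# A2: 2x2 blocks on coordinates (1,2), (3,4), ...
A2 = np.zeros((d, d))
for i in range(0, d - 1, 2):
    A2[i:i + 2, i:i + 2] = [[1, -1], [-1, 1]]
e1 = np.eye(d)[0]

def grad_F1(w):  # gradient of machine 1's local function in Eq. (3)
    return delta * (1 - lam) / 2 * (A1 @ w - e1) + lam * w

def grad_F2(w):  # gradient of machine 2's local function in Eq. (3)
    return delta * (1 - lam) / 2 * (A2 @ w) + lam * w

w = np.zeros(d)
frontier = []
for t in range(8):  # one gradient step per communication round, alternating machines
    w = w - 0.5 * (grad_F1(w) if t % 2 == 0 else grad_F2(w))
    nonzero = np.nonzero(np.abs(w) > 1e-12)[0]
    frontier.append(int(nonzero.max()) + 1 if nonzero.size else 0)

print(frontier)  # the non-zero prefix grows by one coordinate per round
```

The e₁ term lets machine 1 activate the first coordinate; after that, each machine's block structure can only couple the current frontier coordinate to the next one, so progress requires alternating, i.e. communicating.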
Namely, we replace Eq. (2) by the requirement that for some g ∈ ∂F_j(w), and γ, ν ≥ 0, γ + ν > 0,

    γw + νg ∈ span{ w′, g′, (∇²F_j(w′) + D)w″, (∇²F_j(w′) + D)⁻¹w″ :
        w′, w″ ∈ W_j, g′ ∈ ∂F_j(w′), D diagonal, ∇²F_j(w′) exists, (∇²F_j(w′) + D)⁻¹ exists }.

The lower bound for this setting is stated in the following theorem.

Theorem 2. For any even number m of machines, any distributed optimization algorithm which satisfies Assumption 1, and for any λ ≥ 0, there exist λ-strongly convex, (1+λ)-Lipschitz continuous convex local functions F₁(w) and F₂(w) over the unit Euclidean ball in ℝ^d (where d is sufficiently large), such that if w* = arg min_{w:∥w∥≤1} F(w), the number of communication rounds required to obtain ŵ satisfying F(ŵ) − F(w*) ≤ ϵ (for any sufficiently small ϵ > 0) is at least 1/(8ϵ) − 2 for λ = 0, and √(1/(16λϵ)) − 2 for λ > 0.

As in Thm. 1, we note that the assumption of even m is for technical convenience. This theorem, together with Thm. 1, implies that both strong convexity and smoothness are necessary for the number of communication rounds to scale logarithmically with the required accuracy ϵ. We emphasize that this is true even if we allow the machines unbounded computational power, to perform arbitrarily many operations satisfying Assumption 1. Moreover, a preliminary analysis indicates that performing accelerated gradient descent on smoothed versions of the local functions (using Moreau proximal smoothing, e.g. [15, 24]) can match these lower bounds up to log factors². We leave a full formal derivation (which has some subtleties) to future work.

The full proof of Thm. 2 appears in the supplementary material. The proof idea relies on the following construction. Assume that we fix the number of communication rounds to be T, and (for simplicity) that T is even and the number of machines is 2.
Then we use local functions of the form

    F₁(w) = (1/√2)|b − w₁| + (1/√(2(T+2))) ( |w₂ − w₃| + |w₄ − w₅| + ⋯ + |w_T − w_{T+1}| ) + (λ/2)∥w∥²
    F₂(w) = (1/√(2(T+2))) ( |w₁ − w₂| + |w₃ − w₄| + ⋯ + |w_{T+1} − w_{T+2}| ) + (λ/2)∥w∥²,

where b is a suitably chosen parameter. It is easy to verify that both local functions are λ-strongly convex and (1 + λ)-Lipschitz continuous over the unit Euclidean ball. Similar to the smooth case, we argue that after T communication rounds, the points w computed by machine 1 will be non-zero only on the first T + 1 coordinates, and the points w computed by machine 2 will be non-zero only on the first T coordinates. As in the smooth case, these functions allow us to 'control' the progress of any algorithm which satisfies Assumption 1. Finally, although the result is in the unrelated setting, it is straightforward to obtain a similar construction in a 'δ-related' setting, by multiplying F₁ and F₂ by δ. The resulting two functions have gradients and subgradients at most δ-different from each other, and the construction above leads to a lower bound of Ω(δ/ϵ) for convex Lipschitz functions, and Ω(δ√(1/(λϵ))) for λ-strongly convex Lipschitz functions. In terms of upper bounds, we are actually unaware of any relevant algorithm in the literature adapted to such a setting, and the question of attainable performance here remains wide open.

4 One Round of Communication

In this section, we study what lower bounds are attainable without any kind of structural assumption (such as Assumption 1). This is a more challenging setting, and the result we present will be limited to algorithms using a single round of communication. We note that this still captures a realistic non-interactive distributed computing scenario, where we want each machine to broadcast a single message, and a designated machine is then required to produce an output.
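The claimed (1 + λ)-Lipschitz continuity of the non-smooth construction from Sec. 3.2 above is easy to spot-check numerically on the unit ball. The parameter b is left unspecified in the sketch, so we set b = 1 here purely for the check:

```python
import numpy as np

rng = np.random.default_rng(7)
T, lam, b = 6, 0.3, 1.0      # T even; b is a stand-in value for this check
d = T + 2
scale = 1.0 / np.sqrt(2 * (T + 2))

def F1(w):                    # machine 1's non-smooth local function
    pairs = sum(abs(w[i] - w[i + 1]) for i in range(1, T, 2))      # |w2-w3|, ..., |wT-wT+1|
    return abs(b - w[0]) / np.sqrt(2) + scale * pairs + lam / 2 * w @ w

def F2(w):                    # machine 2's non-smooth local function
    pairs = sum(abs(w[i] - w[i + 1]) for i in range(0, T + 1, 2))  # |w1-w2|, ..., |wT+1-wT+2|
    return scale * pairs + lam / 2 * w @ w

def proj_ball(v):             # project onto the unit Euclidean ball
    return v / max(1.0, np.linalg.norm(v))

for _ in range(500):
    u, v = proj_ball(rng.standard_normal(d)), proj_ball(rng.standard_normal(d))
    for F in (F1, F2):
        # (1 + lam)-Lipschitz on the unit ball
        assert abs(F(u) - F(v)) <= (1 + lam) * np.linalg.norm(u - v) + 1e-9
```

The bound holds because the absolute-value terms act on disjoint coordinate pairs, so their combined subgradient norm is at most 1, while the quadratic term contributes at most λ on the unit ball.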
In the context of distributed optimization, a natural example is a one-shot averaging algorithm, where each machine optimizes its own local data, and the resulting points are averaged (e.g. [27, 25]). Intuitively, with only a single round of communication, getting an arbitrarily small error ϵ may be infeasible. The following theorem establishes a lower bound on the attainable error, depending on the strong convexity parameter λ and the similarity measure δ between the local functions, and compares this with a 'trivial' zero-communication algorithm, which just returns the optimum of a single local function:

Theorem 3. For any even number m of machines, any dimension d larger than some numerical constant, any δ ≥ 3λ > 0, and any (possibly randomized) algorithm which communicates at most d²/128 bits in a single round of communication, there exist m quadratic functions over ℝ^d, which are δ-related, λ-strongly convex and 9λ-smooth, for which the following hold for some positive numerical constants c, c′:

• The point ŵ returned by the algorithm satisfies

    E[ F(ŵ) − min_{w∈ℝ^d} F(w) ] ≥ cδ²/λ

in expectation over the algorithm's randomness.

• For any machine j, if ŵ_j = arg min_{w∈ℝ^d} F_j(w), then F(ŵ_j) − min_{w∈ℝ^d} F(w) ≤ c′δ²/λ.

[Footnote 2: Roughly speaking, for any γ > 0, this smoothing creates a (1/γ)-smooth function which is γ-close to the original function. Plugging these into the guarantees of accelerated gradient descent and tuning γ yields our lower bounds. Note that, in order to execute this algorithm, each machine must be sufficiently powerful to obtain the gradient of the Moreau envelope of its local function, which is indeed the case in our framework.]
The theorem shows that unless the communication budget is extremely large (quadratic in the dimension), there are functions which cannot be optimized to non-trivial accuracy in one round of communication, in the sense that the same accuracy (up to a universal constant) can be obtained with a 'trivial' solution where we just return the optimum of a single local function. This complements an earlier result in [20], which showed that a particular one-round algorithm is no better than returning the optimum of a local function, under the stronger assumption that the local functions are not merely δ-related, but are actually the average loss over some randomly partitioned data.

The full proof appears in the supplementary material, but we sketch the main ideas below. As before, focusing on the case of two machines, and assuming machine 2 is responsible for providing the output, we use

    F₁(w) = 3λ w⊤( (I + (1/(2c√d)) M)⁻¹ − (1/2)I ) w
    F₂(w) = (3λ/2)∥w∥² − δ e_j⊤w,

where M is essentially a randomly chosen {−1, +1}-valued d × d symmetric matrix with spectral norm at most c√d, and c is a suitable constant. These functions can be shown to be δ-related as well as λ-strongly convex. Moreover, the optimum of F(w) = (1/2)(F₁(w) + F₂(w)) equals

    w* = (δ/(6λ)) ( I + (1/(2c√d)) M ) e_j.

Thus, we see that the optimal point w* depends on the j-th column of M. Intuitively, the machines need to approximate this column, and this is the source of hardness in this setting: machine 1 knows M but not j, yet needs to communicate to machine 2 enough information to construct its j-th column. However, given a communication budget much smaller than the size of M (which is d²), it is difficult to convey enough information on the j-th column without knowing what j is. Carefully formalizing this intuition, and using some information-theoretic tools, allows us to prove the first part of Thm. 3. Proving the second part of Thm. 3 is straightforward, using a few computations.
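The closed-form optimum in the sketch above can be verified numerically. Note that F = (F₁ + F₂)/2 simplifies to (3λ/2) w⊤B⁻¹w − (δ/2) e_j⊤w with B = I + M/(2c√d), so its gradient is 3λ B⁻¹w − (δ/2)e_j, which indeed vanishes at the stated w*. The constant c = 3 below is our own choice, comfortably bounding the spectral norm at this dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
d, lam, delta, j = 40, 0.2, 0.6, 4
c = 3.0   # constant bounding ||M|| / sqrt(d); generous for d = 40

# Random symmetric {-1,+1}-valued matrix.
R = rng.choice([-1.0, 1.0], size=(d, d))
M = np.triu(R) + np.triu(R, 1).T
assert np.linalg.norm(M, 2) <= c * np.sqrt(d)   # holds w.h.p. for random +-1 matrices

B = np.eye(d) + M / (2 * c * np.sqrt(d))
e_j = np.eye(d)[j]

w_star = delta / (6 * lam) * (B @ e_j)          # claimed closed-form optimum

# grad F(w) = 3 lam B^{-1} w - (delta/2) e_j; it should vanish at w_star.
grad_at_star = 3 * lam * np.linalg.solve(B, w_star) - 0.5 * delta * e_j
assert np.linalg.norm(grad_at_star) < 1e-10
```

Since w* is proportional to B e_j, its entries expose the j-th column of M, which is exactly the information-theoretic bottleneck the proof exploits.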
5 Summary and Open Questions

In this paper, we studied lower bounds on the number of communication rounds needed to solve distributed convex learning and optimization problems, under several different settings. Our results indicate that when the local functions are unrelated, then regardless of the local machines' computational power, many communication rounds may be necessary (scaling polynomially with 1/ϵ or 1/λ), and that the worst-case optimal algorithm (at least for smooth functions) is just a straightforward distributed implementation of accelerated gradient descent. When the functions are related, we show that the optimal performance is achieved by the algorithm of [26] for quadratic and strongly convex functions, but designing optimal algorithms for more general functions remains open. Besides these results, which required a certain mild structural assumption on the algorithm employed, we also provided an assumption-free lower bound for one-round algorithms, which implies that even for strongly convex quadratic functions, such algorithms can sometimes only provide trivial performance.

Besides the question of designing optimal algorithms for the remaining settings, several additional questions remain open. First, it would be interesting to get assumption-free lower bounds for algorithms with multiple rounds of communication. Second, our work focused on communication complexity, but in practice the computational complexity of the local computations is no less important. Thus, it would be interesting to understand what is the attainable performance with simple, runtime-efficient algorithms. Finally, it would be interesting to study lower bounds for other distributed learning and optimization scenarios.

Acknowledgments: This research is supported in part by an FP7 Marie Curie CIG grant, the Intel ICRI-CI Institute, and Israel Science Foundation grant 425/13. We thank Nati Srebro for several helpful discussions and insights.

References

[1] A. Agarwal, O.
Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. CoRR, abs/1110.4198, 2011.
[2] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT, 2012.
[3] M.-F. Balcan, V. Kanchanapally, Y. Liang, and D. Woodruff. Improved distributed principal component analysis. In NIPS, 2014.
[4] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.
[5] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via ADMM. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[6] K. Clarkson and D. Woodruff. Numerical linear algebra in the streaming model. In STOC, 2009.
[7] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In NIPS, 2011.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
[9] J. Duchi, A. Agarwal, and M. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592–606, 2012.
[10] R. Frostig, R. Ge, S. Kakade, and A. Sidford. Competing with the empirical risk minimizer in a single pass. arXiv preprint arXiv:1412.6606, 2014.
[11] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan. Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
[12] J. Lee, T. Ma, and Q. Lin. Distributed stochastic variance reduced gradient methods. CoRR, abs/1507.07595, 2015.
[13] D. Mahajan, S. Keerthy, S. Sundararajan, and L. Bottou. A parallel SGD method with strong convergence. CoRR, abs/1311.0636, 2013.
[14] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
[15] Y. Nesterov.
Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[16] B. Recht, C. Ré, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[17] P. Richtárik and M. Takáč. Distributed coordinate descent method for learning with big data. CoRR, abs/1310.2059, 2013.
[18] O. Shamir. Fundamental limits of online and distributed algorithms for statistical learning and estimation. In NIPS, 2014.
[19] O. Shamir and N. Srebro. On distributed stochastic optimization and learning. In Allerton Conference on Communication, Control, and Computing, 2014.
[20] O. Shamir, N. Srebro, and T. Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In ICML, 2014.
[21] T. Tao. Topics in random matrix theory, volume 132. American Mathematical Society, 2012.
[22] J. Tsitsiklis and Z.-Q. Luo. Communication complexity of convex optimization. Journal of Complexity, 3(3):231–243, 1987.
[23] T. Yang. Trading computation for communication: Distributed SDCA. In NIPS, 2013.
[24] Y.-L. Yu. Better approximation and faster algorithm using proximal average. In NIPS, 2013.
[25] Y. Zhang, J. Duchi, and M. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321–3363, 2013.
[26] Y. Zhang and L. Xiao. Communication-efficient distributed optimization of self-concordant empirical loss. In ICML, 2015.
[27] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010.
Large-Scale Bayesian Multi-Label Learning via Topic-Based Label Embeddings

Piyush Rai†∗, Changwei Hu∗, Ricardo Henao∗, Lawrence Carin∗
†CSE Dept, IIT Kanpur; ∗ECE Dept, Duke University
piyush@cse.iitk.ac.in, {ch237,r.henao,lcarin}@duke.edu

Abstract

We present a scalable Bayesian multi-label learning model based on learning low-dimensional label embeddings. Our model assumes that each label vector is generated as a weighted combination of a set of topics (each topic being a distribution over labels), where the combination weights (i.e., the embeddings) for each label vector are conditioned on the observed feature vector. This construction, coupled with a Bernoulli-Poisson link function for each label of the binary label vector, leads to a model with a computational cost that scales in the number of positive labels in the label matrix. This makes the model particularly appealing for real-world multi-label learning problems where the label matrix is usually very massive but highly sparse. Using a data-augmentation strategy leads to full local conjugacy in our model, facilitating simple and very efficient Gibbs sampling, as well as an Expectation Maximization algorithm for inference. Also, predicting the label vector at test time does not require doing inference for the label embeddings and can be done in closed form. We report results on several benchmark data sets, comparing our model with various state-of-the-art methods.

1 Introduction

Multi-label learning refers to the problem setting in which the goal is to assign to an object (e.g., a video, image, or webpage) a subset of labels (e.g., tags) from a (possibly very large) set of labels. The label assignments of each example can be represented using a binary label vector, indicating the presence/absence of each label.
Despite a significant amount of prior work, multi-label learning [7, 6] continues to be an active area of research, with a recent surge of interest [1, 25, 18, 13, 10] in designing scalable multi-label learning methods to address the challenges posed by problems such as image/webpage annotation [18], computational advertising [1, 18], medical coding [24], etc., where not only the number of examples and data dimensionality are large but the number of labels can also be massive (several thousands to even millions). Often, in multi-label learning problems, many of the labels tend to be correlated with each other. To leverage the label correlations and also handle the possibly massive number of labels, a common approach is to reduce the dimensionality of the label space, e.g., by projecting the label vectors to a subspace [10, 25, 21], learning a prediction model in that space, and then projecting back to the original space. However, as the label space dimensionality increases and/or the sparsity in the label matrix becomes more pronounced (i.e., very few ones), and/or if the label matrix is only partially observed, such methods tend to suffer [25] and can also become computationally prohibitive. To address these issues, we present a scalable, fully Bayesian framework for multi-label learning. Our framework is similar in spirit to the label embedding methods based on reducing the label space dimensionality [10, 21, 25]. 
However, our framework offers the following key advantages: (1) the computational cost of training our model scales in the number of ones in the label matrix, which lets our framework scale easily in cases where the label matrix is massive but sparse; (2) our likelihood model for the binary labels, based on a Bernoulli-Poisson link, models the extreme sparsity of the label matrix more realistically than the commonly employed logistic/probit link; and (3) our model is more interpretable: embeddings naturally correspond to topics, where each topic is a distribution over labels. Moreover, at test time, unlike other Bayesian methods [10], we do not need to infer the label embeddings of the test example, thereby leading to faster predictions.

In addition to the modeling flexibility that leads to a robust, interpretable, and scalable model, our framework enjoys full local conjugacy, which allows us to develop simple Gibbs sampling, as well as an Expectation Maximization (EM) algorithm for the proposed model, both of which are simple to implement in practice (and amenable to parallelization).

2 The Model

We assume that the training data are given in the form of N examples represented by a feature matrix X ∈ ℝ^{D×N}, along with their labels in a (possibly incomplete) label matrix Y ∈ {0, 1}^{L×N}. The goal is to learn a model that can predict the label vector y* ∈ {0, 1}^L for a test example x* ∈ ℝ^D. We model the binary label vector y_n of the nth example by thresholding a count-valued vector m_n:

    y_n = 1(m_n ≥ 1)    (1)

which, for each individual binary label y_ln ∈ y_n, l = 1, …, L, can also be written as y_ln = 1(m_ln ≥ 1). In Eq. (1), m_n = [m_1n, …, m_Ln] ∈ ℤ^L denotes a latent count vector of size L and is assumed drawn from a Poisson:

    m_n ∼ Poisson(λ_n)    (2)

Eq. (2) denotes drawing each component of m_n independently, from a Poisson distribution, with rate equal to the corresponding component of λ_n ∈ ℝ^L_+, which is defined as

    λ_n = V u_n    (3)

Here V ∈ ℝ^{L×K}_+ and u_n ∈ ℝ^K_+ (typically K ≪ L).
Note that the K columns of V can be thought of as atoms of a label dictionary (or "topics" over labels), and u_n can be thought of as the atom weights or embedding of the label vector y_n (or "topic proportions", i.e., how active each of the K topics is for example n). Also note that Eqs. (1)-(3) can be combined as

    y_n = f(λ_n) = f(V u_n)    (4)

where f jointly denotes drawing the latent counts m_n from a Poisson (Eq. 2) with rate λ_n = V u_n, followed by thresholding m_n at 1 (Eq. 1). In particular, note that marginalizing out m_n from Eq. (1) leads to y_n ∼ Bernoulli(1 − exp(−λ_n)). This link function, termed the Bernoulli-Poisson link [28, 9], has also been used recently in modeling relational data with binary observations.

In Eq. (4), expressing the label vector y_n ∈ {0, 1}^L in terms of V u_n is equivalent to a low-rank assumption on the L × N label matrix Y = [y_1 … y_N]: Y = f(VU), where V = [v_1 … v_K] ∈ ℝ^{L×K}_+ and U = [u_1 … u_N] ∈ ℝ^{K×N}_+, which are modeled as follows:

    v_k ∼ Dirichlet(η 1_L)    (5)
    u_kn ∼ Gamma(r_k, p_kn (1 − p_kn)⁻¹)    (6)
    p_kn = σ(w_k⊤ x_n)    (7)
    w_k ∼ Nor(0, Γ)    (8)

where σ(z) = 1/(1 + exp(−z)), Γ = diag(τ₁⁻¹, …, τ_D⁻¹), and the hyperparameters r_k, τ₁, …, τ_D are given improper gamma priors. Since the columns of V are Dirichlet-drawn, they correspond to distributions (i.e., topics) over the labels. It is important to note here that the dependence of the label embedding u_n = {u_kn}_{k=1}^K on the feature vector x_n is achieved by making the scale parameter of the gamma prior on {u_kn}_{k=1}^K depend on {p_kn}_{k=1}^K, which in turn depends on the features x_n via the regression weights W = {w_k}_{k=1}^K (Eqs. 6 and 8).

Figure 1: Graphical model for the generative process of the label vector. Hyperpriors omitted for brevity.
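The generative process in Eqs. (1)-(8) is straightforward to simulate. The sketch below uses NumPy with arbitrary small sizes, takes Γ = I for simplicity, and reads the second Gamma argument in Eq. (6) as a scale parameter (an assumption about the paper's convention):

```python
import numpy as np

rng = np.random.default_rng(4)
L, K, D, N = 50, 8, 20, 100        # labels, topics, features, examples
eta = 0.1

X = rng.standard_normal((D, N))                 # feature matrix
V = rng.dirichlet(eta * np.ones(L), size=K).T   # Eq. (5): L x K, columns are label topics
W = rng.standard_normal((D, K))                 # Eq. (8) with Gamma = I (our simplification)
r = np.ones(K)                                  # gamma shape hyperparameters r_k

P = 1.0 / (1.0 + np.exp(-(W.T @ X)))            # Eq. (7): p_kn = sigma(w_k^T x_n)
U = rng.gamma(shape=r[:, None], scale=P / (1 - P))   # Eq. (6), scale read as p/(1-p)
Lam = V @ U                                     # Eq. (3): rates lambda_n = V u_n
M = rng.poisson(Lam)                            # Eq. (2): latent counts m_n
Y = (M >= 1).astype(int)                        # Eq. (1): binary label matrix

print(Y.shape, Y.mean())                        # label matrix and its density
```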
2.1 Computational Scalability in the Number of Positive Labels

For the Bernoulli-Poisson likelihood model for binary labels, we can write the conditional posterior [28, 9] of the latent count vector m_n as

    (m_n | y_n, V, u_n) ∼ y_n ⊙ Poisson₊(V u_n)    (9)

where Poisson₊ denotes the zero-truncated Poisson distribution, with support only on the positive integers, and ⊙ denotes the element-wise product. Eq. (9) shows that the zeros in y_n result in the corresponding elements of the latent count vector m_n being zero almost surely (i.e., with probability one). As shown in Section 3, the sufficient statistics of the model parameters do not depend on latent counts that are equal to zero; such latent counts can simply be ignored during inference. This aspect leads to substantial computational savings in our model, making it scale only in the number of positive labels in the label matrix. In the rest of the exposition, we will refer to our model as BMLPL, for Bayesian Multi-label Learning via Positive Labels.

2.2 Asymmetric Link Function

In addition to the computational advantage (i.e., scaling in the number of non-zeros in the label matrix), another appealing aspect of our multi-label learning framework is that the Bernoulli-Poisson likelihood is also a more realistic model for highly sparse binary data than the commonly used logistic/probit likelihood. To see this, note that the Bernoulli-Poisson model defines the probability of an observation y being one as p(y = 1 | λ) = 1 − exp(−λ), where λ is the positive rate parameter. As λ increases, p(y = 1 | λ) climbs from 0.5 towards 1 much more slowly than it drops from 0.5 towards 0 as λ decreases. This behavior of the Bernoulli-Poisson link encourages far fewer nonzeros in the observed data than zeros.
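The asymmetry can be quantified through the inverse links, i.e. the amount of "rate" needed to reach a given success probability; a quick numerical illustration of our own, contrasting with the logistic link:

```python
import numpy as np

def lam_of_p(p):
    """Inverse Bernoulli-Poisson link: the rate lam with p = 1 - exp(-lam)."""
    return -np.log1p(-p)

def z_of_p(p):
    """Inverse logistic link (logit): p = sigma(z)."""
    return np.log(p / (1 - p))

# The logistic link is symmetric around p = 0.5 ...
assert np.isclose(z_of_p(0.9), -z_of_p(0.1))

# ... whereas the Bernoulli-Poisson link is not: it reaches p = 0.5 with the
# modest rate ln 2, but pushing p from 0.5 to 0.9 costs more than twice that.
assert np.isclose(lam_of_p(0.5), np.log(2))
assert lam_of_p(0.9) - lam_of_p(0.5) > 2 * lam_of_p(0.5)
```

In other words, under this link, moderate rates already concentrate probability mass on y = 0, which matches the skew of sparse label matrices.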
On the other hand, the logistic and probit links approach both 0 and 1 at the same rate, and therefore cannot model the sparsity/skewness of the label matrix the way the Bernoulli-Poisson link can. Therefore, in contrast to multi-label learning models based on a logistic/probit likelihood function or on standard loss functions such as the hinge loss [25, 14] for the binary labels, our proposed model provides better robustness against label imbalance.

3 Inference

A key aspect of our framework is that the conditional posteriors of all the model parameters are available in closed form, using data augmentation strategies that we describe below. In particular, since we model the binary label matrix as thresholded counts, we are also able to leverage some of the inference methods proposed for Bayesian matrix factorization of count-valued data [27] to derive an efficient Gibbs sampler for our model. Inference in our model requires estimating V ∈ R_+^{L×K}, W ∈ R^{D×K}, U ∈ R_+^{K×N}, and the hyperparameters of the model. As we will see below, the latent count vectors {m_n}_{n=1}^N (which are functions of V and U) provide sufficient statistics for the model parameters. Each element of m_n (if the corresponding element in y_n is one) is drawn from a truncated Poisson distribution

m_ln ∼ Poisson+(V_{l,:} u_n) = Poisson+(λ_ln)    (10)

where V_{l,:} denotes the l-th row of V and λ_ln = Σ_{k=1}^K λ_kln = Σ_{k=1}^K v_lk u_kn. Thus we can also write m_ln = Σ_{k=1}^K m_lkn, where m_lkn ∼ Poisson+(λ_kln) = Poisson+(v_lk u_kn). On the other hand, if y_ln = 0 then m_ln = 0 with probability one (Eq. 9), and therefore need not be sampled, because it does not affect the sufficient statistics of the model parameters. Using the equivalence of the Poisson and multinomial distributions [27], we can express the decomposition m_ln = Σ_{k=1}^K m_lkn as a draw from a multinomial

[m_l1n, …, m_lKn] ∼ Mult(m_ln; ζ_l1n, …, ζ_lKn)    (11)

where ζ_lkn = v_lk u_kn / Σ_{k=1}^K v_lk u_kn.
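The sub-step of Eqs. (10)-(11) for a single (l, n) entry can be sketched as follows. This is an illustrative sketch under our own naming; the rejection sampler for the truncated Poisson is the simplest possible choice, not an optimized one.

```python
import numpy as np

def sample_truncated_poisson(lam, rng):
    """Draw from Poisson+(lam) (support {1, 2, ...}) by rejection.
    Fine for moderate rates; not optimized for very small lam."""
    while True:
        m = rng.poisson(lam)
        if m >= 1:
            return m

def sample_counts_and_allocate(y_ln, v_l, u_n, rng=None):
    """One Gibbs sub-step for entry (l, n): if y_ln = 1, draw
    m_ln ~ Poisson+(sum_k v_lk u_kn) (Eq. 10) and split it over the
    K factors via the multinomial of Eq. (11); if y_ln = 0, the
    counts are zero with probability one (Eq. 9)."""
    rng = np.random.default_rng(rng)
    K = len(v_l)
    if y_ln == 0:
        return np.zeros(K, dtype=int)
    lam = v_l * u_n                    # per-factor rates lambda_kln
    m_ln = sample_truncated_poisson(lam.sum(), rng)
    zeta = lam / lam.sum()             # allocation probabilities zeta_lkn
    return rng.multinomial(m_ln, zeta)
```

The y_ln = 0 branch is where the scalability of Section 2.1 comes from: nothing is sampled for zero labels.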
This allows us to exploit the Dirichlet-multinomial conjugacy and helps in designing efficient Gibbs sampling and EM algorithms for inference in our model. As discussed before, the computational cost of both algorithms scales in the number of ones in the label matrix Y, which makes our model especially appealing for multi-label learning problems where the label matrix is massive but highly sparse.

3.1 Gibbs Sampling

Gibbs sampling for our model proceeds as follows.

Sampling V: Using Eq. 11 and the Dirichlet-multinomial conjugacy, each column of V ∈ R_+^{L×K} can be sampled as

v_k ∼ Dirichlet(η + m_1k, …, η + m_Lk)    (12)

where m_lk = Σ_n m_lnk, for all l = 1, …, L.

Sampling U: Using the gamma-Poisson conjugacy, each entry of U ∈ R_+^{K×N} can be sampled as

u_kn ∼ Gamma(r_k + m_kn, p_kn)    (13)

where m_kn = Σ_l m_lnk and p_kn = σ(w_k^⊤ x_n).

Sampling W: Since m_kn = Σ_l m_lnk and m_lnk ∼ Poisson+(v_lk u_kn), p(m_kn | u_kn) is also Poisson. Further, since p(u_kn | r, p_kn) is gamma, we can integrate out u_kn from p(m_kn | u_kn), which gives m_kn ∼ NegBin(r_k, p_kn), where NegBin(·, ·) denotes the negative binomial distribution. Although the negative binomial is not conjugate to the Gaussian prior on w_k, we leverage the Pólya-Gamma data augmentation strategy [17] to “Gaussianify” the negative binomial likelihood. Doing this, we are able to derive closed-form Gibbs sampling updates for w_k, k = 1, …, K. The Pólya-Gamma (PG) strategy is based on sampling a set of auxiliary variables, one for each observation (which, in the context of sampling w_k, are the latent counts m_kn). For sampling w_k, we draw N Pólya-Gamma random variables [17] ω_k1, …, ω_kN (one for each training example) as

ω_kn ∼ PG(m_kn + r_k, w_k^⊤ x_n)    (14)

where PG(·, ·) denotes the Pólya-Gamma distribution [17]. Given these PG variables, the posterior distribution of w_k is Gaussian, Nor(μ_wk, Σ_wk), where

Σ_wk = (X Ω_k X^⊤ + Γ^{-1})^{-1}    (15)
μ_wk = Σ_wk X κ_k    (16)

with Ω_k = diag(ω_k1, …, ω_kN) and κ_k = [(m_k1 − r_k)/2, …, (m_kN − r_k)/2]^⊤.
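Given Pólya-Gamma draws ω_k, the Gaussian conditional of Eqs. (15)-(16) is a direct linear-algebra computation. A minimal sketch (our own function name and example values; a real implementation would exploit sparsity in X rather than forming dense matrices):

```python
import numpy as np

def wk_posterior(X, omega_k, m_k, r_k, tau):
    """Gaussian conditional for w_k given PG draws, Eqs. (15)-(16).
    X: D x N features; omega_k: N PG variables; m_k: N latent counts;
    tau: D prior precisions, so Gamma^{-1} = diag(tau)."""
    Omega = np.diag(omega_k)
    Sigma = np.linalg.inv(X @ Omega @ X.T + np.diag(tau))   # Eq. (15)
    kappa = (m_k - r_k) / 2.0
    mu = Sigma @ (X @ kappa)                                # Eq. (16)
    return mu, Sigma
```

A draw from Nor(mu, Sigma) then completes the w_k step of the sampler.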
Sampling the hyperparameters: The hyperparameter r_k is given a gamma prior and can be sampled easily. The other hyperparameters τ_1, …, τ_D are estimated using Type-II maximum likelihood estimation [22].

3.2 Expectation Maximization

The Gibbs sampler described in Section 3.1 is efficient and has a computational complexity that scales in the number of ones in the label matrix. To further scale up the inference, we also develop an efficient Expectation-Maximization (EM) algorithm for our model. In the E-step, we need to compute the expectations of the local variables U, the latent counts, and the Pólya-Gamma variables ω_k1, …, ω_kN, for k = 1, …, K. These expectations are available in closed form and can thus easily be computed. In particular, the expectation of each Pólya-Gamma variable ω_kn is very efficient to compute and is available in closed form [20]:

E[ω_kn] = (m_kn + r_k) / (2 w_k^⊤ x_n) · tanh(w_k^⊤ x_n / 2)    (17)

The M-step involves a maximization w.r.t. V and W, which essentially amounts to solving for their maximum-a-posteriori (MAP) estimates, which are available in closed form. In particular, as shown in [20], estimating w_k requires solving a linear system which, in our case, is of the form

S_k w_k = d_k    (18)

where S_k = X Ω_k X^⊤ + Γ^{-1} and d_k = X κ_k, with Ω_k and κ_k defined as in Section 3.1, except that the Pólya-Gamma random variables are replaced by their expectations given by Eq. 17. Note that Eq. 18 can be straightforwardly solved as w_k = S_k^{-1} d_k. However, convergence of the EM algorithm [20] does not require solving for w_k exactly in each EM iteration, and running a couple of iterations of any iterative method for linear systems suffices for this step. We use the Conjugate Gradient method [2], which also allows us to exploit the sparsity in X and Ω_k to solve this system very efficiently, even when D and N are very large.
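The two ingredients of this E/M pair are small enough to sketch directly: the PG expectation of Eq. (17) (whose c → 0 limit is b/4), and a few plain conjugate-gradient iterations for Eq. (18). Function names are ours; this is a textbook CG, not the paper's implementation.

```python
import numpy as np

def pg_mean(b, c):
    """E[omega] for omega ~ PG(b, c), as in Eq. (17): b/(2c) * tanh(c/2),
    with the limit b/4 at c = 0 handled explicitly."""
    c = np.asarray(c, dtype=float)
    c_safe = np.where(np.abs(c) < 1e-8, 1.0, c)   # avoid 0/0 in the masked branch
    return np.where(np.abs(c) < 1e-8, b / 4.0,
                    b / (2.0 * c_safe) * np.tanh(c_safe / 2.0))

def conjugate_gradient(S, d, w0, iters=2):
    """A few CG iterations for S w = d (Eq. 18); the M-step does not
    require an exact solve [20]. Assumes S is symmetric positive definite."""
    w = w0.copy()
    r = d - S @ w
    p = r.copy()
    for _ in range(iters):
        Sp = S @ p
        alpha = (r @ r) / (p @ Sp)
        w = w + alpha * p
        r_new = r - alpha * Sp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return w
```

For a 2 × 2 system, two CG iterations already recover the exact solution; in general a couple of iterations per EM step suffice, as the text states.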
Although in this paper we only use batch EM, it is possible to speed it up even further using an online version of the EM algorithm, as shown in [20]. Online EM processes the data in small minibatches and, in each EM iteration, updates the sufficient statistics of the global parameters. In our case, these sufficient statistics include S_k and d_k, for k = 1, …, K, and can be updated as

S_k^{(t+1)} = (1 − γ_t) S_k^{(t)} + γ_t X^{(t)} Ω_k^{(t)} X^{(t)⊤}
d_k^{(t+1)} = (1 − γ_t) d_k^{(t)} + γ_t X^{(t)} κ_k^{(t)}

where X^{(t)} denotes the set of examples in the current minibatch, and Ω_k^{(t)} and κ_k^{(t)} denote the quantities computed using the data from the current minibatch.

3.3 Predicting Labels for Test Examples

Predicting the label vector y* ∈ {0, 1}^L for a new test example x* ∈ R^D can be done as

p(y* = 1 | x*) = ∫ (1 − exp(−V u*)) p(u*) du*

If using Gibbs sampling, the integral above can be approximated using samples {u*^{(m)}}_{m=1}^M from the posterior of u*. It is also possible to integrate out u* (details skipped for brevity) and obtain a closed-form estimate of the probability of each label y_l* in terms of the model parameters V and W:

p(y_l* = 1 | x*) = 1 − Π_{k=1}^K [V_lk exp(w_k^⊤ x*) + 1]^{−r_k}    (19)

4 Computational Cost

Computing the latent count m_ln for each nonzero entry y_ln in Y requires computing [m_l1n, …, m_lKn], which takes O(K) time; therefore computing all the latent counts takes O(nnz(Y)K) time, which is very efficient if Y has very few nonzeros (as is true of most real-world multi-label learning problems). Estimating V, U, and the hyperparameters is relatively cheap and can be done very efficiently. The Pólya-Gamma variables, when doing Gibbs sampling, can be sampled efficiently using the methods described in [17]; and when doing EM, they can be computed even more cheaply because the Pólya-Gamma expectations, which are available in closed form (as a hyperbolic tangent), can be evaluated very efficiently [20].
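The closed-form prediction rule of Eq. (19) is a one-line vectorized computation. A minimal sketch under our own naming:

```python
import numpy as np

def predict_label_probs(x_star, V, W, r):
    """Closed-form test-time label probabilities, Eq. (19):
    p(y_l = 1 | x*) = 1 - prod_k (V_lk exp(w_k^T x*) + 1)^(-r_k).
    V: L x K topics, W: D x K regression weights, r: K gamma shapes."""
    e = np.exp(W.T @ x_star)                     # K-vector of exp(w_k^T x*)
    r = np.asarray(r, dtype=float)
    return 1.0 - np.prod((V * e[None, :] + 1.0) ** (-r[None, :]), axis=1)
```

With W = 0 and r = 1 the k-th factor contributes a factor 1/(V_lk + 1), which makes the closed form easy to check by hand.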
The most dominant step is estimating W; when doing Gibbs sampling, if done naïvely, it would take O(DK^3) time if sampling W row-wise, and O(KD^3) time if sampling it column-wise. However, when using the EM algorithm, estimating W can be done much more efficiently, e.g., using Conjugate Gradient updates, because it is not even required to solve for W exactly in each iteration of the EM algorithm [20]. Also note that since most of the parameter updates for different k = 1, …, K and n = 1, …, N are independent of each other, our Gibbs sampler and EM algorithm can be easily parallelized/block-updated.

5 Connection: Topic Models with Meta-Data

As discussed earlier, our multi-label learning framework is similar in spirit to a topic model, as the label embeddings naturally correspond to topics: each Dirichlet-drawn column v_k of the matrix V ∈ R_+^{L×K} can be seen as representing a “topic”. In fact, our model can directly be seen as a topic model [3, 27] in which we have side information associated with each document (e.g., document features). For example, each document y_n ∈ {0, 1}^L (in a bag-of-words representation with a vocabulary of size L) may also have some meta-data x_n ∈ R^D associated with it. Our model can therefore also be used to perform topic modeling of text documents with such meta-data [15, 12, 29, 19] in a robust and scalable manner.

6 Related Work

Despite a significant number of methods proposed in recent years, learning from multi-label data continues to be an active area of research, especially due to the recent surge of interest in learning when the output space (i.e., the number of labels) is massive.
To handle the huge dimensionality of the label space, a common approach is to embed the labels in a lower-dimensional space, e.g., using methods such as Canonical Correlation Analysis or other methods for jointly embedding feature and label vectors [26, 5, 23], Compressed Sensing [8, 10], or by assuming that the matrix consisting of the weight vectors of all the labels is low-rank [25]. Another interesting line of work on label embedding methods makes use of random projections to reduce the label space dimensionality [11, 16], or uses methods such as multitask learning (each label is a task). Our proposed framework is most similar in spirit to the aforementioned class of label-embedding-based methods (we compare with some of these in our experiments). In contrast to these methods, our framework reduces the label-space dimensionality via a nonlinear mapping (Section 2), has accompanying inference algorithms that scale in the number of positive labels (Section 2.1), has an underlying generative model that more realistically captures the imbalanced nature of the labels in the label matrix (Section 2.2), can deal with missing labels, and is easily parallelizable. Also, the connection to topic models provides a nice interpretability of the results, which is usually not possible with the other methods (e.g., in our model, the columns of the matrix V can be seen as a set of topics over the labels; in Section 7.2, we show an experiment on this). Moreover, although in this paper we have focused on the multi-label learning problem, our framework can also be applied to multiclass problems via the one-vs-all reduction, in which case the label matrix is usually very sparse (each column of the label matrix represents the labels of a single one-vs-all binary classification problem). Finally, although not a focus of this paper, some other important aspects of the multi-label learning problem have also been addressed in recent work.
For example, fast prediction at test time is an important concern when the label space is massive. To deal with this, some recent work focuses on methods that only incur a logarithmic cost (in the number of labels) at test time [1, 18], e.g., by inferring and leveraging a tree structure over the labels.

7 Experiments

We evaluate the proposed multi-label learning framework on four benchmark multi-label data sets: bibtex, delicious, compphys, and eurlex [25], with their statistics summarized in Table 1. The data sets used in our experiments have feature and label dimensionalities that range from a few hundred to several thousand. In addition, the feature and/or label matrices are also quite sparse.

Table 1: Statistics of the data sets used in our experiments. L̄ denotes the average number of positive labels per example; D̄ denotes the average number of nonzero features per example.

Data set    D       L     N_train  L̄ (train)  D̄ (train)  N_test  L̄ (test)  D̄ (test)
bibtex      1836    159   4880     2.40       68.74      2515    2.40      68.50
delicious   500     983   12920    19.03      18.17      3185    19.00     18.80
compphys    33,284  208   161      9.80       792.78     40      11.83     899.20
eurlex      5000    3993  17413    5.30       236.69     1935    5.32      240.96

We compare the proposed model BMLPL with four state-of-the-art methods. All of these methods, just like ours, are based on the assumption that the label vectors live in a low-dimensional space.

• CPLST: Conditional Principal Label Space Transformation [5]. CPLST is based on embedding the label vectors conditioned on the features.
• BCS: Bayesian Compressed Sensing for multi-label learning [10]. BCS is a Bayesian method that applies compressed sensing to the labels [8].
• WSABIE: Assumes that the feature as well as the label vectors live in a low-dimensional space; the model is based on optimizing a weighted approximate ranking loss [23].
• LEML: Low-rank Empirical risk minimization for Multi-label Learning [25].
For LEML, we report the best results across the three loss functions (squared, logistic, hinge) they propose. Table 2 shows the results, where we report the Area Under the ROC Curve (AUC) for each method on all the data sets. For each method, as done in [25], we vary the label-space dimensionality from 20% to 100% of L and report the best results. For BMLPL, both Gibbs sampling and EM based inference perform comparably (though EM runs much faster than Gibbs); here we report results obtained with EM inference only (Section 7.4 provides another comparison between these two inference methods). The EM algorithms were run for 1000 iterations and converged in all cases. As shown in Table 2, in almost all cases the proposed BMLPL model performs better than the other methods (except on the compphys data set, where the AUC is slightly worse than LEML). The better performance of our model justifies the flexible Bayesian formulation and is also evidence of the robustness provided by the asymmetric link function against sparsity and label imbalance in the label matrix (note that the data sets we use have very sparse label matrices).

Table 2: Comparison of the various methods in terms of AUC scores on all the data sets. Note: CPLST and BCS were not feasible to run on the eurlex data, so we are unable to report those numbers here.

Data set    CPLST   BCS     WSABIE  LEML    BMLPL
bibtex      0.8882  0.8614  0.9182  0.9040  0.9210
delicious   0.8834  0.8000  0.8561  0.8894  0.8950
compphys    0.7806  0.7884  0.8212  0.9274  0.9211
eurlex      -       -       0.8651  0.9456  0.9520

7.1 Results with Missing Labels

Our generative model for the label matrix can also handle missing labels (the missing labels may include both zeros and ones).
We perform an experiment on two of the data sets, bibtex and compphys, where only 20% of the labels from the label matrix are revealed (note that, of all these revealed labels, our model uses only the positive ones), and compare our model with LEML and BCS (both are capable of handling missing labels). The results are shown in Table 3. For each method, we set K = 0.4L. As the results show, our model yields better results than the competing methods even in the presence of missing labels.

Table 3: AUC scores with only 20% of the labels observed.

Data set   BCS     LEML    BMLPL
bibtex     0.7871  0.8332  0.8420
compphys   0.6442  0.7964  0.8012

7.2 Qualitative Analysis: Topic Modeling on Eurlex Data

Since in our model each column of the L × K matrix V represents a distribution (i.e., a “topic”) over the labels, to assess its ability to discover meaningful topics we run an experiment on the Eurlex data with K = 20 and look at each column of V. The Eurlex data consists of 3993 labels (each of which is a tag; a document can have a subset of the tags), so each column of V is of that size. In Table 4, we show five of the topics (and the top five labels in each topic, based on the magnitude of the entries in the corresponding column of V). As shown in Table 4, our model is able to discover clear and meaningful topics from the Eurlex data, which shows its usefulness as a topic model when each document y_n ∈ {0, 1}^L has features in the form of meta-data x_n ∈ R^D associated with it.

Table 4: Most probable words in different topics.

Topic 1 (Nuclear): nuclear safety; nuclear power station; radioactive effluent; radioactive waste; radioactive pollution
Topic 2 (Agreements): EC agreement; trade agreement; EC interim agreement; trade cooperation; EC coop. agree.
Topic 3 (Environment): environmental protection; waste management; env. monitoring; dangerous substance; pollution control measures
Topic 4 (Stats & Data): community statistics; statistical method; agri. statistics; statistics; data transmission
Topic 5 (Fishing Trade): fishing regulations; fishing agreement; fishery management; fishing area; conservation of fish stocks

7.3 Scalability w.r.t. Number of Positive Labels

To demonstrate the linear scalability in the number of positive labels, we run an experiment on the Delicious data set, varying the number of positive labels used for training the model from 20% to 100% (to simulate this, we simply treat all the other labels as zeros, so as to keep the label matrix size constant). We run each experiment for 100 iterations (using EM for the inference) and report the running time for each case. Fig. 2 (left) shows the results, which demonstrate roughly linear scalability w.r.t. the number of positive labels. This experiment is only meant as a small illustration. Note that the actual scalability will also depend on the relative values of D and L and on the sparsity of Y. In any case, the computations that involve the labels (both positive and negative) depend only on the positive labels, and this part, for our model, is clearly linear in the number of positive labels in the label matrix.

Figure 2: (Left) Scalability w.r.t. number of positive labels. (Right) Time vs. accuracy comparison for Gibbs and EM (with exact and with CG based M steps).

7.4 Gibbs Sampling vs EM

We finally show another experiment comparing Gibbs sampling and EM for our model in terms of accuracy vs. running time. We run each inference method for only 100 iterations. For EM, we try two settings: EM with an exact M step for W, and EM with an approximate M step in which we run 2 steps of conjugate gradient (CG). Fig. 2 (right) shows a plot comparing each inference method in terms of accuracy vs. running time. As Fig.
2 (right) shows, the EM algorithms (both the exact one and the one using CG) attain reasonably high AUC scores in a short amount of time, whereas Gibbs sampling takes much longer per iteration and seems to converge rather slowly. Moreover, remarkably, EM with 2 CG iterations in each M step performs comparably to EM with an exact M step, while running considerably faster. As for the Gibbs sampler, although it runs slower than the EM based inference, it should be noted that it would still be considerably faster than other fully Bayesian methods for multi-label prediction (such as BCS [10]) because it only requires evaluating the likelihoods over the positive labels in the label matrix. Moreover, the step involving the sampling of the W matrix can be made more efficient by using Cholesky decompositions, which avoid the matrix inversions needed for computing the covariance of the Gaussian posterior on w_k.

8 Discussion and Conclusion

We have presented a scalable Bayesian framework for multi-label learning. In addition to providing a flexible model for sparse label matrices, our framework is computationally attractive and can scale to massive data sets. The model is easy to implement and easy to parallelize. Both fully Bayesian inference via simple Gibbs sampling and EM based inference can be carried out in this model in a computationally efficient way. Possible future work includes developing online Gibbs and online EM algorithms to further enhance the scalability of the proposed framework to even bigger data sets. Another possible extension could be to impose label correlations more explicitly (in addition to the low-rank structure already imposed by the current model), e.g., by replacing the Dirichlet distribution on the columns of V with logistic normal distributions [4].
Because our framework allows efficiently computing the predictive distribution of the labels (as shown in Section 3.3), it can be easily extended for doing active learning on the labels [10]. Finally, although here we only focused on multi-label learning, our framework can be readily used as a robust and scalable alternative to methods that perform binary matrix factorization with side information.

Acknowledgements

This research was supported in part by ARO, DARPA, DOE, NGA and ONR.

References

[1] Rahul Agrawal, Archit Gupta, Yashoteja Prabhu, and Manik Varma. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In WWW, 2013.
[2] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, 1999.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[4] Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng, and Bo Zhang. Scalable inference for logistic-normal topic models. In NIPS, 2013.
[5] Yao-Nan Chen and Hsuan-Tien Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, 2012.
[6] Eva Gibaja and Sebastián Ventura. Multilabel learning: A review of the state of the art and ongoing research. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2014.
[7] Eva Gibaja and Sebastián Ventura. A tutorial on multilabel learning. ACM Comput. Surv., 2015.
[8] Daniel Hsu, Sham Kakade, John Langford, and Tong Zhang. Multi-label prediction via compressed sensing. In NIPS, 2009.
[9] Changwei Hu, Piyush Rai, and Lawrence Carin. Zero-truncated Poisson tensor factorization for massive binary tensors. In UAI, 2015.
[10] Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. Multilabel classification using Bayesian compressed sensing. In NIPS, 2012.
[11] Nikos Karampatziakis and Paul Mineiro. Scalable multilabel prediction via randomized methods. arXiv preprint arXiv:1502.02710, 2015.
[12] Dae I. Kim and Erik B. Sudderth.
The doubly correlated nonparametric topic model. In NIPS, 2011.
[13] Xiangnan Kong, Zhaoming Wu, Li-Jia Li, Ruofei Zhang, Philip S. Yu, Hang Wu, and Wei Fan. Large-scale multi-label learning with incomplete label assignments. In SDM, 2014.
[14] Xin Li, Feipeng Zhao, and Yuhong Guo. Conditional restricted Boltzmann machines for multi-label learning with incomplete labels. In AISTATS, 2015.
[15] David Mimno and Andrew McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In UAI, 2008.
[16] Paul Mineiro and Nikos Karampatziakis. Fast label embeddings for extremely large output spaces. In ICLR Workshop, 2015.
[17] Nicholas G. Polson, James G. Scott, and Jesse Windle. Bayesian inference for logistic models using Pólya-Gamma latent variables. Journal of the American Statistical Association, 108(504):1339-1349, 2013.
[18] Yashoteja Prabhu and Manik Varma. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD, 2014.
[19] Maxim Rabinovich and David Blei. The inverse regression topic model. In ICML, 2014.
[20] James G. Scott and Liang Sun. Expectation-maximization for logistic regression. arXiv preprint arXiv:1306.0040, 2013.
[21] Farbound Tai and Hsuan-Tien Lin. Multilabel classification with principal label space transformation. Neural Computation, 2012.
[22] Michael E. Tipping. Bayesian inference: An introduction to principles and practice in machine learning. In Advanced Lectures on Machine Learning, pages 41-62. Springer, 2004.
[23] Jason Weston, Samy Bengio, and Nicolas Usunier. WSABIE: Scaling up to large vocabulary image annotation. In IJCAI, 2011.
[24] Yan Yan, Glenn Fung, Jennifer G. Dy, and Romer Rosales. Medical coding classification by leveraging inter-code relationships. In KDD, 2010.
[25] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit S. Dhillon. Large-scale multi-label learning with missing labels. In ICML, 2014.
[26] Yi Zhang and Jeff G. Schneider.
Multi-label output codes using canonical correlation analysis. In AISTATS, 2011.
[27] M. Zhou, L. A. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.
[28] Mingyuan Zhou. Infinite edge partition models for overlapping community detection and link prediction. In AISTATS, 2015.
[29] Jun Zhu, Ni Lao, Ning Chen, and Eric P. Xing. Conditional topical coding: An efficient topic model conditioned on rich features. In KDD, 2011.
Probabilistic Line Searches for Stochastic Optimization

Maren Mahsereci and Philipp Hennig
Max Planck Institute for Intelligent Systems, Spemannstraße 38, 72076 Tübingen, Germany
[mmahsereci|phennig]@tue.mpg.de

Abstract

In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.

1 Introduction

Stochastic gradient descent (SGD) [1] is currently the standard in machine learning for the optimization of highly multivariate functions if their gradient is corrupted by noise. This includes the online or batch training of neural networks, logistic regression [2, 3] and variational models [e.g. 4, 5, 6]. In all these cases, noisy gradients arise because an exchangeable loss function L(x) of the optimization parameters x ∈ R^D, across a large dataset {d_i}_{i=1,…,M}, is evaluated only on a subset {d_j}_{j=1,…,m}:

L(x) := (1/M) Σ_{i=1}^M ℓ(x, d_i) ≈ (1/m) Σ_{j=1}^m ℓ(x, d_j) =: L̂(x),   m ≪ M.   (1)

If the indices j are i.i.d. draws from [1, M], then by the Central Limit Theorem the error L̂(x) − L(x) is unbiased and approximately normally distributed. Despite its popularity and its low cost per step, SGD has well-known deficiencies that can make it inefficient, or at least tedious to use in practice.
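The minibatch estimator of Eq. (1) can be sketched directly; the per-example least-squares loss below is our own illustrative choice, as are the function names.

```python
import numpy as np

def minibatch_estimate(loss_and_grad, x, data, m, rng=None):
    """Unbiased minibatch estimate of Eq. (1): average the per-example
    loss and gradient over m indices drawn i.i.d. from the full dataset."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(data), size=m)   # i.i.d. draws from [1, M]
    losses, grads = zip(*(loss_and_grad(x, data[j]) for j in idx))
    return float(np.mean(losses)), np.mean(grads, axis=0)

# Illustrative per-example loss: l(x, d) = 0.5 * (x - d)^2 with gradient x - d.
ell = lambda x, d: (0.5 * (x - d) ** 2, np.array([x - d]))
```

Because the indices are drawn i.i.d., the estimator is unbiased; its noise is what the probabilistic line search below is built to tolerate.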
Two main issues are that, first, the gradient itself, even without noise, is not the optimal search direction; and second, SGD requires a step size (learning rate) that has a drastic effect on the algorithm’s efficiency, is often difficult to choose well, and is virtually never optimal for each individual descent step. The former issue, adapting the search direction, has been addressed by many authors [see 7, for an overview]. Existing approaches range from lightweight ‘diagonal preconditioning’ approaches like ADAGRAD [8] and ‘stochastic meta-descent’ [9], to empirical estimates for the natural gradient [10] or the Newton direction [11], to problem-specific algorithms [12], and more elaborate estimates of the Newton direction [13]. Most of these algorithms also include an auxiliary adaptive effect on the learning rate. And Schaul et al. [14] recently provided an estimation method to explicitly adapt the learning rate from one gradient descent step to another. None of these algorithms change the size of the current descent step. Accumulating statistics across steps in this fashion requires some conservatism: If the step size is initially too large, or grows too fast, SGD can become unstable and ‘explode’, because individual steps are not checked for robustness at the time they are taken.

Figure 1: Sketch: The task of a classic line search is to tune the step taken by an optimization algorithm along a univariate search direction. The search starts at the endpoint ① of the previous line search, at t = 0. A sequence of exponentially growing extrapolation steps ②, ③ finds a point of positive gradient at ④. It is followed by interpolation steps ⑤, ⑥ until an acceptable point ⑥ is found.
Points of insufficient decrease, above the line f(0) + c1 t f′(0) (gray area), are excluded by the Armijo condition W-I, while points of steep gradient (orange areas) are excluded by the curvature condition W-II (weak Wolfe conditions in solid orange, strong extension in lighter tone). Point ⑥ is the first to fulfill both conditions, and is thus accepted.

In principle, the same problem exists in deterministic (noise-free) optimization. There, providing stability is one of several tasks of the line search subroutine. It is a standard constituent of algorithms like the classic nonlinear conjugate gradient [15] and BFGS [16, 17, 18, 19] methods [20, §3].¹ In the noise-free case, line searches are considered a solved problem [20, §3]. But the methods used in deterministic optimization are not stable to noise. They are easily fooled by even small disturbances, either becoming overly conservative or failing altogether. The reason for this brittleness is that existing line searches take a sequence of hard decisions to shrink or shift the search space. This yields efficiency, but breaks hard in the presence of noise. Section 3 constructs a probabilistic line search for noisy objectives, stabilizing optimization methods like the works cited above. As line searches only change the length, not the direction, of a step, they could be used in combination with the algorithms adapting SGD’s direction, cited above. The algorithm presented below is thus a complement, not a competitor, to these methods.

2 Connections

2.1 Deterministic Line Searches

There is a host of existing line search variants [20, §3]. In essence, though, these methods explore a univariate domain ‘to the right’ of a starting point, until an ‘acceptable’ point is reached (Figure 1). More precisely, consider the problem of minimizing L(x): R^D → R, with access to ∇L(x): R^D → R^D. At iteration i, some ‘outer loop’ chooses, at location x_i, a search direction s_i ∈ R^D (e.g.
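The univariate restriction f(t), f′(t) and the acceptance test used by a classic line search can be sketched in a few lines. This is an illustrative sketch with our own function names; W-I (Armijo) and W-II are the conditions named in the Figure 1 caption above, and the default constants are common textbook choices, not values prescribed by this paper.

```python
import numpy as np

def make_univariate(L, gradL, x, s):
    """Restrict a multivariate objective to the search line x + t*s:
    f(t) = L(x + t*s) and f'(t) = s^T gradL(x + t*s)."""
    f = lambda t: L(x + t * s)
    df = lambda t: s @ gradL(x + t * s)
    return f, df

def wolfe_accept(f0, df0, t, ft, dft, c1=1e-4, c2=0.9, strong=False):
    """Check the Wolfe conditions at candidate step size t.
    W-I (Armijo): sufficient decrease; W-II: curvature (W-IIa if strong)."""
    armijo = ft <= f0 + c1 * t * df0
    curvature = abs(dft) <= c2 * abs(df0) if strong else dft >= c2 * df0
    return armijo and curvature
```

For the quadratic f(t) = (1 − t)², the minimizer t = 1 is accepted while a tiny step like t = 0.01 fails the curvature condition, matching the intuition that W-II rules out steps that stop too early.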
by the BFGS rule, or simply si = −∇L(xi) for gradient descent). It will not be assumed that si has unit norm. The line search operates along the univariate domain x(t) = xi + tsi for t ∈R+. Along this direction it collects scalar function values and projected gradients that will be denoted f(t) = L(x(t)) and f ′(t) = s⊺ i ∇L(x(t)) ∈R. Most line searches involve an initial extrapolation phase to find a point tr with f ′(tr) > 0. This is followed by a search in [0, tr], by interval nesting or by interpolation of the collected function and gradient values, e.g. with cubic splines.2 2.1.1 The Wolfe Conditions for Termination As the line search is only an auxiliary step within a larger iteration, it need not find an exact root of f ′; it suffices to find a point ‘sufficiently’ close to a minimum. The Wolfe [21] conditions are a widely accepted formalization of this notion; they consider t acceptable if it fulfills f(t) ≤f(0) + c1tf ′(0) (W-I) and f ′(t) ≥c2f ′(0) (W-II), (2) using two constants 0 ≤c1 < c2 ≤1 chosen by the designer of the line search, not the user. W-I is the Armijo [22], or sufficient decrease condition. It encodes that acceptable functions values should lie below a linear extrapolation line of slope c1f ′(0). W-II is the curvature condition, demanding 1In these algorithms, another task of the line search is to guarantee certain properties of surrounding estimation rule. In BFGS, e.g., it ensures positive definiteness of the estimate. This aspect will not feature here. 2This is the strategy in minimize.m by C. Rasmussen, which provided a model for our implementation. At the time of writing, it can be found at http://learning.eng.cam.ac.uk/carl/code/minimize/minimize.m 2 −10 1 ρ(t) 0 0.5 1 0 0.2 0.4 0.6 0.8 1 distance t in line search direction pWolfe(t) weak strong 0 1 pb(t) 0 1 pa(t) Œ Ž   › 5.5 6 6.5 f(t) Figure 2: Sketch of a probabilistic line search. As in Fig. 
1, the algorithm performs extrapolation and interpolation steps (numbered markers), but receives unreliable, noisy function and gradient values. These are used to construct a GP posterior (top: solid posterior mean, thin lines at 2 standard deviations, local pdf marginal as shading, three dashed sample paths). This implies a bivariate Gaussian belief (§3.3) over the validity of the weak Wolfe conditions (middle three plots: p_a(t) is the marginal for W-I, p_b(t) for W-II, ρ(t) their correlation). Points are considered acceptable if their joint probability p^Wolfe(t) (bottom) is above a threshold (gray). An approximation (§3.3.1) to the strong Wolfe conditions is shown dashed.

a decrease in slope. The choice c₁ = 0 accepts any value below f(0), while c₁ = 1 rejects all points for convex functions. For the curvature condition, c₂ = 0 only accepts points with f′(t) ≥ 0, while c₂ = 1 accepts any point of greater slope than f′(0). W-I and W-II are known as the weak form of the Wolfe conditions. The strong form replaces W-II with |f′(t)| ≤ c₂|f′(0)| (W-IIa). This guards against accepting points of low function value but large positive gradient. Figure 1 shows a conceptual sketch illustrating the typical process of a line search, and the weak and strong Wolfe conditions. The exposition in §3.3 will initially focus on the weak conditions, which can be precisely modeled probabilistically. Section 3.3.1 then adds an approximate treatment of the strong form.

2.2 Bayesian Optimization

A recently blossoming, sample-efficient approach to global optimization revolves around modeling the objective f with a probability measure p(f), usually a Gaussian process (GP). Searching for extrema, evaluation points are then chosen by a utility functional u[p(f)]. Our line search borrows the idea of a Gaussian process surrogate, and a popular utility, expected improvement [23]. Bayesian optimization methods are often computationally expensive, and thus ill-suited for a cost-sensitive task like a line search.
But since line searches are governors more than information extractors, the kind of sample-efficiency expected of a Bayesian optimizer is not needed. The following sections develop a lightweight algorithm which adds only minor computational overhead to stochastic optimization.

3 A Probabilistic Line Search

We now consider minimizing y(t) = L̂(x(t)) from Eq. (1). That is, the algorithm can access only noisy function values and gradients y_t, y′_t at location t, with Gaussian likelihood

\[ p(y_t, y'_t \mid f) = \mathcal{N}\!\left( \begin{bmatrix} y_t \\ y'_t \end{bmatrix}; \begin{bmatrix} f(t) \\ f'(t) \end{bmatrix}, \begin{bmatrix} \sigma_f^2 & 0 \\ 0 & \sigma_{f'}^2 \end{bmatrix} \right). \tag{3} \]

The Gaussian form is supported by the Central Limit argument at Eq. (1); see §3.4 regarding the estimation of the variances σ_f², σ_f′². Our algorithm has three main ingredients: a robust yet lightweight Gaussian process surrogate on f(t) facilitating analytic optimization; a simple Bayesian optimization objective for exploration; and a probabilistic formulation of the Wolfe conditions as a termination criterion.

3.1 Lightweight Gaussian Process Surrogate

We model information about the objective in a probability measure p(f). There are two requirements on such a measure: First, it must be robust to irregularity of the objective. And second, it must allow analytic computation of discrete candidate points for evaluation, because a line search should not itself call yet another optimization subroutine. Both requirements are fulfilled by a once-integrated Wiener process, i.e. a zero-mean Gaussian process prior p(f) = GP(f; 0, k) with covariance function

\[ k(t, t') = \theta^2 \left[ \tfrac{1}{3} \min{}^3(\tilde{t}, \tilde{t}') + \tfrac{1}{2}\, |t - t'| \min{}^2(\tilde{t}, \tilde{t}') \right]. \tag{4} \]

Here t̃ := t + τ and t̃′ := t′ + τ denote a shift by a constant τ > 0. This ensures the kernel is positive semi-definite; the precise value of τ is irrelevant, as the algorithm only considers positive values of t (our implementation uses τ = 10). See §3.4 regarding the scale θ². With the likelihood of Eq. (3), this prior gives rise to a GP posterior whose mean function is a cubic spline³ [25].
We note in passing that regression on f and f′ from N observations of pairs (y_t, y′_t) can be formulated as a filter [26] and thus performed in O(N) time. However, since a line search typically collects fewer than 10 data points, generic GP inference, using a Gram matrix, has virtually the same, low cost. Because Gaussian measures are closed under linear maps [27, §10], Eq. (4) implies a Wiener process (linear spline) model on f′:

\[ p(f, f') = \mathcal{GP}\!\left( \begin{bmatrix} f \\ f' \end{bmatrix}; 0, \begin{bmatrix} k & k^{\partial} \\ {}^{\partial}k & {}^{\partial}k^{\partial} \end{bmatrix} \right), \tag{5} \]

with (using the indicator function 𝕀(x) = 1 if x, else 0)

\[ {}^{\partial_i}k^{\partial_j} := \frac{\partial^{i+j} k(t,t')}{\partial t^i\, \partial t'^j}, \quad \text{thus} \quad
\begin{aligned}
k^{\partial}(t, t') &= \theta^2 \left[ \mathbb{I}(t < t')\, t^2/2 + \mathbb{I}(t \ge t')\, (t t' - t'^2/2) \right] \\
{}^{\partial}k(t, t') &= \theta^2 \left[ \mathbb{I}(t' < t)\, t'^2/2 + \mathbb{I}(t' \ge t)\, (t t' - t^2/2) \right] \\
{}^{\partial}k^{\partial}(t, t') &= \theta^2 \min(t, t').
\end{aligned} \tag{6} \]

Given a set of evaluations (t, y, y′) (vectors, with elements t_i, y_{t_i}, y′_{t_i}) with independent likelihood (3), the posterior p(f | y, y′) is a GP with posterior mean μ and covariance k̃ as follows (subscripts denote kernel evaluations between the query point t and the vector T of observed locations):

\[ \mu(t) = \begin{bmatrix} k_{tT} \\ {}^{\partial}k_{tT} \end{bmatrix}^{\!\top} \begin{bmatrix} k_{TT} + \sigma_f^2 I & k^{\partial}_{TT} \\ {}^{\partial}k_{TT} & {}^{\partial}k^{\partial}_{TT} + \sigma_{f'}^2 I \end{bmatrix}^{-1} \begin{bmatrix} y \\ y' \end{bmatrix} =: g^{\top}(t) \begin{bmatrix} y \\ y' \end{bmatrix}, \qquad \tilde{k}(t, t') = k_{tt'} - g^{\top}(t) \begin{bmatrix} k_{Tt'} \\ {}^{\partial}k_{Tt'} \end{bmatrix}. \tag{7} \]

The posterior marginal variance will be denoted by 𝕍(t) = k̃(t, t). To see that μ is indeed piecewise cubic (i.e. a cubic spline), we note that it has at most three non-vanishing derivatives⁴, because

\[ k^{\partial^2}(t, t') = \theta^2\, \mathbb{I}(t \le t')\, (t' - t) \qquad {}^{\partial}k^{\partial^2}(t, t') = \theta^2\, \mathbb{I}(t \le t') \]
\[ k^{\partial^3}(t, t') = -\theta^2\, \mathbb{I}(t \le t') \qquad {}^{\partial}k^{\partial^3}(t, t') = 0. \tag{8} \]

This piecewise cubic form of μ is crucial for our purposes: having collected N values of f and f′, respectively, all local minima of μ can be found analytically in O(N) time in a single sweep through the 'cells' t_{i−1} < t < t_i, i = 1, …, N (here t_0 = 0 denotes the start location, where (y_0, y′_0) are 'inherited' from the preceding line search; for typical line searches N < 10, c.f. §4). In each cell, μ(t) is a cubic polynomial with at most one minimum in the cell, found by a trivial quadratic computation from the three scalars μ′(t_i), μ″(t_i), μ‴(t_i).
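The per-cell computation can be sketched as follows (a generic illustration, not the authors' code): within a cell, μ′ is quadratic, so setting it to zero and keeping the root with positive second derivative yields the cell's local minimum, if it exists. The function below takes hypothetical values of μ′, μ″, μ‴ at the left cell boundary:

```python
import math

def cell_minimum(a, b, d1, d2, d3):
    """Local minimizer of the cubic mu on the cell [a, b], given
    d1 = mu'(a), d2 = mu''(a), d3 = mu''' (constant within the cell).
    Returns t in (a, b) with mu'(t) = 0 and mu''(t) > 0, or None."""
    # mu'(a + h) = d1 + d2*h + 0.5*d3*h^2  -- solve this quadratic in h
    if abs(d3) < 1e-12:                      # mu' is linear on this cell
        if abs(d2) < 1e-12:
            return None
        roots = [-d1 / d2]
    else:
        disc = d2**2 - 2.0 * d3 * d1
        if disc < 0:
            return None
        roots = [(-d2 + sgn * math.sqrt(disc)) / d3 for sgn in (1.0, -1.0)]
    for h in roots:
        t = a + h
        if a < t < b and d2 + d3 * h > 0:    # positive curvature: a minimum
            return t
    return None

# Example: mu(t) = (t - 1)^2 on [0, 2]: mu'(0) = -2, mu'' = 2, mu''' = 0
assert abs(cell_minimum(0.0, 2.0, -2.0, 2.0, 0.0) - 1.0) < 1e-12
```

Running this over all N cells, plus one extrapolation node, yields the candidate list described below.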
This is in contrast to other GP regression models—for example the one arising from a Gaussian kernel—which give more involved posterior means whose local minima can only be found approximately. Another advantage of the cubic spline interpolant is that it does not assume the existence of higher derivatives (in contrast to the Gaussian kernel, for example), and thus reacts robustly to irregularities in the objective. In our algorithm, after each evaluation of (y_N, y′_N), we use this property to compute a short list of candidates for the next evaluation, consisting of the ≤ N local minimizers of μ(t) and one additional extrapolation node at t_max + α, where t_max is the currently largest evaluated t, and α is an extrapolation step size starting at α = 1 and doubled after each extrapolation step.

3.2 Choosing Among Candidates

The previous section described the construction of up to N + 1 discrete candidate points for the next evaluation. To decide at which of the candidate points to actually call f and f′, we make use of a popular utility from Bayesian optimization. Expected improvement [23] is the expected amount,

³ Eq. (4) can be generalized to the 'natural spline', removing the need for the constant τ [24, §6.3.1]. However, this notion is ill-defined in the case of a single observation, which is crucial for the line search.

⁴ There is no well-defined probabilistic belief over f″ and higher derivatives—sample paths of the Wiener process are almost surely non-differentiable almost everywhere [28, §2.2]. But μ(t) is always a member of the reproducing kernel Hilbert space induced by k, thus piecewise cubic [24, §6.1].
Figure 3: Curated snapshots of line searches (from MNIST experiment, §4), showing variability of the objective's shape and the decision process. Top row: GP posterior and evaluations; bottom row: approximate p^Wolfe over strong Wolfe conditions. Accepted point marked red. (Panels, left to right: constraining p^Wolfe(t); extrapolation; interpolation; immediate accept; high-noise interpolation.)

under the GP surrogate, by which the function f(t) might be smaller than a 'current best' value η (we set η = min_{i=0,…,N} μ(t_i), where t_i are the observed locations),

\[ u_{\mathrm{EI}}(t) = \mathbb{E}_{p(f_t \mid y, y')}\!\left[ \min\{0, \eta - f(t)\} \right] = \frac{\eta - \mu(t)}{2} \left[ 1 + \operatorname{erf}\!\left( \frac{\eta - \mu(t)}{\sqrt{2 \mathbb{V}(t)}} \right) \right] + \sqrt{\frac{\mathbb{V}(t)}{2\pi}}\, \exp\!\left( -\frac{(\eta - \mu(t))^2}{2 \mathbb{V}(t)} \right). \tag{9} \]

The next evaluation point is chosen as the candidate maximizing this utility, multiplied by the probability for the Wolfe conditions to be fulfilled, which is derived in the following section.

3.3 Probabilistic Wolfe Conditions for Termination

The key observation for a probabilistic extension of W-I and W-II is that they are positivity constraints on two variables a_t, b_t that are both linear projections of the (jointly Gaussian) variables f and f′:

\[ \begin{bmatrix} a_t \\ b_t \end{bmatrix} = \begin{bmatrix} 1 & c_1 t & -1 & 0 \\ 0 & -c_2 & 0 & 1 \end{bmatrix} \begin{bmatrix} f(0) \\ f'(0) \\ f(t) \\ f'(t) \end{bmatrix} \ge 0. \tag{10} \]

The GP of Eq. (5) on f thus implies, at each value of t, a bivariate Gaussian distribution

\[ p(a_t, b_t) = \mathcal{N}\!\left( \begin{bmatrix} a_t \\ b_t \end{bmatrix}; \begin{bmatrix} m_t^a \\ m_t^b \end{bmatrix}, \begin{bmatrix} C_t^{aa} & C_t^{ab} \\ C_t^{ba} & C_t^{bb} \end{bmatrix} \right), \tag{11} \]

with

\[ m_t^a = \mu(0) - \mu(t) + c_1 t \mu'(0) \qquad \text{and} \qquad m_t^b = \mu'(t) - c_2 \mu'(0), \tag{12} \]

and

\[ \begin{aligned}
C_t^{aa} &= \tilde{k}_{00} + (c_1 t)^2\, {}^{\partial}\tilde{k}^{\partial}_{00} + \tilde{k}_{tt} + 2\left[ c_1 t\, (\tilde{k}^{\partial}_{00} - \tilde{k}^{\partial}_{0t}) - \tilde{k}_{0t} \right] \\
C_t^{bb} &= c_2^2\, {}^{\partial}\tilde{k}^{\partial}_{00} - 2 c_2\, {}^{\partial}\tilde{k}^{\partial}_{0t} + {}^{\partial}\tilde{k}^{\partial}_{tt} \\
C_t^{ab} = C_t^{ba} &= -c_2\, (\tilde{k}^{\partial}_{00} + c_1 t\, {}^{\partial}\tilde{k}^{\partial}_{00}) + (1 + c_2)\, \tilde{k}^{\partial}_{0t} + c_1 t\, {}^{\partial}\tilde{k}^{\partial}_{0t} - \tilde{k}^{\partial}_{tt}.
\end{aligned} \]
(13)

The quadrant probability p_t^Wolfe = p(a_t > 0 ∧ b_t > 0) for the Wolfe conditions to hold is an integral over a bivariate normal probability,

\[ p_t^{\mathrm{Wolfe}} = \int_{-\frac{m_t^a}{\sqrt{C_t^{aa}}}}^{\infty} \int_{-\frac{m_t^b}{\sqrt{C_t^{bb}}}}^{\infty} \mathcal{N}\!\left( \begin{bmatrix} a \\ b \end{bmatrix}; \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & \rho_t \\ \rho_t & 1 \end{bmatrix} \right) \mathrm{d}a\, \mathrm{d}b, \tag{14} \]

with correlation coefficient ρ_t = C_t^{ab} / √(C_t^{aa} C_t^{bb}). It can be computed efficiently [29], using readily available code⁵ (on a laptop, one evaluation of p_t^Wolfe costs about 100 microseconds; each line search requires fewer than 50 such calls). The line search computes this probability for all evaluation nodes, after each evaluation. If any of the nodes fulfills the Wolfe conditions with p_t^Wolfe > c_W, greater than some threshold 0 < c_W ≤ 1, it is accepted and returned. If several nodes simultaneously fulfill this requirement, the t of the lowest μ(t) is returned. Section 3.4 below motivates fixing c_W = 0.3.

⁵ e.g. http://www.math.wsu.edu/faculty/genz/software/matlab/bvn.m

3.3.1 Approximation for Strong Conditions

As noted in Section 2.1.1, deterministic optimizers tend to use the strong Wolfe conditions, which use |f′(0)| and |f′(t)|. A precise extension of these conditions to the probabilistic setting is numerically taxing, because the distribution over |f′| is a non-central χ-distribution, requiring customized computations. However, a straightforward variation of (14) captures the spirit of the strong Wolfe conditions, that large positive derivatives should not be accepted: Assuming f′(0) < 0 (i.e. that the search direction is a descent direction), the strong second Wolfe condition can be written exactly as

\[ 0 \le b_t = f'(t) - c_2 f'(0) \le -2 c_2 f'(0). \tag{15} \]

The value −2c₂f′(0) is bounded to 95% confidence by

\[ -2 c_2 f'(0) \lesssim 2 c_2 \left( |\mu'(0)| + 2 \sqrt{\mathbb{V}'(0)} \right) =: \bar{b}. \tag{16} \]

Hence, an approximation to the strong Wolfe conditions can be reached by replacing the infinite upper integration limit on b in Eq. (14) with (b̄ − m_t^b)/√(C_t^{bb}). The effect of this adaptation, which adds no overhead to the computation, is shown in Figure 2 as a dashed line.
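A minimal illustration of the quadrant probability in Eq. (14), reducing the bivariate integral to a one-dimensional quadrature over the conditional Gaussian (a standard-library sketch; production code would use a dedicated routine such as Genz's bvn.m cited in the text):

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_wolfe(ma, mb, Caa, Cbb, Cab, n=2000):
    """Quadrant probability p(a_t > 0 and b_t > 0) of Eq. (14),
    via p(a > x0, b > y0) = int phi(x) * P(b > y0 | a = x) dx."""
    rho = Cab / math.sqrt(Caa * Cbb)
    x0 = -ma / math.sqrt(Caa)            # lower integration limit for a
    y0 = -mb / math.sqrt(Cbb)            # lower integration limit for b
    s = math.sqrt(max(1.0 - rho * rho, 1e-12))
    lo, hi = x0, max(x0 + 10.0, 8.0)     # truncate the upper Gaussian tail
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        total += w * norm_pdf(x) * (1.0 - norm_cdf((y0 - rho * x) / s))
    return total * h

# Sanity check: zero means, unit variances, no correlation -> exactly 1/4
assert abs(p_wolfe(0.0, 0.0, 1.0, 1.0, 0.0) - 0.25) < 1e-3
```

The conditional factorization used here (b | a is Gaussian with mean ρa and variance 1 − ρ²) is exact; only the outer integral is approximated numerically.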
3.4 Eliminating Hyper-parameters

As a black-box inner loop, the line search should not require any tuning by the user. The preceding sections introduced six so-far undefined parameters: c₁, c₂, c_W, θ, σ_f, σ_f′. We will now show that c₁, c₂, c_W can be fixed by hard design decisions; θ can be eliminated by standardizing the optimization objective within the line search; and the noise levels can be estimated at runtime with low overhead for batch objectives of the form in Eq. (1). The result is a parameter-free algorithm that effectively removes the one most problematic parameter from SGD—the learning rate.

Design Parameters c₁, c₂, c_W. Our algorithm inherits the Wolfe thresholds c₁ and c₂ from its deterministic ancestors. We set c₁ = 0.05 and c₂ = 0.8. This is a standard setting that yields a 'lenient' line search, i.e. one that accepts most descent points. The rationale is that the stochastic aspect of SGD is not always problematic, but can also be helpful through a kind of 'annealing' effect. The acceptance threshold c_W is a new design parameter arising only in the probabilistic setting. We fix it to c_W = 0.3. To motivate this value, first note that in the noise-free limit, all values 0 < c_W < 1 are equivalent, because p^Wolfe then switches discretely between 0 and 1 upon observation of the function. A back-of-the-envelope computation (left out for space), assuming only two evaluations at t = 0 and t = t₁ and the same fixed noise level on f and f′ (which then cancels out), shows that function values barely fulfilling the conditions, i.e. a_{t₁} = b_{t₁} = 0, can have p^Wolfe ∼ 0.2, while function values at a_{t₁} = b_{t₁} = −ϵ for ϵ → 0 with 'unlucky' evaluations (both function and gradient values one standard deviation from the true value) can achieve p^Wolfe ∼ 0.4. The choice c_W = 0.3 balances the two competing desiderata of precision and recall. Empirically (Fig. 3), we rarely observed values of p^Wolfe close to this threshold.
Even at high evaluation noise, a function evaluation typically either clearly rules out the Wolfe conditions, or lifts p^Wolfe well above the threshold.

Scale θ. The parameter θ of Eq. (4) simply scales the prior variance. It can be eliminated by scaling the optimization objective: We set θ = 1 and scale y_i ← (y_i − y_0)/|y′_0| and y′_i ← y′_i/|y′_0| within the code of the line search. This gives y(0) = 0 and y′(0) = −1, and typically ensures that the objective ranges in the single digits across 0 < t < 10, where most line searches take place. The division by |y′_0| causes a non-Gaussian disturbance, but this does not seem to have a notable empirical effect.

Noise Scales σ_f, σ_f′. The likelihood (3) requires standard deviations for the noise on both function values (σ_f) and gradients (σ_f′). One could attempt to learn these across several line searches. However, in exchangeable models, as captured by Eq. (1), the variance of the loss and its gradient can be estimated directly within the batch, at low computational overhead—an approach already advocated by Schaul et al. [14]. We collect the empirical statistics

\[ \hat{S}(x) := \frac{1}{m} \sum_{j=1}^{m} \ell^2(x, y_j), \qquad \widehat{\nabla S}(x) := \frac{1}{m} \sum_{j=1}^{m} \nabla\ell(x, y_j)^{.2} \tag{17} \]

(where ^{.2} denotes the element-wise square) and estimate, at the beginning of a line search from x_k,

\[ \sigma_f^2 = \frac{1}{m-1} \left[ \hat{S}(x_k) - \hat{\mathcal{L}}(x_k)^2 \right] \qquad \text{and} \qquad \sigma_{f'}^2 = s_i^{.2\top} \cdot \frac{1}{m-1} \left[ \widehat{\nabla S}(x_k) - \left( \nabla\hat{\mathcal{L}}(x_k) \right)^{.2} \right]. \tag{18} \]

This amounts to the cautious assumption that noise on the gradient elements is independent. We finally scale the two empirical estimates as described above: σ_f ← σ_f/|y′(0)|, and ditto for σ_f′. The overhead of this estimation is small if the computation of ℓ(x, y_j) itself is more expensive than the summation over j (in the neural network examples of §4, with their comparably simple ℓ, the additional steps added only ∼1% cost overhead to the evaluation of the loss). Of course, this approach requires a batch size m > 1.
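Given per-sample losses and gradients within a batch, the estimates of Eqs. (17)–(18) can be sketched as follows (an illustration on synthetic values; the names `losses`, `grads`, and `noise_scales` are hypothetical, with `losses` of shape (m,) and `grads` of shape (m, D)):

```python
import numpy as np

def noise_scales(losses, grads, s):
    """Within-batch variance estimates following Eqs. (17)-(18).
    losses: (m,) per-sample losses; grads: (m, D) per-sample gradients;
    s: search direction (D,). Returns (sigma_f^2, sigma_f'^2)."""
    m = losses.shape[0]
    L_hat = losses.mean()                   # batch loss estimate
    S_hat = (losses**2).mean()              # Eq. (17), scalar statistic
    grad_hat = grads.mean(axis=0)           # batch gradient estimate
    gradS_hat = (grads**2).mean(axis=0)     # Eq. (17), element-wise square
    var_f = (S_hat - L_hat**2) / (m - 1)    # Eq. (18), function-value noise
    var_df = (s**2) @ ((gradS_hat - grad_hat**2) / (m - 1))  # projected gradient noise
    return var_f, var_df

rng = np.random.default_rng(0)
m, D = 10, 3
losses = rng.normal(1.0, 0.1, size=m)
grads = rng.normal(0.0, 0.2, size=(m, D))
var_f, var_df = noise_scales(losses, grads, np.ones(D))
assert var_f > 0 and var_df > 0
```

Since S_hat − L_hat² is the population variance of the batch losses, var_f equals `losses.var() / (m - 1)`, the (approximate) variance of the batch mean.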
For single-sample batches, a running average could be used instead (single-sample batches are not necessarily a good choice; in our experiments, for example, vanilla SGD with batch size 10 converged faster in wall-clock time than unit-batch SGD). Estimating noise separately for each input dimension captures the often inhomogeneous structure among gradient elements, and its effect on the noise along the projected direction. For example, in deep models, gradient noise is typically higher on weights between the input and first hidden layer, hence line searches along the corresponding directions are noisier than those along directions affecting higher-level weights.

3.4.1 Propagating Step Sizes Between Line Searches

As will be demonstrated in §4, the line search can find good step sizes even if the length of the direction s_i (which is proportional to the learning rate α in SGD) is mis-scaled. Since such scale issues typically persist over time, it would be wasteful to have the algorithm re-fit a good scale in each line search. Instead, we propagate step lengths from one iteration of the search to the next: We set the initial search direction to s_0 = −α_0 ∇L̂(x_0) with some initial learning rate α_0. Then, after each line search ending at x_i = x_{i−1} + t* s_i, the next search direction is set to s_{i+1} = −1.3 · t* α_0 ∇L̂(x_i). Thus, the next line search starts its extrapolation at 1.3 times the step size of its predecessor.

Remark on convergence of SGD with line searches: We note in passing that it is straightforward to ensure that SGD instances using the line search inherit the convergence guarantees of SGD: Imposing even an extremely loose bound ᾱ_i on the step size taken by the i-th line search, such that Σ_i ᾱ_i = ∞ and Σ_i ᾱ_i² < ∞, ensures that the line search-controlled SGD converges in probability [1].
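The propagation of step lengths can be illustrated in a toy outer loop. Here a deterministic backtracking Armijo search stands in for the probabilistic line search, and the quadratic toy objective and the helper names are assumptions for illustration only:

```python
import numpy as np

def grad(x):                 # gradient of the toy objective 0.5 * ||x||^2
    return x

def backtracking_t(x, s, shrink=0.5, c1=0.05):
    """Stand-in for the line search: return an Armijo-acceptable t (W-I)."""
    f0, d0 = 0.5 * x @ x, s @ grad(x)
    t = 1.0
    while 0.5 * (x + t * s) @ (x + t * s) > f0 + c1 * t * d0:
        t *= shrink
    return t

alpha0 = 100.0               # deliberately mis-scaled initial learning rate
x = np.ones(5)
scale = alpha0
for i in range(50):
    s = -scale * grad(x)     # direction length carries the propagated scale
    t = backtracking_t(x, s)
    x = x + t * s
    scale = 1.3 * t * scale  # propagation: next search starts at 1.3x the last step
assert 0.5 * x @ x < 1e-8    # the loop recovers a sensible scale and converges
```

After the first few iterations, the propagated scale settles into a range where the search accepts steps almost immediately, mirroring the behaviour reported for the probabilistic line search in §4.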
4 Experiments

Our experiments were performed on the well-worn problems of training a 2-layer neural net with logistic nonlinearity on the MNIST and CIFAR-10 datasets.⁶ In both cases, the network had 800 hidden units, giving optimization problems with 636,010 and 2,466,410 parameters, respectively. While this may be 'low-dimensional' by contemporary standards, it exhibits the stereotypical challenges of stochastic optimization for machine learning. Since the line search deals only with univariate subproblems, the extrinsic dimensionality of the optimization task is not particularly relevant for an empirical evaluation. Leaving aside the cost of the function evaluations themselves, the computational cost associated with the line search is independent of the extrinsic dimensionality. The central nuisance of SGD is having to choose the learning rate α, and potentially also a schedule for its decrease. Theoretically, a decaying learning rate is necessary to guarantee convergence of SGD [1], but empirically, keeping the rate constant, or only decaying it cautiously, often works better (Fig. 4). In a practical setting, a user would perform exploratory experiments (say, for 10³ steps) to determine a good learning rate and decay schedule, then run a longer experiment in the best found setting. In our networks, constant learning rates of α = 0.75 and α = 0.08 for MNIST and CIFAR-10, respectively, achieved the lowest test error after the first 10³ steps of SGD. We then trained networks with vanilla SGD with and without α-decay (using the schedule α(i) = α_0/i), and SGD using the probabilistic line search, with α_0 ranging across five orders of magnitude, on batches of size m = 10. Fig. 4, top, shows test errors after 10 epochs as a function of the initial learning rate α_0 (error bars based on 20 random re-starts).
⁶ http://yann.lecun.com/exdb/mnist/ and http://www.cs.toronto.edu/~kriz/cifar.html. Like other authors, we only used the "batch 1" subset of CIFAR-10.

Figure 4: Top row: test error after 10 epochs as a function of the initial learning rate (note logarithmic ordinate for MNIST). Bottom row: test error as a function of training epoch (same color and symbol scheme as in the top row). No matter the initial learning rate, the line search-controlled SGD performs close to the (in practice unknown) optimal SGD instance, effectively removing the need for exploratory experiments and learning-rate tuning. All plots show means and 2 standard deviations over 20 repetitions.

Across the broad range of α_0 values, the line search quickly identified good step sizes α(t), stabilized the training, and progressed efficiently, reaching test errors similar to those reported in the literature for tuned versions of this kind of architecture on these datasets. While on both datasets the best SGD instance without rate decay just barely outperformed the line searches, the optimal α value was not the one that performed best after 10³ steps. So this kind of exploratory experiment (which comes with its own cost of human designer time) would have led to worse performance than simply starting a single instance of SGD with the line search and α_0 = 1, letting the algorithm do the rest. The average time overhead (i.e. excluding evaluation time for the objective) was about 48 ms per line search. This is independent of the problem dimensionality, and expected to drop significantly with optimized code.
Analysing one of the MNIST instances more closely, we found that the average length of a line search was ∼1.4 function evaluations; 80–90% of line searches terminated after the first evaluation. This suggests good scale adaptation and thus efficient search (note that an 'optimally tuned' algorithm would always lead to immediate accepts). The supplements provide additional plots of raw objective values, chosen step sizes, encountered gradient norms and gradient noises during the optimization, as well as test-vs-train error plots, for each of the two datasets. These provide a richer picture of the step-size control performed by the line search. In particular, they show that the line search chooses step sizes that follow a nontrivial dynamic over time. This is in line with the empirical truism that SGD requires tuning of the step size during its progress, a nuisance taken care of by the line search. Using this structured information for more elaborate analytical purposes, in particular for convergence estimation, is an enticing prospect, but beyond the scope of this paper.

5 Conclusion

The line search paradigm widely accepted in deterministic optimization can be extended to noisy settings. Our design combines existing principles from the noise-free case with ideas from Bayesian optimization, adapted for efficiency. We arrived at a lightweight "black-box" algorithm that exposes no parameters to the user. Our method is complementary to, and can in principle be combined with, virtually all existing methods for stochastic optimization that adapt a step direction of fixed length. Empirical evaluations suggest the line search effectively frees users from worries about the choice of a learning rate: any reasonable initial choice will be quickly adapted and lead to close-to-optimal performance. Our MATLAB implementation will be made available at the time of publication of this article.

References

[1] H. Robbins and S. Monro. A stochastic approximation method.
The Annals of Mathematical Statistics, 22(3):400–407, Sep. 1951.
[2] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Twenty-first International Conference on Machine Learning (ICML 2004), 2004.
[3] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 19th Int. Conf. on Computational Statistics (COMPSTAT), pages 177–186. Springer, 2010.
[4] M.D. Hoffman, D.M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[5] J. Hensman, M. Rattray, and N.D. Lawrence. Fast variational inference in the conjugate exponential family. In Advances in Neural Information Processing Systems (NIPS 25), pages 2888–2896, 2012.
[6] T. Broderick, N. Boyd, A. Wibisono, A.C. Wilson, and M.I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems (NIPS 26), pages 1727–1735, 2013.
[7] A.P. George and W.B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[9] N.N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Ninth International Conference on Artificial Neural Networks (ICANN) 99, volume 2, pages 569–574, 1999.
[10] S.-I. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399–1409, 2000.
[11] N.L. Roux and A.W. Fitzgibbon. A fast natural Newton method. In 27th International Conference on Machine Learning (ICML), pages 623–630, 2010.
[12] R. Rajesh, W. Chong, D. Blei, and E. Xing. An adaptive learning rate for stochastic variational inference.
In 30th International Conference on Machine Learning (ICML), pages 298–306, 2013.
[13] P. Hennig. Fast probabilistic optimization from noisy gradients. In 30th International Conference on Machine Learning (ICML), 2013.
[14] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In 30th International Conference on Machine Learning (ICML-13), pages 343–351, 2013.
[15] R. Fletcher and C.M. Reeves. Function minimization by conjugate gradients. The Computer Journal, 7(2):149–154, 1964.
[16] C.G. Broyden. A new double-rank minimization algorithm. Notices of the AMS, 16:670, 1969.
[17] R. Fletcher. A new approach to variable metric algorithms. The Computer Journal, 13(3):317, 1970.
[18] D. Goldfarb. A family of variable metric updates derived by variational means. Math. Comp., 24(109):23–26, 1970.
[19] D.F. Shanno. Conditioning of quasi-Newton methods for function minimization. Math. Comp., 24(111):647–656, 1970.
[20] J. Nocedal and S.J. Wright. Numerical Optimization. Springer Verlag, 1999.
[21] P. Wolfe. Convergence conditions for ascent methods. SIAM Review, pages 226–235, 1969.
[22] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
[23] D.R. Jones, M. Schonlau, and W.J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
[24] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[25] G. Wahba. Spline Models for Observational Data. Number 59 in CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, 1990.
[26] S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.
[27] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 3rd edition, 1991.
[28] R.J. Adler. The Geometry of Random Fields. Wiley, 1981.
[29] Z. Drezner and G.O. Wesolowsky.
On the computation of the bivariate normal integral. Journal of Statistical Computation and Simulation, 35(1-2):101–107, 1990.